Commit

Update GKE examples (#384)
sarvesh-cast authored Sep 30, 2024
1 parent 65f04d9 commit 1d5f500
Showing 4 changed files with 223 additions and 72 deletions.
64 changes: 16 additions & 48 deletions examples/gke/gke_cluster_existing/castai.tf
@@ -4,26 +4,12 @@

data "google_client_config" "default" {}

provider "castai" {
api_url = var.castai_api_url
api_token = var.castai_api_token
}

data "google_container_cluster" "my_cluster" {
name = var.cluster_name
location = var.cluster_region
project = var.project_id
}


provider "helm" {
kubernetes {
host = "https://${data.google_container_cluster.my_cluster.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
}
}

# Configure GKE cluster connection using CAST AI gke-cluster module.
module "castai-gke-iam" {
source = "castai/gke-iam/castai"
@@ -47,24 +33,16 @@ module "castai-gke-cluster" {
  gke_credentials            = module.castai-gke-iam.private_key
  delete_nodes_on_disconnect = var.delete_nodes_on_disconnect

  default_node_configuration  = module.castai-gke-cluster.castai_node_configurations["default"]
  install_workload_autoscaler = true

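  # Node configurations for CAST AI provisioned nodes; the "default" entry is referenced above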
  node_configurations = {
    default = {
      disk_cpu_ratio = 0
      min_disk_size  = 100
      subnets        = var.subnets
      tags           = var.tags
    }

    test_node_config = {
      disk_cpu_ratio    = 10
      subnets           = var.subnets
      tags              = var.tags
      max_pods_per_node = 40
      disk_type         = "pd-ssd"
      network_tags      = ["dev"]
    }
  }
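  # Node templates describe the kinds of nodes the autoscaler is allowed to create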
  node_templates = {
    default_by_castai = {
@@ -75,15 +53,11 @@ module "castai-gke-cluster" {
      should_taint = false

      constraints = {
        on_demand = true
      }
    }

    example_spot_template = {
      configuration_id = module.castai-gke-cluster.castai_node_configurations["default"]
      is_enabled       = true
      should_taint     = true
@@ -106,23 +80,23 @@ module "castai-gke-cluster" {
        }
      ]
      constraints = {
        spot                          = true
        use_spot_fallbacks            = true
        fallback_restore_rate_seconds = 1800
        min_cpu                       = 4
        max_cpu                       = 100
        instance_families = {
          exclude = ["e2"]
        }
        compute_optimized_state = "disabled"
        storage_optimized_state = "disabled"
        custom_priority = {
          instance_families = ["c5"]
          spot              = true
        }
      }

      custom_instances_enabled = true
    }
  }

  # Comment out for POC
  autoscaler_settings = {
    enabled                                 = false
    node_templates_partial_matching_enabled = false
@@ -140,7 +114,7 @@ module "castai-gke-cluster" {

    evictor = {
      aggressive_mode           = false
      cycle_interval            = "60s"
      dry_run                   = false
      enabled                   = false
      node_grace_period_minutes = 10
@@ -149,18 +123,12 @@ module "castai-gke-cluster" {
    }

    cluster_limits = {
      enabled = false

      cpu = {
        max_cores = 200
        min_cores = 1
      }
    }
  }

12 changes: 12 additions & 0 deletions examples/gke/gke_cluster_existing/providers.tf
@@ -0,0 +1,12 @@
provider "castai" {
api_url = var.castai_api_url
api_token = var.castai_api_token
}

provider "helm" {
kubernetes {
host = "https://${data.google_container_cluster.my_cluster.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
}
}
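
Both provider arguments are expected to come from Terraform variables. One way to supply them without hard-coding secrets, assuming the standard variable names above, is via `TF_VAR_` environment variables:

```bash
# Hypothetical values; TF_VAR_<name> populates the matching Terraform variable
export TF_VAR_castai_api_token="<your CAST AI API token>"
export TF_VAR_castai_api_url="https://api.cast.ai"
```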
106 changes: 91 additions & 15 deletions examples/gke/gke_cluster_gitops/README.md
@@ -1,30 +1,106 @@
## GKE and CAST AI example for GitOps onboarding flow

The following example shows how to onboard a GKE cluster to CAST AI using the GitOps flow.

## GitOps flow

Terraform managed ==> IAM roles, CAST AI Node Configurations, CAST AI Node Templates and CAST AI Autoscaler policies

Helm managed ==> All Castware components, such as `castai-agent`, `castai-cluster-controller`, `castai-evictor`, `castai-spot-handler`, `castai-kvisor`, `castai-workload-autoscaler`, `castai-pod-pinner` and `castai-egressd`, are installed by other means (e.g. ArgoCD, manual Helm releases, etc.)


```
+---------------------------------------------------------+
|                          Start                          |
+---------------------------------------------------------+
                             |
                             | TERRAFORM
+---------------------------------------------------------+
| 1. Update tf.vars                                       |
| 2. Terraform init & apply                               |
+---------------------------------------------------------+
                             |
                             | GITOPS
+---------------------------------------------------------+
| 3. Deploy Helm charts of castai-agent,                  |
|    castai-cluster-controller, castai-evictor,           |
|    castai-spot-handler, castai-kvisor,                  |
|    castai-workload-autoscaler and castai-pod-pinner     |
+---------------------------------------------------------+
                             |
+---------------------------------------------------------+
|                           End                           |
+---------------------------------------------------------+
```

Prerequisites:
- CAST AI account
- Obtained CAST AI [API Access key](https://docs.cast.ai/docs/authentication#obtaining-api-access-key) with Full Access

### Step 1 & 2: Update TF vars & TF Init, plan & apply
After successful apply, CAST Console UI will be in `Connecting` state. \
Note generated 'CASTAI_CLUSTER_ID' from outputs
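
For example, assuming the example exposes the cluster ID as a `cluster_id` output:

```bash
# Capture the cluster ID for the Helm releases below (output name assumed)
CASTAI_CLUSTER_ID=$(terraform output -raw cluster_id)
```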


### Step 3: Deploy Helm charts of CAST AI components
Components: `castai-cluster-controller`, `castai-evictor`, `castai-spot-handler`, `castai-kvisor`, `castai-workload-autoscaler`, `castai-pod-pinner` \
After all CAST AI components are installed in the cluster, its status in the CAST AI console changes from `Connecting` to `Connected`, which means the cluster onboarding process completed successfully.
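
The commands below install charts from the `castai-helm` repository; if it is not configured yet, it would first be added along the lines of (repository URL assumed):

```bash
helm repo add castai-helm https://castai.github.io/helm-charts
helm repo update
```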

```bash
# Values from the Terraform outputs / CAST AI console
CASTAI_API_KEY=""
CASTAI_CLUSTER_ID=""
CAST_CONFIG_SOURCE="castai-cluster-controller"
#### Mandatory Component: Castai-agent
helm upgrade -i castai-agent castai-helm/castai-agent -n castai-agent \
--set apiKey=$CASTAI_API_KEY \
--set provider=gke \
--create-namespace
#### Mandatory Component: castai-cluster-controller
helm upgrade -i cluster-controller castai-helm/castai-cluster-controller -n castai-agent \
--set castai.apiKey=$CASTAI_API_KEY \
--set castai.clusterID=$CASTAI_CLUSTER_ID \
--set autoscaling.enabled=true
#### castai-spot-handler
helm upgrade -i castai-spot-handler castai-helm/castai-spot-handler -n castai-agent \
--set castai.clusterID=$CASTAI_CLUSTER_ID \
--set castai.provider=gcp
#### castai-evictor
helm upgrade -i castai-evictor castai-helm/castai-evictor -n castai-agent --set replicaCount=0
#### castai-pod-pinner
helm upgrade -i castai-pod-pinner castai-helm/castai-pod-pinner -n castai-agent \
--set castai.apiKey=$CASTAI_API_KEY \
--set castai.clusterID=$CASTAI_CLUSTER_ID \
--set replicaCount=0
#### castai-workload-autoscaler
helm upgrade -i castai-workload-autoscaler castai-helm/castai-workload-autoscaler -n castai-agent \
  --set castai.apiKeySecretRef=$CAST_CONFIG_SOURCE \
  --set castai.configMapRef=$CAST_CONFIG_SOURCE
#### castai-kvisor
helm upgrade -i castai-kvisor castai-helm/castai-kvisor -n castai-agent \
--set castai.apiKey=$CASTAI_API_KEY \
--set castai.clusterID=$CASTAI_CLUSTER_ID \
--set controller.extraArgs.kube-linter-enabled=true \
--set controller.extraArgs.image-scan-enabled=true \
--set controller.extraArgs.kube-bench-enabled=true \
--set controller.extraArgs.kube-bench-cloud-provider=gke
```
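
Once the releases are deployed, a quick sanity check that all components started in the namespace used above:

```bash
kubectl get pods -n castai-agent
```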

## Steps Overview

1. Configure `tf.vars.example` file with required values. If the GKE cluster is already managed by Terraform, you could instead directly reference those resources.
2. Run `terraform init`
3. Run `terraform apply` and make a note of the `cluster_id` output value. At this stage you would see that your cluster is in `Connecting` state in the CAST AI console.
4. Install CAST AI components using Helm. Use the `cluster_id` and `api_key` values to configure the Helm releases:
    - Set `castai.apiKey` property to `api_key`
    - Set `castai.clusterID` property to `cluster_id`
5. After all CAST AI components are installed in the cluster, its status in the CAST AI console will change from `Connecting` to `Connected`, which means the cluster onboarding process completed successfully.


## Importing already onboarded cluster to Terraform

This example can also be used to import a GKE cluster to Terraform which is already onboarded to the CAST AI console through the [script](https://docs.cast.ai/docs/cluster-onboarding#how-it-works).
To import an existing cluster, follow steps 1-3 above and change the `castai_node_configuration.default` Node Configuration name.
This allows managing already onboarded clusters' CAST AI Node Configurations and Node Templates through IaC.