Add bookmarks #25
Merged
Conversation
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T19:28:52Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T19:28:52Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T19:29:04Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
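The two-group ordering reported above (the hetzner stack planned before the kubernetes stack) is what Terragrunt produces when one unit declares a `dependency` on the other. A hypothetical sketch of what `stacks/prod/kubernetes/terragrunt.hcl` might contain — the paths, output names, and `mock_outputs` here are assumptions for illustration, not taken from the repository:

```hcl
# stacks/prod/kubernetes/terragrunt.hcl (hypothetical sketch)
terraform {
  # Pull the module source from the local modules directory,
  # matching the "Downloading Terraform configurations from
  # file:///github/workspace/modules/kubernetes" log line above.
  source = "${get_repo_root()}/modules/kubernetes"
}

# Declaring a dependency on the hetzner stack forces Terragrunt to
# put hetzner in an earlier run-all group and exposes its outputs.
dependency "hetzner" {
  config_path = "../hetzner"

  # Placeholder outputs so "run-all plan" can evaluate this unit
  # before the hetzner stack has ever been applied (assumed name).
  mock_outputs = {
    kubeconfig_path = "/github/workspace/.kubeconfig"
  }
}

inputs = {
  kubeconfig_path = dependency.hetzner.outputs.kubeconfig_path
}
```

With this in place, `terragrunt run-all plan` resolves the dependency graph and emits exactly the Group 1 / Group 2 processing order shown in the log.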
mrsimonemms force-pushed the sje/bookmarks branch from 8e8bb05 to 76dcacd on November 13, 2024 at 19:40
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T19:42:11Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T19:42:11Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T19:42:17Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
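The in-place update above only adds the `gethomepage.dev/*` annotations to the ingress section of the ArgoCD Helm values. In the Terraform module, that change could look roughly like the following sketch — the file name, `repository`, and use of `yamlencode` are assumptions (the actual module may template a YAML file instead), and the pre-existing cert-manager/nginx annotations are elided:

```hcl
# modules/kubernetes/argocd.tf (hypothetical sketch)
resource "helm_release" "argocd" {
  name       = "argocd"
  namespace  = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"

  values = [
    yamlencode({
      server = {
        ingress = {
          annotations = {
            # Existing cert-manager/nginx annotations elided; these
            # gethomepage.dev entries are what the plan diff adds.
            "gethomepage.dev/enabled"     = "true"
            "gethomepage.dev/name"        = "ArgoCD"
            "gethomepage.dev/group"       = "Cluster Management"
            "gethomepage.dev/description" = "Get stuff done with Kubernetes!"
            "gethomepage.dev/icon"        = "https://avatars.githubusercontent.com/u/30269780"
          }
        }
      }
    }),
  ]
}
```

Because only the `values` attribute changes, Terraform can update the release in place (`~ update in-place`) rather than replacing it.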
mrsimonemms force-pushed the sje/bookmarks branch 2 times, most recently from 38a96e1 to 68f7f52 on November 13, 2024 at 19:50
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T19:51:45Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T19:51:45Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T19:51:52Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
mrsimonemms force-pushed the sje/bookmarks branch 2 times, most recently from e8c3aef to fe56005 on November 13, 2024 at 19:54
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T19:54:45Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T19:54:46Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
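For reference, a plan like the one above would be produced by a `local_sensitive_file` resource along these lines. This is a minimal sketch reconstructed from the plan attributes; the source of `content` (shown only as `(sensitive value)`) is an assumption — here taken to be the k3s module's kubeconfig output:

```hcl
# Hypothetical reconstruction from the plan attributes above; the actual
# module wiring may differ.
resource "local_sensitive_file" "kubeconfig" {
  # Rendered as (sensitive value) in the plan; assumed to come from the
  # k3s module's kubeconfig output.
  content         = module.k3s.kubeconfig
  filename        = "/github/workspace/.kubeconfig"
  file_permission = "0600" # directory_permission is left at its "0755" default
}
```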
time=2024-11-13T19:54:52Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
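The only change in this plan is the block of gethomepage.dev annotations added to the ArgoCD ingress, which is what registers the bookmark in Homepage. In the Helm values these sit under `server.ingress.annotations`; expressed as an HCL map (a sketch of the fragment as rendered in the diff above — the surrounding values structure is assumed to match the plan):

```hcl
# Annotations map from the diff above; only the gethomepage.dev entries
# are new in this plan.
annotations = {
  "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
  "nginx.ingress.kubernetes.io/backend-protocol"   = "HTTP"
  "kubernetes.io/tls-acme"                         = "true"
  "cert-manager.io/cluster-issuer"                 = "letsencrypt"

  # New: Homepage discovery annotations
  "gethomepage.dev/enabled"     = "true"
  "gethomepage.dev/name"        = "ArgoCD"
  "gethomepage.dev/group"       = "Cluster Management"
  "gethomepage.dev/description" = "Get stuff done with Kubernetes!"
  "gethomepage.dev/icon"        = "https://avatars.githubusercontent.com/u/30269780"
}
```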
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T19:55:37Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
[Re-run output identical to the plan above: hetzner — Plan: 1 to add, 0 to change, 0 to destroy; kubernetes — Plan: 0 to add, 1 to change, 0 to destroy.]
mrsimonemms force-pushed the sje/bookmarks branch from fe56005 to 9aeed43 on November 13, 2024 19:57
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T19:58:44Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
[Re-run output identical to the plan above: hetzner — Plan: 1 to add, 0 to change, 0 to destroy; kubernetes — Plan: 0 to add, 1 to change, 0 to destroy.]
mrsimonemms force-pushed the sje/bookmarks branch from 9aeed43 to d39da03 on November 13, 2024 19:59
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T20:00:38Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
[Re-run output identical to the hetzner plan above; the remainder of the log is truncated in this capture.]
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T20:00:45Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
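For context, the `local_sensitive_file.kubeconfig` resource that the plan above proposes to create would typically be declared along these lines. This is a sketch only: `module.k3s.kubeconfig` is an assumed output name, while the filename and permission values are taken verbatim from the planned attributes.

```hcl
# Sketch only: the real module lives in /github/workspace/modules/hetzner.
# "module.k3s.kubeconfig" is an assumed output name; filename and
# file_permission match the planned attributes shown above.
resource "local_sensitive_file" "kubeconfig" {
  content         = module.k3s.kubeconfig # reported as (sensitive value) in the plan
  filename        = "/github/workspace/.kubeconfig"
  file_permission = "0600"
  # directory_permission defaults to "0755", matching the plan output
}
```

Because the file is written into the ephemeral GitHub Actions workspace, it is absent on every fresh runner, which is why each plan run reports "1 to add" for this resource.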
mrsimonemms force-pushed the sje/bookmarks branch 2 times, most recently from 3dd0f0b to 944e186 on November 13, 2024 at 20:08
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T20:09:32Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T20:09:32Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T20:09:40Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T20:20:39Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T20:20:39Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T20:20:45Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T20:24:19Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T20:24:19Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T20:24:27Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: https://avatars.githubusercontent.com/u/30269780
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy. |
mrsimonemms force-pushed the sje/bookmarks branch from f420b8c to e2305dc (November 13, 2024 20:27)
mrsimonemms force-pushed the sje/bookmarks branch from 8760973 to 38bba0e (November 13, 2024 21:31)
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:31:42Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:31:42Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:31:50Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
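The only change in this plan is the set of new gethomepage.dev annotations on the Argo CD server ingress. A minimal sketch of how those annotations might be declared in the kubernetes module's Helm values follows; the local name and the merge point are assumptions for illustration, not taken from the repository:

```hcl
# Hypothetical fragment: the annotation map passed to the Argo CD chart's
# server.ingress.annotations value. The gethomepage.dev/* keys are what the
# Homepage dashboard's Kubernetes service discovery reads to build a bookmark.
locals {
  argocd_ingress_annotations = {
    "cert-manager.io/cluster-issuer"                 = "letsencrypt"
    "kubernetes.io/tls-acme"                         = "true"
    "nginx.ingress.kubernetes.io/backend-protocol"   = "HTTP"
    "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"

    # New in this change: bookmark metadata for gethomepage.dev
    "gethomepage.dev/enabled"     = "true"
    "gethomepage.dev/name"        = "ArgoCD"
    "gethomepage.dev/group"       = "Cluster Management"
    "gethomepage.dev/icon"        = "argocd"
    "gethomepage.dev/description" = "Get stuff done with Kubernetes!"
  }
}
```

Because the annotations live inside the Helm values, Terraform reports this as a single in-place update of helm_release.argocd rather than any change to Kubernetes resources directly.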
mrsimonemms force-pushed the sje/bookmarks branch from 38bba0e to 420a494 on November 13, 2024 21:32
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:33:03Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:33:03Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
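The hetzner stack plans "1 to add" on every run because the kubeconfig is written into the ephemeral GitHub Actions workspace, which is discarded between jobs. A sketch of a resource that would produce the plan above, assuming the attribute values shown (the module output name is an assumption):

```hcl
# Hypothetical fragment: persists the kubeconfig from the k3s module to the
# CI workspace so later steps can talk to the cluster. The workspace is
# recreated per job, so each plan shows this file as a new resource.
resource "local_sensitive_file" "kubeconfig" {
  content              = module.k3s.kubeconfig
  filename             = "/github/workspace/.kubeconfig"
  file_permission      = "0600"
  directory_permission = "0755"
}
```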
time=2024-11-13T21:33:16Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:33:40Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:33:40Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:33:54Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
mrsimonemms force-pushed the sje/bookmarks branch from 420a494 to df796e0 on November 13, 2024 21:34
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:35:56Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:35:56Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:36:02Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
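The only functional change in this plan is the set of `gethomepage.dev/*` annotations added to the ArgoCD server ingress. As a sketch of how those Helm values surface on the rendered Ingress object (assuming the argo-cd chart copies `server.ingress.annotations` through verbatim, which is how Homepage's Kubernetes discovery picks them up):

```yaml
# Sketch only: rendered Ingress metadata, assuming the chart's default
# object name and that server.ingress.annotations is passed through as-is.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    gethomepage.dev/enabled: "true"
    gethomepage.dev/name: ArgoCD
    gethomepage.dev/group: Cluster Management
    gethomepage.dev/icon: argocd
    gethomepage.dev/description: Get stuff done with Kubernetes!
```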
mrsimonemms force-pushed the sje/bookmarks branch from df796e0 to ded4e8e on November 13, 2024 21:36
mrsimonemms force-pushed the sje/bookmarks branch from ded4e8e to 6b16f58 on November 13, 2024 21:37
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:37:31Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:37:31Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:37:38Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
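For reference, the `dex.config` block in the values above is Dex's GitHub connector, just emitted with every key quoted. In conventional YAML it reads as below; the `$github-oidc:clientId` syntax is ArgoCD's secret-reference notation, resolved here from the `github-oidc` Secret that `kubernetes_secret_v1.github_secret` manages in the same plan:

```yaml
# Equivalent, unquoted form of the dex.config shown in the plan output.
connectors:
  - type: github
    id: github
    name: GitHub
    config:
      clientID: $github-oidc:clientId
      clientSecret: $github-oidc:clientSecret
      orgs:
        - name: mrsimonemmsorg
          teams:
            - home-admin
```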
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:38:31Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:38:31Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:38:37Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
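The `rbac` block in the values above ends up in ArgoCD's RBAC ConfigMap. A minimal sketch of the rendered object, assuming the chart's standard `argocd-rbac-cm` name and showing only the `policy.csv` key (the `g` line is what maps the GitHub team from the Dex connector onto the `org-admin` role defined by the `p` lines):

```yaml
# Sketch only: abbreviated rendering of the rbac values into the
# standard ArgoCD RBAC ConfigMap; most p-lines omitted for brevity.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    p, role:org-admin, applications, *, *, allow
    g, mrsimonemmsorg:home-admin, role:org-admin
```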
mrsimonemms force-pushed the sje/bookmarks branch 2 times, most recently from 20f9990 to be3d485 on November 13, 2024 21:40
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:40:50Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:40:50Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:40:57Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
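For reference, the only functional change in this plan is the set of gethomepage annotations added to the ArgoCD server ingress. Reconstructed from the plan diff above, the corresponding fragment of the Helm values looks roughly like this (a sketch, not the full values file):

```yaml
server:
  ingress:
    annotations:
      # Existing TLS/ingress annotations unchanged; only these
      # gethomepage.dev service-discovery annotations are added.
      gethomepage.dev/enabled: "true"
      gethomepage.dev/name: ArgoCD
      gethomepage.dev/group: Cluster Management
      gethomepage.dev/icon: argocd
      gethomepage.dev/description: Get stuff done with Kubernetes!
```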
mrsimonemms force-pushed the sje/bookmarks branch from be3d485 to a35c786 on November 13, 2024 at 21:41
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:41:54Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:41:54Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:42:06Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
mrsimonemms force-pushed the sje/bookmarks branch from a35c786 to ecb84f2 on November 13, 2024 at 21:42
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:42:41Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:42:41Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:42:47Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Execution result of "run-all plan" in "stacks/prod"
time=2024-11-13T21:44:23Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner
Group 2
- Module /github/workspace/stacks/prod/kubernetes
time=2024-11-13T21:44:23Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has been successfully initialized!
hcloud_ssh_key.server: Refreshing state... [id=24523295]
hcloud_network.network: Refreshing state... [id=10348483]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=419413]
hcloud_network_subnet.subnet: Refreshing state... [id=10348483-10.0.0.0/16]
hcloud_server.workers[1]: Refreshing state... [id=55574607]
hcloud_server.workers[0]: Refreshing state... [id=55574609]
hcloud_firewall.firewall: Refreshing state... [id=1733915]
hcloud_server.manager[0]: Refreshing state... [id=55574608]
ssh_resource.workers_ready[0]: Refreshing state... [id=5288905511168723335]
ssh_resource.workers_ready[1]: Refreshing state... [id=4808111347010735470]
ssh_resource.manager_ready[0]: Refreshing state... [id=4705556237620785565]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=4269936867561524992]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=5339061034062184447]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=2594993472446654698]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=1789411602752845713]
local_sensitive_file.kubeconfig: Refreshing state... [id=caa0de20423c784bc13f4b9b74b9e87f1c2437ba]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=237440001120189955]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1836441884112107397]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=6498251973961204370]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# local_sensitive_file.kubeconfig will be created
+ resource "local_sensitive_file" "kubeconfig" {
+ content = (sensitive value)
+ content_base64sha256 = (known after apply)
+ content_base64sha512 = (known after apply)
+ content_md5 = (known after apply)
+ content_sha1 = (known after apply)
+ content_sha256 = (known after apply)
+ content_sha512 = (known after apply)
+ directory_permission = "0755"
+ file_permission = "0600"
+ filename = "/github/workspace/.kubeconfig"
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
time=2024-11-13T21:44:29Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes]
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=1244]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
helm_release.argocd: Refreshing state... [id=argocd]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# helm_release.argocd will be updated in-place
~ resource "helm_release" "argocd" {
id = "argocd"
~ metadata = [
- {
- app_version = "v2.13.0"
- chart = "argo-cd"
- first_deployed = 1731491403
- last_deployed = 1731491403
- name = "argocd"
- namespace = "argocd"
- notes = <<-EOT
Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster:
argocd-redis-ha.argocd.svc.cluster.local
To connect to your Redis server:
1. To retrieve the redis password:
echo $(kubectl get secret argocd-redis-ha -o "jsonpath={.data['auth']}" | base64 --decode)
2. Connect to the Redis master pod that you can use as a client. By default the argocd-redis-ha-server-0 pod is configured as the master:
kubectl exec -it argocd-redis-ha-server-0 -n argocd -c redis -- sh
3. Connect using the Redis CLI (inside container):
redis-cli -a <REDIS-PASS-FROM-SECRET>
In order to access the server UI you have the following options:
1. kubectl port-forward service/argocd-server -n argocd 8080:443
and then open the browser on http://localhost:8080 and accept the certificate
2. enable ingress in the values file `server.ingress.enabled` and either
- Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
- Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts
After reaching the UI the first time you can login using Dex or OIDC.
EOT
- revision = 1
- values = jsonencode(
{
- configs = {
- cm = {
- "admin.enabled" = false
- "dex.config" = <<-EOT
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
EOT
- "statusbadge.enabled" = true
- url = "https://argocd.simonemms.com"
}
- params = {
- "server.insecure" = true
}
- rbac = {
- create = true
- "policy.csv" = <<-EOT
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT
}
}
- global = {
- domain = "argocd.simonemms.com"
}
- redis-ha = {
- enabled = true
}
- repoServer = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
}
- server = {
- autoscaling = {
- enabled = true
- minReplicas = 2
}
- ingress = {
- annotations = {
- "cert-manager.io/cluster-issuer" = "letsencrypt"
- "kubernetes.io/tls-acme" = "true"
- "nginx.ingress.kubernetes.io/backend-protocol" = "HTTP"
- "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
}
- enabled = true
- extraTLS = [
- {
- hosts = [
- "argocd.simonemms.com",
]
- secretName = "argocd-tls"
},
]
- ingressClassName = "nginx"
- tls = true
}
}
}
)
- version = "7.7.2"
},
] -> (known after apply)
name = "argocd"
~ values = [
~ <<-EOT
global:
domain: argocd.simonemms.com
redis-ha:
enabled: true
repoServer:
autoscaling:
enabled: true
minReplicas: 2
server:
autoscaling:
enabled: true
minReplicas: 2
ingress:
enabled: true
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: HTTP
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt
+
+ gethomepage.dev/description: Get stuff done with Kubernetes!
+ gethomepage.dev/enabled: "true"
+ gethomepage.dev/group: Cluster Management
+ gethomepage.dev/icon: argocd
+ gethomepage.dev/name: ArgoCD
tls: true
extraTLS:
- hosts:
- argocd.simonemms.com
secretName: argocd-tls
configs:
cm:
admin.enabled: false
dex.config: |-
"connectors":
- "config":
"clientID": "$github-oidc:clientId"
"clientSecret": "$github-oidc:clientSecret"
"orgs":
- "name": "mrsimonemmsorg"
"teams":
- "home-admin"
"id": "github"
"name": "GitHub"
"type": "github"
statusbadge.enabled: true
url: https://argocd.simonemms.com
params:
server.insecure: true
rbac:
create: true
policy.csv: |
p, role:org-admin, applications, *, *, allow
p, role:org-admin, applicationsets, *, *, allow
p, role:org-admin, clusters, *, *, allow
p, role:org-admin, projects, *, *, allow
p, role:org-admin, repositories, *, *, allow
p, role:org-admin, accounts, *, *, allow
p, role:org-admin, certificates, *, *, allow
p, role:org-admin, gpgkeys, *, *, allow
p, role:org-admin, logs, *, *, allow
p, role:org-admin, exec, *, *, allow
p, role:org-admin, extensions, *, *, allow
g, mrsimonemmsorg:home-admin, role:org-admin
EOT,
]
# (26 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
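The only change in this plan is the set of `gethomepage.dev/*` annotations added to the ArgoCD server ingress, which gethomepage's Kubernetes service discovery uses to render a bookmark tile. A minimal sketch of how those values might be wired into the `helm_release` resource follows; the resource name, chart version, and annotation values match the plan above, but the repository URL and overall resource layout are illustrative assumptions, not taken from this repository:

```hcl
# Sketch: passing the gethomepage.dev bookmark annotations to the Argo CD chart.
# Only the annotations map changes versus the prior state, so Terraform plans
# an in-place update (~) of helm_release.argocd.
resource "helm_release" "argocd" {
  name       = "argocd"
  namespace  = "argocd"
  repository = "https://argoproj.github.io/argo-helm" # assumed repo URL
  chart      = "argo-cd"
  version    = "7.7.2"

  values = [
    yamlencode({
      server = {
        ingress = {
          enabled          = true
          ingressClassName = "nginx"
          annotations = {
            "cert-manager.io/cluster-issuer"                 = "letsencrypt"
            "kubernetes.io/tls-acme"                         = "true"
            "nginx.ingress.kubernetes.io/backend-protocol"   = "HTTP"
            "nginx.ingress.kubernetes.io/force-ssl-redirect" = "true"
            # New bookmark annotations discovered by gethomepage.dev
            "gethomepage.dev/enabled"     = "true"
            "gethomepage.dev/name"        = "ArgoCD"
            "gethomepage.dev/group"       = "Cluster Management"
            "gethomepage.dev/icon"        = "argocd"
            "gethomepage.dev/description" = "Get stuff done with Kubernetes!"
          }
        }
      }
    }),
  ]
}
```

Because only ingress annotations are touched, no Kubernetes objects are replaced, which is consistent with "Plan: 0 to add, 1 to change, 0 to destroy" above.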