Unable to delete the EKS Cluster due to "dial tcp 127.0.0.1:80: connect: connection refused" #3219
Comments
It looks like your issue is with the providers, the kubernetes provider specifically. What does your kubernetes provider implementation look like, and have you tried running terraform refresh?
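For reference, a typical kubernetes provider wiring for an EKS cluster looks something like the minimal sketch below (the `module.eks` address and `cluster_name` output are assumptions; output names vary by module version). When this wiring cannot resolve the cluster endpoint, the provider falls back to its default of localhost, which is exactly what the "dial tcp 127.0.0.1:80" errors indicate:

```hcl
# Minimal sketch, assuming an EKS module addressed as module.eks with a
# cluster_name output — adjust to match your configuration.
data "aws_eks_cluster" "this" {
  name = module.eks.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```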
Hello Bryant! Thanks for your response!
No, I haven't executed terraform refresh, as we have a Python script that acts as a wrapper to perform the TF plan && apply operations. I would need to reverse engineer my script to perform the TF refresh.
This issue has been automatically marked as stale because it has been open for 30 days with no activity.
My guess is that you have your kubernetes provider in the same project as your eks module? I think when the cluster is about to be dropped, the kubernetes provider has a hard time getting credentials. You could a) remove everything from your Terraform that uses the kubernetes provider (apply it), then remove the kubernetes provider and try to apply the removal of the cluster. To test this, you could probably also run a plan and use -target to target just your eks module, as in the sketch below.
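A minimal sketch of that targeted check, assuming the module is addressed as `module.eks` (adjust to your configuration):

```sh
# Plan the destroy of just the EKS module, sidestepping the
# kubernetes-provider resources that can no longer be refreshed.
terraform plan -destroy -target=module.eks
```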
Hello Folks,
I need help decommissioning the cluster vfde-webapp-preprod-mobile3-eks.
Context:
We have a cluster, vfde-webapp-preprod-mobile3-eks (EKS 1.23), that was deployed using Terraform and was supposed to be upgraded using TF only; however, due to a manual intervention the cluster got upgraded to 1.24.
Now I am unable to delete the said cluster using Terraform due to a hidden dependency that I am unable to ascertain, so I'm looking to you for some guidance.
I totally get that this issue is not a Bug Report; however, I would thoroughly appreciate your guidance on this matter.
Problem Statement:
The TF plan operation gets stuck on the errors below:
======================================================
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_config_map.aws_auth,
│ on aws_auth.tf line 1, in resource "kubernetes_config_map" "aws_auth":
│ 1: resource "kubernetes_config_map" "aws_auth" {
│
╵
╷
│ Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterroles/readonly": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_cluster_role_v1.readonly,
│ on aws_auth.tf line 27, in resource "kubernetes_cluster_role_v1" "readonly":
│ 27: resource "kubernetes_cluster_role_v1" "readonly" {
│
╵
╷
│ Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/readonly": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_cluster_role_binding_v1.readonly,
│ on aws_auth.tf line 41, in resource "kubernetes_cluster_role_binding_v1" "readonly":
│ 41: resource "kubernetes_cluster_role_binding_v1" "readonly" {
│
╵
╷
│ Error: Get "http://localhost/api/v1/namespaces/kube-system/serviceaccounts/argocd-manager": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_service_account_v1.argocd-manager[0],
│ on cicd-sa.tf line 1, in resource "kubernetes_service_account_v1" "argocd-manager":
│ 1: resource "kubernetes_service_account_v1" "argocd-manager" {
│
╵
╷
│ Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterroles/argocd-manager-role": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_cluster_role_v1.argocd-manager[0],
│ on cicd-sa.tf line 83, in resource "kubernetes_cluster_role_v1" "argocd-manager":
│ 83: resource "kubernetes_cluster_role_v1" "argocd-manager" {
│
╵
╷
│ Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/argocd-manager-role-binding": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_cluster_role_binding_v1.argocd-manager[0],
│ on cicd-sa.tf line 103, in resource "kubernetes_cluster_role_binding_v1" "argocd-manager":
│ 103: resource "kubernetes_cluster_role_binding_v1" "argocd-manager" {
│
╵
======================================================
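One standard way to unblock a destroy in this situation (a common Terraform technique, not something confirmed in this thread) is to remove the unreachable Kubernetes objects from state; since the cluster is being deleted anyway, orphaning them in state is harmless. The resource addresses below are taken directly from the errors above:

```sh
# Drop the kubernetes-provider resources from state so plan/destroy
# no longer tries to refresh them against the unreachable endpoint.
terraform state rm kubernetes_config_map.aws_auth
terraform state rm kubernetes_cluster_role_v1.readonly
terraform state rm kubernetes_cluster_role_binding_v1.readonly
terraform state rm 'kubernetes_service_account_v1.argocd-manager[0]'
terraform state rm 'kubernetes_cluster_role_v1.argocd-manager[0]'
terraform state rm 'kubernetes_cluster_role_binding_v1.argocd-manager[0]'
```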