I am creating an EKS cluster using the official aws module and installing addons and tools using eks-blueprints-addons. Everything was going well, but when I needed to test Karpenter, it wasn't working correctly.
✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:
Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully is the best practice you are following): rm -rf .terraform/
Re-initialize the project root to pull down modules: terraform init
Re-attempt your terraform plan or apply and check if the issue still persists
Versions
Module version [Required]: 1.16.2
Terraform version: >= 1.4.1
Provider version(s): >= 1.4.1
Reproduction Code [Required]
Steps to reproduce the behavior:
I just created my cluster with a node group and tried running Karpenter with the configuration below (in additional context). I'm not using a local cache or workspaces either.
Expected behaviour
Karpenter is installed correctly by the module; I was able to view and test it by scaling new nodes. These nodes should be associated with my cluster's node group so that my new resources and applications can run on them.
Actual behaviour
Karpenter is installed correctly by the module; I was able to view and test it by scaling new nodes. But it can't add these new instances to the node group. This is happening due to the lack of an access entry for the Karpenter node role.
Solution
My team and I resolved this problem using the aws_eks_access_entry resource. Example:
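A minimal sketch of what we did, assuming the cluster is created by the terraform-aws-eks module and Karpenter is installed via eks-blueprints-addons (the module references and the karpenter output path below are from our setup and may be named differently in yours):

```hcl
# Grants the Karpenter node IAM role access to the cluster through an EKS
# access entry, so that instances launched by Karpenter can join the cluster.
# NOTE: "module.eks" and "module.eks_blueprints_addons" are assumed names from
# our root module; point principal_arn at wherever your Karpenter node IAM
# role is created.
resource "aws_eks_access_entry" "karpenter_node" {
  cluster_name  = module.eks.cluster_name
  principal_arn = module.eks_blueprints_addons.karpenter.node_iam_role_arn
  type          = "EC2_LINUX"
}
```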
Assuming that you are not providing the aws_auth_roles config map in your EKS config, this is expected behavior. See our Karpenter blueprint, where we show that you have to provide the aws_eks_access_entry resource.
We are looking at how we can improve the user experience for this module and may incorporate this in our next milestone release.
@askulkarni2 What do you think about adding this optionally? @Christoph-Raab and I implemented it like this, and we were a bit surprised not to see it in the upstream blueprints, especially since it is part of the official Karpenter module.
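For reference, this is roughly the upstream pattern being pointed at: the official Karpenter sub-module in terraform-aws-eks can create the node access entry itself behind a flag. A sketch, assuming the v20.x sub-module interface (verify create_access_entry against the version you are on):

```hcl
# Rough sketch of the upstream behaviour referenced above: the Karpenter
# sub-module of terraform-aws-eks exposes a toggle that creates the EC2_LINUX
# access entry for the node IAM role it manages.
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 20.0"

  cluster_name        = module.eks.cluster_name
  create_access_entry = true # defaults to true in that module
}
```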
Additional context
Karpenter configuration:
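For context, enabling Karpenter through eks-blueprints-addons typically looks roughly like the sketch below; this is illustrative only and not the exact configuration used in this issue:

```hcl
# Illustrative only -- not the reporter's exact configuration.
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.16"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  enable_karpenter = true
}
```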