[Karpenter] AWS EKS Access Entry for Karpenter role #389

Open · 1 task done · Labels: enhancement (New feature or request) · Milestone: v2.0

LucasRejanio opened this issue Apr 19, 2024 · 3 comments

@LucasRejanio commented Apr 19, 2024

Description

I am creating an EKS cluster using the official AWS module and installing addons and tools with eks-blueprints-addons. Everything was going well until I needed to test Karpenter, which wasn't working correctly.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if your state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check whether the issue persists

Versions

  • Module version [Required]: 1.16.2

  • Terraform version: >= 1.4.1

  • Provider version(s): >= 1.4.1

Reproduction Code [Required]

Steps to reproduce the behavior:

I just created my cluster with a node group and tried running Karpenter with the configuration below (see Additional context). I'm not using a local cache or workspaces either.

Expected behaviour

Karpenter is installed correctly by the module; I was able to verify it by scaling new nodes. These nodes should join my cluster and serve my new resources and applications.

Actual behaviour

Karpenter is installed correctly by the module and provisions new instances, but those instances never join the cluster. This happens because there is no access entry for the Karpenter node role: each new node authenticates to the cluster with that IAM role, and without a matching access entry (or aws-auth mapping) the kubelet cannot register.

Solution

My team and I resolved this problem using the aws_eks_access_entry resource. Example:

resource "aws_eks_access_entry" "karpenter" {
  cluster_name  = module.eks.cluster_name
  principal_arn = module.eks_blueprints_addons.karpenter.node_iam_role_arn
  tags          = local.tags
  type          = "EC2_LINUX"
}
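
Note that access entries only take effect when the cluster's access configuration permits API authentication. A minimal sketch of the corresponding setting, assuming terraform-aws-eks v20+ (which exposes authentication_mode and defaults it to API_AND_CONFIG_MAP):

# Sketch: the cluster must allow API-based authentication for access
# entries to apply ("API" or "API_AND_CONFIG_MAP")
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  authentication_mode = "API_AND_CONFIG_MAP"

  # ... remaining cluster configuration ...
}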

Terminal Output Screenshot(s)

Additional context

Karpenter configuration:

  enable_karpenter                           = true
  karpenter_enable_spot_termination          = true
  karpenter_enable_instance_profile_creation = true
  karpenter_sqs                              = true
  karpenter_node = {
    iam_role_use_name_prefix = false
  }
  karpenter = {
    set = [
      {
        name  = "clusterName"
        value = module.eks.cluster_name
      },
      {
        name  = "clusterEndpoint"
        value = module.eks.cluster_endpoint
      },
      {
        name  = "controller.resources.requests.cpu"
        value = "1"
      },
      {
        name  = "controller.resources.requests.memory"
        value = "1Gi"
      },
      {
        name  = "controller.resources.limits.cpu"
        value = "1"
      },
      {
        name  = "controller.resources.limits.memory"
        value = "1Gi"
      },
    ]
  }
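
For context, these inputs sit inside the eks-blueprints-addons module block; a sketch of the surrounding wiring, assuming the aws-ia/eks-blueprints-addons module and the usual eks module outputs:

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.16"

  # Wire the addons module to the cluster created by the eks module
  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # ... Karpenter inputs from the snippet above ...
}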
@askulkarni2 (Contributor) commented

Assuming that you are not providing the aws_auth_roles config map in your EKS config, this is expected behavior. See our Karpenter blueprint, where we show that you have to provide the aws_eks_access_entry resource.

We are looking at how we can improve the user experience for this module and may incorporate this in our next milestone release.
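
For reference, the config-map alternative mentioned above looks roughly like this; a sketch assuming the v19-style aws-auth inputs on the terraform-aws-eks module:

# Sketch: mapping the Karpenter node role via the aws-auth ConfigMap
# instead of an access entry (module-managed aws-auth assumed)
manage_aws_auth_configmap = true

aws_auth_roles = [
  {
    rolearn  = module.eks_blueprints_addons.karpenter.node_iam_role_arn
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  },
]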

@askulkarni2 askulkarni2 added this to the v2.0 milestone Apr 19, 2024
@askulkarni2 askulkarni2 added the enhancement New feature or request label Apr 19, 2024
@LucasRejanio (Author) commented

@askulkarni2 Thanks for your response. I believe we can improve this dependency, and I'm happy to contribute to the project on the user-experience side ;)

@lieberlois commented

@askulkarni2 What do you think about adding this as an option? @Christoph-Raab and I implemented it like this and were a bit surprised not to see it in the upstream blueprints, especially since it is part of the official Karpenter module.

I think we could just add something like this:

resource "aws_eks_access_entry" "node" {
  count = var.karpenter_enable_access_entry ? 1 : 0

  cluster_name  = "..."
  principal_arn = "..."

  type = "EC2_LINUX"
}
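
A matching variable declaration might look like this (the name comes from the snippet above; the description and default are assumptions):

variable "karpenter_enable_access_entry" {
  description = "Whether to create an EKS access entry for the Karpenter node IAM role"
  type        = bool
  default     = false
}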
