Extension uses standard .kube folder for config - could be a problem #213

Open
chrizzo84 opened this issue Jan 26, 2023 · 4 comments

@chrizzo84 commented Jan 26, 2023

Hello everyone!
We have noticed that the task always uses the fixed directory /home/user_name/.kube/ to store and read the config.
This can be very problematic when several self-hosted agents run on one server. If several agents run at the same time under the same user context (which is normally not a problem), pipeline 1 may store its config in that folder, and a pipeline 2 started in the meantime overwrites it. If pipeline 1 then has a delete step, it could delete resources that should only be touched by pipeline 2. In effect, pipeline 1 suddenly runs with pipeline 2's data.
Of course, the workaround would be to create different users for the different agents. But in my opinion, a task should either accept a working directory or store everything that concerns it in the corresponding run, i.e. in the folder of the agent on which the pipeline is currently running.
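
For illustration (not from the original report), a minimal sketch of this kind of per-run isolation, assuming oc's standard --kubeconfig flag; the server and token variables are hypothetical placeholders:

```yaml
steps:
# Hypothetical sketch: keep each run's kubeconfig inside the agent's own
# build directory instead of the shared ~/.kube/config.
- script: |
    oc login "$(OPENSHIFT_SERVER)" --token="$(OPENSHIFT_TOKEN)" \
      --kubeconfig="$(Agent.BuildDirectory)/.kube/config"
  displayName: "oc login with a per-run kubeconfig"
```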

@kkizer commented Mar 7, 2023

This is definitely a problem! We use the same build agents for AKS and OpenShift and this is causing builds to stomp on each other. The config file should be written to a file in the build temp directory and KUBECONFIG set to the path to the file.

@chrizzo84 (Author)

Correct, that's my opinion as well.
I don't think it's good to just use the standard .kube folder in the home directory!

@kkizer commented Mar 7, 2023

So how do we get someone to fix this? I see where the issue is, but I'm not sure I know how to build and test changes.

@kkizer commented Mar 8, 2023

Reading through the code, I think I found a workaround. The redhat.openshift-vsts.oc-setup-task.oc-setup@2 task does an oc login, which by default writes the cluster config to whatever path the KUBECONFIG environment variable points to, or to ~/.kube/config if it is not defined. So define pipeline variables as follows:

```yaml
variables:
# We need to keep the desired KUBECONFIG path in a variable named something OTHER than
# KUBECONFIG, since the OpenShift tools install will change the KUBECONFIG value.
- name: KUBECONFIG_PATH
  value: $(Agent.TempDirectory)/config-$(System.JobId)

# Setting a pipeline variable also sets an environment variable with the same name (in all caps).
# Set the KUBECONFIG value BEFORE installing the OpenShift tools so that the oc login command
# creates the cluster configuration file at the path set in KUBECONFIG. The OpenShift tools
# install task will CHANGE the KUBECONFIG environment variable back to the default (~/.kube/config),
# which is NO BUENO in a shared build environment where all builds run as the same user.
# Be sure to set the KUBECONFIG value back to the desired path before running ANY oc or helm
# commands!
- name: KUBECONFIG
  value: $(KUBECONFIG_PATH)
```

The KUBECONFIG_PATH variable defines the path to a file in the build temp directory; for extra safety, it includes the job id in the name. We need this second variable because the redhat.openshift-vsts.oc-setup-task.oc-setup@2 task will always set KUBECONFIG back to ~/.kube/config.

Before ANY calls to the Kubernetes API (oc command, helm, etc.), export KUBECONFIG to the correct value:

```yaml
- task: PowerShell@2
  name: set_kubeconfig
  displayName: "Reset KUBECONFIG"
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "[debug] Setting KUBECONFIG environment variable BACK to $(KUBECONFIG_PATH) before running any oc or helm commands"
      # Set it for the remainder of this step...
      $env:KUBECONFIG = "$(KUBECONFIG_PATH)"
      # ...and persist it as a pipeline variable so subsequent steps see it too
      # ($env: alone only affects the current step's process).
      Write-Host "##vso[task.setvariable variable=KUBECONFIG]$(KUBECONFIG_PATH)"
```
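
Alternatively (a sketch, not from the thread): Azure Pipelines can map KUBECONFIG per step through the task-level env block, which avoids mutating process state in a script. The helm release and chart names below are placeholders:

```yaml
- task: Bash@3
  displayName: "Deploy with the per-job kubeconfig"
  inputs:
    targetType: 'inline'
    script: |
      # helm reads KUBECONFIG from the step environment mapped below
      helm upgrade --install my-release ./chart
  env:
    KUBECONFIG: $(KUBECONFIG_PATH)
```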

When you look at the pipeline logs, you will see lines like this in the redhat.openshift-vsts.oc-setup-task.oc-setup@2 task:

```
W0308 09:07:22.873578    6449 loader.go:223] Config not found: /AzDOAgents/Agent1/_work/_temp/config-12f1170f-54f2-53f3-20dd-22fc7dff55f9
```

Which is great, since it shows the task honored the per-job path and that the oc login then created the file there.

We run admin pipelines that use service accounts that have cluster admin access. We cannot have another pipeline pick up OUR cluster config file in ~/.kube/config!
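
As extra defense in depth (my addition, not from the thread; Azure DevOps normally purges $(Agent.TempDirectory) between jobs anyway), a cleanup step can remove the per-job config even when earlier steps fail:

```yaml
- task: PowerShell@2
  displayName: "Remove per-job kubeconfig"
  condition: always()  # run even if earlier steps failed
  inputs:
    targetType: 'inline'
    script: |
      if (Test-Path "$(KUBECONFIG_PATH)") {
        Remove-Item -Force "$(KUBECONFIG_PATH)"
      }
```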
