
Adopted local kind cluster doesn't work #953

Closed
Josca opened this issue Jan 24, 2025 · 3 comments · Fixed by #1066

Josca commented Jan 24, 2025

Describe the bug
I tried to adopt a local "kind" cluster as an adopted cluster for k0rdent testing. This should be possible according to the Adopted cluster guide.
Local setup:

  • a kind-based k0rdent management cluster created using the demo setup.
  • a second kind cluster to be used as the managed (adopted) cluster.

To Reproduce
Steps to reproduce the behavior:

  1. Create the Secret and Credential resources:
apiVersion: v1
data:
  value: YXNkc2RmYXNkZgo= # invalid kubeconfig - but it's still accepted
kind: Secret
metadata:
  name: adopted-cluster-kubeconf
  namespace: k0rdent
  labels:
    k0rdent.mirantis.com/component: "kcm"
type: Opaque
---
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: Credential
metadata:
  name: adopted-cluster-cred
  namespace: k0rdent
spec:
  description: Adopted Credentials
  identityRef:
    apiVersion: v1
    kind: Secret
    name: adopted-cluster-kubeconf
    namespace: k0rdent
  2. Create the adopted cluster's ClusterDeployment based on the adopted-cluster-0-0-2 template:
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: kind-adopted-cluster
  namespace: k0rdent
spec:
  template: adopted-cluster-0-0-2
  credential: adopted-cluster-cred
  # dryRun: <"true" or "false": defaults to "false">
  config:
    clusterLabels:
      k0rdent: demo
  serviceSpec:
    services:
      - template: ingress-nginx-4-11-0
        name: ingress-nginx
        namespace: ingress-nginx
  3. Check the ClusterDeployment status; it reports the cluster as ready:
k get clusterdeployments.k0rdent.mirantis.com 
# NAME                   READY   STATUS
# kind-adopted-cluster   True    ClusterDeployment is ready
k get clustersummaries.config.projectsveltos.io -A # ... but no cluster summaries
# No resources found
  • Services are not installed.
  • The Sveltos agent is not installed on the managed cluster.
  • It behaves the same way whether or not adopted-cluster-kubeconf.data.value is a valid kubeconfig. I tried using just an encoded placeholder string and it "worked" the same way (see the verification sketch below).
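
For illustration, this is one way to confirm that the stored kubeconfig is unusable even though the deployment reports ready (the secret name and namespace follow the example above; the check itself is my own sketch, not part of k0rdent):

# Extract and decode the kubeconfig stored in the credential secret.
kubectl get secret adopted-cluster-kubeconf -n k0rdent \
  -o jsonpath='{.data.value}' | base64 -d > /tmp/adopted.kubeconfig
# Try to reach the adopted cluster with it; with the invalid value above
# this fails, yet the ClusterDeployment still reports Ready.
kubectl --kubeconfig /tmp/adopted.kubeconfig get nodes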

Expected behavior
The adopted cluster actually works: the Sveltos agent is installed on the managed cluster and the requested services are deployed. If the supplied kubeconfig is unusable, the failure should be surfaced instead of the cluster being reported as ready.


@Josca Josca added the bug Something isn't working label Jan 24, 2025
@kylewuolle (Contributor) commented:

I think the issue is that the kubeconfig provided for the kind cluster is pointing at localhost (127.0.0.1), which is the case when it's obtained with kind get kubeconfig -n my-test-cluster. If you instead obtain the kubeconfig with the --internal flag, it will work from within the kind cluster's Docker network. For example, you could create the credential from the result of kind get kubeconfig --internal -n my-test-cluster | base64 -w 0.

Alternatively, you can create a kind config file similar to this:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: <choose a port>

Then create the kind cluster with kind create cluster -n my-test-cluster --config <kind config>. This binds the API port to all interfaces, so you can use your host IP in the kubeconfig file.
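
For illustration, the full --internal flow might look like the following (the secret name, namespace, and label are taken from the example at the top of this issue; treat it as a sketch rather than an official procedure):

# Get a kubeconfig that is reachable from inside the Docker network.
kind get kubeconfig --internal -n my-test-cluster > internal.kubeconfig
# Create the credential secret from that kubeconfig and label it for kcm.
kubectl create secret generic adopted-cluster-kubeconf -n k0rdent \
  --from-file=value=internal.kubeconfig
kubectl label secret adopted-cluster-kubeconf -n k0rdent \
  k0rdent.mirantis.com/component=kcm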


Josca commented Jan 27, 2025

Hey @kylewuolle, thank you. It really worked once I created the kubeconfig using the --internal flag, so the major issue is resolved. Still, it would be great to make it more visible when an adopted cluster is not actually "adopted":

  • Credential and ClusterDeployment objects should describe the issue in their status to simplify troubleshooting.


kylewuolle commented Feb 11, 2025


I've made a pull request that adds the Sveltos cluster status to the ClusterDeployment, which should help with these situations.
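
As a hypothetical usage example (the exact fields surfaced depend on the change in the linked pull request, so treat the output shape as an assumption), the reported status could be inspected like this:

# Inspect the conditions on the ClusterDeployment; an unreachable or
# invalid kubeconfig should now show up here instead of a bare Ready.
kubectl get clusterdeployments.k0rdent.mirantis.com kind-adopted-cluster \
  -n k0rdent -o jsonpath='{.status.conditions}'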
