Feature request
We want to manage a remote user cluster that is owned by a customer. If we can connect the node-controller to a remote Kubernetes cluster, then we can remove the workload on that cluster. The node-controller holds the credentials to the Citrix ADC, so we don't have to find a way to hide these credentials.

Current situation
The setup is currently fixed to two cases: local development with a kubeconfig, or running inside a Kubernetes cluster. The namespace is pulled from /var/run/secrets inside the node-controller pod. The Kubernetes API connection has only two options built in:
- the kubeconfig located at ~/.kube/config
- the in-cluster configuration, pulled from /var/run/secrets inside the node-controller pod
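The two built-in options above, plus the explicit URL this issue proposes, can be sketched as a single resolution function. This is a minimal illustration, not the node-controller's actual code: `resolveAPIServer` and the precedence order (explicit URL first, then kubeconfig, then in-cluster) are assumptions; the environment variables and service-account path in case 3 are the standard ones Kubernetes injects into every pod.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveAPIServer decides which Kubernetes API endpoint to talk to.
// Case 1 is the hypothetical kubernetesURL option proposed in this issue;
// cases 2 and 3 mirror the two existing built-in options.
func resolveAPIServer(kubernetesURL string) (string, error) {
	// 1. Proposed: an explicitly configured remote (customer) cluster.
	if kubernetesURL != "" {
		return kubernetesURL, nil
	}
	// 2. Existing: local development with ~/.kube/config.
	if home, err := os.UserHomeDir(); err == nil {
		kubeconfig := filepath.Join(home, ".kube", "config")
		if _, err := os.Stat(kubeconfig); err == nil {
			return "kubeconfig:" + kubeconfig, nil
		}
	}
	// 3. Existing: in-cluster configuration. Kubernetes injects these
	// variables into every pod; the service-account token and namespace
	// live under /var/run/secrets/kubernetes.io/serviceaccount.
	host := os.Getenv("KUBERNETES_SERVICE_HOST")
	port := os.Getenv("KUBERNETES_SERVICE_PORT")
	if host != "" && port != "" {
		return fmt.Sprintf("https://%s:%s", host, port), nil
	}
	return "", fmt.Errorf("no Kubernetes API connection available")
}

func main() {
	// With an explicit URL set, the remote customer cluster wins.
	url, err := resolveAPIServer("https://customer-cluster.example.com:6443")
	fmt.Println(url, err)
}
```

In a real implementation the returned endpoint would be fed into a client-go `rest.Config` together with the matching credentials; the sketch only shows the selection logic.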
BobVanB changed the title from "[Feature] Change the kubernetesURL for node-controller to manage a different cluster." to "[Feature] Use a kubernetesURL for node-controller to manage a different cluster." on Sep 29, 2023
Is there any progress on this feature request? We would really like this; it would solve our multicluster/multicustomer problem.
With kind regards, Bob
Hi Bob,
We are discussing this thread over email and exploring CPX as an alternative option for your remote access setup.
Let us know if you find any issues with adopting the suggested architecture.
The same functionality exists in the citrix-ingress-controller:
https://github.com/citrix/citrix-helm-charts/blob/master/citrix-ingress-controller/values.yaml#L42
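For comparison, a node-controller equivalent of the option linked above might look like the following Helm values fragment. This is a hypothetical sketch: the key name `kubernetesURL` is taken from this issue's title, and the exact key and shape in the shipped citrix-ingress-controller chart may differ.

```yaml
# Hypothetical values.yaml entry for the node-controller, modeled on the
# analogous option in the linked citrix-ingress-controller chart.
# The key name and structure are assumptions, not a shipped chart schema.
kubernetesURL: "https://customer-cluster.example.com:6443"  # remote customer API server

# Credentials for the remote cluster would be supplied separately
# (e.g. a service-account token mounted as a secret); the Citrix ADC
# credentials stay with the node-controller and never leave our side.
```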