
How to access Kubernetes API (port 6443) from the outside? #3

Closed
cellerich opened this issue May 16, 2022 · 4 comments
Labels: help wanted (Extra attention is needed) · question (Further information is requested)

@cellerich

Ciao Lorenzo

First of all, thanks for the repo!
After fiddling with all the different parameters Terraform and OCI need, I was able to start a k3s cluster.
However, I would like to access the cluster from my PC with a management GUI like Lens, or through my local kubectl.
Since I'm a bit of a network noob: how would I set up access from the outside to the servers' port 6443?

  • Do I add backend sets to the public LB (and if so, how)?
  • Should that route to the internal LB on port 6443, or round-robin to the two servers?

Thanks for any pointers!

garutilorenzo added the help wanted and question labels May 16, 2022
garutilorenzo self-assigned this May 16, 2022
@garutilorenzo (Owner) commented May 16, 2022

Hi @cellerich,

Yes, you could add a listener, a backend, and a backend set to the public LB.
Since we are using only Always Free resources, this is the only way to achieve it. In a production environment it would be better to use a separate Network LB with a dedicated security list.

Something like this should be OK for your use case:

resource "oci_load_balancer_listener" "k8s_kubeapi_listener" {
  default_backend_set_name = oci_load_balancer_backend_set.k8s_kubeapi_backend_set.name
  load_balancer_id         = oci_load_balancer_load_balancer.k8s_public_lb.id
  name                     = "k8s_kubeapi_listener"
  port                     = 6443
  protocol                 = "TCP"
  connection_configuration {
    idle_timeout_in_seconds            = 300
    backend_tcp_proxy_protocol_version = 2
  }
}

resource "oci_load_balancer_backend_set" "k8s_kubeapi_backend_set" {
  health_checker {
    protocol = "TCP"
    port     = 6443
  }

  load_balancer_id = oci_load_balancer_load_balancer.k8s_public_lb.id
  name             = "k8s_kubeapi_backend_set"
  policy           = "ROUND_ROBIN"
}

resource "oci_load_balancer_backend" "k8s_kubeapi_backend" {
  depends_on = [
    oci_core_instance_pool.k8s_servers,
  ]

  backendset_name  = oci_load_balancer_backend_set.k8s_kubeapi_backend_set.name
  ip_address       = data.oci_core_instance.k8s_servers_instances_ips.private_ip
  load_balancer_id = oci_load_balancer_load_balancer.k8s_public_lb.id
  port             = 6443
}

And you need to grant access from your public IP address to the public LB:

resource "oci_core_network_security_group_security_rule" "kubeapi" {
  network_security_group_id = oci_core_network_security_group.public_lb_nsg.id
  direction                 = "INGRESS"
  protocol                  = 6 # tcp

  description = "kubeapi rule"

  source      = var.my_public_ip_cidr
  source_type = "CIDR_BLOCK"
  stateless   = false

  tcp_options {
    destination_port_range {
      max = 6443
      min = 6443
    }
  }
}
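
If my_public_ip_cidr is not already defined among your variables, a definition along these lines would be needed (a sketch only; the default is just a placeholder for your own /32):

variable "my_public_ip_cidr" {
  # Placeholder default, replace with your own public IP in CIDR notation
  type        = string
  description = "Public IP allowed to reach the kubeapi through the public LB"
  default     = "1.2.3.4/32"
}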

@cellerich (Author)

Thanks a lot for the quick response, Lorenzo!
Will try it out and report back...

@cellerich (Author)

Hi Lorenzo

Just want to report back:
I was able to install a k3s cluster on the eu-zurich-1 region of Oracle Cloud as well.

While installing I ran into the same issue as this guy, but it went through anyway (I had to run terraform apply multiple times).

As for access to the Kubernetes API from a local computer (through Lens or kubectl), I added the following to the repo files:

lb.tf

# Kubernetes API
resource "oci_load_balancer_backend_set" "k3s_https_backend_set" {
  health_checker {
    protocol    = "HTTP"
    port        = var.https_lb_port
    url_path    = "/healthz"
    return_code = 200
  }
  load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
  name             = "K3s_https_backend_set"
  policy           = "ROUND_ROBIN"
}

resource "oci_load_balancer_backend" "k3s_https_backend" {
  depends_on = [
    oci_core_instance_pool.k3s_workers,
  ]

  count            = var.k3s_worker_pool_size
  backendset_name  = oci_load_balancer_backend_set.k3s_https_backend_set.name
  ip_address       = data.oci_core_instance.k3s_workers_instances_ips[count.index].private_ip
  load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
  port             = var.https_lb_port
}

resource "oci_load_balancer_listener" "k3s_kubeapi_listener" {
  default_backend_set_name = oci_load_balancer_backend_set.k3s_kubeapi_backend_set.name
  load_balancer_id         = oci_load_balancer_load_balancer.k3s_public_lb.id
  name                     = "k3s_kubeapi_listener"
  port                     = 6443
  protocol                 = "TCP"
  connection_configuration {
    idle_timeout_in_seconds            = 300
    backend_tcp_proxy_protocol_version = 2
  }
}

resource "oci_load_balancer_backend_set" "k3s_kubeapi_backend_set" {
  health_checker {
    protocol = "TCP"
    port     = 6443
  }

  load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
  name             = "k3s_kubeapi_backend_set"
  policy           = "ROUND_ROBIN"
}

resource "oci_load_balancer_backend" "k3s_kubeapi_backend" {
  depends_on = [
    oci_core_instance_pool.k3s_servers,
  ]

  count            = var.k3s_server_pool_size
  backendset_name  = oci_load_balancer_backend_set.k3s_kubeapi_backend_set.name
  ip_address       = data.oci_core_instance.k3s_servers_instances_ips[count.index].private_ip
  load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
  port             = 6443
}

nsg.tf

resource "oci_core_network_security_group_security_rule" "kubeapi" {
  network_security_group_id = oci_core_network_security_group.public_lb_nsg.id
  direction                 = "INGRESS"
  protocol                  = 6 # tcp

  description = "kubeapi rule"

  source      = var.my_public_ip_cidr
  source_type = "CIDR_BLOCK"
  stateless   = false

  tcp_options {
    destination_port_range {
      max = 6443
      min = 6443
    }
  }
}
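
To find the public LB IP to use in the kubeconfig below, an output along these lines can help (a sketch; it assumes the k3s_public_lb resource name used above and the provider's ip_addresses attribute):

output "k3s_public_lb_ip" {
  # Public IP(s) assigned to the load balancer; use the first entry in the kubeconfig server URL
  value = oci_load_balancer_load_balancer.k3s_public_lb.ip_addresses
}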

Since the certificate is built for the internal IP addresses only, we have to set "insecure-skip-tls-verify: true" in our kubeconfig file. There seems to be a bug as well: you cannot use certificate-authority-data together with the insecure-skip-tls-verify flag. The workaround seems to be to prefix the key with two underscores, as shown in the config file here:

config.k3s-oracle.yaml

apiVersion: v1
clusters:
- cluster:
    __certificate-authority-data: <your-data>
    server: https://<your public lb IP address>:6443
    insecure-skip-tls-verify: true
  name: k3s-oracle
contexts:
- context:
    cluster: k3s-oracle
    user: k3s-oracle
  name: k3s-oracle
kind: Config
preferences: {}
users:
- name: k3s-oracle
  user:
    client-certificate-data: <your data>
    client-key-data: <your data>
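
With that file saved locally, something like kubectl --kubeconfig config.k3s-oracle.yaml get nodes (or importing the file into Lens) should then reach the cluster through the public LB IP.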

Hope this helps fellow users set up this cool playground.

@garutilorenzo (Owner)

Hi @cellerich,

With this new release you can now expose the kubeapi to the internet, and you no longer have to skip certificate validation. The install procedure automatically generates an SSL certificate that also covers the public IP address.
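
(For reference, k3s itself supports adding extra Subject Alternative Names to its generated API server certificate via the --tls-san option, which is presumably what the install procedure uses for the public address.)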
