How to access Kubernetes API (port 6443) from the outside? #3
Ciao Lorenzo

First of all, thanks for the repo! After fiddling with all the different parameters Terraform and OCI need, I was able to start a K3s cluster. However, I would like to access the cluster from my PC with a management GUI like Lens or through my local kubectl. Since I'm a bit of a network noob: how would I set up access from the outside to the server port 6443?

Thanks for any pointers!
Comments
Hi @cellerich, yes you could add a listener, a backend and a backend set to the public LB. Something like this should be ok for your use case:
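A minimal sketch of the load balancer side, a TCP backend set plus a listener on 6443 (resource names here mirror the ones used elsewhere in this repo; the full working version @cellerich posted further down also adds the backends for the server nodes):

resource "oci_load_balancer_backend_set" "k3s_kubeapi_backend_set" {
  # plain TCP health check against the kubeapi port
  health_checker {
    protocol = "TCP"
    port     = 6443
  }
  load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
  name             = "k3s_kubeapi_backend_set"
  policy           = "ROUND_ROBIN"
}

resource "oci_load_balancer_listener" "k3s_kubeapi_listener" {
  default_backend_set_name = oci_load_balancer_backend_set.k3s_kubeapi_backend_set.name
  load_balancer_id         = oci_load_balancer_load_balancer.k3s_public_lb.id
  name                     = "k3s_kubeapi_listener"
  port                     = 6443
  protocol                 = "TCP"
}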
And you need to grant access from your public IP address to the public LB:
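For example, an NSG ingress rule like the one below (var.my_public_ip_cidr is assumed to hold your own address as a CIDR block, e.g. a /32):

resource "oci_core_network_security_group_security_rule" "kubeapi" {
  network_security_group_id = oci_core_network_security_group.public_lb_nsg.id
  direction                 = "INGRESS"
  protocol                  = 6 # TCP
  source                    = var.my_public_ip_cidr
  source_type               = "CIDR_BLOCK"
  stateless                 = false
  tcp_options {
    destination_port_range {
      min = 6443
      max = 6443
    }
  }
}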
Thanks a lot for the quick response, Lorenzo!
Hi Lorenzo

Just want to report back: while installing I ran into the same issue as this guy, but it ran through anyway (had to run …). As for local access to the Kubernetes API from a local computer (through Lens or kubectl), I added the following to the repo files:

lb.tf

# Kubernetes API
resource "oci_load_balancer_backend_set" "k3s_https_backend_set" {
health_checker {
protocol = "HTTP"
port = var.https_lb_port
url_path = "/healthz"
return_code = 200
}
load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
name = "K3s_https_backend_set"
policy = "ROUND_ROBIN"
}
resource "oci_load_balancer_backend" "k3s_https_backend" {
depends_on = [
oci_core_instance_pool.k3s_workers,
]
count = var.k3s_worker_pool_size
backendset_name = oci_load_balancer_backend_set.k3s_https_backend_set.name
ip_address = data.oci_core_instance.k3s_workers_instances_ips[count.index].private_ip
load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
port = var.https_lb_port
}
resource "oci_load_balancer_listener" "k3s_kubeapi_listener" {
default_backend_set_name = oci_load_balancer_backend_set.k3s_kubeapi_backend_set.name
load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
name = "k3s_kubeapi_listener"
port = 6443
protocol = "TCP"
connection_configuration {
idle_timeout_in_seconds = 300
backend_tcp_proxy_protocol_version = 2
}
}
resource "oci_load_balancer_backend_set" "k3s_kubeapi_backend_set" {
health_checker {
protocol = "TCP"
port = 6443
}
load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
name = "k3s_kubeapi_backend_set"
policy = "ROUND_ROBIN"
}
resource "oci_load_balancer_backend" "k3s_kubeapi_backend" {
depends_on = [
oci_core_instance_pool.k3s_servers,
]
count = var.k3s_server_pool_size
backendset_name = oci_load_balancer_backend_set.k3s_kubeapi_backend_set.name
ip_address = data.oci_core_instance.k3s_servers_instances_ips[count.index].private_ip
load_balancer_id = oci_load_balancer_load_balancer.k3s_public_lb.id
port = 6443
} nsg.tf resource "oci_core_network_security_group_security_rule" "kubeapi" {
  network_security_group_id = oci_core_network_security_group.public_lb_nsg.id
  direction                 = "INGRESS"
  protocol                  = 6 # TCP
  description               = "kubeapi rule"
  source                    = var.my_public_ip_cidr
  source_type               = "CIDR_BLOCK"
  stateless                 = false
  tcp_options {
    destination_port_range {
      max = 6443
      min = 6443
    }
  }
}

Since the certificate is built for the internal IP addresses only, we have to use "insecure-skip-tls-verify: true" in our kubeconfig file. There also seems to be a bug where you cannot use certificate-authority-data together with insecure-skip-tls-verify, which is why it is commented out below.

config.k3s-oracle.yaml

apiVersion: v1
clusters:
- cluster:
    # certificate-authority-data: <your-data>
    server: https://<your public lb IP address>:6443
    insecure-skip-tls-verify: true
  name: k3s-oracle
contexts:
- context:
    cluster: k3s-oracle
    user: k3s-oracle
  name: k3s-oracle
kind: Config
preferences: {}
users:
- name: k3s-oracle
  user:
    client-certificate-data: <your data>
    client-key-data: <your data>

Hope this helps fellow users set up this cool playground.
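To use the file, point kubectl at it via the KUBECONFIG environment variable or the --kubeconfig flag; Lens can import the same kubeconfig file directly.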
Hi @cellerich, with this new release you can now expose the kubeapi to the internet and you don't have to skip the certificate validation anymore. The install procedure will automatically generate the correct SSL certificate, including for the public IP address.
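(For context: k3s can include extra addresses in the API server certificate's subject alternative names via its --tls-san option, which is presumably what the install procedure now sets for the public IP.)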