This is a prototype and is not intended for production use. See here for the production Twingate Helm chart.
- Create a new Twingate API key with read, write and provision permissions, see here
- Create a Twingate Remote Network if one does not already exist
- Add the Helm repo and install the chart
helm repo add twingate-connector-init-container https://twingate-labs.github.io/connector-init-container/
helm upgrade --install {statefulSet_name} twingate-connector-init-container/connector -n {namespace} --set twingate.apiKey="{twingate_api_key}" --set twingate.account="{twingate_tenant_name}.twingate.com" --set twingate.networkName="{remote_network_name}" --set connector.replicas={number_of_replicas} --values connector-init-container/values.yaml
- statefulSet_name: the name of the Kubernetes StatefulSet, e.g. twingate-connector
- namespace: the Kubernetes namespace to deploy the StatefulSet into, e.g. default
- twingate_api_key: the value of the Twingate API key
- twingate_tenant_name: the Twingate tenant name
- remote_network_name: the Twingate Remote Network name
- number_of_replicas: the number of replicas to deploy; each replica is a Connector
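As an alternative to passing every setting with --set, the same values can be collected in a values file. The keys below are inferred from the --set flags above; check the chart's own values.yaml for the authoritative schema.

```yaml
# custom-values.yaml -- keys inferred from the --set flags above,
# not taken from the chart's published schema
twingate:
  apiKey: "{twingate_api_key}"
  account: "{twingate_tenant_name}.twingate.com"
  networkName: "{remote_network_name}"
connector:
  replicas: 2   # each replica is a Connector
```

The file can then be applied with helm upgrade --install {statefulSet_name} twingate-connector-init-container/connector -n {namespace} --values custom-values.yaml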
To scale up or down, change the value of number_of_replicas with either of the commands below.
kubectl scale statefulset {statefulSet_name} --replicas={number_of_replicas}
Or
helm upgrade --install {statefulSet_name} twingate-connector-init-container/connector -n {namespace} --set twingate.apiKey="{twingate_api_key}" --set twingate.account="{twingate_tenant_name}.twingate.com" --set twingate.networkName="{remote_network_name}" --set connector.replicas={number_of_replicas} --values connector-init-container/values.yaml
Note: scaling down disconnects Twingate Clients from the removed Connectors.
- The init container is released on GitHub
- Workflow:
  - The init container provisions a new Connector if there are no offline Connectors in the Remote Network
  - Otherwise, the tokens of an existing offline Connector are regenerated
  - The init container stores the Connector tokens in a Kubernetes Secret
  - The Connector application pod reads the tokens from the Kubernetes Secret
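The Secret written by the init container might look roughly like the sketch below; the Secret name and key names are assumptions for illustration, not the chart's actual naming.

```yaml
# Hypothetical shape of the Secret created by the init container;
# the metadata.name and stringData keys are assumed, not verified
apiVersion: v1
kind: Secret
metadata:
  name: twingate-connector-0-tokens
  namespace: default
type: Opaque
stringData:
  TWINGATE_ACCESS_TOKEN: "<access token from the Twingate API>"
  TWINGATE_REFRESH_TOKEN: "<refresh token from the Twingate API>"
```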
- The Connector application is deployed as a StatefulSet
- Anti-affinity is set to preferredDuringSchedulingIgnoredDuringExecution to distribute the Connectors across separate nodes
- A Service Account is used so the init container can run kubectl commands
- Replicas can be defined and scaled
- Pods are automatically recreated if they die or are killed
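The preferred anti-affinity rule described above corresponds to a pod spec fragment along these lines; the label selector values are illustrative, not the chart's actual labels.

```yaml
# Sketch of a preferred pod anti-affinity rule spreading Connector
# pods across nodes; label key/values are illustrative only
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: twingate-connector
          topologyKey: kubernetes.io/hostname
```

Because the rule is "preferred" rather than "required", the scheduler still places all pods when there are fewer nodes than replicas.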
- Tokens should be deleted after each Connector pod has started
  - A sidecar is most likely required
  - Can be improved if needed
- The role is set to cluster-admin, which is not ideal
  - Role access cannot be limited at the moment because the init container runs kubectl apply
  - Can be improved if needed
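If kubectl apply were replaced with narrower API calls, the cluster-admin binding could be swapped for a namespaced Role similar to the sketch below; the resource and verb list is an assumption about what the init container actually needs, not a verified minimum.

```yaml
# Hypothetical least-privilege Role for the init container;
# resources/verbs are assumed, adjust to what the container really uses
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: twingate-connector-init
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update", "patch"]
```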
To uninstall the chart:
helm uninstall {statefulSet_name} -n {namespace}