A lazy man's Terraform deployment of an AWS EKS cluster with Managed Hosts and Autoscaling.
- AWS account with access to provision resources
- AWS CLI v2.7.1+
- A client system with internet access.
In order to deploy the cluster, you will need a client with an internet connection and the software listed in the Requirements section installed.
It may be wise to set up an EC2 VM as a Bastion/Jumpbox to be used as the client. If this is preferred, deploy a Debian or Ubuntu VM and run the following commands on it.
$ git clone https://github.com/ubc/aws-eks-terraform.git my-first-eks-cluster
$ cd my-first-eks-cluster
$ sudo bash ./support/setup.sh
These commands may not be needed if the commands in the Setup Client section were run.
$ git clone https://github.com/ubc/aws-eks-terraform.git my-first-eks-cluster
$ cd my-first-eks-cluster
- Saml2AWS: https://github.com/Versent/saml2aws
- AWS Keys: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
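Authenticate to AWS using one of the methods above so that Terraform can find valid credentials. As a rough sketch (the profile name below is a placeholder, adjust it to your setup):
$ saml2aws configure # one-time Saml2AWS setup, if using Saml2AWS
$ saml2aws login # writes temporary credentials to ~/.aws/credentials
$ aws configure --profile my-eks-profile # alternative: configure static AWS keys for a named profile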
The following command helps find the profile name required in the next step. If you have more than one profile, ensure the correct one is chosen.
$ cat ~/.aws/credentials | grep -o '\[[^]]*\]'
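The output lists the profile names found in your credentials file, one per line, for example (the names below are placeholders, yours will differ):
[default]
[my-eks-profile]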
In order to override the default variables in "variables.tf", create a new file "ENV.tfvar" (where "ENV" is your environment name, e.g. "dev" or "prod"). Add any variables that need to be overridden (see the example after the notes below).
The most important variables are:
- region - AWS Region for EKS Cluster, e.g. ca-central-1
- profile - AWS Profile Name to be used to deploy the EKS Cluster.
- eks_instance_types - A List of Instance Types available to the Node Groups, e.g. ["t3a.xlarge", "t3a.large"]
- eks_instance_type - The Default Node Group Instance Type from eks_instance_types list, e.g. t3a.large
- cluster_base_name - The Base Name used for EKS Cluster Deployment, e.g. jupyterhub
- tag_project_name - A Project Name that is Tagged onto the EKS Cluster Deployment, e.g. jupyterhub
- environment - Deployment environment; this variable has to match the workspace name, e.g. dev or prod
Notes: Some regions do not have the same Instance Types as others. During deployment you may encounter a Terraform error stating which instance types are incompatible. Remove the incompatible instance types from the variable "eks_instance_types" and ensure that the variable "eks_instance_type" is set to one of the Instance Types listed in the variable "eks_instance_types".
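As an illustration, a "dev.tfvar" overriding the variables above might look like this (all values are placeholders drawn from the examples above; substitute your own region, profile, and names):
region             = "ca-central-1"
profile            = "my-eks-profile"
eks_instance_types = ["t3a.xlarge", "t3a.large"]
eks_instance_type  = "t3a.large"
cluster_base_name  = "jupyterhub"
tag_project_name   = "jupyterhub"
environment        = "dev"
Remember that "environment" must match the Terraform workspace name created in the deploy step below.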
Create an S3 bucket for storing the Terraform state (adjust the bucket name and region to match your deployment).
$ aws s3api create-bucket --bucket jupyter-ubc-ca-terraform-tfstate --create-bucket-configuration LocationConstraint=ca-central-1
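The repository's Terraform configuration is expected to store its state in this bucket. For reference only, an S3 backend block of that kind generally looks like the sketch below (bucket name, key, and region are illustrative; the actual backend definition lives in the repository and should not need editing):
terraform {
  backend "s3" {
    bucket = "jupyter-ubc-ca-terraform-tfstate"
    key    = "terraform.tfstate"
    region = "ca-central-1"
  }
}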
Deploy the EKS Cluster with Terraform.
$ terraform init -upgrade
$ terraform workspace new ENV # create a new workspace; replace ENV with the environment name, which has to match the "environment" variable
$ terraform apply -var-file=ENV.tfvar # replace ENV with the environment name
Notes: Generally, if anything goes wrong during deployment, it is from misconfigured variables. You can usually fix this by updating the variables.tf file with the correct information and rerunning "terraform apply". If anything goes wrong with the deployment that you can't solve by updating the variables, you can clean up by following the Destroy Cluster step.
This will be run automatically during the deployment. However, if something goes wrong, this command may be useful.
$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name) --profile $(terraform output -raw profile) && export KUBE_CONFIG_PATH=~/.kube/config && export KUBERNETES_MASTER=~/.kube/config
If these commands complete without errors, the deployment is complete!
$ kubectl version
$ kubectl get nodes
$ kubectl get pods -n kube-system # This should list a Pod with "coredns" in its name.
Destroy the EKS Cluster with Terraform.
$ saml2aws login # (comment out for non-Saml2AWS deployments)
$ terraform destroy -var-file=ENV.tfvar
Please open an issue on GitHub. Support will be provided on a best-effort basis.
Credit should also go to PIMS and Ian A. for providing deployments based on AWS EKS.