🏢 This directory contains the Terraform configuration for deploying EKS clusters. It provides a ready-to-use Terraform module with essential services and security features for different environments like prod, staging, qa, and sandbox.
- ✨ Comprehensive Root Terraform module for quick deployment of EKS clusters.
- 🗄️ Configured to use an external S3 bucket for Terraform state management with a DynamoDB table for state locking.
- 🔒 Utilization of AWS Secrets Manager for secure storage of secrets.
- 📈 Auto-scaling configuration for node groups.
Prerequisites:
- Direnv for loading environment variables.
- Terraform for infrastructure provisioning.
- tfswitch to switch between Terraform versions easily.
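A quick way to confirm these tools are installed and on your PATH:

```bash
# Print the resolved path of each prerequisite; a missing tool produces no output for it
command -v direnv terraform tfswitch
```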
⚙️ Setup Instructions:

1. **Change Directory:**
   Navigate to the directory containing the Terraform configuration:
   ```bash
   cd live/services-platform
   ```
2. **Create .envrc file:**
   Create a new `.envrc` file in this directory by copying the `.envrc.example` file:
   ```bash
   cp .envrc.example .envrc
   ```
   Then, update the `.envrc` file with the values for your environment!
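   For reference, a minimal `.envrc` might look like the sketch below. `ENVIRONMENT` and `TF_WORKSPACE` are used by the commands in this README; the AWS values are placeholders to adjust for your own account:
   ```bash
   # Environment name, used to pick the matching files under ./configs
   export ENVIRONMENT="staging"
   # Terraform workspace to operate in (often the same as ENVIRONMENT)
   export TF_WORKSPACE="staging"
   # AWS settings (placeholder values; adjust for your account)
   export AWS_PROFILE="my-aws-profile"
   export AWS_REGION="us-west-2"
   ```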
3. **Load Environment Variables:**
   Load the environment variables using direnv:
   ```bash
   direnv allow
   ```
4. **Set Terraform Version:**
   Ensure you are using the correct Terraform version:
   ```bash
   tfswitch
   ```
5. **Initialize Terraform:**
   Initialize the working directory with the required providers and modules:
   ```bash
   terraform init -backend-config="./configs/${ENVIRONMENT}-backend.tfvars"
   ```
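   For reference, the backend config file passed here holds the S3 state bucket and DynamoDB lock table settings described above. A sketch with placeholder names, written via a heredoc so you can adapt it quickly:
   ```bash
   # Create a backend config for one environment (all values are placeholders)
   cat > ./configs/staging-backend.tfvars <<'EOF'
   bucket         = "my-terraform-state-bucket"
   key            = "services-platform/terraform.tfstate"
   region         = "us-west-2"
   dynamodb_table = "my-terraform-locks"
   encrypt        = true
   EOF
   ```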
6. **Workspace Management:**
   Select or create a workspace tailored to your deployment environment:
   ```bash
   # Select an existing workspace
   terraform workspace select "${TF_WORKSPACE}"

   # Create a new workspace if it doesn't exist and select it
   terraform workspace new "${TF_WORKSPACE}"
   ```
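   In scripts, one common pattern makes this step idempotent by falling back to creating the workspace only when selecting it fails:
   ```bash
   # Select the workspace, creating it first if it does not exist yet
   terraform workspace select "${TF_WORKSPACE}" || terraform workspace new "${TF_WORKSPACE}"
   ```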
🚀 Deployment Instructions:
1. **Plan Your Deployment:**
   Review and verify the deployment plan:
   ```bash
   terraform plan -var-file "./configs/${ENVIRONMENT}.tfvars" -out "${ENVIRONMENT}.tfplan"
   ```
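   The saved plan file can be re-inspected at any time before applying:
   ```bash
   # Render the saved plan in human-readable form
   terraform show "${ENVIRONMENT}.tfplan"
   ```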
2. **Execute the Plan:**
   Apply the planned configuration to provision the infrastructure:
   ```bash
   terraform apply "${ENVIRONMENT}.tfplan"
   ```
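   Once the apply completes, the root module's outputs (including the cluster name used in the testing steps below) can be listed with:
   ```bash
   # Show all outputs exported by this root module
   terraform output
   ```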
🧪 Testing the Deployment:
After successfully deploying the infrastructure, follow these steps to test the deployment and ensure everything is working as expected.
AWS Session Manager provides secure and auditable instance management without needing to open inbound ports or manage SSH keys. For detailed steps on connecting to the bastion host using Session Manager, see the Bastion Host Module documentation: Connect to Bastion Host Using Session Manager.
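In short, once the Session Manager plugin for the AWS CLI is installed, you can open a shell on the bastion instance directly (the instance ID below is a placeholder):

```bash
# Start an interactive session on the bastion host via Session Manager
aws ssm start-session --target i-0123456789abcdef0
```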
To access the EKS cluster, configure your `kubectl` to use the new cluster context from the Bastion Host. Follow these steps:
1. **Get the Cluster Name:**
   Get the cluster name from the Terraform output:
   ```bash
   CLUSTER_NAME=$(terraform output -raw cluster_name)
   echo "Cluster Name: ${CLUSTER_NAME}"
   ```
2. **Update the Kubeconfig on the Bastion Host:**
   After connecting to the Bastion Host following the Connection Steps, update the kubeconfig to use the new cluster context:
   ```bash
   aws eks --region us-west-2 update-kubeconfig --name <CLUSTER_NAME>
   ```
3. **Test the cluster:**
   Get the list of nodes:
   ```bash
   kubectl get nodes
   ```
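   A couple of additional generic checks can help confirm the cluster is healthy:
   ```bash
   # System pods (CoreDNS, kube-proxy, aws-node) should all be Running
   kubectl get pods -n kube-system

   # Confirm which API server endpoint kubectl is talking to
   kubectl cluster-info
   ```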
To destroy the infrastructure, run the following command:
```bash
terraform destroy -var-file "./configs/${ENVIRONMENT}.tfvars"
```
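If you want to review what will be removed before committing to it, a destroy plan can be generated first:

```bash
# Preview the resources that terraform destroy would delete
terraform plan -destroy -var-file "./configs/${ENVIRONMENT}.tfvars"
```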
The module documentation is generated with terraform-docs by running the following command from the module directory:
```bash
terraform-docs md . > ./docs/MODULE.md
```
You can also view the latest version of the module documentation here.