This Terraform module facilitates the deployment of a comprehensive infrastructure setup within an AWS environment. The key components deployed by this module include:
- VPC (Virtual Private Cloud): A logically isolated network within the AWS cloud to securely launch AWS resources.
- EKS (Elastic Kubernetes Service): A managed Kubernetes service to run and scale containerized applications.
- Helm: A package manager for Kubernetes that helps in managing and deploying applications.
- AWS CodePipeline: A continuous integration and continuous delivery (CI/CD) service for fast and reliable application and infrastructure updates.
- Karpenter: An open-source node provisioning project for Kubernetes that launches right-sized nodes in response to unschedulable pods.
- Prometheus: A monitoring and alerting toolkit designed for reliability and scalability.
This module ensures a robust and scalable environment for deploying and managing containerized applications on AWS.
This Terraform module configures a Virtual Private Cloud (VPC) using the `terraform-aws-modules/vpc/aws` module. The configuration includes both public and private subnets, NAT gateways, and flow logs. Below is a summary of the key settings and components.
- Source Module: `terraform-aws-modules/vpc/aws`
- VPC Name: Derived from `local.name`
- CIDR Block: `var.vpc_cidr`
- Availability Zones: `local.azs`
- Private Subnets: Derived from `var.vpc_cidr` with 4 additional subnet bits (e.g. /20 subnets from a /16 VPC CIDR)
- Public Subnets: Derived from `var.vpc_cidr` with 8 additional subnet bits (e.g. /24 subnets from a /16 VPC CIDR), offset by 48 network numbers so they do not overlap the private ranges
- Public IP Mapping: Enabled for instances in public subnets
- NAT Gateway: Enabled, single NAT gateway setup
- Flow Logs: Enabled, sent to CloudWatch Logs with a retention period of 30 days
- Private Subnets (see the worked example after this list):
  `private_subnets = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 4, k)]`
  These subnets are designated for internal resources.
- Public Subnets:
  `public_subnets = [for k, v in local.azs : cidrsubnet(var.vpc_cidr, 8, k + 48)]`
  These subnets are designated for resources that need public internet access.
- Single NAT Gateway: Ensures private subnets can access the internet.
  ```hcl
  enable_nat_gateway = true
  single_nat_gateway = true
  ```
- Public Subnet Tags:
  ```hcl
  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }
  ```
- Private Subnet Tags:
  ```hcl
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
    "karpenter.sh/discovery"          = "${local.name}-cluster"
  }
  ```
- Enable Flow Logs:
  ```hcl
  enable_flow_log                                 = var.enable_flow_logs
  flow_log_destination_type                       = "cloud-watch-logs"
  flow_log_cloudwatch_log_group_retention_in_days = 30
  create_flow_log_cloudwatch_iam_role             = var.enable_flow_logs
  create_flow_log_cloudwatch_log_group            = var.enable_flow_logs
  ```
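For intuition, here is a worked example of what those `cidrsubnet` expressions evaluate to, assuming a hypothetical `10.0.0.0/16` VPC CIDR and three availability zones (the actual values depend on `var.vpc_cidr` and `local.azs`):

```hcl
locals {
  example_cidr = "10.0.0.0/16" # hypothetical VPC CIDR
  example_azs  = ["us-east-1a", "us-east-1b", "us-east-1c"]

  # cidrsubnet(prefix, newbits, netnum)
  example_private = [for k, v in local.example_azs : cidrsubnet(local.example_cidr, 4, k)]
  # => ["10.0.0.0/20", "10.0.16.0/20", "10.0.32.0/20"]

  example_public = [for k, v in local.example_azs : cidrsubnet(local.example_cidr, 8, k + 48)]
  # => ["10.0.48.0/24", "10.0.49.0/24", "10.0.50.0/24"]
}
```

The offset of 48 places the public /24 blocks well above the private /20 ranges, so the two sets of subnets never overlap.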
This Terraform module configures an Amazon EKS cluster using the `terraform-aws-modules/eks/aws` module. The configuration includes setting up cluster add-ons, managed node groups, and authentication.
- Source Module: `terraform-aws-modules/eks/aws`
- EKS Cluster Name: `${local.name}-cluster`
- EKS Version: `1.28`
- VPC: Uses `module.vpc.vpc_id` and the private subnets
- Public Endpoint: Enabled
- Cluster Add-ons: CoreDNS, kube-proxy, VPC CNI, and the AWS EBS CSI Driver, each configured to use the most recent version (see the sketch after this list)
- Node Group Configuration:
  ```hcl
  eks_managed_node_groups = {
    baseline-infra = {
      instance_types = [var.node_type]
      min_size       = 2
      max_size       = 2
      desired_size   = 2

      metadata_options = {
        http_endpoint               = "enabled"
        http_tokens                 = "required"
        http_put_response_hop_limit = 3
        instance_metadata_tags      = "disabled"
      }

      iam_role_additional_policies = {
        AmazonEBSCSIDriverPolicy = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
      }
    }
  }
  ```
- Node Security Group Tags:
  ```hcl
  node_security_group_tags = {
    "kubernetes.io/cluster/${local.name}-cluster" = null
  }
  ```
- Tags:
  ```hcl
  tags = merge(local.tags, {
    "karpenter.sh/discovery" = "${local.name}-cluster"
  })
  ```
- EKS Authentication:
  ```hcl
  module "eks_auth" {
    source = "aidanmelen/eks-auth/aws"
    eks    = module.eks

    map_roles = [
      {
        rolearn  = module.eks_blueprints_addons.karpenter.node_iam_role_arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ]
  }
  ```
This configuration sets up a robust EKS cluster with necessary add-ons, managed node groups, and authentication mechanisms.
This Terraform module configures various add-ons for an Amazon EKS cluster using the `aws-ia/eks-blueprints-addons/aws` module. It includes configurations for Karpenter, the AWS Load Balancer Controller, and several Helm releases for deploying additional resources.
- Source Module: `aws-ia/eks-blueprints-addons/aws`
- Version: `1.16.3`
- Cluster Name: `module.eks.cluster_name`
- OIDC Provider ARN: `module.eks.oidc_provider_arn`
- Key Add-ons (a consolidated sketch follows at the end of this section):
  - AWS Load Balancer Controller: Enabled
  - Karpenter: Enabled with spot termination handling
- Repository Authentication:
  ```hcl
  karpenter = {
    repository_username = data.aws_ecrpublic_authorization_token.token.user_name
    repository_password = data.aws_ecrpublic_authorization_token.token.password
  }
  ```
- Spot Termination Handling:
  ```hcl
  karpenter_enable_spot_termination = true
  ```
- Karpenter Resources:
  ```hcl
  helm_releases = {
    karpenter-resources-default = {
      name        = "karpenter-resources-basic"
      description = "A Helm chart for karpenter CPU based resources basic tier"
      chart       = "${path.module}/helm-charts/karpenter"
      values = [<<-EOT
        clusterName: ${module.eks.cluster_name}
        subnetname: ${local.name}
        deviceName: "/dev/xvda"
        instanceName: backend-karpenter-node
        iamRoleName: ${module.eks_blueprints_addons.karpenter.node_iam_role_name}
        instanceFamilies: ["m5"]
        amiFamily: AL2023
        taints: []
        labels: []
        EOT
      ]
    }
  }
  ```
This configuration sets up essential EKS add-ons and deploys additional resources using Helm charts.
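A minimal sketch of how these add-ons are typically wired into the `aws-ia/eks-blueprints-addons/aws` module is shown below. The input names exist in that module, but the exact block in this repository may differ and is abbreviated here:

```hcl
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "1.16.3"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  enable_aws_load_balancer_controller = true

  enable_karpenter                  = true
  karpenter_enable_spot_termination = true
  karpenter = {
    # Public ECR credentials for pulling the Karpenter chart
    repository_username = data.aws_ecrpublic_authorization_token.token.user_name
    repository_password = data.aws_ecrpublic_authorization_token.token.password
  }
}
```

The `helm_releases` map shown earlier plugs into this same module block.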
This section provides a detailed explanation of the Helm releases configured in the `eks_blueprints_addons` module. Each Helm release is used to deploy specific resources to the EKS cluster, with custom configurations provided through Helm values.
Name: `karpenter-resources-basic`
Description: A Helm chart for Karpenter CPU-based resources, basic tier.
Chart Path: `${path.module}/helm-charts/karpenter`
Values:
```yaml
clusterName: ${module.eks.cluster_name}
subnetname: ${local.name}
deviceName: "/dev/xvda"
instanceName: backend-karpenter-node
iamRoleName: ${module.eks_blueprints_addons.karpenter.node_iam_role_name}
instanceFamilies: ["m5"]
amiFamily: AL2023
taints: []
labels: []
```
Explanation:
- clusterName: The name of the EKS cluster.
- subnetname: The name of the subnet for Karpenter.
- deviceName: Device name for instances.
- instanceName: Name assigned to Karpenter nodes.
- iamRoleName: IAM role name for Karpenter nodes.
- instanceFamilies: Instance families to be used (e.g., m5).
- amiFamily: AMI family to be used (e.g., AL2023).
- taints: Node taints for scheduling.
- labels: Node labels for identification.
The Kube Prometheus Stack is a collection of Kubernetes-native services and configurations for monitoring and observability within Kubernetes clusters. It leverages the Prometheus monitoring system and integrates it with Grafana for visualization and Alertmanager for alerting.
Here's a breakdown of its components:
- Prometheus: A time-series database and monitoring system that collects metrics from configured targets at regular intervals, evaluates rule expressions, displays the results, and can trigger alerts if necessary.
- Grafana: A popular open-source platform for analytics and monitoring. Grafana allows users to query, visualize, alert on, and understand metrics no matter where they are stored.
- Alertmanager: Handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration (like email, PagerDuty, or custom webhooks).
The Kube Prometheus Stack simplifies the deployment and management of these components within Kubernetes clusters. It provides predefined configurations, dashboards, and alerting rules tailored for monitoring Kubernetes infrastructure and applications running within it.
This Helm values file is configuring the installation of the Kube Prometheus Stack, which includes Grafana, Prometheus, and Alertmanager components. Here's a summary:
- Grafana:
  - Enabled with an admin password generated dynamically.
  - Additional data source configured for CloudWatch.
  - Ingress configured with ALB (Application Load Balancer) settings for external access.
- Prometheus:
  - Retention policy set to 15 days.
  - Ingress configured with ALB settings for external access.
- Alertmanager:
  - Enabled with ALB settings for external access.
  - Inhibition rules defined for handling alerts.
  - Routing rules defined for different receivers, including Discord webhook integration for alert notifications.
The setup is designed to provide monitoring capabilities with external access through ALB, dynamic configuration using Terraform variables, and integration with Discord for alert notifications.
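One common way to wire this up is a `helm_release` from the Helm provider with the values file rendered through `templatefile`. The following is a minimal sketch under that assumption; the resource, file, and variable names here are illustrative and not taken from the repository:

```hcl
# Hypothetical password resource; the document only says the admin password is generated dynamically
resource "random_password" "grafana_admin" {
  length  = 16
  special = true
}

resource "helm_release" "kube_prometheus_stack" {
  name             = "kube-prometheus-stack"
  repository       = "https://prometheus-community.github.io/helm-charts"
  chart            = "kube-prometheus-stack"
  namespace        = "monitoring"
  create_namespace = true

  # Render the Helm values with Terraform variables (file name and template inputs are hypothetical)
  values = [templatefile("${path.module}/helm-values/kube-prometheus-stack.yaml", {
    grafana_admin_password = random_password.grafana_admin.result
    prometheus_retention   = "15d"
    discord_webhook_url    = var.discord_webhook_url # hypothetical variable
  })]
}
```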
This pipeline is designed to automate the process of building and pushing Docker images to Amazon Elastic Container Registry (ECR) using AWS CodePipeline and AWS CodeBuild.
The `aws_codebuild_project` resource defines a CodeBuild project with the following key functionalities:
- Name: `ecr-push-pipeline-project`
- Build Timeout: 5 minutes
- Artifacts: Integrates with CodePipeline
- Environment: Configured to use a small Linux container with Docker support and the necessary environment variables for the AWS region, account ID, and ECR repository name (see the sketch below)
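A minimal sketch of what such a CodeBuild project can look like; the role, build image, variable names, and buildspec location are illustrative assumptions, not copied from the repository:

```hcl
resource "aws_codebuild_project" "ecr_push" {
  name          = "ecr-push-pipeline-project"
  build_timeout = 5
  service_role  = aws_iam_role.codebuild_role.arn # assumed role, not defined in this sketch

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/standard:7.0" # assumed build image
    type            = "LINUX_CONTAINER"
    privileged_mode = true # required for Docker builds

    environment_variable {
      name  = "AWS_DEFAULT_REGION"
      value = var.aws_region
    }
    environment_variable {
      name  = "AWS_ACCOUNT_ID"
      value = var.account_id
    }
    environment_variable {
      name  = "ECR_REPO_NAME"
      value = var.backend_app_code_repo_name # assumed mapping to the ECR repository name
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = file("${path.module}/buildspec.yaml") # assumed buildspec location
  }
}
```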
The `aws_codepipeline` resource sets up a pipeline with two main stages:
- Source Stage:
  - Retrieves the source code from a Bitbucket repository.
  - Monitors the `adeel-dev` branch for changes.
- Build Stage:
  - Uses AWS CodeBuild to build the Docker image.
  - Pushes the built image to the specified ECR repository.
The `aws_codestarconnections_connection` resource establishes a connection to a Bitbucket repository for the source stage of the pipeline.
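For reference, such a connection is typically declared as follows (the connection name is a placeholder):

```hcl
resource "aws_codestarconnections_connection" "bitbucket" {
  name          = "bitbucket-connection" # placeholder name
  provider_type = "Bitbucket"
}
```

A newly created connection starts in a pending state and must be authorized once in the AWS console before the pipeline can pull from Bitbucket.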
The `buildspec.yaml` file contains the build instructions:
- Pre-Build:
  - Logs in to Amazon ECR.
  - Calculates the image tag using the current date and build number.
- Build:
  - Builds the Docker image.
  - Tags the image with the calculated tag.
- Post-Build:
  - Pushes the Docker image to ECR.
This pipeline automates the process of deploying Docker images to an Amazon EKS (Elastic Kubernetes Service) cluster using AWS CodePipeline and AWS CodeBuild.
The `aws_codebuild_project` resource defines a CodeBuild project with the following key functionalities:
- Name: `eks-deploy-pipeline-project`
- Build Timeout: 5 minutes
- Artifacts: Integrates with CodePipeline
- Environment: Configured to use a small Linux container with Docker support and the necessary environment variables for the AWS region, EKS cluster, IAM roles, and secrets
The `aws_codepipeline` resource sets up a pipeline with two main stages:
- Source Stage:
  - Retrieves the source code from a Bitbucket repository.
  - Monitors the `dev` branch for changes.
- Build Stage:
  - Uses AWS CodeBuild to build the Docker image and deploy it to the EKS cluster.
The `aws_codestarconnections_connection` resource establishes a connection to a Bitbucket repository for the source stage of the pipeline.
The `buildspec.yaml` file contains the build instructions:
- Pre-Build:
  - Checks if the specified Docker image tag exists in ECR.
- Install:
  - Installs Helm and kubectl if they are not already installed.
- Build:
  - Assumes the necessary IAM role to update the kubeconfig for EKS.
  - Deploys the application to the EKS cluster using Helm.
This guide outlines the steps to deploy FastAPI on Amazon EKS using Terraform. The deployment includes setting up the necessary infrastructure such as the VPC, EKS cluster, and other resources required to run FastAPI.
Before you begin, ensure you have the following:
- AWS CLI installed and configured with appropriate credentials.
- Terraform CLI installed on your local machine.
- Basic understanding of Amazon EKS, Terraform, and Kubernetes concepts.
Details are mentioned in this readme
In the `scripts` dir, the `pre-create.sh` script is used to deploy the initial resources in a new account. It creates the S3 bucket and DynamoDB table for keeping the Terraform state and lock files. Edit these values accordingly or leave them as they are:
```bash
dynamo_table_name="terraform-lock"
bucket_name="twin-citi-terraform-state-bucket"
```
After updating these values, run the script using this command:

```bash
bash ./scripts/pre-create.sh <region>
```

Example:

```bash
bash ./scripts/pre-create.sh us-east-1
```
Update the backend configuration in the `providers.tf` file with the names of the S3 bucket and DynamoDB table, matching the values specified in the `pre-create.sh` script:
backend "s3" {
bucket = "twin-citi-terraform-state-bucket"
key = "terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-lock"
}
Also update the allowed account ID:

```hcl
allowed_account_ids = ["058264243973"]
```
Open the `variables.tf` file and update the variables according to your requirements. You may need to modify variables such as `aws_region`, `vpc_cidr`, `node_type`, etc., to match your environment; a hypothetical example follows.
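For instance, values can be supplied through a `terraform.tfvars` file along these lines (every value below is a placeholder; the variable names come from the Inputs table at the end of this document):

```hcl
# terraform.tfvars -- every value is a placeholder
aws_region       = "us-east-1"
account_id       = "123456789012"
environament     = "dev"          # variable name as defined in variables.tf
vpc_cidr         = "10.0.0.0/16"
node_type        = "m5.large"
enable_flow_logs = true
# ...remaining required variables from the Inputs table omitted for brevity
```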
Initialize the Terraform environment to download the required plugins and modules:

```bash
terraform init
```

Review the Terraform plan to ensure it will create the desired infrastructure:

```bash
terraform plan
```

Apply the Terraform configuration to deploy the infrastructure on AWS:

```bash
terraform apply
```
To destroy the deployed infrastructure, run:

```bash
terraform destroy
```
Confirm the deletion when prompted.
- Ensure you have necessary permissions and privileges to create and manage resources on AWS.
- Customize the FastAPI application according to your requirements before deploying it on EKS.
| Name | Version |
|------|---------|
| terraform | >= 1.0 |
| aws | >= 4.5.0 |
| cloudflare | >= 4.20 |
| helm | >= 2.4.1 |
| kubectl | >= 1.14 |
| kubernetes | >= 2.10 |
| random | >= 3.4 |
| Name | Version |
|------|---------|
| aws | 5.52.0 |
| aws.ecr | 5.52.0 |
| cloudflare | 4.34.0 |
| local | 2.5.1 |
| random | 3.6.2 |
| tls | 4.0.5 |
| Name | Source | Version |
|------|--------|---------|
| eks | terraform-aws-modules/eks/aws | ~> 20.0 |
| eks_auth | aidanmelen/eks-auth/aws | n/a |
| eks_blueprints_addons | aws-ia/eks-blueprints-addons/aws | 1.16.3 |
| vpc | terraform-aws-modules/vpc/aws | n/a |
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| account_id | AWS account ID | `string` | n/a | yes |
| aws_region | Region where all the resources will be deployed | `string` | n/a | yes |
| backend_app_code_repo_name | Name of the Bitbucket repo for the backend | `string` | n/a | yes |
| backend_aws_secret_name | Name of the secret where all the AWS secrets are stored | `string` | n/a | yes |
| backend_domain_prefix | Domain prefix for the backend | `string` | n/a | yes |
| backend_repo_branch_name | Name of the backend repo branch | `string` | n/a | yes |
| cloudflare_secret_name | Name of the secret where the Cloudflare API key is stored | `string` | n/a | yes |
| cloudflare_zone_name | Name of the zone in which to create the domain records | `string` | n/a | yes |
| enable_flow_logs | Enable flow logs for the VPC | `bool` | n/a | yes |
| environament | Name of the environment, e.g. dev, prod, or test | `string` | n/a | yes |
| helm_repo_branch | Name of the Helm repo branch | `string` | n/a | yes |
| helm_repo_name | Name of the Helm repo from Bitbucket | `string` | n/a | yes |
| node_type | Type of nodes for the initial EKS deployment | `string` | n/a | yes |
| third_party_secrets_id | Name of the secret where all the third-party secrets are stored | `string` | n/a | yes |
| vpc_cidr | CIDR block for the VPC | `string` | n/a | yes |
| Name | Description |
|------|-------------|
| configure_kubectl | Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig |
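The `configure_kubectl` output typically wraps the AWS CLI command used to update the local kubeconfig. A minimal sketch of such an output, assuming the cluster and region references used elsewhere in this module (the exact wording in the repository may differ):

```hcl
output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name}"
}
```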