infrastructure-terraform-eks

License: GPL v3

A custom-built Terraform module, leveraging terraform-aws-eks, that creates a managed Kubernetes cluster on AWS EKS. Beyond provisioning the EKS cluster itself, this module bundles additional components to deliver an end-to-end base stack for functional Kubernetes clusters in development and production environments, including a base set of software that can or should be used across all clusters. The primary integrated sub-modules are listed in the Modules section below.

Assumptions

  • You want to create an EKS cluster and an autoscaling group of workers for the cluster.
  • You want these resources to exist within security groups that allow communication and coordination.
  • You want to generate an accompanying Virtual Private Cloud (VPC) and subnets where the EKS resources will reside. The VPC satisfies EKS requirements.
  • You want specific security provisions in place, including the use of private subnets for all nodes.
  • The base ingress controller is the NGINX Ingress Controller, exposed via internet-facing AWS Network Load Balancers rather than AWS Application Load Balancers.

Important note

cluster_version is a required variable. Kubernetes evolves quickly, and each minor version includes new features, fixes, or changes.

Always check the Kubernetes release notes before updating the minor version.

You also need to ensure that your applications and add-ons are updated, or workloads could fail after the upgrade completes. For actions you may need to take before upgrading, see the steps in the EKS documentation.

An example of a breaking update was the removal of several commonly used, but deprecated, APIs in Kubernetes 1.16. For more information on the API removals, see the Kubernetes blog post.
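Given the upgrade caveats above, it can help to pin the cluster version explicitly in your variable definitions rather than relying on the default. A minimal terraform.tfvars sketch — variable names are taken from the Inputs table below, but all values are illustrative assumptions:

```hcl
# terraform.tfvars -- illustrative values only
app_name        = "eks"
app_namespace   = "demo"       # hypothetical namespace tag
tfenv           = "dev"
aws_region      = "eu-west-1"  # example region
cluster_version = "1.21"       # pin explicitly; review release notes before bumping
```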

For Windows users, please read the following doc.

Usage

Provisioning Sequence

  1. Once AWS credentials (see notes below) are set up on your local machine, it is recommended to use the docker-compose command to initialise Terraform state within the context of GitLab. However, you can also manage Terraform state in other ways. An example of initialising Terraform with a GitLab-managed state:
export TF_ADDRESS=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${TF_VAR_app_name}-${TF_VAR_app_namespace}-${TF_VAR_tfenv} && \ 
terraform init \
  -backend-config="address=${TF_ADDRESS}" \
  -backend-config="lock_address=${TF_ADDRESS}/lock" \
  -backend-config="unlock_address=${TF_ADDRESS}/lock" \
  -backend-config="username=${TF_VAR_gitlab_username}" \
  -backend-config="password=${TF_VAR_gitlab_token}" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE" \
  -backend-config="retry_wait_min=5"
  2. terraform plan
  3. terraform apply
  4. terraform destroy
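Once the backend is initialised, the module can be invoked like any other Terraform module. The sketch below is a minimal invocation covering the required inputs from the Inputs table; the source reference and all values are illustrative assumptions, not taken from this repository:

```hcl
module "eks_base_stack" {
  # hypothetical source reference -- substitute your registry or git source
  source = "git::https://gitlab.com/example/infrastructure-terraform-eks.git"

  app_name             = "eks"
  app_namespace        = "demo"
  tfenv                = "dev"
  aws_region           = "eu-west-1"
  aws_secondary_region = "eu-central-1"
  billingcustomer      = "example-cost-center"
  tech_email           = "ops@example.com"
  cluster_version      = "1.21"

  # Domain root where all Kubernetes systems are orchestrated
  cluster_root_domain = {
    create = true
    name   = "dev.example.com"
  }

  # Infrastructure OAuth settings (required inputs)
  google_authDomain   = "example.com"
  google_clientID     = "example-client-id"
  google_clientSecret = "example-client-secret"
}
```

All other inputs fall back to the defaults documented in the Inputs section.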

Other documentation

Doc generation

Code formatting and documentation for variables and outputs is generated using pre-commit-terraform hooks which uses terraform-docs.

Follow these instructions to install pre-commit locally.

And install terraform-docs with go get github.com/segmentio/terraform-docs or brew install terraform-docs.

Contributing

Report issues, questions, and feature requests in the issues section.

Full contributing guidelines are covered here.

Change log

  • The changelog captures all important release notes from version 1.1.17 onward.

Authors

Created by Aaron Baideme - [email protected]

Supported by Ronel Cartas - [email protected]

License

MIT Licensed. See LICENSE for full details.

Requirements

Name Version
terraform >= 1.1
aws ~> 4.5
gitlab ~> 3.4
helm ~> 2.0
kubectl ~> 1.14.0
kubernetes ~> 2.11.0
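The version constraints above correspond to a terraform block along these lines — a sketch reconstructed from the table, with provider source addresses assumed from the usual registry namespaces rather than copied from the module source:

```hcl
terraform {
  required_version = ">= 1.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.5"
    }
    gitlab = {
      source  = "gitlabhq/gitlab"
      version = "~> 3.4"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.11.0"
    }
  }
}
```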

Providers

Name Version
aws ~> 4.5
aws.secondary ~> 4.5
kubectl ~> 1.14.0
kubernetes ~> 2.11.0
local n/a
random n/a

Modules

Name Source Version
argocd ./provisioning/kubernetes/argocd n/a
aws-cluster-autoscaler ./aws-cluster-autoscaler n/a
aws-support ./aws-support n/a
certmanager ./provisioning/kubernetes/certmanager n/a
consul ./provisioning/kubernetes/hashicorp-consul n/a
eks terraform-aws-modules/eks/aws ~> 18.23.0
eks-vpc terraform-aws-modules/vpc/aws ~> 3.14
eks-vpc-endpoints terraform-aws-modules/vpc/aws//modules/vpc-endpoints ~> 3.14
eks_managed_node_group terraform-aws-modules/eks/aws//modules/eks-managed-node-group ~> 18.23.0
elastic-stack ./provisioning/kubernetes/elastic-stack n/a
gitlab-k8s-agent ./provisioning/kubernetes/gitlab-kubernetes-agent n/a
kubernetes-dashboard ./provisioning/kubernetes/kubernetes-dashboard n/a
metrics-server ./provisioning/kubernetes/metrics-server n/a
monitoring-stack ./provisioning/kubernetes/monitoring n/a
nginx-controller-ingress ./provisioning/kubernetes/nginx-controller n/a
stakater-reloader ./provisioning/kubernetes/stakater-reloader n/a
subnet_addrs hashicorp/subnets/cidr 1.0.0
vault ./provisioning/kubernetes/hashicorp-vault n/a
vault-operator ./provisioning/kubernetes/bonzai-vault-operator n/a
vault-secrets-webhook ./provisioning/kubernetes/bonzai-vault-secrets-webhook n/a

Resources

Name Type
aws_iam_policy.node_additional resource
aws_iam_role_policy_attachment.additional resource
aws_kms_alias.eks resource
aws_kms_key.eks resource
aws_kms_replica_key.eks resource
aws_route53_zone.hosted_zone resource
aws_vpc_endpoint.rds resource
kubectl_manifest.aws-auth resource
kubernetes_namespace.cluster resource
kubernetes_secret.regcred resource
random_integer.cidr_vpc resource
aws_availability_zones.available_azs data source
aws_caller_identity.current data source
aws_eks_cluster.cluster data source
aws_eks_cluster_auth.cluster data source
aws_ssm_parameter.regcred_password data source
aws_ssm_parameter.regcred_username data source
local_file.infrastructure-terraform-eks-version data source

Inputs

Name Description Type Default Required
app_name Application Name string "eks" no
app_namespace Tagged App Namespace any n/a yes
autoscaling_configuration n/a
object({
scale_down_util_threshold = number
skip_nodes_with_local_storage = bool
skip_nodes_with_system_pods = bool
cordon_node_before_term = bool
})
{
"cordon_node_before_term": true,
"scale_down_util_threshold": 0.7,
"skip_nodes_with_local_storage": false,
"skip_nodes_with_system_pods": false
}
no
aws_installations AWS Support Components including Cluster Autoscaler, EBS/EFS Storage Classes, etc.
object({
storage_ebs = optional(object({
eks_irsa_role = bool
gp2 = bool
gp3 = bool
st1 = bool
}))
storage_efs = optional(object({
eks_irsa_role = bool
eks_security_groups = bool
efs = bool
}))
cluster_autoscaler = optional(bool)
route53_external_dns = optional(bool)
kms_secrets_access = optional(bool)
cert_manager = optional(bool)
})
{
"cert_manager": true,
"cluster_autoscaler": true,
"kms_secrets_access": true,
"route53_external_dns": true,
"storage_ebs": {
"eks_irsa_role": true,
"gp2": true,
"gp3": true,
"st1": true
},
"storage_efs": {
"efs": true,
"eks_irsa_role": true,
"eks_security_groups": true
}
}
no
aws_profile AWS Profile string "" no
aws_region AWS Region for all primary configurations any n/a yes
aws_secondary_region Secondary Region for certain redundant AWS components any n/a yes
billingcustomer Which Billingcustomer, aka Cost Center, is responsible for this infra provisioning any n/a yes
cluster_addons An add-on is software that provides supporting operational capabilities to Kubernetes applications, but is not specific to the application: coredns, kube-proxy, vpc-cni any
{
"coredns": {
"resolve_conflicts": "OVERWRITE"
},
"kube-proxy": {},
"vpc-cni": {
"resolve_conflicts": "OVERWRITE"
}
}
no
cluster_endpoint_public_access_cidrs If the cluster endpoint is to be exposed to the public internet, specify CIDRs here that it should be restricted to list(string) [] no
cluster_name Optional override for cluster name instead of standard {name}-{namespace}-{env} string "" no
cluster_root_domain Domain root where all kubernetes systems are orchestrating control
object({
create = optional(bool)
name = string
ingress_records = optional(list(string))
})
n/a yes
cluster_version Kubernetes Cluster Version string "1.21" no
create_launch_template enable launch template on node group bool false no
custom_aws_s3_support_infra Adding the ability to provision additional support infrastructure required for certain EKS Helm chart/App-of-App Components
list(object({
name = string
bucket_acl = string
aws_kms_key_id = optional(string)
lifecycle_rules = any
versioning = bool
k8s_namespace_service_account_access = any
}))
[] no
custom_namespaces Adding namespaces to a default cluster provisioning process list(string) [] no
default_ami_type Default AMI used for node provisioning string "AL2_x86_64" no
default_capacity_type Default capacity configuration used for node provisioning. Valid values: ON_DEMAND, SPOT string "ON_DEMAND" no
eks_managed_node_groups Override default 'single nodegroup, on a private subnet' with more advanced configuration archetypes any [] no
elastic_ip_custom_configuration By default, this module will provision new Elastic IPs for the VPC's NAT Gateways; however, one can also override and specify separate, pre-existing Elastic IPs as needed in order to preserve IPs that are whitelisted; note that the list of EIPs should have the same count as the NAT Gateways created.
object({
enabled = bool
reuse_nat_ips = optional(bool)
external_nat_ip_ids = optional(list(string))
})
{
"enabled": false,
"external_nat_ip_ids": [],
"reuse_nat_ips": false
}
no
gitlab_kubernetes_agent_config Configuration for Gitlab Kubernetes Agent
object({
gitlab_agent_url = string
gitlab_agent_secret = string
})
{
"gitlab_agent_secret": "",
"gitlab_agent_url": ""
}
no
google_authDomain Used for Infrastructure OAuth: Google Auth Domain any n/a yes
google_clientID Used for Infrastructure OAuth: Google Auth Client ID any n/a yes
google_clientSecret Used for Infrastructure OAuth: Google Auth Client Secret any n/a yes
helm_configurations n/a
object({
dashboard = optional(string)
gitlab_runner = optional(string)
vault_consul = optional(object({
consul_values = optional(string)
vault_values = optional(string)
enable_aws_vault_unseal = optional(bool) # If Vault is enabled and deployed, by default, the unseal process is manual; Changing this to true allows for automatic unseal using AWS KMS"
vault_nodeselector = optional(string) # Allow for vault node selectors without extensive reconfiguration of the standard values file
vault_tolerations = optional(string) # Allow for tolerating certain taint on nodes, example usage, string:'NoExecute:we_love_hashicorp:true'
}))
ingress = optional(object({
nginx_values = optional(string)
certmanager_values = optional(string)
}))
elasticstack = optional(string)
monitoring = optional(object({
values = optional(string)
version = optional(string)
}))
argocd = optional(object({
value_file = optional(string)
application_set = optional(list(string))
repository_secrets = optional(list(object({
name = string
url = string
type = string
username = string
password = string
secrets_store = string
})))
credential_templates = optional(list(object({
name = string
url = string
username = string
password = string
secrets_store = string
})))
registry_secrets = optional(list(object({
name = string
username = string
password = string
url = string
email = string
secrets_store = string
})))
generate_plugin_repository_secret = optional(bool)
}))
})
{
"argocd": null,
"dashboard": null,
"elasticstack": null,
"gitlab_runner": null,
"ingress": null,
"monitoring": null,
"vault_consul": null
}
no
helm_installations n/a
object({
dashboard = bool
gitlab_runner = bool
gitlab_k8s_agent = bool
vault_consul = bool
ingress = bool
elasticstack = bool
monitoring = bool
argocd = bool
stakater_reloader = bool
metrics_server = bool
})
{
"argocd": false,
"dashboard": false,
"elasticstack": false,
"gitlab_k8s_agent": false,
"gitlab_runner": false,
"ingress": true,
"metrics_server": true,
"monitoring": true,
"stakater_reloader": false,
"vault_consul": false
}
no
instance_desired_size Count of instances to be spun up within the context of a kubernetes cluster. Minimum: 2 number 2 no
instance_max_size Count of instances to be spun up within the context of a kubernetes cluster. Minimum: 2 number 4 no
instance_min_size Count of instances to be spun up within the context of a kubernetes cluster. Minimum: 2 number 1 no
instance_type AWS Instance Type for provisioning string "c5a.medium" no
ipv6 n/a
object({
enable = bool
assign_ipv6_address_on_creation = bool
private_subnet_assign_ipv6_address_on_creation = bool
public_subnet_assign_ipv6_address_on_creation = bool
})
{
"assign_ipv6_address_on_creation": false,
"enable": false,
"private_subnet_assign_ipv6_address_on_creation": false,
"public_subnet_assign_ipv6_address_on_creation": false
}
no
map_accounts Additional AWS account numbers to add to the aws-auth configmap. list(string) [] no
map_roles Additional IAM roles to add to the aws-auth configmap.
list(object({
rolearn = string
username = string
groups = list(string)
}))
[] no
map_users Additional IAM users to add to the aws-auth configmap.
list(object({
userarn = string
username = string
groups = list(string)
}))
[] no
nat_gateway_custom_configuration Override the default NAT Gateway configuration, which configures a single NAT gateway for non-prod, while one per AZ on tfenv=prod
object({
enabled = bool
enable_nat_gateway = bool
enable_dns_hostnames = bool
single_nat_gateway = bool
one_nat_gateway_per_az = bool
enable_vpn_gateway = bool
propagate_public_route_tables_vgw = bool
})
{
"enable_dns_hostnames": true,
"enable_nat_gateway": true,
"enable_vpn_gateway": false,
"enabled": false,
"one_nat_gateway_per_az": true,
"propagate_public_route_tables_vgw": false,
"single_nat_gateway": false
}
no
node_key_name EKS Node Key Name string "" no
node_public_ip assign public ip on the nodes bool false no
operator_domain_name Domain root of operator cluster string "" no
registry_credentials Create list of registry credential for different namespaces, username and password are fetched from AWS parameter store
list(object({
name = string
namespace = string
docker_username = string
docker_password = string
docker_server = string
docker_email = string
secrets_store = string
}))
[] no
root_vol_size Root Volume Size string "50" no
tech_email Tech Contact E-Mail for services such as LetsEncrypt any n/a yes
tfenv Environment any n/a yes
vpc_flow_logs Manually enable or disable VPC flow logs; Please note, for production, these are enabled by default otherwise they will be disabled; setting a value for this object will override all defaults regardless of environment
object({
enabled = optional(bool)
})
{} no
vpc_subnet_configuration Configure VPC CIDR and relative subnet intervals for generating a VPC. If not specified, default values will be generated.
object({
base_cidr = string
subnet_bit_interval = object({
public = number
private = number
})
autogenerate = optional(bool)
})
{
"autogenerate": true,
"base_cidr": "172.%s.0.0/16",
"subnet_bit_interval": {
"private": 6,
"public": 2
}
}
no
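Complex object inputs such as helm_installations or vpc_subnet_configuration can be overridden in a tfvars file. A hedged sketch based on the defaults documented in the table above — the specific values chosen here are illustrative:

```hcl
# Enable ArgoCD on top of the default helm installations
helm_installations = {
  dashboard         = false
  gitlab_runner     = false
  gitlab_k8s_agent  = false
  vault_consul      = false
  ingress           = true
  elasticstack      = false
  monitoring        = true
  argocd            = true
  stakater_reloader = false
  metrics_server    = true
}

# Use a fixed CIDR instead of the autogenerated "172.%s.0.0/16" pattern
vpc_subnet_configuration = {
  base_cidr    = "172.20.0.0/16"
  autogenerate = false
  subnet_bit_interval = {
    public  = 2
    private = 6
  }
}
```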

Outputs

Name Description
aws_profile n/a
aws_region n/a
base_cidr_block n/a
eks_managed_node_groups n/a
kubernetes-cluster-auth n/a
kubernetes-cluster-certificate-authority-data n/a
kubernetes-cluster-endpoint n/a
kubernetes-cluster-id n/a
private_route_table_ids n/a
private_subnet_ids n/a
private_subnets_cidr_blocks n/a
public_subnet_ids n/a
public_subnets_cidr_blocks n/a
vpc_id n/a