Merge pull request #35 from ixdlabs/terraform-nileeka
Terraform nileeka
kdsuneraavinash authored Dec 1, 2023
2 parents fa7f3ef + 6387583 commit e528954
Showing 11 changed files with 512 additions and 18 deletions.
93 changes: 93 additions & 0 deletions .github/workflows/README.md
@@ -0,0 +1,93 @@
# CD README

## Overview

This repository contains Infrastructure as Code (IaC) for deploying a web application on AWS using Terraform. The deployment process includes provisioning a Virtual Private Cloud (VPC), EC2 instances, and optionally, an Elastic Beanstalk environment. The CI/CD pipeline is set up with GitHub Actions to automate the deployment process.

## Prerequisites

Before running the CI/CD pipeline, make sure to complete the following steps:

1. **Remote State File:**
   - Add the S3 key (path) under which the remote state file will be stored to the `providers.tf` file in the `terraform` folder:
```hcl
terraform {
  backend "s3" {
    bucket = "ixd-terraform-tfstate-bucket"
    key    = "add your key here" # ex: terraform-aws-beanstalk-deployment/terraform.tfstate
    region = "us-east-1"
  }
}
```
2. **S3 Bucket for Media Files:**
- Manually create an S3 bucket to store media files. The bucket name should be in this format: `<your-project-name+env>-media`. Example: "demo-project-dev-media"
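   If you prefer the command line, a minimal sketch with the AWS CLI (bucket name and region are examples):
   ```bash
   # Create the media bucket; the name must match AWS_STORAGE_BUCKET_NAME below
   aws s3 mb s3://demo-project-dev-media --region us-east-1
   ```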
3. **GitHub Secrets:**
- `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`: Add AWS user access keys with relevant permissions to create infrastructure. These should be added as GitHub Secrets.
   - `AWS_S3_ACCESS_KEY_ID` and `AWS_S3_SECRET_ACCESS_KEY`: Create a user with access to the S3 bucket for media files created in step 2. Add the user's access key ID and secret access key as GitHub Secrets.
- `DATABASE_URL`: Create a database and add the database connection string as a GitHub Secret. Example: `postgresql://db_user:<password>@ixd-common-db-server.cycideyygjht.us-east-2.rds.amazonaws.com:5432/<database_name>`
![GitHub repository secrets settings](secrets.png)
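   If you prefer the GitHub CLI to the web UI, the same secrets can be set from a terminal (a sketch; all values are placeholders):
   ```bash
   gh secret set AWS_ACCESS_KEY_ID --body "<access-key-id>"
   gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"
   gh secret set AWS_S3_ACCESS_KEY_ID --body "<s3-access-key-id>"
   gh secret set AWS_S3_SECRET_ACCESS_KEY --body "<s3-secret-access-key>"
   gh secret set DATABASE_URL --body "postgresql://db_user:<password>@<host>:5432/<database_name>"
   ```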
4. **Key Pair:**
- Manually create an EC2 key pair. The name should be in the format `<your-project-name+env>-kp`. Example: "demo-project-dev-kp"
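   A CLI sketch for this step (key name is an example; keep the `.pem` file safe, as the key material is returned only once):
   ```bash
   # Create the key pair and save the private key locally
   aws ec2 create-key-pair --key-name demo-project-dev-kp \
     --query 'KeyMaterial' --output text > demo-project-dev-kp.pem
   chmod 400 demo-project-dev-kp.pem
   ```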
5. Adjust the Terraform variables in `cd.yml` to match your project, such as `PROJECT_NAME`, `ENV`, `AWS_REGION`, and `VPC_CIDR_BLOCK`:
```yaml
# Configure the following environment variables
PROJECT_NAME: "<your-project-name>" # Name of the project ex: "demo-project"
ENV: "dev" # (dev, stag, or prod)
AWS_REGION: "ap-south-1"
VPC_CIDR_BLOCK: "10.0.0.0/16" # CIDR block for the Virtual Private Cloud (VPC)
PUBLIC_SUBNET_1_CIDR_BLOCK: "10.0.1.0/24"
PUBLIC_SUBNET_1_AVAIL_ZONE: "ap-south-1a"
INSTANCE_TYPE: "t2.micro" # Define the instance type (e.g., t2.micro, m5.large)
STACK_NAME: "64bit Amazon Linux 2023 v4.0.6 running Python 3.9"
EC2_KEY_NAME: "<your-project-name+env>-kp" # Name of the key pair created manually ex: "demo-project-dev-kp"
DJANGO_ALLOWED_HOSTS: "*"
DJANGO_SETTINGS_MODULE: "config.settings"

# S3 media bucket
USE_AWS_S3: "true"
AWS_S3_REGION_NAME: "us-east-1"
AWS_STORAGE_BUCKET_NAME: "<your-project-name+env>-media" # ex: "demo-project-dev-media"

# env vars related to deploy_to_eb
EB_PACKAGE_S3_BUCKET_NAME: "<your-project-name+env>-deployments" # ex: "demo-project-dev-deployments"
EB_APPLICATION_NAME: "<your-project-name+env>" # ex: "demo-project-dev"
EB_ENVIRONMENT_NAME: "<your-project-name+env>-env" # ex: "demo-project-dev-env"
DEPLOY_PACKAGE_NAME: "<your-project-name+env>-deployment-${{ github.sha }}.zip"
```
## GitHub Actions
The repository includes three GitHub Actions workflows:
1. **terraform-build:**
- This workflow initializes and applies Terraform configurations. It also includes a destroy step to clean up resources.
2. **push_to_s3:**
- This workflow creates a deployment package (ZIP file) and copies it to the specified S3 bucket for Elastic Beanstalk.
3. **deploy_to_eb:**
- This workflow configures AWS credentials, creates a new Elastic Beanstalk application version, and deploys the application to the specified environment.
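For reference, the deploy step boils down to two AWS CLI calls along these lines (a sketch; names follow the example values above, and the version label is an assumption):
```bash
# Register the uploaded ZIP as a new application version
aws elasticbeanstalk create-application-version \
  --application-name "demo-project-dev" \
  --version-label "ver-${GITHUB_SHA}" \
  --source-bundle S3Bucket="demo-project-dev-deployments",S3Key="demo-project-dev-deployment-${GITHUB_SHA}.zip"

# Point the environment at that version to trigger the deployment
aws elasticbeanstalk update-environment \
  --environment-name "demo-project-dev-env" \
  --version-label "ver-${GITHUB_SHA}"
```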
## Usage
1. Push changes to the `master` branch to trigger the CI/CD pipeline.
2. Monitor the progress of each workflow in the GitHub Actions tab.
3. Ensure that secrets and prerequisites are set up correctly for successful execution.
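The runs can also be followed from a terminal with the GitHub CLI (a sketch):
```bash
gh run list --branch master --limit 5   # recent pipeline runs
gh run watch                            # pick a run and follow it live
```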
## Note
- The deployment assumes the Amazon Linux 2023 platform running Python 3.9 (see `STACK_NAME`).
- Adjust Terraform variables in the workflows according to your project needs.
- The provided workflows are basic examples and may need customization based on specific project requirements.

**Congratulations on setting up your CI/CD pipeline! 🚀**
123 changes: 106 additions & 17 deletions .github/workflows/cd.yml
@@ -1,20 +1,109 @@
-name: Release CD
+name: CD

 env:
-  # TODO: Configure the following environment variables
-  EB_PACKAGE_S3_BUCKET_NAME: "example-dev-deployments"
-  EB_APPLICATION_NAME: "example"
-  EB_ENVIRONMENT_NAME: "example-env"
-  DEPLOY_PACKAGE_NAME: "example-dev-deployment-${{ github.sha }}.zip"
-  AWS_REGION_NAME: "ap-southeast-2"
-
-on:
+  # Configure the following environment variables
+  PROJECT_NAME: "<your-project-name>" # Name of the project ex: "demo-project"
+  ENV: "dev" # (dev, stag, or prod)
+  AWS_REGION: "ap-south-1"
+  VPC_CIDR_BLOCK: "10.0.0.0/16" # CIDR block for the Virtual Private Cloud (VPC)
+  PUBLIC_SUBNET_1_CIDR_BLOCK: "10.0.1.0/24"
+  PUBLIC_SUBNET_1_AVAIL_ZONE: "ap-south-1a"
+  INSTANCE_TYPE: "t2.micro" # Define the instance type (e.g., t2.micro, m5.large)
+  STACK_NAME: "64bit Amazon Linux 2023 v4.0.6 running Python 3.9"
+  EC2_KEY_NAME: "<your-project-name+env>-kp" # Name of the key pair created manually ex: "demo-project-dev-kp"
+  DJANGO_ALLOWED_HOSTS: "*"
+  DJANGO_SETTINGS_MODULE: "config.settings"
+
+  # S3 media bucket
+  USE_AWS_S3: "true"
+  AWS_S3_REGION_NAME: "us-east-1"
+  AWS_STORAGE_BUCKET_NAME: "<your-project-name+env>-media" # ex: "demo-project-dev-media"
+
+  # env vars related to deploy_to_eb
+  EB_PACKAGE_S3_BUCKET_NAME: "<your-project-name+env>-deployments" # ex: "demo-project-dev-deployments"
+  EB_APPLICATION_NAME: "<your-project-name+env>" # ex: "demo-project-dev"
+  EB_ENVIRONMENT_NAME: "<your-project-name+env>-env" # ex: "demo-project-dev-env"
+  DEPLOY_PACKAGE_NAME: "<your-project-name+env>-deployment-${{ github.sha }}.zip"
+
+on:
   push:
-    branches:
-      - release
+    branches:
+      - master

jobs:

  terraform-build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v1

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Initialize Terraform
        working-directory: ./terraform
        run: terraform init

      - name: Apply Terraform Configuration
        working-directory: ./terraform
        run: |
          terraform apply -auto-approve \
            -var "project_name=${{ env.PROJECT_NAME }}" \
            -var "env=${{ env.ENV }}" \
            -var "vpc_cidr_block=${{ env.VPC_CIDR_BLOCK }}" \
            -var "public_subnet_1_cidr_block=${{ env.PUBLIC_SUBNET_1_CIDR_BLOCK }}" \
            -var "public_subnet_1_avail_zone=${{ env.PUBLIC_SUBNET_1_AVAIL_ZONE }}" \
            -var "instance_type=${{ env.INSTANCE_TYPE }}" \
            -var "stack_name=${{ env.STACK_NAME }}" \
            -var "ec2_keypair=${{ env.EC2_KEY_NAME }}" \
            -var "DATABASE_URL=${{ secrets.DATABASE_URL }}" \
            -var "USE_AWS_S3=${{ env.USE_AWS_S3 }}" \
            -var "AWS_S3_ACCESS_KEY_ID=${{ secrets.AWS_S3_ACCESS_KEY_ID }}" \
            -var "AWS_S3_SECRET_ACCESS_KEY=${{ secrets.AWS_S3_SECRET_ACCESS_KEY }}" \
            -var "AWS_STORAGE_BUCKET_NAME=${{ env.AWS_STORAGE_BUCKET_NAME }}" \
            -var "AWS_S3_REGION_NAME=${{ env.AWS_S3_REGION_NAME }}" \
            -var "DJANGO_ALLOWED_HOSTS=${{ env.DJANGO_ALLOWED_HOSTS }}" \
            -var "DJANGO_SETTINGS_MODULE=${{ env.DJANGO_SETTINGS_MODULE }}"

      # - name: Terraform Destroy
      #   working-directory: ./terraform
      #   run: |
      #     terraform destroy -auto-approve \
      #       -var "project_name=${{ env.PROJECT_NAME }}" \
      #       -var "env=${{ env.ENV }}" \
      #       -var "vpc_cidr_block=${{ env.VPC_CIDR_BLOCK }}" \
      #       -var "public_subnet_1_cidr_block=${{ env.PUBLIC_SUBNET_1_CIDR_BLOCK }}" \
      #       -var "public_subnet_1_avail_zone=${{ env.PUBLIC_SUBNET_1_AVAIL_ZONE }}" \
      #       -var "instance_type=${{ env.INSTANCE_TYPE }}" \
      #       -var "stack_name=${{ env.STACK_NAME }}" \
      #       -var "ec2_keypair=${{ env.EC2_KEY_NAME }}" \
      #       -var "DATABASE_URL=${{ secrets.DATABASE_URL }}" \
      #       -var "USE_AWS_S3=${{ env.USE_AWS_S3 }}" \
      #       -var "AWS_S3_ACCESS_KEY_ID=${{ secrets.AWS_S3_ACCESS_KEY_ID }}" \
      #       -var "AWS_S3_SECRET_ACCESS_KEY=${{ secrets.AWS_S3_SECRET_ACCESS_KEY }}" \
      #       -var "AWS_STORAGE_BUCKET_NAME=${{ env.AWS_STORAGE_BUCKET_NAME }}" \
      #       -var "AWS_S3_REGION_NAME=${{ env.AWS_S3_REGION_NAME }}" \
      #       -var "DJANGO_ALLOWED_HOSTS=${{ env.DJANGO_ALLOWED_HOSTS }}" \
      #       -var "DJANGO_SETTINGS_MODULE=${{ env.DJANGO_SETTINGS_MODULE }}"

      - name: Print nice message on completion of Terraform Pipeline
        run: echo "Terraform Pipeline part finished successfully"

  push_to_s3:
    runs-on: ubuntu-latest
    needs: [terraform-build]

    steps:
      - name: Git clone our repository
@@ -26,9 +115,9 @@ jobs:
      - name: Configure my AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
-         aws-access-key-id: ${{ secrets.DEPLOYMENTUSER_ACCESS_KEY }}
-         aws-secret-access-key: ${{ secrets.DEPLOYMENTUSER_SECRET_KEY }}
-         aws-region: ${{ env.AWS_REGION_NAME }}
+         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+         aws-region: ${{ env.AWS_REGION }}

      - name: Copy our Deployment package to S3 bucket
        run: aws s3 cp ${{ env.DEPLOY_PACKAGE_NAME }} s3://${{ env.EB_PACKAGE_S3_BUCKET_NAME }}/
@@ -44,9 +133,9 @@ jobs:
      - name: Configure my AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
-         aws-access-key-id: ${{ secrets.DEPLOYMENTUSER_ACCESS_KEY }}
-         aws-secret-access-key: ${{ secrets.DEPLOYMENTUSER_SECRET_KEY }}
-         aws-region: ${{ env.AWS_REGION_NAME }}
+         aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+         aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+         aws-region: ${{ env.AWS_REGION }}

      - name: Create new ElasticBeanstalk Application Version
        run: |
Binary file added .github/workflows/secrets.png
1 change: 0 additions & 1 deletion Procfile

This file was deleted.

11 changes: 11 additions & 0 deletions terraform/.gitignore
@@ -0,0 +1,11 @@
# Ignore .terraform directory
.terraform/*

# Ignore Terraform state files
*.tfstate
*.tfstate.*

# Ignore sensitive files containing credentials, private keys, etc.
*.pem
*.key
*.tfvars
25 changes: 25 additions & 0 deletions terraform/.terraform.lock.hcl


66 changes: 66 additions & 0 deletions terraform/README.md
@@ -0,0 +1,66 @@
# Terraform AWS Elastic Beanstalk Deployment

This Terraform project sets up an AWS Elastic Beanstalk environment along with a VPC, security group, and S3 bucket for media storage. Follow the steps below to deploy the infrastructure.


## Initial Setup

1. **Add the S3 backend key to the `providers.tf` file**

```hcl
terraform {
  backend "s3" {
    bucket = "ixd-terraform-tfstate-bucket"
    key    = "add your key here" # ex: terraform-aws-beanstalk-deployment/terraform.tfstate
    region = "us-east-1"
  }
}
```
2. **Create S3 Bucket for Media Files**
- Manually create an S3 bucket to store media files. Note the bucket name for use in later steps.
3. **Add User for Media Bucket Access**
- Create an IAM user with S3 access and provide necessary permissions to access the media bucket.
- Retrieve the access key ID and secret access key for this user.
   - Add the relevant environment variables to the `terraform.yml` file: `USE_AWS_S3`, `AWS_STORAGE_BUCKET_NAME`, `AWS_S3_REGION_NAME`, `AWS_S3_ACCESS_KEY_ID`, and `AWS_S3_SECRET_ACCESS_KEY` (see the sketch below).
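   A sketch of this step with the AWS CLI (user and policy names are examples; the policy document is assumed to grant S3 access to the media bucket only):
   ```bash
   aws iam create-user --user-name demo-project-dev-media-user
   aws iam put-user-policy --user-name demo-project-dev-media-user \
     --policy-name media-bucket-access \
     --policy-document file://media-bucket-policy.json
   aws iam create-access-key --user-name demo-project-dev-media-user  # prints the key pair once
   ```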
4. **Add the other variables to the `terraform.yml` file as well**
```hcl
project_name = "your-project-name"
env = "dev"
vpc_cidr_block = "10.0.0.0/16"
public_subnet_1_cidr_block = "10.0.1.0/24"
public_subnet_1_avail_zone = "us-east-1a"
stack_name = "64bit Amazon Linux 2 v5.8.1 running Python 3.8"
instance_type = "t2.micro"
ec2_keypair = "your-key-pair-name"
# Additional variables as needed
DATABASE_URL = "your_database_url"
USE_AWS_S3 = true
AWS_S3_ACCESS_KEY_ID = "your_s3_access_key_id"
AWS_S3_SECRET_ACCESS_KEY = "your_s3_secret_access_key"
AWS_STORAGE_BUCKET_NAME = "your_media_bucket_name"
AWS_S3_REGION_NAME = "your_s3_region"
DJANGO_ALLOWED_HOSTS = "your_allowed_hosts"
DJANGO_SETTINGS_MODULE = "your_django_settings_module"
```
## Terraform Deployment

1. **Initialize Terraform**
   Run the following command to initialize the Terraform working directory:
   ```bash
   terraform init
   ```
2. **Review and Apply Terraform Changes**
   ```bash
   terraform plan
   terraform apply
   ```
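Once `terraform apply` finishes, the environment can be checked from the CLI (a sketch; the environment name follows the variables above):
```bash
aws elasticbeanstalk describe-environments \
  --environment-names demo-project-dev-env \
  --query 'Environments[0].{Status:Status,Health:Health}'
```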
