.github/workflows/ci.yml
- This is the main workflow providing the CI/CD pipeline.
Pre-requirements
- Account on DigitalOcean and an API token with read/write access (See How to Create a Personal Access Token for help).
- Kubernetes cluster on DigitalOcean with nginx configured and mapped to a domain (This example assumes production.myapp.com is mapped to your load balancer. See How To Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm for help.)
- Access and credentials for a Docker registry (This example uses Docker Hub - registry.hub.docker.com - and assumes the provided credentials have access to the repository named myapp).
- A valid Dockerfile at the root of your project with an entrypoint (See Dockerfile for example).
Tip - We also have the platform-digitalocean-tf Terraform project for DOKS setup.
Tip - For more examples and end-to-end setup, see workflows for our MERN Boilerplate.
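If your project does not have one yet, a minimal Dockerfile along these lines would satisfy the last prerequisite (the port and npm scripts here are illustrative assumptions, not requirements of the workflow):
# Dockerfile (minimal sketch - port and commands are assumptions)
FROM node:20.11.1-buster
WORKDIR /app
COPY . .
RUN npm ci
EXPOSE 8080
CMD [ "npm", "start" ]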
Example
# .github/workflows/production.yml
name: production

# define event on which this workflow should run
# @see - https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows
# this is an example of running the workflow when "main" branch is updated
on:
  push:
    branches:
      - main

# required permissions by this workflow
permissions:
  contents: read

jobs:
  ci:
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    with:
      app_name: myapp
      app_env: production
      app_hostname: 'production.myapp.com'
      branch: ${{ github.event.ref }}
      docker_registry: 'registry.hub.docker.com'
      docker_username: '<docker_username>'
      do_cluster_id: '<digital_ocean_cluster_id>'
    secrets:
      docker_password: '<docker_password>'
      do_access_token: '<digital_ocean_access_token>'
# lib/kube/shared/app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: $KUBE_DEPLOYMENT_IMAGE
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: myapp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  labels:
    app: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: $KUBE_INGRESS_HOSTNAME
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: myapp-service
                port:
                  number: 8080
Output
Upon successful invocation, you will have:
- A built image pushed to your repository
- A deployment with service and ingress up and running in your Kubernetes cluster
Pre-requirements
- Account on AWS and credentials (Access key and secret). For required permissions, see iam.tf.
- Kubernetes cluster on AWS with nginx configured and mapped to a domain (This example assumes production.myapp.com is mapped to your load balancer. See Exposing Kubernetes Applications, Part 3: Ingress-Nginx Controller for help.)
- Access and credentials for a Docker registry (This example uses ECR and assumes the provided credentials have access to the ECR repository named myapp).
- A valid Dockerfile at the root of your project with an entrypoint (See Dockerfile for example).
Tip - We also have the platform-aws-tf Terraform project for EKS, ECR and IAM setup.
# .github/workflows/production.yml
name: production

on:
  push:
    branches:
      - main

permissions:
  contents: read

jobs:
  ci:
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    with:
      app_name: myapp
      app_env: production
      app_hostname: 'production.myapp.com'
      aws_cluster_name: '<aws_cluster_name>'
      aws_region: '<aws_region>'
      aws_use_ecr: true
      branch: ${{ github.event.ref }}
    secrets:
      aws_access_key_id: '<aws_access_key_id>'
      aws_secret_access_key: '<aws_access_secret_key>'
Output
Upon successful invocation, you will have:
- A built image pushed to your repository
- A deployment with service and ingress up and running in your Kubernetes cluster
Configuring the Build
- Use the build_context parameter for changing the build context. By default, the workflow builds from the root of the repository.
- Use the build_args parameter for sending build arguments to the build.
- Use the build_secrets parameter for passing sensitive configuration to the build. See Using secrets with GitHub Actions.
- The build step uses the value of the app_name parameter to obtain the repository name.
Example
# .github/workflows/production.yml
name: production

on:
  push:
    branches:
      - main

permissions:
  contents: read

jobs:
  ci:
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    with:
      build_args: |
        CI=true
        NODE_ENV=production
      build_context: apps/backend
    secrets:
      build_secrets: |
        github_token=${{ secrets.GITHUB_TOKEN }}
# apps/backend/Dockerfile
FROM node:20.11.1-buster
ARG CI
ARG NODE_ENV
WORKDIR /app
COPY . .
# expose the "github_token" build secret as GITHUB_TOKEN for this instruction only
# (requires BuildKit; the secret is supplied via build_secrets above)
RUN --mount=type=secret,id=github_token,env=GITHUB_TOKEN npm ci
RUN npm run build
CMD [ "npm", "start" ]
Authenticating with Docker Registry
For authentication, the following parameters are supported:
Name | Type | Description |
---|---|---|
docker_registry | input | Registry to use with docker images. By default it uses Docker Hub (registry.hub.docker.com) |
docker_username | input | Username for authenticating with docker, see docker login. |
docker_password | secret | Password for authenticating with docker, see docker login. |
Using these parameters, the workflow creates a Secret named regcred in the default namespace, which deployments can use to pull built images:
# lib/kube/shared/app.yml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: $KUBE_DEPLOYMENT_IMAGE
Using ECR
If ECR support is required, it can be used in the following ways:
- Using AWS credentials - The recommended method for using ECR: simply set the aws_use_ecr parameter to true. The workflow will then use the provided AWS credentials (aws_access_key_id, aws_secret_access_key) to authenticate with the registry.
- Using Docker credentials - Generating docker credentials against ECR works, but AWS does not allow long-lived tokens to be generated. This method is not recommended.
When using AWS credentials, the EKS role is used directly to pull docker images. See Using Amazon ECR Images with Amazon EKS on how this works.
Caching
- When building - Make sure the Dockerfile is set up in a way that layers can be cached; see Docker build cache. A cache-friendly layout is sketched below.
- This workflow uses the experimental GitHub Cache as the cache store for managing the cache.
- Cache works on multiple levels. By default, builds are always cached per branch. But in case a base is present, it can also use the base branch for obtaining caches. Useful when working with Pull Requests.
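For instance, a Node.js Dockerfile can be ordered so that the dependency layers are reused from cache until the lockfile changes (a minimal sketch, assuming an npm-based project):
# Dockerfile - cache-friendly layering (sketch, assuming an npm project)
FROM node:20.11.1-buster
WORKDIR /app
# copy only the manifests first: this layer and "npm ci" below stay
# cached until package*.json actually changes
COPY package.json package-lock.json ./
RUN npm ci
# copy the source last so frequent code changes do not invalidate
# the dependency layers above
COPY . .
RUN npm run build
CMD [ "npm", "start" ]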
Defining Kubernetes Resources
- Kubernetes resources are applied using kubectl apply -f from yaml files.
- By default, the workflow looks for specification files in the lib/kube directory from the root. This can be changed via the deploy_root parameter.
Directory Structure
The directory defined in the deploy_root parameter needs to have the following structure, which is recognized by the workflow (see the sketch after this list):
- core - Define Kubernetes resources which are to be applied in all environments but should be excluded from cleanup (see - Clean).
- shared - Define Kubernetes resources which are to be applied in all environments and should be included in cleanup.
- <app_env> - Define Kubernetes resources which are only applied based on the value of the app_env input.
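Assuming the default deploy_root of lib/kube, the resulting layout looks like this (the scripts directory holds the optional lifecycle hooks described later in this document):
lib/kube/
├── core/        # always applied, excluded from cleanup
├── shared/      # always applied, included in cleanup
├── production/  # applied only when app_env = production
├── staging/     # applied only when app_env = staging
└── scripts/     # optional pre/post deploy and clean hooks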
Examples
# lib/kube/core/secret.yml
# this will always get applied + excluded from cleanup
apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
data:
  .secret-file: dmFsdWUtMg0KDQo=
# lib/kube/staging/app.yml
# this will only get applied if app_env = staging
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: doks.digitalocean.com/node-pool
                    operator: In
                    values:
                      - cluster-staging-pool
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: $KUBE_DEPLOYMENT_IMAGE
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
# lib/kube/production/app.yml
# this will only get applied if app_env = production
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: doks.digitalocean.com/node-pool
                    operator: In
                    values:
                      - cluster-production-pool
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: $KUBE_DEPLOYMENT_IMAGE
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
# lib/kube/shared/app.yml
# this will always get applied + marked for cleanup
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: myapp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  labels:
    app: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: $KUBE_INGRESS_HOSTNAME
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: myapp-service
                port:
                  number: 8080
Placeholders
The following placeholders are available for use within Kubernetes specifications:
Key | Description | Example |
---|---|---|
DOCKER_REGISTRY | Provided docker registry. See Configuring the Build. | registry.hub.docker.com |
DOCKER_USERNAME | Provided docker username. See Configuring the Build. | myusername |
DOCKER_PASSWORD | Provided docker password. See Configuring the Build. | mypassword |
DOPPLER_MANAGED_SECRET_NAME | Name of the doppler Kubernetes secret that can be used in deployments. See Injecting Configuration using Doppler. | doppler-myapp-production-managed-secret |
KUBE_ROOT | Value for deploy_root input. | lib/kube |
KUBE_NS | Combination of app_name and app_env inputs. | myapp-production |
KUBE_APP | Combination of app_name and app_env inputs with branch ID. See Running on Branches and PullRequest. | myapp-production-6905caad90e785f |
KUBE_ENV | Value for app_env input. | production |
KUBE_DEPLOYMENT_IMAGE | Reference to the built image. Can be directly used with spec.containers.image in Kubernetes deployment specifications. | registry.hub.docker.com/myusername/myapp@sha256:b5a3a... |
KUBE_INGRESS_HOSTNAME | Generated value based on app_hostname input. See Hostname and TLS. | production.myapp.com |
KUBE_DEPLOY_ID | Unique ID based on run_number provided by GitHub. | 230 |
All environment variables exposed by GitHub Actions are also available. See here for the list.
Example
# lib/kube/shared/app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    deploy_id: $KUBE_DEPLOY_ID
    event: $GITHUB_EVENT_NAME
spec:
  # ...
Labeling
On every workflow run, the following labels are applied to every resource that is applied:
- gh/workflow-version - Workflow version being used (example - v2.5)
- gh/workflow-run - Combination of run information provided by GitHub in the format run_id-run_number-run_attempt. See GitHub Context for more info.
- gh/actor - Username of the person who initiated the workflow. Look up GITHUB_ACTOR here for more info.
- gh/commit - The commit SHA that triggered the workflow. Look up GITHUB_SHA here for more info.
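These labels make it easy to trace running resources back to a workflow run. For example, a couple of hypothetical queries (the namespace, resource name and username are illustrative):
# list deployments applied by a given workflow actor
kubectl get deployments -n myapp-production -l gh/actor=octocat
# inspect the run/commit labels on a specific resource
kubectl get deployment myapp-deployment -n myapp-production --show-labels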
Tip - See kubectl apply to see how apply works and how file input is processed.
Hostname and TLS
- The workflow accepts the app_hostname parameter for injecting hostname values.
- Kubernetes specifications can use the KUBE_INGRESS_HOSTNAME placeholder to inject the hostname. See Defining Kubernetes Resources for more info.
Example
# lib/kube/shared/app.yml
# app_hostname = production.myapp.com
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
    - host: $KUBE_INGRESS_HOSTNAME # will get replaced with "production.myapp.com"
Wildcard Support
The input can also accept placeholders to allow environment or branch based deployments. The supported placeholders are:
- {0} - Value for the app_env parameter.
- {1} - Branch ID generated from the provided branch. See Running on Branches and PullRequest for more info.
Example
# lib/kube/shared/app.yml
# app_hostname = {0}.myapp.com
# app_env = staging
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
    - host: $KUBE_INGRESS_HOSTNAME # will get replaced with "staging.myapp.com"
To support wildcards, the following entry can be added as a DNS record, assuming your DNS provider supports wildcards:
Type: A
Hostname: *.myapp.com
Value: <nginx_load_balancer_ip>
TLS
For TLS, this example uses Cert Manager with LetsEncrypt. See this for a setup guide.
# lib/kube/shared/app.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  labels:
    app: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "<cluster_issuer_name>"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - $KUBE_INGRESS_HOSTNAME
      secretName: myapp-cert-key
  rules:
    - host: $KUBE_INGRESS_HOSTNAME
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: myapp-service
                port:
                  number: 8080
- The workflow supports running bash scripts as hooks during the deployment lifecycle.
- Hooks can be defined under the scripts directory residing within the deploy_root parameter (so, for example, if deploy_root = lib/kube, the workflow will look for hooks under the lib/kube/scripts directory).
- If a hook fails with a non-zero exit code, it will fail the workflow run.
- The bash scripts have the same variables available as the Kubernetes resources; see Defining Kubernetes Resources for the list.
Available Hooks
- pre-deploy.sh - Hook to be run before any Kubernetes resources are applied.
- post-deploy.sh - Hook to be run after all Kubernetes resources have been applied.
Example
#!/bin/bash
# lib/kube/scripts/post-deploy.sh
# this script waits for deployment to succeed
# in case status could not be verified, the script would fail, thus failing the workflow run
kubectl rollout status deploy/"$KUBE_APP"-deployment -n "$KUBE_NS"
- The workflow supports running static code analysis using SonarQube.
- Analysis is only run when the sonar_host_url input is provided.
- The analysis waits on the configured Quality Gate to provide results. If the gate fails, it fails the check named analyze.
Parameters
Name | Type | Description |
---|---|---|
sonar_host_url | input | URL to SonarQube instance |
sonar_token | secret | API token for accessing SonarQube instance |
branch | input | Branch from which this workflow was run. See Analysis Modes below. |
analyze_base | input | Base branch for running incremental analysis. See Analysis Modes below. |
pull_request_number | input | Pull request number to be used for providing incremental analysis details. |
Analysis Modes
The SonarQube analysis can run in the following modes:
- Incremental Analysis
  - Analysis run on new code being added, usually on short-lived branches other than main.
  - This analysis is only run if the analyze_base parameter was provided.
  - This analysis also uses pull_request_number, if provided, to report analysis results.
  - See Branch Analysis and Pull Request Analysis for more info.
- Full Analysis
  - Analysis run on the whole codebase, usually on long-lived branches like main.
  - This analysis is only run if the analyze_base parameter was not provided.
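Wiring the analysis in only takes the sonar inputs alongside the usual ones. A sketch for incremental analysis on pull requests (the SonarQube URL, hostname pattern and angle-bracket values are illustrative placeholders):
# .github/workflows/pull_request.yml (sketch)
name: pull_request
on:
  pull_request:
    types: [ opened, synchronize, reopened ]
permissions:
  contents: read
jobs:
  ci:
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    with:
      app_name: myapp
      app_env: preview
      app_hostname: '{1}.preview.myapp.com'
      branch: ${{ github.event.pull_request.head.ref }}
      analyze_base: main
      pull_request_number: ${{ github.event.number }}
      sonar_host_url: 'https://sonarqube.example.com'
    secrets:
      sonar_token: '<sonar_token>'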
- The workflow can also run checks such as test, lint etc. against the built image.
- The checks parameter can be used to enable checks. The input accepts an encoded json array of string values, example - ['npm:lint', 'compose:test', 'compose:e2e'].
- Each entry takes the format scheme:input, where:
  - scheme = npm - Runs the input script via npm, i.e. npm run <input>. Example - npm:lint will run npm run lint against the built docker image.
  - scheme = compose - Runs the compose file named docker-compose.<input>.yml. Useful for running a check with dependencies. Example - compose:test will run docker compose -f docker-compose.test.yml. For this to work as expected, make sure services.app.image is the same as the app_name input. See docker-compose.test.yml for an example.
Generating Coverage Report
- Running checks also supports reporting code coverage on Pull Requests using Code Coverage Summary.
- This also makes sure to fail the check if code coverage falls below the configured thresholds. The default threshold is 60 80. See thresholds.
- To enable code coverage, write the coverage report in Cobertura format to /app/output/coverage.xml and the check will pick it up. Check out docker-compose.test.yml for an example (a sketch follows below).
- If no file is present, this step is simply skipped.
- Ideally, you'd want only one instance of the CI workflow running per branch.
- Concurrency control is in the hands of the caller - the workflow itself does not control it.
- Read Control the concurrency of workflows and jobs for more info.
Examples
# .github/workflows/production.yml
# this workflow only allows a single invocation on the main branch
# this would cancel an existing running workflow if there's a new push
name: production

on:
  push:
    branches:
      - main

jobs:
  ci:
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    concurrency:
      group: ci-production-${{ github.event.ref }}
      cancel-in-progress: true
    with:
      # ...
    secrets:
      # ...
# .github/workflows/pull_request.yml
# this workflow only allows a single invocation on the pull request head branch
# this would cancel an existing running workflow if there's a new push
name: pull_request

on:
  pull_request:
    types: [ opened, synchronize, reopened ]

jobs:
  ci:
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    concurrency:
      group: ci-pull-request-${{ github.event.pull_request.head.ref }}
      cancel-in-progress: true
    with:
      # ...
    secrets:
      # ...
Running on Branches and PullRequest
The workflow's advanced runtime configuration also makes it useful for deploying short-lived deployments for feature branches. Here's a reference deployment specification which utilizes the configuration provided by the workflow, making it suitable for deploying a short-lived app for a branch:
Example
# .github/workflows/preview.yml
name: preview

on:
  pull_request:
    types: [ opened, synchronize, reopened ]

# required permissions by this workflow
permissions:
  contents: read
  pull-requests: write

jobs:
  preview:
    # only run when updating an 'Open' PR
    if: github.event.pull_request.state == 'open'
    uses: jalantechnologies/github-ci/.github/workflows/ci.yml@<version>
    with:
      app_name: myapp
      app_env: preview
      app_hostname: '{1}.preview.myapp.com'
      branch: ${{ github.event.pull_request.head.ref }}
      docker_registry: 'registry.hub.docker.com'
      docker_username: '<docker_username>'
      pull_request_number: ${{ github.event.number }}
      do_cluster_id: '<digital_ocean_cluster_id>'
    secrets:
      docker_password: '<docker_password>'
      do_access_token: '<digital_ocean_access_token>'
# lib/kube/shared/app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $KUBE_APP-deployment
  namespace: $KUBE_NS
  labels:
    app: $KUBE_APP
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $KUBE_APP
  template:
    metadata:
      labels:
        app: $KUBE_APP
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: $KUBE_APP
          image: $KUBE_DEPLOYMENT_IMAGE
          imagePullPolicy: Always
          resources:
            requests:
              memory: '256Mi'
            limits:
              memory: '512Mi'
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: $KUBE_APP-service
  namespace: $KUBE_NS
  labels:
    app: $KUBE_APP
spec:
  type: NodePort
  ports:
    - port: 8080
  selector:
    app: $KUBE_APP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $KUBE_APP-ingress
  namespace: $KUBE_NS
  labels:
    app: $KUBE_APP
spec:
  ingressClassName: nginx
  rules:
    - host: $KUBE_INGRESS_HOSTNAME
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: $KUBE_APP-service
                port:
                  number: 8080
---
- KUBE_NS is available in app_name-app_env format. This is a Kubernetes namespace created by the workflow, useful for grouping resources together per environment and application.
- KUBE_APP is available in app_name-app_env-branch_id format, making it useful for defining resources for each branch that is provided. See Branch ID below.
- KUBE_INGRESS_HOSTNAME is available in branch_id.preview.myapp.com format, making it useful as the hostname for each branch that is provided. See Branch ID below.
Branch ID
- This is a unique ID generated by the workflow for every branch.
- The branch name is obtained from the branch parameter.
- The format is based on the sha1sum output.
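The exact derivation is internal to the workflow, but conceptually it works along these lines (the truncation length is an assumption inferred from the example IDs in this document, e.g. 6905caad90e785f):
# conceptual sketch - hash the branch name, keep a short prefix of the digest
echo -n "<branch_name>" | sha1sum | cut -c1-15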
Pull Request
- When building against a Pull Request, the workflow also supports posting a link to the deployed app as a GitHub comment.
- For this to work, permissions.pull-requests with write permission and the pull_request_number parameter are required.
- To turn this off, set the deploy_annotate_pr parameter to false.
Example:
Deployment (boilerplate-mern) is available at - https://a191a6e8889fc35.preview.platform.jalantechnologies.com
Injecting Configuration using Doppler
- The workflow supports injecting application configuration via Doppler.
- For this integration to work, install the Doppler Kubernetes Operator.
- The workflow sets up the DopplerSecret CRD based on the provided app_name and app_env, using the doppler_token:
  - app_name - Doppler project name
  - app_env - Doppler environment name
  - doppler_token - Service token authorized to read the obtained configuration
- See the example Deployment on how to use secrets with the deployment.
Example
On Doppler, say you have a project named myapp and an environment named production. You can inject configuration via:
# lib/kube/shared/app.yml
# app_name = "myapp"
# app_env = "production"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: $KUBE_DEPLOYMENT_IMAGE
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: $DOPPLER_MANAGED_SECRET_NAME # << name of the secret provided by workflow
Live Reload
- The integration also supports reloading dependent deployments whenever configuration is updated on Doppler.
- Simply add the annotation secrets.doppler.com/reload with value true to enable this, as shown below.
- See Automatic Redeployments for more info.
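On the example Deployment above, that looks like the following (a sketch; per the Doppler operator documentation the annotation goes under the Deployment's metadata):
# lib/kube/shared/app.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  annotations:
    secrets.doppler.com/reload: 'true'
spec:
  # ... rest of the deployment as shown above ...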
Here's the complete reference of input/output supported by the workflow:
Inputs
Name | Type | Description | Required | Default |
---|---|---|---|---|
app_name | input | Application name based on which docker repository, doppler project and kube namespace would be selected | Yes | - |
app_env | input | Application environment based on which doppler configuration, kube namespace and kube spec files would be selected | Yes | - |
app_hostname | input | Application hostname where application would be deployed. Available placeholders - {0} Provided application environment, {1} Branch ID generated from provided branch. | Yes | - |
branch | input | Branch from which this workflow was run | Yes | - |
analyze_base | input | Base branch against which SonarQube will run code analysis | No | - |
aws_cluster_name | input | Kubernetes cluster name if deploying on EKS | No | - |
aws_region | input | Kubernetes cluster region if deploying on EKS | No | us-east-1 |
aws_use_ecr | input | Whether or not ECR login is required. If enabled, provided AWS credentials will be used to authenticate with the docker registry. | No | false |
build_args | input | Build arguments provided to the docker daemon when building docker image | No | - |
build_context | input | Build context to use with docker. Default to checked out Git directory. | No | . |
checks | input | Checks to run. Provide here list of checks in scheme:input format where scheme can be - npm, compose. | No | - |
deploy_root | input | Directory where deployment would look for kubernetes specification files | No | lib/kube |
deploy_annotate_pr | input | Enable pull request annotation with deployment URL. Requires pull_request_number to work. | No | true |
docker_registry | input | Docker registry where built images will be pushed. By default uses Docker Hub. | No | registry.hub.docker.com |
docker_username | input | Username for authenticating with provided Docker registry | No | - |
do_cluster_id | input | Kubernetes cluster ID on DigitalOcean if deploying on DOKS | No | - |
pull_request_number | input | Pull request number when running the workflow against a pull request | No | - |
aws_access_key_id | secret | Access key ID for AWS if deploying on EKS | No | - |
aws_secret_access_key | secret | Access key Secret for AWS if deploying on EKS | No | - |
build_secrets | secret | Build secrets provided to the docker daemon when building docker image | No | - |
docker_password | secret | Password for authenticating with provided Docker registry | No | - |
do_access_token | secret | DigitalOcean access token if deploying on DOKS | No | - |
doppler_token | secret | Doppler token for accessing environment variables | No | - |
sonar_token | secret | Authentication token for SonarQube | No | - |
Outputs
Name | Description | Example |
---|---|---|
deploy_url | URL where application was deployed | https://production.myapp.com |
.github/workflows/clean.yml
- This is the cleanup workflow, which takes care of removing deployed resources. It is typically useful for cleaning up short-lived environments.
See Running on Branches and PullRequest for setup instructions.
How this works
- The cleanup workflow runs kubectl delete against resources found in the deploy_root/shared and deploy_root/<app_env> directories.
- Resources are matched by name, so it's important to follow the naming convention of including the Branch ID, as defined in Running on Branches and PullRequest.
# .github/workflows/clean.yml
# this workflow takes care of cleaning up resources upon PR merge
# the cleanup workflow makes sure to only remove resources associated with the branch
name: clean

on:
  pull_request:
    types: [ closed ]

jobs:
  clean:
    uses: jalantechnologies/github-ci/.github/workflows/clean.yml@<version>
    with:
      app_name: myapp
      app_env: preview
      branch: ${{ github.event.pull_request.head.ref }}
      docker_registry: 'registry.hub.docker.com'
      docker_username: '<docker_username>'
      do_cluster_id: '<digital_ocean_cluster_id>'
    secrets:
      docker_password: '<docker_password>'
      do_access_token: '<digital_ocean_access_token>'
# lib/kube/shared/app.yml
# notice the name - it follows the convention of using KUBE_APP which includes the Branch ID
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $KUBE_APP-deployment
  namespace: $KUBE_NS
  labels:
    app: $KUBE_APP
spec:
  # ...
# lib/kube/preview/app.yml
# same resource following the same convention but in a different app_env specific directory
# this will get deleted as well if app_env = preview
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $KUBE_APP-deployment
  namespace: $KUBE_NS
  labels:
    app: $KUBE_APP
spec:
  # ...
# .github/workflows/clean.yml
name: clean

on:
  pull_request:
    types: [ closed ]

permissions:
  contents: read
  pull-requests: write

jobs:
  clean:
    uses: jalantechnologies/github-ci/.github/workflows/clean.yml@<version>
    with:
      app_name: myapp
      app_env: preview
      aws_cluster_name: '<aws_cluster_name>'
      aws_region: '<aws_region>'
      branch: ${{ github.event.pull_request.head.ref }}
    secrets:
      aws_access_key_id: '<aws_access_key_id>'
      aws_secret_access_key: '<aws_access_secret_key>'
- The workflow supports running bash scripts as hooks during the cleanup lifecycle.
- Hooks can be defined under the scripts directory residing within the deploy_root parameter (so, for example, if deploy_root = lib/kube, the workflow will look for hooks under the lib/kube/scripts directory).
- If a hook fails with a non-zero exit code, it will fail the workflow run.
- The bash scripts have the same variables available as the Kubernetes specifications; see the list of Placeholders below.
Available Hooks
- pre-clean.sh - Hook to be run before any Kubernetes resources are deleted.
- post-clean.sh - Hook to be run after all Kubernetes resources have been deleted.
Example
#!/bin/bash
# lib/kube/scripts/post-clean.sh
# this script deletes a custom resource
kubectl delete pvc mycustompvc -n "$KUBE_NS"
Placeholders
The following placeholders are available for use within scripts and Kubernetes specifications:
Key | Description | Example |
---|---|---|
KUBE_ROOT | Value for deploy_root input. | lib/kube |
KUBE_NS | Combination of app_name and app_env inputs. | myapp-production |
KUBE_APP | Combination of app_name and app_env inputs with branch ID. See Running on Branches and PullRequest. | myapp-production-6905caad90e785f |
KUBE_ENV | Value for app_env input. | production |
Here's the complete reference of input/output supported by the workflow:
Inputs
Name | Type | Description | Required | Default |
---|---|---|---|---|
app_name | input | Application name based on which docker repository and kube namespace would be selected | Yes | - |
app_env | input | Application environment based on which kube namespace and kube spec files would be selected | Yes | - |
branch | input | Branch from which this workflow was run | Yes | - |
aws_cluster_name | input | Kubernetes cluster name if deploying on EKS | No | - |
aws_region | input | Kubernetes cluster region if deploying on EKS | No | us-east-1 |
deploy_root | input | Directory where deployment would look for kubernetes specification files | No | lib/kube |
docker_registry | input | Docker registry where built images were pushed. By default uses Docker Hub. | No | registry.hub.docker.com |
docker_username | input | Username for authenticating with provided Docker registry | No | - |
do_cluster_id | input | Kubernetes cluster ID on DigitalOcean if deploying on DOKS | No | - |
aws_access_key_id | secret | Access key ID for AWS if deploying on EKS | No | - |
aws_secret_access_key | secret | Access key Secret for AWS if deploying on EKS | No | - |
docker_password | secret | Password for authenticating with provided Docker registry | No | - |
do_access_token | secret | DigitalOcean access token if deploying on DOKS | No | - |
Outputs
This workflow does not have any outputs