In this project, I created an end-to-end, production-like CI/CD pipeline, keeping security best practices and DevSecOps principles in mind, and used Git, GitHub, Jenkins, Maven, JUnit, SonarQube, JFrog Artifactory, Docker, Trivy, AWS S3, Docker Hub, GitHub CLI, EKS, ArgoCD, Prometheus, Grafana, Slack, and HashiCorp Vault to achieve the goal.
- When a commit event occurs in the application-code GitHub repo, the GitHub webhook notifies Jenkins and Jenkins starts the build.
- Maven builds the code. If the build fails, the whole pipeline fails and Jenkins notifies the user via Slack; if the build succeeds, the pipeline moves on.
- JUnit runs the unit tests. If the application passes the test cases, the pipeline moves to the next step; otherwise the whole pipeline fails and Jenkins notifies the user that the build failed.
- The SonarQube scanner scans the code and sends the report to the SonarQube server, where the report goes through the quality gate and the result appears on the web dashboard.
- In the quality gate we define conditions or rules, such as how many bugs, vulnerabilities, or code smells may be present in the code. We also create a webhook to send the quality gate status back to Jenkins. If the quality gate fails, the whole pipeline fails and Jenkins notifies the user that the build failed.
- After the quality gate passes, the artifacts are sent to JFrog Artifactory. If the artifacts are pushed successfully, the pipeline moves to the next stage; otherwise the whole pipeline fails and Jenkins notifies the user that the build failed.
- After the artifacts are pushed to Artifactory, Docker builds the Docker image. If the Docker build fails, the whole pipeline fails and Jenkins notifies the user that the build failed.
- Trivy scans the Docker image. If it finds any vulnerability, the whole pipeline fails, the generated report is sent to S3 for future review, and Jenkins notifies the user that the build failed.
- After the Trivy scan, the Docker image is pushed to Docker Hub. If the push fails, the pipeline fails and Jenkins notifies the user that the build failed.
- After the Docker push, Jenkins clones the Kubernetes manifest repo from the feature branch; if the repo is already present, it only pulls the changes. If Jenkins is unable to clone the repo, the whole pipeline fails and Jenkins notifies the user that the build failed.
- After cloning the repo, Jenkins updates the image tag in the deployment manifest. If Jenkins is unable to update the image tag, the whole pipeline fails and Jenkins notifies the user that the build failed.
- After updating the image tag, Jenkins commits the change and pushes it to the feature branch. If Jenkins is unable to push the changes, the whole pipeline fails and Jenkins notifies the user that the build failed.
- After pushing the changes to the feature branch, Jenkins creates a pull request against the main branch. If Jenkins is unable to create the pull request, the whole pipeline fails and Jenkins notifies the user that the build failed.
- After the pull request is created, a senior team member reviews and merges it.
- After the feature branch is merged into the main branch, ArgoCD pulls the changes and deploys the application to Kubernetes.
- JDK
- Git
- GitHub
- GitHub CLI
- Jenkins
- SonarQube
- JFrog Artifactory
- Docker
- Trivy
- AWS account
- AWS CLI
- Docker Hub account
- Terraform
- EKS cluster
- kubectl
- ArgoCD
- Helm
- Prometheus & Grafana
- HashiCorp Vault
- Slack
- 2 t2.medium (Ubuntu) EC2 instances – one for the SonarQube and HashiCorp Vault servers, another for JFrog Artifactory
- 1 t2.large (Ubuntu) EC2 instance – for Jenkins, Docker, Trivy, AWS CLI, GitHub CLI, and Terraform
- EKS cluster with t3.medium nodes
Push all the web application code files to GitHub.
Stage-02: Install Jenkins, Docker, Trivy, AWS CLI, GitHub CLI, Terraform (t2.large node1 – Jenkins server)
Jenkins installation prerequisites: https://www.jenkins.io/doc/book/installing/linux/
- An installation guide is available here: https://github.com/praveensirvi1212/DevSecOps-project/blob/main/Jenkins_installation.md
- After installation, install the suggested plugins
- Open the Jenkins dashboard and install the required plugins – SonarQube Scanner, Quality Gates, Artifactory, HashiCorp Vault, Slack, Open Blue Ocean
- go to Manage Jenkins > Manage Plugins > search for the plugins > Download now and install after restart
- Install docker using this command
sudo apt install docker.io
- add the current user and Jenkins user into the docker group
sudo usermod -aG docker $USER
sudo usermod -aG docker jenkins
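Note: group membership changes only apply to new sessions, so Jenkins usually needs a restart (and your shell user a re-login) before it can talk to Docker:
sudo systemctl restart jenkins   # pick up the new docker group membership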
- Install Trivy using these commands
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
- install aws cli using these commands
sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
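To verify the installation and set up the credentials used later for the EKS and S3 steps (aws configure prompts for the access key, secret key, default region, and output format):
aws --version
aws configure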
- install GitHub cli using these commands
type -p curl >/dev/null || (sudo apt update && sudo apt install curl -y)
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg \
&& sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
- install terraform using these commands
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install terraform
- install Docker on the SonarQube server
sudo apt update
sudo apt install docker.io
- create a Docker container to run SonarQube
sudo docker run -d -p 9000:9000 --name sonarqube sonarqube
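You can confirm the container started and watch the logs until startup completes before opening the UI:
sudo docker ps                  # the sonarqube container should be listed as Up
sudo docker logs -f sonarqube   # wait until the logs report that SonarQube is operational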
HashiCorp Vault is a secret-management tool specifically designed to control access to sensitive credentials in a low-trust environment.
- Install Vault using these commands
sudo curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update
sudo apt install vault -y
- install Docker
sudo apt update
sudo apt install docker.io
- install JFrog Artifactory
sudo docker pull docker.bintray.io/jfrog/artifactory-oss:latest
sudo mkdir -p /jfrog/artifactory
sudo chown -R 1030 /jfrog/
sudo docker run --name artifactory -d -p 8081:8081 -p 8082:8082 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
docker.bintray.io/jfrog/artifactory-oss:latest
Slack is a workplace communication tool – “a single place for messaging, tools and files.”
Install Slack from the official Slack website: https://slack.com/intl/en-in/downloads/linux
To create the EKS cluster using Terraform, I have put the Terraform code here: https://github.com/praveensirvi1212/medicure-project/tree/master/eks_module
Suggestion: create the EKS cluster after the Jenkins server is fully configured, i.e., once Jenkins is able to create pull requests in the manifest repo.
Note: I installed Terraform on the Jenkins server and configured the AWS CLI there to create the EKS cluster, but you can create the cluster from your local system; for that, install Terraform and the AWS CLI locally.
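A minimal sketch of the Terraform workflow for that module (the repo path is taken from the link above; review the plan output before applying):
git clone https://github.com/praveensirvi1212/medicure-project.git
cd medicure-project/eks_module
terraform init    # download the required providers and modules
terraform plan    # review the resources that will be created
terraform apply   # create the EKS cluster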
Run this command after the EKS cluster is created to update or configure the .kube/config file:
aws eks --region your-region-name update-kubeconfig --name cluster-name
I am assuming that you already have a Kubernetes cluster running.
- use these commands to install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- edit the argocd-server service to type NodePort to access the ArgoCD UI
kubectl -n argocd edit svc argocd-server
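Alternatively, the service type can be changed with a one-line patch, equivalent to the interactive edit above:
kubectl -n argocd patch svc argocd-server -p '{"spec": {"type": "NodePort"}}'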
- use these commands to install Helm
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
- use Helm to install Prometheus and Grafana
helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm search repo prometheus-community
kubectl create namespace prometheus
helm install stable prometheus-community/kube-prometheus-stack -n prometheus
kubectl get pods -n prometheus
kubectl get svc -n prometheus
#in order to make Prometheus and Grafana available outside the cluster, use LoadBalancer or NodePort instead of ClusterIP.
#Edit Prometheus Service
kubectl edit svc stable-kube-prometheus-sta-prometheus -n prometheus
#Edit Grafana Service
kubectl edit svc stable-grafana -n prometheus
kubectl get svc -n prometheus
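The same change can be made non-interactively with kubectl patch (a sketch; the service names match the Helm release created above):
kubectl -n prometheus patch svc stable-kube-prometheus-sta-prometheus -p '{"spec": {"type": "LoadBalancer"}}'
kubectl -n prometheus patch svc stable-grafana -p '{"spec": {"type": "LoadBalancer"}}'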
#Access the Grafana UI in the browser using the load balancer or node port
Username: admin
Password: prom-operator
- go to Manage Jenkins > Configure Tools > Maven > give it a name and select Install automatically
- go to Manage Jenkins > Configure Tools > SonarQube Scanner > give it a name and select Install automatically
I am assuming that your Vault server is installed and running.
- open the /etc/vault.d/vault.hcl file with vi or nano and replace its entire content with this:
storage "raft" {
path = "/opt/vault/data"
node_id = "raft_node_1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
}
api_addr = "http://127.0.0.1:8200"
cluster_addr = "https://127.0.0.1:8201"
ui = true
sudo systemctl stop vault
sudo systemctl start vault
export VAULT_ADDR='http://127.0.0.1:8200'
vault operator init
Copy the unseal keys and the initial root token and save them somewhere for later use.
vault operator unseal
Paste the first unseal key here
vault operator unseal
Paste the second unseal key here
vault operator unseal
Paste the third unseal key here
vault login <Initial_Root_Token>
<Initial_Root_Token> is found in the output of vault operator init
vault auth enable approle
vault write auth/approle/role/jenkins-role token_num_uses=0 secret_id_num_uses=0 policies="jenkins"
This AppRole will be used for the Jenkins integration.
vault read auth/approle/role/jenkins-role/role-id
Copy the role_id and store it somewhere.
vault write -f auth/approle/role/jenkins-role/secret-id
Copy the secret_id and store it somewhere.
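As an optional sanity check before wiring these into Jenkins, you can verify that the AppRole login works (the placeholders are the values you just copied):
vault write auth/approle/login role_id="<role_id>" secret_id="<secret_id>"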
- Access the SonarQube UI using the public IP of the server and port 9000
- default username/password: admin / admin
- choose “Use the global settings”
- create a project manually, give some name to project, project key
- click on setup
- click on other ci
- give the token a name and click on Generate; save it for later use
- click on global
- select the project type, in this case, I used Maven
- copy the whole command and save it somewhere
- Click on Quality Gates
- create a new Quality Gate according to your conditions
- click on Projects > click on all > select your project
- set as default
- click on Administration
- click on Configuration > click on webhooks
- create a webhook > Give some name
- for the URL, use http://jenkins-server-url-with-port/sonarqube-webhook/
- leave the secret blank
- click on create
Note: if this webhook does not work, recreate it after integrating SonarQube with Jenkins.
- access the UI with public IP and port 8081
- use the default username/password: admin / password
- update the password
- create a Maven repo
- now go to Administration > User management > Create a new user
- now go to Repositories > Create > Local > give it a name like my-local-repo
- log in to your AWS account
- search for S3
- create a bucket with a unique name
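Alternatively, the bucket can be created from the AWS CLI (the bucket name and region below are placeholders):
aws s3 mb s3://your-unique-bucket-name --region ap-south-1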
If you already have a dockerhub account then no need to create another
- go to dockerhub official website
- click on sign up
- fill in the details and sign up
- login into dockerhub
Note: we can create a token from Docker Hub to integrate with Jenkins, but in this case I am using the Docker username and password.
- go to this site https://slack.com/intl/en-in
- sign up with Google
- create a workspace
- just give a name
- you can skip add team member
- give a name to the team channel (e.g., cicd-pipeline)
- now go to this site https://slack-t8s2905.slack.com/intl/en-in/apps
- login into your workspace (if need)
- now click on Get Essential Apps
- search Jenkins in search bar > click on it
- click on Add to Slack
- select channel name
- click on Add Jenkins CI Integration
- go down to step 3 and copy Integration Token Credential ID and save it somewhere
- click on Save settings
- Go to this site https://api.slack.com/apps
- click on Create New App
- select From scratch
- give some app name
- pick your workspace
- click on Create App
- go to OAuth & Permission
- go to Scope > Bot Token Scope
- search in the search bar and select chat:write and chat:write.customize
- now go up to OAuth Tokens for Your Workspace
- click on Install to workspace
- allow your app to access your workspace
- now copy Bot user OAuth Token and save it somewhere
- now open Slack; you will find your app under Apps
- now create a channel - click on channels > create channel > give some name to channel > create
- now in the message bar type @your-app-name and click the send icon > click on Add to Channel
Run all these commands on the Vault server.
- enable secrets path
vault secrets enable -path=secrets kv
- write a secret to the secrets path
vault write secrets/creds/docker username=abcd password=xyz
Likewise, we can store all the credentials in the Vault server. I have stored only the Docker credentials, but you can store all your credentials like this.
- create a Jenkins policy file named jenkins-policy.hcl with vi or nano:
path "secrets/creds/*" {
capabilities = ["read"]
}
The policy is created with a * so the Vault server can read credentials from every path under secrets/creds/; there is no need to create a separate policy for each path like secrets/creds/docker, secrets/creds/slack, etc.
- run this command to create a policy
vault policy write jenkins jenkins-policy.hcl
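A couple of optional checks to confirm the policy and the secret are in place:
vault policy read jenkins          # show the policy content
vault read secrets/creds/docker    # confirm the secret is readable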
- go to Jenkins > Manage Jenkins > Manage Credentials > System > Add credentials > Vault App Role Credential > paste the role-id and secret-id (we created these in Vault with the AppRole) and save and apply.
- now go to Manage Jenkins > Configure System / System > search for the Vault plugin
- give the URL of the vault server
- attach the credentials we created
- click on Advanced
- select k/v engine as 1
- click on skip ssl verification
- Apply and Save
- go to Manage Jenkins > Manage Credentials > System > Add credentials > Secret text > paste the token we created in SonarQube and save and apply.
- now go to Manage Jenkins > Configure System / System > search for SonarQube servers
- enable the environment variables
- write the name of the SonarQube server
- paste the url of SonarQube server
- select the credential
- Apply and Save
- go to Manage Jenkins > Manage Credentials > System > Add credentials > Username and password > enter the username and password we created in JFrog Artifactory and save and apply.
- now go to Manage Jenkins > Configure System / System > search for JFrog
- give the instance ID as the Artifactory name
- JFrog Platform URL – the Artifactory URL, e.g. http://localhost:8082
- click on Advanced
- JFrog Artifactory URL - http://localhost:8082/artifactory
- JFrog Distribution URL - http://localhost:8082/distribution
- Default Deployer Credentials – give the username and password of the Artifactory user (not the admin user)
- Apply and save
- for S3 integration we will configure the AWS CLI in the pipeline itself
- create credentials for the AWS CLI; store both the access key ID and the secret access key as secret text
- go to Manage Jenkins > Manage Credentials > System > Add credentials > Vault Username-Password Credential
- namespace – leave blank
- Prefix Path – leave blank
- Path – secrets/creds/docker
- Username Key- username
- Password key – password
- k/v engine – 1
- id – give some id like docker-cred
- Description - give some description
- click on Test Vault Secrets retrieval > it should report that the secrets were retrieved successfully; otherwise reconfigure the Vault server settings in Jenkins
- Apply and Save
- go to Manage Jenkins > Manage Credentials > System > Add credentials > Secret text > give the credential a name, paste the token we created in the Slack app, and save and apply.
- now go to Manage Jenkins > Configure System / System > search for Slack
- Workspace – your workspace name (created after logging in to Slack)
- Credential – attach the Slack token
- Default channel name – the channel name we created during the Slack setup, e.g. #cicd-pipeline
- Apply and save
- go to GitHub > go to application code repo > settings
- go to webhook > Add webhook
- Payload URL - http://jenkins-server-public-ip-with-port/github-webhook/
- click on Add Webhook
- access the ArgoCD UI using the node public IP and node port
- use the username admin
- for password run this command
kubectl -n argocd get secret argocd-initial-admin-secret -o yaml
- copy the password and decode it using
echo "copied-password" | base64 -d
- copy the decoded password and log in to ArgoCD (a combined one-liner for these two steps is shown after this list)
- go to User Info – update password
- now go to Application
- click on New Application
- give app name
- choose the Project Name as default
- SYNC Policy – Automatic
- enable PRUNE RESOURCES and SELF HEAL
- SOURCE-
- Repository URL – give your repo URL where you stored the k8s manifest
- Path – yamls
- DESTINATION -
- Cluster URL – choose the default
- namespace- default
- click on create
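As a convenience, the password lookup and decoding from the steps above can be combined into a single command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo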
- encode your Slack token (the Bot token) using this command
echo "your-slack-token" | base64
- edit the argocd-notifications-secret secret to add the Slack token
kubectl -n argocd edit secret argocd-notifications-secret
- add a data field after apiVersion: v1, replacing xxxxx-xxxxxx-xxxxxx with your encoded Slack token
data:
  slack-token: xxxxx-xxxxxx-xxxxxx
- now edit the argocd-notifications-cm ConfigMap
kubectl -n argocd edit cm argocd-notifications-cm
- add this Slack service configuration after apiVersion: v1
data:
  service.slack: |
    token: $slack-token
    username: argocd-bot
    icon: ":rocket:"
  template.app-sync-succeeded-slack: |
    message: |
      Application {{.app.metadata.name}} is now {{.app.status.sync.status}}
  trigger.on-sync-succeeded: |
    - when: app.status.sync.status == 'Synced'
      send: [app-sync-succeeded-slack]
- add the Slack notification annotation to the Application
kubectl -n argocd edit application your-app-name-you-created-in-argocd
- add this annotation in the metadata section like this:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: your-slack-channel-name
  name: argocd-demo
  namespace: argocd
Use these docs to import a Grafana dashboard into Grafana: https://www.coachdevops.com/2022/05/how-to-setup-monitoring-on-kubernetes.html
pipeline {
agent any
tools {
maven 'apache-maven-3.0.1'
}
stages {
stage('Example') {
steps {
sh 'mvn --version'
}
}
}
}
- Define a stage for git checkout
- go to this site: https://opensource.triology.de/jenkins/pipeline-syntax/
- search for checkout: Check out from version control
- give your GitHub URL and branch and generate the pipeline syntax
- paste it into the stage steps
stage('Checkout git') {
steps {
git branch: 'main', url:'https://github.com/praveensirvi1212/DevOps_MasterPiece-CI-with-Jenkins.git'
}
}
- Define a stage as Build and Junit test
- go to this site https://opensource.triology.de/jenkins/pipeline-syntax/
- search for sh: shell script
- give your shell command and generate the pipeline syntax
- paste it into stage > steps > sh 'shell command'
stage ('Build & JUnit Test') {
steps {
sh 'mvn install'
}
}
In this stage, I used withSonarQubeEnv to prepare the SonarQube scanner environment, together with the shell step sh.
- Define a stage SonarQube Analysis
- paste the command that we created at the time of the sonarqube project creation
stage('SonarQube Analysis'){
steps{
withSonarQubeEnv('SonarQube-server') {
sh '''mvn clean verify sonar:sonar \
-Dsonar.projectKey=gitops-with-argocd \
-Dsonar.projectName='gitops-with-argocd' \
-Dsonar.host.url=$sonarurl \
-Dsonar.login=$sonarlogin'''
}
}
}
This step pauses pipeline execution, waits for the previously submitted SonarQube analysis to complete, and returns the quality gate status. Setting the parameter abortPipeline to true aborts the pipeline if the quality gate status is not green.
- Define a stage as a Quality gate
- go to this site https://opensource.triology.de/jenkins/pipeline-syntax/
- search for waitForQualityGate: Wait for SonarQube analysis to be completed and return quality gate status
- generate pipeline syntax and paste it into steps
- timeout is optional
stage("Quality Gate") {
steps {
timeout(time: 1, unit: 'HOURS') {
waitForQualityGate abortPipeline: true
}
}
}
- We wrap the steps in a script block to execute them as a Groovy script.
- Inside the try block, we create a reference to the Artifactory server using the Artifactory.newServer method. You need to provide the URL of your Artifactory instance and the credentials ID configured in Jenkins.
- We define an uploadSpec as a JSON string, specifying the pattern of files to upload and the target repository in Artifactory.
- We call the server.upload(uploadSpec) method to upload the files to Artifactory based on the specified upload specification.
- If any exception occurs during the upload process, the catch block will be executed, and an error message will be displayed.
stage('Deploy Artifacts to Artifactory') {   // enclosing stage added for completeness; name it to match your pipeline
    steps {
        script {
            try {
                def server = Artifactory.newServer url: 'http://13.232.95.58:8082/artifactory', credentialsId: 'jfrog-cred'
                def uploadSpec = """{
                    "files": [
                        {
                            "pattern": "target/*.jar",
                            "target": "${TARGET_REPO}/"
                        }
                    ]
                }"""
                server.upload(uploadSpec)
            } catch (Exception e) {
                error("Failed to deploy artifacts to Artifactory: ${e.message}")
            }
        }
    }
}
First, write your Dockerfile for building the Docker image; I have posted my Dockerfile in the application code repo. In this stage, I used the shell step sh to build the Docker image.
- Define a stage Docker Build
- go to this site https://opensource.triology.de/jenkins/pipeline-syntax/
- search for sh: shell script
- give your shell command to build image > generate pipeline syntax
- I used the build-id of jenkins and git commit id for versions of docker images
stage('Docker Build') {
steps {
sh 'docker build -t ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT} .'
}
}
In this stage, I used the shell step sh to scan the Docker image with Trivy.
- Define a stage Trivy Image scan
- go to this site https://opensource.triology.de/jenkins/pipeline-syntax/
- search for sh: shell script
- give your Trivy shell command to scan the docker image
Note – Trivy supports three report output formats (table, JSON, template). I used an HTML template for the Trivy scan report.
stage('Image Scan') {
steps {
sh ' trivy image --format template --template "@/usr/local/share/trivy/templates/html.tpl" -o report.html ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT} '
}
}
In this stage, I used the shell step sh to upload the scan report to AWS S3.
- Define a stage Upload report to AWS S3
- first create an AWS s3 bucket
- go to this site https://opensource.triology.de/jenkins/pipeline-syntax/
- search for sh: shell script
- give your shell command to upload the object to aws s3
stage('Upload Scan report to AWS S3') {
steps {
// sh 'aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" && aws configure set aws_secret_access_key "$AWS_ACCESS_KEY_SECRET" && aws configure set region ap-south-1 && aws configure set output "json"'
sh 'aws s3 cp report.html s3://devops-mastepiece/'
}
}
In this stage, I used the shell step sh to push the Docker image to Docker Hub. I stored the credentials in Vault and accessed them in Jenkins using the Vault keys; alternatively, you can store the Docker Hub credentials in Jenkins and use them as environment variables.
- Define a stage to push the Docker image
- go to this site https://opensource.triology.de/jenkins/pipeline-syntax/
- search for sh: shell script
- give your shell command to push docker images to the docker hub
stage('Docker Push') {
steps {
withVault(configuration: [skipSslVerification: true, timeout: 60, vaultCredentialId: 'vault-token', vaultUrl: 'http://13.232.53.209:8200'], vaultSecrets: [[path: 'secrets/creds/docker', secretValues: [[vaultKey: 'username'], [vaultKey: 'password']]]]) {
sh "docker login -u ${username} -p ${password} "
sh 'docker push ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}'
sh 'docker rmi ${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}'
}
}
}
- in this stage, we first check whether the repo already exists
- if it exists, we pull the latest changes
- if not, we clone the repo
stage('Clone/Pull Repo') {
steps {
script {
if (fileExists('DevOps_MasterPiece-CD-with-argocd')) {
echo 'Cloned repo already exists - Pulling latest changes'
dir("DevOps_MasterPiece-CD-with-argocd") {
sh 'git pull'
}
} else {
echo 'Repo does not exist - Cloning the repo'
sh 'git clone -b feature https://github.com/praveensirvi1212/DevOps_MasterPiece-CD-with-argocd.git'
}
}
}
}
- used the sed command to replace the image tag in the deployment manifest
stage('Update Manifest') {
steps {
dir("DevOps_MasterPiece-CD-with-argocd/yamls") {
sh 'sed -i "s#praveensirvi.*#${IMAGE_REPO}/${NAME}:${VERSION}-${GIT_COMMIT}#g" deployment.yaml'
sh 'cat deployment.yaml'
}
}
}
- set the global git user email
- set the remote repo URL
- check out the feature branch
- stage the changes
- commit the changes
- push the changes to the feature branch
stage('Commit & Push') {
steps {
withCredentials([string(credentialsId: 'GITHUB_TOKEN', variable: 'GITHUB_TOKEN')]) {
dir("DevOps_MasterPiece-CD-with-argocd/yamls") {
sh "git config --global user.email '[email protected]'"
sh 'git remote set-url origin https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME}'
sh 'git checkout feature'
sh 'git add deployment.yaml'
sh "git commit -am 'Updated image version for Build- ${VERSION}-${GIT_COMMIT}'"
sh 'git push origin feature'
}
}
}
}
The reason for creating a pull request is that ArgoCD syncs automatically with GitHub; GitHub is the single source of truth for ArgoCD. If Jenkins pushed the changes directly to the main branch, ArgoCD would deploy them immediately without review, which should not happen in a production environment. That is why we create a pull request against the main branch: a senior team member can review the changes and merge them, and only then do the changes go to production.
Here token.txt contains the GitHub token; the token is kept in a text file because gh auth login --with-token accepts only STDIN input.
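The creation of token.txt is not shown in the stage below; a minimal sketch of how it could be produced inside the withCredentials block (GITHUB_TOKEN is the credential Jenkins injects; gh refuses --with-token while a GITHUB_TOKEN environment variable is still set, hence the unset):
echo "$GITHUB_TOKEN" > token.txt   # write the injected token to a file
unset GITHUB_TOKEN                 # gh auth login requires the env var to be unset
gh auth login --with-token < token.txt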
stage('Raise PR') {
steps {
withCredentials([string(credentialsId: 'GITHUB_TOKEN', variable: 'GITHUB_TOKEN')]) {
dir("DevOps_MasterPiece-CD-with-argocd/yamls") {
sh '''
set +u
unset GITHUB_TOKEN
gh auth login --with-token < token.txt
'''
sh 'git branch'
sh 'git checkout feature'
sh "gh pr create -t 'image tag updated' -b 'check and merge it'"
}
}
}
}
In the post-build action I used Slack notifications: after the build, Jenkins sends a notification message to Slack indicating whether the build succeeded or failed.
- go to Jenkins > your project > Pipeline Syntax > search for slackSend: Send Slack Message
- write your channel name and message > generate the pipeline syntax
Note – I used custom messages for my project: I created a function for the Slack notification and call it in the post section.
post {
    always {
        sendSlackNotification()
    }
}
The sendSlackNotification function:
def sendSlackNotification()
{
if ( currentBuild.currentResult == "SUCCESS" ) {
buildSummary = "Job_name: ${env.JOB_NAME}\n Build_id: ${env.BUILD_ID} \n Status: *SUCCESS*\n Build_url: ${BUILD_URL}\n Job_url: ${JOB_URL} \n"
slackSend( channel: "#devops", token: 'slack-token', color: 'good', message: "${buildSummary}")
}
else {
buildSummary = "Job_name: ${env.JOB_NAME}\n Build_id: ${env.BUILD_ID} \n Status: *FAILURE*\n Build_url: ${BUILD_URL}\n Job_url: ${JOB_URL}\n \n "
slackSend( channel: "#devops", token: 'slack-token', color : "danger", message: "${buildSummary}")
}
}
https://github.com/praveensirvi1212/DevOps_MasterPiece-CI-with-Jenkins/blob/main/Jenkinsfile
Note: I forgot to change the stage name while building the job, but I have corrected it in the Jenkinsfile.
The SonarQube quality gate status is green, i.e., passed.
You can apply a custom quality gate, for example requiring zero bugs, vulnerabilities, or code smells; if your code has more than zero of these, the quality gate status becomes failed (red), and every stage after the quality gate fails.