This repository contains reusable workflows to check, build, and deploy our projects.
Here we list only the workflows meant to be referenced externally, with examples of how to use them. We skip some workflows because they are already included inside other workflows to reduce boilerplate when writing the final workflows. If you would like more details about these tasks, look at this doc.
- Composite Actions
- Reusable Workflows
Composite actions group a set of steps together so they can be reused inside other jobs' steps, reducing duplicated code in the workflows. Each composite action must be located in its own folder, and the path to this folder is how the action is referenced externally.
For example, the action that builds a NodeJS project is located in the folder .github/actions/node/build. To use this action, append the path .github/actions/node/build to the repository name:
zupit-it/pipeline-templates/.github/actions/node/build
Each composite should be located inside the path
zupit-it/pipeline-templates/.github/actions/<technology>/<action-to-execute>
Where:
- technology is the technology used to execute the action. For example, when building a NodeJS project, NodeJS is the technology.
- action-to-execute is the action that you want to execute. In the previous example, build is the action.
In this way, all actions for the same technology are grouped together.
Sometimes you may need more than one nesting level to group multiple actions. The only such case, as of now, is grouping deploy actions by service and by provider.
For example:
- Azure
- App Service
- Functions
- Storage Accounts
- AWS
- App Runner
- Lambda
- S3 bucket
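As a sketch of how such a path is referenced from a caller workflow (the version tag v1.27.0 is just an example), the action is used inside a normal job step:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Reference the composite action by repository name + path + version tag
      - name: Build NodeJS project
        uses: zupit-it/pipeline-templates/.github/actions/node/[email protected]
        with:
          NODE_VERSION: 16.17.0
          RELEASE_ENVIRONMENT: testing
          WORKING_DIRECTORY: frontend
```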
This workflow requires this command in order to succeed:
- build:{environment}: Build the project based on the target environment (e.g. testing, staging and production)
This workflow automatically calls the checkout action to download the codebase.
This workflow uses npm as package manager.
.github/actions/node/build is the action that builds a NodeJS project.
It requires these inputs:
- NODE_VERSION: The NodeJS version required to build the project.
- RELEASE_ENVIRONMENT: The environment for which the project must be compiled (e.g. testing, staging, production).
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
In addition, it is possible to specify these optional inputs:
- SHELL: The shell type to use. By default, it is bash.
- PROJECT: The project to use when running npm scripts. If set, the executed npm script will be {PROJECT}:{SCRIPT_NAME} instead of {SCRIPT_NAME}.
- CHECKOUT_REF: The ref of the branch/tag to check out before running the build. See the ref parameter of the checkout action. By default, it is ''.
- RUN_ON: The label to select the correct github-runner that will execute this workflow. By default, it is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
steps:
  - name: Build NodeJS project
    uses: zupit-it/pipeline-templates/.github/actions/node/[email protected]
    with:
      NODE_VERSION: 16.17.0
      RELEASE_ENVIRONMENT: testing
      WORKING_DIRECTORY: frontend
This workflow requires a Dockerfile inside the working directory to create the docker image to publish on a docker registry.
Before calling this workflow, remember to call the action checkout to download the codebase.
This workflow uses npm as package manager.
.github/actions/docker/build-and-push is the composite action that builds the docker image and then pushes it to the registry.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
- REGISTRY_URL: The registry url where to push the Docker image. By default, it is ghcr.io.
- REGISTRY_USER: The user used to access the registry. By default, it is the GitHub variable github.actor, the user who started the workflow.
- REGISTRY_PASSWORD: The user's password to access the registry.
- DOCKERFILE_PATH: The path to the Dockerfile to build.
- DOCKER_IMAGE_NAME: The name to assign to the built Docker image.
- DOCKER_IMAGE_TAG: The tag to assign to the built Docker image.
- BUILD_ARGS: Additional data to pass when building the Dockerfile.
- ENV_VARIABLES: A stringified json to pass multiple values at once, since Github does not propagate env variables.
It then outputs this variable:
- DOCKER_IMAGE_NAME: The final Docker image name with the registry path included.
This is an example to show how data should be formatted.
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Build & Push Docker
id: docker
uses: zupit-it/pipeline-templates/.github/actions/docker/[email protected]
with:
REGISTRY_URL: ghcr.io
REGISTRY_USER: ${{ github.actor }}
REGISTRY_PASSWORD: ${{ github.token }}
WORKING_DIRECTORY: frontend
DOCKERFILE_PATH: frontend/docker/Dockerfile
DOCKER_IMAGE_NAME: angular
DOCKER_IMAGE_TAG: latest
BUILD_ARGS: |
DIST_PATH=dist/testing
env: "${{secrets}}"
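The DOCKER_IMAGE_NAME output can then be read by later steps through the step id (docker in the snippet above):

```yaml
# Reads the output of the step with id "docker"
- name: Print image name
  run: echo "Image ${{ steps.docker.outputs.DOCKER_IMAGE_NAME }}"
```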
This action:
- auto-generates a global.json, if not provided;
- installs .NET SDK dependencies on Alpine OS;
- installs the specified .NET SDK version. The dotnet command becomes globally available.
Requirements:
- The WORKING_DIRECTORY directory must contain a solution or a project file.
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/install is the action that installs .NET in the current runner.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. It must contain a solution (.sln) or a project (.csproj) file.
- DOTNET_VERSION: The .NET SDK version to install. See documentation for allowed values.
- ALPINE_OS: Whether or not the current Linux distribution is Alpine. This could be auto-detected in the future.
In addition, it is possible to specify this optional input:
- SHELL: The shell type to use. By default, it is bash.
This is an example to show how data should be formatted.
steps:
- name: Install .NET
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "back-end"
DOTNET_VERSION: "7"
ALPINE_OS: true
SHELL: "bash"
This action:
- downloads NuGet packages from the cache, if available;
- restores NuGet packages;
- runs the dotnet build command on the WORKING_DIRECTORY.
Requirements:
- The WORKING_DIRECTORY directory must contain a solution or a project file.
- A packages.lock.json file must be provided for the solution in order to enable repeatable package restoration.
- The correct .NET version must be installed.
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/build is the action that builds a .NET project or solution.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. It must contain a solution (.sln) or a project (.csproj) file.
- BUILD_CONFIG: The configuration to use when building the solution or the project. Usually Debug or Release.
In addition, it is possible to specify this optional input:
- SHELL: The shell type to use. By default, it is bash.
This is an example to show how data should be formatted.
steps:
- name: Build
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "back-end"
BUILD_CONFIG: "Release"
SHELL: "bash"
This action:
- installs or updates CSharpier;
- runs the dotnet-csharpier . --check command on the WORKING_DIRECTORY.
Requirements:
- The WORKING_DIRECTORY directory must contain a solution or a project file.
- The correct .NET version must be installed.
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/format is the action that checks the code formatting of a .NET solution.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. It must contain a solution (.sln) or a project (.csproj) file.
In addition, it is possible to specify these optional inputs:
- SHELL: The shell type to use. By default, it is bash.
- CSHARPIER_VERSION: The CSharpier version to install. By default, it is the latest.
This is an example to show how data should be formatted.
steps:
- name: Check formatting
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "back-end"
SHELL: "bash"
CSHARPIER_VERSION: "0.25.0"
This action:
- runs the dotnet format command on the WORKING_DIRECTORY.
Requirements:
- The WORKING_DIRECTORY directory must contain a solution or a project file.
- The correct .NET (6+) version must be installed.
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/lint is the action that lints the code of a .NET solution.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. It must contain a solution (.sln) or a project (.csproj) file.
In addition, it is possible to specify this optional input:
- SHELL: The shell type to use. By default, it is bash.
This is an example to show how data should be formatted.
steps:
- name: Lint
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "backend"
SHELL: "bash"
This action:
- discovers and executes tests on the .NET solution contained in the WORKING_DIRECTORY directory;
- if specified, it generates test and code coverage results.
Requirements:
- The WORKING_DIRECTORY directory must contain a solution or a project file.
- The correct .NET version must be installed.
- The project must be already built.
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/test is the action that tests a .NET solution.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. It must contain a solution (.sln) or a project (.csproj) file.
In addition, it is possible to specify these optional inputs:
- GENERATE_CODE_COVERAGE: Whether or not the test results and code coverage files should be generated. If true, a TestResults folder containing .trx test results and a coverage.opencover.xml coverage file are generated inside each test project folder. By default, it is true.
- SHELL: The shell type to use. By default, it is bash.
This is an example to show how data should be formatted.
steps:
- name: Run tests
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "back-end"
GENERATE_CODE_COVERAGE: true
This action runs the dotnet publish command on the WORKING_DIRECTORY directory.
Requirements:
- The WORKING_DIRECTORY directory must be an ancestor of the project file (PROJECT parameter).
- The correct .NET version must be installed.
- The project must be already built.
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/publish is the action that publishes a .NET project.
It requires these inputs:
- WORKING_DIRECTORY: The ancestor directory of the project.
- BUILD_CONFIG: The configuration to use when publishing the project. Usually Release.
- PROJECT: The path to the .csproj file, relative to the WORKING_DIRECTORY directory.
- OUTPUT_DIRECTORY: The directory where output binaries will be created. This is relative to the WORKING_DIRECTORY directory.
In addition, it is possible to specify this optional input:
- SHELL: The shell type to use. By default, it is bash.
This is an example to show how data should be formatted.
steps:
- name: Publish
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "back-end"
PROJECT: "My.Api/My.Api.csproj"
OUTPUT_DIRECTORY: "binaries"
BUILD_CONFIG: "Release"
SHELL: "bash"
This action executes the following child-actions:
It's a convenience action for actions that are frequently used together.
Check the requirements of the child actions:
This workflow doesn't download the codebase. You have to check out the repo by yourself.
.github/actions/dotnet/release is the action that installs .NET, builds and publishes a .NET project.
It requires these inputs:
- WORKING_DIRECTORY
- BUILD_CONFIG
- PROJECT
- OUTPUT_DIRECTORY
In addition, it is possible to specify this optional input:
- SHELL: The shell type to use. By default, it is bash.
Each parameter is passed down to the child action parameter with the same name (if available). Check out the child actions' parameter definitions.
This is an example to show how data should be formatted.
steps:
- name: Build
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: "back-end"
BUILD_CONFIG: "Release"
PROJECT: "My.Api/My.Api.csproj"
OUTPUT_DIRECTORY: "binaries"
This action:
- logs in to Azure CLI;
- deploys an application to an Azure App Service or Azure Function instance;
- logs out from Azure CLI.
Note: Azure Functions are built on top of the Azure App Service infrastructure, which is why this action is named just App Service.
Requirements:
- The WORKING_DIRECTORY directory must be an ancestor of the BINARIES_DIRECTORY directory.
- The App Service/Function must be correctly configured with the correct technology and runtime version.
- This action must run in an environment with the Azure CLI installed.
- This action must run in an environment without any other action performing AZ login/logout in parallel.
- Bash
.github/actions/azure/app-service/deploy is the action that deploys an application to an Azure App Service or Azure Function instance.
It requires these inputs:
- WORKING_DIRECTORY: The ancestor directory of the BINARIES_DIRECTORY directory.
- BINARIES_DIRECTORY: The folder containing binaries to publish to the App Service/Function.
- WEBAPP_NAME: The name of the AppService/Function.
It also requires these secrets:
- AZURE_CREDENTIALS: The secret json containing credentials to connect using Azure CLI. See the documentation for more information.
In addition, it is possible to specify this optional input:
- WEBAPP_SLOT: The App Service/Function slot where the binaries should be published to. By default, it is production.
Note: this action restarts the App Service/Function.
Note: after this action completes it is not guaranteed that the App Service/Function will immediately run the new code. It may require some time based on the technology and hosting (e.g. App Service on Linux).
This is an example to show how data should be formatted.
steps:
- name: Publish to Azure App Service
uses: zupit-it/pipeline-templates/.github/actions/azure/app-service/[email protected]
with:
WORKING_DIRECTORY: "back-end"
BINARIES_DIRECTORY: "output"
AZURE_CREDENTIALS: ${{ secrets.CI_AZURE_CREDENTIALS }}
WEBAPP_NAME: "my-app-001"
This action:
- logs in to Azure CLI;
- deploys a static web-app to Azure Storage Blob Service;
- [optional] cleans the Azure CDN or Azure Front-door cache;
- logs out from Azure CLI.
Requirements:
- The WORKING_DIRECTORY directory must be an ancestor of the BINARIES_DIRECTORY directory.
- The Storage Account must be configured to serve static content.
- This action must run in an environment with the Azure CLI installed.
- This action must run in an environment without any other action performing AZ login/logout in parallel.
- Bash
.github/actions/azure/storage/deploy is the action that deploys a static web-app to an Azure Storage Account. It also cleans the cache of the Azure CDN or Azure Front-door if specified.
It requires these inputs:
- WORKING_DIRECTORY: The ancestor directory of the BINARIES_DIRECTORY directory.
- BINARIES_DIRECTORY: The folder containing binaries to publish to the Storage Account.
- STORAGE_ACCOUNT_NAME: The name of the Storage Account.
In addition, it is possible to specify these optional inputs:
- CDN_PROFILE_NAME: The name of the Azure CDN profile. Required if CDN_RG_NAME is specified.
- CDN_ENDPOINT_NAME: The name of the Azure CDN endpoint. It must be a child of the CDN_PROFILE_NAME CDN profile. Required if CDN_RG_NAME is specified.
- CDN_RG_NAME: The resource group name where the Azure CDN profile is held.
- FD_ENDPOINT_NAME: The name of the Azure Front-door endpoint. Required if FD_RG_NAME is specified.
- FD_DOMAIN_NAME: The domain name of the Azure Front-door endpoint. It must be a child of the FD_ENDPOINT_NAME Front-door endpoint. Required if FD_RG_NAME is specified.
- FD_PROFILE_NAME: The name of the Azure Front-door profile. Required if FD_RG_NAME is specified.
- FD_RG_NAME: The resource group name where the Front-door instance is held.
If no Front-door or CDN is specified, the action will only upload the files to the Storage Account.
If you want to purge the CDN cache, you must specify:
- CDN_PROFILE_NAME
- CDN_ENDPOINT_NAME
- CDN_RG_NAME
If you want to purge the Front-door cache, you must specify:
- FD_ENDPOINT_NAME
- FD_DOMAIN_NAME
- FD_PROFILE_NAME
- FD_RG_NAME
It also requires these secrets:
- AZURE_CREDENTIALS: The secret json containing credentials to connect using Azure CLI. See the documentation for more information.
This is an example to show how data should be formatted.
steps:
- name: Deploy to Azure Storage
uses: zupit-it/pipeline-templates/.github/actions/azure/storage/[email protected]
with:
WORKING_DIRECTORY: front-end
BINARIES_DIRECTORY: dist/apps/my-app
AZURE_CREDENTIALS: ${{ secrets.CI_AZURE_CREDENTIALS }}
STORAGE_ACCOUNT_NAME: stmyproject001
CDN_PROFILE_NAME: cdnp-myproject-001
CDN_ENDPOINT_NAME: cdne-myproject-001
CDN_RG_NAME: rg-myproject-001
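For a Front-door setup, the purge inputs change accordingly. A sketch with placeholder resource names, assuming the FD_* inputs described above:

```yaml
steps:
  - name: Deploy to Azure Storage
    uses: zupit-it/pipeline-templates/.github/actions/azure/storage/[email protected]
    with:
      WORKING_DIRECTORY: front-end
      BINARIES_DIRECTORY: dist/apps/my-app
      AZURE_CREDENTIALS: ${{ secrets.CI_AZURE_CREDENTIALS }}
      STORAGE_ACCOUNT_NAME: stmyproject001
      # All four FD_* inputs are required to purge the Front-door cache
      FD_PROFILE_NAME: afd-myproject-001
      FD_ENDPOINT_NAME: fde-myproject-001
      FD_DOMAIN_NAME: www.myproject.com
      FD_RG_NAME: rg-myproject-001
```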
This action:
- deploys an application to IIS.
Requirements:
- IIS 6
- The account used to run the GitHub runner must be part of the Administrators group.
- The application pool must have the same name as the folder of the application.
- The entrypoint for the IIS website must be located inside the application's folder, and it must be named htdocs.
Example:
- Application pool name: example.zupit.software
- Application folder: C:\inetpub\example.zupit.software
- IIS website entrypoint: C:\inetpub\example.zupit.software\htdocs
.github/actions/iis/deploy is the action that deploys an application to IIS.
It requires these inputs:
- ARTIFACT_NAME: The artifact's name holding the application's binaries.
- APPS_PATH: The folder path where IIS websites are hosted. This must be the parent of the application's folder.
- APP_POOL_NAME: The name of the application pool.
This is an example to show how data should be formatted.
steps:
- name: Deploy to IIS
uses: zupit-it/pipeline-templates/.github/actions/iis/[email protected]
with:
ARTIFACT_NAME: my-artifact-name
APPS_PATH: 'C:\inetpub'
APP_POOL_NAME: "example.zupit.software"
This action:
- generates a unique name for an artifact using the specified prefix.
The generated artifact name is in the format prefix-<random-string>.
Requirements:
- Bash
.github/actions/artifact/generate-name is the action that generates a unique name for an artifact using the specified prefix. This is useful when you have multiple artifacts to upload on the same workflow, and you want to avoid name collisions.
It requires these inputs:
- NAME_PREFIX: The prefix to use when generating the artifact name.
It then outputs this variable:
- ARTIFACT_NAME: The generated artifact name.
This is an example to show how to use this action with the support of the Generate artifact name action.
- name: Generate artifact name
id: artifact-name
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
NAME_PREFIX: dotnet-build
- name: Build
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: ${{ inputs.WORKING_DIRECTORY }}
BUILD_CONFIG: "Release"
PROJECT: my-project
OUTPUT_DIRECTORY: ${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
- name: Upload build artifact
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
SOURCE_FOLDER: my-source-folder
ARTIFACT_NAME: ${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
This action:
- downloads an archived artifact;
- extracts the artifact in the specified directory.
Requirements:
- See the requirements of Artifact Action - Extract archive.
.github/actions/artifact/download is the action that downloads an artifact and extracts the archive it holds in the specified directory.
It requires these inputs:
- ARTIFACT_NAME: The artifact's name. Usually, it is the name generated using the action artifact/generate-name.
In addition, it is possible to specify these optional inputs:
- OUTPUT_FOLDER: The folder where the artifact will be extracted. By default, it is /tmp.
- ARCHIVE_NAME: The name of the archive held in the artifact. By default, it is dist.tar.gz.
This is an example to show how data should be formatted.
steps:
- name: Download artifact
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
ARTIFACT_NAME: my-artifact-name
This action:
- creates an archive containing the files in the specified folder and uploads it as an artifact.
Requirements:
- See the requirements of Artifact Action - Create archive.
.github/actions/artifact/upload is the action that creates an archive containing the files in the specified folder and uploads it as an artifact.
It requires these inputs:
- SOURCE_FOLDER: The folder containing the files to archive and upload.
- ARTIFACT_NAME: The name of the artifact to create.
In addition, it is possible to specify these optional inputs:
- ARCHIVE_PATH: The path to the archive to create. By default, it is /tmp/dist.tar.gz.
- RETENTION_DAYS: The number of days to keep the artifact. By default, it is 1.
This is an example to show how to use this action with the support of the Generate artifact name action.
- name: Generate artifact name
id: artifact-name
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
NAME_PREFIX: dotnet-build
- name: Build
uses: zupit-it/pipeline-templates/.github/actions/dotnet/[email protected]
with:
WORKING_DIRECTORY: my-dir
BUILD_CONFIG: "Release"
PROJECT: my-project
OUTPUT_DIRECTORY: ${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
- name: Upload build artifact
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
SOURCE_FOLDER: my-source-folder
ARTIFACT_NAME: ${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
This action:
- creates an archive containing the files in the specified folder.
Requirements:
- Bash
- OS: Linux or Windows 10 Build 17063 or more recent. The action is based on the tar command.
.github/actions/artifact/create-archive is the action that creates an archive containing the files in the specified folder.
It requires these inputs:
- SOURCE_FOLDER: The folder containing the files to archive.
In addition, it is possible to specify this optional input:
- ARCHIVE_PATH: The path to the archive to create. By default, it is /tmp/dist.tar.gz.
It then outputs this variable:
- ARCHIVE_PATH: The path to the archive created.
You may want to use the Artifact Action - Upload instead of this action, as it creates an archive and uploads it as an artifact.
This is an example to show how data should be formatted.
- name: Create archive
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
SOURCE_FOLDER: my-source-folder
ARCHIVE_PATH: /tmp/my-archive.tar.gz
This action:
- extracts an archive in the specified directory.
Requirements:
- Bash
- OS: Linux or Windows 10 Build 17063 or more recent. The action is based on the tar command.
.github/actions/artifact/extract-archive is the action that extracts an archive in the specified directory.
It requires these inputs:
- ARCHIVE_PATH: The path to the archive to extract.
- OUTPUT_FOLDER: The folder where the archive will be extracted.
You may want to use the Artifact Action - Download instead of this action, as it downloads an archived artifact and extracts it in the specified directory.
This is an example to show how data should be formatted.
- name: Extract archive
uses: zupit-it/pipeline-templates/.github/actions/artifact/[email protected]
with:
ARCHIVE_PATH: /tmp/my-archive.tar.gz
OUTPUT_FOLDER: my-output-folder
In all the examples, we set secrets: inherit to pass all secrets to the reusable workflows, but it is also possible to pass only a subset of secrets.
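For instance, a caller could forward only the secrets a given workflow actually needs. A sketch (NPM_TOKEN is a placeholder secret name):

```yaml
jobs:
  node-common:
    uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
    with:
      WORKING_DIRECTORY: frontend
      NODE_VERSION: 16.17.0
      CYPRESS_IMAGE: cypress/browsers:node16.17.0-chrome106
    # Instead of "secrets: inherit", pass an explicit subset
    secrets:
      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```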
In addition, we added the input LABELS to all step workflows, as GitHub does not allow setting the runs-on from the caller side, only inside the reusable workflows. As we want to define the runners as late as possible, we decided to add this input variable.
We've defined 2 different types of workflows:
- step: a reusable workflow that runs a set of specific tasks that can be grouped together (e.g. checking if the project is linted and builds, run the tests, build and push a docker image, ...).
- workflow: a reusable workflow that contains a set of our "steps" workflows to reduce the boilerplate when writing the final workflows. One of the use cases is to check if the code is linted, it builds correctly and the tests pass, as this is used in almost all of our projects.
Our reusable workflows are named to follow this standard:
<technology-or-application>-<workflow-type>-<action-to-execute>.yml
Thus, it is easy to understand that a workflow uses a specific technology or application to execute the desired action.
The following workflows are deprecated:
- node-step-docker-build-and-push-image.yml
- node-step-format-lint-build.yml
- node-workflow-common.yml
This workflow requires these commands in order to succeed:
- [PROJECT:]ci:format:check: Check that the code is formatted correctly.
- [PROJECT:]ci:lint: Check that the code is linted correctly.
- [PROJECT:]ci:build: Check that the project builds correctly
- [PROJECT:]ci:e2e: Check that all cypress tests pass (only if tests are enabled).
This command must generate the coverage report lcov.info inside the coverage folder in the NodeJS directory (e.g. frontend/coverage/lcov.info).
The optional PROJECT value is used in configurations where a single Node solution hosts multiple projects. This is the case for NX, where multiple applications and libraries exist in the same Node project.
This workflow uses npm as package manager.
node-workflow-common.yml is the reusable workflow to check that the code is correctly formatted and linted, that it builds correctly and that all tests pass.
It groups together these reusable workflows:
- node-step-format-lint-build.yml
- node-step-test-cypress.yml
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. This is basically the directory which contains the NodeJS application.
- NODE_VERSION: The NodeJS Docker image where the runner executes all the commands.
- CYPRESS_IMAGE: The Cypress Docker image where the runner executes all the commands.
- DIST_PATH: The output distribution path of the node build
In addition, it is possible to specify these optional inputs:
- COVERAGE_ARTIFACT_NAME: The artifact's name for the lcov.info file. By default, it is lcov.info.
- ENABLE_TESTS: Whether or not to run the cypress tests workflow. By default, it is true.
- TIMEOUT: Used for tests; if the tests take more than the given time in minutes, GitHub forcefully stops the workflow. By default, it is 30.
- RUN: Whether to run all the jobs inside workflows or not. This is useful when you want to skip checks since the code didn't change. By default, it is true.
- PROJECT: The project to use when running npm scripts. If set, the executed npm script will be {PROJECT}:{SCRIPT_NAME} instead of {SCRIPT_NAME}.
- RUN_ON: The label to select the correct github-runner that will execute this workflow. By default, it is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
node-common:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: frontend
NODE_VERSION: 16.17.0
CYPRESS_IMAGE: cypress/browsers:node16.17.0-chrome106
secrets: inherit
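In an NX monorepo, the optional PROJECT input scopes the npm scripts to a single project. A sketch (my-app is a placeholder project name):

```yaml
jobs:
  node-common:
    uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
    with:
      WORKING_DIRECTORY: frontend
      NODE_VERSION: 16.17.0
      CYPRESS_IMAGE: cypress/browsers:node16.17.0-chrome106
      # Prefixes every npm script with the project name
      PROJECT: my-app
    secrets: inherit
```

With PROJECT set, the executed scripts become my-app:ci:format:check, my-app:ci:lint, and so on.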
This workflow allows skipping the inner jobs using the input variable RUN. This is useful when the code didn't change and you want to skip the required checks and allow the PR to move on. One way to check whether the code changed is the dorny/paths-filter@v2 action. Here is an example of how to detect whether the code changed and, based on that, run the workflows or not.
jobs:
check-changes:
runs-on: [pinga, pipeline, container]
outputs:
backend: ${{ steps.changes.outputs.backend }}
frontend: ${{ steps.changes.outputs.frontend }}
steps:
- uses: dorny/paths-filter@v2
id: changes
with:
filters: |
backend:
- 'backend/**'
frontend:
- 'frontend/**'
angular-common:
needs: check-changes
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: "frontend"
NODE_VERSION: "14.11.0"
ENABLE_TESTS: false
RUN: ${{ needs.check-changes.outputs.frontend == 'true' }}
secrets: inherit
This checks whether the two folders, backend and frontend, were touched. If there is any change in the frontend folder, all the inner workflows inside node-workflow-common are executed.
This workflow requires this command in order to succeed:
- build:{environment}: Build the project based on the target environment (e.g. testing, staging and production)
It also requires a Dockerfile inside the working directory to create the docker image to publish on a docker registry.
This workflow uses npm as package manager.
node-step-docker-build-and-push-image.yml is the workflow that builds the docker image and then pushes it to the registry. It is similar to docker-step-build-and-push-image.yml, but it also runs the NodeJS build of the project.
This workflow uses these composite actions:
- actions/node/build: builds NodeJS project
- actions/docker/build-and-push: creates the Docker image and pushes it to the desired registry.
This workflow uses a NodeJS Docker image, hence remember to use labels to match runners specific for Docker.
It requires these inputs:
- NODE_VERSION: The NodeJS version required to build the project.
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
- RELEASE_ENVIRONMENT: The environment for which the project must be compiled (e.g. testing, staging, production).
- DOCKERFILE_PATH: The path to the Dockerfile to build.
- DOCKER_IMAGE_NAME: The name to assign to the built Docker image.
- DOCKER_IMAGE_TAG: The tag to assign to the built Docker image.
- BUILD_ARGS: Additional data to pass when building the Dockerfile.
- DIST_PATH: The output distribution path of the node build
- ARTIFACT_NAME: The name of the artifact. Should be changed when using multiple node builds for the same project at the same time
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
- REGISTRY_URL: The registry url where to push the Docker image. By default, it is ghcr.io.
- REGISTRY_USER: The user used to access the registry. By default, it is the GitHub variable github.actor, the user who started the workflow. If you need a different user, remember to override the GITHUB_TOKEN secret.
- PROJECT: The project to use when running npm scripts. If set, the executed npm script will be {PROJECT}:{SCRIPT_NAME} instead of {SCRIPT_NAME}.
- CHECKOUT_REF: The ref of the branch/tag to check out before running the build. See the ref parameter of the checkout action. By default, it is ''.
It then outputs this variable:
- DOCKER_IMAGE_NAME: The final Docker image name with the registry path included.
This is an example to show how data should be formatted.
jobs:
build-and-push-image:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
NODE_VERSION: 16.17.0
RELEASE_ENVIRONMENT: testing
WORKING_DIRECTORY: frontend
REGISTRY_URL: ghcr.io
DOCKERFILE_PATH: frontend/docker/Dockerfile
DOCKER_IMAGE_NAME: ionic
DOCKER_IMAGE_TAG: latest
BUILD_ARGS: |
DIST_PATH=dist/apps/enci
secrets: inherit
This workflow combines two main actions:
The input parameters of this workflow have the same names as the corresponding parameters in the child actions. Refer to them for more information.
Also, these input parameters are optional:
- IMAGE: the docker image to use when running the node build. By default, it is ubuntu:23.04.
- AZURE_CLI_IMAGE: the docker image to use when running the deployment to Azure Storage. By default, it is mcr.microsoft.com/azure-cli:2.50.0.
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
build-and-push-image:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: front-end
NODE_VERSION: "16.17.0"
RELEASE_ENVIRONMENT: testing
DIST_PATH: dist/apps/cta-conta
STORAGE_ACCOUNT_NAME: stmyproject001
CDN_PROFILE_NAME: cdnp-myproject-001
CDN_ENDPOINT_NAME: cdne-myproject-001
CDN_RG_NAME: rg-myproject-001
This workflow requires these files inside the Django directory:
- requirements.txt with Coverage, Black and Flake8 to check the coverage and the code style.
- env.github with the required environment variables in order to run the checks and tests in the workflows.
This workflow uses pip as package manager.
django-workflow-common.yml is the reusable workflow to check that the code is correctly linted, that all migrations are not broken and that all tests pass.
It groups together these reusable workflows:
- django-step-lint-check.yml
- django-step-tests.yml
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. This is basically the directory which contains the Django application.
- PYTHON_IMAGE: The Python Docker image where the runner executes all the commands.
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
- COVERAGE_ARTIFACT_NAME: The artifact's name for the coverage-django.xml file. By default, it is coverage-django.xml.
- RUN: Whether to run all the jobs inside workflows or not. This is useful when you want to skip checks since the code didn't change. By default, it is true.
- DJANGO_MIGRATIONS_CHECK_APPS: The Django apps on which to run migration checks.
- SETUP_COMMANDS: Allows executing commands before the dependencies are downloaded. Useful to install system packages required by Python dependencies.
- ENABLE_LFS: Whether to enable Git LFS support on checkout.
- LFS_REPO_PATH: Required when ENABLE_LFS is true. Workaround for actions/checkout#1169. Set it to "/__w/repo-name/repo-name".
- COVERAGE_THRESHOLD: The minimal code coverage for this project. If the coverage is lower than this value, the workflow will fail. By default, it is 50.
This is an example to show how data should be formatted.
jobs:
django-common:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: backend
PYTHON_IMAGE: python:3.8.2-slim-buster
COVERAGE_ARTIFACT_NAME: coverage-django.xml
SETUP_COMMANDS: "apt update && apt install -y gcc"
secrets: inherit
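If the repository uses Git LFS, the same workflow can be invoked with the LFS inputs enabled. A minimal sketch, assuming the repository is named my-repo (adjust LFS_REPO_PATH to your repository name):

```yaml
jobs:
  django-common:
    uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
    with:
      WORKING_DIRECTORY: backend
      PYTHON_IMAGE: python:3.8.2-slim-buster
      ENABLE_LFS: true
      # Workaround for actions/checkout#1169: checkout path inside the container
      LFS_REPO_PATH: "/__w/my-repo/my-repo"
    secrets: inherit
```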
This workflow allows skipping the inner jobs using the input variable RUN. This is useful when the code didn't change and you want to skip the required checks so the PR can move on. One way to check whether the code changed is the dorny/paths-filter@v2 action. Here is an example of how to detect whether the code changed and, based on that, run the workflows or not.
jobs:
check-changes:
runs-on: [pinga, pipeline, container]
outputs:
backend: ${{ steps.changes.outputs.backend }}
frontend: ${{ steps.changes.outputs.frontend }}
steps:
- uses: dorny/paths-filter@v2
id: changes
with:
filters: |
backend:
- 'backend/**'
frontend:
- 'frontend/**'
django-common:
needs: check-changes
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: "backend"
PYTHON_IMAGE: "python:3.8.2-slim-buster"
RUN: ${{ needs.check-changes.outputs.backend == 'true' }}
secrets: inherit
This basically checks whether the two folders, backend and frontend, were touched. If there is any change in the backend folder, all the inner workflows inside django-workflow-common are executed.
This workflow requires these plugins:
- Spotless & Checkstyle to check that formatting and coding style are correct.
- Jacoco to create report from tests.
In addition, the Maven verify command should generate the coverage reports.
This workflow uses Maven as package manager.
springboot-workflow-common.yml is the reusable workflow to check that the code is correctly linted and that all tests pass.
It groups together these reusable workflows:
- springboot-step-lint-check.yml
- springboot-step-tests.yml
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands. This is basically the directory which contains the Spring Boot application.
- JAVA_IMAGE: The Java Docker image where the runner executes all the commands.
In addition, it is possible to specify these optional inputs:
- COVERAGE_ARTIFACT_NAME: The artifact's name for the Jacoco reports file. By default, it is target.
- MAVEN_USER_HOME: The path to the Maven directory. By default, it is ./m2.
- EXTRA_MAVEN_ARGS: Additional arguments for Maven. By default, it is "".
- USE_CI_POSTGRES: Whether to use Postgres for tests or not. If enabled, it injects the connection string to the DB for tests. By default, it is true.
- RUN: Whether to run all the jobs inside workflows or not. This is useful when you want to skip checks since the code didn't change. By default, it is true.
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
java-common:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: backend
JAVA_IMAGE: openjdk:12
USE_CI_POSTGRES: false
secrets: inherit
This workflow allows skipping the inner jobs using the input variable RUN. This is useful when the code didn't change and you want to skip the required checks so the PR can move on. One way to check whether the code changed is the dorny/paths-filter@v2 action. Here is an example of how to detect whether the code changed and, based on that, run the workflows or not.
jobs:
check-changes:
runs-on: [pinga, pipeline, container]
outputs:
backend: ${{ steps.changes.outputs.backend }}
frontend: ${{ steps.changes.outputs.frontend }}
steps:
- uses: dorny/paths-filter@v2
id: changes
with:
filters: |
backend:
- 'backend/**'
frontend:
- 'frontend/**'
java-common:
needs: check-changes
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
CONTAINER_CI_LABELS: "['pinga', 'pipeline', 'container']"
WORKING_DIRECTORY: backend
JAVA_IMAGE: openjdk:12
USE_CI_POSTGRES: false
RUN: ${{ needs.check-changes.outputs.backend == 'true' }}
secrets: inherit
This basically checks whether the two folders, backend and frontend, were touched. If there is any change in the backend folder, all the inner workflows inside springboot-workflow-common are executed.
This workflow requires this plugin:
- jib: Build and publish docker image to the given registry.
This workflow uses Maven as package manager.
springboot-step-docker-build-and-push-image.yml is the workflow that builds the docker image and then pushes it to the registry.
This workflow uses a Java Docker image, hence remember to use labels to match runners specific for Docker.
It requires these inputs:
- JAVA_IMAGE: The Java image required to build the project.
- RELEASE_ENVIRONMENT: The environment for which the project must be compiled (e.g. testing, staging, production).
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
- REGISTRY_URL: The registry url where to push the Docker image.
- DOCKER_IMAGE_NAME: The name to assign to the built Docker image.
- DOCKER_IMAGE_TAG: The tag to assign to the built Docker image.
In addition, it is possible to specify these optional inputs:
- MAVEN_USER_HOME: The path to Maven directory. By default, it is ./m2.
- EXTRA_MAVEN_ARGS: Additional arguments for Maven. By default, it is "".
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
It then outputs this variable:
- DOCKER_IMAGE_NAME: The final Docker image name with the registry path included.
This is an example to show how data should be formatted.
jobs:
springboot-build-and-push-image:
needs: [common]
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
JAVA_IMAGE: openjdk:12
RELEASE_ENVIRONMENT: testing
WORKING_DIRECTORY: backend
REGISTRY_URL: ghcr.io
DOCKER_IMAGE_NAME: springboot
DOCKER_IMAGE_TAG: latest
secrets: inherit
This workflow is based on the following actions:
Check these actions requirements before using this workflow.
dotnet-workflow-common.yml is the reusable workflow to check that the code is correctly linted, formatted, and that all tests pass.
It requires these inputs:
- WORKING_DIRECTORY: check actions used by this workflow for more information.
- DOTNET_IMAGE: the .NET docker image (usually 'mcr.microsoft.com/dotnet/sdk') to use.
In addition, it is possible to specify these optional inputs:
- DOTNET_IMAGE_ENV_VARIABLES: The environment variables to set when running the .NET docker image.
- CSHARPIER_VERSION: The version of the CSharpier tool to use. For the default value, see the dotnet/format action.
- RUN_LINT: Whether or not the lint command should be executed. By default, it is true.
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
common:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: "backend"
DOTNET_IMAGE: "mcr.microsoft.com/dotnet/sdk:7.0"
This workflow requires a Dockerfile inside the working directory to create the docker image to publish on a docker registry.
The github runner which will execute this workflow should be capable of running docker commands.
docker-step-build-and-push-image.yml is the workflow that builds the Docker image and then pushes it to the registry.
This workflow uses this composite action:
- actions/docker/build-and-push: creates the Docker image and pushes it to the desired registry.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
- RELEASE_ENVIRONMENT: The environment for which the project must be compiled (e.g. testing, staging, production).
- DOCKERFILE_PATH: The path to the Dockerfile to build.
- DOCKER_IMAGE_NAME: The name to assign to the built Docker image.
- DOCKER_IMAGE_TAG: The tag to assign to the built Docker image.
- BUILD_ARGS: Additional data to pass when building the Dockerfile.
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
- REGISTRY_URL: The registry url where to push the Docker image. By default, it is ghcr.io.
- REGISTRY_USER: The registry user used to push the Docker image. By default, it is the GitHub variable github.actor, the user who started the workflow. If you need a different user, remember to override the GITHUB_TOKEN secret.
- CHECKOUT_REF: The ref of the branch/tag to check out before running the build. See the ref parameter of the checkout action. By default, it is ''.
It then outputs this variable:
- DOCKER_IMAGE_NAME: The final Docker image name with the registry path included.
This is an example to show how data should be formatted.
jobs:
build-and-push-image:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
RELEASE_ENVIRONMENT: testing
WORKING_DIRECTORY: backend
REGISTRY_URL: ghcr.io
DOCKERFILE_PATH: backend/docker/Dockerfile
DOCKER_IMAGE_NAME: django
DOCKER_IMAGE_TAG: latest
secrets: inherit
This workflow requires a docker-compose file that starts all the services required by the application to deploy.
docker-step-deploy.yml is the workflow that starts a Docker Compose stack on the targeted host.
It requires these inputs:
- DEPLOY_ON: the labels to select the correct github-runner that will execute this workflow.
- ENVIRONMENT: The target environment that GitHub will show on the GitHub Actions page.
- DEPLOY_URL: The target environment URL that GitHub will show on the GitHub Actions page.
- REGISTRY_URL: The registry url where to pull the Docker images.
- PROJECT_NAME: The name that will be associated with the Docker Compose stack.
- DOCKER_COMPOSE_PATH: The path to the docker-compose file to start.
- DOCKER_COMPOSE_EXTRA_ARGS: Extra arguments to pass to the docker-compose command. Optional.
- IMAGES: A stringified JSON object whose keys are the image environment variables used in the Docker Compose file and whose values are the names of the images that will be downloaded from the registry. You can retrieve the image name dynamically from the docker build and push step by adding that step's name to the needs array of the workflow and using ${{ needs.{STEP_NAME}.outputs.DOCKER_IMAGE_NAME }}, where STEP_NAME is the step's name.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container. If the runner has no group, set it to ''.
This is an example to show how data should be formatted.
jobs:
deploy:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
DEPLOY_ON: 'sevensedie'
ENVIRONMENT: testing
DEPLOY_URL: https://workflows-example.testing.zupit.software
REGISTRY_URL: ghcr.io
PROJECT_NAME: workflows-example
DOCKER_COMPOSE_PATH: docker/testing/docker-compose.yml
IMAGES: "{
'BACKEND_IMAGE_TAG': '${{ needs.backend-step.outputs.DOCKER_IMAGE_NAME }}',
'FRONTEND_IMAGE_TAG': '${{ needs.frontend-step.outputs.DOCKER_IMAGE_NAME }}'
}"
secrets: inherit
docker-step-delete-images.yml is the workflow that manages the retention policy on the GitHub Registry. It deletes all untagged images and allows keeping at most N tagged images for staging and another N tagged images for production. This workflow should be scheduled using cron to achieve the retention policy.
The images' tags must follow this naming convention:
- latest: for the testing environment. This won't be deleted.
- v[0-9]+.[0-9]+.[0-9]+-rc: for the staging environment.
- v[0-9]+.[0-9]+.[0-9]+: for the production environment.
It requires these inputs:
- IMAGE_NAME: The image name to apply the retention policy.
- KEEP_AT_LEAST: The number of tagged versions to maintain for both staging and production environments.
It also requires these secrets:
- RETENTION_POLICY_TOKEN: A PAT with read:packages and delete:packages permissions.
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
- DRY_RUN: Only for tagged images, it shows which ones will be deleted without deleting them. By default, it is false.
This is an example to show how data should be formatted.
jobs:
clean-ionic-images:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
IMAGE_NAME: "ionic"
secrets: inherit
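Since the workflow is meant to run periodically, the caller would typically add a schedule trigger. A sketch, where the cron expression and the KEEP_AT_LEAST value are illustrative:

```yaml
name: Registry retention policy
on:
  schedule:
    # Every night at 03:00 UTC; adjust to your needs
    - cron: "0 3 * * *"
jobs:
  clean-ionic-images:
    uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
    with:
      IMAGE_NAME: "ionic"
      KEEP_AT_LEAST: 5
    secrets: inherit
```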
jira-step-move-issue.yml is the workflow that moves Jira issues to the desired state.
NOTE: If the issue is in the 'Verified' state, the issue won't be moved to the desired state.
It requires these inputs:
- STATUS: the final status of the Jira issue.
- BRANCH_OR_COMMIT_TITLE: the branch or commit title from which to extract the Jira issue key.
It also requires these secrets:
- JIRA_BASE_URL: the JIRA url.
- JIRA_USER_EMAIL: the JIRA user account email.
- JIRA_API_TOKEN: the API token to log in with the Jira user account email.
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
jira-move-issue-to-developed:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
STATUS: Developed
BRANCH_OR_COMMIT_TITLE: ${{ github.event.workflow_run.head_commit.message }}
secrets: inherit
Here we show 3 use cases that you can copy-paste into your project to get the default configuration for transitioning Jira issues to these 3 states: In Progress, Merge Request and Developed, without worrying about how to retrieve the branch or commit title based on the workflow type.
Basically, these workflows are triggered by these events:
- Pull Request opened: Move the Jira issue to In Progress.
- Pull Request review: Move the Jira issue to Merge Request if the PR is not in draft.
- Pull Request ready for review: Move the Jira issue to Merge Request.
- On main workflow completion: Move the Jira issue to Developed.
Move to In Progress - jira-move-in-progress.yml
name: Jira Move to In Progress
on:
pull_request:
types: [opened]
jobs:
jira-move-issue-to-in-progress:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
STATUS: "In progress"
BRANCH_OR_COMMIT_TITLE: ${{ github.head_ref }}
secrets: inherit
Move to Merge Request - jira-move-merge-request.yml
name: Jira Move to Merge Request
on:
pull_request:
types: [review_requested, ready_for_review]
jobs:
jira-move-issue-to-merge-request:
if: ${{ !github.event.pull_request.draft }}
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
STATUS: "Merge request"
BRANCH_OR_COMMIT_TITLE: ${{ github.head_ref }}
secrets: inherit
Move to Developed - jira-move-developed.yml
name: Jira Move to Developed
on:
push:
branches: [ "main", "release/*" ]
jobs:
jira-move-issue-to-developed:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
STATUS: "Developed"
BRANCH_OR_COMMIT_TITLE: ${{ github.event.head_commit.message }}
secrets: inherit
jira-add-description-to-pr.yml is the workflow that adds the Jira issue description to the pull request description.
It requires these secrets:
- GITHUB_TOKEN: The GitHub token to allow the workflow to make changes to the pull request.
- JIRA_BASE_URL: the JIRA url.
- JIRA_USER_EMAIL: the JIRA user account email.
- JIRA_API_TOKEN: the API token to log in with the Jira user account email.
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
- DIND_IMAGE: Docker image to use. Default is docker:26.0.0-dind.
This is an example to show how data should be formatted.
jobs:
jira-description:
uses:
zupit-it/pipeline-templates/.github/workflows/[email protected]
secrets: inherit
jira-step-create-todo-issues.yml is the workflow that creates Jira issues based on the TODO comments in the code.
It requires these inputs:
- PROJECT_KEY: the Jira project key.
It requires these secrets:
- GITHUB_TOKEN: The GitHub token to allow the workflow to make changes to the pull request.
- JIRA_BASE_URL: the JIRA url.
- JIRA_USER_EMAIL: the JIRA user account email.
- JIRA_API_TOKEN: the API token to log in with the Jira user account email.
In addition, it is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
- DIND_IMAGE: Docker image to use. Default is docker:26.0.0-dind.
- ISSUE_TYPE: The type of the issue to create. Default is Task.
- ISSUE_DESCRIPTION: The description of the issue to create. Default is "Created automatically via GitHub Actions".
- LINK: A link to put in the issue description. Default is an empty string.
This is an example to show how data should be formatted.
jobs:
jira-create-todo-issue:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
PROJECT_KEY: DDSO
LINK: ${{ github.event.compare }}
secrets: inherit
conventional-commits-step-lint.yml is the workflow that lints the commit messages of a pull request.
It is possible to specify these optional inputs:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
lint-pr:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
CONFIG_FILE: .commitlintrc.json
secrets: inherit
conventional-commits-step-release.yml is the workflow that automatically creates a new release based on the commit messages.
It requires these secrets:
- RELEASE_TOKEN: a personal access token with grants to create a release and to push new commits. (use the zupit bot)
In addition, it is possible to specify this optional input:
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
release:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
secrets: inherit
This workflow requires a sonar-project.properties file inside the working directory with the configuration for Sonarqube.
sonar-step-analyze.yml is the workflow that analyzes the coverage and sends the results to Sonarqube.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
It also requires these secrets:
- SONAR_TOKEN: The Sonarqube token.
- SONAR_HOST_URL: The Sonarqube host to which to submit the analyzed data. By default, it is set in the organization secrets: https://sonarqube.zupit.software
In addition, it is possible to specify these optional inputs:
- SONAR_IMAGE: The Sonarqube docker image where the runner executes all the commands. By default, it is sonarsource/sonar-scanner-cli.
- DOWNLOAD_ARTIFACT: Whether to download an artifact to analyze or not. By default, it is true.
- ARTIFACT_FILENAME: The name of the artifact. By default, it is an empty string.
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
angular-sonar-analyze:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: frontend
ARTIFACT_FILENAME: lcov.info
secrets: inherit
If you want to analyze a Dart or Flutter project, you should use this workflow in the following way:
flutter-sonar-analyze:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: '.'
DOWNLOAD_ARTIFACT: false
PRE_SCAN_COMMANDS: 'git config --global --add safe.directory /opt/flutter && mv .env.github .env && flutter pub get && flutter test --machine --coverage > tests.output'
SONAR_IMAGE: 'ghcr.io/zupit-it/pipeline-templates/flutter-sonar-scanner-cli:latest'
secrets: inherit
This workflow DOES NOT use the sonar-project.properties as the Sonar Analyze workflow does. See the documentation.
Additional properties are provided by this workflow and the required ones are exposed as required inputs.
sonar-step-dotnet-analyze.yml is the workflow that analyzes a .NET solution, including the coverage, and sends the results to Sonarqube.
IMPORTANT: since this step relies on bash scripting, the CONTAINER_CI_LABELS you provide must reference a container with bash already installed.
It requires these inputs:
- WORKING_DIRECTORY: The directory where the runner can execute all the commands.
- SONAR_PROJECT_KEY: The SonarQube project key.
It also requires these secrets:
- SONAR_TOKEN: The Sonarqube token.
- SONAR_HOST_URL: The Sonarqube host to which to submit the analyzed data. By default, it is set in the organization secrets: https://sonarqube.zupit.software
In addition, it is possible to specify these optional inputs:
- SONAR_IMAGE: The SonarQube docker image where the runner executes all the commands. By default, it is sonarsource/sonar-scanner-cli.
- SONAR_EXCLUSIONS: A comma-separated list of glob patterns to match files and/or folders that should be excluded from Sonarqube analysis. You can't use a sonar-project.properties file since it's not supported by SonarScanner for .NET.
- COVERAGE_EXCLUSIONS: A comma-separated list of glob patterns to match files and/or folders that should be excluded when computing tests code coverage (docs). Since dotnet test expects absolute paths for the exclusion list, you should provide this parameter in the form **/my-path/*.cs (always starting with **/*).
- DOTNET_VERSION: The .NET version to build the solution. By default, it is 7.
- RUN_ON: the label to select the correct github-runner that will execute this workflow. Default is zupit-agents.
- RUNNERS_CONTAINER_GROUP: The runners group used to execute this workflow. Default is Container.
This is an example to show how data should be formatted.
jobs:
sonar-analyze:
uses: zupit-it/pipeline-templates/.github/workflows/[email protected]
with:
WORKING_DIRECTORY: "back-end"
SONAR_PROJECT_KEY: "my-project-key"
secrets: inherit
Use the concurrency feature to manage overlapping workflow runs, ensuring only the most recent commit in branches like main triggers a workflow. This optimizes resource usage and keeps deployments current.
on:
push:
branches: [ "main", "release/*" ]
concurrency:
group: ${{ github.ref }}
cancel-in-progress: true
jobs:
...
Note: When using concurrency with workflows that trigger others, ensure subsequent workflows account for potential cancellations of initiating workflows. This may involve status checks or logic adjustments for such cases.
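For example, a workflow triggered via workflow_run can guard its jobs so they only run when the initiating workflow actually succeeded and was not cancelled. A minimal sketch, where the initiating workflow name "Main" is illustrative:

```yaml
on:
  workflow_run:
    workflows: ["Main"]   # name of the initiating workflow (illustrative)
    types: [completed]
jobs:
  deploy:
    # Skip this job if the initiating run was cancelled or failed
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "Initiating workflow succeeded"
```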