First, you may want to ramp up on Kubernetes and Custom Resource Definitions (CRDs), as Tekton implements several Kubernetes resource controllers configured by Tekton CRDs. Then follow these steps to start developing and contributing project code:

- Setting up a development environment
- Building and deploying Tekton source code from a local clone
- Developing and testing Tekton pipelines
- Learning how to iterate on code changes
- Managing Tekton objects using `ko` in Kubernetes
- Standing up a K8s cluster with Tekton using the `kind` tool
- Accessing logs
- Adding new CRD types
Welcome to the project! 👏👏👏 You may find these resources helpful to "ramp up" on some of the technologies this project builds and runs on. This project extends Kubernetes (aka `k8s`) with Custom Resource Definitions (CRDs). To find out more, read:

- The Kubernetes docs on Custom Resources - These will orient you on what words like "Resource" and "Controller" concretely mean
- Understanding Kubernetes objects - This will further solidify k8s nomenclature
- API conventions - Types (kinds) - Another useful set of words describing words: "Objects" and "Lists" in k8s land
- Extend the Kubernetes API with CustomResourceDefinitions - A tutorial demonstrating how a Custom Resource Definition can be added to Kubernetes without anything actually "happening" beyond being able to list Objects of that kind (see the sketch below)
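For a concrete sense of what that tutorial demonstrates, here is a minimal sketch (the `CronTab` group and kind are illustrative placeholders, not Tekton types): after applying the CRD, the API server can store and list objects of the new kind, but nothing else "happens" because no controller watches them.

```shell
# Register a hypothetical CRD; requires only a running cluster and kubectl
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
EOF

# The API now knows the kind, but the list is empty and nothing reconciles it
kubectl get crontabs
```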
At this point, you may find it useful to return to these Tekton Pipeline docs:

- Tekton Pipeline README - Some of the terms here may make more sense!
- Install via the official installation docs, or continue through this getting-started guide for development
- Tekton Pipeline "Hello World" tutorial - Define `Tasks`, `Pipelines`, and `PipelineResources` (i.e., Tekton CRDs), and see what happens when they are run (a minimal example follows this list)
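If you already have Tekton Pipelines installed on a cluster, a minimal hello-world sketch looks like this (the names are illustrative; the tutorial covers the details):

```shell
# Create a trivial Task and run it once with a TaskRun
kubectl apply -f - <<EOF
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: echo
      image: alpine
      script: |
        echo "Hello World"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-run
spec:
  taskRef:
    name: hello
EOF

# Watch the run move through its lifecycle
kubectl get taskrun hello-run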
GitHub is used for project Source Code Management (SCM), using the SSH protocol for authentication.

- Create a GitHub account if you do not already have one.
- Set up GitHub access via SSH.
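You can sanity-check that SSH authentication to GitHub works with:

```shell
# Should print a greeting that includes your GitHub username
ssh -T git@github.com
```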
You must install these tools:

- `git`: For source control.
- `go`: The language Tekton Pipelines is built in. Note: Go version 1.15 or higher is recommended.
- `ko`: The Tekton project uses `ko` to simplify building its container images from `go` source, pushing those images to the configured image repository, and deploying those images into Kubernetes clusters. Note: `ko` version v0.5.1 or higher is required for `pipeline` to work correctly.
- `kubectl`: For interacting with your Kubernetes cluster.

  ⚠️ The user interacting with your K8s cluster must be a cluster admin to create role bindings.

  Google Cloud Platform (GCP) example:

  ```shell
  # Using gcloud to get your current user
  USER=$(gcloud config get-value core/account)
  # Make that user a cluster admin
  kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user="${USER}"
  ```
To build, deploy and run your Tekton objects with `ko`, you'll need to set these environment variables:

- `GOROOT`: Set `GOROOT` to the location of the Go installation you want `ko` to use for builds.

  Note: You may need to set `GOROOT` if you installed Go tools to a non-default location or have multiple Go versions installed. If it is not set, `ko` infers the location by effectively running `go env GOROOT`.

- `KO_DOCKER_REPO`: The docker repository to which developer images should be pushed. For example:

  - Using Google Container Registry (GCR):

    ```shell
    # format: `gcr.io/${GCP-PROJECT-NAME}`
    export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
    ```

  - Using Docker Desktop (Docker Hub):

    ```shell
    # format: 'docker.io/${DOCKER_HUB_USERNAME}'
    export KO_DOCKER_REPO='docker.io/my-dockerhub-username'
    ```

  - You can also host your own Docker Registry server and reference it:

    ```shell
    # format: <registry-host>:<port>/<repository>
    export KO_DOCKER_REPO='localhost:5000/mypipelineimages'
    ```

- Optionally, add `$HOME/go/bin` to your system `PATH` so that any tooling installed via `go get` will work properly. For example:

  ```shell
  export PATH="${PATH}:$HOME/go/bin"
  ```

Note: It is recommended to add these environment variables to your shell's configuration files (e.g., `~/.bash_profile` or `~/.bashrc`).
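For example, a snippet you might append to `~/.bashrc` (the registry value is a placeholder; substitute one of the `KO_DOCKER_REPO` options above):

```shell
# Tekton development environment
export KO_DOCKER_REPO='docker.io/my-dockerhub-username'  # placeholder; use your registry
export PATH="${PATH}:$HOME/go/bin"
```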
The Tekton project requires that you develop (commit) code changes on branches that belong to a fork of the `tektoncd/pipeline` repository in your GitHub account before submitting them as Pull Requests (PRs) to the actual project repository.

1. Create a fork of the `tektoncd/pipeline` repository in your GitHub account.

2. Create a clone of your fork on your local machine:

   ```shell
   git clone git@github.com:${YOUR_GITHUB_USERNAME}/pipeline.git
   ```

   Note: Tekton uses Go Modules (i.e., `go mod`) for package management, so you may clone the repository to a location of your choosing.

3. Configure the `git` remote repositories.

   Adding `tektoncd/pipeline` as the `upstream` and your fork as the `origin` remote repositories in your `.git/config` sets you up nicely for regularly syncing your fork and submitting pull requests (see the sync sketch after these steps).

   1. Change into the project directory:

      ```shell
      cd pipeline
      ```

   2. Configure Tekton as the `upstream` repository:

      ```shell
      git remote add upstream git@github.com:tektoncd/pipeline.git

      # Optional: Prevent accidental pushing of commits by changing the upstream URL to `no_push`
      git remote set-url --push upstream no_push
      ```

   3. Configure your fork as the `origin` repository:

      ```shell
      git remote add origin git@github.com:${YOUR_GITHUB_USERNAME}/pipeline.git
      ```
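With the remotes in place, a typical sync of your fork might look like this (a sketch assuming the default branch is named `main`):

```shell
# Verify the remote configuration
git remote -v

# Fetch upstream changes and rebase your local branch onto them
git fetch upstream
git checkout main
git rebase upstream/main

# Push the updated branch to your fork
git push origin main
```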
-
Depending on your chosen container registry that you set in the KO_DOCKER_REPO
environment variable, you may need to additionally configure access control to allow ko
to authenticate to it.
Docker Desktop provides seamless integration with both a local (default) image registry as well as Docker Hub remote registries. To use Docker Hub registries with ko
, all you need do is to configure Docker Desktop with your Docker ID and password in its dashboard.
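If you prefer the command line to the dashboard, logging in with the Docker CLI stores credentials in `~/.docker/config.json`, which `ko` also reads:

```shell
# Authenticate to Docker Hub; you will be prompted for your Docker ID and password
docker login docker.io
```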
If using GCR with `ko`, make sure to configure authentication for your `KO_DOCKER_REPO` if required. To be able to push images to `gcr.io/<project>`, you need to run this once:

```shell
gcloud auth configure-docker
```
The example GKE setup in this guide grants service accounts permissions to push and pull GCR images in the same project. If you choose to use a different setup with fewer default permissions, or your GKE cluster that will run Tekton is in a different project than your GCR registry, you will need to provide the Tekton pipelines controller and webhook service accounts with GCR credentials. See documentation on using GCR with GKE for more information. To do this, create a secret for your docker credentials and reference this secret from the controller and webhook service accounts, as follows.
1. Create a secret, for example:

   ```shell
   kubectl create secret generic ${SECRET_NAME} \
     --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
     --type=kubernetes.io/dockerconfigjson \
     --namespace=tekton-pipelines
   ```

   See Configuring authentication for Docker for more detailed information on creating secrets containing registry credentials.

2. Update the `tekton-pipelines-controller` and `tekton-pipelines-webhook` service accounts to reference the newly created secret by modifying the definitions of these service accounts as shown below:

   ```yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: tekton-pipelines-controller
     namespace: tekton-pipelines
     labels:
       app.kubernetes.io/component: controller
       app.kubernetes.io/instance: default
       app.kubernetes.io/part-of: tekton-pipelines
   imagePullSecrets:
     - name: ${SECRET_NAME}
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: tekton-pipelines-webhook
     namespace: tekton-pipelines
     labels:
       app.kubernetes.io/component: webhook
       app.kubernetes.io/instance: default
       app.kubernetes.io/part-of: tekton-pipelines
   imagePullSecrets:
     - name: ${SECRET_NAME}
   ```
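As an alternative to editing the YAML definitions, a quick sketch using `kubectl patch` (with `${SECRET_NAME}` as created above) achieves the same result on a live cluster:

```shell
# Attach the registry secret to both service accounts in place
for sa in tekton-pipelines-controller tekton-pipelines-webhook; do
  kubectl patch serviceaccount "${sa}" -n tekton-pipelines \
    -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"
done
```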
The recommended minimum development configuration is:
- Kubernetes version 1.20 or later
- 4 (virtual) CPU nodes
- 8 GB of (actual or virtualized) platform memory
- Node autoscaling, up to 3 nodes
- Follow the instructions for running locally with Minikube
- Follow the instructions for running locally with Docker Desktop
1. Set up a GCP project and enable the GKE API. You may find it useful to save the ID of the project in an environment variable (e.g., `PROJECT_ID`).

2. Create a GKE cluster (with `--cluster-version=latest`, but you can use any version 1.18 or later):

   ```shell
   export PROJECT_ID=my-gcp-project
   export CLUSTER_NAME=mycoolcluster

   gcloud container clusters create $CLUSTER_NAME \
     --enable-autoscaling \
     --min-nodes=1 \
     --max-nodes=3 \
     --scopes=cloud-platform \
     --no-issue-client-certificate \
     --project=$PROJECT_ID \
     --region=us-central1 \
     --machine-type=n1-standard-4 \
     --image-type=cos \
     --num-nodes=1 \
     --cluster-version=1.20
   ```

   Note: The recommended GCE machine type is `n1-standard-4`.

   Note: The `--scopes` argument on the `gcloud container clusters create` command controls what GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add `storage-full` to the `--scopes` arg. See Authenticating to GCP for more details.

3. Grant cluster-admin permissions to the current user:

   ```shell
   kubectl create clusterrolebinding cluster-admin-binding \
     --clusterrole=cluster-admin \
     --user=$(gcloud config get-value core/account)
   ```
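Note: the `kubectl` command above assumes your current context points at the new cluster. If it does not, you can fetch credentials first (reusing the variables from the cluster-creation step):

```shell
# Configure kubectl credentials/context for the new GKE cluster
gcloud container clusters get-credentials $CLUSTER_NAME \
  --region=us-central1 \
  --project=$PROJECT_ID
```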
While iterating on code changes to the project, you may need to:

- Manage Tekton objects
- Verify the installation and make sure there are no errors by accessing the logs
- Use the various development scripts in the `hack` directory as needed. For example:
  - Update your (external) dependencies with: `./hack/update-deps.sh`
  - Update your type definitions with: `./hack/update-codegen.sh`
  - Update your OpenAPI specs with: `./hack/update-openapigen.sh`
- Update or add new CRD types as needed
- Update, add and run tests

To make changes to these CRDs, you will probably interact with:

- The CRD type definitions in ./pkg/apis/pipeline/v1beta1
- The reconcilers in ./pkg/reconciler
- The clients in ./pkg/client (these are generated by `./hack/update-codegen.sh`)
The `ko` command is the preferred method to manage (i.e., create, modify or delete) Tekton objects in Kubernetes from your local fork of the project. Some common operations include:

You can stand up a version of Tekton using your local clone's code in the currently configured K8s context (i.e., `kubectl config current-context`):

```shell
ko apply -f config/
```

You can verify that your development installation with `ko` was successful by checking whether the Tekton pipeline pods are running in Kubernetes:

```shell
kubectl get pods -n tekton-pipelines
```

You can clean up everything with:

```shell
ko delete -f config/
```

As you make changes to the code, you can redeploy your controller with:

```shell
ko apply -f config/controller.yaml
```
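To confirm that the redeployed controller has rolled out cleanly, one option is:

```shell
# Wait for the controller Deployment to finish its rollout
kubectl rollout status deployment/tekton-pipelines-controller -n tekton-pipelines
```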
When managing different development branches of code (with changed Tekton objects and controllers) in the same K8s instance, it may be helpful to install them into a custom (non-default) namespace. Mapping a code branch to a corresponding namespace can make it easier to identify and manage the objects as a group, as well as isolate log output.

To install into a different namespace, you can use this script:

```bash
#!/usr/bin/env bash
set -e

# Set your target namespace here
TARGET_NAMESPACE=new-target-namespace

ko resolve -f config | sed -e '/kind: Namespace/!b;n;n;s/:.*/: '"${TARGET_NAMESPACE}"'/' | \
    sed "s/namespace: tekton-pipelines$/namespace: ${TARGET_NAMESPACE}/" | \
    kubectl apply -f-

kubectl set env deployments --all SYSTEM_NAMESPACE=${TARGET_NAMESPACE} -n ${TARGET_NAMESPACE}
```

This script will cause `ko` to:

- Change (resolve) all `namespace` values in the K8s configuration files within the `config/` subdirectory to a name of your choosing
- Build and push images with the new namespace to your container registry
- Update all Tekton objects in K8s using these images

It will also update the default system namespace used for K8s deployments to the new value for all subsequent `kubectl` commands.
An alternative to standing up your own K8s cluster and installing Tekton with `ko` is to use the `kind` tool, which was designed to create and run local Kubernetes clusters in Docker to assist in local development and testing.

The Tekton "plumbing" project provides a convenience script, named `tekton_in_kind.sh`, that leverages `kind` to create a cluster and then deploy the Tekton Pipeline, Tekton Triggers and Tekton Dashboard components into it. See Tekton Plumbing's DEVELOPMENT.md for more details on this and other helpful scripts and tools.

- Clone the Tekton Plumbing repository, which has the `tekton_in_kind.sh` and other helpful scripts.
- `kind`: Install using its "quick start" documentation.
- Docker: `kind` also requires Docker to be installed and running locally. Use the Get Docker instructions.
Change into the Tekton plumbing repository you cloned and invoke the script:

```shell
cd plumbing
./hack/tekton_in_kind.sh
```

The script, after using `kind` to create the K8s cluster, uses `kubectl` to install the latest released versions of the Tekton components. After successful completion, the script will have:

- Created a K8s cluster named `tekton`
- Created a cluster context for `kubectl` named `kind-tekton` and set it as the `current-context`
- Deployed the latest Tekton Pipeline, Triggers and Dashboard components
- Made the Tekton Dashboard available at `http://localhost:9097`

After the Tekton components are installed using this script, you can then use the `ko` tool to build and apply your development changes to the cluster for testing.
You can also specify a different cluster name and the released versions of components you want installed:

```shell
# Script usage:
tekton_in_kind.sh [-c cluster-name -p pipeline-version -t triggers-version -d dashboard-version]
```

If you wish to delete the cluster that the script created, use the following command:

```shell
# Delete the cluster using the default name
kind delete cluster --name tekton
```

Note: Be sure to set a `kubectl` context afterwards, as it is unset as a side effect of the delete (i.e., `kubectl config use-context <context-name>`).
To look at the controller logs, run:

```shell
kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-pipelines-controller -o name)
```

To look at the webhook logs, run:

```shell
kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-pipelines-webhook -o name)
```

To look at the logs for individual `TaskRuns` or `PipelineRuns`, see the docs on accessing logs.
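While iterating, it can be handy to stream the controller logs instead of fetching a snapshot, for example:

```shell
# Follow the controller logs (assumes the standard deployment name)
kubectl -n tekton-pipelines logs -f deployment/tekton-pipelines-controller
```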
If you need to add a new CRD type, you will need to:

- Add a yaml definition in config/
- Add the type to the cluster roles defined in config/
- Add go structs for the types in pkg/apis/pipeline/v1alpha1 (e.g., condition_types.go). These should implement the Defaultable and Validatable interfaces, as they are needed for the webhook in the next step.
- Register it with the webhook
- Add the new type to the list of known types