diff --git a/content/en/contact/_index.md b/content/en/contact/_index.md new file mode 100755 index 00000000..a741faa5 --- /dev/null +++ b/content/en/contact/_index.md @@ -0,0 +1,25 @@ +--- +title: "Contact Us" +linkTitle: "Contact" +draft: false +weight: 30 +menu: + main: + weight: 30 +--- + + + +
 
+ +
+
+The Shipwright contributors would love to hear from you! You can reach us in the following forums:
+
+- Users can ask for help, discuss feature requests, or report potential bugs at [shipwright-users@lists.shipwright.io](https://lists.shipwright.io/archives/list/shipwright-users@lists.shipwright.io/).
+  Click [here](https://lists.shipwright.io/admin/lists/shipwright-users.lists.shipwright.io/) to join.
+- Contributors can discuss active development topics at [shipwright-dev@lists.shipwright.io](https://lists.shipwright.io/archives/list/shipwright-dev@lists.shipwright.io/).
+  Click [here](https://lists.shipwright.io/admin/lists/shipwright-dev.lists.shipwright.io/) to join.
+- We can also be reached on Kubernetes Slack: [#shipwright](https://kubernetes.slack.com/messages/shipwright)
+
diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md
index 8197e199..18fc2300 100755
--- a/content/en/docs/_index.md
+++ b/content/en/docs/_index.md
@@ -3,20 +3,136 @@ title: "Documentation"
 linkTitle: "Documentation"
 draft: false
 weight: 20
+no_list: true
 menu:
   main:
     weight: 20
 ---
 
-Documentation can be found on GitHub at
-[shipwright-io/build](https://github.com/shipwright-io/build/blob/master/README.md).
 
-## Contact Us
 
-The Shipwright contributors would love to hear from you! You can reach us in the following forums:
+Shipwright is an extensible framework for building container images on Kubernetes.
 
-- Users can discuss help, feature requests, or potential bugs at [shipwright-users@lists.shipwright.io](https://lists.shipwright.io/archives/list/shipwright-users@lists.shipwright.io/).
-  Click [here](https://lists.shipwright.io/admin/lists/shipwright-users.lists.shipwright.io/) to join.
-- Contributors can discuss active development topics at [shipwright-dev@lists.shipwright.io](https://lists.shipwright.io/archives/list/shipwright-dev@lists.shipwright.io/).
-  Click [here](https://lists.shipwright.io/admin/lists/shipwright-dev.lists.shipwright.io/) to join.
-- We can also be reached on Kubernetes Slack: [#shipwright](https://kubernetes.slack.com/messages/shipwright)
+Shipwright supports popular tools such as Kaniko, Cloud Native Buildpacks, Buildah, and more!
+
+Shipwright is based around four elements for each build:
+
+1. Source code - the "what" you are trying to build
+1. Output image - "where" you are trying to deliver your application
+1. Build strategy - "how" your application is assembled
+1. Invocation - "when" you want to build your application
+
+## Comparison with local image builds
+
+Developers who use Docker are familiar with this process:
+
+1. Clone source from a git-based repository ("what")
+2. Build the container image ("when" and "how")
+
+   ```bash
+   docker build -t registry.mycompany.com/myorg/myapp:latest .
+   ```
+
+3. Push the container image to your registry ("where")
+
+   ```bash
+   docker push registry.mycompany.com/myorg/myapp:latest
+   ```
+
+## Shipwright Build APIs
+
+Shipwright's Build API consists of four core
+[CustomResourceDefinitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)
+(CRDs):
+
+1. [`Build`](/docs/api/build/) - defines what to build and where the application should be delivered.
+1. [`BuildStrategy` and `ClusterBuildStrategy`](/docs/api/buildstrategies/) - define how to build an application with a given image
+   building tool.
+1. [`BuildRun`](/docs/api/buildrun/) - invokes the build.
+   You create a `BuildRun` to tell Shipwright to start building your application.
+
+### Build
+
+The `Build` object provides a playbook on how to assemble your specific application. The simplest
+build consists of a git source, a build strategy, and an output image:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: kaniko-golang-build
+  annotations:
+    build.shipwright.io/build-run-deletion: "true"
+spec:
+  source:
+    url: https://github.com/sbose78/taxi
+  strategy:
+    name: kaniko
+    kind: ClusterBuildStrategy
+  output:
+    image: registry.mycompany.com/my-org/taxi-app:latest
+```
+
+Builds can be extended to push to private registries, use a different Dockerfile, and more.
+
+### BuildStrategy and ClusterBuildStrategy
+
+`BuildStrategy` and `ClusterBuildStrategy` are related APIs that define how a given tool should be
+used to assemble an application.
+They are distinguished by their scope: `BuildStrategy` objects are namespace scoped, whereas
+`ClusterBuildStrategy` objects are cluster scoped.
+
+The spec of a `BuildStrategy` or `ClusterBuildStrategy` consists of a list of `buildSteps`, which look and feel like Kubernetes container
+specifications. Below is an example spec for Kaniko, which can build an image from a
+Dockerfile within a container:
+
+```yaml
+# this is a fragment of a manifest
+spec:
+  buildSteps:
+    - name: build-and-push
+      image: gcr.io/kaniko-project/executor:v1.3.0
+      workingDir: /workspace/source
+      securityContext:
+        runAsUser: 0
+        capabilities:
+          add:
+            - CHOWN
+            - DAC_OVERRIDE
+            - FOWNER
+            - SETGID
+            - SETUID
+            - SETFCAP
+      env:
+        - name: DOCKER_CONFIG
+          value: /tekton/home/.docker
+      command:
+        - /kaniko/executor
+      args:
+        - --skip-tls-verify=true
+        - --dockerfile=$(build.dockerfile)
+        - --context=/workspace/source/$(build.source.contextDir)
+        - --destination=$(build.output.image)
+        - --oci-layout-path=/workspace/output/image
+        - --snapshotMode=redo
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 250m
+          memory: 65Mi
+```
+
+### BuildRun
+
+Each `BuildRun` object invokes a build on your cluster. You can think of these as Kubernetes
+`Jobs` or Tekton `TaskRuns` - they represent a workload on your cluster, ultimately resulting in a
+running `Pod`. See [`BuildRun`](/docs/api/buildrun/) for more details.
+
+## Further reading
+
+- [Configuration](/docs/configuration/)
+- Build controller observability
+  - [Metrics](/docs/metrics/)
+  - [Profiling](/docs/profiling/)
diff --git a/content/en/docs/api/build.md b/content/en/docs/api/build.md
new file mode 100644
index 00000000..c84f52ef
--- /dev/null
+++ b/content/en/docs/api/build.md
@@ -0,0 +1,315 @@
+---
+title: Build
+weight: 10
+---
+
+- [Overview](#overview)
+- [Build Controller](#build-controller)
+- [Build Validations](#build-validations)
+- [Configuring a Build](#configuring-a-build)
+  - [Defining the Source](#defining-the-source)
+  - [Defining the Strategy](#defining-the-strategy)
+  - [Defining the Builder or Dockerfile](#defining-the-builder-or-dockerfile)
+  - [Defining the Output](#defining-the-output)
+  - [Runtime-Image](#runtime-image)
+- [BuildRun deletion](#buildrun-deletion)
+
+## Overview
+
+A `Build` resource allows the user to define:
+
+- `source`
+- `strategy`
+- `builder`
+- `dockerfile`
+- `output`
+
+A `Build` is available within a namespace.
+
+## Build Controller
+
+The controller watches for:
+
+- Updates on the `Build` resource (_CRD instance_)
+
+When the controller reconciles, it:
+
+- Validates if the referenced `StrategyRef` exists.
+- Validates if the container `registry` output secret exists.
+- Validates if the referenced `spec.source.url` endpoint exists.
+
+## Build Validations
+
+In order to prevent users from triggering `BuildRuns` (_execution of a Build_) that will eventually fail because of wrong or missing dependencies or configuration settings, the Build controller validates them in advance. If all validations are successful, users can expect a `Succeeded` `Status.Reason`. However, if any validation fails, users can rely on the `Status.Reason` and `Status.Message` fields to understand the root cause.
+
+| Status.Reason | Description |
+| --- | --- |
+| BuildStrategyNotFound | The referenced namespace-scope strategy doesn't exist. |
+| ClusterBuildStrategyNotFound | The referenced cluster-scope strategy doesn't exist. |
+| SetOwnerReferenceFailed | Setting owner references between a Build and a BuildRun failed. This is triggered when making use of the `build.shipwright.io/build-run-deletion` annotation in a Build. |
+| SpecSourceSecretNotFound | The secret used to authenticate to git doesn't exist. |
+| SpecOutputSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist. |
+| SpecRuntimeSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist. |
+| MultipleSecretRefNotFound | More than one secret is missing. At the moment, only three paths on a Build can specify a secret. |
+| RuntimePathsCanNotBeEmpty | The Runtime feature is used, but the runtime path was not defined. This is mandatory. |
+| RemoteRepositoryUnreachable | The defined `spec.source.url` was not found. This validation only takes place for http/https protocols. |
+
+## Configuring a Build
+
+The `Build` definition supports the following fields:
+
+- Required:
+  - [`apiVersion`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the API version, for example `shipwright.io/v1alpha1`.
+  - [`kind`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the Kind type, for example `Build`.
+  - [`metadata`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Metadata that identifies the CRD instance, for example the name of the `Build`.
+  - `spec.source.url` - Refers to the Git repository containing the source code.
+  - `spec.strategy` - Refers to the `BuildStrategy` to be used, see the [examples](../samples/buildstrategy)
+  - `spec.builder.image` - Refers to the image containing the build tools to build the source code. (_Use this path for Dockerless strategies; it is only required for the `source-to-image` build strategy_)
+  - `spec.output` - Refers to the location where the generated image will be pushed.
+  - `spec.output.credentials.name` - References an existing secret to get access to the container registry.
+
+- Optional:
+  - `spec.parameters` - Refers to a list of `name-value` pairs that can be used to loosely type parameters in the `BuildStrategy`.
+  - `spec.dockerfile` - Path to a Dockerfile to be used for building an image. (_Use this path for strategies that require a Dockerfile_)
+  - `spec.runtime` - Runtime-Image settings, to be used for a multi-stage build.
+  - `spec.timeout` - Defines a custom timeout. The value needs to be parsable by [ParseDuration](https://golang.org/pkg/time/#ParseDuration), for example `5m`. The default is ten minutes. The value can be overwritten in the `BuildRun`.
+  - `metadata.annotations[build.shipwright.io/build-run-deletion]` - Defines whether all related BuildRuns should be deleted when the Build is deleted. The default is `false`.
+
+### Defining the Source
+
+A `Build` resource can specify a Git source, together with other parameters like:
+
+- `source.credentials.name` - For private repositories, the name is a reference to an existing secret on the same namespace containing the `ssh` data.
+- `source.revision` - A specific revision to select from the source repository; this can be a commit or branch name. If not defined, it falls back to the Git repository's default branch.
+- `source.contextDir` - For repositories where the source code is not located at the root folder, you can specify this path here. Currently, this is only supported by the `buildah`, `kaniko` and `buildpacks` build strategies.
+
+By default, the Build controller will validate that the Git repository exists. If the validation is not desired, users can define the `build.shipwright.io/verify.repository` annotation with `false`.
+
+Example of a `Build` with the **build.shipwright.io/verify.repository** annotation, in order to disable the `spec.source.url` validation:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+  annotations:
+    build.shipwright.io/verify.repository: "false"
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go
+    contextDir: docker-build
+```
+
+_Note_: The Build controller only validates two scenarios. The first is when the endpoint uses an `http/https` protocol; the second is when an `ssh` protocol (_e.g. `git@`_) is defined and no referenced secret was provided (_e.g. `source.credentials.name`_).
+
+Example of a `Build` with a source with **credentials** defined by the user:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildpack-nodejs-build
+spec:
+  source:
+    url: https://github.com/sclorg/nodejs-ex
+    credentials:
+      name: source-repository-credentials
+```
+
+Example of a `Build` with a source that specifies a specific subfolder on the repository:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-custom-context-dockerfile
+spec:
+  source:
+    url: https://github.com/SaschaSchwarze0/npm-simple
+    contextDir: renamed
+```
+
+Example of a `Build` that selects a specific branch of the Git repository via `revision` (the branch name here is illustrative):
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go
+    revision: main
+    contextDir: docker-build
+```
+
+### Defining the Strategy
+
+A `Build` resource can specify the `BuildStrategy` to use. The supported strategies are:
+
+- [Source-to-Image](buildstrategies.md#source-to-image)
+- [Buildpacks-v3](buildstrategies.md#buildpacks-v3)
+- [Buildah](buildstrategies.md#buildah)
+- [Kaniko](buildstrategies.md#kaniko)
+- [ko](buildstrategies.md#ko)
+
+Defining the strategy is straightforward: you need to define the `name` and the `kind`. For example:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildpack-nodejs-build
+spec:
+  strategy:
+    name: buildpacks-v3
+    kind: ClusterBuildStrategy
+```
+
+### Defining the Builder or Dockerfile
+
+A `Build` resource can specify an image containing the tools to build the final image. Users can do this via the `spec.builder` or the `spec.dockerfile` paths. For example, here the user chooses the `Dockerfile` file under the source repository:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go
+    contextDir: docker-build
+  strategy:
+    name: buildah
+    kind: ClusterBuildStrategy
+  dockerfile: Dockerfile
+```
+
+Another example is when the user chooses to use a `builder` image (_this is required for the `source-to-image` build strategy, because different languages require different builders_):
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: s2i-nodejs-build
+spec:
+  source:
+    url: https://github.com/sclorg/nodejs-ex
+  strategy:
+    name: source-to-image
+    kind: ClusterBuildStrategy
+  builder:
+    image: docker.io/centos/nodejs-10-centos7
+```
+
+### Defining the Output
+
+A `Build` resource can specify the output where the image should be pushed.
+For external private registries, it is recommended to specify a secret with the related data to access it.
+
+For example, here the user specifies a public registry:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: s2i-nodejs-build
+spec:
+  source:
+    url: https://github.com/sclorg/nodejs-ex
+  strategy:
+    name: source-to-image
+    kind: ClusterBuildStrategy
+  builder:
+    image: docker.io/centos/nodejs-10-centos7
+  output:
+    image: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex
+```
+
+Another example is when the user specifies a private registry:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: s2i-nodejs-build
+spec:
+  source:
+    url: https://github.com/sclorg/nodejs-ex
+  strategy:
+    name: source-to-image
+    kind: ClusterBuildStrategy
+  builder:
+    image: docker.io/centos/nodejs-10-centos7
+  output:
+    image: us.icr.io/source-to-image-build/nodejs-ex
+    credentials:
+      name: icr-knbuild
+```
+
+### Runtime-Image
+
+A runtime-image is a new image composed from the build-strategy outcome. It lets you perform a multi-stage image build, copying parts of the original image into a new one. This feature allows replacing the base image of any container image, creating leaner images, among other use cases.
+
+The following example illustrates how to use the `runtime` section:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: nodejs-ex-runtime
+spec:
+  strategy:
+    name: buildpacks-v3
+    kind: ClusterBuildStrategy
+  source:
+    url: https://github.com/sclorg/nodejs-ex.git
+  output:
+    image: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex
+  runtime:
+    base:
+      image: docker.io/node:latest
+    workDir: /home/node/app
+    run:
+      - echo "Before copying data..."
+    user:
+      name: node
+      group: "1000"
+    paths:
+      - $(workspace):/home/node/app
+    entrypoint:
+      - npm
+      - start
+```
+
+This build will produce a Node.js based application where a single directory is imported from the image built by the buildpacks strategy. The data is copied using the `.spec.runtime.user` directive, and the image also runs as that user.
+
+Please consider the description of the attributes under `.spec.runtime`:
+
+- `.base`: specifies the runtime base-image to be used, using Image as the type
+- `.workDir`: path to the WORKDIR in the runtime-image
+- `.env`: additional environment variables for the runtime-image, as key-value pairs
+- `.labels`: additional labels for the runtime-image, as key-value pairs
+- `.run`: arbitrary commands to be executed as `RUN` blocks, before `COPY`
+- `.user.name`: username employed on the `USER` directive, and also to change ownership of files copied to the runtime-image
+- `.user.group`: group name (or GID), employed to change ownership and on the `USER` directive
+- `.paths`: list of files or directory paths to be copied to the runtime-image; those can be defined as `<source>:<destination>` pairs, split by a colon (`:`). You can use the `$(workspace)` placeholder to access the directory where your source repository is cloned; if `spec.source.contextDir` is defined, then `$(workspace)` points to the context directory location
+- `.entrypoint`: the entrypoint command, specified as a list
+
+> ⚠️ **Image Tag Overwrite**
+>
+> Specifying the runtime section will cause a `BuildRun` to push `spec.output.image` twice. First, the image produced by the chosen `BuildStrategy` is pushed, and next it gets reused to construct the runtime-image, which is pushed again, overwriting the `BuildStrategy` outcome.
+> Be aware of this, especially in situations where the image push action triggers automation steps.
+> Since the same tag will be reused, you might need to take this into consideration when using runtime-images.
+
+Under the covers, the runtime image is built by an additional step in the generated Task spec of the TaskRun. It uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to run a container build using the `gcr.io/kaniko-project/executor:v1.5.1` image. You can overwrite this image by adding the environment variable `KANIKO_CONTAINER_IMAGE` to the [build controller deployment](../deploy/controller.yaml).
+
+## BuildRun deletion
+
+A `Build` can automatically delete its related `BuildRuns`. To enable this feature, set the `build.shipwright.io/build-run-deletion` annotation to `true` in the `Build` instance. By default the annotation is not present in a `Build` definition. See an example of how to define this annotation:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: kaniko-golang-build
+  annotations:
+    build.shipwright.io/build-run-deletion: "true"
+```
diff --git a/content/en/docs/api/buildrun.md b/content/en/docs/api/buildrun.md
new file mode 100644
index 00000000..1e0b69ba
--- /dev/null
+++ b/content/en/docs/api/buildrun.md
@@ -0,0 +1,161 @@
+---
+title: "BuildRun"
+draft: false
+weight: 40
+---
+
+- [Overview](#overview)
+- [BuildRun Controller](#buildrun-controller)
+- [Configuring a BuildRun](#configuring-a-buildrun)
+  - [Defining the BuildRef](#defining-the-buildref)
+  - [Defining the ServiceAccount](#defining-the-serviceaccount)
+- [BuildRun Status](#buildrun-status)
+  - [Understanding the state of a BuildRun](#understanding-the-state-of-a-buildrun)
+  - [Understanding failed BuildRuns](#understanding-failed-buildruns)
+- [Relationship with Tekton Tasks](#relationship-with-tekton-tasks)
+
+## Overview
+
+The `BuildRun` resource (`buildruns.shipwright.io/v1alpha1`) represents the execution in Kubernetes of the build process that a `Build` resource defines.
+
+A `BuildRun` resource allows the user to define:
+
+- The `BuildRun` _name_, through which the user can monitor the status of the image construction.
+- A referenced `Build` instance to use during the build construction.
+- A [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) for hosting all related secrets in order to build the image.
+
+A `BuildRun` invocation can be concise:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: BuildRun
+metadata:
+  name: kaniko-golang-buildrun
+spec:
+  buildRef:
+    name: kaniko-golang-build
+```
+
+As the build runs, the status of the `BuildRun` object is updated with the current progress. Work
+is in progress to standardize this status reporting with status conditions.
+
+## BuildRun Controller
+
+The controller watches for:
+
+- Updates on a `Build` resource (_CRD instance_)
+- Updates on a `TaskRun` resource (_CRD instance_)
+
+When the controller reconciles, it:
+
+- Looks for any existing owned `TaskRuns` and updates the parent `BuildRun` status.
+- Retrieves the specified `SA` and sets it up with the specified output secret of the `Build` resource.
+- Generates a new Tekton `TaskRun` if it does not exist, and sets a reference to this resource (_as a child of the controller_).
+- On any subsequent updates on the `TaskRun`, the parent `BuildRun` resource instance is updated.
+
+## Configuring a BuildRun
+
+The `BuildRun` definition supports the following fields:
+
+- Required:
+  - [`apiVersion`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the API version, for example `shipwright.io/v1alpha1`.
+  - [`kind`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the Kind type, for example `BuildRun`.
+  - [`metadata`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Metadata that identifies the CRD instance, for example the name of the `BuildRun`.
+  - `spec.buildRef` - Specifies an existing `Build` resource instance to use.
+
+- Optional:
+  - `spec.serviceAccount` - Refers to the SA to use when building the image. (_defaults to the `default` SA_)
+  - `spec.timeout` - Defines a custom timeout. The value needs to be parsable by [ParseDuration](https://golang.org/pkg/time/#ParseDuration), for example `5m`. The value overwrites the value that is defined in the `Build`.
+  - `spec.output.image` - Refers to a custom location where the generated image will be pushed. The value overwrites the `output.image` value defined in the `Build`. (_Note: other properties of the output, for example the credentials, cannot be specified in the `BuildRun` spec._)
+
+### Defining the BuildRef
+
+A `BuildRun` resource can reference a `Build` resource, which indicates what image to build. For example:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: BuildRun
+metadata:
+  name: buildpack-nodejs-buildrun-namespaced
+spec:
+  buildRef:
+    name: buildpack-nodejs-build-namespaced
+```
+
+### Defining the ServiceAccount
+
+A `BuildRun` resource can define a ServiceAccount to use. Usually this SA will host all related secrets referenced on the `Build` resource, for example:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: BuildRun
+metadata:
+  name: buildpack-nodejs-buildrun-namespaced
+spec:
+  buildRef:
+    name: buildpack-nodejs-build-namespaced
+  serviceAccount:
+    name: pipeline
+```
+
+You can also set the `spec.serviceAccount.generate` path to `true`. This will generate the service account during runtime for you.
+
+_**Note**_: When the SA is not defined, the `BuildRun` will default to the `default` SA in the namespace.
+
+## BuildRun Status
+
+The `BuildRun` resource is updated as soon as the current image building status changes:
+
+```sh
+$ kubectl get buildrun buildpacks-v3-buildrun
+NAME                     SUCCEEDED   REASON    MESSAGE   STARTTIME   COMPLETIONTIME
+buildpacks-v3-buildrun   Unknown     Pending   Pending   1s
+```
+
+And finally:
+
+```sh
+$ kubectl get buildrun buildpacks-v3-buildrun
+NAME                     SUCCEEDED   REASON      MESSAGE                              STARTTIME   COMPLETIONTIME
+buildpacks-v3-buildrun   True        Succeeded   All Steps have completed executing   4m28s       16s
+```
+
+The above allows users to get an overview of the state of the build.
+
+### Understanding the state of a BuildRun
+
+A `BuildRun` resource stores the relevant information regarding the state of the object under `Status.Conditions`.
+
+[Conditions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) allow users to easily understand the resource state, without needing to understand resource-specific details.
+
+For the `BuildRun` we use a Condition of the type `Succeeded`, which is a well-known type for resources that run to completion.
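+
+For example, querying the `Succeeded` condition of a `BuildRun` directly can be done with a JSONPath expression along these lines (a sketch; the `BuildRun` name is the one from the example above):
+
+```sh
+# prints the reason of the Succeeded condition, e.g. "Pending" or "Succeeded"
+kubectl get buildrun buildpacks-v3-buildrun \
+  -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].reason}'
+```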
+
+The `Status.Conditions` field hosts different sub-fields, like `Status`, `Reason` and `Message`. Users can expect these fields to be populated with relevant information.
+
+The following table illustrates the different states a BuildRun can have under its `Status.Conditions`:
+
+| Status | Reason | CompletionTime is set | Description |
+| --- | --- | --- | --- |
+| Unknown | Pending | No | The BuildRun is waiting on a Pod in status Pending. |
+| Unknown | Running | No | The BuildRun has been validated and started to perform its work. |
+| True | Succeeded | Yes | The BuildRun Pod is done. |
+| False | Failed | Yes | The BuildRun failed in one of the steps. |
+| False | BuildRunTimeout | Yes | The BuildRun timed out. |
+
+_Note_: We heavily rely on the Tekton TaskRun [Conditions](https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md#monitoring-execution-status) for populating the BuildRun ones, with some exceptions.
+
+### Understanding failed BuildRuns
+
+To make it easier for users to understand why a BuildRun failed, users can infer from the `Status.FailedAt` field the pod and container where the failure took place.
+
+In addition, the `Status.Conditions` will host under the `Message` field a compacted message containing the `kubectl` command to trigger, in order to retrieve the logs.
+
+### Build Snapshot
+
+For every BuildRun controller reconciliation, the `buildSpec` in the status of the `BuildRun` is updated if an existing owned `TaskRun` is present. During this update, a `Build` resource snapshot is generated and embedded into the `status.buildSpec` path of the `BuildRun`. A `buildSpec` is just a copy of the original `Build` spec, from which the `BuildRun` executed a particular image build. The snapshot approach allows developers to see the original `Build` configuration.
+
+## Relationship with Tekton Tasks
+
+The `BuildRun` resource abstracts the image construction by delegating this work to the Tekton Pipeline [TaskRun](https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md). Compared to a Tekton Pipeline [Task](https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md), a `TaskRun` runs all `steps` until completion of the `Task` or until a failure occurs in the `Task`.
+
+During the reconcile, the `BuildRun` controller will generate a new `TaskRun`. During the execution, the controller will embed in the `TaskRun` `Task` definition the required `steps` to execute. These `steps` are defined in the strategy referenced by the `Build` resource, either a `ClusterBuildStrategy` or a `BuildStrategy`.
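+
+To locate the `TaskRun` generated for a particular `BuildRun`, a label selector can be used. The following is a sketch that assumes the controller labels the generated `TaskRun` with `buildrun.shipwright.io/name`; the namespace and `BuildRun` name are placeholders:
+
+```sh
+kubectl -n <YOUR_NAMESPACE> get taskruns -l buildrun.shipwright.io/name=<BUILDRUN_NAME>
+```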
diff --git a/content/en/docs/api/buildstrategies.md b/content/en/docs/api/buildstrategies.md
new file mode 100644
index 00000000..1533d868
--- /dev/null
+++ b/content/en/docs/api/buildstrategies.md
@@ -0,0 +1,519 @@
+---
+title: BuildStrategy and ClusterBuildStrategy
+weight: 20
+---
+
+- [Overview](#overview)
+- [Available ClusterBuildStrategies](#available-clusterbuildstrategies)
+- [Available BuildStrategies](#available-buildstrategies)
+- [Buildah](#buildah)
+  - [Installing Buildah Strategy](#installing-buildah-strategy)
+- [Buildpacks v3](#buildpacks-v3)
+  - [Installing Buildpacks v3 Strategy](#installing-buildpacks-v3-strategy)
+  - [Try it](#try-it)
+- [Kaniko](#kaniko)
+  - [Installing Kaniko Strategy](#installing-kaniko-strategy)
+- [ko](#ko)
+  - [Installing ko Strategy](#installing-ko-strategy)
+- [Source to Image](#source-to-image)
+  - [Installing Source to Image Strategy](#installing-source-to-image-strategy)
+  - [Build Steps](#build-steps)
+- [Steps Resource Definition](#steps-resource-definition)
+  - [Strategies with different resources](#strategies-with-different-resources)
+  - [How does Tekton Pipelines handle resources](#how-does-tekton-pipelines-handle-resources)
+  - [Examples of Tekton resources management](#examples-of-tekton-resources-management)
+- [Annotations](#annotations)
+
+## Overview
+
+There are two types of strategy API: the `ClusterBuildStrategy` (`clusterbuildstrategies.shipwright.io/v1alpha1`) and the `BuildStrategy` (`buildstrategies.shipwright.io/v1alpha1`). Both strategies define a shared group of steps needed to fulfill the application build.
+
+A `ClusterBuildStrategy` is available cluster-wide, while a `BuildStrategy` is available within a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
+
+## Available ClusterBuildStrategies
+
+Well-known strategies can be bootstrapped from [GitHub](https://github.com/shipwright-io/build/tree/master/samples/buildstrategy). The currently supported cluster build strategies are:
+
+- [buildah](../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml)
+- [buildpacks-v3-heroku](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml)
+- [buildpacks-v3](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml)
+- [kaniko](../samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml)
+- [source-to-image](../samples/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml)
+
+## Available BuildStrategies
+
+The currently supported namespaced build strategies are:
+
+- [buildpacks-v3-heroku](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml)
+- [buildpacks-v3](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_namespaced_cr.yaml)
+
+---
+
+## Buildah
+
+The `buildah` ClusterBuildStrategy uses [`buildah`](https://github.com/containers/buildah) to build and push a container image from a `Dockerfile`. The `Dockerfile` should be specified on the `Build` resource. Also, instead of the `spec.dockerfile`, the `spec.builder.image` can be used with `quay.io/buildah/stable` as the value when defining the `Build` resource.
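+
+For illustration, a minimal `Build` using this strategy could look like the following sketch (the output image is a placeholder):
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go
+    contextDir: docker-build
+  strategy:
+    name: buildah
+    kind: ClusterBuildStrategy
+  dockerfile: Dockerfile
+  output:
+    image: registry.mycompany.com/my-org/sample-go:latest  # placeholder
+```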
+
+### Installing Buildah Strategy
+
+To install the strategy, use:
+
+```sh
+kubectl apply -f samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml
+```
+
+---
+
+## Buildpacks v3
+
+The [buildpacks-v3][buildpacks] BuildStrategy/ClusterBuildStrategy uses a Cloud Native Builder ([CNB][cnb]) container image, and is able to implement [lifecycle commands][lifecycle]. The following CNB images are the most common options:
+
+- [`heroku/buildpacks:18`][hubheroku]
+- [`cloudfoundry/cnb:bionic`][hubcloudfoundry]
+- [`docker.io/paketobuildpacks/builder:full`](https://hub.docker.com/r/paketobuildpacks/builder/tags)
+
+### Installing Buildpacks v3 Strategy
+
+You can install the `BuildStrategy` in your namespace or install the `ClusterBuildStrategy` at cluster scope so that it can be shared across namespaces.
+
+To install the cluster scope strategy, use (below is a Heroku example; you can also use the Paketo sample):
+
+```sh
+kubectl apply -f samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml
+```
+
+To install the namespaced scope strategy, use:
+
+```sh
+kubectl apply -f samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml
+```
+
+### Try it
+
+To use this strategy, follow these steps:
+
+- Create the Kubernetes secret that hosts the configuration to access the container registry.
+
+- Create a `Build` resource that uses a `quay.io` or `DockerHub` image repository for pushing the image. Also, provide the credentials to access it (the secret name below is a placeholder).
+
+  ```yaml
+  apiVersion: shipwright.io/v1alpha1
+  kind: Build
+  metadata:
+    name: buildpack-nodejs-build
+  spec:
+    source:
+      url: https://github.com/sclorg/nodejs-ex
+    strategy:
+      name: buildpacks-v3
+      kind: ClusterBuildStrategy
+    output:
+      image: quay.io/yourorg/yourrepo
+      credentials:
+        name: registry-credentials  # placeholder: the name of your registry secret
+  ```
+
+- Start a `BuildRun` resource.
+
+  ```yaml
+  apiVersion: shipwright.io/v1alpha1
+  kind: BuildRun
+  metadata:
+    name: buildpack-nodejs-buildrun
+  spec:
+    buildRef:
+      name: buildpack-nodejs-build
+  ```
+
+---
+
+## Kaniko
+
+The `kaniko` ClusterBuildStrategy is composed of Kaniko's `executor` [kaniko], with the objective of building a container image from a `Dockerfile` and a context directory.
+
+### Installing Kaniko Strategy
+
+To install the cluster scope strategy, use:
+
+```sh
+kubectl apply -f samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml
+```
+
+---
+
+## ko
+
+The `ko` ClusterBuildStrategy uses [ko](https://github.com/google/ko)'s publish command to build an image from a Golang main package.
+
+### Installing ko Strategy
+
+To install the cluster scope strategy, use:
+
+```sh
+kubectl apply -f samples/buildstrategy/ko/buildstrategy_ko_cr.yaml
+```
+
+**Note**: The build strategy currently uses the `spec.contextDir` of the Build differently than this property is designed for: the Git repository must be a Go module with the `go.mod` file at the root, and the `contextDir` specifies the path to the main package. You can check the [example](../samples/build/build_ko_cr.yaml), which is set up to build the Shipwright Build controller. This behavior will eventually be corrected once [Exhaustive list of generalized Build API/CRD attributes #184](https://github.com/shipwright-io/build/issues/184) / [Custom attributes from the Build CR could be used as parameters while defining a BuildStrategy #537](https://github.com/shipwright-io/build/issues/537) are done.
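+
+To illustrate the note above, a `Build` for this strategy might look like the following sketch, where `contextDir` points at the directory containing the main package rather than a Docker build context (the repository path and output image here are illustrative assumptions, not values from the sample):
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: ko-build
+spec:
+  source:
+    url: https://github.com/shipwright-io/build
+    contextDir: cmd/shipwright-build-controller  # assumed path to the Go main package
+  strategy:
+    name: ko
+    kind: ClusterBuildStrategy
+  output:
+    image: registry.mycompany.com/my-org/shipwright-build-controller:latest  # placeholder
+```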
+
+## Source to Image
+
+This BuildStrategy is composed of [`source-to-image`][s2i] and [`kaniko`][kaniko] in order to generate a `Dockerfile` and prepare the application to be built later on with a builder.
+
+`s2i` requires a specially crafted image, which can be provided via the `spec.builder.image` parameter on the `Build` resource.
+
+### Installing Source to Image Strategy
+
+To install the cluster scope strategy, use:
+
+```sh
+kubectl apply -f samples/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml
+```
+
+### Build Steps
+
+1. `s2i` generates a `Dockerfile` and prepares the source code for the image build.
+2. `kaniko` creates and pushes the container image to what is defined as `output.image`.
+
+[buildpacks]: https://buildpacks.io/
+[cnb]: https://buildpacks.io/docs/concepts/components/builder/
+[lifecycle]: https://buildpacks.io/docs/concepts/components/lifecycle/
+[hubheroku]: https://hub.docker.com/r/heroku/buildpacks/
+[hubcloudfoundry]: https://hub.docker.com/r/cloudfoundry/cnb
+[kaniko]: https://github.com/GoogleContainerTools/kaniko
+[s2i]: https://github.com/openshift/source-to-image
+[buildah]: https://github.com/containers/buildah
+
+## Steps Resource Definition
+
+All strategy steps can include a definition of resources (_limits and requests_) for CPU, memory and disk. For strategies with more than one step, each step (_container_) could require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps that only differ in their name and step resources can be installed on the cluster to allow users to create builds with smaller and larger resource requirements.
+
+### Strategies with different resources
+
+If the strategy admins require multiple flavors of the same strategy, where one flavor has more resources than the other, then multiple strategies for the same type should be defined on the cluster.
+In the following example, we use Kaniko as the type:
+
+```yaml
+---
+apiVersion: shipwright.io/v1alpha1
+kind: ClusterBuildStrategy
+metadata:
+  name: kaniko-small
+spec:
+  buildSteps:
+    - name: build-and-push
+      image: gcr.io/kaniko-project/executor:v1.5.1
+      workingDir: /workspace/source
+      securityContext:
+        runAsUser: 0
+        capabilities:
+          add:
+            - CHOWN
+            - DAC_OVERRIDE
+            - FOWNER
+            - SETGID
+            - SETUID
+            - SETFCAP
+            - KILL
+      env:
+        - name: DOCKER_CONFIG
+          value: /tekton/home/.docker
+        - name: AWS_ACCESS_KEY_ID
+          value: NOT_SET
+        - name: AWS_SECRET_KEY
+          value: NOT_SET
+      command:
+        - /kaniko/executor
+      args:
+        - --skip-tls-verify=true
+        - --dockerfile=$(build.dockerfile)
+        - --context=/workspace/source/$(build.source.contextDir)
+        - --destination=$(build.output.image)
+        - --oci-layout-path=/workspace/output/image
+        - --snapshotMode=redo
+        - --push-retry=3
+      resources:
+        limits:
+          cpu: 250m
+          memory: 65Mi
+        requests:
+          cpu: 250m
+          memory: 65Mi
+---
+apiVersion: shipwright.io/v1alpha1
+kind: ClusterBuildStrategy
+metadata:
+  name: kaniko-medium
+spec:
+  buildSteps:
+    - name: build-and-push
+      image: gcr.io/kaniko-project/executor:v1.5.1
+      workingDir: /workspace/source
+      securityContext:
+        runAsUser: 0
+        capabilities:
+          add:
+            - CHOWN
+            - DAC_OVERRIDE
+            - FOWNER
+            - SETGID
+            - SETUID
+            - SETFCAP
+            - KILL
+      env:
+        - name: DOCKER_CONFIG
+          value: /tekton/home/.docker
+        - name: AWS_ACCESS_KEY_ID
+          value: NOT_SET
+        - name: AWS_SECRET_KEY
+          value: NOT_SET
+      command:
+        - /kaniko/executor
+      args:
+        - --skip-tls-verify=true
+        - --dockerfile=$(build.dockerfile)
+        - --context=/workspace/source/$(build.source.contextDir)
+        - --destination=$(build.output.image)
+        - --oci-layout-path=/workspace/output/image
+        - --snapshotMode=redo
+        - --push-retry=3
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+```
+
+The above provides more control and flexibility for the strategy admins. For end users, all they need to do is reference the proper strategy. For example:
+
+```yaml
+---
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: kaniko-medium
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go
+    contextDir: docker-build
+  strategy:
+    name: kaniko-medium
+    kind: ClusterBuildStrategy
+  dockerfile: Dockerfile
+```
+
+### How does Tekton Pipelines handle resources
+
+The **Build** controller relies on the Tekton [pipeline controller](https://github.com/tektoncd/pipeline) to schedule the `pods` that execute the above strategy steps. In a nutshell, the **Build** controller creates a Tekton **TaskRun** at runtime, and the **TaskRun** generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one-by-one.
+
+Tekton manages each step's resource **requests** in a very particular way; see the [docs](https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md#defining-steps). That document mentions the following:
+
+> The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once.
+
+### Examples of Tekton resources management
+
+For a more concrete example, let's take a look at the following scenarios:
+
+---
+
+**Scenario 1.** Namespace without `LimitRange`, both steps with the same resource values.
+
+If we apply the following resources:
+
+- [buildahBuild](../samples/build/buildah/build_buildah_cr.yaml)
+- [buildahBuildRun](../samples/buildrun/buildah/buildrun_buildah_cr.yaml)
+- [buildahClusterBuildStrategy](../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml)
+
+We will see some differences between the `TaskRun` definition and the `pod` definition.
+
+For the `TaskRun`, as expected, we can see the resources on each `step`, as we previously defined in our [strategy](../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml).
+
+```sh
+$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "65Mi"
+  }
+}
+
+$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "65Mi"
+  }
+}
+```
+
+The pod definition is different: Tekton will only use the **highest** values from one container, and set the rest (the lowest ones) to zero:
+
+```sh
+$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "ephemeral-storage": "0",
+    "memory": "65Mi"
+  }
+}
+
+$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "0",                <------------------- See how the request is set to ZERO.
+    "ephemeral-storage": "0",  <------------------- See how the request is set to ZERO.
+    "memory": "0"              <------------------- See how the request is set to ZERO.
+  }
+}
+```
+
+In this scenario, only one container can have the `spec.resources.requests` definition. Even when both steps have the same values, only one container will get them; the others will be set to zero.
+
+---
+
+**Scenario 2.** Namespace without `LimitRange`, steps with different resources:
+
+If we apply the following resources:
+
+- [buildahBuild](../samples/build/buildah/build_buildah_cr.yaml)
+- [buildahBuildRun](../samples/buildrun/buildah/buildrun_buildah_cr.yaml)
+- We will use a modified buildah strategy, with the following step resources:
+
+  ```yaml
+    - name: buildah-bud
+      image: quay.io/buildah/stable:latest
+      workingDir: /workspace/source
+      securityContext:
+        privileged: true
+      command:
+        - /usr/bin/buildah
+      args:
+        - bud
+        - --tag=$(build.output.image)
+        - --file=$(build.dockerfile)
+        - $(build.source.contextDir)
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 250m
+          memory: 65Mi
+      volumeMounts:
+        - name: buildah-images
+          mountPath: /var/lib/containers/storage
+    - name: buildah-push
+      image: quay.io/buildah/stable:latest
+      securityContext:
+        privileged: true
+      command:
+        - /usr/bin/buildah
+      args:
+        - push
+        - --tls-verify=false
+        - docker://$(build.output.image)
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 250m
+          memory: 100Mi  # <------ See how we provide more memory to buildah-push, compared to the 65Mi of the other step
+  ```
+
+For the `TaskRun`, as expected, we can see the resources on each `step`.
+
+```sh
+$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "65Mi"
+  }
+}
+
+$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "100Mi"
+  }
+}
+```
+
+The pod definition is different: Tekton will only use the **highest** values from one container, and set the rest (the lowest ones) to zero:
+
+```sh
+$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",             <------------------- See how the CPU is preserved
+    "ephemeral-storage": "0",
+    "memory": "0"              <------------------- See how the memory is set to ZERO
+  }
+}
+$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "0",                <------------------- See how the CPU is set to zero.
+    "ephemeral-storage": "0",
+    "memory": "100Mi"          <------------------- See how the memory is preserved on this container
+  }
+}
+```
+
+In the above scenario, we can see how the maximum numbers for resource requests are distributed between containers. The container `step-buildah-push` gets the `100Mi` for the memory requests, as it was the one defining the highest number. At the same time, the container `step-buildah-bud` is assigned a `0` for its memory request.
+
+---
+
+**Scenario 3.** Namespace **with** a `LimitRange`.
+
+When a `LimitRange` exists on the namespace, the `Tekton Pipeline` controller follows the same approach as stated in the above two scenarios. The difference is that the containers with lower values get the minimum values of the `LimitRange` instead of zero.
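+
+For reference, a namespace-level `LimitRange` of the following shape would supply those minimums (the values are illustrative, not a recommendation):
+
+```yaml
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: limit-range
+  namespace: test-build
+spec:
+  limits:
+    - type: Container
+      min:
+        cpu: 100m      # illustrative minimum applied instead of "0"
+        memory: 128Mi  # illustrative minimum applied instead of "0"
+```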
+
+## Annotations
+
+Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun, and from there Tekton propagates them to the Pod. Example use cases are:
+
+- The Kubernetes [Network Traffic Shaping](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping) feature looks for the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations to limit the network bandwidth the `Pod` is allowed to use.
+- The [AppArmor profile of a container](https://kubernetes.io/docs/tutorials/clusters/apparmor/) is defined using the `container.apparmor.security.beta.kubernetes.io/<container_name>` annotation.
+
+The following annotations are not propagated:
+
+- `kubectl.kubernetes.io/last-applied-configuration`
+- `clusterbuildstrategy.shipwright.io/*`
+- `buildstrategy.shipwright.io/*`
+- `build.shipwright.io/*`
+- `buildrun.shipwright.io/*`
+
+A Kubernetes administrator can further restrict the usage of annotations by using policy engines like [Open Policy Agent](https://www.openpolicyagent.org/).
diff --git a/content/en/docs/authentication.md b/content/en/docs/authentication.md
new file mode 100644
index 00000000..5fee5a8b
--- /dev/null
+++ b/content/en/docs/authentication.md
@@ -0,0 +1,170 @@
+---
+title: Authentication during builds
+---
+
+The following document provides an introduction to the different authentication methods that can take place during an image build when using the Build controller.
+
+- [Overview](#overview)
+- [Build Secrets Annotation](#build-secrets-annotation)
+- [Authentication for Git](#authentication-for-git)
+  - [Basic authentication](#basic-authentication)
+  - [SSH authentication](#ssh-authentication)
+  - [Usage of git secret](#usage-of-git-secret)
+- [Authentication to container registries](#authentication-to-container-registries)
+  - [Docker Hub](#docker-hub)
+  - [Usage of registry secret](#usage-of-registry-secret)
+- [References](#references)
+
+## Overview
+
+There are two places where users might need to define authentication when building images. Authentication to a container registry is the most common one, but users might also need to define authentication for pulling source code from Git. Overall, authentication is done via the definition of [secrets](https://kubernetes.io/docs/concepts/configuration/secret/) in which the required sensitive data is stored.
+
+## Build Secrets Annotation
+
+Users need to add the annotation `build.shipwright.io/referenced.secret: "true"` to a build secret so that the build controller can decide to take a reconcile action when a secret event (`create`, `update` and `delete`) happens. Below is a secret example with the build annotation:
+
+```yaml
+apiVersion: v1
+data:
+  .dockerconfigjson: xxxxx
+kind: Secret
+metadata:
+  annotations:
+    build.shipwright.io/referenced.secret: "true"
+  name: secret-docker
+type: kubernetes.io/dockerconfigjson
+```
+
+This annotation helps us filter out secrets which are not referenced on a Build instance. That means that if a secret doesn't have this annotation, the Build controller will not reconcile even when an event happens on the secret. Being able to reconcile on secret events allows the Build controller to re-trigger validations on the Build configuration, allowing users to understand if a dependency is missing.
+
+If you are using the `kubectl` command to create secrets, you can first create the build secret using the `kubectl create secret` command and then annotate it using `kubectl annotate secrets`. Below is an example:
+
+```sh
+kubectl -n ${namespace} create secret docker-registry example-secret --docker-server=${docker-server} --docker-username="${username}" --docker-password="${password}" --docker-email=me@here.com
+kubectl -n ${namespace} annotate secrets example-secret build.shipwright.io/referenced.secret='true'
+```
+
+## Authentication for Git
+
+There are two ways of authenticating to Git (_this applies to both GitLab and GitHub_). Also, see more information in the official Tekton [docs](https://github.com/tektoncd/pipeline/blob/main/docs/auth.md#configuring-ssh-auth-authentication-for-git).
+
+### SSH authentication
+
+For SSH authentication you must use the Tekton annotations to specify the hostname(s) of the Git repository providers that you use: `github.com` for GitHub, or `gitlab.com` for GitLab.
+
+As seen in the following example, there are three things to notice:
+
+- The Kubernetes secret should be of the type `kubernetes.io/ssh-auth`
+- The annotations include values to support both GitHub and GitLab
+- The `data.ssh-privatekey` can be generated by following the command example `base64 <~/.ssh/id_rsa`, where `~/.ssh/id_rsa` is the key used to authenticate to Git.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-git-ssh-auth
+  annotations:
+    tekton.dev/git-0: github.com
+    tekton.dev/git-1: gitlab.com
+    build.shipwright.io/referenced.secret: "true"
+type: kubernetes.io/ssh-auth
+data:
+  ssh-privatekey: <BASE64_ENCODED_PRIVATE_KEY>
+```
+
+### Basic authentication
+
+Basic authentication is very similar to the SSH one, but with the following differences:
+
+- The Kubernetes secret should be of the type `kubernetes.io/basic-auth`
+- The `stringData` should host your user and password in clear text.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-git-basic-auth
+  annotations:
+    tekton.dev/git-0: https://github.com
+    tekton.dev/git-1: https://gitlab.com
+    build.shipwright.io/referenced.secret: "true"
+type: kubernetes.io/basic-auth
+stringData:
+  username: <USERNAME>
+  password: <PASSWORD>
+```
+
+### Usage of git secret
+
+With the right secret in place (_note: ensure the secret is created in the proper Kubernetes namespace_), users should reference it on their Build YAML definitions.
+
+Depending on the secret type, there are two ways of doing this:
+
+When using SSH auth, users should follow:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+spec:
+  source:
+    url: git@gitlab.com:eduardooli/newtaxi.git
+    credentials:
+      name: secret-git-ssh-auth
+```
+
+When using basic auth, users should follow:
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+spec:
+  source:
+    url: https://gitlab.com/eduardooli/newtaxi.git
+    credentials:
+      name: secret-git-basic-auth
+```
+
+## Authentication to container registries
+
+For pushing images to private registries, users need to define a secret in their respective namespace.
+
+### Docker Hub
+
+Use the following commands to generate your secret:
+
+```sh
+kubectl --namespace <YOUR_NAMESPACE> create secret docker-registry <CONTAINER_REGISTRY_SECRET_NAME> \
+  --docker-server=<REGISTRY_HOST> \
+  --docker-username=<USERNAME> \
+  --docker-password=<PASSWORD> \
+  --docker-email=me@here.com
+kubectl --namespace <YOUR_NAMESPACE> annotate secrets <CONTAINER_REGISTRY_SECRET_NAME> build.shipwright.io/referenced.secret='true'
+```
+
+_Note:_ When generating a secret to access Docker Hub, the `REGISTRY_HOST` value should be `https://index.docker.io/v1/`, and the username is the Docker ID.
+_Note:_ The value of `PASSWORD` can be your Docker Hub password, or an access token. A Docker access token can be created via _Account Settings_, then _Security_ in the sidebar, and the _New Access Token_ button.
+
+### Usage of registry secret
+
+With the right secret in place (_note: ensure the secret is created in the proper Kubernetes namespace_), users should reference it on their Build YAML definitions.
+For container registries, the secret should be placed under the `spec.output.credentials` path.
+
+```yaml
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: buildah-golang-build
+  ...
+  output:
+    image: docker.io/foobar/sample:latest
+    credentials:
+      name: <CONTAINER_REGISTRY_SECRET_NAME>
+```
+
+## References
+
+See more information in the official Tekton [documentation](https://github.com/tektoncd/pipeline/blob/main/docs/auth.md#configuring-ssh-auth-authentication-for-git) for authentication.
diff --git a/content/en/docs/configuration.md b/content/en/docs/configuration.md
new file mode 100644
index 00000000..7383d1a7
--- /dev/null
+++ b/content/en/docs/configuration.md
@@ -0,0 +1,23 @@
+---
+title: "Configuration"
+draft: false
+---
+
+The controller is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in [`500-controller.yaml`](https://github.com/shipwright-io/build/blob/master/deploy/500-controller.yaml).
+
+The following environment variables are available:
+
+| Environment Variable | Description |
+| --- | --- |
+| `CTX_TIMEOUT` | Override the default context timeout used for all Custom Resource Definition reconciliation operations. |
+| `KANIKO_CONTAINER_IMAGE` | Specify the Kaniko container image to be used for the runtime image build instead of the default, for example `gcr.io/kaniko-project/executor:v1.5.1`. |
+| `BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE` | Set the namespace to be used to store the `shipwright-build-controller` lock. By default it is in the same namespace as the controller itself. |
+| `BUILD_CONTROLLER_LEASE_DURATION` | Override the `LeaseDuration`, which is the duration that non-leader candidates will wait to force acquire leadership. |
+| `BUILD_CONTROLLER_RENEW_DEADLINE` | Override the `RenewDeadline`, which is the duration that the acting leader will retry refreshing leadership before giving up. |
+| `BUILD_CONTROLLER_RETRY_PERIOD` | Override the `RetryPeriod`, which is the duration the LeaderElector clients should wait between tries of actions. |
+| `BUILD_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the build controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
+| `BUILDRUN_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the buildrun controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
+| `BUILDSTRATEGY_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the buildstrategy controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
+| `CLUSTERBUILDSTRATEGY_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the clusterbuildstrategy controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
+| `KUBE_API_BURST` | Burst to use for the Kubernetes API client. See [Config.Burst](https://pkg.go.dev/k8s.io/client-go/rest#Config.Burst). A value of 0 or lower will use the default from client-go, which currently is 10. Default is 0. |
+| `KUBE_API_QPS` | QPS to use for the Kubernetes API client. See [Config.QPS](https://pkg.go.dev/k8s.io/client-go/rest#Config.QPS). A value of 0 or lower will use the default from client-go, which currently is 5. Default is 0. |
diff --git a/content/en/docs/metrics.md b/content/en/docs/metrics.md
new file mode 100644
index 00000000..0c519af1
--- /dev/null
+++ b/content/en/docs/metrics.md
@@ -0,0 +1,84 @@
+---
+title: Build Controller Metrics
+linkTitle: Metrics
+---
+
+The Build component exposes OpenMetrics (Prometheus format) data to help you monitor the health and behavior of your build resources.
+
+The following build metrics are exposed on port `8383`.
+
+| Name | Type | Description | Labels | Status |
+|:-----|:-----|:------------|:-------|:-------|
+| `build_builds_registered_total` | Counter | Number of total registered Builds. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> | experimental |
+| `build_buildruns_completed_total` | Counter | Number of total completed BuildRuns. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> <br> buildrun=<buildrun><sup>1</sup> | experimental |
+| `build_buildrun_establish_duration_seconds` | Histogram | BuildRun establish duration in seconds. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> <br> buildrun=<buildrun><sup>1</sup> | experimental |
+| `build_buildrun_completion_duration_seconds` | Histogram | BuildRun completion duration in seconds. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> <br> buildrun=<buildrun><sup>1</sup> | experimental |
+| `build_buildrun_rampup_duration_seconds` | Histogram | BuildRun ramp-up duration in seconds. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> <br> buildrun=<buildrun><sup>1</sup> | experimental |
+| `build_buildrun_taskrun_rampup_duration_seconds` | Histogram | BuildRun taskrun ramp-up duration in seconds. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> <br> buildrun=<buildrun><sup>1</sup> | experimental |
+| `build_buildrun_taskrun_pod_rampup_duration_seconds` | Histogram | BuildRun taskrun pod ramp-up duration in seconds. | buildstrategy=<buildstrategy><sup>1</sup> <br> namespace=<namespace><sup>1</sup> <br> build=<build><sup>1</sup> <br> buildrun=<buildrun><sup>1</sup> | experimental |
+
+<sup>1</sup> Labels for a metric are disabled by default. See [Configuration of metric labels](#configuration-of-metric-labels) to enable them.
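+
+To inspect what is currently being emitted, you can port-forward the metrics port and scrape it directly. This is a sketch; it assumes the metrics are served on the conventional `/metrics` path, so adjust it if your deployment differs:
+
+```bash
+# Forward the metrics port of the controller deployment to localhost.
+kubectl --namespace <namespace> port-forward deployment/shipwright-build-controller 8383:8383 &
+
+# Scrape the endpoint and filter for the build metrics listed above
+# (the /metrics path is an assumption based on Prometheus conventions).
+curl --silent http://localhost:8383/metrics | grep ^build_
+```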
+
+## Configuration of histogram buckets
+
+Environment variables can be set to use custom buckets for the histogram metrics:
+
+| Metric | Environment variable | Default |
+| ------ | -------------------- | ------- |
+| `build_buildrun_establish_duration_seconds` | `PROMETHEUS_BR_EST_DUR_BUCKETS` | `0,1,2,3,5,7,10,15,20,30` |
+| `build_buildrun_completion_duration_seconds` | `PROMETHEUS_BR_COMP_DUR_BUCKETS` | `50,100,150,200,250,300,350,400,450,500` |
+| `build_buildrun_rampup_duration_seconds` | `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS` | `0,1,2,3,4,5,6,7,8,9,10` |
+| `build_buildrun_taskrun_rampup_duration_seconds` | `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS` | `0,1,2,3,4,5,6,7,8,9,10` |
+| `build_buildrun_taskrun_pod_rampup_duration_seconds` | `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS` | `0,1,2,3,4,5,6,7,8,9,10` |
+
+The values must be a comma-separated list of numbers. You need to set the environment variable for the build controller for your customization to become active. When running locally, set the variable right before starting the controller:
+
+```bash
+export PROMETHEUS_BR_COMP_DUR_BUCKETS=30,60,90,120,180,240,300,360,420,480
+make local
+```
+
+When you deploy the build controller in a Kubernetes cluster, you need to extend the `spec.template.spec.containers[0].env` section of the sample deployment file, [`500-controller.yaml`](https://github.com/shipwright-io/build/blob/master/deploy/500-controller.yaml).
+Add an additional entry:
+
+```yaml
+[...]
+        env:
+        - name: PROMETHEUS_BR_COMP_DUR_BUCKETS
+          value: "30,60,90,120,180,240,300,360,420,480"
+[...]
+```
+
+## Configuration of metric labels
+
+As the number of buckets and labels has a direct impact on the number of Prometheus time series, you can selectively enable the labels you are interested in by using the `PROMETHEUS_ENABLED_LABELS` environment variable. The supported labels are:
+
+* `buildstrategy`
+* `namespace`
+* `build`
+* `buildrun`
+
+Use comma-separated values to enable multiple labels. For example:
+
+```bash
+export PROMETHEUS_ENABLED_LABELS=namespace
+make local
+```
+
+or
+
+```bash
+export PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
+make local
+```
+
+When you deploy the build controller in a Kubernetes cluster, you need to extend the `spec.template.spec.containers[0].env` section of the sample deployment file, [`500-controller.yaml`](https://github.com/shipwright-io/build/blob/master/deploy/500-controller.yaml).
+Add an additional entry:
+
+```yaml
+[...]
+        env:
+        - name: PROMETHEUS_ENABLED_LABELS
+          value: namespace
+[...]
+```
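+
+For a quick experiment on a running cluster, you can also set the variable directly on the deployment instead of editing the YAML file. This sketch assumes the controller runs as the `shipwright-build-controller` deployment:
+
+```bash
+# Set the environment variable on the live deployment; Kubernetes rolls out
+# a new controller pod with the updated value.
+kubectl --namespace <namespace> set env deployment/shipwright-build-controller \
+  PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
+```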
diff --git a/content/en/docs/profiling.md b/content/en/docs/profiling.md
new file mode 100644
index 00000000..e0bd6e76
--- /dev/null
+++ b/content/en/docs/profiling.md
@@ -0,0 +1,40 @@
+---
+title: Build Controller Profiling
+linkTitle: Profiling
+---
+
+The build controller supports a `pprof` profiling mode, which is omitted from the binary by default. To use profiling, use a controller image that was built with `pprof` enabled.
+
+## Enable `pprof` in the build controller
+
+In the Kubernetes cluster, edit the `shipwright-build-controller` deployment to use the container tag with the `debug` suffix:
+
+```sh
+kubectl --namespace <namespace> set image \
+  deployment/shipwright-build-controller \
+  shipwright-build-controller="$(kubectl --namespace <namespace> get deployment shipwright-build-controller --output jsonpath='{.spec.template.spec.containers[].image}')-debug"
+```
+
+## Connect `go pprof` to build controller
+
+Depending on your setup, there could be multiple build controller pods for high availability reasons. In this case, you have to look up the current leader first. The following command can be used to verify the currently active leader:
+
+```sh
+kubectl --namespace <namespace> get configmap shipwright-build-controller-lock --output json \
+  | jq --raw-output '.metadata.annotations["control-plane.alpha.kubernetes.io/leader"]' \
+  | jq --raw-output .holderIdentity
+```
+
+The `pprof` endpoint is not exposed in the cluster and can only be used from inside the container. Therefore, set up port forwarding to make the `pprof` port available locally:
+
+```sh
+kubectl --namespace <namespace> port-forward <pod-name> 8383:8383
+```
+
+Now you can set up a local web server to browse the profiling data:
+
+```sh
+go tool pprof -http localhost:8080 http://localhost:8383/debug/pprof/heap
+```
+
+_Please note:_ For this to work, `graphviz` must be installed on your system, for example via `brew install graphviz`, `apt-get install graphviz`, `yum install graphviz`, or similar.
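+
+The heap profile above is only one of the standard `pprof` endpoints. Assuming the controller registers the default Go `net/http/pprof` handlers (an assumption suggested by the `/debug/pprof/heap` path used above), you can gather a CPU profile the same way:
+
+```sh
+# Sample CPU usage for 30 seconds, then open the interactive web UI.
+go tool pprof -http localhost:8080 "http://localhost:8383/debug/pprof/profile?seconds=30"
+```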