This repository has been archived by the owner on Oct 22, 2021. It is now read-only.

Merge pull request #1212 from cloudfoundry-incubator/backport-ci-to-2.2
Backport pipeline from release-2.3 to release-2.2
viovanov authored Aug 14, 2020
2 parents 8b0dc64 + 6fefefb commit f74803e
Showing 11 changed files with 1,615 additions and 590 deletions.
82 changes: 24 additions & 58 deletions .concourse/README.md
@@ -2,78 +2,44 @@
 [This pipeline](https://concourse.suse.dev/teams/main/pipelines/kubecf) lints
 builds and tests kubecf both with Eirini and Diego. The clusters used are
-deployed with [kind](https://github.com/kubernetes-sigs/kind).
+GKE preemptible ones.

 The pipeline tests the kubecf master branch as well as PRs with the tag
 "Trigger: CI".

-It uses [Catapult](https://github.com/SUSE/catapult) and requires at least one
-[EKCP](https://github.com/mudler/ekcp) host (or a federated EKCP network) to
-delegate the Kubernetes cluster creation to another machine, so the Concourse
-workers can consume it.
+It uses [Catapult](https://github.com/SUSE/catapult) for the logic implementation.

-## Run the tests locally
+## Deploying the pipeline

-It is possible to run the job of the pipeline locally without having a full
-Concourse + EKCP deployment.
-Catapult allows to replicate every step regardless the Kubernetes provider.
-[See the Catapult wiki page for a short summary on how to run the same tests locally](https://github.com/SUSE/catapult/wiki/KubeCF-testing).
+    $ ./create_pipeline.sh <concourse-target> <pipeline-name>

-You can also deploy the pipeline in your Concourse instance,
-the following paragraphs are documenting the needed requirements.
+E.g. to deploy the `kubecf` pipeline:

-## Deploy the pipeline
+    $ ./create_pipeline.sh <concourse-target> kubecf

-The following section describes the requirements and the steps needed to deploy
-the pipeline from scratch.
+E.g. to deploy the `kubecf-pool-reconciler` pipeline:

-### Requirements
+    $ ./create_pipeline.sh <concourse-target> kubecf-pool-reconciler

-The only requirement of this pipeline is an
-[EKCP](https://github.com/mudler/ekcp) instance deployed in the network, which
-the Concourse workers can reach.
+All the required config options are in `<pipeline-name>.yaml`.

-EKCP is an API on top of Kind that allows the programmatic creation of
-externally accessible clusters.
+### Developing the pipeline

-For more information, see also
-[EKCP Deployment setup](https://github.com/mudler/ekcp/wiki/Deployment-setups)
-and the [Catapult-web wiki page](https://github.com/SUSE/catapult/wiki/Catapult-web)
-for a full guided setup.
+If you wish to deploy a custom pipeline:
+1. Copy either `kubecf.yaml` or `kubecf-pool-reconciler.yaml` into
+   `<your-pipeline-name>.yaml`.
+2. Edit the yaml and disable the production options flagged by the NOTEs
+   (publishing artifacts, updating the GitHub status, S3 buckets to consume, etc.).
+3. If needed, change the branches to track in the `branches` map in
+   `<your-pipeline-name>.yaml`.
+4. Deploy as usual with `$ ./create_pipeline.sh <concourse-target> <your-pipeline-name>`.

-To make the pipeline request new clusters from a different node, only adjust
-the `EKCP_HOST` parameter on the pipelines/environment variable accordingly to
-point to your new EKCP API endpoint.
-
-## Deploy on Concourse
-
-If you wish to deploy the pipeline on Concourse, run the following
-command and use the `fly` script that you can find in this directory:
+## Running the tests locally

-```
-./fly -t target set-pipeline -p kubecf
-```
-
-## Pool for the pipeline
-
-The kubecf pipeline is using the [concourse pool
-resource](https://github.com/concourse/pool-resource) to obtain k8s clusters
-for running the jobs. Once used, the clusters are destroyed from EKCP and
-removed from the pool by the kubecf pipeline itself as needed.
-
-There is an additional pipeline "kubecf-pool-reconciler" that automatically
-creates k8s clusters and adds them to the pool, up to the specified maximum
-number of clusters.
-
-To deploy the kubecf-pool-reconciler pipeline do:
-
-```
-fly -t suse.dev set-pipeline -p kubecf-pool-reconciler --config <(gomplate -V -f kubecf-pool-reconciler.yaml.gomplate)
-```
-
-## Pipeline development
-
-If you wish to deploy a copy of this pipeline without publishing artifacts to
-the official buckets, create a `config.yaml` file ( you can use `config.yaml.sample`
-as a guide ) and deploy the same command above.
+It is possible to run the jobs of the pipeline locally without a full
+Concourse + GKE setup.
+Catapult allows every step to be replicated regardless of the Kubernetes provider.
+[See the Catapult wiki page for a short summary on how to run the same tests
+locally](https://github.com/SUSE/catapult/wiki/KubeCF-testing).
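
For illustration only, here is a minimal sketch of the custom-pipeline workflow described in the backported README's "Developing the pipeline" steps. The pipeline name `my-kubecf` and the edited values are hypothetical examples; only `create_pipeline.sh` and the config keys shown in the new README and in `kubecf-release-2.2.yaml` are taken from this commit.

```bash
# Sketch only: steps 1-4 from the new README, run from the .concourse directory.
cd .concourse

# 1. Start from one of the shipped configs.
cp kubecf.yaml my-kubecf.yaml            # "my-kubecf" is a hypothetical name

# 2. In my-kubecf.yaml, disable the production options flagged by the NOTEs,
#    for example (values are assumptions for a dev copy):
#      trigger_publish: false
#      github_status: false
#      s3_bucket: my-dev-bucket
#      resource_prefix: my-dev-prefix

# 3. Optionally adjust the `branches` map to the branches you want to track.

# 4. Deploy it like any other pipeline.
./create_pipeline.sh <concourse-target> my-kubecf
```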
8 changes: 0 additions & 8 deletions .concourse/config.yaml.sample

This file was deleted.

20 changes: 0 additions & 20 deletions .concourse/fly

This file was deleted.

6 changes: 3 additions & 3 deletions .concourse/kubecf-pool-reconciler.yaml.gomplate
@@ -1,4 +1,5 @@
-{{ $kind_pool_size := 31 }}
+# load the config provided when evaluating the template
+{{ $config := (datasource "config") }}

 resources:
 - name: kind-environments
@@ -26,7 +27,7 @@ deploy_args: &deploy_args
 export CLUSTER_NAME="$CLUSTER_PREFIX-$(date +%s | sha256sum | base64 | head -c 32 ; echo)"
 CURRENT_CLUSTERS=$(ls kind-environments/kind/**/* | grep -v .keep | wc -l)

-if [ $CURRENT_CLUSTERS -lt {{ $kind_pool_size }} ]; then
+if [ $CURRENT_CLUSTERS -lt {{ $config.kind_pool_size }} ]; then
 pushd catapult
 make k8s
 else
@@ -45,7 +46,6 @@ jobs:
 trigger: true
 - get: catapult
 - task: deploy
-privileged: true
 timeout: 30m
 config:
 platform: linux
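
With this change the pool size is read from a gomplate datasource named `config` instead of a hardcoded template variable. As a hedged sketch only: the exact invocation used by `create_pipeline.sh` is not shown in this diff, but a datasource-driven template of this shape could be rendered and deployed by hand roughly like this, assuming `kubecf-pool-reconciler.yaml` carries the keys the template reads (e.g. `kind_pool_size`).

```bash
# Sketch only: manual rendering of the reconciler pipeline template.
# Assumes the config file provides the keys the template consumes,
# e.g. `kind_pool_size: 31`.
fly -t <concourse-target> set-pipeline -p kubecf-pool-reconciler \
  --config <(gomplate -d config=kubecf-pool-reconciler.yaml \
                      -f kubecf-pool-reconciler.yaml.gomplate)
```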
62 changes: 62 additions & 0 deletions .concourse/kubecf-release-2.2.yaml
@@ -0,0 +1,62 @@
---
# NOTE: unique prefix for resource names. This way they don't collide among
# different pipelines. Use your own when developing. Must not start with a
# number as it breaks gcloud clusters.
resource_prefix: release-2-2

workertags: # list of worker tags, to run the pipeline on specific (tagged) workers
# - yourtag

# NOTE use your own bucket when developing
s3_bucket: kubecf-ci
s3_bucket_region: eu-central-1

# NOTE use your own bucket when developing
s3_final_bucket: kubecf
s3_final_bucket_region: us-west-2

kubecf_repository: cloudfoundry-incubator/kubecf

# for preemptible gke clusters
gke_project: suse-225215
gke_zone: europe-west3-c
gke_dns_zone: kubecf-ci
gke_domain: kubecf.ci

# NOTE please disable the following when developing or deploying a copy of
# the pipeline:
trigger_publish: true
github_status: true

availableCfSchedulers: # Diego / Eirini
- diego
- eirini

branches: # Repository branches to track
- release-2.2
pr_resources:
- pr
pr_base_branch: release-2.2 # filter PRs depending on the branch they target. `~` for all branches

# Stable Jobs
stable_jobs:
- lint
- build
- deploy-diego
- deploy-eirini
- smoke-tests-diego
- cf-acceptance-tests-diego
- cleanup-diego-cluster
- cleanup-eirini-cluster

experimental_jobs:
- cf-acceptance-tests-eirini
- cats-internetless-eirini
- ccdb-rotate-eirini
- smoke-tests-post-rotate-eirini
- cats-internetless-diego
- sync-integration-tests
- ccdb-rotate-diego
- smoke-tests-post-rotate-diego
- upgrade-test
- smoke-tests-eirini
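
The new `kubecf-release-2.2.yaml` above is the per-pipeline config the backported README refers to with "All the required config options are in `<pipeline-name>.yaml`". A minimal sketch of deploying it, assuming the pipeline name matches the config file name as the README's convention suggests:

```bash
# Sketch only: deploy the release-2.2 pipeline with the config file added
# in this commit, following the README's create_pipeline.sh convention.
cd .concourse
./create_pipeline.sh <concourse-target> kubecf-release-2.2
```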