There are multiple ways to set up the pipeline. There is a single official prod mode, which can only be instantiated in CentOS CI under the `fedora-coreos` namespace. However, one can set up everything equally well in a separate namespace (such as `fedora-coreos-devel`), or a separate cluster entirely.
The main tool to deploy and update these resources is `./deploy`, which is located in this repo. It is a simple wrapper around `oc process`/`create`/`replace` to make deploying and updating the various resources located in the `manifests/pipeline.yaml` OpenShift template easier. It will set up a Jenkins pipeline job which uses the Kubernetes Jenkins plugin.
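Conceptually, what the wrapper runs boils down to something like the following (a rough sketch only; the real parameter handling lives in the `./deploy` script itself, and the parameter names shown are illustrative):

```shell
# Rough sketch of what ./deploy wraps: process the template, then
# create the resulting resources (parameter names are illustrative).
oc process -f manifests/pipeline.yaml -p NAMESPACE=fedora-coreos | oc create -f -

# On subsequent updates, replace instead of create:
oc process -f manifests/pipeline.yaml -p NAMESPACE=fedora-coreos | oc replace -f -
```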
In the following sections, the section header may indicate whether the section applies only to the local cluster case (`[LOCAL]`) or the official prod case (`[PROD]`).
You'll want to be sure you have kubevirt available in your cluster. See this section of the coreos-assembler docs.
This is recommended for production pipelines, and also gives you a lot of flexibility. The coreos-assembler document above has multiple options for this. To be clear, we would also likely support running on "vanilla" Kubernetes if someone interested showed up wanting that.
The easiest way to set up your own local OCP4 cluster for developing on the pipeline is using CodeReady Containers, though the author has not yet tried setting up the pipeline on that footprint. (You'll need to allocate at least ~14G of RAM to the VM.)
Alternatively, you can use the OpenShift installer directly to create a full cluster either in libvirt or GCP (for nested virt). For more details, see:
- https://github.com/openshift/installer/tree/master/docs/dev/libvirt
- https://github.com/coreos/coreos-assembler/blob/master/doc/openshift-gcp-nested-virt.md
Once you have a cluster up, you will want to deploy the OpenShift CNV operator for access to Kubevirt:
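For example, installing the operator via OLM might look like the following manifests (a sketch only: the package name, channel, and catalog source are assumptions that depend on your cluster and CNV version, so verify them against OperatorHub before applying):

```yaml
# Sketch: OLM manifests for installing CNV. The names and channel
# below are assumptions; check OperatorHub for the correct values.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-cnv-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  channel: stable
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```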
It is preferable to match the project name used for prod in CentOS CI (`fedora-coreos`), but feel free to use a different project name like `fedora-coreos-devel` if you'd like.
```shell
oc new-project fedora-coreos
```
If you're planning to test changes, it would be best to fork this repo so that you do your work there. The workflow requires a remote repo to which to push changes.
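For instance, assuming a GitHub fork under your own username (the fork URL below is a placeholder), the setup might look like:

```shell
# Sketch: clone your fork and keep the upstream repo as a second
# remote so you can pull in updates. Replace <your-username> with
# your GitHub username.
git clone https://github.com/<your-username>/fedora-coreos-pipeline
cd fedora-coreos-pipeline
git remote add upstream https://github.com/coreos/fedora-coreos-pipeline
git fetch upstream
```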
If you are in production where we upload builds to S3 OR you want to test uploading to S3 as part of your pipeline development, you need to create a credentials config as a secret within OpenShift.
First create files with your secret content:
```shell
mkdir dir
cat <<'EOF' > dir/config
[default]
aws_access_key_id=keyid
aws_secret_access_key=key
EOF
echo keyid > dir/accessKey
echo key > dir/secretKey
```
We expose it in different ways (as an AWS config file and as direct fields) because those credentials are used both directly as Jenkins credentials (which use the direct fields) and by e.g. `cosa`, which accepts an AWS config file.
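Since the two representations must carry the same values, one option (a hypothetical helper, not part of this repo) is to derive the direct-field files from the config file rather than typing the credentials twice:

```shell
# Hypothetical helper: extract the direct-field files from the AWS
# config file so both representations stay in sync.
mkdir -p dir
cat <<'EOF' > dir/config
[default]
aws_access_key_id=keyid
aws_secret_access_key=key
EOF
# Pull each value out of the key=value lines in the config file.
awk -F= '$1 == "aws_access_key_id" {print $2}' dir/config > dir/accessKey
awk -F= '$1 == "aws_secret_access_key" {print $2}' dir/config > dir/secretKey
```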
Then create the secret in OpenShift:
```shell
oc create secret generic aws-fcos-builds-bot-config --from-file=dir
```
We also have a second AWS config that can be used for running kola tests. If you have a single account with enough permissions for both, then you can use the same account for both uploading builds and running kola tests (i.e. re-use `upload-secret` from above). If not, then you can use a second set of credentials for the kola tests.
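If you are re-using the upload credentials, the kola secret can simply point at the same config file created earlier (assuming the `dir/config` layout from the previous step):

```shell
# Sketch: reuse the upload credentials file for the kola secret.
oc create secret generic aws-fcos-kola-bot-config --from-file=config=dir/config
```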
```shell
cat <<'EOF' > /path/to/kola-secret
[default]
aws_access_key_id=keyid
aws_secret_access_key=key
EOF

oc create secret generic aws-fcos-kola-bot-config --from-file=config=/path/to/kola-secret
```
If you are in production where we upload images to GCP OR you want to test uploading to GCP as part of your pipeline development, you need to create upload credentials for a service account as a secret within OpenShift. For more information on creating a service account, see the Google Cloud Docs.
Once you have the json file that represents the credentials for your service account from GCP, create the secret in OpenShift:
```shell
oc create secret generic gcp-image-upload-config --from-file=config=/path/to/upload-secret
```
We also have a second GCP config that can be used for running kola tests. If you'd like to use a single account for both image uploading and tests, you can do so, assuming it has enough permissions.
```shell
oc create secret generic gcp-kola-tests-config --from-file=config=/path/to/kola-secret
```
If you want to store builds persistently, now is a good time to allocate S3 storage. See the upstream coreos-assembler docs around build architecture.
Today, the FCOS pipeline is oriented towards having its own bucket; this will likely be fixed in the future. But using your credentials, you should now do e.g.:

```shell
aws s3 mb s3://my-fcos-bucket
```

And provide it to `--bucket` below.
If you want to be able to have build status messages appear in Slack, create a `slack-api-token` secret:
```shell
echo -n "$TOKEN" > slack-token
oc create secret generic slack-api-token --from-file=token=slack-token
```
You can obtain a token when creating a new instance of the Jenkins CI app in your Slack workspace.
Create a shared webhook secret using e.g. `uuidgen -r`:
```shell
uuidgen -r > secret
oc create secret generic github-webhook-shared-secret --from-file=secret
```
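Optionally, the generate step can include a quick format check before the secret is loaded (a sketch; `uuidgen` comes from util-linux, and `/proc/sys/kernel/random/uuid` is a Linux-only fallback):

```shell
# Generate the shared webhook secret, falling back to the kernel's
# UUID source if uuidgen isn't available.
uuidgen -r > secret 2>/dev/null || cat /proc/sys/kernel/random/uuid > secret
# Sanity-check the format (8-4-4-4-12 lowercase hex) before use.
grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' secret && echo "secret OK"
```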
```shell
oc new-app --file=manifests/jenkins.yaml --param=NAMESPACE=fedora-coreos
```
Notice the `NAMESPACE` parameter. This makes the Jenkins master use the image from our namespace, which we'll create in the next step. (The reason we create the app first is that otherwise OpenShift will automatically instantiate Jenkins with default parameters when creating the Jenkins pipeline.)
Now, create the Jenkins configmap:
```shell
oc create configmap jenkins-casc-cfg --from-file=jenkins/config
```
If working on the production pipeline, you may simply do:
```shell
./deploy --official --update
```
If working in a different namespace/cluster, the command is the same, but without the `--official` switch:

```shell
./deploy --update
```
You may also want to provide additional switches depending on the circumstances. Here are some of them:
- `--prefix PREFIX` - The prefix to prepend to created developer-specific resources. By default, this will be your username, but you can provide a specific prefix as well.
- `--pipeline <URL>[@REF]` - Git source URL and optional git ref for the pipeline Jenkinsfile.
- `--config <URL>[@REF]` - Git source URL and optional git ref for the FCOS config.
- `--pvc-size <SIZE>` - Size of the cache PVC to create. Note that the PVC size cannot be changed after creation. The format is the one understood by Kubernetes, e.g. `30Gi`.
- `--cosa-img <FQIN>` - Image of coreos-assembler to use.
- `--bucket BUCKET` - AWS S3 bucket in which to store builds (or blank for none).
For example, to target a specific combination of pipeline, FCOS config, cosa image, and bucket:
```shell
./deploy --update \
    --pipeline https://github.com/jlebon/fedora-coreos-pipeline \
    --config https://github.com/jlebon/fedora-coreos-config@feature \
    --cosa-img quay.io/jlebon/coreos-assembler:random-tag \
    --bucket jlebon-fcos
```
See `./deploy --help` for more information.
This will create:

- the Jenkins master imagestream,
- the Jenkins slave imagestream,
- the coreos-assembler imagestream,
- the `PersistentVolumeClaim` in which we'll cache, and
- the Jenkins pipeline build.
The default size of the PVC is 100Gi. There is a `--pvc-size` parameter one can use to make this smaller if you do not have enough space, e.g. `--pvc-size 30Gi`.
We can now start a build of the Jenkins master:
```shell
oc start-build --follow jenkins
```
Once the Jenkins master image is built, Jenkins should start up (verify with `oc get pods`). Once the pod is marked READY, you should be able to log in to the Jenkins UI at `https://jenkins-$NAMESPACE.$CLUSTER_URL/` (`oc get route jenkins` will show the URL). As previously noted, any password works to log in as `developer`.
It may be a good idea to set the Kubernetes plugin to use DNS for service names.
Once Jenkins is ready, we can now start the Fedora CoreOS pipeline!
```shell
oc start-build fedora-coreos-pipeline
```
(Or if running as a developer pipeline, it would be e.g. `jlebon-fedora-coreos-pipeline`.)
You can verify the build status using `oc get builds` or by checking the logs in the web UI.
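From the CLI, you can also follow the logs of the most recent pipeline build (a sketch; adjust the build config name if you deployed with a `--prefix`):

```shell
# Follow the logs of the latest build of the pipeline build config.
oc logs -f bc/fedora-coreos-pipeline
```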
The procedure is the same for updating the pipeline:
```shell
./deploy --update
```
Note that any value you don't pass to `deploy` will be reset to its default value from the `manifests/pipeline.yaml` OpenShift template. This is currently as designed (see #65).
```shell
oc set triggers bc/fedora-coreos-pipeline-mechanical --from-webhook
```
First create the configmap:
```shell
oc create configmap fedora-messaging-cfg --from-file=configs/fedmsg.toml
```
Then add the client secrets:
```shell
oc create secret generic fedora-messaging-coreos-key \
    --from-file=coreos.crt --from-file=coreos.key
```
```shell
echo $coreosbot_token > token
oc create secret generic github-coreosbot-token --from-file=token
```
When hacking locally, it might be useful to look at the contents of the PV to see the builds if one isn't uploading to S3. One alternative to creating a "sleeper" pod with the PV mounted is to expose a simple httpd server:
```shell
oc create -f manifests/simple-httpd.yaml
```
You'll then be able to browse the contents of the PV at:
`http://simple-httpd-fedora-coreos.$CLUSTER_URL`

(Or simply check the output of `oc get route simple-httpd`.)
One can leverage Kubernetes labels to delete all objects related to the pipeline:
```shell
oc delete all -l app=fedora-coreos
```
This won't delete a few resources, notably the PVC and the SA. Deleting the PVC is usually not recommended, since it may require cluster admin access to reallocate it in the future (if it doesn't, feel free to delete it!). To delete the other objects:
```shell
oc delete serviceaccounts -l app=fedora-coreos
oc delete rolebindings -l app=fedora-coreos
oc delete configmaps -l app=fedora-coreos
```