- Overview
- Phabricator integration
- Buildkite pipelines
- Life of a pre-merge check
- Cluster parts
- Enabled projects and project detection
- Agent machines
- Compilation caching
- Buildkite monitoring
- Buildkite orchestrates each build.
- multiple Linux and Windows agents connected to Buildkite. Agents run on Google Cloud Platform.
- a small proxy service that takes build requests from reviews.llvm.org and converts them into Buildkite build requests. The Buildkite job sends build results directly to Phabricator.
- every review creates a new branch in a fork of llvm-project.
On the Phabricator side the following things were configured:

- Harbormaster build plan
- Herald rule for everyone and for beta testers. Note that right now there is no difference between beta and "normal" builds.
- the merge_guards_bot user account for writing comments.
Buildkite allows pipelines to be defined dynamically as the output of a command. That gives us the flexibility to generate pipeline code from a specific branch of pre-merge checks, so changes can be tested before they affect everyone.
For example, "pre-merge" pipeline has a single "setup" step, that checks out this repo and runs a python script to generate further steps:
```shell
# Check out the requested revision of llvm-premerge-checks into a scratch directory.
export SRC="${BUILDKITE_BUILD_PATH}"/llvm-premerge-checks
export SCRIPT_DIR="${SRC}"/scripts
rm -rf "${SRC}"
git clone --depth 1 https://github.com/google/llvm-premerge-checks.git "${SRC}"
cd "${SRC}"
git fetch origin "${ph_scripts_refspec:-main}":x
git checkout x
# Generate the remaining pipeline steps and upload them to Buildkite.
cd "$BUILDKITE_BUILD_CHECKOUT_PATH"
${SCRIPT_DIR}/buildkite/build_branch_pipeline.py | tee /dev/tty | buildkite-agent pipeline upload
```
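To make the flow concrete, here is a minimal sketch of what such a generator could emit; `buildkite-agent pipeline upload` accepts a JSON (or YAML) pipeline document on stdin. The step labels, queue names, and script paths below are invented for illustration and are not the real pipeline definition.

```python
#!/usr/bin/env python3
# Minimal sketch of a pipeline generator: print a pipeline document to stdout
# for `buildkite-agent pipeline upload`. Labels, queues, and script names are hypothetical.
import json
import os

steps = [
    {
        "label": "linux build and test",
        "commands": ["scripts/run_linux_build.sh"],  # hypothetical script
        "agents": {"queue": "linux"},
        # Forward the refspec so downstream steps use the same scripts revision.
        "env": {"ph_scripts_refspec": os.environ.get("ph_scripts_refspec", "main")},
    },
    {
        "label": "windows build and test",
        "commands": ["scripts/run_windows_build.ps1"],  # hypothetical script
        "agents": {"queue": "windows"},
    },
]
print(json.dumps({"steps": steps}))
```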
One typically edits the corresponding script instead of manually updating a pipeline in the Buildkite interface.
When a new diff arrives for review, it triggers a Herald rule ("everyone" or "beta testers"). That in turn sends an HTTP POST request to phab-proxy, which submits a new Buildkite build "diff-checks". All parameters from the original request are put into the build's environment with a "ph_" prefix (to avoid shadowing any Buildkite environment variable). The "ph_scripts_refspec" parameter defines which refspec of llvm-premerge-checks to use ("main" by default).
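As a rough illustration of the proxy's job, the sketch below starts a "diff-checks" build through the Buildkite REST API and forwards the Phabricator parameters with the "ph_" prefix. The organization slug, token variable, and parameter names are assumptions, not the actual phab-proxy implementation.

```python
# Hedged sketch of submitting a diff-checks build; not the real phab-proxy code.
import os
import requests

# Assumption: organization and pipeline slugs; adjust to the real ones.
BUILDS_URL = "https://api.buildkite.com/v2/organizations/llvm-project/pipelines/diff-checks/builds"

def start_diff_checks(phab_params: dict) -> str:
    # Prefix every incoming parameter so it cannot shadow Buildkite's own variables.
    env = {f"ph_{key}": str(value) for key, value in phab_params.items()}
    response = requests.post(
        BUILDS_URL,
        headers={"Authorization": f"Bearer {os.environ['BUILDKITE_API_TOKEN']}"},
        json={"commit": "HEAD", "branch": "main", "env": env},
    )
    response.raise_for_status()
    return response.json()["web_url"]
```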
The diff-checks pipeline (create_branch_pipeline.py) downloads a patch (or series of patches) and applies it to a fork of the llvm-project repository. Then it pushes the new state as a new branch (e.g. "phab-diff-288211") and triggers "premerge-checks" on it (all "ph_" env variables are passed along). This new branch can now be used to reproduce the build or by other tooling. A periodic cleanup-branches pipeline deletes branches older than 30 days.
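A condensed sketch of that branch-creation step is shown below; the real logic lives in create_branch_pipeline.py, and the helper name and git invocations here are illustrative.

```python
# Illustrative only: apply a downloaded patch and publish it as a phab-diff branch.
import subprocess

def push_diff_branch(diff_id: str, patch_file: str) -> str:
    branch = f"phab-diff-{diff_id}"
    subprocess.check_call(["git", "checkout", "-b", branch, "origin/main"])
    subprocess.check_call(["git", "apply", patch_file])
    subprocess.check_call(["git", "commit", "-am", f"apply Phabricator diff {diff_id}"])
    # Push to the llvm-project fork so agents and humans can reproduce the build.
    subprocess.check_call(["git", "push", "origin", branch])
    return branch
```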
The premerge-checks pipeline (build_branch_pipeline.py) builds and tests the changes on Linux and Windows agents, then uploads a combined result to Phabricator.
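For reference, reporting a result back to Phabricator goes through the Conduit API; the sketch below uses harbormaster.sendmessage and is a simplified stand-in for what the pipeline scripts actually do.

```python
# Simplified sketch of reporting a pass/fail verdict to Phabricator via Conduit.
import requests

def report_result(build_target_phid: str, passed: bool, conduit_token: str) -> None:
    response = requests.post(
        "https://reviews.llvm.org/api/harbormaster.sendmessage",
        data={
            "api.token": conduit_token,
            "buildTargetPHID": build_target_phid,
            "type": "pass" if passed else "fail",
        },
    )
    response.raise_for_status()
```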
We use the NGINX ingress for Kubernetes. Right now it is only used to provide basic HTTP authentication and to forward all requests from the load balancer to the Phabricator proxy application.
Follow the up-to-date docs to install the reverse proxy.
[cert-manager](https://cert-manager.io/docs/installation/helm/) is installed with Helm:
```shell
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.9.1 \
  --set installCRDs=true
```
We also have a certificate manager and Let's Encrypt configuration in place, but they are not used at the moment and should be removed if we decide to stay with a static IP.
HTTP auth is configured with the k8s secret 'http-auth' in the 'buildkite' namespace (see how to update auth).
- docker image buildkite-premerge-debian.
- docker image agent-windows-buildkite.
- VMs are managed and updated manually; use RDP to access them.
- there is a 'windows development' VM for Windows-related development.
To reduce build times and mask unrelated problems, we only build and test the projects that were modified by a patch. choose_projects.py uses a manually maintained config file to define inter-project dependencies and excluded projects (a sketch of the selection logic follows the list below):
- Get the prefix (e.g. "llvm", "clang") of all paths modified by a patch.
- Add all dependent projects.
- Add all projects that this extended list depends on, completing the dependency subtree.
- Remove all disabled projects.
All build machines run from Docker containers so that they can be debugged, updated, and scaled easily:
- Linux: we use a Kubernetes deployment to manage these agents.
- Windows: at the moment they run as multiple individual VM instances.
See the playbooks for how to manage and set up machines.
Each build is performed on a clean copy of the git repository. To speed up builds, ccache is used on Linux and sccache on Windows.
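As an illustration (not the actual pre-merge build scripts), a build wrapper can point CMake at the cache through the compiler-launcher variables:

```python
# Sketch of wiring ccache/sccache into a CMake configure step; directory names
# and the Ninja generator choice are assumptions, not the real build scripts.
import platform
import subprocess

def configure(source_dir: str, build_dir: str) -> None:
    launcher = "sccache" if platform.system() == "Windows" else "ccache"
    subprocess.check_call([
        "cmake", "-S", source_dir, "-B", build_dir, "-G", "Ninja",
        f"-DCMAKE_C_COMPILER_LAUNCHER={launcher}",
        f"-DCMAKE_CXX_COMPILER_LAUNCHER={launcher}",
    ])

configure("llvm", "build")
```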
The VM instance buildkite-monitoring exposes Buildkite metrics to GCP. To set up a new instance:
- Create a small Linux VM with full access to the Stackdriver Monitoring API.
- Follow the instructions to install the monitoring agent and enable the statsd plugin.
- Download a recent release of buildkite-agent-metrics.
- Run in an SSH session:
```shell
chmod +x buildkite-agent-metrics-linux-amd64
nohup ./buildkite-agent-metrics-linux-amd64 -token XXXX -interval 30s -backend statsd &
```
Metrics are exported as "custom/statsd/gauge".
TODO: update the "Testing scripts locally" playbook on how to run a Linux build locally with Docker. TODO: migrate 'buildkite-monitoring' to a k8s deployment.