diff --git a/kernelci.org/content/en/legacy/_index.md b/kernelci.org/content/en/legacy/_index.md
deleted file mode 100644
index d9578237..00000000
--- a/kernelci.org/content/en/legacy/_index.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "Legacy"
-date: 2023-08-11
-description: "KernelCI Legacy Architecture"
----
-
-> **Note:** This section describes the legacy KernelCI architecture. Please
-> see the [API transition timeline](/blog/posts/2023/api-timeline/) blog post
-> for more details about when it will be permanently retired.
-
-## KernelCI native tests
-
-KernelCI native tests are orchestrated using the following components:
-
-* [Core tools](../core) contain all the primitive functions implemented in
-  Python as well as the system [configuration](../core/config). This is how
-  kernels are built, test definitions are generated, and so on.
-* [kernelci-backend](https://github.com/kernelci/kernelci-backend) which
- provides an API on top of Mongo DB to store all the data. It also performs
- some post-processing such as generating email reports, detecting regressions
- and triggering automated bisections.
-* [kernelci-frontend](https://github.com/kernelci/kernelci-frontend) which
- provides a web dashboard such as the one hosted on
- [linux.kernelci.org](https://linux.kernelci.org). This makes use of the backend API
- to retrieve results.
-* [kernelci-jenkins](https://github.com/kernelci/kernelci-jenkins) to run a
- [Jenkins](https://www.jenkins.io/) instance and orchestrate all the builds
- and tests being scheduled. It also relies on
- [Kubernetes](https://kubernetes.io/) provided by [Microsoft
- Azure](https://azure.microsoft.com/) and [Google Compute
- Engine](https://cloud.google.com/) to run all the kernel builds.
-* [Test labs](../labs), typically though not exclusively using LAVA, are
-  hosted by people and organisations outside of the KernelCI project. They
-  are however connected to KernelCI services to run tests and send results
-  directly to the backend.
-
-There are several [instances](/legacy/instances) hosted by the KernelCI
-project, for different needs as explained in the documentation. Each instance
-is made up of all the components listed above. It's possible for anyone to set
-up their own private instance too. However, developers typically don't need to
-set up a full instance but only the components they need to make changes to.
-Here's how they all relate to each other:
-
-```mermaid
-graph TD
- frontend(Web frontend) --> backend(Backend API)
- backend --> mongo[(Mongo DB)]
- core(Core tools) -.-> backend
- core -.-> lava[LAVA labs]
- lava -.-> backend
- jenkins(Jenkins) --> core
- jenkins --> k8s[Kubernetes]
-```
-
-Dotted lines are optional dependencies, and solid lines are required ones. To
-put this in words:
-
-The Core tools can be used on their own on the command line without anything
-else installed. They may be used to build kernels locally, submit data to a
-Backend API, or schedule jobs in test labs such as LAVA. Jobs may be run
-without any Backend API, but if there is one then results can be sent to it.
-Jenkins uses the Core tools to do all of the above, relying on Kubernetes for
-kernel builds. Finally, the Web frontend uses a Backend API but nothing
-depends on it (apart from end users), so it's entirely optional.
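-
-For example, here's what using the Core tools standalone to generate test job
-definitions might look like, reusing the `kci_test` invocation shown in the
-ChromeOS instance documentation (a sketch: the lab, plan and database names
-are placeholders):
-
-```
-./kci_test generate --lab-config=lab-collabora --install-path=_install_ \
-    --output=jobs --db-config=staging.kernelci.org --plan=baseline
-```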
diff --git a/kernelci.org/content/en/legacy/bisection.md b/kernelci.org/content/en/legacy/bisection.md
deleted file mode 100644
index 26c14157..00000000
--- a/kernelci.org/content/en/legacy/bisection.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-title: "Bisection"
-date: 2021-02-10T11:41:21Z
-description: "KernelCI Automated Bisection support"
----
-
-## Why run automated bisections?
-
-KernelCI periodically monitors a number of kernel branches (mainline, stable,
-next...), and builds them when it detects new revisions. It then runs some
-tests with the resulting kernel binaries on a variety of platforms. When a
-test fails, it compares the results with previous kernel revisions from that
-same branch and on the same platform. If it was working previously, then
-KernelCI has detected a new failure and stores it as a regression.
-
-As there may have been an incoming branch merge with many commits between the
-last working revision and the now failing one, a bisection is needed in order
-to isolate the individual commit that introduced the failure. At least, that
-is the idea; it can get more complicated if several different failures were
-introduced in the meantime or if the branch got rebased. In many cases, it
-works.
-
-## How does it work?
-
-The KernelCI automated bisection is implemented as a [Jenkins Pipeline
-job](https://github.com/kernelci/kernelci-jenkins/blob/main/jobs/bisect.jpl),
-with some functionality in Python.
-
-The current status of automated bisection is as follows:
-
-- Triggered for each regression found, with some logic to avoid duplicates
-- Run on mainline, stable, next and several maintainer trees
-- Several checks are in place to avoid false positives:
- - Check the initial good and bad revisions coming from the regression
- - When the bisection finds a commit, check that it does fail 3 times
- - Revert the found commit in-place and check that it does pass 3 times
- - When started manually, it's also possible to test each kernel
- iteration several times
-- Send an email report to a set of recipients determined from the
- breaking commit found
-
-## Where are the results?
-
-The bisection results are only shared by email. The
-[kernelci-results](https://groups.io/g/kernelci-results/topics) list is always
-on Cc so all the reports can be found in the archive.
-
-## What's left to do?
-
-Potential improvements to the automated bisection include:
-
-- Dealing with persistent regressions that keep producing the same bisection
- results: reply to previous email report rather than sending a new one.
-- Possibility to manually start semi-automatic bisections for special cases,
- with parameters entered by hand.
-- Better support for identifying which tree a bisection failure occurs in (eg,
- identifying if a bisection failure was merged in from mainline and reporting
- it as a mainline issue).
-- Include a list of other platforms or configurations that show the same
-  regression as the one used for the bisection.
-- Web interface for viewing bisection results.
diff --git a/kernelci.org/content/en/legacy/contrib.md b/kernelci.org/content/en/legacy/contrib.md
deleted file mode 100644
index 0f31f660..00000000
--- a/kernelci.org/content/en/legacy/contrib.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "Contributing Guidelines"
-date: 2022-12-08T09:21:56Z
-draft: false
-weight: 2
----
-
-The KernelCI core project is open for contributions. Contributions may
-consist of adding new builds, tests and device types as well as features and
-bugfixes for the KernelCI core tools.
-The best way to contribute is to send a PR to [kernelci-core](https://github.com/kernelci/kernelci-core).
-When the PR is created, the [KernelCI staging](https://docs.kernelci.org/instances/staging)
-instance takes care of updating the [staging.kernelci.org branch](https://github.com/kernelci/kernelci-core/tree/staging.kernelci.org).
-In general the branch is updated every 8h and a limited set of builds and tests
-are run on it. More detailed information about the logic behind staging runs can
-be found [here](https://docs.kernelci.org/instances/staging).
-
-There are several guidelines which can facilitate the PR review process:
-
-1. Make sure the PR is well described
- 1. Describe the purpose of the changes
- 2. Example use cases are welcome
-2. Attach staging build/test results when possible. Keep in mind that staging jobs are run every 8 hours.
- 1. If the PR is expected to produce build/test results
- check [staging dashboard](https://staging.kernelci.org) and make sure these are mentioned in the PR comment
-   1. Build artifacts including logs are not kept permanently (9 days for staging), so if you want them to be part of the PR it's generally recommended to copy them somewhere more durable. Good ways to do that are:
-      * Putting important information such as log fragments in the PR comments
-      * Using services like [pastebin](https://pastebin.com/) to store data important for the PR (e.g. full logs) and pasting the links.
- 2. If the results are not visible on staging and you think they should be, mention it in the comments, too
-3. Make sure that reviewers' comments and questions are addressed
-   1. PRs with comments left unanswered for more than a month will be closed
-4. If you need to discuss the PR with the KernelCI maintainers, join the open hours
- 1. Open hours take place every Thursday at 12:00 UTC at KernelCI [Jitsi](https://meet.kernel.social/kernelci-dev)
-5. Should you need help, you can reach KernelCI [maintainers](/org/maintainers/)
diff --git a/kernelci.org/content/en/legacy/core b/kernelci.org/content/en/legacy/core
deleted file mode 120000
index 56e7e986..00000000
--- a/kernelci.org/content/en/legacy/core
+++ /dev/null
@@ -1 +0,0 @@
-../../../external/kernelci-core/doc
\ No newline at end of file
diff --git a/kernelci.org/content/en/legacy/how-to.md b/kernelci.org/content/en/legacy/how-to.md
deleted file mode 100644
index ad21a4a3..00000000
--- a/kernelci.org/content/en/legacy/how-to.md
+++ /dev/null
@@ -1,347 +0,0 @@
----
-title: "How-To"
-date: 2024-01-18
-description: "How to add a new native test suite"
----
-
-This guide contains all the steps needed to add a native test suite to
-KernelCI. It will cover the [LAVA](../../labs/lava) use-case in particular as
-it is currently the most popular framework to run tests for KernelCI.
-
-## Background
-
-KernelCI is still very much about running tests on hardware platforms. More
-abstract tests such as static analysis and KUnit are on the horizon but they
-are not quite supported yet. So this guide will only cover functional testing
-on "real" hardware.
-
-The only moving part is the kernel, which will get built in many flavours for
-every revision covered by KernelCI. All the tests are part of fixed root file
-systems (rootfs), which get updated typically once a week. Adding a test
-therefore involves either reusing an existing rootfs or creating a new one.
-
-A good way to start prototyping things is to use the plain [Debian Bullseye NFS
-rootfs](https://storage.kernelci.org/images/rootfs/debian/bullseye/) and install
-or compile anything at runtime on the target platform itself. This is of
-course slower and less reliable than using a tailored rootfs with everything
-already set up, but it allows a lot more flexibility. It is the approach
-followed in this guide: first using a generic rootfs and then creating a
-dedicated one.
-
-## A simple test
-
-For the sake of this guide, here's a very simple test to check the current OS
-is Linux:
-
-```
-[ $(uname -o) = "GNU/Linux" ]
-```
-
-Let's test this locally first, just to prove it works:
-
-```
-$ [ $(uname -o) = "GNU/Linux" ]
-$ echo $?
-0
-```
-
-and to prove it would return an error if the test failed:
-
-```
-$ [ $(uname -o) = "OpenBSD" ]
-$ echo $?
-1
-```
-
-All the steps required to enable this test to run in KernelCI are detailed
-below. There is also a sample Git branch with the changes:
-
- https://github.com/kernelci/kernelci-core/commits/how-to-bullseye
-
-## Step 1: Enable basic test plan
-
-The first step is to make the minimal changes required to run the command
-mentioned above.
-
-### LAVA job template
-
-See commit: [`config/lava: add uname test plan for the How-To
-guide`](https://github.com/kernelci/kernelci-core/commit/b1464ff3986ac70513c251f3f1d87f892c556d61)
-
-KernelCI LAVA templates use [Jinja](https://jinja.palletsprojects.com/). To
-add this `uname` test plan, create a template file
-`config/lava/uname/uname.jinja2`:
-
-```yaml
-- test:
- timeout:
- minutes: 1
- definitions:
- - repository:
- metadata:
- format: Lava-Test Test Definition 1.0
- name: {{ plan }}
- description: "uname"
- os:
- - oe
- scope:
- - functional
- run:
- steps:
- - lava-test-case uname-os --shell '[ $(uname -o) = "GNU/Linux" ]'
- from: inline
- name: {{ plan }}
- path: inline/{{ plan }}.yaml
- lava-signal: kmsg
-```
-
-This is pretty much all boilerplate, except for the `lava-test-case` line
-which runs the test and uses the exit code to set the result (0 is pass, 1 is
-fail). Some extra templates need to be added for each boot method, such as
-GRUB, U-Boot and Depthcharge. For example, here's the Depthcharge one to use
-on Chromebooks, `generic-depthcharge-tftp-nfs-uname-template.jinja2`:
-
-```yaml
-{% extends 'boot-nfs/generic-depthcharge-tftp-nfs-template.jinja2' %}
-{% block actions %}
-{{ super () }}
-
-{% include 'uname/uname.jinja2' %}
-
-{% endblock %}
-```
-
-The name of the template follows a convention to automatically generate the
-variant required for a particular platform. This one is for the `uname` test
-plan on a platform using `depthcharge`, with the kernel downloaded over `tftp`
-and the rootfs available over `nfs`.
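-
-Following this convention, the full set of templates for the `uname` plan
-could look like this (an illustrative listing: only the Depthcharge variant
-is shown above, and the GRUB and U-Boot file names are assumptions following
-the same naming pattern):
-
-```
-config/lava/uname/
-├── uname.jinja2
-├── generic-depthcharge-tftp-nfs-uname-template.jinja2
-├── generic-grub-tftp-nfs-uname-template.jinja2
-└── generic-uboot-tftp-nfs-uname-template.jinja2
-```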
-
-### KernelCI YAML configuration
-
-See commit: [`config/core: enable uname test plan using Bullseye
-NFS`](https://github.com/kernelci/kernelci-core/commit/e7c1a1a0277fec215b778da3ada8885581464a16)
-
-Once the LAVA templates have been created, the next step is to enable the test
-plan in the [KernelCI YAML configuration](/legacy/core/config/).
-
-First add the `uname` test plan with the chosen rootfs (Debian Bullseye NFS in
-this case) in `test-configs.yaml`:
-
-```yaml
-test_plans:
-
- uname:
- rootfs: debian_bullseye_nfs
-```
-
-Then define which platforms should run this test plan, still in
-`test-configs.yaml`:
-
-```yaml
-test_configs:
-
- - device_type: hp-11A-G6-EE-grunt
- test_plans:
- - uname
-
- - device_type: minnowboard-max-E3825
- test_plans:
- - uname
-```
-
-Each test plan also needs to be enabled to run in particular test labs in
-`runtime-configs.yaml`. Some labs, such as the Collabora one, allow all tests
-to be run; that lab contains the platforms listed above, so no extra changes
-are required at this point. A sketch of what a more restrictive lab entry
-might look like is shown below.
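-
-For reference, a lab entry restricted to specific plans might look something
-like this in `runtime-configs.yaml` (a hypothetical sketch: `lab-example` and
-its URL are placeholders, and the exact filter schema may differ per lab):
-
-```yaml
-labs:
-
-  lab-example:
-    lab_type: lava
-    url: 'https://lava.example.com/'
-    filters:
-      - passlist: {plan: ['uname']}
-```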
-
-
-These changes are enough to make an initial pull request in
-[`kernelci-core`](https://github.com/kernelci/kernelci-core), and the test will
-automatically get run on [staging](/legacy/instances/staging/). Then the
-results will appear on the [web dashboard](https://staging.kernelci.org/job/).
-
-> **Note** First-time contributors need to be added to the [list of trusted
-GitHub
-users](https://github.com/kernelci/kernelci-deploy/blob/main/data/staging.ini#L4)
-by a maintainer before their pull requests get merged and deployed on staging.
-
-## Step 2: Modify the file system at runtime
-
-Most tests will require more than what is already available in a plain Bullseye
-rootfs. Let's see how this can be done in a simple way.
-
-### Add a C file: uname-os.c
-
-See commit: [`config/lava: add
-uname-os.c`](https://github.com/kernelci/kernelci-core/commit/d31ee9462c0edf680509431f01d7ffce0ef23074)
-
-For example, we could have the test implemented as a C program rather than a
-shell script. See the
-[`uname-os.c`](https://github.com/kernelci/kernelci-core/blob/how-to-bullseye/config/lava/uname/uname-os.c)
-file.
-
-To test it locally:
-
-```
-$ gcc -o uname-os uname-os.c
-$ ./uname-os
-System: 'Linux', expected: 'Linux', result: PASS
-$ echo $?
-0
-```
-
-and to check that it would fail if the OS name was not the expected one:
-
-```
-$ ./uname-os OpenBSD
-System: 'Linux', expected: 'OpenBSD', result: FAIL
-$ echo $?
-1
-```
-
-Now, let's see how this can be used with KernelCI.
-
-### Build it and run the C implementation
-
-See commit: [`config/lava: download and build uname-os.c and use
-it`](https://github.com/kernelci/kernelci-core/commit/66eb1aab440157747d458e088610a1764b983441)
-
-Any arbitrary commands can be added to the `uname.jinja2` template before
-running the actual test cases. In this example, we can install Debian packages
-then download the `uname-os.c` file and compile it to be able to finally run it
-as a test case:
-
-```yaml
- steps:
- - apt update
- - apt install -y wget gcc
- - wget https://raw.githubusercontent.com/kernelci/kernelci-core/how-to-bullseye/config/lava/uname/uname-os.c
- - gcc -o uname-os uname-os.c
- - lava-test-case uname-os-shell --shell '[ $(uname -o) = "GNU/Linux" ]'
- - lava-test-case uname-os-c --shell './uname-os'
-```
-
-We now have 2 test cases, one with the shell version and one with the C
-version. After updating the pull request on GitHub, this will also get tested
-automatically on staging.
-
-> **Note** If one of the steps fails, the job will abort. So if `apt install`
-or `wget` fails, the tests won't be run and the LAVA job status will show an
-error.
-
-> **Note** Some test labs don't enable a route to the Internet from their
-hardware platforms so installing things at runtime may not always work. This
-would typically be discussed as part of the pull request review, depending on
-what the jobs are trying to do and which device types have been enabled to run
-them.
-
-## Step 3: Going further
-
-With Step 2, pretty much anything can already be run within the limitations of
-the CPU and network bandwidth on the target platform. Even if this doesn't
-take too long to run, there are many reasons why it's not really suitable to
-enable in production. There's no point installing the same packages and
-building the same source code over and over again: it adds up to significant
-wasted resources and extra causes of test job failures. Once the steps
-required to run a test suite are well defined, having a rootfs image with
-everything pre-installed solves these issues.
-
-Then for more complex tests, results may be produced in other forms than an
-exit code from a command. Some file may need to be parsed, or any extra logic
-may need to be added. For example, the
-[`v4l2-parser.sh`](https://github.com/kernelci/kernelci-core/blob/main/config/rootfs/debos/overlays/v4l2/usr/bin/v4l2-parser.sh)
-script will run `v4l2-compliance` and parse the output to then call
-`lava-test-case` for each result found. It also uses [LAVA test
-sets](https://docs.lavasoftware.org/lava/actions-test.html#testsets), which is
-a more advanced feature for grouping test results together inside a test suite.
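-
-As a simplified illustration of that pattern, a parser could look something
-like the sketch below. This is not the actual `v4l2-parser.sh`; the output
-format matched by `grep` and `sed` is an assumption, but `lava-test-case
---result` is the standard LAVA helper for reporting a result directly:
-
-```
-#!/bin/sh
-# Sketch: run the compliance suite, then report one LAVA test case per
-# result line found in its output (the patterns are illustrative).
-v4l2-compliance > output.log 2>&1
-grep -E '^test .*: (OK|FAIL)' output.log | while read -r line; do
-    # Derive a test case name, e.g. from "test VIDIOC_QUERYCAP: OK"
-    name=$(echo "$line" | sed -e 's/^test //' -e 's/:.*//' | tr ' /' '--')
-    case "$line" in
-        *": OK"*) result=pass ;;
-        *) result=fail ;;
-    esac
-    lava-test-case "$name" --result "$result"
-done
-```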
-
-### Adding a rootfs variant
-
-Root file systems are built using the
-[`kci_rootfs`](/legacy/core/kci_rootfs) command. All the variants are
-defined in the
-[`config/core/rootfs-configs.yaml`](https://github.com/kernelci/kernelci-core/blob/main/config/core/rootfs-configs.yaml)
-file with some parameters. There are also extra dedicated files in
-[`config/rootfs`](https://github.com/kernelci/kernelci-core/tree/main/config/rootfs/)
-such as additional build scripts.
-
-Let's take a look at the `bullseye-v4l2` rootfs for example:
-
-```yaml
-rootfs_configs:
- bullseye-v4l2:
- rootfs_type: debos
- debian_release: bullseye
- arch_list:
- - amd64
- - arm64
- - armhf
- extra_packages:
- - libasound2
- - libelf1
- - libjpeg62-turbo
- - libudev1
- extra_packages_remove:
- - bash
- - e2fslibs
- - e2fsprogs
- extra_firmware:
- - mediatek/mt8173/vpu_d.bin
- - mediatek/mt8173/vpu_p.bin
- - mediatek/mt8183/scp.img
- - mediatek/mt8186/scp.img
- - mediatek/mt8192/scp.img
- - mediatek/mt8195/scp.img
- script: "scripts/bullseye-v4l2.sh"
- test_overlay: "overlays/v4l2"
-```
-
-* `arch_list` is to define for which architectures the rootfs should be built.
-* `extra_packages` is a list passed to the package manager to install them.
-* `extra_packages_remove` is a list passed to the package manager to remove
- them.
-* `extra_firmware` is a list of Linux kernel firmware blobs to be installed in
- the rootfs image.
-* `script` is an arbitrary script to be run after packages have been installed.
- In this case, it will build and install the `v4l2` tools to be able to run
- `v4l2-compliance`.
-* `test_overlay` is the path to a directory with extra files to be copied on
- top of the file system. In this case, it will install the `v4l2-parser.sh`
- script to parse the output of the test suite and report test case results to
- LAVA:
-
- ```
- $ tree config/rootfs/debos/overlays/v4l2/
- config/rootfs/debos/overlays/v4l2/
- └── usr
- └── bin
- └── v4l2-parser.sh
- ```
-
-Here's a sample command using `kci_rootfs` to build the `bullseye-v4l2` root file
-system for `amd64`:
-
-```
-$ docker run -it \
- -v $PWD:/tmp/kernelci-core \
- --privileged \
- --device /dev/kvm \
- kernelci/debos
-root@759fc147da29:/# cd /tmp/kernelci-core
-root@759fc147da29:~/kernelci-core# ./kci_rootfs build \
- --rootfs-config=bullseye-v4l2 \
- --arch=amd64
-```
-
-### Writing more advanced test definitions
-
-Running fully featured test suites can involve more than just invoking a few
-commands with the `lava-test-case` helper. This very much depends on the test
-itself. Existing KernelCI native tests such as `v4l2-compliance`, `ltp`,
-`kselftest`, `igt` and others provide good examples for how to do this. The
-[`test-definitions`](https://github.com/kernelci/test-definitions) repository
-(forked from Linaro for KernelCI) can also be used as a reference, and new
-tests may even be added there to be able to use them in LAVA outside of
-KernelCI. Finally, the LAVA documentation about [writing
-tests](https://docs.lavasoftware.org/lava/writing-tests.html) describes all the
-available features in detail.
diff --git a/kernelci.org/content/en/legacy/instances/_index.md b/kernelci.org/content/en/legacy/instances/_index.md
deleted file mode 100644
index c956afef..00000000
--- a/kernelci.org/content/en/legacy/instances/_index.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "Instances"
-date: 2023-08-11
-description: "KernelCI public instances"
----
-
-There are a number of KernelCI instances for various purposes. The main
-"production" one on [linux.kernelci.org](https://linux.kernelci.org) is
-dedicated to the upstream Linux kernel. It gets updated typically once a week
-and continuously builds and tests all the trees listed in
-[`build-configs.yaml`](https://github.com/kernelci/kernelci-core/blob/main/config/core/build-configs.yaml).
-The "staging" instance on [staging.kernelci.org](https://staging.kernelci.org)
-is used to test changes made to KernelCI itself before they get deployed in
-production. Then there are specific instances such as the ChromeOS one on
-[chromeos.kernelci.org](https://chromeos.kernelci.org) and
-the CIP one on [cip.kernelci.org](https://cip.kernelci.org).
diff --git a/kernelci.org/content/en/legacy/instances/chromeos/_index.md b/kernelci.org/content/en/legacy/instances/chromeos/_index.md
deleted file mode 100644
index 1b91b919..00000000
--- a/kernelci.org/content/en/legacy/instances/chromeos/_index.md
+++ /dev/null
@@ -1,316 +0,0 @@
----
-title: "ChromeOS"
-date: 2022-09-23
-description: "The chromeos.kernelci.org instance"
-weight: 3
----
-
-The [chromeos.kernelci.org](https://chromeos.kernelci.org) instance is
-dedicated to testing upstream kernels with
-[ChromeOS](https://www.google.com/chromebook/chrome-os/) user-space. It
-focuses primarily on LTS kernel branches (5.15, 5.10) with some ChromeOS
-configuration fragments to run open-source builds of [Chromium
-OS](https://www.chromium.org/chromium-os/) on
-[Chromebook](https://www.google.com/intl/en_uk/chromebook/) devices.
-
-While the main [linux.kernelci.org](https://linux.kernelci.org) instance uses
-generic distros such as buildroot and Debian, ChromeOS is specific to a range
-of products. This may lead to different results compared to Debian for the
-same test when user-space components have a role to play (for example, the C
-library or the toolchain). Also, some ChromeOS kernels may be built by
-KernelCI in the future using production branches which are not upstream. For
-these reasons, a separate instance proved necessary in order to keep the main
-production instance entirely dedicated to upstream.
-
-The [Tast](https://chromium.googlesource.com/chromiumos/platform/tast/)
-framework provides a comprehensive series of tests to run on ChromeOS. This is
-typically what the ChromeOS KernelCI instance is running, in addition to some
-generic tests such as kselftest, LTP etc.
-
-## Development workflow
-
-While the ChromeOS KernelCI instance is hosted on the production server, it has
-a development workflow closer to [staging](../staging). An integration branch
-`chromeos.kernelci.org` is created for each GitHub code repository used on the
-ChromeOS instance with all the pending pull requests on top of a `chromeos`
-branch. This is analogous to the `staging.kernelci.org` branch which is based
-on `main` instead.
-
-Pull requests for the chromeos.kernelci.org instance should be made with the
-`chromeos` branch as a base and commits should include `CHROMEOS` in their
-subject. For example:
-
-```
-CHROMEOS section: Describe your patch
-```
-
-Once the changes have run successfully, and once reviewed and approved, they
-get merged on the `chromeos` branch. This branch then gets rebased
-periodically on top of `main`, typically once a week after each production
-update.
-
-Changes on the `chromeos` branch may be merged into the `main` branch to reduce
-the number of "downstream" commits it contains. This can involve refactoring
-the changes to make them suitable for the general use-cases, renaming things to
-follow some conventions, squashing commits etc.
-
-> **Note** A potential new feature could be to add a `chromeos` label to pull
-> requests made against the `main` branch so that they also get deployed via
-> the `chromeos.kernelci.org` branch but are merged directly into `main`.
-
-## How ChromeOS tests are run by KernelCI
-
-It is assumed that the reader has some experience with
-[`LAVA`](https://www.lavasoftware.org/index.html).
-
-With the Chromebook boot process used on products, the bootloader
-([Coreboot](https://www.coreboot.org/) /
-[Depthcharge](https://chromium.googlesource.com/chromiumos/platform/depthcharge))
-looks for a special kernel partition, and then loads the latest working kernel.
-See the [ChromeOS boot
-documentation](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/disk_format.md#Google-ChromeOS-devices)
-for more details. In KernelCI, each boot needs to be done with a different
-kernel image. For this reason, a modified version of Depthcharge is used to
-load the kernel image over TFTP via an interactive command line interface on
-the serial console. This is managed by the [LAVA
-depthcharge](https://docs.lavasoftware.org/lava/actions-boot.html#depthcharge)
-boot method.
-
-An additional complication is that ChromeOS can't be easily run over NFS. It
-requires a specific partition layout, and running on the eMMC provides similar
-performance to a regular product. Many tests are about performance so this is
-a major aspect to take into account. It was also not designed to boot with an
-initrd, and as a result the kernel modules have to be installed on the eMMC
-root partition before the Chromebook starts. This is done via a 2-stage LAVA
-job, as can be seen in the
-[`cros-boot.jinja2`](https://github.com/kernelci/kernelci-core/blob/chromeos/config/lava/chromeos/cros-boot.jinja2)
-LAVA job template. The Chromebook first boots with Debian NFS to install the
-kernel modules on the eMMC (`modules` namespace), then reboots with ChromeOS
-(`chromeos` namespace).
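-
-In outline, the job structure looks something like this (a simplified sketch
-of the namespaces involved, not the literal contents of the template):
-
-```yaml
-actions:
-  # Stage 1: boot Debian over NFS to install the kernel modules on the eMMC
-  - deploy:
-      namespace: modules
-      to: tftp
-  - boot:
-      namespace: modules
-      method: depthcharge
-  - test:
-      namespace: modules
-      # steps copying the modules onto the eMMC root partition
-  # Stage 2: reboot into the ChromeOS image on the eMMC
-  - boot:
-      namespace: chromeos
-      method: depthcharge
-  - test:
-      namespace: chromeos
-      # the actual test cases, run against the kernel under test
-```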
-
-> **Note** It is worth noting that testing in **QEMU** is a bit different from
-> testing on Chromebooks as the image can be manipulated more easily.
-> Installing the modules is done in the rootfs file using deploy postprocess
-> before booting the QEMU image. See the
-> [`cros-boot-qemu.jinja2`](https://github.com/kernelci/kernelci-core/blob/chromeos.kernelci.org/config/lava/chromeos/cros-boot-qemu.jinja2)
-> LAVA job template for more details.
-
-In the next step, we expect a login prompt on the serial console and a
-successful login over it. The serial console is not used to run tests;
-instead, they are run over SSH from a Docker container running on a LAVA
-server. This proves to be more reliable, as serial console hardware can more
-easily have issues and kernel console messages can interfere with test
-results. The only downside is that we can't detect networking errors this way.
-
-## Types of tests performed
-
-At the moment, we are running baseline tests that check for critical errors
-when loading the kernel (dmesg), as well as the internal ChromeOS test suite,
-Tast.
-
-Tests with a `-fixed` suffix use a downstream kernel (provided by Google or
-compiled inside the ChromeOS rootfs) and serve as a "reference" to compare
-against the tested, upstream kernel.
-
-## Building and structure of the ChromeOS rootfs
-
-Building a ChromeOS rootfs image follows the [`Google recipe`](https://chromium.googlesource.com/chromiumos/docs/+/main/developer_guide.md#Building-ChromiumOS)
-with our own tweaks:
-* Enable USB serial support for the console
-* Build not just from a specific manifest tag, but pin the build (via a
-manifest snapshot) to specific commits; this allows us to get identical
-builds and consistent test results on all devices
-* Add the generated ".bin" image to a special "flasher" debos image to save
-time
-* Extract the kernel and modules for -fixed tests from the generated image
-and publish them separately as bzImage and modules.tar.xz
-* Extract the tast files required to run Tast tests in LAVA Docker (tast.tgz)
-
-You can build it manually using the following commands, for QEMU (amd64-generic):
-```
-git clone -b chromeos https://github.com/kernelci/kernelci-core/
-cd kernelci-core
-mkdir temp
-chmod 0777 temp
-docker run \
- --name kernelci-build-cros \
- -itd \
- -v $(pwd):/kernelci-core \
- --device /dev/kvm \
- -v /dev:/dev \
- --privileged \
- kernelci/cros-sdk
-docker exec \
- -it kernelci-build-cros \
- ./kci_rootfs \
- build \
- --rootfs-config chromiumos-amd64-generic \
- --data-path config/rootfs/chromiumos \
- --arch amd64 \
- --output temp
-```
-Alternatively, specify the appropriate parameters in the Jenkins
-chromeos/rootfs-builder job; the rootfs will then be automatically published
-on the [`storage server`](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/)
-
-## How to add a new Chromebook type to KernelCI
-
-To build the rootfs (used for flashing the Chromebook, plus the kernel and
-modules for -fixed tests), you need to add a fragment similar to the
-following to the config file **config/core/rootfs-configs-chromeos.yaml**:
-```yaml
- chromiumos-octopus:
- rootfs_type: chromiumos
- board: octopus
- branch: release-R100-14526.B
- serial: ttyS1
- arch_list:
- - amd64
-```
-* **board**: the codename of the Chromebook board; check for yours in this
-[`list`](https://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices/)
-
-* **branch**: it is recommended to use the same one as other Chromebooks,
-unless an exception needs to be made
-(for example, a new model is only supported in the latest release)
-
-* **serial**: hardware dependent; you need to check the Chromebook
-manual for the correct value.
-
-In order to add a Chromebook to the testing, you need to add a similar
-entry to the config file **config/core/test-configs-chromeos.yaml**:
-```
- asus-C436FA-Flip-hatch_chromeos:
- base_name: asus-C436FA-Flip-hatch
- mach: x86
- arch: x86_64
- boot_method: depthcharge
- filters: &pineview-filter
- - passlist: {defconfig: ['chromeos-intel-pineview']}
- params:
- block_device: nvme0n1
- cros_flash_nfs:
- base_url: 'https://storage.kernelci.org/images/rootfs/debian/bullseye-cros-flash/20220623.0/amd64/'
- initrd: 'initrd.cpio.gz'
- initrd_compression: 'gz'
- rootfs: 'full.rootfs.tar.xz'
- rootfs_compression: 'xz'
- cros_flash_kernel:
- base_url: 'http://storage.chromeos.kernelci.org/images/kernel/v5.10.112-x86-chromebook/'
- image: 'kernel/bzImage'
- modules: 'modules.tar.xz'
- modules_compression: 'xz'
- cros_image:
- base_url: 'https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20220621.0/amd64/'
- flash_tarball: 'cros-flash.tar.gz'
- flash_tarball_compression: 'gz'
- tast_tarball: 'tast.tgz'
- reference_kernel:
- image: 'https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20220621.0/amd64/bzImage'
- modules: 'https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20220621.0/amd64/modules.tar.xz'
-```
-* **filters** indicates which kernel builds are suitable for this Chromebook.
-Fragment names can be found from the board name in the ChromeOS sources.
-
-For example, for octopus:
-`chromiumos-sdk/src/overlays/baseboard-octopus/profiles/base/parent` contains
-`chipset-glk:base`, and then
-`chromiumos-sdk/src/overlays/chipset-glk/profiles/base/make.defaults` sets
-`CHROMEOS_KERNEL_SPLITCONFIG="chromeos-intel-pineview"`.
-
-* **block_device**: the device name of the Chromebook's persistent storage.
-On some devices this may be eMMC (mmcblkN) or NVMe, as in this case;
-in some exceptional cases it can be set to detect (like on "grunt",
-where the block device name is not persistent)
-* **cros_flash_nfs** points to the debos filesystem that flashes the Chromebook.
-* **cros_flash_kernel** points to the upstream kernel that will be used for
-the above system (it must support the peripherals of the flashed Chromebook,
-especially persistent storage such as eMMC, NVMe, etc)
-* **cros_image** points to **cros_flash_nfs.rootfs** repacked together with
-the rootfs .bin image we built
-* **reference_kernel** is the ChromeOS downstream kernel that is used for
-the -fixed tests; in some cases this is the kernel provided by Google, and
-in some cases it is extracted from the rootfs we built.
-
-Also add a snippet with the tests you want to run in the test configs section:
-```
- - device_type: asus-C436FA-Flip-hatch_chromeos
- test_plans:
- - cros-boot
- - cros-boot-fixed
- - cros-tast
- - cros-tast-fixed
-```
-
-## How to prepare a known type of Chromebook for KernelCI testing
-
-If you have a new Chromebook, connected it to LAVA and want to set it up so
-it can be used in KernelCI, you need to flash the Chromebook with the
-firmware (rootfs) compiled and set in the KernelCI configuration files.
-
-The easiest way is to generate a LAVA template for the flashing job.
-If you look in test-configs-chromeos.yaml you will see the "cros-flash"
-test definition.
-To generate a flashing template for "octopus", add **cros-flash** to your
-local configuration in **test-configs-chromeos.yaml**, for example like this:
-```
- - device_type: hp-x360-12b-ca0010nr-n4020-octopus_chromeos
- test_plans:
- - cros-flash
-```
-Then generate the LAVA job template, using any suitable kernel:
-
-```
-./kci_test generate --lab-config=lab-collabora --install-path=_install_ \
-    --output=jobs --db-config=staging.kernelci.org \
-    --storage=https://storage.chromeos.kernelci.org/ \
-    --plan=cros-flash
-```
-Installing the OS image can take quite a long time (10-15 minutes); a
-successful flashing log ends with the sync, partprobe and exit commands.
-
-To check that the new firmware was successfully loaded, it is recommended
-to generate the **cros-baseline-fixed** job definition for LAVA in a similar
-way.
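-
-For example, reusing the same `kci_test` invocation with a different plan
-(assuming the same lab, database and storage settings as above):
-
-```
-./kci_test generate --lab-config=lab-collabora --install-path=_install_ \
-    --output=jobs --db-config=staging.kernelci.org \
-    --storage=https://storage.chromeos.kernelci.org/ \
-    --plan=cros-baseline-fixed
-```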
-
-## "cros://" kernel configuration fragments
-
-Perhaps by looking at the source code at ChromeOS branch in kernelci-core
-you have already noticed unusual kernel fragments with prefix **"cros://"**.
-
-As this instance kernel builds is different from the "main" ones
-in many cases it borrows the configuration fragments from the ChromeOS source code.
-
-For example **"cros://chromeos-5.10/armel/chromiumos-arm.flavour.config"** using configuration fragment archive file:
-https://chromium.googlesource.com/chromiumos/third_party/kernel/+archive/refs/heads/chromeos-5.10/chromeos/config.tar.gz
-then fragments from it:
-* **base.config**
-* architecture dependent (in our case armel) **armel/common.config**
-* **chromeos/armel/chromiumos-arm.flavour.config** from this archive.
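-
-As a sketch, the fragments can be inspected manually by downloading and
-unpacking that archive (the paths inside the archive are assumed to match
-the fragment names above):
-
-```
-wget -O config.tar.gz \
-    https://chromium.googlesource.com/chromiumos/third_party/kernel/+archive/refs/heads/chromeos-5.10/chromeos/config.tar.gz
-mkdir cros-config
-tar -xzf config.tar.gz -C cros-config
-less cros-config/base.config
-less cros-config/armel/common.config
-```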
-
-## Prebuilt ChromiumOS images published by KernelCI
-
-KernelCI builds and publishes ChromiumOS images for supported boards at the
-following location:
-
-https://storage.chromeos.kernelci.org/images/rootfs/chromeos/
-
-Please consult the official [ChromiumOS flashing instructions](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/cros_flash.md)
-for examples of how to install the images within the published `chromiumos_test_image.bin.gz`.
-
-KernelCI maintains a [Changelog](../chromeos/chromeos_changelog/) which tracks the evolution
-between published ChromiumOS releases as well as divergences between the KernelCI
-images and ChromiumOS upstream.
-
-## Upstreaming ChromiumOS changes
-
-Sometimes we have to fix issues and push changes upstream to ChromiumOS to avoid divergence.
-Please take a look at the [official contributing documentation](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/contributing.md) for more details.
-
-### Some upstreaming tips & tricks
-
-* Ensure a [buganizer](https://issuetracker.google.com) issue is created and [properly linked](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/contributing.md#Link-to-issue-trackers) from Gerrit CLs.
-* Signed-off-by's are not necessary. Some owners might ask to remove them.
-* The purpose of the Code-Review process is to get CR+2 from code owners. Once a CR+2 has been given, proceed to running Commit-Queue (CQ) to land the change.
-* CR+1 means "LGTM but someone else must approve" and is not enough to land changes. Please ensure you get a CR+2 from owners.
-* CQ+1 is a "dry-run", i.e. run all tests but do not merge the change. CQ+2 means run all tests and if they pass, merge/land the change.
-* The Verified+1 tag is required before running CQ+2 (Gerrit will give an error informing about this).
diff --git a/kernelci.org/content/en/legacy/instances/chromeos/chromeos_changelog.md b/kernelci.org/content/en/legacy/instances/chromeos/chromeos_changelog.md
deleted file mode 100644
index e88a0d55..00000000
--- a/kernelci.org/content/en/legacy/instances/chromeos/chromeos_changelog.md
+++ /dev/null
@@ -1,260 +0,0 @@
----
-title: "ChromeOS image changelog"
-date: 2023-04-25
-weight: 4
----
-
-This file tracks divergences between the KernelCI-built ChromiumOS images and upstream ChromiumOS, as well as any changes between image releases.
-
-Divergences can be considered tech debt and in the long run need to be kept under control and minimized; this changelog should therefore reflect their evolution from version to version.
-
-### Fetching latest images
-
-It is recommended to use the latest published image versions for each board from [this list](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/).
-
-The latest version can be found either from the directory date name (e.g. `chromiumos-asurada/20230208.0`) or by the `distro_version` field in the `manifest.json` file, where e.g. R106 is greater than R100.
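-
-For example, to check the version of a published image from the command line
-(assuming a `manifest.json` is published next to the image, as described
-above, and that `jq` is installed):
-
-```
-curl -s https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-asurada/20230208.0/arm64/manifest.json \
-    | jq -r '.distro_version'
-```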
-
-### ChromiumOS release documentation
-
-[This page](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/releases.md) contains information on how ChromiumOS manages its releases, schedules, support windows and other such useful information.
-
-For an up-to-date overview of current and planned releases, please visit the [schedule dashboard](https://chromiumdash.appspot.com/schedule).
-
-## R118
-
-### Repo manifest
-
-The following images have been built using [this manifest](https://raw.githubusercontent.com/kernelci/kernelci-core/chromeos/config/rootfs/chromiumos/cros-snapshot-release-R118-15604.B.xml). The [repo tool](https://code.google.com/archive/p/git-repo/) can fetch the sources specified in the manifest file.
-
-Specific instructions on how to fetch and build ChromiumOS from a manifest file can be found in the [developer guide](https://chromium.googlesource.com/chromiumos/docs/+/main/developer_guide.md).
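-
-In outline, fetching the sources pinned by this manifest looks something like
-the following (a sketch; see the developer guide linked above for the
-authoritative steps):
-
-```
-mkdir chromiumos && cd chromiumos
-repo init -u https://chromium.googlesource.com/chromiumos/manifest
-wget -O .repo/manifests/cros-snapshot.xml \
-    https://raw.githubusercontent.com/kernelci/kernelci-core/chromeos/config/rootfs/chromiumos/cros-snapshot-release-R118-15604.B.xml
-repo init -m cros-snapshot.xml
-repo sync -j8
-```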
-
-### Supported boards
-
-Direct links for each supported board in this release are provided below for convenience.
-| Board | Kernels shipped in image | Kernels tested by KernelCI (replacing image kernels during boot) |
-|-------------|:------------:|:-------:|
-| [arcada](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-sarien/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [asurada](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-asurada/20231106.0/arm64) | v6.6.x<br>+ panfrost | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [brya](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-brya/20231106.0/amd64) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [cherry](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-cherry/20231106.0/arm64) | v6.6.x<br>+ 3 display patches<br>+ panfrost | stable:linux-6.6.y |
-| [coral](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-coral/20231106.0/amd64) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [corsola](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-corsola/20231206.0/arm64) | linux-next 20231106<br>+ mtk HW enablement patches<br>+ panfrost | |
-| [dedede](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-dedede/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [grunt](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-grunt/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [guybrush](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-guybrush/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [hatch](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [jacuzzi](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-jacuzzi/20231106.0/arm64/) | v6.6.x<br>+ panfrost | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [nami](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-nami/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [octopus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-octopus/20231106.0/amd64/) | chromeos-5.15 | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [puff](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-puff/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [rammus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-rammus/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [sarien](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-sarien/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [trogdor](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-trogdor/20231106.0/arm64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [volteer](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-volteer/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-| [zork](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-zork/20231106.0/amd64/) | default | stable:linux-6.1.y<br>stable:linux-6.6.y |
-
-
-### New workarounds/patches since previous version (R116)
-
-#### minigbm:
-- Dropped the mediatek backend in favor of the drv_dumb backend; no more divergence for the platform/minigbm component.
-
-#### chromiumos-overlay:
-- Reworked the minigbm patch into a single two-line patch to activate drv_dumb
-- Dropped the bump to make chromeos-kernel-5_10 use our own forked 5.10 branch, in favor of the 5.10 branch shipped in R118 by CrOS upstream.
-- Fixed broken portage manifest for media-libs/cros-camera-libfs
-- Masked newer chromeos-chrome ebuild version which doesn't have a binpkg, to avoid unnecessarily long build times.
-- Bumped chromeos-installer ebuild as a result of forking the platform2 overlay.
-- Bumped MTK kernel used for jacuzzi, asurada and cherry to 6.6.
-
-#### platform2:
-- Forked to disable failing installer tpm check, until a fix is landed for b:291036452.
-- Backported [a fix](https://chromium-review.googlesource.com/c/chromiumos/platform2/+/4894393) for secagentd builds against kernel 6.6.
-
-#### graphics:
-- Backported a patch from R119 to avoid a conflict; this patch still needs to be kept in R118
-
-#### board-overlays:
-- Forward-ported the last remaining commit without conflicts. We will keep it for now but gradually reduce its footprint as MTK SoCs start using the newer Google kernels with 6.6
-- Bumped octopus to 5.15 because tast fails with 4.14.
-- Cherry is now using the 6.6.x stable upstream kernel branch instead of the older 6.2 -next based branch.
-
-#### third_party/kernel/v5.10:
-- Removed fork in favor of chromeos-kernel-5_10
-
-#### third_party/kernel/upstream:
-- Bumped to 6.6.x plus 3 remaining display patches for cherry.
-
-## R116
-
-### Repo manifest
-
-The following images have been built using [this manifest](https://raw.githubusercontent.com/kernelci/kernelci-core/chromeos/config/rootfs/chromiumos/cros-snapshot-release-R116-15509.B.xml). The [repo tool](https://code.google.com/archive/p/git-repo/) can fetch the sources specified in the manifest file.
-
-Specific instructions on how to fetch and build ChromiumOS from a manifest file can be found in the [developer guide](https://chromium.googlesource.com/chromiumos/docs/+/main/developer_guide.md).
-
-### Supported boards
-Direct links for each supported board in this release are provided below for convenience.
-| Board | Kernels shipped in image | Kernels tested by KernelCI (replacing image kernels during boot) |
-|-------------|:------------:|:-------:|
-| [asurada](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-asurada/20230825.0/arm64) | v6.2.7<br>+ display patches<br>+ panfrost | stable:linux-6.1.y |
-| [brya](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-brya/20230825.0/amd64) | default | stable:linux-6.1.y |
-| [cherry](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-cherry/20230825.0/arm64) | linux-next 20230203<br>+ mtk HW enablement patches<br>+ panfrost | |
-| [coral](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-coral/20230825.0/amd64) | default | stable:linux-6.1.y |
-| [dedede](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-dedede/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [grunt](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-grunt/20230825.0/amd64/) | chromeos-5_10 | stable:linux-6.1.y |
-| [guybrush](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-guybrush/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [hatch](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [jacuzzi](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-jacuzzi/20230825.0/arm64/) | v6.2.7<br>+ panfrost | stable:linux-6.1.y<br>next-integration-branch (for-kernelci) |
-| [nami](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-nami/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [octopus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-octopus/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [puff](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-puff/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [rammus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-rammus/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [sarien](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-sarien/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [trogdor](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-trogdor/20230825.0/arm64/) | default | stable:linux-6.1.y |
-| [volteer](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-volteer/20230825.0/amd64/) | default | stable:linux-6.1.y |
-| [zork](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-zork/20230825.0/amd64/) | default | stable:linux-6.1.y |
-
-
-### New workarounds/patches since previous version (R114)
-
-* Updated minigbm backend for Mediatek
-* Updated separate patches for tpm2 flag where it is necessary
-* Added workaround for b/295364868 (orphan_files feature not supported by old kernels) by updating mke2fs.conf
-* Added workaround for PS1/command prompt
-* Backported fix for b/300303585. The fix was upstreamed starting with R119, after which KernelCI should drop it.
-
-### Removed workarounds since previous version (R114)
-
-* Removed trogdor patch revert for arm64 userspace
-* Removed manual kernel version override for grunt
-* Removed build fixes for coral, sarien
-
-## R114
-
-### Repo manifest
-
-The following images have been built using [this manifest](https://github.com/kernelci/kernelci-core/blob/chromeos/config/rootfs/chromiumos/cros-snapshot-release-R114-15437.B.xml). The [repo tool](https://code.google.com/archive/p/git-repo/) can fetch the sources specified in the manifest file.
-
-Specific instructions on how to fetch and build ChromiumOS from a manifest file can be found in the [developer guide](https://chromium.googlesource.com/chromiumos/docs/+/main/developer_guide.md).
-
-### Supported boards
-
-Direct links for each supported board in this release are provided below for convenience.
-| Board | Kernels shipped in image | Kernels tested by KernelCI (replacing image kernels during boot) |
-|-------------|:------------:|:-------:|
-| [asurada](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-asurada/20230620.0/arm64) | v6.2.7<br>+ display patches<br>+ panfrost | stable:linux-6.1.y |
-| [cherry](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-cherry/20230620.0/arm64) | linux-next 20230203<br>+ mtk HW enablement patches<br>+ panfrost | |
-| [coral](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-coral/20230620.0/amd64) | default | stable:linux-6.1.y |
-| [dedede](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-dedede/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [grunt](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-grunt/20230620.0/amd64/) | chromeos-5_10 | stable:linux-6.1.y |
-| [hatch](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [jacuzzi](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-jacuzzi/20230620.0/arm64/) | v6.2.7<br>+ panfrost | stable:linux-6.1.y<br>next-integration-branch (for-kernelci) |
-| [nami](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-nami/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [octopus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-octopus/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [rammus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-rammus/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [sarien](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-sarien/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [trogdor](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-trogdor/20230620.0/arm64/) | default | stable:linux-6.1.y |
-| [volteer](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-volteer/20230620.0/amd64/) | default | stable:linux-6.1.y |
-| [zork](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-zork/20230620.0/amd64/) | default | stable:linux-6.1.y |
-
-### Changes since previous version (R111)
-
-* Dropped separate initramfs patches, as they are upstream now; see [this CL](https://chromium-review.googlesource.com/c/chromiumos/platform/initramfs/+/4262007)
-
-## R111
-
-### Repo manifest
-
-The following images have been built using [this manifest](https://github.com/kernelci/kernelci-core/blob/chromeos/config/rootfs/chromiumos/cros-snapshot-release-R111-15329.B.xml). The [repo tool](https://code.google.com/archive/p/git-repo/) can fetch the sources specified in the manifest file.
-
-Specific instructions on how to fetch and build ChromiumOS from a manifest file can be found in the [developer guide](https://chromium.googlesource.com/chromiumos/docs/+/main/developer_guide.md).
-
-### Supported boards
-
-Direct links for each supported board in this release are provided below for convenience.
-| Board | Kernels shipped in image | Kernels tested by KernelCI (replacing image kernels during boot) |
-|-------------|:------------:|:-------:|
-| [amd64-generic](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-amd64-generic/20230511.0/amd64) | chromeos-5_15 | stable:linux-6.1.y |
-| [asurada](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-asurada/20230511.0/arm64) | v6.2.7<br>+ display patches<br>+ panfrost | stable:linux-6.1.y |
-| [cherry](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-cherry/20230511.0/arm64) | linux-next 20230203<br>+ mtk HW enablement patches<br>+ panfrost | |
-| [coral](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-coral/20230511.0/amd64) | chromeos-5_10 | stable:linux-6.1.y |
-| [dedede](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-dedede/20230511.0/amd64/) | chromeos-5_4 | stable:linux-6.1.y |
-| [grunt](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-grunt/20230606.0/amd64/) | chromeos-5_10 | stable:linux-6.1.y |
-| [hatch](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20230511.0/amd64/) | chromeos-4_19 | stable:linux-6.1.y |
-| [jacuzzi](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-jacuzzi/20230511.0/arm64/) | v6.2.7<br>+ panfrost | stable:linux-6.1.y<br>next-integration-branch (for-kernelci) |
-| [nami](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-nami/20230511.0/amd64/) | chromeos-4_4 | stable:linux-6.1.y |
-| [octopus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-octopus/20230511.0/amd64/) | chromeos-4_14 | stable:linux-6.1.y |
-| [rammus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-rammus/20230511.0/amd64/) | chromeos-4_4 | stable:linux-6.1.y |
-| [sarien](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-sarien/20230511.0/amd64/) | chromeos-4_19 | stable:linux-6.1.y |
-| [trogdor](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-trogdor/20230606.0/arm64/) | chromeos-5_4 | stable:linux-6.1.y |
-| [volteer](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-volteer/20230511.0/amd64/) | chromeos-5_4 | stable:linux-6.1.y |
-| [zork](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-zork/20230511.0/amd64/) | chromeos-5_4 | stable:linux-6.1.y |
-
-### Changes since previous version (R106)
-* Grunt kernel manually forced from 4.14 to 5.10 in overlay [patch](https://github.com/kernelci/kernelci-core/pull/1948/commits/71ee9f81a4c6ed9b4d50813eef37dbbd20c25f35)
-* Trogdor patch to enable arm64 userspace reverted [patch](https://github.com/kernelci/kernelci-core/pull/1948/commits/71ee9f81a4c6ed9b4d50813eef37dbbd20c25f35)
-* CR50 firmware extracted from image to prevent automatic upgrade, available in same directory for standalone upgrade [patch1](https://github.com/kernelci/kernelci-core/pull/1816/commits/194a3173be29bab9ae035c2d1b7247fb205ca923) [patch2](https://github.com/kernelci/kernelci-core/pull/1872/commits/3ce3959fd1b26876f975a6e6132c9510d05166d2)
-
-## R106
-
-### Repo manifest
-
-The following images have been built using [this manifest](https://github.com/kernelci/kernelci-core/blob/chromeos/config/rootfs/chromiumos/cros-snapshot-release-R106-15054.B.xml). The [repo tool](https://code.google.com/archive/p/git-repo/) can fetch the sources specified in the manifest file.
-
-Specific instructions on how to fetch and build ChromiumOS from a manifest file can be found in the [developer guide](https://chromium.googlesource.com/chromiumos/docs/+/main/developer_guide.md).
-
-### Supported boards
-
-Direct links for each supported board in this release are provided here for convenience:
-- [amd64-generic](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-amd64-generic/20221102.0/arm64)
-- [asurada](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-asurada/20230208.0/arm64)
-- [cherry](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-cherry/20230330.0/arm64)
-- [coral](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-coral/20221026.0/amd64)
-- [dedede](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-dedede/20221113.0/amd64/)
-- [grunt](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-grunt/20221028.0/amd64/)
-- [hatch](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-hatch/20221027.0/amd64/)
-- [jacuzzi](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-jacuzzi/20230206.0/arm64/)
-- [nami](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-nami/20221120.0/amd64/)
-- [octopus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-octopus/20221025.0/amd64/)
-- [rammus](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-rammus/20221116.0/amd64/)
-- [sarien](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-sarien/20221111.0/amd64/)
-- [trogdor](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-trogdor/20230214.0/arm64/)
-- [volteer](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-volteer/20221115.0/amd64/)
-- [zork](https://storage.chromeos.kernelci.org/images/rootfs/chromeos/chromiumos-zork/20221115.0/amd64/)
-
-### Changes since previous version (R100)
-- Images are built using a custom repo manifest that points to forked repositories.
-- SELinux is now disabled in userspace, see issue https://github.com/kernelci/kernelci-core/issues/1372.
-- chromeos-kernel-upstream is used for Mediatek image builds.
-
-### Known divergences
-
-#### src/third-party/kernel/next
-- Points to the [Mediatek Integration branch](https://gitlab.collabora.com/google/chromeos-kernel/-/tree/for-kernelci).
-- Currently only used for cherry board builds because upstream support is still WIP.
-
-#### src/third-party/kernel/upstream
-- Based on the v6.2.7 stable kernel release.
-- `arch/arm64/configs/defconfig` was extended with Mediatek-specific config fragments. In the future we might find a better way to fetch these for the upstream kernel builds.
-- Backported and cherry-picked ~19 patches to enable Panfrost on Mediatek. These will be dropped in future kernel versions.
-
-#### src/third-party/chromiumos-overlay
-- Disable SELinux in the global profile for all boards.
-- Upgrade mesa-panfrost to the latest 22.3.3 for Mali Valhall GPU support.
-- Add USE flag to skip cr50 FW upgrades.
-- Bump ebuilds for divergence in other components (kernel, minigbm).
-
-#### src/platform/minigbm
-- Add patch to allow minigbm to work with panfrost BO ioctls. This works but needs significant changes before being sent upstream.
-
-#### src/platform/initramfs
-- Contains a backport of a commit which got upstreamed in [this CL](https://chromium-review.googlesource.com/c/chromiumos/platform/initramfs/+/4262007).
-- This fork can be removed when upgrading to a newer ChromiumOS version containing the above commit.
-
-#### src/overlays
-- Added fix for broken upstream chipset-mt8183 virtual/opengles panfrost dependencies.
-- Panfrost added as a graphics alternative for all Mediatek chipsets.
-- Removed Mali G-57 empty job workaround firmware which is not required for upstream graphics.
-- Instructed mt8183/8192 builds to use upstream kernel.
-- Instructed mt8195 builds to use linux-next kernel / Mediatek Integration branch (see above).
diff --git a/kernelci.org/content/en/legacy/instances/cip.md b/kernelci.org/content/en/legacy/instances/cip.md
deleted file mode 100644
index 36224486..00000000
--- a/kernelci.org/content/en/legacy/instances/cip.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: "CIP"
-date: 2022-09-20T15:16:00Z
-description: "About the cip.kernelci.org instance"
-weight: 4
----
-
-The [Civil Infrastructure Platform](https://www.cip-project.org/) project (CIP)
-manages a separate instance of KernelCI. In reality this "instance" is part of
-the main [linux.kernelci.org](https://linux.kernelci.org) instance but the
-configuration of what is built and tested is managed in separate configuration
-files by [maintainers](https://kernelci.org/org/tsc/#cip-instance) from the
-CIP project.
-
-The development and production workflows are identical to the main KernelCI
-instance. Visit the
-[production documentation](https://docs.kernelci.org/instances/production/) to
-learn more about the process.
-
-The CIP "instance" can be accessed at the
-[cip.kernelci.org](https://cip.kernelci.org/) URL, which is essentially a
-shortcut to [linux.kernelci.org/job/cip/](https://linux.kernelci.org/job/cip/).
-
-## Trees
-KernelCI currently monitors two CIP Linux kernel trees.
-* *cip*: The "upstream" CIP Linux kernel tree, located on
-[kernel.org](https://git.kernel.org/pub/scm/linux/kernel/git/cip/linux-cip.git/).
-* *cip-gitlab*: This is a mirror of the upstream CIP tree hosted on GitLab. In
-addition, this tree has extra, unique branches that are used to trigger specific
-CI build and test jobs.
-
-## Configuration
-The build configurations (trees, branches, configurations etc.) are defined in
-[build-configs-cip.yaml](https://github.com/kernelci/kernelci-core/blob/main/config/core/build-configs-cip.yaml)
-from the [kernelci-core](https://github.com/kernelci/kernelci-core) project.
-
-The test configurations (devices, test cases, rootfs etc.) are defined in
-[test-configs-cip.yaml](https://github.com/kernelci/kernelci-core/blob/main/config/core/test-configs-cip.yaml).
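-
-As an illustration, a tree and build configuration entry in that file has the
-general shape below (a hypothetical excerpt, not copied from the real file):
-
-```yaml
-trees:
-  cip:
-    url: "https://git.kernel.org/pub/scm/linux/kernel/git/cip/linux-cip.git"
-
-build_configs:
-  cip_4.19:                     # hypothetical entry name
-    tree: cip
-    branch: "linux-4.19.y-cip"
-```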
-
-## Project Board
-There is a separate [issues board](https://github.com/orgs/kernelci/projects/11)
-on GitHub for the CIP instance, where recent, current and future activities can
-be viewed.
-
-## Contributing
-If you have any feature requests or bugs to report, please raise a GitHub
-issue against the relevant [KernelCI](https://github.com/kernelci) repository
-and add `CIP` to the subject. One of the CIP maintainers will then assign the
-issue to the *CIP* GitHub project as appropriate.
-
-We also welcome any GitHub Pull Requests. Again, please prepend these with `CIP`
-so that they can be identified easily.
diff --git a/kernelci.org/content/en/legacy/instances/local.md b/kernelci.org/content/en/legacy/instances/local.md
deleted file mode 100644
index 15d2715f..00000000
--- a/kernelci.org/content/en/legacy/instances/local.md
+++ /dev/null
@@ -1,233 +0,0 @@
----
-title: "Local (Dev setup)"
-date: 2021-08-12T10:15:37Z
-description: "How to set up a local KernelCI instance"
----
-
-This section describes how to set up a KernelCI instance suitable for development. It's not meant to mimic the production setup on your local machine, but rather to give directions on how to set up a minimal set of KernelCI components, simplifying the process of developing and testing new features.
-
-## Minimal set of KernelCI components
-
-The minimal setup for KernelCI consists of:
-
-- kernelci-backend
-- storage
-- kernelci-core
-
-This is sufficient for building the kernel, running tests, and pushing build and test information as well as build artifacts to the backend and storage respectively.
-
-> **Note**
-> These instructions are not suitable and should not be used to set up a production environment. They purposely neglect certain aspects of performance and security for the sake of simplicity.
-
-## Prerequisites
-
-Currently, the best option is to create a separate Virtual Machine that will run `kernelci-backend` and `storage`.
-
-It's assumed in this document that you're running the following setup:
-
-- Virtual Machine running Debian Buster, accessible via IP and a domain name, e.g. _kci-vm_. The following requirements need to be met:
-
- - The VM must be reachable with a domain name from the host used for installation.
- - SSH access is provided with key authorization
-
-> You can use DNS to access the VM with its domain name or simply put an entry to the `/etc/hosts` file.
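->
-> For example, assuming the VM is reachable at 192.168.122.10 (adjust to your
-> setup):
->
-> ```bash
-> echo "192.168.122.10  kci-vm" | sudo tee -a /etc/hosts
-> ```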
-
-- Host machine which will be used to connect to the VM.
-  It needs to meet the following requirements:
- - Installed apps:
- - _git_
- - _ansible_
- - _Python 2.7_
- - _Python 3.6+_
-
-## Deploy KernelCI backend and storage
-
-> **Note** It is assumed in this section that your kernelci VM is available from your host with the hostname `kci-vm`
-
-- Make sure _ansible_ is installed on the host machine.
-- Clone the `kernelci-backend-config` repository:
-
- ```bash
- git clone git@github.com:kernelci/kernelci-backend-config.git
- ```
-
-### Configure ansible to be used with your VM
-
-```bash
-cd kernelci-backend-config
-```
-
-- Modify the `[local]` section of the `hosts` file to match your configuration:
-
-```ini
-[local]
-kci-vm ansible_ssh_user=kci
-```
-
-- Create `host_vars/kci-vm`
-
-```bash
-touch host_vars/kci-vm
-```
-
-- Edit `host_vars/kci-vm` with your favorite text editor and add the following content:
-
-```yaml
-hostname: kci-vm
-role: production
-certname: kci-vm
-storage_certname: kci-vm
-kci_storage_fqdn: kci-vm
-become_method: su
-```
-
-- Create a `dev.yml` playbook file with the following content:
-
-```yaml
-- hosts: kci-vm
- become: yes
- become_method: su
- become_user: root
- gather_facts: yes
- roles:
- - common
- - install-deps
- - install-app
- - init-conf
- - kernelci-storage
-```
-
-### Prepare secrets
-
-- Copy the `secrets.yml` template:
-
-```
-cp templates/secrets.yml dev-secrets.yml
-```
-
-- Fill out the necessary values:
-
-```
-master_key:
- ""
-
-# The url location where the backend will be running
-backend_url:
- "http://kci-vm"
-# The url location where the frontend will be running
-base_url:
- "http://kci-vm"
-# The backend token OR master key for a fresh install
-# If set to the master-key this field will have to be updated
-# once the tokens created
-backend_token:
- ""
-# A secret key internally used by Flask
-secret_key:
- ""
-# The url location of the storage server for the backend
-file_server:
- "http://kci-vm"
-```
-
-Also set the `ssl_stapling_resolver` value, which is used by the nginx
-configuration:
-
-```
-ssl_stapling_resolver:
- "localhost"
-```
-
-> **Note** You can use a UUID as your random string. You can easily generate one with:
-> ```
-> python -c "import uuid;print(uuid.uuid4())"
-> ```
-
-### Run ansible playbook to set up your VM
-
-```
-ansible-playbook -i hosts -l kci-vm -D -b -K -e git_head="main" -e "@./dev-secrets.yml" --key-file ~/.ssh/org/id_rsa dev.yml
-```
-
-> **Note** You may validate your ansible playbook with
-> ```
-> ansible-playbook dev.yml --syntax-check
-> ```
-
-> **Note** If you face unexpected Ansible behavior, increasing verbosity by adding the `-vvv` option may help with debugging.
-
-> **Note** The nginx configuration step may fail due to SSL issues, but that's fine for the development setup.
-
-### Tweak your VM config
-
-- Log in to your VM as root
-- Delete backend nginx config
-
-```
-rm /etc/nginx/sites-enabled/kci-vm
-```
-
-- Replace the content of the storage config (`/etc/nginx/sites-enabled/kci-vm.conf`):
-
-```
-server {
- listen *;
- listen [::];
-
- server_name kci-vm;
- root /var/www/images/kernel-ci;
- charset utf-8;
-
- access_log /var/log/nginx/kci-vm-access.log combined buffer=16k;
- error_log /var/log/nginx/kci-vm-error.log crit;
-
- location / {
- if (-f $document_root/maintenance.html) {
- return 503;
- }
-
- fancyindex on;
- fancyindex_exact_size off;
- }
-}
-```
-
-- Restart nginx
-
-```
-systemctl restart nginx
-```
-
-- Start KernelCI services
-
-```
-systemctl start kernelci-backend
-systemctl start kernelci-celery
-```
-
-> **Note** At this point your kernelci-backend instance should be available at `http://kci-vm:8888`
-
-> **Note** If you want to follow your kernelci logs, use journalctl:
-> ```
-> journalctl -f -u kernelci-celery -u kernelci-backend
-> ```
-
-## Configure KernelCI Backend
-
-Now that your `kernelci-backend` instance is up and running, you can add the necessary configuration to it.
-
-### Create admin token
-
-```
-curl -XPOST -H "Content-Type: application/json" -H "Authorization: YOUR_MASTER_KEY" "http://kci-vm:8888/token" -d '{"email": "me@example.com", "username": "admin", "admin": 1}'
-```
-
-If this goes well, you should see the admin token in the output, e.g.:
-
-```
-{"result":[{"token":"3f1fbc0f-4146-408c-a2da-02748f595bfe"}],"code":201}
-```
-
-## Build kernel and push results
-
-Congratulations, your `kernelci-backend` instance is up and running.
-You can now build your kernel and push the artifacts with `kci_build` and `kci_data`.
-See the [`kci_build`](https://docs.kernelci.org/core/kci_build/) documentation to get you started.
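-
-As a rough sketch, a local build could look like this; the exact options vary
-between kernelci-core versions, so treat the commands below as illustrative
-rather than authoritative:
-
-```
-./kci_build update_repo --config=mainline --kdir=linux
-./kci_build build_kernel --kdir=linux --arch=x86_64 --build-env=gcc-7 \
-            --defconfig=defconfig
-./kci_build install_kernel --config=mainline --kdir=linux
-```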
diff --git a/kernelci.org/content/en/legacy/instances/production.md b/kernelci.org/content/en/legacy/instances/production.md
deleted file mode 100644
index a171f9c4..00000000
--- a/kernelci.org/content/en/legacy/instances/production.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-title: "Production"
-date: 2021-07-27T19:30:00Z
-description: "How linux.kernelci.org works"
-weight: 1
----
-
-The main dashboard on [linux.kernelci.org](https://linux.kernelci.org) shows
-all the results for the [native tests](../../tests) run by KernelCI with
-upstream kernels. It is updated typically once a week with all the changes
-that have been merged in the various components: [core tools](https://github.com/kernelci/kernelci-core), [YAML configuration](https://github.com/kernelci/kernelci-core/tree/main/config/core), [backend](https://github.com/kernelci/kernelci-backend), [frontend](https://github.com/kernelci/kernelci-frontend), [Jenkins](https://github.com/kernelci/kernelci-jenkins), etc.
-
-Each project has a `main` branch used for development; this is where all the
-PRs are normally merged. Then there is also a `kernelci.org` branch in each
-project for production deployment. These branches are updated every time the
-production instance is updated. All the changes should be tested on
-[staging](../staging) before being merged into the `main` branches and later on
-deployed in production.
-
-The [kernelci-deploy](https://github.com/kernelci/kernelci-deploy) project
-contains some tools to automate parts of the production update. This is
-gradually getting more automated but a few steps still require manual
-intervention. Updating the production instance requires having SSH access to
-the main kernelci.org server.
-
-## Update procedure
-
-1. Release new versions
-
- Each component which has some changes ready to be deployed needs a new
- version tag. This is something that respective maintainers normally do.
- They should also look if there are any PRs ready to be merged before
- creating the new version.
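-
-   As a sketch, creating a release typically boils down to tagging and pushing
-   (the version number below is hypothetical):
-
-   ```sh
-   git tag -a v2023.06 -m "kernelci-core v2023.06"
-   git push origin v2023.06
-   ```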
-
-1. Update the kernelci-deploy checkout
-
- The production update uses tools in `kernelci-deploy`. The first step is to
- update it to the latest version, which can be done with this command:
-
- ```sh
- ./kernelci.org checkout
- ```
-
-1. Build new rootfs images
-
- Root file system (rootfs) images should be built ahead of the production
- update, so they can be tested on staging. It can take several hours to
- build all of them, so ideally this should be a couple of days before the
- production update. For example, with production updates usually occurring
- every Monday, the root file system images can be built every Friday to get
- tested on staging over the weekend.
-
- To build a new set of rootfs images:
-
- ```sh
- ./kernelci.org rootfs
- ```
-
- This will first rebuild the Docker images used to build the rootfs images,
- then trigger the rootfs image builds, and wait for them to complete. You
- may abort while waiting for the builds to complete and resume later with
- this command:
-
- ```sh
- ./kernelci.org rootfs_flush
- ```
-
-1. Flush kernel builds
-
- Once rootfs images have been tested and the new URLs have been merged in
- `test-configs.yaml`, the next step is to abort any pending kernel revision
- to be built in Jenkins and wait for all the ones currently in the pipeline
- to complete. The production update needs to be done while the pipeline is
- empty to avoid glitches while multiple components are being updated. The
- `kernelci.org` script in `kernelci-deploy` has a command to automate this
- part:
-
- ```sh
- ./kernelci.org pause
- ```
-
- This may take a while, typically 1h or so, depending on the number of kernel
- builds still pending. The script will be monitoring the status and exit
- once completed. It will also cancel all bisections as they can take a very
- long time to run. Regressions that persist will cause other bisections to
- be run after the production update.
-
-1. Restart jenkins
-
- Once there are no more kernel builds running in Jenkins, it should be
- restarted with the latest changes from `kernelci-jenkins` and the encrypted
- data in `kernelci-jenkins-data` (labs API tokens etc.). This is also a good
-   time for rebooting all the node VMs and pruning Docker resources. Doing
- this requires SSH access to all the nodes. There is a
- [`reboot.sh`](https://github.com/kernelci/kernelci-jenkins-data/blob/main/bot.kernelci.org/reboot.sh)
- helper script in `kernelci-jenkins-data`:
-
- ```sh
- ./bot.kernelci.org/reboot.sh
- ```
-
-   There is currently no automated way to detect that Jenkins has fully
- restarted. This is a gap to be filled in the `kernelci.org` helper script
- before the production update can be fully automated.
-
-1. Update everything
-
- Once Jenkins has restarted, the `kernelci.org` script provides another
-   command to automatically update everything:
-
- ```sh
- ./kernelci.org update
- ```
-
- This does the following:
-
-   * Check out the `kernelci-core` repository locally at the head of the
- `kernelci.org` branch
- * Run the Jenkins DSL job to recreate all the job definitions
- * Redeploy the backend using Ansible
- * Redeploy the frontend using Ansible
- * Update the static website on kernelci.org
- * Update the [KernelCI LAVA test-definitions
- fork](https://github.com/kernelci/test-definitions) with a rebase on top
- of upstream
- * Rebuild all the kernel Docker images (toolchains, device tree validation,
- QEMU...) and push them to the Docker hub
- * Push a test revision to the [KernelCI linux
- tree](https://github.com/kernelci/linux) to run a "pipe cleaner" build
- * Start a "pipe cleaner" build trigger job in Jenkins to build the test
- kernel revision
-
- It should take around 30min for the test kernel builds to complete, and then
- a while longer for any tests to complete in all the labs. The results
- should be checked manually, by comparing with previous revisions on the web
- frontend. The number of builds and test results as well as their pass rates
- should be similar. These builds can be seen here:
-
- https://linux.kernelci.org/job/kernelci/branch/kernelci.org/
-
- If there appear to be some issues with the results, the monitor job should
- be stopped so no new revisions will be automatically built. Some changes
- may be reverted or a fix applied depending on the root cause of the issues.
- If the results look fine, then the monitor job will start discovering new
- kernel revisions periodically and the production update is complete.
-
-## Known issues
-
-1. Scheduler settings of the `kernel-tree-monitor` job are not restored
-   after a Jenkins restart.
-
-   Once Jenkins is restarted with the latest changes from `kernelci-jenkins`,
-   the scheduler settings are never set up. This is a known issue and the only
-   solution is to set the scheduler manually.
-
- * Log in to Jenkins on https://bot.kernelci.org
- * Go to `kernel-tree-monitor` job configuration page at
- https://bot.kernelci.org/job/kernel-tree-monitor/configure
- * In __Build Triggers__ section choose __Build periodically__
- * Paste the following settings to the __Schedule__ text box to have the
- monitor job run every hour:
-
- ```
- H * * * *
- ```
-
- * Save settings by clicking __Save__ button
diff --git a/kernelci.org/content/en/legacy/instances/staging.md b/kernelci.org/content/en/legacy/instances/staging.md
deleted file mode 100644
index 1fffa252..00000000
--- a/kernelci.org/content/en/legacy/instances/staging.md
+++ /dev/null
@@ -1,94 +0,0 @@
----
-title: "Staging"
-date: 2021-07-22T08:30:00Z
-description: "How staging.kernelci.org works"
-weight: 2
----
-
-While the production instance is hosted on
-[linux.kernelci.org](https://linux.kernelci.org), another independent instance
-is hosted on [staging.kernelci.org](https://staging.kernelci.org). This is
-where all the changes to the KernelCI code and configuration get tested before
-getting merged and deployed in production. It consists of a Jenkins instance
-on [bot.staging.kernelci.org](https://bot.staging.kernelci.org), a Mongo
-database with a
-[`kernelci-backend`](https://github.com/kernelci/kernelci-backend) instance and
-a [`kernelci-frontend`](https://github.com/kernelci/kernelci-frontend) instance
-for the web dashboard.
-
-Jobs are run every 8h on staging, using all the code from open pull requests on
-GitHub. The kernel revisions being tested rotate between mainline, stable and
-linux-next. An extra commit and a staging tag are created on top of each
-branch to artificially create a new revision in the [KernelCI kernel
-tree](https://github.com/kernelci/linux) even if there were no changes upstream,
-to ensure that jobs are run with a separate set of results. A reduced set of
-build configurations is used to limit the resources used on staging and to get
-results quicker.
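-
-As an illustration, creating such a staging revision amounts to something like
-the following sketch (remote, branch and tag names are illustrative only):
-
-```sh
-git checkout -b staging-mainline mainline/master
-git commit --allow-empty -m "staging build"   # artificial new revision
-git tag staging-mainline-20230811.0           # hypothetical tag name
-git push --force kernelci staging-mainline staging-mainline-20230811.0
-```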
-
-There is also a plain mainline build every day, and a full linux-next build
-every Friday to at least have some complete coverage and potentially catch
-issues that can't be seen with the reduced number of configs in staging builds.
-
-## GitHub pull requests
-
-A special feature of the staging instance is the ability to test code from open
-GitHub pull requests before they get merged. This is handled by tools in the
-[`kernelci-deploy`](https://github.com/kernelci/kernelci-deploy) project, to
-pull all the open pull requests for a given project, apply some arbitrary
-patches and push a resulting `staging.kernelci.org` branch back to the
-repository with a tag. This branch is replaced (force-pushed) every time the
-tool is run.
-
-Things to note:
-
-* Pull requests are only merged from users that are on a trusted list, stored
- in the `kernelci-deploy` configuration files.
-* Pull requests are merged in chronological order, so older ones take
- precedence.
-* Pull requests that fail to merge are ignored and will not be tested.
-* Pull requests will be skipped and not merged if they have the `staging-skip`
- label set.
-* If any patch from `kernelci-deploy` doesn't apply, the resulting branch is
- not pushed. It is required that all the patches always apply since some of
- them are necessary to adjust the staging behaviour (say, to not send
- bisection email reports). They will need to get updated if they conflict
- with pending PRs.
-* A tag is created with the current date and pushed with the branch.
-
-
-## Jenkins: bot.staging.kernelci.org
-
-The staging instance is running Jenkins, just like production. The main
-difference is that the staging one is publicly visible, read-only for anonymous
-users: [bot.staging.kernelci.org](https://bot.staging.kernelci.org/)
-
-This allows the job logs to be inspected. Also, some developers have a
-personal folder there to run modified versions of the Jenkins jobs while still
-using the available resources (builders, API tokens to submit jobs in test
-labs...).
-
-
-## Run every 8h
-
-There is a timer on the staging.kernelci.org server which starts a job every
-8h, so 3 times per day. The job does the following:
-
-1. update [staging branch for `kernelci-jenkins`](https://github.com/kernelci/kernelci-jenkins/tree/staging.kernelci.org)
-1. recreate Jenkins jobs by running the job-dsl "seed" job
-1. update [staging branch for `kernelci-core`](https://github.com/kernelci/kernelci-core/tree/staging.kernelci.org)
-1. update [staging branch for `kernelci-backend`](https://github.com/kernelci/kernelci-backend/tree/staging.kernelci.org)
-1. update the `kernelci-backend` service using Ansible from [`kernelci-backend-config`](https://github.com/kernelci/kernelci-backend-config) with the staging branch
-1. update [staging branch for `kernelci-frontend`](https://github.com/kernelci/kernelci-frontend/tree/staging.kernelci.org)
-1. update the `kernelci-frontend` service using Ansible from [`kernelci-frontend-config`](https://github.com/kernelci/kernelci-frontend-config) with the staging branch
-1. pick a kernel tree between mainline, stable and next
-1. create and push a `staging-${tree}` branch with a tag to the [KernelCI kernel repo](https://github.com/kernelci/linux)
-1. trigger a monitor job in Jenkins with the `kernelci_staging` build config
-
-The last step should cause the monitor job to detect that the staging kernel
-branch has been updated, and run a kernel build trigger job which in turn will
-cause tests to be run. Builds and test results will be sent to the staging
-backend instance, and results will be available on the staging web dashboard.
-Regressions will cause bisections to be run on the staging instance, and
-results to be sent to the
-[`kernelci-results-staging@groups.io`](https://groups.io/g/kernelci-results-staging)
-mailing list.
diff --git a/kernelci.org/content/en/legacy/maintainers.md b/kernelci.org/content/en/legacy/maintainers.md
deleted file mode 100644
index 15641ba7..00000000
--- a/kernelci.org/content/en/legacy/maintainers.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: "Maintainers"
-date: 2023-08-11
-description: "KernelCI maintainers (legacy system)"
----
-
-## Software Maintainers
-
-### Frontend (deprecated)
-
-The KernelCI frontend provides a dynamic [web
-dashboard](https://linux.kernelci.org/job/) showing the data available from the
-backend. A new workboard is currently being developed to replace it and
-include data from KCIDB.
-
-* Main repository:
- [`kernelci-frontend`](https://github.com/kernelci/kernelci-frontend)
-* Ansible config repository:
- [`kernelci-frontend-config`](https://github.com/kernelci/kernelci-frontend-config)
-* Maintainer: `mgalka`
-* Deputy: `apereira`
-
-### Backend (deprecated)
-
-The KernelCI backend provides a [web API](https://api.kernelci.org/) to send
-build and test results and let the frontend dashboard retrieve them. It also
-generates email reports, tracks regressions and triggers automated bisections.
-The new API will eventually replace it.
-
-* Main repository:
- [`kernelci-backend`](https://github.com/kernelci/kernelci-backend)
-* Ansible config repository:
- [`kernelci-backend-config`](https://github.com/kernelci/kernelci-backend-config)
-* Maintainer: `mgalka`
-* Deputy: `gtucker`
-
-### Jenkins (deprecated)
-
-The current KernelCI pipeline is using Jenkins. While this is about to be
-replaced with the new pipeline and API, the purpose remains essentially the
-same: orchestrating the builds and tests on kernelci.org.
-
-* Maintainers: `broonie`, `mgalka`, `nuclearcat`
-* Components:
- [`kernelci-jenkins`](https://github.com/kernelci/kernelci-jenkins)
-* Resources: Azure, GCE
-
-## Instance maintainers
-
-As there are several KernelCI instances, it's necessary to have people
-dedicated to each of them.
-
-### Production instance (legacy)
-
-The KernelCI components and services need to be regularly updated on the
-production instance with the latest code and configuration changes. This
-typically includes enabling coverage for new kernel branches or running new
-tests, as well as updating rootfs and Docker images with the latest versions of
-all the packages being used.
-
-It is currently done once a week on average, although deployment may become
-gradually more continuous as services start to get hosted in the Cloud and run
-in Docker containers.
-
-* Dashboard: [linux.kernelci.org](https://linux.kernelci.org)
-* Description: [Production](/instances/production)
-* Maintainers: `mgalka`, `nuclearcat`
-* Components: [`kernelci-deploy`](https://github.com/kernelci/kernelci-deploy)
-
-### Staging instance (legacy)
-
-All the incoming pull requests are merged into temporary integration branches
-and deployed on [staging.kernelci.org](https://staging.kernelci.org) for
-testing. This is explained in greater detail in the
-[Staging](/instances/staging) section.
-
-* Dashboard: [staging.kernelci.org](https://staging.kernelci.org)
-* Description: [Staging](/instances/staging)
-* Maintainers: `gtucker`, `broonie`, `nuclearcat`
-* Components: [`kernelci-deploy`](https://github.com/kernelci/kernelci-deploy)
-
-### ChromeOS instance (legacy)
-
-The Chrome OS KernelCI instance is dedicated to building specific kernels and
-running Chrome OS tests on Chromebooks. This is very close to the code used in
-production but has continuous deployment like the staging one, including open
-pull requests for the `chromeos` branches. These branches need to be regularly
-rebased with any extra patches that are not merged upstream, typically after
-each production update.
-
-* Dashboard: [chromeos.kernelci.org](https://chromeos.kernelci.org)
-* Description: [ChromeOS](/instances/chromeos)
-* Maintainers: `mgalka`, `nuclearcat`
-* Components: [`kernelci-deploy`](https://github.com/kernelci/kernelci-deploy)
-
-### CIP instance (legacy)
-
-The CIP instance is dedicated to building CIP specific kernels with CIP
-configurations. The CIP KernelCI code is currently running in production.
-
-* Dashboard: [cip.kernelci.org](https://cip.kernelci.org)
-* Description: [CIP](/instances/cip)
-* Maintainers: `arisut`
-* Components: [`kernelci-deploy`](https://github.com/kernelci/kernelci-deploy)
diff --git a/kernelci.org/content/en/legacy/tests/_index.md b/kernelci.org/content/en/legacy/tests/_index.md
deleted file mode 100644
index 8bbe479a..00000000
--- a/kernelci.org/content/en/legacy/tests/_index.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "Tests"
-date: 2023-08-11
-description: "Tests"
----
-
-## Native tests
-
-KernelCI native tests are orchestrated by KernelCI. They are initiated in test
-labs directly connected to KernelCI and results are visible on the [web
-frontend](https://linux.kernelci.org/job/).
-
-Tests run natively by KernelCI, such as [kselftest](kselftest) and [LTP](ltp),
-as well as some KernelCI-specific ones, are described on dedicated pages in
-this section.
-
-It's possible for anyone to add new native tests. See the [How-To
-guide](howto) to get started.
-
-## External tests and KCIDB
-
-Non-native tests are run in fully autonomous systems, such as syzbot or CKI.
-Their results are shared alongside KernelCI native test results via
-[KCIDB](https://kcidb.kernelci.org).
diff --git a/kernelci.org/content/en/legacy/tests/kselftest.md b/kernelci.org/content/en/legacy/tests/kselftest.md
deleted file mode 100644
index ea576e27..00000000
--- a/kernelci.org/content/en/legacy/tests/kselftest.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-title: "kselftest"
-date: 2021-08-05
-description: "Native tests: kselftest"
-weight: 2
----
-
-## About
-
-[kselftest](https://www.kernel.org/doc/html/latest/dev-tools/kselftest.html) is
-one of the main test suites that comes with the Linux kernel source tree
-itself. As such, it is an obvious one for KernelCI to cover. For each kernel
-revision built by KernelCI, an extra build is produced for each CPU
-architecture with a kselftest config fragment merged on top of the main
-defconfig. The resulting kernel and kselftest binaries are then stored
-together and used together, to guarantee compatibility between the tests and
-the kernel.
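-
-For reference, a kernel with the kselftest config merged in can be built with
-the upstream kernel build system alone, as a rough approximation of what
-KernelCI does (not the exact KernelCI build recipe):
-
-```sh
-make defconfig
-make kselftest-merge      # merge the kselftest config fragments into .config
-make -j$(nproc)
-make -C tools/testing/selftests install INSTALL_PATH=/tmp/kselftest
-```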
-
-## KernelCI coverage
-
-Initial Github Issue: [#331](https://github.com/kernelci/kernelci-core/issues/331)
-
-The aim is to run all of kselftest whenever applicable. Only a subset of all
-the available test collections is currently being run while the infrastructure
-is being prepared for gradually expanding coverage to the full set.
-
-The table below shows a summary of the current KernelCI kselftest coverage per
-CPU architecture and platform for each collection. Until a more dynamic
-orchestration becomes available, this is all defined in
-[`test-configs.yaml`](https://github.com/kernelci/kernelci-core/blob/master/config/core/test-configs.yaml).
-The goal is to have each kselftest collection run on at least 2 platforms of
-each CPU architecture. All these tests are typically run on every kernel
-revision built by KernelCI, except for those that aren't present in older
-kernel revisions.
-
-| Platform | arch | cpufreq | filesystems | futex | lib | livepatch | lkdtm | rtc | seccomp | vm |
-|---------------------------|---------|---------|-------------|-------|-----|-----------|-------|-----|---------|----|
-| asus-C433TA-AJ0005-rammus | x86\_64 | | | | | | ✔ | | ✔ | |
-| asus-C436FA-Flip-hatch | x86\_64 | | ✔ | ✔ | | | | ✔ | | |
-| asus-C523NA-A20057-coral | x86\_64 | | ✔ | ✔ | ✔ | | ✔ | | ✔ | |
-| asus-cx9400-volteer | x86\_64 | | | | ✔ | | | | | ✔ |
-| hip07-d05 | arm64 | | | ✔ | | | | | | |
-| hp-11A-G6-EE-grunt | x86\_64 | | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| hp-x360-12b-n4000-octopus | x86\_64 | | ✔ | ✔ | ✔ | | | | | |
-| hp-x360-14-G1-sona | x86\_64 | ✔ | | | | | | | | |
-| meson-g12b-odroid-n2 | arm | | | | ✔ | | | | | |
-| mt8173-elm-hana | arm64 | ✔ | ✔ | ✔ | ✔ | | ✔ | ✔ | ✔ | |
-| qcom-qdf2400 | arm | | ✔ | ✔ | ✔ | | ✔ | | ✔ | |
-| r8a774a1-hihope-rzg2m-ex | arm64 | | ✔ | ✔ | ✔ | | ✔ | | ✔ | |
-| rk3288-veyron-jaq | arm | | | | ✔ | | | | | |
-| rk3399-gru-kevin | arm | | | | | | | ✔ | | |
diff --git a/kernelci.org/content/en/legacy/tests/ltp.md b/kernelci.org/content/en/legacy/tests/ltp.md
deleted file mode 100644
index db9b0b03..00000000
--- a/kernelci.org/content/en/legacy/tests/ltp.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: "LTP"
-date: 2021-08-05
-description: "Native tests: LTP"
-weight: 3
----
-
-## About
-
-The [Linux Test Project](https://linux-test-project.github.io/) is one of the
-major open-source test suites for Linux systems at large. Only a subset of it
-is run by KernelCI, focusing on the tests that appear most relevant to kernel
-testing.
-
-Debian Buster user-space images for running these tests are built regularly,
-typically once a week. They contain all of LTP, built from source from the
-latest version of the `master` branch. They are stored on the
-[KernelCI storage
-server](https://storage.kernelci.org/images/rootfs/debian/buster-ltp/?C=M&O=D).
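-
-For reference, building LTP from source in a similar way can be sketched as
-follows (the install prefix is arbitrary):
-
-```sh
-git clone https://github.com/linux-test-project/ltp.git
-cd ltp
-make autotools
-./configure --prefix=/opt/ltp
-make -j$(nproc)
-make install
-```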
-
-## KernelCI coverage
-
-Initial GitHub issue: [#506](https://github.com/kernelci/kernelci-core/issues/506)
-
-The table below shows a summary of the current KernelCI LTP coverage per CPU
-architecture and platform for each subset. Until a more dynamic orchestration
-becomes available, this is all defined in
-[`test-configs.yaml`](https://github.com/kernelci/kernelci-core/blob/master/config/core/test-configs.yaml).
-The goal is to have each LTP subset run on at least 2 platforms of each CPU
-architecture. All these tests are typically run on every kernel revision built
-by KernelCI, except for trees filtered out by labs or if the kernel is too old
-to support the platform.
-
-| Platform | arch | crypto | fcntl-locktests | ima | ipc | mm | pty | timers |
-|:-------------------------:|:-------:|:------:|:---------------:|:---:|:---:|:--:|:---:|:------:|
-| asus-C433TA-AJ0005-rammus | x86\_64 | | ✔ | | | | ✔ | ✔ |
-| asus-C436FA-Flip-hatch | x86\_64 | | | | ✔ | ✔ | | ✔ |
-| asus-C523NA-A20057-coral | x86\_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| asus-cx9400-volteer | x86\_64 | | | | ✔ | | | |
-| bcm2836-rpi-2-b | arm | ✔ | | | | | | |
-| beaglebone-black | arm | | | | ✔ | | | |
-| hip07-d05 | arm64 | | | | | | | |
-| hp-11A-G6-EE-grunt | x86\_64 | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ |
-| hp-x360-12b-n4000-octopus | x86\_64 | ✔ | | | ✔ | ✔ | | |
-| hp-x360-14-G1-sona | x86\_64 | | | | | | | |
-| meson-g12b-odroid-n2 | arm | | | | | | | |
-| mt8173-elm-hana | arm64 | ✔ | ✔ | | ✔ | ✔ | ✔ | ✔ |
-| qcom-qdf2400 | arm | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| qemu\_x86\_64 | x86\_64 | | | | | | | ✔ |
-| r8a774a1-hihope-rzg2m-ex | arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
-| rk3288-rock2-square | arm | | | | | ✔ | | |
-| rk3288-veyron-jaq | arm | | | | | | | ✔ |
-| rk3399-gru-kevin | arm | | ✔ | | | | ✔ | ✔ |