hw-claudio edited this page Nov 18, 2014 · 53 revisions

The AArch64 port of OSv has been started, initially targeting the QEMU mach-virt platform running on the ARM Foundation Model v8.

Functional Status

The aarch64-next branch can already build the loader image and run a simple "hello world" C application on top of OSv, as well as other test applications (the default image contains the uush micro shell).

With mainline, the only manual step required is adding the needed shared objects (.so files) to the bootfs manifest skeleton, since only the first-stage OSv image is built.
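As a sketch, adding a shared object to the first-stage image means appending a line to the manifest skeleton. The entry below assumes the usual OSv manifest syntax of `guest-path: host-path`; the file name and paths are illustrative:

```
# hypothetical entry: ship an extra test application in the bootfs image
/tests/tst-hello.so: ./tests/tst-hello.so
```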

The latest AArch64 work is happening in the "pci" branch at:

https://github.com/hw-claudio/osv_aarch64/tree/pci

The PCI branch requires a fair number of QEMU patches to be useful. The latest version of the patch series needed to get PCI working on AArch64 QEMU should be posted soon by Alvise Rigo on qemu-devel.

You can find a QEMU tree which includes all the necessary patches in the "pci-on-machvirt" branch at:

https://github.com/hw-claudio/qemu-aarch64-queue/tree/pci-on-machvirt

In order to build the final usr.img there are issues to solve: we need PCI, virtio-blk and virtio-net, because mkfs.py (for ZFS) and upload_manifest.py communicate with the OSv instance via virtio-net, and virtio-blk is needed to put the ZFS filesystem somewhere. For that to happen, a functional PCI bus must be available in mach-virt. In practice even this is not sufficient: the build system relies on ACPI power management to "turn off" the VM, and the Python scripts in scripts/ which are called as part of the build are cross-compilation unfriendly, since they check for features on the build host instead of the target and include lots of x86-specific code.

Component Status

* build system: the first pass mostly works through cross-compilation, but there are issues proceeding any further. The build system produces the final image on x86_64 by performing multiple passes, where at each stage the image generated in the previous step is executed to generate the next. This multi-stage build was not designed with cross-compilation in mind, and it involves not only the compilation and linking of binaries, but also many scripts launched externally by the build system (see scripts/). One short-term workaround identified is to use QEMU TCG system emulation to run an AArch64 image in software on x86_64. The scripts still need considerable rework, and in practice this approach has not worked due to a range of issues (the ACPI assumption, among others).

A better solution would be to avoid the multi-stage build completely, and instead perform the tasks that currently require running OSv in a user-space process as part of the normal build. However, there seem to be ZFS-related issues preventing this from working.

* qemu-system-aarch64 TCG software system emulation: the AArch64 image currently runs on the Foundation Model, and also successfully under QEMU TCG software system emulation. This requires the very latest QEMU, which fixes issues with PTE attributes, and the latest OSv, where a GIC cpumask issue was fixed.

* tests with available hardware (development boards): preliminary tests with the currently available hardware should be done. This is also an alternative route for getting a full build working (the real long-term solution remains making the build fully cross-compilable and avoiding unnecessary multi-stage builds).

* external dependencies: Avi has successfully added the Fedora packages for AArch64, and this seems ok.

* devices support: NONE available at the moment. A tentative set of patches adding a PCI bus to the mach-virt platform is circulating on qemu-devel. On top of that, virtio devices (virtio-net, virtio-blk) should be added and tested.

* smp support: currently only one CPU can boot. Booting multiple CPUs already requires many dependencies (possibly ACPI) and additional work. Running SMP will trigger bugs and expose missing functionality that will need to be sorted out.

* libc: we need to implement setjmp, longjmp and the signal code, and also fix the broken/missing floating-point support for the libc math functions that depend on floating-point representation. Floating point is necessary to get a basic virtio-blk test case (fsmark) to work.

* hardware information passing from the host to the guest is not available yet: addresses, interrupts, etc. are hard-coded based on the QEMU mach-virt platform. Needed: ACPI from QEMU. ACPI guest support is a work in progress; an interim solution will probably involve device trees.

* ELF64: initial relocations for get_init() are supported, plus the basic relocations necessary to run simple applications. More work is foreseen to support more complex applications.

* console: pl011 UART output and input are now available, with no FIFO (or, FIFO depth = 1).

* backtrace: basic functionality now available.

* power: currently stubbed, depends on the ACPI work.

* exception handling: basic infrastructure is there, but work is needed on synchronous exceptions to differentiate page faults from other sync aborts.

* MMU: done, but needs revisiting for the sigfault-detection speedup feature added on x64 (and stubbed on AArch64). There might also be an issue with page table permissions which triggers in QEMU system emulation but not in the Foundation Model.

* page fault and VMA handling: basic functionality now available.

* interrupts: basic functionality now available.

* GIC: more or less done (GICv2).

* Generic Timers: basic functionality available; needs a patch for the host bootwrapper for the Foundation Model (CNTFRQ_EL0 needs to be set to the correct value, 100 MHz, in the Makefile).

* scheduling: basic support for task switching is available (switch-to-first, switch-to).

* signals: work started.

* arch trace: nothing available.

* sampler: sampler support missing for AArch64.

* scripts: most scripts have not even been looked at for AArch64.

* management tools: management tools have not been looked at yet for AArch64.

* tests: some tests build, but most don't because of other missing components. No attempt has been made to run any tests besides tst-hello.so.

* memcpy optimizations: not a priority at the moment, and still not implemented; trivial memcpy and memcpy_backwards implementations are used on AArch64.

Build instructions

These are some brief instructions for cross-compiling OSv's loader.img (still very incomplete) on an x86_64 build host machine.

At the time of writing, the available functionality is minimal: the loader image boots, the GIC and timers are initialized, and a simple hello world application is started on top of OSv (on aarch64-next), or you get an abort with a backtrace in ZFS (on master).

Crosscompiler Tools and Host Image from Linaro

You can find a 32-bit (needs multilib 😟) cross-compiler from Linaro: in particular the package gcc-linaro-aarch64-linux-gnu-4.8-2013.12_linux, which is not distro-specific 😃 and includes all the tools needed for building.

http://releases.linaro.org/13.12/components/toolchain/binaries

For the host root filesystem for AArch64, a good option is the Linaro LEG image

linaro-image-leg-java-genericarmv8-20131215-598.rootfs.tar.gz

http://releases.linaro.org/13.12/openembedded/aarch64/

You can experiment with other images and compilers from Linaro, but these are the ones I am using right now.

Crosscompiler Tools for Ubuntu

For Ubuntu, there are AArch64 cross-compilers available in the official repositories as well. The packages are named g++-4.8-aarch64-linux-gnu and gcc-4.8-aarch64-linux-gnu.

Ubuntu is also used over here, and works ok.

Preparing the AArch64 Host

You will need to have or build an AArch64 Linux kernel for the host, which will be run on top of the Foundation v8 Model. In addition to that, you will need the bootwrapper, which you can get from:

http://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/boot-wrapper-aarch64.git

Use foundation-v8.dts; my suggestion is to use nfsroot to mount the root filesystem (the Linaro LEG image). The boot wrapper takes the Linux kernel Image as input and produces linux-system.axf, which is the input for the Foundation Model.

Running ARMv8 Foundation Model

Start the Foundation model:

./Foundation_v8 --image=linux-system.axf --cores=1 --network=nat --network-nat-ports=1234=1234

The latter option exposes port 1234 on the host side as the same port number in the guest running inside the model. You can add additional mappings as desired/needed.

If you are skipping the user-space initialization with something like init=/bin/sh for speedup (edit the bootwrapper Makefile), you might need to run the following inside the Foundation Model:

/sbin/udhcpc eth0

Preparing the guest: External dependencies

Nothing to do anymore: they are now part of the mainline tree.

Preparing the guest: Environment Variables for make

In addition to the general requirements for building OSv (see README.md), note that the build system recognizes the ARCH and CROSS_PREFIX environment variables, and looks for the following build tools:

CXX=$(CROSS_PREFIX)g++
CC=$(CROSS_PREFIX)gcc
LD=$(CROSS_PREFIX)ld
STRIP=$(CROSS_PREFIX)strip
OBJCOPY=$(CROSS_PREFIX)objcopy
HOST_CXX=g++

To build for AArch64, contrary to the past when the target architecture was automatically detected by running the supplied compiler, you now need to explicitly pass make ARCH=aarch64; otherwise the build system will detect ARCH by running uname on the host machine and try to build for x64.
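As a minimal sketch, this is how the tool names listed above are derived from CROSS_PREFIX, and how a cross build is then invoked. The prefix value below matches the Ubuntu cross packages mentioned earlier; adjust it for your toolchain:

```shell
# The build system prepends CROSS_PREFIX to each tool name.
CROSS_PREFIX=aarch64-linux-gnu-
CXX="${CROSS_PREFIX}g++"
CC="${CROSS_PREFIX}gcc"
LD="${CROSS_PREFIX}ld"
echo "$CXX"   # aarch64-linux-gnu-g++
echo "$CC"    # aarch64-linux-gnu-gcc
echo "$LD"    # aarch64-linux-gnu-ld

# The actual cross build is then started explicitly with:
#   make ARCH=aarch64 CROSS_PREFIX=aarch64-linux-gnu-
```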

At the beginning of the build process, look for this message:

build.mk:
build.mk: building arch=aarch64, override with ARCH env
build.mk:

If the message does not say arch=aarch64, the cross-compiler could not be found or run correctly. In this case, check the CROSS_PREFIX variable, or the compiler binary name if it's not canonical (for example, do you need to add a symlink from g++-4.8.3 to g++?).
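The symlink fix can be sketched as follows; the versioned binary name and scratch directory are illustrative, so adapt the paths to wherever your toolchain is installed (and make sure the directory is on PATH):

```shell
# If the toolchain only ships versioned binaries (e.g. g++-4.8.3),
# create an unversioned symlink so $(CROSS_PREFIX)g++ resolves.
# Demonstrated here in a scratch directory with a dummy file.
dir=$(mktemp -d)
touch "$dir/aarch64-linux-gnu-g++-4.8.3"
ln -s "aarch64-linux-gnu-g++-4.8.3" "$dir/aarch64-linux-gnu-g++"
readlink "$dir/aarch64-linux-gnu-g++"   # aarch64-linux-gnu-g++-4.8.3
rm -r "$dir"
```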

Running the guest

An example QEMU command line that works running on top of the Foundation Model with KVM and an AArch64 qemu-system-aarch64 binary:


$ qemu-system-aarch64 -nographic -M virt -enable-kvm \
    -kernel ./loader.img -cpu host -m 1024M

An example QEMU command line for software system emulation on an x86_64 host with an x86_64 qemu-system-aarch64 binary:


$ qemu-system-aarch64 -nographic -M virt \
    -kernel ./loader.img -cpu cortex-a57 -m 1024M

Jani Kokkonen <[email protected]>
Claudio Fontana <[email protected]>