Xen has always supported a wide variety of operating systems as guests, while the choice on the host side has always been less bright: at the moment it basically comes down to Linux or NetBSD. However, a renewed interest in improving the FreeBSD support for XEN may drastically change this landscape by adding a further option for hosts. In the last few months, in fact, a collaborative effort has been started to bring FreeBSD support for XEN up to the state of the art and, possibly, to the same level of performance as Linux.
FreeBSD-CURRENT XEN support has been pretty much the same for the last three years. More precisely, it is defined by the following set of cases (please note: for the remainder of the post, as FreeBSD nomenclature demands, i386 is the 32-bit version of the x86 architecture while amd64 is the 64-bit version):
- No Dom0 support.
- PV support: kernel support to run FreeBSD as a paravirtualized guest, including PV frontend drivers for blk and net; specific to the i386 architecture only.
- HVM-specific kernel configurations (which enable and disable kernel features based on how useful they are to a hardware-virtualized DomU guest), including PV frontend drivers, for both amd64 and i386.
- No XEN support for arm.
In other words, FreeBSD can run only as a DomU guest: in HVM mode on both amd64 and i386, and in PV mode only on i386. This discrepancy in paravirtualized support likely exists because the four protection rings available on i386 make the changes to the kernel pmap layer (and, in general, to the VM subsystem) far less invasive and difficult than they would be on amd64: on i386 the hypervisor can take over ring 0 fairly easily, with the guest kernel simply dropping to a lower-privileged ring, while on amd64 the kernel ends up having to share a privilege level with userland.
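As a quick practical aside (not part of the porting effort itself), on a reasonably recent FreeBSD DomU you can usually tell how the guest is being virtualized with a couple of stock commands; the exact output naturally depends on the FreeBSD version and on how the domain was created:
# sysctl -n hw.machine
# sysctl -n kern.vm_guest
# dmesg | grep -i xen
hw.machine reports i386 or amd64, kern.vm_guest reports "xen" when the kernel has detected the hypervisor, and the boot messages show whether the PV frontend drivers attached (for instance via the xenpci device in the HVM case).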
In the last year, many companies have shown a strong interest in moving FreeBSD-XEN support into high gear. More precisely, a particularly determined effort has been initiated by SpectraLogic, which has directly sponsored Cherry G. Mathew, who had already worked on the NetBSD port of XEN, to bring FreeBSD on par with Linux and NetBSD in terms of XEN features. Additionally, Citrix Systems and Dell Inc. have sponsored some of their developers' time to help out with the porting.
The new development is then focused on the following items:
- Add support for paravirtualized guests on the amd64 architecture
- Improve performance of PV and HVM guests by providing more PV drivers and other kernel refinements (already present in other operating systems, like Linux)
- Add support for Dom0 and then for backend drivers
- Finally, integrate the XEN hypervisor into the FreeBSD ports collection
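Purely as an illustration of what that last item could eventually mean for users, and assuming hypothetical port names chosen here only as placeholders, installing the hypervisor and its toolstack would then be as simple as a regular ports build:
# cd /usr/ports/emulators/xen-kernel && make install clean
# cd /usr/ports/sysutils/xen-tools && make install clean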
The new development has been happening in an official FreeBSD svn branch, located here:
http://svn.freebsd.org/base/projects/amd64_xen_pv/
Having a centralized repository helps developers coordinate, lets testers always get cutting-edge code and, finally, will make merging back to -CURRENT as simple as possible. The history of changes and the commit logs will, in fact, be preserved as-is when the branch is merged back.
A quick way to test the state of the first item, for example, consists of checking out the branch on an amd64 machine and building a kernel as follows:
# mkdir /dirprefix/
# mkdir /amd64_xen_pv/
# svn checkout svn://svn.freebsd.org/base/projects/amd64_xen_pv/ /amd64_xen_pv/
# cd /amd64_xen_pv/
# setenv MAKEOBJDIRPREFIX /dirprefix/
# make -j10 -s buildkernel KERNCONF=XEN
# ls /dirprefix/amd64_xen_pv/sys/XEN/kernel
The file listed at the end is the kernel binary, ready to be run as a guest within your XEN configuration.
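As a rough sketch of what running it as a guest could look like, assuming an xl (or, on older toolstacks, xm) setup on the Dom0 side, a minimal PV domain configuration pointing at the freshly built kernel might resemble the following; the file name, domain name, memory size and vcpu count are placeholders, and no disk is configured since, at this stage, the kernel only boots up to the debugger anyway (see below):
# cat freebsd-amd64-pv.cfg
name = "freebsd-amd64-pv"
kernel = "/dirprefix/amd64_xen_pv/sys/XEN/kernel"
memory = 512
vcpus = 1
# xl create -c freebsd-amd64-pv.cfg
The -c flag attaches the guest console at creation time, which is handy for watching the early boot messages described next.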
At the current stage, the code in the branch builds a kernel that works only in single-processor configurations and gets through most of the booting process. In fact, after initializing most of the x86 structures, the VM subsystem, the event channel subsystem and, finally, the console driver, it ends up invoking KDB (FreeBSD's kernel debugger) in a controlled manner. A big emphasis has been put on the pmap layer modifications required to make the kernel run at the same privilege level as userland applications, in order to let the hypervisor take over ring 0.
The next natural steps towards completing the first item are bringing the boot sequence to completion (which should mostly require driver changes) and then adding multiprocessor support by implementing the IPI logic. Finally, in order to have a fully capable amd64 PV guest, support for MSI should be introduced.