
Xen 4.4 Released

March 10, 2014 | Announcements

Xenproject.org is pleased to announce the release of Xen 4.4.0. The release is available from the download page.

Xen 4.4 is the work of 8 months of development, with 1193 changesets. It's our first release made with an attempt at a 6-month development cycle. Between Christmas and a few important blockers, we missed that by about 6 weeks, but still not too bad overall.
Additionally, this cycle we've had a massive increase in the amount of testing. The Xen Project's regression testing system, osstest, has received a number of additional tests, and the XenServer team at Citrix have put Xen through their massive testing suite (XenRT). Also, early in this development cycle we got the go-ahead to use the Coverity static analysis engine to comb through the source code for hard-to-spot bugs. The result should be that Xen 4.4 is one of the most secure, reliable releases yet.

Highlights

Although the development part of the release cycle was shorter than the previous one, there are still far more exciting improvements than we can mention in this blog post; I'll call out just a few.
Probably one of the most important is solid libvirt support for libxl. Jim Fehlig from SuSE and Ian Jackson from Citrix worked together to test and improve the interface between libvirt and libxl, making it fast and reliable. This lays the foundation for solid integration into any tools that can use libvirt, from GUI VM managers to cloud orchestration layers like CloudStack or OpenStack.
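To give a flavor of what that integration looks like from a user's perspective, here is a minimal libvirt domain definition for a Xen PV guest; the guest name, kernel, and disk paths are hypothetical placeholders for illustration, not taken from this post:

```xml
<!-- Minimal illustrative libvirt domain for a Xen PV guest.
     All names and paths below are hypothetical examples. -->
<domain type='xen'>
  <name>demo-guest</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type>linux</type>
    <kernel>/path/to/guest/vmlinuz</kernel>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/demo-guest.img'/>
      <target dev='xvda' bus='xen'/>
    </disk>
  </devices>
</domain>
```

With the improved libxl driver, a definition like this can be managed through the standard libvirt tools (for example `virsh -c xen:///`), the same way one would manage KVM guests.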
Another big one is a new scalable event channel interface, designed and implemented by David Vrabel from Citrix. The original Xen event channel interface was limited to the square of the platform's word size: 1024 event channels for 32-bit guests and 4096 for 64-bit guests. With many VMs requiring 4 event channels each, that meant a theoretical maximum of 256 guests on a 32-bit dom0. That was more than enough back when a large machine had 8 cores and every VM was a full OS, but it is a major limitation on systems with 128 cores, or on those running cloud operating systems like Mirage or OSv. The new "FIFO" event channel interface by default scales up to over 100,000 event channels, and in the future can be extended even further in a backwards-compatible manner if necessary. This should be enough for many years to come.
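The arithmetic behind those limits can be sketched as follows. This is just an illustration restating the figures above, not actual Xen code, and the per-VM channel count of 4 is the rough figure from the text:

```python
# Illustrative sketch of the old 2-level event channel limits described
# above. The formulas simply restate the figures from the text.

def two_level_limit(word_bits):
    """Old 2-level ABI: event channels are capped at word size squared."""
    return word_bits * word_bits

def max_guests(total_channels, channels_per_vm=4):
    """Theoretical guest count if each VM uses a fixed number of channels."""
    return total_channels // channels_per_vm

print(two_level_limit(32))              # 1024 channels (32-bit guests)
print(two_level_limit(64))              # 4096 channels (64-bit guests)
print(max_guests(two_level_limit(32)))  # 256 guests on a 32-bit dom0
```

The FIFO interface replaces this word-size-squared cap with a design that scales to far more channels and leaves room for backwards-compatible extension.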
The ARM port is maturing quickly. As of 4.4, the hypervisor ABI for ARM has been declared stable, meaning that any guest which uses the 4.4 ARM ABI can rely on being able to boot on all future versions of Xen. There are a number of improvements making Xen on ARM more flexible, easier to set up and use, and easier to extend to new platforms. More details can be found in the Xen 4.4 feature list.
One other feature worth noting is Nested Virtualization on Intel hardware. It's not ready for production use yet, but it has improved to the point where we feel comfortable moving it from "experimental" to "tech preview". Please feel free to take it for a spin and report any issues you find.
There are many more improvements and changes under the hood. For a more complete list, see the Xen 4.4 feature list.

Features in related projects

The Xen Project is part of a much larger ecosystem of projects. We are typically very closely tied to Linux and QEMU, but a number of other projects have had important developments that are worth a mention.
The first is the paravirtualized (PV) port of grub2. Rather than having a re-implementation of grub in the Xen tree, grub2 now has native support for running in Xen and using the Xen PV block protocol. This guarantees 100% compatibility with grub2 going forward.
Another project worth a mention is the 3.3 release of Xen Orchestra. Xen Orchestra is a web interface that interfaces with the xapi protocol (and thus can be used for XCP, XenServer, or other xapi-based systems). New features include creating, reverting, and deleting snapshots; removing a host from a pool; restarting the toolstack and rebooting or shutting down a host; and a more stable upgrade process for the appliance.
Finally, GlusterFS 3.5 now supports creating iSCSI nodes. One of the benefits of this is that now, by creating iSCSI devices in dom0, Xen guest disks can be stored in GlusterFS.
