We just wrapped up a weekend at FOSDEM 2026 where Xen and XCP-ng both had dedicated booths. This was my first time at FOSDEM and the legend lived up to the hype.
It was an insightful weekend full of people genuinely curious about the Xen Project: everyone from folks who used Xen 20 years ago, to folks running Xen today, to folks who had little exposure to why hypervisors matter.
By far, the most common question I received was, “How is Xen different from KVM or Proxmox?” Those are actually two separate questions, and they are worth taking one at a time.
First, Proxmox (Proxmox VE) is a virtualization solution built on top of the KVM hypervisor. Xen, on the other hand, is a hypervisor. To make a fair comparison with Proxmox, you really want to compare it with a Xen-based virtualization solution, such as XCP-ng. Under the hood, what we are really comparing is KVM versus Xen.
To compare these two hypervisor technologies, it helps to understand the different types of hypervisors: Type 1 and Type 2. A Type 2 hypervisor runs on top of a host operating system. Common examples include VirtualBox and Parallels.
In contrast, a Type 1 hypervisor runs directly on the hardware, with no host operating system sitting between the hypervisor and the hardware. KVM is implemented as a set of kernel modules and runs in kernel space. For that reason, it is often described as a Type 1 hypervisor. At the same time, KVM depends on the full Linux kernel for scheduling, memory management, drivers, and device support, along with everything that comes with that.
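A quick, illustrative way to see this on a Linux host with virtualization enabled is to look for the KVM kernel modules:

```
# KVM lives inside the running Linux kernel as loadable modules.
# On most hosts this lists "kvm" plus "kvm_intel" or "kvm_amd".
lsmod | grep kvm
```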
Xen is what we often refer to as a “true” Type 1 hypervisor. The Xen hypervisor boots directly on the hardware and does not depend on the Linux kernel. Linux is commonly used in a privileged control domain, known as Dom0, but it is not required for Xen itself to function.
At a high level, the architectural difference looks like this:

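A simplified sketch of the two stacks (plenty of detail omitted):

```
Xen                                       KVM

+--------+  +-------------------+         +-------------------+  +----------------+
|  Dom0  |  |     Guest VMs     |         |  Guest VMs (QEMU) |  | Host processes |
+--------+  +-------------------+         +-------------------+  +----------------+
+-------------------------------+         +---------------------------------------+
|        Xen hypervisor         |         |      Linux kernel + KVM modules       |
+-------------------------------+         +---------------------------------------+
+-------------------------------+         +---------------------------------------+
|           Hardware            |         |               Hardware                |
+-------------------------------+         +---------------------------------------+
```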
What does this mean in practice? Xen offers a very small, purpose-built hypervisor layer. For some workloads, this can translate into more predictable performance and stronger isolation, because the hypervisor is not competing with a general-purpose operating system kernel at the same layer. The Xen hypervisor also contains far fewer lines of code than a full Linux kernel, which reduces the attack surface and the amount of code that needs to be audited and maintained, and there are fewer dependencies to pull in when the hypervisor itself is updated.
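This split is easy to see on a running Xen host (assuming the standard xl toolstack is installed): the hypervisor reports its own version and resources, and the Linux control domain shows up as just another domain, Domain-0.

```
# Ask the hypervisor itself for host-level details
# (Xen version, number of physical CPUs, total memory, ...):
xl info

# List running domains; the Linux control domain appears as "Domain-0",
# scheduled by Xen alongside the guest VMs:
xl list
```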
Another common question I heard was, “What about scheduling processes? Xen must be missing the Linux kernel scheduler.” That’s true in the sense that Xen does not rely on the Linux scheduler. Instead, Xen ships its own CPU schedulers, designed specifically for scheduling virtual machines.
At a high level, Xen distinguishes between physical CPUs (pCPUs), which are the actual cores on the machine, and virtual CPUs (vCPUs), which are presented to guest VMs. Xen can dedicate physical CPUs to specific VMs using CPU pinning, ensuring that a VM runs only on a defined set of cores. Xen can also expose vCPUs that are scheduled onto shared physical CPUs, allowing multiple VMs to share CPU resources in a controlled way.
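As a minimal sketch (assuming the xl toolstack and a hypothetical guest named “demo-vm”), pinning and inspecting vCPU placement looks like this:

```
# Pin vCPU 0 of the guest to physical CPU 2, and vCPU 1 to physical CPU 3
# (syntax: xl vcpu-pin <domain> <vcpu> <pcpus>):
xl vcpu-pin demo-vm 0 2
xl vcpu-pin demo-vm 1 3

# Show the resulting vCPU-to-pCPU placement and affinities:
xl vcpu-list demo-vm
```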
Xen supports several scheduling algorithms depending on the use case. These include general-purpose schedulers such as Credit and Credit2, which aim to fairly share CPU time across VMs, as well as real-time schedulers like RTDS for latency-sensitive and deterministic workloads. This flexibility allows operators to choose between throughput, fairness, or predictability depending on their needs.
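As a rough illustration (again assuming the xl toolstack and the hypothetical guest “demo-vm”), the scheduler is chosen on the Xen boot command line, and per-domain parameters can be tuned at runtime:

```
# Choose the scheduler via the Xen boot command line (e.g. in GRUB):
#   sched=credit2   general-purpose, weighted fair sharing
#   sched=rtds      real-time deferrable server for deterministic workloads

# With Credit2, raise a domain's relative weight (the default is 256):
xl sched-credit2 -d demo-vm -w 512

# With RTDS, guarantee the domain 4000us of CPU time in every 10000us period:
xl sched-rtds -d demo-vm -p 10000 -b 4000
```

Different cpupools on the same host can even run different schedulers, so latency-sensitive domains can sit in an RTDS pool while the rest share a Credit2 pool.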
The embedded space is where Xen’s architecture really shines, though it is not limited to embedded environments. In safety-critical applications, such as automotive systems, the smaller Xen hypervisor codebase means fewer lines of code that need to be audited and certified. Members of the Xen Project are actively working toward safety certification goals. By comparison, certifying KVM together with the full Linux kernel is a much larger undertaking and may not always be worth the investment. Today, the community’s expectation is that Xen can achieve safety certification for certain automotive use cases on a timeline around 2026 or 2027. Fewer lines of code also mean faster certification cycles and simpler long-term maintenance, including security updates.
At the booth, I also had a small proof of concept running to make this more tangible. The demo showed an automotive-oriented workload running on a consumer embedded device, using Xen to isolate and manage multiple domains. While still a work in progress, it sparked a lot of good conversations about how Xen can be used for mixed-criticality workloads in constrained environments and as a foundation for safety-critical systems.

Overall, Xen has a clean architecture built specifically for virtualization. The Xen hypervisor itself does not depend on the Linux kernel, and that architectural choice is what sets Xen apart from Proxmox and other KVM-based solutions. You get a purpose-built hypervisor designed from the ground up for isolation, configurability, and security, with fewer lines of code and fewer moving parts at the lowest level.
Come and see what we are working on at the Xen Project. We’re a Linux Foundation project with open governance, supported by a broad set of members and contributors from across industry and academia. Whether you are running Xen today, evaluating virtualization technologies, or exploring safety-critical and embedded use cases, we’d love to hear from you. Join the conversation, follow the project’s evolution, and help shape where Xen goes next.
Further reading and links:
- Additional perspectives from the Xen ecosystem at FOSDEM 2026
- XCP-ng project
- Xen Project Matrix Channels (come chat with us)
- Xen CPU scheduling (Credit scheduler)
- Xen RTDS scheduler
- KVM in the Linux kernel
- Proxmox VE documentation