From Andre Przywara:
while experimenting with guest NUMA configurations I realized that Xen injects the host’s core count into each guest.
I believe this behavior is wrong; the number of cores should somehow depend on the number of VCPUs.
Currently a CPUID decoding tool of mine gives the following output for a 4-VCPU guest:
————
HTT: 1, CmpLegacy: 1, LogicalProcessorCount: 24
AMD decoding:
NC: 23, ApicIdCoreIdSize: 16
24 cores  (legacy method)
24/16 cores (extended method)
————
(This is on a 12-core host CPU).
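For reference, this is roughly how those numbers relate to the raw CPUID fields; a minimal stand-alone sketch (using GCC’s __get_cpuid, field layouts as documented for AMD CPUs), not the actual code of my tool:

    /* Sketch: how the figures above are derived from CPUID on an AMD guest.
     * Field layouts follow the public AMD CPUID documentation:
     *   leaf 1:          EBX[23:16] = LogicalProcessorCount, EDX[28] = HTT
     *   leaf 0x80000001: ECX[1]     = CmpLegacy
     *   leaf 0x80000008: ECX[7:0]   = NC (cores - 1),
     *                    ECX[15:12] = ApicIdCoreIdSize
     */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        unsigned int logical    = (ebx >> 16) & 0xff;  /* LogicalProcessorCount */
        unsigned int htt        = (edx >> 28) & 1;     /* HTT flag */

        __get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx);
        unsigned int cmp_legacy = (ecx >> 1) & 1;      /* CmpLegacy */

        __get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
        unsigned int nc         = ecx & 0xff;          /* NC = cores - 1 */
        unsigned int apic_bits  = (ecx >> 12) & 0xf;   /* ApicIdCoreIdSize */

        /* Legacy method: with HTT and CmpLegacy set, LogicalProcessorCount
         * is taken as the number of cores in the package. */
        unsigned int legacy = (htt && cmp_legacy) ? logical : 1;

        /* Extended method: NC + 1 cores, with 2^ApicIdCoreIdSize giving the
         * maximum number of core IDs the APIC ID can encode. */
        unsigned int extended = nc + 1;
        unsigned int max_ids  = apic_bits ? (1u << apic_bits) : extended;

        printf("HTT: %u, CmpLegacy: %u, LogicalProcessorCount: %u\n",
               htt, cmp_legacy, logical);
        printf("NC: %u, ApicIdCoreIdSize: %u\n", nc, apic_bits);
        printf("%u cores (legacy method)\n", legacy);
        printf("%u/%u cores (extended method)\n", extended, max_ids);
        return 0;
    }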
Applying my previous patch reduces the 24 to 12, but that still does not match the 4 VCPUs seen.
For proper NUMA functionality we need saner values here. It seems that at least Linux does not care about the strange numbers as long as NUMA is not used, but when an SRAT table is found, the guest kernel panics in the scheduler’s rebalancer because of those bogus numbers.
How shall we solve this issue? I see several ways:
1. Always inject one core per processor. SMP guests are then n-way; the CPUID setup is trivial and works well. But we may run into licensing issues, as some software (MS Windows comes to mind) is limited by the number of processors, but not by the number of cores.
2. Inject exactly the same number of cores as there are VCPUs. This could lead to unusual core counts, but software should cope with it (as there are 3-core, 6-core and 12-core processors).
This would lead to problems with a NUMA setup, though.
3. Let the user specify the number of cores in the config file. This needs user interaction and can lead to problems if it somehow conflicts with the number of VCPUs, but it would be nice to have as an additional tuning parameter. I could implement this.
4. Implement solution 2), but adjust the behavior when guest NUMA is enabled. We could make sure that the number of cores does not exceed the number of VCPUs on one NUMA node (see the sketch after this list).
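To make 4) a bit more concrete, the sketch below shows the kind of clamping I have in mind. All identifiers here (pick_reported_cores, max_vcpus, nr_vnodes, cfg_cores) are made up for illustration and do not refer to existing Xen code:

    /* Rough sketch of option 4: choose the core count to report via CPUID.
     * max_vcpus  - number of VCPUs of the guest
     * nr_vnodes  - number of virtual NUMA nodes (1 = no guest NUMA)
     * cfg_cores  - optional user-specified core count from the config file
     */
    static unsigned int pick_reported_cores(unsigned int max_vcpus,
                                            unsigned int nr_vnodes,
                                            unsigned int cfg_cores)
    {
        unsigned int cores;

        if (cfg_cores)                   /* option 3: user-specified value */
            cores = cfg_cores;
        else                             /* option 2: one core per VCPU    */
            cores = max_vcpus;

        if (nr_vnodes > 1) {             /* option 4: guest NUMA enabled   */
            unsigned int per_node = (max_vcpus + nr_vnodes - 1) / nr_vnodes;
            if (cores > per_node)
                cores = per_node;        /* never exceed VCPUs per node    */
        }

        if (cores == 0)
            cores = 1;

        return cores;                    /* NC would then be cores - 1     */
    }

With a 4-VCPU guest spread over two virtual nodes this would report 2 cores per processor (NC = 1), which a NUMA-aware guest kernel should be able to live with.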
What approach shall I use? Are there other concerns regarding the CPUID readout of the number of cores?