* [Qemu-devel] inconsistent handling of "qemu64" CPU model
From: Chris Friesen @ 2016-05-26 5:13 UTC (permalink / raw)
To: libvir-list, qemu-devel@nongnu.org
Hi,
I'm not sure where the problem lies, hence the CC to both lists. Please copy me
on the reply.
I'm playing with OpenStack's devstack environment on an Ubuntu 14.04 host with a
Celeron 2961Y CPU. (libvirt detects it as a Nehalem with a bunch of extra
features.) QEMU reports version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.7~cloud2).
If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU,
and /proc/cpuinfo in the guest instance looks something like this:
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 6
model name	: QEMU Virtual CPU version 2.2.0
stepping	: 3
microcode	: 0x1
flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush
mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic popcnt
hypervisor lahf_lm abm vnmi ept
However, if I explicitly specify a custom CPU model of "qemu64" the instance
refuses to boot and I get a log saying:
libvirtError: unsupported configuration: guest and host CPU are not compatible:
Host CPU does not provide required features: svm
When this happens, some of the XML for the domain looks like this:
<os>
<type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
....
<cpu mode='custom' match='exact'>
<model fallback='allow'>qemu64</model>
<topology sockets='1' cores='1' threads='1'/>
</cpu>
Of course "svm" is an AMD flag and I'm running an Intel CPU. But why does it
work when I just rely on the default virtual CPU? Is kvm_default_unset_features
handled differently when it's implicit vs explicit?
If I explicitly specify a custom CPU model of "kvm64" then it boots, but of
course I get a different virtual CPU from what I get if I don't specify anything.
Following some old suggestions I tried turning off nested kvm, deleting
/var/cache/libvirt/qemu/capabilities/*, and restarting libvirtd. Didn't help.
So...anyone got any ideas what's going on? Is there no way to explicitly
specify the model that you get by default?
Thanks,
Chris
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
From: Kashyap Chamarthy @ 2016-05-26 9:45 UTC (permalink / raw)
To: Chris Friesen; +Cc: libvir-list, qemu-devel@nongnu.org
On Wed, May 25, 2016 at 11:13:24PM -0600, Chris Friesen wrote:
[...]
> However, if I explicitly specify a custom CPU model of "qemu64" the
> instance refuses to boot and I get a log saying:
[Not a direct answer to the exact issue you're facing, but a related
issue that is being investigated presently...]
There is currently a related regression in upstream libvirt 1.3.4:
The crux of that issue is that a custom libvirt CPU model ('gate64') is not
being translated into a CPU definition that QEMU can recognize
(which you can list with `qemu-system-x86_64 -cpu \?`).
See this bug (it has a reproducer and discussion):
https://bugzilla.redhat.com/show_bug.cgi?id=1339680 -- libvirt CPU
driver fails to translate a custom CPU model into something that
QEMU recognizes
Jiri Denemark bisected the regression to this commit:
v1.2.9-31-g445a09b "qemu: Don't compare CPU against host for TCG".
> libvirtError: unsupported configuration: guest and host CPU are not
> compatible: Host CPU does not provide required features: svm
>
> When this happens, some of the XML for the domain looks like this:
> <os>
> <type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
> ....
>
> <cpu mode='custom' match='exact'>
> <model fallback='allow'>qemu64</model>
> <topology sockets='1' cores='1' threads='1'/>
> </cpu>
>
> Of course "svm" is an AMD flag and I'm running an Intel CPU. But why does
> it work when I just rely on the default virtual CPU? Is
> kvm_default_unset_features handled differently when it's implicit vs
> explicit?
>
> If I explicitly specify a custom CPU model of "kvm64" then it boots, but of
> course I get a different virtual CPU from what I get if I don't specify
> anything.
>
> Following some old suggestions I tried turning off nested kvm, deleting
> /var/cache/libvirt/qemu/capabilities/*, and restarting libvirtd. Didn't
> help.
>
> So...anyone got any ideas what's going on? Is there no way to explicitly
> specify the model that you get by default?
>
>
> Thanks,
> Chris
>
--
/kashyap
* Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
From: Jiri Denemark @ 2016-05-26 10:41 UTC (permalink / raw)
To: Chris Friesen; +Cc: libvir-list, qemu-devel@nongnu.org
On Wed, May 25, 2016 at 23:13:24 -0600, Chris Friesen wrote:
> Hi,
>
> If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU,
> and /proc/cpuinfo in the guest instance looks something like this:
>
> processor	: 0
> vendor_id	: GenuineIntel
> cpu family	: 6
> model		: 6
> model name	: QEMU Virtual CPU version 2.2.0
> stepping	: 3
> microcode	: 0x1
> flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush
> mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic popcnt
> hypervisor lahf_lm abm vnmi ept
>
>
> However, if I explicitly specify a custom CPU model of "qemu64" the instance
> refuses to boot and I get a log saying:
>
> libvirtError: unsupported configuration: guest and host CPU are not
> compatible: Host CPU does not provide required features: svm
The qemu64 CPU model contains svm, and thus libvirt will always consider
it incompatible with any Intel CPU (Intel CPUs have vmx instead of svm). On
the other hand, QEMU by default ignores features that are missing in the
host CPU and has no problem using the qemu64 CPU; the guest just won't see
some of the features defined in the qemu64 model.
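To see the difference side by side, here is a toy sketch of the two checks (a simplification for illustration, not libvirt's or QEMU's actual code; the feature sets are abbreviated):

```python
# Toy model: libvirt's strict compatibility check vs. QEMU's default
# "drop what the host lacks" behavior. Feature sets are abbreviated.

QEMU64_FEATURES = {"sse2", "syscall", "nx", "lm", "cx16", "svm"}   # svm is in the model
INTEL_HOST_FEATURES = {"sse2", "syscall", "nx", "lm", "cx16", "vmx"}  # vmx, no svm

def libvirt_check(model, host):
    """libvirt: every feature in the model must be provided by the host."""
    missing = model - host
    if missing:
        raise ValueError("Host CPU does not provide required features: "
                         + ",".join(sorted(missing)))
    return set(model)

def qemu_default_check(model, host):
    """QEMU default: silently drop features the host lacks; the guest
    simply won't see them."""
    return model & host

# QEMU boots fine, the guest just loses svm:
guest = qemu_default_check(QEMU64_FEATURES, INTEL_HOST_FEATURES)
assert "svm" not in guest

# libvirt refuses, producing the error from the original post:
try:
    libvirt_check(QEMU64_FEATURES, INTEL_HOST_FEATURES)
except ValueError as e:
    print(e)
```

This is why the implicit default works (only QEMU's lenient path runs) while the explicit `<cpu mode='custom'>` fails (libvirt's strict comparison runs first).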
In your case, you should be able to use
<cpu mode='custom' match='exact'>
<model>qemu64</model>
<feature name='svm' policy='disable'/>
</cpu>
to get the same CPU model you'd get by default (if not, you may need to
also add <feature name='vmx' policy='require'/>).
Alternatively
<cpu mode='custom' match='exact'>
<model>qemu64</model>
<feature name='svm' policy='force'/>
</cpu>
should work too (and it would be better in case you use it on an AMD
host).
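The feature policies used above can be summarized with a small sketch (again a simplification, not libvirt's internals; only the three policies mentioned here are modeled):

```python
# Toy model of libvirt's <feature name=... policy=...> semantics for the
# three policies discussed in this thread: disable, require, force.

def apply_policy(model, host, feature, policy):
    """Return (guest_features, ok) after applying one <feature> element."""
    guest = set(model)
    if policy == "disable":      # drop the feature; no host check needed
        guest.discard(feature)
        return guest, True
    if policy == "require":      # keep the feature, but the host must have it
        guest.add(feature)
        return guest, feature in host
    if policy == "force":        # expose it even if the host lacks it
        guest.add(feature)
        return guest, True
    raise ValueError("unknown policy: " + policy)

qemu64 = {"sse2", "nx", "lm", "svm"}
intel_host = {"sse2", "nx", "lm", "vmx"}

# First suggestion: disable svm -> compatible, guest loses svm.
guest, ok = apply_policy(qemu64, intel_host, "svm", "disable")
assert ok and "svm" not in guest

# Second suggestion: force svm -> compatible, svm exposed regardless of host.
guest, ok = apply_policy(qemu64, intel_host, "svm", "force")
assert ok and "svm" in guest
```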
But why do you even want to specify the qemu64 CPU explicitly in a domain
XML? If you're fine with that CPU, just let QEMU use the default one. If not,
use a CPU model that fits your host/needs better.
BTW, using qemu64 with TCG (i.e., domain type='qemu' as opposed to
type='kvm') is fine because libvirt won't check it against the host CPU and
QEMU will emulate all features, so you'd get even the features that the host
CPU does not support.
Jirka
P.S. Kashyap is right, the issue he mentioned is not related at all to
your case.
* Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
From: Chris Friesen @ 2016-05-26 14:08 UTC (permalink / raw)
To: libvir-list, qemu-devel@nongnu.org
On 05/26/2016 04:41 AM, Jiri Denemark wrote:
> The qemu64 CPU model contains svm, and thus libvirt will always consider
> it incompatible with any Intel CPU (Intel CPUs have vmx instead of svm). On
> the other hand, QEMU by default ignores features that are missing in the
> host CPU and has no problem using the qemu64 CPU; the guest just won't see
> some of the features defined in the qemu64 model.
>
> In your case, you should be able to use
>
> <cpu mode='custom' match='exact'>
> <model>qemu64</model>
> <feature name='svm' policy='disable'/>
> </cpu>
>
> to get the same CPU model you'd get by default (if not, you may need to
> also add <feature name='vmx' policy='require'/>).
>
> Alternatively
>
> <cpu mode='custom' match='exact'>
> <model>qemu64</model>
> <feature name='svm' policy='force'/>
> </cpu>
>
> should work too (and it would be better in case you use it on an AMD
> host).
It's actually OpenStack that is setting up the XML, not me, so I'd have to
special-case the "qemu64" model and it'd get ugly. :)
The question remains: why is "qemu64" okay when used implicitly but not
explicitly? I would have expected them to behave the same.
> But why do you even want to specify the qemu64 CPU explicitly in a domain
> XML? If you're fine with that CPU, just let QEMU use the default one. If not,
> use a CPU model that fits your host/needs better.
Working around another issue would be simpler/cleaner if I could just explicitly
set the model to qemu64.
Chris
* Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
From: Divan Santana @ 2016-10-15 21:05 UTC (permalink / raw)
To: qemu-devel
> In your case, you should be able to use
>
> <cpu mode='custom' match='exact'>
> <model>qemu64</model>
> <feature name='svm' policy='disable'/>
> </cpu>
>
> to get the same CPU model you'd get by default (if not, you may need to
> also add <feature name='vmx' policy='require'/>).
>
> Alternatively
>
> <cpu mode='custom' match='exact'>
> <model>qemu64</model>
> <feature name='svm' policy='force'/>
> </cpu>
>
> should work too (and it would be better in case you use it on an AMD
> host).
Does anyone know how one can make the above the default behavior on a
system, using libvirt?
The reason is that I make use of some Vagrant boxes which have this issue.
I understand that the boxes are incorrectly configured and hence I have
this issue.
I'd prefer to set the above as the default on my system, rather than editing
all the Vagrant boxes I'm using.
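One possible workaround (a hypothetical sketch, not an official libvirt feature; the function name and example XML are mine) is to post-process the generated domain XML and inject the <feature> element before defining the domain:

```python
# Hypothetical workaround: inject <feature name='svm' policy='disable'/> into
# a domain XML's <cpu> element, so boxes you don't control can be patched
# externally instead of edited one by one.
import xml.etree.ElementTree as ET

def disable_svm(domain_xml: str) -> str:
    root = ET.fromstring(domain_xml)
    cpu = root.find("cpu")
    if cpu is None:
        return domain_xml  # no <cpu> element: nothing to patch
    # Idempotent: don't add a second <feature name='svm'> if one exists.
    if not any(f.get("name") == "svm" for f in cpu.findall("feature")):
        ET.SubElement(cpu, "feature", {"name": "svm", "policy": "disable"})
    return ET.tostring(root, encoding="unicode")

example = """<domain type='kvm'>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
  </cpu>
</domain>"""
patched = disable_svm(example)
assert 'policy="disable"' in patched
```

libvirt's hook mechanism (scripts under /etc/libvirt/hooks/) may also let you rewrite domain XML centrally; check the libvirt hooks documentation for whether your version supports XML modification from the qemu hook.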