* nested virtualization on Intel and needed cpu flags to pass
@ 2011-11-22 16:52 Gianluca Cecchi
From: Gianluca Cecchi @ 2011-11-22 16:52 UTC
To: kvm
Hello,
I'm going to test nested virtualization on Intel with a Fedora 16 host.
I have used it successfully with AMD, where it is enabled by default
in the kvm-amd module.
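As an aside, a quick way to confirm the AMD default, assuming the
kvm_amd module is loaded and the usual sysfs path for module
parameters:
# cat /sys/module/kvm_amd/parameters/nested
1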
Based on
https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt
and F16 now having
[root@f16 ~]# uname -r
3.1.1-2.fc16.x86_64
with the same information confirmed in its kernel-doc file
/usr/share/doc/kernel-doc-3.1.1/Documentation/virtual/kvm/nested-vmx.txt
In F15:
# uname -r
2.6.40.6-0.fc15.x86_64
# modinfo kvm-intel
...
parm: bypass_guest_pf:bool
parm: vpid:bool
parm: flexpriority:bool
parm: ept:bool
parm: unrestricted_guest:bool
parm: emulate_invalid_guest_state:bool
parm: vmm_exclusive:bool
parm: yield_on_hlt:bool
parm: ple_gap:int
parm: ple_window:int
In F16, indeed, we now have:
# modinfo kvm-intel
...
vermagic: 3.1.1-2.fc16.x86_64 SMP mod_unload
parm: vpid:bool
parm: flexpriority:bool
parm: ept:bool
parm: unrestricted_guest:bool
parm: emulate_invalid_guest_state:bool
parm: vmm_exclusive:bool
parm: yield_on_hlt:bool
parm: nested:bool
parm: ple_gap:int
parm: ple_window:int
Based on the docs, the "nested" option is disabled by default on Intel
(and probably left unchanged in Fedora).
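For reference, a minimal sketch of enabling it persistently, assuming
the stock Fedora module setup (the file name kvm-nested.conf is just
an example; any file under /etc/modprobe.d/ works):
# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# modprobe -r kvm_intel && modprobe kvm_intel
(the module reload requires that no VMs are running at the time)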
I ran some tests with these F16 packages a few days ago:
kernel-3.1.0-7.fc16.x86_64
qemu-kvm-0.15.1-1.fc16.x86_64
virt-manager-0.9.0-7.fc16.noarch
The host is an Asus U36SD laptop with 8 GB of RAM and an SSD disk.
Its CPU is:
model name : Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
On the host:
$ sudo systool -m kvm_intel -v|grep nested
nested = "Y"
Preliminary results are not so good.
I created an F16 guest (f16vm) with default options.
I then set its virtio disk to cache mode = none and I/O = native,
and selected "copy to guest" as the CPU, which set it to "nehalem".
Inside the guest I created a Windows XP guest with the default values
proposed by virt-manager
(the CD ISO is Windows XP SP3).
So far I have not been able to complete the installation.
Results are better if I choose "core2duo" as the CPU of f16vm,
but the installation blocks at different points, never reaching the end
of the first "copying files" phase of the Windows XP install.
This is because f16vm freezes (no more network, no more console...)
and I have to power it off.
I set up f16vm with a serial console to see if anything appears, but
nothing shows up when f16vm freezes.
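For anyone reproducing this, a sketch of one common serial-console
setup, assuming the guest already has a <serial type='pty'> device
defined by libvirt: add console parameters to the guest's kernel
command line in its grub config, e.g.
console=tty0 console=ttyS0,115200
and then attach from the host with:
# virsh console f16vm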
The host runs its F16 guest (f16vm) with this command:
# ps -ef|grep qemu
qemu 18638 1 85 15:39 ? 00:03:45 /usr/bin/qemu-kvm -S
-M pc-0.14 -cpu
core2duo,+rdtscp,+x2apic,+xtpr,+tm2,+est,+vmx,+ds_cpl,+pbe,+tm,+ht,+ss,+acpi,+ds
-enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name f16 -uuid
690251ac-b691-f320-1eba-9f7f61c2e0d3 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/f16.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
file=/var/lib/libvirt/images/f16.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=23,id=hostnet0 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:62:12:4a,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
spicevmc,id=charchannel0,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
-usb -device usb-tablet,id=input0 -spice
port=5900,addr=127.0.0.1,disable-ticketing -vga qxl -global
qxl-vga.vram_size=67108864 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
The F16 guest (f16vm) runs its Windows XP install guest with this command:
qemu 1355 1 92 15:40 ? 00:03:38 /usr/bin/qemu-kvm -S
-M pc-0.14 -cpu
core2duo,+lahf_lm,+rdtscp,+hypervisor,+x2apic,+cx16,+vmx,+ss,-monitor
-enable-kvm -m 768 -smp 1,sockets=1,cores=1,threads=1 -name winxp
-uuid 9e7ed89e-1bab-b354-cccb-b545d0cc2c29 -nodefconfig -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/winxp.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime
-drive file=/var/lib/libvirt/images/winxp.img,if=none,id=drive-ide0-0-0,format=raw
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2
-drive file=/var/lib/libvirt/images/winxp_sp3.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-netdev tap,fd=24,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:91:6b:e6,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -device
usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga std -device
intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
root 1602 1251 0 15:44 pts/0 00:00:00 grep --color=auto qemu
F16 guest (f16vm) serial console output:
Fedora release 16 (Verne)
Kernel 3.1.0-7.fc16.x86_64 on an x86_64 (ttyS0)
f16vm login:
Fedora release 16 (Verne)
Kernel 3.1.0-7.fc16.x86_64 on an x86_64 (ttyS0)
f16vm login: [ 0.443443] ioremap error for 0x7ffff000-0x80000000,
requested 0x10, got 0x0
[ 3.885697] microcode: CPU0 update to revision 0xba failed
Started LSB: daemon for libvirt virtualization API.
Starting LSB: suspend/resume libvirt guests on shutdown/boot...
Started NFS file locking service..
Started Sendmail Mail Transport Client.
Started LSB: suspend/resume libvirt guests on shutdown/boot.
Started LSB: Starts and stops login and scanning of iSCSI devices..
Fedora release 16 (Verne)
Kernel 3.1.0-7.fc16.x86_64 on an x86_64 (ttyS0)
f16vm login:
---> no more output until it freezes
/var/log/messages on f16vm:
[root@f16vm ~]# tail -f /var/log/messages
Nov 9 15:40:33 f16vm dbus-daemon[783]: ** Message: No devices in use, exit
Nov 9 15:40:42 f16vm kernel: [ 50.520366] device vnet0 entered
promiscuous mode
Nov 9 15:40:42 f16vm kernel: [ 50.521677] virbr1: topology change
detected, propagating
Nov 9 15:40:42 f16vm kernel: [ 50.521680] virbr1: port 2(vnet0)
entering forwarding state
Nov 9 15:40:42 f16vm kernel: [ 50.521687] virbr1: port 2(vnet0)
entering forwarding state
Nov 9 15:40:42 f16vm kernel: [ 50.521773] ADDRCONF(NETDEV_CHANGE):
virbr1: link becomes ready
Nov 9 15:40:42 f16vm NetworkManager[752]: <warn>
/sys/devices/virtual/net/vnet0: couldn't determine device driver;
ignoring...
Nov 9 15:40:42 f16vm NetworkManager[752]: NetworkManager[752]: <warn>
/sys/devices/virtual/net/vnet0: couldn't determine device driver;
ignoring...
Nov 9 15:40:42 f16vm qemu-kvm: Could not find keytab file:
/etc/qemu/krb5.tab: No such file or directory
Nov 9 15:40:43 f16vm avahi-daemon[755]: Registering new address
record for fe80::fc54:ff:fe91:6be6 on vnet0.*.
---> nothing more until it freezes
More tests to come with the newer kernel 3.1.1-2.fc16.x86_64.
But before proceeding, I probably need to adjust which particular CPU
features to enable/disable for the f16vm guest...
What is the current advice about CPU flags to pass in order to make
optimal use of nested virtualization with Intel at this time?
Thanks in advance.
* Re: nested virtualization on Intel and needed cpu flags to pass
From: Nadav Har'El @ 2011-11-23 11:01 UTC
To: Gianluca Cecchi; +Cc: kvm
On Tue, Nov 22, 2011, Gianluca Cecchi wrote about "nested virtualization on Intel and needed cpu flags to pass":
> I'm going to test nested virtualization on Intel with Fedora 16 host.
>...
> [root@f16 ~]# uname -r
> 3.1.1-2.fc16.x86_64
This Linux version indeed has nested VMX, while
> # uname -r
> 2.6.40.6-0.fc15.x86_64
doesn't.
> Based on docs, the "nested" option is disabled by default on Intel
Indeed. You need to load the kvm_intel module with the "nested=1"
option. You also need to tell qemu that the emulated CPU will advertise
VMX - the simplest way to do this is with the "-cpu host" option to qemu.
It seems you did all of this correctly.
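A minimal sketch of such an L1 invocation (the image path, memory size
and disk setup here are placeholders, not the reporter's actual setup):
qemu-kvm -enable-kvm -cpu host -m 2048 \
    -drive file=/path/to/l1-guest.img,if=virtio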
> Preliminary results are not so good.
> I created an F16 guest (f16vm), with default options
>..
> Inside the guest I create a windows xp with default values proposed by
>..
> Until now not able to complete installation
Unfortunately, this is a known bug - which I promised to work on, but
haven't yet got around to :(
nested-vmx.txt explicitly lists under "known limitations" that: "The
current code supports running Linux guests under KVM guests."
> more tests to come with newer kernel 3.1.1-2.fc16.x86_64
> But before proceeding, probably I need to adjust particular features
> to enable/disable about
> cpu to pass to f16vm guest...
I don't think this is the problem. The underlying problem is that the
VMX spec is very complex, and ideally nested VMX should correctly emulate
each and every bit and each and every corner of it. Because all our
testing was done with KVM L1s and Linux L2s, all the paths that get
exercised in that case were tested and their bugs ironed out - but it is
possible that Windows L2s end up exercising slightly different code
paths that still have bugs, and those need to be fixed.
Unfortunately, I doubt a newer kernel will solve your problem. I think
there are genuine bugs that will have to be fixed.
> What are current advises about cpu flags to pass to optimally use
> nested virtualization with intel at this time?
I don't think there are any such guidelines. The only thing you really
need is "-cpu qemu64,+vmx" (replace qemu64 with whatever you want) to
advertise the existence of VMX.
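A quick way to verify that VMX actually made it into the L1 guest is
to run, inside that guest:
$ grep -c vmx /proc/cpuinfo
A non-zero count means the L1 kernel sees the vmx flag, so kvm_intel
can be loaded there.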
--
Nadav Har'El | Wednesday, Nov 23 2011,
nyh@math.technion.ac.il |-----------------------------------------
Phone +972-523-790466, ICQ 13349191 |When you lose, don't lose the lesson.
http://nadav.harel.org.il |
* Re: nested virtualization on Intel and needed cpu flags to pass
From: Gianluca Cecchi @ 2011-11-24 15:05 UTC
To: kvm
Resending, because it probably didn't reach the mailing list due to
the attachment sizes...
I'm posting links now instead...
On Wed, Nov 23, 2011 at 12:01 PM, Nadav Har'El wrote:
> Unfortunately, this is a known bug - which I promised to work on, but
> haven't yet got around to :(
> nested-vmx.txt explictly lists under "known limitations" that: "The
> current code supports running Linux guests under KVM guests."
[snip]
> I don't think there is any such guidelines. The only thing you really
> need is "-cpu qemu64,+vmx" (replace qemu64 by whatever you want) to
> advertise the exisance of VMX.
>
OK, thanks for the answer.
I have now tested this configuration:
Host: F16 with these packages:
kernel-3.1.1-2.fc16.x86_64
virt-manager-0.9.0-7.fc16.noarch
qemu-kvm-0.15.1-3.fc16.x86_64
libvirt-0.9.6-2.fc16.x86_64
L1 guest: f16vm, with the same virtualization-related packages as the host.
L2 guest: c56, configured as "Red Hat 5.4 or above" and set to boot from CD;
the CD is an ISO of CentOS 5.6 Live x86_64.
The L1 guest is configured with "copy host cpu configuration" in
virt-manager (a sketch of the resulting cpu XML follows the cpuinfo below);
its cpuinfo gives:
[root@f16vm ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
stepping : 11
cpu MHz : 2693.880
cache size : 4096 KB
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm
constant_tsc up nopl pni vmx ssse3 cx16 sse4_1 sse4_2 x2apic popcnt
hypervisor lahf_lm
bogomips : 5387.76
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
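For reference, a hedged sketch of the kind of cpu XML that virt-manager
option tends to generate; the exact model name and feature list depend
on the host, so treat this only as an illustration:
[root@f16 ~]# virsh dumpxml f16vm | sed -n '/<cpu/,/<\/cpu>/p'
<cpu match='exact'>
  <model>core2duo</model>
  <feature policy='require' name='vmx'/>
  ...
</cpu>
(feature list abbreviated)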
The qemu command line on my host for f16vm is:
/usr/bin/qemu-kvm -S -M pc-0.14 -cpu
core2duo,+lahf_lm,+rdtscp,+popcnt,+x2apic,+sse4.2,+sse4.1,+xtpr,+cx16,+tm2,+est,+vmx,+ds_cpl,+pbe,+tm,+ht,+ss,+acpi,+ds
-enable-kvm -m 3192 -smp 1,sockets=1,cores=1,threads=1 -name f16vm
....
On L1 f16vm, the qemu command line for its L2 guest c56 is:
[root@f16vm ~]# ps -ef|grep qemu
qemu 1834 1 10 12:49 ? 00:01:07 /usr/bin/qemu-kvm -S
-M pc-0.14 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1
-name c56 -uuid 15526957-51c9-1958-8a15-dea8f2626e5d -nodefconfig
-nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c56.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -drive
file=/var/lib/libvirt/images/CentOS-5.6-x86_64-LiveCD.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:0 -vga
cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
[root@f16vm ~]# virsh start c56
After a while c56 goes into a paused state:
[root@f16vm ~]# virsh domstate c56
paused
[root@f16vm ~]# virsh domstate c56 --reason
paused (unknown)
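When a libvirt guest pauses with reason "unknown", one place worth
checking, assuming the default libvirt log location, is the per-domain
qemu log:
[root@f16vm ~]# tail /var/log/libvirt/qemu/c56.log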
I only got up to these lines, captured with virt-dmesg run from f16vm
against c56:
https://docs.google.com/open?id=0BwoPbcrMv8mvMWVmYTRkNDMtMjMzMi00OWViLWI1NTctYjA1YzU2NmM0ZmU5
I am also posting an image of what I see on the console of the c56 L2
guest before it gets paused...
https://docs.google.com/open?id=0BwoPbcrMv8mvMWVjNGNmYTUtNTcxOC00MzBkLWI5YWYtZDhmNDkxMzM1OTEx
Thanks,
Gianluca