* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-01 6:16 ` Gleb Natapov
@ 2013-08-05 8:35 ` Zhanghaoyu (A)
2013-08-05 8:43 ` Gleb Natapov
2013-08-05 18:27 ` Xiao Guangrong
0 siblings, 2 replies; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-05 8:35 UTC (permalink / raw)
To: Gleb Natapov
Cc: Xiejunyong, Huangweidong (C), KVM, Michael S. Tsirkin, Luonengjun,
Xiahai, Marcelo Tosatti, paolo.bonzini@gmail.com, qemu-devel,
Bruce Rogers, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Hanweidong, Andreas Färber
>> >> >> hi all,
>> >> >>
>> >> >> I met a problem similar to the ones reported in the links below, while
>> >> >> performing live-migration and save-restore tests on the KVM platform
>> >> >> (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running a
>> >> >> telecommunication software suite in the guest:
>> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>> >> >>
>> >> >> After live migration or virsh restore [savefile], one process's CPU
>> >> >> utilization went up by about 30%, resulting in throughput
>> >> >> degradation of this process.
>> >> >>
>> >> >> If EPT is disabled, this problem is gone.
>> >> >>
>> >> >> I suspect that the KVM hypervisor is involved in this problem.
>> >> >> Based on that suspicion, I want to find the two adjacent versions of
>> >> >> kvm-kmod of which one triggers this problem and the other does not
>> >> >> (e.g. 2.6.39, 3.0-rc1), and analyze the differences between these two versions,
>> >> >> or apply the patches between these two versions by bisection, to finally find the key patches.
>> >> >>
>> >> >> Any better ideas?
>> >> >>
>> >> >> Thanks,
>> >> >> Zhang Haoyu
>> >> >
>> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>> >> >
>> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>> >> >
>> >> >Bruce
>> >>
>> >> I found the first bad
>> >> commit ([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem, by git-bisecting the kvm kernel changes (downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git).
>> >>
>> >> And,
>> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>> >> git diff
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>> >>
>> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>> >> and came to the conclusion that all of the differences between
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>> >> are contributed by none other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 itself, so this commit is the culprit which directly or indirectly causes the degradation.
>> >>
>> >> Does the map_writable flag passed to the mmu_set_spte() function affect the PTE's PAT flag, or increase the number of VM exits induced by the guest trying to write read-only memory?
>> >>
>> >> Thanks,
>> >> Zhang Haoyu
>> >>
>> >
>> >There should be no read-only memory maps backing guest RAM.
>> >
>> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>> >And if it is false, please capture the associated GFN.
>> >
>> I added the below check and printk at the start of __direct_map() at the first bad commit version,
>> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>> int pt_write = 0;
>> gfn_t pseudo_gfn;
>>
>> + if (!map_writable)
>> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>> +
>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> if (iterator.level == level) {
>> unsigned pte_access = ACC_ALL;
>>
>> I virsh-saved the VM and then virsh-restored it; so many GFNs were printed that you can absolutely describe it as flooding.
>>
>The flooding you see happens during migrate to file stage because of dirty
>page tracking. If you clear dmesg after virsh-save you should not see any
>flooding after virsh-restore. I just checked with the latest tree, and I do not.
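
[A side note on the mechanism referred to above: dirty page tracking generally works by write-protecting guest memory so that the first write to each page traps and can be logged. The sketch below is only a userspace analogy of that idea, not KVM's actual code; all names in it are standard libc or made up for illustration.]

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define NPAGES  4
#define PAGE_SZ 4096

static char *region;
static int dirty[NPAGES];                /* stand-in for a dirty bitmap */

/* First write to a write-protected page faults: log it as dirty and
 * re-enable write access so the faulting store can be restarted. */
static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    uintptr_t page = (uintptr_t)si->si_addr & ~(uintptr_t)(PAGE_SZ - 1);
    dirty[(page - (uintptr_t)region) / PAGE_SZ] = 1;
    mprotect((void *)page, PAGE_SZ, PROT_READ | PROT_WRITE);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    region = mmap(NULL, NPAGES * PAGE_SZ, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(region, 0, NPAGES * PAGE_SZ);

    /* "Start dirty logging": revoke write access to all tracked pages. */
    mprotect(region, NPAGES * PAGE_SZ, PROT_READ);

    region[0 * PAGE_SZ] = 1;             /* the "guest" writes page 0 */
    region[2 * PAGE_SZ] = 1;             /* the "guest" writes page 2 */

    for (int i = 0; i < NPAGES; i++)
        printf("page %d dirty: %d\n", i, dirty[i]);
    return 0;
}
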
I made a verification again.
I virsh-saved the VM; during the saving stage I ran 'dmesg', and no GFN was printed. Maybe the switch from the running stage to the paused stage takes such a short time
that no guest write happens during that switching period.
After the completion of the save operation, I ran 'dmesg -c' to clear the buffer all the same, then I virsh-restored the VM; so many GFNs were printed by running 'dmesg',
and I also ran 'tail -f /var/log/messages' during the restore stage, and so many GFNs were flooding in dynamically there too.
I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
On the VM's normal starting stage, only very few GFNs are printed, shown as below:
gfn = 16
gfn = 604
gfn = 605
gfn = 606
gfn = 607
gfn = 608
gfn = 609
but on the VM's restore stage, so many GFNs are printed; some examples are shown below:
2042600
2797777
2797778
2797779
2797780
2797781
2797782
2797783
2797784
2797785
2042602
2846482
2042603
2846483
2042606
2846485
2042607
2846486
2042610
2042611
2846489
2846490
2042614
2042615
2846493
2846494
2042617
2042618
2846497
2042621
2846498
2042622
2042625
Thanks,
Zhang Haoyu
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 8:35 ` [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled Zhanghaoyu (A)
@ 2013-08-05 8:43 ` Gleb Natapov
2013-08-05 9:09 ` Zhanghaoyu (A)
2013-08-05 18:27 ` Xiao Guangrong
1 sibling, 1 reply; 15+ messages in thread
From: Gleb Natapov @ 2013-08-05 8:43 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Xiejunyong, Huangweidong (C), KVM, Michael S. Tsirkin, Luonengjun,
Xiahai, Marcelo Tosatti, paolo.bonzini@gmail.com, qemu-devel,
Bruce Rogers, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Hanweidong, Andreas Färber
On Mon, Aug 05, 2013 at 08:35:09AM +0000, Zhanghaoyu (A) wrote:
> >> >> >> hi all,
> >> >> >>
> >> >> >> I met similar problem to these, while performing live migration or
> >> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> >> >> >> guest:suse11sp2), running tele-communication software suite in
> >> >> >> guest,
> >> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> >> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> >> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> >> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >> >> >>
> >> >> >> After live migration or virsh restore [savefile], one process's CPU
> >> >> >> utilization went up by about 30%, resulted in throughput
> >> >> >> degradation of this process.
> >> >> >>
> >> >> >> If EPT disabled, this problem gone.
> >> >> >>
> >> >> >> I suspect that kvm hypervisor has business with this problem.
> >> >> >> Based on above suspect, I want to find the two adjacent versions of
> >> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
> >> >> >> and analyze the differences between this two versions, or apply the
> >> >> >> patches between this two versions by bisection method, finally find the key patches.
> >> >> >>
> >> >> >> Any better ideas?
> >> >> >>
> >> >> >> Thanks,
> >> >> >> Zhang Haoyu
> >> >> >
> >> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
> >> >> >
> >> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
> >> >> >
> >> >> >Bruce
> >> >>
> >> >> I found the first bad
> >> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
> >> >>
> >> >> And,
> >> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
> >> >> git diff
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
> >> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
> >> >>
> >> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
> >> >> came to a conclusion that all of the differences between
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
> >> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
> >> >>
> >> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
> >> >>
> >> >> Thanks,
> >> >> Zhang Haoyu
> >> >>
> >> >
> >> >There should be no read-only memory maps backing guest RAM.
> >> >
> >> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
> >> >And if it is false, please capture the associated GFN.
> >> >
> >> I added below check and printk at the start of __direct_map() at the fist bad commit version,
> >> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
> >> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
> >> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
> >> int pt_write = 0;
> >> gfn_t pseudo_gfn;
> >>
> >> + if (!map_writable)
> >> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
> >> +
> >> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
> >> if (iterator.level == level) {
> >> unsigned pte_access = ACC_ALL;
> >>
> >> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
> >>
> >The flooding you see happens during migrate to file stage because of dirty
> >page tracking. If you clear dmesg after virsh-save you should not see any
> >flooding after virsh-restore. I just checked with latest tree, I do not.
>
> I made a verification again.
> I virsh-save the VM, during the saving stage, I run 'dmesg', no GFN printed, maybe the switching from running stage to pause stage takes so short time,
> no guest-write happens during this switching period.
> After the completion of saving operation, I run 'demsg -c' to clear the buffer all the same, then I virsh-restore the VM, so many GFNs are printed by running 'dmesg',
> and I also run 'tail -f /var/log/messages' during the restoring stage, so many GFNs are flooded dynamically too.
> I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>
Interesting, is this with upstream kernel? For me the situation is
exactly the opposite. What is your command line?
--
Gleb.
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 8:43 ` Gleb Natapov
@ 2013-08-05 9:09 ` Zhanghaoyu (A)
2013-08-05 9:15 ` Andreas Färber
2013-08-05 9:37 ` Gleb Natapov
0 siblings, 2 replies; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-05 9:09 UTC (permalink / raw)
To: Gleb Natapov
Cc: Xiejunyong, Huangweidong (C), KVM, Michael S. Tsirkin, Luonengjun,
Xiahai, Marcelo Tosatti, paolo.bonzini@gmail.com, qemu-devel,
Bruce Rogers, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Hanweidong, Andreas Färber
>> >> >> >> hi all,
>> >> >> >>
>> >> >> >> I met similar problem to these, while performing live migration or
>> >> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>> >> >> >> guest:suse11sp2), running tele-communication software suite in
>> >> >> >> guest,
>> >> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>> >> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>> >> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>> >> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>> >> >> >>
>> >> >> >> After live migration or virsh restore [savefile], one process's CPU
>> >> >> >> utilization went up by about 30%, resulted in throughput
>> >> >> >> degradation of this process.
>> >> >> >>
>> >> >> >> If EPT disabled, this problem gone.
>> >> >> >>
>> >> >> >> I suspect that kvm hypervisor has business with this problem.
>> >> >> >> Based on above suspect, I want to find the two adjacent versions of
>> >> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>> >> >> >> and analyze the differences between this two versions, or apply the
>> >> >> >> patches between this two versions by bisection method, finally find the key patches.
>> >> >> >>
>> >> >> >> Any better ideas?
>> >> >> >>
>> >> >> >> Thanks,
>> >> >> >> Zhang Haoyu
>> >> >> >
>> >> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>> >> >> >
>> >> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>> >> >> >
>> >> >> >Bruce
>> >> >>
>> >> >> I found the first bad
>> >> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>> >> >>
>> >> >> And,
>> >> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>> >> >> git diff
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>> >> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>> >> >>
>> >> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>> >> >> came to a conclusion that all of the differences between
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>> >> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>> >> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>> >> >>
>> >> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>> >> >>
>> >> >> Thanks,
>> >> >> Zhang Haoyu
>> >> >>
>> >> >
>> >> >There should be no read-only memory maps backing guest RAM.
>> >> >
>> >> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>> >> >And if it is false, please capture the associated GFN.
>> >> >
>> >> I added below check and printk at the start of __direct_map() at the fist bad commit version,
>> >> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>> >> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>> >> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>> >> int pt_write = 0;
>> >> gfn_t pseudo_gfn;
>> >>
>> >> + if (!map_writable)
>> >> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>> >> +
>> >> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> >> if (iterator.level == level) {
>> >> unsigned pte_access = ACC_ALL;
>> >>
>> >> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
>> >>
>> >The flooding you see happens during migrate to file stage because of dirty
>> >page tracking. If you clear dmesg after virsh-save you should not see any
>> >flooding after virsh-restore. I just checked with latest tree, I do not.
>>
>> I made a verification again.
>> I virsh-save the VM, during the saving stage, I run 'dmesg', no GFN printed, maybe the switching from running stage to pause stage takes so short time,
>> no guest-write happens during this switching period.
>> After the completion of saving operation, I run 'demsg -c' to clear the buffer all the same, then I virsh-restore the VM, so many GFNs are printed by running 'dmesg',
>> and I also run 'tail -f /var/log/messages' during the restoring stage, so many GFNs are flooded dynamically too.
>> I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>>
>Interesting, is this with upstream kernel? For me the situation is
>exactly the opposite. What is your command line?
>
I made the verification on the first bad commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, not on the upstream.
When I built the upstream, I encountered a problem: I compiled and installed the upstream (commit: e769ece3b129698d2b09811a6f6d304e4eaa8c29) on the sles11sp2 environment via the commands below,
cp /boot/config-3.0.13-0.27-default ./.config
yes "" | make oldconfig
make && make modules_install && make install
then I rebooted the host and selected the upstream kernel, but during the boot stage the problem below happened:
Could not find /dev/disk/by-id/scsi-3600508e000000000864407c5b8f7ad01-part3
I'm trying to resolve it.
The QEMU command line (from /var/log/libvirt/qemu/[domain name].log):
LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
Thanks,
Zhang Haoyu
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:09 ` Zhanghaoyu (A)
@ 2013-08-05 9:15 ` Andreas Färber
2013-08-05 9:22 ` Zhanghaoyu (A)
2013-08-05 9:37 ` Gleb Natapov
1 sibling, 1 reply; 15+ messages in thread
From: Andreas Färber @ 2013-08-05 9:15 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Xiejunyong, Huangweidong (C), KVM, Gleb Natapov,
Michael S. Tsirkin, Luonengjun, Xiahai, Marcelo Tosatti,
paolo.bonzini@gmail.com, qemu-devel, Bruce Rogers, Zanghongyong,
Xin Rong Fu, Yi Li, xiaoguangrong@linux.vnet.ibm.com, Hanweidong
Hi,
On 05.08.2013 11:09, Zhanghaoyu (A) wrote:
> When I build the upstream, encounter a problem that I compile and install the upstream(commit: e769ece3b129698d2b09811a6f6d304e4eaa8c29) on sles11sp2 environment via below command
> cp /boot/config-3.0.13-0.27-default ./.config
> yes "" | make oldconfig
> make && make modules_install && make install
> then, I reboot the host, and select the upstream kernel, but during the starting stage, below problem happened,
> Could not find /dev/disk/by-id/scsi-3600508e000000000864407c5b8f7ad01-part3
>
> I'm trying to resolve it.
Possibly you need to enable loading unsupported kernel modules?
At least that's needed when testing a kmod with a SUSE kernel.
Regards,
Andreas
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:15 ` Andreas Färber
@ 2013-08-05 9:22 ` Zhanghaoyu (A)
0 siblings, 0 replies; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-05 9:22 UTC (permalink / raw)
To: Andreas Färber
Cc: Xiejunyong, Huangweidong (C), KVM, Gleb Natapov,
Michael S. Tsirkin, Luonengjun, Xiahai, Marcelo Tosatti,
paolo.bonzini@gmail.com, qemu-devel, Bruce Rogers, Zanghongyong,
Xin Rong Fu, Yi Li, xiaoguangrong@linux.vnet.ibm.com, Hanweidong
>Hi,
>
>Am 05.08.2013 11:09, schrieb Zhanghaoyu (A):
>> When I build the upstream, encounter a problem that I compile and
>> install the upstream(commit: e769ece3b129698d2b09811a6f6d304e4eaa8c29)
>> on sles11sp2 environment via below command cp
>> /boot/config-3.0.13-0.27-default ./.config yes "" | make oldconfig
>> make && make modules_install && make install then, I reboot the host,
>> and select the upstream kernel, but during the starting stage, below
>> problem happened, Could not find
>> /dev/disk/by-id/scsi-3600508e000000000864407c5b8f7ad01-part3
>>
>> I'm trying to resolve it.
>
>Possibly you need to enable loading unsupported kernel modules?
>At least that's needed when testing a kmod with a SUSE kernel.
>
I have tried to set "allow_unsupported_modules 1" in /etc/modprobe.d/unsupported-modules, but the problem still happened.
I replaced the whole kernel with the kvm kernel, not only the kvm modules.
>Regards,
>Andreas
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:09 ` Zhanghaoyu (A)
2013-08-05 9:15 ` Andreas Färber
@ 2013-08-05 9:37 ` Gleb Natapov
2013-08-06 10:47 ` Zhanghaoyu (A)
1 sibling, 1 reply; 15+ messages in thread
From: Gleb Natapov @ 2013-08-05 9:37 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Xiejunyong, Huangweidong (C), KVM, Michael S. Tsirkin, Luonengjun,
Xiahai, Marcelo Tosatti, paolo.bonzini@gmail.com, qemu-devel,
Bruce Rogers, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Hanweidong, Andreas Färber
On Mon, Aug 05, 2013 at 09:09:56AM +0000, Zhanghaoyu (A) wrote:
> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2 -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>
Which QEMU version is this? Can you try with e1000 NICs instead of
virtio?
--
Gleb.
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 8:35 ` [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled Zhanghaoyu (A)
2013-08-05 8:43 ` Gleb Natapov
@ 2013-08-05 18:27 ` Xiao Guangrong
1 sibling, 0 replies; 15+ messages in thread
From: Xiao Guangrong @ 2013-08-05 18:27 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Xiejunyong, Huangweidong (C), KVM, Gleb Natapov,
Michael S. Tsirkin, Luonengjun, Xiahai, Marcelo Tosatti,
paolo.bonzini@gmail.com, qemu-devel, Bruce Rogers, Zanghongyong,
Xin Rong Fu, Yi Li, xiaoguangrong@linux.vnet.ibm.com, Hanweidong,
Andreas Färber
On Aug 5, 2013, at 4:35 PM, "Zhanghaoyu (A)" <haoyu.zhang@huawei.com> wrote:
>>>>>>> hi all,
>>>>>>>
>>>>>>> I met similar problem to these, while performing live migration or
>>>>>>> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>>>>>>> guest:suse11sp2), running tele-communication software suite in
>>>>>>> guest,
>>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>>>>>>> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>>>>>>> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>>>>>>> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>>>>>>>
>>>>>>> After live migration or virsh restore [savefile], one process's CPU
>>>>>>> utilization went up by about 30%, resulted in throughput
>>>>>>> degradation of this process.
>>>>>>>
>>>>>>> If EPT disabled, this problem gone.
>>>>>>>
>>>>>>> I suspect that kvm hypervisor has business with this problem.
>>>>>>> Based on above suspect, I want to find the two adjacent versions of
>>>>>>> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>>>>>>> and analyze the differences between this two versions, or apply the
>>>>>>> patches between this two versions by bisection method, finally find the key patches.
>>>>>>>
>>>>>>> Any better ideas?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Zhang Haoyu
>>>>>>
>>>>>> I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>>>>>>
>>>>>> So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>>>>>>
>>>>>> Bruce
>>>>>
>>>>> I found the first bad
>>>>> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>>>>>
>>>>> And,
>>>>> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>>>>> git diff
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>>>>> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>>>>>
>>>>> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>>>>> came to a conclusion that all of the differences between
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>>>>> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>>>>> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>>>>>
>>>>> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>>>>>
>>>>> Thanks,
>>>>> Zhang Haoyu
>>>>>
>>>>
>>>> There should be no read-only memory maps backing guest RAM.
>>>>
>>>> Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>>>> And if it is false, please capture the associated GFN.
>>>>
>>> I added below check and printk at the start of __direct_map() at the fist bad commit version,
>>> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>>> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>>> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>>> int pt_write = 0;
>>> gfn_t pseudo_gfn;
>>>
>>> + if (!map_writable)
>>> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>>> +
>>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>>> if (iterator.level == level) {
>>> unsigned pte_access = ACC_ALL;
>>>
>>> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
>>>
>> The flooding you see happens during migrate to file stage because of dirty
>> page tracking. If you clear dmesg after virsh-save you should not see any
>> flooding after virsh-restore. I just checked with latest tree, I do not.
>
> I made a verification again.
> I virsh-save the VM, during the saving stage, I run 'dmesg', no GFN printed, maybe the switching from running stage to pause stage takes so short time,
> no guest-write happens during this switching period.
> After the completion of saving operation, I run 'demsg -c' to clear the buffer all the same, then I virsh-restore the VM, so many GFNs are printed by running 'dmesg',
> and I also run 'tail -f /var/log/messages' during the restoring stage, so many GFNs are flooded dynamically too.
> I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>
> On VM's normal starting stage, only very few GFNs are printed, shown as below
> gfn = 16
> gfn = 604
> gfn = 605
> gfn = 606
> gfn = 607
> gfn = 608
> gfn = 609
>
> but on the VM's restoring stage, so many GFNs are printed, taking some examples shown as below,
That's really strange. Could you please disable EPT and add your trace code to FNAME(fetch)(), then
test again to see what will happen?
If there are still many !map_writable cases, please measure the performance to see if it still has
the regression.
Many thanks!
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-05 9:37 ` Gleb Natapov
@ 2013-08-06 10:47 ` Zhanghaoyu (A)
2013-08-07 1:34 ` Zhanghaoyu (A)
0 siblings, 1 reply; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-06 10:47 UTC (permalink / raw)
To: Gleb Natapov
Cc: Xiejunyong, Huangweidong (C), KVM, Michael S. Tsirkin, Luonengjun,
Xiahai, Marcelo Tosatti, paolo.bonzini@gmail.com, qemu-devel,
Bruce Rogers, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Hanweidong, Andreas Färber
>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32
>> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>> -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,n
>> owait -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> base=localtime -no-shutdown -device
>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cach
>> e=none -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id
>> =virtio-disk0,bootindex=1 -netdev
>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0
>> ,addr=0x3,bootindex=2 -netdev
>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0
>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device
>> virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0
>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device
>> virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0
>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device
>> virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0
>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device
>> virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0
>> ,addr=0x9 -chardev pty,id=charserial0 -device
>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
>> -watchdog-action poweroff -device
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>
>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>
This QEMU version is 1.0.0, but I also tested QEMU 1.5.2 and the same problem exists, including the performance degradation and the read-only GFNs' flooding.
I also tried with e1000 NICs instead of virtio (QEMU 1.5.2), and the same problems occurred, including the performance degradation and the read-only GFNs' flooding.
Whether with e1000 NICs or virtio NICs, the GFNs' flooding is initiated at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding starts.
Thanks,
Zhang Haoyu
>--
> Gleb.
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-06 10:47 ` Zhanghaoyu (A)
@ 2013-08-07 1:34 ` Zhanghaoyu (A)
2013-08-07 5:52 ` Gleb Natapov
0 siblings, 1 reply; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-07 1:34 UTC (permalink / raw)
To: Zhanghaoyu (A), Gleb Natapov
Cc: Marcelo Tosatti, Huangweidong (C), KVM, Michael S. Tsirkin,
paolo.bonzini@gmail.com, Xiejunyong, Luonengjun, qemu-devel,
Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Bruce Rogers, Hanweidong,
Andreas Färber
>>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
>>> QEMU_AUDIO_DRV=none
>>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu
>>> qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
>>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>>> -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,
>>> n owait -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=localtime -no-shutdown -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cac
>>> h
>>> e=none -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,i
>>> d
>>> =virtio-disk0,bootindex=1 -netdev
>>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.
>>> 0
>>> ,addr=0x3,bootindex=2 -netdev
>>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
>>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.
>>> 0
>>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device
>>> virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.
>>> 0
>>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device
>>> virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.
>>> 0
>>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device
>>> virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.
>>> 0
>>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device
>>> virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.
>>> 0
>>> ,addr=0x9 -chardev pty,id=charserial0 -device
>>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
>>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
>>> -watchdog-action poweroff -device
>>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>>
>>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>>
>This QEMU version is 1.0.0, but I also test QEMU 1.5.2, the same problem exists, including the performance degradation and readonly GFNs' flooding.
>I tried with e1000 NICs instead of virtio, including the performance degradation and readonly GFNs' flooding, the QEMU version is 1.5.2.
>No matter e1000 NICs or virtio NICs, the GFNs' flooding is initiated at post-restore stage (i.e. running stage), as soon as the restoring completed, the flooding is starting.
>
>Thanks,
>Zhang Haoyu
>
>>--
>> Gleb.
Should we focus on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
I applied the below patch to __direct_map(),
@@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
int pt_write = 0;
gfn_t pseudo_gfn;
+ map_writable = true;
+
for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
if (iterator.level == level) {
unsigned pte_access = ACC_ALL;
and rebuilt the kvm-kmod, then reloaded it with insmod.
After I started a VM, the host seemed to be abnormal: many programs could not be started successfully, and segmentation faults were reported.
In my opinion, with the above patch applied, the commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
Thanks,
Zhang Haoyu
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 1:34 ` Zhanghaoyu (A)
@ 2013-08-07 5:52 ` Gleb Natapov
2013-08-14 9:05 ` Zhanghaoyu (A)
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Gleb Natapov @ 2013-08-07 5:52 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: Marcelo Tosatti, Huangweidong (C), KVM, Michael S. Tsirkin,
paolo.bonzini@gmail.com, Xiejunyong, Luonengjun, qemu-devel,
Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Bruce Rogers, Hanweidong,
Andreas Färber
On Wed, Aug 07, 2013 at 01:34:41AM +0000, Zhanghaoyu (A) wrote:
> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
> >>> QEMU_AUDIO_DRV=none
> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu
> >>> qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
> >>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
> >>> -chardev
> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,
> >>> n owait -mon chardev=charmonitor,id=monitor,mode=control -rtc
> >>> base=localtime -no-shutdown -device
> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> >>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cac
> >>> h
> >>> e=none -device
> >>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,i
> >>> d
> >>> =virtio-disk0,bootindex=1 -netdev
> >>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
> >>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x3,bootindex=2 -netdev
> >>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
> >>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device
> >>> virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device
> >>> virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device
> >>> virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device
> >>> virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.
> >>> 0
> >>> ,addr=0x9 -chardev pty,id=charserial0 -device
> >>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
> >>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
> >>> -watchdog-action poweroff -device
> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
> >>>
> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
> >>
> >This QEMU version is 1.0.0, but I also test QEMU 1.5.2, the same problem exists, including the performance degradation and readonly GFNs' flooding.
> >I tried with e1000 NICs instead of virtio, including the performance degradation and readonly GFNs' flooding, the QEMU version is 1.5.2.
> >No matter e1000 NICs or virtio NICs, the GFNs' flooding is initiated at post-restore stage (i.e. running stage), as soon as the restoring completed, the flooding is starting.
> >
> >Thanks,
> >Zhang Haoyu
> >
> >>--
> >> Gleb.
>
> Should we focus on the first bad commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>
Not really. There is no point in debugging a very old version compiled
with kvm-kmod; there are too many variables in the environment. I cannot
reproduce the GFN flooding on upstream, so the problem may be gone, may
be a result of a kvm-kmod problem, or there may be something different in how I
invoke qemu. So the best way to proceed is for you to reproduce with the upstream
version; then at least I will be sure that we are using the same code.
> I applied below patch to __direct_map(),
> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
> int pt_write = 0;
> gfn_t pseudo_gfn;
>
> + map_writable = true;
> +
> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
> if (iterator.level == level) {
> unsigned pte_access = ACC_ALL;
> and rebuild the kvm-kmod, then re-insmod it.
> After I started a VM, the host seemed to be abnormal, so many programs cannot be started successfully, segmentation fault is reported.
> In my opinion, after above patch applied, the commit: 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should be of no effect, but the test result proved me wrong.
> Dose the map_writable value's getting process in hva_to_pfn() have effect on the result?
>
If hva_to_pfn() returns map_writable == false it means that the page is
mapped as read-only on the primary MMU, so it should not be mapped writable
on the secondary MMU either. This should not usually happen.
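
[A plain userspace sketch of the constraint described above (only an analogy, not KVM code; all names are standard libc): a page mapped read-only in a process's page tables cannot be written without faulting, which is why the secondary MMU must not grant the guest write access to it either. Granting it anyway, as the forced map_writable = true experiment effectively does, would let the guest write behind the primary MMU's back, which would be consistent with the host-side segmentation faults reported above.]

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>

static sigjmp_buf env;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(env, 1);
}

int main(void)
{
    /* A page mapped read-only in the primary (process) page tables. */
    char *p = mmap(NULL, 4096, PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    signal(SIGSEGV, on_segv);

    if (sigsetjmp(env, 1) == 0) {
        *(volatile char *)p = 1;    /* write through the read-only mapping */
        puts("write succeeded (unexpected)");
    } else {
        puts("write faulted: the page is read-only on the primary MMU");
    }
    return 0;
}
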
--
Gleb.
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
@ 2013-08-08 11:31 Zhanghaoyu (A)
2013-08-08 12:29 ` Paolo Bonzini
0 siblings, 1 reply; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-08 11:31 UTC (permalink / raw)
To: Xiao Guangrong
Cc: Xiejunyong, Huangweidong (C), KVM, Gleb Natapov,
Michael S. Tsirkin, Luonengjun, Xiahai, Marcelo Tosatti,
paolo.bonzini@gmail.com, qemu-devel, Bruce Rogers, Zanghongyong,
Xin Rong Fu, Yi Li, xiaoguangrong@linux.vnet.ibm.com, Hanweidong,
Andreas Färber
>>>> >> >> hi all,
>>>> >> >>
>>>> >> >> I met similar problem to these, while performing live migration or
>>>> >> >> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
>>>> >> >> guest:suse11sp2), running tele-communication software suite in
>>>> >> >> guest,
>>>> >> >> https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
>>>> >> >> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
>>>> >> >> http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
>>>> >> >> https://bugzilla.kernel.org/show_bug.cgi?id=58771
>>>> >> >>
>>>> >> >> After live migration or virsh restore [savefile], one process's CPU
>>>> >> >> utilization went up by about 30%, resulted in throughput
>>>> >> >> degradation of this process.
>>>> >> >>
>>>> >> >> If EPT disabled, this problem gone.
>>>> >> >>
>>>> >> >> I suspect that kvm hypervisor has business with this problem.
>>>> >> >> Based on above suspect, I want to find the two adjacent versions of
>>>> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1),
>>>> >> >> and analyze the differences between this two versions, or apply the
>>>> >> >> patches between this two versions by bisection method, finally find the key patches.
>>>> >> >>
>>>> >> >> Any better ideas?
>>>> >> >>
>>>> >> >> Thanks,
>>>> >> >> Zhang Haoyu
>>>> >> >
>>>> >> >I've attempted to duplicate this on a number of machines that are as similar to yours as I am able to get my hands on, and so far have not been able to see any performance degradation. And from what I've read in the above links, huge pages do not seem to be part of the problem.
>>>> >> >
>>>> >> >So, if you are in a position to bisect the kernel changes, that would probably be the best avenue to pursue in my opinion.
>>>> >> >
>>>> >> >Bruce
>>>> >>
>>>> >> I found the first bad
>>>> >> commit([612819c3c6e67bac8fceaa7cc402f13b1b63f7e4] KVM: propagate fault r/w information to gup(), allow read-only memory) which triggers this problem by git bisecting the kvm kernel (download from https://git.kernel.org/pub/scm/virt/kvm/kvm.git) changes.
>>>> >>
>>>> >> And,
>>>> >> git log 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 -n 1 -p >
>>>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log
>>>> >> git diff
>>>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1..612819c3c6e67bac8fceaa7cc4
>>>> >> 02f13b1b63f7e4 > 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff
>>>> >>
>>>> >> Then, I diffed 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.log and
>>>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4.diff,
>>>> >> came to a conclusion that all of the differences between
>>>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4~1 and
>>>> >> 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4
>>>> >> are contributed by no other than 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4, so this commit is the peace-breaker which directly or indirectly causes the degradation.
>>>> >>
>>>> >> Does the map_writable flag passed to mmu_set_spte() function have effect on PTE's PAT flag or increase the VMEXITs induced by that guest tried to write read-only memory?
>>>> >>
>>>> >> Thanks,
>>>> >> Zhang Haoyu
>>>> >>
>>>> >
>>>> >There should be no read-only memory maps backing guest RAM.
>>>> >
>>>> >Can you confirm map_writable = false is being passed to __direct_map? (this should not happen, for guest RAM).
>>>> >And if it is false, please capture the associated GFN.
>>>> >
>>>> I added below check and printk at the start of __direct_map() at the fist bad commit version,
>>>> --- kvm-612819c3c6e67bac8fceaa7cc402f13b1b63f7e4/arch/x86/kvm/mmu.c 2013-07-26 18:44:05.000000000 +0800
>>>> +++ kvm-612819/arch/x86/kvm/mmu.c 2013-07-31 00:05:48.000000000 +0800
>>>> @@ -2223,6 +2223,9 @@ static int __direct_map(struct kvm_vcpu
>>>> int pt_write = 0;
>>>> gfn_t pseudo_gfn;
>>>>
>>>> + if (!map_writable)
>>>> + printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
>>>> +
>>>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>>>> if (iterator.level == level) {
>>>> unsigned pte_access = ACC_ALL;
>>>>
>>>> I virsh-save the VM, and then virsh-restore it, so many GFNs were printed, you can absolutely describe it as flooding.
>>>>
>>>The flooding you see happens during migrate to file stage because of dirty
>>>page tracking. If you clear dmesg after virsh-save you should not see any
>>>flooding after virsh-restore. I just checked with latest tree, I do not.
>>
>>I made a verification again.
>>I virsh-save the VM, during the saving stage, I run 'dmesg', no GFN printed, maybe the switching from running stage to pause stage takes so short time,
>>no guest-write happens during this switching period.
>>After the completion of saving operation, I run 'demsg -c' to clear the buffer all the same, then I virsh-restore the VM, so many GFNs are printed by running 'dmesg',
>>and I also run 'tail -f /var/log/messages' during the restoring stage, so many GFNs are flooded dynamically too.
>>I'm sure that the flooding happens during the virsh-restore stage, not the migration stage.
>>
>>On VM's normal starting stage, only very few GFNs are printed, shown as below
>>gfn = 16
>>gfn = 604
>>gfn = 605
>>gfn = 606
>>gfn = 607
>>gfn = 608
>>gfn = 609
>>
>>but on the VM's restoring stage, so many GFNs are printed, taking some examples shown as below,
>>
>That's really strange. Could you please disable ept and add your trace code to FNAME(fetch)( ), then
>test again to see what will happen?
>
>If there is still have many !rmap_writable cases, please measure the performance to see if it still has
>regression.
>
I made a test on a SLES11-SP2 environment (kernel version: 3.0.13-0.27), and applied the below patch to arch/x86/kvm/paging_tmpl.h:
@@ -480,6 +480,9 @@ static u64 *FNAME(fetch)(struct kvm_vcpu
if (!is_present_gpte(gw->ptes[gw->level - 1]))
return NULL;
+ if (!map_writable)
+ printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gw->gfn);
+
direct_access = gw->pt_access & gw->pte_access;
if (!dirty)
direct_access &= ~ACC_WRITE_MASK;
Then I rebuilt the kvm-kmod and reloaded kvm-intel.ko with ept=0.
I virsh-saved the VM and then virsh-restored it; the performance degradation disappeared, and no GFN was printed.
But I also made a test on the first bad commit (612819c3c6e67bac8fceaa7cc402f13b1b63f7e4), with the above patch applied too.
With EPT disabled, as soon as the restore completed, the GFNs' flooding started; some examples are shown below:
130419
130551
3030618
3030619
3030620
3030621
3030622
3030623
3030624
3030625
3030626
3030627
3030628
3030629
3030630
3030631
3030632
2062054
2850204
2850205
2850207
2850208
2850271
2850273
2850274
2850277
2850278
2850281
2850282
2850284
2850285
2850286
2850288
2850289
2850292
2850293
2850296
2850297
2850299
2850300
2850301
2850365
2850366
2850369
2850370
2850372
2850373
2850374
2850376
2850377
2850380
…
but after a period of time, only gfn = 240 was printed, constantly. And some processes failed to restore, so the performance could not be measured.
Thanks,
Zhang Haoyu
>Many thanks!
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-08 11:31 [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled Zhanghaoyu (A)
@ 2013-08-08 12:29 ` Paolo Bonzini
0 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2013-08-08 12:29 UTC (permalink / raw)
To: Zhanghaoyu (A)
Cc: xiaoguangrong@linux.vnet.ibm.com, Marcelo Tosatti,
Huangweidong (C), Xiao Guangrong, Gleb Natapov,
Michael S. Tsirkin, paolo.bonzini@gmail.com, Xiejunyong,
Luonengjun, qemu-devel, Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
KVM, Bruce Rogers, Hanweidong, Andreas Färber
On 08/08/2013 01:31 PM, Zhanghaoyu (A) wrote:
> And, rebuild the kvm-kmod, then re-insmod kvm-intel.ko with ept=0,
>
> I virsh-save the VM, and then virsh-restore the VM, the performance
> degradation disappeared, and no GFN printed.
>
> But, I also made a test on the first bad
> commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4), and applied above
> patch too,
>
> With EPT disabled, as soon as the restoring completed, the GFNs’
> flooding is starting, take some examples to show as below,
>
> but, after a period of time, only gfn = 240 printed constantly. And
> some processes restore failed, so the performance cannot be measured.
This is 0xF0000, which is the BIOS. It makes some sense, though I'm not
sure why a running OS would poke there.
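
[For reference (my arithmetic, not part of the original mail): a gfn maps to a guest physical address by a shift of PAGE_SHIFT, exactly as the (u64)gfn << PAGE_SHIFT in the traced loop does. A quick standalone check for the constantly printed gfn = 240 and for the gfn = 16 seen at normal boot, assuming 4 KiB pages:]

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12   /* 4 KiB pages on x86 */

int main(void)
{
    uint64_t gfns[] = { 240, 16 };
    for (unsigned i = 0; i < sizeof(gfns) / sizeof(gfns[0]); i++)
        printf("gfn = %llu -> guest physical address 0x%llx\n",
               (unsigned long long)gfns[i],
               (unsigned long long)(gfns[i] << PAGE_SHIFT));
    return 0;   /* prints 0xf0000 (the BIOS region) and 0x10000 */
}
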
It can be a useful hint, but please do as Gleb said. Try your kernel
(+kvm-kmod) with current QEMU, current kernel with your QEMU, current
kernel with current QEMU. Make a table of which kernel/QEMU combos work
and which do not. Then we can try to understand if the bug still exists
and perhaps get hints about how your vendor could backport the fix.
Paolo
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 5:52 ` Gleb Natapov
@ 2013-08-14 9:05 ` Zhanghaoyu (A)
2013-08-20 13:33 ` Zhanghaoyu (A)
2013-08-31 7:45 ` Zhanghaoyu (A)
2 siblings, 0 replies; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-14 9:05 UTC (permalink / raw)
To: Gleb Natapov
Cc: Marcelo Tosatti, Huangweidong (C), KVM, Michael S. Tsirkin,
paolo.bonzini@gmail.com, Xiejunyong, Luonengjun, qemu-devel,
Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Bruce Rogers, Hanweidong,
Andreas Färber
>> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
>> >>> QEMU_AUDIO_DRV=none
>> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu
>> >>> qemu32 -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
>> >>> 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>> >>> -chardev
>> >>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,
>> >>> n owait -mon chardev=charmonitor,id=monitor,mode=control -rtc
>> >>> base=localtime -no-shutdown -device
>> >>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>> >>> file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cac
>> >>> h
>> >>> e=none -device
>> >>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,i
>> >>> d
>> >>> =virtio-disk0,bootindex=1 -netdev
>> >>> tap,fd=20,id=hostnet0,vhost=on,vhostfd=21 -device
>> >>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x3,bootindex=2 -netdev
>> >>> tap,fd=22,id=hostnet1,vhost=on,vhostfd=23 -device
>> >>> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x4 -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25 -device
>> >>> virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x5 -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27 -device
>> >>> virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x6 -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29 -device
>> >>> virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x7 -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31 -device
>> >>> virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.
>> >>> 0
>> >>> ,addr=0x9 -chardev pty,id=charserial0 -device
>> >>> isa-serial,chardev=charserial0,id=serial0 -vnc *:0 -k en-us -vga
>> >>> cirrus -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb
>> >>> -watchdog-action poweroff -device
>> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>> >>>
>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>> >>
>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem exists, including the performance degradation and the readonly GFNs' flooding.
>> >I also tried with e1000 NICs instead of virtio (QEMU 1.5.2); the same problems occurred, including the performance degradation and the readonly GFNs' flooding.
>> >With either e1000 or virtio NICs, the GFN flooding is initiated at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding starts.
>> >
>> >Thanks,
>> >Zhang Haoyu
>> >
>> >>--
>> >> Gleb.
>>
>> Should we focus on the first bad commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>>
>Not really. There is no point in debugging a very old version compiled
>with kvm-kmod; there are too many variables in the environment. I cannot
>reproduce the GFN flooding on upstream, so the problem may be gone, may
>be a result of a kvm-kmod problem, or something different in how I invoke
>qemu. So the best way to proceed is for you to reproduce with the upstream
>version; then at least I will be sure that we are using the same code.
>
Thanks, I will test the combos of the upstream kvm kernel and upstream QEMU.
Also, the guest OS version I gave above was wrong; the guest currently running is SLES10SP4.
Thanks,
Zhang Haoyu
>> I applied the patch below to __direct_map(),
>> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>> int pt_write = 0;
>> gfn_t pseudo_gfn;
>>
>> + map_writable = true;
>> +
>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> if (iterator.level == level) {
>> unsigned pte_access = ACC_ALL;
>> and rebuilt kvm-kmod, then reloaded it with insmod.
>> After I started a VM, the host seemed abnormal: many programs could not be started successfully, and segmentation faults were reported.
>> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
>> Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
>>
>If hva_to_pfn() returns map_writable == false, it means that the page is
>mapped read-only on the primary MMU, so it should not be mapped writable
>on the secondary MMU either. This should not usually happen.
>
>--
> Gleb.
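Restating the invariant Gleb describes as a minimal sketch (the flag names, bit layout and helper below are hypothetical, not the actual KVM code):

#include <stdbool.h>
#include <stdint.h>

#define SPTE_PRESENT  (1ULL << 0)   /* hypothetical bit layout */
#define SPTE_WRITABLE (1ULL << 1)

/* host_writable stands for the map_writable value from hva_to_pfn(). */
static uint64_t make_spte(uint64_t pfn, bool host_writable)
{
    uint64_t spte = (pfn << 12) | SPTE_PRESENT;

    if (host_writable)
        spte |= SPTE_WRITABLE;   /* mirror the primary MMU's decision */
    /*
     * Forcing host_writable to true here would let the guest write to a
     * page the host still maps read-only (for example a CoW or KSM-shared
     * page), corrupting memory shared with other host processes, which
     * would be consistent with the host-side segmentation faults reported
     * above.
     */
    return spte;
}

Under that reading, the map_writable = true hack grants the guest write access to pages the primary MMU still protects, so the host breakage is expected rather than surprising.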
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 5:52 ` Gleb Natapov
2013-08-14 9:05 ` Zhanghaoyu (A)
@ 2013-08-20 13:33 ` Zhanghaoyu (A)
2013-08-31 7:45 ` Zhanghaoyu (A)
2 siblings, 0 replies; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-20 13:33 UTC (permalink / raw)
To: Zhanghaoyu (A), Gleb Natapov, Eric Blake, pl@kamp.de,
Paolo Bonzini
Cc: Marcelo Tosatti, Huangweidong (C), KVM, Michael S. Tsirkin,
paolo.bonzini@gmail.com, Xiejunyong, Luonengjun, qemu-devel,
Xiahai, Zanghongyong, Xin Rong Fu, Yi Li,
xiaoguangrong@linux.vnet.ibm.com, Bruce Rogers, Hanweidong,
Andreas Färber
>>> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>>> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
>>> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32
>>> >>> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1
>>> >>> -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>>> >>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait
>>> >>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown
>>> >>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>>> >>> -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
>>> >>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>> >>> -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21
>>> >>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2
>>> >>> -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23
>>> >>> -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4
>>> >>> -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25
>>> >>> -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5
>>> >>> -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27
>>> >>> -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6
>>> >>> -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29
>>> >>> -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7
>>> >>> -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31
>>> >>> -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9
>>> >>> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
>>> >>> -vnc *:0 -k en-us -vga cirrus
>>> >>> -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff
>>> >>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>> >>>
>>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>>> >>
>>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem exists, including the performance degradation and the readonly GFNs' flooding.
>>> >I also tried with e1000 NICs instead of virtio (QEMU 1.5.2); the same problems occurred, including the performance degradation and the readonly GFNs' flooding.
>>> >With either e1000 or virtio NICs, the GFN flooding is initiated at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding starts.
>>> >
>>> >Thanks,
>>> >Zhang Haoyu
>>> >
>>> >>--
>>> >> Gleb.
>>>
>>> Should we focus on the first bad commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>>>
>>Not really. There is no point in debugging a very old version compiled
>>with kvm-kmod; there are too many variables in the environment. I cannot
>>reproduce the GFN flooding on upstream, so the problem may be gone, may
>>be a result of a kvm-kmod problem, or something different in how I invoke
>>qemu. So the best way to proceed is for you to reproduce with the upstream
>>version; then at least I will be sure that we are using the same code.
>>
>Thanks, I will test the combos of the upstream kvm kernel and upstream QEMU.
>Also, the guest OS version I gave above was wrong; the guest currently running is SLES10SP4.
>
I tested the following combinations of QEMU and kernel:
+-----------------+-----------------+-----------------+
| kvm kernel | QEMU | test result |
+-----------------+-----------------+-----------------+
| kvm-3.11-2 | qemu-1.5.2 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.0.0 | BAD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.4.0 | BAD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.4.2 | BAD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.0-rc0 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.0 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.1 | GOOD |
+-----------------+-----------------+-----------------+
| SLES11SP2 | qemu-1.5.2 | GOOD |
+-----------------+-----------------+-----------------+
NOTE:
1. kvm-3.11-2 above is the whole tagged kernel downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git
2. SLES11SP2's kernel version is 3.0.13-0.27
Then I ran git bisect on the QEMU changes between qemu-1.4.2 and qemu-1.5.0-rc0, marking the good versions as bad and the bad versions as good,
so the "first bad commit" reported by the bisection is exactly the patch that fixes the degradation problem.
+------------+-------------------------------------------+-----------------+-----------------+
| bisect No. | commit | save-restore | migration |
+------------+-------------------------------------------+-----------------+-----------------+
| 1 | 03e94e39ce5259efdbdeefa1f249ddb499d57321 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 2 | 99835e00849369bab726a4dc4ceed1f6f9ed967c | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
| 3 | 62e1aeaee4d0450222a0ea43c713b59526e3e0fe | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 4 | 9d9801cf803cdceaa4845fe27150b24d5ab083e6 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 5 | d76bb73549fcac07524aea5135280ea533a94fd6 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 6 | d913829f0fd8451abcb1fd9d6dfce5586d9d7e10 | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
| 7 | d2f38a0acb0a1c5b7ab7621a32d603d08d513bea | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 8 | e344b8a16de429ada3d9126f26e2a96d71348356 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 9 | 56ded708ec38e4cb75a7c7357480ca34c0dc6875 | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 10 | 78d07ae7ac74bcc7f79aeefbaff17fb142f44b4d | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 11 | 70c8652bf3c1fea79b7b68864e86926715c49261 | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
| 12 | f1c72795af573b24a7da5eb52375c9aba8a37972 | GOOD | GOOD |
+------------+-------------------------------------------+-----------------+-----------------+
NOTE: the above tests were made on SLES11SP2.
So commit f1c72795af573b24a7da5eb52375c9aba8a37972 is exactly the patch that fixes the degradation.
Then I replaced SLES11SP2's default kvm-kmod with kvm-kmod-3.6 and applied the patch below to __direct_map(),
@@ -2599,6 +2599,9 @@ static int __direct_map(struct kvm_vcpu
int emulate = 0;
gfn_t pseudo_gfn;
+ if (!map_writable)
+ printk(KERN_ERR "%s: %s: gfn = %llu \n", __FILE__, __func__, gfn);
+
for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
if (iterator.level == level) {
unsigned pte_access = ACC_ALL;
and then rebuilt kvm-kmod, reloaded it with insmod, and tested the adjacent commits again; the results are shown below:
+------------+-------------------------------------------+-----------------+-----------------+
| bisect No. | commit | save-restore | migration |
+------------+-------------------------------------------+-----------------+-----------------+
| 10 | 78d07ae7ac74bcc7f79aeefbaff17fb142f44b4d | BAD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
| 12 | f1c72795af573b24a7da5eb52375c9aba8a37972 | GOOD | BAD |
+------------+-------------------------------------------+-----------------+-----------------+
While testing commit 78d07ae7ac74bcc7f79aeefbaff17fb142f44b4d, as soon as the restoration/migration completes, the GFN flooding starts;
some examples are shown below:
2073462
2857203
2073463
2073464
2073465
3218751
2073466
2857206
2857207
2073467
2073468
2857210
2857211
3218752
2857214
2857215
3218753
2857217
2857218
2857221
2857222
3218754
2857225
2857226
3218755
2857229
2857230
2857232
2857233
3218756
2780393
2780394
2857236
2780395
2857237
2780396
2780397
2780398
2780399
2780400
2780401
3218757
2857240
2857241
2857244
3218758
2857247
2857248
2857251
2857252
3218759
2857255
2857256
3218760
2857289
2857290
2857293
2857294
3218761
2857297
2857298
3218762
3218763
3218764
3218765
3218766
3218767
3218768
3218769
3218770
3218771
3218772
but after a period of time, the flooding rate slowed down.
While testing commit f1c72795af573b24a7da5eb52375c9aba8a37972, no GFN was printed after restoration and there was no performance degradation,
but as soon as live migration completed, the GFN flooding started and the performance degradation happened as well.
NOTE: the test results for commit f1c72795af573b24a7da5eb52375c9aba8a37972 seemed to be unstable; I will verify them again.
>Thanks,
>Zhang Haoyu
>
>>> I applied the patch below to __direct_map(),
>>> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>>> int pt_write = 0;
>>> gfn_t pseudo_gfn;
>>>
>>> + map_writable = true;
>>> +
>>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>>> if (iterator.level == level) {
>>> unsigned pte_access = ACC_ALL;
>>> and rebuilt kvm-kmod, then reloaded it with insmod.
>>> After I started a VM, the host seemed abnormal: many programs could not be started successfully, and segmentation faults were reported.
>>> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
>>> Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
>>>
>>If hva_to_pfn() returns map_writable == false, it means that the page is
>>mapped read-only on the primary MMU, so it should not be mapped writable
>>on the secondary MMU either. This should not usually happen.
>>
>>--
>> Gleb.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled
2013-08-07 5:52 ` Gleb Natapov
2013-08-14 9:05 ` Zhanghaoyu (A)
2013-08-20 13:33 ` Zhanghaoyu (A)
@ 2013-08-31 7:45 ` Zhanghaoyu (A)
2 siblings, 0 replies; 15+ messages in thread
From: Zhanghaoyu (A) @ 2013-08-31 7:45 UTC (permalink / raw)
To: Gleb Natapov, pl@kamp.de, Eric Blake, quintela@redhat.com,
Paolo Bonzini, Andreas Färber,
xiaoguangrong@linux.vnet.ibm.com
Cc: Marcelo Tosatti, Huangweidong (C), KVM, Michael S. Tsirkin,
Xiejunyong, Luonengjun, qemu-devel, Xiahai, Zanghongyong,
Xin Rong Fu, Yi Li, Bruce Rogers, Hanweidong
I tested the following combinations of QEMU and kernel:
+------------------------+-----------------+-------------+
| kernel | QEMU | migration |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.6.0 | GOOD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.6.0* | BAD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.5.1 | BAD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6*| qemu-1.5.1 | GOOD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.5.1* | GOOD |
+------------------------+-----------------+-------------+
| SLES11SP2+kvm-kmod-3.6 | qemu-1.5.2 | BAD |
+------------------------+-----------------+-------------+
| kvm-3.11-2 | qemu-1.5.1 | BAD |
+------------------------+-----------------+-------------+
NOTE:
1. kvm-3.11-2 : the whole tagged kernel downloaded from https://git.kernel.org/pub/scm/virt/kvm/kvm.git
2. SLES11SP2+kvm-kmod-3.6 : our release kernel, i.e. SLES11SP2 (kernel version 3.0.13-0.27) with its default kvm-kmod replaced by kvm-kmod-3.6
3. qemu-1.6.0* : qemu-1.6.0 with commit 211ea74022f51164a7729030b28eec90b6c99a08 reverted
4. kvm-kmod-3.6* : kvm-kmod-3.6 with EPT disabled
5. qemu-1.5.1* : qemu-1.5.1 with the patch below applied, deleting the qemu_madvise() statement from the ram_load() function
--- qemu-1.5.1/arch_init.c 2013-06-27 05:47:29.000000000 +0800
+++ qemu-1.5.1_fix3/arch_init.c 2013-08-28 19:43:42.000000000 +0800
@@ -842,7 +842,6 @@ static int ram_load(QEMUFile *f, void *o
if (ch == 0 &&
(!kvm_enabled() || kvm_has_sync_mmu()) &&
getpagesize() <= TARGET_PAGE_SIZE) {
- qemu_madvise(host, TARGET_PAGE_SIZE, QEMU_MADV_DONTNEED);
}
#endif
} else if (flags & RAM_SAVE_FLAG_PAGE) {
If I apply the above patch to qemu-1.5.1 to delete the qemu_madvise() statement, the test result for the combination of SLES11SP2+kvm-kmod-3.6 and qemu-1.5.1 is good.
Why do we perform qemu_madvise(QEMU_MADV_DONTNEED) on those zero pages?
Does qemu_madvise() have a sustained effect on that range of virtual addresses? In other words, does qemu_madvise() have a sustained effect on VM performance?
If the range of virtual addresses that was advised with DONTNEED is later read and written frequently, could performance degradation happen?
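These questions can be checked with a small host-side experiment. Below is a minimal sketch using plain madvise() rather than QEMU's qemu_madvise() wrapper (the 64 MiB mapping size is arbitrary). The advice is not sticky in itself: it simply discards the pages, so the next read or write of the advised range takes a fresh minor fault per page (and, for a KVM guest, presumably a corresponding exit to repopulate the mapping) until the working set is faulted back in.

#define _DEFAULT_SOURCE 1   /* for MAP_ANONYMOUS and madvise() */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    const size_t len = 64UL << 20;                /* 64 MiB, arbitrary */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memset(buf, 1, len);                          /* fault the pages in */
    long before = minor_faults();

    madvise(buf, len, MADV_DONTNEED);             /* drop the pages */
    memset(buf, 1, len);                          /* touch them again */

    printf("minor faults from re-touching a DONTNEED range: %ld\n",
           minor_faults() - before);
    return 0;
}

If that is right, the qemu_madvise(QEMU_MADV_DONTNEED) on restored zero pages is cheap by itself, but it leaves the guest to re-fault every such page on first touch afterwards, which is one plausible source of a lasting slowdown.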
The reason the combination of SLES11SP2+kvm-kmod-3.6 and qemu-1.6.0 is good is commit 211ea74022f51164a7729030b28eec90b6c99a08:
if I revert commit 211ea74022f51164a7729030b28eec90b6c99a08 on qemu-1.6.0, the test result for the combination of SLES11SP2+kvm-kmod-3.6 and qemu-1.6.0 is bad, and the performance degradation happens as well.
Thanks,
Zhang Haoyu
>> >>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> >>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
>> >>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32
>> >>> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1
>> >>> -uuid 0505ec91-382d-800e-2c79-e5b286eb60b5 -no-user-config -nodefaults
>> >>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ATS1.monitor,server,nowait
>> >>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown
>> >>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>> >>> -drive file=/opt/ne/vm/ATS1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
>> >>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> >>> -netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=21
>> >>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:00,bus=pci.0,addr=0x3,bootindex=2
>> >>> -netdev tap,fd=22,id=hostnet1,vhost=on,vhostfd=23
>> >>> -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:00,bus=pci.0,addr=0x4
>> >>> -netdev tap,fd=24,id=hostnet2,vhost=on,vhostfd=25
>> >>> -device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:00,bus=pci.0,addr=0x5
>> >>> -netdev tap,fd=26,id=hostnet3,vhost=on,vhostfd=27
>> >>> -device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:00,bus=pci.0,addr=0x6
>> >>> -netdev tap,fd=28,id=hostnet4,vhost=on,vhostfd=29
>> >>> -device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:00,bus=pci.0,addr=0x7
>> >>> -netdev tap,fd=30,id=hostnet5,vhost=on,vhostfd=31
>> >>> -device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:00,bus=pci.0,addr=0x9
>> >>> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
>> >>> -vnc *:0 -k en-us -vga cirrus
>> >>> -device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff
>> >>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>> >>>
>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>> >>
>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem exists, including the performance degradation and the readonly GFNs' flooding.
>> >I also tried with e1000 NICs instead of virtio (QEMU 1.5.2); the same problems occurred, including the performance degradation and the readonly GFNs' flooding.
>> >With either e1000 or virtio NICs, the GFN flooding is initiated at the post-restore stage (i.e. the running stage): as soon as the restore completes, the flooding starts.
>> >
>> >Thanks,
>> >Zhang Haoyu
>> >
>> >>--
>> >> Gleb.
>>
>> Should we focus on the first bad commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding?
>>
>Not really. There is no point in debugging a very old version compiled with kvm-kmod; there are too many variables in the environment. I cannot reproduce the GFN flooding on upstream, so the problem may be gone, may be a result of a kvm-kmod problem, or something different in how I invoke qemu. So the best way to proceed is for you to reproduce with the upstream version; then at least I will be sure that we are using the same code.
>
>> I applied the patch below to __direct_map(),
>> @@ -2223,6 +2223,8 @@ static int __direct_map(struct kvm_vcpu
>> int pt_write = 0;
>> gfn_t pseudo_gfn;
>>
>> + map_writable = true;
>> +
>> for_each_shadow_entry(vcpu, (u64)gfn << PAGE_SHIFT, iterator) {
>> if (iterator.level == level) {
>> unsigned pte_access = ACC_ALL;
>> and rebuilt kvm-kmod, then reloaded it with insmod.
>> After I started a VM, the host seemed abnormal: many programs could not be started successfully, and segmentation faults were reported.
>> In my opinion, with the above patch applied, commit 612819c3c6e67bac8fceaa7cc402f13b1b63f7e4 should have no effect, but the test result proved me wrong.
>> Does the way the map_writable value is obtained in hva_to_pfn() have an effect on the result?
>>
>If hva_to_pfn() returns map_writable == false, it means that the page is mapped read-only on the primary MMU, so it should not be mapped writable on the secondary MMU either. This should not usually happen.
>
>--
> Gleb.
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2013-08-31 7:45 UTC | newest]
Thread overview: 15+ messages
-- links below jump to the message on this page --
2013-08-08 11:31 [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled Zhanghaoyu (A)
2013-08-08 12:29 ` Paolo Bonzini
-- strict thread matches above, loose matches on Subject: below --
2013-07-11 9:36 [Qemu-devel] vm performance degradation after kvm live migration or save-restore with ETP enabled Zhanghaoyu (A)
2013-07-11 18:20 ` Bruce Rogers
2013-07-27 7:47 ` Zhanghaoyu (A)
2013-07-29 23:47 ` Marcelo Tosatti
2013-07-30 9:04 ` Zhanghaoyu (A)
2013-08-01 6:16 ` Gleb Natapov
2013-08-05 8:35 ` [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled Zhanghaoyu (A)
2013-08-05 8:43 ` Gleb Natapov
2013-08-05 9:09 ` Zhanghaoyu (A)
2013-08-05 9:15 ` Andreas Färber
2013-08-05 9:22 ` Zhanghaoyu (A)
2013-08-05 9:37 ` Gleb Natapov
2013-08-06 10:47 ` Zhanghaoyu (A)
2013-08-07 1:34 ` Zhanghaoyu (A)
2013-08-07 5:52 ` Gleb Natapov
2013-08-14 9:05 ` Zhanghaoyu (A)
2013-08-20 13:33 ` Zhanghaoyu (A)
2013-08-31 7:45 ` Zhanghaoyu (A)
2013-08-05 18:27 ` Xiao Guangrong