qemu-devel.nongnu.org archive mirror
* [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled
@ 2013-03-26 15:10 张磊强
  2013-03-27 13:32 ` Stefan Hajnoczi
  2013-03-27 15:41 ` Stefan Hajnoczi
  0 siblings, 2 replies; 23+ messages in thread
From: 张磊强 @ 2013-03-26 15:10 UTC (permalink / raw)
  To: stefanha, pbonzini; +Cc: qemu-devel, 张磊强

[-- Attachment #1: Type: text/plain, Size: 930 bytes --]

Hi, Paolo && Stefan:

When I test the dataplane feature with qemu master, I find that
Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
enabled. A Fedora VM, however, starts normally.

The command I use to boot QEMU is:
x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
-device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk

I found that a similar bug was reported some days ago:
http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
and that a patch for it was already committed by Paolo on Mar 13:
http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html

But it does not work in my environment. Could you give me some advice
on debugging this problem? I can provide more information if needed.

-- 
Best Regards,

Leiqiang Zhang

[-- Attachment #2: Type: text/html, Size: 1288 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled
  2013-03-26 15:10 [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled 张磊强
@ 2013-03-27 13:32 ` Stefan Hajnoczi
  2013-03-27 14:02   ` Stefan Hajnoczi
  2013-03-27 15:41 ` Stefan Hajnoczi
  1 sibling, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-03-27 13:32 UTC (permalink / raw)
  To: 张磊强; +Cc: pbonzini, qemu-devel

On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> Hi, Paolo && Stefan:
> 
> When I test the dataplane feature with qemu master, I find that
> Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> enabled. A Fedora VM, however, starts normally.
> 
> The command I use to boot QEMU is:
> x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> 
> I found that a similar bug was reported some days ago:
> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> and that a patch for it was already committed by Paolo on Mar 13:
> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> 
> But it does not work in my environment. Could you give me some advice
> on debugging this problem? I can provide more information if needed.

Hi,
I haven't gotten to the bottom of it yet but wanted to let you know that
I'm seeing a hang at the boot screen after the installer reboots.  The
Windows logo animation runs but the guest seems unable to make progress.

Will let you know when there is a fix.  The guest I'm testing is Windows
7 Professional 64-bit.

As a workaround you can set x-data-plane=off to boot the guest for the
first time.  Subsequent boots will succeed with x-data-plane=on.
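
For example, reusing the command line from your report with only the
x-data-plane property flipped for the first boot:

    x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive \
        file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native \
        -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=off,drive=drive-virtio-disk,id=virtio-disk

Once the guest boots cleanly you can switch back to x-data-plane=on.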

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled
  2013-03-27 13:32 ` Stefan Hajnoczi
@ 2013-03-27 14:02   ` Stefan Hajnoczi
  2013-03-27 15:15     ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-03-27 14:02 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: pbonzini, qemu-devel, 张磊强

On Wed, Mar 27, 2013 at 02:32:19PM +0100, Stefan Hajnoczi wrote:
> On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> > Hi, Paolo && Stefan:
> > 
> > When I test the dataplane feature with qemu master, I find that
> > Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> > enabled. A Fedora VM, however, starts normally.
> > 
> > The command I use to boot QEMU is:
> > x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> > file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> > -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> > 
> > I found that a similar bug was reported some days ago:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > and that a patch for it was already committed by Paolo on Mar 13:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > 
> > But it does not work in my environment. Could you give me some advice
> > on debugging this problem? I can provide more information if needed.
> 
> Hi,
> I haven't gotten to the bottom of it yet but wanted to let you know that
> I'm seeing a hang at the boot screen after the installer reboots.  The
> Windows logo animation runs but the guest seems unable to make progress.
> 
> Will let you know when there is a fix.  The guest I'm testing is Windows
> 7 Professional 64-bit.
> 
> As a workaround you can set x-data-plane=off to boot the guest for the
> first time.  Subsequent boots will succeed with x-data-plane=on.

Next data point: it's not caused by Paolo's AioContext conversion.
This means it's unrelated to the recent bug that you mentioned.

The boot gets stuck immediately after the switch from the BIOS
virtio-blk driver to the Windows viostor driver.  We get two requests
(VIRTIO_BLK_T_GET_ID and VIRTIO_BLK_T_READ) and then the guest stops.
The descriptor index of the second request has been incremented; perhaps
the guest is not receiving host->guest notifies.
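
For reference, those request types come from the standard virtio-blk
request header (the read request is VIRTIO_BLK_T_IN in
<linux/virtio_blk.h>):

    #define VIRTIO_BLK_T_IN      0   /* read */
    #define VIRTIO_BLK_T_OUT     1   /* write */
    #define VIRTIO_BLK_T_GET_ID  8   /* return the disk serial string */

    /* Every virtio-blk request starts with this header */
    struct virtio_blk_outhdr {
            __u32 type;     /* VIRTIO_BLK_T_* request type */
            __u32 ioprio;   /* I/O priority */
            __u64 sector;   /* starting sector for reads/writes */
    };

The GET_ID is presumably the viostor driver reading the disk serial, and
the read is its first real I/O.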

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled
  2013-03-27 14:02   ` Stefan Hajnoczi
@ 2013-03-27 15:15     ` Stefan Hajnoczi
  0 siblings, 0 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-03-27 15:15 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: pbonzini, Michael S. Tsirkin, qemu-devel,
	张磊强

On Wed, Mar 27, 2013 at 03:02:01PM +0100, Stefan Hajnoczi wrote:
> On Wed, Mar 27, 2013 at 02:32:19PM +0100, Stefan Hajnoczi wrote:
> > On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> > > Hi, Paolo && Stefan:
> > > 
> > > When I test the dataplane feature with qemu master, I find that
> > > Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> > > enabled. A Fedora VM, however, starts normally.
> > > 
> > > The command I use to boot QEMU is:
> > > x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> > > file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> > > -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> > > 
> > > I found that a similar bug was reported some days ago:
> > > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > > and that a patch for it was already committed by Paolo on Mar 13:
> > > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > > 
> > > But it does not work in my environment. Could you give me some advice
> > > on debugging this problem? I can provide more information if needed.
> > 
> > Hi,
> > I haven't gotten to the bottom of it yet but wanted to let you know that
> > I'm seeing a hang at the boot screen after the installer reboots.  The
> > Windows logo animation runs but the guest seems unable to make progress.
> > 
> > Will let you know when there is a fix.  The guest I'm testing is Windows
> > 7 Professional 64-bit.
> > 
> > As a workaround you can set x-data-plane=off to boot the guest for the
> > first time.  Subsequent boots will succeed with x-data-plane=on.
> 
> Next data point: it's not caused by Paolo's AioContext conversion.
> This means it's unrelated to the recent bug that you mentioned.
> 
> The boot gets stuck immediately after the switch from the BIOS
> virtio-blk driver to the Windows viostor driver.  We get two requests
> (VIRTIO_BLK_T_GET_ID and VIRTIO_BLK_T_READ) and then the guest stops.
> The descriptor index of the second request has been incremented; perhaps
> the guest is not receiving host->guest notifies.

Okay, getting closer to the root cause now.  The guest driver has not
enabled MSI-X, so the guest notifier that dataplane uses is not hooked
up to anything.  When MSI-X is enabled the guest notifier is hooked up
to irqfd, and when MSI-X is disabled it is supposed to bounce back into
QEMU, which issues an ioctl for the interrupt.  We've gotten into a state
where the guest notifier exists but nothing is listening to it :).

Therefore the guest is stuck waiting for virtio-blk I/O requests to
complete.
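
Conceptually the missing piece is a fallback listener: with MSI-X off,
something in QEMU must still drain the guest notifier eventfd and inject
the legacy interrupt itself.  Roughly, the idea looks like this (a sketch
only -- the helper names below are illustrative, not the actual fix):

    /* Hypothetical fallback handler: when no irqfd is wired up, drain
     * the guest notifier eventfd and raise the interrupt from QEMU. */
    static void guest_notifier_fallback_read(EventNotifier *n)
    {
        VirtQueue *vq = virtqueue_from_guest_notifier(n); /* illustrative */

        if (event_notifier_test_and_clear(n)) {
            virtio_irq(vq);   /* set the ISR bit and assert legacy INTx */
        }
    }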

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled
  2013-03-26 15:10 [Qemu-devel] dataplane bug: failure to start Windows VM with dataplane enabled 张磊强
  2013-03-27 13:32 ` Stefan Hajnoczi
@ 2013-03-27 15:41 ` Stefan Hajnoczi
  2013-03-28  4:03   ` [Qemu-devel] Re: " leiqzhang
  1 sibling, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-03-27 15:41 UTC (permalink / raw)
  To: 张磊强; +Cc: pbonzini, qemu-devel, stefanha

On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> Hi, Paolo && Stefan:
> 
> When I test the dataplane feature with qemu master, I find that
> Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> enabled. A Fedora VM, however, starts normally.
> 
> The command I use to boot QEMU is:
> x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> 
> I found that a similar bug was reported some days ago:
> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> and that a patch for it was already committed by Paolo on Mar 13:
> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> 
> But it does not work in my environment. Could you give me some advice
> on debugging this problem? I can provide more information if needed.

I sent a fix and CCed you on the patch.  Please test it if you have
time.

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] Re: dataplane bug: failure to start Windows VM with dataplane enabled
  2013-03-27 15:41 ` Stefan Hajnoczi
@ 2013-03-28  4:03   ` leiqzhang
  2013-04-01 13:34     ` [Qemu-devel] question about performance of dataplane Zhangleiqiang
  0 siblings, 1 reply; 23+ messages in thread
From: leiqzhang @ 2013-03-28  4:03 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: pbonzini, qemu-devel, stefanha, 张 磊强

[-- Attachment #1: Type: text/plain, Size: 1487 bytes --]

Hi, Stefan:

    Thanks for your reply and patch. I have tested Windows 7, Windows 2003, and Fedora.

     The patch fixes the problem, thanks.

--  
leiqzhang
Sent by  Sparrow (http://www.sparrowmailapp.com/?sig)


On Wednesday, March 27, 2013, at 23:41, Stefan Hajnoczi wrote:

> On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> > Hi, Paolo && Stefan:
> > 
> > When I test the dataplane feature with qemu master, I find that
> > Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> > enabled. A Fedora VM, however, starts normally.
> > 
> > The command I use to boot QEMU is:
> > x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> > file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> > -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> > 
> > I found that a similar bug was reported some days ago:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > and that a patch for it was already committed by Paolo on Mar 13:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > 
> > But it does not work in my environment. Could you give me some advice
> > on debugging this problem? I can provide more information if needed.
> 
> I sent a fix and CCed you on the patch. Please test it if you have
> time.
> 
> Stefan


[-- Attachment #2: Type: text/html, Size: 2755 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] question about performance of dataplane
  2013-03-28  4:03   ` [Qemu-devel] Re: " leiqzhang
@ 2013-04-01 13:34     ` Zhangleiqiang
  2013-04-02  2:02       ` [Qemu-devel] Re: " Zhangleiqiang
  2013-04-07 14:05       ` [Qemu-devel] " Anthony Liguori
  0 siblings, 2 replies; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-01 13:34 UTC (permalink / raw)
  To: Stefan Hajnoczi, stefanha@redhat.com
  Cc: Zhangleiqiang, Luohao (brian), qemu-devel@nongnu.org, Haofeng,
	leiqzhang

Hi, Stefan:

	I have done some testing to compare the performance of dataplane and non-dataplane.  The result did not meet my expectations: the disk with dataplane enabled showed no advantage over non-dataplane.

	Below are the environment info and testing results.  Is something wrong with my testing method or testing environment?  Could you give me some advice?  I can provide more information if needed.


1. Environment:
	a). Qemu 1.4 master branch
	b). kernel:  3.5.0-2.fc17.x86_64
	c). virtual disks location:  the same local SATA harddisk with ext4 fs
	d). VM start cmd (os: win7/qed, disk1: raw/non-dataplane/10G/NTFS, disk2: raw/dataplane/10G/NTFS):

./x86_64-softmmu/qemu-system-x86_64  -enable-kvm -name win7 -M pc-0.15 -m 1024 -boot c -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -monitor stdio -drive file=/home/win7.qed,if=none,format=qed,cache=none,id=drive0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charchannel3 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name -usb -device usb-tablet,id=input0 -spice port=3007,addr=****,disable-ticketing -vga qxl -global qxl.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 -drive id=drive1,if=none,cache=none,format=raw,file=/home/data.img -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7 -drive id=drive2,if=none,cache=none,format=raw,file=/home/data2.img,aio=native -device virtio-blk-pci,drive=drive2,bus=pci.0,addr=0x8,scsi=off,x-data-plane=on,config-wce=off


2. Testing Tool And Testing Params:
	a). Only the IOMeter app running in the VM, and no other I/O on the host (Dom0)
	b). 100% random I/O, data size (16K/32K/4K), RW ratio (0%/25%/75% read)
	c). The two disks tested separately, not simultaneously

3. Testing Results:

datasize/RW_Ratio   IOPS_dataplane   IOPS_non_dataplane   MBPS_dataplane   MBPS_non_dataplane
16K/0%              294.094948       293.609606           4.595234         4.58765
16K/25%             283.096745       281.649258           4.423387         4.40077
16K/75%             316.039801       309.585336           4.938122         4.837271
32K/0%              257.529537       258.806128           8.047798         8.087692
32K/25%             253.729281       253.756673           7.92904          7.929896
32K/75%             292.384568       280.991434           9.137018         8.780982
4K/0%               321.599352       324.116063           1.256247         1.266078
4K/25%              309.906635       309.294278           1.210573         1.208181
4K/75%              350.168882       350.772329           1.367847         1.370204



----------
Leiqzhang

Best Regards


> From: qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org [mailto:qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org] On behalf of leiqzhang
> Sent: March 28, 2013 12:03
> To: Stefan Hajnoczi
> Cc: pbonzini@redhat.com; qemu-devel@nongnu.org; stefanha@redhat.com; 张磊强
> Subject: [Qemu-devel] Re: dataplane bug: failure to start Windows VM with dataplane enabled
> 
> Hi, Stefan:
> 
>     Thanks for your reply and patch. I have tested Windows 7, Windows 2003, and Fedora.
> 
>      The patch fixes the problem, thanks.
> 
> -- 
> leiqzhang
> Sent by  Sparrow
> 
> On Wednesday, March 27, 2013, at 23:41, Stefan Hajnoczi wrote:
> On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> Hi, Paolo && Stefan:
> 
> When I test the dataplane feature with qemu master, I find that
> Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> enabled. A Fedora VM, however, starts normally.
> 
> The command I use to boot QEMU is:
> x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> 
> I found that a similar bug was reported some days ago:
> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> and that a patch for it was already committed by Paolo on Mar 13:
> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> 
> But it does not work in my environment. Could you give me some advice
> on debugging this problem? I can provide more information if needed.
> 
> I sent a fix and CCed you on the patch. Please test it if you have
> time.
> 
> Stefan
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] Re: question about performance of dataplane
  2013-04-01 13:34     ` [Qemu-devel] question about performance of dataplane Zhangleiqiang
@ 2013-04-02  2:02       ` Zhangleiqiang
  2013-04-04  7:20         ` Stefan Hajnoczi
  2013-04-07 14:05       ` [Qemu-devel] " Anthony Liguori
  1 sibling, 1 reply; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-02  2:02 UTC (permalink / raw)
  To: Stefan Hajnoczi, stefanha@redhat.com
  Cc: Zhangleiqiang, Luohao (brian), qemu-devel@nongnu.org, Haofeng,
	leiqzhang

Hi, Stefan:

	I have also finished perf testing under Fedora 17 using IOZone, and the results also showed that the disk with dataplane enabled has no advantage over non-dataplane.

1. Environment:
	a). Qemu 1.4 master branch
	b). kernel:  3.5.0-2.fc17.x86_64
	c). virtual disks location:  the same local SATA harddisk with ext4 fs
	d). VM start cmd (os: fedora17/raw, disk1: raw/non-dataplane/10G, disk2: raw/dataplane/10G):

./x86_64-softmmu/qemu-system-x86_64  -enable-kvm -name win7 -M pc-0.15 -m 1024 -boot c -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -monitor stdio -drive file=/home/fedora.img,if=none,format=raw,cache=none,id=drive0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charchannel3 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name -usb -device usb-tablet,id=input0 -spice port=3007,addr=****,disable-ticketing -vga qxl -global qxl.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 -drive id=drive1,if=none,cache=none,format=raw,file=/home/data.img -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7 -drive id=drive2,if=none,cache=none,format=raw,file=/home/data2.img,aio=native -device virtio-blk-pci,drive=drive2,bus=pci.0,addr=0x8,scsi=off,x-data-plane=on,config-wce=off


2. Testing Tool And Testing Params:
	a). Only the IOZone app running in the VM, and no other I/O on the host (Dom0)
	b). The two disks tested separately, not simultaneously
	c). IOZone cmd:  ./iozone -a -n 5g -g 5g -i 0 -i 2 -f /dev/vdc1(/dev/vdb1)
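
(For reference, iozone's -i 0 selects the write/re-write test and -i 2
the random read/write test, which is why the reports below cover write,
re-write, random read and random write.)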

3. Testing Results:

	a). Writer Report:

	data_size   KBPS_dataplane   KBPS_non_dataplane
	64K         127751           129003
	128K        125585           108917
	256K        128572           114106
	512K        118301           116945
	1024K       122934           121302
	2048K       126224           123462
	4096K       127413           124632
	8192K       129049           125825
	16384K      131293           126310

	b). Re-writer Report:

	data_size   KBPS_dataplane   KBPS_non_dataplane
	64K         128363           125999
	128K        128326           125836
	256K        128299           125769
	512K        127641           125834
	1024K       128222           126132
	2048K       127979           125770
	4096K       128046           125736
	8192K       128104           125520
	16384K      128022           125739

	c). Random Read Report:

	data_size   KBPS_dataplane   KBPS_non_dataplane
	64K         8988             8700
	128K        16160            15805
	256K        27114            27022
	512K        45212            45659
	1024K       66262            68581
	2048K       91530            91457
	4096K       107785           107398
	8192K       117001           116474
	16384K      122036           121043

	d). Random Write Report:

	data_size   KBPS_dataplane   KBPS_non_dataplane
	64K         21344            21309
	128K        27215            26989
	256K        37587            37851
	512K        55336            54930
	1024K       73673            73146
	2048K       93614            92415
	4096K       107470           105759
	8192K       104951           104102
	16384K      116080           113490

> -----Original Message-----
> From: Zhangleiqiang
> Sent: April 1, 2013 21:35
> To: Stefan Hajnoczi; stefanha@redhat.com
> Cc: qemu-devel@nongnu.org; leiqzhang; Haofeng; Luohao (brian);
> Zhangleiqiang
> Subject: [Qemu-devel] question about performance of dataplane
> 
> Hi, Stefan:
> 
> 	I have done some testing to compare the performance of dataplane and
> non-dataplane.  The result did not meet my expectations: the disk with
> dataplane enabled showed no advantage over non-dataplane.
> 
> 	Below are the environment info and testing results.
> Is something wrong with my testing method or testing environment?
> Could you give me some advice?  I can provide more information if needed.
> 
> 
> 1. Environment:
> 	a). Qemu 1.4 master branch
> 	b). kernel:  3.5.0-2.fc17.x86_64
> 	c). virtual disks location:  the same local SATA harddisk with ext4 fs
> 	d). VM start cmd (os: win7/qed, disk1: raw/non-dataplane/10G/NTFS,
> disk2: raw/dataplane/10G/NTFS):
> 
> ./x86_64-softmmu/qemu-system-x86_64  -enable-kvm -name win7 -M pc-0.15 -m 1024 -boot c -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -monitor stdio -drive file=/home/win7.qed,if=none,format=qed,cache=none,id=drive0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charchannel3 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name -usb -device usb-tablet,id=input0 -spice port=3007,addr=****,disable-ticketing -vga qxl -global qxl.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 -drive id=drive1,if=none,cache=none,format=raw,file=/home/data.img -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7 -drive id=drive2,if=none,cache=none,format=raw,file=/home/data2.img,aio=native -device virtio-blk-pci,drive=drive2,bus=pci.0,addr=0x8,scsi=off,x-data-plane=on,config-wce=off
> 
> 
> 2. Testing Tool And Testing Params:
> 	a). Only the IOMeter app running in the VM, and no other I/O on the host (Dom0)
> 	b). 100% random I/O, data size (16K/32K/4K), RW ratio (0%/25%/75%
> read)
> 	c). The two disks tested separately, not simultaneously
> 
> 3. Testing Results:
> 
> datasize/RW_Ratio   IOPS_dataplane   IOPS_non_dataplane   MBPS_dataplane   MBPS_non_dataplane
> 16K/0%              294.094948       293.609606           4.595234         4.58765
> 16K/25%             283.096745       281.649258           4.423387         4.40077
> 16K/75%             316.039801       309.585336           4.938122         4.837271
> 32K/0%              257.529537       258.806128           8.047798         8.087692
> 32K/25%             253.729281       253.756673           7.92904          7.929896
> 32K/75%             292.384568       280.991434           9.137018         8.780982
> 4K/0%               321.599352       324.116063           1.256247         1.266078
> 4K/25%              309.906635       309.294278           1.210573         1.208181
> 4K/75%              350.168882       350.772329           1.367847         1.370204
> 
> 
> 
> ----------
> Leiqzhang
> 
> Best Regards
> 
> 
> > From: qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org
> [mailto:qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org] On behalf of
> leiqzhang
> > Sent: March 28, 2013 12:03
> > To: Stefan Hajnoczi
> > Cc: pbonzini@redhat.com; qemu-devel@nongnu.org;
> stefanha@redhat.com; 张磊强
> > Subject: [Qemu-devel] Re: dataplane bug: failure to start Windows VM with
> dataplane enabled
> >
> > Hi, Stefan:
> >
> >     Thank for your reply and patch. I have done the test for Windows 7,
> Windows 2003, and fedora.
> >
> >      The patch has fixed the problem now, thanks.
> >
> > --
> > leiqzhang
> > Sent by  Sparrow
> >
> > 在 2013年3月27日星期三,23:41,Stefan Hajnoczi 写道:
> > On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> > Hi, Paolo && Stefan:
> >
> > When I test the dataplane feature with qemu master, I find that
> > Windows (Windows 7 and Windows 2003) VM will hang if dataplane is
> > enabled. But if I try to start a Fedora VM, it can start normally.
> >
> > The command I boot QEMU is:
> > x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> > file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> > -device
> virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=
> virtio-disk
> >
> > I found the similar bug has reported some days ago:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > . And a patch for this bug has already committed by Paolo at Mar 13:
> > http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> > .
> >
> > But it cannot work under my environment. Could you give me some advise
> > to debug this problem ? I can provide more information if need.
> >
> > I sent a fix and CCed you on the patch. Please test it if you have
> > time.
> >
> > Stefan
> >

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: question about performance of dataplane
  2013-04-02  2:02       ` [Qemu-devel] Re: " Zhangleiqiang
@ 2013-04-04  7:20         ` Stefan Hajnoczi
  2013-04-07 11:31           ` [Qemu-devel] Re: " Zhangleiqiang
  0 siblings, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-04-04  7:20 UTC (permalink / raw)
  To: Zhangleiqiang
  Cc: Stefan Hajnoczi, Luohao (brian), qemu-devel@nongnu.org, Haofeng,
	leiqzhang

On Tue, Apr 02, 2013 at 02:02:54AM +0000, Zhangleiqiang wrote:
> 	I have also finished perf testing under Fedora 17 using IOZone, and the results also showed that the disk with dataplane enabled has no advantage over non-dataplane.

virtio-blk data plane is a win for parallel I/O workloads (that means
iodepth > 1).  The advantage becomes clearer with SMP guests.

In other words the big advantage is that data plane processes requests
without blocking the QEMU main loop or vCPU threads.

If your guest has 1 vCPU and/or your benchmarks only do a single stream
of I/O requests, then the difference may not be measurable.
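
For example, on a Linux guest a parallel workload can be generated with
fio along these lines (an illustrative invocation, not a tuned benchmark):

    fio --name=parallel-randwrite --filename=/dev/vdb --ioengine=libaio \
        --direct=1 --rw=randwrite --bs=16k --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based

With iodepth=1 and a single job you are mostly measuring the latency of
the disk, which looks the same with and without data plane.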

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] Re: Re: question about performance of dataplane
  2013-04-04  7:20         ` Stefan Hajnoczi
@ 2013-04-07 11:31           ` Zhangleiqiang
  2013-04-07 11:42             ` Abel Gordon
  0 siblings, 1 reply; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-07 11:31 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Zhangleiqiang, Stefan Hajnoczi, Luohao (brian),
	qemu-devel@nongnu.org, Haofeng

Hi, Stefan:

	Following your advice, I have finished benchmarks with multiple vcpus (SMP) and parallel I/O workloads.
	The results still show that the disk with dataplane enabled has no advantage over non-dataplane in random write mode. In sequential write mode, however, the former has an obvious advantage.

1. Environment:
	a). Qemu 1.4 master branch
	b). kernel:  3.5.0-2.fc17.x86_64
	c). virtual disks location:  the same local SATA harddisk with ext4 fs
	d). VM start cmd (os: win7/qed, disk1: raw/non-dataplane/10G/NTFS, disk2: raw/dataplane/10G/NTFS):
	e). vcpu: 4

./x86_64-softmmu/qemu-system-x86_64  -enable-kvm -smp 4 -name win7 -M pc-0.15 -m 1024 -boot c -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -monitor stdio -drive file=/home/win7.qed,if=none,format=qed,cache=none,id=drive0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charchannel3 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name -usb -device usb-tablet,id=input0 -spice port=3007,addr=186.100.8.121,disable-ticketing -vga qxl -global qxl.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 -drive id=drive1,if=none,cache=none,format=raw,file=/home/data.img -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7 -drive id=drive2,if=none,cache=none,format=raw,file=/home/data2.img,aio=native -device virtio-blk-pci,drive=drive2,bus=pci.0,addr=0x8,scsi=off,x-data-plane=on,config-wce=off

2. Testing Tool And Testing Params:
	a). Only the IOMeter app running in the VM, and no other I/O on the host (Dom0)
	b). 100% random, 0% read, 16K data size, 50 outstanding I/Os;  0% random, 25% read, 16K data size, 50 outstanding I/Os
	c). The two disks tested separately, not simultaneously

3. Testing Results:

	RW_mode                   IOPS_dataplane   IOPS_non_dataplane   MBPS_dataplane   MBPS_non_dataplane
	100% Random/0% Read       303.178867       300.511928           4.737170         4.695499
	100% Sequential/25% Read  21748.887189     7631.164060          339.826362       119.236938



----------
Leiqzhang

Best Regards

> -----Original Message-----
> From: Stefan Hajnoczi [mailto:stefanha@redhat.com]
> Sent: April 4, 2013 15:21
> To: Zhangleiqiang
> Cc: Stefan Hajnoczi; qemu-devel@nongnu.org; leiqzhang; Haofeng; Luohao
> (brian)
> Subject: Re: Re: [Qemu-devel] question about performance of dataplane
> 
> On Tue, Apr 02, 2013 at 02:02:54AM +0000, Zhangleiqiang wrote:
> > 	I have also finished perf testing under Fedora 17 using IOZone, and the
> results also showed that the disk with dataplane enabled has no
> advantage over non-dataplane.
> 
> virtio-blk data plane is a win for parallel I/O workloads (that means
> iodepth > 1).  The advantage becomes clearer with SMP guests.
> 
> In other words the big advantage is that data plane processes requests
> without blocking the QEMU main loop or vCPU threads.
> 
> If your guest has 1 vCPU and/or your benchmarks only do a single stream
> of I/O requests, then the difference may not be measurable.
> 
> Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: Re: question about performance of dataplane
  2013-04-07 11:31           ` [Qemu-devel] Re: " Zhangleiqiang
@ 2013-04-07 11:42             ` Abel Gordon
  2013-04-07 13:34               ` [Qemu-devel] Re: " Zhangleiqiang
  0 siblings, 1 reply; 23+ messages in thread
From: Abel Gordon @ 2013-04-07 11:42 UTC (permalink / raw)
  To: Zhangleiqiang
  Cc: Stefan Hajnoczi, Luohao (brian), Stefan Hajnoczi,
	qemu-devel@nongnu.org, Haofeng



qemu-devel-bounces+abelg=il.ibm.com@nongnu.org wrote on 07/04/2013 02:31:20 PM:

> From: Zhangleiqiang <zhangleiqiang@huawei.com>
> To: Stefan Hajnoczi <stefanha@redhat.com>,
> Cc: Zhangleiqiang <zhangleiqiang@huawei.com>, Stefan Hajnoczi
> <stefanha@gmail.com>, "Luohao \(brian\)" <brian.luohao@huawei.com>,
> "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>, Haofeng
<haofeng@huawei.com>
> Date: 07/04/2013 02:31 PM
> Subject: [Qemu-devel] Re: Re: question about performance of dataplane
> Sent by: qemu-devel-bounces+abelg=il.ibm.com@nongnu.org
>
> Hi, Stefan:
>
>    Following your advice, I have finished benchmarks with multiple
> vcpus (SMP) and parallel I/O workloads.
>    The results still show that the disk with dataplane enabled has no
> advantage over non-dataplane in random write mode. In sequential
> write mode, however, the former has an obvious advantage.

Hi, Leiqzhang

Interesting numbers. How many cores does the host have? Was HT enabled?
Did you try to see what happens when you run more than 1 guest, and when
you create at least 1 VCPU per core on the host?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] Re: Re: Re: question about performance of dataplane
  2013-04-07 11:42             ` Abel Gordon
@ 2013-04-07 13:34               ` Zhangleiqiang
  2013-04-07 14:01                 ` Abel Gordon
  0 siblings, 1 reply; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-07 13:34 UTC (permalink / raw)
  To: Abel Gordon
  Cc: Zhangleiqiang, Luohao (brian), Haofeng, Stefan Hajnoczi,
	qemu-devel@nongnu.org, Stefan Hajnoczi

Hi, Abel Gordon:

	The CPU info of the host is as follows:

	Physical CPUs:       2
	Cores per phy. CPU:  6
	HT:                  enabled

	Following your advice, I have finished another benchmark which ensures the vcpu count (32) is greater than the host's CPU core count (24). The results showed that dataplane does have an obvious advantage over non-dataplane in random write mode too.

1. Environment:
	a). Qemu 1.4 master branch
	b). kernel:  3.5.0-2.fc17.x86_64
	c). virtual disks location:  the same local SATA harddisk with ext4 fs
	d). Main Testing VM:  os: win2003 server/raw, disk1: raw/non-dataplane/10G/NTFS, disk2: raw/dataplane/10G/NTFS, vcpu: 8 (the limit of Windows 2003 Server)

./x86_64-softmmu/qemu-system-x86_64  -enable-kvm -smp 8 -name win2003 -M pc-0.15 -m 1024 -boot c -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -monitor stdio -drive file=/home/win2003.img,if=none,format=raw,cache=none,id=drive0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charchannel3 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name -usb -device usb-tablet,id=input0 -spice port=3007,addr=186.100.8.121,disable-ticketing -vga qxl -global qxl.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 -drive id=drive1,if=none,cache=none,format=raw,file=/home/data.img -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7 -drive id=drive2,if=none,cache=none,format=raw,file=/home/data2.img,aio=native -device virtio-blk-pci,drive=drive2,bus=pci.0,addr=0x8,scsi=off,x-data-plane=on,config-wce=off

	e). Other 3 VMs:  os: win2003 server/raw, disk: raw/non-dataplane/10G/NTFS, vcpu: 8

./x86_64-softmmu/qemu-system-x86_64  -enable-kvm -smp 8 -name win2006 -M pc-0.15 -m 1024 -boot c -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -monitor stdio -drive file=/home/win2006.img,if=none,format=raw,cache=none,id=drive0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charchannel3 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel3,id=channel3,name=arbitrary.virtio.serial.port.name -usb -device usb-tablet,id=input0 -spice port=3010,addr=186.100.8.121,disable-ticketing -vga qxl -global qxl.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-blk-pci,drive=drive0,bus=pci.0,addr=0x6,bootindex=1 -drive id=drive1,if=none,cache=none,format=raw,file=/home/data_2006.img -device virtio-blk-pci,drive=drive1,bus=pci.0,addr=0x7

	
2. Testing Tool And Testing Params:
	a). Only the IOMeter app running in all 4 VMs, and no other I/O on the host (Dom0)
	b). 100% random, 0% read, 16K data size, 50 outstanding I/Os
	c). Main Testing VM:  the two disks tested separately, not simultaneously, each for 15 minutes
	d). Other 3 VMs:  generate I/O pressure continuously

3. Testing Results:

	IOPS_dataplane   IOPS_non_dataplane   MBPS_dataplane   MBPS_non_dataplane
	201.4612         74.0764              3.147832         1.157444

> -----Original Message-----
> From: Abel Gordon [mailto:ABELG@il.ibm.com]
> Sent: April 7, 2013 19:42
> To: Zhangleiqiang
> Cc: Luohao (brian); Haofeng; qemu-devel@nongnu.org; Stefan Hajnoczi;
> Stefan Hajnoczi
> Subject: Re: [Qemu-devel] Re: Re: question about performance of dataplane
> 
> 
> 
> qemu-devel-bounces+abelg=il.ibm.com@nongnu.org wrote on 07/04/2013 02:31:20 PM:
> 
> > From: Zhangleiqiang <zhangleiqiang@huawei.com>
> > To: Stefan Hajnoczi <stefanha@redhat.com>,
> > Cc: Zhangleiqiang <zhangleiqiang@huawei.com>, Stefan Hajnoczi
> > <stefanha@gmail.com>, "Luohao \(brian\)" <brian.luohao@huawei.com>,
> > "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>, Haofeng
> <haofeng@huawei.com>
> > Date: 07/04/2013 02:31 PM
> > Subject: [Qemu-devel] Re: Re: question about performance of dataplane
> > Sent by: qemu-devel-bounces+abelg=il.ibm.com@nongnu.org
> >
> > Hi, Stefan:
> >
> >    Following your advice, I have finished benchmarks with multiple
> > vcpus (SMP) and parallel I/O workloads.
> >    The results still show that the disk with dataplane enabled has no
> > advantage over non-dataplane in random write mode. In sequential
> > write mode, however, the former has an obvious advantage.
> 
> Hi, Leiqzhang
> 
> Interesting numbers. How many cores does the host have? Was HT enabled?
> Did you try to see what happens when you run more than 1 guest, and when
> you create at least 1 VCPU per core on the host?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: Re: Re: question about performance of dataplane
  2013-04-07 13:34               ` [Qemu-devel] Re: " Zhangleiqiang
@ 2013-04-07 14:01                 ` Abel Gordon
       [not found]                   ` <CAO-gD_kYmnph95UGUVbGBcvNqZ4zKDbzBdRXh+MD7KSoZFgyrw@mail.gmail.com>
  0 siblings, 1 reply; 23+ messages in thread
From: Abel Gordon @ 2013-04-07 14:01 UTC (permalink / raw)
  To: Zhangleiqiang
  Cc: Stefan Hajnoczi, Luohao (brian), Stefan Hajnoczi,
	qemu-devel@nongnu.org, Haofeng



Zhangleiqiang <zhangleiqiang@huawei.com> wrote on 07/04/2013 04:34:45 PM:

> Hi, Abel Gordon:
>
>    The CPU info of the host is as follows:
>
>    Physical CPUs:       2
>    Cores per phy. CPU:  6
>    HT:                  enabled
>
>    Following your advice, I have finished another benchmark which
> ensures the vcpu count (32) is greater than the host's CPU core count
> (24). The results showed that dataplane does have an obvious advantage
> over non-dataplane in random write mode too.

Thank you very much for sharing these numbers.
Did you try to evaluate how both mechanisms scale with many VMs?

For example, you can run:
  first 1 VM  with 2 VCPUs
  then  2 VMs with 2 VCPUs each
  then  3 VMs with 2 VCPUs each
  ...
  up to 12 VMs with 2 VCPUs each

and compare the results: average performance per VM for each
configuration.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] question about performance of dataplane
  2013-04-01 13:34     ` [Qemu-devel] question about performance of dataplane Zhangleiqiang
  2013-04-02  2:02       ` [Qemu-devel] Re: " Zhangleiqiang
@ 2013-04-07 14:05       ` Anthony Liguori
  2013-04-08  3:13         ` [Qemu-devel] Re: " Zhangleiqiang
  2013-04-08 11:08         ` [Qemu-devel] " Stefan Hajnoczi
  1 sibling, 2 replies; 23+ messages in thread
From: Anthony Liguori @ 2013-04-07 14:05 UTC (permalink / raw)
  To: Zhangleiqiang, Stefan Hajnoczi, stefanha@redhat.com
  Cc: Luohao (brian), qemu-devel@nongnu.org, Haofeng, leiqzhang

Zhangleiqiang <zhangleiqiang@huawei.com> writes:

> Hi, Stefan:
>
> 	I have done some testing to compare the performance of dataplane and non-dataplane.  The result did not meet my expectations: the disk with dataplane enabled showed no advantage over non-dataplane.
>
> 	Below are the environment info and testing results.  Is something wrong with my testing method or testing environment?  Could you give me some advice?  I can provide more information if needed.
>
>
> 1. Environment:
> 	a). Qemu 1.4 master branch
> 	b). kernel:  3.5.0-2.fc17.x86_64
> 	c). virtual disks location:  the same local SATA harddisk
> 	with ext4 fs

I doubt you'll see any performance difference with a single SATA drive.
There's no real parallelism possible as you have exactly one spindle
available.

> datasize/RW_Ratio   IOPS_dataplane   IOPS_non_dataplane   MBPS_dataplane   MBPS_non_dataplane
> 16K/0%              294.094948       293.609606           4.595234         4.58765
> 16K/25%             283.096745       281.649258           4.423387         4.40077
> 16K/75%             316.039801       309.585336           4.938122         4.837271
> 32K/0%              257.529537       258.806128           8.047798         8.087692
> 32K/25%             253.729281       253.756673           7.92904          7.929896
> 32K/75%             292.384568       280.991434           9.137018         8.780982
> 4K/0%               321.599352       324.116063           1.256247         1.266078
> 4K/25%              309.906635       309.294278           1.210573         1.208181
> 4K/75%              350.168882       350.772329           1.367847         1.370204


You are getting 300 MB/s to a single SATA disk?  I strongly suspect
you are seeing cache interaction.  I'd suggest disabling WCE on the disk
if you're going to benchmark like this.
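
For example, the drive's write cache can be switched off on the host with
hdparm (an illustrative command, run against the host's SATA device):

    hdparm -W0 /dev/sda    # disable the disk's volatile write cache

That takes the disk's cache out of the picture so the numbers reflect
actual media performance.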

I'd also guess 300 MB/s is what you're maxing out at with native too.
So presumably normal virtio-blk is already getting close to native
performance.  dataplane certainly isn't going to give you better than
native performance :-)

Regards,

Anthony Liguori

>
>
>
> ----------
> Leiqzhang
>
> Best Regards
>
>
>> From: qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org [mailto:qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org] On behalf of leiqzhang
>> Sent: March 28, 2013 12:03
>> To: Stefan Hajnoczi
>> Cc: pbonzini@redhat.com; qemu-devel@nongnu.org; stefanha@redhat.com; 张磊强
>> Subject: [Qemu-devel] Re: dataplane bug: failure to start Windows VM with dataplane enabled
>> 
>> Hi, Stefan:
>> 
>>     Thanks for your reply and patch. I have tested Windows 7, Windows 2003, and Fedora.
>> 
>>      The patch fixes the problem, thanks.
>> 
>> -- 
>> leiqzhang
>> Sent by  Sparrow
>> 
>> On Wednesday, March 27, 2013, at 23:41, Stefan Hajnoczi wrote:
>> On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
>> Hi, Paolo && Stefan:
>> 
>> When I test the dataplane feature with qemu master, I find that
>> Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
>> enabled. A Fedora VM, however, starts normally.
>> 
>> The command I use to boot QEMU is:
>> x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
>> file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
>> -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
>> 
>> I found that a similar bug was reported some days ago:
>> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
>> and that a patch for it was already committed by Paolo on Mar 13:
>> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
>> 
>> But it does not work in my environment. Could you give me some advice
>> on debugging this problem? I can provide more information if needed.
>> 
>> I sent a fix and CCed you on the patch. Please test it if you have
>> time.
>> 
>> Stefan
>>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: Re: Re: question about performance of dataplane
       [not found]                   ` <CAO-gD_kYmnph95UGUVbGBcvNqZ4zKDbzBdRXh+MD7KSoZFgyrw@mail.gmail.com>
       [not found]                     ` <OF71AA8EBF.56A0BFE1-ONC2257B46.0059C6C3-C2257B46.005B8DA6@il.ibm. <3A6795EA1206904E94BEC8EF9DF109AE05D3AD3F@szxeml510-mbx.china.huawei.com>
@ 2013-04-07 16:40                     ` Abel Gordon
  2013-04-08  9:06                       ` [Qemu-devel] Re: " Zhangleiqiang
  1 sibling, 1 reply; 23+ messages in thread
From: Abel Gordon @ 2013-04-07 16:40 UTC (permalink / raw)
  To: 张磊强
  Cc: Zhangleiqiang, Luohao (brian), Haofeng, Stefan Hajnoczi,
	qemu-devel@nongnu.org, Stefan Hajnoczi



张磊强 <leiqzhang@gmail.com> wrote on 07/04/2013 07:10:24 PM:

>
> Hi, Abel & Stefan:
>
>      After thinking twice about the benchmarks and the idea of
> dataplane, I am still confused.

Please note that while I am familiar with the documentation and architecture
of dataplane, I didn't contribute to the dataplane code. So Stefan is
actually the right person to answer your questions.

>      It's my understanding that the advantage of dataplane mainly
> consists of two parts. The first is that splitting the I/O thread from
> the vcpu thread avoids contention on the global mutex, and the second
> is that the dedicated I/O thread is not blocked when a "vm exit"
> occurs in a vcpu thread.  Am I right?

As far as I understand, you are right. But wait for Stefan's confirmation.

>      These two advantages should always be effective, whether or not
> there are more vcpus than host cores.

I would say that the advantage is effective as long as at least one vcpu
thread executing guest I/O runs simultaneously with the back-end
I/O thread (each thread on a different core).


> But why is the advantage of dataplane
> only so obvious when there are more vcpus than host cores?

Maybe the issue is related to the fact that no-dataplane experiences
more bottlenecks when you run more and more VCPU threads. So it's not
that dataplane performs better with more VCPUs; the issue is that
no-dataplane performs worse with more VCPUs.
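
To make the first advantage concrete: dataplane boils down to a dedicated
per-device thread with its own event loop, so virtqueue processing never
waits for the global mutex held by vCPU or main-loop threads.  Very
roughly (a simplified sketch with illustrative helper names, not QEMU's
actual code):

    /* Simplified model of the per-device dataplane thread. */
    static void *dataplane_thread(void *opaque)
    {
        for (;;) {
            wait_for_ioeventfd();          /* guest kick, delivered by KVM */
            process_virtqueue_requests();  /* pop descriptors, submit Linux AIO */
            complete_finished_requests();  /* fill the used ring */
            signal_guest_notifier();       /* irqfd -> interrupt to the guest */
        }
    }

With the global mutex off this path, a vm exit or a busy main loop no
longer stalls I/O processing, which matches the two advantages above.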

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] Re: question about performance of dataplane
  2013-04-07 14:05       ` [Qemu-devel] " Anthony Liguori
@ 2013-04-08  3:13         ` Zhangleiqiang
  2013-04-08 11:08         ` [Qemu-devel] " Stefan Hajnoczi
  1 sibling, 0 replies; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-08  3:13 UTC (permalink / raw)
  To: Anthony Liguori, Stefan Hajnoczi, stefanha@redhat.com
  Cc: Zhangleiqiang, Luohao (brian), qemu-devel@nongnu.org, Haofeng,
	leiqzhang



> -----Original Message-----
> From: Anthony Liguori [mailto:anthony@codemonkey.ws]
> Sent: April 7, 2013 22:06
> To: Zhangleiqiang; Stefan Hajnoczi; stefanha@redhat.com
> Cc: Zhangleiqiang; Luohao (brian); qemu-devel@nongnu.org; Haofeng;
> leiqzhang
> Subject: Re: [Qemu-devel] question about performance of dataplane
> 
> Zhangleiqiang <zhangleiqiang@huawei.com> writes:
> 
> >
> > 1. Environment:
> > 	a). Qemu 1.4 master branch
> > 	b). kernel:  3.5.0-2.fc17.x86_64
> > 	c). virtual disks location:  the same local SATA harddisk
> > 	with ext4 fs
> 
> I doubt you'll see any performance difference with a single SATA drive.
> There's no real parallelism possible as you have exactly one spindle
> available.
> 



> > datasize/RW_Ratio   IOPS_dataplane   IOPS_non_dataplane   MBPS_dataplane   MBPS_non_dataplane
> > 16K/0%              294.094948       293.609606           4.595234         4.58765
> > 16K/25%             283.096745       281.649258           4.423387         4.40077
> > 16K/75%             316.039801       309.585336           4.938122         4.837271
> 
> 
> You are getting 300 MB/s to a single SATA disk?  I strongly suspect
> you are seeing cache interaction.  I'd suggest disabling WCE on the disk
> if you're going to benchmark like this.
> 
> I'd also guess 300 MB/s is what you're maxing out at with native too.
> So presumably normal virtio-blk is already getting close to native
> performance.  dataplane certainly isn't going to give you better than
> native performance :-)
>

Maybe the result table in my mail was not so clear: the IOPS is about 300, but the throughput is just about 1-9 MB/s, :)

> Regards,
> 
> Anthony Liguori
> 
> >
> >
> >
> > ----------
> > Leiqzhang
> >
> > Best Regards
> >
> >
> >> From: qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org
> [mailto:qemu-devel-bounces+zhangleiqiang=huawei.com@nongnu.org] On behalf of
> leiqzhang
> >> Sent: March 28, 2013 12:03
> >> To: Stefan Hajnoczi
> >> Cc: pbonzini@redhat.com; qemu-devel@nongnu.org;
> stefanha@redhat.com; 张磊强
> >> Subject: [Qemu-devel] Re: dataplane bug: failure to start Windows VM with
> dataplane enabled
> >>
> >> Hi, Stefan:
> >>
> >>     Thanks for your reply and patch. I have tested Windows 7,
> Windows 2003, and Fedora.
> >>
> >>      The patch fixes the problem, thanks.
> >>
> >> --
> >> leiqzhang
> >> Sent by  Sparrow
> >>
> >> On Wednesday, March 27, 2013, at 23:41, Stefan Hajnoczi wrote:
> >> On Tue, Mar 26, 2013 at 11:10:35PM +0800, 张磊强 wrote:
> >> Hi, Paolo && Stefan:
> >>
> >> When I test the dataplane feature with qemu master, I find that
> >> Windows (Windows 7 and Windows 2003) VMs hang when dataplane is
> >> enabled. A Fedora VM, however, starts normally.
> >>
> >> The command I use to boot QEMU is:
> >> x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -drive
> >> file=win7.img,if=none,id=drive-virtio-disk,format=raw,cache=none,aio=native
> >> -device virtio-blk-pci,config-wce=off,scsi=off,x-data-plane=on,drive=drive-virtio-disk,id=virtio-disk
> >>
> >> I found that a similar bug was reported some days ago:
> >> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> >> and that a patch for it was already committed by Paolo on Mar 13:
> >> http://lists.nongnu.org/archive/html/qemu-devel/2013-03/msg02200.html
> >>
> >> But it does not work in my environment. Could you give me some advice
> >> on debugging this problem? I can provide more information if needed.
> >>
> >> I sent a fix and CCed you on the patch. Please test it if you have
> >> time.
> >>
> >> Stefan
> >>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [Qemu-devel] Re: Re: Re: Re: question about performance of dataplane
  2013-04-07 16:40                     ` Abel Gordon
@ 2013-04-08  9:06                       ` Zhangleiqiang
  2013-04-08 11:04                         ` Abel Gordon
  0 siblings, 1 reply; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-08  9:06 UTC (permalink / raw)
  To: Abel Gordon, Stefan Hajnoczi
  Cc: Luohao (brian), Haofeng, Stefan Hajnoczi, qemu-devel@nongnu.org,
	anthony@codemonkey.ws, 张磊强

> -----Original Message-----
> From: Abel Gordon [mailto:ABELG@il.ibm.com]
> Sent: April 8, 2013 0:40
> To: 张磊强
> Cc: Luohao (brian); Haofeng; qemu-devel@nongnu.org; Stefan Hajnoczi;
> Stefan Hajnoczi; Zhangleiqiang
> Subject: Re: [Qemu-devel] Re: Re: Re: question about performance of
> dataplane
> 
> 
> 
> 张磊强 <leiqzhang@gmail.com> wrote on 07/04/2013 07:10:24 PM:
> 
> >
> > HI, Abel & Stefan:
> >
> >      After thinking twice about the benchmarks and the idea of
> > dataplane, I am still confused.
> 
> Please note that while I am familiar with the documentation and architecture
> of dataplane, I didn't contribute to the dataplane code. So Stefan is
> actually the right person to answer your questions.
> 
> >      It's my understanding that the advantage of dataplane mainly
> > consists of two parts. The first is that splitting the I/O thread from
> > the vcpu thread avoids contention on the global mutex, and the second
> > is that the dedicated I/O thread is not blocked when a "vm exit"
> > occurs in a vcpu thread.  Am I right?
> 
> As far as I understand, you are right. But wait for Stefan's confirmation.

OK, I will also wait for Stefan's confirmation, :)

> 
> >      These two advantages should always be effective, whether or not
> > there are more vcpus than host cores.
> 
> I would say that the advantage is effective as long as at least one vcpu
> thread executing guest I/O runs simultaneously with the back-end
> I/O thread (each thread on a different core).
> 
> 
> > But why is the advantage of dataplane
> > only so obvious when there are more vcpus than host cores?
> 
> Maybe the issue is related to the fact that no-dataplane experiences
> more bottlenecks when you run more and more VCPU threads. So it's not
> that dataplane performs better with more VCPUs; the issue is that
> no-dataplane performs worse with more VCPUs.

I think maybe Anthony is right. In the previous benchmarks, maybe non-dataplane had already reached the physical disk's IOPS upper limit.

So I did another benchmark which ensures the vcpu count is less than the host's core count, but also generates continuous I/O pressure from one VM while testing in the other VM. The result showed that dataplane did have some advantage over non-dataplane.

1. IO Pressure Mode:  8 workers, 16K IO size, 25% Read, 100% Random,  and 50 outstanding IOs
2. Benchmark Mode:  8 workers, 16K IO size, 0% Read,  100% Random,  and 50 outstanding IOs
3. Testing Results:
	a). IOPS:  	178.324867 (non-dataplane)  vs  230.956328 (dataplane)
	b). MBPS:  	2.786326 (non-dataplane)  vs  3.608693 (dataplane)

----------
Leiqzhang

Best Regards

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: Re: Re: Re: question about performance of dataplane
  2013-04-08  9:06                       ` [Qemu-devel] Re: " Zhangleiqiang
@ 2013-04-08 11:04                         ` Abel Gordon
  2013-04-08 11:13                           ` Zhangleiqiang
  0 siblings, 1 reply; 23+ messages in thread
From: Abel Gordon @ 2013-04-08 11:04 UTC (permalink / raw)
  To: Zhangleiqiang
  Cc: Luohao (brian), anthony@codemonkey.ws, Stefan Hajnoczi,
	Stefan Hajnoczi, qemu-devel@nongnu.org, Haofeng,
	张磊强

Zhangleiqiang <zhangleiqiang@huawei.com> wrote on 08/04/2013 12:06:17 PM:

> I think maybe Anthony is right. In the previous benchmarks, maybe
> non-dataplane had already reached the physical disk's IOPS upper limit.

Yep, agree. Try running the same benchmark on the host to see
what the bare-metal performance of your system is (the upper limit)
and how far dataplane and non-dataplane are from this value.
Note you are currently focusing on throughput, but you should also
consider latency and CPU utilization.

> So I did another benchmark which ensures the vcpu count is less than the
> host's core count, but also generates continuous I/O pressure from one VM
> while testing in the other VM. The result showed that dataplane did have
> some advantage over non-dataplane.
>
> 1. IO Pressure Mode:  8 workers, 16K IO size, 25% Read, 100% Random,
> and 50 outstanding IOs
> 2. Benchmark Mode:  8 workers, 16K IO size, 0% Read,  100% Random,
> and 50 outstanding IOs
> 3. Testing Results:
>    a). IOPS:     178.324867 (non-dataplane)  vs  230.956328 (dataplane)
>    b). MBPS:     2.786326 (non-dataplane)  vs  3.608693 (dataplane)

Note that running another VM just to "synthetically" degrade the
performance of the system may cause some side effects and confuse the
results (e.g. the "other" VM may stress the system differently and
put more pressure when you use dataplane than when you don't use
dataplane)

Last thing, IMHO, you should also evaluate scalability:
how do dataplane and no-dataplane perform when you run multiple VMs?

For example,
  first  1 VM  with 2 VCPUs
  then   2 VMs with 2 VCPUs each
  then   3 VMs with 2 VCPUs each
  ...
  up to 12 VMs with 2 VCPUs each

It seems like you unintentionally tested what happens with 2 VMs when
you added the "other" VM to create I/O pressure.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] question about performance of dataplane
  2013-04-07 14:05       ` [Qemu-devel] " Anthony Liguori
  2013-04-08  3:13         ` [Qemu-devel] Re: " Zhangleiqiang
@ 2013-04-08 11:08         ` Stefan Hajnoczi
  2013-04-08 13:01           ` Zhangleiqiang
  1 sibling, 1 reply; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-04-08 11:08 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: Zhangleiqiang, Luohao (brian), Haofeng, qemu-devel@nongnu.org,
	stefanha@redhat.com, leiqzhang

On Sun, Apr 07, 2013 at 09:05:47AM -0500, Anthony Liguori wrote:
> Zhangleiqiang <zhangleiqiang@huawei.com> writes:
> 
> > Hi, Stefan:
> >
> > 	I have done some testing to compare the performance of dataplane and non-dataplane.  The result did not meet my expectations: the disk with dataplane enabled showed no advantage over non-dataplane.
> >
> > 	Below are the environment info and testing results.  Is something wrong with my testing method or testing environment?  Could you give me some advice?  I can provide more information if needed.
> >
> >
> > 1. Environment:
> > 	a). Qemu 1.4 master branch
> > 	b). kernel:  3.5.0-2.fc17.x86_64
> > 	c). virtual disks location:  the same local SATA harddisk
> > 	with ext4 fs
> 
> I doubt you'll see any performance difference with a single SATA drive.
> There's no real parallelism possible as you have exactly one spindle
> available.

Sorry for the delay, catching up on emails after my Easter vacation.

I agree with Anthony here.  It seems your benchmark configuration is
bottlenecked below the point where data plane makes a difference.

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: Re: Re: Re: question about performance of dataplane
  2013-04-08 11:04                         ` Abel Gordon
@ 2013-04-08 11:13                           ` Zhangleiqiang
  2013-04-08 11:31                             ` Abel Gordon
  0 siblings, 1 reply; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-08 11:13 UTC (permalink / raw)
  To: Abel Gordon
  Cc: Luohao (brian), anthony@codemonkey.ws, Stefan Hajnoczi,
	Stefan Hajnoczi, qemu-devel@nongnu.org, Haofeng,
	张磊强



> -----Original Message-----
> From: Abel Gordon [mailto:ABELG@il.ibm.com]
> Sent: Monday, April 08, 2013 7:04 PM
> To: Zhangleiqiang
> Cc: anthony@codemonkey.ws; Luohao (brian); Haofeng; 张磊强;
> qemu-devel@nongnu.org; Stefan Hajnoczi; Stefan Hajnoczi
> Subject: Re: Re: [Qemu-devel] Re: Re: Re: question about
> performance of dataplane
> 
> Zhangleiqiang <zhangleiqiang@huawei.com> wrote on 08/04/2013 12:06:17
> PM:
> 
> > I think maybe Anthony is right. In the previous benchmarks, maybe
> > non-dataplane had already reached the physical disk's IOPS upper limit.
> 
> Yep, agree. Try to run the same benchmark on the host to see
> what the bare-metal performance of your system is (the upper limit)
> and how far dataplane and non-dataplane are from this value.
> Note that you are currently focusing on throughput, but you should also
> consider latency and CPU utilization.
> 
> > So I did another benchmark which ensures the total number of vCPUs is
> > less than the host's core count, and which also applies continuous I/O
> > pressure from one VM while testing in the other. The result showed
> > that dataplane did have some advantage over non-dataplane.
> >
> > 1. I/O Pressure Mode:  8 workers, 16K I/O size, 25% Read, 100% Random,
> > and 50 outstanding I/Os
> > 2. Benchmark Mode:  8 workers, 16K I/O size, 0% Read, 100% Random,
> > and 50 outstanding I/Os
> > 3. Testing Results:
> >    a). IOPS:     178.324867 (non-dataplane)  vs  230.956328 (dataplane)
> >    b). MBPS:     2.786326 (non-dataplane)  vs  3.608693 (dataplane)
> 
> Note that running another VM just to "synthetically" degrade the
> performance of the system may cause side effects and confuse the
> results (e.g. the "other" VM may stress the system differently and
> apply more pressure when you use dataplane than when you don't).
> 

I think running multiple benchmarks under the same conditions and averaging the results will eliminate these "side effects".

> Last thing, IMHO, you should also evaluate scalability:
> how do dataplane and non-dataplane perform when you run multiple VMs?
> 
> For example,
>   first  1 VM  with 2 VCPUs
>   then   2 VMs with 2 VCPUs each
>   then   3 VMs with 2 VCPUs each
>   ...
>   up to 12 VMs with 2 VCPUs each
> 
> It seems like you unintentionally tested what happens with 2 VMs when
> you added the "other" VM to create I/O pressure.

Indeed, I used 2 VMs in the previous benchmark to ensure the number of
vCPUs was less than the host's core count, e.g. each VM had 8 vCPUs.
Thanks for your advice, I will evaluate the scalability.  :)


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] Re: Re: Re: Re: question about performance of dataplane
  2013-04-08 11:13                           ` Zhangleiqiang
@ 2013-04-08 11:31                             ` Abel Gordon
  0 siblings, 0 replies; 23+ messages in thread
From: Abel Gordon @ 2013-04-08 11:31 UTC (permalink / raw)
  To: Zhangleiqiang
  Cc: Luohao (brian), Haofeng, Stefan Hajnoczi, qemu-devel@nongnu.org,
	Stefan Hajnoczi, anthony@codemonkey.ws, 张磊强



Zhangleiqiang <zhangleiqiang@huawei.com> wrote on 08/04/2013 02:13:50 PM:


> I think running multiple benchmarks under the same conditions and
> averaging the results will eliminate these "side effects".

Calculating the average of multiple benchmarks may not solve the issue.
For example, if in the dataplane scenario the "other" VM does X IOPS
and in the non-dataplane scenario it does Y IOPS, then you may not be
comparing apples to apples (unless you consider the "other" VM
as part of the results).

>
> > Last thing, IMHO, you should also evaluate scalability:
> > how do dataplane and non-dataplane perform when you run multiple VMs?
> >
> > For example,
> >   first  1 VM  with 2 VCPUs
> >   then   2 VMs with 2 VCPUs each
> >   then   3 VMs with 2 VCPUs each
> >   ...
> >   up to 12 VMs with 2 VCPUs each
> >
> > It seems like you unintentionally tested what happens with 2 VMs when
> > you added the "other" VM to create I/O pressure.
>
> Indeed, I used 2 VMs in the previous benchmark to ensure the number of
> vCPUs was less than the host's core count, e.g. each VM had 8 vCPUs.
> Thanks for your advice, I will evaluate the scalability.  :)


Thanks, I am looking forward to the results :)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] question about performance of dataplane
  2013-04-08 11:08         ` [Qemu-devel] " Stefan Hajnoczi
@ 2013-04-08 13:01           ` Zhangleiqiang
  2013-04-09 11:49             ` Stefan Hajnoczi
  0 siblings, 1 reply; 23+ messages in thread
From: Zhangleiqiang @ 2013-04-08 13:01 UTC (permalink / raw)
  To: Stefan Hajnoczi, Anthony Liguori
  Cc: Zhangleiqiang, Luohao (brian), Haofeng, qemu-devel@nongnu.org,
	stefanha@redhat.com, leiqzhang


> -----Original Message-----
> From: Stefan Hajnoczi [mailto:stefanha@gmail.com]
> Sent: Monday, April 08, 2013 7:09 PM
> 
> On Sun, Apr 07, 2013 at 09:05:47AM -0500, Anthony Liguori wrote:
> > Zhangleiqiang <zhangleiqiang@huawei.com> writes:
> >
> > > 1. Environment:
> > > 	a). Qemu 1.4 master branch
> > > 	b). kernel:  3.5.0-2.fc17.x86_64
> > > 	c). virtual disks location: the same local SATA harddisk
> > > 	with ext4 fs
> >
> > I doubt you'll see any performance difference with a single SATA drive.
> > There's no real parallelism possible as you have exactly one spindle
> > available.
> 
> Sorry for the delay, catching up on emails after my Easter vacation.
> 
> I agree with Anthony here.  It seems your benchmark configuration is
> bottlenecked below the point where data plane makes a difference.
> 

I don't understand what "no real parallelism possible as you have exactly one spindle" means. Does it mean I should put the OS vdisk on a different hard disk from the data vdisks? Or that I should put the data vdisks on different hard disks?

But in the benchmark scenario mentioned above, even though the OS vdisk and the data vdisks were on the same hard disk, IOMeter only generated I/O to the data vdisks. Besides, the dataplane and non-dataplane benchmarks were run consecutively rather than simultaneously.

I have finished some benchmarks these days, and the results have already been sent to the mailing list. It seems dataplane has an advantage when there is I/O competition. I guess non-dataplane may already have reached the physical disk's IOPS upper limit in the scenario above, so dataplane didn't make a difference?
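
One quick way to check that hypothesis is to watch the host disk while
the guest benchmark runs, for instance with iostat from the sysstat
package (assuming the virtual disks live on /dev/sda; adjust for your
setup):

  # Extended per-device statistics, refreshed every second; a %util
  # pinned near 100 during the non-dataplane run means the single
  # spindle is already saturated and no front-end change can add IOPS
  iostat -x 1 /dev/sda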

----------
Leiqzhang

Best Regards

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Qemu-devel] question about performance of dataplane
  2013-04-08 13:01           ` Zhangleiqiang
@ 2013-04-09 11:49             ` Stefan Hajnoczi
  0 siblings, 0 replies; 23+ messages in thread
From: Stefan Hajnoczi @ 2013-04-09 11:49 UTC (permalink / raw)
  To: Zhangleiqiang
  Cc: Luohao (brian), Haofeng, Stefan Hajnoczi, qemu-devel@nongnu.org,
	Anthony Liguori, leiqzhang

On Mon, Apr 08, 2013 at 01:01:53PM +0000, Zhangleiqiang wrote:
> I guess non-dataplane may already have reached the physical disk's IOPS upper limit in the scenario above, so dataplane didn't make a difference?

Yes.

Stefan

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2013-04-09 11:49 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-03-26 15:10 [Qemu-devel] dataplane bug: fail to start Windows VM with dataplane enable 张磊强
2013-03-27 13:32 ` Stefan Hajnoczi
2013-03-27 14:02   ` Stefan Hajnoczi
2013-03-27 15:15     ` Stefan Hajnoczi
2013-03-27 15:41 ` Stefan Hajnoczi
2013-03-28  4:03   ` [Qemu-devel] Re: " leiqzhang
2013-04-01 13:34     ` [Qemu-devel] question about performance of dataplane Zhangleiqiang
2013-04-02  2:02       ` [Qemu-devel] Re: " Zhangleiqiang
2013-04-04  7:20         ` Stefan Hajnoczi
2013-04-07 11:31           ` [Qemu-devel] Re: " Zhangleiqiang
2013-04-07 11:42             ` Abel Gordon
2013-04-07 13:34               ` [Qemu-devel] Re: " Zhangleiqiang
2013-04-07 14:01                 ` Abel Gordon
     [not found]                   ` <CAO-gD_kYmnph95UGUVbGBcvNqZ4zKDbzBdRXh+MD7KSoZFgyrw@mail.gmail.com>
     [not found]                     ` <OF71AA8EBF.56A0BFE1-ONC2257B46.0059C6C3-C2257B46.005B8DA6@il.ibm. <3A6795EA1206904E94BEC8EF9DF109AE05D3AD3F@szxeml510-mbx.china.huawei.com>
2013-04-07 16:40                     ` Abel Gordon
2013-04-08  9:06                       ` [Qemu-devel] Re: " Zhangleiqiang
2013-04-08 11:04                         ` Abel Gordon
2013-04-08 11:13                           ` Zhangleiqiang
2013-04-08 11:31                             ` Abel Gordon
2013-04-07 14:05       ` [Qemu-devel] " Anthony Liguori
2013-04-08  3:13         ` [Qemu-devel] Re: " Zhangleiqiang
2013-04-08 11:08         ` [Qemu-devel] " Stefan Hajnoczi
2013-04-08 13:01           ` Zhangleiqiang
2013-04-09 11:49             ` Stefan Hajnoczi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).