* [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane
@ 2013-09-02  8:24 Jonghwan Choi
  2013-09-04  8:58 ` Stefan Hajnoczi
  0 siblings, 1 reply; 5+ messages in thread
From: Jonghwan Choi @ 2013-09-02  8:24 UTC (permalink / raw)
  Cc: qemu-devel

Hello All.

Recently I measured I/O performance with Virtio-Blk-Data-Plane.
There was something strange in the test:
when the vcpu count is 1, I/O performance increases,
but when the vcpu count is 2 or more, I/O performance decreases.

I used the 3.10.9 stable kernel, QEMU 1.4.2 and fio 2.1.

What should I check in my test?

Thanks.


* Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane
  2013-09-02  8:24 [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane Jonghwan Choi
@ 2013-09-04  8:58 ` Stefan Hajnoczi
  2013-09-05  1:18   ` Jonghwan Choi
  0 siblings, 1 reply; 5+ messages in thread
From: Stefan Hajnoczi @ 2013-09-04  8:58 UTC (permalink / raw)
  To: Jonghwan Choi; +Cc: qemu-devel

On Mon, Sep 02, 2013 at 05:24:09PM +0900, Jonghwan Choi wrote:
> Recently I measured I/O performance with Virtio-Blk-Data-Plane.
> There was something strange in the test:
> when the vcpu count is 1, I/O performance increases,
> but when the vcpu count is 2 or more, I/O performance decreases.
> 
> I used the 3.10.9 stable kernel, QEMU 1.4.2 and fio 2.1.
> 
> What should I check in my test?

It's hard to say without any details on your benchmark configuration.

In general, x-data-plane=on performs well with SMP guests and with
multiple disks.  This is because the dataplane threads can process I/O
requests without contending on the QEMU global mutex or the iothread
event loop.
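
To make that concrete, here is a minimal sketch of enabling the
experimental dataplane directly on the QEMU command line (the image path
and drive id are placeholders; cache=none/aio=native is the usual setup
for dataplane, and scsi=off plus config-wce=off are required by the
experimental code):

  qemu-system-x86_64 ... \
      -drive file=/path/to/disk.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native \
      -device virtio-blk-pci,drive=drive-virtio-disk0,scsi=off,config-wce=off,x-data-plane=on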

In order to investigate further, please post:

1. The fio results so it's clear which cases performed worse and by how
   much.

2. The fio job files.

3. The QEMU command-line to launch the guest.

4. The host disk configuration (e.g. file systems, local
   SATA/FibreChannel/NFS, etc).

5. The basic host specs including RAM and number of logical CPUs.

Stefan


* Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane
  2013-09-04  8:58 ` Stefan Hajnoczi
@ 2013-09-05  1:18   ` Jonghwan Choi
  2013-09-23 13:27     ` Stefan Hajnoczi
  0 siblings, 1 reply; 5+ messages in thread
From: Jonghwan Choi @ 2013-09-05  1:18 UTC (permalink / raw)
  To: 'Stefan Hajnoczi'; +Cc: qemu-devel

Thanks for your reply.

> 
> 1. The fio results so it's clear which cases performed worse and by how
>    much.
>
When I set vcpu = 8, read performance decreased by about 25%.
In my test I got the best performance with vcpu = 1.

> 2. The fio job files.
> 
[testglobal]
description=high_iops
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting=1
rw=read
direct=1
ioengine=sync
bs=4m
numjobs=1
size=2048m

> 3. The QEMU command-line to launch the guest.
> 
<domain type='kvm' id='6' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>vm1</name>
  <uuid>76d6fca9-904d-1e3e-77c7-67a048be9d60</uuid>
  <memory unit='KiB'>50331648</memory>
  <currentMemory unit='KiB'>50331648</currentMemory>
  <vcpu placement='static'>8</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/var/lib/libvirt/images/vm1/ubuntu-kvm/tmpJQKe7w.raw'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
...
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.scsi=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.config-wce=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>

> 4. The host disk configuration (e.g. file systems, local
>    SATA/FibreChannel/NFS, etc).
>
-> SSD
> 5. The basic host specs including RAM and number of logical CPUs.
-> Host: 256 GB RAM, 31 logical CPUs; Guest: 48 GB RAM, 8 VCPUs


Thanks.
Best Regards.


* Re: [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane
  2013-09-05  1:18   ` Jonghwan Choi
@ 2013-09-23 13:27     ` Stefan Hajnoczi
  0 siblings, 0 replies; 5+ messages in thread
From: Stefan Hajnoczi @ 2013-09-23 13:27 UTC (permalink / raw)
  To: Jonghwan Choi; +Cc: qemu-devel

On Thu, Sep 05, 2013 at 10:18:28AM +0900, Jonghwan Choi wrote:

Thanks for posting these details.

Have you tried running with x-data-plane=off and vcpu = 8, and how does
the performance compare to x-data-plane=off with vcpu = 1?

> > 1. The fio results so it's clear which cases performed worse and by how
> >    much.
> >
> When I set vcpu = 8, read performance decreased by about 25%.
> In my test I got the best performance with vcpu = 1.

Performance with vcpu = 8 is 25% worse than performance with vcpu = 1?

Can you try pinning threads to host CPUs?  See libvirt emulatorpin and
vcpupin attributes:
http://libvirt.org/formatdomain.html#elementsCPUTuning
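
A rough sketch of what that could look like in the domain XML (the
cpuset values below are only placeholder host CPU numbers, adjust them
to your host topology):

  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <!-- ... one vcpupin entry per guest vcpu ... -->
    <emulatorpin cpuset='9-10'/>
  </cputune>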

> > 2. The fio job files.
> > 
> [testglobal]
> description=high_iops
> exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
> group_reporting=1
> rw=read
> direct=1
> ioengine=sync
> bs=4m
> numjobs=1
> size=2048m

A couple of points to check:

1. This test case is synchronous and latency-sensitive; you are not
   benchmarking parallel I/Os, so x-data-plane=on is not expected to
   perform any better than x-data-plane=off.

   The point of x-data-plane=on is to let smp > 1 guests with parallel
   I/O scale well.  If the workload does not meet both of those
   conditions, I don't expect you to see any gains over x-data-plane=off.

   If you want to try parallel I/Os, I suggest using the following
   options (a complete example job file is sketched after this list):

   ioengine=libaio
   iodepth=16

2. size=2048m with bs=4m on an SSD drive seems quite small because the
   test would complete quickly.  What is the overall running time of
   this test?

   In order to collect stable results it's usually a good idea for the
   test to run for a couple of minutes (e.g. 2 minutes minimum).
   Otherwise outliers can influence the results too much.

   You may need to increase 'size' or use the 'runtime=2m' option
   instead.
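
Putting those two suggestions together, the original job file could be
adapted roughly like this (the larger size and the runtime/time_based
values are just examples, adjust them to your setup):

    [testglobal]
    description=high_iops_parallel
    exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
    group_reporting=1
    rw=read
    direct=1
    ioengine=libaio
    iodepth=16
    bs=4m
    numjobs=1
    size=8192m
    runtime=2m
    time_based=1

Make sure the file fio creates (or a filename= you point it at) lives on
the virtio disk served by the dataplane, otherwise a different device is
being measured.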

Stefan


Thread overview:
2013-09-02  8:24 [Qemu-devel] I/O performance degradation with Virtio-Blk-Data-Plane Jonghwan Choi
2013-09-04  8:58 ` Stefan Hajnoczi
2013-09-05  1:18   ` Jonghwan Choi
2013-09-23 13:27     ` Stefan Hajnoczi
