public inbox for kvm@vger.kernel.org
* virtio-blk performance regression and qemu-kvm
@ 2012-02-10 14:36 Dongsu Park
  2012-02-12 23:55 ` Rusty Russell
                   ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Dongsu Park @ 2012-02-10 14:36 UTC (permalink / raw)
  To: qemu-devel; +Cc: kvm

Hi,

Recently I observed a performance regression with virtio-blk,
specifically different I/O bandwidths between qemu-kvm 0.14.1 and 1.0.
I want to share the benchmark results and ask what the reason
might be.

1. Test condition

 - On host, ramdisk-backed block device (/dev/ram0)
 - qemu-kvm is configured with virtio-blk driver for /dev/ram0,
   which is detected as /dev/vdb inside the guest VM.
 - Host System: Ubuntu 11.10 / Kernel 3.2
 - Guest System: Debian 6.0 / Kernel 3.0.6
 - Host I/O scheduler: deadline
 - Testing tool: fio
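
 For reference, an invocation along these lines would produce that
 setup. This is only a sketch: the exact flags used in the test were
 not given, and the image path, memory, and CPU counts here are
 assumptions.

```shell
# Hypothetical qemu-kvm invocation for the setup above: the host
# ramdisk /dev/ram0 attached as a raw virtio-blk disk, which then
# appears as /dev/vdb inside the guest.
qemu-kvm -m 1024 -smp 2 \
    -drive file=debian.img,if=virtio,format=raw \
    -drive file=/dev/ram0,if=virtio,format=raw,cache=none
```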

2. Raw performance on the host

 If we test I/O with fio on /dev/ram0 on the host,

 - Sequential read (on the host)
  # fio --name=iops --rw=read --size=1G --iodepth=1 \
   --filename=/dev/ram0 --ioengine=libaio --direct=1 --bs=4096

 - Sequential write (on the host)
  # fio --name=iops --rw=write --size=1G --iodepth=1 \
   --filename=/dev/ram0 --ioengine=libaio --direct=1 --bs=4096

 Result:

  read   1691.6 MByte/s
  write   898.9 MByte/s

 Unsurprisingly, it is extremely fast.

3. Comparison with different qemu-kvm versions

 Now I'm running benchmarks with both qemu-kvm 0.14.1 and 1.0.

 - Sequential read (Running inside guest)
   # fio --name=iops --rw=read --size=1G --iodepth=1 \
    --filename=/dev/vdb --ioengine=libaio --direct=1 --bs=4096

 - Sequential write (Running inside guest)
   # fio --name=iops --rw=write --size=1G --iodepth=1 \
    --filename=/dev/vdb --ioengine=libaio --direct=1 --bs=4096

 I ran each test three times and averaged the results.
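
 The three runs can be scripted; below is a sketch that assumes fio's
 --minimal (terse) output, where read bandwidth in KB/s is the 7th
 semicolon-separated field. Verify the field index against your fio
 version before relying on it.

```shell
# Run the guest read test three times and average the bandwidth.
# Field 7 of fio's --minimal output is read bandwidth in KB/s (assumed).
for i in 1 2 3; do
    fio --name=iops --rw=read --size=1G --iodepth=1 \
        --filename=/dev/vdb --ioengine=libaio --direct=1 --bs=4096 \
        --minimal | cut -d';' -f7
done | awk '{ sum += $1 } END { printf "avg: %.1f KB/s\n", sum / NR }'
```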

 Result:

  seqread with qemu-kvm 0.14.1   67.0 MByte/s
  seqread with qemu-kvm 1.0      30.9 MByte/s

  seqwrite with qemu-kvm 0.14.1  65.8 MByte/s
  seqwrite with qemu-kvm 1.0     30.5 MByte/s

 So the newest stable version of qemu-kvm achieves only about half
 the bandwidth of the older version 0.14.1.

The question is: why is it so much slower?
And how can we improve the performance, other than downgrading to 0.14.1?

I know there have already been several discussions on this issue,
for example the benchmark and trace of virtio-blk latency [1]
and the in-kernel accelerator "vhost-blk" [2].
I'm going to continue testing along those lines, too.
But does anyone have a better idea, or know of recent updates?

Regards,
Dongsu

[1] http://www.linux-kvm.org/page/Virtio/Block/Latency
[2] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/76893


* Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm
@ 2012-03-08 23:56 Ross Becker
  2012-03-09 10:01 ` Stefan Hajnoczi
  0 siblings, 1 reply; 29+ messages in thread
From: Ross Becker @ 2012-03-08 23:56 UTC (permalink / raw)
  To: kvm

I just joined this list in order to chime in here.

I'm seeing the exact same thing as Reeted: I've got a machine with a
storage subsystem capable of 400k IOPS, and when I pass that storage
through to VMs, each VM seems to top out at around 15-20k IOPS. I've
managed to reach 115k IOPS in aggregate by creating 8 VMs, pinning
their vCPUs to spread them amongst physical cores, and running I/O in
them simultaneously, but I'm unable to get a single VM past 20k IOPS.
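
The pinning step can be sketched with libvirt; the domain names and
core assignments here are hypothetical, not the ones actually used.

```shell
# Pin the 4 vCPUs of each of the 8 guests to dedicated physical cores
# so the guests do not contend for the same cores.
for vm in $(seq 0 7); do
    for vcpu in 0 1 2 3; do
        virsh vcpupin "guest$vm" "$vcpu" $((vm * 4 + vcpu))
    done
done
```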

I'm using qemu-kvm 0.12.1.2, as distributed in RHEL 6.2.

The hardware is a Dell R910 chassis with four Intel E7 processors. I am
passing LVM logical volume block devices directly to the VMs as disks:
format raw, virtio driver, write caching off, I/O mode native. Each VM
has 4 vCPUs.
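
In qemu-kvm command-line terms that configuration corresponds to flags
along these lines (the volume path and memory/CPU sizes are
hypothetical; the same settings can also be expressed in libvirt XML):

```shell
# One LVM logical volume passed to the guest as a raw virtio disk,
# with the host page cache bypassed and native Linux AIO.
qemu-kvm -m 4096 -smp 4 \
    -drive file=/dev/vg_data/lv_guest0,if=virtio,format=raw,cache=none,aio=native
```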

I'm also using fio to do my testing.

The interesting thing is that throughput is actually pretty fantastic; I'm
able to push 6.3 GB/sec using 256k blocks, but the IOPS at a 4k block size
are poor.
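
Those two numbers are roughly consistent with a fixed per-request cost:
6.3 GB/s at 256 KB per request works out to about the same request rate
as the 4k IOPS ceiling. A back-of-envelope check:

```shell
# 6.3 GB/s divided by 256 KB per request gives the implied request
# rate -- in the same ballpark as the ~20k IOPS limit seen at 4k.
echo "$(( 6300000000 / 262144 )) requests/s"
```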

I am happy to provide any config details, or try any tests suggested.


--Ross




Thread overview: 29+ messages
2012-02-10 14:36 virtio-blk performance regression and qemu-kvm Dongsu Park
2012-02-12 23:55 ` Rusty Russell
2012-02-21 16:45   ` Dongsu Park
2012-02-21 22:16     ` Rusty Russell
2012-02-13 11:57 ` Stefan Hajnoczi
2012-02-21 15:57   ` Dongsu Park
2012-02-21 17:27     ` Stefan Hajnoczi
2012-02-22 16:48       ` Dongsu Park
2012-02-22 19:53         ` Stefan Hajnoczi
2012-02-28 16:39           ` Martin Mailand
2012-02-28 17:05             ` Stefan Hajnoczi
2012-02-28 17:15               ` Martin Mailand
2012-02-29  8:38                 ` Stefan Hajnoczi
2012-02-29 13:12                   ` Martin Mailand
2012-02-29 13:44                     ` Stefan Hajnoczi
2012-02-29 13:52                       ` Stefan Hajnoczi
2012-03-05 16:13 ` Martin Mailand
2012-03-05 16:35   ` Stefan Hajnoczi
2012-03-05 16:44     ` Martin Mailand
2012-03-06 12:59       ` Stefan Hajnoczi
2012-03-06 22:07         ` [Qemu-devel] " Reeted
2012-03-07  8:04           ` Stefan Hajnoczi
2012-03-07 14:21             ` Reeted
2012-03-07 14:33               ` Stefan Hajnoczi
2012-03-07 10:39         ` Martin Mailand
2012-03-07 11:21           ` Paolo Bonzini
2012-03-06 14:32   ` Dongsu Park
  -- strict thread matches above, loose matches on Subject: below --
2012-03-08 23:56 [Qemu-devel] " Ross Becker
2012-03-09 10:01 ` Stefan Hajnoczi
