qemu-devel.nongnu.org archive mirror
From: ein <ein.net@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] Very poor IO performance which looks like some design problem.
Date: Sat, 11 Apr 2015 21:00:55 +0200
Message-ID: <55296F67.9000509@gmail.com>
In-Reply-To: <55295570.6010900@gmail.com>



Let me present some tests:
Before every test I dropped caches on the host and rebooted the VM.


  *1.* 8 cores:

Copying a large file inside the VM from/to different physical disks
(source is soft RAID, destination is SSD in hardware RAID1):

[screenshot: write throughput]

[screenshot: read throughput]


  *2.* 1 core:

[screenshot: write throughput]

[screenshot: read throughput]


  *3.* 1 core, image @ block device:

[screenshot: write throughput]

[screenshot: read throughput]


  *4.* 1 core, image @ block device + AIO=native,cache=none:

[screenshot: write throughput]

[screenshot: read throughput]


  *5.* 8 cores, disk image @ lvm/xfs @ SSD (hw RAID1):

Copy & paste of a big file in the VM from SSD to SSD (same logical volume):

[screenshot: write throughput]

[screenshot: read throughput]


  *6.* 1 core, disk image @ lvm/xfs @ SSD (hw RAID1):

Copy & paste of a big file in the VM from SSD to SSD (same logical volume):

[screenshot: throughput]

As you can see, there is a significant correlation between the number of
vCPUs and throughput. Increasing the vCPU count *will* decrease throughput.
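
For reference, a rough sketch of how the configurations above map onto a
QEMU command line (the paths, image names and memory size below are
placeholders, not my actual setup):

  # the 1-core vs. 8-core cases differ only in the -smp value

  # disk image on a filesystem, default (threaded) AIO, host page cache in use:
  qemu-system-x86_64 -smp 8 -m 4096 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio,format=raw

  # image directly on a block device, native AIO, host cache bypassed
  # (the aio=native,cache=none case from test 4):
  qemu-system-x86_64 -smp 1 -m 4096 \
    -drive file=/dev/vg0/guest,if=virtio,format=raw,cache=none,aio=native

  # list the threads the QEMU process has spawned while the copy is running
  # (assumes a single VM on the host):
  ps -T -p "$(pgrep -f qemu-system | head -n1)"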


On 04/11/2015 07:10 PM, ein wrote:
> On 04/11/2015 03:09 PM, Paolo Bonzini wrote:
>> On 10/04/2015 22:38, ein wrote:
>>> Qemu creates more than 70 threads and every one of them tries to write
>>> to disk, which results in:
>>> 1. High I/O time.
>>> 2. Large latency.
>>> 3. Poor sequential read/write speeds.
>>>
>>> When I limited the number of cores, I guess I limited the number of
>>> threads as well. That's why I got better numbers.
>>>
>>> I've tried combining the native and threaded AIO settings with the
>>> deadline scheduler. Native AIO was much worse.
>>>
>>> The final question: is there any way to prevent Qemu from creating such
>>> a large number of processes when the VM does only one sequential R/W
>>> operation?
>> Use "aio=native,cache=none".  If that's not enough, you'll need to use
>> XFS or a block device; ext4 suffers from spinlock contention on O_DIRECT
>> I/O.
> Hello Paolo and thank you for reply.
>
> Firstly, I use ext2 now, which gave me more MiB/s than XFS in the past.
> I've tried the combination of XFS and a block device with NTFS (4 KB) on
> it, and I did tests with AIO=native,cache=none. The results in this
> workload were significantly worse. I don't have the numbers on me right
> now, but if somebody is interested, I'll redo the tests. From my
> experience I can say that disabling every software cache gives a
> significant boost in sequential RW ops. I mean the Qemu cache, Linux
> kernel dirty pages, or even caching in the VM itself. It somehow makes
> the data flow smoother and more stable. Using a cache creates hiccups:
> first there's enormous speed for a couple of seconds, more than the
> hardware is capable of, then a flush and no data flow at all (or very
> little) for a few to over a dozen seconds.
>
>
>
>
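
As a side note on the thread-count question quoted above: recent QEMU
versions (2.1 and later, if I recall correctly) can pin all I/O of a
virtio-blk device to one dedicated iothread instead of the shared worker
pool. A minimal sketch, with placeholder IDs and paths:

  qemu-system-x86_64 -smp 8 -m 4096 \
    -object iothread,id=iothread0 \
    -drive file=/dev/vg0/guest,if=none,id=drive0,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0

With cache=none and aio=native, requests for that disk are then submitted
from iothread0 via Linux AIO rather than fanned out over the thread pool.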

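The hiccup pattern described above (a burst well beyond what the hardware
can sustain, then a long stall while everything is flushed) matches large
host dirty-page writeback. Besides cache=none, lowering the host's
writeback thresholds can smooth it out; a sketch with purely illustrative
values:

  # start background writeback earlier and cap the amount of dirty data
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10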



Thread overview: 8+ messages
2015-04-10 20:38 [Qemu-devel] Very poor IO performance which looks like some design problem ein
2015-04-11 13:09 ` Paolo Bonzini
2015-04-11 17:10   ` ein
2015-04-11 19:00     ` ein [this message]
2015-04-13  1:45 ` Fam Zheng
2015-04-13 12:28   ` ein
2015-04-13 13:53     ` Paolo Bonzini
2015-04-14 10:31     ` Kevin Wolf
