From: Paolo Bonzini <pbonzini@redhat.com>
To: Ming Lei <ming.lei@canonical.com>, Kevin Wolf <kwolf@redhat.com>
Cc: tom.leiming@gmail.com, Fam Zheng <famz@redhat.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v1 00/17] dataplane: optimization and multi virtqueue support
Date: Mon, 11 Aug 2014 21:37:01 +0200	[thread overview]
Message-ID: <53E91B5D.4090009@redhat.com> (raw)
In-Reply-To: <20140810114624.0305b7af@tom-ThinkPad-T410>

On 10/08/2014 05:46, Ming Lei wrote:
> Hi Kevin, Paolo, Stefan and all,
> 
> 
> On Wed, 6 Aug 2014 10:48:55 +0200
> Kevin Wolf <kwolf@redhat.com> wrote:
> 
>> On 06.08.2014 at 07:33, Ming Lei wrote:
> 
>>
>> Anyhow, the coroutine version of your benchmark is buggy: it leaks all
>> coroutines instead of exiting them, so it can't make any use of the
>> coroutine pool. On my laptop, I get this (where "fixed coro" is a
>> version that simply removes the yield at the end):
>>
>>                 | bypass        | fixed coro    | buggy coro
>> ----------------+---------------+---------------+--------------
>> time            | 1.09s         | 1.10s         | 1.62s
>> L1-dcache-loads | 921,836,360   | 932,781,747   | 1,298,067,438
>> insns per cycle | 2.39          | 2.39          | 1.90
>>
>> This raises the question: do you see a similar effect on a real qemu,
>> and is the coroutine pool still not big enough? With correct use of
>> coroutines, the difference seems to be barely measurable even without
>> any I/O involved.
> 
> Now I have fixed the coroutine leak bug. The previous crypt benchmark put
> a fairly heavy load on each iteration, which kept the operation rate very
> low (~40K ops/sec), so I wrote a new, simpler one that generates hundreds
> of thousands of operations per second, a rate that should match some fast
> storage devices. It does show that the effect of coroutines is not small.
> 
> In the extreme case, where each iteration runs just a getppid() syscall,
> only 3M operations/sec can be reached with coroutines, while without
> coroutines the number reaches 16M/sec: more than a 4x difference!
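
A minimal, self-contained sketch of this kind of microbenchmark is below. It
uses POSIX ucontext rather than QEMU's coroutine backend, and creates one
short-lived context per iteration, so the absolute numbers will differ from
the 3M/16M figures quoted above; it only illustrates the shape of the
comparison (the iteration count and stack size are arbitrary choices):

/*
 * Sketch of a getppid() microbenchmark, with and without a
 * coroutine-style context switch per iteration.  POSIX ucontext is
 * used here, not QEMU's coroutine backend, so treat the output as
 * illustrative only.  Build: gcc -O2 bench.c -o bench
 */
#include <stdio.h>
#include <time.h>
#include <ucontext.h>
#include <unistd.h>

#define ITERATIONS 1000000

static ucontext_t main_ctx, co_ctx;

/* "Coroutine" body: one syscall, then fall off the end;
 * uc_link returns control to main_ctx. */
static void co_entry(void)
{
    getppid();
}

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    static char stack[64 * 1024];
    double t0;
    int i;

    /* Baseline: a direct syscall per iteration. */
    t0 = now_sec();
    for (i = 0; i < ITERATIONS; i++) {
        getppid();
    }
    printf("direct:    %.0f ops/sec\n", ITERATIONS / (now_sec() - t0));

    /* Coroutine-style: create and enter a fresh context per iteration,
     * mimicking one coroutine per request. */
    t0 = now_sec();
    for (i = 0; i < ITERATIONS; i++) {
        getcontext(&co_ctx);
        co_ctx.uc_stack.ss_sp = stack;
        co_ctx.uc_stack.ss_size = sizeof(stack);
        co_ctx.uc_link = &main_ctx;
        makecontext(&co_ctx, co_entry, 0);
        swapcontext(&main_ctx, &co_ctx);
    }
    printf("coroutine: %.0f ops/sec\n", ITERATIONS / (now_sec() - t0));
    return 0;
}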

I should be on vacation, but I'm following a couple of threads on the mailing
list and I'm a bit tired of hearing the same argument again and again...

The characteristics of asynchronous I/O are different enough from those of any
synchronous workload that it is hard to be sure such microbenchmarks make sense.

The patch below is basically the minimal change needed to bypass coroutines.
Of course the block.c part is not acceptable as is (the change to
refresh_total_sectors is broken, the others are just ugly), but it is a start.
Please run it with your fio workloads, or write an aio-based version of a
qemu-img/qemu-io *I/O* benchmark.
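
One possible skeleton for such an aio-based benchmark, using Linux native AIO
(libaio) directly, is sketched here. This is not part of the patch; the file
path, queue depth, block size, and working-set size are placeholder values,
and error handling is minimal. Link with -laio:

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define QUEUE_DEPTH 32
#define BLOCK_SIZE  4096
#define TOTAL_OPS   100000
#define FILE_BLOCKS 1024     /* working set: 4 MB */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/tmp/testfile";
    io_context_t ctx = 0;
    struct iocb iocbs[QUEUE_DEPTH], *iocbps[QUEUE_DEPTH];
    struct io_event events[QUEUE_DEPTH];
    struct timespec t0, t1;
    void *buf;
    int fd, i, got, done = 0;
    double secs;

    fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* O_DIRECT needs an aligned buffer; all requests share one, which
     * is fine for a throughput benchmark that ignores the data. */
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE)) {
        return 1;
    }
    if (io_setup(QUEUE_DEPTH, &ctx) < 0) {
        fprintf(stderr, "io_setup failed\n");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (done < TOTAL_OPS) {
        /* Submit a batch of reads at pseudo-random offsets. */
        for (i = 0; i < QUEUE_DEPTH; i++) {
            long long off = (long long)(rand() % FILE_BLOCKS) * BLOCK_SIZE;
            io_prep_pread(&iocbs[i], fd, buf, BLOCK_SIZE, off);
            iocbps[i] = &iocbs[i];
        }
        if (io_submit(ctx, QUEUE_DEPTH, iocbps) != QUEUE_DEPTH) {
            fprintf(stderr, "io_submit failed\n");
            return 1;
        }
        /* Reap the whole batch; each completion is one benchmark op. */
        got = io_getevents(ctx, QUEUE_DEPTH, QUEUE_DEPTH, events, NULL);
        if (got < 0) {
            fprintf(stderr, "io_getevents failed\n");
            return 1;
        }
        done += got;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d aio reads, %.0f ops/sec\n", done, done / secs);
    io_destroy(ctx);
    close(fd);
    return 0;
}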

Paolo

diff --git a/block.c b/block.c
index 3e252a2..0b6e9cf 100644
--- a/block.c
+++ b/block.c
@@ -704,7 +704,7 @@ static int refresh_total_sectors(BlockDriverState *bs, int64_t hint)
         return 0;
 
     /* query actual device if possible, otherwise just trust the hint */
-    if (drv->bdrv_getlength) {
+    if (!hint && drv->bdrv_getlength) {
         int64_t length = drv->bdrv_getlength(bs);
         if (length < 0) {
             return length;
@@ -2651,9 +2651,6 @@ static int bdrv_check_byte_request(BlockDriverState *bs, int64_t offset,
     if (!bdrv_is_inserted(bs))
         return -ENOMEDIUM;
 
-    if (bs->growable)
-        return 0;
-
     len = bdrv_getlength(bs);
 
     if (offset < 0)
@@ -3107,7 +3104,7 @@ static int coroutine_fn bdrv_co_do_preadv(BlockDriverState *bs,
     if (!drv) {
         return -ENOMEDIUM;
     }
-    if (bdrv_check_byte_request(bs, offset, bytes)) {
+    if (!bs->growable && bdrv_check_byte_request(bs, offset, bytes)) {
         return -EIO;
     }
 
@@ -3347,7 +3344,7 @@ static int coroutine_fn bdrv_co_do_pwritev(BlockDriverState *bs,
     if (bs->read_only) {
         return -EACCES;
     }
-    if (bdrv_check_byte_request(bs, offset, bytes)) {
+    if (!bs->growable && bdrv_check_byte_request(bs, offset, bytes)) {
         return -EIO;
     }
 
@@ -4356,6 +4353,20 @@ BlockDriverAIOCB *bdrv_aio_readv(BlockDriverState *bs, int64_t sector_num,
 {
     trace_bdrv_aio_readv(bs, sector_num, nb_sectors, opaque);
 
+    if (bs->drv && bs->drv->bdrv_aio_readv &&
+        bs->drv->bdrv_aio_readv != bdrv_aio_readv_em &&
+        nb_sectors >= 0 && nb_sectors <= (UINT_MAX >> BDRV_SECTOR_BITS) &&
+        !bdrv_check_byte_request(bs, sector_num << BDRV_SECTOR_BITS,
+                                 nb_sectors << BDRV_SECTOR_BITS) &&
+        !bs->copy_on_read && !bs->io_limits_enabled &&
+        bs->request_alignment <= BDRV_SECTOR_SIZE) {
+        BlockDriverAIOCB *acb =
+            bs->drv->bdrv_aio_readv(bs, sector_num, qiov, nb_sectors,
+                                    cb, opaque);
+        assert(acb);
+        return acb;
+    }
+
     return bdrv_co_aio_rw_vector(bs, sector_num, qiov, nb_sectors, 0,
                                  cb, opaque, false);
 }
@@ -4366,6 +4377,24 @@ BlockDriverAIOCB *bdrv_aio_writev(BlockDriverState *bs, int64_t sector_num,
 {
     trace_bdrv_aio_writev(bs, sector_num, nb_sectors, opaque);
 
+    if (bs->drv && bs->drv->bdrv_aio_writev &&
+        bs->drv->bdrv_aio_writev != bdrv_aio_writev_em &&
+        nb_sectors >= 0 && nb_sectors <= (UINT_MAX >> BDRV_SECTOR_BITS) &&
+        !bdrv_check_byte_request(bs, sector_num << BDRV_SECTOR_BITS,
+                                 nb_sectors << BDRV_SECTOR_BITS) &&
+        !bs->read_only && !bs->io_limits_enabled &&
+        bs->request_alignment <= BDRV_SECTOR_SIZE &&
+        bs->enable_write_cache &&
+        QLIST_EMPTY(&bs->before_write_notifiers.notifiers) &&
+        bs->wr_highest_sector >= sector_num + nb_sectors - 1 &&
+        QLIST_EMPTY(&bs->dirty_bitmaps)) {
+        BlockDriverAIOCB *acb =
+            bs->drv->bdrv_aio_writev(bs, sector_num, qiov, nb_sectors,
+                                     cb, opaque);
+        assert(acb);
+        return acb;
+    }
+
     return bdrv_co_aio_rw_vector(bs, sector_num, qiov, nb_sectors, 0,
                                  cb, opaque, true);
 }
diff --git a/block/raw_bsd.c b/block/raw_bsd.c
index 492f58d..b86f26b 100644
--- a/block/raw_bsd.c
+++ b/block/raw_bsd.c
@@ -48,6 +48,22 @@ static int raw_reopen_prepare(BDRVReopenState *reopen_state,
     return 0;
 }
 
+static BlockDriverAIOCB *raw_aio_readv(BlockDriverState *bs, int64_t sector_num,
+                                     QEMUIOVector *qiov, int nb_sectors,
+                                    BlockDriverCompletionFunc *cb, void *opaque)
+{
+    BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
+    return bdrv_aio_readv(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
+}
+
+static BlockDriverAIOCB *raw_aio_writev(BlockDriverState *bs, int64_t sector_num,
+                                      QEMUIOVector *qiov, int nb_sectors,
+                                     BlockDriverCompletionFunc *cb, void *opaque)
+{
+    BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
+    return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
+}
+
 static int coroutine_fn raw_co_readv(BlockDriverState *bs, int64_t sector_num,
                                      int nb_sectors, QEMUIOVector *qiov)
 {
@@ -181,6 +197,8 @@ static BlockDriver bdrv_raw = {
     .bdrv_open            = &raw_open,
     .bdrv_close           = &raw_close,
     .bdrv_create          = &raw_create,
+    .bdrv_aio_readv       = &raw_aio_readv,
+    .bdrv_aio_writev      = &raw_aio_writev,
     .bdrv_co_readv        = &raw_co_readv,
     .bdrv_co_writev       = &raw_co_writev,
     .bdrv_co_write_zeroes = &raw_co_write_zeroes,
