From: Christian Borntraeger <borntraeger@de.ibm.com>
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Roman Pen <roman.penyaev@profitbricks.com>,
Fam Zheng <famz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
Ming Lei <ming.lei@canonical.com>
Subject: Re: [Qemu-devel] [PATCH v4 0/7] virtio-blk: multiqueue support
Date: Tue, 21 Jun 2016 14:25:24 +0200
Message-ID: <57693234.9070701@de.ibm.com>
In-Reply-To: <1466511196-12612-1-git-send-email-stefanha@redhat.com>
On 06/21/2016 02:13 PM, Stefan Hajnoczi wrote:
> v4:
> * Rebased onto qemu.git/master
> * Included latest performance results
>
> v3:
> * Drop Patch 1 to batch guest notify for non-dataplane
>
> The Linux AIO completion BH and the virtio-blk batch notify BH changed order
> in the AioContext->first_bh list as a side-effect of moving the BH from
> hw/block/dataplane/virtio-blk.c to hw/block/virtio-blk.c. This caused a
> serious performance regression for both dataplane and non-dataplane.
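>
> A rough illustration of why the list position matters (a simplified
> standalone model, not the actual async.c code): BHs are prepended to
> first_bh, so creation order decides invocation order.
>
> #include <stdio.h>
>
> typedef struct BH {
>     const char *name;
>     struct BH *next;
> } BH;
>
> static BH *first_bh;
>
> static void bh_new(BH *bh, const char *name)
> {
>     bh->name = name;
>     bh->next = first_bh;    /* prepend: the newest BH runs first */
>     first_bh = bh;
> }
>
> static void bh_poll(void)
> {
>     for (BH *bh = first_bh; bh; bh = bh->next) {
>         printf("run %s\n", bh->name);
>     }
> }
>
> int main(void)
> {
>     BH aio, notify;
>     /* Moving the batch notify BH's creation site swapped the order
>      * of these two calls, and with it the order the BHs run in. */
>     bh_new(&aio, "linux-aio-completion");
>     bh_new(&notify, "batch-notify");
>     bh_poll();    /* prints batch-notify before the completion BH */
>     return 0;
> }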
>
> I've decided not to move the BH in this series and to work on a separate
> solution for making batch notify generic.
>
> The remaining patches have been reordered and cleaned up.
>
> * See performance data below.
>
> v2:
> * Simplify s->rq live migration [Paolo]
> * Use more efficient bitmap ops for batch notification [Paolo] (see the
>   sketch below)
> * Fix perf regression due to batch notify BH in wrong AioContext [Christian]
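>
> A sketch of the bitmap idea (illustrative only, not the series' code;
> the QEMU helper names appear only in comments as pointers): completions
> set a per-virtqueue bit, and one BH flushes every pending notification,
> so many completions cost at most one notification per virtqueue.
>
> #include <stdio.h>
>
> static unsigned long pending_vqs;        /* one bit per virtqueue */
>
> static void request_completed(unsigned vq)
> {
>     pending_vqs |= 1UL << vq;            /* set_bit() in QEMU terms */
> }
>
> static void batch_notify_bh(void)        /* runs once per loop iteration */
> {
>     while (pending_vqs) {
>         unsigned vq = __builtin_ctzl(pending_vqs);  /* lowest set bit */
>         pending_vqs &= pending_vqs - 1;  /* clear that bit */
>         printf("notify vq %u\n", vq);    /* one virtio_notify() per vq */
>     }
> }
>
> int main(void)
> {
>     /* three completions on two vqs cost only two notifications */
>     request_completed(0);
>     request_completed(2);
>     request_completed(0);
>     batch_notify_bh();
>     return 0;
> }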
>
> The virtio_blk guest driver has supported multiple virtqueues since Linux 3.17.
> This patch series adds multiple virtqueues to QEMU's virtio-blk emulated
> device.
>
> Ming Lei previously sent patches, but they were not merged. This series
> implements virtio-blk multiqueue for QEMU from scratch since the codebase
> has changed. Live migration support for s->rq, which was missing from the
> previous series, has been added here.
>
> It's important to note that QEMU's block layer does not support multiqueue yet.
> Therefore the virtio-blk device processes all virtqueues in the same AioContext
> (IOThread). Further work is necessary to take advantage of multiqueue support
> in QEMU's block layer once it becomes available.
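>
> As a toy model of that limitation (not QEMU code; the types are
> stand-ins for illustration): every virtqueue is attached to the same
> context, so one thread drains all queues in turn.
>
> #include <stdio.h>
>
> typedef struct AioContext { const char *name; } AioContext;
> typedef struct VirtQueue  { AioContext *ctx; int pending; } VirtQueue;
>
> int main(void)
> {
>     AioContext iothread = { "iothread0" };
>     VirtQueue vqs[4];
>
>     /* setup: all vqs share one AioContext, even with num-queues=4 */
>     for (int i = 0; i < 4; i++) {
>         vqs[i].ctx = &iothread;
>         vqs[i].pending = 1;
>     }
>
>     /* the single event loop: no host-side parallelism across queues */
>     for (int i = 0; i < 4; i++) {
>         if (vqs[i].pending) {
>             printf("%s handles vq %d\n", vqs[i].ctx->name, i);
>         }
>     }
>     return 0;
> }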
>
> Performance results:
>
> Using virtio-blk-pci,num-queues=4 can produce a speed-up but -smp 4
> introduces a lot of variance across runs. No pinning was performed.
>
> RHEL 7.2 guest on RHEL 7.2 host with 1 vcpu and 1 GB RAM unless otherwise
> noted. The default configuration of the Linux null_blk driver is used as
> /dev/vdb.
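>
> For reference, the guest was started along these lines (an illustrative
> command line, not the exact invocation; ids are placeholders and the
> root disk is omitted):
>
> $ modprobe null_blk    # host: default config, creates /dev/nullb0
> $ qemu-system-x86_64 -machine accel=kvm -m 1G -smp 4 \
>     -drive if=none,id=blk1,file=/dev/nullb0,format=raw,cache=none,aio=native \
>     -device virtio-blk-pci,drive=blk1,num-queues=4 \
>     ...
>
> The dataplane runs additionally use -object iothread,id=io1 and
> iothread=io1 on the -device.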
>
> $ cat files/fio.job
> [global]
> filename=/dev/vdb
> ioengine=libaio
> direct=1
> runtime=60
> ramp_time=5
> gtod_reduce=1
>
> [job1]
> numjobs=4
> iodepth=16
> rw=randread
> bs=4K
>
> $ ./analyze.py runs/
> Name                         IOPS        Error
> v4-smp-4-dataplane           13326598.0  ± 6.31%
> v4-smp-4-dataplane-no-mq     11483568.0  ± 3.42%
> v4-smp-4-no-dataplane        18108611.6  ± 1.53%
> v4-smp-4-no-dataplane-no-mq  13951225.6  ± 7.81%
This differs from the previous numbers. Which runs are with the
patch and which are without? I am surprised to see dataplane slower
than no-dataplane - this contradicts everything that I have seen in
the past.
>
> Stefan Hajnoczi (7):
> virtio-blk: add VirtIOBlockConf->num_queues
> virtio-blk: multiqueue batch notify
> virtio-blk: tell dataplane which vq to notify
> virtio-blk: associate request with a virtqueue
> virtio-blk: live migrate s->rq with multiqueue
> virtio-blk: dataplane multiqueue support
> virtio-blk: add num-queues device property
>
> hw/block/dataplane/virtio-blk.c | 81 +++++++++++++++++++++++++++++------------
> hw/block/dataplane/virtio-blk.h | 2 +-
> hw/block/virtio-blk.c | 52 +++++++++++++++++++++-----
> include/hw/virtio/virtio-blk.h | 6 ++-
> 4 files changed, 105 insertions(+), 36 deletions(-)
>