From: Paolo Bonzini <pbonzini@redhat.com>
To: Ming Lei <ming.lei@canonical.com>,
qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
Stefan Hajnoczi <stefanha@redhat.com>,
Kevin Wolf <kwolf@redhat.com>
Cc: Fam Zheng <famz@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 00/13] linux-aio/virtio-scsi: support AioContext wide IO submission as batch
Date: Tue, 18 Nov 2014 14:57:08 +0100
Message-ID: <546B5034.1050102@redhat.com>
In-Reply-To: <1415518978-2837-1-git-send-email-ming.lei@canonical.com>
On 09/11/2014 08:42, Ming Lei wrote:
> This patchset implements AioContext-wide I/O submission as a batch; the
> idea behind it is very simple:
>
> - Linux native AIO (io_submit) can enqueue read/write requests for
> different files in a single call (see the sketch after this list)
>
> - in one AioContext, I/O requests from the VM can be submitted to
> different backends on the host; one typical example is multi-LUN SCSI
>
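For reference, the first point can be seen outside QEMU with the short libaio program below: two reads against two different file descriptors are queued and handed to the kernel with a single io_submit() call. This is only an illustration, not code from the patches; the /dev/nullbX paths match the null_blk devices in the test config further down, error handling is omitted, and the program must be linked with -laio.

    #define _GNU_SOURCE                 /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <libaio.h>

    int main(void)
    {
        io_context_t ctx = 0;
        struct iocb cb[2];
        struct iocb *list[2] = { &cb[0], &cb[1] };
        struct io_event ev[2];
        void *buf0, *buf1;

        /* Two different backing devices, as in the test config below. */
        int fd0 = open("/dev/nullb2", O_RDONLY | O_DIRECT);
        int fd1 = open("/dev/nullb3", O_RDONLY | O_DIRECT);

        io_setup(16, &ctx);                    /* one kernel AIO context */
        posix_memalign(&buf0, 4096, 4096);     /* O_DIRECT wants aligned buffers */
        posix_memalign(&buf1, 4096, 4096);

        io_prep_pread(&cb[0], fd0, buf0, 4096, 0);   /* read from the first device */
        io_prep_pread(&cb[1], fd1, buf1, 4096, 0);   /* read from the second device */
        io_submit(ctx, 2, list);                     /* both requests, one system call */

        io_getevents(ctx, 2, 2, ev, NULL);           /* wait for both completions */
        io_destroy(ctx);
        return 0;
    }
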
> This patchset makes 'struct qemu_laio_state' per-AioContext, so that
> multiple 'bs' can be associated with a single instance of
> 'struct qemu_laio_state'; AioContext-wide I/O submission as a batch
> then becomes easy to implement.
>
> One simple test on my laptop shows a ~20% throughput improvement on
> randread from the VM (AioContext-wide I/O batching vs. no batching)
> with the config below:
>
> -drive id=drive_scsi1-0-0-0,if=none,format=raw,cache=none,aio=native,file=/dev/nullb2 \
> -drive id=drive_scsi1-0-0-1,if=none,format=raw,cache=none,aio=native,file=/dev/nullb3 \
> -device virtio-scsi-pci,num_queues=4,id=scsi1,addr=07,iothread=iothread0 \
> -device scsi-disk,bus=scsi1.0,channel=0,scsi-id=1,lun=0,drive=drive_scsi1-0-0-0,id=scsi1-0-0-0 \
> -device scsi-disk,bus=scsi1.0,channel=0,scsi-id=1,lun=1,drive=drive_scsi1-0-0-1,id=scsi1-0-0-1 \
>
> BTW, maybe a further boost can be obtained: ~33K write() system calls
> per second can be observed while this test case is running, which might
> be a recent regression (BH?).
Ming,
these patches are interesting. I would like to compare them with the
opposite approach (one that is, I think, more similar to your old work),
where the qemu_laio_state API is moved entirely into AioContext, with
lazy allocation (probably reference-counted too).
Most of the patches would be the same, but you would replace
aio_attach_aio_bs/aio_detach_aio_bs with something like
aio_native_get/aio_native_unref. Ultimately block/{linux,win32}-aio.c
could be merged into block/aio-{posix,win32}.c, but you do not have to
do that now.
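
Very roughly, the API shape being suggested might look like the sketch below. Only the aio_native_get/aio_native_unref names come from the paragraph above; the struct contents and the calloc/free stand-ins are placeholders for whatever linux-aio setup and teardown would really run, so treat this as a sketch of the lazy, reference-counted lifetime rather than an implementation.

    #include <stdlib.h>

    /* Placeholder for the real per-AioContext linux-aio state. */
    typedef struct LinuxAioState {
        int unused;
    } LinuxAioState;

    typedef struct AioContext {
        /* ...existing fields elided... */
        LinuxAioState *linux_aio;       /* allocated lazily on first use */
        unsigned linux_aio_refcnt;
    } AioContext;

    /* Get the per-context native-AIO state, allocating it lazily,
     * and take a reference. */
    static LinuxAioState *aio_native_get(AioContext *ctx)
    {
        if (!ctx->linux_aio) {
            /* stand-in for the real laio setup */
            ctx->linux_aio = calloc(1, sizeof(*ctx->linux_aio));
            ctx->linux_aio_refcnt = 0;
        }
        ctx->linux_aio_refcnt++;
        return ctx->linux_aio;
    }

    /* Drop a reference; tear the state down when the last user detaches. */
    static void aio_native_unref(AioContext *ctx)
    {
        if (ctx->linux_aio && --ctx->linux_aio_refcnt == 0) {
            free(ctx->linux_aio);       /* stand-in for the real laio teardown */
            ctx->linux_aio = NULL;
        }
    }
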
Could you try that? This way we can see which API turns out to be nicer.
Thanks,
Paolo
> This patchset can also be found in the tree below:
>
> git://kernel.ubuntu.com/ming/qemu.git aio-io-batch.2
>
> and these patches depend on the "linux-aio: fix batch submission" patches
> at the link below:
>
> http://marc.info/?l=qemu-devel&m=141528663106557&w=2
>
> Any comments and suggestions are welcome.
>
>  async.c                         |    1 +
>  block.c                         |   16 +++
>  block/linux-aio.c               |  251 ++++++++++++++++++++++++++++++---------
>  block/raw-aio.h                 |    6 +-
>  block/raw-posix.c               |    4 +-
>  hw/scsi/virtio-scsi-dataplane.c |    8 ++
>  hw/scsi/virtio-scsi.c           |    2 -
>  include/block/aio.h             |   27 +++++
>  include/block/block.h           |    3 +
>  9 files changed, 259 insertions(+), 59 deletions(-)
>
> Thanks,
> Ming Lei
>