From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
qemu-devel@nongnu.org, famz@redhat.com, qemu-block@nongnu.org,
stefanha@redhat.com
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v4 8/8] linux-aio: share one LinuxAioState within an AioContext
Date: Tue, 10 May 2016 11:40:33 +0200 [thread overview]
Message-ID: <20160510094033.GI4921@noname.str.redhat.com>
In-Reply-To: <20160510093040.GB11408@stefanha-x1.localdomain>
On 10.05.2016 at 11:30, Stefan Hajnoczi wrote:
> On Mon, May 09, 2016 at 06:31:44PM +0200, Paolo Bonzini wrote:
> > On 19/04/2016 11:09, Stefan Hajnoczi wrote:
> > >> > This has better performance because it executes fewer system calls
> > >> > and does not use a bottom half per disk.
> > > Each aio_context_t is initialized for 128 in-flight requests in
> > > laio_init().
> > >
> > > Will it be possible to hit the limit now that all drives share the same
> > > aio_context_t?
> >
> > It was also possible before, because the virtqueue can be bigger than
> > 128 items; that's why there is logic to submit I/O requests after an
> > io_get_events. As usual when the answer seems trivial, am I
> > misunderstanding your question?
>
> I'm concerned about a performance regression rather than correctness.
>
> But looking at linux-aio.c there *is* a correctness problem:
>
> static void ioq_submit(struct qemu_laio_state *s)
> {
>     int ret, len;
>     struct qemu_laiocb *aiocb;
>     struct iocb *iocbs[MAX_QUEUED_IO];
>     QSIMPLEQ_HEAD(, qemu_laiocb) completed;
>
>     do {
>         len = 0;
>         QSIMPLEQ_FOREACH(aiocb, &s->io_q.pending, next) {
>             iocbs[len++] = &aiocb->iocb;
>             if (len == MAX_QUEUED_IO) {
>                 break;
>             }
>         }
>
>         ret = io_submit(s->ctx, len, iocbs);
>         if (ret == -EAGAIN) {
>             break;
>         }
>         if (ret < 0) {
>             abort();
>         }
>
>         s->io_q.n -= ret;
>         aiocb = container_of(iocbs[ret - 1], struct qemu_laiocb, iocb);
>         QSIMPLEQ_SPLIT_AFTER(&s->io_q.pending, aiocb, next, &completed);
>     } while (ret == len && !QSIMPLEQ_EMPTY(&s->io_q.pending));
>     s->io_q.blocked = (s->io_q.n > 0);
> }
>
> io_submit() may have submitted some of the requests when -EAGAIN is
> returned. QEMU gets no indication of which requests were submitted.
My understanding (based on the manpage rather than the code) is that
-EAGAIN is only returned if no request at all could be submitted.
Otherwise the number of successfully submitted requests is returned,
similar to how short reads work.
> It may be possible to dig around in the s->ctx rings to find out or we
> need to keep track of the number of in-flight requests so we can
> prevent ever hitting EAGAIN.
>
> ioq_submit() pretends that no requests were submitted on -EAGAIN and
> will submit them again next time. This could result in double
> completions.
Did you check in the code that this can happen?
> Regarding performance, I'm thinking about a guest with 8 disks (queue
> depth 32). The worst case is when the guest submits 32 requests at once
> but the Linux AIO event limit has already been reached. Then the disk
> is starved until other disks' requests complete.
Sounds like a valid concern.
Kevin