From: Paolo Bonzini
To: Kevin Wolf, qemu-devel@nongnu.org
Cc: ming.lei@canonical.com, pl@kamp.de, stefanha@redhat.com, mreitz@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH v2 6/6] linux-aio: Queue requests instead of returning -EAGAIN
Date: Fri, 05 Dec 2014 19:15:38 +0100
Message-ID: <5481F64A.3070203@redhat.com>
In-Reply-To: <1417795568-3201-7-git-send-email-kwolf@redhat.com>

On 05/12/2014 17:06, Kevin Wolf wrote:
> If the queue array for io_submit() is already full, but a new request
> arrives, we cannot add it to that queue anymore. We can, however, use a
> CoQueue, which is implemented as a list and can therefore queue as many
> requests as we want.
>
> Signed-off-by: Kevin Wolf
> ---
>  block/linux-aio.c | 31 ++++++++++++++++++++++++++-----
>  1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/block/linux-aio.c b/block/linux-aio.c
> index 373ec4b..8e6328b 100644
> --- a/block/linux-aio.c
> +++ b/block/linux-aio.c
> @@ -44,6 +44,7 @@ typedef struct {
>      int plugged;
>      unsigned int size;
>      unsigned int idx;
> +    CoQueue waiting;
>  } LaioQueue;
>
>  struct qemu_laio_state {
> @@ -160,6 +161,8 @@ static void ioq_init(LaioQueue *io_q)
>      io_q->size = MAX_QUEUED_IO;
>      io_q->idx = 0;
>      io_q->plugged = 0;
> +
> +    qemu_co_queue_init(&io_q->waiting);
>  }
>
>  static int ioq_submit(struct qemu_laio_state *s)
> @@ -201,15 +204,29 @@ static int ioq_submit(struct qemu_laio_state *s)
>                  s->io_q.idx * sizeof(s->io_q.iocbs[0]));
>      }
>
> +    /* Now there should be room for some more requests */
> +    if (!qemu_co_queue_empty(&s->io_q.waiting)) {
> +        if (qemu_in_coroutine()) {
> +            qemu_co_queue_next(&s->io_q.waiting);
> +        } else {
> +            qemu_co_enter_next(&s->io_q.waiting);

We should get better performance by wrapping these with plug/unplug.
Trivial for the qemu_co_enter_next case, much less so for
qemu_co_queue_next...

This exposes what I think is the main wrinkle in these patches: I'm not
sure linux-aio is a great match for the coroutine architecture. You
introduce some infrastructure duplication with block.c to track
coroutines, and I don't find the coroutine code to be an improvement
over Ming Lei's asynchronous one---in fact I actually find it more
complicated.
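FWIW, for the qemu_co_enter_next case the plug/unplug wrapping I have in
mind would look roughly like this (untested sketch; it reuses the fields
from your patch, and the bare plugged counter stands in for whatever
plug/unplug helper this ends up calling):

    /* Wake all waiters with submission "plugged", so the requests they
     * re-enqueue only refill the iocb array instead of each triggering
     * an io_submit() of its own, then do a single batched submit. */
    if (!qemu_in_coroutine() && !qemu_co_queue_empty(&s->io_q.waiting)) {
        s->io_q.plugged++;
        while (!qemu_co_queue_empty(&s->io_q.waiting)) {
            qemu_co_enter_next(&s->io_q.waiting);
        }
        if (--s->io_q.plugged == 0 && s->io_q.idx > 0) {
            ioq_submit(s);
        }
    }

The qemu_co_queue_next path is harder because, IIRC, the woken coroutine
only runs after the current one yields, so there is no obvious point in
ioq_submit() at which to unplug.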
Paolo

> +        }
> +    }
> +
>      return ret;
>  }
>
>  static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
>  {
>      unsigned int idx = s->io_q.idx;
> +    bool was_waiting = false;
> +    int ret = 0;
>
> -    if (unlikely(idx == s->io_q.size)) {
> -        return -EAGAIN;
> +    while (unlikely(idx == s->io_q.size)) {
> +        /* Wait for iocb slots to become free */
> +        qemu_co_queue_wait(&s->io_q.waiting);
> +        was_waiting = true;
>      }
>
>      s->io_q.iocbs[idx++] = iocb;
> @@ -217,10 +234,14 @@ static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
>
>      /* submit immediately if queue is not plugged or full */
>      if (!s->io_q.plugged || idx == s->io_q.size) {
> -        return ioq_submit(s);
> -    } else {
> -        return 0;
> +        ret = ioq_submit(s);
> +    } else if (was_waiting) {
> +        /* We were just woken up and there's stil room in the queue. Wake up
> +         * someone else, too. */
> +        qemu_co_queue_next(&s->io_q.waiting);
>      }
> +
> +    return ret;
>  }
>
>  void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
>