* [Qemu-devel] linux-aio: fix batch submission
From: Ming Lei @ 2014-08-14  9:41 UTC
To: qemu-devel, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf

Hi,

The 1st patch fixes batch submission.

The 2nd one fixes -EAGAIN for the non-batch case.

The 3rd one increases the max event count to 256 to support the coming
multi virt-queue feature.

The 4th one is a cleanup.

This patchset is split out of the previous patchset (dataplane:
optimization and multi virtqueue support), as suggested by Stefan.

Thanks,
--
Ming Lei
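For context, the batch submission this series fixes is driven by the block
layer's I/O plug/unplug hooks: a device model plugs the queue, issues several
AIO requests, then unplugs to flush them to linux-aio in a single io_submit()
call. The following caller-side sketch is illustrative only and is not part of
the series; it assumes the QEMU 2.1-era bdrv_io_plug()/bdrv_io_unplug() and
bdrv_aio_readv() API, and VirtQueueRequest plus request_done_cb are made-up
names for the example.

/* Illustrative sketch only -- hypothetical device code, not from this series.
 * Error handling is omitted for brevity.
 */
static void submit_batch(BlockDriverState *bs, VirtQueueRequest *reqs, int n)
{
    int i;

    bdrv_io_plug(bs);                     /* start queuing in linux-aio */
    for (i = 0; i < n; i++) {
        bdrv_aio_readv(bs, reqs[i].sector, &reqs[i].qiov,
                       reqs[i].nb_sectors, request_done_cb, &reqs[i]);
    }
    bdrv_io_unplug(bs);                   /* flush the queue: one io_submit() */
}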
* [Qemu-devel] [PATCH 1/4] linux-aio: fix submit aio as a batch
From: Ming Lei @ 2014-08-14  9:41 UTC
To: qemu-devel, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf
Cc: Ming Lei

In the enqueue path, we can't complete request, otherwise
"Co-routine re-entered recursively" may be caused, so this
patch fixes the issue with below ideas:

- for -EAGAIN or partial completion, retry the submision by
  schedule an BH in following completion cb
- for part of completion, also update the io queue
- for other failure, return the failure if in enqueue path,
  otherwise, abort all queued I/O

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/linux-aio.c |   99 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 77 insertions(+), 22 deletions(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index 7ac7e8c..4cdf507 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -38,11 +38,19 @@ struct qemu_laiocb {
     QLIST_ENTRY(qemu_laiocb) node;
 };
 
+/*
+ * TODO: support to batch I/O from multiple bs in one same
+ * AIO context, one important use case is multi-lun scsi,
+ * so in future the IO queue should be per AIO context.
+ */
 typedef struct {
     struct iocb *iocbs[MAX_QUEUED_IO];
     int plugged;
     unsigned int size;
     unsigned int idx;
+
+    /* handle -EAGAIN and partial completion */
+    QEMUBH *retry;
 } LaioQueue;
 
 struct qemu_laio_state {
@@ -86,6 +94,12 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
     qemu_aio_release(laiocb);
 }
 
+static void qemu_laio_start_retry(struct qemu_laio_state *s)
+{
+    if (s->io_q.idx)
+        qemu_bh_schedule(s->io_q.retry);
+}
+
 static void qemu_laio_completion_cb(EventNotifier *e)
 {
     struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
@@ -108,6 +122,7 @@ static void qemu_laio_completion_cb(EventNotifier *e)
             qemu_laio_process_completion(s, laiocb);
         }
     }
+    qemu_laio_start_retry(s);
 }
 
 static void laio_cancel(BlockDriverAIOCB *blockacb)
@@ -127,6 +142,7 @@ static void laio_cancel(BlockDriverAIOCB *blockacb)
     ret = io_cancel(laiocb->ctx->ctx, &laiocb->iocb, &event);
     if (ret == 0) {
         laiocb->ret = -ECANCELED;
+        qemu_laio_start_retry(laiocb->ctx);
         return;
     }
 
@@ -154,45 +170,80 @@ static void ioq_init(LaioQueue *io_q)
     io_q->plugged = 0;
 }
 
-static int ioq_submit(struct qemu_laio_state *s)
+static void abort_queue(struct qemu_laio_state *s)
+{
+    int i;
+    for (i = 0; i < s->io_q.idx; i++) {
+        struct qemu_laiocb *laiocb = container_of(s->io_q.iocbs[i],
+                                                  struct qemu_laiocb,
+                                                  iocb);
+        laiocb->ret = -EIO;
+        qemu_laio_process_completion(s, laiocb);
+    }
+}
+
+static int ioq_submit(struct qemu_laio_state *s, bool enqueue)
 {
     int ret, i = 0;
     int len = s->io_q.idx;
+    int j = 0;
 
-    do {
-        ret = io_submit(s->ctx, len, s->io_q.iocbs);
-    } while (i++ < 3 && ret == -EAGAIN);
+    if (!len) {
+        return 0;
+    }
+
+    ret = io_submit(s->ctx, len, s->io_q.iocbs);
+    if (ret == -EAGAIN) { /* retry in following completion cb */
+        return 0;
+    } else if (ret < 0) {
+        if (enqueue) {
+            return ret;
+        }
 
-    /* empty io queue */
-    s->io_q.idx = 0;
+        /* in non-queue path, all IOs have to be completed */
+        abort_queue(s);
+        ret = len;
+    } else if (ret == 0) {
+        goto out;
+    }
 
-    if (ret < 0) {
-        i = 0;
-    } else {
-        i = ret;
+    for (i = ret; i < len; i++) {
+        s->io_q.iocbs[j++] = s->io_q.iocbs[i];
     }
 
-    for (; i < len; i++) {
-        struct qemu_laiocb *laiocb =
-            container_of(s->io_q.iocbs[i], struct qemu_laiocb, iocb);
+ out:
+    /*
+     * update io queue, for partial completion, retry will be
+     * started automatically in following completion cb.
+     */
+    s->io_q.idx -= ret;
 
-        laiocb->ret = (ret < 0) ? ret : -EIO;
-        qemu_laio_process_completion(s, laiocb);
-    }
     return ret;
 }
 
-static void ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
+static void ioq_submit_retry(void *opaque)
+{
+    struct qemu_laio_state *s = opaque;
+    ioq_submit(s, false);
+}
+
+static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
 {
     unsigned int idx = s->io_q.idx;
 
+    if (unlikely(idx == s->io_q.size)) {
+        return -1;
+    }
+
     s->io_q.iocbs[idx++] = iocb;
     s->io_q.idx = idx;
 
-    /* submit immediately if queue is full */
-    if (idx == s->io_q.size) {
-        ioq_submit(s);
+    /* submit immediately if queue depth is above 2/3 */
+    if (idx > s->io_q.size * 2 / 3) {
+        return ioq_submit(s, true);
     }
+
+    return 0;
 }
 
 void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
@@ -214,7 +265,7 @@ int laio_io_unplug(BlockDriverState *bs, void *aio_ctx, bool unplug)
     }
 
     if (s->io_q.idx > 0) {
-        ret = ioq_submit(s);
+        ret = ioq_submit(s, false);
     }
 
     return ret;
@@ -258,7 +309,9 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
             goto out_free_aiocb;
         }
     } else {
-        ioq_enqueue(s, iocbs);
+        if (ioq_enqueue(s, iocbs) < 0) {
+            goto out_free_aiocb;
+        }
     }
     return &laiocb->common;
 
@@ -272,6 +325,7 @@ void laio_detach_aio_context(void *s_, AioContext *old_context)
     struct qemu_laio_state *s = s_;
 
     aio_set_event_notifier(old_context, &s->e, NULL);
+    qemu_bh_delete(s->io_q.retry);
 }
 
 void laio_attach_aio_context(void *s_, AioContext *new_context)
@@ -279,6 +333,7 @@ void laio_attach_aio_context(void *s_, AioContext *new_context)
     struct qemu_laio_state *s = s_;
 
     aio_set_event_notifier(new_context, &s->e, qemu_laio_completion_cb);
+    s->io_q.retry = aio_bh_new(new_context, ioq_submit_retry, s);
 }
 
 void *laio_init(void)
-- 
1.7.9.5
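The behaviour this patch has to cope with is that io_submit() may accept only a
prefix of the iocbs it is given: it returns the number actually queued
(possibly 0), or a negative errno such as -EAGAIN when the kernel ring is full.
The standalone libaio sketch below is for illustration only and is independent
of the QEMU code above; submit_batch_once is a made-up helper name.

#include <libaio.h>

/* Illustration only (not QEMU code): submit a batch once and report how much
 * of it the kernel accepted.  The caller is expected to resubmit the
 * remainder later, e.g. after some completions have freed ring slots.
 */
static int submit_batch_once(io_context_t ctx, struct iocb **iocbs, int nr)
{
    int ret = io_submit(ctx, nr, iocbs);

    if (ret == -EAGAIN) {
        return 0;       /* ring full right now: nothing queued, retry later */
    }
    if (ret < 0) {
        return ret;     /* hard error, e.g. -EINVAL or -EBADF */
    }
    /* 0 <= ret <= nr: the first 'ret' iocbs are in flight; iocbs[ret..nr-1]
     * were not accepted and must be submitted again later. */
    return ret;
}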
* Re: [Qemu-devel] [PATCH 1/4] linux-aio: fix submit aio as a batch
From: Benoît Canet @ 2014-09-04 14:59 UTC
To: Ming Lei
Cc: Kevin Wolf, Peter Maydell, qemu-devel, Stefan Hajnoczi, Paolo Bonzini

The Thursday 14 Aug 2014 à 17:41:41 (+0800), Ming Lei wrote :
> In the enqueue path, we can't complete request, otherwise
> "Co-routine re-entered recursively" may be caused, so this
> patch fixes the issue with below ideas:

s/with below ideas/with the following ideas/g

> - for -EAGAIN or partial completion, retry the submision by

s/submision/submission/

>   schedule an BH in following completion cb

s/schedule an/sheduling a/

> - for part of completion, also update the io queue
> - for other failure, return the failure if in enqueue path,
>   otherwise, abort all queued I/O
>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>
> ---
>  block/linux-aio.c |   99 +++++++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 77 insertions(+), 22 deletions(-)
[...]
> +/*
> + * TODO: support to batch I/O from multiple bs in one same
> + * AIO context, one important use case is multi-lun scsi,
> + * so in future the IO queue should be per AIO context.
> + */
>  typedef struct {

In QEMU we typically write the name twice in these kind of declarations:

typedef struct LaioQueue {
    ... stuff ...
} LaioQueue;

>      struct iocb *iocbs[MAX_QUEUED_IO];
>      int plugged;

Are plugged values either 0 and 1 ?
If so it should be "bool plugged;"

>      unsigned int size;
>      unsigned int idx;

See:

benoit@Laure:~/code/qemu$ git grep "unsigned int"|wc
   2283   14038  154201
benoit@Laure:~/code/qemu$ git grep "uint32"|wc
  12535   63129  810822

Maybe you could use the most popular type.

> +
> +    /* handle -EAGAIN and partial completion */
> +    QEMUBH *retry;
>  } LaioQueue;
[...]
> +static void qemu_laio_start_retry(struct qemu_laio_state *s)
> +{
> +    if (s->io_q.idx)
> +        qemu_bh_schedule(s->io_q.retry);

In QEMU this test is writen like this:

if (s->io_q.idx) {
    qemu_bh_schedule(s->io_q.retry);
}

I suggest you ran ./scripts/checkpatch.pl on your series before submitting it.

> +}
[...]
* Re: [Qemu-devel] [PATCH 1/4] linux-aio: fix submit aio as a batch
From: Ming Lei @ 2014-09-04 16:16 UTC
To: Benoît Canet
Cc: Kevin Wolf, Peter Maydell, qemu-devel, Stefan Hajnoczi, Paolo Bonzini

On Thu, Sep 4, 2014 at 10:59 PM, Benoît Canet <benoit.canet@irqsave.net> wrote:
> The Thursday 14 Aug 2014 à 17:41:41 (+0800), Ming Lei wrote :
[...]
>>      struct iocb *iocbs[MAX_QUEUED_IO];
>>      int plugged;
>
> Are plugged values either 0 and 1 ?
> If so it should be "bool plugged;"

It is actually a reference counter.

[...]
>> +static void qemu_laio_start_retry(struct qemu_laio_state *s)
>> +{
>> +    if (s->io_q.idx)
>> +        qemu_bh_schedule(s->io_q.retry);
>
> In QEMU this test is writen like this:
>
> if (s->io_q.idx) {
>     qemu_bh_schedule(s->io_q.retry);
> }
>
> I suggest you ran ./scripts/checkpatch.pl on your series before submitting it.

I don't know why the below pre-commit hook didn't complain about that:

[tom@qemu]$cat .git/hooks/pre-commit
#!/bin/bash
exec git diff --cached | scripts/checkpatch.pl --no-signoff -q -

[...]
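The retry mechanism discussed above hinges on QEMU's bottom-half API: a BH
created with aio_bh_new() runs from the event loop the next time it is
scheduled, which lets the completion callback kick a new submission attempt
without re-entering the coroutine that queued the request. The sketch below is
illustrative only; RetryState, its 'pending' counter, and the retry_state_*
helpers are made-up stand-ins for the linux-aio I/O queue, while aio_bh_new(),
qemu_bh_schedule() and qemu_bh_delete() are the real QEMU calls the patch uses.

#include "block/aio.h"      /* QEMU-internal: AioContext, QEMUBH */

typedef struct RetryState {
    QEMUBH *retry_bh;
    int pending;            /* requests still waiting to be submitted */
} RetryState;

static void retry_bh_cb(void *opaque)
{
    RetryState *s = opaque;

    /* Runs later, from the event loop -- never from the enqueue path,
     * so completing or resubmitting requests here cannot re-enter the
     * coroutine that queued them. */
    while (s->pending > 0) {
        s->pending--;       /* stand-in for "submit one queued request" */
    }
}

void retry_state_attach(RetryState *s, AioContext *ctx)
{
    s->retry_bh = aio_bh_new(ctx, retry_bh_cb, s);
}

void retry_state_kick(RetryState *s)
{
    if (s->pending > 0) {
        qemu_bh_schedule(s->retry_bh);  /* retry_bh_cb() runs soon */
    }
}

void retry_state_detach(RetryState *s)
{
    qemu_bh_delete(s->retry_bh);
}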
* [Qemu-devel] [PATCH 2/4] linux-aio: handling -EAGAIN for !s->io_q.plugged case
From: Ming Lei @ 2014-08-14  9:41 UTC
To: qemu-devel, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf
Cc: Ming Lei

Previously -EAGAIN was simply ignored in the !s->io_q.plugged case,
which can easily surface as -EIO inside the VM, for example with an
NVMe device.

This patch handles -EAGAIN in the !s->io_q.plugged case by routing the
request through the I/O queue, so it is retried in a following aio
completion callback.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/linux-aio.c |   22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index 4cdf507..0e21f76 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -238,6 +238,11 @@ static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
     s->io_q.iocbs[idx++] = iocb;
     s->io_q.idx = idx;
 
+    /* don't submit until next completion for -EAGAIN of non plug case */
+    if (unlikely(!s->io_q.plugged)) {
+        return 0;
+    }
+
     /* submit immediately if queue depth is above 2/3 */
     if (idx > s->io_q.size * 2 / 3) {
         return ioq_submit(s, true);
@@ -305,10 +310,25 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
     io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
 
     if (!s->io_q.plugged) {
-        if (io_submit(s->ctx, 1, &iocbs) < 0) {
+        int ret;
+
+        if (!s->io_q.idx) {
+            ret = io_submit(s->ctx, 1, &iocbs);
+        } else {
+            ret = -EAGAIN;
+        }
+        /*
+         * Switch to queue mode until -EAGAIN is handled, we suppose
+         * there is always uncompleted I/O, so try to enqueue it first,
+         * and will be submitted again in following aio completion cb.
+         */
+        if (ret == -EAGAIN) {
+            goto enqueue;
+        } else if (ret < 0) {
             goto out_free_aiocb;
         }
     } else {
+ enqueue:
         if (ioq_enqueue(s, iocbs) < 0) {
             goto out_free_aiocb;
         }
-- 
1.7.9.5
* [Qemu-devel] [PATCH 3/4] linux-aio: increase max event to 256
From: Ming Lei @ 2014-08-14  9:41 UTC
To: qemu-devel, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf
Cc: Ming Lei

This patch increases max event to 256 for the comming
virtio-blk multi virtqueue support.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/linux-aio.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index 0e21f76..bf94ae9 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -23,7 +23,7 @@
  * than this we will get EAGAIN from io_submit which is communicated to
  * the guest as an I/O error.
  */
-#define MAX_EVENTS 128
+#define MAX_EVENTS 256
 
 #define MAX_QUEUED_IO 128
 
-- 
1.7.9.5
* Re: [Qemu-devel] [PATCH 3/4] linux-aio: increase max event to 256
From: Benoît Canet @ 2014-09-04 14:36 UTC
To: Ming Lei
Cc: Kevin Wolf, Peter Maydell, qemu-devel, Stefan Hajnoczi, Paolo Bonzini

The Thursday 14 Aug 2014 à 17:41:43 (+0800), Ming Lei wrote :
> This patch increases max event to 256 for the comming

s/comming/coming/

See http://en.wiktionary.org/wiki/comming and
https://www.wordnik.com/words/comming

> virtio-blk multi virtqueue support.
>
> Signed-off-by: Ming Lei <ming.lei@canonical.com>
> ---
>  block/linux-aio.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
[...]
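As the comment in the hunk above notes, MAX_EVENTS is the value linux-aio
passes to io_setup() when the AIO context is created, so it roughly bounds how
many requests can be in flight per context before io_submit() starts returning
-EAGAIN; with several virtqueues feeding one context, 128 is easier to exhaust.
A standalone illustration of that relationship, using plain libaio rather than
the QEMU wrapper (values and the main() scaffolding are arbitrary):

#include <libaio.h>
#include <stdio.h>

int main(void)
{
    io_context_t ctx = 0;
    /* nr_events bounds how many requests the kernel context will accept;
     * QEMU's laio_init() passes MAX_EVENTS here. */
    int ret = io_setup(256, &ctx);

    if (ret < 0) {
        fprintf(stderr, "io_setup: %d\n", ret);
        return 1;
    }
    /* ... prepare and io_submit() iocbs; once roughly 256 are in flight and
     * none have been reaped with io_getevents(), further io_submit() calls
     * can fail with -EAGAIN ... */
    io_destroy(ctx);
    return 0;
}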
* [Qemu-devel] [PATCH 4/4] linux-aio: remove 'node' from 'struct qemu_laiocb'
From: Ming Lei @ 2014-08-14  9:41 UTC
To: qemu-devel, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf
Cc: Ming Lei

No one uses the 'node' field any more, so remove it from
'struct qemu_laiocb'; this saves 16 bytes per structure on a
64-bit arch.

Signed-off-by: Ming Lei <ming.lei@canonical.com>
---
 block/linux-aio.c |    1 -
 1 file changed, 1 deletion(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index bf94ae9..da50ea5 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -35,7 +35,6 @@ struct qemu_laiocb {
     size_t nbytes;
     QEMUIOVector *qiov;
     bool is_read;
-    QLIST_ENTRY(qemu_laiocb) node;
 };
 
 /*
-- 
1.7.9.5
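The 16-byte figure comes from what QLIST_ENTRY() expands to in QEMU's
qemu/queue.h: a forward pointer plus a pointer-to-pointer back link, i.e. two
pointers, which is 16 bytes on a 64-bit host. The sketch below uses a local
approximation of the macro (LIST_ENTRY_APPROX and the two demo structs are
made up) purely to show where the saving comes from:

#include <stdio.h>

/* Approximation of QLIST_ENTRY() from QEMU's include/qemu/queue.h:
 * it embeds two pointers into the containing struct. */
#define LIST_ENTRY_APPROX(type) \
    struct { struct type *le_next; struct type **le_prev; }

struct with_node {
    long ret;
    LIST_ENTRY_APPROX(with_node) node;   /* 2 pointers = 16 bytes on 64-bit */
};

struct without_node {
    long ret;
};

int main(void)
{
    /* Expected difference on a 64-bit host: 16 bytes. */
    printf("with node: %zu, without: %zu\n",
           sizeof(struct with_node), sizeof(struct without_node));
    return 0;
}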
* Re: [Qemu-devel] linux-aio: fix batch submission
From: Ming Lei @ 2014-08-21  9:24 UTC
To: qemu-devel, Peter Maydell, Paolo Bonzini, Stefan Hajnoczi, Kevin Wolf

Hi Guys,

I'd appreciate it if anyone could review or just take this patchset,
which is needed by my following patches.

Thanks,

On Thu, Aug 14, 2014 at 5:41 PM, Ming Lei <ming.lei@canonical.com> wrote:
> Hi,
>
> The 1st patch fixes batch submission.
>
> The 2nd one fixes -EAGAIN for the non-batch case.
>
> The 3rd one increases the max event count to 256 to support the coming
> multi virt-queue feature.
>
> The 4th one is a cleanup.
>
> This patchset is split out of the previous patchset (dataplane:
> optimization and multi virtqueue support), as suggested by Stefan.
>
> Thanks,
> --
> Ming Lei