* [Qemu-devel] [PATCH 0/4] Fix qemu_aio_flush callers
From: Kevin Wolf @ 2012-11-13 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, pbonzini

This series has two patches that fix the current qemu_aio_flush() callers,
which should be calling bdrv_drain_all() instead. The other two patches
change the coroutine request cancellation to wait for a single request
(Paolo didn't want this changed to bdrv_drain_all()) and remove
qemu_aio_flush() altogether in order to avoid future misuse.

Kevin Wolf (4):
  block: Improve bdrv_aio_co_cancel_em
  megasas: Use bdrv_drain_all instead of qemu_aio_flush
  qemu-io: Use bdrv_drain_all instead of qemu_aio_flush
  aio: Get rid of qemu_aio_flush()

 async.c        |    5 -----
 block.c        |   19 ++++++++++++++++++-
 block/commit.c |    2 +-
 block/mirror.c |    2 +-
 block/stream.c |    2 +-
 hw/megasas.c   |    2 +-
 main-loop.c    |    5 -----
 qemu-aio.h     |    9 ++-------
 qemu-io.c      |    2 +-
 9 files changed, 25 insertions(+), 23 deletions(-)

-- 
1.7.6.5

* [Qemu-devel] [RFC PATCH 1/4] block: Improve bdrv_aio_co_cancel_em
From: Kevin Wolf @ 2012-11-13 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, pbonzini

Instead of waiting for all requests to complete, wait just for the
specific request that should be cancelled.
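
With this change, cancelling one request no longer forces every other
in-flight request to complete first. A caller-side sketch (the device
structure and callback below are hypothetical; bdrv_aio_readv() and
bdrv_aio_cancel() are the real entry points):

    /* Hypothetical device code: cancel a single in-flight request. */
    static void my_device_cancel_io(MyDeviceRequest *req)
    {
        if (req->aiocb) {
            /* After this patch, bdrv_aio_cancel() waits only until
             * req->aiocb's completion callback has run, instead of
             * draining every request via qemu_aio_flush(). */
            bdrv_aio_cancel(req->aiocb);
            req->aiocb = NULL;
        }
    }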

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block.c |   19 ++++++++++++++++++-
 1 files changed, 18 insertions(+), 1 deletions(-)

diff --git a/block.c b/block.c
index 626d6c2..a2a060d 100644
--- a/block.c
+++ b/block.c
@@ -3788,12 +3788,20 @@ typedef struct BlockDriverAIOCBCoroutine {
     BlockDriverAIOCB common;
     BlockRequest req;
     bool is_write;
+    bool *done;
     QEMUBH* bh;
 } BlockDriverAIOCBCoroutine;
 
 static void bdrv_aio_co_cancel_em(BlockDriverAIOCB *blockacb)
 {
-    qemu_aio_flush();
+    BlockDriverAIOCBCoroutine *acb =
+        container_of(blockacb, BlockDriverAIOCBCoroutine, common);
+    bool done = false;
+
+    acb->done = &done;
+    while (!done) {
+        qemu_aio_wait();
+    }
 }
 
 static const AIOCBInfo bdrv_em_co_aiocb_info = {
@@ -3806,6 +3814,11 @@ static void bdrv_co_em_bh(void *opaque)
     BlockDriverAIOCBCoroutine *acb = opaque;
 
     acb->common.cb(acb->common.opaque, acb->req.error);
+
+    if (acb->done) {
+        *acb->done = true;
+    }
+
     qemu_bh_delete(acb->bh);
     qemu_aio_release(acb);
 }
@@ -3844,6 +3857,7 @@ static BlockDriverAIOCB *bdrv_co_aio_rw_vector(BlockDriverState *bs,
     acb->req.nb_sectors = nb_sectors;
     acb->req.qiov = qiov;
     acb->is_write = is_write;
+    acb->done = NULL;
 
     co = qemu_coroutine_create(bdrv_co_do_rw);
     qemu_coroutine_enter(co, acb);
@@ -3870,6 +3884,8 @@ BlockDriverAIOCB *bdrv_aio_flush(BlockDriverState *bs,
     BlockDriverAIOCBCoroutine *acb;
 
     acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
+    acb->done = NULL;
+
     co = qemu_coroutine_create(bdrv_aio_flush_co_entry);
     qemu_coroutine_enter(co, acb);
 
@@ -3898,6 +3914,7 @@ BlockDriverAIOCB *bdrv_aio_discard(BlockDriverState *bs,
     acb = qemu_aio_get(&bdrv_em_co_aiocb_info, bs, cb, opaque);
     acb->req.sector = sector_num;
     acb->req.nb_sectors = nb_sectors;
+    acb->done = NULL;
     co = qemu_coroutine_create(bdrv_aio_discard_co_entry);
     qemu_coroutine_enter(co, acb);
 
-- 
1.7.6.5

* [Qemu-devel] [PATCH 2/4] megasas: Use bdrv_drain_all instead of qemu_aio_flush
From: Kevin Wolf @ 2012-11-13 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, pbonzini

Calling qemu_aio_flush() directly can hang when combined with I/O
throttling: throttled requests sit in a queue and are only restarted by
a timer that never fires while qemu_aio_flush() is polling, so the flush
can wait forever. bdrv_drain_all() restarts throttled requests itself
before waiting.
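
For context, a simplified sketch of what bdrv_drain_all() does in this
era (the real implementation lives in block.c; details may differ):

    void bdrv_drain_all(void)
    {
        BlockDriverState *bs;
        bool busy;

        do {
            busy = qemu_aio_wait();

            /* Throttled requests are normally restarted by a timer that
             * never fires inside qemu_aio_wait(), so restart them by
             * hand -- this is the step qemu_aio_flush() is missing. */
            QTAILQ_FOREACH(bs, &bdrv_states, list) {
                if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                    qemu_co_queue_restart_all(&bs->throttled_reqs);
                    busy = true;
                }
            }
        } while (busy);
    }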

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/megasas.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/hw/megasas.c b/hw/megasas.c
index 7a2036e..d332d41 100644
--- a/hw/megasas.c
+++ b/hw/megasas.c
@@ -1296,7 +1296,7 @@ static int megasas_dcmd_get_properties(MegasasState *s, MegasasCmd *cmd)
 
 static int megasas_cache_flush(MegasasState *s, MegasasCmd *cmd)
 {
-    qemu_aio_flush();
+    bdrv_drain_all();
     return MFI_STAT_OK;
 }
 
-- 
1.7.6.5

* [Qemu-devel] [PATCH 3/4] qemu-io: Use bdrv_drain_all instead of qemu_aio_flush
From: Kevin Wolf @ 2012-11-13 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, pbonzini

This is harmless as of today, because qemu-io does not use I/O
throttling; however, as soon as .bdrv_drain handlers are introduced,
qemu-io must be sure to call bdrv_drain_all().
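
The path in question is qemu-io's aio_flush command, e.g. (test.img is a
placeholder image name):

    $ qemu-io -c 'aio_write 0 4k' -c 'aio_flush' test.img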

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 qemu-io.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/qemu-io.c b/qemu-io.c
index 1ad7d3a..92cdb2a 100644
--- a/qemu-io.c
+++ b/qemu-io.c
@@ -1362,7 +1362,7 @@ static int aio_write_f(int argc, char **argv)
 
 static int aio_flush_f(int argc, char **argv)
 {
-    qemu_aio_flush();
+    bdrv_drain_all();
     return 0;
 }
 
-- 
1.7.6.5

* [Qemu-devel] [RFC PATCH 4/4] aio: Get rid of qemu_aio_flush()
From: Kevin Wolf @ 2012-11-13 15:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: kwolf, pbonzini

Any remaining callers are simply bugs and should have been using
bdrv_drain_all() in the first place.
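
For any code that still needs flush-like behaviour, the replacement
idioms are the ones this series itself uses (a sketch; "done" stands for
whatever completion flag the caller tracks):

    /* Drain every in-flight block request, throttled ones included: */
    bdrv_drain_all();

    /* Or, to wait for one specific event, poll the main AioContext: */
    while (!done) {
        qemu_aio_wait();
    }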

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 async.c        |    5 -----
 block/commit.c |    2 +-
 block/mirror.c |    2 +-
 block/stream.c |    2 +-
 main-loop.c    |    5 -----
 qemu-aio.h     |    9 ++-------
 6 files changed, 5 insertions(+), 20 deletions(-)

diff --git a/async.c b/async.c
index 04f9dcb..5e88f8d 100644
--- a/async.c
+++ b/async.c
@@ -217,8 +217,3 @@ void aio_context_unref(AioContext *ctx)
 {
     g_source_unref(&ctx->source);
 }
-
-void aio_flush(AioContext *ctx)
-{
-    while (aio_poll(ctx, true));
-}
diff --git a/block/commit.c b/block/commit.c
index fae7958..e2bb1e2 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -103,7 +103,7 @@ static void coroutine_fn commit_run(void *opaque)
 
 wait:
         /* Note that even when no rate limit is applied we need to yield
-         * with no pending I/O here so that qemu_aio_flush() returns.
+         * with no pending I/O here so that bdrv_drain_all() returns.
          */
         block_job_sleep_ns(&s->common, rt_clock, delay_ns);
         if (block_job_is_cancelled(&s->common)) {
diff --git a/block/mirror.c b/block/mirror.c
index d6618a4..b1f5d4f 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -205,7 +205,7 @@ static void coroutine_fn mirror_run(void *opaque)
             }
 
             /* Note that even when no rate limit is applied we need to yield
-             * with no pending I/O here so that qemu_aio_flush() returns.
+             * with no pending I/O here so that bdrv_drain_all() returns.
              */
             block_job_sleep_ns(&s->common, rt_clock, delay_ns);
             if (block_job_is_cancelled(&s->common)) {
diff --git a/block/stream.c b/block/stream.c
index 0c0fc7a..0dcd286 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -108,7 +108,7 @@ static void coroutine_fn stream_run(void *opaque)
 
 wait:
         /* Note that even when no rate limit is applied we need to yield
-         * with no pending I/O here so that qemu_aio_flush() returns.
+         * with no pending I/O here so that bdrv_drain_all() returns.
          */
         block_job_sleep_ns(&s->common, rt_clock, delay_ns);
         if (block_job_is_cancelled(&s->common)) {
diff --git a/main-loop.c b/main-loop.c
index c87624e..7dba6f6 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -432,11 +432,6 @@ QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
     return aio_bh_new(qemu_aio_context, cb, opaque);
 }
 
-void qemu_aio_flush(void)
-{
-    aio_flush(qemu_aio_context);
-}
-
 bool qemu_aio_wait(void)
 {
     return aio_poll(qemu_aio_context, true);
diff --git a/qemu-aio.h b/qemu-aio.h
index 3889fe9..31884a8 100644
--- a/qemu-aio.h
+++ b/qemu-aio.h
@@ -162,10 +162,6 @@ void qemu_bh_cancel(QEMUBH *bh);
  */
 void qemu_bh_delete(QEMUBH *bh);
 
-/* Flush any pending AIO operation. This function will block until all
- * outstanding AIO operations have been completed or cancelled. */
-void aio_flush(AioContext *ctx);
-
 /* Return whether there are any pending callbacks from the GSource
  * attached to the AioContext.
  *
@@ -196,7 +192,7 @@ typedef int (AioFlushHandler)(void *opaque);
 
 /* Register a file descriptor and associated callbacks.  Behaves very similarly
  * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
- * be invoked when using either qemu_aio_wait() or qemu_aio_flush().
+ * be invoked when using qemu_aio_wait().
  *
  * Code that invokes AIO completion functions should rely on this function
  * instead of qemu_set_fd_handler[2].
@@ -211,7 +207,7 @@ void aio_set_fd_handler(AioContext *ctx,
 
 /* Register an event notifier and associated callbacks.  Behaves very similarly
  * to event_notifier_set_handler.  Unlike event_notifier_set_handler, these callbacks
- * will be invoked when using either qemu_aio_wait() or qemu_aio_flush().
+ * will be invoked when using qemu_aio_wait().
  *
  * Code that invokes AIO completion functions should rely on this function
  * instead of event_notifier_set_handler.
@@ -228,7 +224,6 @@ GSource *aio_get_g_source(AioContext *ctx);
 
 /* Functions to operate on the main QEMU AioContext.  */
 
-void qemu_aio_flush(void);
 bool qemu_aio_wait(void);
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
                                  EventNotifierHandler *io_read,
-- 
1.7.6.5

* Re: [Qemu-devel] [PATCH 0/4] Fix qemu_aio_flush callers
From: Paolo Bonzini @ 2012-11-13 17:39 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-devel

On 13/11/2012 16:51, Kevin Wolf wrote:
> This series has two patches that fix the current qemu_aio_flush() callers,
> which should be calling bdrv_drain_all() instead. The other two patches
> change the coroutine request cancellation to wait for a single request
> (Paolo didn't want this changed to bdrv_drain_all()) and remove
> qemu_aio_flush() altogether in order to avoid future misuse.
> 
> Kevin Wolf (4):
>   block: Improve bdrv_aio_co_cancel_em
>   megasas: Use bdrv_drain_all instead of qemu_aio_flush
>   qemu-io: Use bdrv_drain_all instead of qemu_aio_flush
>   aio: Get rid of qemu_aio_flush()
> 
>  async.c        |    5 -----
>  block.c        |   19 ++++++++++++++++++-
>  block/commit.c |    2 +-
>  block/mirror.c |    2 +-
>  block/stream.c |    2 +-
>  hw/megasas.c   |    2 +-
>  main-loop.c    |    5 -----
>  qemu-aio.h     |    9 ++-------
>  qemu-io.c      |    2 +-
>  9 files changed, 25 insertions(+), 23 deletions(-)
> 

Patches 2 and 3 look good.  The rest is 1.4 material; it looks good too,
but maybe we can do something better than the ->done boolean... no ideas,
just thinking out loud, but perhaps it will come naturally out of the
AioContext/data-plane work.

Paolo

* Re: [Qemu-devel] [PATCH 0/4] Fix qemu_aio_flush callers
From: Kevin Wolf @ 2012-11-14  7:29 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel

On 13.11.2012 18:39, Paolo Bonzini wrote:
> Il 13/11/2012 16:51, Kevin Wolf ha scritto:
>> This series has two patches that fix the current qemu_aio_flush() callers,
>> which should be calling bdrv_drain_all() instead. The other two patches
>> change the coroutine request cancellation to wait for a single request
>> (Paolo didn't want this changed to bdrv_drain_all()) and remove
>> qemu_aio_flush() altogether in order to avoid future misuse.
>>
>> Kevin Wolf (4):
>>   block: Improve bdrv_aio_co_cancel_em
>>   megasas: Use bdrv_drain_all instead of qemu_aio_flush
>>   qemu-io: Use bdrv_drain_all instead of qemu_aio_flush
>>   aio: Get rid of qemu_aio_flush()
>>
>>  async.c        |    5 -----
>>  block.c        |   19 ++++++++++++++++++-
>>  block/commit.c |    2 +-
>>  block/mirror.c |    2 +-
>>  block/stream.c |    2 +-
>>  hw/megasas.c   |    2 +-
>>  main-loop.c    |    5 -----
>>  qemu-aio.h     |    9 ++-------
>>  qemu-io.c      |    2 +-
>>  9 files changed, 25 insertions(+), 23 deletions(-)
>>
> 
> Patches 2 and 3 look good.  The rest is 1.4 material; it looks good too,
> but maybe we can do something better than the ->done boolean... no ideas,
> just thinking out loud, but perhaps it will come naturally out of the
> AioContext/data-plane work.

At the moment I can't see how, but if it does, we can still replace it.
I just don't think the old paths will go away in the very near future,
so I'd prefer not to wait for the data-plane work to be completed.

Kevin
