qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments
@ 2015-07-02 16:24 Stefan Hajnoczi
  2015-07-03 12:49 ` Markus Armbruster
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Stefan Hajnoczi @ 2015-07-02 16:24 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Markus Armbruster, Stefan Hajnoczi

The doc comments for bdrv_drain_all() and bdrv_drain() are outdated:

 * The bdrv_drain() comment's warning is a poor man's substitute for the
   bdrv_lock()/bdrv_unlock() API that Fam Zheng is currently developing.
   Unfortunately the warning was never really enough because devices keep
   submitting I/O and op blockers don't prevent that.

 * The bdrv_drain_all() comment is still partially correct but reflects
   the nature of the implementation rather than API documentation.

Make it clear that bdrv_drain() is only appropriate within an
AioContext.  For anything spanning AioContexts you need
bdrv_drain_all().

Cc: Markus Armbruster <armbru@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/block/io.c b/block/io.c
index e295992..6f5704f 100644
--- a/block/io.c
+++ b/block/io.c
@@ -236,12 +236,12 @@ static bool bdrv_requests_pending(BlockDriverState *bs)
 /*
  * Wait for pending requests to complete on a single BlockDriverState subtree
  *
- * See the warning in bdrv_drain_all().  This function can only be called if
- * you are sure nothing can generate I/O because you have op blockers
- * installed.
- *
  * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
  * AioContext.
+ *
+ * Only this BlockDriverState's AioContext is run, so in-flight requests must
+ * not depend on events in other AioContexts.  If they do, use
+ * bdrv_drain_all() instead.
  */
 void bdrv_drain(BlockDriverState *bs)
 {
@@ -260,12 +260,6 @@ void bdrv_drain(BlockDriverState *bs)
  *
  * This function does not flush data to disk, use bdrv_flush_all() for that
  * after calling this function.
- *
- * Note that completion of an asynchronous I/O operation can trigger any
- * number of other I/O operations on other devices---for example a coroutine
- * can be arbitrarily complex and a constant flow of I/O can come until the
- * coroutine is complete.  Because of this, it is not possible to have a
- * function to drain a single device's I/O queue.
  */
 void bdrv_drain_all(void)
 {
@@ -288,6 +282,12 @@ void bdrv_drain_all(void)
         }
     }
 
+    /* Note that completion of an asynchronous I/O operation can trigger any
+     * number of other I/O operations on other devices---for example a
+     * coroutine can submit an I/O request to another device in response to
+     * request completion.  Therefore we must keep looping until there is no
+     * more activity rather than simply draining each device independently.
+     */
     while (busy) {
         busy = false;
 
-- 
2.4.3

^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments
  2015-07-02 16:24 [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments Stefan Hajnoczi
@ 2015-07-03 12:49 ` Markus Armbruster
  2015-07-06  3:41 ` Fam Zheng
  2015-07-07  9:32 ` Stefan Hajnoczi
  2 siblings, 0 replies; 4+ messages in thread
From: Markus Armbruster @ 2015-07-03 12:49 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Paolo Bonzini, qemu-devel

Stefan Hajnoczi <stefanha@redhat.com> writes:

> The doc comments for bdrv_drain_all() and bdrv_drain() are outdated:
>
>  * The bdrv_drain() comment's warning is a poor man's substitute for the
>    bdrv_lock()/bdrv_unlock() API that Fam Zheng is currently developing.
>    Unfortunately the warning was never really enough because devices keep
>    submitting I/O and op blockers don't prevent that.
>
>  * The bdrv_drain_all() comment is still partially correct but reflects
>    the nature of the implementation rather than API documentation.
>
> Make it clear that bdrv_drain() is only appropriate within an
> AioContext.  For anything spanning AioContexts you need
> bdrv_drain_all().
>
> Cc: Markus Armbruster <armbru@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

For what it's worth (I'm not deep into our AIO):
Reviewed-by: Markus Armbruster <armbru@redhat.com>


* Re: [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments
  2015-07-02 16:24 [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments Stefan Hajnoczi
  2015-07-03 12:49 ` Markus Armbruster
@ 2015-07-06  3:41 ` Fam Zheng
  2015-07-07  9:32 ` Stefan Hajnoczi
  2 siblings, 0 replies; 4+ messages in thread
From: Fam Zheng @ 2015-07-06  3:41 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Paolo Bonzini, qemu-devel, Markus Armbruster

On Thu, 07/02 17:24, Stefan Hajnoczi wrote:
> The doc comments for bdrv_drain_all() and bdrv_drain() are outdated:
> 
>  * The bdrv_drain() comment's warning is a poor man's substitute for the
>    bdrv_lock()/bdrv_unlock() API that Fam Zheng is currently developing.
>    Unfortunately the warning was never really enough because devices keep
>    submitting I/O and op blockers don't prevent that.
> 
>  * The bdrv_drain_all() comment is still partially correct but reflects
>    the nature of the implementation rather than API documentation.
> 
> Make it clear that bdrv_drain() is only appropriate within an
> AioContext.  For anything spanning AioContexts you need
> bdrv_drain_all().
> 
> Cc: Markus Armbruster <armbru@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

Reviewed-by: Fam Zheng <famz@redhat.com>


* Re: [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments
  2015-07-02 16:24 [Qemu-devel] [PATCH] block: update bdrv_drain_all()/bdrv_drain() comments Stefan Hajnoczi
  2015-07-03 12:49 ` Markus Armbruster
  2015-07-06  3:41 ` Fam Zheng
@ 2015-07-07  9:32 ` Stefan Hajnoczi
  2 siblings, 0 replies; 4+ messages in thread
From: Stefan Hajnoczi @ 2015-07-07  9:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Markus Armbruster

On Thu, Jul 02, 2015 at 05:24:41PM +0100, Stefan Hajnoczi wrote:
> The doc comments for bdrv_drain_all() and bdrv_drain() are outdated:
> 
>  * The bdrv_drain() comment's warning is a poor man's substitute for the
>    bdrv_lock()/bdrv_unlock() API that Fam Zheng is currently developing.
>    Unfortunately the warning was never really enough because devices keep
>    submitting I/O and op blockers don't prevent that.
> 
>  * The bdrv_drain_all() comment is still partially correct but reflects
>    the nature of the implementation rather than API documentation.
> 
> Make it clear that bdrv_drain() is only appropriate within an
> AioContext.  For anything spanning AioContexts you need
> bdrv_drain_all().
> 
> Cc: Markus Armbruster <armbru@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/io.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)

Thanks, applied to my block tree:
https://github.com/stefanha/qemu/commits/block

Stefan


