From: Andrea Arcangeli <aarcange@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Kevin Wolf <kwolf@redhat.com>, qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH] fix qemu_aio_flush
Date: Thu, 4 Jun 2009 13:26:45 +0200	[thread overview]
Message-ID: <20090604112645.GQ25483@random.random> (raw)
In-Reply-To: <20090530121709.GA22104@random.random>

Hello,

Kevin has a good point: when the high-level callback handler completes, we
should be guaranteed that all underlying layers of callback events have
completed for that specific aio operation. So it seems the main bug
was only in qemu_aio_flush() (made visible only by the debug code
included in the ide_dma_cancel patch). I guess that's a problem for
savevm/reset, which assume there is no outstanding aio waiting to be
run, while in fact there can be because of this bug. The patch is much
simpler, as seen below:

----------

From: Andrea Arcangeli <aarcange@redhat.com>

qemu_aio_wait, by invoking a bh or one of the aio completion
callbacks, could end up submitting new pending aio, breaking the
invariant that qemu_aio_flush returns only when no pending aio is
outstanding (possibly a problem for migration as such).

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---

diff --git a/aio.c b/aio.c
index 11fbb6c..dc9b85d 100644
--- a/aio.c
+++ b/aio.c
@@ -103,11 +103,15 @@ void qemu_aio_flush(void)
     do {
         ret = 0;
 
+	/*
+	 * If there are pending emulated aio start them now so flush
+	 * will be able to return 1.
+	 */
+        qemu_aio_wait();
+
         LIST_FOREACH(node, &aio_handlers, node) {
             ret |= node->io_flush(node->opaque);
         }
-
-        qemu_aio_wait();
     } while (ret > 0);
 }
 
diff --git a/qemu-aio.h b/qemu-aio.h
index 7967829..f262344 100644
--- a/qemu-aio.h
+++ b/qemu-aio.h
@@ -24,9 +24,10 @@ typedef int (AioFlushHandler)(void *opaque);
  * outstanding AIO operations have been completed or cancelled. */
 void qemu_aio_flush(void);
 
-/* Wait for a single AIO completion to occur.  This function will until a
- * single AIO opeartion has completed.  It is intended to be used as a looping
- * primative when simulating synchronous IO based on asynchronous IO. */
+/* Wait for a single AIO completion to occur.  This function will wait
+ * until a single AIO event has completed and it will ensure something
+ * has moved before returning. This can issue new pending aio as a
+ * result of executing I/O completion or bh callbacks. */
 void qemu_aio_wait(void);
 
 /* Register a file descriptor and associated callbacks.  Behaves very similarly


Thread overview: 6+ messages
2009-05-28 16:33 [Qemu-devel] fix bdrv_read/write_em and qemu_aio_flush Andrea Arcangeli
2009-05-30 10:08 ` Christoph Hellwig
2009-05-30 12:17   ` Andrea Arcangeli
2009-06-04 11:26     ` Andrea Arcangeli [this message]
2009-06-04 11:51       ` [Qemu-devel] Re: [PATCH] fix qemu_aio_flush Kevin Wolf
2009-06-05 15:57       ` [Qemu-devel] " Christoph Hellwig
