From: Marcelo Tosatti <marcelo@kvack.org>
To: Paul Brook <paul@codesourcery.com>
Cc: kvm-devel <kvm-devel@lists.sourceforge.net>, qemu-devel@nongnu.org
Subject: Re: [kvm-devel] [Qemu-devel] [PATCH] QEMU: fsync AIO writes on flush request
Date: Fri, 28 Mar 2008 15:13:11 -0300
Message-ID: <20080328181311.GA19547@dmt>
In-Reply-To: <200803281700.40420.paul@codesourcery.com>

On Fri, Mar 28, 2008 at 05:00:39PM +0000, Paul Brook wrote:
> > > Surely you should be using the normal aio notification to wait for the
> > > aio_fsync to complete before reporting success to the device.
> >
> > qemu_aio_flush() will wait for all pending AIO requests (including
> > aio_fsync) to complete.
> 
> Then why do you need to separate fdatasync?

Oh, I see what Jamie means now: fdatasync() is redundant with
aio_fsync(O_DSYNC).
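(To spell that out: aio_fsync(O_DSYNC, &cb) requests the same data-only
synchronization as fdatasync(fd), just queued asynchronously, so issuing
both means syncing twice. A minimal standalone sketch of the call and its
completion handling, outside QEMU; the scratch file name is made up, and
on glibc this needs -lrt:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct aiocb cb;
    const struct aiocb *const list[1] = { &cb };
    int fd = open("scratch.img", O_WRONLY | O_CREAT, 0644);
    int err;

    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;

    /* Data-only sync, like fdatasync(fd), but queued asynchronously. */
    if (aio_fsync(O_DSYNC, &cb) < 0) {
        perror("aio_fsync");
        return 1;
    }

    /* Completion is observed like any other AIO request. */
    while ((err = aio_error(&cb)) == EINPROGRESS)
        aio_suspend(list, 1, NULL);

    if (err) {
        errno = err;
        perror("aio_fsync (completion)");
    }
    aio_return(&cb);    /* reap the request */

    close(fd);
    return 0;
}
)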

How's this? 

Index: kvm-userspace.io/qemu/block-raw-posix.c
===================================================================
--- kvm-userspace.io.orig/qemu/block-raw-posix.c
+++ kvm-userspace.io/qemu/block-raw-posix.c
@@ -557,10 +557,39 @@ static int raw_create(const char *filena
     return 0;
 }
 
+static void raw_aio_flush_complete(void *opaque, int ret)
+{
+    if (ret)
+        printf("WARNING: aio_fsync failed (completion)\n");
+}
+
+static void raw_aio_flush(BlockDriverState *bs)
+{
+    RawAIOCB *acb;
+
+    acb = raw_aio_setup(bs, 0, NULL, 0, raw_aio_flush_complete, NULL);
+    if (!acb)
+        return;
+
+    if (aio_fsync(O_DSYNC, &acb->aiocb) < 0) {
+        qemu_aio_release(acb);
+        perror("aio_fsync");
+        printf("WARNING: aio_fsync failed\n");
+        return;
+    }
+}
+
 static void raw_flush(BlockDriverState *bs)
 {
     BDRVRawState *s = bs->opaque;
-    fsync(s->fd);
+    raw_aio_flush(bs);
+
+    /* We rely on the fact that no other AIO will be submitted
+     * in parallel, but this should be fixed by per-device
+     * AIO queues when allowing multiple CPUs to process IO
+     * in QEMU.
+     */
+    qemu_aio_flush();
 }
 
 BlockDriver bdrv_raw = {

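(For the archives, why queue-then-drain gives the ordering we need here:
POSIX says aio_fsync() covers the I/O operations already queued on the
descriptor at the time of the call, so submitting the sync after the
writes and then draining everything yields write-then-sync semantics. A
standalone sketch of that pattern, again not QEMU code, with the
drain-everything role of qemu_aio_flush() played by a per-request wait
loop; file name and buffer contents are made up:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Wait for one request to finish; calling this for every outstanding
 * request is the moral equivalent of qemu_aio_flush(). */
static void drain(struct aiocb *cb)
{
    const struct aiocb *const list[1] = { cb };
    int err;

    while ((err = aio_error(cb)) == EINPROGRESS)
        aio_suspend(list, 1, NULL);
    if (err) {
        errno = err;
        perror("aio completion");
    }
    aio_return(cb);    /* reap */
}

int main(void)
{
    static char buf[] = "guest data";
    struct aiocb wr, sync;
    int fd = open("scratch.img", O_WRONLY | O_CREAT, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&wr, 0, sizeof(wr));
    wr.aio_fildes = fd;
    wr.aio_buf = buf;
    wr.aio_nbytes = sizeof(buf) - 1;

    memset(&sync, 0, sizeof(sync));
    sync.aio_fildes = fd;

    /* Queue the write first... */
    if (aio_write(&wr) < 0) {
        perror("aio_write");
        return 1;
    }
    /* ...then the sync: it covers requests queued before this call. */
    if (aio_fsync(O_DSYNC, &sync) < 0) {
        perror("aio_fsync");
        return 1;
    }

    /* Drain everything before reporting success to the device. */
    drain(&wr);
    drain(&sync);
    close(fd);
    return 0;
}
)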