From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1JfJ1p-0001Rz-Eg for qemu-devel@nongnu.org; Fri, 28 Mar 2008 14:10:13 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1JfJ1n-0001Q4-2y for qemu-devel@nongnu.org; Fri, 28 Mar 2008 14:10:12 -0400
Received: from [199.232.76.173] (helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43)
	id 1JfJ1m-0001Ps-Sq for qemu-devel@nongnu.org; Fri, 28 Mar 2008 14:10:10 -0400
Received: from kanga.kvack.org ([66.96.29.28])
	by monty-python.gnu.org with esmtp (Exim 4.60)
	(envelope-from ) id 1JfJ1m-0006ec-HP
	for qemu-devel@nongnu.org; Fri, 28 Mar 2008 14:10:10 -0400
Date: Fri, 28 Mar 2008 15:13:11 -0300
From: Marcelo Tosatti
Subject: Re: [kvm-devel] [Qemu-devel] [PATCH] QEMU: fsync AIO writes on flush request
Message-ID: <20080328181311.GA19547@dmt>
References: <20080328150517.GA18077@dmt>
	<200803281640.55185.paul@codesourcery.com>
	<20080328165941.GA19155@dmt>
	<200803281700.40420.paul@codesourcery.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <200803281700.40420.paul@codesourcery.com>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: Paul Brook
Cc: kvm-devel, qemu-devel@nongnu.org

On Fri, Mar 28, 2008 at 05:00:39PM +0000, Paul Brook wrote:
> > > Surely you should be using the normal aio notification to wait for the
> > > aio_fsync to complete before reporting success to the device.
> >
> > qemu_aio_flush() will wait for all pending AIO requests (including
> > aio_fsync) to complete.
>
> Then why do you need to separate fdatasync?

Oh, I see what Jamie means now: fdatasync() is redundant with
aio_fsync(O_DSYNC). How's this?
Index: kvm-userspace.io/qemu/block-raw-posix.c
===================================================================
--- kvm-userspace.io.orig/qemu/block-raw-posix.c
+++ kvm-userspace.io/qemu/block-raw-posix.c
@@ -557,10 +557,39 @@ static int raw_create(const char *filena
     return 0;
 }
 
+static void raw_aio_flush_complete(void *opaque, int ret)
+{
+    if (ret)
+        printf("WARNING: aio_fsync failed (completion)\n");
+}
+
+static void raw_aio_flush(BlockDriverState *bs)
+{
+    RawAIOCB *acb;
+
+    acb = raw_aio_setup(bs, 0, NULL, 0, raw_aio_flush_complete, NULL);
+    if (!acb)
+        return;
+
+    if (aio_fsync(O_DSYNC, &acb->aiocb) < 0) {
+        qemu_aio_release(acb);
+        perror("aio_fsync");
+        printf("WARNING: aio_fsync failed\n");
+        return;
+    }
+}
+
 static void raw_flush(BlockDriverState *bs)
 {
     BDRVRawState *s = bs->opaque;
-    fsync(s->fd);
+    raw_aio_flush(bs);
+
+    /* We rely on the fact that no other AIO will be submitted
+     * in parallel, but this should be fixed by per-device
+     * AIO queues when allowing multiple CPU's to process IO
+     * in QEMU.
+     */
+    qemu_aio_flush();
 }
 
 BlockDriver bdrv_raw = {