From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4CCADCF9.5030508@linux.vnet.ibm.com>
Date: Fri, 29 Oct 2010 09:40:57 -0500
From: Anthony Liguori
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH 2/3] v2 Fix Block Hotplug race with drive_unplug()
References: <1288030956-28383-1-git-send-email-ryanh@us.ibm.com> <1288030956-28383-3-git-send-email-ryanh@us.ibm.com> <4CCAD6F4.6010201@linux.vnet.ibm.com> <4CCADA4C.4030302@redhat.com>
In-Reply-To: <4CCADA4C.4030302@redhat.com>
Content-Type: multipart/mixed; boundary="------------060905080804050705030209"
List-Id: qemu-devel.nongnu.org
To: Kevin Wolf
Cc: Stefan Hajnoczi, Ryan Harper, Markus Armbruster, qemu-devel@nongnu.org

This is a multi-part message in MIME format.
--------------060905080804050705030209
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 10/29/2010 09:29 AM, Kevin Wolf wrote:
> On 29.10.2010 16:15, Anthony Liguori wrote:
>> On 10/29/2010 09:01 AM, Markus Armbruster wrote:
>>> Ryan Harper writes:
>>>
>>>> diff --git a/block.c b/block.c
>>>> index a19374d..be47655 100644
>>>> --- a/block.c
>>>> +++ b/block.c
>>>> @@ -1328,6 +1328,13 @@ void bdrv_set_removable(BlockDriverState *bs, int removable)
>>>>          }
>>>>      }
>>>>
>>>> +void bdrv_unplug(BlockDriverState *bs)
>>>> +{
>>>> +    qemu_aio_flush();
>>>> +    bdrv_flush(bs);
>>>> +    bdrv_close(bs);
>>>> +}
>>>
>>> Stupid question: why doesn't bdrv_close() flush automatically?
>>
>> I don't think it's a bad idea to do that, but to the extent that the
>> block API is designed after POSIX file I/O, close does not usually
>> imply flush.
>
> I don't think it really resembles POSIX. More or less the only thing
> they have in common is that both provide open, read, write and close,
> which is something that probably any API for file access provides.
>
> The operation you're talking about here is bdrv_flush/fsync, which is
> not implied by a POSIX close?

Yes. But I think for the purposes of this patch, a bdrv_cancel_all()
would be just as good. The intention is to eliminate pending I/O
requests; the fsync is just a side effect.

>>> And why do we have to flush here, but not before other uses of
>>> bdrv_close(), such as eject_device()?
>>
>> Good question. Kevin should also confirm, but looking at the code, I
>> think flush() is needed before close. If there's a pending I/O event
>> and you close before the I/O event is completed, you'll get a
>> callback for completion against a bogus BlockDriverState.
>>
>> I can't find anything in either raw-posix or the generic block layer
>> that would mitigate this.
>
> I'm not aware of anything either. This is what qemu_aio_flush would
> do.
> It seems reasonable to me to call both qemu_aio_flush and bdrv_flush
> in bdrv_close. We probably don't really need to call bdrv_flush to
> operate correctly, but it can't hurt, and bdrv_close shouldn't happen
> that often anyway.

I agree.

Re: qemu_aio_flush, we have to wait for it to complete, which gets a
little complicated in bdrv_close(). I think it would be better to make
bdrv_flush() call bdrv_aio_flush() if an explicit bdrv_flush method
isn't provided.

Something like the attached (still needs testing). Does that seem
reasonable?

Regards,

Anthony Liguori

> Kevin

--------------060905080804050705030209
Content-Type: text/x-patch; name="0001-block-make-bdrv_flush-fall-back-to-bdrv_aio_flush.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename*0="0001-block-make-bdrv_flush-fall-back-to-bdrv_aio_flush.patch"

>From 86bf3c9eb5ce43280224f9271a4ad016b0dd3fb1 Mon Sep 17 00:00:00 2001
From: Anthony Liguori
Date: Fri, 29 Oct 2010 09:36:53 -0500
Subject: [PATCH 1/2] block: make bdrv_flush() fall back to bdrv_aio_flush

Signed-off-by: Anthony Liguori

diff --git a/block.c b/block.c
index 985d0b7..fc8defd 100644
--- a/block.c
+++ b/block.c
@@ -1453,14 +1453,51 @@ const char *bdrv_get_device_name(BlockDriverState *bs)
     return bs->device_name;
 }
 
+static void bdrv_flush_em_cb(void *opaque, int ret)
+{
+    int *pcomplete = opaque;
+
+    *pcomplete = 1;
+}
+
+static void bdrv_flush_em(BlockDriverState *bs)
+{
+    int complete = 0;
+    BlockDriverAIOCB *acb;
+
+    if (!bs->drv->bdrv_aio_flush) {
+        return;
+    }
+
+    async_context_push();
+
+    acb = bs->drv->bdrv_aio_flush(bs, bdrv_flush_em_cb, &complete);
+    if (!acb) {
+        goto out;
+    }
+
+    while (!complete) {
+        qemu_aio_wait();
+    }
+
+out:
+    async_context_pop();
+}
+
 void bdrv_flush(BlockDriverState *bs)
 {
     if (bs->open_flags & BDRV_O_NO_FLUSH) {
         return;
     }
 
-    if (bs->drv && bs->drv->bdrv_flush)
+    if (!bs->drv) {
+        return;
+    }
+
+    if (bs->drv->bdrv_flush) {
         bs->drv->bdrv_flush(bs);
+    } else {
+        bdrv_flush_em(bs);
+    }
 }
 void bdrv_flush_all(void)
-- 
1.7.0.4

--------------060905080804050705030209
Content-Type: text/x-patch; name="0002-block-add-bdrv_flush-to-bdrv_close.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="0002-block-add-bdrv_flush-to-bdrv_close.patch"

>From 094049974796ddf78ee2f1541bffa40fe1176a1a Mon Sep 17 00:00:00 2001
From: Anthony Liguori
Date: Fri, 29 Oct 2010 09:37:25 -0500
Subject: [PATCH 2/2] block: add bdrv_flush to bdrv_close

To ensure that there are no pending completions before destroying a
block device.

Signed-off-by: Anthony Liguori

diff --git a/block.c b/block.c
index fc8defd..d2aed1b 100644
--- a/block.c
+++ b/block.c
@@ -644,6 +644,8 @@ unlink_and_fail:
 void bdrv_close(BlockDriverState *bs)
 {
     if (bs->drv) {
+        bdrv_flush(bs);
+
         if (bs == bs_snapshots) {
             bs_snapshots = NULL;
         }
-- 
1.7.0.4

--------------060905080804050705030209--