From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4CCAE803.8090800@linux.vnet.ibm.com>
Date: Fri, 29 Oct 2010 10:28:03 -0500
From: Anthony Liguori
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH 2/3] v2 Fix Block Hotplug race with drive_unplug()
References: <1288030956-28383-1-git-send-email-ryanh@us.ibm.com> <1288030956-28383-3-git-send-email-ryanh@us.ibm.com> <4CCAD6F4.6010201@linux.vnet.ibm.com> <4CCADA4C.4030302@redhat.com> <4CCADCF9.5030508@linux.vnet.ibm.com> <4CCAE0E0.7020707@redhat.com>
In-Reply-To: <4CCAE0E0.7020707@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
To: Kevin Wolf
Cc: Stefan Hajnoczi, Ryan Harper, Christoph Hellwig, Markus Armbruster, qemu-devel@nongnu.org

On 10/29/2010 09:57 AM, Kevin Wolf wrote:
> Am 29.10.2010 16:40, schrieb Anthony Liguori:
>
>> On 10/29/2010 09:29 AM, Kevin Wolf wrote:
>>
>>> Am 29.10.2010 16:15, schrieb Anthony Liguori:
>>>
>>>> I don't think it's a bad idea to do that but to the extent that the
>>>> block API is designed after posix file I/O, close does not usually imply
>>>> flush.
>>>>
>>>>
>>> I don't think it really resembles POSIX. More or less the only thing
>>> they have in common is that both provide open, read, write and close,
>>> which is something that probably any API for file accesses provides.
>>>
>>> The operation you're talking about here is bdrv_flush/fsync that is not
>>> implied by a POSIX close?
>>>
>>>
>> Yes. But I think for the purposes of this patch, a bdrv_cancel_all()
>> would be just as good. The intention is to eliminate pending I/O
>> requests, the fsync is just a side effect.
>>
> Well, if I'm not mistaken, bdrv_flush would provide only this side
> effect and not the semantics that you're really looking for. This is why
> I suggested adding both bdrv_flush and qemu_aio_flush. We could probably
> introduce a qemu_aio_flush variant that flushes only one
> BlockDriverState - this is what you really want.
>
>
>>>>> And why do we have to flush here, but not before other uses of
>>>>> bdrv_close(), such as eject_device()?
>>>>>
>>>>>
>>>>>
>>>> Good question. Kevin should also confirm, but looking at the code, I
>>>> think flush() is needed before close.
>>>> If there's a pending I/O event
>>>> and you close before the I/O event is completed, you'll get a callback
>>>> for completion against a bogus BlockDriverState.
>>>>
>>>> I can't find anything in either raw-posix or the generic block layer
>>>> that would mitigate this.
>>>>
>>>>
>>> I'm not aware of anything either. This is what qemu_aio_flush would do.
>>>
>>> It seems reasonable to me to call both qemu_aio_flush and bdrv_flush in
>>> bdrv_close. We probably don't really need to call bdrv_flush to operate
>>> correctly, but it can't hurt and bdrv_close shouldn't happen that often
>>> anyway.
>>>
>>>
>> I agree. Re: qemu_aio_flush, we have to wait for it to complete which
>> gets a little complicated in bdrv_close().
>>
> qemu_aio_flush is the function that waits for requests to complete.
>

Please excuse me while my head explodes ;-)

I think we've got a bit of a problem. We have:

1) bdrv_flush() - sends an fdatasync
2) bdrv_aio_flush() - sends an fdatasync using the thread pool
3) qemu_aio_flush() - waits for all pending aio requests to complete

But we use bdrv_aio_flush() to implement a barrier and we don't actually
preserve those barrier semantics in the thread pool. That is, if I do:

bdrv_aio_write() -> A
bdrv_aio_write() -> B
bdrv_aio_flush() -> C

this will get queued as three requests on the thread pool: (A) is a
write, (B) is a write, and (C) is an fdatasync. But if these get picked
up by three separate threads, the ordering isn't guaranteed. It might
be C, B, A.

So semantically, is bdrv_aio_flush() supposed to flush any *pending*
writes or any *completed* writes? If it's the latter, we're okay, but if
it's the former, we're broken.

If it's supposed to flush any pending writes, then my patch series is
correct in theory.

Regards,

Anthony Liguori

>> I think it would be better
>> to make bdrv_flush() call bdrv_aio_flush() if an explicit bdrv_flush
>> method isn't provided. Something like the attached (still need to test).
>>
>> Does that seem reasonable?
>>
> I'm not sure why you want to introduce this emulation. Are there any
> drivers that implement bdrv_aio_flush, but not bdrv_flush? They are
> definitely broken.
>
> Today, bdrv_aio_flush is emulated using bdrv_flush if the driver doesn't
> provide it explicitly.
>
> I think this also means that your first patch would kill any drivers
> implementing neither bdrv_flush nor bdrv_aio_flush because they'd try to
> emulate the other function in an endless recursion.
>
> And apart from that, as said above, bdrv_flush doesn't do the right
> thing anyway. ;-)
>
> Kevin
>
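
[Editor's note: the reordering Anthony describes can be illustrated outside QEMU.
Below is a minimal, self-contained sketch using plain pthreads; it is not the
real thread-pool or block-layer code, and the job names merely echo the bdrv_*
calls above. It only shows that three requests handed to independent worker
threads need not complete in submission order.]

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct job {
    const char *name;       /* what the request stands in for */
    unsigned int delay_us;  /* pretend I/O latency */
};

static void *worker(void *arg)
{
    struct job *job = arg;

    usleep(job->delay_us);               /* "perform" the request */
    printf("completed %s\n", job->name); /* stand-in for the completion callback */
    return NULL;
}

int main(void)
{
    struct job jobs[3] = {
        { "A: bdrv_aio_write", 0 },
        { "B: bdrv_aio_write", 0 },
        { "C: bdrv_aio_flush (fdatasync)", 0 },
    };
    pthread_t tid[3];
    int i;

    /* Give each request a random latency, like a real disk would. */
    srand((unsigned int)getpid());
    for (i = 0; i < 3; i++) {
        jobs[i].delay_us = (unsigned int)(rand() % 2000);
    }

    /* All three requests are handed out at once, as a thread pool would. */
    for (i = 0; i < 3; i++) {
        pthread_create(&tid[i], NULL, worker, &jobs[i]);
    }
    for (i = 0; i < 3; i++) {
        pthread_join(tid[i], NULL);
    }

    /*
     * Completion order depends on per-job latency and scheduling; "C" can
     * easily finish before "A" and "B", i.e. the flush does not act as a
     * barrier for the writes queued before it.
     */
    return 0;
}

[Built with "gcc -pthread", repeated runs print the completions in varying
orders, which is exactly the property that breaks the barrier assumption if
bdrv_aio_flush() is meant to cover pending rather than completed writes.]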