From: Kevin Wolf <kwolf@redhat.com>
To: Anthony Liguori <aliguori@linux.vnet.ibm.com>
Cc: Stefan Hajnoczi <stefan.hajnoczi@uk.ibm.com>,
Ryan Harper <ryanh@us.ibm.com>, Christoph Hellwig <hch@lst.de>,
Markus Armbruster <armbru@redhat.com>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 2/3] v2 Fix Block Hotplug race with drive_unplug()
Date: Fri, 29 Oct 2010 18:08:03 +0200
Message-ID: <4CCAF163.1090806@redhat.com>
In-Reply-To: <4CCAE803.8090800@linux.vnet.ibm.com>
On 10/29/2010 17:28, Anthony Liguori wrote:
> On 10/29/2010 09:57 AM, Kevin Wolf wrote:
>> On 10/29/2010 16:40, Anthony Liguori wrote:
>>
>>> On 10/29/2010 09:29 AM, Kevin Wolf wrote:
>>>
>>>> On 10/29/2010 16:15, Anthony Liguori wrote:
>>>>
>>>>> I don't think it's a bad idea to do that, but to the extent that the
>>>>> block API is designed after POSIX file I/O, close does not usually imply
>>>>> flush.
>>>>>
>>>>>
>>>> I don't think it really resembles POSIX. More or less the only thing
>>>> they have in common is that both provide open, read, write and close,
>>>> which probably any file access API provides.
>>>>
>>>> The operation you're talking about here is bdrv_flush/fsync, which is
>>>> not implied by a POSIX close?
>>>>
>>>>
>>> Yes. But I think for the purposes of this patch, a bdrv_cancel_all()
>>> would be just as good. The intention is to eliminate pending I/O
>>> requests; the fsync is just a side effect.
>>>
>> Well, if I'm not mistaken, bdrv_flush would provide only this side
>> effect and not the semantics that you're really looking for. This is why
>> I suggested adding both bdrv_flush and qemu_aio_flush. We could probably
>> introduce a qemu_aio_flush variant that flushes only one
>> BlockDriverState - this is what you really want.
>>
>>
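(For illustration, a minimal sketch of what such a per-device variant
could look like; qemu_aio_flush_bs() and the in_flight counter are
hypothetical names, not existing QEMU API:)

    /* Hypothetical per-BlockDriverState variant: run the AIO event
     * loop until this particular device has no requests in flight.
     * Assumes the request code maintains a bs->in_flight counter. */
    void qemu_aio_flush_bs(BlockDriverState *bs)
    {
        while (bs->in_flight > 0) {
            qemu_aio_wait();    /* process one round of completions */
        }
    }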
>>>>>> And why do we have to flush here, but not before other uses of
>>>>>> bdrv_close(), such as eject_device()?
>>>>>>
>>>>>>
>>>>>>
>>>>> Good question. Kevin should also confirm, but looking at the code, I
>>>>> think flush() is needed before close. If there's a pending I/O event
>>>>> and you close before the I/O event is completed, you'll get a callback
>>>>> for completion against a bogus BlockDriverState.
>>>>>
>>>>> I can't find anything in either raw-posix or the generic block layer
>>>>> that would mitigate this.
>>>>>
>>>>>
>>>> I'm not aware of anything either. This is what qemu_aio_flush would do.
>>>>
>>>> It seems reasonable to me to call both qemu_aio_flush and bdrv_flush in
>>>> bdrv_close. We probably don't really need to call bdrv_flush to operate
>>>> correctly, but it can't hurt and bdrv_close shouldn't happen that often
>>>> anyway.
>>>>
>>>>
>>> I agree. Re: qemu_aio_flush, we have to wait for it to complete, which
>>> gets a little complicated in bdrv_close().
>>>
>> qemu_aio_flush is the function that waits for requests to complete.
>>
>
> Please excuse me while my head explodes ;-)
>
> I think we've got a bit of a problem.
>
> We have:
>
> 1) bdrv_flush() - sends an fdatasync
>
> 2) bdrv_aio_flush() - sends an fdatasync using the thread pool
>
> 3) qemu_aio_flush() - waits for all pending aio requests to complete
>
> But we use bdrv_aio_flush() to implement a barrier and we don't actually
> preserve those barrier semantics in the thread pool.
Not really. We use it to implement flush commands, which I think don't
necessarily constitute a barrier by themselves.
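(To keep the three apart, roughly what each one boils down to in the
raw-posix case; a simplified sketch, not the literal QEMU code, and the
qemu_aio_requests_pending() predicate is hypothetical:)

    /* 1) bdrv_flush(): synchronous, blocks the caller. */
    static int raw_flush_sketch(BlockDriverState *bs)
    {
        BDRVRawState *s = bs->opaque;
        return fdatasync(s->fd);    /* data is on disk when this returns */
    }

    /* 2) bdrv_aio_flush(): queues the same fdatasync on the thread pool
     *    and returns immediately; the completion callback fires whenever
     *    some worker thread has run it. */

    /* 3) qemu_aio_flush(): does no disk I/O at all; it runs the event
     *    loop until every in-flight AIO request has completed. */
    static void qemu_aio_flush_sketch(void)
    {
        while (qemu_aio_requests_pending()) {
            qemu_aio_wait();
        }
    }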
> That is:
>
> If I do:
>
> bdrv_aio_write() -> A
> bdrv_aio_write() -> B
> bdrv_aio_flush() -> C
>
> This will get queued as three requests on the thread pool. (A) is a
> write, (B) is a write, and (C) is a fdatasync.
>
> But if this gets picked up by three separate threads, the ordering isn't
> guaranteed. It might be C, B, A. So semantically, is bdrv_aio_flush()
> supposed to flush any *pending* writes or any *completed* writes? If
> it's the latter, we're okay, but if it's the former, we're broken.
Right, so don't do that. ;-)
bdrv_aio_flush, as I understand it, is meant to flush only completed
writes. We've had this discussion before, and if I understood correctly,
this is also how real hardware generally works. So to get barrier
semantics, you as an OS need to drain your queue, i.e. wait for A and B
to complete before you issue C.
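(A sketch of that ordering using the existing qemu_aio_* API; the
pending counter and the callbacks are mine, not from this thread:)

    static int pending_writes;

    static void write_done(void *opaque, int ret)
    {
        pending_writes--;
    }

    static void flush_done(void *opaque, int ret)
    {
        /* once this fires, A, B and the flush have all completed */
    }

    static void write_write_barrier(BlockDriverState *bs,
                                    int64_t sec_a, QEMUIOVector *qiov_a, int n_a,
                                    int64_t sec_b, QEMUIOVector *qiov_b, int n_b)
    {
        pending_writes = 2;
        bdrv_aio_writev(bs, sec_a, qiov_a, n_a, write_done, NULL);  /* A */
        bdrv_aio_writev(bs, sec_b, qiov_b, n_b, write_done, NULL);  /* B */

        while (pending_writes > 0) {
            qemu_aio_wait();        /* drain the queue: wait for A and B */
        }

        bdrv_aio_flush(bs, flush_done, NULL);                       /* C */
    }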
Christoph should be able to elaborate on this.
Kevin