* [Qemu-devel] [PATCH v2] block: fix bdrv_flush() ordering in bdrv_close()
@ 2013-07-02 13:36 Stefan Hajnoczi
From: Stefan Hajnoczi @ 2013-07-02 13:36 UTC (permalink / raw)
To: qemu-devel; +Cc: Kevin Wolf, qemu-stable, Stefan Hajnoczi
Since 80ccf93b we flush the block device during close. The
bdrv_drain_all() call should come before bdrv_flush() to ensure guest
write requests have completed. Otherwise we may miss pending writes
when flushing.
Call bdrv_drain_all() again for safety as the final step after
bdrv_flush(). This should not be necessary but we can be paranoid here
in case bdrv_flush() left I/O pending.
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
v2:
* Drain after block_job_cancel_sync() [kwolf]
block.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/block.c b/block.c
index 6c493ad..183fec8 100644
--- a/block.c
+++ b/block.c
@@ -1358,11 +1358,12 @@ void bdrv_reopen_abort(BDRVReopenState *reopen_state)
void bdrv_close(BlockDriverState *bs)
{
- bdrv_flush(bs);
if (bs->job) {
block_job_cancel_sync(bs->job);
}
- bdrv_drain_all();
+ bdrv_drain_all(); /* complete I/O */
+ bdrv_flush(bs);
+ bdrv_drain_all(); /* in case flush left pending I/O */
notifier_list_notify(&bs->close_notifiers, bs);
if (bs->drv) {
--
1.8.1.4
* Re: [Qemu-devel] [PATCH v2] block: fix bdrv_flush() ordering in bdrv_close()
From: Kevin Wolf @ 2013-07-02 13:54 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: qemu-devel, qemu-stable
Am 02.07.2013 um 15:36 hat Stefan Hajnoczi geschrieben:
> Since 80ccf93b we flush the block device during close. The
> bdrv_drain_all() call should come before bdrv_flush() to ensure guest
> write requests have completed. Otherwise we may miss pending writes
> when flushing.
>
> Call bdrv_drain_all() again for safety as the final step after
> bdrv_flush(). This should not be necessary but we can be paranoid here
> in case bdrv_flush() left I/O pending.
>
> Cc: qemu-stable@nongnu.org
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
* Re: [Qemu-devel] [PATCH v2] block: fix bdrv_flush() ordering in bdrv_close()
From: Stefan Hajnoczi @ 2013-07-03 8:34 UTC (permalink / raw)
To: qemu-devel; +Cc: Kevin Wolf, qemu-stable
On Tue, Jul 02, 2013 at 03:36:25PM +0200, Stefan Hajnoczi wrote:
> Since 80ccf93b we flush the block device during close. The
> bdrv_drain_all() call should come before bdrv_flush() to ensure guest
> write requests have completed. Otherwise we may miss pending writes
> when flushing.
>
> Call bdrv_drain_all() again for safety as the final step after
> bdrv_flush(). This should not be necessary but we can be paranoid here
> in case bdrv_flush() left I/O pending.
>
> Cc: qemu-stable@nongnu.org
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> v2:
> * Drain after block_job_cancel_sync() [kwolf]
>
> block.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
Applied to my block tree:
https://github.com/stefanha/qemu/commits/block
Stefan