* [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine
@ 2016-04-01 9:46 Fam Zheng
2016-04-01 9:49 ` Paolo Bonzini
2016-04-01 12:05 ` Laurent Vivier
From: Fam Zheng @ 2016-04-01 9:46 UTC
To: qemu-devel; +Cc: Kevin Wolf, lvivier, qemu-block, Stefan Hajnoczi, pbonzini
Using a nested aio_poll() in a coroutine is a bad idea. This patch
replaces the aio_poll() loop in bdrv_drain() with a BH when called from
coroutine context.
For example, the bdrv_drain() in mirror.c can hang when a guest-issued
request is pending on it in qemu_co_mutex_lock().

The mirror coroutine in this case has just finished a request, and the
block job is about to complete. It calls bdrv_drain(), which waits for
the other coroutine to complete. The other coroutine is a scsi-disk
request. The deadlock happens when the latter is in turn waiting for the
former to yield/terminate, in qemu_co_mutex_lock(). The state flow is as
below (assuming a qcow2 image):
mirror coroutine                     scsi-disk coroutine
-------------------------------------------------------------
do last write

  qcow2:qemu_co_mutex_lock()
  ...
                                     scsi disk read

                                       tracked request begin

                                       qcow2:qemu_co_mutex_lock.enter

  qcow2:qemu_co_mutex_unlock()

bdrv_drain
  while (has tracked request)
    aio_poll()
In the scsi-disk coroutine, the qemu_co_mutex_lock() will never return
because the mirror coroutine is blocked in the aio_poll(blocking=true).
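
As a quick orientation before the diff, here is a condensed sketch of the
resulting control flow (an illustration only; the patch below is the
actual change):

/* Sketch: detect coroutine context in bdrv_drain() and defer the
 * blocking work to a bottom half instead of nesting aio_poll(). */
void bdrv_drain(BlockDriverState *bs)
{
    bdrv_drain_recurse(bs);
    if (qemu_in_coroutine()) {
        /* Schedules a BH in bdrv_get_aio_context(bs), yields, and is
         * re-entered once the BH has run bdrv_drain() to completion. */
        bdrv_co_drain(bs);
        return;
    }
    /* ... the existing blocking aio_poll() loop runs here for
     * non-coroutine callers ... */
}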
Reported-by: Laurent Vivier <lvivier@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
---
block/io.c | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/block/io.c b/block/io.c
index c4869b9..d0a4551 100644
--- a/block/io.c
+++ b/block/io.c
@@ -253,6 +253,42 @@ static void bdrv_drain_recurse(BlockDriverState *bs)
     }
 }
 
+typedef struct {
+    Coroutine *co;
+    BlockDriverState *bs;
+    bool done;
+} BdrvCoDrainData;
+
+static void bdrv_co_drain_bh_cb(void *opaque)
+{
+    BdrvCoDrainData *data = opaque;
+    Coroutine *co = data->co;
+
+    bdrv_drain(data->bs);
+    data->done = true;
+    qemu_coroutine_enter(co, NULL);
+}
+
+static void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
+{
+    QEMUBH *bh;
+    BdrvCoDrainData data;
+
+    assert(qemu_in_coroutine());
+    data = (BdrvCoDrainData) {
+        .co = qemu_coroutine_self(),
+        .bs = bs,
+        .done = false,
+    };
+    bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_drain_bh_cb, &data);
+    qemu_bh_schedule(bh);
+
+    do {
+        qemu_coroutine_yield();
+    } while (!data.done);
+    qemu_bh_delete(bh);
+}
+
 /*
  * Wait for pending requests to complete on a single BlockDriverState subtree,
  * and suspend block driver's internal I/O until next request arrives.
@@ -269,6 +305,10 @@ void bdrv_drain(BlockDriverState *bs)
     bool busy = true;
 
     bdrv_drain_recurse(bs);
+    if (qemu_in_coroutine()) {
+        bdrv_co_drain(bs);
+        return;
+    }
     while (busy) {
         /* Keep iterating */
         bdrv_flush_io_queue(bs);
--
2.7.4
* Re: [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine
2016-04-01 9:46 [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine Fam Zheng
@ 2016-04-01 9:49 ` Paolo Bonzini
2016-04-01 10:09 ` Fam Zheng
2016-04-01 12:05 ` Laurent Vivier
From: Paolo Bonzini @ 2016-04-01 9:49 UTC
To: Fam Zheng, qemu-devel; +Cc: Kevin Wolf, lvivier, Stefan Hajnoczi, qemu-block
On 01/04/2016 11:46, Fam Zheng wrote:
> +
> +static void bdrv_co_drain_bh_cb(void *opaque)
> +{
> +    BdrvCoDrainData *data = opaque;
> +    Coroutine *co = data->co;
> +
> +    bdrv_drain(data->bs);
> +    data->done = true;
> +    qemu_coroutine_enter(co, NULL);
> +}
> +
> +static void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
> +{
> +    QEMUBH *bh;
> +    BdrvCoDrainData data;
> +
> +    assert(qemu_in_coroutine());
> +    data = (BdrvCoDrainData) {
> +        .co = qemu_coroutine_self(),
> +        .bs = bs,
> +        .done = false,
> +    };
> +    bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_drain_bh_cb, &data);
> +    qemu_bh_schedule(bh);
> +
> +    do {
> +        qemu_coroutine_yield();
> +    } while (!data.done);
The loop and "done" are not necessary. Also,
> +    qemu_bh_delete(bh);
this can be moved to bdrv_co_drain_bh_cb before bdrv_drain, so that the
bottom half doesn't slow down the event loop until bdrv_drain completes.
Paolo
> +}
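
Taken together, the two suggestions would give the callback roughly the
following shape (a sketch of the review comments only, not the follow-up
patch that was actually posted; moving qemu_bh_delete() into the callback
means the QEMUBH pointer has to live in BdrvCoDrainData):

typedef struct {
    Coroutine *co;
    BlockDriverState *bs;
    QEMUBH *bh;
    bool done;
} BdrvCoDrainData;

static void bdrv_co_drain_bh_cb(void *opaque)
{
    BdrvCoDrainData *data = opaque;
    Coroutine *co = data->co;

    /* Delete the BH first, so it does not slow down the event loop for
     * however long bdrv_drain() takes to complete. */
    qemu_bh_delete(data->bh);
    bdrv_drain(data->bs);
    data->done = true;
    qemu_coroutine_enter(co, NULL);
}

bdrv_co_drain() would then store the BH in data.bh before scheduling it
and drop its own qemu_bh_delete() after the yield.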
* Re: [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine
2016-04-01 9:49 ` Paolo Bonzini
@ 2016-04-01 10:09 ` Fam Zheng
From: Fam Zheng @ 2016-04-01 10:09 UTC
To: Paolo Bonzini
Cc: Kevin Wolf, lvivier, Stefan Hajnoczi, qemu-devel, qemu-block
On Fri, 04/01 11:49, Paolo Bonzini wrote:
>
>
> On 01/04/2016 11:46, Fam Zheng wrote:
> > +
> > +static void bdrv_co_drain_bh_cb(void *opaque)
> > +{
> > +    BdrvCoDrainData *data = opaque;
> > +    Coroutine *co = data->co;
> > +
> > +    bdrv_drain(data->bs);
> > +    data->done = true;
> > +    qemu_coroutine_enter(co, NULL);
> > +}
> > +
> > +static void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
> > +{
> > +    QEMUBH *bh;
> > +    BdrvCoDrainData data;
> > +
> > +    assert(qemu_in_coroutine());
> > +    data = (BdrvCoDrainData) {
> > +        .co = qemu_coroutine_self(),
> > +        .bs = bs,
> > +        .done = false,
> > +    };
> > +    bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_drain_bh_cb, &data);
> > +    qemu_bh_schedule(bh);
> > +
> > +    do {
> > +        qemu_coroutine_yield();
> > +    } while (!data.done);
>
> The loop and "done" are not necessary. Also,
I was trying to protect against bugs similar to the one fixed in
e424aff5f30, but you're right: we can make this an assertion, and the loop
is not needed. If the calling coroutine is resumed unexpectedly, we will
catch it and fix it.
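
For illustration, with the loop dropped the tail of bdrv_co_drain() might
read as follows (a sketch of the idea discussed here, not the version that
was eventually posted; it assumes the BH is deleted in the callback as
suggested above):

    qemu_bh_schedule(bh);

    qemu_coroutine_yield();
    /* Being re-entered before the BH has run would be the kind of
     * spurious wakeup that e424aff5f30 fixed; treat it as a caller bug
     * and fail loudly instead of looping. */
    assert(data.done);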
>
> > +    qemu_bh_delete(bh);
>
> this can be moved to bdrv_co_drain_bh_cb before bdrv_drain, so that the
> bottom half doesn't slow down the event loop until bdrv_drain completes.
Good point! Will fix.
Thanks,
Fam
* Re: [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine
2016-04-01 9:46 [Qemu-devel] [PATCH] block: Fix bdrv_drain in coroutine Fam Zheng
2016-04-01 9:49 ` Paolo Bonzini
@ 2016-04-01 12:05 ` Laurent Vivier
From: Laurent Vivier @ 2016-04-01 12:05 UTC
To: Fam Zheng, qemu-devel; +Cc: Kevin Wolf, pbonzini, qemu-block, Stefan Hajnoczi
On 01/04/2016 11:46, Fam Zheng wrote:
> Using a nested aio_poll() in a coroutine is a bad idea. This patch
> replaces the aio_poll() loop in bdrv_drain() with a BH when called from
> coroutine context.
>
> For example, the bdrv_drain() in mirror.c can hang when a guest-issued
> request is pending on it in qemu_co_mutex_lock().
>
> The mirror coroutine in this case has just finished a request, and the
> block job is about to complete. It calls bdrv_drain(), which waits for
> the other coroutine to complete. The other coroutine is a scsi-disk
> request. The deadlock happens when the latter is in turn waiting for the
> former to yield/terminate, in qemu_co_mutex_lock(). The state flow is as
> below (assuming a qcow2 image):
>
>   mirror coroutine                     scsi-disk coroutine
>   -------------------------------------------------------------
>   do last write
>
>     qcow2:qemu_co_mutex_lock()
>     ...
>                                        scsi disk read
>
>                                          tracked request begin
>
>                                          qcow2:qemu_co_mutex_lock.enter
>
>     qcow2:qemu_co_mutex_unlock()
>
>   bdrv_drain
>     while (has tracked request)
>       aio_poll()
>
> In the scsi-disk coroutine, the qemu_co_mutex_lock() will never return
> because the mirror coroutine is blocked in the aio_poll(blocking=true).
>
> Reported-by: Laurent Vivier <lvivier@redhat.com>
I've checked whether this fixes the problem I reported (on ppc64 and
x86_64), and it does.
Thanks Fam,
Laurent
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
> block/io.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 40 insertions(+)
>
> diff --git a/block/io.c b/block/io.c
> index c4869b9..d0a4551 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -253,6 +253,42 @@ static void bdrv_drain_recurse(BlockDriverState *bs)
>      }
>  }
>  
> +typedef struct {
> +    Coroutine *co;
> +    BlockDriverState *bs;
> +    bool done;
> +} BdrvCoDrainData;
> +
> +static void bdrv_co_drain_bh_cb(void *opaque)
> +{
> +    BdrvCoDrainData *data = opaque;
> +    Coroutine *co = data->co;
> +
> +    bdrv_drain(data->bs);
> +    data->done = true;
> +    qemu_coroutine_enter(co, NULL);
> +}
> +
> +static void coroutine_fn bdrv_co_drain(BlockDriverState *bs)
> +{
> +    QEMUBH *bh;
> +    BdrvCoDrainData data;
> +
> +    assert(qemu_in_coroutine());
> +    data = (BdrvCoDrainData) {
> +        .co = qemu_coroutine_self(),
> +        .bs = bs,
> +        .done = false,
> +    };
> +    bh = aio_bh_new(bdrv_get_aio_context(bs), bdrv_co_drain_bh_cb, &data);
> +    qemu_bh_schedule(bh);
> +
> +    do {
> +        qemu_coroutine_yield();
> +    } while (!data.done);
> +    qemu_bh_delete(bh);
> +}
> +
>  /*
>   * Wait for pending requests to complete on a single BlockDriverState subtree,
>   * and suspend block driver's internal I/O until next request arrives.
> @@ -269,6 +305,10 @@ void bdrv_drain(BlockDriverState *bs)
>      bool busy = true;
>  
>      bdrv_drain_recurse(bs);
> +    if (qemu_in_coroutine()) {
> +        bdrv_co_drain(bs);
> +        return;
> +    }
>      while (busy) {
>          /* Keep iterating */
>          bdrv_flush_io_queue(bs);
>