From: Stefan Hajnoczi <stefanha@gmail.com>
To: Fam Zheng <famz@redhat.com>
Cc: qemu-devel <qemu-devel@nongnu.org>, Kevin Wolf <kwolf@redhat.com>,
qemu block <qemu-block@nongnu.org>, Max Reitz <mreitz@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH for-2.9-rc5 v2] block: Drain BH in bdrv_drained_begin
Date: Fri, 14 Apr 2017 09:45:50 +0100
Message-ID: <CAJSP0QVgYNHh6P8HG20fzuq1dpMOAVs4ciY8Ho1hYLHED3xa_g@mail.gmail.com>
In-Reply-To: <20170414080206.2301-1-famz@redhat.com>
On Fri, Apr 14, 2017 at 9:02 AM, Fam Zheng <famz@redhat.com> wrote:
> During block job completion, nothing prevents
> block_job_defer_to_main_loop_bh from being called in a nested
> aio_poll(), which causes trouble in code paths such as this one:
>
> qmp_block_commit
> commit_active_start
> bdrv_reopen
> bdrv_reopen_multiple
> bdrv_reopen_prepare
> bdrv_flush
> aio_poll
> aio_bh_poll
> aio_bh_call
> block_job_defer_to_main_loop_bh
> stream_complete
> bdrv_reopen
>
> block_job_defer_to_main_loop_bh is the last step of the stream job,
> which should have been "paused" by the bdrv_drained_begin/end in
> bdrv_reopen_multiple, but it is not, because it takes the form of a
> main loop BH.
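For anyone who hasn't looked at this mechanism: it works roughly like
this (a simplified sketch from memory, not the literal blockjob.c
code; DeferData and defer_to_main_loop are made-up names standing in
for the real ones):

    /* The job schedules a bottom half in the main loop's AioContext;
     * aio_bh_poll() later invokes it -- possibly from a nested
     * aio_poll(), as in the trace above. */
    typedef struct {
        BlockJob *job;
        void (*fn)(BlockJob *job, void *opaque);
        void *opaque;
    } DeferData;

    static void defer_bh(void *opaque)
    {
        DeferData *data = opaque;
        data->fn(data->job, data->opaque);   /* e.g. stream_complete() */
        g_free(data);
    }

    static void defer_to_main_loop(BlockJob *job,
                                   void (*fn)(BlockJob *, void *),
                                   void *opaque)
    {
        DeferData *data = g_new(DeferData, 1);
        data->job = job;
        data->fn = fn;
        data->opaque = opaque;
        aio_bh_schedule_oneshot(qemu_get_aio_context(), defer_bh, data);
    }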
>
> Just as block jobs themselves are paused between drained_begin and
> drained_end, the BHs they schedule must be kept from running as well.
> To achieve this, this patch forces BH draining in BDRV_POLL_WHILE.
>
> Also, because the BH in question can call bdrv_unref() and replace
> children, protect @bs carefully to avoid a use-after-free.
>
> As a side effect this fixes a hang in block_job_detach_aio_context
> during system_reset when a block job is ready:
>
> #0 0x0000555555aa79f3 in bdrv_drain_recurse
> #1 0x0000555555aa825d in bdrv_drained_begin
> #2 0x0000555555aa8449 in bdrv_drain
> #3 0x0000555555a9c356 in blk_drain
> #4 0x0000555555aa3cfd in mirror_drain
> #5 0x0000555555a66e11 in block_job_detach_aio_context
> #6 0x0000555555a62f4d in bdrv_detach_aio_context
> #7 0x0000555555a63116 in bdrv_set_aio_context
> #8 0x0000555555a9d326 in blk_set_aio_context
> #9 0x00005555557e38da in virtio_blk_data_plane_stop
> #10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
> #11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
> #12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
> #13 0x00005555559f6a18 in virtio_pci_reset
> #14 0x00005555559139a9 in qdev_reset_one
> #15 0x0000555555916738 in qbus_walk_children
> #16 0x0000555555913318 in qdev_walk_children
> #17 0x0000555555916738 in qbus_walk_children
> #18 0x00005555559168ca in qemu_devices_reset
> #19 0x000055555581fcbb in pc_machine_reset
> #20 0x00005555558a4d96 in qemu_system_reset
> #21 0x000055555577157a in main_loop_should_exit
> #22 0x000055555577157a in main_loop
> #23 0x000055555577157a in main
>
> The rationale is that the loop in block_job_detach_aio_context cannot
> make any progress in pausing/completing the job, because bs->in_flight
> is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop
> BH. With this patch, it does.
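To make the hang concrete, the loop in question has roughly this shape
(paraphrased from memory, not the literal blockjob.c source):

    /* Pause the job, then poll until the pause or completion is
     * observed.  bdrv_drain()'s BDRV_POLL_WHILE only waited while
     * bs->in_flight > 0, so when the only pending work was the
     * deferred main-loop BH, each iteration made no progress and this
     * spun forever.  With the patch, the BH is drained and the loop
     * terminates. */
    static void detach_aio_context_sketch(BlockJob *job)
    {
        block_job_ref(job);    /* the job may terminate while polling */
        block_job_pause(job);
        while (!job->paused && !job->completed) {
            bdrv_drain(blk_bs(job->blk));
        }
        block_job_unref(job);
    }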
>
> Reported-by: Jeff Cody <jcody@redhat.com>
> Signed-off-by: Fam Zheng <famz@redhat.com>
>
> ---
>
> v2: Do the BH poll in BDRV_POLL_WHILE to cover bdrv_drain_all_begin as
> well. [Kevin]
> ---
> block/io.c | 10 +++++++---
> include/block/block.h | 21 +++++++++++++--------
> 2 files changed, 20 insertions(+), 11 deletions(-)
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> diff --git a/block/io.c b/block/io.c
> index 8706bfa..a472157 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -158,7 +158,7 @@ bool bdrv_requests_pending(BlockDriverState *bs)
>
> static bool bdrv_drain_recurse(BlockDriverState *bs)
> {
> - BdrvChild *child;
> + BdrvChild *child, *tmp;
> bool waited;
>
> waited = BDRV_POLL_WHILE(bs, atomic_read(&bs->in_flight) > 0);
> @@ -167,8 +167,12 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
> bs->drv->bdrv_drain(bs);
> }
>
> - QLIST_FOREACH(child, &bs->children, next) {
> - waited |= bdrv_drain_recurse(child->bs);
> + QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> + BlockDriverState *bs = child->bs;
> + assert(bs->refcnt > 0);
> + bdrv_ref(bs);
> + waited |= bdrv_drain_recurse(bs);
> + bdrv_unref(bs);
> }
There are other loops over bs->children in the block layer code. If
they indirectly call aio_poll() then the child bs could disappear.
They should probably also take a reference.
That said, I wonder if this really solves the problem of graph changes
during bs->children traversal. We now have a reference but are
iterating over stale data.
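One direction that might help (an untested sketch, and the helper name
is made up): take references on all children up front and iterate over
a snapshot, so the traversal no longer depends on the live, mutable
bs->children list:

    static bool bdrv_drain_children_snapshot(BlockDriverState *bs)
    {
        GPtrArray *snapshot = g_ptr_array_new();
        BdrvChild *child;
        bool waited = false;
        guint i;

        /* Grab a stable, referenced view of the children first... */
        QLIST_FOREACH(child, &bs->children, next) {
            bdrv_ref(child->bs);
            g_ptr_array_add(snapshot, child->bs);
        }
        /* ...then drain from the snapshot, where polling can no longer
         * invalidate our iterator. */
        for (i = 0; i < snapshot->len; i++) {
            BlockDriverState *child_bs = g_ptr_array_index(snapshot, i);
            waited |= bdrv_drain_recurse(child_bs);
            bdrv_unref(child_bs);
        }
        g_ptr_array_free(snapshot, TRUE);
        return waited;
    }

Even that only freezes the set of nodes visited; children attached
during the drain would still be missed.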
This is a separate issue and doesn't need to hold up this patch.
>
> return waited;
> diff --git a/include/block/block.h b/include/block/block.h
> index 97d4330..3f05e71 100644
> --- a/include/block/block.h
> +++ b/include/block/block.h
> @@ -381,12 +381,13 @@ void bdrv_drain_all(void);
>
> #define BDRV_POLL_WHILE(bs, cond) ({ \
> bool waited_ = false; \
> + bool busy_ = true; \
> BlockDriverState *bs_ = (bs); \
> AioContext *ctx_ = bdrv_get_aio_context(bs_); \
> if (aio_context_in_iothread(ctx_)) { \
> - while ((cond)) { \
> - aio_poll(ctx_, true); \
> - waited_ = true; \
> + while ((cond) || busy_) { \
> + busy_ = aio_poll(ctx_, (cond)); \
> + waited_ |= !!(cond); \
> } \
> } else { \
> assert(qemu_get_current_aio_context() == \
> @@ -398,11 +399,15 @@ void bdrv_drain_all(void);
> */ \
> assert(!bs_->wakeup); \
> bs_->wakeup = true; \
> - while ((cond)) { \
> - aio_context_release(ctx_); \
> - aio_poll(qemu_get_aio_context(), true); \
> - aio_context_acquire(ctx_); \
> - waited_ = true; \
> + while (busy_) { \
> + if ((cond)) { \
> + waited_ = busy_ = true; \
> + aio_context_release(ctx_); \
> + aio_poll(qemu_get_aio_context(), true); \
> + aio_context_acquire(ctx_); \
> + } else { \
> + busy_ = aio_poll(ctx_, false); \
> + } \
> } \
> bs_->wakeup = false; \
> } \
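For context, a typical caller looks like this (simplified sketch
modeled on bdrv_flush() in block/io.c; FlushCo, NOT_DONE and
bdrv_flush_co_entry are the names used there):

    /* Spawn a coroutine and poll until it reports completion.  With
     * busy_ in the loop, BDRV_POLL_WHILE() keeps calling aio_poll() as
     * long as it reports progress, so main-loop BHs scheduled by block
     * jobs are drained even once the condition itself is false. */
    int bdrv_flush_sketch(BlockDriverState *bs)
    {
        FlushCo flush_co = { .bs = bs, .ret = NOT_DONE };
        Coroutine *co;

        co = qemu_coroutine_create(bdrv_flush_co_entry, &flush_co);
        qemu_coroutine_enter(co);
        BDRV_POLL_WHILE(bs, flush_co.ret == NOT_DONE);
        return flush_co.ret;
    }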
> --
> 2.9.3
>
>
Thread overview: 12+ messages
2017-04-14 8:02 [Qemu-devel] [PATCH for-2.9-rc5 v2] block: Drain BH in bdrv_drained_begin Fam Zheng
2017-04-14 8:10 ` Fam Zheng
2017-04-14 8:45 ` Stefan Hajnoczi [this message]
2017-04-14 8:51 ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
2017-04-14 17:10 ` Paolo Bonzini
2017-04-16 9:37 ` Stefan Hajnoczi
2017-04-17 3:33 ` Fam Zheng
2017-04-18 8:16 ` [Qemu-devel] " Paolo Bonzini
2017-04-17 8:27 ` [Qemu-devel] [Qemu-block] " Fam Zheng
2017-04-17 11:21 ` Jeff Cody
2017-04-18 8:18 ` [Qemu-devel] " Paolo Bonzini
2017-04-18 8:36 ` Fam Zheng