qemu-devel.nongnu.org archive mirror
From: Fam Zheng <famz@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org,
	"Michael S. Tsirkin" <mst@redhat.com>,
	peterx@redhat.com, Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 2/2] virtio-scsi/virtio-blk: Disable poll handlers when stopping vq handler
Date: Wed, 12 Sep 2018 09:31:34 +0800	[thread overview]
Message-ID: <20180912013134.GA2526@lemon.usersys.redhat.com> (raw)
In-Reply-To: <271b24b6-c8eb-1425-3e73-e0ec734bafc6@redhat.com>

On Tue, 09/11 17:30, Paolo Bonzini wrote:
> On 11/09/2018 16:12, Fam Zheng wrote:
> > On Tue, 09/11 13:32, Paolo Bonzini wrote:
> >> On 10/09/2018 16:56, Fam Zheng wrote:
> >>> We have this unwanted call stack:
> >>>
> >>>   > ...
> >>>   > #13 0x00005586602b7793 in virtio_scsi_handle_cmd_vq
> >>>   > #14 0x00005586602b8d66 in virtio_scsi_data_plane_handle_cmd
> >>>   > #15 0x00005586602ddab7 in virtio_queue_notify_aio_vq
> >>>   > #16 0x00005586602dfc9f in virtio_queue_host_notifier_aio_poll
> >>>   > #17 0x00005586607885da in run_poll_handlers_once
> >>>   > #18 0x000055866078880e in try_poll_mode
> >>>   > #19 0x00005586607888eb in aio_poll
> >>>   > #20 0x0000558660784561 in aio_wait_bh_oneshot
> >>>   > #21 0x00005586602b9582 in virtio_scsi_dataplane_stop
> >>>   > #22 0x00005586605a7110 in virtio_bus_stop_ioeventfd
> >>>   > #23 0x00005586605a9426 in virtio_pci_stop_ioeventfd
> >>>   > #24 0x00005586605ab808 in virtio_pci_common_write
> >>>   > #25 0x0000558660242396 in memory_region_write_accessor
> >>>   > #26 0x00005586602425ab in access_with_adjusted_size
> >>>   > #27 0x0000558660245281 in memory_region_dispatch_write
> >>>   > #28 0x00005586601e008e in flatview_write_continue
> >>>   > #29 0x00005586601e01d8 in flatview_write
> >>>   > #30 0x00005586601e04de in address_space_write
> >>>   > #31 0x00005586601e052f in address_space_rw
> >>>   > #32 0x00005586602607f2 in kvm_cpu_exec
> >>>   > #33 0x0000558660227148 in qemu_kvm_cpu_thread_fn
> >>>   > #34 0x000055866078bde7 in qemu_thread_start
> >>>   > #35 0x00007f5784906594 in start_thread
> >>>   > #36 0x00007f5784639e6f in clone
> >>>
> >>> Avoid it with the aio_disable_external/aio_enable_external pair, so that
> >>> no vq poll handlers can be called in aio_wait_bh_oneshot.
> >>
> >> I don't understand.  We are in the vCPU thread, so not in the
> >> AioContext's home thread.  Why is aio_wait_bh_oneshot polling rather
> >> than going through the aio_wait_bh path?
> > 
> > What do you mean by 'aio_wait_bh path'? Here is aio_wait_bh_oneshot:
> 
> Sorry, I meant the "atomic_inc(&wait_->num_waiters);" path.  But if this
> backtrace is obtained without dataplane, that's the answer I was seeking.
> 
> > void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
> > {
> >     AioWaitBHData data = {
> >         .cb = cb,
> >         .opaque = opaque,
> >     };
> > 
> >     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> > 
> >     aio_bh_schedule_oneshot(ctx, aio_wait_bh, &data);
> >     AIO_WAIT_WHILE(&data.wait, ctx, !data.done);
> > }
> > 
> > ctx is qemu_aio_context here, so there's no interaction with IOThread.
> 
> In this case it should be okay to have the reentrancy; what is the bug
> that this patch is fixing?

The same symptom as in the previous patch: virtio_scsi_handle_cmd_vq hangs. The
cause of the hang is fixed by the previous patch, but I don't think the handler
should be invoked at all while we're in the middle of
virtio_scsi_dataplane_stop(). Applying either one of the two patches avoids the
problem, but this one is the more superficial of the two.
What do you think?

Fam

Thread overview: 15+ messages
2018-09-10 14:56 [Qemu-devel] [PATCH 0/2] virtio-scsi: Fix QEMU hang with vIOMMU and ATS Fam Zheng
2018-09-10 14:56 ` [Qemu-devel] [PATCH 1/2] virtio: Return true from virtio_queue_empty if broken Fam Zheng
2018-09-10 14:56 ` [Qemu-devel] [PATCH 2/2] virtio-scsi/virtio-blk: Disable poll handlers when stopping vq handler Fam Zheng
2018-09-11 11:32   ` Paolo Bonzini
2018-09-11 14:12     ` Fam Zheng
2018-09-11 15:30       ` Paolo Bonzini
2018-09-12  1:31         ` Fam Zheng [this message]
2018-09-12 11:11           ` Paolo Bonzini
2018-09-12 11:50             ` Fam Zheng
2018-09-12 12:42               ` Paolo Bonzini
2018-09-13  6:03                 ` Fam Zheng
2018-09-13  9:11                   ` Paolo Bonzini
2018-09-13 10:04                     ` [Qemu-devel] [Qemu-block] " Paolo Bonzini
2018-09-13 16:00                       ` Alex Williamson
2018-09-14  2:45                         ` Peter Xu
