From: Kevin Wolf <kwolf@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: mreitz@redhat.com, stefanha@redhat.com, qemu-devel@nongnu.org,
qemu-block@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH] virtio-blk: schedule virtio_notify_config to run on main context
Date: Fri, 13 Sep 2019 11:45:33 +0200
Message-ID: <20190913094533.GB8312@dhcp-200-226.str.redhat.com>
In-Reply-To: <87woecwnmv.fsf@redhat.com>
On 13.09.2019 at 11:28, Sergio Lopez wrote:
>
> Kevin Wolf <kwolf@redhat.com> writes:
>
> > On 12.09.2019 at 21:51, Michael S. Tsirkin wrote:
> >> On Thu, Sep 12, 2019 at 08:19:25PM +0200, Sergio Lopez wrote:
> >> > Another AioContext-related issue, and this is a tricky one.
> >> >
> >> > Executing a QMP block_resize request for a virtio-blk device running
> >> > on an iothread may cause a deadlock involving the following mutexes:
> >> >
> >> > - main thread
> >> > * Has acquired: qemu_mutex_global.
> >> > * Is trying to acquire: iothread AioContext lock via
> >> > AIO_WAIT_WHILE (after aio_poll).
> >> >
> >> > - iothread
> >> > * Has acquired: AioContext lock.
> >> > * Is trying to acquire: qemu_mutex_global (via
> >> > virtio_notify_config->prepare_mmio_access).
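To make the cycle concrete, this is roughly what the two acquisition
orders look like (schematic only, not a literal code path):

    /* main thread, processing QMP block_resize */
    qemu_mutex_lock_iothread();     /* holds the global mutex (BQL) */
    AIO_WAIT_WHILE(ctx, ...);       /* tries to take the iothread's
                                     * AioContext lock */

    /* iothread */
    aio_context_acquire(ctx);       /* holds the AioContext lock */
    virtio_notify_config(vdev);     /* prepare_mmio_access() then tries
                                     * to take the BQL -> deadlock */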
> >>
> >> Hmm, is this really the only case where an iothread takes the qemu
> >> mutex? If any such access can deadlock, don't we need a generic
> >> solution? Maybe the main thread could drop the qemu mutex
> >> before taking the iothread's AioContext lock?
> >
> > The rule is that iothreads must not take the qemu mutex. If they do
> > (like in this case), it's a bug.
> >
> > Maybe we could actually assert this in qemu_mutex_lock_iothread()?
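Something like the following, perhaps (sketch only;
iothread_in_current_thread() is a made-up name for whatever check we
would actually have to write):

    void qemu_mutex_lock_iothread(void)
    {
        /* Hypothetical check: iothreads must never take the BQL */
        assert(!iothread_in_current_thread());
        /* ... existing locking ... */
    }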
> >
> >> > With this change, virtio_blk_resize checks if it's being called from a
> >> > coroutine context running on a non-main thread, and if that's the
> >> > case, creates a new coroutine and schedules it to be run on the main
> >> > thread.
> >> >
> >> > This works, but it means the actual operation is done
> >> > asynchronously, perhaps opening a window in which a "device_del"
> >> > operation could slip in and remove the VirtIODevice before
> >> > virtio_notify_config() is executed.
> >> >
> >> > I *think* it shouldn't be possible, as BHs will be processed before
> >> > any new QMP/monitor command, but I'm open to a different approach.
> >> >
> >> > RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1744955
> >> > Signed-off-by: Sergio Lopez <slp@redhat.com>
> >> > ---
> >> > hw/block/virtio-blk.c | 25 ++++++++++++++++++++++++-
> >> > 1 file changed, 24 insertions(+), 1 deletion(-)
> >> >
> >> > diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> >> > index 18851601cb..c763d071f6 100644
> >> > --- a/hw/block/virtio-blk.c
> >> > +++ b/hw/block/virtio-blk.c
> >> > @@ -16,6 +16,7 @@
> >> >  #include "qemu/iov.h"
> >> >  #include "qemu/module.h"
> >> >  #include "qemu/error-report.h"
> >> > +#include "qemu/main-loop.h"
> >> >  #include "trace.h"
> >> >  #include "hw/block/block.h"
> >> >  #include "hw/qdev-properties.h"
> >> > @@ -1086,11 +1087,33 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
> >> >      return 0;
> >> >  }
> >> >  
> >> > +static void coroutine_fn virtio_resize_co_entry(void *opaque)
> >> > +{
> >> > +    VirtIODevice *vdev = opaque;
> >> > +
> >> > +    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> >> > +    virtio_notify_config(vdev);
> >> > +    aio_wait_kick();
> >> > +}
> >> > +
> >> >  static void virtio_blk_resize(void *opaque)
> >> >  {
> >> >      VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
> >> > +    Coroutine *co;
> >> >  
> >> > -    virtio_notify_config(vdev);
> >> > +    if (qemu_in_coroutine() &&
> >> > +        qemu_get_current_aio_context() != qemu_get_aio_context()) {
> >> > +        /*
> >> > +         * virtio_notify_config() needs to acquire the global mutex,
> >> > +         * so calling it from a coroutine running on a non-main context
> >> > +         * may cause a deadlock. Instead, create a new coroutine and
> >> > +         * schedule it to be run on the main thread.
> >> > +         */
> >> > +        co = qemu_coroutine_create(virtio_resize_co_entry, vdev);
> >> > +        aio_co_schedule(qemu_get_aio_context(), co);
> >> > +    } else {
> >> > +        virtio_notify_config(vdev);
> >> > +    }
> >> >  }
> >
> > Wouldn't a simple BH suffice (aio_bh_schedule_oneshot)? I don't see why
> > you need a coroutine when you never yield.
>
> You're right, that's actually simpler; I hadn't thought of it.
>
> Do you see any drawbacks or should I send a non-RFC fixed version of
> this patch?
Sending a fixed non-RFC version sounds good to me.
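Untested sketch of the BH variant I have in mind, reusing your helper
as a plain BH callback:

    static void virtio_resize_cb(void *opaque)
    {
        VirtIODevice *vdev = opaque;

        /* BHs scheduled on the main context run in the main loop
         * thread, where taking the global mutex is safe. */
        assert(qemu_get_current_aio_context() == qemu_get_aio_context());
        virtio_notify_config(vdev);
    }

    static void virtio_blk_resize(void *opaque)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(opaque);

        aio_bh_schedule_oneshot(qemu_get_aio_context(), virtio_resize_cb,
                                vdev);
    }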
Kevin