From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Reiter <s.reiter@proxmox.com>
Cc: qemu-block@nongnu.org, armbru@redhat.com, qemu-devel@nongnu.org,
	mreitz@redhat.com, stefanha@redhat.com, t.lamprecht@proxmox.com
Subject: Re: [RFC PATCH 2/3] block: Allow bdrv_run_co() from different AioContext
Date: Tue, 26 May 2020 18:42:15 +0200
Message-ID: <20200526164215.GA8886@linux.fritz.box>
In-Reply-To: <20200525164150.GD19863@linux.fritz.box>

Am 25.05.2020 um 18:41 hat Kevin Wolf geschrieben:
> Am 25.05.2020 um 16:18 hat Stefan Reiter geschrieben:
> > On 5/12/20 4:43 PM, Kevin Wolf wrote:
> > > Coroutine functions that are entered through bdrv_run_co() are already
> > > safe to call from synchronous code in a different AioContext because
> > > bdrv_coroutine_enter() will schedule them in the context of the node.
> > > 
> > > However, the coroutine fastpath still requires that we're already in the
> > > right AioContext when called in coroutine context.
> > > 
> > > In order to make the behaviour more consistent and to make life a bit
> > > easier for callers, let's check the AioContext and automatically move
> > > the current coroutine around if we're not in the right context yet.
> > > 
> > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > ---
> > >   block/io.c | 15 ++++++++++++++-
> > >   1 file changed, 14 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/block/io.c b/block/io.c
> > > index c1badaadc9..7808e8bdc0 100644
> > > --- a/block/io.c
> > > +++ b/block/io.c
> > > @@ -895,8 +895,21 @@ static int bdrv_run_co(BlockDriverState *bs, CoroutineEntry *entry,
> > >                          void *opaque, int *ret)
> > >   {
> > >       if (qemu_in_coroutine()) {
> > > -        /* Fast-path if already in coroutine context */
> > > +        Coroutine *self = qemu_coroutine_self();
> > > +        AioContext *bs_ctx = bdrv_get_aio_context(bs);
> > > +        AioContext *co_ctx = qemu_coroutine_get_aio_context(self);
> > > +
> > > +        if (bs_ctx != co_ctx) {
> > > +            /* Move to the iothread of the node */
> > > +            aio_co_schedule(bs_ctx, self);
> > > +            qemu_coroutine_yield();
> > 
> > I'm pretty sure this can lead to a race: if the thread we're re-scheduling
> > to manages to enter us before we reach qemu_coroutine_yield, we'll get an
> > abort ("Co-routine re-entered recursively"), since co->caller is still set.
> > 
> > I've seen this happen in our code when I try to do the scheduling fandangle
> > there.
> 
> Ah, crap. I guess letting a coroutine re-schedule itself is only safe
> within the same thread then.
> 
> > Is there a safer way to have a coroutine reschedule itself? Some lock
> > missing?
> 
> There is no problem that can't be solved by adding another level of
> indirection... We would have to schedule a BH in the original thread
> that will only schedule the coroutine in its new thread after it has
> yielded.
> 
> Maybe we should actually introduce a helper function that moves the
> current coroutine to a different AioContext this way.

Like this:

https://repo.or.cz/qemu/kevin.git/commitdiff/ed0244ba4ac699f7e8eaf7512ff25645cf43bda2

The series for which I need this isn't quite ready, so I haven't sent it
as a patch yet, but if it proves useful in other contexts, we can always
commit it without the rest.
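
Roughly, the helper looks like this (a simplified sketch of the BH-based
approach; the commit above has the authoritative version, names and
details here may differ from it):

#include "qemu/osdep.h"
#include "block/aio.h"
#include "qemu/coroutine.h"

typedef struct AioCoRescheduleSelf {
    Coroutine *co;
    AioContext *new_ctx;
} AioCoRescheduleSelf;

/* Runs as a BH in the old AioContext, after the coroutine has yielded */
static void aio_co_reschedule_self_bh(void *opaque)
{
    AioCoRescheduleSelf *data = opaque;
    aio_co_schedule(data->new_ctx, data->co);
}

/* Move the currently running coroutine to new_ctx */
void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx)
{
    AioContext *old_ctx = qemu_get_current_aio_context();

    if (old_ctx != new_ctx) {
        AioCoRescheduleSelf data = {
            .co = qemu_coroutine_self(),
            .new_ctx = new_ctx,
        };
        /*
         * Don't schedule the coroutine in the target context directly:
         * the other thread could enter it before we have yielded here.
         * Instead, a BH in the old context does the scheduling, and it
         * only runs after the yield below.
         */
        aio_bh_schedule_oneshot(old_ctx, aio_co_reschedule_self_bh,
                                &data);
        qemu_coroutine_yield();
        assert(qemu_get_current_aio_context() == new_ctx);
    }
}

The fast path in bdrv_run_co() would then just call
aio_co_reschedule_self(bdrv_get_aio_context(bs)) instead of open-coding
the aio_co_schedule()/qemu_coroutine_yield() pair.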

Kevin



Thread overview: 26+ messages
2020-05-12 14:43 [RFC PATCH 0/3] block: Synchronous bdrv_*() from coroutine in different AioContext Kevin Wolf
2020-05-12 14:43 ` [RFC PATCH 1/3] block: Factor out bdrv_run_co() Kevin Wolf
2020-05-12 15:37   ` Eric Blake
2020-05-20  9:09     ` Philippe Mathieu-Daudé
2020-05-20 11:14       ` Vladimir Sementsov-Ogievskiy
2020-05-12 14:43 ` [RFC PATCH 2/3] block: Allow bdrv_run_co() from different AioContext Kevin Wolf
2020-05-12 16:02   ` Thomas Lamprecht
2020-05-12 19:29     ` Kevin Wolf
2020-05-25 14:18   ` Stefan Reiter
2020-05-25 16:41     ` Kevin Wolf
2020-05-26 16:42       ` Kevin Wolf [this message]
2020-05-27  8:56         ` Stefan Reiter
2020-05-12 14:43 ` [RFC PATCH 3/3] block: Assert we're running in the right thread Kevin Wolf
2020-05-14 13:52   ` Stefan Reiter
2020-05-14 14:30     ` Kevin Wolf
2020-05-20  9:12       ` Philippe Mathieu-Daudé
2020-05-14 13:21 ` [RFC PATCH 0/3] block: Synchronous bdrv_*() from coroutine in different AioContext Thomas Lamprecht
2020-05-14 14:26   ` Kevin Wolf
2020-05-19 12:32     ` Vladimir Sementsov-Ogievskiy
2020-05-19 13:54       ` Denis Plotnikov
2020-05-19 14:18         ` Kevin Wolf
2020-05-19 15:05           ` Denis Plotnikov
2020-05-19 15:29             ` Kevin Wolf
2020-05-19 15:48               ` Vladimir Sementsov-Ogievskiy
2020-05-19 16:06                 ` Eric Blake
2020-05-20  7:23               ` Denis Plotnikov
