From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"ehabkost@redhat.com" <ehabkost@redhat.com>,
	"mreitz@redhat.com" <mreitz@redhat.com>,
	"stefanha@redhat.com" <stefanha@redhat.com>,
	"crosa@redhat.com" <crosa@redhat.com>
Subject: Re: [Qemu-devel] [Qemu-block] [RFC PATCH] coroutines: generate wrapper code
Date: Tue, 12 Feb 2019 12:58:40 +0100
Message-ID: <20190212115840.GB5283@localhost.localdomain>
In-Reply-To: <20190212032242.GC28401@stefanha-x1.localdomain>

On 12.02.2019 at 04:22, Stefan Hajnoczi wrote:
> On Mon, Feb 11, 2019 at 09:38:37AM +0000, Vladimir Sementsov-Ogievskiy wrote:
> > 11.02.2019 6:42, Stefan Hajnoczi wrote:
> > > On Fri, Feb 08, 2019 at 05:11:22PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > >> Hi all!
> > >>
> > >> We have a very frequent pattern of wrapping a coroutine_fn function
> > >> to be called from non-coroutine context:
> > >>
> > >>    - create a structure to pack the parameters
> > >>    - create a function that calls the original function, taking its
> > >>      parameters from the struct
> > >>    - create a wrapper which, when called from non-coroutine context,
> > >>      creates a coroutine, enters it and starts a poll loop.
> > >>
> > >> Here is a draft of template code + example how it can be used to drop a
> > >> lot of similar code.
> > >>
> > >> Hope someone besides me likes it. :)
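
[For concreteness, the hand-written pattern described above looks roughly
like the following. This is only a sketch, loosely modeled on the
bdrv_prwv_co() code in block/io.c; MyOpCo, my_op() and my_op_co() are
invented names, not real QEMU functions:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/block.h"

#define NOT_DONE 0x7fffffff /* marker: the coroutine has not finished yet */

/* Step 1: a struct packing the parameters (plus a field for the result) */
typedef struct MyOpCo {
    BlockDriverState *bs;
    int64_t offset;
    int ret;
} MyOpCo;

/* The actual worker, assumed to exist somewhere as a coroutine_fn */
static int coroutine_fn my_op_co(BlockDriverState *bs, int64_t offset);

/* Step 2: an entry point that unpacks the struct and calls the worker */
static void coroutine_fn my_op_entry(void *opaque)
{
    MyOpCo *opco = opaque;

    opco->ret = my_op_co(opco->bs, opco->offset);
}

/* Step 3: the wrapper; call directly when already in coroutine context,
 * otherwise create a coroutine, enter it in the BDS's AioContext and
 * spin a nested event loop until the operation has produced a result. */
static int my_op(BlockDriverState *bs, int64_t offset)
{
    MyOpCo opco = {
        .bs     = bs,
        .offset = offset,
        .ret    = NOT_DONE,
    };

    if (qemu_in_coroutine()) {
        my_op_entry(&opco);
    } else {
        Coroutine *co = qemu_coroutine_create(my_op_entry, &opco);
        bdrv_coroutine_enter(bs, co);
        BDRV_POLL_WHILE(bs, opco.ret == NOT_DONE);
    }
    return opco.ret;
}

The proposed template would generate the struct, the entry point and the
wrapper mechanically from the coroutine_fn's signature.]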
> > > 
> > > My 2 cents.  Cons:
> > > 
> > >   * Synchronous poll loops are an anti-pattern.  They block all of QEMU
> > >     with the big mutex held.  Making them easier to write is
> > >     questionable because we should aim to have as few of these as
> > >     possible.
> > 
> > Understood. Do we have a concept, or some kind of target for the future,
> > for getting rid of all these poll-loops? What is the right way, at least
> > for the block layer?
> 
> It's non-trivial.  The nested event loop could be flattened if there were
> a mechanism to stop further activity on a specific object only (e.g. a
> BlockDriverState).  That way the event loop could keep processing events,
> and device emulation could keep running, for all other objects.

The mechanism to stop activity on BlockDriverStates is bdrv_drain(). But
I don't see how this is related. Nested event loops aren't for stopping
concurrent activity (events related to async operations started earlier
are still processed in nested event loops), but for making progress on
the operation we're waiting for. They happen when synchronous code calls
into asynchronous code.

The way to get rid of them is to make their callers async. I think we
would come a long way if we ran QMP command handlers (at least the
block-related ones) and qemu-img operations in coroutines instead of
blocking while we wait for the result.
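
[As a sketch of what that would change, compare the two call paths below.
qmp_my_command() and monitor_dispatch_my_command() are invented names, and
my_op()/my_op_co() are the illustrative wrappers from the sketch earlier in
this thread; only the control flow matters here:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/block.h"
#include "qapi/error.h"

/* Today: the handler runs outside coroutine context, so the block layer
 * call ends up in my_op()'s nested BDRV_POLL_WHILE() loop, blocking all
 * of QEMU, with the BQL held, until the operation completes. */
void qmp_my_command(const char *device, Error **errp)
{
    BlockDriverState *bs = bdrv_lookup_bs(device, device, errp);

    if (!bs) {
        return;
    }
    my_op(bs, 0);
}

/* Coroutine-based dispatch: the monitor enters the handler as a
 * coroutine, so the handler can call the coroutine_fn directly and any
 * wait becomes a yield back to the one flat, top-level event loop. */
static void coroutine_fn qmp_my_command_co(void *opaque)
{
    BlockDriverState *bs = opaque;

    my_op_co(bs, 0);
    /* ...complete the QMP response from here... */
}

void monitor_dispatch_my_command(BlockDriverState *bs)
{
    Coroutine *co = qemu_coroutine_create(qmp_my_command_co, bs);

    qemu_coroutine_enter(co); /* returns at the handler's first yield */
}

While the handler coroutine waits, the main loop keeps running instead of
spinning in a nested aio_poll().]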

> Unfortunately there are interactions between objects, like block jobs
> that act on multiple BDSes, so it becomes even trickier.
> 
> A simple way of imagining this is to make each object an "actor"
> coroutine.  The coroutine processes a single message (request) at a time
> and yields when it needs to wait.  Callers send messages and expect
> asynchronous responses.  This model is bad for efficiency (parallelism
> is necessary) but at least it offers a sane way of thinking about
> multiple asynchronous components coordinating together.  (It's another
> way of saying, let's put everything into coroutines.)
> 
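[For illustration, here is a minimal sketch of how such an actor could be
built from the existing coroutine primitives; Actor, Msg and all function
names are invented, and handle_request() is assumed to exist:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "qemu/queue.h"
#include "block/aio.h"

typedef struct Msg {
    int request;
    QSIMPLEQ_ENTRY(Msg) next;
} Msg;

typedef struct Actor {
    Coroutine *co;
    bool sleeping;
    QSIMPLEQ_HEAD(, Msg) mailbox;
} Actor;

/* Assumed to exist: processes one request and may yield while waiting */
static void coroutine_fn handle_request(Msg *m);

/* The actor: one message at a time; every yield hands control back to
 * the flat event loop, so other objects keep making progress. */
static void coroutine_fn actor_run(void *opaque)
{
    Actor *a = opaque;

    for (;;) {
        while (QSIMPLEQ_EMPTY(&a->mailbox)) {
            a->sleeping = true;
            qemu_coroutine_yield();
        }
        Msg *m = QSIMPLEQ_FIRST(&a->mailbox);
        QSIMPLEQ_REMOVE_HEAD(&a->mailbox, next);
        handle_request(m);
    }
}

/* Callers enqueue a message and receive their answer asynchronously */
static void actor_send(Actor *a, Msg *m)
{
    QSIMPLEQ_INSERT_TAIL(&a->mailbox, m, next);
    if (a->sleeping) {
        a->sleeping = false;
        aio_co_wake(a->co); /* reschedule the actor coroutine */
    }
}

static void actor_start(Actor *a)
{
    a->sleeping = false;
    QSIMPLEQ_INIT(&a->mailbox);
    a->co = qemu_coroutine_create(actor_run, a);
    qemu_coroutine_enter(a->co);
}

Nothing in this model ever calls aio_poll() from the middle of an
operation: anything that needs to wait yields, which is the "put
everything into coroutines" endpoint described above.]
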
> The advantage of a flat event loop is that a hang in one object (e.g.
> I/O getting stuck in one file) doesn't freeze the entire event loop.

I think this one is more theoretical, because you'll still have
dependencies between the components. blk_drain_all() isn't hanging
because the code is designed suboptimally, but because its semantics are
to wait until all requests have completed. And it's called because those
semantics are required.

Kevin
