From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, Fam Zheng <fam@euphon.net>,
qemu-block@nongnu.org,
Emanuele Giuseppe Esposito <eesposit@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Hanna Reitz <hreitz@redhat.com>
Subject: Re: [PATCH 1/6] block: don't acquire AioContext lock in bdrv_drain_all()
Date: Wed, 8 Mar 2023 09:26:21 -0500
Message-ID: <20230308142621.GD299426@fedora>
In-Reply-To: <ZAhL0Xz4tuUWPeXY@redhat.com>

On Wed, Mar 08, 2023 at 09:48:17AM +0100, Kevin Wolf wrote:
> Am 07.03.2023 um 20:20 hat Stefan Hajnoczi geschrieben:
> > On Tue, Mar 07, 2023 at 06:17:22PM +0100, Kevin Wolf wrote:
> > > Am 01.03.2023 um 21:57 hat Stefan Hajnoczi geschrieben:
> > > > There is no need for the AioContext lock in bdrv_drain_all() because
> > > > nothing in AIO_WAIT_WHILE() needs the lock and the condition is atomic.
> > > >
> > > > Note that the NULL AioContext argument to AIO_WAIT_WHILE() is odd. In
> > > > the future it can be removed.
> > >
> > > It can be removed for all callers that run in the main loop context. For
> > > code running in an iothread, it's still important to pass a non-NULL
> > > context. This makes me doubt that the ctx parameter can really be
> > > removed without changing more.
> > >
> > > Is your plan to remove the if from AIO_WAIT_WHILE_INTERNAL(), too, and
> > > to poll qemu_get_current_aio_context() instead of ctx_ or the main
> > > context?
> >
> > This is what I'd like once everything has been converted to
> > AIO_WAIT_WHILE_UNLOCKED() - and at this point we might as well call it
> > AIO_WAIT_WHILE() again:
> >
> > #define AIO_WAIT_WHILE(cond) ({                                    \
> >     bool waited_ = false;                                          \
> >     AioWait *wait_ = &global_aio_wait;                             \
> >     /* Increment wait_->num_waiters before evaluating cond. */     \
> >     qatomic_inc(&wait_->num_waiters);                              \
> >     /* Paired with smp_mb in aio_wait_kick(). */                   \
> >     smp_mb();                                                      \
> >     while ((cond)) {                                               \
> >         aio_poll(qemu_get_current_aio_context(), true);            \
> >         waited_ = true;                                            \
> >     }                                                              \
> >     qatomic_dec(&wait_->num_waiters);                              \
> >     waited_; })
>
> Ok, yes, this is what I tried to describe above.
>
> > However, I just realized this only works in the main loop thread because
> > that's where aio_wait_kick() notifications are received. An IOThread
> > running AIO_WAIT_WHILE() won't be woken when another thread (including
> > the main loop thread) calls aio_wait_kick().
>
> Which is of course a limitation we already have today. You can wait for
> things in your own iothread, or for all threads from the main loop.
>
> However, in the future multiqueue world, the first case probably becomes
> pretty much useless because even for the same node, you could get
> activity in any thread.
>
> So essentially AIO_WAIT_WHILE() becomes GLOBAL_STATE_CODE(). Which is
> probably a good idea anyway, but I'm not entirely sure how many places
> we currently have where it's called from an iothread. I know the drain
> in mirror_run(), but Emanuele already had a patch in his queue where
> bdrv_co_yield_to_drain() schedules drain in the main context, so if that
> works, mirror_run() would be solved.
>
> https://gitlab.com/eesposit/qemu/-/commit/63562351aca4fb05d5711eb410feb96e64b5d4ad
>
> > I would propose introducing a QemuCond for each condition that we wait
> > on, but QemuCond lacks event loop integration. The current thread would
> > be unable to run aio_poll() while also waiting on a QemuCond.
> >
> > Life outside coroutines is hard, man! I need to think about this more.
> > Luckily this problem doesn't block this patch series.
>
> I hope that we don't really need all of this if we can limit running
> synchronous code to the main loop.

Great idea, I think you're right. I'll audit the code to find the
IOThread AIO_WAIT_WHILE() callers and maybe a future patch series can
work on that.
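
For the record, the reason only the main loop thread gets woken is that
aio_wait_kick() schedules its dummy bottom half on the main loop's
AioContext unconditionally. Sketching util/aio-wait.c from memory, so
the exact code may differ:

static void dummy_bh_cb(void *opaque)
{
    /* Nothing to do; the BH only exists to make aio_poll() return */
}

void aio_wait_kick(void)
{
    /* Paired with smp_mb() in AIO_WAIT_WHILE() */
    smp_mb();

    if (qatomic_read(&global_aio_wait.num_waiters)) {
        /*
         * This always targets the main context, so an IOThread blocked
         * in aio_poll() on its own context never sees it.
         */
        aio_bh_schedule_oneshot(qemu_get_aio_context(), dummy_bh_cb, NULL);
    }
}
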
> > > > There is an assertion in
> > > > AIO_WAIT_WHILE() that checks that we're in the main loop AioContext and
> > > > we would lose that check by dropping the argument. However, that was a
> > > > precursor to the GLOBAL_STATE_CODE()/IO_CODE() macros and is now a
> > > > duplicate check. So I think we won't lose much by dropping it, but let's
> > > > do a few more AIO_WAIT_WHILE_UNLOCKED() conversions of this sort to
> > > > confirm this is the case.
> > > >
> > > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > >
> > > Yes, it seems that we don't lose much, except maybe some consistency in
> > > the intermediate state. The commit message could state a bit more
> > > directly what we gain, though. Since you mention removing the parameter
> > > as a future possibility, I assume that's the goal with it, but I
> > > wouldn't be sure just from reading the commit message.
> >
> > AIO_WAIT_WHILE() callers need to be weaned off the AioContext lock.
> > That's the main motivation and this patch series converts the easy
> > cases where we already don't need the lock. Dropping the function
> > argument eventually is a side benefit.
>
> Yes, but the conversion to AIO_WAIT_WHILE_UNLOCKED() could be done with
> ctx instead of NULL. So moving to NULL is a separate change that needs a
> separate explanation. You could even argue that it should be a separate
> patch if it's an independent change.
>
> Or am I missing something and keeping ctx would actually break things?

Yes, the ctx argument does not need to be modified when converting from
AIO_WAIT_WHILE() to AIO_WAIT_WHILE_UNLOCKED(). Passing it bothers me
because we don't really use it when unlock=false.
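
To illustrate (paraphrasing AIO_WAIT_WHILE_INTERNAL() from
include/block/aio-wait.h from memory, backslash continuations dropped,
so the details may differ), this is the main loop branch:

} else {
    assert(qemu_get_current_aio_context() ==
           qemu_get_aio_context());
    while ((cond)) {
        if (unlock && ctx_) {
            aio_context_release(ctx_);
        }
        aio_poll(qemu_get_aio_context(), true);
        if (unlock && ctx_) {
            aio_context_acquire(ctx_);
        }
        waited_ = true;
    }
}

With unlock=false the release/acquire pairs compile away and ctx_ only
feeds the in_aio_context_home_thread() check that selects this branch
in the first place.
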
Would you like me to keep ctx non-NULL for now?
Stefan