From: Kevin Wolf <kwolf@redhat.com>
To: Eric Blake <eblake@redhat.com>
Cc: qemu-block@nongnu.org, stefanha@redhat.com, eesposit@redhat.com,
pbonzini@redhat.com, vsementsov@yandex-team.ru,
qemu-devel@nongnu.org
Subject: Re: [PATCH 05/21] block: Introduce bdrv_schedule_unref()
Date: Mon, 21 Aug 2023 18:59:02 +0200 [thread overview]
Message-ID: <ZOOX1pdZL84XAo3u@redhat.com> (raw)
In-Reply-To: <cthrktm4jug2g7pexz4h6ocactvatye6cqsqqjwesybvg55anm@3iayrpa3rra2>
On 18.08.2023 at 18:26, Eric Blake wrote:
> On Fri, Aug 18, 2023 at 11:24:00AM -0500, Eric Blake wrote:
> > > +++ b/block/graph-lock.c
> > > @@ -163,17 +163,26 @@ void bdrv_graph_wrlock(BlockDriverState *bs)
> > > void bdrv_graph_wrunlock(void)
> > > {
> > > GLOBAL_STATE_CODE();
> > > - QEMU_LOCK_GUARD(&aio_context_list_lock);
> > > assert(qatomic_read(&has_writer));
> > >
> > > + WITH_QEMU_LOCK_GUARD(&aio_context_list_lock) {
> > > + /*
> > > + * No need for memory barriers, this works in pair with
> > > + * the slow path of rdlock() and both take the lock.
> > > + */
> > > + qatomic_store_release(&has_writer, 0);
> > > +
> > > + /* Wake up all coroutine that are waiting to read the graph */
> > > + qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
> >
> > So if I understand coroutines correctly, this says all pending
> > coroutines are now scheduled to run (or maybe they do try to run here,
> > but then immediately return control back to this coroutine to await
> > the right lock conditions since we are still in the block guarded by
> > list_lock)...
> >
> > > + }
> > > +
> > > /*
> > > - * No need for memory barriers, this works in pair with
> > > - * the slow path of rdlock() and both take the lock.
> > > + * Run any BHs that were scheduled during the wrlock section and that
> > > + * callers might expect to have finished (e.g. bdrv_unref() calls). Do this
> > > + * only after restarting coroutines so that nested event loops in BHs don't
> > > + * deadlock if their condition relies on the coroutine making progress.
> > > */
> > > - qatomic_store_release(&has_writer, 0);
> > > -
> > > - /* Wake up all coroutine that are waiting to read the graph */
> > > - qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
> > > + aio_bh_poll(qemu_get_aio_context());
> >
> > ...and as long as the other coroutines sharing this thread don't
> > actually get to make progress until the next point at which the
> > current coroutine yields, and as long as our aio_bh_poll() doesn't
> > also include a yield point, then we are ensured that the BH has
> > completed before the next yield point in our caller.
> >
> > There are times (like today) where I'm still trying to wrap my mind
> > about the subtle differences between true multi-threading
> > vs. cooperative coroutines sharing a single thread via the use of
> > yield points. coroutines are cool when they can get rid of some of
> > the races that you have to worry about in true multi-threading.
>
> That said, once we introduce multi-queue, can we ever have a scenario
> where a different iothread might be trying to access the graph and
> perform a reopen in the time while this thread has not completed the
> BH close? Or is that protected by some other mutual exclusion (where
> the only one we have to worry about is reopen by a coroutine in the
> same thread, because all other threads are locked out of graph
> modifications)?
We don't have to worry about that one, because reopen (and taking the
writer lock in general) happens only in the main thread, which is exactly
the thread this code runs in.
What we do need to take into consideration is that aio_bh_poll() could
call something that wants to take the writer lock and modify the graph
again. That isn't really a problem, though, because we're already done at
that point: any readers we resumed above will synchronise with the new
writer in the usual way, and one side will simply have to wait. There is
nothing left waiting to be resumed that could deadlock.
Kevin