From: Paolo Bonzini <pbonzini@redhat.com>
To: "Alex Bennée" <alex.bennee@linaro.org>, qemu-devel@nongnu.org
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Akihiko Odaki" <akihiko.odaki@gmail.com>,
"Gerd Hoffmann" <kraxel@redhat.com>
Subject: Re: [RFC PATCH] main-loop: introduce WITH_QEMU_IOTHREAD_LOCK
Date: Tue, 25 Oct 2022 14:45:30 +0200
Message-ID: <b5317dc1-30fe-c59f-2a41-a47e7346a616@redhat.com>
In-Reply-To: <20221024171909.434818-1-alex.bennee@linaro.org>
On 10/24/22 19:19, Alex Bennée wrote:
> This helper intends to ape our other auto-unlocking helpers such as
> WITH_QEMU_LOCK_GUARD. The principal difference is that the iothread
> lock is often nested, so it needs a little extra bookkeeping to ensure
> we don't double lock or unlock a lock taken higher up the call chain.
>
> Convert some of the common routines that follow this pattern to use
> the new wrapper.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Looks good, but having to check whether the lock is taken is a bit of an
antipattern.
What do you think about having both WITH_QEMU_IOTHREAD_LOCK() and
MAYBE_WITH_QEMU_IOTHREAD_LOCK()?
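
Something along these lines, perhaps (a completely untested sketch; both
names and the assertion are only suggestions):

/*
 * Plain variant: the caller is expected *not* to hold the BQL yet, so
 * lock and unlock unconditionally for the duration of the scope.  A
 * real version would want the __COUNTER__ trick so it can be nested
 * in the same function without shadowing warnings.
 */
#define WITH_QEMU_IOTHREAD_LOCK                                       \
    for (bool bql_taken_ = (assert(!qemu_mutex_iothread_locked()),    \
                            qemu_mutex_lock_iothread(), true);        \
         bql_taken_;                                                  \
         qemu_mutex_unlock_iothread(), bql_taken_ = false)

/*
 * MAYBE_ variant: keeps the "take the BQL only if it is not already
 * held" bookkeeping from your patch.
 */
#define MAYBE_WITH_QEMU_IOTHREAD_LOCK \
    WITH_QEMU_IOTHREAD_LOCK_(glue(qemu_lockable_auto, __COUNTER__))

That way the common case documents the caller's expectations, and the
conditional bookkeeping stays confined to the few places that genuinely
cannot know whether they run under the BQL.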
Also lots of bonus points for finally renaming these functions to
"*_main_thread" rather than "*_iothread" since, confusingly, iothreads
(plural) are the only ones that do not and cannot take the "iothread lock".
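
For example (the name is purely illustrative, and modulo the
__FILE__/__LINE__ tracking that the current locking macro does), the
cpu_reset_interrupt() hunk below would then read:

    WITH_QEMU_MAIN_THREAD_LOCK {
        cpu->interrupt_request &= ~mask;
    }

which is much harder to misread as something only an iothread may do.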
Paolo
> ---
> include/qemu/main-loop.h | 41 ++++++++++++++++++++++++++++++++++++++++
> hw/core/cpu-common.c     | 10 ++--------
> util/rcu.c               | 40 ++++++++++++++++-----------------------
> ui/cocoa.m               | 18 ++++--------------
> 4 files changed, 63 insertions(+), 46 deletions(-)
>
> diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
> index aac707d073..604e1823da 100644
> --- a/include/qemu/main-loop.h
> +++ b/include/qemu/main-loop.h
> @@ -341,6 +341,47 @@ void qemu_mutex_lock_iothread_impl(const char *file, int line);
> */
> void qemu_mutex_unlock_iothread(void);
>
> +/**
> + * WITH_QEMU_IOTHREAD_LOCK - nested lock of iothread
> + *
> + * This is a specialised form of WITH_QEMU_LOCK_GUARD which is used to
> + * safely encapsulate code that needs the BQL. The main difference is
> + * that the BQL may already be held, so we save its state on entry so
> + * we know whether to release it once we leave the scope of the guard.
> + */
> +
> +typedef struct {
> + bool taken;
> +} IoThreadLocked;
> +
> +static inline IoThreadLocked *qemu_iothread_auto_lock(IoThreadLocked *x)
> +{
> + bool locked = qemu_mutex_iothread_locked();
> + if (!locked) {
> + qemu_mutex_lock_iothread();
> + x->taken = true;
> + }
> + return x;
> +}
> +
> +static inline void qemu_iothread_auto_unlock(IoThreadLocked *x)
> +{
> + if (x->taken) {
> + qemu_mutex_unlock_iothread();
> + }
> +}
> +
> +G_DEFINE_AUTOPTR_CLEANUP_FUNC(IoThreadLocked, qemu_iothread_auto_unlock)
> +
> +#define WITH_QEMU_IOTHREAD_LOCK_(var) \
> + for (g_autoptr(IoThreadLocked) var = \
> + qemu_iothread_auto_lock(&(IoThreadLocked) {}); \
> + var; \
> + qemu_iothread_auto_unlock(var), var = NULL)
> +
> +#define WITH_QEMU_IOTHREAD_LOCK \
> + WITH_QEMU_IOTHREAD_LOCK_(glue(qemu_lockable_auto, __COUNTER__))
> +
> /*
> * qemu_cond_wait_iothread: Wait on condition for the main loop mutex
> *
> diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
> index f9fdd46b9d..0a60f916a9 100644
> --- a/hw/core/cpu-common.c
> +++ b/hw/core/cpu-common.c
> @@ -70,14 +70,8 @@ CPUState *cpu_create(const char *typename)
> * BQL here if we need to. cpu_interrupt assumes it is held.*/
> void cpu_reset_interrupt(CPUState *cpu, int mask)
> {
> - bool need_lock = !qemu_mutex_iothread_locked();
> -
> - if (need_lock) {
> - qemu_mutex_lock_iothread();
> - }
> - cpu->interrupt_request &= ~mask;
> - if (need_lock) {
> - qemu_mutex_unlock_iothread();
> + WITH_QEMU_IOTHREAD_LOCK {
> + cpu->interrupt_request &= ~mask;
> }
> }
>
> diff --git a/util/rcu.c b/util/rcu.c
> index b6d6c71cff..02e7491de1 100644
> --- a/util/rcu.c
> +++ b/util/rcu.c
> @@ -320,35 +320,27 @@ static void drain_rcu_callback(struct rcu_head *node)
> void drain_call_rcu(void)
> {
> struct rcu_drain rcu_drain;
> - bool locked = qemu_mutex_iothread_locked();
>
> memset(&rcu_drain, 0, sizeof(struct rcu_drain));
> qemu_event_init(&rcu_drain.drain_complete_event, false);
>
> - if (locked) {
> - qemu_mutex_unlock_iothread();
> - }
> -
> -
> - /*
> - * RCU callbacks are invoked in the same order as in which they
> - * are registered, thus we can be sure that when 'drain_rcu_callback'
> - * is called, all RCU callbacks that were registered on this thread
> - * prior to calling this function are completed.
> - *
> - * Note that since we have only one global queue of the RCU callbacks,
> - * we also end up waiting for most of RCU callbacks that were registered
> - * on the other threads, but this is a side effect that shoudn't be
> - * assumed.
> - */
> -
> - qatomic_inc(&in_drain_call_rcu);
> - call_rcu1(&rcu_drain.rcu, drain_rcu_callback);
> - qemu_event_wait(&rcu_drain.drain_complete_event);
> - qatomic_dec(&in_drain_call_rcu);
> + WITH_QEMU_IOTHREAD_LOCK {
> + /*
> + * RCU callbacks are invoked in the same order as in which they
> + * are registered, thus we can be sure that when 'drain_rcu_callback'
> + * is called, all RCU callbacks that were registered on this thread
> + * prior to calling this function are completed.
> + *
> + * Note that since we have only one global queue of the RCU callbacks,
> +     * we also end up waiting for most of the RCU callbacks that were
> +     * registered on the other threads, but this is a side effect that
> +     * shouldn't be assumed.
> + */
>
> - if (locked) {
> - qemu_mutex_lock_iothread();
> + qatomic_inc(&in_drain_call_rcu);
> + call_rcu1(&rcu_drain.rcu, drain_rcu_callback);
> + qemu_event_wait(&rcu_drain.drain_complete_event);
> + qatomic_dec(&in_drain_call_rcu);
> }
>
> }
> diff --git a/ui/cocoa.m b/ui/cocoa.m
> index 660d3e0935..f8bd315bdd 100644
> --- a/ui/cocoa.m
> +++ b/ui/cocoa.m
> @@ -115,27 +115,17 @@ static void cocoa_switch(DisplayChangeListener *dcl,
>
> static void with_iothread_lock(CodeBlock block)
> {
> - bool locked = qemu_mutex_iothread_locked();
> - if (!locked) {
> - qemu_mutex_lock_iothread();
> - }
> - block();
> - if (!locked) {
> - qemu_mutex_unlock_iothread();
> + WITH_QEMU_IOTHREAD_LOCK {
> + block();
> }
> }
>
> static bool bool_with_iothread_lock(BoolCodeBlock block)
> {
> - bool locked = qemu_mutex_iothread_locked();
> bool val;
>
> - if (!locked) {
> - qemu_mutex_lock_iothread();
> - }
> - val = block();
> - if (!locked) {
> - qemu_mutex_unlock_iothread();
> + WITH_QEMU_IOTHREAD_LOCK {
> + val = block();
> }
> return val;
> }