From: Peter Xu <peterx@redhat.com>
To: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
Cc: qemu-devel@nongnu.org,
	"Dmitry Osipenko" <dmitry.osipenko@collabora.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Alex Bennée" <alex.bennee@linaro.org>
Subject: Re: [PATCH 4/5] rcu: Wake the RCU thread when draining
Date: Fri, 7 Nov 2025 09:00:37 -0500
Message-ID: <aQ37hd0fVJltYtt-@x1.local>
In-Reply-To: <5279f15f-bf46-438e-9c1f-0873b08b59e7@rsg.ci.i.u-tokyo.ac.jp>

On Fri, Nov 07, 2025 at 10:47:35AM +0900, Akihiko Odaki wrote:
> On 2025/11/07 6:52, Peter Xu wrote:
> > On Thu, Nov 06, 2025 at 10:40:52AM +0900, Akihiko Odaki wrote:
> > > > > > > +        /*
> > > > > > > +         * Ensure that the forced variable has not been set after fetching
> > > > > > > +         * rcu_call_count; otherwise we may get confused by a force quiescent
> > > > > > > +         * state request for an element later than n.
> > > > > > > +         */
> > > > > > > +        while (qatomic_xchg(&forced, false)) {
> > > > > > > +            sleep = false;
> > > > > > > +            n = qatomic_read(&rcu_call_count);
> > > > > > >             }
> > > > > > 
> > > > > > This is pretty tricky, and I wonder if it would make the code easier to
> > > > > > read if we converted sync_event to a semaphore instead.  As a sem, it
> > > > > > will account for every kick it receives, whether from a call_rcu1() or
> > > > > > an enforced rcu flush, so we don't need to reset it.  Meanwhile, we
> > > > > > don't need to worry about a slightly outdated "n" read, because the 2nd
> > > > > > round of sem_wait() will catch the new "n".
> > > > > > 
> > > > > > Instead, the worst case is that the rcu thread runs one more round
> > > > > > without seeing callbacks on the queue.
> > > > > > 
> > > > > > I'm not sure if that could help simplify the code, maybe also make it
> > > > > > less error-prone.
> > > > > 
> > > > > A semaphore is not applicable here because it will not de-duplicate
> > > > > concurrent kicks of the RCU thread.
> > > > 
> > > > Why are concurrent kicks of the rcu thread a problem?  QemuSemaphore is
> > > > itself thread-safe; meanwhile, IIUC, it only causes call_rcu_thread() to
> > > > loop a few more rounds reading "n", which all looks safe. No?
> > > 
> > > It is safe but incurs overhead and is confusing. QemuEvent represents the
> > > boolean semantics better.
> > > 
> > > I also have difficulty understanding how converting sync_event to a
> > > semaphore simplifies the code. Perhaps some (pseudo)code showing how the
> > > code would look may be useful.
> > 
> > I prepared a patch on top of your current patchset to show what I meant.  I
> > also added comments and some test results showing why I think the sem
> > overhead should be small enough to be fine.
> > 
> > In short, I tested a VM with 8 vCPUs and 4G mem, booting Linux and then
> > powering it off properly, and I only saw <1000 call_rcu1() calls in total.
> > That should be the upper bound of the sem overhead from extra loops in the
> > rcu thread.
> > 
> > It's in patch format, but please treat it as a comment for discussion.
> > Attaching it is just easier for me.
> > 
> > ===8<===
> >  From 71f15ed19050a973088352a8d71b6cc6b7b5f7cf Mon Sep 17 00:00:00 2001
> > From: Peter Xu <peterx@redhat.com>
> > Date: Thu, 6 Nov 2025 16:03:00 -0500
> > Subject: [PATCH] rcu: Make sync_event a semaphore
> > 
> > It could simplify all the reset logic; especially after enforced rcu is
> > introduced we would also need a tweak to re-read "n", which can likewise
> > be avoided with a sem.
> > 
> > However, the sem can introduce an overhead with high-frequency rcu frees.
> > This patch is drafted on the assumption that rcu frees are very rare in
> > QEMU, hence it's not a problem.
> > 
> > When I tested with this command:
> > 
> > qemu-system-x86_64 -M q35,kernel-irqchip=split,suppress-vmdesc=on -smp 8 \
> >    -m 4G -msg timestamp=on -name peter-vm,debug-threads=on -cpu Nehalem \
> >    -accel kvm -qmp unix:/tmp/peter.sock,server,nowait -nographic \
> >    -monitor telnet::6666,server,nowait -netdev user,id=net0,hostfwd=tcp::5555-:22 \
> >    -device e1000,netdev=net0 -device virtio-balloon $DISK
> > 
> > I booted a pre-installed Linux, logged in and powered off, and waited until
> > the VM completely shut down.  I captured fewer than 1000 call_rcu1() calls
> > in total.  It means that for the whole lifetime of such a VM, the max
> > overhead of the call_rcu_thread() loop reading rcu_call_count will be 1000
> > extra loops.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >   util/rcu.c | 36 ++++++++----------------------------
> >   1 file changed, 8 insertions(+), 28 deletions(-)
> > 
> > diff --git a/util/rcu.c b/util/rcu.c
> > index 85f9333f5d..dfe031a5c9 100644
> > --- a/util/rcu.c
> > +++ b/util/rcu.c
> > @@ -54,7 +54,7 @@ static int rcu_call_count;
> >   static QemuMutex rcu_registry_lock;
> >   /* Set when the forced variable is set or rcu_call_count becomes non-zero. */
> > -static QemuEvent sync_event;
> > +static QemuSemaphore sync_event;
> >   /*
> >    * Check whether a quiescent state was crossed between the beginning of
> > @@ -80,7 +80,7 @@ static ThreadList registry = QLIST_HEAD_INITIALIZER(registry);
> >   void force_rcu(void)
> >   {
> >       qatomic_set(&forced, true);
> > -    qemu_event_set(&sync_event);
> > +    qemu_sem_post(&sync_event);
> >   }
> >   /* Wait for previous parity/grace period to be empty of readers.  */
> > @@ -148,7 +148,7 @@ static void wait_for_readers(bool sleep)
> >                */
> >               qemu_event_reset(&rcu_gp_event);
> >           } else if (qatomic_read(&rcu_call_count) >= RCU_CALL_MIN_SIZE ||
> > -                   !sleeps || qemu_event_timedwait(&sync_event, 10)) {
> > +                   !sleeps || qemu_sem_timedwait(&sync_event, 10)) {
> >               /*
> >                * Now one of the following heuristical conditions is satisfied:
> >                * - A decent number of callbacks piled up.
> > @@ -286,7 +286,6 @@ static void *call_rcu_thread(void *opaque)
> >       rcu_register_thread();
> >       for (;;) {
> > -        bool sleep = true;
> >           int n;
> >           /*
> > @@ -294,7 +293,6 @@ static void *call_rcu_thread(void *opaque)
> >            * added before enter_qs() starts.
> >            */
> >           for (;;) {
> > -            qemu_event_reset(&sync_event);
> >               n = qatomic_read(&rcu_call_count);
> >               if (n) {
> >                   break;
> > @@ -303,36 +301,19 @@ static void *call_rcu_thread(void *opaque)
> >   #if defined(CONFIG_MALLOC_TRIM)
> >               malloc_trim(4 * 1024 * 1024);
> >   #endif
> > -            qemu_event_wait(&sync_event);
> > +            qemu_sem_wait(&sync_event);
> >           }
> > -        /*
> > -         * Ensure that an event for a rcu_call_count change will not interrupt
> > -         * wait_for_readers().
> > -         */
> > -        qemu_event_reset(&sync_event);
> > -
> > -        /*
> > -         * Ensure that the forced variable has not been set after fetching
> > -         * rcu_call_count; otherwise we may get confused by a force quiescent
> > -         * state request for an element later than n.
> > -         */
> > -        while (qatomic_xchg(&forced, false)) {
> > -            sleep = false;
> > -            n = qatomic_read(&rcu_call_count);
> > -        }
> > -
> > -        enter_qs(sleep);
> > +        enter_qs(!qatomic_xchg(&forced, false));
> 
> This is not OK; the forced variable may be set after rcu_call_count is
> fetched. In that case, we should either avoid unsetting the force quiescent
> state request for the elements later than "n", or refetch "n".

Indeed I missed that part, but it should be trivial to fix, on top of my
previous patch:

===8<===
diff --git a/util/rcu.c b/util/rcu.c
index dfe031a5c9..aff98d9ee2 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -286,6 +286,7 @@ static void *call_rcu_thread(void *opaque)
     rcu_register_thread();
 
     for (;;) {
+        bool sleep;
         int n;
 
         /*
@@ -293,6 +294,7 @@ static void *call_rcu_thread(void *opaque)
          * added before enter_qs() starts.
          */
         for (;;) {
+            sleep = !qatomic_xchg(&forced, false);
             n = qatomic_read(&rcu_call_count);
             if (n) {
                 break;
@@ -304,7 +306,7 @@ static void *call_rcu_thread(void *opaque)
             qemu_sem_wait(&sync_event);
         }
 
-        enter_qs(!qatomic_xchg(&forced, false));
+        enter_qs(sleep);
         qatomic_sub(&rcu_call_count, n);
         bql_lock();
         while (n > 0) {
===8<===

The idea is still the same: using a semaphore avoids the explicit resets and
a lot of the other ordering constraints around reading rcu_call_count, etc.
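
For reference, with both diffs applied the top of call_rcu_thread()'s main
loop would read roughly like this (reconstructed from the hunks above, not
compiled):

    for (;;) {
        bool sleep;
        int n;

        for (;;) {
            /* Consume any pending force request before (re)reading n. */
            sleep = !qatomic_xchg(&forced, false);
            n = qatomic_read(&rcu_call_count);
            if (n) {
                break;
            }
#if defined(CONFIG_MALLOC_TRIM)
            malloc_trim(4 * 1024 * 1024);
#endif
            qemu_sem_wait(&sync_event);
        }

        enter_qs(sleep);
        qatomic_sub(&rcu_call_count, n);
        /* ... then dequeue and invoke the n callbacks under the BQL ... */
    }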

E.g. even before this series, we still need to reset at exactly the right
time to make sure we capture a set() correctly.  With a sem, all these
issues are gone, simply because we can't miss a post() when it's a counter
rather than a boolean.
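
A minimal illustration of the difference (work_pending() below is just a
stand-in for "rcu_call_count != 0 or forced", not a real helper):

    /*
     * QemuEvent is a boolean: the reset must happen before re-checking the
     * shared state, otherwise a concurrent qemu_event_set() can be wiped
     * out by a later reset and the wakeup is lost.
     */
    qemu_event_reset(&sync_event);
    if (!work_pending()) {
        qemu_event_wait(&sync_event);
    }

    /*
     * QemuSemaphore counts: every qemu_sem_post() is remembered, so a
     * racing post can never be lost; at worst the waiter wakes up once
     * more, re-reads the state, and finds nothing new to do.
     */
    if (!work_pending()) {
        qemu_sem_wait(&sync_event);
    }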

Also, would you please have a look at the other comments I left in the
same email (after the patch I attached)?

https://lore.kernel.org/qemu-devel/aQ0Ys09WtlSPoapm@x1.local/

You can search for "When I was having a closer look, I found some other issues".

Thanks,

-- 
Peter Xu



Thread overview: 29+ messages
2025-10-29  6:12 [PATCH 0/5] virtio-gpu: Force RCU when unmapping blob Akihiko Odaki
2025-10-29  6:12 ` [PATCH 1/5] futex: Add qemu_futex_timedwait() Akihiko Odaki
2025-10-30 16:13   ` Alex Bennée
2025-10-29  6:12 ` [PATCH 2/5] qemu-thread: Add qemu_event_timedwait() Akihiko Odaki
2025-10-29  6:12 ` [PATCH 3/5] rcu: Use call_rcu() in synchronize_rcu() Akihiko Odaki
2025-10-29  6:12 ` [PATCH 4/5] rcu: Wake the RCU thread when draining Akihiko Odaki
2025-10-29 18:22   ` Peter Xu
2025-11-03  9:45     ` Akihiko Odaki
2025-11-05 20:43       ` Peter Xu
2025-11-06  1:40         ` Akihiko Odaki
2025-11-06 21:52           ` Peter Xu
2025-11-07  1:47             ` Akihiko Odaki
2025-11-07 14:00               ` Peter Xu [this message]
2025-11-08  1:47                 ` Akihiko Odaki
2025-11-13 17:03                   ` Peter Xu
2025-11-14  1:24                     ` Akihiko Odaki
2025-11-14 15:30                       ` Peter Xu
2025-11-15  1:58                         ` Akihiko Odaki
2025-11-15  2:59                           ` Akihiko Odaki
2025-11-17 16:42                             ` Peter Xu
2025-11-17 22:53                               ` Akihiko Odaki
2025-11-17 16:39                           ` Peter Xu
     [not found]             ` <1b318ad8-48b3-4968-86ca-c62aef3b3bd4@rsg.ci.i.u-tokyo.ac.jp>
     [not found]               ` <7c49d808-ccb8-4262-ae6c-2ac746b43b80@rsg.ci.i.u-tokyo.ac.jp>
2025-11-13 17:30                 ` Peter Xu
2025-11-14  1:12                   ` Akihiko Odaki
2025-10-29  6:12 ` [PATCH 5/5] virtio-gpu: Force RCU when unmapping blob Akihiko Odaki
2025-10-30 11:18   ` Dmitry Osipenko
2025-10-30 11:17 ` [PATCH 0/5] " Dmitry Osipenko
2025-10-30 17:59 ` Alex Bennée
2025-10-31 21:32 ` Alex Bennée
