public inbox for linux-mm@kvack.org
From: Frederic Weisbecker <frederic@kernel.org>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Leonardo Bras <leobras.c@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Waiman Long <longman@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>
Subject: Re: [PATCH v2 0/5] Introduce QPW for per-cpu operations (v2)
Date: Tue, 10 Mar 2026 22:34:22 +0100	[thread overview]
Message-ID: <abCOXjSPbxNxa0f6@pavilion.home> (raw)
In-Reply-To: <aaoyFClXLEYNzzBR@tpad>

On Thu, Mar 05, 2026 at 10:47:00PM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 05, 2026 at 05:55:12PM +0100, Frederic Weisbecker wrote:
> > So let me summarize the possible design solutions, based on our
> > discussions, so we can compare:
> 
> I find this summary difficult to comprehend. The way I see it is:
> 
> A certain class of data structures (the per-CPU caches) can be
> manipulated only by their owning CPU, since they lack the locks that
> would let remote CPUs manipulate them safely.
> 
> Some operations nevertheless require such data to be manipulated,
> so work is queued for execution on the owner CPUs.

Right.

 
> > 
> > 1) Never queue remotely but always queue locally and execute on userspace
> 
> When you say "queue locally", do you mean queueing the data structure
> manipulation to run when the owner CPU returns to userspace?

Yes.

> 
> What if it does not return to userspace, or takes a long time to do
> so?

Indeed, it's a bet that syscalls return "soon enough" for correctness
to be maintained, and that the CPU is not stuck in some kthread. On
isolation workloads, those assumptions are usually true.

> 
> >    return via task work.
> > 
> >    Pros:
> >          - Simple and easy to maintain.
> > 
> >    Cons:
> >          - Needs case-by-case handling.
> > 
> >          - Might be suitable for fully userspace applications but not
> >            for some HPC use cases. In an ideal world MPI would be fully
> >            implemented in userspace, but that doesn't appear to be the
> >            case.
> > 
> > 2) Queue the work locally right away, or remotely (if really
> >    necessary) when the isolated CPU is in userspace; otherwise queue
> >    it for execution on return to the kernel. The work is then handled
> >    by preemption to a worker or by a workqueue flush on return to
> >    userspace.
> > 
> >    Pros:
> >         - The local queue handling is simple.
> > 
> >    Cons:
> >         - The remote queueing must synchronize with return to
> >           userspace, and possibly postpone the work until return to the
> >           kernel if the target is in userspace. It may also need to
> >           differentiate IRQs from syscalls.
> > 
> >         - Therefore it still involves some case-by-case handling.
> >    
> >         - Flushing the global workqueues to avoid deadlocks is
> >           ill-advised, as noted in the comment above
> >           flush_scheduled_work(); it even triggers a warning.
> >           Significant effort has gone into converting all the existing
> >           users. It's not impossible to sell in our case, because we
> >           shouldn't hold a lock upon return to userspace, but that
> >           would bring back a dangerous API.
> > 
> >         - Queueing / flushing the workqueue involves a context switch,
> >           which induces more noise (e.g. a tick restart).
> > 	  
> >         - As above, probably not suitable for HPC.
> > 
> > 3) QPW: Handle the work remotely
> > 
> >    Pros:
> >         - Works in all cases, without surprises.
> > 
> >    Cons:
> >         - Introduces a new locking scheme to maintain and debug.
> > 
> >         - Needs case-by-case handling.
> > 
> > Thoughts?
> > 
> > -- 
> > Frederic Weisbecker
> > SUSE Labs
> 
> It's hard for me to parse your concise summary (perhaps it could be
> more verbose).
> 
> Anyway, one thought is to use some sort of SRCU-type protection on the
> per-CPU caches.
> But that adds cost as well (compared to non-SRCU), which then seems
> comparable to the cost of adding per-CPU spinlocks.

Well, there is SRCU-fast now. Though do we care about optimizing
housekeeping performance on isolated workloads to the point of
complicating things with a weaker and trickier synchronization
mechanism? Probably not. If we choose to pick up your solution, I'm
fine with spinlocks.

Thanks.

-- 
Frederic Weisbecker
SUSE Labs



Thread overview: 32+ messages
2026-03-02 15:49 [PATCH v2 0/5] Introduce QPW for per-cpu operations (v2) Marcelo Tosatti
2026-03-02 15:49 ` [PATCH v2 1/5] slab: distinguish lock and trylock for sheaf_flush_main() Marcelo Tosatti
2026-03-02 15:49 ` [PATCH v2 2/5] Introducing qpw_lock() and per-cpu queue & flush work Marcelo Tosatti
2026-03-03 12:03   ` Vlastimil Babka (SUSE)
2026-03-03 16:02     ` Marcelo Tosatti
2026-03-08 18:00       ` Leonardo Bras
2026-03-09 10:14         ` Vlastimil Babka (SUSE)
2026-03-11  0:16           ` Leonardo Bras
2026-03-11  7:58   ` Vlastimil Babka (SUSE)
2026-03-15 17:37     ` Leonardo Bras
2026-03-16 10:55       ` Vlastimil Babka (SUSE)
2026-03-23  0:51         ` Leonardo Bras
2026-03-13 21:55   ` Frederic Weisbecker
2026-03-15 18:10     ` Leonardo Bras
2026-03-17 13:33       ` Frederic Weisbecker
2026-03-23  1:38         ` Leonardo Bras
2026-03-24 11:54           ` Frederic Weisbecker
2026-03-24 22:06             ` Leonardo Bras
2026-03-23 14:36         ` Marcelo Tosatti
2026-03-02 15:49 ` [PATCH v2 3/5] mm/swap: move bh draining into a separate workqueue Marcelo Tosatti
2026-03-02 15:49 ` [PATCH v2 4/5] swap: apply new queue_percpu_work_on() interface Marcelo Tosatti
2026-03-02 15:49 ` [PATCH v2 5/5] slub: " Marcelo Tosatti
2026-03-03 11:15 ` [PATCH v2 0/5] Introduce QPW for per-cpu operations (v2) Frederic Weisbecker
2026-03-08 18:02   ` Leonardo Bras
2026-03-03 12:07 ` Vlastimil Babka (SUSE)
2026-03-05 16:55 ` Frederic Weisbecker
2026-03-06  1:47   ` Marcelo Tosatti
2026-03-10 21:34     ` Frederic Weisbecker [this message]
2026-03-10 17:12   ` Marcelo Tosatti
2026-03-10 22:14     ` Frederic Weisbecker
2026-03-11  1:18     ` Hillf Danton
2026-03-11  7:54     ` Vlastimil Babka
