public inbox for linux-kernel@vger.kernel.org
From: Yury Norov <yury.norov@gmail.com>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	linux-kernel@vger.kernel.org,
	Valentin Schneider <vschneid@redhat.com>,
	Mel Gorman <mgorman@suse.de>,
	Steven Rostedt <rostedt@goodmis.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>,
	Dmitry Vyukov <dvyukov@google.com>,
	Marco Elver <elver@google.com>
Subject: Re: [RFC PATCH 2/2] sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads
Date: Wed, 4 Sep 2024 08:24:38 -0700	[thread overview]
Message-ID: <Zth7tj9Cq-gigTx8@yury-ThinkPad> (raw)
In-Reply-To: <615f169b-3b24-4661-8a2c-185c6d80f7a4@efficios.com>

On Tue, Sep 03, 2024 at 07:22:37PM -0400, Mathieu Desnoyers wrote:
> On 2024-09-03 15:59, Yury Norov wrote:
> > On Tue, Sep 03, 2024 at 03:06:50PM -0400, Mathieu Desnoyers wrote:
> [...]
> > > +
> > > +static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask)
> > > +{
> > > +	struct cpumask *mm_allowed = mm_cpus_allowed(mm);
> > > +	int cpu, nr_set = 0;
> > > +
> > > +	if (!mm)
> > > +		return;
> > > +	/* mm_cpus_allowed is the union of each thread's allowed CPUs mask. */
> > > +	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next_andnot(cpu, cpumask, mm_allowed)) {
> > > +		if (!cpumask_test_and_set_cpu(cpu, mm_allowed))
> > > +			nr_set++;
> > > +	}
> > 
> > You can do the same nicer:
> > 
> >    for_each_cpu(cpu, cpumask)
> >    	nr_set += !cpumask_test_and_set_cpu(cpu, mm_allowed);
> > 
> > This should be faster and a bit simpler, to me.
> 
> In this scenario, I expect the per-thread cpumasks for a given
> process to typically have the same bits set repeated across all
> threads belonging to the process. There are of course scenarios
> where specific threads override the mask, but I don't expect that
> to be the most frequent case.
> 
> So we typically have an operation which initially copies the initial
> thread's allowed CPUs mask to the mm allowed CPUs mask, and then, when
> additional affinity changes are made, we augment the mm allowed CPUs
> mask with any additional CPU that may show up. But again, I expect
> the initial thread to have the complete mask, so later operations
> rarely change the mm allowed cpumask bits.
> 
> I also expect the cpumask to be often quite dense (often all bits
> are set).
> 
> Now if we look at the operations for your proposal here:
> 
> - for_each_cpu loads cpumask word-by-word and for each set bit, it
>   issues cpumask_test_and_set_cpu on mm_allowed, which is really a
>   test_and_set_bit, a fully ordered atomic operation, on each _bit_
>   set. That's O(nr_cpus) fully ordered atomic operations, and thus
>   expensive exclusive cache line accesses.

Both versions are O(N).
 
> My approach does:
> 
> - The equivalent of a for_each_cpu_andnot (actually I should use
>   exactly that! I just noticed it exists in the API.), which loads

Yes, you should.

>   both thread and mm CPUs allowed masks in parallel, word-by-word,
>   and only issues a cpumask_test_and_set_cpu for CPUs which are set
>   in the per-thread mask, but not in the mm mask. In the typical cases
>   discussed above, we pretty much never need to issue the atomic
>   test-and-set. So all we need to do for the common case is to read
>   both cpu masks in parallel, no stores/atomic ops needed.

None of this looks like a hot path. And anyway, speculating about
performance without numbers in hand is cheap.

In my experience, iterators with a very lightweight payload are ~100
times slower compared to dedicated bitmap ops. See for example commit
3cea8d4753277.

If you really care about performance here, I'd suggest comparing your
iterator approach with something like this:

  cpumask_or(mm_allowed, mm_allowed, cpumask);
  atomic_set(&mm->nr_cpus_allowed, cpumask_weight(mm_allowed));


Thread overview: 11+ messages
2024-09-03 19:06 [RFC PATCH 0/2] sched: Improve cache locality of RSEQ concurrency IDs Mathieu Desnoyers
2024-09-03 19:06 ` [RFC PATCH 1/2] cpumask: Implement cpumask_next_andnot Mathieu Desnoyers
2024-09-03 19:28   ` Yury Norov
2024-09-04  0:47     ` Mathieu Desnoyers
2024-09-03 19:06 ` [RFC PATCH 2/2] sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads Mathieu Desnoyers
2024-09-03 19:59   ` Yury Norov
2024-09-03 23:22     ` Mathieu Desnoyers
2024-09-04 15:24       ` Yury Norov [this message]
2024-09-04 15:50         ` Mathieu Desnoyers
2024-09-04 18:28           ` Mathieu Desnoyers
2024-09-05 13:36             ` Mathieu Desnoyers
