From: Yury Norov <yury.norov@gmail.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
linux-kernel@vger.kernel.org,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Breno Leitao <leitao@debian.org>,
Nathan Chancellor <nathan@kernel.org>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>,
Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH 1/9] cpumask: introduce for_each_cpu_and_from()
Date: Sun, 21 Jan 2024 11:50:02 -0800
Message-ID: <Za11asdkTrKzrL8e@yury-ThinkPad>
In-Reply-To: <Zas4CeVG6mlfiUM9@fedora>
On Sat, Jan 20, 2024 at 11:03:37AM +0800, Ming Lei wrote:
> On Fri, Jan 19, 2024 at 06:50:45PM -0800, Yury Norov wrote:
> > Similarly to for_each_cpu_and(), introduce a for_each_cpu_and_from(),
> > which is handy when it's needed to traverse 2 cpumasks or bitmaps,
> > starting from a given position.
>
> The new helper is useless, see
>
> https://lore.kernel.org/lkml/ZZNgDb6bzOscrNmk@fedora/
Let's consider the following configuration.
CPUs: 0b1111
Sibling groups: 0b0011 and 0b1100
nmsk: 0b1111
As the complexity measure, we take the number of accesses to nmsk in
the outer loop, and to (nmsk & siblmsk) in the inner loop, made by the
search routines, so that
cpumask_first(1111)
requires 1 access to find the first set bit, and
cpumask_first(1000)
requires 4 such accesses.
The actual find_bit() ops do better than this by using __ffs(), but on
long bitmaps the performance scales as described above.
Now, look at the code. This is yours:
static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
				unsigned int cpus_per_grp)
{
	const struct cpumask *siblmsk;
	int cpu, sibl;

	for ( ; cpus_per_grp > 0; ) {
		cpu = cpumask_first(nmsk);

		/* Should not happen, but I'm too lazy to think about it */
		if (cpu >= nr_cpu_ids)
			return;

		cpumask_clear_cpu(cpu, nmsk);
		cpumask_set_cpu(cpu, irqmsk);
		cpus_per_grp--;

		/* If the cpu has siblings, use them first */
		siblmsk = topology_sibling_cpumask(cpu);
		for (sibl = -1; cpus_per_grp > 0; ) {
			sibl = cpumask_next(sibl, siblmsk);
			if (sibl >= nr_cpu_ids)
				break;
			if (!cpumask_test_and_clear_cpu(sibl, nmsk))
				continue;
			cpumask_set_cpu(sibl, irqmsk);
			cpus_per_grp--;
		}
	}
}
This is your code step-by-step:
 #   loop   cpu  match  siblmsk  nmsk  irqmsk
 0   outer    0  yes             1110  0001
 1   inner    0  no              1110  0001
 2   inner    1  yes    0011     1100  0011
 3   inner    2  no              1100  0011
 4   inner    3  no              1100  0011
 5   outer    0  no              1100  0011
 6   outer    1  no              1100  0011
 7   outer    2  yes             1000  0111
 8   inner    0  no     1100     1000  0111
 9   inner    1  no     1100     1000  0111
10   inner    2  no     1100     1000  0111
11   inner    3  yes    1100     0000  1111
12   outer    0  no              0000  1111
13   outer    1  no              0000  1111
14   outer    2  no              0000  1111
15   outer    3  no              0000  1111
This is mine:
static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
				unsigned int cpus_per_grp)
{
	const struct cpumask *siblmsk;
	int cpu, sibl;

	for_each_cpu(cpu, nmsk) {
		if (cpus_per_grp-- == 0)
			return;

		/*
		 * If a caller wants to spread IRQs on offline CPUs, we need to
		 * take care of it explicitly because those offline CPUs are not
		 * included in the siblings cpumask.
		 */
		__cpumask_clear_cpu(cpu, nmsk);
		__cpumask_set_cpu(cpu, irqmsk);

		/* If the cpu has siblings, use them first */
		siblmsk = topology_sibling_cpumask(cpu);
		sibl = cpu + 1;
		for_each_cpu_and_from(sibl, siblmsk, nmsk) {
			if (cpus_per_grp-- == 0)
				return;

			__cpumask_clear_cpu(sibl, nmsk);
			__cpumask_set_cpu(sibl, irqmsk);
			cpu = sibl + 1;
		}
	}
}
Step-by-step:
 #   loop   cpu  match  siblmsk  nmsk  irqmsk
 0   outer    0  yes             1110  0001
 1   inner    1  yes    0011     1100  0011
 2   inner    2  no     0011     1100  0011
 3   inner    3  no     0011     1100  0011
 4   outer    2  yes             1000  0111
 5   inner    3  yes    1100     0000  1111
Your code performs worse because it's a Schlemiel the Painter's algorithm.
I mentioned that twice in the commit messages and at least three times in
replies to your comments.
Here I'll stop and will not reply to your emails, including the rest of
that Friday's night mailbombing, unless you at least admit you're wrong
in this case and for_each_cpu_and_from() is useful here.
I'd also recommend you learn more about the basics of atomic operations
and revoke your NAK on patch #3.
Thanks,
Yury
PS: There's a typo in the series name; I meant that the series makes the
function O(N), of course. But even that is overly optimistic. It's O(N*S),
where S is the number of sibling groups. A couple more patches are needed
to make it a true O(N). Still, much better.
2024-01-20 2:50 [PATCH v5 0/9] lib/group_cpus: rework grp_spread_init_one() and make it O(1) Yury Norov
2024-01-20 2:50 ` [PATCH 1/9] cpumask: introduce for_each_cpu_and_from() Yury Norov
2024-01-20 3:03 ` Ming Lei
2024-01-21 19:50 ` Yury Norov [this message]
2024-01-22 2:41 ` Ming Lei
2024-01-20 2:50 ` [PATCH 2/9] lib/group_cpus: optimize inner loop in grp_spread_init_one() Yury Norov
2024-01-20 3:17 ` Ming Lei
2024-01-20 7:03 ` Ming Lei
2024-01-20 2:50 ` [PATCH 3/9] lib/group_cpus: relax atomicity requirement " Yury Norov
2024-01-20 2:50 ` [PATCH 4/9] lib/group_cpus: optimize outer loop " Yury Norov
2024-01-20 3:51 ` Ming Lei
2024-01-20 6:17 ` Ming Lei
2024-01-20 2:50 ` [PATCH 5/9] lib/group_cpus: don't zero cpumasks in group_cpus_evenly() on allocation Yury Norov
2024-01-20 2:50 ` [PATCH 6/9] lib/group_cpus: drop unneeded cpumask_empty() call in __group_cpus_evenly() Yury Norov
2024-01-20 2:50 ` [PATCH 7/9] cpumask: define cleanup function for cpumasks Yury Norov
2024-01-20 2:50 ` [PATCH 8/9] lib/group_cpus: rework group_cpus_evenly() Yury Norov
2024-01-20 2:50 ` [PATCH 9/9] lib/group_cpus: simplify group_cpus_evenly() for more Yury Norov
-- strict thread matches above, loose matches on Subject: below --
2023-12-28 20:09 [PATCH v4 0/9] lib/group_cpus: rework grp_spread_init_one() and make it O(1) Yury Norov
2023-12-28 20:09 ` [PATCH 1/9] cpumask: introduce for_each_cpu_and_from() Yury Norov