From: Ming Lei <ming.lei@redhat.com>
To: Yury Norov <yury.norov@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
linux-kernel@vger.kernel.org,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>
Subject: Re: [PATCH v3 3/7] lib/group_cpus: relax atomicity requirement in grp_spread_init_one()
Date: Thu, 14 Dec 2023 08:43:58 +0800 [thread overview]
Message-ID: <ZXpPzoX1pcQZMyBw@fedora> (raw)
In-Reply-To: <ZXnj1WhpSgdMXSfS@yury-ThinkPad>
On Wed, Dec 13, 2023 at 09:03:17AM -0800, Yury Norov wrote:
> On Wed, Dec 13, 2023 at 08:14:45AM +0800, Ming Lei wrote:
> > On Tue, Dec 12, 2023 at 08:52:14AM -0800, Yury Norov wrote:
> > > On Tue, Dec 12, 2023 at 05:50:04PM +0800, Ming Lei wrote:
> > > > On Mon, Dec 11, 2023 at 08:21:03PM -0800, Yury Norov wrote:
> > > > > Because nmsk and irqmsk are stable, extra atomicity is not required.
> > > > >
> > > > > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > > > > ---
> > > > > lib/group_cpus.c | 8 ++++----
> > > > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > > > >
> > > > > diff --git a/lib/group_cpus.c b/lib/group_cpus.c
> > > > > index 10dead3ab0e0..7ac94664230f 100644
> > > > > --- a/lib/group_cpus.c
> > > > > +++ b/lib/group_cpus.c
> > > > > @@ -24,8 +24,8 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
> > > > > if (cpu >= nr_cpu_ids)
> > > > > return;
> > > > >
> > > > > - cpumask_clear_cpu(cpu, nmsk);
> > > > > - cpumask_set_cpu(cpu, irqmsk);
> > > > > + __cpumask_clear_cpu(cpu, nmsk);
> > > > > + __cpumask_set_cpu(cpu, irqmsk);
> > > > > cpus_per_grp--;
> > > > >
> > > > > /* If the cpu has siblings, use them first */
> > > > > @@ -33,8 +33,8 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
> > > > > sibl = cpu + 1;
> > > > >
> > > > > for_each_cpu_and_from(sibl, siblmsk, nmsk) {
> > > > > - cpumask_clear_cpu(sibl, nmsk);
> > > > > - cpumask_set_cpu(sibl, irqmsk);
> > > > > + __cpumask_clear_cpu(sibl, nmsk);
> > > > > + __cpumask_set_cpu(sibl, irqmsk);
> > > >
> > > > I think this kind of change should be avoided; here the code is
> > > > absolutely in the slow path, and we care about code cleanness and
> > > > readability much more than the cycles saved by dropping atomicity.
> > >
> > > Atomic ops have special meaning and special function. This 'atomic' way
> > > of moving a bit from one bitmap to another looks completely non-trivial
> > > and puzzling to me.
> > >
> > > A sequence of atomic ops is not atomic itself. Normally it's a sign
> > > of a bug. But in this case, both masks are stable, and we don't need
> > > atomicity at all.
> >
> > Here we don't care about atomicity.
> >
> > >
> > > It's not about performance, it's about readability.
> >
> > __cpumask_clear_cpu() and __cpumask_set_cpu() are more like private
> > helpers, and harder to follow.
>
> No, that's not true. The non-atomic version of the function is not a
> private helper, of course.
>
> > [@linux]$ git grep -n -w -E "cpumask_clear_cpu|cpumask_set_cpu" ./ | wc
> > 674 2055 53954
> > [@linux]$ git grep -n -w -E "__cpumask_clear_cpu|__cpumask_set_cpu" ./ | wc
> > 21 74 1580
> >
> > I don't object to comment the current usage, but NAK for this change.
>
> No problem, I'll add your NAK.
You can add the following words in the meantime:
__cpumask_clear_cpu() and __cpumask_set_cpu() were added in commit 6c8557bdb28d
("smp, cpumask: Use non-atomic cpumask_{set,clear}_cpu()") for a fast code path
(smp_call_function_many()).

We have ~670 users of cpumask_clear_cpu() & cpumask_set_cpu(), and many of them
fall into the same category as group_cpus.c (no atomicity requirement, not in a
fast code path), so they needn't be changed to __cpumask_clear_cpu() and
__cpumask_set_cpu(). Otherwise, this change may encourage updating the others
to the __cpumask_* versions as well.
Thanks,
Ming