From: Tim Chen <tim.c.chen@linux.intel.com>
To: K Prateek Nayak <kprateek.nayak@amd.com>,
"Chen, Yu C" <yu.c.chen@intel.com>,
Peter Zijlstra <peterz@infradead.org>
Cc: Pan Deng <pan.deng@intel.com>,
mingo@kernel.org, linux-kernel@vger.kernel.org,
tianyou.li@intel.com
Subject: Re: [PATCH v2 4/4] sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce contention
Date: Thu, 09 Apr 2026 16:09:55 -0700
Message-ID: <c896591397b00a3275d091a4c85dd9bcea1a0d9a.camel@linux.intel.com>
In-Reply-To: <e093d930-79df-4285-a492-cc6d40b3cd51@amd.com>
On Thu, 2026-04-09 at 10:47 +0530, K Prateek Nayak wrote:
> Hello Chenyu, Tim,
>
> On 4/8/2026 9:22 PM, K Prateek Nayak wrote:
> > Hello Chenyu,
> >
> > On 4/8/2026 5:05 PM, Chen, Yu C wrote:
> > > We haven't tried breaking it down further. One possible approach
> > > is to partition it at L2 scope, the benefit of which may depend on
> > > the workload.
> >
> > I fear at that point we'll have too many cachelines and too much
> > cache pollution when the CPU starts reading this at tick to schedule
> > a newidle balance.
> >
> > A 128 core system would bring in 128 * 64B = 8kB worth of data to
> > traverse the mask, and at that point it becomes a trade-off between
> > how fast you want reads vs. writes - and does it even speed up
> > writes after a certain point?
> >
> > Sorry I got distracted by some other stuff today but I'll share the
> > results from my experiments tomorrow.
>
> Here is some data from an experiment I ran on a 3rd Generation EPYC
> system (2 sockets x 64C/128T, 8 LLCs per socket):
>
> Experiment: Two threads pinned per-CPU on all CPUs, yielding to each
> other and operating on some cpumask - one setting the current CPU in
> the mask and the other clearing it. This is just an estimate of the
> worst-case scenario where we do one modification per sched-switch.
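For reference, a rough and untested sketch of what such a benchmark could
look like (this is the global-mask variant; the other runs would swap
test_mask for the per-NUMA / per-LLC structures):

#include <linux/atomic.h>
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/timex.h>

static struct cpumask test_mask;
static atomic64_t total_cycles = ATOMIC64_INIT(0);

/* One pair of these runs pinned on every CPU, taking turns via yield(). */
static int mask_toggler(void *set)
{
	int cpu = raw_smp_processor_id();	/* stable: thread is pinned */

	while (!kthread_should_stop()) {
		cycles_t t0 = get_cycles();

		if (set)
			cpumask_set_cpu(cpu, &test_mask);
		else
			cpumask_clear_cpu(cpu, &test_mask);

		atomic64_add(get_cycles() - t0, &total_cycles);
		yield();	/* hand the CPU over to the sibling thread */
	}
	return 0;
}

static int __init mask_bench_init(void)
{
	int cpu;

	/* Error handling omitted for brevity. */
	for_each_online_cpu(cpu) {
		wake_up_process(kthread_create_on_cpu(mask_toggler, (void *)1,
						      cpu, "mask_set/%u"));
		wake_up_process(kthread_create_on_cpu(mask_toggler, NULL,
						      cpu, "mask_clr/%u"));
	}
	return 0;
}
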
>
> I'm measuring total cycles taken for cpumask operations with the
> following variants:
>
> %cycles vs global mask operation
>
> global mask : 100.0000% (var: 3.28%)
> per-NUMA mask : 32.9209% (var: 7.77%)
> per-LLC mask : 1.2977% (var: 4.85%)
> per-LLC mask (u8 operation; no LOCK prefix) : 0.4930% (var: 0.83%)
>
> The per-NUMA split is 3x faster and the per-LLC one on this 16-LLC
> machine is 77x faster. Since there is enough space in the cacheline,
> we can use a u8 to set and clear the CPU atomically without the LOCK
> prefix and then do a >> 3 to get the CPU index from the set bit,
> which is 202x faster.
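To make the u8 trick above concrete, here is an untested sketch (the
names are made up, and MAX_CPUS_PER_LLC stands in for the real per-LLC
CPU count):

#include <linux/bitops.h>
#include <linux/cache.h>
#include <linux/compiler.h>
#include <linux/types.h>

#define MAX_CPUS_PER_LLC	16	/* made up: CPUs sharing one LLC here */

struct cpupri_llc_mask {
	u8 slot[MAX_CPUS_PER_LLC];
} ____cacheline_aligned;

/* Plain byte stores: no LOCK prefix needed, each CPU owns its own byte. */
static inline void llc_mask_set(struct cpupri_llc_mask *m, int cpu_off)
{
	WRITE_ONCE(m->slot[cpu_off], 1);
}

static inline void llc_mask_clear(struct cpupri_llc_mask *m, int cpu_off)
{
	WRITE_ONCE(m->slot[cpu_off], 0);
}

/* Find the first set CPU among slots 0..7 with a single 8-byte load. */
static inline int llc_mask_first(struct cpupri_llc_mask *m)
{
	u64 word = READ_ONCE(*(u64 *)m->slot);

	if (!word)
		return -1;
	/* The set bit sits inside its byte, so >> 3 yields the byte index. */
	return __ffs(word) >> 3;
}
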
>
> If we use the u8 operations, we can only read 8 CPUs per 8-byte load
> on a 64-bit system, but with the per-LLC mask we can scan all 16 CPUs
> on the LLC with one 8-byte read, while the per-NUMA one requires two
> 8-byte reads to scan the 128 CPUs per socket.
>
> I think the per-LLC mask (or, as Tim suggested, 64 CPUs per cacheline)
> is a good tradeoff between the speedup and the number of loads required
> to piece together the full cpumask. Thoughts?

I agree that the per-LLC mask is a good compromise between minimizing
loads and offering good speedups. For Intel, I think we should get the
LLC APIC ID mask from the 0x4 leaf (L1, L2, L3) instead of inferring it
from the 0x1f leaf (Tile, Die, etc.). For AMD, I think the cache leaf
is 0x8000_001D. Those are parsed in the cacheinfo code and we can get
the information from there.
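Something along these lines (untested) could give us the LLC sibling
mask straight from what cacheinfo has already parsed:

#include <linux/cacheinfo.h>
#include <linux/cpumask.h>

/* Walk the cacheinfo leaves of @cpu and return the L3 sibling mask. */
static const struct cpumask *llc_shared_mask(unsigned int cpu)
{
	struct cpu_cacheinfo *cci = get_cpu_cacheinfo(cpu);
	unsigned int i;

	for (i = 0; i < cci->num_leaves; i++) {
		struct cacheinfo *leaf = &cci->info_list[i];

		if (leaf->level == 3 && leaf->type == CACHE_TYPE_UNIFIED)
			return &leaf->shared_cpu_map;
	}
	/* No L3 reported: fall back to just this CPU. */
	return cpumask_of(cpu);
}
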
Tim
Thread overview: 41+ messages
2025-07-21 6:10 [PATCH v2 0/4] sched/rt: mitigate root_domain cache line contention Pan Deng
2025-07-21 6:10 ` [PATCH v2 1/4] sched/rt: Optimize cpupri_vec layout to mitigate " Pan Deng
2026-03-20 10:09 ` Peter Zijlstra
2026-03-24 9:36 ` Deng, Pan
2026-03-24 12:11 ` Peter Zijlstra
2026-03-27 10:17 ` Deng, Pan
2026-04-02 10:37 ` Deng, Pan
2026-04-02 10:43 ` Peter Zijlstra
2026-04-08 10:16 ` Chen, Yu C
2026-04-09 11:47 ` Deng, Pan
2025-07-21 6:10 ` [PATCH v2 2/4] sched/rt: Restructure root_domain to reduce cacheline contention Pan Deng
2026-03-20 10:18 ` Peter Zijlstra
2025-07-21 6:10 ` [PATCH v2 3/4] sched/rt: Split root_domain->rto_count to per-NUMA-node counters Pan Deng
2026-03-20 10:24 ` Peter Zijlstra
2026-03-23 18:09 ` Tim Chen
2026-03-24 12:16 ` Peter Zijlstra
2026-03-24 22:40 ` Tim Chen
2025-07-21 6:10 ` [PATCH v2 4/4] sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce contention Pan Deng
2026-03-20 12:40 ` Peter Zijlstra
2026-03-23 18:45 ` Tim Chen
2026-03-24 12:00 ` Peter Zijlstra
2026-03-31 5:37 ` Chen, Yu C
2026-03-31 10:19 ` K Prateek Nayak
2026-04-02 3:15 ` Chen, Yu C
2026-04-02 4:41 ` K Prateek Nayak
2026-04-02 10:55 ` Peter Zijlstra
2026-04-02 11:06 ` K Prateek Nayak
2026-04-03 5:46 ` Chen, Yu C
2026-04-03 8:13 ` K Prateek Nayak
2026-04-07 20:35 ` Tim Chen
2026-04-08 3:06 ` K Prateek Nayak
2026-04-08 11:35 ` Chen, Yu C
2026-04-08 15:52 ` K Prateek Nayak
2026-04-09 5:17 ` K Prateek Nayak
2026-04-09 23:09 ` Tim Chen [this message]
2026-04-10 5:51 ` Chen, Yu C
2026-04-10 6:02 ` K Prateek Nayak
2026-04-08 9:25 ` Chen, Yu C
2026-04-08 16:47 ` Tim Chen
2026-03-20 9:59 ` [PATCH v2 0/4] sched/rt: mitigate root_domain cache line contention Peter Zijlstra
2026-03-20 12:50 ` Peter Zijlstra