From: Peter Zijlstra <peterz@infradead.org>
To: Pan Deng <pan.deng@intel.com>
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org,
tianyou.li@intel.com, tim.c.chen@linux.intel.com,
yu.c.chen@intel.com
Subject: Re: [PATCH v2 1/4] sched/rt: Optimize cpupri_vec layout to mitigate cache line contention
Date: Fri, 20 Mar 2026 11:09:03 +0100 [thread overview]
Message-ID: <20260320100903.GR3738786@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <24c460fb48d86a5b990acbb42d0d29d91dfc427c.1753076363.git.pan.deng@intel.com>
On Mon, Jul 21, 2025 at 02:10:23PM +0800, Pan Deng wrote:
> When running a multi-instance FFmpeg workload on an HCC system, significant
> cache line contention is observed around `cpupri_vec->count` and `mask` in
> struct root_domain.
>
> The SUT is a 2-socket machine with 240 physical cores and 480 logical
> CPUs. 60 FFmpeg instances are launched, each pinned to 4 physical cores
> (8 logical CPUs) for transcoding tasks. Sub-threads use RT priority 99
> with FIFO scheduling. FPS is used as the performance score.
>
> perf c2c tool reveals:
> root_domain cache line 3:
> - `cpupri->pri_to_cpu[0].count` (offset 0x38) is heavily loaded/stored
> and contends with other fields, since counts[0] is updated more
> frequently than the others: it is touched whenever an RT task
> enqueues onto an empty runqueue or dequeues from a non-overloaded
> runqueue.
> - cycles per load: ~10K to 59K
>
> cpupri's last cache line:
> - `cpupri_vec->count` and `mask` contend with each other. The
> transcoding threads run at RT priority 99, so the contention lands
> on the last vector, in cpupri's last cache line.
> - cycles per load: ~1.5K to 10.5K
>
> This change mitigates the `cpupri_vec->count` and `mask` contention
> by placing each count and mask on separate cache lines.
Right.
> Note: The side effect of this change is that struct cpupri size is
> increased from 26 cache lines to 203 cache lines.
That is pretty horrible, but probably unavoidable.
> An alternative implementation of this patch could separate `counts`
> and `masks` into two vectors (counts[] and masks[]) and add two pads:
> 1. Between counts[0] and counts[1], since counts[0] is updated more
> frequently than the others.
That is completely workload specific; it is a direct consequence of your
(probably busted) priority assignment scheme.
> 2. Between the two vectors, since counts[] is read-write while
> masks[], which stores pointers, is mostly read-only.
>
> The alternative introduces the complexity of a 31+/21- LoC change
> and achieves almost the same performance, while reducing struct
> cpupri from 26 cache lines to 21 cache lines.
That is not an alternative, since it very specifically only deals with
fifo-99 contention.
> ---
> kernel/sched/cpupri.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/cpupri.h b/kernel/sched/cpupri.h
> index d6cba0020064..245b0fa626be 100644
> --- a/kernel/sched/cpupri.h
> +++ b/kernel/sched/cpupri.h
> @@ -9,7 +9,7 @@
>
> struct cpupri_vec {
> atomic_t count;
> - cpumask_var_t mask;
> + cpumask_var_t mask ____cacheline_aligned;
> };
At the very least this needs a comment, explaining the what and how of
it.
Thread overview: 41+ messages
2025-07-21 6:10 [PATCH v2 0/4] sched/rt: mitigate root_domain cache line contention Pan Deng
2025-07-21 6:10 ` [PATCH v2 1/4] sched/rt: Optimize cpupri_vec layout to mitigate " Pan Deng
2026-03-20 10:09 ` Peter Zijlstra [this message]
2026-03-24 9:36 ` Deng, Pan
2026-03-24 12:11 ` Peter Zijlstra
2026-03-27 10:17 ` Deng, Pan
2026-04-02 10:37 ` Deng, Pan
2026-04-02 10:43 ` Peter Zijlstra
2026-04-08 10:16 ` Chen, Yu C
2026-04-09 11:47 ` Deng, Pan
2025-07-21 6:10 ` [PATCH v2 2/4] sched/rt: Restructure root_domain to reduce cacheline contention Pan Deng
2026-03-20 10:18 ` Peter Zijlstra
2025-07-21 6:10 ` [PATCH v2 3/4] sched/rt: Split root_domain->rto_count to per-NUMA-node counters Pan Deng
2026-03-20 10:24 ` Peter Zijlstra
2026-03-23 18:09 ` Tim Chen
2026-03-24 12:16 ` Peter Zijlstra
2026-03-24 22:40 ` Tim Chen
2025-07-21 6:10 ` [PATCH v2 4/4] sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce contention Pan Deng
2026-03-20 12:40 ` Peter Zijlstra
2026-03-23 18:45 ` Tim Chen
2026-03-24 12:00 ` Peter Zijlstra
2026-03-31 5:37 ` Chen, Yu C
2026-03-31 10:19 ` K Prateek Nayak
2026-04-02 3:15 ` Chen, Yu C
2026-04-02 4:41 ` K Prateek Nayak
2026-04-02 10:55 ` Peter Zijlstra
2026-04-02 11:06 ` K Prateek Nayak
2026-04-03 5:46 ` Chen, Yu C
2026-04-03 8:13 ` K Prateek Nayak
2026-04-07 20:35 ` Tim Chen
2026-04-08 3:06 ` K Prateek Nayak
2026-04-08 11:35 ` Chen, Yu C
2026-04-08 15:52 ` K Prateek Nayak
2026-04-09 5:17 ` K Prateek Nayak
2026-04-09 23:09 ` Tim Chen
2026-04-10 5:51 ` Chen, Yu C
2026-04-10 6:02 ` K Prateek Nayak
2026-04-08 9:25 ` Chen, Yu C
2026-04-08 16:47 ` Tim Chen
2026-03-20 9:59 ` [PATCH v2 0/4] sched/rt: mitigate root_domain cache line contention Peter Zijlstra
2026-03-20 12:50 ` Peter Zijlstra