From: Pan Deng <pan.deng@intel.com>
To: peterz@infradead.org, mingo@kernel.org
Cc: linux-kernel@vger.kernel.org, tianyou.li@intel.com,
tim.c.chen@linux.intel.com, yu.c.chen@intel.com,
pan.deng@intel.com
Subject: [PATCH 0/4] sched/rt: mitigate root_domain cache line contention
Date: Mon, 7 Jul 2025 10:35:24 +0800
Message-ID: <cover.1751852370.git.pan.deng@intel.com>
From: Deng Pan <pan.deng@intel.com>
When running a multi-instance FFmpeg workload in a cloud environment,
cache line contention on the root_domain data structures is severe and
significantly degrades performance.
The SUT is a 2-socket machine with 240 physical cores and 480 logical
CPUs. 60 FFmpeg instances are launched, each pinned to 4 physical cores
(8 logical CPUs) for transcoding. The transcoding sub-threads run at RT
priority 99 with FIFO scheduling. FPS is used as the performance score.
Profiling shows the kernel consumes ~20% of CPU cycles, which is
excessive for this scenario. The overhead comes primarily from RT
scheduling functions such as `cpupri_set`, `cpupri_find_fitness`,
`dequeue_pushable_task`, `enqueue_pushable_task`, `pull_rt_task`,
`__find_first_and_bit` and `__bitmap_and`, and is caused by read/write
contention on root_domain cache lines.
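For context, the data structures involved look roughly as follows
(simplified and abbreviated from kernel/sched/cpupri.h and
kernel/sched/sched.h; exact field order depends on kernel version and
config):

  struct cpupri_vec {
          atomic_t      count;  /* nr of CPUs currently at this priority */
          cpumask_var_t mask;   /* pointer to the per-priority CPU bitmap */
  };

  struct cpupri {
          struct cpupri_vec pri_to_cpu[CPUPRI_NR_PRIORITIES];
          int               *cpu_to_pri;
  };

  struct root_domain {
          atomic_t        refcount;
          atomic_t        rto_count;
          /* ... span, online, overloaded, deadline state ... */
          struct irq_work rto_push_work;
          raw_spinlock_t  rto_lock;
          int             rto_loop;
          int             rto_cpu;
          atomic_t        rto_loop_next;
          atomic_t        rto_loop_start;
          cpumask_var_t   rto_mask;
          struct cpupri   cpupri;         /* pri_to_cpu[0] starts here */
          /* ... */
  };

Because count and mask of consecutive vectors are packed together, and
cpupri is embedded right after the rto_* fields, a store to one hot
field can invalidate the cache line holding its neighbours as well.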
The `perf c2c` report, sorted by contention severity, reveals:
root_domain cache line 3:
- `cpupri->pri_to_cpu[0].count` is heavily loaded/stored, since
  counts[0] is updated more frequently than the other counters: it
  changes whenever an RT task is enqueued onto a runqueue with no RT
  tasks or dequeued from a non-overloaded runqueue
- `rto_mask` is heavily loaded
- `rto_loop_next` and `rto_loop_start` are frequently stored
- `rto_push_work` and `rto_lock` are lightly accessed
- cycles per load: ~10K to 59K
root_domain cache line 1:
- `rto_count` is frequently loaded/stored
- `overloaded` is heavily loaded
- cycles per load: ~2.8K to 44K
cpumask (bitmap) cache line of cpupri_vec->mask:
- bits are loaded during `cpupri_find`
- bits are stored during `cpupri_set` (both paths are sketched after
  this report)
- cycles per load: ~2.2K to 8.7K
The last cache line of cpupri:
- `cpupri_vec->count` and `mask` contend with each other. The
  transcoding threads run at RT priority 99, so the contention lands
  on the last cache line
- cycles per load: ~1.5K to 10.5K
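The load/store traffic above maps onto the two cpupri hot paths. A
heavily simplified sketch follows (priority-index conversion, memory
barriers and fitness checks omitted; function names are suffixed to
make clear these are not the literal kernel routines):

  /*
   * Write side: every RT priority change on a CPU stores to the new
   * vector's mask bitmap and count, then to the old vector's count and
   * mask bitmap.  With all transcoding threads at RT99, CPUs keep
   * flipping between index 0 (no RT task) and index 99, so the stores
   * concentrate on a few cache lines.
   */
  static void cpupri_set_sketch(struct cpupri *cp, int cpu, int newpri)
  {
          int oldpri = cp->cpu_to_pri[cpu];

          if (newpri != CPUPRI_INVALID) {
                  struct cpupri_vec *vec = &cp->pri_to_cpu[newpri];

                  cpumask_set_cpu(cpu, vec->mask);   /* bitmap cache line */
                  atomic_inc(&vec->count);           /* vector cache line */
          }
          if (oldpri != CPUPRI_INVALID) {
                  struct cpupri_vec *vec = &cp->pri_to_cpu[oldpri];

                  atomic_dec(&vec->count);
                  cpumask_clear_cpu(cpu, vec->mask);
          }
          cp->cpu_to_pri[cpu] = newpri;
  }

  /*
   * Read side: cpupri_find() scans the vectors from the lowest index
   * upwards, reading each vector's count and AND-ing its mask bitmap
   * with the task's affinity mask -- the __bitmap_and /
   * __find_first_and_bit cycles seen in the profile.
   */
  static int cpupri_find_sketch(struct cpupri *cp, struct task_struct *p,
                                struct cpumask *lowest_mask, int task_pri)
  {
          int idx;

          for (idx = 0; idx < task_pri; idx++) {
                  struct cpupri_vec *vec = &cp->pri_to_cpu[idx];

                  if (!atomic_read(&vec->count))               /* vector line */
                          continue;
                  if (!cpumask_and(lowest_mask, &p->cpus_mask, vec->mask))
                          continue;                            /* bitmap line */
                  return 1;
          }
          return 0;
  }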
Based on the above, we propose 4 patches to mitigate the contention.
Patch 1: Reorganize `cpupri_vec` by separating the `count` and `mask`
         fields, reducing contention on root_domain cache line 3 and
         cpupri's last cache line.
Patch 2: Restructure `root_domain` by reordering fields to minimize
         contention on root_domain cache lines 1 and 3.
Patch 3: Split `root_domain->rto_count` into per-NUMA-node counters,
         reducing contention on root_domain cache line 1 (the per-node
         idea is sketched after this list).
Patch 4: Split `cpupri_vec->cpumask` into per-NUMA-node bitmaps,
         reducing load/store contention on the cpumask bitmap cache
         line.
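For patches 3 and 4, the common idea is to split a globally shared
counter/bitmap into per-NUMA-node pieces so that writers only dirty
node-local cache lines. Below is a minimal sketch of a per-node counter
with illustrative names and placement; it is not the actual patch:

  #include <linux/atomic.h>
  #include <linux/cache.h>
  #include <linux/nodemask.h>
  #include <linux/topology.h>

  /* Illustrative only: one cacheline-aligned counter per NUMA node. */
  struct rto_node_count {
          atomic_t count;
  } ____cacheline_aligned_in_smp;

  static struct rto_node_count rto_node_counts[MAX_NUMNODES];

  /* Writers touch only their node-local counter. */
  static inline void rto_count_inc(int cpu)
  {
          atomic_inc(&rto_node_counts[cpu_to_node(cpu)].count);
  }

  static inline void rto_count_dec(int cpu)
  {
          atomic_dec(&rto_node_counts[cpu_to_node(cpu)].count);
  }

  /* Readers sum all nodes; they only need "is anything overloaded?". */
  static inline int rto_count_total(void)
  {
          int node, sum = 0;

          for_each_node(node)
                  sum += atomic_read(&rto_node_counts[node].count);

          return sum;
  }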
Evaluation:
Performance improvements (FPS, relative to baseline):
- Patch 1: +11.0%
- Patch 2: +5.0%
- Patch 3: +4.0%
- Patch 4: +3.8%
Kernel CPU cycle usage reduction:
- Patch 1: 20.0% -> 11.0%
- Patch 2: 20.0% -> 17.7%
- Patch 3: 20.0% -> 18.6%
- Patch 4: 20.0% -> 18.7%
Cycles per load reduction (per the perf c2c report):
- Patch 1:
- `root_domain` cache line 3: 10K–59K -> 0.5K–8K
- `cpupri` last cache line: 1.5K–10.5K -> eliminated
- Patch 2:
- `root_domain` cache line 1: 2.8K–44K -> 2.1K–2.7K
- `root_domain` cache line 3: 10K–59K -> eliminated
- Patch 3:
- `root_domain` cache line 1: 2.8K–44K -> eliminated
- Patch 4:
- `cpupri_vec->mask` cache line: 2.2K–8.7K -> 0.5K–2.2K
Comments are appreciated.
Pan Deng (4):
sched/rt: Optimize cpupri_vec layout to mitigate cache line contention
sched/rt: Restructure root_domain to reduce cacheline contention
sched/rt: Split root_domain->rto_count to per-NUMA-node counters
sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce
contention
kernel/sched/cpupri.c | 200 ++++++++++++++++++++++++++++++++++++----
kernel/sched/cpupri.h | 6 +-
kernel/sched/rt.c | 65 ++++++++++++-
kernel/sched/sched.h | 61 ++++++------
kernel/sched/topology.c | 7 ++
5 files changed, 291 insertions(+), 48 deletions(-)
--
2.43.5