From: Peter Zijlstra <peterz@infradead.org>
To: Pan Deng <pan.deng@intel.com>
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org,
	tianyou.li@intel.com, tim.c.chen@linux.intel.com,
	yu.c.chen@intel.com
Subject: Re: [PATCH v2 2/4] sched/rt: Restructure root_domain to reduce cacheline contention
Date: Fri, 20 Mar 2026 11:18:47 +0100	[thread overview]
Message-ID: <20260320101847.GS3738786@noisy.programming.kicks-ass.net> (raw)
In-Reply-To: <346b697a0bbf9b0ff6a62d787ccf6665dcefc99f.1753076363.git.pan.deng@intel.com>

On Mon, Jul 21, 2025 at 02:10:24PM +0800, Pan Deng wrote:
> When running a multi-instance FFmpeg workload on an HCC system,
> significant contention is observed on root_domain cachelines 1 and 3.

What's a HCC? Hobby Computer Club? Google is telling me it is the most
prevalent form of liver cancer, but I somehow doubt that is what you're
on about.

> The SUT is a 2-socket machine with 240 physical cores and 480 logical

Satellite User Terminal? Subsea Umbilical Termination? Small Unit
Transceiver? Single Unit Test?

> CPUs. 60 FFmpeg instances are launched, each pinned to 4 physical cores
> (8 logical CPUs) for transcoding tasks. Sub-threads use RT priority 99
> with FIFO scheduling. FPS is used as the score.

Yes yes, poorly configured systems hurt.
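
(The posting doesn't give the exact launch commands; assuming the setup
described above, something along these lines would reproduce the pinning
and the FIFO-99 sub-threads -- the CPU numbers and ffmpeg arguments are
hypothetical, and child threads inherit the policy set by chrt:)

  # pin one instance to 4 physical cores / 8 logical CPUs (HT siblings
  # assumed at +240 on this 480-CPU box), SCHED_FIFO priority 99
  taskset -c 0-3,240-243 chrt -f 99 ffmpeg -i input.mp4 out.mp4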

> perf c2c tool reveals (sorted by contention severity):
> root_domain cache line 3:
> - `cpupri->pri_to_cpu[0].count` (offset 0x38) is heavily loaded/stored,
>    since counts[0] is updated more frequently than the others: it changes
>    whenever an RT task enqueues onto an empty runqueue or dequeues from a
>    non-overloaded runqueue.
> - `rto_mask` (0x30) is heavily loaded
> - `rto_loop_next` (0x24) and `rto_loop_start` (0x28) are frequently stored
> - `rto_push_work` (0x0) and `rto_lock` (0x18) are lightly accessed
> - cycles per load: ~10K to 59K
> 
> root_domain cache line 1:
> - `rto_count` (0x4) is frequently loaded/stored
> - `overloaded` (0x28) is heavily loaded
> - cycles per load: ~2.8K to 44K
> 
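(For reference, figures like the above typically come from a system-wide
perf c2c session; the exact options used aren't stated, but a minimal
invocation would be:)

  # sample system-wide cache-to-cache (HITM) traffic for ~10 seconds
  perf c2c record -a -- sleep 10
  # summarize shared-cacheline contention, then drill into the report;
  # -NN adds extra node (NUMA) information per cacheline
  perf c2c report --stats
  perf c2c report -NN --stdio
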
> This change adjusts the layout of `root_domain` to isolate these contended
> fields across separate cache lines:
> 1. `rto_count` remains in the 1st cache line; `overloaded` and
>    `overutilized` are moved to the last cache line
> 2. `rto_push_work` is placed in the 2nd cache line
> 3. `rto_loop_start`, `rto_loop_next`, and `rto_lock` remain in the 3rd
>    cache line; `rto_mask` is moved near `pd` in the penultimate cache line
> 4. `cpupri` starts at the 4th cache line to prevent `pri_to_cpu[0].count`
>    contending with fields in cache line 3.
> 
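(To make the regrouping concrete: a minimal sketch of the intended
layout, using ____cacheline_aligned to start a fresh cache line at each
hot group. The field set is approximated from kernel/sched/sched.h;
omitted members, exact offsets, and config-dependent fields are
assumptions, not the actual patch:)

	struct root_domain {
		/* 1st cache line: refcounting plus the hot rto_count */
		atomic_t		refcount;
		atomic_t		rto_count;
		struct rcu_head		rcu;
		cpumask_var_t		span;
		cpumask_var_t		online;

		/* 2nd cache line: push-IPI work item on its own */
		struct irq_work		rto_push_work ____cacheline_aligned;

		/* 3rd cache line: IPI loop state, frequently stored */
		raw_spinlock_t		rto_lock ____cacheline_aligned;
		int			rto_loop;
		int			rto_cpu;
		atomic_t		rto_loop_next;
		atomic_t		rto_loop_start;

		/* 4th cache line onward: cpupri, so pri_to_cpu[0].count
		 * no longer shares a line with the rto_* fields */
		struct cpupri		cpupri ____cacheline_aligned;

		/* penultimate line: read-mostly rto_mask next to pd */
		cpumask_var_t		rto_mask ____cacheline_aligned;
		struct perf_domain __rcu *pd;

		/* last line: overload flags, stored on enqueue/dequeue */
		bool			overloaded ____cacheline_aligned;
		bool			overutilized;
	};
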
> With this change:
> - FPS improves by ~5%
> - Kernel cycles% drops from ~20% to ~17.7%
> - root_domain cache line 3 no longer appears in perf-c2c report
> - cycles per load of root_domain cache line 1 is reduced from
>   ~2.8K-44K to ~2.1K-2.7K
> - stress-ng cyclic benchmark is improved ~18.6%, command:
>   stress-ng/stress-ng --cyclic $(nproc) --cyclic-policy fifo   \
>                       --timeout 30 --minimize --metrics
> - rt-tests/pi_stress is improved ~4.7%, command:
>   rt-tests/pi_stress -D 30 -g $(($(nproc) / 2))
> 
> Given the nature of the change, to my understanding it doesn't
> introduce any negative impact in other scenarios.
> 
> Note: This change increases the size of `root_domain` from 29 to 31 cache
> lines; this is considered acceptable since `root_domain` is a single
> global object.

Uhm, what? We're at 207 cachelines due to that previous patch, remember?
A few more don't matter at this point I would guess.

It doesn't actually apply anymore, but it needs the very same thing that
previous patch did -- more comments.

Thread overview: 41+ messages
2025-07-21  6:10 [PATCH v2 0/4] sched/rt: mitigate root_domain cache line contention Pan Deng
2025-07-21  6:10 ` [PATCH v2 1/4] sched/rt: Optimize cpupri_vec layout to mitigate " Pan Deng
2026-03-20 10:09   ` Peter Zijlstra
2026-03-24  9:36     ` Deng, Pan
2026-03-24 12:11       ` Peter Zijlstra
2026-03-27 10:17         ` Deng, Pan
2026-04-02 10:37           ` Deng, Pan
2026-04-02 10:43           ` Peter Zijlstra
2026-04-08 10:16   ` Chen, Yu C
2026-04-09 11:47     ` Deng, Pan
2025-07-21  6:10 ` [PATCH v2 2/4] sched/rt: Restructure root_domain to reduce cacheline contention Pan Deng
2026-03-20 10:18   ` Peter Zijlstra [this message]
2025-07-21  6:10 ` [PATCH v2 3/4] sched/rt: Split root_domain->rto_count to per-NUMA-node counters Pan Deng
2026-03-20 10:24   ` Peter Zijlstra
2026-03-23 18:09     ` Tim Chen
2026-03-24 12:16       ` Peter Zijlstra
2026-03-24 22:40         ` Tim Chen
2025-07-21  6:10 ` [PATCH v2 4/4] sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce contention Pan Deng
2026-03-20 12:40   ` Peter Zijlstra
2026-03-23 18:45     ` Tim Chen
2026-03-24 12:00       ` Peter Zijlstra
2026-03-31  5:37         ` Chen, Yu C
2026-03-31 10:19           ` K Prateek Nayak
2026-04-02  3:15             ` Chen, Yu C
2026-04-02  4:41               ` K Prateek Nayak
2026-04-02 10:55                 ` Peter Zijlstra
2026-04-02 11:06                   ` K Prateek Nayak
2026-04-03  5:46                     ` Chen, Yu C
2026-04-03  8:13                       ` K Prateek Nayak
2026-04-07 20:35                       ` Tim Chen
2026-04-08  3:06                         ` K Prateek Nayak
2026-04-08 11:35                           ` Chen, Yu C
2026-04-08 15:52                             ` K Prateek Nayak
2026-04-09  5:17                               ` K Prateek Nayak
2026-04-09 23:09                                 ` Tim Chen
2026-04-10  5:51                                   ` Chen, Yu C
2026-04-10  6:02                                     ` K Prateek Nayak
2026-04-08  9:25                         ` Chen, Yu C
2026-04-08 16:47                           ` Tim Chen
2026-03-20  9:59 ` [PATCH v2 0/4] sched/rt: mitigate root_domain cache line contention Peter Zijlstra
2026-03-20 12:50   ` Peter Zijlstra
