From: "Chen, Yu C" <yu.c.chen@intel.com>
To: Luo Gengkun <luogengkun2@huawei.com>
Cc: <adamli@os.amperecomputing.com>, <aubrey.li@intel.com>,
<bsegall@google.com>, <cyy@cyyself.name>,
<dietmar.eggemann@arm.com>, <gavinguo@igalia.com>,
<haoxing990@gmail.com>, <hdanton@sina.com>,
<jianyong.wu@outlook.com>, <joshdon@google.com>,
<juri.lelli@redhat.com>, <kprateek.nayak@amd.com>,
<len.brown@intel.com>, <libchen@purestorage.com>,
<linux-kernel@vger.kernel.org>, <mgorman@suse.de>,
<mingo@redhat.com>, <peterz@infradead.org>, <qyousef@layalina.io>,
<rostedt@goodmis.org>, <sshegde@linux.ibm.com>,
<tim.c.chen@intel.com>, <tim.c.chen@linux.intel.com>,
<tingyin.duan@gmail.com>, <vernhao@tencent.com>,
<vincent.guittot@linaro.org>, <vineethr@linux.ibm.com>,
<vschneid@redhat.com>, <zhao1.liu@intel.com>,
<ziqianlu@bytedance.com>
Subject: Re: [PATCH v2] sched/cache: Reduce the overhead of task_cache_work by only scanning the visited cpus.
Date: Wed, 15 Apr 2026 11:10:43 +0800
Message-ID: <1ae149cd-7b36-4625-8e93-daf45aaac080@intel.com>
In-Reply-To: <20260414150745.225416-1-luogengkun2@huawei.com>

Hi Gengkun,
On 4/14/2026 11:07 PM, Luo Gengkun wrote:
> The overhead of task_cache_work is high, especially on multi-NUMA systems.
> Currently, task_cache_work tries to find the pref_llc by scanning all CPUs
> in the system. However, most of these scans are meaningless, such as those
> for CPUs that have never been visited or were accessed a long time ago.
>
> To address this problem, this patch introduces visited_cpus to track the
> visited CPUs and uses llc_epoch_visited_timeout to evict CPUs that have
> timed out.
>
> Signed-off-by: Luo Gengkun <luogengkun2@huawei.com>
> ---
> Thanks for the reviews. I've updated the patch based on your feedback.
>
> v2 Changes:
> 1. Added a pre-check before setting/clearing visited_cpus to avoid C2C overhead.
> 2. Guarded llc_epoch_visited_timeout behind a static key to minimize overhead.
Since the visited-CPUs optimization should help reduce the scan cost,
I wonder if we should enable it by default, regardless of the timeout
value set by the user. That would mainly help avoid introducing the
extra debugfs control and static key.
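Something like the following, just a rough sketch against your v2
(untested, reusing the field names from your patch):

	for_each_cpu(i, sched_domain_span(sd)) {
		/*
		 * Unconditionally skip CPUs this mm has never run on,
		 * without any static key or debugfs knob.
		 */
		if (!cpumask_test_cpu(i, &mm->sc_stat.visited_cpus))
			continue;

		occ = fraction_mm_sched(cpu_rq(i),
					per_cpu_ptr(mm->sc_stat.pcpu_sched, i));
		...
	}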
> #ifdef CONFIG_PREEMPT_DYNAMIC
> @@ -669,6 +717,8 @@ static __init int sched_init_debug(void)
> llc = debugfs_create_dir("llc_balancing", debugfs_sched);
> debugfs_create_file("enabled", 0644, llc, NULL,
> &sched_cache_enable_fops);
> + debugfs_create_file("epoch_visited_timeout", 0644, llc, NULL,
> + &sched_cache_timeout_enable_fops);
Is it possible to reuse llc_epoch_affinity_timeout without introducing
epoch_visited_timeout? The idea is that if a task has not run on that CPU
for 10 ms (by default), its footprint will be cleared.
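That is, something like the check below in the scan loop, as a sketch
only (it assumes rq->cpu_epoch and pcpu_sched->epoch are counted in
the same units as llc_epoch_affinity_timeout):

	/*
	 * The per-CPU footprint would have expired under
	 * llc_epoch_affinity_timeout anyway, so skip this CPU
	 * without introducing a second timeout knob.
	 */
	if (rq->cpu_epoch - pcpu_sched->epoch > llc_epoch_affinity_timeout)
		continue;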
[ ... ]
> @@ -1736,8 +1746,17 @@ static void task_cache_work(struct callback_head *work)
> continue;
>
> for_each_cpu(i, sched_domain_span(sd)) {
> - occ = fraction_mm_sched(cpu_rq(i),
> - per_cpu_ptr(mm->sc_stat.pcpu_sched, i));
> + struct rq *rq = cpu_rq(i);
> + struct sched_cache_time *pcpu_sched = per_cpu_ptr(mm->sc_stat.pcpu_sched, i);
> + /* Skip the rq that has not been hit for a long time */
> + if (sched_cache_timeout_enabled() &&
> + cpumask_test_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus) &&
cpumask_test_cpu(i, ...) should be fine here, since i is cpu_of(rq).
Besides, the rq->cpu_epoch access above is done without holding
cpu_epoch_lock. I wonder if we could safely calculate
rq->cpu_epoch - pcpu_sched->epoch inside fraction_mm_sched() while
holding the lock?
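Roughly like the sketch below (untested, and assuming cpu_epoch_lock
is the per-rq lock that protects rq->cpu_epoch):

	static unsigned long fraction_mm_sched(struct rq *rq,
					       struct sched_cache_time *pcpu_sched)
	{
		guard(raw_spinlock_irqsave)(&rq->cpu_epoch_lock);

		/*
		 * Read the epoch delta under cpu_epoch_lock so it is
		 * consistent with the occupancy it scales; treat a
		 * stale footprint as zero occupancy.
		 */
		if (rq->cpu_epoch - pcpu_sched->epoch > llc_epoch_visited_timeout)
			return 0;

		...
	}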
I'll test your patch after fixing the bug reported by sashiko.dev.
thanks,
Chenyu