From: Qais Yousef <qyousef@layalina.io>
To: "Chen, Yu C" <yu.c.chen@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Juri Lelli <juri.lelli@redhat.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
Madadi Vineeth Reddy <vineethr@linux.ibm.com>,
Hillf Danton <hdanton@sina.com>,
Shrikanth Hegde <sshegde@linux.ibm.com>,
Jianyong Wu <jianyong.wu@outlook.com>,
Yangyu Chen <cyy@cyyself.name>,
Tingyin Duan <tingyin.duan@gmail.com>,
Vern Hao <vernhao@tencent.com>, Vern Hao <haoxing990@gmail.com>,
Len Brown <len.brown@intel.com>, Aubrey Li <aubrey.li@intel.com>,
Zhao Liu <zhao1.liu@intel.com>, Chen Yu <yu.chen.surf@gmail.com>,
Adam Li <adamli@os.amperecomputing.com>,
Aaron Lu <ziqianlu@bytedance.com>,
Tim Chen <tim.c.chen@intel.com>, Josh Don <joshdon@google.com>,
Gavin Guo <gavinguo@igalia.com>,
Libo Chen <libchen@purestorage.com>,
linux-kernel@vger.kernel.org
Subject: Re: [Patch v4 00/22] Cache aware scheduling
Date: Sat, 25 Apr 2026 01:14:38 +0100 [thread overview]
Message-ID: <20260425001438.xroo3orrlmwp2rcb@airbuntu> (raw)
In-Reply-To: <72da6ff8-142c-4135-9b1a-5dbb30ecf7fd@intel.com>
On 04/24/26 01:17, Chen, Yu C wrote:
> On 4/21/2026 8:34 AM, Qais Yousef wrote:
> > On 04/20/26 17:01, Chen, Yu C wrote:
> > > On 4/16/2026 8:27 AM, Qais Yousef wrote:
> > > > On 04/01/26 14:52, Tim Chen wrote:
> > >
> > > [ ... ]
> > >
> > > It seems to me that there are multiple use cases. In one scenario,
> > > the administrator (including daemons) is responsible for tagging
> > > workloads. In another, users prefer the OS to handle automatic
> > > placement without any userspace involvement.
> >
> > How do you define this automatic placement? AFAICS you're just grouping all
> > tasks of a specific process to stay within the same LLC and hitting overcommit
> > issues, which you're working around with this load-balancer-only approach?
> >
> > I think in practice there will be many corner cases where state is not optimal
> > and we'd end up with heuristics to 'balance' things out, with sensitivity to
> > independent changes disturbing this fragile balance, causing weird regressions
> > and us slowly having less flexibility to move and shuffle code (okay, maybe too
> > much doom and gloom, but we've been through this in the past :)).
> >
> > I am not sure how many of these tests stressed the system with multiple
> > critical processes running concurrently.
> >
>
> In the initial RFC patches, we ran multi-process tests,
> where workloads were assigned by cache-aware LB to dedicated
> LLCs when under-loaded. I just conducted additional
> multi-process hackbench tests, and the results demonstrate
> improved stability with cache-aware LB enabled. Thus,
> I think for multi-process cases there is no difference from
> single-process cases - the tasks can be aggregated to one LLC
> as long as it is under-loaded, no matter which process the
> migrating task belongs to.
Multi as in > num_llcs?
Doing my own tests with schedqos and monitoring the schedqos log, I was surprised
how many processes are created from the simplest of operations.
My worry is that since you assume all processes must be grouped, in real-life
scenarios you will end up with more processes than num_llc in many corner cases.
With an opt-in approach, you know exactly how many there will be and admins can
design for it.
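To make the concern concrete, here is a toy model (my own sketch, not kernel
code; the task counts, LLC count, and both placement policies are made-up
illustrations) of what can happen when every process is aggregated to a single
LLC and there are more processes than LLCs:

```python
# Toy model: P processes, each a task count, placed on num_llcs LLCs.
# "aggregate" mimics a policy that keeps all tasks of a process in one LLC;
# "spread" is a plain round-robin baseline. All numbers are hypothetical.

def aggregate(processes, num_llcs):
    """Assign each process wholly to the currently least-loaded LLC."""
    llc_load = [0] * num_llcs
    for tasks in processes:
        target = llc_load.index(min(llc_load))
        llc_load[target] += tasks
    return llc_load

def spread(processes, num_llcs):
    """Baseline: distribute every task round-robin across all LLCs."""
    llc_load = [0] * num_llcs
    i = 0
    for tasks in processes:
        for _ in range(tasks):
            llc_load[i % num_llcs] += 1
            i += 1
    return llc_load

num_llcs = 4
processes = [6] * 6   # 6 processes of 6 tasks each: processes > num_llcs

print(aggregate(processes, num_llcs))  # [12, 12, 6, 6] - two LLCs overcommitted
print(spread(processes, num_llcs))     # [9, 9, 9, 9]   - even, but no affinity
```

The point is not that either policy is right, just that once processes
outnumber LLCs, per-process aggregation forces some LLCs to carry whole extra
processes, which is the overcommit the heuristics then have to unwind.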
>
> > By making it a userspace problem they have to figure out the right balance and
> > we can focus on providing the right mechanism.
> >
>
> I totally agree that with help from userspace, task aggregation
> would become more usable. The test data will speak. Once we have resolved
> the issues reported by Sashiko, we will evaluate the schedqos-provided
> interface.
Great. Happy to work closely with you to help iron out problems.
Thread overview: 69+ messages
2026-04-01 21:52 [Patch v4 00/22] Cache aware scheduling Tim Chen
2026-04-01 21:52 ` [Patch v4 01/22] sched/cache: Introduce infrastructure for cache-aware load balancing Tim Chen
2026-04-09 12:41 ` Peter Zijlstra
2026-04-09 19:21 ` Tim Chen
2026-04-09 23:00 ` Peter Zijlstra
2026-04-10 6:30 ` Chen, Yu C
2026-04-15 2:06 ` Vern Hao
2026-04-15 3:34 ` Chen, Yu C
2026-04-01 21:52 ` [Patch v4 02/22] sched/cache: Limit the scan number of CPUs when calculating task occupancy Tim Chen
2026-04-09 13:17 ` Luo Gengkun
2026-04-09 13:41 ` Peter Zijlstra
2026-04-10 10:12 ` Luo Gengkun
2026-04-10 7:29 ` Chen, Yu C
2026-04-10 10:20 ` Luo Gengkun
2026-04-10 17:12 ` Tim Chen
2026-04-10 17:27 ` Chen, Yu C
2026-04-13 7:23 ` [RFC PATCH] sched/fair: dynamically scale the period of cache work Jianyong Wu
2026-04-13 8:38 ` Chen, Yu C
2026-04-13 11:27 ` Jianyong Wu
2026-04-15 3:31 ` Chen, Yu C
2026-04-16 3:39 ` Jianyong Wu
2026-04-15 17:22 ` Tim Chen
2026-04-16 6:50 ` Jianyong Wu
2026-04-14 15:07 ` [PATCH v2] sched/cache: Reduce the overhead of task_cache_work by only scan the visisted cpus Luo Gengkun
2026-04-15 3:10 ` Chen, Yu C
2026-04-18 9:01 ` Luo Gengkun
2026-04-20 7:53 ` Chen, Yu C
2026-04-23 8:54 ` [PATCH v3] " Luo Gengkun
2026-04-01 21:52 ` [Patch v4 03/22] sched/cache: Record per LLC utilization to guide cache aware scheduling decisions Tim Chen
2026-04-01 21:52 ` [Patch v4 04/22] sched/cache: Introduce helper functions to enforce LLC migration policy Tim Chen
2026-04-01 21:52 ` [Patch v4 05/22] sched/cache: Make LLC id continuous Tim Chen
2026-04-01 21:52 ` [Patch v4 06/22] sched/cache: Assign preferred LLC ID to processes Tim Chen
2026-04-01 21:52 ` [Patch v4 07/22] sched/cache: Track LLC-preferred tasks per runqueue Tim Chen
2026-04-01 21:52 ` [Patch v4 08/22] sched/cache: Introduce per CPU's tasks LLC preference counter Tim Chen
2026-04-01 21:52 ` [Patch v4 09/22] sched/cache: Calculate the percpu sd task LLC preference Tim Chen
2026-04-01 21:52 ` [Patch v4 10/22] sched/cache: Count tasks prefering destination LLC in a sched group Tim Chen
2026-04-01 21:52 ` [Patch v4 11/22] sched/cache: Check local_group only once in update_sg_lb_stats() Tim Chen
2026-04-01 21:52 ` [Patch v4 12/22] sched/cache: Prioritize tasks preferring destination LLC during balancing Tim Chen
2026-04-01 21:52 ` [Patch v4 13/22] sched/cache: Add migrate_llc_task migration type for cache-aware balancing Tim Chen
2026-04-01 21:52 ` [Patch v4 14/22] sched/cache: Handle moving single tasks to/from their preferred LLC Tim Chen
2026-04-01 21:52 ` [Patch v4 15/22] sched/cache: Respect LLC preference in task migration and detach Tim Chen
2026-04-01 21:52 ` [Patch v4 16/22] sched/cache: Disable cache aware scheduling for processes with high thread counts Tim Chen
2026-04-09 12:43 ` Peter Zijlstra
2026-04-09 19:27 ` Tim Chen
2026-04-01 21:52 ` [Patch v4 17/22] sched/cache: Avoid cache-aware scheduling for memory-heavy processes Tim Chen
2026-04-09 12:46 ` Peter Zijlstra
2026-04-09 12:55 ` Peter Zijlstra
2026-04-10 8:59 ` Chen, Yu C
2026-04-10 9:20 ` Peter Zijlstra
2026-04-01 21:52 ` [Patch v4 18/22] sched/cache: Enable cache aware scheduling for multi LLCs NUMA node Tim Chen
2026-04-09 13:37 ` Peter Zijlstra
2026-04-09 19:39 ` Tim Chen
2026-04-01 21:52 ` [Patch v4 19/22] sched/cache: Allow the user space to turn on and off cache aware scheduling Tim Chen
2026-04-01 21:52 ` [Patch v4 20/22] sched/cache: Add user control to adjust the aggressiveness of cache-aware scheduling Tim Chen
2026-04-01 21:52 ` [Patch v4 21/22] -- DO NOT APPLY!!! -- sched/cache/debug: Display the per LLC occupancy for each process via proc fs Tim Chen
2026-04-01 21:52 ` [Patch v4 22/22] -- DO NOT APPLY!!! -- sched/cache/debug: Add ftrace to track the load balance statistics Tim Chen
2026-04-09 13:54 ` [Patch v4 00/22] Cache aware scheduling Peter Zijlstra
2026-04-09 20:02 ` Tim Chen
2026-04-14 3:20 ` Duan Tingyin
2026-04-15 17:35 ` Tim Chen
2026-04-16 0:27 ` Qais Yousef
2026-04-20 9:01 ` Chen, Yu C
2026-04-21 0:34 ` Qais Yousef
2026-04-21 20:57 ` Tim Chen
2026-04-23 15:06 ` Qais Yousef
2026-04-23 16:48 ` Chen, Yu C
2026-04-25 0:05 ` Qais Yousef
2026-04-23 17:17 ` Chen, Yu C
2026-04-25 0:14 ` Qais Yousef [this message]