From: Tim Chen <tim.c.chen@linux.intel.com>
To: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	K Prateek Nayak <kprateek.nayak@amd.com>,
	"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Chen Yu <yu.c.chen@intel.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	Hillf Danton <hdanton@sina.com>,
	Shrikanth Hegde <sshegde@linux.ibm.com>,
	Jianyong Wu <jianyong.wu@outlook.com>,
	Yangyu Chen <cyy@cyyself.name>,
	Tingyin Duan <tingyin.duan@gmail.com>,
	Vern Hao <vernhao@tencent.com>, Vern Hao <haoxing990@gmail.com>,
	Len Brown <len.brown@intel.com>, Aubrey Li <aubrey.li@intel.com>,
	Zhao Liu <zhao1.liu@intel.com>, Chen Yu <yu.chen.surf@gmail.com>,
	Adam Li <adamli@os.amperecomputing.com>,
	Aaron Lu <ziqianlu@bytedance.com>,
	Tim Chen <tim.c.chen@intel.com>, Josh Don <joshdon@google.com>,
	Gavin Guo <gavinguo@igalia.com>,
	Qais Yousef <qyousef@layalina.io>,
	Libo Chen <libchen@purestorage.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 15/21] sched/cache: Disable cache aware scheduling for processes with high thread counts
Date: Thu, 19 Feb 2026 13:12:04 -0800
Message-ID: <f024ea0d2dce10cef006637603ede1cae4241ebd.camel@linux.intel.com>
In-Reply-To: <6721ce77-c8b5-4fb3-b133-282b31ee6af2@linux.ibm.com>

On Thu, 2026-02-19 at 07:58 +0530, Madadi Vineeth Reddy wrote:
> On 19/02/26 03:14, Tim Chen wrote:
> > On Wed, 2026-02-18 at 23:24 +0530, Madadi Vineeth Reddy wrote:
> > > On 11/02/26 03:48, Tim Chen wrote:
> > > > From: Chen Yu <yu.c.chen@intel.com>
> > > > 
> > > > 
> > [ .. snip ..]
> > 
> > > >  
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index d1145997b88d..86b6b08e7e1e 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -1223,6 +1223,19 @@ static inline bool valid_llc_buf(struct sched_domain *sd,
> > > >  	return valid_llc_id(id);
> > > >  }
> > > >  
> > > > +static bool exceed_llc_nr(struct mm_struct *mm, int cpu)
> > > > +{
> > > > +	int smt_nr = 1;
> > > > +
> > > > +#ifdef CONFIG_SCHED_SMT
> > > > +	if (sched_smt_active())
> > > > +		smt_nr = cpumask_weight(cpu_smt_mask(cpu));
> > > > +#endif
> > > > +
> > > > +	return !fits_capacity((mm->sc_stat.nr_running_avg * smt_nr),
> > > > +			per_cpu(sd_llc_size, cpu));
> > > 
> > > 
> > > On Power10/Power11 with SMT4 and an LLC size of 4 CPUs, this check
> > > effectively disables cache-aware scheduling for any process.
> > 
> > There are 4 cores per LLC, with 4 SMT threads per core? In that case,
> > once we have more than 4 running threads and there's another idle LLC
> > available, it seems like putting the additional threads on a different
> > LLC is the right thing to do, as threads sharing a core will usually
> > run much slower.
> > 
> > But when the number of threads is under 4, we should still be
> > doing aggregation.
> > 
> > Perhaps I am misunderstanding your topology.
> 
> There is only one core per LLC, and its size is 4 CPUs.
> So mm->sc_stat.nr_running_avg can't be >= 1 if cache-aware
> scheduling is to remain enabled.

If there is only 1 core per LLC, and mm->sc_stat.nr_running_avg > 1,
wouldn't it be better to spread the tasks among the cores with
normal load balancing, i.e. run without cache-aware scheduling,
instead of aggregating the threads and having them fight over the
resources of a single core?
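
For what it's worth, here is a quick standalone sketch of the
arithmetic (a userspace illustration, not kernel code, assuming
fits_capacity() keeps its usual fair.c definition of
"(cap) * 1280 < (max) * 1024" and treating nr_running_avg as a plain
running-thread count):

#include <stdio.h>

/* Same ~20% headroom rule as the kernel's fits_capacity() macro. */
static int fits_capacity(long cap, long max)
{
	return cap * 1280 < max * 1024;
}

int main(void)
{
	int smt_nr = 4;		/* SMT4: 4 hardware threads per core */
	int llc_size = 4;	/* one SMT4 core per LLC, so 4 CPUs */
	long avg;

	/* exceed == 1 means cache aware scheduling gets disabled */
	for (avg = 1; avg <= 4; avg++)
		printf("nr_running_avg=%ld exceed_llc_nr=%d\n",
		       avg, !fits_capacity(avg * smt_nr, llc_size));
	return 0;
}

With smt_nr = 4 and llc_size = 4, the capacity check only passes when
nr_running_avg * 4 * 1280 < 4 * 1024, i.e. nr_running_avg < 0.8, so a
process averaging even one running thread is excluded, which matches
your observation.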

Tim 

> 
> Thanks,
> Vineeth
> 
> > 
> > Tim
> > 
> > > 
> > > I raised this point in v1 as well. Increasing the threshold
> > > doesn't seem like a viable solution either, as that would regress
> > > hackbench/ebizzy.
> > > 
> > > Is there a way to make this useful for architectures with small LLC
> > > sizes? One possible approach we were exploring is to define the LLC
> > > at a hemisphere level that comprises multiple SMT4 cores.
> > > 
> > > Thanks,
> > > Vineeth
