From: K Prateek Nayak <kprateek.nayak@amd.com>
To: Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
<linux-kernel@vger.kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
Chen Yu <yu.c.chen@intel.com>,
Shrikanth Hegde <sshegde@linux.ibm.com>,
"Gautham R. Shenoy" <gautham.shenoy@amd.com>,
K Prateek Nayak <kprateek.nayak@amd.com>
Subject: [PATCH v3 1/8] sched/topology: Compute sd_weight considering cpuset partitions
Date: Tue, 20 Jan 2026 11:32:39 +0000
Message-ID: <20260120113246.27987-2-kprateek.nayak@amd.com>
In-Reply-To: <20260120113246.27987-1-kprateek.nayak@amd.com>
The "sd_weight" used to calculate the load balancing interval and its
limits currently considers the span weight of the entire topology level,
without accounting for cpuset partitions.

Compute "sd_weight" from "sd_span" instead, after the latter has been
restricted to the cpu_map covered by the partition, and set the load
balancing interval and its limits accordingly.
Fixes: cb83b629bae03 ("sched/numa: Rewrite the CONFIG_NUMA sched domain support")
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
Changelog rfc v2..v3:
o New patch.
---
kernel/sched/topology.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index cf643a5ddedd..649674bb6c3c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1638,8 +1638,6 @@ sd_init(struct sched_domain_topology_level *tl,
int sd_id, sd_weight, sd_flags = 0;
struct cpumask *sd_span;
- sd_weight = cpumask_weight(tl->mask(tl, cpu));
-
if (tl->sd_flags)
sd_flags = (*tl->sd_flags)();
	if (WARN_ONCE(sd_flags & ~TOPOLOGY_SD_FLAGS,
			"wrong sd_flags in topology description\n"))
		sd_flags &= TOPOLOGY_SD_FLAGS;
*sd = (struct sched_domain){
- .min_interval = sd_weight,
- .max_interval = 2*sd_weight,
.busy_factor = 16,
.imbalance_pct = 117,
@@ -1668,7 +1664,6 @@ sd_init(struct sched_domain_topology_level *tl,
,
.last_balance = jiffies,
- .balance_interval = sd_weight,
/* 50% success rate */
.newidle_call = 512,
@@ -1685,6 +1680,11 @@ sd_init(struct sched_domain_topology_level *tl,
cpumask_and(sd_span, cpu_map, tl->mask(tl, cpu));
sd_id = cpumask_first(sd_span);
+ sd_weight = cpumask_weight(sd_span);
+ sd->min_interval = sd_weight;
+ sd->max_interval = 2 * sd_weight;
+ sd->balance_interval = sd_weight;
+
sd->flags |= asym_cpu_capacity_classify(sd_span, cpu_map);
WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
--
2.34.1
Thread overview: 36+ messages
2026-01-20 11:32 [PATCH v3 0/8] sched/topology: Optimize sd->shared allocation K Prateek Nayak
2026-01-20 11:32 ` K Prateek Nayak [this message]
2026-01-21 14:45 ` [PATCH v3 1/8] sched/topology: Compute sd_weight considering cpuset partitions Chen, Yu C
2026-01-21 15:42 ` Shrikanth Hegde
2026-01-22 2:51 ` K Prateek Nayak
2026-02-05 16:53 ` Valentin Schneider
2026-01-20 11:32 ` [PATCH v3 2/8] sched/topology: Allocate per-CPU sched_domain_shared in s_data K Prateek Nayak
2026-01-21 15:17 ` Chen, Yu C
2026-02-05 16:53 ` Valentin Schneider
2026-01-20 11:32 ` [PATCH v3 3/8] sched/topology: Switch to assigning "sd->shared" from s_data K Prateek Nayak
2026-01-21 15:26 ` Chen, Yu C
2026-01-22 2:49 ` K Prateek Nayak
2026-01-22 8:12 ` Shrikanth Hegde
2026-01-22 8:36 ` K Prateek Nayak
2026-01-23 4:08 ` Shrikanth Hegde
2026-01-23 4:53 ` K Prateek Nayak
2026-02-05 16:53 ` Valentin Schneider
2026-02-06 5:20 ` K Prateek Nayak
2026-02-06 9:38 ` Valentin Schneider
2026-02-14 3:04 ` Chen, Yu C
2026-02-16 3:50 ` K Prateek Nayak
2026-02-14 2:59 ` Chen, Yu C
2026-01-20 11:32 ` [PATCH v3 4/8] sched/topology: Remove sched_domain_shared allocation with sd_data K Prateek Nayak
2026-02-05 16:53 ` Valentin Schneider
2026-01-20 11:32 ` [PATCH v3 5/8] sched/core: Check for rcu_read_lock_any_held() in idle_get_state() K Prateek Nayak
2026-01-20 11:32 ` [PATCH v3 6/8] sched/fair: Remove superfluous rcu_read_lock() in the wakeup path K Prateek Nayak
2026-01-20 11:32 ` [PATCH v3 7/8] sched/fair: Simplify the entry condition for update_idle_cpu_scan() K Prateek Nayak
2026-02-14 15:41 ` Chen, Yu C
2026-01-20 11:32 ` [PATCH v3 8/8] sched/fair: Simplify SIS_UTIL handling in select_idle_cpu() K Prateek Nayak
2026-01-23 6:06 ` Shrikanth Hegde
2026-01-23 6:27 ` K Prateek Nayak
2026-01-23 7:14 ` Shrikanth Hegde
2026-02-14 15:56 ` Chen, Yu C
2026-01-21 16:16 ` [PATCH v3 0/8] sched/topology: Optimize sd->shared allocation Peter Zijlstra
2026-01-22 2:56 ` K Prateek Nayak
2026-01-23 9:54 ` Peter Zijlstra