From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shrikanth Hegde <sshegde@linux.ibm.com>
To: mingo@kernel.org, peterz@infradead.org, vincent.guittot@linaro.org,
	linux-kernel@vger.kernel.org
Cc: sshegde@linux.ibm.com, kprateek.nayak@amd.com, juri.lelli@redhat.com,
	vschneid@redhat.com, dietmar.eggemann@arm.com, tj@kernel.org,
	rostedt@goodmis.org, mgorman@suse.de, bsegall@google.com,
	arighi@nvidia.com
Subject: [PATCH v2 2/3] sched: Simplify ifdeffery around cpu_smt_mask
Date: Tue, 12 May 2026 20:51:24 +0530
Message-ID: <20260512152125.308280-3-sshegde@linux.ibm.com>
In-Reply-To: <20260512152125.308280-1-sshegde@linux.ibm.com>
References: <20260512152125.308280-1-sshegde@linux.ibm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now that cpu_smt_mask() is defined as cpumask_of(cpu) for
CONFIG_SCHED_SMT=n, it is possible to get rid of the ifdeffery.
Effectively:

- sched_smt_present is now always defined.
- cpumask_weight(cpumask_of(cpu)) == 1, so sched_smt_present_inc/dec
  will never enable sched_smt_present, which is expected.
- Paths that were compile-time eliminated become runtime guarded using
  static keys.
- set_idle_cores, test_idle_cores etc. are now defined, which could
  likely let CONFIG_SCHED_SMT=n systems use the same optimizations
  within the LLC at wakeups.
- This exposes the sched_smt_present symbol for CONFIG_SCHED_SMT=n.
  Likely not a concern.
- There is some code bloat for CONFIG_SCHED_SMT=n (NR_CPUS=2048):
  add/remove: 24/18 grow/shrink: 26/28 up/down: 6396/-3188 (3208)
  Total: Before=30629880, After=30633088, chg +0.01%
- No code bloat for CONFIG_SCHED_SMT=y, which is expected.
- Add a comment around stop_core_cpuslocked on why its ifdefs are not
  removed.
- This leaves the remaining uses of CONFIG_SCHED_SMT mainly to the
  topology building bits, which involve a policy based decision.
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 include/linux/sched/smt.h |  4 ----
 kernel/sched/core.c       |  6 ------
 kernel/sched/ext_idle.c   |  6 ------
 kernel/sched/fair.c       | 35 -----------------------------------
 kernel/sched/sched.h      |  6 ------
 kernel/sched/topology.c   |  2 --
 kernel/stop_machine.c     |  5 +++++
 kernel/workqueue.c        |  4 ----
 8 files changed, 5 insertions(+), 63 deletions(-)

diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
index 166b19af956f..cde6679c0278 100644
--- a/include/linux/sched/smt.h
+++ b/include/linux/sched/smt.h
@@ -4,16 +4,12 @@

 #include <linux/static_key.h>

-#ifdef CONFIG_SCHED_SMT
 extern struct static_key_false sched_smt_present;

 static __always_inline bool sched_smt_active(void)
 {
 	return static_branch_likely(&sched_smt_present);
 }
-#else
-static __always_inline bool sched_smt_active(void) { return false; }
-#endif

 void arch_smt_update(void);

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b905805bbcbe..3ae5f19c1b7e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8612,18 +8612,14 @@ static void cpuset_cpu_inactive(unsigned int cpu)

 static inline void sched_smt_present_inc(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_inc_cpuslocked(&sched_smt_present);
-#endif
 }

 static inline void sched_smt_present_dec(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_dec_cpuslocked(&sched_smt_present);
-#endif
 }

 int sched_cpu_activate(unsigned int cpu)
@@ -8711,9 +8707,7 @@ int sched_cpu_deactivate(unsigned int cpu)
 	 */
 	sched_smt_present_dec(cpu);

-#ifdef CONFIG_SCHED_SMT
 	sched_core_cpu_deactivate(cpu);
-#endif

 	if (!sched_smp_initialized)
 		return 0;
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 6e1980763270..9f5ad6b071f9 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -79,7 +79,6 @@ static bool scx_idle_test_and_clear_cpu(int cpu)
 	int node = scx_cpu_node_if_enabled(cpu);
 	struct cpumask *idle_cpus = idle_cpumask(node)->cpu;

-#ifdef CONFIG_SCHED_SMT
 	/*
 	 * SMT mask should be cleared whether we can claim @cpu or not. The SMT
 	 * cluster is not wholly idle either way. This also prevents
@@ -104,7 +103,6 @@ static bool scx_idle_test_and_clear_cpu(int cpu)
 		else if (cpumask_test_cpu(cpu, idle_smts))
 			__cpumask_clear_cpu(cpu, idle_smts);
 	}
-#endif

 	return cpumask_test_and_clear_cpu(cpu, idle_cpus);
 }
@@ -622,7 +620,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
 			goto out_unlock;
 		}

-#ifdef CONFIG_SCHED_SMT
 		/*
 		 * Use @prev_cpu's sibling if it's idle.
 		 */
@@ -634,7 +631,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
 				goto out_unlock;
 			}
 		}
-#endif

 		/*
 		 * Search for any idle CPU in the same LLC domain.
@@ -714,7 +710,6 @@ static void update_builtin_idle(int cpu, bool idle)

 	assign_cpu(cpu, idle_cpus, idle);

-#ifdef CONFIG_SCHED_SMT
 	if (sched_smt_active()) {
 		const struct cpumask *smt = cpu_smt_mask(cpu);
 		struct cpumask *idle_smts = idle_cpumask(node)->smt;
@@ -731,7 +726,6 @@ static void update_builtin_idle(int cpu, bool idle)
 			cpumask_andnot(idle_smts, idle_smts, smt);
 		}
 	}
-#endif
 }

 /*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3ebec186f982..353e31ecaadc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1584,7 +1584,6 @@ update_stats_curr_start(struct cfs_rq *cfs_rq, struct sched_entity *se)

 static inline bool is_core_idle(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
 	int sibling;

 	for_each_cpu(sibling, cpu_smt_mask(cpu)) {
@@ -1594,7 +1593,6 @@ static inline bool is_core_idle(int cpu)
 		if (!idle_cpu(sibling))
 			return false;
 	}
-#endif

 	return true;
 }
@@ -2277,7 +2275,6 @@ numa_type numa_classify(unsigned int imbalance_pct,
 	return node_fully_busy;
 }

-#ifdef CONFIG_SCHED_SMT
 /* Forward declarations of select_idle_sibling helpers */
 static inline bool test_idle_cores(int cpu);
 static inline int numa_idle_core(int idle_core, int cpu)
@@ -2295,12 +2292,6 @@ static inline int numa_idle_core(int idle_core, int cpu)

 	return idle_core;
 }
-#else /* !CONFIG_SCHED_SMT: */
-static inline int numa_idle_core(int idle_core, int cpu)
-{
-	return idle_core;
-}
-#endif /* !CONFIG_SCHED_SMT */

 /*
  * Gather all necessary information to make NUMA balancing placement
@@ -7811,7 +7802,6 @@ static inline int __select_idle_cpu(int cpu, struct task_struct *p)
 	return -1;
 }

-#ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);

@@ -7921,29 +7911,6 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
 	return -1;
 }

-#else /* !CONFIG_SCHED_SMT: */
-
-static inline void set_idle_cores(int cpu, int val)
-{
-}
-
-static inline bool test_idle_cores(int cpu)
-{
-	return false;
-}
-
-static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
-{
-	return __select_idle_cpu(core, p);
-}
-
-static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
-{
-	return -1;
-}
-
-#endif /* !CONFIG_SCHED_SMT */
-
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -12036,9 +12003,7 @@ static int should_we_balance(struct lb_env *env)
 			 * idle has been found, then its not needed to check other
 			 * SMT siblings for idleness:
 			 */
-#ifdef CONFIG_SCHED_SMT
 			cpumask_andnot(swb_cpus, swb_cpus, cpu_smt_mask(cpu));
-#endif
 			continue;
 		}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9f63b15d309d..e476623a0c2a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1667,7 +1667,6 @@ do {						\
 	flags = _raw_spin_rq_lock_irqsave(rq);	\
 } while (0)

-#ifdef CONFIG_SCHED_SMT
 extern void __update_idle_core(struct rq *rq);

 static inline void update_idle_core(struct rq *rq)
@@ -1676,12 +1675,7 @@ static inline void update_idle_core(struct rq *rq)
 		__update_idle_core(rq);
 }

-#else /* !CONFIG_SCHED_SMT: */
-static inline void update_idle_core(struct rq *rq) { }
-#endif /* !CONFIG_SCHED_SMT */
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
 static inline struct task_struct *task_of(struct sched_entity *se)
 {
 	WARN_ON_ONCE(!entity_is_task(se));
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5847b83d9d55..a1f46e3f4ede 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1310,9 +1310,7 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 		cpumask_copy(mask, sched_group_span(sg));
 		for_each_cpu(cpu, mask) {
 			cores++;
-#ifdef CONFIG_SCHED_SMT
 			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
-#endif
 		}
 		sg->cores = cores;

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 3fe6b0c99f3d..773d8e9ae30c 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -633,6 +633,11 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
 EXPORT_SYMBOL_GPL(stop_machine);

 #ifdef CONFIG_SCHED_SMT
+/*
+ * INTEL_IFS is the only user of this API. That selftest can
+ * only be compiled if SMP=y. On x86 it selects SCHED_SMT.
+ * Keep the ifdefs for now.
+ */
 int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 3d2e3b2ec528..c911fdcb4428 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -8198,11 +8198,7 @@ static bool __init cpus_dont_share(int cpu0, int cpu1)

 static bool __init cpus_share_smt(int cpu0, int cpu1)
 {
-#ifdef CONFIG_SCHED_SMT
 	return cpumask_test_cpu(cpu0, cpu_smt_mask(cpu1));
-#else
-	return false;
-#endif
 }

 static bool __init cpus_share_numa(int cpu0, int cpu1)
--
2.47.3