From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shrikanth Hegde <sshegde@linux.ibm.com>
To: mingo@kernel.org, peterz@infradead.org, vincent.guittot@linaro.org,
	linux-kernel@vger.kernel.org
Cc: sshegde@linux.ibm.com, kprateek.nayak@amd.com, juri.lelli@redhat.com,
	vschneid@redhat.com, dietmar.eggemann@arm.com, tj@kernel.org,
	rostedt@goodmis.org, tglx@kernel.org, mgorman@suse.de,
	bsegall@google.com, arighi@nvidia.com
Subject: [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
Date: Wed, 6 May 2026 16:30:51 +0530
Message-ID: <20260506110052.9974-3-sshegde@linux.ibm.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260506110052.9974-1-sshegde@linux.ibm.com>
References: <20260506110052.9974-1-sshegde@linux.ibm.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now that cpu_smt_mask is defined as cpumask_of(cpu) for CONFIG_SCHED_SMT=n,
it is possible to get rid of the ifdeffery. Effectively:

- sched_smt_present is now always defined.

- cpumask_weight(cpumask_of(cpu)) == 1, so sched_smt_present_inc/dec will
  never enable sched_smt_present, which is expected.

- Paths that were compile-time eliminated become runtime guarded using
  static keys.

- set_idle_cores, test_idle_cores etc. are now defined, which could let
  CONFIG_SCHED_SMT=n systems use the same optimizations within the LLC at
  wakeup.

- This exposes the sched_smt_present and stop_core_cpuslocked symbols for
  CONFIG_SCHED_SMT=n. Likely not a concern.

- There is some code bloat for CONFIG_SCHED_SMT=n (NR_CPUS=2048):

  add/remove: 25/18 grow/shrink: 26/19 up/down: 6696/-3064 (3632)
  Total: Before=30771823, After=30775455, chg +0.01%

- No code bloat for CONFIG_SCHED_SMT=y, which is expected.
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 include/linux/sched/smt.h |  4 ----
 kernel/sched/core.c       |  6 ------
 kernel/sched/ext_idle.c   |  6 ------
 kernel/sched/fair.c       | 35 -----------------------------------
 kernel/sched/sched.h      |  6 ------
 kernel/sched/topology.c   |  2 --
 kernel/stop_machine.c     |  2 --
 kernel/workqueue.c        |  4 ----
 8 files changed, 65 deletions(-)

diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
index 166b19af956f..cde6679c0278 100644
--- a/include/linux/sched/smt.h
+++ b/include/linux/sched/smt.h
@@ -4,16 +4,12 @@

 #include <linux/static_key.h>

-#ifdef CONFIG_SCHED_SMT
 extern struct static_key_false sched_smt_present;

 static __always_inline bool sched_smt_active(void)
 {
	return static_branch_likely(&sched_smt_present);
 }
-#else
-static __always_inline bool sched_smt_active(void) { return false; }
-#endif

 void arch_smt_update(void);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8871449d3c6..055db51c5483 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8604,18 +8604,14 @@ static void cpuset_cpu_inactive(unsigned int cpu)

 static inline void sched_smt_present_inc(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
		static_branch_inc_cpuslocked(&sched_smt_present);
-#endif
 }

 static inline void sched_smt_present_dec(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
		static_branch_dec_cpuslocked(&sched_smt_present);
-#endif
 }

 int sched_cpu_activate(unsigned int cpu)
@@ -8703,9 +8699,7 @@ int sched_cpu_deactivate(unsigned int cpu)
	 */
	sched_smt_present_dec(cpu);

-#ifdef CONFIG_SCHED_SMT
	sched_core_cpu_deactivate(cpu);
-#endif

	if (!sched_smp_initialized)
		return 0;
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 7468560a6d80..2bcf58e99c9b 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -79,7 +79,6 @@ static bool scx_idle_test_and_clear_cpu(int cpu)
	int node = scx_cpu_node_if_enabled(cpu);
	struct cpumask *idle_cpus = idle_cpumask(node)->cpu;

-#ifdef CONFIG_SCHED_SMT
	/*
	 * SMT mask should be cleared whether we can claim @cpu or not. The SMT
	 * cluster is not wholly idle either way. This also prevents
@@ -104,7 +103,6 @@ static bool scx_idle_test_and_clear_cpu(int cpu)
		else if (cpumask_test_cpu(cpu, idle_smts))
			__cpumask_clear_cpu(cpu, idle_smts);
	}
-#endif

	return cpumask_test_and_clear_cpu(cpu, idle_cpus);
 }
@@ -622,7 +620,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
		goto out_unlock;
	}

-#ifdef CONFIG_SCHED_SMT
	/*
	 * Use @prev_cpu's sibling if it's idle.
	 */
@@ -634,7 +631,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
			goto out_unlock;
		}
	}
-#endif

	/*
	 * Search for any idle CPU in the same LLC domain.
@@ -714,7 +710,6 @@ static void update_builtin_idle(int cpu, bool idle)

	assign_cpu(cpu, idle_cpus, idle);

-#ifdef CONFIG_SCHED_SMT
	if (sched_smt_active()) {
		const struct cpumask *smt = cpu_smt_mask(cpu);
		struct cpumask *idle_smts = idle_cpumask(node)->smt;
@@ -731,7 +726,6 @@ static void update_builtin_idle(int cpu, bool idle)
			cpumask_andnot(idle_smts, idle_smts, smt);
		}
	}
-#endif
 }

 /*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 728965851842..d19c416d1b84 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1555,7 +1555,6 @@ update_stats_curr_start(struct cfs_rq *cfs_rq, struct sched_entity *se)

 static inline bool is_core_idle(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
	int sibling;

	for_each_cpu(sibling, cpu_smt_mask(cpu)) {
@@ -1565,7 +1564,6 @@ static inline bool is_core_idle(int cpu)
		if (!idle_cpu(sibling))
			return false;
	}
-#endif

	return true;
 }
@@ -2248,7 +2246,6 @@ numa_type numa_classify(unsigned int imbalance_pct,
	return node_fully_busy;
 }

-#ifdef CONFIG_SCHED_SMT
 /* Forward declarations of select_idle_sibling helpers */
 static inline bool test_idle_cores(int cpu);

@@ -2266,12 +2263,6 @@ static inline int numa_idle_core(int idle_core, int cpu)

	return idle_core;
 }
-#else /* !CONFIG_SCHED_SMT: */
-static inline int numa_idle_core(int idle_core, int cpu)
-{
-	return idle_core;
-}
-#endif /* !CONFIG_SCHED_SMT */

 /*
  * Gather all necessary information to make NUMA balancing placement
@@ -7782,7 +7773,6 @@ static inline int __select_idle_cpu(int cpu, struct task_struct *p)
	return -1;
 }

-#ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);

@@ -7892,29 +7882,6 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
	return -1;
 }

-#else /* !CONFIG_SCHED_SMT: */
-
-static inline void set_idle_cores(int cpu, int val)
-{
-}
-
-static inline bool test_idle_cores(int cpu)
-{
-	return false;
-}
-
-static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
-{
-	return __select_idle_cpu(core, p);
-}
-
-static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
-{
-	return -1;
-}
-
-#endif /* !CONFIG_SCHED_SMT */
-
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -12006,9 +11973,7 @@ static int should_we_balance(struct lb_env *env)
		 * idle has been found, then its not needed to check other
		 * SMT siblings for idleness:
		 */
-#ifdef CONFIG_SCHED_SMT
		cpumask_andnot(swb_cpus, swb_cpus, cpu_smt_mask(cpu));
-#endif
		continue;
	}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9f63b15d309d..e476623a0c2a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1667,7 +1667,6 @@ do {						\
		flags = _raw_spin_rq_lock_irqsave(rq);	\
 } while (0)

-#ifdef CONFIG_SCHED_SMT
 extern void __update_idle_core(struct rq *rq);

 static inline void update_idle_core(struct rq *rq)
@@ -1676,12 +1675,7 @@ static inline void update_idle_core(struct rq *rq)
		__update_idle_core(rq);
 }

-#else /* !CONFIG_SCHED_SMT: */
-static inline void update_idle_core(struct rq *rq) { }
-#endif /* !CONFIG_SCHED_SMT */
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
 static inline struct task_struct *task_of(struct sched_entity *se)
 {
	WARN_ON_ONCE(!entity_is_task(se));
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5847b83d9d55..a1f46e3f4ede 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1310,9 +1310,7 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)

		cpumask_copy(mask, sched_group_span(sg));
		for_each_cpu(cpu, mask) {
			cores++;
-#ifdef CONFIG_SCHED_SMT
			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
-#endif
		}
		sg->cores = cores;
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 3fe6b0c99f3d..e17afa52893c 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -632,7 +632,6 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
 }
 EXPORT_SYMBOL_GPL(stop_machine);

-#ifdef CONFIG_SCHED_SMT
 int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
 {
	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
@@ -651,7 +650,6 @@ int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
	return stop_cpus(smt_mask, multi_cpu_stop, &msdata);
 }
 EXPORT_SYMBOL_GPL(stop_core_cpuslocked);
-#endif

 /**
  * stop_machine_from_inactive_cpu - stop_machine() from inactive CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5f747f241a5f..99ef412f02a6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -8187,11 +8187,7 @@ static bool __init cpus_dont_share(int cpu0, int cpu1)

 static bool __init cpus_share_smt(int cpu0, int cpu1)
 {
-#ifdef CONFIG_SCHED_SMT
	return cpumask_test_cpu(cpu0, cpu_smt_mask(cpu1));
-#else
-	return false;
-#endif
 }

 static bool __init cpus_share_numa(int cpu0, int cpu1)
-- 
2.51.0