public inbox for linux-kernel@vger.kernel.org
* [PATCH] sched/fair: Optimize CPU iteration using for_each_cpu_and[not]
@ 2025-08-15  1:15 lirongqing
  2025-08-19 11:45 ` Valentin Schneider
  0 siblings, 1 reply; 3+ messages in thread
From: lirongqing @ 2025-08-15  1:15 UTC
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, vschneid, linux-kernel
  Cc: Li RongQing

From: Li RongQing <lirongqing@baidu.com>

Replace open-coded CPU iteration in three locations with the
for_each_cpu_and() and for_each_cpu_andnot() macros.

Folding the skip test into the mask iteration simplifies the code and
can avoid visiting CPUs that would be skipped anyway, giving a minor
efficiency gain.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 kernel/sched/fair.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b173a05..8794581 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1389,10 +1389,7 @@ static inline bool is_core_idle(int cpu)
 #ifdef CONFIG_SCHED_SMT
 	int sibling;
 
-	for_each_cpu(sibling, cpu_smt_mask(cpu)) {
-		if (cpu == sibling)
-			continue;
-
+	for_each_cpu_andnot(sibling, cpu_smt_mask(cpu), cpumask_of(cpu)) {
 		if (!idle_cpu(sibling))
 			return false;
 	}
@@ -2474,11 +2471,7 @@ static void task_numa_find_cpu(struct task_numa_env *env,
 		maymove = !load_too_imbalanced(src_load, dst_load, env);
 	}
 
-	for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
-		/* Skip this CPU if the source task cannot migrate */
-		if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
-			continue;
-
+	for_each_cpu_and(cpu, cpumask_of_node(env->dst_nid), env->p->cpus_ptr) {
 		env->dst_cpu = cpu;
 		if (task_numa_compare(env, taskimp, groupimp, maymove))
 			break;
@@ -7493,10 +7486,7 @@ void __update_idle_core(struct rq *rq)
 	if (test_idle_cores(core))
 		goto unlock;
 
-	for_each_cpu(cpu, cpu_smt_mask(core)) {
-		if (cpu == core)
-			continue;
-
+	for_each_cpu_andnot(cpu, cpu_smt_mask(core), cpumask_of(core)) {
 		if (!available_idle_cpu(cpu))
 			goto unlock;
 	}
-- 
2.9.4



Thread overview: 3+ messages
2025-08-15  1:15 [PATCH] sched/fair: Optimize CPU iteration using for_each_cpu_and[not] lirongqing
2025-08-19 11:45 ` Valentin Schneider
2025-09-25  9:50   ` [External Mail] " Li,Rongqing
