public inbox for sched-ext@lists.linux.dev
* [PATCH] sched_ext: Use the resched_cpu() to replace resched_curr() in the bypass_lb_node()
@ 2025-12-22 11:53 Zqiang
  2025-12-22 11:53 ` [PATCH] sched_ext: Avoid multiple irq_work_queue() calls in destroy_dsq() Zqiang
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Zqiang @ 2025-12-22 11:53 UTC (permalink / raw)
  To: tj, void, arighi, changwoo; +Cc: sched-ext, linux-kernel, qiang.zhang

On PREEMPT_RT kernels, scx_bypass_lb_timerfn() runs in the preemptible
per-CPU ktimer kthread context, which means the following scenario can
occur (on x86):

       cpu1                          cpu2
                                 ktimer kthread:
                                 ->scx_bypass_lb_timerfn
                                   ->bypass_lb_node
                                     ->for_each_cpu(cpu, resched_mask)

    migration/1:                   preempted by migration/2:
    multi_cpu_stop()                 multi_cpu_stop()
    ->take_cpu_down()
      ->__cpu_disable()
        ->set cpu1 offline

                                   ->rq1 = cpu_rq(cpu1)
                                   ->resched_curr(rq1)
                                     ->smp_send_reschedule(cpu1)
                                       ->native_smp_send_reschedule(cpu1)
                                         ->if (unlikely(cpu_is_offline(cpu))) {
                                               WARN(1, "sched: Unexpected reschedule
                                                    of offline CPU#%d!\n", cpu);
                                               return;
                                           }

This commit therefore uses resched_cpu() instead of resched_curr() in
bypass_lb_node(); resched_cpu() checks cpu_online() under the rq lock
and thus avoids sending IPIs to offline CPUs.

Signed-off-by: Zqiang <qiang.zhang@linux.dev>
---
 kernel/sched/ext.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 5ebf8a740847..8f6d8d7f895c 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3956,13 +3956,8 @@ static void bypass_lb_node(struct scx_sched *sch, int node)
 					     nr_donor_target, nr_target);
 	}
 
-	for_each_cpu(cpu, resched_mask) {
-		struct rq *rq = cpu_rq(cpu);
-
-		raw_spin_rq_lock_irq(rq);
-		resched_curr(rq);
-		raw_spin_rq_unlock_irq(rq);
-	}
+	for_each_cpu(cpu, resched_mask)
+		resched_cpu(cpu);
 
 	for_each_cpu_and(cpu, cpu_online_mask, node_mask) {
 		u32 nr = READ_ONCE(cpu_rq(cpu)->scx.bypass_dsq.nr);
-- 
2.17.1



Thread overview: 8+ messages
2025-12-22 11:53 [PATCH] sched_ext: Use the resched_cpu() to replace resched_curr() in the bypass_lb_node() Zqiang
2025-12-22 11:53 ` [PATCH] sched_ext: Avoid multiple irq_work_queue() calls in destroy_dsq() Zqiang
2025-12-22 18:30   ` Andrea Righi
2025-12-23 13:16     ` Zqiang
2025-12-23  4:00   ` Tejun Heo
2025-12-23 13:18     ` Zqiang
2025-12-22 18:16 ` [PATCH] sched_ext: Use the resched_cpu() to replace resched_curr() in the bypass_lb_node() Andrea Righi
2025-12-23  4:00 ` Tejun Heo
