* [PATCH sched_ext/for-6.18-fixes] sched_ext: Sync error_irq_work before freeing scx_sched
@ 2025-10-09 23:56 Tejun Heo
2025-10-10 6:31 ` Andrea Righi
2025-10-13 18:33 ` Tejun Heo
0 siblings, 2 replies; 3+ messages in thread
From: Tejun Heo @ 2025-10-09 23:56 UTC (permalink / raw)
To: David Vernet, Andrea Righi, Changwoo Min
Cc: linux-kernel, sched-ext, Tejun Heo
By the time scx_sched_free_rcu_work() runs, the scx_sched is no longer
reachable. However, a previously queued error_irq_work may still be pending or
running. Ensure it completes before proceeding with teardown.
Fixes: bff3b5aec1b7 ("sched_ext: Move disable machinery into scx_sched")
Signed-off-by: Tejun Heo <tj@kernel.org>
---
kernel/sched/ext.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index XXXXXXXX..XXXXXXXX 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3471,7 +3471,9 @@ static void scx_sched_free_rcu_work(struct rcu_work *rwork)
struct scx_dispatch_q *dsq;
int node;
+ irq_work_sync(&sch->error_irq_work);
kthread_stop(sch->helper->task);
+
free_percpu(sch->pcpu);
for_each_node_state(node, N_POSSIBLE)
--
2.48.0
* Re: [PATCH sched_ext/for-6.18-fixes] sched_ext: Sync error_irq_work before freeing scx_sched
2025-10-09 23:56 [PATCH sched_ext/for-6.18-fixes] sched_ext: Sync error_irq_work before freeing scx_sched Tejun Heo
@ 2025-10-10 6:31 ` Andrea Righi
2025-10-13 18:33 ` Tejun Heo
1 sibling, 0 replies; 3+ messages in thread
From: Andrea Righi @ 2025-10-10 6:31 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, linux-kernel, sched-ext
On Thu, Oct 09, 2025 at 01:56:23PM -1000, Tejun Heo wrote:
> By the time scx_sched_free_rcu_work() runs, the scx_sched is no longer
> reachable. However, a previously queued error_irq_work may still be pending or
> running. Ensure it completes before proceeding with teardown.
>
> Fixes: bff3b5aec1b7 ("sched_ext: Move disable machinery into scx_sched")
> Signed-off-by: Tejun Heo <tj@kernel.org>
Good catch.
Acked-by: Andrea Righi <arighi@nvidia.com>
Thanks,
-Andrea
> ---
> kernel/sched/ext.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index XXXXXXXX..XXXXXXXX 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -3471,7 +3471,9 @@ static void scx_sched_free_rcu_work(struct rcu_work *rwork)
> struct scx_dispatch_q *dsq;
> int node;
>
> + irq_work_sync(&sch->error_irq_work);
> kthread_stop(sch->helper->task);
> +
> free_percpu(sch->pcpu);
>
> for_each_node_state(node, N_POSSIBLE)
> --
> 2.48.0
* Re: [PATCH sched_ext/for-6.18-fixes] sched_ext: Sync error_irq_work before freeing scx_sched
2025-10-09 23:56 [PATCH sched_ext/for-6.18-fixes] sched_ext: Sync error_irq_work before freeing scx_sched Tejun Heo
2025-10-10 6:31 ` Andrea Righi
@ 2025-10-13 18:33 ` Tejun Heo
1 sibling, 0 replies; 3+ messages in thread
From: Tejun Heo @ 2025-10-13 18:33 UTC (permalink / raw)
To: David Vernet, Andrea Righi, Changwoo Min; +Cc: linux-kernel, sched-ext
On Thu, Oct 09, 2025 at 01:56:23PM -1000, Tejun Heo wrote:
> By the time scx_sched_free_rcu_work() runs, the scx_sched is no longer
> reachable. However, a previously queued error_irq_work may still be pending or
> running. Ensure it completes before proceeding with teardown.
>
> Fixes: bff3b5aec1b7 ("sched_ext: Move disable machinery into scx_sched")
> Signed-off-by: Tejun Heo <tj@kernel.org>
Applied to sched_ext/for-6.18-fixes.
Thanks.
--
tejun