* [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
@ 2026-04-13 11:04 Cheng-Yang Chou
2026-04-13 15:29 ` Kuba Piecuch
0 siblings, 1 reply; 10+ messages in thread
From: Cheng-Yang Chou @ 2026-04-13 11:04 UTC (permalink / raw)
To: sched-ext, Tejun Heo, David Vernet, Andrea Righi, Changwoo Min
Cc: Ching-Chun Huang, Chia-Ping Tsai, yphbchou0911
scx_central fast-pathed only per-cpu kthreads (PF_KTHREAD &&
nr_cpus_allowed == 1) directly to the local DSQ, so non-kthread
tasks pinned to a single CPU had to go through the central queue. When
dispatch_to_cpu() later dequeued such a task and found it couldn't run
on the current CPU, it could trigger scx_error() and disable the
scheduler:
sched_ext: central: SCX_DSQ_LOCAL[_ON] cannot move migration disabled qemu-system-x86[223212] from CPU 1 to 0
Though scx_central is a demo scheduler meant to dispatch all tasks
from a central CPU, it should remain robust under any circumstances.
Extend the handling to all tasks with nr_cpus_allowed == 1:
- select_cpu() and enqueue(): return the task's current CPU and dispatch
directly to local DSQ for pinned tasks.
- dispatch_to_cpu(): use SCX_KICK_PREEMPT when kicking a task's home
CPU after bouncing to the fallback DSQ, so the CPU preempts
immediately rather than waiting for the next timer tick.
Test plan:
- run scx_central on the host, then launch virtme-ng in a separate
terminal to create pinned tasks (qemu-system-x86), and verify stable
operation.
Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
---
tools/sched_ext/scx_central.bpf.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index 4efcce099bd5..5a02669483cc 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -96,7 +96,13 @@ s32 BPF_STRUCT_OPS(central_select_cpu, struct task_struct *p,
* disturbing other CPUs. It's safe to blindly return the central cpu as
* select_cpu() is a hint and if @p can't be on it, the kernel will
* automatically pick a fallback CPU.
+ *
+ * For tasks pinned to a single CPU, return the current CPU to avoid
+ * an unnecessary fallback.
*/
+ if (p->nr_cpus_allowed == 1)
+ return scx_bpf_task_cpu(p);
+
return central_cpu;
}
@@ -107,12 +113,13 @@ void BPF_STRUCT_OPS(central_enqueue, struct task_struct *p, u64 enq_flags)
__sync_fetch_and_add(&nr_total, 1);
/*
- * Push per-cpu kthreads at the head of local dsq's and preempt the
- * corresponding CPU. This ensures that e.g. ksoftirqd isn't blocked
- * behind other threads which is necessary for forward progress
- * guarantee as we depend on the BPF timer which may run from ksoftirqd.
+ * Push tasks pinned to a single CPU at the head of local dsq's and
+ * preempt the corresponding CPU. This ensures that e.g. ksoftirqd
+ * isn't blocked behind other threads which is necessary for forward
+ * progress guarantee as we depend on the BPF timer which may run
+ * from ksoftirqd.
*/
- if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
+ if (p->nr_cpus_allowed == 1) {
__sync_fetch_and_add(&nr_locals, 1);
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_INF,
enq_flags | SCX_ENQ_PREEMPT);
@@ -155,6 +162,13 @@ static bool dispatch_to_cpu(s32 cpu)
if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
__sync_fetch_and_add(&nr_mismatches, 1);
scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
+
+ /*
+ * Kick the task's home CPU to pick it up from the
+ * fallback DSQ promptly.
+ */
+ scx_bpf_kick_cpu(scx_bpf_task_cpu(p), SCX_KICK_PREEMPT);
+
bpf_task_release(p);
/*
* We might run out of dispatch buffer slots if we continue dispatching
--
2.48.1
^ permalink raw reply related [flat|nested] 10+ messages in thread

* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-13 11:04 [PATCH] tools/sched_ext: Handle pinned tasks in scx_central Cheng-Yang Chou
@ 2026-04-13 15:29 ` Kuba Piecuch
2026-04-14 3:45 ` Cheng-Yang Chou
0 siblings, 1 reply; 10+ messages in thread
From: Kuba Piecuch @ 2026-04-13 15:29 UTC (permalink / raw)
To: Cheng-Yang Chou, sched-ext, Tejun Heo, David Vernet, Andrea Righi,
Changwoo Min
Cc: Ching-Chun Huang, Chia-Ping Tsai
Hi Cheng-Yang,
On Mon Apr 13, 2026 at 11:04 AM UTC, Cheng-Yang Chou wrote:
> @@ -107,12 +113,13 @@ void BPF_STRUCT_OPS(central_enqueue, struct task_struct *p, u64 enq_flags)
> __sync_fetch_and_add(&nr_total, 1);
>
> /*
> - * Push per-cpu kthreads at the head of local dsq's and preempt the
> - * corresponding CPU. This ensures that e.g. ksoftirqd isn't blocked
> - * behind other threads which is necessary for forward progress
> - * guarantee as we depend on the BPF timer which may run from ksoftirqd.
> + * Push tasks pinned to a single CPU at the head of local dsq's and
> + * preempt the corresponding CPU. This ensures that e.g. ksoftirqd
> + * isn't blocked behind other threads which is necessary for forward
> + * progress guarantee as we depend on the BPF timer which may run
> + * from ksoftirqd.
> */
> - if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
> + if (p->nr_cpus_allowed == 1) {
If you want to cover all cases, then I think you should also check
is_migration_disabled(p). When a task disables migration, it is not reflected
in p->cpus_ptr or p->nr_cpus_allowed until the task is switched out (see
migrate_disable_switch()), which happens __after__ ops.enqueue().
So here, you could see p->nr_cpus_allowed > 1 even though the task has
migration disabled.
Thanks,
Kuba
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-13 15:29 ` Kuba Piecuch
@ 2026-04-14 3:45 ` Cheng-Yang Chou
2026-04-14 17:19 ` Tejun Heo
2026-04-14 21:01 ` Andrea Righi
0 siblings, 2 replies; 10+ messages in thread
From: Cheng-Yang Chou @ 2026-04-14 3:45 UTC (permalink / raw)
To: Kuba Piecuch
Cc: sched-ext, Tejun Heo, David Vernet, Andrea Righi, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
[-- Attachment #1: Type: text/plain, Size: 665 bytes --]
Hi Kuba,
On Mon, Apr 13, 2026 at 03:29:37PM +0000, Kuba Piecuch wrote:
> > - if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
> > + if (p->nr_cpus_allowed == 1) {
>
> If you want to cover all cases, then I think you should also check
> is_migration_disabled(p). When a task disables migration, it is not reflected
> in p->cpus_ptr or p->nr_cpus_allowed until the task is switched out (see
> migrate_disable_switch()), which happens __after__ ops.enqueue().
> So here, you could see p->nr_cpus_allowed > 1 even though the task has
> migration disabled.
Thanks for the feedback!
I've attached the v2 patch with the updated logic.
--
Thanks,
Cheng-Yang
[-- Attachment #2: 0001-tools-sched_ext-Handle-pinned-tasks-in-scx_central.patch --]
[-- Type: text/x-diff, Size: 4247 bytes --]
From 038e42a0a2c6d0b23e6914eaafcb0de301b70258 Mon Sep 17 00:00:00 2001
From: Cheng-Yang Chou <yphbchou0911@gmail.com>
Date: Mon, 13 Apr 2026 15:19:59 +0800
Subject: [PATCH v2 sched_ext/for-7.1-fixes] tools/sched_ext: Handle pinned tasks in scx_central
scx_central fast-pathed only per-cpu kthreads (PF_KTHREAD &&
nr_cpus_allowed == 1) directly to the local DSQ, so non-kthread
tasks pinned to a single CPU had to go through the central queue. When
dispatch_to_cpu() later dequeued such a task and found it couldn't run
on the current CPU, it could trigger scx_error() and disable the
scheduler:
sched_ext: central: SCX_DSQ_LOCAL[_ON] cannot move migration disabled qemu-system-x86[223212] from CPU 1 to 0
Though scx_central is a demo scheduler meant to dispatch all tasks
from a central CPU, it should remain robust under any circumstances.
Extend the handling to all tasks with nr_cpus_allowed == 1:
- select_cpu() and enqueue(): bypass the central queue for pinned and
migration-disabled tasks. enqueue() also checks is_migration_disabled()
directly because p->nr_cpus_allowed lags behind migrate_disable_switch().
- dispatch_to_cpu(): use SCX_KICK_PREEMPT when bouncing to the fallback
DSQ so the home CPU preempts immediately.
Test plan:
- run scx_central on the host, then launch virtme-ng in a separate
terminal to create pinned tasks (qemu-system-x86), and verify stable
operation.
Suggested-by: Kuba Piecuch <jpiecuch@google.com>
Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
---
v2: Add is_migration_disabled() check in enqueue() as p->nr_cpus_allowed
is not updated until migrate_disable_switch() runs during context
switch, which happens after ops.enqueue(). (Kuba Piecuch)
tools/sched_ext/scx_central.bpf.c | 28 +++++++++++++++++++++++-----
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index 4efcce099bd5..67b160a7ce18 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -96,7 +96,13 @@ s32 BPF_STRUCT_OPS(central_select_cpu, struct task_struct *p,
* disturbing other CPUs. It's safe to blindly return the central cpu as
* select_cpu() is a hint and if @p can't be on it, the kernel will
* automatically pick a fallback CPU.
+ *
+ * For tasks pinned to a single CPU, return the current CPU to avoid
+ * an unnecessary fallback.
*/
+ if (p->nr_cpus_allowed == 1)
+ return scx_bpf_task_cpu(p);
+
return central_cpu;
}
@@ -107,12 +113,17 @@ void BPF_STRUCT_OPS(central_enqueue, struct task_struct *p, u64 enq_flags)
__sync_fetch_and_add(&nr_total, 1);
/*
- * Push per-cpu kthreads at the head of local dsq's and preempt the
- * corresponding CPU. This ensures that e.g. ksoftirqd isn't blocked
- * behind other threads which is necessary for forward progress
- * guarantee as we depend on the BPF timer which may run from ksoftirqd.
+ * Push tasks pinned to a single CPU or with migration disabled at
+ * the head of local dsq's and preempt the corresponding CPU. This
+ * ensures that e.g. ksoftirqd isn't blocked behind other threads
+ * which is necessary for forward progress guarantee as we depend on
+ * the BPF timer which may run from ksoftirqd.
+ *
+ * p->nr_cpus_allowed is not updated until migrate_disable_switch()
+ * runs during context switch, which happens after ops.enqueue(). Check
+ * is_migration_disabled() directly to catch tasks in this window.
*/
- if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
+ if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
__sync_fetch_and_add(&nr_locals, 1);
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_INF,
enq_flags | SCX_ENQ_PREEMPT);
@@ -155,6 +166,13 @@ static bool dispatch_to_cpu(s32 cpu)
if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
__sync_fetch_and_add(&nr_mismatches, 1);
scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
+
+ /*
+ * Kick the task's home CPU to pick it up from the
+ * fallback DSQ promptly.
+ */
+ scx_bpf_kick_cpu(scx_bpf_task_cpu(p), SCX_KICK_PREEMPT);
+
bpf_task_release(p);
/*
* We might run out of dispatch buffer slots if we continue dispatching
--
2.48.1
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-14 3:45 ` Cheng-Yang Chou
@ 2026-04-14 17:19 ` Tejun Heo
2026-04-15 6:17 ` Cheng-Yang Chou
2026-04-14 21:01 ` Andrea Righi
1 sibling, 1 reply; 10+ messages in thread
From: Tejun Heo @ 2026-04-14 17:19 UTC (permalink / raw)
To: Cheng-Yang Chou
Cc: Kuba Piecuch, sched-ext, David Vernet, Andrea Righi, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
Hello,
On Tue, Apr 14, 2026 at 11:45:59AM +0800, Cheng-Yang Chou wrote:
> diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
> index 4efcce099bd5..67b160a7ce18 100644
> --- a/tools/sched_ext/scx_central.bpf.c
> +++ b/tools/sched_ext/scx_central.bpf.c
> @@ -96,7 +96,13 @@ s32 BPF_STRUCT_OPS(central_select_cpu, struct task_struct *p,
> * disturbing other CPUs. It's safe to blindly return the central cpu as
> * select_cpu() is a hint and if @p can't be on it, the kernel will
> * automatically pick a fallback CPU.
> + *
> + * For tasks pinned to a single CPU, return the current CPU to avoid
> + * an unnecessary fallback.
> */
> + if (p->nr_cpus_allowed == 1)
> + return scx_bpf_task_cpu(p);
> +
This isn't necessary. select_cpu() isn't called if nr_cpus_allowed == 1.
> @@ -107,12 +113,17 @@ void BPF_STRUCT_OPS(central_enqueue, struct task_struct *p, u64 enq_flags)
> __sync_fetch_and_add(&nr_total, 1);
>
> /*
> - * Push per-cpu kthreads at the head of local dsq's and preempt the
> - * corresponding CPU. This ensures that e.g. ksoftirqd isn't blocked
> - * behind other threads which is necessary for forward progress
> - * guarantee as we depend on the BPF timer which may run from ksoftirqd.
> + * Push tasks pinned to a single CPU or with migration disabled at
> + * the head of local dsq's and preempt the corresponding CPU. This
> + * ensures that e.g. ksoftirqd isn't blocked behind other threads
> + * which is necessary for forward progress guarantee as we depend on
> + * the BPF timer which may run from ksoftirqd.
> + *
> + * p->nr_cpus_allowed is not updated until migrate_disable_switch()
> + * runs during context switch, which happens after ops.enqueue(). Check
> + * is_migration_disabled() directly to catch tasks in this window.
> */
> - if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
> + if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
Let's not expand the condition here. This is to guarantee forward progress
of the scheduler itself.
> @@ -155,6 +166,13 @@ static bool dispatch_to_cpu(s32 cpu)
> if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
Instead, can you expand the condition above to "if this task can't run on
the target CPU for whatever reason"?
Thanks.
--
tejun
2026-04-14 17:19 ` Tejun Heo
@ 2026-04-15 6:17 ` Cheng-Yang Chou
2026-04-15 9:59 ` Kuba Piecuch
2026-04-17 18:38 ` Tejun Heo
0 siblings, 2 replies; 10+ messages in thread
From: Cheng-Yang Chou @ 2026-04-15 6:17 UTC (permalink / raw)
To: Tejun Heo
Cc: Kuba Piecuch, sched-ext, David Vernet, Andrea Righi, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
[-- Attachment #1: Type: text/plain, Size: 904 bytes --]
Hi Tejun,
On Tue, Apr 14, 2026 at 07:19:04AM -1000, Tejun Heo wrote:
> Hello,
>
> > + if (p->nr_cpus_allowed == 1)
> > + return scx_bpf_task_cpu(p);
> > +
>
> This isn't necessary. select_cpu() isn't called if nr_cpus_allowed == 1.
>
Ack.
> > @@ -107,12 +113,17 @@ void BPF_STRUCT_OPS(central_enqueue, struct task_struct *p, u64 enq_flags)
> > __sync_fetch_and_add(&nr_total, 1);
> >
> > - if ((p->flags & PF_KTHREAD) && p->nr_cpus_allowed == 1) {
> > + if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) {
>
> Let's not expand the condition here. This is to guarantee forward progress
> of the scheduler itself.
Ack.
> > @@ -155,6 +166,13 @@ static bool dispatch_to_cpu(s32 cpu)
> > if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
>
> Instead, can you expand the condition above to "if this task can't run on
> the target CPU for whatever reason"?
Done!
--
Thanks,
Cheng-Yang
[-- Attachment #2: v3-0001-tools-sched_ext-Handle-migration-disabled-tasks-i.patch --]
[-- Type: text/x-diff, Size: 2405 bytes --]
From df62cc372cb9fab7aa9b666a6a127d8248923dd8 Mon Sep 17 00:00:00 2001
From: Cheng-Yang Chou <yphbchou0911@gmail.com>
Date: Mon, 13 Apr 2026 15:19:59 +0800
Subject: [PATCH v3 sched_ext/for-7.1] tools/sched_ext: Handle migration-disabled tasks in
scx_central
When a task calls migrate_disable(), p->cpus_ptr is not updated until
migrate_disable_switch() runs during context switch, so dispatch_to_cpu()
may dequeue such a task and dispatch it to a CPU it cannot run on.
Extend the mismatch check in dispatch_to_cpu() to also test
is_migration_disabled() alongside the cpumask check, so tasks in this
window are bounced to the fallback DSQ.
Suggested-by: Andrea Righi <arighi@nvidia.com>
Suggested-by: Tejun Heo <tj@kernel.org>
Suggested-by: Kuba Piecuch <jpiecuch@google.com>
Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
---
v3:
- Drop the select_cpu() shortcut, revert enqueue() to the kthread-only
forward-progress path, and expand the dispatch_to_cpu() mismatch
check to cover is_migration_disabled() (Tejun Heo)
- Drop kick_cpu() on mismatch (Andrea Righi)
- Reword commit subject and message with the fix
v2:
- Add is_migration_disabled() check in enqueue() as p->nr_cpus_allowed
is not updated until migrate_disable_switch() runs during context
switch, which happens after ops.enqueue(). (Kuba Piecuch)
tools/sched_ext/scx_central.bpf.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index 4efcce099bd5..64dd60b3e922 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -149,10 +149,14 @@ static bool dispatch_to_cpu(s32 cpu)
}
/*
- * If we can't run the task at the top, do the dumb thing and
- * bounce it to the fallback dsq.
+ * If we can't run the task at the top for whatever reason,
+ * bounce it to the fallback dsq. Also check
+ * is_migration_disabled() explicitly as p->cpus_ptr may not
+ * reflect the migration-disabled state yet if
+ * migrate_disable_switch() hasn't run.
*/
- if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
+ if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr) ||
+ (is_migration_disabled(p) && scx_bpf_task_cpu(p) != cpu)) {
__sync_fetch_and_add(&nr_mismatches, 1);
scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
bpf_task_release(p);
--
2.48.1
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-15 6:17 ` Cheng-Yang Chou
@ 2026-04-15 9:59 ` Kuba Piecuch
2026-04-15 10:07 ` Cheng-Yang Chou
2026-04-17 18:38 ` Tejun Heo
1 sibling, 1 reply; 10+ messages in thread
From: Kuba Piecuch @ 2026-04-15 9:59 UTC (permalink / raw)
To: Cheng-Yang Chou, Tejun Heo
Cc: Kuba Piecuch, sched-ext, David Vernet, Andrea Righi, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
On Wed Apr 15, 2026 at 6:17 AM UTC, Cheng-Yang Chou wrote:
> From df62cc372cb9fab7aa9b666a6a127d8248923dd8 Mon Sep 17 00:00:00 2001
> From: Cheng-Yang Chou <yphbchou0911@gmail.com>
> Date: Mon, 13 Apr 2026 15:19:59 +0800
> Subject: [PATCH v3 sched_ext/for-7.1] tools/sched_ext: Handle migration-disabled tasks in
> scx_central
>
> When a task calls migrate_disable(), p->cpus_ptr is not updated until
> migrate_disable_switch() runs during context switch, so dispatch_to_cpu()
> may dequeue such a task and dispatch it to a CPU it cannot run on.
>
> Extend the mismatch check in dispatch_to_cpu() to also test
> is_migration_disabled() alongside the cpumask check, so tasks in this
> window are bounced to the fallback DSQ.
>
> Suggested-by: Andrea Righi <arighi@nvidia.com>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Kuba Piecuch <jpiecuch@google.com>
> Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
> ---
> v3:
> - Drop the select_cpu() shortcut, revert enqueue() to the kthread-only
> forward-progress path, and expand the dispatch_to_cpu() mismatch
> check to cover is_migration_disabled() (Tejun Heo)
> - Drop kick_cpu() on mismatch (Andrea Righi)
> - Reword commit subject and message with the fix
>
> v2:
> - Add is_migration_disabled() check in enqueue() as p->nr_cpus_allowed
> is not updated until migrate_disable_switch() runs during context
> switch, which happens after ops.enqueue(). (Kuba Piecuch)
>
> tools/sched_ext/scx_central.bpf.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
> index 4efcce099bd5..64dd60b3e922 100644
> --- a/tools/sched_ext/scx_central.bpf.c
> +++ b/tools/sched_ext/scx_central.bpf.c
> @@ -149,10 +149,14 @@ static bool dispatch_to_cpu(s32 cpu)
> }
>
> /*
> - * If we can't run the task at the top, do the dumb thing and
> - * bounce it to the fallback dsq.
> + * If we can't run the task at the top for whatever reason,
> + * bounce it to the fallback dsq. Also check
> + * is_migration_disabled() explicitly as p->cpus_ptr may not
> + * reflect the migration-disabled state yet if
> + * migrate_disable_switch() hasn't run.
> */
> - if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
> + if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr) ||
> + (is_migration_disabled(p) && scx_bpf_task_cpu(p) != cpu)) {
> __sync_fetch_and_add(&nr_mismatches, 1);
> scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
> bpf_task_release(p);
Looks good to me!
Minor email-related comment: I would prefer if patch revisions (v2, v3, ...)
were sent in separate emails with their own subject lines; otherwise it may look
like I'm reviewing v1 if you look solely at the subject line.
To be clear, the following tag applies to v3.
Reviewed-by: Kuba Piecuch <jpiecuch@google.com>
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-15 9:59 ` Kuba Piecuch
@ 2026-04-15 10:07 ` Cheng-Yang Chou
0 siblings, 0 replies; 10+ messages in thread
From: Cheng-Yang Chou @ 2026-04-15 10:07 UTC (permalink / raw)
To: Kuba Piecuch
Cc: Tejun Heo, sched-ext, David Vernet, Andrea Righi, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
Hi Kuba,
On Wed, Apr 15, 2026 at 09:59:33AM +0000, Kuba Piecuch wrote:
> Looks good to me!
>
> Minor email-related comment: I would prefer if patch revisions (v2, v3, ...)
> were sent in separate emails with their own subject line, otherwise it may look
> like I'm reviewing the v1, if you look solely at the subject line.
>
> To be clear, the following tag applies to v3.
>
> Reviewed-by: Kuba Piecuch <jpiecuch@google.com>
>
Ahh, thanks for the review and reminder!
I was just being lazy because I thought keeping it in a single thread
would mean I wouldn't have to add links to the changelog (but yes, I
should add them anyway).
I'll send future revisions in separate threads. Thanks!
--
Thanks,
Cheng-Yang
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-15 6:17 ` Cheng-Yang Chou
2026-04-15 9:59 ` Kuba Piecuch
@ 2026-04-17 18:38 ` Tejun Heo
1 sibling, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2026-04-17 18:38 UTC (permalink / raw)
To: Cheng-Yang Chou
Cc: Kuba Piecuch, sched-ext, David Vernet, Andrea Righi, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
Hello,
> Cheng-Yang Chou (1):
> tools/sched_ext: Handle migration-disabled tasks in scx_central
Applied to sched_ext/for-7.2.
Thanks.
--
tejun
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-14 3:45 ` Cheng-Yang Chou
2026-04-14 17:19 ` Tejun Heo
@ 2026-04-14 21:01 ` Andrea Righi
2026-04-15 6:02 ` Cheng-Yang Chou
1 sibling, 1 reply; 10+ messages in thread
From: Andrea Righi @ 2026-04-14 21:01 UTC (permalink / raw)
To: Cheng-Yang Chou
Cc: Kuba Piecuch, sched-ext, Tejun Heo, David Vernet, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
Hi Cheng-Yang,
On Tue, Apr 14, 2026 at 11:45:59AM +0800, Cheng-Yang Chou wrote:
...
> @@ -155,6 +166,13 @@ static bool dispatch_to_cpu(s32 cpu)
> if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
> __sync_fetch_and_add(&nr_mismatches, 1);
> scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
> +
> + /*
> + * Kick the task's home CPU to pick it up from the
> + * fallback DSQ promptly.
> + */
> + scx_bpf_kick_cpu(scx_bpf_task_cpu(p), SCX_KICK_PREEMPT);
> +
You can also pass SCX_ENQ_PREEMPT to scx_bpf_dsq_insert() and avoid calling
scx_bpf_kick_cpu().
More in general, do we really need to preempt the CPU here? It'll be preempted
in the central_timerfn() handler later, right? If we preempt the CPU here we may
have lots of preemption events when many affinitized tasks are running (e.g.,
stress-ng --race-sched 0).
Thanks,
-Andrea
> bpf_task_release(p);
> /*
> * We might run out of dispatch buffer slots if we continue dispatching
> --
> 2.48.1
>
* Re: [PATCH] tools/sched_ext: Handle pinned tasks in scx_central
2026-04-14 21:01 ` Andrea Righi
@ 2026-04-15 6:02 ` Cheng-Yang Chou
0 siblings, 0 replies; 10+ messages in thread
From: Cheng-Yang Chou @ 2026-04-15 6:02 UTC (permalink / raw)
To: Andrea Righi
Cc: Kuba Piecuch, sched-ext, Tejun Heo, David Vernet, Changwoo Min,
Ching-Chun Huang, Chia-Ping Tsai
Hi Andrea,
On Tue, Apr 14, 2026 at 11:01:00PM +0200, Andrea Righi wrote:
> Hi Cheng-Yang,
>
> On Tue, Apr 14, 2026 at 11:45:59AM +0800, Cheng-Yang Chou wrote:
> ...
> > @@ -155,6 +166,13 @@ static bool dispatch_to_cpu(s32 cpu)
> > if (!bpf_cpumask_test_cpu(cpu, p->cpus_ptr)) {
> > __sync_fetch_and_add(&nr_mismatches, 1);
> > scx_bpf_dsq_insert(p, FALLBACK_DSQ_ID, SCX_SLICE_INF, 0);
> > +
> > + /*
> > + * Kick the task's home CPU to pick it up from the
> > + * fallback DSQ promptly.
> > + */
> > + scx_bpf_kick_cpu(scx_bpf_task_cpu(p), SCX_KICK_PREEMPT);
> > +
>
> You can also pass SCX_ENQ_PREEMPT to scx_bpf_dsq_insert() and avoid calling
> scx_bpf_kick_cpu().
>
> More in general, do we really need to preempt the CPU here? It'll be preempted
> in the central_timerfn() handler later, right? If we preempt the CPU here we may
> have lots of preemption events when many affinitized tasks are running (i.e.,
> stress-ng --race-sched 0).
Yeah, this is wrong :/ Thanks for your explanation!
Will fix this in the upcoming patch, enjoy your time in the UK!
--
Thanks,
Cheng-Yang