Linux s390 Architecture development
* [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions
@ 2026-05-07 10:41 Vasily Gorbik
  2026-05-07 10:41 ` [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor Vasily Gorbik
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Vasily Gorbik @ 2026-05-07 10:41 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, John Stultz, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

v1 [1] consisted of a fix for a scheduler corruption where
try_steal_cookie() could migrate a proxy-exec donor away from the source
rq while that rq still used it as the active scheduling context.

Prateek pointed out [2] a separate proxy-exec/core-sched issue: after
pick_next_task() selects a core cookie compatible donor, find_proxy_task()
can replace the execution context with a mutex owner with a different
cookie.

This v2 keeps the donor steal fix as patch 1 and adds patch 2 to reject
mismatched final proxy owners.

v1 reported the issue as reproduced on an s390 LPAR, but it seems to be
easily reproducible with the strace test suite ("make -j$(nproc) check") on
any system with SMT, CONFIG_SCHED_CORE=y and CONFIG_SCHED_PROXY_EXEC=y
enabled, e.g. on x86 KVM with -smp cpus=16,sockets=1,cores=8,threads=2:

[  283.181298] WARNING: kernel/sched/fair.c:5788 at put_prev_entity+0x4f/0x90, CPU#2: unshare-report-/27895
[  283.185230] Modules linked in:
[  283.186480] CPU: 2 UID: 0 PID: 27895 Comm: unshare-report- Not tainted 7.1.0-rc2-00076-g74fe02ce122a #26 PREEMPT(full)
[  283.190699] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-9.fc43 06/10/2025
[  283.194482] RIP: 0010:put_prev_entity+0x4f/0x90
[  283.196591] Code: fd ff ff 80 7b 58 00 74 e0 66 90 48 89 de 48 89 ef e8 85 a9 ff ff 31 d2 48 89 de 48 89 ef e8 d8 d6 ff ff 48 39 5d 58 74 c6 90 <0f> 0b 90 48 c7 45 58 00 00 00 00 5b 5d e9 7f cb 31 01 48 83 bb b8
[  283.205157] RSP: 0018:ffffc90009177af0 EFLAGS: 00010006
[  283.207443] RAX: 0000000000000000 RBX: ffff888102de8080 RCX: 000000000004f800
[  283.210442] RDX: 0000000000000000 RSI: 0000000000027c00 RDI: 00000041dd7d5860
[  283.213528] RBP: ffff888116cb2200 R08: ffff888116fe8080 R09: 0000000000000002
[  283.216766] R10: 0000000005bf08d6 R11: 00000000000002b7 R12: ffff8881192da4a0
[  283.219872] R13: ffff88813a3ec801 R14: 0000000000000001 R15: ffff88813a3ec800
[  283.222777] FS:  00007f6b5ca21780(0000) GS:ffff8881b628c000(0000) knlGS:0000000000000000
[  283.226171] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  283.228493] CR2: 000000001319e358 CR3: 000000001f322000 CR4: 00000000000006f0
[  283.231951] Call Trace:
[  283.233137]  <TASK>
[  283.234066]  put_prev_task_fair+0x1d/0x40
[  283.235943]  __schedule+0x1165/0x28d0
[  283.237599]  ? __resched_curr+0x372/0x3a0
[  283.239413]  ? detach_task+0xc1/0xd0
[  283.241015]  ? lockdep_hardirqs_on_prepare+0xd7/0x190
[  283.243170]  ? trace_hardirqs_on+0x18/0x100
[  283.244852]  preempt_schedule+0x2e/0x50
[  283.246707]  preempt_schedule_thunk+0x16/0x30
[  283.248680]  ? _raw_spin_unlock_irqrestore+0x3f/0x50
[  283.251012]  __mutex_unlock_slowpath+0x2d9/0x3d0
[  283.253196]  pcpu_alloc_noprof+0x3e6/0xbd0
[  283.255187]  alloc_vfsmnt+0xd7/0x1e0
[  283.256651]  clone_mnt+0x1e/0x280
[  283.258061]  copy_tree+0x127/0x420
[  283.259449]  copy_mnt_ns+0x13f/0x520
[  283.260926]  create_new_namespaces+0x54/0x2e0
[  283.262974]  unshare_nsproxy_namespaces+0x7e/0xb0
[  283.265317]  ksys_unshare+0x196/0x550
[  283.267097]  __x64_sys_unshare+0xd/0x20
[  283.268876]  do_syscall_64+0xf3/0x6a0
[  283.270611]  ? exc_page_fault+0xfa/0x240
[  283.272329]  ? __irq_exit_rcu+0x3c/0x100
[  283.274006]  entry_SYSCALL_64_after_hwframe+0x77/0x7f
[  283.276085] RIP: 0033:0x7f6b5cb1730d
[  283.277509] Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c3 5a 0f 00 f7 d8 64 89 01 48
[  283.285484] RSP: 002b:00007ffef4e305d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
[  283.288741] RAX: ffffffffffffffda RBX: 0000000000000007 RCX: 00007f6b5cb1730d
[  283.291711] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000020000
[  283.294664] RBP: 0000000000000000 R08: 0000000000000000 R09: 3230345b3a737475
[  283.298149] R10: 000000000000eefe R11: 0000000000000246 R12: 00000000000000f0
[  283.301268] R13: 0000000000000001 R14: 00007f6b5cd53000 R15: 0000000000404df0
[  283.304570]  </TASK>
[  283.305583] irq event stamp: 2018
[  283.307085] hardirqs last  enabled at (2017): [<ffffffff8269777f>] _raw_spin_unlock_irqrestore+0x3f/0x50
[  283.311026] hardirqs last disabled at (2018): [<ffffffff82689aff>] __schedule+0x13df/0x28d0
[  283.314726] softirqs last  enabled at (2008): [<ffffffff81324f40>] __irq_exit_rcu+0xc0/0x100
[  283.318427] softirqs last disabled at (2001): [<ffffffff81324f40>] __irq_exit_rcu+0xc0/0x100
[  283.321920] ---[ end trace 0000000000000000 ]---
[  283.323878] BUG: kernel NULL pointer dereference, address: 0000000000000059
[  283.326033] #PF: supervisor read access in kernel mode
[  283.327357] #PF: error_code(0x0000) - not-present page
[  283.328698] PGD 800000000a8c5067 P4D 800000000a8c5067 PUD 12879067 PMD 0
[  283.329491] Oops: Oops: 0000 [#1] SMP PTI
[  283.329796] CPU: 2 UID: 0 PID: 0 Comm: swapper/2 Tainted: G        W           7.1.0-rc2-00076-g74fe02ce122a #26 PREEMPT(full)
[  283.331183] Tainted: [W]=WARN
[  283.331468] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-9.fc43 06/10/2025
[  283.332346] RIP: 0010:pick_task_fair+0x2d/0xb0
[  283.332735] Code: fa 8b 97 10 01 00 00 85 d2 0f 84 92 00 00 00 53 48 89 fb 48 83 ec 08 48 8d bb 00 01 00 00 eb 21 be 01 00 00 00 e8 13 8b ff ff <80> 78 59 00 75 3d 48 85 c0 74 48 48 8b b8 b8 00 00 00 48 85 ff 74
[  283.334364] RSP: 0018:ffffc900000b7e20 EFLAGS: 00010086
[  283.334992] RAX: 0000000000000000 RBX: ffff88813a5ec800 RCX: 041dd83271100000
[  283.335731] RDX: 0000000000000000 RSI: 0000000000200000 RDI: ffff888116cb2200
[  283.336404] RBP: ffffc900000b7f20 R08: 041dd83271100000 R09: 0000000000200000
[  283.337852] R10: 00000005252d41b2 R11: 0000000000000001 R12: 0000000000000002
[  283.338802] R13: ffff888025c18000 R14: 0000000000000003 R15: ffffffff84160800
[  283.339827] FS:  0000000000000000(0000) GS:ffff8881b628c000(0000) knlGS:0000000000000000
[  283.341158] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  283.342502] CR2: 0000000000000059 CR3: 000000001f322000 CR4: 00000000000006f0
[  283.345455] Call Trace:
[  283.347033]  <TASK>
[  283.348350]  __schedule+0xc65/0x28d0
[  283.349703]  ? tick_nohz_idle_exit+0x66/0x160
[  283.350882]  ? do_idle+0x17c/0x2b0
[  283.351454]  schedule_idle+0x1d/0x40
[  283.352017]  cpu_startup_entry+0x24/0x30
[  283.352594]  start_secondary+0xf8/0x100
[  283.353272]  common_startup_64+0x13e/0x148
[  283.353840]  </TASK>

Tested with strace test suite as well as hackbench and stress-ng on s390 and x86.

v1 -> v2:
- added a fix to prevent proxy-exec of unmatched cookie lock owners

[1] https://lore.kernel.org/all/c00-01.ttedd70@ub.hpns/
[2] https://lore.kernel.org/all/10282ce9-f4ae-498f-9b57-f4e1e61fffbc@amd.com/

Vasily Gorbik (2):
  sched/core: Don't steal a proxy-exec donor
  sched/core: Don't proxy-exec unmatched cookie lock owners

 kernel/sched/core.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

-- 
2.53.0

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor
  2026-05-07 10:41 [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions Vasily Gorbik
@ 2026-05-07 10:41 ` Vasily Gorbik
  2026-05-12 21:35   ` John Stultz
  2026-05-07 10:41 ` [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners Vasily Gorbik
  2026-05-12 21:17 ` [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions John Stultz
  2 siblings, 1 reply; 8+ messages in thread
From: Vasily Gorbik @ 2026-05-07 10:41 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, John Stultz, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

try_steal_cookie() avoids stealing src->core_pick and src->curr before
moving a task with the same cookie via move_queued_task_locked().

With proxy-exec, src->donor is the current scheduling context and may
differ from src->curr. Stealing it migrates a task that the source rq
still treats as current, leaving src's scheduler state for that task
stale. For CFS this means cfs_rq->curr points at the stolen entity,
and the next pick on the source rq hits the WARN_ON_ONCE in
put_prev_entity().

Commit 7de9d4f94638 ("sched: Start blocked_on chain processing in
find_proxy_task()") tweaked the fair class logic so that the donor task
isn't migrated away while we're running the proxy. Do it similarly for
try_steal_cookie() and skip src->donor as well.

Fixes: 7de9d4f94638 ("sched: Start blocked_on chain processing in find_proxy_task()")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b905805bbcbe..8aed55592ca9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6366,7 +6366,7 @@ static bool try_steal_cookie(int this, int that)
 		return false;
 
 	do {
-		if (p == src->core_pick || p == src->curr)
+		if (p == src->core_pick || p == src->curr || p == src->donor)
 			goto next;
 
 		if (!is_cpu_allowed(p, this))
-- 
2.53.0



* [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners
  2026-05-07 10:41 [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions Vasily Gorbik
  2026-05-07 10:41 ` [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor Vasily Gorbik
@ 2026-05-07 10:41 ` Vasily Gorbik
  2026-05-12 22:16   ` John Stultz
  2026-05-12 21:17 ` [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions John Stultz
  2 siblings, 1 reply; 8+ messages in thread
From: Vasily Gorbik @ 2026-05-07 10:41 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, John Stultz, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

Core scheduling chooses a core-wide cookie before __schedule()
installs the next task. With proxy-exec enabled, that task becomes the
donor/scheduling context, and find_proxy_task() may then replace the
execution context with the runnable mutex owner. If its cookie differs
from the selected core cookie, running it would bypass core scheduling's
cookie selection.

When the final mutex owner found by find_proxy_task() does not match the
selected core cookie, stop proxying the donor. If the current execution
context is already in the blocked chain, fall back to idle like the
existing proxy-exec retry paths do. Otherwise deactivate the donor and
let __schedule() pick again. The mutex owner can be picked later under
its own cookie.

Fixes: 7de9d4f94638 ("sched: Start blocked_on chain processing in find_proxy_task()")
Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
---
 kernel/sched/core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8aed55592ca9..d338fb714ce8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6960,6 +6960,12 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
 		 */
 	}
 	WARN_ON_ONCE(owner && !owner->on_rq);
+
+	if (owner && !sched_cpu_cookie_match(rq, owner)) {
+		if (curr_in_chain)
+			return proxy_resched_idle(rq);
+		goto deactivate;
+	}
 	return owner;
 
 deactivate:
-- 
2.53.0


* Re: [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions
  2026-05-07 10:41 [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions Vasily Gorbik
  2026-05-07 10:41 ` [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor Vasily Gorbik
  2026-05-07 10:41 ` [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners Vasily Gorbik
@ 2026-05-12 21:17 ` John Stultz
  2026-05-13  0:48   ` John Stultz
  2 siblings, 1 reply; 8+ messages in thread
From: John Stultz @ 2026-05-12 21:17 UTC (permalink / raw)
  To: Vasily Gorbik
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

On Thu, May 7, 2026 at 3:42 AM Vasily Gorbik <gor@linux.ibm.com> wrote:
>
> v1 [1] consisted of a fix for a scheduler corruption where
> try_steal_cookie() could migrate a proxy-exec donor away from the source
> rq while that rq still used it as the active scheduling context.
>
> Prateek pointed out [2] a separate proxy-exec/core-sched issue: after
> pick_next_task() selects a core cookie compatible donor, find_proxy_task()
> can replace the execution context with a mutex owner with a different
> cookie.
>
> This v2 keeps the donor steal fix as patch 1 and adds patch 2 to reject
> mismatched final proxy owners.
>
> v1 reported the issue as reproduced on an s390 LPAR, but it seems to be
> easily reproducible with the strace test suite ("make -j$(nproc) check") on
> any system with SMT, CONFIG_SCHED_CORE=y and CONFIG_SCHED_PROXY_EXEC=y
> enabled, e.g. on x86 KVM with -smp cpus=16,sockets=1,cores=8,threads=2:
>

Vasily! Thank you so much for reporting this and working out fixes
(along with K Prateek!)

Apologies for being slow to reply, I've been under the weather.

I really appreciate this reproducer detail, but I've so far not been
able to trip this issue up (SCHED_CORE=y, SCHED_PROXY_EXEC=y and using
the qemu arguments you included above). Could you mail me your .config
in case something else is needed?

thanks
-john


* Re: [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor
  2026-05-07 10:41 ` [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor Vasily Gorbik
@ 2026-05-12 21:35   ` John Stultz
  0 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2026-05-12 21:35 UTC (permalink / raw)
  To: Vasily Gorbik
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

On Thu, May 7, 2026 at 3:42 AM Vasily Gorbik <gor@linux.ibm.com> wrote:
>
> try_steal_cookie() avoids stealing src->core_pick and src->curr before
> moving a task with the same cookie via move_queued_task_locked().
>
> With proxy-exec, src->donor is the current scheduling context and may
> differ from src->curr. Stealing it migrates a task that the source rq
> still treats as current, leaving src's scheduler state for that task
> stale. For CFS this means cfs_rq->curr points at the stolen entity,
> and the next pick on the source rq hits the WARN_ON_ONCE in
> put_prev_entity().
>
> Commit 7de9d4f94638 ("sched: Start blocked_on chain processing in
> find_proxy_task()") tweaked the fair class logic so that the donor task
> isn't migrated away while we're running the proxy. Do it similarly for
> try_steal_cookie() and skip src->donor as well.
>
> Fixes: 7de9d4f94638 ("sched: Start blocked_on chain processing in find_proxy_task()")
> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
> ---
>  kernel/sched/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b905805bbcbe..8aed55592ca9 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6366,7 +6366,7 @@ static bool try_steal_cookie(int this, int that)
>                 return false;
>
>         do {
> -               if (p == src->core_pick || p == src->curr)
> +               if (p == src->core_pick || p == src->curr || p == src->donor)
>                         goto next;
>

This makes sense to me, we don't want to be migrating rq donors.
 Acked-by: John Stultz <jstultz@google.com>

I'm still getting my head around the finer details of the core
scheduler, but I suspect we should also add a task_is_blocked() check
here, since migrating blocked_on tasks isn't useful either.

thanks
-john


* Re: [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners
  2026-05-07 10:41 ` [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners Vasily Gorbik
@ 2026-05-12 22:16   ` John Stultz
  2026-05-14  9:54     ` K Prateek Nayak
  0 siblings, 1 reply; 8+ messages in thread
From: John Stultz @ 2026-05-12 22:16 UTC (permalink / raw)
  To: Vasily Gorbik
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

On Thu, May 7, 2026 at 3:42 AM Vasily Gorbik <gor@linux.ibm.com> wrote:
>
> Core scheduling chooses a core-wide cookie before __schedule()
> installs the next task. With proxy-exec enabled, that task becomes the
> donor/scheduling context, and find_proxy_task() may then replace the
> execution context with the runnable mutex owner. If its cookie differs
> from the selected core cookie, running it would bypass core scheduling's
> cookie selection.
>
> When the final mutex owner found by find_proxy_task() does not match the
> selected core cookie, stop proxying the donor. If the current execution
> context is already in the blocked chain, fall back to idle like the
> existing proxy-exec retry paths do. Otherwise deactivate the donor and
> let __schedule() pick again. The mutex owner can be picked later under
> its own cookie.
>
> Fixes: 7de9d4f94638 ("sched: Start blocked_on chain processing in find_proxy_task()")
> Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
> ---
>  kernel/sched/core.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 8aed55592ca9..d338fb714ce8 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6960,6 +6960,12 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
>                  */
>         }
>         WARN_ON_ONCE(owner && !owner->on_rq);
> +
> +       if (owner && !sched_cpu_cookie_match(rq, owner)) {
> +               if (curr_in_chain)
> +                       return proxy_resched_idle(rq);
> +               goto deactivate;
> +       }


Hrm. This is less pretty.

My previous (admittedly shallow) thinking on the core-scheduler was
that it wouldn't be an issue for proxy because the donor wasn't going
to actually run on the cpu, so whatever isolation is done on the core,
the donor migration wouldn't be a problem.

But I'm seeing now the donor won't be *chosen* until it has the right
core_cookie, and then that may be different from the owners cookie.

It seems like ideally we want the donor's effective cookie to be the
same as the runnable-owner's in the chain.  The downside to this is
you have to walk the blocked_on chain to evaluate this, and the whole
core_tree rbtree sorts by cookie, so its not trivial to rework
selection this way.   And since the runnable-owner of the chain-tree
changes over time, we can't just set the inherited cookie when we set
blocked_on.

So I will need to think a bit more on this.

In the short term, I think your change is probably ok since it makes
sure we don't run tasks with the wrong cookie, but it effectively
stops proxying from having a beneficial effect.

Thanks again so much for raising this issue (along with K Prateek)!

thanks
-john


* Re: [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions
  2026-05-12 21:17 ` [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions John Stultz
@ 2026-05-13  0:48   ` John Stultz
  0 siblings, 0 replies; 8+ messages in thread
From: John Stultz @ 2026-05-13  0:48 UTC (permalink / raw)
  To: Vasily Gorbik
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	K Prateek Nayak, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel

On Tue, May 12, 2026 at 2:17 PM John Stultz <jstultz@google.com> wrote:
> On Thu, May 7, 2026 at 3:42 AM Vasily Gorbik <gor@linux.ibm.com> wrote:
> >
> > v1 [1] consisted of a fix for a scheduler corruption where
> > try_steal_cookie() could migrate a proxy-exec donor away from the source
> > rq while that rq still used it as the active scheduling context.
> >
> > Prateek pointed out [2] a separate proxy-exec/core-sched issue: after
> > pick_next_task() selects a core cookie compatible donor, find_proxy_task()
> > can replace the execution context with a mutex owner with a different
> > cookie.
> >
> > This v2 keeps the donor steal fix as patch 1 and adds patch 2 to reject
> > mismatched final proxy owners.
> >
> > v1 reported the issue as reproduced on an s390 LPAR, but it seems to be
> > easily reproducible with the strace test suite ("make -j$(nproc) check") on
> > any system with SMT, CONFIG_SCHED_CORE=y and CONFIG_SCHED_PROXY_EXEC=y
> > enabled, e.g. on x86 KVM with -smp cpus=16,sockets=1,cores=8,threads=2:
> >
>
> Vasily! Thank you so much for reporting this and working out fixes
> (along with K Prateek!)
>
> Apologies for being slow to reply, I've been under the weather.
>
> I really appreciate this reproducer detail, but I've so far not been
> able to trip this issue up (SCHED_CORE=y, SCHED_PROXY_EXEC=y and using
> the qemu arguments you included above). Could you mail me your .config
> in case something else is needed?

Ok, I think I was able to force it using my priority-inversion-demo by
taking the spots in the run.sh script where we kick off the
rename-test and prefixing them with `coresched new -t pid --`
  https://github.com/johnstultz-work/priority-inversion-demo/blob/main/run.sh#L89

That way the foreground/background tasks run with separate cookies,
which forces proxying across cookies, and with that I've tripped over
the issues you highlight.

That said, I'm still curious to learn more about your x86 environment
and why it tripped so much more easily there, so let me know.

Your patches do seem to resolve things, but I'm also hoping
to find some better ways to more thoroughly stress the
proxy+core-sched logic.

thanks again!
-john


* Re: [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners
  2026-05-12 22:16   ` John Stultz
@ 2026-05-14  9:54     ` K Prateek Nayak
  0 siblings, 0 replies; 8+ messages in thread
From: K Prateek Nayak @ 2026-05-14  9:54 UTC (permalink / raw)
  To: John Stultz, Vasily Gorbik
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Vineeth Pillai, Joel Fernandes,
	Heiko Carstens, linux-s390, linux-kernel


Hello John, Vasily,

On 5/13/2026 3:46 AM, John Stultz wrote:
> On Thu, May 7, 2026 at 3:42 AM Vasily Gorbik <gor@linux.ibm.com> wrote:
>>
>> Core scheduling chooses a core-wide cookie before __schedule()
>> installs the next task. With proxy-exec enabled, that task becomes the
>> donor/scheduling context, and find_proxy_task() may then replace the
>> execution context with the runnable mutex owner. If its cookie differs
>> from the selected core cookie, running it would bypass core scheduling's
>> cookie selection.
>>
>> When the final mutex owner found by find_proxy_task() does not match the
>> selected core cookie, stop proxying the donor. If the current execution
>> context is already in the blocked chain, fall back to idle like the
>> existing proxy-exec retry paths do. Otherwise deactivate the donor and
>> let __schedule() pick again. The mutex owner can be picked later under
>> its own cookie.
>>
>> Fixes: 7de9d4f94638 ("sched: Start blocked_on chain processing in find_proxy_task()")
>> Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
>> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
>> ---
>>  kernel/sched/core.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 8aed55592ca9..d338fb714ce8 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -6960,6 +6960,12 @@ find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
>>                  */
>>         }
>>         WARN_ON_ONCE(owner && !owner->on_rq);
>> +
>> +       if (owner && !sched_cpu_cookie_match(rq, owner)) {
>> +               if (curr_in_chain)
>> +                       return proxy_resched_idle(rq);
>> +               goto deactivate;
>> +       }
> 
> 
> Hrm. This is less pretty.
> 
> My previous (admittedly shallow) thinking on the core-scheduler was
> that it wouldn't be an issue for proxy because the donor wasn't going
> to actually run on the cpu, so whatever isolation is done on the core,
> the donor migration wouldn't be a problem.
> 
> But I'm seeing now the donor won't be *chosen* until it has the right
> core_cookie, and then that may be different from the owners cookie.
> 
> It seems like ideally we want the donor's effective cookie to be the
> same as the runnable-owner's in the chain.  The downside to this is
> you have to walk the blocked_on chain to evaluate this, and the whole
> core_tree rbtree sorts by cookie, so its not trivial to rework
> selection this way.   And since the runnable-owner of the chain-tree
> changes over time, we can't just set the inherited cookie when we set
> blocked_on.
> 
> So I will need to think a bit more on this.

Sorry it took me a little while to wrap my head around the core pick
bits (and an embarrassingly long time to spot the rq_lockp() trick for
core-wide locking), but with my limited knowledge, this is what I've
come up with to make proxy work with core scheduling, based on the
basic principle that:

o If the lock owner's cookie matches the donor's, we don't have
  to do anything - just run the owner on behalf of the donor.

o If there is a mismatch, then we have to first see if the rq of the
  blocked donor was responsible for core-pick (the rq that has the "max"
  priority task queued in pick_next_task()) and if it was, we can retry
  the pick by overriding the core cookie with that of the lock owner
  (spoofing lock owner as max).

  In case we find a blocked donor on a CPU that is not influencing the
  core cookie, we either swap to a different task with same cookie or
  force-idle the core.

This is the diff I have been testing so far:

  (On top of tip:sched/core at 4ac4d6549a656 + Vasily's Patch 1
   from this series)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8aed55592ca96..abddb958e10b5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -341,6 +341,18 @@ void sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags)
 
 static int sched_task_is_throttled(struct task_struct *p, int cpu)
 {
+	/*
+	 * Don't move / select blocked tasks during cookie stealing.
+	 * Proxy execution + core scheduling uses __schedule()
+	 * for migration, return-migration, and selecting the
+	 * correct core-cookie based on the donor context.
+	 *
+	 * Simplest way to achieve this is by spoofing the throttle
+	 * status for blocked donors.
+	 */
+	if (task_is_blocked(p))
+		return 1;
+
 	if (p->sched_class->task_is_throttled)
 		return p->sched_class->task_is_throttled(p, cpu);
 
@@ -6061,6 +6073,9 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	BUG(); /* The idle class should always have a runnable task. */
 }
 
+static struct task_struct *
+find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf);
+
 #ifdef CONFIG_SCHED_CORE
 static inline bool is_task_rq_idle(struct task_struct *t)
 {
@@ -6104,6 +6119,96 @@ extern void task_vruntime_update(struct rq *rq, struct task_struct *p, bool in_f
 
 static void queue_core_balance(struct rq *rq);
 
+static inline bool proxy_should_set_donor(struct rq *rq)
+{
+	bool donor_already_set = rq->core_proxy_pick;
+
+	if (!sched_core_enabled(rq))
+		return true;
+
+	/* The core_proxy_pick cycle has ended. */
+	rq->core_proxy_pick = false;
+	return !donor_already_set;
+}
+
+static struct task_struct *
+proxy_steal_cookie(struct rq *rq, struct task_struct *donor, struct task_struct *owner)
+{
+	if (!sched_core_enabled(rq))
+		return owner;
+
+	/* owner can safely run in place of donor */
+	if (cookie_match(donor, owner))
+		return owner;
+
+	/*
+	 * Another CPU on this core dictated the pick! Try to
+	 * find a task with matching cookie on this rq,
+	 * otherwise resort to force-idling.
+	 */
+	if (!rq->core_pick_leader) {
+		unsigned long cookie = rq->core->core_cookie;
+		struct task_struct *next;
+		bool fi_before = false;
+
+		next = sched_core_find(rq, cookie);
+		if (next == donor)
+			next = sched_core_next(next, cookie);
+
+		/*
+		 * We have a compatible next that is not a blocked task.
+		 * Switch context and run that instead.
+		 */
+		if (next)
+			goto found;
+
+		/*
+		 * rq doesn't have any compatible task and the core-pick
+		 * was dictated by a remote CPU. Transition to force-idle.
+		 *
+		 * XXX: Set &rf to NULL and use the idle class pick to
+		 * notify sched-ext core of idling although proxy execution
+		 * and sched-ext are mutually exclusive at the moment.
+		 */
+		next = idle_sched_class.pick_task(rq, NULL);
+		if (schedstat_enabled() && rq->core->core_forceidle_count) {
+			rq->core->core_forceidle_start = rq_clock(rq->core);
+			rq->core->core_forceidle_occupation--;
+		}
+
+		fi_before = !!rq->core->core_forceidle_count;
+		rq->core->core_forceidle_count++;
+		if (!fi_before) {
+			rq->core->core_forceidle_seq++;
+			task_vruntime_update(rq, donor, !!rq->core->core_forceidle_count);
+		}
+
+		rq->core_dl_server = NULL;
+		queue_core_balance(rq);
+found:
+		WARN_ON_ONCE(!cookie_match(next, donor));
+		put_prev_set_next_task(rq, donor, next);
+		rq_set_donor(rq, next);
+		return next;
+	}
+
+	/*
+	 * This CPU dictated the core-wide pick. Since this CPU is running
+	 * the highest priority donor on the core, it is possible to
+	 * dictate the core-cookie and explore proxy execution.
+	 *
+	 * Set rq->core_proxy_pick to the owner and explore a re-pick.
+	 * Returning NULL here would go through another cycle of
+	 * pick_next_task() which will update the cookie to owner's cookie
+	 * and retry.
+	 *
+	 * __schedule() runs with IRQs disabled and the owner will remain
+	 * valid until the re-pick even when the core lock is dropped.
+	 */
+	rq->core_proxy_pick = true;
+	return NULL;
+}
+
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	__must_hold(__rq_lockp(rq))
@@ -6131,6 +6236,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 		 */
 		rq->core_pick = NULL;
 		rq->core_dl_server = NULL;
+		rq->core_pick_leader = true;
 		return __pick_next_task(rq, prev, rf);
 	}
 
@@ -6150,12 +6256,18 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 		next = rq->core_pick;
 		rq->dl_server = rq->core_dl_server;
+		WARN_ON_ONCE(rq->core_proxy_pick);
 		rq->core_pick = NULL;
 		rq->core_dl_server = NULL;
 		goto out_set_next;
 	}
 
-	prev_balance(rq, prev, rf);
+	/*
+	 * If the CPU is retrying the pick for the sake of proxy, skip the
+	 * ->balance() calls since the pick on this CPU has already stabilized.
+	 */
+	if (likely(!rq->core_proxy_pick))
+		prev_balance(rq, prev, rf);
 
 	smt_mask = cpu_smt_mask(cpu);
 	need_sync = !!rq->core->core_cookie;
@@ -6188,6 +6300,15 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	 */
 	rq->core->core_task_seq++;
 
+	/*
+	 * If the CPU is here to re-evaluate the cookie for proxy-execution,
+	 * skip the pick on this CPU since either the selected donor or the
+	 * proxied owner has a valid core-cookie and the core-wide pick
+	 * is necessary in either case.
+	 */
+	if (unlikely(rq->core_proxy_pick))
+		goto restart_multi;
+
 	/*
 	 * Optimize for common case where this CPU has no cookies
 	 * and there are no cookied tasks running on siblings.
@@ -6200,6 +6321,12 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 		if (!next->core_cookie) {
 			rq->core_pick = NULL;
 			rq->core_dl_server = NULL;
+			/*
+			 * If no cookie is set, this CPU is free to trigger
+			 * a re-pick for proxy execution if the selection
+			 * turns out to be a blocked donor.
+			 */
+			rq->core_pick_leader = true;
 			/*
 			 * For robustness, update the min_vruntime_fi for
 			 * unconstrained picks as well.
@@ -6229,6 +6356,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 		if (i != cpu && (rq_i != rq->core || !core_clock_updated))
 			update_rq_clock(rq_i);
 
+		rq_i->core_pick_leader = false;
 		p = pick_task(rq_i, rf);
 		if (unlikely(p == RETRY_TASK))
 			goto restart_multi;
@@ -6236,10 +6364,72 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 		rq_i->core_pick = p;
 		rq_i->core_dl_server = rq_i->dl_server;
 
-		if (!max || prio_less(max, p, fi_before))
+		if (!max || prio_less(max, p, fi_before)) {
+			rq_i->core_pick_leader = true;
 			max = p;
+		}
 	}
 
+	if (unlikely(rq->core_proxy_pick)) {
+		struct task_struct *owner, *donor = rq->donor;
+
+		/*
+		 * Reset indicator; For all the failure cases, the
+		 * "core_proxy_pick" should be considered ended.
+		 */
+		rq->core_proxy_pick = false;
+
+		/*
+		 * The core_proxy_pick should no longer be considered if:
+		 *
+		 * 1) rq is no longer the pick leader (running the highest priority task).
+		 * 2) rq->donor is no longer the rq->core_pick. (Different pick).
+		 * 3) rq->core_pick is no longer blocked. (Donor was woken up)
+		 * 4) find_proxy_task(rq->donor) no longer points to core_proxy_pick.
+		 *    (change in the wait chain)
+		 */
+		if (!rq->core_pick_leader ||
+		    donor != rq->core_pick ||
+		    !task_is_blocked(donor))
+			goto continue_pick;
+
+		/* Check the wait-chain again. */
+		owner = find_proxy_task(rq, donor, rf);
+		/*
+		 * Wait chain has changed and requires a re-pick :-(
+		 * Since there is no dependency on rq->curr, do the
+		 * repick now.
+		 *
+		 * The "core_proxy_pick" has ended; Continue with the
+		 * default path. __schedule() will trigger a repick
+		 * if it finds it appropriate.
+		 */
+		if (!owner)
+			goto restart_multi;
+		/*
+		 * wait-chain has a new dependency on rq->curr!
+		 *
+		 * Continue with the core_pick as is for now even if
+		 * it is blocked. Effect of proxy_resched_idle() will
+		 * be undone at "out_set_next" on the way out and
+		 * find_proxy_task() in __schedule() will redo these
+		 * changes.
+		 */
+		if (owner == rq->idle) {
+			prev = rq->idle; /* New donor */
+			goto continue_pick;
+		}
+		/*
+		 * If the wait-chain changed but find_proxy_task()
+		 * returned a valid owner task, use it as the new max
+		 * task and continue with the pick.
+		 */
+		rq->core_proxy_pick = true;
+		rq->core_pick = owner;
+		next = owner;
+		max = owner;
+	}
+continue_pick:
 	cookie = rq->core->core_cookie = max->core_cookie;
 
 	/*
@@ -6335,8 +6525,17 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 		resched_curr(rq_i);
 	}
-
+	/*
+	 * If this was a proxy driven pick, skip switching the donor context.
+	 * "core_proxy_pick" indicator will be consumed in the caller since
+	 * "rq->donor" can be overridden by the caller.
+	 */
+	if (unlikely(rq->core_proxy_pick)) {
+		WARN_ON_ONCE(rq->donor != prev);
+		return next;
+	}
 out_set_next:
+	WARN_ON_ONCE(rq->core_proxy_pick);
 	put_prev_set_next_task(rq, prev, next);
 	if (rq->core->core_forceidle_count && next == rq->idle)
 		queue_core_balance(rq);
@@ -6464,6 +6663,7 @@ static void sched_core_cpu_starting(unsigned int cpu)
 	guard(core_lock)(&cpu);
 
 	WARN_ON_ONCE(rq->core != rq);
+	rq->core_proxy_pick = false;
 
 	/* if we're the first, we'll be our own leader */
 	if (cpumask_weight(smt_mask) == 1)
@@ -6558,6 +6758,13 @@ static inline void sched_core_cpu_dying(unsigned int cpu)
 static inline void sched_core_cpu_starting(unsigned int cpu) {}
 static inline void sched_core_cpu_deactivate(unsigned int cpu) {}
 static inline void sched_core_cpu_dying(unsigned int cpu) {}
+static inline bool proxy_should_set_donor(struct rq *rq) { return true; }
+
+static inline struct task_struct *
+proxy_steal_cookie(struct rq *rq, struct task_struct *donor, struct task_struct *next)
+{
+	return next;
+}
 
 static struct task_struct *
 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
@@ -6653,6 +6860,7 @@ static inline void proxy_set_task_cpu(struct task_struct *p, int cpu)
 static inline struct task_struct *proxy_resched_idle(struct rq *rq)
 {
 	put_prev_set_next_task(rq, rq->donor, rq->idle);
+	rq->next_class = &idle_sched_class;
 	rq_set_donor(rq, rq->idle);
 	set_tsk_need_resched(rq->idle);
 	return rq->idle;
@@ -7115,7 +7323,9 @@ static void __sched notrace __schedule(int sched_mode)
 	if (sched_proxy_exec()) {
 		struct task_struct *prev_donor = rq->donor;
 
-		rq_set_donor(rq, next);
+		if (likely(proxy_should_set_donor(rq)))
+			rq_set_donor(rq, next);
+
 		if (unlikely(next->blocked_on)) {
 			next = find_proxy_task(rq, next, &rf);
 			if (!next) {
@@ -7126,6 +7336,21 @@ static void __sched notrace __schedule(int sched_mode)
 				zap_balance_callbacks(rq);
 				goto keep_resched;
 			}
+			/*
+			 * Check if the cookie matches. next can be:
+			 *
+			 * - The lock owner found by find_proxy_task()
+			 *   if it has a compatible cookie.
+			 *
+			 * - rq->idle if the core-wide pick was dictated
+			 *   by a task on a remote CPU and this CPU
+			 *   should now force-idle until the next pick.
+			 *
+			 * - NULL if the CPU should retry the pick.
+			 */
+			next = proxy_steal_cookie(rq, rq->donor, next);
+			if (!next)
+				goto pick_again;
 		}
 		if (rq->donor == prev_donor && prev != next) {
 			struct task_struct *donor = rq->donor;
@@ -9034,6 +9259,7 @@ void __init sched_init(void)
 		rq->core_forceidle_start = 0;
 
 		rq->core_cookie = 0UL;
+		rq->core_proxy_pick = false;
 #endif
 		zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9f63b15d309d1..cd8acc85f5158 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1333,14 +1333,38 @@ struct rq {
 	unsigned int		core_enabled;
 	unsigned int		core_sched_seq;
 	struct rb_root		core_tree;
+	/*
+	 * Proxy Execution bits:
+	 *
+	 * core_proxy_pick: The CPU is redoing
+	 * pick_next_task() for proxy execution when
+	 * the lock owner has a different core cookie
+	 * compared to the donor.
+	 *
+	 * core_pick_leader: Indicator on the runqueue
+	 * that has the highest priority task from the
+	 * core-wide pick. If a !core_pick_leader finds
+	 * a blocked donor, it should go into a force
+	 * idle since the core_cookie is being dictated
+	 * by a task on a different CPU.
+	 *
+	 * XXX: Intentionally placed outside
+	 * CONFIG_SCHED_PROXY_EXEC for now for the sake
+	 * of this RFC. Do we really need to complicate
+	 * the dependency for 4-bytes that fit perfectly
+	 * in a hole?
+	 */
+	bool			core_proxy_pick;
+	bool			core_pick_leader;
+				/* Hole */
 
 	/* shared state -- careful with sched_core_cpu_deactivate() */
 	unsigned int		core_task_seq;
 	unsigned int		core_pick_seq;
-	unsigned long		core_cookie;
 	unsigned int		core_forceidle_count;
 	unsigned int		core_forceidle_seq;
 	unsigned int		core_forceidle_occupation;
+	unsigned long		core_cookie;
 	u64			core_forceidle_start;
 #endif /* CONFIG_SCHED_CORE */
 
---

So far it has survived 10 runs of the modified priority-inversion-demo
with the coresched prefix. Attached is a chart comparing
sched_proxy_exec=0 (no-proxy*) vs sched_proxy_exec=1 (proxy*) runs.
There is still much room for optimization.

Full disclaimer: I see a:

    NOHZ tick-stop error: local softirq work is pending, handler #200!!!

being logged once in a while during the test, but I see it even with
sched_proxy_exec=0, so I'm not sure whether it is something I've
introduced. I'll chase it later on tip:sched/core if it is
reproducible there.

I have only tested CONFIG_SCHED_PROXY_EXEC + CONFIG_SCHED_CORE and
disabling just one might fail the build (or crash and burn).

Early feedback is welcome. I hope I have left enough comments in there
to justify the rationale (or how I might be breaking core-scheduling)
:-)

-- 
Thanks and Regards,
Prateek

[-- Attachment #2: chart-split.png --]
[-- Type: image/png, Size: 186991 bytes --]


Thread overview: 8+ messages
2026-05-07 10:41 [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions Vasily Gorbik
2026-05-07 10:41 ` [PATCH v2 1/2] sched/core: Don't steal a proxy-exec donor Vasily Gorbik
2026-05-12 21:35   ` John Stultz
2026-05-07 10:41 ` [PATCH v2 2/2] sched/core: Don't proxy-exec unmatched cookie lock owners Vasily Gorbik
2026-05-12 22:16   ` John Stultz
2026-05-14  9:54     ` K Prateek Nayak
2026-05-12 21:17 ` [PATCH v2 0/2] sched/core: Fix proxy-exec/core-sched interactions John Stultz
2026-05-13  0:48   ` John Stultz
