* [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode
@ 2025-10-06 5:43 Sahil Chandna
2025-10-07 0:34 ` David Hunter
2025-10-07 5:15 ` Yonghong Song
0 siblings, 2 replies; 6+ messages in thread
From: Sahil Chandna @ 2025-10-06 5:43 UTC (permalink / raw)
To: ast, daniel, andrii, martin.lau, song, john.fastabend, haoluo,
jolsa, bpf, netdev
Cc: david.hunter.linux, skhan, khalid, Sahil Chandna,
syzbot+1f1fbecb9413cdbfbef8
By default, the timer mode is initialized to `NO_PREEMPT`.
This disables preemption and forces execution in atomic context.
On PREEMPT_RT this causes a problem when spin_lock_bh() is called,
because the lock is a sleeping lock there.
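For reference, bpf_test_timer_enter() picks the primitive based on this
mode (a simplified sketch of the helper in net/bpf/test_run.c; the leave
path mirrors it with the matching enable calls):

static void bpf_test_timer_enter(struct bpf_test_timer *t)
	__acquires(rcu)
{
	rcu_read_lock();
	if (t->mode == NO_PREEMPT)
		preempt_disable();	/* atomic context: sleeping locks are illegal */
	else
		migrate_disable();	/* pinned to this CPU, but may still sleep */

	t->time_start = ktime_get_ns();
}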
...
BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6107, name: syz.0.17
preempt_count: 1, expected: 0
RCU nest depth: 1, expected: 1
Preemption disabled at:
[<ffffffff891fce58>] bpf_test_timer_enter+0xf8/0x140 net/bpf/test_run.c:42
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
__might_resched+0x44b/0x5d0 kernel/sched/core.c:8957
__rt_spin_lock kernel/locking/spinlock_rt.c:48 [inline]
rt_spin_lock+0xc7/0x2c0 kernel/locking/spinlock_rt.c:57
spin_lock_bh include/linux/spinlock_rt.h:88 [inline]
__sock_map_delete net/core/sock_map.c:421 [inline]
sock_map_delete_elem+0xb7/0x170 net/core/sock_map.c:452
bpf_prog_2c29ac5cdc6b1842+0x43/0x4b
bpf_dispatcher_nop_func include/linux/bpf.h:1332 [inline]
...
Change initialization to NO_MIGRATE mode to prevent this.
Reported-by: syzbot+1f1fbecb9413cdbfbef8@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
Tested-by: syzbot+1f1fbecb9413cdbfbef8@syzkaller.appspotmail.com
Signed-off-by: Sahil Chandna <chandna.linuxkernel@gmail.com>
---
net/bpf/test_run.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 4a862d605386..daf966dfed69 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -1368,7 +1368,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
- struct bpf_test_timer t = { NO_PREEMPT };
+ struct bpf_test_timer t = { NO_MIGRATE };
u32 size = kattr->test.data_size_in;
struct bpf_flow_dissector ctx = {};
u32 repeat = kattr->test.repeat;
@@ -1436,7 +1436,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
- struct bpf_test_timer t = { NO_PREEMPT };
+ struct bpf_test_timer t = { NO_MIGRATE };
struct bpf_prog_array *progs = NULL;
struct bpf_sk_lookup_kern ctx = {};
u32 repeat = kattr->test.repeat;
--
2.50.1
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode
2025-10-06 5:43 [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode Sahil Chandna
@ 2025-10-07 0:34 ` David Hunter
2025-10-07 5:15 ` Yonghong Song
1 sibling, 0 replies; 6+ messages in thread
From: David Hunter @ 2025-10-07 0:34 UTC (permalink / raw)
To: Sahil Chandna, ast, daniel, andrii, martin.lau, song,
john.fastabend, haoluo, jolsa, bpf, netdev
Cc: skhan, khalid, syzbot+1f1fbecb9413cdbfbef8
On 10/6/25 01:43, Sahil Chandna wrote:
> By default, the timer mode is initialized to `NO_PREEMPT`.
> This disables preemption and forces execution in atomic context.
> On PREEMPT_RT this causes a problem when spin_lock_bh() is called,
> because the lock is a sleeping lock there.
What kind of testing did you do for this?
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode
2025-10-06 5:43 [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode Sahil Chandna
2025-10-07 0:34 ` David Hunter
@ 2025-10-07 5:15 ` Yonghong Song
1 sibling, 0 replies; 6+ messages in thread
From: Yonghong Song @ 2025-10-07 5:15 UTC (permalink / raw)
To: Sahil Chandna, ast, daniel, andrii, martin.lau, song,
john.fastabend, haoluo, jolsa, bpf, netdev
Cc: david.hunter.linux, skhan, khalid, syzbot+1f1fbecb9413cdbfbef8
On 10/5/25 10:43 PM, Sahil Chandna wrote:
> By default, the timer mode is initialized to `NO_PREEMPT`.
> This disables preemption and forces execution in atomic context.
> On PREEMPT_RT this causes a problem when spin_lock_bh() is called,
> because the lock is a sleeping lock there.
> ...
> BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6107, name: syz.0.17
> preempt_count: 1, expected: 0
> RCU nest depth: 1, expected: 1
> Preemption disabled at:
> [<ffffffff891fce58>] bpf_test_timer_enter+0xf8/0x140 net/bpf/test_run.c:42
> Call Trace:
> <TASK>
> dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
> __might_resched+0x44b/0x5d0 kernel/sched/core.c:8957
> __rt_spin_lock kernel/locking/spinlock_rt.c:48 [inline]
> rt_spin_lock+0xc7/0x2c0 kernel/locking/spinlock_rt.c:57
> spin_lock_bh include/linux/spinlock_rt.h:88 [inline]
> __sock_map_delete net/core/sock_map.c:421 [inline]
> sock_map_delete_elem+0xb7/0x170 net/core/sock_map.c:452
> bpf_prog_2c29ac5cdc6b1842+0x43/0x4b
> bpf_dispatcher_nop_func include/linux/bpf.h:1332 [inline]
> ...
> Change initialization to NO_MIGRATE mode to prevent this.
>
> Reported-by: syzbot+1f1fbecb9413cdbfbef8@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
> Tested-by: syzbot+1f1fbecb9413cdbfbef8@syzkaller.appspotmail.com
> Signed-off-by: Sahil Chandna <chandna.linuxkernel@gmail.com>
> ---
> net/bpf/test_run.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 4a862d605386..daf966dfed69 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -1368,7 +1368,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> const union bpf_attr *kattr,
> union bpf_attr __user *uattr)
> {
> - struct bpf_test_timer t = { NO_PREEMPT };
> + struct bpf_test_timer t = { NO_MIGRATE };
I checked the original reproducer. Changing from NO_PREEMPT to
NO_MIGRATE is only needed when CONFIG_PREEMPT_RT is enabled.
> u32 size = kattr->test.data_size_in;
> struct bpf_flow_dissector ctx = {};
> u32 repeat = kattr->test.repeat;
> @@ -1436,7 +1436,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
> union bpf_attr __user *uattr)
> {
> - struct bpf_test_timer t = { NO_PREEMPT };
> + struct bpf_test_timer t = { NO_MIGRATE };
This change is not needed for the particular BUG above.
> struct bpf_prog_array *progs = NULL;
> struct bpf_sk_lookup_kern ctx = {};
> u32 repeat = kattr->test.repeat;
Checking the git history, I found that the earliest NO_PREEMPT usage can
be traced back to this commit:
commit a439184d515fbf4805f57d11fa5dfd4524d2c0eb
Author: Stanislav Fomichev <sdf@google.com>
Date: Tue Feb 19 10:54:17 2019 -0800
bpf/test_run: fix unkillable BPF_PROG_TEST_RUN for flow dissector
At that time, migrate_disable()/migrate_enable() were not yet used.
So I suspect that we can remove NO_PREEMPT/NO_MIGRATE in test_run.c
and use migrate_disable()/migrate_enable() universally.
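For example, a minimal sketch of the resulting helpers (the follow-ups
below turn this into an actual diff):

static void bpf_test_timer_enter(struct bpf_test_timer *t)
	__acquires(rcu)
{
	rcu_read_lock();
	migrate_disable();	/* preemptible; only migration is disabled */
	t->time_start = ktime_get_ns();
}

static void bpf_test_timer_leave(struct bpf_test_timer *t)
	__releases(rcu)
{
	t->time_start = 0;
	migrate_enable();
	rcu_read_unlock();
}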
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode
@ 2025-10-09 22:50 Brahmajit Das
2025-10-09 23:10 ` Brahmajit Das
0 siblings, 1 reply; 6+ messages in thread
From: Brahmajit Das @ 2025-10-09 22:50 UTC (permalink / raw)
To: yonghong.song
Cc: andrii, ast, bpf, chandna.linuxkernel, daniel, david.hunter.linux,
haoluo, john.fastabend, jolsa, khalid, martin.lau, netdev, skhan,
song, syzbot+1f1fbecb9413cdbfbef8
Yonghong Song,
> So I suspect that we can remove NO_PREEMPT/NO_MIGRATE in test_run.c
> and use migrate_disable()/migrate_enable() universally.
Would something like this work?
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -38,10 +38,7 @@ static void bpf_test_timer_enter(struct bpf_test_timer *t)
__acquires(rcu)
{
rcu_read_lock();
- if (t->mode == NO_PREEMPT)
- preempt_disable();
- else
- migrate_disable();
+ migrate_disable();
t->time_start = ktime_get_ns();
}
@@ -51,10 +48,7 @@ static void bpf_test_timer_leave(struct bpf_test_timer *t)
{
t->time_start = 0;
- if (t->mode == NO_PREEMPT)
- preempt_enable();
- else
- migrate_enable();
+ migrate_enable();
rcu_read_unlock();
}
--
Regards,
listout
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode
2025-10-09 22:50 Brahmajit Das
@ 2025-10-09 23:10 ` Brahmajit Das
2025-10-10 3:43 ` Yonghong Song
0 siblings, 1 reply; 6+ messages in thread
From: Brahmajit Das @ 2025-10-09 23:10 UTC (permalink / raw)
To: d0fdced7-a9a5-473e-991f-4f5e4c13f616
Cc: yonghong.song, andrii, ast, bpf, chandna.linuxkernel, daniel,
david.hunter.linux, haoluo, john.fastabend, jolsa, khalid,
martin.lau, netdev, skhan, song, syzbot+1f1fbecb9413cdbfbef8
On 10.10.2025 04:20, Brahmajit Das wrote:
> Yonghong Song,
>
> > So I suspect that we can remove NO_PREEMPT/NO_MIGRATE in test_run.c
> > and use migrate_disable()/migrate_enable() universally.
> Would something like this work?
>
Or we can do something like this to completely remove
NO_PREEMPT/NO_MIGRATE.
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -29,7 +29,6 @@
#include <trace/events/bpf_test_run.h>
struct bpf_test_timer {
- enum { NO_PREEMPT, NO_MIGRATE } mode;
u32 i;
u64 time_start, time_spent;
};
@@ -38,10 +37,7 @@ static void bpf_test_timer_enter(struct bpf_test_timer *t)
__acquires(rcu)
{
rcu_read_lock();
- if (t->mode == NO_PREEMPT)
- preempt_disable();
- else
- migrate_disable();
+ migrate_disable();
t->time_start = ktime_get_ns();
}
@@ -51,10 +47,7 @@ static void bpf_test_timer_leave(struct bpf_test_timer *t)
{
t->time_start = 0;
- if (t->mode == NO_PREEMPT)
- preempt_enable();
- else
- migrate_enable();
+ migrate_enable();
rcu_read_unlock();
}
@@ -374,7 +367,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx,
{
struct xdp_test_data xdp = { .batch_size = batch_size };
- struct bpf_test_timer t = { .mode = NO_MIGRATE };
+ struct bpf_test_timer t = {};
int ret;
if (!repeat)
@@ -404,7 +397,7 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
struct bpf_prog_array_item item = {.prog = prog};
struct bpf_run_ctx *old_ctx;
struct bpf_cg_run_ctx run_ctx;
- struct bpf_test_timer t = { NO_MIGRATE };
+ struct bpf_test_timer t = {};
enum bpf_cgroup_storage_type stype;
int ret;
@@ -1377,7 +1370,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
- struct bpf_test_timer t = { NO_PREEMPT };
+ struct bpf_test_timer t = {};
u32 size = kattr->test.data_size_in;
struct bpf_flow_dissector ctx = {};
u32 repeat = kattr->test.repeat;
@@ -1445,7 +1438,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
- struct bpf_test_timer t = { NO_PREEMPT };
+ struct bpf_test_timer t = {};
struct bpf_prog_array *progs = NULL;
struct bpf_sk_lookup_kern ctx = {};
u32 repeat = kattr->test.repeat;
Basically an RFC. I posted a patch; I wasn't aware that work was already
going on.
--
Regards,
listout
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode
2025-10-09 23:10 ` Brahmajit Das
@ 2025-10-10 3:43 ` Yonghong Song
0 siblings, 0 replies; 6+ messages in thread
From: Yonghong Song @ 2025-10-10 3:43 UTC (permalink / raw)
To: Brahmajit Das, d0fdced7-a9a5-473e-991f-4f5e4c13f616
Cc: andrii, ast, bpf, chandna.linuxkernel, daniel, david.hunter.linux,
haoluo, john.fastabend, jolsa, khalid, martin.lau, netdev, skhan,
song, syzbot+1f1fbecb9413cdbfbef8
On 10/9/25 4:10 PM, Brahmajit Das wrote:
> On 10.10.2025 04:20, Brahmajit Das wrote:
>> Yonghong Song,
>>
>>> So I suspect that we can remove NO_PREEMPT/NO_MIGRATE in test_run.c
>>> and use migrate_disable()/migrate_enable() universally.
>> Would something like this work?
>>
> Or we can do something like this to completely remove
> NO_PREEMPT/NO_MIGRATE.
>
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -29,7 +29,6 @@
> #include <trace/events/bpf_test_run.h>
>
> struct bpf_test_timer {
> - enum { NO_PREEMPT, NO_MIGRATE } mode;
> u32 i;
> u64 time_start, time_spent;
> };
> @@ -38,10 +37,7 @@ static void bpf_test_timer_enter(struct bpf_test_timer *t)
> __acquires(rcu)
> {
> rcu_read_lock();
> - if (t->mode == NO_PREEMPT)
> - preempt_disable();
> - else
> - migrate_disable();
> + migrate_disable();
>
> t->time_start = ktime_get_ns();
> }
> @@ -51,10 +47,7 @@ static void bpf_test_timer_leave(struct bpf_test_timer *t)
> {
> t->time_start = 0;
>
> - if (t->mode == NO_PREEMPT)
> - preempt_enable();
> - else
> - migrate_enable();
> + migrate_enable();
> rcu_read_unlock();
> }
>
> @@ -374,7 +367,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx,
>
> {
> struct xdp_test_data xdp = { .batch_size = batch_size };
> - struct bpf_test_timer t = { .mode = NO_MIGRATE };
> + struct bpf_test_timer t = {};
> int ret;
>
> if (!repeat)
> @@ -404,7 +397,7 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
> struct bpf_prog_array_item item = {.prog = prog};
> struct bpf_run_ctx *old_ctx;
> struct bpf_cg_run_ctx run_ctx;
> - struct bpf_test_timer t = { NO_MIGRATE };
> + struct bpf_test_timer t = {};
> enum bpf_cgroup_storage_type stype;
> int ret;
>
> @@ -1377,7 +1370,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> const union bpf_attr *kattr,
> union bpf_attr __user *uattr)
> {
> - struct bpf_test_timer t = { NO_PREEMPT };
> + struct bpf_test_timer t = {};
> u32 size = kattr->test.data_size_in;
> struct bpf_flow_dissector ctx = {};
> u32 repeat = kattr->test.repeat;
> @@ -1445,7 +1438,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
> union bpf_attr __user *uattr)
> {
> - struct bpf_test_timer t = { NO_PREEMPT };
> + struct bpf_test_timer t = {};
> struct bpf_prog_array *progs = NULL;
> struct bpf_sk_lookup_kern ctx = {};
> u32 repeat = kattr->test.repeat;
>
> Basically RFC. I posted a patch, wasn't aware that work was already
> going on.
The above sounds good to me. We should remove the
"enum { NO_PREEMPT, NO_MIGRATE } mode;" field from struct bpf_test_timer.
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread (newest: 2025-10-10 3:43 UTC)
Thread overview: 6+ messages
2025-10-06 5:43 [PATCH] bpf: test_run: Fix timer mode initialization to NO_MIGRATE mode Sahil Chandna
2025-10-07 0:34 ` David Hunter
2025-10-07 5:15 ` Yonghong Song
-- strict thread matches above, loose matches on Subject: below --
2025-10-09 22:50 Brahmajit Das
2025-10-09 23:10 ` Brahmajit Das
2025-10-10 3:43 ` Yonghong Song