netdev.vger.kernel.org archive mirror
* [PATCH v2] bpf: test_run: Use migrate_enable()/disable() universally
@ 2025-10-10  7:59 Sahil Chandna
From: Sahil Chandna @ 2025-10-10  7:59 UTC (permalink / raw)
  To: ast, daniel, andrii, martin.lau, song, john.fastabend, haoluo,
	jolsa, bpf, netdev
  Cc: david.hunter.linux, skhan, khalid, chandna.linuxkernel,
	syzbot+1f1fbecb9413cdbfbef8

The test timer helpers can safely use migrate_disable()/migrate_enable()
in all cases instead of conditionally disabling either preemption or
migration. Previously, the flow dissector and sk_lookup test runners
initialized the timer in NO_PREEMPT mode, which disabled preemption and
forced execution in atomic context. On PREEMPT_RT configurations,
spin_lock_bh() is a sleeping lock, so taking it in that atomic context
triggered the following warning:

BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6107, name: syz.0.17
preempt_count: 1, expected: 0
RCU nest depth: 1, expected: 1
Preemption disabled at:
[<ffffffff891fce58>] bpf_test_timer_enter+0xf8/0x140 net/bpf/test_run.c:42

Reported-by: syzbot+1f1fbecb9413cdbfbef8@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
Tested-by: syzbot+1f1fbecb9413cdbfbef8@syzkaller.appspotmail.com
Signed-off-by: Sahil Chandna <chandna.linuxkernel@gmail.com>

---
Link to v1: https://lore.kernel.org/all/20251006054320.159321-1-chandna.linuxkernel@gmail.com/

Changes since v1:
- Dropped `enum { NO_PREEMPT, NO_MIGRATE } mode` from `struct bpf_test_timer`.
- Removed all conditional preempt/migrate disable logic.
- Unified timer handling to use `migrate_disable()` / `migrate_enable()` universally.

Testing:
- Reproduced syzbot bug locally using the provided reproducer.
- Observed `BUG: sleeping function called from invalid context` on v1.
- Confirmed bug disappears after applying this patch.
- Validated normal functionality of the `bpf_prog_test_run_*` helpers with
  the C reproducer.
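
For reviewers, a rough sketch of why the switch avoids the splat on
PREEMPT_RT (illustrative only; the lock name is hypothetical and this is
not the exact kernel call chain):

	preempt_disable();		/* NO_PREEMPT mode: atomic context */
	spin_lock_bh(&stats_lock);	/* rtmutex-based on RT; may sleep -> BUG */
	/* ... */
	spin_unlock_bh(&stats_lock);
	preempt_enable();

	migrate_disable();		/* task stays preemptible, pinned to one CPU */
	spin_lock_bh(&stats_lock);	/* sleeping here is now legal */
	/* ... */
	spin_unlock_bh(&stats_lock);
	migrate_enable();
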
---
 net/bpf/test_run.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index dfb03ee0bb62..b23bc93e738e 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -29,7 +29,6 @@
 #include <trace/events/bpf_test_run.h>
 
 struct bpf_test_timer {
-	enum { NO_PREEMPT, NO_MIGRATE } mode;
 	u32 i;
 	u64 time_start, time_spent;
 };
@@ -38,10 +37,7 @@ static void bpf_test_timer_enter(struct bpf_test_timer *t)
 	__acquires(rcu)
 {
 	rcu_read_lock();
-	if (t->mode == NO_PREEMPT)
-		preempt_disable();
-	else
-		migrate_disable();
+	migrate_disable();
 
 	t->time_start = ktime_get_ns();
 }
@@ -50,11 +46,7 @@ static void bpf_test_timer_leave(struct bpf_test_timer *t)
 	__releases(rcu)
 {
 	t->time_start = 0;
-
-	if (t->mode == NO_PREEMPT)
-		preempt_enable();
-	else
-		migrate_enable();
+	migrate_enable();
 	rcu_read_unlock();
 }
 
@@ -374,7 +366,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx,
 
 {
 	struct xdp_test_data xdp = { .batch_size = batch_size };
-	struct bpf_test_timer t = { .mode = NO_MIGRATE };
+	struct bpf_test_timer t;
 	int ret;
 
 	if (!repeat)
@@ -404,7 +396,7 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
 	struct bpf_prog_array_item item = {.prog = prog};
 	struct bpf_run_ctx *old_ctx;
 	struct bpf_cg_run_ctx run_ctx;
-	struct bpf_test_timer t = { NO_MIGRATE };
+	struct bpf_test_timer t;
 	enum bpf_cgroup_storage_type stype;
 	int ret;
 
@@ -1377,7 +1369,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 				     const union bpf_attr *kattr,
 				     union bpf_attr __user *uattr)
 {
-	struct bpf_test_timer t = { NO_PREEMPT };
+	struct bpf_test_timer t;
 	u32 size = kattr->test.data_size_in;
 	struct bpf_flow_dissector ctx = {};
 	u32 repeat = kattr->test.repeat;
@@ -1445,7 +1437,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
 int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
 				union bpf_attr __user *uattr)
 {
-	struct bpf_test_timer t = { NO_PREEMPT };
+	struct bpf_test_timer t;
 	struct bpf_prog_array *progs = NULL;
 	struct bpf_sk_lookup_kern ctx = {};
 	u32 repeat = kattr->test.repeat;
-- 
2.50.1



* Re: [PATCH v2] bpf: test_run: Use migrate_enable()/disable() universally
@ 2025-10-10 15:28 ` Yonghong Song
From: Yonghong Song @ 2025-10-10 15:28 UTC (permalink / raw)
  To: Sahil Chandna, ast, daniel, andrii, martin.lau, song,
	john.fastabend, haoluo, jolsa, bpf, netdev
  Cc: david.hunter.linux, skhan, khalid, syzbot+1f1fbecb9413cdbfbef8



On 10/10/25 12:59 AM, Sahil Chandna wrote:
> [...]
> @@ -374,7 +366,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx,
>   
>   {
>   	struct xdp_test_data xdp = { .batch_size = batch_size };
> -	struct bpf_test_timer t = { .mode = NO_MIGRATE };
> +	struct bpf_test_timer t;

We still need to initialize 'struct bpf_test_timer t' with t.time_spent = 0 like
	struct bpf_test_timer t = {};
since time_spent is used like
         t->time_spent += ktime_get_ns() - t->time_start;



* Re: [PATCH v2] bpf: test_run: Use migrate_enable()/disable() universally
@ 2025-10-11 13:17 ` Menglong Dong
From: Menglong Dong @ 2025-10-11 13:17 UTC (permalink / raw)
  To: Sahil Chandna
  Cc: ast, daniel, andrii, martin.lau, song, john.fastabend, haoluo,
	jolsa, bpf, netdev, david.hunter.linux, skhan, khalid,
	chandna.linuxkernel, syzbot+1f1fbecb9413cdbfbef8

On 2025/10/10 15:59, Sahil Chandna wrote:
> [...]
> @@ -38,10 +37,7 @@ static void bpf_test_timer_enter(struct bpf_test_timer *t)
>  	__acquires(rcu)
>  {
>  	rcu_read_lock();
> -	if (t->mode == NO_PREEMPT)
> -		preempt_disable();
> -	else
> -		migrate_disable();
> +	migrate_disable();

Maybe we can use rcu_read_lock_dont_migrate/rcu_read_unlock_migrate
here instead, which has better performance :)
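
Roughly the idea (a sketch only; the actual in-tree definitions may
differ):

/* Sketch: pair the RCU read lock with migration disabling, skipping
 * the migrate bookkeeping when the RCU read side is already
 * non-preemptible. Check the real definitions before relying on this. */
static inline void rcu_read_lock_dont_migrate(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
		migrate_disable();
	rcu_read_lock();
}

static inline void rcu_read_unlock_migrate(void)
{
	rcu_read_unlock();
	if (IS_ENABLED(CONFIG_PREEMPT_RCU))
		migrate_enable();
}
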

Thanks!
Menglong Dong
