BPF List
* [PATCH bpf-next v3] bpf: prevent nesting overflow in bpf_try_get_buffers
@ 2025-11-14  6:49 Sahil Chandna
  2025-11-14 21:10 ` patchwork-bot+netdevbpf
  0 siblings, 1 reply; 2+ messages in thread
From: Sahil Chandna @ 2025-11-14  6:49 UTC (permalink / raw)
  To: yonghong.song, ast, daniel, andrii, martin.lau, eddyz87, song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, bigeasy, bpf
  Cc: Sahil Chandna, syzbot+b0cff308140f79a9c4cb

bpf_try_get_buffers() returns one of multiple per-CPU buffers based on a
per-CPU nesting counter. This mechanism assumes that only a bounded number
of buffers are acquired before any are returned. migrate_disable() ensures that a
task remains on the same CPU, but it does not prevent the task from being
preempted by another task on that CPU.

Without disabled preemption, a task may be preempted while holding a
buffer, allowing another task to run on the same CPU and acquire an
additional buffer. Several such preemptions can cause the per-CPU
nest counter to exceed MAX_BPRINTF_NEST_LEVEL and trigger the warning in
bpf_try_get_buffers(). Adding preempt_disable()/preempt_enable() around
buffer acquisition and release prevents this task preemption and
preserves the intended bounded nesting behavior.

Reported-by: syzbot+b0cff308140f79a9c4cb@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68f6a4c8.050a0220.1be48.0011.GAE@google.com/
Fixes: 4223bf833c849 ("bpf: Remove preempt_disable in bpf_try_get_buffers")
Suggested-by: Yonghong Song <yonghong.song@linux.dev>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
---
Changes since v2:
- Updated commit message as per suggestion from Sebastian

Changes since v1:
- Removed an extra preempt_enable() call that could leave the preempt
state inconsistent if reached without a matching preempt_disable().
- Corrected tags as suggested by Sebastian

Link to v2: https://lore.kernel.org/all/20251111170628.410641-1-chandna.sahil@gmail.com/
Link to v1: https://lore.kernel.org/all/20251109173648.401996-1-chandna.sahil@gmail.com/

Testing:
Tested using syzkaller reproducers from:
  [1] https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
  [2] https://syzkaller.appspot.com/bug?extid=b0cff308140f79a9c4cb

Validation was done on PREEMPT_FULL and PREEMPT_RT configurations.
---
 kernel/bpf/helpers.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index eb25e70e0bdc..3879eb42a681 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -777,9 +777,11 @@ int bpf_try_get_buffers(struct bpf_bprintf_buffers **bufs)
 {
 	int nest_level;
 
+	preempt_disable();
 	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
 	if (WARN_ON_ONCE(nest_level > MAX_BPRINTF_NEST_LEVEL)) {
 		this_cpu_dec(bpf_bprintf_nest_level);
+		preempt_enable();
 		return -EBUSY;
 	}
 	*bufs = this_cpu_ptr(&bpf_bprintf_bufs[nest_level - 1]);
@@ -792,6 +794,7 @@ void bpf_put_buffers(void)
 	if (WARN_ON_ONCE(this_cpu_read(bpf_bprintf_nest_level) == 0))
 		return;
 	this_cpu_dec(bpf_bprintf_nest_level);
+	preempt_enable();
 }
 
 void bpf_bprintf_cleanup(struct bpf_bprintf_data *data)
-- 
2.50.1



* Re: [PATCH bpf-next v3] bpf: prevent nesting overflow in bpf_try_get_buffers
  2025-11-14  6:49 [PATCH bpf-next v3] bpf: prevent nesting overflow in bpf_try_get_buffers Sahil Chandna
@ 2025-11-14 21:10 ` patchwork-bot+netdevbpf
  0 siblings, 0 replies; 2+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-11-14 21:10 UTC (permalink / raw)
  To: Sahil Chandna
  Cc: yonghong.song, ast, daniel, andrii, martin.lau, eddyz87, song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, bigeasy, bpf,
	syzbot+b0cff308140f79a9c4cb

Hello:

This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Fri, 14 Nov 2025 12:19:22 +0530 you wrote:
> bpf_try_get_buffers() returns one of multiple per-CPU buffers based on a
> per-CPU nesting counter. This mechanism expects that buffers are not
> endlessly acquired before being returned. migrate_disable() ensures that a
> task remains on the same CPU, but it does not prevent the task from being
> preempted by another task on that CPU.
> 
> Without disabled preemption, a task may be preempted while holding a
> buffer, allowing another task to run on the same CPU and acquire an
> additional buffer. Several such preemptions can cause the per-CPU
> nest counter to exceed MAX_BPRINTF_NEST_LEVEL and trigger the warning in
> bpf_try_get_buffers(). Adding preempt_disable()/preempt_enable() around
> buffer acquisition and release prevents this task preemption and
> preserves the intended bounded nesting behavior.
> 
> [...]

Here is the summary with links:
  - [bpf-next,v3] bpf: prevent nesting overflow in bpf_try_get_buffers
    https://git.kernel.org/bpf/bpf-next/c/c1da3df7191f

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



