* [PATCH bpf-next v2] bpf: use preempt_disable/enable() to protect bpf_bprintf_buffers nesting
@ 2025-11-11 17:06 Sahil Chandna
From: Sahil Chandna @ 2025-11-11 17:06 UTC (permalink / raw)
To: yonghong.song, ast, daniel, andrii, martin.lau, eddyz87, song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, bigeasy, bpf
Cc: Sahil Chandna, syzbot+b0cff308140f79a9c4cb
The bpf_bprintf_prepare() and related helpers (bpf_try_get_buffers() /
bpf_put_buffers()) rely on a per-CPU counter bpf_bprintf_nest_level to
manage nested buffer usage. However, when invoked from different contexts
(process, softirq, NMI), the nesting counter can become inconsistent if
task migration occurs between CPUs during these operations. This can
result in warnings such as:
WARNING: CPU: 1 PID: 6145 at kernel/bpf/helpers.c:781 bpf_try_get_buffers kernel/bpf/helpers.c:781 [inline]
WARNING: CPU: 1 PID: 6145 at kernel/bpf/helpers.c:781 bpf_bprintf_prepare+0x12cf/0x13a0 kernel/bpf/helpers.c:834
Having only migrate_disable is insufficient here to prevent nesting,
hence add preempt_disable()/enable() around buffer acquisition and release.
Reported-by: syzbot+b0cff308140f79a9c4cb@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68f6a4c8.050a0220.1be48.0011.GAE@google.com/
Fixes: 4223bf833c849 ("bpf: Remove preempt_disable in bpf_try_get_buffers")
Suggested-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
---
Changes since v1:
- Remove the additional call to preempt_enable(), which could lead to an
inconsistent preempt state if invoked without a prior preempt_disable().
- Correct tags as suggested by Sebastian.
Testing:
Tested using syzkaller reproducers from:
[1] https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
[2] https://syzkaller.appspot.com/bug?extid=b0cff308140f79a9c4cb
Validation was done on PREEMPT_FULL and PREEMPT_RT configurations.
---
kernel/bpf/helpers.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index eb25e70e0bdc..3879eb42a681 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -777,9 +777,11 @@ int bpf_try_get_buffers(struct bpf_bprintf_buffers **bufs)
 {
 	int nest_level;
 
+	preempt_disable();
 	nest_level = this_cpu_inc_return(bpf_bprintf_nest_level);
 	if (WARN_ON_ONCE(nest_level > MAX_BPRINTF_NEST_LEVEL)) {
 		this_cpu_dec(bpf_bprintf_nest_level);
+		preempt_enable();
 		return -EBUSY;
 	}
 	*bufs = this_cpu_ptr(&bpf_bprintf_bufs[nest_level - 1]);
@@ -792,6 +794,7 @@ void bpf_put_buffers(void)
 	if (WARN_ON_ONCE(this_cpu_read(bpf_bprintf_nest_level) == 0))
 		return;
 	this_cpu_dec(bpf_bprintf_nest_level);
+	preempt_enable();
 }
 
 void bpf_bprintf_cleanup(struct bpf_bprintf_data *data)
--
2.50.1
* Re: [PATCH bpf-next v2] bpf: use preempt_disable/enable() to protect bpf_bprintf_buffers nesting
From: Yonghong Song @ 2025-11-11 20:22 UTC (permalink / raw)
To: Sahil Chandna, ast, daniel, andrii, martin.lau, eddyz87, song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, bigeasy, bpf
Cc: syzbot+b0cff308140f79a9c4cb
On 11/11/25 9:06 AM, Sahil Chandna wrote:
> The bpf_bprintf_prepare() and related helpers (bpf_try_get_buffers() /
> bpf_put_buffers()) rely on a per-CPU counter bpf_bprintf_nest_level to
> manage nested buffer usage. However, when invoked from different contexts
> (process, softirq, NMI), the nesting counter can become inconsistent if
> task migration occurs between CPUs during these operations. This can
> result in warnings such as:
>
> WARNING: CPU: 1 PID: 6145 at kernel/bpf/helpers.c:781 bpf_try_get_buffers kernel/bpf/helpers.c:781 [inline]
> WARNING: CPU: 1 PID: 6145 at kernel/bpf/helpers.c:781 bpf_bprintf_prepare+0x12cf/0x13a0 kernel/bpf/helpers.c:834
>
> Having only migrate_disable is insufficient here to prevent nesting,
> hence add preempt_disable()/enable() around buffer acquisition and release.
>
> Reported-by: syzbot+b0cff308140f79a9c4cb@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/68f6a4c8.050a0220.1be48.0011.GAE@google.com/
> Fixes: 4223bf833c849 ("bpf: Remove preempt_disable in bpf_try_get_buffers")
> Suggested-by: Yonghong Song <yonghong.song@linux.dev>
> Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
* Re: [PATCH bpf-next v2] bpf: use preempt_disable/enable() to protect bpf_bprintf_buffers nesting
From: Sebastian Andrzej Siewior @ 2025-11-12 8:23 UTC (permalink / raw)
To: Sahil Chandna
Cc: yonghong.song, ast, daniel, andrii, martin.lau, eddyz87, song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, bpf,
syzbot+b0cff308140f79a9c4cb
On 2025-11-11 22:36:28 [+0530], Sahil Chandna wrote:
> The bpf_bprintf_prepare() and related helpers (bpf_try_get_buffers() /
> bpf_put_buffers()) rely on a per-CPU counter bpf_bprintf_nest_level to
> manage nested buffer usage. However, when invoked from different contexts
> (process, softirq, NMI), the nesting counter can become inconsistent if
> task migration occurs between CPUs during these operations. This can
> result in warnings such as:
Not to nitpick, but the different contexts as described above are fine. If
a preemptible task is preempted by a softirq followed by an NMI, nothing bad
will happen; it will work as intended. preempt_disable() has no
effect on being preempted by a softirq, hardirq or NMI.
The problem is that a task can be preempted by another task. The BPF
program is limited to a single CPU (so the counter will always be
incremented on the same CPU), but you can still have multiple tasks
on the same CPU asking for / returning a buffer.
The first task uses slot 0, the second one slot 1. This still works (by
some definition of "works", since task 1 can return the buffer before task 2
does and ask for a new one, which will be slot 2, which is still in use).
However, if you add two additional tasks to the mix, you will exceed
the available slots, which leads to the warning below.
> WARNING: CPU: 1 PID: 6145 at kernel/bpf/helpers.c:781 bpf_try_get_buffers kernel/bpf/helpers.c:781 [inline]
> WARNING: CPU: 1 PID: 6145 at kernel/bpf/helpers.c:781 bpf_bprintf_prepare+0x12cf/0x13a0 kernel/bpf/helpers.c:834
>
> Having only migrate_disable is insufficient here to prevent nesting,
> hence add preempt_disable()/enable() around buffer acquisition and release.
I prepared the following while adding the local locks, which we agreed not
to do:
bpf_try_get_buffers() returns one of multiple per-CPU buffers. The
"nesting" level is counted based on invocation to decide which buffer to
return. This relies on disabled preemption in order to not get preempted
while a buffer is returned and to have the buffers returned in the
same order as they were requested. Without disabled preemption,
multiple tasks can be preempted while a buffer is acquired,
eventually exceeding the limit (MAX_BPRINTF_NEST_LEVEL) and
triggering the warning.
Multiple buffers can be acquired on the same CPU due to users in
different execution contexts (task and IRQ). It can also happen for a
single context if, after a buffer has been returned, another
buffer needs to be returned due to an event (such as invoking
spin_lock()).
You can take it or leave it. Here is a
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
since the change itself is okay and does not lead to any problems as far
as I am aware. I just think the description is not accurate and does not
describe the problem the syzkaller reproducer triggered. I leave it to the
maintainer here…
Sebastian
* Re: [PATCH bpf-next v2] bpf: use preempt_disable/enable() to protect bpf_bprintf_buffers nesting
From: patchwork-bot+netdevbpf @ 2025-11-14 21:10 UTC (permalink / raw)
To: Sahil Chandna
Cc: yonghong.song, ast, daniel, andrii, martin.lau, eddyz87, song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, bigeasy, bpf,
syzbot+b0cff308140f79a9c4cb
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Tue, 11 Nov 2025 22:36:28 +0530 you wrote:
> The bpf_bprintf_prepare() and related helpers (bpf_try_get_buffers() /
> bpf_put_buffers()) rely on a per-CPU counter bpf_bprintf_nest_level to
> manage nested buffer usage. However, when invoked from different contexts
> (process, softirq, NMI), the nesting counter can become inconsistent if
> task migration occurs between CPUs during these operations. This can
> result in warnings such as:
>
> [...]
Here is the summary with links:
- [bpf-next,v2] bpf: use preempt_disable/enable() to protect bpf_bprintf_buffers nesting
https://git.kernel.org/bpf/bpf-next/c/c1da3df7191f
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html