* [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock
@ 2025-04-11 10:17 Kumar Kartikeya Dwivedi
2025-04-11 17:04 ` Kumar Kartikeya Dwivedi
2025-04-11 17:37 ` [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock patchwork-bot+netdevbpf
0 siblings, 2 replies; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2025-04-11 10:17 UTC (permalink / raw)
To: bpf
Cc: syzbot+850aaf14624dc0c6d366, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kkd,
kernel-team
Convert the raw spinlock used by BPF ringbuf to rqspinlock. Currently,
we have an open syzbot report of a potential deadlock. In addition, the
ringbuf can fail to reserve spuriously under contention from NMI
context.
It is attractive to enable unconstrained usage (incl. NMIs) while
ensuring no deadlocks manifest at runtime, so perform the conversion
to rqspinlock to achieve this.
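The essence of the conversion is the changed calling convention:
unlike raw_spin_lock_irqsave(), raw_res_spin_lock_irqsave() can fail
(e.g. when it detects a deadlock or times out), so the reserve path
must check the return value and bail out. A minimal userspace sketch
of that pattern follows — all mock_* names are hypothetical stand-ins
for illustration, not the kernel API:

```c
#include <stddef.h>
#include <errno.h>

/* Hypothetical userspace model of a resilient (failable) spinlock. */
struct mock_rqspinlock {
	int held;	/* 0 = free, 1 = held by current context */
};

/* Returns 0 on success, -EDEADLK when acquisition would self-deadlock,
 * mimicking the error-returning acquire of the resilient lock. */
static int mock_res_spin_lock(struct mock_rqspinlock *lock)
{
	if (lock->held)
		return -EDEADLK;	/* AA deadlock detected: bail out */
	lock->held = 1;
	return 0;
}

static void mock_res_spin_unlock(struct mock_rqspinlock *lock)
{
	lock->held = 0;
}

/* Mirrors the reserve path: a failed lock attempt returns NULL
 * instead of spinning forever. */
static void *mock_reserve(struct mock_rqspinlock *lock)
{
	static char sample[64];

	if (mock_res_spin_lock(lock))
		return NULL;	/* could not take the lock, drop the sample */
	/* ... write header, bump producer position ... */
	mock_res_spin_unlock(lock);
	return sample;
}
```

The same shape appears in the diff below: the in_nmi() trylock special
case disappears because every caller now handles acquisition failure.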
This change was benchmarked for BPF ringbuf's multi-producer contention
case on an Intel Sapphire Rapids server, with hyperthreading disabled
and the performance governor turned on. 5 warm-up runs were done for each
case before obtaining the results.
Before (raw_spinlock_t):
Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1 11.440 ± 0.019M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 2 2.706 ± 0.010M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 3 3.130 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 4 2.472 ± 0.003M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 8 2.352 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 12 2.813 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 16 1.988 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 20 2.245 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 24 2.148 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 28 2.190 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 32 2.490 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 36 2.180 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 40 2.201 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 44 2.226 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 48 2.164 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 52 1.874 ± 0.001M/s (drops 0.000 ± 0.000M/s)
After (rqspinlock_t):
Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1 11.078 ± 0.019M/s (drops 0.000 ± 0.000M/s) (-3.16%)
rb-libbpf nr_prod 2 2.801 ± 0.014M/s (drops 0.000 ± 0.000M/s) (3.51%)
rb-libbpf nr_prod 3 3.454 ± 0.005M/s (drops 0.000 ± 0.000M/s) (10.35%)
rb-libbpf nr_prod 4 2.567 ± 0.002M/s (drops 0.000 ± 0.000M/s) (3.84%)
rb-libbpf nr_prod 8 2.468 ± 0.001M/s (drops 0.000 ± 0.000M/s) (4.93%)
rb-libbpf nr_prod 12 2.510 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-10.77%)
rb-libbpf nr_prod 16 2.075 ± 0.001M/s (drops 0.000 ± 0.000M/s) (4.38%)
rb-libbpf nr_prod 20 2.640 ± 0.001M/s (drops 0.000 ± 0.000M/s) (17.59%)
rb-libbpf nr_prod 24 2.092 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-2.61%)
rb-libbpf nr_prod 28 2.426 ± 0.005M/s (drops 0.000 ± 0.000M/s) (10.78%)
rb-libbpf nr_prod 32 2.331 ± 0.004M/s (drops 0.000 ± 0.000M/s) (-6.39%)
rb-libbpf nr_prod 36 2.306 ± 0.003M/s (drops 0.000 ± 0.000M/s) (5.78%)
rb-libbpf nr_prod 40 2.178 ± 0.002M/s (drops 0.000 ± 0.000M/s) (-1.04%)
rb-libbpf nr_prod 44 2.293 ± 0.001M/s (drops 0.000 ± 0.000M/s) (3.01%)
rb-libbpf nr_prod 48 2.022 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-6.56%)
rb-libbpf nr_prod 52 1.809 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-3.47%)
There's a fair amount of noise in the benchmark, with numbers on reruns
going up and down by 10%, so all changes are in the range of this
disturbance, and we see no major regressions.
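For reference, the percentage annotations in the "After" table are the
relative change against the raw_spinlock_t numbers; a small helper
(hypothetical, not part of the patch) reproduces them:

```c
#include <math.h>

/* Relative change of "after" vs "before" throughput, in percent,
 * matching the annotations above, e.g. 11.440 -> 11.078 is -3.16%. */
static double pct_delta(double before, double after)
{
	return (after - before) / before * 100.0;
}
```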
Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/0000000000004aa700061379547e@google.com
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
kernel/bpf/ringbuf.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 1499d8caa9a3..719d73299397 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -11,6 +11,7 @@
#include <linux/kmemleak.h>
#include <uapi/linux/btf.h>
#include <linux/btf_ids.h>
+#include <asm/rqspinlock.h>
#define RINGBUF_CREATE_FLAG_MASK (BPF_F_NUMA_NODE)
@@ -29,7 +30,7 @@ struct bpf_ringbuf {
u64 mask;
struct page **pages;
int nr_pages;
- raw_spinlock_t spinlock ____cacheline_aligned_in_smp;
+ rqspinlock_t spinlock ____cacheline_aligned_in_smp;
/* For user-space producer ring buffers, an atomic_t busy bit is used
* to synchronize access to the ring buffers in the kernel, rather than
* the spinlock that is used for kernel-producer ring buffers. This is
@@ -173,7 +174,7 @@ static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
if (!rb)
return NULL;
- raw_spin_lock_init(&rb->spinlock);
+ raw_res_spin_lock_init(&rb->spinlock);
atomic_set(&rb->busy, 0);
init_waitqueue_head(&rb->waitq);
init_irq_work(&rb->work, bpf_ringbuf_notify);
@@ -416,12 +417,8 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
cons_pos = smp_load_acquire(&rb->consumer_pos);
- if (in_nmi()) {
- if (!raw_spin_trylock_irqsave(&rb->spinlock, flags))
- return NULL;
- } else {
- raw_spin_lock_irqsave(&rb->spinlock, flags);
- }
+ if (raw_res_spin_lock_irqsave(&rb->spinlock, flags))
+ return NULL;
pend_pos = rb->pending_pos;
prod_pos = rb->producer_pos;
@@ -446,7 +443,7 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
*/
if (new_prod_pos - cons_pos > rb->mask ||
new_prod_pos - pend_pos > rb->mask) {
- raw_spin_unlock_irqrestore(&rb->spinlock, flags);
+ raw_res_spin_unlock_irqrestore(&rb->spinlock, flags);
return NULL;
}
@@ -458,7 +455,7 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
/* pairs with consumer's smp_load_acquire() */
smp_store_release(&rb->producer_pos, new_prod_pos);
- raw_spin_unlock_irqrestore(&rb->spinlock, flags);
+ raw_res_spin_unlock_irqrestore(&rb->spinlock, flags);
return (void *)hdr + BPF_RINGBUF_HDR_SZ;
}
--
2.47.1
* Re: [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock
2025-04-11 10:17 [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock Kumar Kartikeya Dwivedi
@ 2025-04-11 17:04 ` Kumar Kartikeya Dwivedi
2025-04-11 17:19 ` [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
2025-04-11 17:37 ` [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock patchwork-bot+netdevbpf
1 sibling, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2025-04-11 17:04 UTC (permalink / raw)
To: bpf
Cc: syzbot+850aaf14624dc0c6d366, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman, kkd,
kernel-team
On Fri, 11 Apr 2025 at 12:18, Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> Convert the raw spinlock used by BPF ringbuf to rqspinlock. Currently,
> we have an open syzbot report of a potential deadlock. In addition, the
> ringbuf can fail to reserve spuriously under contention from NMI
> context.
>
> It is attractive to enable unconstrained usage (incl. NMIs) while
> ensuring no deadlocks manifest at runtime, so perform the conversion
> to rqspinlock to achieve this.
>
> This change was benchmarked for BPF ringbuf's multi-producer contention
> case on an Intel Sapphire Rapids server, with hyperthreading disabled
> and performance governor turned on. 5 warm up runs were done for each
> case before obtaining the results.
>
> Before (raw_spinlock_t):
>
> Ringbuf, multi-producer contention
> ==================================
> rb-libbpf nr_prod 1 11.440 ± 0.019M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 2 2.706 ± 0.010M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 3 3.130 ± 0.004M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 4 2.472 ± 0.003M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 8 2.352 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 12 2.813 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 16 1.988 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 20 2.245 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 24 2.148 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 28 2.190 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 32 2.490 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 36 2.180 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 40 2.201 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 44 2.226 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 48 2.164 ± 0.001M/s (drops 0.000 ± 0.000M/s)
> rb-libbpf nr_prod 52 1.874 ± 0.001M/s (drops 0.000 ± 0.000M/s)
>
> After (rqspinlock_t):
>
> Ringbuf, multi-producer contention
> ==================================
> rb-libbpf nr_prod 1 11.078 ± 0.019M/s (drops 0.000 ± 0.000M/s) (-3.16%)
> rb-libbpf nr_prod 2 2.801 ± 0.014M/s (drops 0.000 ± 0.000M/s) (3.51%)
> rb-libbpf nr_prod 3 3.454 ± 0.005M/s (drops 0.000 ± 0.000M/s) (10.35%)
> rb-libbpf nr_prod 4 2.567 ± 0.002M/s (drops 0.000 ± 0.000M/s) (3.84%)
> rb-libbpf nr_prod 8 2.468 ± 0.001M/s (drops 0.000 ± 0.000M/s) (4.93%)
> rb-libbpf nr_prod 12 2.510 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-10.77%)
> rb-libbpf nr_prod 16 2.075 ± 0.001M/s (drops 0.000 ± 0.000M/s) (4.38%)
> rb-libbpf nr_prod 20 2.640 ± 0.001M/s (drops 0.000 ± 0.000M/s) (17.59%)
> rb-libbpf nr_prod 24 2.092 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-2.61%)
> rb-libbpf nr_prod 28 2.426 ± 0.005M/s (drops 0.000 ± 0.000M/s) (10.78%)
> rb-libbpf nr_prod 32 2.331 ± 0.004M/s (drops 0.000 ± 0.000M/s) (-6.39%)
> rb-libbpf nr_prod 36 2.306 ± 0.003M/s (drops 0.000 ± 0.000M/s) (5.78%)
> rb-libbpf nr_prod 40 2.178 ± 0.002M/s (drops 0.000 ± 0.000M/s) (-1.04%)
> rb-libbpf nr_prod 44 2.293 ± 0.001M/s (drops 0.000 ± 0.000M/s) (3.01%)
> rb-libbpf nr_prod 48 2.022 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-6.56%)
> rb-libbpf nr_prod 52 1.809 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-3.47%)
>
> There's a fair amount of noise in the benchmark, with numbers on reruns
> going up and down by 10%, so all changes are in the range of this
> disturbance, and we see no major regressions.
>
> Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/0000000000004aa700061379547e@google.com
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
#syz test
* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2025-04-11 17:04 ` Kumar Kartikeya Dwivedi
@ 2025-04-11 17:19 ` syzbot
2025-04-11 17:31 ` Kumar Kartikeya Dwivedi
0 siblings, 1 reply; 14+ messages in thread
From: syzbot @ 2025-04-11 17:19 UTC (permalink / raw)
To: andrii, ast, bpf, daniel, eddyz87, kernel-team, kkd, linux-kernel,
martin.lau, memxor, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __bpf_ringbuf_reserve
============================================
WARNING: possible recursive locking detected
6.15.0-rc1-syzkaller-ge618ee89561b #0 Not tainted
--------------------------------------------
kworker/2:3/6044 is trying to acquire lock:
ffffc90006f360d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
but task is already holding lock:
ffffc900070410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&rb->spinlock);
lock(&rb->spinlock);
*** DEADLOCK ***
May be due to missing lock nesting notation
6 locks held by kworker/2:3/6044:
#0: ffff88801b48ad48 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
#1: ffffc90004c1fd18 ((work_completion)(&(&ssp->srcu_sup->work)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
#2: ffff88801ea8f158 (&ssp->srcu_sup->srcu_gp_mutex){+.+.}-{4:4}, at: srcu_advance_state kernel/rcu/srcutree.c:1701 [inline]
#2: ffff88801ea8f158 (&ssp->srcu_sup->srcu_gp_mutex){+.+.}-{4:4}, at: process_srcu+0x73/0x1920 kernel/rcu/srcutree.c:1861
#3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2362 [inline]
#3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: bpf_trace_run2+0x1b6/0x590 kernel/trace/bpf_trace.c:2404
#4: ffffc900070410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
#5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2362 [inline]
#5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: bpf_trace_run2+0x1b6/0x590 kernel/trace/bpf_trace.c:2404
stack backtrace:
CPU: 2 UID: 0 PID: 6044 Comm: kworker/2:3 Not tainted 6.15.0-rc1-syzkaller-ge618ee89561b #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: rcu_gp process_srcu
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
print_deadlock_bug+0x1e9/0x240 kernel/locking/lockdep.c:3042
check_deadlock kernel/locking/lockdep.c:3094 [inline]
validate_chain kernel/locking/lockdep.c:3896 [inline]
__lock_acquire+0xff7/0x1ba0 kernel/locking/lockdep.c:5235
lock_acquire kernel/locking/lockdep.c:5866 [inline]
lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
bpf_ringbuf_reserve+0x57/0x90 kernel/bpf/ringbuf.c:466
bpf_prog_385141c453c15099+0x36/0x5d
bpf_dispatcher_nop_func include/linux/bpf.h:1316 [inline]
__bpf_prog_run include/linux/filter.h:718 [inline]
bpf_prog_run include/linux/filter.h:725 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2363 [inline]
bpf_trace_run2+0x230/0x590 kernel/trace/bpf_trace.c:2404
__bpf_trace_contention_begin+0xc9/0x110 include/trace/events/lock.h:95
__traceiter_contention_begin+0x5a/0xa0 include/trace/events/lock.h:95
__preempt_count_dec_and_test arch/x86/include/asm/preempt.h:95 [inline]
class_preempt_notrace_destructor include/linux/preempt.h:482 [inline]
__do_trace_contention_begin include/trace/events/lock.h:95 [inline]
trace_contention_begin.constprop.0+0xde/0x160 include/trace/events/lock.h:95
__pv_queued_spin_lock_slowpath+0x109/0xcf0 kernel/locking/qspinlock.c:219
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:572 [inline]
queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x20e/0x2b0 kernel/locking/spinlock_debug.c:116
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0x42/0x60 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
bpf_ringbuf_reserve+0x57/0x90 kernel/bpf/ringbuf.c:466
bpf_prog_385141c453c15099+0x36/0x5d
bpf_dispatcher_nop_func include/linux/bpf.h:1316 [inline]
__bpf_prog_run include/linux/filter.h:718 [inline]
bpf_prog_run include/linux/filter.h:725 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2363 [inline]
bpf_trace_run2+0x230/0x590 kernel/trace/bpf_trace.c:2404
__bpf_trace_contention_begin+0xc9/0x110 include/trace/events/lock.h:95
__traceiter_contention_begin+0x5a/0xa0 include/trace/events/lock.h:95
__do_trace_contention_begin include/trace/events/lock.h:95 [inline]
trace_contention_begin+0xc1/0x130 include/trace/events/lock.h:95
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x1a6/0xb90 kernel/locking/mutex.c:746
srcu_advance_state kernel/rcu/srcutree.c:1701 [inline]
process_srcu+0x73/0x1920 kernel/rcu/srcutree.c:1861
process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
process_scheduled_works kernel/workqueue.c:3319 [inline]
worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
kthread+0x3c2/0x780 kernel/kthread.c:464
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Tested on:
commit: e618ee89 Merge tag 'spi-fix-v6.15-rc1' of git://git.ke..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10461c04580000
kernel config: https://syzkaller.appspot.com/x/.config?x=36c5de4d99134dda
dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
Note: no patches were applied.
* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2025-04-11 17:19 ` [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
@ 2025-04-11 17:31 ` Kumar Kartikeya Dwivedi
2025-04-11 18:26 ` syzbot
0 siblings, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2025-04-11 17:31 UTC (permalink / raw)
To: syzbot
Cc: andrii, ast, bpf, daniel, eddyz87, kernel-team, kkd, linux-kernel,
martin.lau, syzkaller-bugs
On Fri, 11 Apr 2025 at 19:19, syzbot
<syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> possible deadlock in __bpf_ringbuf_reserve
>
> ============================================
> WARNING: possible recursive locking detected
> 6.15.0-rc1-syzkaller-ge618ee89561b #0 Not tainted
> --------------------------------------------
> kworker/2:3/6044 is trying to acquire lock:
> ffffc90006f360d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
>
> but task is already holding lock:
> ffffc900070410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(&rb->spinlock);
> lock(&rb->spinlock);
>
> *** DEADLOCK ***
>
> May be due to missing lock nesting notation
>
> 6 locks held by kworker/2:3/6044:
> #0: ffff88801b48ad48 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
> #1: ffffc90004c1fd18 ((work_completion)(&(&ssp->srcu_sup->work)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
> #2: ffff88801ea8f158 (&ssp->srcu_sup->srcu_gp_mutex){+.+.}-{4:4}, at: srcu_advance_state kernel/rcu/srcutree.c:1701 [inline]
> #2: ffff88801ea8f158 (&ssp->srcu_sup->srcu_gp_mutex){+.+.}-{4:4}, at: process_srcu+0x73/0x1920 kernel/rcu/srcutree.c:1861
> #3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
> #3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2362 [inline]
> #3: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: bpf_trace_run2+0x1b6/0x590 kernel/trace/bpf_trace.c:2404
> #4: ffffc900070410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
> #5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
> #5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
> #5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2362 [inline]
> #5: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: bpf_trace_run2+0x1b6/0x590 kernel/trace/bpf_trace.c:2404
>
> stack backtrace:
> CPU: 2 UID: 0 PID: 6044 Comm: kworker/2:3 Not tainted 6.15.0-rc1-syzkaller-ge618ee89561b #0 PREEMPT(full)
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
> Workqueue: rcu_gp process_srcu
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:94 [inline]
> dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
> print_deadlock_bug+0x1e9/0x240 kernel/locking/lockdep.c:3042
> check_deadlock kernel/locking/lockdep.c:3094 [inline]
> validate_chain kernel/locking/lockdep.c:3896 [inline]
> __lock_acquire+0xff7/0x1ba0 kernel/locking/lockdep.c:5235
> lock_acquire kernel/locking/lockdep.c:5866 [inline]
> lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> _raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
> __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
> ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
> bpf_ringbuf_reserve+0x57/0x90 kernel/bpf/ringbuf.c:466
> bpf_prog_385141c453c15099+0x36/0x5d
> bpf_dispatcher_nop_func include/linux/bpf.h:1316 [inline]
> __bpf_prog_run include/linux/filter.h:718 [inline]
> bpf_prog_run include/linux/filter.h:725 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2363 [inline]
> bpf_trace_run2+0x230/0x590 kernel/trace/bpf_trace.c:2404
> __bpf_trace_contention_begin+0xc9/0x110 include/trace/events/lock.h:95
> __traceiter_contention_begin+0x5a/0xa0 include/trace/events/lock.h:95
> __preempt_count_dec_and_test arch/x86/include/asm/preempt.h:95 [inline]
> class_preempt_notrace_destructor include/linux/preempt.h:482 [inline]
> __do_trace_contention_begin include/trace/events/lock.h:95 [inline]
> trace_contention_begin.constprop.0+0xde/0x160 include/trace/events/lock.h:95
> __pv_queued_spin_lock_slowpath+0x109/0xcf0 kernel/locking/qspinlock.c:219
> pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:572 [inline]
> queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
> queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
> do_raw_spin_lock+0x20e/0x2b0 kernel/locking/spinlock_debug.c:116
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
> _raw_spin_lock_irqsave+0x42/0x60 kernel/locking/spinlock.c:162
> __bpf_ringbuf_reserve+0x36e/0x4b0 kernel/bpf/ringbuf.c:423
> ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
> bpf_ringbuf_reserve+0x57/0x90 kernel/bpf/ringbuf.c:466
> bpf_prog_385141c453c15099+0x36/0x5d
> bpf_dispatcher_nop_func include/linux/bpf.h:1316 [inline]
> __bpf_prog_run include/linux/filter.h:718 [inline]
> bpf_prog_run include/linux/filter.h:725 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2363 [inline]
> bpf_trace_run2+0x230/0x590 kernel/trace/bpf_trace.c:2404
> __bpf_trace_contention_begin+0xc9/0x110 include/trace/events/lock.h:95
> __traceiter_contention_begin+0x5a/0xa0 include/trace/events/lock.h:95
> __do_trace_contention_begin include/trace/events/lock.h:95 [inline]
> trace_contention_begin+0xc1/0x130 include/trace/events/lock.h:95
> __mutex_lock_common kernel/locking/mutex.c:603 [inline]
> __mutex_lock+0x1a6/0xb90 kernel/locking/mutex.c:746
> srcu_advance_state kernel/rcu/srcutree.c:1701 [inline]
> process_srcu+0x73/0x1920 kernel/rcu/srcutree.c:1861
> process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
> process_scheduled_works kernel/workqueue.c:3319 [inline]
> worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
> kthread+0x3c2/0x780 kernel/kthread.c:464
> ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
> </TASK>
>
>
> Tested on:
>
> commit: e618ee89 Merge tag 'spi-fix-v6.15-rc1' of git://git.ke..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=10461c04580000
> kernel config: https://syzkaller.appspot.com/x/.config?x=36c5de4d99134dda
> dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
>
> Note: no patches were applied.
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git master
* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2025-04-11 17:31 ` Kumar Kartikeya Dwivedi
@ 2025-04-11 18:26 ` syzbot
0 siblings, 0 replies; 14+ messages in thread
From: syzbot @ 2025-04-11 18:26 UTC (permalink / raw)
To: andrii, ast, bpf, daniel, eddyz87, kernel-team, kkd, linux-kernel,
martin.lau, memxor, syzkaller-bugs
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
unregister_netdevice: waiting for DEV to become free
unregister_netdevice: waiting for batadv0 to become free. Usage count = 3
Tested on:
commit: a650d389 bpf: Convert ringbuf map to rqspinlock
git tree: bpf
console output: https://syzkaller.appspot.com/x/log.txt?x=17928870580000
kernel config: https://syzkaller.appspot.com/x/.config?x=ea2b297a0891c87e
dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
Note: no patches were applied.
* Re: [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock
2025-04-11 10:17 [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock Kumar Kartikeya Dwivedi
2025-04-11 17:04 ` Kumar Kartikeya Dwivedi
@ 2025-04-11 17:37 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 14+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-04-11 17:37 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: bpf, syzbot+850aaf14624dc0c6d366, ast, andrii, daniel, martin.lau,
eddyz87, kkd, kernel-team
Hello:
This patch was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Fri, 11 Apr 2025 03:17:59 -0700 you wrote:
> Convert the raw spinlock used by BPF ringbuf to rqspinlock. Currently,
> we have an open syzbot report of a potential deadlock. In addition, the
> ringbuf can fail to reserve spuriously under contention from NMI
> context.
>
> It is attractive to enable unconstrained usage (incl. NMIs) while
> ensuring no deadlocks manifest at runtime, so perform the conversion
> to rqspinlock to achieve this.
>
> [...]
Here is the summary with links:
- [bpf,v1] bpf: Convert ringbuf.c to rqspinlock
https://git.kernel.org/bpf/bpf/c/0b51f0ac3dc5
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
@ 2024-03-12 16:41 syzbot
2024-03-12 21:02 ` Jiri Olsa
2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
0 siblings, 2 replies; 14+ messages in thread
From: syzbot @ 2024-03-12 16:41 UTC (permalink / raw)
To: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
yonghong.song
Hello,
syzbot found the following issue on:
HEAD commit: df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
git tree: bpf
console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
kernel config: https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
============================================
WARNING: possible recursive locking detected
6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
--------------------------------------------
strace-static-x/5063 is trying to acquire lock:
ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
but task is already holding lock:
ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&rb->spinlock);
lock(&rb->spinlock);
*** DEADLOCK ***
May be due to missing lock nesting notation
4 locks held by strace-static-x/5063:
#0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
#0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
#1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
#1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
#1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
#1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
#2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
#3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
#3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
#3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
#3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
stack backtrace:
CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
check_deadlock kernel/locking/lockdep.c:3062 [inline]
validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
__lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
bpf_prog_9efe54833449f08e+0x2d/0x47
bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
__bpf_prog_run include/linux/filter.h:651 [inline]
bpf_prog_run include/linux/filter.h:658 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
__traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xf6/0x120 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0x939/0xc60 kernel/locking/qspinlock.c:560
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x271/0x370 kernel/locking/spinlock_debug.c:116
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
__bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
bpf_prog_9efe54833449f08e+0x2d/0x47
bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
__bpf_prog_run include/linux/filter.h:651 [inline]
bpf_prog_run include/linux/filter.h:658 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
__traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xd7/0x100 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:617 [inline]
__mutex_lock+0x2e4/0xd70 kernel/locking/mutex.c:752
__pipe_lock fs/pipe.c:103 [inline]
pipe_write+0x1cc/0x1a40 fs/pipe.c:465
call_write_iter include/linux/fs.h:2087 [inline]
new_sync_write fs/read_write.c:497 [inline]
vfs_write+0xa81/0xcb0 fs/read_write.c:590
ksys_write+0x1a0/0x2c0 fs/read_write.c:643
do_syscall_64+0xf9/0x240
entry_SYSCALL_64_after_hwframe+0x6f/0x77
RIP: 0033:0x4e8593
Code: c7 c2 a8 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 48 89 54 24 18
RSP: 002b:00007ffeda768928 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2024-03-12 16:41 [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
@ 2024-03-12 21:02 ` Jiri Olsa
2024-03-12 21:18 ` Jiri Olsa
2024-03-13 12:13 ` Hillf Danton
2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
1 sibling, 2 replies; 14+ messages in thread
From: Jiri Olsa @ 2024-03-12 21:02 UTC (permalink / raw)
To: syzbot
Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, kpsingh,
linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
yonghong.song
On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> git tree: bpf
> console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> kernel config: https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
>
> ============================================
> WARNING: possible recursive locking detected
> 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> --------------------------------------------
> strace-static-x/5063 is trying to acquire lock:
> ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
>
> but task is already holding lock:
> ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(&rb->spinlock);
> lock(&rb->spinlock);
>
> *** DEADLOCK ***
>
> May be due to missing lock nesting notation
>
> 4 locks held by strace-static-x/5063:
> #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
>
> stack backtrace:
> CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> check_deadlock kernel/locking/lockdep.c:3062 [inline]
> validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> bpf_prog_9efe54833449f08e+0x2d/0x47
> bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> __bpf_prog_run include/linux/filter.h:651 [inline]
> bpf_prog_run include/linux/filter.h:658 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
hum, scratching my head how this could have passed through the prog->active check,
will try to reproduce
jirka
> bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> trace_contention_end+0xf6/0x120 include/trace/events/lock.h:122
> __pv_queued_spin_lock_slowpath+0x939/0xc60 kernel/locking/qspinlock.c:560
> pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
> queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
> queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
> do_raw_spin_lock+0x271/0x370 kernel/locking/spinlock_debug.c:116
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
> _raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
> __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> bpf_prog_9efe54833449f08e+0x2d/0x47
> bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> __bpf_prog_run include/linux/filter.h:651 [inline]
> bpf_prog_run include/linux/filter.h:658 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> trace_contention_end+0xd7/0x100 include/trace/events/lock.h:122
> __mutex_lock_common kernel/locking/mutex.c:617 [inline]
> __mutex_lock+0x2e4/0xd70 kernel/locking/mutex.c:752
> __pipe_lock fs/pipe.c:103 [inline]
> pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> call_write_iter include/linux/fs.h:2087 [inline]
> new_sync_write fs/read_write.c:497 [inline]
> vfs_write+0xa81/0xcb0 fs/read_write.c:590
> ksys_write+0x1a0/0x2c0 fs/read_write.c:643
> do_syscall_64+0xf9/0x240
> entry_SYSCALL_64_after_hwframe+0x6f/0x77
> RIP: 0033:0x4e8593
> Code: c7 c2 a8 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 48 89 54 24 18
> RSP: 002b:00007ffeda768928 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
> RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
> RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
> R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
> </TASK>
^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2024-03-12 21:02 ` Jiri Olsa
@ 2024-03-12 21:18 ` Jiri Olsa
2024-03-12 22:37 ` Andrii Nakryiko
2024-03-13 12:13 ` Hillf Danton
1 sibling, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2024-03-12 21:18 UTC (permalink / raw)
To: Jiri Olsa
Cc: syzbot, andrii, ast, bpf, daniel, haoluo, john.fastabend, kpsingh,
linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
yonghong.song
On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit: df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > git tree: bpf
> > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> >
> > Downloadable assets:
> > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> >
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> >
> > ============================================
> > WARNING: possible recursive locking detected
> > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > --------------------------------------------
> > strace-static-x/5063 is trying to acquire lock:
> > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> >
> > but task is already holding lock:
> > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> >
> > other info that might help us debug this:
> > Possible unsafe locking scenario:
> >
> > CPU0
> > ----
> > lock(&rb->spinlock);
> > lock(&rb->spinlock);
> >
> > *** DEADLOCK ***
> >
> > May be due to missing lock nesting notation
> >
> > 4 locks held by strace-static-x/5063:
> > #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> > #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> >
> > stack backtrace:
> > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > Call Trace:
> > <TASK>
> > __dump_stack lib/dump_stack.c:88 [inline]
> > dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> > check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> > __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> > lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> > __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> > _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> > __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > bpf_prog_9efe54833449f08e+0x2d/0x47
> > bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > __bpf_prog_run include/linux/filter.h:651 [inline]
> > bpf_prog_run include/linux/filter.h:658 [inline]
> > __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
>
> hum, scratching my head how this could have passed through the prog->active check,
nah could be 2 instances of the same program, got confused by the tag
trace_contention_end
__bpf_trace_run(prog1)
bpf_prog_9efe54833449f08e
bpf_ringbuf_reserve
trace_contention_end
__bpf_trace_run(prog1) prog1->active check fails
__bpf_trace_run(prog2)
bpf_prog_9efe54833449f08e
bpf_ringbuf_reserve
lockup
we had a similar issue in [1] and we replaced the lock with extra buffers,
not sure that's possible in bpf_ringbuf_reserve
jirka
[1] e2bb9e01d589 bpf: Remove trace_printk_lock
> will try to reproduce
>
> jirka
>
> > bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> > __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> > trace_contention_end+0xf6/0x120 include/trace/events/lock.h:122
> > __pv_queued_spin_lock_slowpath+0x939/0xc60 kernel/locking/qspinlock.c:560
> > pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
> > queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
> > queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
> > do_raw_spin_lock+0x271/0x370 kernel/locking/spinlock_debug.c:116
> > __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
> > _raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
> > __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > bpf_prog_9efe54833449f08e+0x2d/0x47
> > bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > __bpf_prog_run include/linux/filter.h:651 [inline]
> > bpf_prog_run include/linux/filter.h:658 [inline]
> > __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> > bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> > __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> > trace_contention_end+0xd7/0x100 include/trace/events/lock.h:122
> > __mutex_lock_common kernel/locking/mutex.c:617 [inline]
> > __mutex_lock+0x2e4/0xd70 kernel/locking/mutex.c:752
> > __pipe_lock fs/pipe.c:103 [inline]
> > pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > call_write_iter include/linux/fs.h:2087 [inline]
> > new_sync_write fs/read_write.c:497 [inline]
> > vfs_write+0xa81/0xcb0 fs/read_write.c:590
> > ksys_write+0x1a0/0x2c0 fs/read_write.c:643
> > do_syscall_64+0xf9/0x240
> > entry_SYSCALL_64_after_hwframe+0x6f/0x77
> > RIP: 0033:0x4e8593
> > Code: c7 c2 a8 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 48 89 54 24 18
> > RSP: 002b:00007ffeda768928 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> > RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
> > RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
> > RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
> > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
> > R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
> > </TASK>
^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2024-03-12 21:18 ` Jiri Olsa
@ 2024-03-12 22:37 ` Andrii Nakryiko
2024-03-13 9:04 ` Jiri Olsa
0 siblings, 1 reply; 14+ messages in thread
From: Andrii Nakryiko @ 2024-03-12 22:37 UTC (permalink / raw)
To: Jiri Olsa
Cc: syzbot, andrii, ast, bpf, daniel, haoluo, john.fastabend, kpsingh,
linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
yonghong.song
On Tue, Mar 12, 2024 at 2:18 PM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> > On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit: df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > > git tree: bpf
> > > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > > kernel config: https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > > compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> > >
> > > Downloadable assets:
> > > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> > >
> > > ============================================
> > > WARNING: possible recursive locking detected
> > > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > > --------------------------------------------
> > > strace-static-x/5063 is trying to acquire lock:
> > > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >
> > > but task is already holding lock:
> > > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >
> > > other info that might help us debug this:
> > > Possible unsafe locking scenario:
> > >
> > > CPU0
> > > ----
> > > lock(&rb->spinlock);
> > > lock(&rb->spinlock);
> > >
> > > *** DEADLOCK ***
> > >
> > > May be due to missing lock nesting notation
> > >
> > > 4 locks held by strace-static-x/5063:
> > > #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> > > #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > > #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > >
> > > stack backtrace:
> > > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > > Call Trace:
> > > <TASK>
> > > __dump_stack lib/dump_stack.c:88 [inline]
> > > dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> > > check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > > validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> > > __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> > > lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> > > __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> > > _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> > > __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > > bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > > bpf_prog_9efe54833449f08e+0x2d/0x47
> > > bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > > __bpf_prog_run include/linux/filter.h:651 [inline]
> > > bpf_prog_run include/linux/filter.h:658 [inline]
> > > __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> >
> > hum, scratching my head how this could have passed through the prog->active check,
>
> nah could be 2 instances of the same program, got confused by the tag
>
> trace_contention_end
> __bpf_trace_run(prog1)
> bpf_prog_9efe54833449f08e
> bpf_ringbuf_reserve
> trace_contention_end
> __bpf_trace_run(prog1) prog1->active check fails
> __bpf_trace_run(prog2)
> bpf_prog_9efe54833449f08e
> bpf_ringbuf_reserve
> lockup
>
> > we had a similar issue in [1] and we replaced the lock with extra buffers,
> > not sure that's possible in bpf_ringbuf_reserve
>
Having trace_contention_begin and trace_contention_end in such
low-level parts of the ringbuf (and, I'm sure, anything in BPF that
uses a spinlock) is unfortunate. I'm not sure what the best solution
is, but it would be great if we had the ability to disable these
tracepoints when taking a lock in low-level BPF infrastructure. Given
that BPF programs can attach to these tracepoints, it's best to avoid
this arbitrary nesting of BPF ringbuf calls. Also note that no
per-program protection will help, because independent BPF programs
can be using the same map.
> jirka
>
>
> [1] e2bb9e01d589 bpf: Remove trace_printk_lock
>
> > will try to reproduce
> >
> > jirka
> >
> > > bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> > > __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> > > trace_contention_end+0xf6/0x120 include/trace/events/lock.h:122
> > > __pv_queued_spin_lock_slowpath+0x939/0xc60 kernel/locking/qspinlock.c:560
> > > pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
> > > queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
> > > queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
> > > do_raw_spin_lock+0x271/0x370 kernel/locking/spinlock_debug.c:116
> > > __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
> > > _raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
> > > __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > > bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > > bpf_prog_9efe54833449f08e+0x2d/0x47
> > > bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > > __bpf_prog_run include/linux/filter.h:651 [inline]
> > > bpf_prog_run include/linux/filter.h:658 [inline]
> > > __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> > > bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
> > > __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
> > > trace_contention_end+0xd7/0x100 include/trace/events/lock.h:122
> > > __mutex_lock_common kernel/locking/mutex.c:617 [inline]
> > > __mutex_lock+0x2e4/0xd70 kernel/locking/mutex.c:752
> > > __pipe_lock fs/pipe.c:103 [inline]
> > > pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > > call_write_iter include/linux/fs.h:2087 [inline]
> > > new_sync_write fs/read_write.c:497 [inline]
> > > vfs_write+0xa81/0xcb0 fs/read_write.c:590
> > > ksys_write+0x1a0/0x2c0 fs/read_write.c:643
> > > do_syscall_64+0xf9/0x240
> > > entry_SYSCALL_64_after_hwframe+0x6f/0x77
> > > RIP: 0033:0x4e8593
> > > Code: c7 c2 a8 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 48 89 54 24 18
> > > RSP: 002b:00007ffeda768928 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> > > RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
> > > RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
> > > RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
> > > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
> > > R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
> > > </TASK>
^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2024-03-12 22:37 ` Andrii Nakryiko
@ 2024-03-13 9:04 ` Jiri Olsa
0 siblings, 0 replies; 14+ messages in thread
From: Jiri Olsa @ 2024-03-13 9:04 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: Jiri Olsa, syzbot, andrii, ast, bpf, daniel, haoluo,
john.fastabend, kpsingh, linux-kernel, martin.lau, netdev, sdf,
song, syzkaller-bugs, yonghong.song
On Tue, Mar 12, 2024 at 03:37:16PM -0700, Andrii Nakryiko wrote:
> On Tue, Mar 12, 2024 at 2:18 PM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> > > On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > > > Hello,
> > > >
> > > > syzbot found the following issue on:
> > > >
> > > > HEAD commit: df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > > > git tree: bpf
> > > > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > > > kernel config: https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > > > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > > > compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > > > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> > > >
> > > > Downloadable assets:
> > > > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > > > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > > > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> > > >
> > > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > > Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> > > >
> > > > ============================================
> > > > WARNING: possible recursive locking detected
> > > > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > > > --------------------------------------------
> > > > strace-static-x/5063 is trying to acquire lock:
> > > > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > >
> > > > but task is already holding lock:
> > > > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > >
> > > > other info that might help us debug this:
> > > > Possible unsafe locking scenario:
> > > >
> > > > CPU0
> > > > ----
> > > > lock(&rb->spinlock);
> > > > lock(&rb->spinlock);
> > > >
> > > > *** DEADLOCK ***
> > > >
> > > > May be due to missing lock nesting notation
> > > >
> > > > 4 locks held by strace-static-x/5063:
> > > > #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> > > > #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > > > #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > > > #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > > > #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > > >
> > > > stack backtrace:
> > > > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > > > Call Trace:
> > > > <TASK>
> > > > __dump_stack lib/dump_stack.c:88 [inline]
> > > > dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> > > > check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > > > validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> > > > __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> > > > lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> > > > __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> > > > _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> > > > __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > > ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > > > bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > > > bpf_prog_9efe54833449f08e+0x2d/0x47
> > > > bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > > > __bpf_prog_run include/linux/filter.h:651 [inline]
> > > > bpf_prog_run include/linux/filter.h:658 [inline]
> > > > __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> > >
> > > hum, scratching my head how this could have passed through the prog->active check,
> >
> > nah could be 2 instances of the same program, got confused by the tag
> >
> > trace_contention_end
> > __bpf_trace_run(prog1)
> > bpf_prog_9efe54833449f08e
> > bpf_ringbuf_reserve
> > trace_contention_end
> > __bpf_trace_run(prog1) prog1->active check fails
> > __bpf_trace_run(prog2)
> > bpf_prog_9efe54833449f08e
> > bpf_ringbuf_reserve
> > lockup
> >
> > we had a similar issue in [1] and we replaced the lock with extra buffers,
> > not sure that's possible in bpf_ringbuf_reserve
> >
>
> Having trace_contention_begin and trace_contention_end in such
> low-level parts of ringbuf (and I'm sure anything in BPF that's using
> spinlock) is unfortunate. I'm not sure what the best solution is, but
> it would be great if we had the ability to disable these tracepoints when
> taking a lock in low-level BPF infrastructure. Given BPF programs can
> attach to these tracepoints, it's best to avoid this arbitrary nesting
> of BPF ringbuf calls. Also note, no per-program protection will help,
> because it can be independent BPF programs using the same map.
one of the initial attempts for the previous problem was to deny
attaching programs that call printk to the printk tracepoint:
https://lore.kernel.org/bpf/20221121213123.1373229-1-jolsa@kernel.org/
how about we overload the bpf contention tracepoint callbacks and
make them conditional as outlined below.. but I'm not sure it'd be
feasible for the lock/unlock call sites to use this
jirka
---
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0a5c4efc73c3..c17b7eaab440 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2347,13 +2347,42 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 extern struct bpf_raw_event_map __start__bpf_raw_tp[];
 extern struct bpf_raw_event_map __stop__bpf_raw_tp[];
 
+extern struct tracepoint __tracepoint_contention_begin;
+
+#define __CAST_TO_U64(x) ({ \
+	typeof(x) __src = (x); \
+	UINTTYPE(sizeof(x)) __dst; \
+	memcpy(&__dst, &__src, sizeof(__dst)); \
+	(u64)__dst; })
+
+int contention_tps_disable;
+
+static notrace void
+__bpf_trace_contention_begin_overload(void *__data, void *lock, unsigned int flags)
+{
+	struct bpf_prog *prog = __data;
+
+	if (contention_tps_disable)
+		return;
+
+	bpf_trace_run2(prog, __CAST_TO_U64(lock), __CAST_TO_U64(flags));
+}
+
+static struct bpf_raw_event_map *fixup(struct bpf_raw_event_map *btp)
+{
+	if (btp->tp == &__tracepoint_contention_begin)
+		btp->bpf_func = __bpf_trace_contention_begin_overload;
+
+	return btp;
+}
+
 struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name)
 {
 	struct bpf_raw_event_map *btp = __start__bpf_raw_tp;
 
 	for (; btp < __stop__bpf_raw_tp; btp++) {
 		if (!strcmp(btp->tp->name, name))
-			return btp;
+			return fixup(btp);
 	}
 
 	return bpf_get_raw_tracepoint_module(name);
* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2024-03-12 21:02 ` Jiri Olsa
2024-03-12 21:18 ` Jiri Olsa
@ 2024-03-13 12:13 ` Hillf Danton
1 sibling, 0 replies; 14+ messages in thread
From: Hillf Danton @ 2024-03-13 12:13 UTC (permalink / raw)
To: Jiri Olsa
Cc: syzbot, andrii, ast, bpf, linux-kernel, syzkaller-bugs,
yonghong.song
On Tue, 12 Mar 2024 22:02:27 +0100 Jiri Olsa <olsajiri@gmail.com>
> On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > [... full syzbot report trimmed; quoted in full earlier in this thread ...]
>
> hum, scratching my head how this could have passed through the prog->active check,
> will try to reproduce
Feel free to take a look at another syzbot report [1,2]
[1] Subject: Re: [syzbot] [ntfs3?] possible deadlock in ntfs_set_state (2)
https://lore.kernel.org/lkml/ZdwSXCaTrzq7mm7Z@boqun-archlinux/
[2] Subject: Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
https://lore.kernel.org/lkml/00000000000082883f061388d49e@google.com/
* [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2024-03-12 16:41 [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
2024-03-12 21:02 ` Jiri Olsa
@ 2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
2025-04-10 12:53 ` syzbot
1 sibling, 1 reply; 14+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2025-04-10 12:38 UTC (permalink / raw)
To: syzbot+850aaf14624dc0c6d366
Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
linux-kernel, martin.lau, netdev, song, syzkaller-bugs,
yonghong.song
#syz test: https://github.com/kkdwivedi/linux.git res-lock-next
* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
@ 2025-04-10 12:53 ` syzbot
0 siblings, 0 replies; 14+ messages in thread
From: syzbot @ 2025-04-10 12:53 UTC (permalink / raw)
To: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
linux-kernel, martin.lau, memxor, netdev, song, syzkaller-bugs,
yonghong.song
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
unregister_netdevice: waiting for DEV to become free
unregister_netdevice: waiting for batadv0 to become free. Usage count = 3
Tested on:
commit: e403941b bpf: Convert ringbuf.c to rqspinlock
git tree: https://github.com/kkdwivedi/linux.git res-lock-next
console output: https://syzkaller.appspot.com/x/log.txt?x=13f46c04580000
kernel config: https://syzkaller.appspot.com/x/.config?x=ea2b297a0891c87e
dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
compiler: gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
Note: no patches were applied.
end of thread [~2025-04-11 18:26 UTC]
Thread overview: 14+ messages
2025-04-11 10:17 [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock Kumar Kartikeya Dwivedi
2025-04-11 17:04 ` Kumar Kartikeya Dwivedi
2025-04-11 17:19 ` [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
2025-04-11 17:31 ` Kumar Kartikeya Dwivedi
2025-04-11 18:26 ` syzbot
2025-04-11 17:37 ` [PATCH bpf v1] bpf: Convert ringbuf.c to rqspinlock patchwork-bot+netdevbpf
-- strict thread matches above, loose matches on Subject: below --
2024-03-12 16:41 [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
2024-03-12 21:02 ` Jiri Olsa
2024-03-12 21:18 ` Jiri Olsa
2024-03-12 22:37 ` Andrii Nakryiko
2024-03-13 9:04 ` Jiri Olsa
2024-03-13 12:13 ` Hillf Danton
2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
2025-04-10 12:53 ` syzbot