netdev.vger.kernel.org archive mirror
* [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
@ 2024-03-12 16:41 syzbot
  2024-03-12 21:02 ` Jiri Olsa
  2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
  0 siblings, 2 replies; 7+ messages in thread
From: syzbot @ 2024-03-12 16:41 UTC (permalink / raw)
  To: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
	linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
	yonghong.song

Hello,

syzbot found the following issue on:

HEAD commit:    df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
git tree:       bpf
console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
kernel config:  https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10babc01180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com

============================================
WARNING: possible recursive locking detected
6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
--------------------------------------------
strace-static-x/5063 is trying to acquire lock:
ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424

but task is already holding lock:
ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&rb->spinlock);
  lock(&rb->spinlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by strace-static-x/5063:
 #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
 #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
 #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
 #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
 #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
 #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
 #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420

stack backtrace:
CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
 check_deadlock kernel/locking/lockdep.c:3062 [inline]
 validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
 __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
 __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
 ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
 bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
 bpf_prog_9efe54833449f08e+0x2d/0x47
 bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
 __bpf_prog_run include/linux/filter.h:651 [inline]
 bpf_prog_run include/linux/filter.h:658 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
 bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
 __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
 trace_contention_end+0xf6/0x120 include/trace/events/lock.h:122
 __pv_queued_spin_lock_slowpath+0x939/0xc60 kernel/locking/qspinlock.c:560
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
 queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x271/0x370 kernel/locking/spinlock_debug.c:116
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
 __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
 ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
 bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
 bpf_prog_9efe54833449f08e+0x2d/0x47
 bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
 __bpf_prog_run include/linux/filter.h:651 [inline]
 bpf_prog_run include/linux/filter.h:658 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
 bpf_trace_run2+0x204/0x420 kernel/trace/bpf_trace.c:2420
 __traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
 trace_contention_end+0xd7/0x100 include/trace/events/lock.h:122
 __mutex_lock_common kernel/locking/mutex.c:617 [inline]
 __mutex_lock+0x2e4/0xd70 kernel/locking/mutex.c:752
 __pipe_lock fs/pipe.c:103 [inline]
 pipe_write+0x1cc/0x1a40 fs/pipe.c:465
 call_write_iter include/linux/fs.h:2087 [inline]
 new_sync_write fs/read_write.c:497 [inline]
 vfs_write+0xa81/0xcb0 fs/read_write.c:590
 ksys_write+0x1a0/0x2c0 fs/read_write.c:643
 do_syscall_64+0xf9/0x240
 entry_SYSCALL_64_after_hwframe+0x6f/0x77
RIP: 0033:0x4e8593
Code: c7 c2 a8 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 48 89 54 24 18
RSP: 002b:00007ffeda768928 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup


* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
  2024-03-12 16:41 [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
@ 2024-03-12 21:02 ` Jiri Olsa
  2024-03-12 21:18   ` Jiri Olsa
  2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
  1 sibling, 1 reply; 7+ messages in thread
From: Jiri Olsa @ 2024-03-12 21:02 UTC (permalink / raw)
  To: syzbot
  Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, kpsingh,
	linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
	yonghong.song

On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> Hello,
> 
> syzbot found the following issue on:
> 
> HEAD commit:    df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> git tree:       bpf
> console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> 
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> 
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> 
> ============================================
> WARNING: possible recursive locking detected
> 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> --------------------------------------------
> strace-static-x/5063 is trying to acquire lock:
> ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> 
> but task is already holding lock:
> ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> 
> other info that might help us debug this:
>  Possible unsafe locking scenario:
> 
>        CPU0
>        ----
>   lock(&rb->spinlock);
>   lock(&rb->spinlock);
> 
>  *** DEADLOCK ***
> 
>  May be due to missing lock nesting notation
> 
> 4 locks held by strace-static-x/5063:
>  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
>  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
>  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
>  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
>  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
>  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
>  #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
>  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
>  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
>  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
>  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> 
> stack backtrace:
> CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> Call Trace:
>  <TASK>
>  __dump_stack lib/dump_stack.c:88 [inline]
>  dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
>  check_deadlock kernel/locking/lockdep.c:3062 [inline]
>  validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
>  __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
>  lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
>  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
>  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
>  __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
>  ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
>  bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
>  bpf_prog_9efe54833449f08e+0x2d/0x47
>  bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
>  __bpf_prog_run include/linux/filter.h:651 [inline]
>  bpf_prog_run include/linux/filter.h:658 [inline]
>  __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]

hum, scratching my head how this could have passed through the prog->active check,
will try to reproduce

jirka



* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
  2024-03-12 21:02 ` Jiri Olsa
@ 2024-03-12 21:18   ` Jiri Olsa
  2024-03-12 22:37     ` Andrii Nakryiko
  0 siblings, 1 reply; 7+ messages in thread
From: Jiri Olsa @ 2024-03-12 21:18 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: syzbot, andrii, ast, bpf, daniel, haoluo, john.fastabend, kpsingh,
	linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
	yonghong.song

On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > Hello,
> > 
> > syzbot found the following issue on:
> > 
> > HEAD commit:    df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > git tree:       bpf
> > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> > 
> > Downloadable assets:
> > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> > 
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> > 
> > ============================================
> > WARNING: possible recursive locking detected
> > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > --------------------------------------------
> > strace-static-x/5063 is trying to acquire lock:
> > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > 
> > but task is already holding lock:
> > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > 
> > other info that might help us debug this:
> >  Possible unsafe locking scenario:
> > 
> >        CPU0
> >        ----
> >   lock(&rb->spinlock);
> >   lock(&rb->spinlock);
> > 
> >  *** DEADLOCK ***
> > 
> >  May be due to missing lock nesting notation
> > 
> > 4 locks held by strace-static-x/5063:
> >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> >  #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > 
> > stack backtrace:
> > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > Call Trace:
> >  <TASK>
> >  __dump_stack lib/dump_stack.c:88 [inline]
> >  dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> >  check_deadlock kernel/locking/lockdep.c:3062 [inline]
> >  validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> >  __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> >  lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> >  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> >  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> >  __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> >  ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> >  bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> >  bpf_prog_9efe54833449f08e+0x2d/0x47
> >  bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> >  __bpf_prog_run include/linux/filter.h:651 [inline]
> >  bpf_prog_run include/linux/filter.h:658 [inline]
> >  __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> 
> hum, scratching my head how this could have passed through the prog->active check,

nah, it could be 2 instances of the same program; I got confused by the tag

trace_contention_end
  __bpf_trace_run(prog1)
    bpf_prog_9efe54833449f08e
      bpf_ringbuf_reserve
        trace_contention_end
          __bpf_trace_run(prog1)  prog1->active check fails
          __bpf_trace_run(prog2) 
            bpf_prog_9efe54833449f08e
              bpf_ringbuf_reserve
                lockup
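
The flow above can be modeled in plain userspace C to see why a per-program
`active` counter cannot stop the nested reserve: the guard is per program
instance, while the lock is per ring buffer map. All names below are
illustrative, not the kernel's actual code:

```c
#include <assert.h>

/* Userspace model only: one shared ringbuf lock, two program instances,
 * each with its own recursion counter (like prog->active). */
struct prog { int active; };
struct ringbuf { int lock_depth; };   /* depth > 1 = recursive acquisition */

static struct ringbuf rb;             /* both programs share this map */
static struct prog prog1, prog2;
static int max_lock_depth;

static void run_prog(struct prog *p);

/* Models trace_contention_end firing while rb's lock is contended:
 * every attached program instance is invoked. */
static void contention_tracepoint(void)
{
    run_prog(&prog1);
    run_prog(&prog2);
}

/* Models __bpf_ringbuf_reserve: take the shared lock; the lock slowpath
 * can fire the contention tracepoint. */
static void ringbuf_reserve(struct ringbuf *r)
{
    r->lock_depth++;
    if (max_lock_depth < r->lock_depth)
        max_lock_depth = r->lock_depth;
    if (r->lock_depth == 1)           /* only the outer call "contends" */
        contention_tracepoint();
    r->lock_depth--;
}

/* Models __bpf_trace_run's prog->active recursion guard: it only
 * blocks re-entry of the SAME program instance. */
static void run_prog(struct prog *p)
{
    if (p->active++ == 0)
        ringbuf_reserve(&rb);
    p->active--;
}
```

Running `run_prog(&prog1)` drives the lock depth to 2: prog1's nested
invocation is blocked by its own counter, but prog2 sails past its (still
zero) counter and re-acquires the same ringbuf lock.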

we had a similar issue in [1] and replaced the lock with extra buffers,
not sure that's possible in bpf_ringbuf_reserve

jirka


[1] e2bb9e01d589 bpf: Remove trace_printk_lock
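
For reference, the pattern from [1] replaced a shared lock with preallocated
per-CPU buffers indexed by a nesting counter, so a re-entrant caller gets a
different buffer instead of spinning on a lock it already holds. A rough
single-"CPU" userspace sketch of that idea (all names hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the lockless pattern from e2bb9e01d589: per-CPU buffers
 * indexed by nesting level instead of a shared spinlock. In the kernel
 * the counter would be per-CPU with preemption disabled. */
#define MAX_NEST 3
#define BUF_SZ   512

static char bufs[MAX_NEST][BUF_SZ];
static int nest_level;

/* Return a private buffer for the current nesting level, or NULL when
 * callers are nested too deeply -- dropping instead of deadlocking. */
static char *get_buf(void)
{
    if (nest_level >= MAX_NEST)
        return NULL;
    return bufs[nest_level++];
}

static void put_buf(void)
{
    nest_level--;
}
```

Each nesting level sees a distinct buffer, and exhausting the depth fails
gracefully; whether reserve/commit semantics of the ringbuf can be expressed
this way is exactly the open question above.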



* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
  2024-03-12 21:18   ` Jiri Olsa
@ 2024-03-12 22:37     ` Andrii Nakryiko
  2024-03-13  9:04       ` Jiri Olsa
  0 siblings, 1 reply; 7+ messages in thread
From: Andrii Nakryiko @ 2024-03-12 22:37 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: syzbot, andrii, ast, bpf, daniel, haoluo, john.fastabend, kpsingh,
	linux-kernel, martin.lau, netdev, sdf, song, syzkaller-bugs,
	yonghong.song

On Tue, Mar 12, 2024 at 2:18 PM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> > On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit:    df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > > git tree:       bpf
> > > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > > kernel config:  https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > > compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > > syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > > C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> > >
> > > Downloadable assets:
> > > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> > >
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> > >
> > > ============================================
> > > WARNING: possible recursive locking detected
> > > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > > --------------------------------------------
> > > strace-static-x/5063 is trying to acquire lock:
> > > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >
> > > but task is already holding lock:
> > > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >
> > > other info that might help us debug this:
> > >  Possible unsafe locking scenario:
> > >
> > >        CPU0
> > >        ----
> > >   lock(&rb->spinlock);
> > >   lock(&rb->spinlock);
> > >
> > >  *** DEADLOCK ***
> > >
> > >  May be due to missing lock nesting notation
> > >
> > > 4 locks held by strace-static-x/5063:
> > >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> > >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > >  #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > >
> > > stack backtrace:
> > > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > > Call Trace:
> > >  <TASK>
> > >  __dump_stack lib/dump_stack.c:88 [inline]
> > >  dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> > >  check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > >  validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> > >  __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> > >  lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> > >  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> > >  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> > >  __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > >  ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > >  bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > >  bpf_prog_9efe54833449f08e+0x2d/0x47
> > >  bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > >  __bpf_prog_run include/linux/filter.h:651 [inline]
> > >  bpf_prog_run include/linux/filter.h:658 [inline]
> > >  __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> >
> > hum, scratching my head how this could have passed through the prog->active check,
>
> nah, it could be 2 instances of the same program; I got confused by the tag
>
> trace_contention_end
>   __bpf_trace_run(prog1)
>     bpf_prog_9efe54833449f08e
>       bpf_ringbuf_reserve
>         trace_contention_end
>           __bpf_trace_run(prog1)  prog1->active check fails
>           __bpf_trace_run(prog2)
>             bpf_prog_9efe54833449f08e
>               bpf_ringbuf_reserve
>                 lockup
>
> > we had a similar issue in [1] and replaced the lock with extra buffers,
> > not sure that's possible in bpf_ringbuf_reserve
>

Having trace_contention_begin and trace_contention_end in such
low-level parts of the ringbuf (and, I'm sure, anything in BPF that
uses a spinlock) is unfortunate. I'm not sure what the best solution
is, but it would be great if we had the ability to disable these
tracepoints when taking locks in low-level BPF infrastructure. Given
that BPF programs can attach to these tracepoints, it's best to avoid
this arbitrary nesting of BPF ringbuf calls. Also note that no
per-program protection will help, because the nesting can come from
independent BPF programs using the same map.
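
One conceivable shape of such protection (purely illustrative, not an
existing kernel mechanism) is a flag set around the low-level lock
acquisition that the tracepoint entry path checks before invoking any
attached programs; in a real kernel it would have to be per-CPU:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative userspace model of suppressing tracepoint-attached
 * programs while low-level BPF infrastructure holds its own lock.
 * All names are hypothetical. */
static bool in_bpf_ringbuf;
static int progs_invoked;

/* Models the contention tracepoint entry path: bail out instead of
 * running attached programs while the ringbuf lock path is active. */
static void trace_contention_end(void)
{
    if (in_bpf_ringbuf)
        return;
    progs_invoked++;
}

/* Models the reserve path: mark the suppressed region around the lock
 * acquisition, which may fire the tracepoint from the lock slowpath. */
static void ringbuf_reserve(void)
{
    in_bpf_ringbuf = true;
    trace_contention_end();
    in_bpf_ringbuf = false;
}
```

The tradeoff is visibility: contention on these internal locks would no
longer be observable through the very tracepoints that cause the nesting.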


> jirka
>
>
> [1] e2bb9e01d589 bpf: Remove trace_printk_lock
>
> > > RAX: ffffffffffffffda RBX: 0000000000000012 RCX: 00000000004e8593
> > > RDX: 0000000000000012 RSI: 0000000000817140 RDI: 0000000000000002
> > > RBP: 0000000000817140 R08: 0000000000000010 R09: 0000000000000090
> > > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000012
> > > R13: 000000000063f460 R14: 0000000000000012 R15: 0000000000000001
> > >  </TASK>
> > >
> > >
> > > ---
> > > This report is generated by a bot. It may contain errors.
> > > See https://goo.gl/tpsmEJ for more information about syzbot.
> > > syzbot engineers can be reached at syzkaller@googlegroups.com.
> > >
> > > syzbot will keep track of this issue. See:
> > > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> > >
> > > If the report is already addressed, let syzbot know by replying with:
> > > #syz fix: exact-commit-title
> > >
> > > If you want syzbot to run the reproducer, reply with:
> > > #syz test: git://repo/address.git branch-or-commit-hash
> > > If you attach or paste a git patch, syzbot will apply it before testing.
> > >
> > > If you want to overwrite report's subsystems, reply with:
> > > #syz set subsystems: new-subsystem
> > > (See the list of subsystem names on the web dashboard)
> > >
> > > If the report is a duplicate of another one, reply with:
> > > #syz dup: exact-subject-of-another-report
> > >
> > > If you want to undo deduplication, reply with:
> > > #syz undup


* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
  2024-03-12 22:37     ` Andrii Nakryiko
@ 2024-03-13  9:04       ` Jiri Olsa
  0 siblings, 0 replies; 7+ messages in thread
From: Jiri Olsa @ 2024-03-13  9:04 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Jiri Olsa, syzbot, andrii, ast, bpf, daniel, haoluo,
	john.fastabend, kpsingh, linux-kernel, martin.lau, netdev, sdf,
	song, syzkaller-bugs, yonghong.song

On Tue, Mar 12, 2024 at 03:37:16PM -0700, Andrii Nakryiko wrote:
> On Tue, Mar 12, 2024 at 2:18 PM Jiri Olsa <olsajiri@gmail.com> wrote:
> >
> > On Tue, Mar 12, 2024 at 10:02:27PM +0100, Jiri Olsa wrote:
> > > On Tue, Mar 12, 2024 at 09:41:26AM -0700, syzbot wrote:
> > > > Hello,
> > > >
> > > > syzbot found the following issue on:
> > > >
> > > > HEAD commit:    df4793505abd Merge tag 'net-6.8-rc8' of git://git.kernel.o..
> > > > git tree:       bpf
> > > > console+strace: https://syzkaller.appspot.com/x/log.txt?x=11fd0092180000
> > > > kernel config:  https://syzkaller.appspot.com/x/.config?x=c11c5c676adb61f0
> > > > dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
> > > > compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > > > syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1509c4ae180000
> > > > C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=10babc01180000
> > > >
> > > > Downloadable assets:
> > > > disk image: https://storage.googleapis.com/syzbot-assets/d2e80ee1112b/disk-df479350.raw.xz
> > > > vmlinux: https://storage.googleapis.com/syzbot-assets/b35ea54cd190/vmlinux-df479350.xz
> > > > kernel image: https://storage.googleapis.com/syzbot-assets/59f69d999ad2/bzImage-df479350.xz
> > > >
> > > > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > > > Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
> > > >
> > > > ============================================
> > > > WARNING: possible recursive locking detected
> > > > 6.8.0-rc7-syzkaller-gdf4793505abd #0 Not tainted
> > > > --------------------------------------------
> > > > strace-static-x/5063 is trying to acquire lock:
> > > > ffffc900096f10d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > >
> > > > but task is already holding lock:
> > > > ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > >
> > > > other info that might help us debug this:
> > > >  Possible unsafe locking scenario:
> > > >
> > > >        CPU0
> > > >        ----
> > > >   lock(&rb->spinlock);
> > > >   lock(&rb->spinlock);
> > > >
> > > >  *** DEADLOCK ***
> > > >
> > > >  May be due to missing lock nesting notation
> > > >
> > > > 4 locks held by strace-static-x/5063:
> > > >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
> > > >  #0: ffff88807857e068 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_write+0x1cc/0x1a40 fs/pipe.c:465
> > > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > > >  #1: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > > >  #2: ffffc900098410d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
> > > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
> > > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
> > > >  #3: ffffffff8e130be0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420
> > > >
> > > > stack backtrace:
> > > > CPU: 0 PID: 5063 Comm: strace-static-x Not tainted 6.8.0-rc7-syzkaller-gdf4793505abd #0
> > > > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
> > > > Call Trace:
> > > >  <TASK>
> > > >  __dump_stack lib/dump_stack.c:88 [inline]
> > > >  dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
> > > >  check_deadlock kernel/locking/lockdep.c:3062 [inline]
> > > >  validate_chain+0x15c0/0x58e0 kernel/locking/lockdep.c:3856
> > > >  __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
> > > >  lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
> > > >  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> > > >  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> > > >  __bpf_ringbuf_reserve+0x211/0x4f0 kernel/bpf/ringbuf.c:424
> > > >  ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:459 [inline]
> > > >  bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:451
> > > >  bpf_prog_9efe54833449f08e+0x2d/0x47
> > > >  bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
> > > >  __bpf_prog_run include/linux/filter.h:651 [inline]
> > > >  bpf_prog_run include/linux/filter.h:658 [inline]
> > > >  __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
> > >
> > > hmm, scratching my head how this could have passed through the prog->active check,
> >
> > nah could be 2 instances of the same program, got confused by the tag
> >
> > trace_contention_end
> >   __bpf_trace_run(prog1)
> >     bpf_prog_9efe54833449f08e
> >       bpf_ringbuf_reserve
> >         trace_contention_end
> >           __bpf_trace_run(prog1)  prog1->active check fails
> >           __bpf_trace_run(prog2)
> >             bpf_prog_9efe54833449f08e
> >               bpf_ringbuf_reserve
> >                 lockup
> >
> > we had a similar issue in [1] and we replaced the lock with extra
> > buffers; not sure that's possible in bpf_ringbuf_reserve
> >
> 
> Having trace_contention_begin and trace_contention_end in such
> low-level parts of the ringbuf (and, I'm sure, anything in BPF that
> uses a spinlock) is unfortunate. I'm not sure what the best solution
> is, but it would be great if we had the ability to disable these
> tracepoints when taking a lock in low-level BPF infrastructure. Given
> that BPF programs can attach to these tracepoints, it's best to avoid
> this arbitrary nesting of BPF ringbuf calls. Also note that no
> per-program protection will help, because independent BPF programs
> can be using the same map.

one of the initial attempts at the previous problem was to deny
attaching programs that call printk to the printk tracepoint:
  https://lore.kernel.org/bpf/20221121213123.1373229-1-jolsa@kernel.org/

how about we overload the BPF contention tracepoint callbacks and
make them conditional, as outlined below.. though I'm not sure it'd
be feasible for the lock/unlock call sites to use this

jirka


---
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0a5c4efc73c3..c17b7eaab440 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2347,13 +2347,42 @@ int perf_event_query_prog_array(struct perf_event *event, void __user *info)
 extern struct bpf_raw_event_map __start__bpf_raw_tp[];
 extern struct bpf_raw_event_map __stop__bpf_raw_tp[];
 
+extern struct tracepoint __tracepoint_contention_begin;
+
+#define __CAST_TO_U64(x) ({ \
+	typeof(x) __src = (x); \
+	UINTTYPE(sizeof(x)) __dst; \
+	memcpy(&__dst, &__src, sizeof(__dst)); \
+	(u64)__dst; })
+
+int contention_tps_disable;
+
+static notrace void
+__bpf_trace_contention_begin_overload(void *__data, void *lock, unsigned int flags)
+{
+	struct bpf_prog *prog = __data;
+
+	if (contention_tps_disable)
+		return;
+
+	bpf_trace_run2(prog, __CAST_TO_U64(lock), __CAST_TO_U64(flags));
+}
+
+static struct bpf_raw_event_map* fixup(struct bpf_raw_event_map *btp)
+{
+	if (btp->tp == &__tracepoint_contention_begin)
+		btp->bpf_func = __bpf_trace_contention_begin_overload;
+
+	return btp;
+}
+
 struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name)
 {
 	struct bpf_raw_event_map *btp = __start__bpf_raw_tp;
 
 	for (; btp < __stop__bpf_raw_tp; btp++) {
 		if (!strcmp(btp->tp->name, name))
-			return btp;
+			return fixup(btp);
 	}
 
 	return bpf_get_raw_tracepoint_module(name);


* [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
  2024-03-12 16:41 [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
  2024-03-12 21:02 ` Jiri Olsa
@ 2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
  2025-04-10 12:53   ` syzbot
  1 sibling, 1 reply; 7+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2025-04-10 12:38 UTC (permalink / raw)
  To: syzbot+850aaf14624dc0c6d366
  Cc: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
	linux-kernel, martin.lau, netdev, song, syzkaller-bugs,
	yonghong.song

#syz test: https://github.com/kkdwivedi/linux.git res-lock-next


* Re: [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve
  2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
@ 2025-04-10 12:53   ` syzbot
  0 siblings, 0 replies; 7+ messages in thread
From: syzbot @ 2025-04-10 12:53 UTC (permalink / raw)
  To: andrii, ast, bpf, daniel, haoluo, john.fastabend, jolsa, kpsingh,
	linux-kernel, martin.lau, memxor, netdev, song, syzkaller-bugs,
	yonghong.song

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
unregister_netdevice: waiting for DEV to become free

unregister_netdevice: waiting for batadv0 to become free. Usage count = 3


Tested on:

commit:         e403941b bpf: Convert ringbuf.c to rqspinlock
git tree:       https://github.com/kkdwivedi/linux.git res-lock-next
console output: https://syzkaller.appspot.com/x/log.txt?x=13f46c04580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=ea2b297a0891c87e
dashboard link: https://syzkaller.appspot.com/bug?extid=850aaf14624dc0c6d366
compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40

Note: no patches were applied.


end of thread, other threads:[~2025-04-10 12:53 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-12 16:41 [syzbot] [bpf?] possible deadlock in __bpf_ringbuf_reserve syzbot
2024-03-12 21:02 ` Jiri Olsa
2024-03-12 21:18   ` Jiri Olsa
2024-03-12 22:37     ` Andrii Nakryiko
2024-03-13  9:04       ` Jiri Olsa
2025-04-10 12:38 ` Kumar Kartikeya Dwivedi
2025-04-10 12:53   ` syzbot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).