From: syzbot <syzbot+9d95beb2a3c260622518@syzkaller.appspotmail.com>
To: linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com,
wojciech.gladysz@infogain.com
Subject: Re: [syzbot] [bpf?] possible deadlock in trie_delete_elem
Date: Wed, 05 Jun 2024 01:24:02 -0700 [thread overview]
Message-ID: <000000000000f9b3ac061a204910@google.com> (raw)
In-Reply-To: <PN2PR01MB4891BF795360AE31E349CFECFCF92@PN2PR01MB4891.INDPRD01.PROD.OUTLOOK.COM>
Hello,
syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in trie_delete_elem
============================================
WARNING: possible recursive locking detected
6.10.0-rc2-syzkaller-00022-g32f88d65f01b #0 Not tainted
--------------------------------------------
syz-executor.1/6309 is trying to acquire lock:
ffff88802e40e1f8 (&trie->lock){-.-.}-{2:2}, at: trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
but task is already holding lock:
ffff88802e40c1f8 (&trie->lock){-.-.}-{2:2}, at: trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&trie->lock);
lock(&trie->lock);
*** DEADLOCK ***
May be due to missing lock nesting notation
4 locks held by syz-executor.1/6309:
#0: ffff88807ada5090 (&child->perf_event_mutex){+.+.}-{3:3}, at: perf_event_exit_task+0x9b/0xaa0 kernel/events/core.c:13184
#1: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#1: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#1: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2402 [inline]
#1: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x1fc/0x540 kernel/trace/bpf_trace.c:2444
#2: ffff88802e40c1f8 (&trie->lock){-.-.}-{2:2}, at: trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
#3: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#3: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#3: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2402 [inline]
#3: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x1fc/0x540 kernel/trace/bpf_trace.c:2444
stack backtrace:
CPU: 1 PID: 6309 Comm: syz-executor.1 Not tainted 6.10.0-rc2-syzkaller-00022-g32f88d65f01b #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
check_deadlock kernel/locking/lockdep.c:3062 [inline]
validate_chain+0x15d3/0x5900 kernel/locking/lockdep.c:3856
__lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
bpf_prog_1db1603a7cfa36fb+0x45/0x49
bpf_dispatcher_nop_func include/linux/bpf.h:1243 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2403 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2444
__traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
trace_contention_end+0x114/0x140 include/trace/events/lock.h:122
__pv_queued_spin_lock_slowpath+0xb81/0xdc0 kernel/locking/qspinlock.c:557
pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
do_raw_spin_lock+0x272/0x370 kernel/locking/spinlock_debug.c:116
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xe1/0x120 kernel/locking/spinlock.c:162
trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
bpf_prog_1db1603a7cfa36fb+0x45/0x49
bpf_dispatcher_nop_func include/linux/bpf.h:1243 [inline]
__bpf_prog_run include/linux/filter.h:691 [inline]
bpf_prog_run include/linux/filter.h:698 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2403 [inline]
bpf_trace_run2+0x2ec/0x540 kernel/trace/bpf_trace.c:2444
__traceiter_contention_end+0x7b/0xb0 include/trace/events/lock.h:122
trace_contention_end+0xf5/0x120 include/trace/events/lock.h:122
__mutex_lock_common kernel/locking/mutex.c:617 [inline]
__mutex_lock+0x2e5/0xd70 kernel/locking/mutex.c:752
perf_event_exit_task+0x9b/0xaa0 kernel/events/core.c:13184
do_exit+0xa37/0x27e0 kernel/exit.c:883
do_group_exit+0x207/0x2c0 kernel/exit.c:1023
__do_sys_exit_group kernel/exit.c:1034 [inline]
__se_sys_exit_group kernel/exit.c:1032 [inline]
__x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1032
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7af347dca9
Code: Unable to access opcode bytes at 0x7f7af347dc7f.
RSP: 002b:00007ffc344b9708 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 000000000000001e RCX: 00007f7af347dca9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
R10: 00007f7af3400000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000000
</TASK>
Tested on:
commit: 32f88d65 Merge tag 'linux_kselftest-fixes-6.10-rc3' of..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=162ace06980000
kernel config: https://syzkaller.appspot.com/x/.config?x=eb72437243175f22
dashboard link: https://syzkaller.appspot.com/bug?extid=9d95beb2a3c260622518
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
Note: no patches were applied.