* BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
@ 2021-02-27 6:02 syzbot
2023-07-19 9:32 ` [syzbot] [btrfs?] [netfilter?] " syzbot
2023-07-20 8:51 ` Taehee Yoo
0 siblings, 2 replies; 9+ messages in thread
From: syzbot @ 2021-02-27 6:02 UTC (permalink / raw)
To: davem, dsahern, kuba, linux-kernel, netdev, syzkaller-bugs,
yoshfuji
Hello,
syzbot found the following issue on:
HEAD commit: 557c223b selftests/bpf: No need to drop the packet when th..
git tree: bpf
console output: https://syzkaller.appspot.com/x/log.txt?x=156409a8d00000
kernel config: https://syzkaller.appspot.com/x/.config?x=2b8307379601586a
dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
Unfortunately, I don't have any reproducer for this issue yet.
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9bbbacfbf1e04d5221f7@syzkaller.appspotmail.com
netlink: 'syz-executor.4': attribute type 10 has an invalid length.
BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
turning off the locking correctness validator.
CPU: 1 PID: 22786 Comm: syz-executor.4 Not tainted 5.11.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:79 [inline]
dump_stack+0xfa/0x151 lib/dump_stack.c:120
add_chain_cache kernel/locking/lockdep.c:3540 [inline]
lookup_chain_cache_add kernel/locking/lockdep.c:3621 [inline]
validate_chain kernel/locking/lockdep.c:3642 [inline]
__lock_acquire.cold+0x3af/0x3b4 kernel/locking/lockdep.c:4900
lock_acquire kernel/locking/lockdep.c:5510 [inline]
lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
do_write_seqcount_begin_nested include/linux/seqlock.h:520 [inline]
do_write_seqcount_begin include/linux/seqlock.h:545 [inline]
psi_group_change+0x123/0x8d0 kernel/sched/psi.c:707
psi_task_change+0x142/0x220 kernel/sched/psi.c:807
psi_enqueue kernel/sched/stats.h:82 [inline]
enqueue_task kernel/sched/core.c:1590 [inline]
activate_task kernel/sched/core.c:1613 [inline]
ttwu_do_activate+0x25b/0x660 kernel/sched/core.c:2991
ttwu_queue kernel/sched/core.c:3188 [inline]
try_to_wake_up+0x60e/0x14a0 kernel/sched/core.c:3466
wake_up_worker kernel/workqueue.c:837 [inline]
insert_work+0x2a0/0x370 kernel/workqueue.c:1346
__queue_work+0x5c1/0xf00 kernel/workqueue.c:1497
__queue_delayed_work+0x1c8/0x270 kernel/workqueue.c:1644
mod_delayed_work_on+0xdd/0x1e0 kernel/workqueue.c:1718
mod_delayed_work include/linux/workqueue.h:537 [inline]
addrconf_mod_dad_work net/ipv6/addrconf.c:328 [inline]
addrconf_dad_start net/ipv6/addrconf.c:4013 [inline]
addrconf_add_linklocal+0x321/0x590 net/ipv6/addrconf.c:3186
addrconf_addr_gen+0x3a4/0x3e0 net/ipv6/addrconf.c:3313
addrconf_dev_config+0x26c/0x410 net/ipv6/addrconf.c:3360
addrconf_notify+0x362/0x23e0 net/ipv6/addrconf.c:3593
notifier_call_chain+0xb5/0x200 kernel/notifier.c:83
call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:2063
call_netdevice_notifiers_extack net/core/dev.c:2075 [inline]
call_netdevice_notifiers net/core/dev.c:2089 [inline]
dev_open net/core/dev.c:1592 [inline]
dev_open+0x132/0x150 net/core/dev.c:1580
team_port_add drivers/net/team/team.c:1210 [inline]
team_add_slave+0xa53/0x1c20 drivers/net/team/team.c:1967
do_set_master+0x1c8/0x220 net/core/rtnetlink.c:2519
do_setlink+0x920/0x3a70 net/core/rtnetlink.c:2715
__rtnl_newlink+0xdc6/0x1710 net/core/rtnetlink.c:3376
rtnl_newlink+0x64/0xa0 net/core/rtnetlink.c:3491
rtnetlink_rcv_msg+0x44e/0xad0 net/core/rtnetlink.c:5553
netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502
netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline]
netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1338
netlink_sendmsg+0x856/0xd90 net/netlink/af_netlink.c:1927
sock_sendmsg_nosec net/socket.c:652 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:672
____sys_sendmsg+0x6e8/0x810 net/socket.c:2348
___sys_sendmsg+0xf3/0x170 net/socket.c:2402
__sys_sendmsg+0xe5/0x1b0 net/socket.c:2435
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x465ef9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2db3282188 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 000000000056bf60 RCX: 0000000000465ef9
RDX: 0000000000000000 RSI: 00000000200001c0 RDI: 0000000000000004
RBP: 00000000004bcd1c R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000056bf60
R13: 00007ffea3f3a6af R14: 00007f2db3282300 R15: 0000000000022000
team0: Device ipvlan0 failed to register rx_handler
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2021-02-27 6:02 BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2) syzbot
@ 2023-07-19 9:32 ` syzbot
2023-07-19 17:04 ` David Sterba
2023-07-20 8:51 ` Taehee Yoo
1 sibling, 1 reply; 9+ messages in thread
From: syzbot @ 2023-07-19 9:32 UTC (permalink / raw)
To: bakmitopiacibubur, clm, davem, dsahern, dsterba, fw, gregkh,
jirislaby, josef, kadlec, kuba, linux-btrfs, linux-fsdevel,
linux-kernel, linux-serial, linux, netdev, netfilter-devel, pablo,
syzkaller-bugs, yoshfuji
syzbot has found a reproducer for the following issue on:
HEAD commit: e40939bbfc68 Merge branch 'for-next/core' into for-kernelci
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=15d92aaaa80000
kernel config: https://syzkaller.appspot.com/x/.config?x=c4a2640e4213bc2f
dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=149b2d66a80000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1214348aa80000
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/9d87aa312c0e/disk-e40939bb.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/22a11d32a8b2/vmlinux-e40939bb.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0978b5788b52/Image-e40939bb.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9bbbacfbf1e04d5221f7@syzkaller.appspotmail.com
team3253: Mode changed to "activebackup"
BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
turning off the locking correctness validator.
CPU: 1 PID: 9973 Comm: syz-executor246 Not tainted 6.4.0-rc7-syzkaller-ge40939bbfc68 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023
Call trace:
dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:233
show_stack+0x2c/0x44 arch/arm64/kernel/stacktrace.c:240
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd0/0x124 lib/dump_stack.c:106
dump_stack+0x1c/0x28 lib/dump_stack.c:113
lookup_chain_cache_add kernel/locking/lockdep.c:3794 [inline]
validate_chain kernel/locking/lockdep.c:3815 [inline]
__lock_acquire+0x1c44/0x7604 kernel/locking/lockdep.c:5088
lock_acquire+0x23c/0x71c kernel/locking/lockdep.c:5705
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x48/0x60 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:350 [inline]
pl011_console_write+0x180/0x708 drivers/tty/serial/amba-pl011.c:2333
console_emit_next_record kernel/printk/printk.c:2877 [inline]
console_flush_all+0x5c0/0xb54 kernel/printk/printk.c:2933
console_unlock+0x148/0x274 kernel/printk/printk.c:3007
vprintk_emit+0x14c/0x2e4 kernel/printk/printk.c:2307
vprintk_default+0xa0/0xe4 kernel/printk/printk.c:2318
vprintk+0x218/0x2f0 kernel/printk/printk_safe.c:50
_printk+0xdc/0x128 kernel/printk/printk.c:2328
__netdev_printk+0x1f8/0x39c net/core/dev.c:11273
netdev_info+0x104/0x150 net/core/dev.c:11320
team_change_mode drivers/net/team/team.c:619 [inline]
team_mode_option_set+0x350/0x390 drivers/net/team/team.c:1388
team_option_set drivers/net/team/team.c:374 [inline]
team_nl_cmd_options_set+0x7e0/0xdec drivers/net/team/team.c:2663
genl_family_rcv_msg_doit net/netlink/genetlink.c:968 [inline]
genl_family_rcv_msg net/netlink/genetlink.c:1048 [inline]
genl_rcv_msg+0x938/0xc1c net/netlink/genetlink.c:1065
netlink_rcv_skb+0x214/0x3c4 net/netlink/af_netlink.c:2546
genl_rcv+0x38/0x50 net/netlink/genetlink.c:1076
netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline]
netlink_unicast+0x660/0x8d4 net/netlink/af_netlink.c:1365
netlink_sendmsg+0x834/0xb18 net/netlink/af_netlink.c:1913
sock_sendmsg_nosec net/socket.c:724 [inline]
sock_sendmsg net/socket.c:747 [inline]
____sys_sendmsg+0x568/0x81c net/socket.c:2503
___sys_sendmsg net/socket.c:2557 [inline]
__sys_sendmsg+0x26c/0x33c net/socket.c:2586
__do_sys_sendmsg net/socket.c:2595 [inline]
__se_sys_sendmsg net/socket.c:2593 [inline]
__arm64_sys_sendmsg+0x80/0x94 net/socket.c:2593
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x244 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x198 arch/arm64/kernel/syscall.c:191
el0_svc+0x4c/0x160 arch/arm64/kernel/entry-common.c:647
el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:665
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591
---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2023-07-19 9:32 ` [syzbot] [btrfs?] [netfilter?] " syzbot
@ 2023-07-19 17:04 ` David Sterba
2023-07-19 17:11 ` syzbot
0 siblings, 1 reply; 9+ messages in thread
From: David Sterba @ 2023-07-19 17:04 UTC (permalink / raw)
To: syzbot
Cc: bakmitopiacibubur, clm, davem, dsahern, dsterba, fw, gregkh,
jirislaby, josef, kadlec, kuba, linux-btrfs, linux-fsdevel,
linux-kernel, linux-serial, linux, netdev, netfilter-devel, pablo,
syzkaller-bugs, yoshfuji
On Wed, Jul 19, 2023 at 02:32:51AM -0700, syzbot wrote:
> syzbot has found a reproducer for the following issue on:
>
> HEAD commit: e40939bbfc68 Merge branch 'for-next/core' into for-kernelci
> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> console output: https://syzkaller.appspot.com/x/log.txt?x=15d92aaaa80000
> kernel config: https://syzkaller.appspot.com/x/.config?x=c4a2640e4213bc2f
> dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> userspace arch: arm64
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=149b2d66a80000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1214348aa80000
>
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/9d87aa312c0e/disk-e40939bb.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/22a11d32a8b2/vmlinux-e40939bb.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/0978b5788b52/Image-e40939bb.gz.xz
#syz unset btrfs
The MAX_LOCKDEP_CHAIN_HLOCKS bugs/warnings can be worked around by
configuration; otherwise they are considered invalid. This report also
has the 'netfilter' label, so I'm not closing it right away.
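For context, the limit in question is derived from a Kconfig knob, so it
can be raised in the syzkaller config. A paraphrased sketch of the relevant
definitions (from memory, so double-check the exact names and factors in
your tree):

    /* lib/Kconfig.debug: CONFIG_LOCKDEP_CHAINS_BITS (default 16 last I checked) */
    /* kernel/locking/lockdep_internals.h, simplified: */
    #define MAX_LOCKDEP_CHAINS_BITS   CONFIG_LOCKDEP_CHAINS_BITS
    #define MAX_LOCKDEP_CHAINS        (1UL << MAX_LOCKDEP_CHAINS_BITS)
    #define MAX_LOCKDEP_CHAIN_HLOCKS  (MAX_LOCKDEP_CHAINS * 5)

Bumping CONFIG_LOCKDEP_CHAINS_BITS therefore grows the held-lock chain
table that this report runs out of.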
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2023-07-19 17:04 ` David Sterba
@ 2023-07-19 17:11 ` syzbot
2023-07-19 17:14 ` Aleksandr Nogikh
0 siblings, 1 reply; 9+ messages in thread
From: syzbot @ 2023-07-19 17:11 UTC (permalink / raw)
To: dsterba
Cc: bakmitopiacibubur, clm, davem, dsahern, dsterba, dsterba, fw,
gregkh, jirislaby, josef, kadlec, kuba, linux-btrfs,
linux-fsdevel, linux-kernel, linux-serial, linux, netdev,
netfilter-devel, pablo, syzkaller-bugs, yoshfuji
> On Wed, Jul 19, 2023 at 02:32:51AM -0700, syzbot wrote:
>> syzbot has found a reproducer for the following issue on:
>>
>> HEAD commit: e40939bbfc68 Merge branch 'for-next/core' into for-kernelci
>> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
>> console output: https://syzkaller.appspot.com/x/log.txt?x=15d92aaaa80000
>> kernel config: https://syzkaller.appspot.com/x/.config?x=c4a2640e4213bc2f
>> dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
>> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
>> userspace arch: arm64
>> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=149b2d66a80000
>> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1214348aa80000
>>
>> Downloadable assets:
>> disk image: https://storage.googleapis.com/syzbot-assets/9d87aa312c0e/disk-e40939bb.raw.xz
>> vmlinux: https://storage.googleapis.com/syzbot-assets/22a11d32a8b2/vmlinux-e40939bb.xz
>> kernel image: https://storage.googleapis.com/syzbot-assets/0978b5788b52/Image-e40939bb.gz.xz
>
> #syz unset btrfs
The following labels did not exist: btrfs
>
> The MAX_LOCKDEP_CHAIN_HLOCKS bugs/warnings can be worked around by
> configuration; otherwise they are considered invalid. This report also
> has the 'netfilter' label, so I'm not closing it right away.
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2023-07-19 17:11 ` syzbot
@ 2023-07-19 17:14 ` Aleksandr Nogikh
2023-07-19 23:12 ` Florian Westphal
0 siblings, 1 reply; 9+ messages in thread
From: Aleksandr Nogikh @ 2023-07-19 17:14 UTC (permalink / raw)
To: syzbot
Cc: dsterba, bakmitopiacibubur, clm, davem, dsahern, dsterba, fw,
gregkh, jirislaby, josef, kadlec, kuba, linux-btrfs,
linux-fsdevel, linux-kernel, linux-serial, linux, netdev,
netfilter-devel, pablo, syzkaller-bugs, yoshfuji
On Wed, Jul 19, 2023 at 7:11 PM syzbot
<syzbot+9bbbacfbf1e04d5221f7@syzkaller.appspotmail.com> wrote:
>
> > On Wed, Jul 19, 2023 at 02:32:51AM -0700, syzbot wrote:
> >> syzbot has found a reproducer for the following issue on:
> >>
> >> HEAD commit: e40939bbfc68 Merge branch 'for-next/core' into for-kernelci
> >> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> >> console output: https://syzkaller.appspot.com/x/log.txt?x=15d92aaaa80000
> >> kernel config: https://syzkaller.appspot.com/x/.config?x=c4a2640e4213bc2f
> >> dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
> >> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> >> userspace arch: arm64
> >> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=149b2d66a80000
> >> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1214348aa80000
> >>
> >> Downloadable assets:
> >> disk image: https://storage.googleapis.com/syzbot-assets/9d87aa312c0e/disk-e40939bb.raw.xz
> >> vmlinux: https://storage.googleapis.com/syzbot-assets/22a11d32a8b2/vmlinux-e40939bb.xz
> >> kernel image: https://storage.googleapis.com/syzbot-assets/0978b5788b52/Image-e40939bb.gz.xz
> >
> > #syz unset btrfs
>
> The following labels did not exist: btrfs
#syz set subsystems: netfilter
>
> >
> > The MAX_LOCKDEP_CHAIN_HLOCKS bugs/warnings can be worked around by
> > configuration; otherwise they are considered invalid. This report also
> > has the 'netfilter' label, so I'm not closing it right away.
>
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2023-07-19 17:14 ` Aleksandr Nogikh
@ 2023-07-19 23:12 ` Florian Westphal
2023-07-20 3:30 ` Jakub Kicinski
0 siblings, 1 reply; 9+ messages in thread
From: Florian Westphal @ 2023-07-19 23:12 UTC (permalink / raw)
To: Aleksandr Nogikh
Cc: syzbot, dsterba, bakmitopiacibubur, clm, davem, dsahern, dsterba,
fw, gregkh, jirislaby, josef, kadlec, kuba, linux-btrfs,
linux-fsdevel, linux-kernel, linux-serial, linux, netdev,
netfilter-devel, pablo, syzkaller-bugs, yoshfuji
Aleksandr Nogikh <nogikh@google.com> wrote:
> On Wed, Jul 19, 2023 at 7:11 PM syzbot
> <syzbot+9bbbacfbf1e04d5221f7@syzkaller.appspotmail.com> wrote:
> >
> > > On Wed, Jul 19, 2023 at 02:32:51AM -0700, syzbot wrote:
> > >> syzbot has found a reproducer for the following issue on:
> > >>
> > >> HEAD commit: e40939bbfc68 Merge branch 'for-next/core' into for-kernelci
> > >> git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> > >> console output: https://syzkaller.appspot.com/x/log.txt?x=15d92aaaa80000
> > >> kernel config: https://syzkaller.appspot.com/x/.config?x=c4a2640e4213bc2f
> > >> dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
> > >> compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
> > >> userspace arch: arm64
> > >> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=149b2d66a80000
> > >> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1214348aa80000
> > >>
> > >> Downloadable assets:
> > >> disk image: https://storage.googleapis.com/syzbot-assets/9d87aa312c0e/disk-e40939bb.raw.xz
> > >> vmlinux: https://storage.googleapis.com/syzbot-assets/22a11d32a8b2/vmlinux-e40939bb.xz
> > >> kernel image: https://storage.googleapis.com/syzbot-assets/0978b5788b52/Image-e40939bb.gz.xz
> > >
> > > #syz unset btrfs
> >
> > The following labels did not exist: btrfs
>
> #syz set subsystems: netfilter
I don't see any netfilter involvement here.
The repro just creates a massive amount of team devices.
At the time it hits the LOCKDEP limits on my test VM it has created
~2k team devices; system load is at +14 because udev is also busy
spawning hotplug scripts for the new devices.
After a reboot, suspending the running reproducer at about 1500 devices
(before it hits the lockdep limits) and then running 'ip link del' for
the team devices gets the lockdep entries down to ~8k (from ~40k),
which is in the range this VM has after a fresh boot.
So as far as I can see this workload is just pushing lockdep
past what it can handle with the configured settings and is
not triggering any actual bug.
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2023-07-19 23:12 ` Florian Westphal
@ 2023-07-20 3:30 ` Jakub Kicinski
2023-07-20 7:18 ` Paolo Abeni
0 siblings, 1 reply; 9+ messages in thread
From: Jakub Kicinski @ 2023-07-20 3:30 UTC (permalink / raw)
To: netdev
Cc: Florian Westphal, Aleksandr Nogikh, syzbot, dsterba,
bakmitopiacibubur, clm, davem, dsahern, dsterba, gregkh,
jirislaby, josef, kadlec, linux-btrfs, linux-fsdevel,
linux-kernel, linux-serial, linux, netfilter-devel, pablo,
syzkaller-bugs
On Thu, 20 Jul 2023 01:12:07 +0200 Florian Westphal wrote:
> I don't see any netfilter involvement here.
>
> The repro just creates a massive amount of team devices.
>
> At the time it hits the LOCKDEP limits on my test VM it has created
> ~2k team devices; system load is at +14 because udev is also busy
> spawning hotplug scripts for the new devices.
>
> After a reboot, suspending the running reproducer at about 1500 devices
> (before it hits the lockdep limits) and then running 'ip link del' for
> the team devices gets the lockdep entries down to ~8k (from ~40k),
> which is in the range this VM has after a fresh boot.
>
> So as far as I can see this workload is just pushing lockdep
> past what it can handle with the configured settings and is
> not triggering any actual bug.
The lockdep splat because of netdevice stacking is one of our top
reports from syzbot. Is anyone else feeling like we should add
an artificial but very high limit on netdev stacking? :(
* Re: [syzbot] [btrfs?] [netfilter?] BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2023-07-20 3:30 ` Jakub Kicinski
@ 2023-07-20 7:18 ` Paolo Abeni
0 siblings, 0 replies; 9+ messages in thread
From: Paolo Abeni @ 2023-07-20 7:18 UTC (permalink / raw)
To: Jakub Kicinski, netdev
Cc: Florian Westphal, Aleksandr Nogikh, syzbot, dsterba,
bakmitopiacibubur, clm, davem, dsahern, dsterba, gregkh,
jirislaby, josef, kadlec, linux-btrfs, linux-fsdevel,
linux-kernel, linux-serial, linux, netfilter-devel, pablo,
syzkaller-bugs
On Wed, 2023-07-19 at 20:30 -0700, Jakub Kicinski wrote:
> On Thu, 20 Jul 2023 01:12:07 +0200 Florian Westphal wrote:
> > I don't see any netfilter involvement here.
> >
> > The repro just creates a massive amount of team devices.
> >
> > At the time it hits the LOCKDEP limits on my test VM it has created
> > ~2k team devices; system load is at +14 because udev is also busy
> > spawning hotplug scripts for the new devices.
> >
> > After a reboot, suspending the running reproducer at about 1500 devices
> > (before it hits the lockdep limits) and then running 'ip link del' for
> > the team devices gets the lockdep entries down to ~8k (from ~40k),
> > which is in the range this VM has after a fresh boot.
> >
> > So as far as I can see this workload is just pushing lockdep
> > past what it can handle with the configured settings and is
> > not triggering any actual bug.
>
> The lockdep splat because of netdevice stacking is one of our top
> reports from syzbot. Is anyone else feeling like we should add
> an artificial but very high limit on netdev stacking? :(
We already have a similar limit for xmit: XMIT_RECURSION_LIMIT. I guess
stacking more than that many devices would be quite useless/non-functional.
We could use that value to limit device stacking, too.
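A very rough sketch of the kind of check this would mean, purely for
illustration: the function name is made up, where such a check would hook
in (e.g. the upper/lower dev linking code) is a separate question, and it
assumes the upper_level/lower_level counters that struct net_device
already carries, if I read the code right:

    #include <linux/errno.h>
    #include <linux/netdevice.h>

    /* Illustrative only: refuse to stack @dev under @upper_dev once the
     * combined nesting depth would exceed XMIT_RECURSION_LIMIT.
     */
    static int netdev_stack_depth_ok(const struct net_device *dev,
                                     const struct net_device *upper_dev)
    {
            if (dev->lower_level + upper_dev->upper_level > XMIT_RECURSION_LIMIT)
                    return -EMLINK;
            return 0;
    }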
Cheers,
Paolo
* Re: BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2)
2021-02-27 6:02 BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low! (2) syzbot
2023-07-19 9:32 ` [syzbot] [btrfs?] [netfilter?] " syzbot
@ 2023-07-20 8:51 ` Taehee Yoo
1 sibling, 0 replies; 9+ messages in thread
From: Taehee Yoo @ 2023-07-20 8:51 UTC (permalink / raw)
To: syzbot, davem, dsahern, kuba, linux-kernel, netdev,
syzkaller-bugs, yoshfuji
On 2021. 2. 27. 3:02 PM, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 557c223b selftests/bpf: No need to drop the packet when th..
> git tree: bpf
> console output: https://syzkaller.appspot.com/x/log.txt?x=156409a8d00000
> kernel config: https://syzkaller.appspot.com/x/.config?x=2b8307379601586a
> dashboard link: https://syzkaller.appspot.com/bug?extid=9bbbacfbf1e04d5221f7
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+9bbbacfbf1e04d5221f7@syzkaller.appspotmail.com
>
> netlink: 'syz-executor.4': attribute type 10 has an invalid length.
> BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!
> turning off the locking correctness validator.
> CPU: 1 PID: 22786 Comm: syz-executor.4 Not tainted 5.11.0-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Call Trace:
> __dump_stack lib/dump_stack.c:79 [inline]
> dump_stack+0xfa/0x151 lib/dump_stack.c:120
> add_chain_cache kernel/locking/lockdep.c:3540 [inline]
> lookup_chain_cache_add kernel/locking/lockdep.c:3621 [inline]
> validate_chain kernel/locking/lockdep.c:3642 [inline]
> __lock_acquire.cold+0x3af/0x3b4 kernel/locking/lockdep.c:4900
> lock_acquire kernel/locking/lockdep.c:5510 [inline]
> lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
> do_write_seqcount_begin_nested include/linux/seqlock.h:520 [inline]
> do_write_seqcount_begin include/linux/seqlock.h:545 [inline]
> psi_group_change+0x123/0x8d0 kernel/sched/psi.c:707
> psi_task_change+0x142/0x220 kernel/sched/psi.c:807
> psi_enqueue kernel/sched/stats.h:82 [inline]
> enqueue_task kernel/sched/core.c:1590 [inline]
> activate_task kernel/sched/core.c:1613 [inline]
> ttwu_do_activate+0x25b/0x660 kernel/sched/core.c:2991
> ttwu_queue kernel/sched/core.c:3188 [inline]
> try_to_wake_up+0x60e/0x14a0 kernel/sched/core.c:3466
> wake_up_worker kernel/workqueue.c:837 [inline]
> insert_work+0x2a0/0x370 kernel/workqueue.c:1346
> __queue_work+0x5c1/0xf00 kernel/workqueue.c:1497
> __queue_delayed_work+0x1c8/0x270 kernel/workqueue.c:1644
> mod_delayed_work_on+0xdd/0x1e0 kernel/workqueue.c:1718
> mod_delayed_work include/linux/workqueue.h:537 [inline]
> addrconf_mod_dad_work net/ipv6/addrconf.c:328 [inline]
> addrconf_dad_start net/ipv6/addrconf.c:4013 [inline]
> addrconf_add_linklocal+0x321/0x590 net/ipv6/addrconf.c:3186
> addrconf_addr_gen+0x3a4/0x3e0 net/ipv6/addrconf.c:3313
> addrconf_dev_config+0x26c/0x410 net/ipv6/addrconf.c:3360
> addrconf_notify+0x362/0x23e0 net/ipv6/addrconf.c:3593
> notifier_call_chain+0xb5/0x200 kernel/notifier.c:83
> call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:2063
> call_netdevice_notifiers_extack net/core/dev.c:2075 [inline]
> call_netdevice_notifiers net/core/dev.c:2089 [inline]
> dev_open net/core/dev.c:1592 [inline]
> dev_open+0x132/0x150 net/core/dev.c:1580
> team_port_add drivers/net/team/team.c:1210 [inline]
> team_add_slave+0xa53/0x1c20 drivers/net/team/team.c:1967
> do_set_master+0x1c8/0x220 net/core/rtnetlink.c:2519
> do_setlink+0x920/0x3a70 net/core/rtnetlink.c:2715
> __rtnl_newlink+0xdc6/0x1710 net/core/rtnetlink.c:3376
> rtnl_newlink+0x64/0xa0 net/core/rtnetlink.c:3491
> rtnetlink_rcv_msg+0x44e/0xad0 net/core/rtnetlink.c:5553
> netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502
> netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline]
> netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1338
> netlink_sendmsg+0x856/0xd90 net/netlink/af_netlink.c:1927
> sock_sendmsg_nosec net/socket.c:652 [inline]
> sock_sendmsg+0xcf/0x120 net/socket.c:672
> ____sys_sendmsg+0x6e8/0x810 net/socket.c:2348
> ___sys_sendmsg+0xf3/0x170 net/socket.c:2402
> __sys_sendmsg+0xe5/0x1b0 net/socket.c:2435
> do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
> entry_SYSCALL_64_after_hwframe+0x44/0xae
> RIP: 0033:0x465ef9
> Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007f2db3282188 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
> RAX: ffffffffffffffda RBX: 000000000056bf60 RCX: 0000000000465ef9
> RDX: 0000000000000000 RSI: 00000000200001c0 RDI: 0000000000000004
> RBP: 00000000004bcd1c R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 000000000056bf60
> R13: 00007ffea3f3a6af R14: 00007f2db3282300 R15: 0000000000022000
> team0: Device ipvlan0 failed to register rx_handler
>
Hi,
This issue is most likely caused by commit 369f61bee0f5 ("team: fix nested
locking lockdep warning").
That patch uses the dynamic lockdep key mechanism instead of a static
lockdep key to fix nested locking lockdep warnings.
The problem with the dynamic lockdep key mechanism is that each
interface registers its own lockdep key.
If there are 4K interfaces, it uses 4K lockdep keys.
So, the "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" splat occurs when there
are that many team interfaces.
I think it should use a static lockdep key instead of the dynamic
lockdep key.
If so, all team interfaces would use only 8 lockdep keys, and this splat
would disappear.
The patch will be similar to commits b3e80d44f5b1 ("bonding: fix lockdep
warning in bond_get_stats()") and e7511f560f5 ("bonding: remove useless
stats_lock_key").
I will send a patch for it.
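Roughly, the difference between the two approaches looks like this
(simplified sketch only, not the actual patch; the struct layout and the
helper names here are illustrative):

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    struct team_sketch {
            struct mutex lock;
            struct lock_class_key team_lock_key;  /* dynamic: one key per device */
    };

    /* Current code (after 369f61bee0f5): register a dynamic key per device. */
    static void team_init_lock_dynamic(struct team_sketch *team)
    {
            lockdep_register_key(&team->team_lock_key);
            __mutex_init(&team->lock, "team->lock", &team->team_lock_key);
    }

    /* Proposed direction: one static key shared by every team device. */
    static struct lock_class_key team_lock_static_key;

    static void team_init_lock_static(struct team_sketch *team)
    {
            mutex_init(&team->lock);
            lockdep_set_class(&team->lock, &team_lock_static_key);
    }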
Thanks a lot!
Taehee Yoo
>
> ---
> This report is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@googlegroups.com.
>
> syzbot will keep track of this issue. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.