* [syzbot] [f2fs?] possible deadlock in super_lock

From: syzbot @ 2023-09-03 22:30 UTC
To: chao, jaegeuk, linux-f2fs-devel, linux-fsdevel, linux-kernel, syzkaller-bugs, terrelln

Hello,

syzbot found the following issue on:

HEAD commit:    6c1b980a7e79 Merge tag 'dma-mapping-6.6-2023-08-29' of git..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13a9669fa80000
kernel config:  https://syzkaller.appspot.com/x/.config?x=2212484c18930a61
dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6e2281f5cb6b/disk-6c1b980a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5fc2481dcded/vmlinux-6c1b980a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/283bb76567da/bzImage-6c1b980a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+062317ea1d0a6d5e29e7@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.5.0-syzkaller-04808-g6c1b980a7e79 #0 Not tainted
------------------------------------------------------
syz-executor.4/22893 is trying to acquire lock:
ffff888039b740e0 (&type->s_umount_key#25){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
ffff888039b740e0 (&type->s_umount_key#25){++++}-{3:3}, at: super_lock+0x23c/0x380 fs/super.c:117

but task is already holding lock:
ffff88801e60ba88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:368 [inline]
ffff88801e60ba88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x14e9/0x1ce0 block/ioctl.c:500

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&bdev->bd_holder_lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       bdev_mark_dead+0x25/0x230 block/bdev.c:961
       disk_force_media_change+0x51/0x80 block/disk-events.c:303
       __loop_clr_fd+0x3ab/0x8f0 drivers/block/loop.c:1174
       lo_release+0x188/0x1c0 drivers/block/loop.c:1743
       blkdev_put_whole+0xa5/0xe0 block/bdev.c:663
       blkdev_put+0x40f/0x8e0 block/bdev.c:898
       kill_block_super+0x58/0x70 fs/super.c:1623
       kill_f2fs_super+0x2b7/0x3d0 fs/f2fs/super.c:4879
       deactivate_locked_super+0x9a/0x170 fs/super.c:481
       deactivate_super+0xde/0x100 fs/super.c:514
       cleanup_mnt+0x222/0x3d0 fs/namespace.c:1254
       task_work_run+0x14d/0x240 kernel/task_work.c:179
       resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
       exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
       __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
       syscall_exit_to_user_mode+0x1d/0x60 kernel/entry/common.c:296
       do_syscall_64+0x44/0xb0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&disk->open_mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       blkdev_get_by_dev.part.0+0x4f0/0xb20 block/bdev.c:786
       blkdev_get_by_dev+0x75/0x80 block/bdev.c:829
       journal_init_dev fs/reiserfs/journal.c:2626 [inline]
       journal_init+0xbb8/0x64b0 fs/reiserfs/journal.c:2786
       reiserfs_fill_super+0xcc6/0x3150 fs/reiserfs/super.c:2022
       mount_bdev+0x1f3/0x2e0 fs/super.c:1603
       legacy_get_tree+0x109/0x220 fs/fs_context.c:638
       vfs_get_tree+0x8c/0x370 fs/super.c:1724
       do_new_mount fs/namespace.c:3335 [inline]
       path_mount+0x1492/0x1ed0 fs/namespace.c:3662
       do_mount fs/namespace.c:3675 [inline]
       __do_sys_mount fs/namespace.c:3884 [inline]
       __se_sys_mount fs/namespace.c:3861 [inline]
       __x64_sys_mount+0x293/0x310 fs/namespace.c:3861
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&type->s_umount_key#25){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
       lock_acquire kernel/locking/lockdep.c:5753 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
       down_read+0x9c/0x470 kernel/locking/rwsem.c:1520
       __super_lock fs/super.c:58 [inline]
       super_lock+0x23c/0x380 fs/super.c:117
       super_lock_shared fs/super.c:146 [inline]
       super_lock_shared_active fs/super.c:1387 [inline]
       fs_bdev_sync+0x94/0x1b0 fs/super.c:1422
       blkdev_flushbuf block/ioctl.c:370 [inline]
       blkdev_common_ioctl+0x1550/0x1ce0 block/ioctl.c:500
       blkdev_ioctl+0x249/0x770 block/ioctl.c:622
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:871 [inline]
       __se_sys_ioctl fs/ioctl.c:857 [inline]
       __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#25 --> &disk->open_mutex --> &bdev->bd_holder_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&bdev->bd_holder_lock);
                               lock(&disk->open_mutex);
                               lock(&bdev->bd_holder_lock);
  rlock(&type->s_umount_key#25);

 *** DEADLOCK ***

1 lock held by syz-executor.4/22893:
 #0: ffff88801e60ba88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:368 [inline]
 #0: ffff88801e60ba88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x14e9/0x1ce0 block/ioctl.c:500

stack backtrace:
CPU: 1 PID: 22893 Comm: syz-executor.4 Not tainted 6.5.0-syzkaller-04808-g6c1b980a7e79 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x311/0x3f0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
 lock_acquire kernel/locking/lockdep.c:5753 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
 down_read+0x9c/0x470 kernel/locking/rwsem.c:1520
 __super_lock fs/super.c:58 [inline]
 super_lock+0x23c/0x380 fs/super.c:117
 super_lock_shared fs/super.c:146 [inline]
 super_lock_shared_active fs/super.c:1387 [inline]
 fs_bdev_sync+0x94/0x1b0 fs/super.c:1422
 blkdev_flushbuf block/ioctl.c:370 [inline]
 blkdev_common_ioctl+0x1550/0x1ce0 block/ioctl.c:500
 blkdev_ioctl+0x249/0x770 block/ioctl.c:622
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f6f9f67cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6fa03670c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f6f9f79bf80 RCX: 00007f6f9f67cae9
RDX: ffffffffffffffff RSI: 0000000000001261 RDI: 0000000000000003
RBP: 00007f6f9f6c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f6f9f79bf80 R15: 00007ffc6e219ec8
 </TASK>

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock

From: syzbot @ 2023-09-20 9:13 UTC
To: chao, jaegeuk, linux-f2fs-devel, linux-fsdevel, linux-kernel, reiserfs-devel, syzkaller-bugs, terrelln

syzbot has found a reproducer for the following issue on:

HEAD commit:    2cf0f7156238 Merge tag 'nfs-for-6.6-2' of git://git.linux-..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=12780282680000
kernel config:  https://syzkaller.appspot.com/x/.config?x=710dc49bece494df
dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=107e9518680000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f48f4ed701b8/disk-2cf0f715.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5b8491e29a2d/vmlinux-2cf0f715.xz
kernel image: https://storage.googleapis.com/syzbot-assets/90faa04d6558/bzImage-2cf0f715.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/c98194587df7/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+062317ea1d0a6d5e29e7@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.6.0-rc2-syzkaller-00018-g2cf0f7156238 #0 Not tainted
------------------------------------------------------
syz-executor.0/8792 is trying to acquire lock:
ffff88807993a0e0 (&type->s_umount_key#25){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
ffff88807993a0e0 (&type->s_umount_key#25){++++}-{3:3}, at: super_lock+0x23c/0x380 fs/super.c:117

but task is already holding lock:
ffff888148439388 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:370 [inline]
ffff888148439388 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x14e9/0x1ce0 block/ioctl.c:502

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&bdev->bd_holder_lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       bdev_mark_dead+0x25/0x230 block/bdev.c:961
       disk_force_media_change+0x51/0x80 block/disk-events.c:303
       __loop_clr_fd+0x3ab/0x8f0 drivers/block/loop.c:1174
       lo_release+0x188/0x1c0 drivers/block/loop.c:1743
       blkdev_put_whole+0xa5/0xe0 block/bdev.c:663
       blkdev_put+0x40f/0x8e0 block/bdev.c:898
       kill_block_super+0x58/0x70 fs/super.c:1649
       deactivate_locked_super+0x9a/0x170 fs/super.c:481
       deactivate_super+0xde/0x100 fs/super.c:514
       cleanup_mnt+0x222/0x3d0 fs/namespace.c:1254
       task_work_run+0x14d/0x240 kernel/task_work.c:179
       resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
       exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
       __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
       syscall_exit_to_user_mode+0x1d/0x60 kernel/entry/common.c:296
       do_syscall_64+0x44/0xb0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&disk->open_mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       blkdev_get_by_dev.part.0+0x4f0/0xb20 block/bdev.c:786
       blkdev_get_by_dev+0x75/0x80 block/bdev.c:829
       journal_init_dev fs/reiserfs/journal.c:2626 [inline]
       journal_init+0xbb8/0x64b0 fs/reiserfs/journal.c:2786
       reiserfs_fill_super+0xcc6/0x3150 fs/reiserfs/super.c:2022
       mount_bdev+0x1f3/0x2e0 fs/super.c:1629
       legacy_get_tree+0x109/0x220 fs/fs_context.c:638
       vfs_get_tree+0x8c/0x370 fs/super.c:1750
       do_new_mount fs/namespace.c:3335 [inline]
       path_mount+0x1492/0x1ed0 fs/namespace.c:3662
       do_mount fs/namespace.c:3675 [inline]
       __do_sys_mount fs/namespace.c:3884 [inline]
       __se_sys_mount fs/namespace.c:3861 [inline]
       __x64_sys_mount+0x293/0x310 fs/namespace.c:3861
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&type->s_umount_key#25){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
       lock_acquire kernel/locking/lockdep.c:5753 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
       down_read+0x9c/0x470 kernel/locking/rwsem.c:1520
       __super_lock fs/super.c:58 [inline]
       super_lock+0x23c/0x380 fs/super.c:117
       super_lock_shared fs/super.c:146 [inline]
       super_lock_shared_active fs/super.c:1431 [inline]
       fs_bdev_sync+0x94/0x1b0 fs/super.c:1466
       blkdev_flushbuf block/ioctl.c:372 [inline]
       blkdev_common_ioctl+0x1550/0x1ce0 block/ioctl.c:502
       blkdev_ioctl+0x249/0x770 block/ioctl.c:624
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:871 [inline]
       __se_sys_ioctl fs/ioctl.c:857 [inline]
       __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#25 --> &disk->open_mutex --> &bdev->bd_holder_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&bdev->bd_holder_lock);
                               lock(&disk->open_mutex);
                               lock(&bdev->bd_holder_lock);
  rlock(&type->s_umount_key#25);

 *** DEADLOCK ***

1 lock held by syz-executor.0/8792:
 #0: ffff888148439388 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:370 [inline]
 #0: ffff888148439388 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x14e9/0x1ce0 block/ioctl.c:502

stack backtrace:
CPU: 0 PID: 8792 Comm: syz-executor.0 Not tainted 6.6.0-rc2-syzkaller-00018-g2cf0f7156238 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/04/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x311/0x3f0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
 lock_acquire kernel/locking/lockdep.c:5753 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
 down_read+0x9c/0x470 kernel/locking/rwsem.c:1520
 __super_lock fs/super.c:58 [inline]
 super_lock+0x23c/0x380 fs/super.c:117
 super_lock_shared fs/super.c:146 [inline]
 super_lock_shared_active fs/super.c:1431 [inline]
 fs_bdev_sync+0x94/0x1b0 fs/super.c:1466
 blkdev_flushbuf block/ioctl.c:372 [inline]
 blkdev_common_ioctl+0x1550/0x1ce0 block/ioctl.c:502
 blkdev_ioctl+0x249/0x770 block/ioctl.c:624
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f18ae67cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f18af4830c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f18ae79bf80 RCX: 00007f18ae67cae9
RDX: 0000000000000003 RSI: 0000000000001261 RDI: 0000000000000003
RBP: 00007f18ae6c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f18ae79bf80 R15: 00007ffc4185a478
 </TASK>

---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock

From: syzbot @ 2023-10-08 15:14 UTC
To: chao, hdanton, jaegeuk, linux-f2fs-devel, linux-fsdevel, linux-kernel, reiserfs-devel, syzkaller-bugs, terrelln

syzbot has found a reproducer for the following issue on:

HEAD commit:    19af4a4ed414 Merge branch 'for-next/core', remote-tracking..
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=1627a911680000
kernel config:  https://syzkaller.appspot.com/x/.config?x=80eedef55cd21fa4
dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=13deb1c9680000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1006a759680000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/702d996331e0/disk-19af4a4e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2a48ce0aeb32/vmlinux-19af4a4e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/332eb4a803d2/Image-19af4a4e.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/97d89134ed25/mount_2.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+062317ea1d0a6d5e29e7@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.6.0-rc4-syzkaller-g19af4a4ed414 #0 Not tainted
------------------------------------------------------
syz-executor254/6025 is trying to acquire lock:
ffff0000db54a0e0 (&type->s_umount_key#25){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
ffff0000db54a0e0 (&type->s_umount_key#25){++++}-{3:3}, at: super_lock+0x160/0x328 fs/super.c:117

but task is already holding lock:
ffff0000c1540c88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:370 [inline]
ffff0000c1540c88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x7fc/0x2884 block/ioctl.c:502

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (&bdev->bd_holder_lock){+.+.}-{3:3}:
       __mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
       bd_finish_claiming+0x218/0x3dc block/bdev.c:566
       blkdev_get_by_dev+0x3f4/0x55c block/bdev.c:799
       setup_bdev_super+0x68/0x51c fs/super.c:1484
       mount_bdev+0x1a0/0x2b4 fs/super.c:1626
       get_super_block+0x44/0x58 fs/reiserfs/super.c:2601
       legacy_get_tree+0xd4/0x16c fs/fs_context.c:638
       vfs_get_tree+0x90/0x288 fs/super.c:1750
       do_new_mount+0x25c/0x8c8 fs/namespace.c:3335
       path_mount+0x590/0xe04 fs/namespace.c:3662
       init_mount+0xe4/0x144 fs/init.c:25
       do_mount_root+0x104/0x3e4 init/do_mounts.c:166
       mount_root_generic+0x1f0/0x594 init/do_mounts.c:205
       mount_block_root+0x6c/0x7c init/do_mounts.c:378
       mount_root+0xb4/0xe4 init/do_mounts.c:405
       prepare_namespace+0xdc/0x11c init/do_mounts.c:489
       kernel_init_freeable+0x35c/0x474 init/main.c:1560
       kernel_init+0x24/0x29c init/main.c:1437
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:857

-> #2 (bdev_lock){+.+.}-{3:3}:
       __mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
       bd_finish_claiming+0x84/0x3dc block/bdev.c:557
       blkdev_get_by_dev+0x3f4/0x55c block/bdev.c:799
       setup_bdev_super+0x68/0x51c fs/super.c:1484
       mount_bdev+0x1a0/0x2b4 fs/super.c:1626
       get_super_block+0x44/0x58 fs/reiserfs/super.c:2601
       legacy_get_tree+0xd4/0x16c fs/fs_context.c:638
       vfs_get_tree+0x90/0x288 fs/super.c:1750
       do_new_mount+0x25c/0x8c8 fs/namespace.c:3335
       path_mount+0x590/0xe04 fs/namespace.c:3662
       init_mount+0xe4/0x144 fs/init.c:25
       do_mount_root+0x104/0x3e4 init/do_mounts.c:166
       mount_root_generic+0x1f0/0x594 init/do_mounts.c:205
       mount_block_root+0x6c/0x7c init/do_mounts.c:378
       mount_root+0xb4/0xe4 init/do_mounts.c:405
       prepare_namespace+0xdc/0x11c init/do_mounts.c:489
       kernel_init_freeable+0x35c/0x474 init/main.c:1560
       kernel_init+0x24/0x29c init/main.c:1437
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:857

-> #1 (&disk->open_mutex){+.+.}-{3:3}:
       __mutex_lock_common+0x190/0x21a0 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
       blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
       journal_init_dev fs/reiserfs/journal.c:2626 [inline]
       journal_init+0xa60/0x1e44 fs/reiserfs/journal.c:2786
       reiserfs_fill_super+0xd50/0x2028 fs/reiserfs/super.c:2022
       mount_bdev+0x1e8/0x2b4 fs/super.c:1629
       get_super_block+0x44/0x58 fs/reiserfs/super.c:2601
       legacy_get_tree+0xd4/0x16c fs/fs_context.c:638
       vfs_get_tree+0x90/0x288 fs/super.c:1750
       do_new_mount+0x25c/0x8c8 fs/namespace.c:3335
       path_mount+0x590/0xe04 fs/namespace.c:3662
       do_mount fs/namespace.c:3675 [inline]
       __do_sys_mount fs/namespace.c:3884 [inline]
       __se_sys_mount fs/namespace.c:3861 [inline]
       __arm64_sys_mount+0x45c/0x594 fs/namespace.c:3861
       __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
       el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
       el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
       el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595

-> #0 (&type->s_umount_key#25){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x3370/0x75e8 kernel/locking/lockdep.c:5136
       lock_acquire+0x23c/0x71c kernel/locking/lockdep.c:5753
       down_read+0x58/0x2fc kernel/locking/rwsem.c:1520
       __super_lock fs/super.c:58 [inline]
       super_lock+0x160/0x328 fs/super.c:117
       super_lock_shared fs/super.c:146 [inline]
       super_lock_shared_active fs/super.c:1431 [inline]
       fs_bdev_sync+0xa4/0x168 fs/super.c:1466
       blkdev_flushbuf block/ioctl.c:372 [inline]
       blkdev_common_ioctl+0x848/0x2884 block/ioctl.c:502
       blkdev_ioctl+0x35c/0xae4 block/ioctl.c:624
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:871 [inline]
       __se_sys_ioctl fs/ioctl.c:857 [inline]
       __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:857
       __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
       el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
       el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
       el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#25 --> bdev_lock --> &bdev->bd_holder_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&bdev->bd_holder_lock);
                               lock(bdev_lock);
                               lock(&bdev->bd_holder_lock);
  rlock(&type->s_umount_key#25);

 *** DEADLOCK ***

1 lock held by syz-executor254/6025:
 #0: ffff0000c1540c88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:370 [inline]
 #0: ffff0000c1540c88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x7fc/0x2884 block/ioctl.c:502

stack backtrace:
CPU: 0 PID: 6025 Comm: syz-executor254 Not tainted 6.6.0-rc4-syzkaller-g19af4a4ed414 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/06/2023
Call trace:
 dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:233
 show_stack+0x2c/0x44 arch/arm64/kernel/stacktrace.c:240
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd0/0x124 lib/dump_stack.c:106
 dump_stack+0x1c/0x28 lib/dump_stack.c:113
 print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2060
 check_noncircular+0x310/0x404 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x3370/0x75e8 kernel/locking/lockdep.c:5136
 lock_acquire+0x23c/0x71c kernel/locking/lockdep.c:5753
 down_read+0x58/0x2fc kernel/locking/rwsem.c:1520
 __super_lock fs/super.c:58 [inline]
 super_lock+0x160/0x328 fs/super.c:117
 super_lock_shared fs/super.c:146 [inline]
 super_lock_shared_active fs/super.c:1431 [inline]
 fs_bdev_sync+0xa4/0x168 fs/super.c:1466
 blkdev_flushbuf block/ioctl.c:372 [inline]
 blkdev_common_ioctl+0x848/0x2884 block/ioctl.c:502
 blkdev_ioctl+0x35c/0xae4 block/ioctl.c:624
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:857
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595

---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock

From: syzbot @ 2023-10-09 2:05 UTC
To: axboe, brauner, chao, daniel.vetter, hdanton, jack, jaegeuk, jinpu.wang, linux-f2fs-devel, linux-fsdevel, linux-kernel, mairacanal, mcanal, reiserfs-devel, syzkaller-bugs, terrelln, willy, yukuai3

syzbot has bisected this issue to:

commit 7908632f2927b65f7486ae6b67c24071666ba43f
Author: Maíra Canal <mcanal@igalia.com>
Date:   Thu Sep 14 10:19:02 2023 +0000

    Revert "drm/vkms: Fix race-condition between the hrtimer and the atomic commit"

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=17fc0565680000
start commit:   2cf0f7156238 Merge tag 'nfs-for-6.6-2' of git://git.linux-..
git tree:       upstream
final oops:     https://syzkaller.appspot.com/x/report.txt?x=14020565680000
console output: https://syzkaller.appspot.com/x/log.txt?x=10020565680000
kernel config:  https://syzkaller.appspot.com/x/.config?x=710dc49bece494df
dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=107e9518680000

Reported-by: syzbot+062317ea1d0a6d5e29e7@syzkaller.appspotmail.com
Fixes: 7908632f2927 ("Revert "drm/vkms: Fix race-condition between the hrtimer and the atomic commit"")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock

From: Christian Brauner @ 2023-10-09 12:37 UTC
To: syzbot
Cc: axboe, chao, daniel.vetter, hdanton, jack, jaegeuk, jinpu.wang, linux-f2fs-devel, linux-fsdevel, linux-kernel, mairacanal, mcanal, reiserfs-devel, syzkaller-bugs, terrelln, willy, yukuai3

On Sun, Oct 08, 2023 at 07:05:32PM -0700, syzbot wrote:
> syzbot has bisected this issue to:
>
> commit 7908632f2927b65f7486ae6b67c24071666ba43f
> Author: Maíra Canal <mcanal@igalia.com>
> Date:   Thu Sep 14 10:19:02 2023 +0000
>
>     Revert "drm/vkms: Fix race-condition between the hrtimer and the atomic commit"
>
> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=17fc0565680000
> start commit:   2cf0f7156238 Merge tag 'nfs-for-6.6-2' of git://git.linux-..
> git tree:       upstream
> final oops:     https://syzkaller.appspot.com/x/report.txt?x=14020565680000
> console output: https://syzkaller.appspot.com/x/log.txt?x=10020565680000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=710dc49bece494df
> dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=107e9518680000
>
> Reported-by: syzbot+062317ea1d0a6d5e29e7@syzkaller.appspotmail.com
> Fixes: 7908632f2927 ("Revert "drm/vkms: Fix race-condition between the hrtimer and the atomic commit"")
>
> For information about bisection process see: https://goo.gl/tpsmEJ#bisection

The bisect is obviously bogus. I haven't seen that bug report before,
otherwise I would've already fixed this way earlier:

#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git b4/vfs-fixes-reiserfs
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock 2023-10-09 12:37 ` Christian Brauner @ 2023-10-09 14:19 ` syzbot 0 siblings, 0 replies; 9+ messages in thread From: syzbot @ 2023-10-09 14:19 UTC (permalink / raw) To: axboe, brauner, chao, daniel.vetter, hdanton, jack, jaegeuk, jinpu.wang, linux-f2fs-devel, linux-fsdevel, linux-kernel, mairacanal, mcanal, reiserfs-devel, syzkaller-bugs, terrelln, willy, yukuai3 Hello, syzbot has tested the proposed patch but the reproducer is still triggering an issue: INFO: task hung in blkdev_put INFO: task syz-executor.1:6676 blocked for more than 143 seconds. Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:syz-executor.1 state:D stack:0 pid:6676 ppid:6383 flags:0x00000005 Call trace: __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556 context_switch kernel/sched/core.c:5382 [inline] __schedule+0x1364/0x23b4 kernel/sched/core.c:6695 schedule+0xc4/0x170 kernel/sched/core.c:6771 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679 __mutex_lock kernel/locking/mutex.c:747 [inline] mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799 blkdev_put+0xec/0x740 block/bdev.c:884 blkdev_release+0x84/0x9c block/fops.c:604 __fput+0x324/0x7f8 fs/file_table.c:384 __fput_sync+0x60/0x9c fs/file_table.c:465 __do_sys_close fs/open.c:1572 [inline] __se_sys_close fs/open.c:1557 [inline] __arm64_sys_close+0x150/0x1e0 fs/open.c:1557 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline] invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595 INFO: task syz-executor.2:6678 blocked for more than 143 
seconds. Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:syz-executor.2 state:D stack:0 pid:6678 ppid:6377 flags:0x00000005 Call trace: __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556 context_switch kernel/sched/core.c:5382 [inline] __schedule+0x1364/0x23b4 kernel/sched/core.c:6695 schedule+0xc4/0x170 kernel/sched/core.c:6771 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679 __mutex_lock kernel/locking/mutex.c:747 [inline] mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799 blkdev_put+0xec/0x740 block/bdev.c:884 blkdev_release+0x84/0x9c block/fops.c:604 __fput+0x324/0x7f8 fs/file_table.c:384 __fput_sync+0x60/0x9c fs/file_table.c:465 __do_sys_close fs/open.c:1572 [inline] __se_sys_close fs/open.c:1557 [inline] __arm64_sys_close+0x150/0x1e0 fs/open.c:1557 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline] invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595 INFO: task syz-executor.0:6682 blocked for more than 143 seconds. Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
task:syz-executor.0  state:D stack:0 pid:6682 ppid:6389 flags:0x0000000d
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830
 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 bd_finish_claiming+0x218/0x3dc block/bdev.c:566
 blkdev_get_by_dev+0x3f4/0x55c block/bdev.c:799
 journal_init_dev fs/reiserfs/journal.c:2616 [inline]
 journal_init+0xb08/0x1e68 fs/reiserfs/journal.c:2783
 reiserfs_fill_super+0xd58/0x2058 fs/reiserfs/super.c:2029
 mount_bdev+0x1e8/0x2b4 fs/super.c:1629
 get_super_block+0x44/0x58 fs/reiserfs/super.c:2605
 legacy_get_tree+0xd4/0x16c fs/fs_context.c:638
 vfs_get_tree+0x90/0x288 fs/super.c:1750
 do_new_mount+0x25c/0x8c8 fs/namespace.c:3335
 path_mount+0x590/0xe04 fs/namespace.c:3662
 do_mount fs/namespace.c:3675 [inline]
 __do_sys_mount fs/namespace.c:3884 [inline]
 __se_sys_mount fs/namespace.c:3861 [inline]
 __arm64_sys_mount+0x45c/0x594 fs/namespace.c:3861
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
INFO: task syz-executor.3:6690 blocked for more than 143 seconds.
      Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:0 pid:6690 ppid:6373 flags:0x0000000d
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 super_lock+0x23c/0x328 fs/super.c:134
 super_lock_shared fs/super.c:146 [inline]
 super_lock_shared_active fs/super.c:1431 [inline]
 fs_bdev_sync+0xa4/0x168 fs/super.c:1466
 blkdev_flushbuf block/ioctl.c:372 [inline]
 blkdev_common_ioctl+0x848/0x2884 block/ioctl.c:502
 blkdev_ioctl+0x35c/0xae4 block/ioctl.c:624
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:857
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
INFO: task syz-executor.3:6695 blocked for more than 143 seconds.
      Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:0 pid:6695 ppid:6373 flags:0x00000005
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830
 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
 loop_configure+0x15c/0xfd4 drivers/block/loop.c:1018
 lo_ioctl+0xc70/0x1d04
 blkdev_ioctl+0x3e4/0xae4 block/ioctl.c:630
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:857
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
INFO: task syz-executor.5:6696 blocked for more than 143 seconds.
      Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.5  state:D stack:0 pid:6696 ppid:6381 flags:0x00000005
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830
 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
 blkdev_open+0x128/0x2b0 block/fops.c:589
 do_dentry_open+0x6fc/0x118c fs/open.c:929
 vfs_open+0x7c/0x90 fs/open.c:1063
 do_open fs/namei.c:3639 [inline]
 path_openat+0x1f2c/0x27f8 fs/namei.c:3796
 do_filp_open+0x1bc/0x3cc fs/namei.c:3823
 do_sys_openat2+0x124/0x1b8 fs/open.c:1422
 do_sys_open fs/open.c:1437 [inline]
 __do_sys_openat fs/open.c:1453 [inline]
 __se_sys_openat fs/open.c:1448 [inline]
 __arm64_sys_openat+0x1f0/0x240 fs/open.c:1448
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
INFO: task syz-executor.5:6703 blocked for more than 143 seconds.
      Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.5  state:D stack:0 pid:6703 ppid:6381 flags:0x00000005
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830
 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
 loop_configure+0x15c/0xfd4 drivers/block/loop.c:1018
 lo_ioctl+0xc70/0x1d04
 blkdev_ioctl+0x3e4/0xae4 block/ioctl.c:630
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:857
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
INFO: task syz-executor.4:6698 blocked for more than 143 seconds.
      Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4  state:D stack:0 pid:6698 ppid:6384 flags:0x00000005
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830
 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
 blkdev_open+0x128/0x2b0 block/fops.c:589
 do_dentry_open+0x6fc/0x118c fs/open.c:929
 vfs_open+0x7c/0x90 fs/open.c:1063
 do_open fs/namei.c:3639 [inline]
 path_openat+0x1f2c/0x27f8 fs/namei.c:3796
 do_filp_open+0x1bc/0x3cc fs/namei.c:3823
 do_sys_openat2+0x124/0x1b8 fs/open.c:1422
 do_sys_open fs/open.c:1437 [inline]
 __do_sys_openat fs/open.c:1453 [inline]
 __se_sys_openat fs/open.c:1448 [inline]
 __arm64_sys_openat+0x1f0/0x240 fs/open.c:1448
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595
INFO: task syz-executor.4:6704 blocked for more than 143 seconds.
      Not tainted 6.6.0-rc5-syzkaller-gb6ab131813c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4  state:D stack:0 pid:6704 ppid:6384 flags:0x00000005
Call trace:
 __switch_to+0x320/0x754 arch/arm64/kernel/process.c:556
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x1364/0x23b4 kernel/sched/core.c:6695
 schedule+0xc4/0x170 kernel/sched/core.c:6771
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6830
 __mutex_lock_common+0xbd8/0x21a0 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
 loop_configure+0x15c/0xfd4 drivers/block/loop.c:1018
 lo_ioctl+0xc70/0x1d04
 blkdev_ioctl+0x3e4/0xae4 block/ioctl.c:630
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __arm64_sys_ioctl+0x14c/0x1c8 fs/ioctl.c:857
 __invoke_syscall arch/arm64/kernel/syscall.c:37 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:51
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:136
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:155
 el0_svc+0x58/0x16c arch/arm64/kernel/entry-common.c:678
 el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:595

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffff80008e3739c0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:302
2 locks held by kworker/u4:6/235:
2 locks held by getty/5770:
 #0: ffff0000d6cf20a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
 #1: ffff8000959f02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1214 drivers/tty/n_tty.c:2206
1 lock held by syz-executor.1/6676:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xec/0x740 block/bdev.c:884
1 lock held by syz-executor.2/6678:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xec/0x740 block/bdev.c:884
3 locks held by syz-executor.0/6682:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
 #1: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_finish_claiming+0x84/0x3dc block/bdev.c:557
 #2: ffff0000c1543a88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: bd_finish_claiming+0x218/0x3dc block/bdev.c:566
1 lock held by syz-executor.3/6690:
 #0: ffff0000c1543a88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_flushbuf block/ioctl.c:370 [inline]
 #0: ffff0000c1543a88 (&bdev->bd_holder_lock){+.+.}-{3:3}, at: blkdev_common_ioctl+0x7fc/0x2884 block/ioctl.c:502
1 lock held by syz-executor.3/6695:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.5/6696:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.5/6703:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.4/6698:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.4/6704:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.0/6872:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.1/6939:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.1/6940:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.2/6956:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.2/6957:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.5/6959:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.5/6960:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.3/6976:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.3/6977:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.4/6979:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.4/6980:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.1/6999:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.1/7000:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.2/7054:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.2/7055:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.5/7067:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.5/7068:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.3/7075:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.3/7078:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508
1 lock held by syz-executor.4/7083:
 #0: ffff0000c9ce34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x114/0x55c block/bdev.c:786
1 lock held by syz-executor.4/7084:
 #0: ffff80008e1748a8 (bdev_lock){+.+.}-{3:3}, at: bd_prepare_to_claim+0x1a4/0x49c block/bdev.c:508

=============================================

Tested on:

commit:         b6ab1318 reiserfs: fix journal device opening
git tree:       https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git b4/vfs-fixes-reiserfs
console output: https://syzkaller.appspot.com/x/log.txt?x=125bdcde680000
kernel config:  https://syzkaller.appspot.com/x/.config?x=1b8c825e0d5f3f72
dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Note: no patches were applied.
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock
  2023-09-03 22:30 [syzbot] [f2fs?] possible deadlock in super_lock syzbot
                   ` (2 preceding siblings ...)
  2023-10-09  2:05 ` syzbot
@ 2023-12-24 16:40 ` syzbot
  2023-12-28 10:50   ` Christian Brauner
  3 siblings, 1 reply; 9+ messages in thread
From: syzbot @ 2023-12-24 16:40 UTC (permalink / raw)
To: axboe, brauner, chao, christian, daniel.vetter, hch, hdanton, jack,
    jaegeuk, jinpu.wang, linux-f2fs-devel, linux-fsdevel, linux-kernel,
    mairacanal, mcanal, reiserfs-devel, syzkaller-bugs, terrelln, willy,
    yukuai3

syzbot suspects this issue was fixed by commit:

commit fd1464105cb37a3b50a72c1d2902e97a71950af8
Author: Jan Kara <jack@suse.cz>
Date:   Wed Oct 18 15:29:24 2023 +0000

    fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=14639595e80000
start commit:   2cf0f7156238 Merge tag 'nfs-for-6.6-2' of git://git.linux-..
git tree:       upstream
kernel config:  https://syzkaller.appspot.com/x/.config?x=710dc49bece494df
dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=107e9518680000

If the result looks correct, please mark the issue as fixed by replying with:

#syz fix: fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock

For information about bisection process see: https://goo.gl/tpsmEJ#bisection
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock
  2023-12-24 16:40 ` syzbot
@ 2023-12-28 10:50 ` Christian Brauner
  2024-01-02 12:14   ` Jan Kara
  0 siblings, 1 reply; 9+ messages in thread
From: Christian Brauner @ 2023-12-28 10:50 UTC (permalink / raw)
To: syzbot
Cc: axboe, chao, christian, daniel.vetter, hch, hdanton, jack, jaegeuk,
    jinpu.wang, linux-f2fs-devel, linux-fsdevel, linux-kernel, mairacanal,
    mcanal, reiserfs-devel, syzkaller-bugs, terrelln, willy, yukuai3

On Sun, Dec 24, 2023 at 08:40:05AM -0800, syzbot wrote:
> syzbot suspects this issue was fixed by commit:
>
> commit fd1464105cb37a3b50a72c1d2902e97a71950af8
> Author: Jan Kara <jack@suse.cz>
> Date: Wed Oct 18 15:29:24 2023 +0000
>
>     fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock
>
> bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=14639595e80000
> start commit: 2cf0f7156238 Merge tag 'nfs-for-6.6-2' of git://git.linux-..
> git tree: upstream
> kernel config: https://syzkaller.appspot.com/x/.config?x=710dc49bece494df
> dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=107e9518680000
>
> If the result looks correct, please mark the issue as fixed by replying with:
>
> #syz fix: fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock
>
> For information about bisection process see: https://goo.gl/tpsmEJ#bisection

Fwiw, this was always a false-positive. But we also reworked the locking
so that even the false-positive cannot be triggered anymore. So yay!
* Re: [syzbot] [reiserfs?] possible deadlock in super_lock
  2023-12-28 10:50 ` Christian Brauner
@ 2024-01-02 12:14 ` Jan Kara
  0 siblings, 0 replies; 9+ messages in thread
From: Jan Kara @ 2024-01-02 12:14 UTC (permalink / raw)
To: syzbot
Cc: Christian Brauner, axboe, chao, christian, daniel.vetter, hch,
    hdanton, jack, jaegeuk, jinpu.wang, linux-f2fs-devel, linux-fsdevel,
    linux-kernel, mairacanal, mcanal, reiserfs-devel, syzkaller-bugs,
    terrelln, willy, yukuai3

On Thu 28-12-23 11:50:32, Christian Brauner wrote:
> On Sun, Dec 24, 2023 at 08:40:05AM -0800, syzbot wrote:
> > syzbot suspects this issue was fixed by commit:
> >
> > commit fd1464105cb37a3b50a72c1d2902e97a71950af8
> > Author: Jan Kara <jack@suse.cz>
> > Date: Wed Oct 18 15:29:24 2023 +0000
> >
> >     fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock
> >
> > bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=14639595e80000
> > start commit: 2cf0f7156238 Merge tag 'nfs-for-6.6-2' of git://git.linux-..
> > git tree: upstream
> > kernel config: https://syzkaller.appspot.com/x/.config?x=710dc49bece494df
> > dashboard link: https://syzkaller.appspot.com/bug?extid=062317ea1d0a6d5e29e7
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=107e9518680000
> >
> > If the result looks correct, please mark the issue as fixed by replying with:
> >
> > #syz fix: fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock
> >
> > For information about bisection process see: https://goo.gl/tpsmEJ#bisection
>
> Fwiw, this was always a false-positive. But we also reworked the locking
> that even the false-positive cannot be triggered anymore. So yay!

Yup, nice. I think you need to start the line with the syz command, so:

#syz fix: fs: Avoid grabbing sb->s_umount under bdev->bd_holder_lock

								Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
end of thread, other threads:[~2024-01-02 12:15 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-03 22:30 [syzbot] [f2fs?] possible deadlock in super_lock syzbot
2023-09-20  9:13 ` [syzbot] [reiserfs?] " syzbot
2023-10-08 15:14 ` syzbot
2023-10-09  2:05 ` syzbot
2023-10-09 12:37   ` Christian Brauner
2023-10-09 14:19     ` syzbot
2023-12-24 16:40 ` syzbot
2023-12-28 10:50   ` Christian Brauner
2024-01-02 12:14     ` Jan Kara
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).