* [syzbot] [block?] possible deadlock in blk_mq_submit_bio
@ 2024-11-23 15:37 syzbot
  2024-11-23 23:59 ` Hillf Danton
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: syzbot @ 2024-11-23 15:37 UTC (permalink / raw)
  To: axboe, linux-block, linux-kernel, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    06afb0f36106 Merge tag 'trace-v6.13' of git://git.kernel.o..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=148bfec0580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=b011a14ee4cb9480
dashboard link: https://syzkaller.appspot.com/bug?extid=5218c85078236fc46227
compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: i386

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-06afb0f3.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/aae0561fd279/vmlinux-06afb0f3.xz
kernel image: https://storage.googleapis.com/syzbot-assets/faa3af3fa7ce/bzImage-06afb0f3.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5218c85078236fc46227@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.12.0-syzkaller-07834-g06afb0f36106 #0 Not tainted
------------------------------------------------------
kswapd0/112 is trying to acquire lock:
ffff88801f3f1438 (&q->q_usage_counter(io)#68){++++}-{0:0}, at: bio_queue_enter block/blk.h:79 [inline]
ffff88801f3f1438 (&q->q_usage_counter(io)#68){++++}-{0:0}, at: blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3092

but task is already holding lock:
ffffffff8df4de60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xcd9/0x18f0 mm/vmscan.c:6976

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3851 [inline]
       fs_reclaim_acquire+0x102/0x150 mm/page_alloc.c:3865
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4036 [inline]
       slab_alloc_node mm/slub.c:4114 [inline]
       __do_kmalloc_node mm/slub.c:4263 [inline]
       __kmalloc_node_noprof+0xb7/0x440 mm/slub.c:4270
       __kvmalloc_node_noprof+0xad/0x1a0 mm/util.c:658
       sbitmap_init_node+0x1ca/0x770 lib/sbitmap.c:132
       scsi_realloc_sdev_budget_map+0x2c7/0x610 drivers/scsi/scsi_scan.c:246
       scsi_add_lun+0x11b4/0x1fd0 drivers/scsi/scsi_scan.c:1106
       scsi_probe_and_add_lun+0x4fa/0xda0 drivers/scsi/scsi_scan.c:1287
       __scsi_add_device+0x24b/0x290 drivers/scsi/scsi_scan.c:1622
       ata_scsi_scan_host+0x215/0x780 drivers/ata/libata-scsi.c:4575
       async_run_entry_fn+0x9c/0x530 kernel/async.c:129
       process_one_work+0x958/0x1b30 kernel/workqueue.c:3229
       process_scheduled_works kernel/workqueue.c:3310 [inline]
       worker_thread+0x6c8/0xf00 kernel/workqueue.c:3391
       kthread+0x2c1/0x3a0 kernel/kthread.c:389
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#68){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
       __bio_queue_enter+0x4c6/0x740 block/blk-core.c:361
       bio_queue_enter block/blk.h:79 [inline]
       blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3092
       __submit_bio+0x384/0x540 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x698/0xd70 block/blk-core.c:739
       submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
       swap_writepage_bdev_async mm/page_io.c:449 [inline]
       __swap_writepage+0x3a3/0xf50 mm/page_io.c:472
       swap_writepage+0x403/0x1040 mm/page_io.c:288
       pageout+0x3b2/0xaa0 mm/vmscan.c:689
       shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1367
       evict_folios+0x6d6/0x1970 mm/vmscan.c:4589
       try_to_shrink_lruvec+0x612/0x9b0 mm/vmscan.c:4784
       shrink_one+0x3e3/0x7b0 mm/vmscan.c:4822
       shrink_many mm/vmscan.c:4885 [inline]
       lru_gen_shrink_node mm/vmscan.c:4963 [inline]
       shrink_node+0xbbc/0x3ed0 mm/vmscan.c:5943
       kswapd_shrink_node mm/vmscan.c:6771 [inline]
       balance_pgdat+0xc1f/0x18f0 mm/vmscan.c:6963
       kswapd+0x5f8/0xc30 mm/vmscan.c:7232
       kthread+0x2c1/0x3a0 kernel/kthread.c:389
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#68);
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io)#68);

 *** DEADLOCK ***

1 lock held by kswapd0/112:
 #0: ffffffff8df4de60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xcd9/0x18f0 mm/vmscan.c:6976

stack backtrace:
CPU: 3 UID: 0 PID: 112 Comm: kswapd0 Not tainted 6.12.0-syzkaller-07834-g06afb0f36106 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x41c/0x610 kernel/locking/lockdep.c:2074
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
 __bio_queue_enter+0x4c6/0x740 block/blk-core.c:361
 bio_queue_enter block/blk.h:79 [inline]
 blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3092
 __submit_bio+0x384/0x540 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x698/0xd70 block/blk-core.c:739
 submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
 swap_writepage_bdev_async mm/page_io.c:449 [inline]
 __swap_writepage+0x3a3/0xf50 mm/page_io.c:472
 swap_writepage+0x403/0x1040 mm/page_io.c:288
 pageout+0x3b2/0xaa0 mm/vmscan.c:689
 shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1367
 evict_folios+0x6d6/0x1970 mm/vmscan.c:4589
 try_to_shrink_lruvec+0x612/0x9b0 mm/vmscan.c:4784
 shrink_one+0x3e3/0x7b0 mm/vmscan.c:4822
 shrink_many mm/vmscan.c:4885 [inline]
 lru_gen_shrink_node mm/vmscan.c:4963 [inline]
 shrink_node+0xbbc/0x3ed0 mm/vmscan.c:5943
 kswapd_shrink_node mm/vmscan.c:6771 [inline]
 balance_pgdat+0xc1f/0x18f0 mm/vmscan.c:6963
 kswapd+0x5f8/0xc30 mm/vmscan.c:7232
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
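
The two stacks above close the cycle: the SCSI scan path (#1) performs a
GFP_KERNEL allocation while the queue's q_usage_counter(io) dependency is
held, recording q_usage_counter(io) -> fs_reclaim, and kswapd (#0) takes the
opposite edge when swap writeout submits a bio into the queue. One
conventional way to break this class of inversion, assuming the allocation
really must happen while the queue is frozen, is to scope it with
memalloc_noio_save(). A minimal sketch, not the actual fix for this report:

#include <linux/sched/mm.h>
#include <linux/slab.h>

/*
 * Hedged sketch: a GFP_KERNEL allocation made while the queue is
 * frozen may enter direct reclaim, and reclaim may submit bios that
 * block on the same frozen queue, which is the cycle lockdep reports
 * above. memalloc_noio_save() masks __GFP_IO for the current task, so
 * any reclaim triggered inside the scope cannot issue block I/O.
 * alloc_while_queue_frozen() is a made-up name for illustration.
 */
static void *alloc_while_queue_frozen(size_t size)
{
	unsigned int noio_flags = memalloc_noio_save();
	void *p = kmalloc(size, GFP_KERNEL);

	memalloc_noio_restore(noio_flags);
	return p;
}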


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-11-23 15:37 [syzbot] [block?] possible deadlock in blk_mq_submit_bio syzbot
@ 2024-11-23 23:59 ` Hillf Danton
  2024-11-27  5:59 ` Ming Lei
  2024-12-09 19:19 ` syzbot
  2 siblings, 0 replies; 8+ messages in thread
From: Hillf Danton @ 2024-11-23 23:59 UTC (permalink / raw)
  To: Ming Lei
  Cc: syzbot, axboe, linux-block, Boqun Feng, linux-kernel,
	syzkaller-bugs

On Sat, 23 Nov 2024 07:37:22 -0800, syzbot wrote:
> syzbot found the following issue on:
> 
> HEAD commit:    06afb0f36106 Merge tag 'trace-v6.13' of git://git.kernel.o..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=148bfec0580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=b011a14ee4cb9480
> dashboard link: https://syzkaller.appspot.com/bug?extid=5218c85078236fc46227
> compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> userspace arch: i386
> 
> Unfortunately, I don't have any reproducer for this issue yet.
> 
> Downloadable assets:
> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-06afb0f3.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/aae0561fd279/vmlinux-06afb0f3.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/faa3af3fa7ce/bzImage-06afb0f3.xz
> 
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+5218c85078236fc46227@syzkaller.appspotmail.com
> 
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.12.0-syzkaller-07834-g06afb0f36106 #0 Not tainted
> ------------------------------------------------------
> kswapd0/112 is trying to acquire lock:
> ffff88801f3f1438 (&q->q_usage_counter(io)#68){++++}-{0:0}, at: bio_queue_enter block/blk.h:79 [inline]
> ffff88801f3f1438 (&q->q_usage_counter(io)#68){++++}-{0:0}, at: blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3092
> 
> but task is already holding lock:
> ffffffff8df4de60 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xcd9/0x18f0 mm/vmscan.c:6976
> 
> which lock already depends on the new lock.
> 
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #1 (fs_reclaim){+.+.}-{0:0}:
>        __fs_reclaim_acquire mm/page_alloc.c:3851 [inline]
>        fs_reclaim_acquire+0x102/0x150 mm/page_alloc.c:3865
>        might_alloc include/linux/sched/mm.h:318 [inline]
>        slab_pre_alloc_hook mm/slub.c:4036 [inline]
>        slab_alloc_node mm/slub.c:4114 [inline]
>        __do_kmalloc_node mm/slub.c:4263 [inline]
>        __kmalloc_node_noprof+0xb7/0x440 mm/slub.c:4270
>        __kvmalloc_node_noprof+0xad/0x1a0 mm/util.c:658
>        sbitmap_init_node+0x1ca/0x770 lib/sbitmap.c:132
>        scsi_realloc_sdev_budget_map+0x2c7/0x610 drivers/scsi/scsi_scan.c:246
>        scsi_add_lun+0x11b4/0x1fd0 drivers/scsi/scsi_scan.c:1106
>        scsi_probe_and_add_lun+0x4fa/0xda0 drivers/scsi/scsi_scan.c:1287
>        __scsi_add_device+0x24b/0x290 drivers/scsi/scsi_scan.c:1622
>        ata_scsi_scan_host+0x215/0x780 drivers/ata/libata-scsi.c:4575
>        async_run_entry_fn+0x9c/0x530 kernel/async.c:129
>        process_one_work+0x958/0x1b30 kernel/workqueue.c:3229
>        process_scheduled_works kernel/workqueue.c:3310 [inline]
>        worker_thread+0x6c8/0xf00 kernel/workqueue.c:3391
>        kthread+0x2c1/0x3a0 kernel/kthread.c:389
>        ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
>        ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
> 
> -> #0 (&q->q_usage_counter(io)#68){++++}-{0:0}:
>        check_prev_add kernel/locking/lockdep.c:3161 [inline]
>        check_prevs_add kernel/locking/lockdep.c:3280 [inline]
>        validate_chain kernel/locking/lockdep.c:3904 [inline]
>        __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
>        lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
>        __bio_queue_enter+0x4c6/0x740 block/blk-core.c:361
>        bio_queue_enter block/blk.h:79 [inline]

Another splat in bio_queue_enter() [1]

[1] https://lore.kernel.org/lkml/20241104112732.3144-1-hdanton@sina.com/
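
For context, bio_queue_enter() is essentially a tryget on the queue's usage
counter with a sleeping slow path, which is what lets lockdep model entering
the queue as a lock acquisition. A conceptual sketch, not the verbatim
source (see block/blk.h for the real code):

/*
 * Conceptual sketch of bio_queue_enter(): the fast path takes a
 * reference on q->q_usage_counter; if the queue is frozen,
 * __bio_queue_enter() sleeps until the freeze is lifted. That sleep
 * is the wait edge lockdep turns into a lock acquisition in these
 * reports.
 */
static int bio_queue_enter_sketch(struct bio *bio)
{
	struct request_queue *q = bdev_get_queue(bio->bi_bdev);

	if (percpu_ref_tryget_live(&q->q_usage_counter))
		return 0;
	return __bio_queue_enter(q, bio);	/* may sleep on freeze */
}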

>        blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3092
>        __submit_bio+0x384/0x540 block/blk-core.c:629
>        __submit_bio_noacct_mq block/blk-core.c:710 [inline]
>        submit_bio_noacct_nocheck+0x698/0xd70 block/blk-core.c:739
>        submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
>        swap_writepage_bdev_async mm/page_io.c:449 [inline]
>        __swap_writepage+0x3a3/0xf50 mm/page_io.c:472
>        swap_writepage+0x403/0x1040 mm/page_io.c:288
>        pageout+0x3b2/0xaa0 mm/vmscan.c:689
>        shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1367
>        evict_folios+0x6d6/0x1970 mm/vmscan.c:4589
>        try_to_shrink_lruvec+0x612/0x9b0 mm/vmscan.c:4784
>        shrink_one+0x3e3/0x7b0 mm/vmscan.c:4822
>        shrink_many mm/vmscan.c:4885 [inline]
>        lru_gen_shrink_node mm/vmscan.c:4963 [inline]
>        shrink_node+0xbbc/0x3ed0 mm/vmscan.c:5943
>        kswapd_shrink_node mm/vmscan.c:6771 [inline]
>        balance_pgdat+0xc1f/0x18f0 mm/vmscan.c:6963
>        kswapd+0x5f8/0xc30 mm/vmscan.c:7232
>        kthread+0x2c1/0x3a0 kernel/kthread.c:389
>        ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
>        ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
> 
> other info that might help us debug this:
> 
>  Possible unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(fs_reclaim);
>                                lock(&q->q_usage_counter(io)#68);
>                                lock(fs_reclaim);
>   rlock(&q->q_usage_counter(io)#68);
> 
>  *** DEADLOCK ***

* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-11-23 15:37 [syzbot] [block?] possible deadlock in blk_mq_submit_bio syzbot
  2024-11-23 23:59 ` Hillf Danton
@ 2024-11-27  5:59 ` Ming Lei
  2024-11-27  5:59   ` syzbot
  2024-12-09 19:19 ` syzbot
  2 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2024-11-27  5:59 UTC (permalink / raw)
  To: syzbot; +Cc: axboe, linux-block, linux-kernel, syzkaller-bugs

On Sat, Nov 23, 2024 at 11:37 PM syzbot
<syzbot+5218c85078236fc46227@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    06afb0f36106 Merge tag 'trace-v6.13' of git://git.kernel.o..
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=148bfec0580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=b011a14ee4cb9480
> dashboard link: https://syzkaller.appspot.com/bug?extid=5218c85078236fc46227
> compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
> userspace arch: i386
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> Downloadable assets:
> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-06afb0f3.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/aae0561fd279/vmlinux-06afb0f3.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/faa3af3fa7ce/bzImage-06afb0f3.xz
...

> If the report is already addressed, let syzbot know by replying with:
> #syz fix: exact-commit-title
>
> If you want to overwrite the report's subsystems, reply with:
> #syz set subsystems: new-subsystem
> (See the list of subsystem names on the web dashboard)
>
> If the report is a duplicate of another one, reply with:
> #syz dup: exact-subject-of-another-report

#syz test: https://github.com/ming1/linux v6.13/block-fix


* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-11-27  5:59 ` Ming Lei
@ 2024-11-27  5:59   ` syzbot
  0 siblings, 0 replies; 8+ messages in thread
From: syzbot @ 2024-11-27  5:59 UTC (permalink / raw)
  To: ming.lei; +Cc: axboe, linux-block, linux-kernel, ming.lei, syzkaller-bugs

> On Sat, Nov 23, 2024 at 11:37 PM syzbot
> <syzbot+5218c85078236fc46227@syzkaller.appspotmail.com> wrote:
>>
>> Hello,
>>
>> syzbot found the following issue on:
>>
>> HEAD commit:    06afb0f36106 Merge tag 'trace-v6.13' of git://git.kernel.o..
>> git tree:       upstream
>> console output: https://syzkaller.appspot.com/x/log.txt?x=148bfec0580000
>> kernel config:  https://syzkaller.appspot.com/x/.config?x=b011a14ee4cb9480
>> dashboard link: https://syzkaller.appspot.com/bug?extid=5218c85078236fc46227
>> compiler:       gcc (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40
>> userspace arch: i386
>>
>> Unfortunately, I don't have any reproducer for this issue yet.
>>
>> Downloadable assets:
>> disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/7feb34a89c2a/non_bootable_disk-06afb0f3.raw.xz
>> vmlinux: https://storage.googleapis.com/syzbot-assets/aae0561fd279/vmlinux-06afb0f3.xz
>> kernel image: https://storage.googleapis.com/syzbot-assets/faa3af3fa7ce/bzImage-06afb0f3.xz
> ...
>
>> If the report is already addressed, let syzbot know by replying with:
>> #syz fix: exact-commit-title
>>
>> If you want to overwrite the report's subsystems, reply with:
>> #syz set subsystems: new-subsystem
>> (See the list of subsystem names on the web dashboard)
>>
>> If the report is a duplicate of another one, reply with:
>> #syz dup: exact-subject-of-another-report
>
> #syz test: https://github.com/ming1/linux v6.13/block-fix

This crash does not have a reproducer. I cannot test it.

>

* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-11-23 15:37 [syzbot] [block?] possible deadlock in blk_mq_submit_bio syzbot
  2024-11-23 23:59 ` Hillf Danton
  2024-11-27  5:59 ` Ming Lei
@ 2024-12-09 19:19 ` syzbot
  2024-12-10 10:44   ` Hillf Danton
  2 siblings, 1 reply; 8+ messages in thread
From: syzbot @ 2024-12-09 19:19 UTC (permalink / raw)
  To: axboe, boqun.feng, hdanton, linux-block, linux-kernel, ming.lei,
	syzkaller-bugs

syzbot has found a reproducer for the following issue on:

HEAD commit:    fac04efc5c79 Linux 6.13-rc2
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=100313e8580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=696fb014d05da3a3
dashboard link: https://syzkaller.appspot.com/bug?extid=5218c85078236fc46227
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=147528f8580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/d219b605f6a9/disk-fac04efc.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/06776a2c689e/vmlinux-fac04efc.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8ab42bd03182/Image-fac04efc.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/c3180abcd8eb/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+5218c85078236fc46227@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc2-syzkaller-gfac04efc5c79 #0 Not tainted
------------------------------------------------------
syz.0.15/6643 is trying to acquire lock:
ffff0000c9f19de8 (&q->q_usage_counter(io)#17){++++}-{0:0}, at: bio_queue_enter block/blk.h:79 [inline]
ffff0000c9f19de8 (&q->q_usage_counter(io)#17){++++}-{0:0}, at: blk_mq_submit_bio+0x11c8/0x2070 block/blk-mq.c:3092

but task is already holding lock:
ffff0000e7fcc0b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x144/0x1bc fs/hfsplus/bfind.c:28

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&tree->tree_lock){+.+.}-{4:4}:
       __mutex_lock_common+0x218/0x28f4 kernel/locking/mutex.c:585
       __mutex_lock kernel/locking/mutex.c:735 [inline]
       mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:787
       hfsplus_find_init+0x144/0x1bc fs/hfsplus/bfind.c:28
       hfsplus_cat_write_inode+0x1a4/0xd48 fs/hfsplus/inode.c:589
       hfsplus_write_inode+0x15c/0x4dc fs/hfsplus/super.c:161
       write_inode fs/fs-writeback.c:1525 [inline]
       __writeback_single_inode+0x5a0/0x15a4 fs/fs-writeback.c:1745
       writeback_single_inode+0x18c/0x554 fs/fs-writeback.c:1801
       sync_inode_metadata+0xc4/0x12c fs/fs-writeback.c:2871
       hfsplus_file_fsync+0xe4/0x4c8 fs/hfsplus/inode.c:316
       vfs_fsync_range fs/sync.c:187 [inline]
       vfs_fsync+0x154/0x18c fs/sync.c:201
       __loop_update_dio+0x248/0x420 drivers/block/loop.c:204
       loop_set_status+0x538/0x7f4 drivers/block/loop.c:1289
       lo_ioctl+0xf10/0x1c48
       blkdev_ioctl+0x3a8/0xa8c block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __arm64_sys_ioctl+0x14c/0x1cc fs/ioctl.c:892
       __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
       el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
       el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

-> #1 (&sb->s_type->i_mutex_key#20){+.+.}-{4:4}:
       down_write+0x50/0xc0 kernel/locking/rwsem.c:1577
       inode_lock include/linux/fs.h:818 [inline]
       hfsplus_file_fsync+0xd8/0x4c8 fs/hfsplus/inode.c:311
       vfs_fsync_range fs/sync.c:187 [inline]
       vfs_fsync+0x154/0x18c fs/sync.c:201
       __loop_update_dio+0x248/0x420 drivers/block/loop.c:204
       loop_set_status+0x538/0x7f4 drivers/block/loop.c:1289
       lo_ioctl+0xf10/0x1c48
       blkdev_ioctl+0x3a8/0xa8c block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __arm64_sys_ioctl+0x14c/0x1cc fs/ioctl.c:892
       __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
       el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
       el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

-> #0 (&q->q_usage_counter(io)#17){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x34f0/0x7904 kernel/locking/lockdep.c:5226
       lock_acquire+0x23c/0x724 kernel/locking/lockdep.c:5849
       __bio_queue_enter+0x4dc/0x5b0 block/blk-core.c:361
       bio_queue_enter block/blk.h:79 [inline]
       blk_mq_submit_bio+0x11c8/0x2070 block/blk-mq.c:3092
       __submit_bio+0x1a0/0x4f8 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x3bc/0xcbc block/blk-core.c:739
       submit_bio_noacct+0xc6c/0x166c block/blk-core.c:868
       submit_bio+0x374/0x564 block/blk-core.c:910
       submit_bh_wbc+0x3f8/0x4c8 fs/buffer.c:2814
       submit_bh fs/buffer.c:2819 [inline]
       block_read_full_folio+0x7ac/0x914 fs/buffer.c:2446
       hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
       filemap_read_folio+0x108/0x318 mm/filemap.c:2366
       do_read_cache_folio+0x368/0x5c0 mm/filemap.c:3826
       do_read_cache_page mm/filemap.c:3892 [inline]
       read_cache_page+0x6c/0x15c mm/filemap.c:3901
       read_mapping_page include/linux/pagemap.h:1005 [inline]
       __hfs_bnode_create+0x3dc/0x6d4 fs/hfsplus/bnode.c:440
       hfsplus_bnode_find+0x200/0xe60 fs/hfsplus/bnode.c:486
       hfsplus_brec_find+0x134/0x4a0 fs/hfsplus/bfind.c:172
       hfsplus_brec_read+0x38/0x128 fs/hfsplus/bfind.c:211
       hfsplus_find_cat+0x140/0x4a0 fs/hfsplus/catalog.c:202
       hfsplus_iget+0x34c/0x584 fs/hfsplus/super.c:83
       hfsplus_fill_super+0xa5c/0x16f8 fs/hfsplus/super.c:504
       get_tree_bdev_flags+0x38c/0x494 fs/super.c:1636
       get_tree_bdev+0x2c/0x3c fs/super.c:1659
       hfsplus_get_tree+0x28/0x38 fs/hfsplus/super.c:640
       vfs_get_tree+0x90/0x28c fs/super.c:1814
       do_new_mount+0x278/0x900 fs/namespace.c:3507
       path_mount+0x590/0xe04 fs/namespace.c:3834
       do_mount fs/namespace.c:3847 [inline]
       __do_sys_mount fs/namespace.c:4057 [inline]
       __se_sys_mount fs/namespace.c:4034 [inline]
       __arm64_sys_mount+0x4d4/0x5ac fs/namespace.c:4034
       __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
       el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
       el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

other info that might help us debug this:

Chain exists of:
  &q->q_usage_counter(io)#17 --> &sb->s_type->i_mutex_key#20 --> &tree->tree_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&sb->s_type->i_mutex_key#20);
                               lock(&tree->tree_lock);
  rlock(&q->q_usage_counter(io)#17);

 *** DEADLOCK ***

2 locks held by syz.0.15/6643:
 #0: ffff0000e90420e0 (&type->s_umount_key#51/1){+.+.}-{4:4}, at: alloc_super+0x1b0/0x834 fs/super.c:344
 #1: ffff0000e7fcc0b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x144/0x1bc fs/hfsplus/bfind.c:28

stack backtrace:
CPU: 0 UID: 0 PID: 6643 Comm: syz.0.15 Not tainted 6.13.0-rc2-syzkaller-gfac04efc5c79 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:484 (C)
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 print_circular_bug+0x154/0x1c0 kernel/locking/lockdep.c:2074
 check_noncircular+0x310/0x404 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x34f0/0x7904 kernel/locking/lockdep.c:5226
 lock_acquire+0x23c/0x724 kernel/locking/lockdep.c:5849
 __bio_queue_enter+0x4dc/0x5b0 block/blk-core.c:361
 bio_queue_enter block/blk.h:79 [inline]
 blk_mq_submit_bio+0x11c8/0x2070 block/blk-mq.c:3092
 __submit_bio+0x1a0/0x4f8 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x3bc/0xcbc block/blk-core.c:739
 submit_bio_noacct+0xc6c/0x166c block/blk-core.c:868
 submit_bio+0x374/0x564 block/blk-core.c:910
 submit_bh_wbc+0x3f8/0x4c8 fs/buffer.c:2814
 submit_bh fs/buffer.c:2819 [inline]
 block_read_full_folio+0x7ac/0x914 fs/buffer.c:2446
 hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
 filemap_read_folio+0x108/0x318 mm/filemap.c:2366
 do_read_cache_folio+0x368/0x5c0 mm/filemap.c:3826
 do_read_cache_page mm/filemap.c:3892 [inline]
 read_cache_page+0x6c/0x15c mm/filemap.c:3901
 read_mapping_page include/linux/pagemap.h:1005 [inline]
 __hfs_bnode_create+0x3dc/0x6d4 fs/hfsplus/bnode.c:440
 hfsplus_bnode_find+0x200/0xe60 fs/hfsplus/bnode.c:486
 hfsplus_brec_find+0x134/0x4a0 fs/hfsplus/bfind.c:172
 hfsplus_brec_read+0x38/0x128 fs/hfsplus/bfind.c:211
 hfsplus_find_cat+0x140/0x4a0 fs/hfsplus/catalog.c:202
 hfsplus_iget+0x34c/0x584 fs/hfsplus/super.c:83
 hfsplus_fill_super+0xa5c/0x16f8 fs/hfsplus/super.c:504
 get_tree_bdev_flags+0x38c/0x494 fs/super.c:1636
 get_tree_bdev+0x2c/0x3c fs/super.c:1659
 hfsplus_get_tree+0x28/0x38 fs/hfsplus/super.c:640
 vfs_get_tree+0x90/0x28c fs/super.c:1814
 do_new_mount+0x278/0x900 fs/namespace.c:3507
 path_mount+0x590/0xe04 fs/namespace.c:3834
 do_mount fs/namespace.c:3847 [inline]
 __do_sys_mount fs/namespace.c:4057 [inline]
 __se_sys_mount fs/namespace.c:4034 [inline]
 __arm64_sys_mount+0x4d4/0x5ac fs/namespace.c:4034
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
 el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600
syz.0.15: attempt to access beyond end of device
loop0: rw=0, sector=208, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 104, async page read
syz.0.15: attempt to access beyond end of device
loop0: rw=0, sector=210, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 105, async page read
syz.0.15: attempt to access beyond end of device
loop0: rw=0, sector=212, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 106, async page read
hfsplus: failed to load root directory
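
The cycle here has three edges: the loop ioctl path freezes the queue
(holding q_usage_counter(io)) and then fsyncs the backing file, taking
i_mutex and then hfsplus's tree_lock, while the mount path holds tree_lock
and must enter the same loop queue to read a btree node. The generic remedy
for this class is to finish the filesystem work before freezing the queue.
A minimal sketch along those lines, with assumed names, not the committed
fix:

#include <linux/fs.h>
#include <linux/blk-mq.h>

/*
 * Hedged sketch of the generic remedy: do the filesystem work first
 * (vfs_fsync() takes i_mutex and, on hfsplus, the btree tree_lock)
 * and only then freeze the queue, so those locks are never acquired
 * while q_usage_counter(io) is held. The function name and the
 * use_dio field update are illustrative assumptions.
 */
static void loop_update_dio_sketch(struct loop_device *lo, bool dio)
{
	/* flush the backing file while the queue is still live */
	vfs_fsync(lo->lo_backing_file, 0);

	blk_mq_freeze_queue(lo->lo_queue);
	lo->use_dio = dio;
	blk_mq_unfreeze_queue(lo->lo_queue);
}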


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-12-09 19:19 ` syzbot
@ 2024-12-10 10:44   ` Hillf Danton
  2024-12-10 11:12     ` syzbot
  2024-12-10 11:17     ` Ming Lei
  0 siblings, 2 replies; 8+ messages in thread
From: Hillf Danton @ 2024-12-10 10:44 UTC (permalink / raw)
  To: syzbot; +Cc: boqun.feng, linux-block, linux-kernel, ming.lei, syzkaller-bugs

On Mon, 09 Dec 2024 11:19:17 -0800, syzbot wrote:
> syzbot has found a reproducer for the following issue on:
> 
> HEAD commit:    fac04efc5c79 Linux 6.13-rc2
> git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=147528f8580000

#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git   for-6.14/block

* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-12-10 10:44   ` Hillf Danton
@ 2024-12-10 11:12     ` syzbot
  2024-12-10 11:17     ` Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: syzbot @ 2024-12-10 11:12 UTC (permalink / raw)
  To: boqun.feng, hdanton, linux-block, linux-kernel, ming.lei,
	syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
possible deadlock in __submit_bio

======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc1-syzkaller-00011-gc018ec9dd144 #0 Not tainted
------------------------------------------------------
syz.0.15/7623 is trying to acquire lock:
ffff0000ca7b1de8 (&q->q_usage_counter(io)#17){++++}-{0:0}, at: __submit_bio+0x1a0/0x4f8 block/blk-core.c:629

but task is already holding lock:
ffff0000d771a0b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x144/0x1bc fs/hfsplus/bfind.c:28

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&tree->tree_lock){+.+.}-{4:4}:
       __mutex_lock_common+0x218/0x28f4 kernel/locking/mutex.c:585
       __mutex_lock kernel/locking/mutex.c:735 [inline]
       mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:787
       hfsplus_find_init+0x144/0x1bc fs/hfsplus/bfind.c:28
       hfsplus_cat_write_inode+0x1a4/0xd48 fs/hfsplus/inode.c:589
       hfsplus_write_inode+0x15c/0x4dc fs/hfsplus/super.c:161
       write_inode fs/fs-writeback.c:1525 [inline]
       __writeback_single_inode+0x5a0/0x15a4 fs/fs-writeback.c:1745
       writeback_single_inode+0x18c/0x554 fs/fs-writeback.c:1801
       sync_inode_metadata+0xc4/0x12c fs/fs-writeback.c:2871
       hfsplus_file_fsync+0xe4/0x4c8 fs/hfsplus/inode.c:316
       vfs_fsync_range fs/sync.c:187 [inline]
       vfs_fsync+0x154/0x18c fs/sync.c:201
       __loop_update_dio+0x248/0x420 drivers/block/loop.c:204
       loop_set_status+0x538/0x7f4 drivers/block/loop.c:1289
       lo_ioctl+0xf10/0x1c48
       blkdev_ioctl+0x3a8/0xa8c block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __arm64_sys_ioctl+0x14c/0x1cc fs/ioctl.c:892
       __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
       el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
       el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

-> #1 (&sb->s_type->i_mutex_key#20){+.+.}-{4:4}:
       down_write+0x50/0xc0 kernel/locking/rwsem.c:1577
       inode_lock include/linux/fs.h:818 [inline]
       hfsplus_file_fsync+0xd8/0x4c8 fs/hfsplus/inode.c:311
       vfs_fsync_range fs/sync.c:187 [inline]
       vfs_fsync+0x154/0x18c fs/sync.c:201
       __loop_update_dio+0x248/0x420 drivers/block/loop.c:204
       loop_set_status+0x538/0x7f4 drivers/block/loop.c:1289
       lo_ioctl+0xf10/0x1c48
       blkdev_ioctl+0x3a8/0xa8c block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __arm64_sys_ioctl+0x14c/0x1cc fs/ioctl.c:892
       __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
       el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
       el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

-> #0 (&q->q_usage_counter(io)#17){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x34f0/0x7904 kernel/locking/lockdep.c:5226
       lock_acquire+0x23c/0x724 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1254/0x2070 block/blk-mq.c:3093
       __submit_bio+0x1a0/0x4f8 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x3bc/0xcbc block/blk-core.c:739
       submit_bio_noacct+0xc6c/0x166c block/blk-core.c:868
       submit_bio+0x374/0x564 block/blk-core.c:910
       submit_bh_wbc+0x3f8/0x4c8 fs/buffer.c:2814
       submit_bh fs/buffer.c:2819 [inline]
       block_read_full_folio+0x7d0/0x950 fs/buffer.c:2446
       hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
       filemap_read_folio+0x108/0x318 mm/filemap.c:2366
       do_read_cache_folio+0x368/0x5c0 mm/filemap.c:3826
       do_read_cache_page mm/filemap.c:3892 [inline]
       read_cache_page+0x6c/0x15c mm/filemap.c:3901
       read_mapping_page include/linux/pagemap.h:1005 [inline]
       __hfs_bnode_create+0x3dc/0x6d4 fs/hfsplus/bnode.c:440
       hfsplus_bnode_find+0x200/0xe60 fs/hfsplus/bnode.c:486
       hfsplus_brec_find+0x134/0x4a0 fs/hfsplus/bfind.c:172
       hfsplus_brec_read+0x38/0x128 fs/hfsplus/bfind.c:211
       hfsplus_find_cat+0x140/0x4a0 fs/hfsplus/catalog.c:202
       hfsplus_iget+0x34c/0x584 fs/hfsplus/super.c:83
       hfsplus_fill_super+0xa5c/0x16f8 fs/hfsplus/super.c:504
       get_tree_bdev_flags+0x38c/0x494 fs/super.c:1636
       get_tree_bdev+0x2c/0x3c fs/super.c:1659
       hfsplus_get_tree+0x28/0x38 fs/hfsplus/super.c:640
       vfs_get_tree+0x90/0x28c fs/super.c:1814
       do_new_mount+0x278/0x900 fs/namespace.c:3507
       path_mount+0x590/0xe04 fs/namespace.c:3834
       do_mount fs/namespace.c:3847 [inline]
       __do_sys_mount fs/namespace.c:4057 [inline]
       __se_sys_mount fs/namespace.c:4034 [inline]
       __arm64_sys_mount+0x4d4/0x5ac fs/namespace.c:4034
       __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
       invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
       el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
       do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
       el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
       el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
       el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

other info that might help us debug this:

Chain exists of:
  &q->q_usage_counter(io)#17 --> &sb->s_type->i_mutex_key#20 --> &tree->tree_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&sb->s_type->i_mutex_key#20);
                               lock(&tree->tree_lock);
  rlock(&q->q_usage_counter(io)#17);

 *** DEADLOCK ***

2 locks held by syz.0.15/7623:
 #0: ffff0000cb7e60e0 (&type->s_umount_key#51/1){+.+.}-{4:4}, at: alloc_super+0x1b0/0x834 fs/super.c:344
 #1: ffff0000d771a0b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x144/0x1bc fs/hfsplus/bfind.c:28

stack backtrace:
CPU: 1 UID: 0 PID: 7623 Comm: syz.0.15 Not tainted 6.13.0-rc1-syzkaller-00011-gc018ec9dd144 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:484 (C)
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0xe4/0x150 lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 print_circular_bug+0x154/0x1c0 kernel/locking/lockdep.c:2074
 check_noncircular+0x310/0x404 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x34f0/0x7904 kernel/locking/lockdep.c:5226
 lock_acquire+0x23c/0x724 kernel/locking/lockdep.c:5849
 bio_queue_enter block/blk.h:75 [inline]
 blk_mq_submit_bio+0x1254/0x2070 block/blk-mq.c:3093
 __submit_bio+0x1a0/0x4f8 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x3bc/0xcbc block/blk-core.c:739
 submit_bio_noacct+0xc6c/0x166c block/blk-core.c:868
 submit_bio+0x374/0x564 block/blk-core.c:910
 submit_bh_wbc+0x3f8/0x4c8 fs/buffer.c:2814
 submit_bh fs/buffer.c:2819 [inline]
 block_read_full_folio+0x7d0/0x950 fs/buffer.c:2446
 hfsplus_read_folio+0x28/0x38 fs/hfsplus/inode.c:28
 filemap_read_folio+0x108/0x318 mm/filemap.c:2366
 do_read_cache_folio+0x368/0x5c0 mm/filemap.c:3826
 do_read_cache_page mm/filemap.c:3892 [inline]
 read_cache_page+0x6c/0x15c mm/filemap.c:3901
 read_mapping_page include/linux/pagemap.h:1005 [inline]
 __hfs_bnode_create+0x3dc/0x6d4 fs/hfsplus/bnode.c:440
 hfsplus_bnode_find+0x200/0xe60 fs/hfsplus/bnode.c:486
 hfsplus_brec_find+0x134/0x4a0 fs/hfsplus/bfind.c:172
 hfsplus_brec_read+0x38/0x128 fs/hfsplus/bfind.c:211
 hfsplus_find_cat+0x140/0x4a0 fs/hfsplus/catalog.c:202
 hfsplus_iget+0x34c/0x584 fs/hfsplus/super.c:83
 hfsplus_fill_super+0xa5c/0x16f8 fs/hfsplus/super.c:504
 get_tree_bdev_flags+0x38c/0x494 fs/super.c:1636
 get_tree_bdev+0x2c/0x3c fs/super.c:1659
 hfsplus_get_tree+0x28/0x38 fs/hfsplus/super.c:640
 vfs_get_tree+0x90/0x28c fs/super.c:1814
 do_new_mount+0x278/0x900 fs/namespace.c:3507
 path_mount+0x590/0xe04 fs/namespace.c:3834
 do_mount fs/namespace.c:3847 [inline]
 __do_sys_mount fs/namespace.c:4057 [inline]
 __se_sys_mount fs/namespace.c:4034 [inline]
 __arm64_sys_mount+0x4d4/0x5ac fs/namespace.c:4034
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x54/0x168 arch/arm64/kernel/entry-common.c:744
 el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600
loop0: detected capacity change from 1024 to 3
Dev loop0: unable to read RDB block 3
 loop0: unable to read partition table
loop0: partition table beyond EOD, truncated
loop_reread_partitions: partition scan of loop0 (�Rt�\v*�3\f!6{\x06bO�0�\x7f�\x17.�Qʝ�\x03�	H�"Uqd\�'�Lz�8�\b���w1�A\bH��\x10�\x19��) failed (rc=-5)


Tested on:

commit:         c018ec9d block: rnull: Initialize the module in place
git tree:       https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-6.14/block
console output: https://syzkaller.appspot.com/x/log.txt?x=124c68f8580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=bd60186d08e947a5
dashboard link: https://syzkaller.appspot.com/bug?extid=5218c85078236fc46227
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
userspace arch: arm64

Note: no patches were applied.

* Re: [syzbot] [block?] possible deadlock in blk_mq_submit_bio
  2024-12-10 10:44   ` Hillf Danton
  2024-12-10 11:12     ` syzbot
@ 2024-12-10 11:17     ` Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: Ming Lei @ 2024-12-10 11:17 UTC (permalink / raw)
  To: Hillf Danton
  Cc: syzbot, boqun.feng, linux-block, linux-kernel, syzkaller-bugs,
	Ming Lei

On Tue, Dec 10, 2024 at 6:45 PM Hillf Danton <hdanton@sina.com> wrote:
>
> On Mon, 09 Dec 2024 11:19:17 -0800
> > syzbot has found a reproducer for the following issue on:
> >
> > HEAD commit:    fac04efc5c79 Linux 6.13-rc2
> > git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> > syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=147528f8580000
>
> #syz test: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git   for-6.14/block

This one looks like a real deadlock risk, similar to the following one:

https://lore.kernel.org/linux-block/Z0hkFoFsW5Xv8iKw@fedora/

I will take a look when I get time.

Thanks,

