public inbox for linux-kernel@vger.kernel.org
* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
       [not found] <20240909154237.3656000-2-cassel@kernel.org>
@ 2024-10-21 10:58 ` Lai, Yi
  2024-10-21 12:07   ` Niklas Cassel
  0 siblings, 1 reply; 10+ messages in thread
From: Lai, Yi @ 2024-10-21 10:58 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

Hi Niklas Cassel,

Greetings!

Using Syzkaller, I found an "INFO: task hung in blk_mq_get_tag" issue in v6.12-rc3.

After bisection, the first bad commit is:
"
e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
"

All detailed info can be found at:
https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag
Syzkaller repro code:
https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/repro.c
Syzkaller repro syscall steps:
https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/repro.prog
Syzkaller report:
https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/repro.report
Kconfig(make olddefconfig):
https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/kconfig_origin
Bisect info:
https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/bisect_info.log
bzImage:
https://github.com/laifryiee/syzkaller_logs/raw/refs/heads/main/241018_105830_blk_mq_get_tag/bzImage_8e929cb546ee42c9a61d24fae60605e9e3192354
Issue dmesg:
https://github.com/laifryiee/syzkaller_logs/blob/main/241018_105830_blk_mq_get_tag/8e929cb546ee42c9a61d24fae60605e9e3192354_dmesg.log
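For anyone trying to reproduce locally, a workflow along these lines should work. This is a sketch based on the artifacts linked above; the file names (kconfig_origin, repro.c) follow those links, and the VM/boot step is left to whatever environment you normally use:

```shell
# Build the kernel at the reported tag with the provided config
# (kconfig_origin is the config linked above, copied into the source tree).
git checkout v6.12-rc3
cp kconfig_origin .config
make olddefconfig
make -j"$(nproc)"

# Boot the resulting bzImage in a VM, then build and run the reproducer:
gcc -o repro repro.c
./repro

# If the hang reproduces, hung-task reports show up in dmesg after the
# hung_task timeout expires (~147s in the log below).
```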

"
[  300.375482] INFO: task kworker/u8:0:11 blocked for more than 147 seconds.
[  300.375906]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.376204] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.376646] task:kworker/u8:0    state:D stack:0     pid:11    tgid:11    ppid:2      flags:0x00004000
[  300.377158] Workqueue: writeback wb_workfn (flush-8:0)
[  300.377479] Call Trace:
[  300.377624]  <TASK>
[  300.377760]  __schedule+0xe13/0x33a0
[  300.377985]  ? __pfx___schedule+0x10/0x10
[  300.378222]  ? lock_release+0x441/0x870
[  300.378457]  ? __pfx_lock_release+0x10/0x10
[  300.378704]  ? trace_lock_acquire+0x139/0x1b0
[  300.378982]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.379296]  ? kthread_data+0x61/0xd0
[  300.379555]  schedule+0xf6/0x3f0
[  300.379754]  io_schedule+0xce/0x150
[  300.379968]  rq_qos_wait+0x1c5/0x310
[  300.380181]  ? __pfx_wbt_cleanup_cb+0x10/0x10
[  300.380437]  ? __pfx_rq_qos_wait+0x10/0x10
[  300.380681]  ? __pfx_lock_release+0x10/0x10
[  300.380925]  ? __pfx_rq_qos_wake_function+0x10/0x10
[  300.381214]  ? __pfx_wbt_inflight_cb+0x10/0x10
[  300.381479]  ? do_raw_spin_unlock+0x15c/0x210
[  300.381743]  wbt_wait+0x1ec/0x400
[  300.381946]  ? __pfx_wbt_wait+0x10/0x10
[  300.382174]  ? __pfx_dd_bio_merge+0x10/0x10
[  300.382437]  ? __pfx_wbt_wait+0x10/0x10
[  300.382667]  __rq_qos_throttle+0x5f/0xb0
[  300.382913]  blk_mq_submit_bio+0x8ad/0x2a20
[  300.383157]  ? __kasan_check_read+0x15/0x20
[  300.383438]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.383708]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.383974]  ? __this_cpu_preempt_check+0x21/0x30
[  300.384260]  ? __this_cpu_preempt_check+0x21/0x30
[  300.384532]  ? lock_release+0x441/0x870
[  300.384765]  __submit_bio+0x18d/0x500
[  300.384985]  ? __pfx___submit_bio+0x10/0x10
[  300.385234]  ? __this_cpu_preempt_check+0x21/0x30
[  300.385509]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.385875]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.386186]  ? kvm_clock_get_cycles+0x43/0x70
[  300.386451]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.386729]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.387049]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.387361]  submit_bio_noacct+0x61d/0x1dc0
[  300.387612]  submit_bio+0xce/0x480
[  300.387819]  submit_bh_wbc+0x58a/0x740
[  300.388052]  __block_write_full_folio+0x7a1/0xed0
[  300.388327]  ? __pfx_blkdev_get_block+0x10/0x10
[  300.388599]  block_write_full_folio+0x2e6/0x3a0
[  300.388880]  ? __pfx_blkdev_get_block+0x10/0x10
[  300.389147]  ? __pfx_blkdev_get_block+0x10/0x10
[  300.389414]  write_cache_pages+0xbc/0x140
[  300.389654]  ? __pfx_block_write_full_folio+0x10/0x10
[  300.389954]  ? __pfx_write_cache_pages+0x10/0x10
[  300.390224]  ? __this_cpu_preempt_check+0x21/0x30
[  300.390504]  blkdev_writepages+0xa5/0xf0
[  300.390738]  ? __pfx_blkdev_writepages+0x10/0x10
[  300.391016]  ? __pfx_do_raw_spin_lock+0x10/0x10
[  300.391293]  ? do_raw_spin_unlock+0x15c/0x210
[  300.391573]  ? __pfx_blkdev_writepages+0x10/0x10
[  300.391841]  do_writepages+0x1c3/0x8a0
[  300.392068]  ? __pfx_do_writepages+0x10/0x10
[  300.392323]  ? __pfx_lock_release+0x10/0x10
[  300.392568]  ? lock_release+0x441/0x870
[  300.392805]  __writeback_single_inode+0x124/0xe10
[  300.393081]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
[  300.393397]  writeback_sb_inodes+0x63b/0x1000
[  300.393661]  ? __pfx_writeback_sb_inodes+0x10/0x10
[  300.393935]  ? lock_acquire.part.0+0x152/0x390
[  300.394230]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
[  300.394544]  __writeback_inodes_wb+0xc8/0x280
[  300.394806]  wb_writeback+0x717/0xaf0
[  300.395039]  ? __pfx_wb_writeback+0x10/0x10
[  300.395290]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.395589]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.395868]  ? get_nr_dirty_inodes+0x12c/0x1b0
[  300.396134]  ? lockdep_hardirqs_on+0x89/0x110
[  300.396393]  wb_workfn+0x76c/0xc90
[  300.396597]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.396882]  ? __pfx_wb_workfn+0x10/0x10
[  300.397110]  ? rcu_is_watching+0x19/0xc0
[  300.397354]  process_one_work+0x92e/0x1b50
[  300.397594]  ? lock_acquire.part.0+0x152/0x390
[  300.397879]  ? __pfx_process_one_work+0x10/0x10
[  300.398150]  ? move_linked_works+0x1bf/0x2c0
[  300.398407]  ? __this_cpu_preempt_check+0x21/0x30
[  300.398690]  ? assign_work+0x19f/0x250
[  300.398920]  ? lock_is_held_type+0xef/0x150
[  300.399174]  worker_thread+0x68d/0xe90
[  300.399428]  kthread+0x35a/0x470
[  300.399623]  ? __pfx_worker_thread+0x10/0x10
[  300.399874]  ? __pfx_kthread+0x10/0x10
[  300.400100]  ret_from_fork+0x56/0x90
[  300.400315]  ? __pfx_kthread+0x10/0x10
[  300.400540]  ret_from_fork_asm+0x1a/0x30
[  300.400783]  </TASK>
[  300.400928] INFO: task kworker/u8:2:36 blocked for more than 147 seconds.
[  300.401309]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.401613] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.402046] task:kworker/u8:2    state:D stack:0     pid:36    tgid:36    ppid:2      flags:0x00004000
[  300.402638] Workqueue: writeback wb_workfn (flush-8:0)
[  300.402941] Call Trace:
[  300.403083]  <TASK>
[  300.403214]  __schedule+0xe13/0x33a0
[  300.403458]  ? __pfx___schedule+0x10/0x10
[  300.403686]  ? lock_release+0x441/0x870
[  300.403909]  ? __pfx_lock_release+0x10/0x10
[  300.404146]  ? trace_lock_acquire+0x139/0x1b0
[  300.404399]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.404701]  ? kthread_data+0x61/0xd0
[  300.404915]  schedule+0xf6/0x3f0
[  300.405106]  io_schedule+0xce/0x150
[  300.405309]  blk_mq_get_tag+0x5e7/0xce0
[  300.405531]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.405771]  ? __pfx_wbt_cleanup_cb+0x10/0x10
[  300.406011]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.406288]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.406579]  ? __pfx_rq_qos_wake_function+0x10/0x10
[  300.406848]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.407136]  ? dd_limit_depth+0x21e/0x2a0
[  300.407380]  ? __pfx_dd_limit_depth+0x10/0x10
[  300.407625]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.407901]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.408186]  ? __pfx_wbt_wait+0x10/0x10
[  300.408405]  blk_mq_submit_bio+0x93c/0x2a20
[  300.408632]  ? __kasan_check_read+0x15/0x20
[  300.408866]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.409117]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.409364]  ? __this_cpu_preempt_check+0x21/0x30
[  300.409626]  ? __this_cpu_preempt_check+0x21/0x30
[  300.409883]  ? lock_release+0x441/0x870
[  300.410098]  __submit_bio+0x18d/0x500
[  300.410303]  ? __pfx___submit_bio+0x10/0x10
[  300.410534]  ? __this_cpu_preempt_check+0x21/0x30
[  300.410790]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.411135]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.411440]  ? kvm_clock_get_cycles+0x43/0x70
[  300.411690]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.411958]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.412249]  ? lock_is_held_type+0xef/0x150
[  300.412484]  submit_bio_noacct+0x61d/0x1dc0
[  300.412718]  submit_bio+0xce/0x480
[  300.412911]  ext4_io_submit+0xb0/0x150
[  300.413128]  ext4_do_writepages+0xa54/0x2f30
[  300.413372]  ? __this_cpu_preempt_check+0x21/0x30
[  300.413638]  ? __pfx_ext4_do_writepages+0x10/0x10
[  300.413892]  ? rcu_is_watching+0x19/0xc0
[  300.414113]  ? __this_cpu_preempt_check+0x21/0x30
[  300.414376]  ? lock_is_held_type+0xef/0x150
[  300.414616]  ext4_writepages+0x35c/0x7c0
[  300.414836]  ? __pfx_ext4_writepages+0x10/0x10
[  300.415087]  ? lock_release+0x441/0x870
[  300.415313]  ? do_raw_spin_unlock+0x15c/0x210
[  300.415576]  ? __pfx_ext4_writepages+0x10/0x10
[  300.415824]  do_writepages+0x1c3/0x8a0
[  300.416039]  ? __pfx_do_writepages+0x10/0x10
[  300.416278]  ? __pfx_lock_release+0x10/0x10
[  300.416510]  ? __pfx_lock_release+0x10/0x10
[  300.416754]  __writeback_single_inode+0x124/0xe10
[  300.417016]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
[  300.417301]  writeback_sb_inodes+0x63b/0x1000
[  300.417546]  ? lock_release+0x441/0x870
[  300.417772]  ? __pfx_writeback_sb_inodes+0x10/0x10
[  300.418039]  ? lock_acquire.part.0+0x152/0x390
[  300.418321]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
[  300.418616]  __writeback_inodes_wb+0xc8/0x280
[  300.418865]  wb_writeback+0x717/0xaf0
[  300.419082]  ? __pfx_wb_writeback+0x10/0x10
[  300.419335]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.419598]  ? get_nr_dirty_inodes+0x12c/0x1b0
[  300.419847]  ? lockdep_hardirqs_on+0x89/0x110
[  300.420094]  wb_workfn+0x7b5/0xc90
[  300.420291]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.420561]  ? __pfx_wb_workfn+0x10/0x10
[  300.420780]  ? rcu_is_watching+0x19/0xc0
[  300.421017]  process_one_work+0x92e/0x1b50
[  300.421245]  ? lock_acquire.part.0+0x152/0x390
[  300.421498]  ? __pfx_process_one_work+0x10/0x10
[  300.421743]  ? move_linked_works+0x1bf/0x2c0
[  300.421976]  ? __this_cpu_preempt_check+0x21/0x30
[  300.422236]  ? assign_work+0x19f/0x250
[  300.422442]  ? lock_is_held_type+0xef/0x150
[  300.422673]  worker_thread+0x68d/0xe90
[  300.422896]  kthread+0x35a/0x470
[  300.423074]  ? __pfx_worker_thread+0x10/0x10
[  300.423306]  ? __pfx_kthread+0x10/0x10
[  300.423538]  ret_from_fork+0x56/0x90
[  300.423735]  ? __pfx_kthread+0x10/0x10
[  300.423944]  ret_from_fork_asm+0x1a/0x30
[  300.424171]  </TASK>
[  300.424303] INFO: task kworker/1:2:80 blocked for more than 147 seconds.
[  300.424649]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.424930] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.425332] task:kworker/1:2     state:D stack:0     pid:80    tgid:80    ppid:2      flags:0x00004000
[  300.425818] Workqueue: events ata_scsi_dev_rescan
[  300.426080] Call Trace:
[  300.426217]  <TASK>
[  300.426340]  __schedule+0xe13/0x33a0
[  300.426545]  ? __pfx___schedule+0x10/0x10
[  300.426761]  ? lock_release+0x441/0x870
[  300.426977]  ? __pfx_lock_release+0x10/0x10
[  300.427204]  ? _raw_spin_unlock_irq+0x3c/0x60
[  300.427452]  schedule+0xf6/0x3f0
[  300.427631]  io_schedule+0xce/0x150
[  300.427820]  blk_mq_get_tag+0x5e7/0xce0
[  300.428032]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.428264]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.428542]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.428790]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.429077]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.429352]  blk_mq_alloc_request+0x655/0x930
[  300.429586]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.429840]  ? __lock_acquire+0xd87/0x5c90
[  300.430067]  scsi_execute_cmd+0x223/0x1040
[  300.430294]  ? __this_cpu_preempt_check+0x21/0x30
[  300.430540]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.430781]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.431038]  scsi_vpd_inquiry+0xdf/0x220
[  300.431251]  ? __pfx_scsi_vpd_inquiry+0x10/0x10
[  300.431502]  ? debug_smp_processor_id+0x20/0x30
[  300.431741]  ? rcu_is_watching+0x19/0xc0
[  300.431953]  scsi_get_vpd_size+0x11e/0x310
[  300.432168]  ? __mutex_lock+0x230/0x1490
[  300.432377]  ? __pfx_scsi_get_vpd_size+0x10/0x10
[  300.432619]  ? scsi_rescan_device+0x33/0x360
[  300.432845]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.433083]  ? __this_cpu_preempt_check+0x21/0x30
[  300.433330]  ? lock_acquire.part.0+0x152/0x390
[  300.433565]  ? __pfx___mutex_lock+0x10/0x10
[  300.433791]  scsi_get_vpd_buf+0x30/0x180
[  300.434006]  scsi_attach_vpd+0xf8/0x5b0
[  300.434218]  scsi_rescan_device+0x106/0x360
[  300.434446]  ata_scsi_dev_rescan+0x1d2/0x470
[  300.434684]  process_one_work+0x92e/0x1b50
[  300.434904]  ? lock_acquire.part.0+0x152/0x390
[  300.435154]  ? __pfx_process_one_work+0x10/0x10
[  300.435412]  ? move_linked_works+0x1bf/0x2c0
[  300.435639]  ? __this_cpu_preempt_check+0x21/0x30
[  300.435891]  ? assign_work+0x19f/0x250
[  300.436089]  ? lock_is_held_type+0xef/0x150
[  300.436313]  worker_thread+0x68d/0xe90
[  300.436525]  kthread+0x35a/0x470
[  300.436699]  ? __pfx_worker_thread+0x10/0x10
[  300.436924]  ? __pfx_kthread+0x10/0x10
[  300.437125]  ret_from_fork+0x56/0x90
[  300.437317]  ? __pfx_kthread+0x10/0x10
[  300.437517]  ret_from_fork_asm+0x1a/0x30
[  300.437736]  </TASK>
[  300.437858] INFO: task jbd2/sda-8:104 blocked for more than 147 seconds.
[  300.438192]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.438465] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.438852] task:jbd2/sda-8      state:D stack:0     pid:104   tgid:104   ppid:2      flags:0x00004000
[  300.439339] Call Trace:
[  300.439475]  <TASK>
[  300.439595]  __schedule+0xe13/0x33a0
[  300.439794]  ? __pfx___schedule+0x10/0x10
[  300.440005]  ? lock_release+0x441/0x870
[  300.440212]  ? __pfx_lock_release+0x10/0x10
[  300.440431]  ? trace_lock_acquire+0x139/0x1b0
[  300.440666]  ? lock_acquire+0x80/0xb0
[  300.440860]  ? schedule+0x216/0x3f0
[  300.441051]  schedule+0xf6/0x3f0
[  300.441229]  io_schedule+0xce/0x150
[  300.441417]  blk_mq_get_tag+0x5e7/0xce0
[  300.441628]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.441859]  ? lock_release+0x441/0x870
[  300.442063]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.442326]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.442596]  ? blk_mq_sched_try_merge+0xce/0x650
[  300.442841]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.443123]  ? dd_limit_depth+0x21e/0x2a0
[  300.443358]  ? __pfx_dd_limit_depth+0x10/0x10
[  300.443591]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.443846]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.444131]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.444403]  ? __pfx_wbt_wait+0x10/0x10
[  300.444610]  blk_mq_submit_bio+0x93c/0x2a20
[  300.444829]  ? __pfx___lock_acquire+0x10/0x10
[  300.445064]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.445307]  ? __this_cpu_preempt_check+0x21/0x30
[  300.445559]  ? __this_cpu_preempt_check+0x21/0x30
[  300.445807]  ? lock_release+0x441/0x870
[  300.446014]  __submit_bio+0x18d/0x500
[  300.446211]  ? __pfx___submit_bio+0x10/0x10
[  300.446432]  ? __this_cpu_preempt_check+0x21/0x30
[  300.446677]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.447005]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.447282]  ? kvm_clock_get_cycles+0x43/0x70
[  300.447534]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.447787]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.448067]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.448330]  submit_bio_noacct+0x61d/0x1dc0
[  300.448556]  submit_bio+0xce/0x480
[  300.448742]  submit_bh_wbc+0x58a/0x740
[  300.448947]  submit_bh+0x2b/0x40
[  300.449124]  jbd2_journal_commit_transaction+0x2a09/0x65e0
[  300.449431]  ? __pfx_jbd2_journal_commit_transaction+0x10/0x10
[  300.449737]  ? __this_cpu_preempt_check+0x21/0x30
[  300.449983]  ? lock_release+0x441/0x870
[  300.450196]  ? __this_cpu_preempt_check+0x21/0x30
[  300.450445]  ? _raw_spin_unlock_irqrestore+0x45/0x70
[  300.450703]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.450998]  kjournald2+0x204/0x790
[  300.451194]  ? __pfx_kjournald2+0x10/0x10
[  300.451416]  ? lockdep_hardirqs_on+0x89/0x110
[  300.451647]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.451917]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.452194]  ? __kthread_parkme+0x15d/0x230
[  300.452419]  kthread+0x35a/0x470
[  300.452594]  ? __pfx_kjournald2+0x10/0x10
[  300.452803]  ? __pfx_kthread+0x10/0x10
[  300.453004]  ret_from_fork+0x56/0x90
[  300.453198]  ? __pfx_kthread+0x10/0x10
[  300.453406]  ret_from_fork_asm+0x1a/0x30
[  300.453639]  </TASK>
[  300.453765] INFO: task auditd:239 blocked for more than 147 seconds.
[  300.454094]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.454367] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.454757] task:auditd          state:D stack:0     pid:239   tgid:239   ppid:1      flags:0x00000002
[  300.455219] Call Trace:
[  300.455363]  <TASK>
[  300.455481]  __schedule+0xe13/0x33a0
[  300.455676]  ? __pfx___schedule+0x10/0x10
[  300.455883]  ? lock_release+0x441/0x870
[  300.456086]  ? __pfx_lock_release+0x10/0x10
[  300.456300]  ? trace_lock_acquire+0x139/0x1b0
[  300.456530]  ? lock_acquire+0x80/0xb0
[  300.456722]  ? schedule+0x216/0x3f0
[  300.456911]  schedule+0xf6/0x3f0
[  300.457085]  io_schedule+0xce/0x150
[  300.457271]  bit_wait_io+0x1f/0x100
[  300.457457]  __wait_on_bit+0x68/0x1b0
[  300.457649]  ? __pfx_bit_wait_io+0x10/0x10
[  300.457867]  out_of_line_wait_on_bit+0xe6/0x120
[  300.458102]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
[  300.458369]  ? __pfx_wake_bit_function+0x10/0x10
[  300.458606]  ? __might_sleep+0xa3/0x170
[  300.458808]  do_get_write_access+0x83b/0x1010
[  300.459045]  jbd2_journal_get_write_access+0x1e4/0x270
[  300.459308]  __ext4_journal_get_write_access+0x72/0x3b0
[  300.459591]  ext4_reserve_inode_write+0x199/0x280
[  300.459833]  ? ext4_dirty_inode+0xeb/0x140
[  300.460044]  __ext4_mark_inode_dirty+0x18b/0x8c0
[  300.460282]  ? debug_smp_processor_id+0x20/0x30
[  300.460515]  ? __pfx___ext4_mark_inode_dirty+0x10/0x10
[  300.460773]  ? trace_jbd2_handle_start+0x77/0x210
[  300.461017]  ? jbd2__journal_start+0xff/0x6b0
[  300.461246]  ? __ext4_journal_start_sb+0x1f8/0x600
[  300.461490]  ? ext4_dirty_inode+0xb3/0x140
[  300.461703]  ? __pfx_ext4_dirty_inode+0x10/0x10
[  300.461937]  ext4_dirty_inode+0xeb/0x140
[  300.462139]  ? rcu_is_watching+0x19/0xc0
[  300.462343]  __mark_inode_dirty+0x1d0/0xd50
[  300.462557]  ? __pfx_generic_write_checks+0x10/0x10
[  300.462810]  generic_update_time+0xe0/0x110
[  300.463032]  file_modified+0x1af/0x1e0
[  300.463233]  ext4_buffered_write_iter+0x101/0x3e0
[  300.463493]  ext4_file_write_iter+0x8b8/0x1ab0
[  300.463722]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.463971]  ? debug_smp_processor_id+0x20/0x30
[  300.464208]  ? __pfx_ext4_file_write_iter+0x10/0x10
[  300.464459]  ? __this_cpu_preempt_check+0x21/0x30
[  300.464701]  ? lock_is_held_type+0xef/0x150
[  300.464920]  vfs_write+0xc46/0x1120
[  300.465106]  ? __pfx_ext4_file_write_iter+0x10/0x10
[  300.465356]  ? __pfx_vfs_write+0x10/0x10
[  300.465572]  ? mutex_lock_nested+0x1f/0x30
[  300.465788]  ksys_write+0x14f/0x290
[  300.465974]  ? __pfx_ksys_write+0x10/0x10
[  300.466189]  __x64_sys_write+0x7b/0xc0
[  300.466384]  ? syscall_trace_enter+0x9b/0x230
[  300.466610]  x64_sys_call+0x1685/0x20d0
[  300.466810]  do_syscall_64+0x6d/0x140
[  300.467007]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.467264] RIP: 0033:0x7f0f0ad3ebcf
[  300.467465] RSP: 002b:00007fffd00b5a30 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
[  300.467838] RAX: ffffffffffffffda RBX: 00000000000000e0 RCX: 00007f0f0ad3ebcf
[  300.468186] RDX: 00000000000000e0 RSI: 000055b1be1b80a0 RDI: 0000000000000005
[  300.468535] RBP: 000055b1be1b80a0 R08: 0000000000000000 R09: 00007f0f0adb14e0
[  300.468883] R10: 00007f0f0adb13e0 R11: 0000000000000293 R12: 00000000000000e0
[  300.469231] R13: 000055b1be1ae2a0 R14: 00000000000000e0 R15: 00007f0f0adf69e0
[  300.469591]  </TASK>
[  300.469711] INFO: task in:imjournal:321 blocked for more than 147 seconds.
[  300.470045]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.470311] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.470687] task:in:imjournal    state:D stack:0     pid:321   tgid:302   ppid:1      flags:0x00000002
[  300.471144] Call Trace:
[  300.471273]  <TASK>
[  300.471406]  __schedule+0xe13/0x33a0
[  300.471601]  ? __pfx___schedule+0x10/0x10
[  300.471807]  ? lock_release+0x441/0x870
[  300.472008]  ? __pfx_lock_release+0x10/0x10
[  300.472221]  ? trace_lock_acquire+0x139/0x1b0
[  300.472450]  ? lock_acquire+0x80/0xb0
[  300.472640]  ? schedule+0x216/0x3f0
[  300.472826]  schedule+0xf6/0x3f0
[  300.472999]  io_schedule+0xce/0x150
[  300.473183]  bit_wait_io+0x1f/0x100
[  300.473368]  __wait_on_bit+0x68/0x1b0
[  300.473560]  ? __pfx_bit_wait_io+0x10/0x10
[  300.473778]  out_of_line_wait_on_bit+0xe6/0x120
[  300.474010]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
[  300.474273]  ? __pfx_wake_bit_function+0x10/0x10
[  300.474512]  ? __might_sleep+0xa3/0x170
[  300.474714]  do_get_write_access+0x83b/0x1010
[  300.474951]  jbd2_journal_get_write_access+0x1e4/0x270
[  300.475215]  __ext4_journal_get_write_access+0x72/0x3b0
[  300.475496]  __ext4_new_inode+0xfef/0x5320
[  300.475720]  ? __pfx___ext4_new_inode+0x10/0x10
[  300.475955]  ? __pfx___dquot_initialize+0x10/0x10
[  300.476202]  ? _raw_spin_unlock+0x31/0x60
[  300.476409]  ? d_splice_alias+0x4e8/0xe30
[  300.476619]  ext4_create+0x32e/0x550
[  300.476815]  ? __pfx_ext4_create+0x10/0x10
[  300.477030]  ? inode_permission+0xf4/0x650
[  300.477239]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.477510]  ? __pfx_ext4_create+0x10/0x10
[  300.477723]  lookup_open.isra.0+0x117c/0x1580
[  300.477954]  ? __pfx_lookup_open.isra.0+0x10/0x10
[  300.478200]  ? lock_acquire+0x80/0xb0
[  300.478391]  ? path_openat+0xd40/0x2bf0
[  300.478599]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.478873]  path_openat+0xdbf/0x2bf0
[  300.479085]  ? __pfx_path_openat+0x10/0x10
[  300.479298]  ? __this_cpu_preempt_check+0x21/0x30
[  300.479553]  ? lock_is_held_type+0xef/0x150
[  300.479770]  ? __this_cpu_preempt_check+0x21/0x30
[  300.480019]  do_filp_open+0x1cc/0x420
[  300.480213]  ? __pfx_do_filp_open+0x10/0x10
[  300.480429]  ? lock_release+0x441/0x870
[  300.480638]  ? do_raw_spin_unlock+0x15c/0x210
[  300.480877]  do_sys_openat2+0x185/0x1f0
[  300.481078]  ? __pfx_do_sys_openat2+0x10/0x10
[  300.481306]  ? __this_cpu_preempt_check+0x21/0x30
[  300.481548]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.481864]  ? lockdep_hardirqs_on+0x89/0x110
[  300.482094]  __x64_sys_openat+0x17a/0x240
[  300.482304]  ? __pfx___x64_sys_openat+0x10/0x10
[  300.482539]  ? __audit_syscall_entry+0x39c/0x500
[  300.482784]  x64_sys_call+0x1a52/0x20d0
[  300.482988]  do_syscall_64+0x6d/0x140
[  300.483182]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.483450] RIP: 0033:0x7fd79d13e8c4
[  300.483639] RSP: 002b:00007fd79c1fea40 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[  300.484009] RAX: ffffffffffffffda RBX: 00007fd794000c50 RCX: 00007fd79d13e8c4
[  300.484357] RDX: 0000000000000241 RSI: 00007fd79c1feb80 RDI: 00000000ffffff9c
[  300.484705] RBP: 00007fd79c1feb80 R08: 0000000000000000 R09: 0000000000000001
[  300.485053] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[  300.485400] R13: 00007fd794000c50 R14: 0000000000000001 R15: 00007fd79402aba0
[  300.485761]  </TASK>
[  300.485918] INFO: task repro:686 blocked for more than 147 seconds.
[  300.486225]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.486491] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.486868] task:repro           state:D stack:0     pid:686   tgid:686   ppid:680    flags:0x00000006
[  300.487345] Call Trace:
[  300.487476]  <TASK>
[  300.487595]  __schedule+0xe13/0x33a0
[  300.487790]  ? __pfx___schedule+0x10/0x10
[  300.487995]  ? lock_release+0x441/0x870
[  300.488196]  ? __pfx_lock_release+0x10/0x10
[  300.488411]  ? trace_lock_acquire+0x139/0x1b0
[  300.488640]  ? lock_acquire+0x80/0xb0
[  300.488831]  ? schedule+0x216/0x3f0
[  300.489019]  schedule+0xf6/0x3f0
[  300.489193]  io_schedule+0xce/0x150
[  300.489378]  blk_mq_get_tag+0x5e7/0xce0
[  300.489584]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.489811]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.490082]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.490323]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.490603]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.490863]  ? __this_cpu_preempt_check+0x21/0x30
[  300.491108]  ? lock_acquire.part.0+0x152/0x390
[  300.491375]  blk_mq_alloc_request+0x655/0x930
[  300.491604]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.491865]  scsi_execute_cmd+0x223/0x1040
[  300.492081]  ? debug_smp_processor_id+0x20/0x30
[  300.492317]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.492548]  ? rcu_is_watching+0x19/0xc0
[  300.492754]  ? __kmalloc_noprof+0x34f/0x4a0
[  300.492970]  ? ata_cmd_ioctl+0x375/0x570
[  300.493179]  ata_cmd_ioctl+0x23c/0x570
[  300.493378]  ? __pfx_ata_cmd_ioctl+0x10/0x10
[  300.493597]  ? lock_release+0x441/0x870
[  300.493802]  ? __this_cpu_preempt_check+0x21/0x30
[  300.494052]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.494321]  ? security_capable+0x19d/0x1e0
[  300.494545]  ata_sas_scsi_ioctl+0x60a/0x7c0
[  300.494758]  ? mark_lock.part.0+0xf3/0x17b0
[  300.494979]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
[  300.495218]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.495473]  ? __this_cpu_preempt_check+0x21/0x30
[  300.495715]  ? lock_is_held_type+0xef/0x150
[  300.495935]  ? __pfx_ata_scsi_ioctl+0x10/0x10
[  300.496161]  ata_scsi_ioctl+0x70/0x90
[  300.496355]  scsi_ioctl+0xb6f/0x1630
[  300.496546]  ? __might_sleep+0xa3/0x170
[  300.496748]  ? __pfx_scsi_ioctl+0x10/0x10
[  300.496954]  ? scsi_block_when_processing_errors+0x2c4/0x390
[  300.497244]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
[  300.497546]  ? __pfx___lock_acquire+0x10/0x10
[  300.497770]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.498028]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
[  300.498298]  ? do_vfs_ioctl+0x4fa/0x1920
[  300.498502]  ? __pfx_do_vfs_ioctl+0x10/0x10
[  300.498718]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.498996]  sg_ioctl+0xfb8/0x2880
[  300.499180]  ? __this_cpu_preempt_check+0x21/0x30
[  300.499436]  ? lock_release+0x441/0x870
[  300.499640]  ? __pfx_sg_ioctl+0x10/0x10
[  300.499842]  ? __this_cpu_preempt_check+0x21/0x30
[  300.500081]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.500394]  ? lockdep_hardirqs_on+0x89/0x110
[  300.500621]  ? trace_hardirqs_on+0x51/0x60
[  300.500834]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
[  300.501149]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.501395]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
[  300.501649]  ? __pfx_sg_ioctl+0x10/0x10
[  300.501849]  __x64_sys_ioctl+0x1b5/0x230
[  300.502055]  x64_sys_call+0x1209/0x20d0
[  300.502255]  do_syscall_64+0x6d/0x140
[  300.502447]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.502702] RIP: 0033:0x7f8b9783ee5d
[  300.502890] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[  300.503265] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
[  300.503629] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
[  300.503978] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
[  300.504326] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
[  300.504674] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
[  300.505034]  </TASK>
[  300.505153] INFO: task repro:687 blocked for more than 147 seconds.
[  300.505459]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.505724] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.506102] task:repro           state:D stack:0     pid:687   tgid:687   ppid:683    flags:0x00000006
[  300.506557] Call Trace:
[  300.506686]  <TASK>
[  300.506803]  __schedule+0xe13/0x33a0
[  300.507000]  ? __pfx___schedule+0x10/0x10
[  300.507206]  ? lock_release+0x441/0x870
[  300.507419]  ? __pfx_lock_release+0x10/0x10
[  300.507633]  ? trace_lock_acquire+0x139/0x1b0
[  300.507865]  ? lock_acquire+0x80/0xb0
[  300.508055]  ? schedule+0x216/0x3f0
[  300.508243]  schedule+0xf6/0x3f0
[  300.508418]  io_schedule+0xce/0x150
[  300.508602]  blk_mq_get_tag+0x5e7/0xce0
[  300.508807]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.509034]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.509306]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.509548]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.509829]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.510092]  ? __this_cpu_preempt_check+0x21/0x30
[  300.510335]  ? lock_acquire.part.0+0x152/0x390
[  300.510570]  blk_mq_alloc_request+0x655/0x930
[  300.510801]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.511072]  scsi_execute_cmd+0x223/0x1040
[  300.511290]  ? debug_smp_processor_id+0x20/0x30
[  300.511546]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.511780]  ? rcu_is_watching+0x19/0xc0
[  300.511984]  ? __kmalloc_noprof+0x34f/0x4a0
[  300.512200]  ? ata_cmd_ioctl+0x375/0x570
[  300.512407]  ata_cmd_ioctl+0x23c/0x570
[  300.512607]  ? __pfx_ata_cmd_ioctl+0x10/0x10
[  300.512827]  ? lock_release+0x441/0x870
[  300.513032]  ? __this_cpu_preempt_check+0x21/0x30
[  300.513282]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.513552]  ? security_capable+0x19d/0x1e0
[  300.513772]  ata_sas_scsi_ioctl+0x60a/0x7c0
[  300.513985]  ? mark_lock.part.0+0xf3/0x17b0
[  300.514200]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
[  300.514439]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.514676]  ? __this_cpu_preempt_check+0x21/0x30
[  300.514919]  ? lock_is_held_type+0xef/0x150
[  300.515136]  ? __pfx_ata_scsi_ioctl+0x10/0x10
[  300.515374]  ata_scsi_ioctl+0x70/0x90
[  300.515568]  scsi_ioctl+0xb6f/0x1630
[  300.515759]  ? __might_sleep+0xa3/0x170
[  300.515962]  ? __pfx_scsi_ioctl+0x10/0x10
[  300.516170]  ? scsi_block_when_processing_errors+0x2c4/0x390
[  300.516459]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
[  300.516759]  ? __pfx___lock_acquire+0x10/0x10
[  300.516983]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.517239]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
[  300.517511]  ? do_vfs_ioctl+0x4fa/0x1920
[  300.517715]  ? __pfx_do_vfs_ioctl+0x10/0x10
[  300.517931]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.518204]  sg_ioctl+0xfb8/0x2880
[  300.518383]  ? __this_cpu_preempt_check+0x21/0x30
[  300.518623]  ? lock_release+0x441/0x870
[  300.518826]  ? __pfx_sg_ioctl+0x10/0x10
[  300.519034]  ? __this_cpu_preempt_check+0x21/0x30
[  300.519274]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.519601]  ? lockdep_hardirqs_on+0x89/0x110
[  300.519828]  ? trace_hardirqs_on+0x51/0x60
[  300.520039]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
[  300.520351]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.520598]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
[  300.520853]  ? __pfx_sg_ioctl+0x10/0x10
[  300.521053]  __x64_sys_ioctl+0x1b5/0x230
[  300.521260]  x64_sys_call+0x1209/0x20d0
[  300.521459]  do_syscall_64+0x6d/0x140
[  300.521651]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.521907] RIP: 0033:0x7f8b9783ee5d
[  300.522092] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[  300.522464] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
[  300.522815] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
[  300.523169] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
[  300.523531] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
[  300.523881] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
[  300.524241]  </TASK>
[  300.524359] INFO: task repro:688 blocked for more than 147 seconds.
[  300.524666]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.524932] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.525308] task:repro           state:D stack:0     pid:688   tgid:688   ppid:685    flags:0x00000006
[  300.525762] Call Trace:
[  300.525893]  <TASK>
[  300.526011]  __schedule+0xe13/0x33a0
[  300.526207]  ? __pfx___schedule+0x10/0x10
[  300.526413]  ? lock_release+0x441/0x870
[  300.526614]  ? __pfx_lock_release+0x10/0x10
[  300.526827]  ? trace_lock_acquire+0x139/0x1b0
[  300.527060]  ? lock_acquire+0x80/0xb0
[  300.527251]  ? schedule+0x216/0x3f0
[  300.527450]  schedule+0xf6/0x3f0
[  300.527624]  io_schedule+0xce/0x150
[  300.527807]  blk_mq_get_tag+0x5e7/0xce0
[  300.528012]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.528238]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.528508]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.528749]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.529029]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.529291]  ? __this_cpu_preempt_check+0x21/0x30
[  300.529533]  ? lock_acquire.part.0+0x152/0x390
[  300.529767]  blk_mq_alloc_request+0x655/0x930
[  300.529996]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.530259]  scsi_execute_cmd+0x223/0x1040
[  300.530476]  ? debug_smp_processor_id+0x20/0x30
[  300.530712]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.530946]  ? rcu_is_watching+0x19/0xc0
[  300.531156]  ? __kmalloc_noprof+0x34f/0x4a0
[  300.531389]  ? ata_cmd_ioctl+0x375/0x570
[  300.531596]  ata_cmd_ioctl+0x23c/0x570
[  300.531795]  ? __pfx_ata_cmd_ioctl+0x10/0x10
[  300.532014]  ? lock_release+0x441/0x870
[  300.532219]  ? __this_cpu_preempt_check+0x21/0x30
[  300.532468]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.532738]  ? security_capable+0x19d/0x1e0
[  300.532959]  ata_sas_scsi_ioctl+0x60a/0x7c0
[  300.533172]  ? mark_lock.part.0+0xf3/0x17b0
[  300.533386]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
[  300.533624]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.533863]  ? __this_cpu_preempt_check+0x21/0x30
[  300.534105]  ? lock_is_held_type+0xef/0x150
[  300.534323]  ? __pfx_ata_scsi_ioctl+0x10/0x10
[  300.534547]  ata_scsi_ioctl+0x70/0x90
[  300.534742]  scsi_ioctl+0xb6f/0x1630
[  300.534937]  ? __might_sleep+0xa3/0x170
[  300.535139]  ? __pfx_scsi_ioctl+0x10/0x10
[  300.535363]  ? scsi_block_when_processing_errors+0x2c4/0x390
[  300.535652]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
[  300.535953]  ? __pfx___lock_acquire+0x10/0x10
[  300.536176]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.536432]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
[  300.536702]  ? do_vfs_ioctl+0x4fa/0x1920
[  300.536906]  ? __pfx_do_vfs_ioctl+0x10/0x10
[  300.537122]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.537395]  sg_ioctl+0xfb8/0x2880
[  300.537574]  ? __this_cpu_preempt_check+0x21/0x30
[  300.537815]  ? lock_release+0x441/0x870
[  300.538019]  ? __pfx_sg_ioctl+0x10/0x10
[  300.538221]  ? __this_cpu_preempt_check+0x21/0x30
[  300.538462]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.538777]  ? lockdep_hardirqs_on+0x89/0x110
[  300.539006]  ? trace_hardirqs_on+0x51/0x60
[  300.539219]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
[  300.539546]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.539792]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
[  300.540047]  ? __pfx_sg_ioctl+0x10/0x10
[  300.540246]  __x64_sys_ioctl+0x1b5/0x230
[  300.540452]  x64_sys_call+0x1209/0x20d0
[  300.540650]  do_syscall_64+0x6d/0x140
[  300.540842]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.541097] RIP: 0033:0x7f8b9783ee5d
[  300.541282] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[  300.541652] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
[  300.541999] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
[  300.542347] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
[  300.542695] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
[  300.543048] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
[  300.543422]  </TASK>
[  300.543540] INFO: task repro:690 blocked for more than 147 seconds.
[  300.543847]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
[  300.544111] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.544487] task:repro           state:D stack:0     pid:690   tgid:690   ppid:689    flags:0x00000006
[  300.544939] Call Trace:
[  300.545069]  <TASK>
[  300.545186]  __schedule+0xe13/0x33a0
[  300.545380]  ? __pfx___schedule+0x10/0x10
[  300.545585]  ? lock_release+0x441/0x870
[  300.545787]  ? __pfx_lock_release+0x10/0x10
[  300.546000]  ? trace_lock_acquire+0x139/0x1b0
[  300.546230]  ? lock_acquire+0x80/0xb0
[  300.546420]  ? schedule+0x216/0x3f0
[  300.546607]  schedule+0xf6/0x3f0
[  300.546781]  io_schedule+0xce/0x150
[  300.546967]  blk_mq_get_tag+0x5e7/0xce0
[  300.547174]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.547416]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.547687]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.547929]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.548210]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.548471]  ? __this_cpu_preempt_check+0x21/0x30
[  300.548714]  ? lock_acquire.part.0+0x152/0x390
[  300.548948]  blk_mq_alloc_request+0x655/0x930
[  300.549181]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.549443]  scsi_execute_cmd+0x223/0x1040
[  300.549660]  ? debug_smp_processor_id+0x20/0x30
[  300.549895]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.550126]  ? rcu_is_watching+0x19/0xc0
[  300.550329]  ? __kmalloc_noprof+0x34f/0x4a0
[  300.550543]  ? ata_cmd_ioctl+0x375/0x570
[  300.550749]  ata_cmd_ioctl+0x23c/0x570
[  300.550950]  ? __pfx_ata_cmd_ioctl+0x10/0x10
[  300.551170]  ? lock_release+0x441/0x870
[  300.551391]  ? __this_cpu_preempt_check+0x21/0x30
[  300.551641]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.551911]  ? security_capable+0x19d/0x1e0
[  300.552131]  ata_sas_scsi_ioctl+0x60a/0x7c0
[  300.552344]  ? mark_lock.part.0+0xf3/0x17b0
[  300.552558]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
[  300.552796]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.553033]  ? __this_cpu_preempt_check+0x21/0x30
[  300.553274]  ? lock_is_held_type+0xef/0x150
[  300.553491]  ? __pfx_ata_scsi_ioctl+0x10/0x10
[  300.553716]  ata_scsi_ioctl+0x70/0x90
[  300.553909]  scsi_ioctl+0xb6f/0x1630
[  300.554099]  ? __might_sleep+0xa3/0x170
[  300.554301]  ? __pfx_scsi_ioctl+0x10/0x10
[  300.554510]  ? scsi_block_when_processing_errors+0x2c4/0x390
[  300.554800]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
[  300.555103]  ? __pfx___lock_acquire+0x10/0x10
[  300.555350]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.555608]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
[  300.555877]  ? do_vfs_ioctl+0x4fa/0x1920
[  300.556082]  ? __pfx_do_vfs_ioctl+0x10/0x10
[  300.556297]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.556569]  sg_ioctl+0xfb8/0x2880
[  300.556747]  ? __this_cpu_preempt_check+0x21/0x30
[  300.556987]  ? lock_release+0x441/0x870
[  300.557189]  ? __pfx_sg_ioctl+0x10/0x10
[  300.557391]  ? __this_cpu_preempt_check+0x21/0x30
[  300.557631]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.557945]  ? lockdep_hardirqs_on+0x89/0x110
[  300.558171]  ? trace_hardirqs_on+0x51/0x60
[  300.558382]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
[  300.558695]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.558944]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
[  300.559201]  ? __pfx_sg_ioctl+0x10/0x10
[  300.559427]  __x64_sys_ioctl+0x1b5/0x230
[  300.559635]  x64_sys_call+0x1209/0x20d0
[  300.559833]  do_syscall_64+0x6d/0x140
[  300.560025]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.560278] RIP: 0033:0x7f8b9783ee5d
[  300.560463] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[  300.560832] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
[  300.561180] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
[  300.561527] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
[  300.561877] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
[  300.562227] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
[  300.562588]  </TASK>
[  300.562706] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
[  300.563102]
[  300.563102] Showing all locks held in the system:
[  300.563420] 3 locks held by kworker/u8:0/11:
[  300.563636]  #0: ffff88800df9a148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.564146]  #1: ffff88800dcd7d78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
[  300.564687]  #2: ffff88800d77a0e0 (&type->s_umount_key#52){.+.+}-{3:3}, at: super_trylock_shared+0x2b/0x110
[  300.565179] 4 locks held by kworker/u8:1/31:
[  300.565394]  #0: ffff8880125bd148 ((wq_completion)ext4-rsv-conversion){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.565924]  #1: ffff88800df4fd78 ((work_completion)(&ei->i_rsv_conversion_work)){+.+.}-{0:0}, at: process_one_work+0x80
[  300.566500]  #2: ffff888012672958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.566948]  #3: ffff888023638c60 (&ei->i_data_sem){++++}-{3:3}, at: ext4_map_blocks+0x344/0x1390
[  300.567407] 1 lock held by khungtaskd/33:
[  300.567612]  #0: ffffffff8705c9c0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x73/0x3c0
[  300.568064] 4 locks held by kworker/u8:2/36:
[  300.568281]  #0: ffff88800df9a148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.568774]  #1: ffff88800e4efd78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
[  300.569316]  #2: ffff8880126740e0 (&type->s_umount_key#38){++++}-{3:3}, at: super_trylock_shared+0x2b/0x110
[  300.569806]  #3: ffff888012670b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: do_writepages+0x1c3/0x8a0
[  300.570281] 4 locks held by kworker/1:2/80:
[  300.570491]  #0: ffff88800d45ad48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.570973]  #1: ffff888011fefd78 ((work_completion)(&(&ap->scsi_rescan_task)->work)){+.+.}-{0:0}, at: process_one_work0
[  300.571572]  #2: ffff888010a8c430 (&ap->scsi_scan_mutex){+.+.}-{3:3}, at: ata_scsi_dev_rescan+0x42/0x470
[  300.572044]  #3: ffff888010b46378 (&dev->mutex){....}-{3:3}, at: scsi_rescan_device+0x33/0x360
[  300.572480] 4 locks held by auditd/239:
[  300.572674]  #0: ffff8880134ccef8 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x25e/0x380
[  300.573087]  #1: ffff888012674408 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x14f/0x290
[  300.573505]  #2: ffff888017c87040 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb8/0x3e0
[  300.574025]  #3: ffff888012672958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.574469] 3 locks held by in:imjournal/321:
[  300.574690]  #0: ffff888012674408 (sb_writers#3){.+.+}-{0:0}, at: path_openat+0x1f7c/0x2bf0
[  300.575118]  #1: ffff88801b708dd0 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: path_openat+0xd40/0x2bf0
[  300.575606]  #2: ffff888012672958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.576052]
[  300.576138] =============================================
"

I hope you find it useful.

Regards,
Yi Lai

---

If you do not need the following environment to reproduce the problem, or if
you already have a reproduction environment set up, please ignore the following information.

How to reproduce:
git clone https://gitlab.com/xupengfe/repro_vm_env.git
cd repro_vm_env
tar -xvf repro_vm_env.tar.gz
cd repro_vm_env; ./start3.sh  // it needs qemu-system-x86_64; I used v7.1.0
  // start3.sh will load the bzImage_2241ab53cbb5cdb08a6b2d4688feb13971058f65 v6.2-rc5 kernel
  // You can change the bzImage_xxx to whichever image you want
  // You may need to remove the line "-drive if=pflash,format=raw,readonly=on,file=./OVMF_CODE.fd \" for a different qemu version
You can use the command below to log in; there is no password for root:
ssh -p 10023 root@localhost

After logging in to the VM (virtual machine) successfully, you can transfer the
reproducer binary to the VM as follows and reproduce the problem there:
gcc -pthread -o repro repro.c
scp -P 10023 repro root@localhost:/root/

Build the bzImage for the target kernel:
Copy the target kconfig to kernel_src/.config, then run:
make olddefconfig
make -jx bzImage           // x should be equal to or less than the number of CPUs your machine has

Point the start3.sh script above at the resulting bzImage file to load the target kernel in the VM.


Tips:
If you already have qemu-system-x86_64, please ignore the info below.
To install qemu v7.1.0:
git clone https://github.com/qemu/qemu.git
cd qemu
git checkout -f v7.1.0
mkdir build
cd build
yum install -y ninja-build.x86_64
yum -y install libslirp-devel.x86_64
../configure --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-sdl --enable-usb-redir --enable-slirp
make
make install 

On Mon, Sep 09, 2024 at 05:42:38PM +0200, Niklas Cassel wrote:
> When ata_qc_complete() schedules a command for EH using
> ata_qc_schedule_eh(), blk_abort_request() will be called, which leads to
> req->q->mq_ops->timeout() / scsi_timeout() being called.
> 
> scsi_timeout(), if the LLDD has no abort handler (libata has no abort
> handler), will set host byte to DID_TIME_OUT, and then call
> scsi_eh_scmd_add() to add the command to EH.
> 
> Thus, when commands first enter libata's EH strategy_handler, all the
> commands that have been added to EH will have DID_TIME_OUT set.
> 
> libata has its own flag (AC_ERR_TIMEOUT), that it sets for commands that
> have not received a completion at the time of entering EH.
> 
> Thus, libata doesn't really care about DID_TIME_OUT at all, and currently
> clears the host byte at the end of EH, in ata_scsi_qc_complete(), before
> scsi_eh_finish_cmd() is called.
> 
> However, this clearing in ata_scsi_qc_complete() is currently only done
> for commands that are not ATA passthrough commands.
> 
> Since the host byte is visible in the completion that we return to user
> space for ATA passthrough commands, for ATA passthrough commands that got
> completed via EH (commands with sense data), the user will incorrectly see:
> ATA pass-through(16): transport error: Host_status=0x03 [DID_TIME_OUT]
> 
> Fix this by moving the clearing of the host byte (which is currently only
> done for commands that are not ATA passthrough commands) from
> ata_scsi_qc_complete() to the start of EH (regardless if the command is
> ATA passthrough or not).
> 
> While at it, use the proper helper function to clear the host byte, rather
> than open coding the clearing.
> 
> This will make sure that we:
> -Correctly clear DID_TIME_OUT for both ATA passthrough commands and
>  commands that are not ATA passthrough commands.
> -Do not needlessly clear the host byte for commands that did not go via EH.
>  ata_scsi_qc_complete() is called both for commands that are completed
>  normally (without going via EH), and for commands that went via EH,
>  however, only commands that went via EH will have DID_TIME_OUT set.
> 
> Fixes: 24aeebbf8ea9 ("scsi: ata: libata: Change ata_eh_request_sense() to not set CHECK_CONDITION")
> Reported-by: Igor Pylypiv <ipylypiv@google.com>
> Closes: https://lore.kernel.org/linux-ide/ZttIN8He8TOZ7Lct@google.com/
> Signed-off-by: Niklas Cassel <cassel@kernel.org>
> ---
> Changes since v2: use set_host_byte() as suggested by Damien.
> 
>  drivers/ata/libata-eh.c   | 8 ++++++++
>  drivers/ata/libata-scsi.c | 3 ---
>  2 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
> index 7de97ee8e78b..703f5ce26765 100644
> --- a/drivers/ata/libata-eh.c
> +++ b/drivers/ata/libata-eh.c
> @@ -630,6 +630,14 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
>  	list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) {
>  		struct ata_queued_cmd *qc;
>  
> +		/*
> +		 * If the scmd was added to EH, via ata_qc_schedule_eh() ->
> +		 * scsi_timeout() -> scsi_eh_scmd_add(), scsi_timeout() will
> +		 * have set DID_TIME_OUT (since libata does not have an abort
> +		 * handler). Thus, to clear DID_TIME_OUT, clear the host byte.
> +		 */
> +		set_host_byte(scmd, DID_OK);
> +
>  		ata_qc_for_each_raw(ap, qc, i) {
>  			if (qc->flags & ATA_QCFLAG_ACTIVE &&
>  			    qc->scsicmd == scmd)
> diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
> index 3a442f564b0d..6a90062c8b55 100644
> --- a/drivers/ata/libata-scsi.c
> +++ b/drivers/ata/libata-scsi.c
> @@ -1680,9 +1680,6 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
>  			set_status_byte(qc->scsicmd, SAM_STAT_CHECK_CONDITION);
>  	} else if (is_error && !have_sense) {
>  		ata_gen_ata_sense(qc);
> -	} else {
> -		/* Keep the SCSI ML and status byte, clear host byte. */
> -		cmd->result &= 0x0000ffff;
>  	}
>  
>  	ata_qc_done(qc);
> -- 
> 2.46.0
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-21 10:58 ` [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data Lai, Yi
@ 2024-10-21 12:07   ` Niklas Cassel
  2024-10-21 12:34     ` Damien Le Moal
  2024-10-21 15:20     ` Niklas Cassel
  0 siblings, 2 replies; 10+ messages in thread
From: Niklas Cassel @ 2024-10-21 12:07 UTC (permalink / raw)
  To: Lai, Yi
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

Hello Yi Lai,

On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> Hi Niklas Cassel,
> 
> Greetings!
> 
> I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> 
> After bisection and the first bad commit is:
> "
> e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> "

It might be that your bisection results are accurate.

However, after looking at the stacktraces, I find it way more likely that
bisection has landed on the wrong commit.

See this series, just queued for 6.13 a few days ago, which solves a similar
tag starvation:
https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block

You could perhaps run v6.12-rc4 (which should still be able to trigger the bug),
and then try v6.12-rc4 with that series applied, to see if you can still trigger
the bug?


Kind regards,
Niklas


> 
> All detailed into can be found at:
> https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag
> Syzkaller repro code:
> https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/repro.c
> Syzkaller repro syscall steps:
> https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/repro.prog
> Syzkaller report:
> https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/repro.report
> Kconfig(make olddefconfig):
> https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/kconfig_origin
> Bisect info:
> https://github.com/laifryiee/syzkaller_logs/tree/main/241018_105830_blk_mq_get_tag/bisect_info.log
> bzImage:
> https://github.com/laifryiee/syzkaller_logs/raw/refs/heads/main/241018_105830_blk_mq_get_tag/bzImage_8e929cb546ee42c9a61d24fae60605e9e3192354
> Issue dmesg:
> https://github.com/laifryiee/syzkaller_logs/blob/main/241018_105830_blk_mq_get_tag/8e929cb546ee42c9a61d24fae60605e9e3192354_dmesg.log
> 
> "
> [  300.375482] INFO: task kworker/u8:0:11 blocked for more than 147 seconds.
> [  300.375906]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.376204] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.376646] task:kworker/u8:0    state:D stack:0     pid:11    tgid:11    ppid:2      flags:0x00004000
> [  300.377158] Workqueue: writeback wb_workfn (flush-8:0)
> [  300.377479] Call Trace:
> [  300.377624]  <TASK>
> [  300.377760]  __schedule+0xe13/0x33a0
> [  300.377985]  ? __pfx___schedule+0x10/0x10
> [  300.378222]  ? lock_release+0x441/0x870
> [  300.378457]  ? __pfx_lock_release+0x10/0x10
> [  300.378704]  ? trace_lock_acquire+0x139/0x1b0
> [  300.378982]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.379296]  ? kthread_data+0x61/0xd0
> [  300.379555]  schedule+0xf6/0x3f0
> [  300.379754]  io_schedule+0xce/0x150
> [  300.379968]  rq_qos_wait+0x1c5/0x310
> [  300.380181]  ? __pfx_wbt_cleanup_cb+0x10/0x10
> [  300.380437]  ? __pfx_rq_qos_wait+0x10/0x10
> [  300.380681]  ? __pfx_lock_release+0x10/0x10
> [  300.380925]  ? __pfx_rq_qos_wake_function+0x10/0x10
> [  300.381214]  ? __pfx_wbt_inflight_cb+0x10/0x10
> [  300.381479]  ? do_raw_spin_unlock+0x15c/0x210
> [  300.381743]  wbt_wait+0x1ec/0x400
> [  300.381946]  ? __pfx_wbt_wait+0x10/0x10
> [  300.382174]  ? __pfx_dd_bio_merge+0x10/0x10
> [  300.382437]  ? __pfx_wbt_wait+0x10/0x10
> [  300.382667]  __rq_qos_throttle+0x5f/0xb0
> [  300.382913]  blk_mq_submit_bio+0x8ad/0x2a20
> [  300.383157]  ? __kasan_check_read+0x15/0x20
> [  300.383438]  ? __pfx_blk_mq_submit_bio+0x10/0x10
> [  300.383708]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.383974]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.384260]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.384532]  ? lock_release+0x441/0x870
> [  300.384765]  __submit_bio+0x18d/0x500
> [  300.384985]  ? __pfx___submit_bio+0x10/0x10
> [  300.385234]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.385509]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.385875]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.386186]  ? kvm_clock_get_cycles+0x43/0x70
> [  300.386451]  submit_bio_noacct_nocheck+0x6a0/0xcc0
> [  300.386729]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
> [  300.387049]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.387361]  submit_bio_noacct+0x61d/0x1dc0
> [  300.387612]  submit_bio+0xce/0x480
> [  300.387819]  submit_bh_wbc+0x58a/0x740
> [  300.388052]  __block_write_full_folio+0x7a1/0xed0
> [  300.388327]  ? __pfx_blkdev_get_block+0x10/0x10
> [  300.388599]  block_write_full_folio+0x2e6/0x3a0
> [  300.388880]  ? __pfx_blkdev_get_block+0x10/0x10
> [  300.389147]  ? __pfx_blkdev_get_block+0x10/0x10
> [  300.389414]  write_cache_pages+0xbc/0x140
> [  300.389654]  ? __pfx_block_write_full_folio+0x10/0x10
> [  300.389954]  ? __pfx_write_cache_pages+0x10/0x10
> [  300.390224]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.390504]  blkdev_writepages+0xa5/0xf0
> [  300.390738]  ? __pfx_blkdev_writepages+0x10/0x10
> [  300.391016]  ? __pfx_do_raw_spin_lock+0x10/0x10
> [  300.391293]  ? do_raw_spin_unlock+0x15c/0x210
> [  300.391573]  ? __pfx_blkdev_writepages+0x10/0x10
> [  300.391841]  do_writepages+0x1c3/0x8a0
> [  300.392068]  ? __pfx_do_writepages+0x10/0x10
> [  300.392323]  ? __pfx_lock_release+0x10/0x10
> [  300.392568]  ? lock_release+0x441/0x870
> [  300.392805]  __writeback_single_inode+0x124/0xe10
> [  300.393081]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
> [  300.393397]  writeback_sb_inodes+0x63b/0x1000
> [  300.393661]  ? __pfx_writeback_sb_inodes+0x10/0x10
> [  300.393935]  ? lock_acquire.part.0+0x152/0x390
> [  300.394230]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
> [  300.394544]  __writeback_inodes_wb+0xc8/0x280
> [  300.394806]  wb_writeback+0x717/0xaf0
> [  300.395039]  ? __pfx_wb_writeback+0x10/0x10
> [  300.395290]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.395589]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.395868]  ? get_nr_dirty_inodes+0x12c/0x1b0
> [  300.396134]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.396393]  wb_workfn+0x76c/0xc90
> [  300.396597]  ? __pfx_lock_acquire.part.0+0x10/0x10
> [  300.396882]  ? __pfx_wb_workfn+0x10/0x10
> [  300.397110]  ? rcu_is_watching+0x19/0xc0
> [  300.397354]  process_one_work+0x92e/0x1b50
> [  300.397594]  ? lock_acquire.part.0+0x152/0x390
> [  300.397879]  ? __pfx_process_one_work+0x10/0x10
> [  300.398150]  ? move_linked_works+0x1bf/0x2c0
> [  300.398407]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.398690]  ? assign_work+0x19f/0x250
> [  300.398920]  ? lock_is_held_type+0xef/0x150
> [  300.399174]  worker_thread+0x68d/0xe90
> [  300.399428]  kthread+0x35a/0x470
> [  300.399623]  ? __pfx_worker_thread+0x10/0x10
> [  300.399874]  ? __pfx_kthread+0x10/0x10
> [  300.400100]  ret_from_fork+0x56/0x90
> [  300.400315]  ? __pfx_kthread+0x10/0x10
> [  300.400540]  ret_from_fork_asm+0x1a/0x30
> [  300.400783]  </TASK>
> [  300.400928] INFO: task kworker/u8:2:36 blocked for more than 147 seconds.
> [  300.401309]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.401613] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.402046] task:kworker/u8:2    state:D stack:0     pid:36    tgid:36    ppid:2      flags:0x00004000
> [  300.402638] Workqueue: writeback wb_workfn (flush-8:0)
> [  300.402941] Call Trace:
> [  300.403083]  <TASK>
> [  300.403214]  __schedule+0xe13/0x33a0
> [  300.403458]  ? __pfx___schedule+0x10/0x10
> [  300.403686]  ? lock_release+0x441/0x870
> [  300.403909]  ? __pfx_lock_release+0x10/0x10
> [  300.404146]  ? trace_lock_acquire+0x139/0x1b0
> [  300.404399]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.404701]  ? kthread_data+0x61/0xd0
> [  300.404915]  schedule+0xf6/0x3f0
> [  300.405106]  io_schedule+0xce/0x150
> [  300.405309]  blk_mq_get_tag+0x5e7/0xce0
> [  300.405531]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.405771]  ? __pfx_wbt_cleanup_cb+0x10/0x10
> [  300.406011]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.406288]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.406579]  ? __pfx_rq_qos_wake_function+0x10/0x10
> [  300.406848]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.407136]  ? dd_limit_depth+0x21e/0x2a0
> [  300.407380]  ? __pfx_dd_limit_depth+0x10/0x10
> [  300.407625]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.407901]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.408186]  ? __pfx_wbt_wait+0x10/0x10
> [  300.408405]  blk_mq_submit_bio+0x93c/0x2a20
> [  300.408632]  ? __kasan_check_read+0x15/0x20
> [  300.408866]  ? __pfx_blk_mq_submit_bio+0x10/0x10
> [  300.409117]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.409364]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.409626]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.409883]  ? lock_release+0x441/0x870
> [  300.410098]  __submit_bio+0x18d/0x500
> [  300.410303]  ? __pfx___submit_bio+0x10/0x10
> [  300.410534]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.410790]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.411135]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.411440]  ? kvm_clock_get_cycles+0x43/0x70
> [  300.411690]  submit_bio_noacct_nocheck+0x6a0/0xcc0
> [  300.411958]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
> [  300.412249]  ? lock_is_held_type+0xef/0x150
> [  300.412484]  submit_bio_noacct+0x61d/0x1dc0
> [  300.412718]  submit_bio+0xce/0x480
> [  300.412911]  ext4_io_submit+0xb0/0x150
> [  300.413128]  ext4_do_writepages+0xa54/0x2f30
> [  300.413372]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.413638]  ? __pfx_ext4_do_writepages+0x10/0x10
> [  300.413892]  ? rcu_is_watching+0x19/0xc0
> [  300.414113]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.414376]  ? lock_is_held_type+0xef/0x150
> [  300.414616]  ext4_writepages+0x35c/0x7c0
> [  300.414836]  ? __pfx_ext4_writepages+0x10/0x10
> [  300.415087]  ? lock_release+0x441/0x870
> [  300.415313]  ? do_raw_spin_unlock+0x15c/0x210
> [  300.415576]  ? __pfx_ext4_writepages+0x10/0x10
> [  300.415824]  do_writepages+0x1c3/0x8a0
> [  300.416039]  ? __pfx_do_writepages+0x10/0x10
> [  300.416278]  ? __pfx_lock_release+0x10/0x10
> [  300.416510]  ? __pfx_lock_release+0x10/0x10
> [  300.416754]  __writeback_single_inode+0x124/0xe10
> [  300.417016]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
> [  300.417301]  writeback_sb_inodes+0x63b/0x1000
> [  300.417546]  ? lock_release+0x441/0x870
> [  300.417772]  ? __pfx_writeback_sb_inodes+0x10/0x10
> [  300.418039]  ? lock_acquire.part.0+0x152/0x390
> [  300.418321]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
> [  300.418616]  __writeback_inodes_wb+0xc8/0x280
> [  300.418865]  wb_writeback+0x717/0xaf0
> [  300.419082]  ? __pfx_wb_writeback+0x10/0x10
> [  300.419335]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.419598]  ? get_nr_dirty_inodes+0x12c/0x1b0
> [  300.419847]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.420094]  wb_workfn+0x7b5/0xc90
> [  300.420291]  ? __pfx_lock_acquire.part.0+0x10/0x10
> [  300.420561]  ? __pfx_wb_workfn+0x10/0x10
> [  300.420780]  ? rcu_is_watching+0x19/0xc0
> [  300.421017]  process_one_work+0x92e/0x1b50
> [  300.421245]  ? lock_acquire.part.0+0x152/0x390
> [  300.421498]  ? __pfx_process_one_work+0x10/0x10
> [  300.421743]  ? move_linked_works+0x1bf/0x2c0
> [  300.421976]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.422236]  ? assign_work+0x19f/0x250
> [  300.422442]  ? lock_is_held_type+0xef/0x150
> [  300.422673]  worker_thread+0x68d/0xe90
> [  300.422896]  kthread+0x35a/0x470
> [  300.423074]  ? __pfx_worker_thread+0x10/0x10
> [  300.423306]  ? __pfx_kthread+0x10/0x10
> [  300.423538]  ret_from_fork+0x56/0x90
> [  300.423735]  ? __pfx_kthread+0x10/0x10
> [  300.423944]  ret_from_fork_asm+0x1a/0x30
> [  300.424171]  </TASK>
> [  300.424303] INFO: task kworker/1:2:80 blocked for more than 147 seconds.
> [  300.424649]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.424930] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.425332] task:kworker/1:2     state:D stack:0     pid:80    tgid:80    ppid:2      flags:0x00004000
> [  300.425818] Workqueue: events ata_scsi_dev_rescan
> [  300.426080] Call Trace:
> [  300.426217]  <TASK>
> [  300.426340]  __schedule+0xe13/0x33a0
> [  300.426545]  ? __pfx___schedule+0x10/0x10
> [  300.426761]  ? lock_release+0x441/0x870
> [  300.426977]  ? __pfx_lock_release+0x10/0x10
> [  300.427204]  ? _raw_spin_unlock_irq+0x3c/0x60
> [  300.427452]  schedule+0xf6/0x3f0
> [  300.427631]  io_schedule+0xce/0x150
> [  300.427820]  blk_mq_get_tag+0x5e7/0xce0
> [  300.428032]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.428264]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.428542]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.428790]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.429077]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.429352]  blk_mq_alloc_request+0x655/0x930
> [  300.429586]  ? __pfx_blk_mq_alloc_request+0x10/0x10
> [  300.429840]  ? __lock_acquire+0xd87/0x5c90
> [  300.430067]  scsi_execute_cmd+0x223/0x1040
> [  300.430294]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.430540]  ? __pfx_scsi_execute_cmd+0x10/0x10
> [  300.430781]  ? __pfx_lock_acquire.part.0+0x10/0x10
> [  300.431038]  scsi_vpd_inquiry+0xdf/0x220
> [  300.431251]  ? __pfx_scsi_vpd_inquiry+0x10/0x10
> [  300.431502]  ? debug_smp_processor_id+0x20/0x30
> [  300.431741]  ? rcu_is_watching+0x19/0xc0
> [  300.431953]  scsi_get_vpd_size+0x11e/0x310
> [  300.432168]  ? __mutex_lock+0x230/0x1490
> [  300.432377]  ? __pfx_scsi_get_vpd_size+0x10/0x10
> [  300.432619]  ? scsi_rescan_device+0x33/0x360
> [  300.432845]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.433083]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.433330]  ? lock_acquire.part.0+0x152/0x390
> [  300.433565]  ? __pfx___mutex_lock+0x10/0x10
> [  300.433791]  scsi_get_vpd_buf+0x30/0x180
> [  300.434006]  scsi_attach_vpd+0xf8/0x5b0
> [  300.434218]  scsi_rescan_device+0x106/0x360
> [  300.434446]  ata_scsi_dev_rescan+0x1d2/0x470
> [  300.434684]  process_one_work+0x92e/0x1b50
> [  300.434904]  ? lock_acquire.part.0+0x152/0x390
> [  300.435154]  ? __pfx_process_one_work+0x10/0x10
> [  300.435412]  ? move_linked_works+0x1bf/0x2c0
> [  300.435639]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.435891]  ? assign_work+0x19f/0x250
> [  300.436089]  ? lock_is_held_type+0xef/0x150
> [  300.436313]  worker_thread+0x68d/0xe90
> [  300.436525]  kthread+0x35a/0x470
> [  300.436699]  ? __pfx_worker_thread+0x10/0x10
> [  300.436924]  ? __pfx_kthread+0x10/0x10
> [  300.437125]  ret_from_fork+0x56/0x90
> [  300.437317]  ? __pfx_kthread+0x10/0x10
> [  300.437517]  ret_from_fork_asm+0x1a/0x30
> [  300.437736]  </TASK>
> [  300.437858] INFO: task jbd2/sda-8:104 blocked for more than 147 seconds.
> [  300.438192]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.438465] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.438852] task:jbd2/sda-8      state:D stack:0     pid:104   tgid:104   ppid:2      flags:0x00004000
> [  300.439339] Call Trace:
> [  300.439475]  <TASK>
> [  300.439595]  __schedule+0xe13/0x33a0
> [  300.439794]  ? __pfx___schedule+0x10/0x10
> [  300.440005]  ? lock_release+0x441/0x870
> [  300.440212]  ? __pfx_lock_release+0x10/0x10
> [  300.440431]  ? trace_lock_acquire+0x139/0x1b0
> [  300.440666]  ? lock_acquire+0x80/0xb0
> [  300.440860]  ? schedule+0x216/0x3f0
> [  300.441051]  schedule+0xf6/0x3f0
> [  300.441229]  io_schedule+0xce/0x150
> [  300.441417]  blk_mq_get_tag+0x5e7/0xce0
> [  300.441628]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.441859]  ? lock_release+0x441/0x870
> [  300.442063]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.442326]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.442596]  ? blk_mq_sched_try_merge+0xce/0x650
> [  300.442841]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.443123]  ? dd_limit_depth+0x21e/0x2a0
> [  300.443358]  ? __pfx_dd_limit_depth+0x10/0x10
> [  300.443591]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.443846]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.444131]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.444403]  ? __pfx_wbt_wait+0x10/0x10
> [  300.444610]  blk_mq_submit_bio+0x93c/0x2a20
> [  300.444829]  ? __pfx___lock_acquire+0x10/0x10
> [  300.445064]  ? __pfx_blk_mq_submit_bio+0x10/0x10
> [  300.445307]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.445559]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.445807]  ? lock_release+0x441/0x870
> [  300.446014]  __submit_bio+0x18d/0x500
> [  300.446211]  ? __pfx___submit_bio+0x10/0x10
> [  300.446432]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.446677]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.447005]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.447282]  ? kvm_clock_get_cycles+0x43/0x70
> [  300.447534]  submit_bio_noacct_nocheck+0x6a0/0xcc0
> [  300.447787]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
> [  300.448067]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.448330]  submit_bio_noacct+0x61d/0x1dc0
> [  300.448556]  submit_bio+0xce/0x480
> [  300.448742]  submit_bh_wbc+0x58a/0x740
> [  300.448947]  submit_bh+0x2b/0x40
> [  300.449124]  jbd2_journal_commit_transaction+0x2a09/0x65e0
> [  300.449431]  ? __pfx_jbd2_journal_commit_transaction+0x10/0x10
> [  300.449737]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.449983]  ? lock_release+0x441/0x870
> [  300.450196]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.450445]  ? _raw_spin_unlock_irqrestore+0x45/0x70
> [  300.450703]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.450998]  kjournald2+0x204/0x790
> [  300.451194]  ? __pfx_kjournald2+0x10/0x10
> [  300.451416]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.451647]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.451917]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.452194]  ? __kthread_parkme+0x15d/0x230
> [  300.452419]  kthread+0x35a/0x470
> [  300.452594]  ? __pfx_kjournald2+0x10/0x10
> [  300.452803]  ? __pfx_kthread+0x10/0x10
> [  300.453004]  ret_from_fork+0x56/0x90
> [  300.453198]  ? __pfx_kthread+0x10/0x10
> [  300.453406]  ret_from_fork_asm+0x1a/0x30
> [  300.453639]  </TASK>
> [  300.453765] INFO: task auditd:239 blocked for more than 147 seconds.
> [  300.454094]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.454367] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.454757] task:auditd          state:D stack:0     pid:239   tgid:239   ppid:1      flags:0x00000002
> [  300.455219] Call Trace:
> [  300.455363]  <TASK>
> [  300.455481]  __schedule+0xe13/0x33a0
> [  300.455676]  ? __pfx___schedule+0x10/0x10
> [  300.455883]  ? lock_release+0x441/0x870
> [  300.456086]  ? __pfx_lock_release+0x10/0x10
> [  300.456300]  ? trace_lock_acquire+0x139/0x1b0
> [  300.456530]  ? lock_acquire+0x80/0xb0
> [  300.456722]  ? schedule+0x216/0x3f0
> [  300.456911]  schedule+0xf6/0x3f0
> [  300.457085]  io_schedule+0xce/0x150
> [  300.457271]  bit_wait_io+0x1f/0x100
> [  300.457457]  __wait_on_bit+0x68/0x1b0
> [  300.457649]  ? __pfx_bit_wait_io+0x10/0x10
> [  300.457867]  out_of_line_wait_on_bit+0xe6/0x120
> [  300.458102]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
> [  300.458369]  ? __pfx_wake_bit_function+0x10/0x10
> [  300.458606]  ? __might_sleep+0xa3/0x170
> [  300.458808]  do_get_write_access+0x83b/0x1010
> [  300.459045]  jbd2_journal_get_write_access+0x1e4/0x270
> [  300.459308]  __ext4_journal_get_write_access+0x72/0x3b0
> [  300.459591]  ext4_reserve_inode_write+0x199/0x280
> [  300.459833]  ? ext4_dirty_inode+0xeb/0x140
> [  300.460044]  __ext4_mark_inode_dirty+0x18b/0x8c0
> [  300.460282]  ? debug_smp_processor_id+0x20/0x30
> [  300.460515]  ? __pfx___ext4_mark_inode_dirty+0x10/0x10
> [  300.460773]  ? trace_jbd2_handle_start+0x77/0x210
> [  300.461017]  ? jbd2__journal_start+0xff/0x6b0
> [  300.461246]  ? __ext4_journal_start_sb+0x1f8/0x600
> [  300.461490]  ? ext4_dirty_inode+0xb3/0x140
> [  300.461703]  ? __pfx_ext4_dirty_inode+0x10/0x10
> [  300.461937]  ext4_dirty_inode+0xeb/0x140
> [  300.462139]  ? rcu_is_watching+0x19/0xc0
> [  300.462343]  __mark_inode_dirty+0x1d0/0xd50
> [  300.462557]  ? __pfx_generic_write_checks+0x10/0x10
> [  300.462810]  generic_update_time+0xe0/0x110
> [  300.463032]  file_modified+0x1af/0x1e0
> [  300.463233]  ext4_buffered_write_iter+0x101/0x3e0
> [  300.463493]  ext4_file_write_iter+0x8b8/0x1ab0
> [  300.463722]  ? __pfx_lock_acquire.part.0+0x10/0x10
> [  300.463971]  ? debug_smp_processor_id+0x20/0x30
> [  300.464208]  ? __pfx_ext4_file_write_iter+0x10/0x10
> [  300.464459]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.464701]  ? lock_is_held_type+0xef/0x150
> [  300.464920]  vfs_write+0xc46/0x1120
> [  300.465106]  ? __pfx_ext4_file_write_iter+0x10/0x10
> [  300.465356]  ? __pfx_vfs_write+0x10/0x10
> [  300.465572]  ? mutex_lock_nested+0x1f/0x30
> [  300.465788]  ksys_write+0x14f/0x290
> [  300.465974]  ? __pfx_ksys_write+0x10/0x10
> [  300.466189]  __x64_sys_write+0x7b/0xc0
> [  300.466384]  ? syscall_trace_enter+0x9b/0x230
> [  300.466610]  x64_sys_call+0x1685/0x20d0
> [  300.466810]  do_syscall_64+0x6d/0x140
> [  300.467007]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [  300.467264] RIP: 0033:0x7f0f0ad3ebcf
> [  300.467465] RSP: 002b:00007fffd00b5a30 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
> [  300.467838] RAX: ffffffffffffffda RBX: 00000000000000e0 RCX: 00007f0f0ad3ebcf
> [  300.468186] RDX: 00000000000000e0 RSI: 000055b1be1b80a0 RDI: 0000000000000005
> [  300.468535] RBP: 000055b1be1b80a0 R08: 0000000000000000 R09: 00007f0f0adb14e0
> [  300.468883] R10: 00007f0f0adb13e0 R11: 0000000000000293 R12: 00000000000000e0
> [  300.469231] R13: 000055b1be1ae2a0 R14: 00000000000000e0 R15: 00007f0f0adf69e0
> [  300.469591]  </TASK>
> [  300.469711] INFO: task in:imjournal:321 blocked for more than 147 seconds.
> [  300.470045]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.470311] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.470687] task:in:imjournal    state:D stack:0     pid:321   tgid:302   ppid:1      flags:0x00000002
> [  300.471144] Call Trace:
> [  300.471273]  <TASK>
> [  300.471406]  __schedule+0xe13/0x33a0
> [  300.471601]  ? __pfx___schedule+0x10/0x10
> [  300.471807]  ? lock_release+0x441/0x870
> [  300.472008]  ? __pfx_lock_release+0x10/0x10
> [  300.472221]  ? trace_lock_acquire+0x139/0x1b0
> [  300.472450]  ? lock_acquire+0x80/0xb0
> [  300.472640]  ? schedule+0x216/0x3f0
> [  300.472826]  schedule+0xf6/0x3f0
> [  300.472999]  io_schedule+0xce/0x150
> [  300.473183]  bit_wait_io+0x1f/0x100
> [  300.473368]  __wait_on_bit+0x68/0x1b0
> [  300.473560]  ? __pfx_bit_wait_io+0x10/0x10
> [  300.473778]  out_of_line_wait_on_bit+0xe6/0x120
> [  300.474010]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
> [  300.474273]  ? __pfx_wake_bit_function+0x10/0x10
> [  300.474512]  ? __might_sleep+0xa3/0x170
> [  300.474714]  do_get_write_access+0x83b/0x1010
> [  300.474951]  jbd2_journal_get_write_access+0x1e4/0x270
> [  300.475215]  __ext4_journal_get_write_access+0x72/0x3b0
> [  300.475496]  __ext4_new_inode+0xfef/0x5320
> [  300.475720]  ? __pfx___ext4_new_inode+0x10/0x10
> [  300.475955]  ? __pfx___dquot_initialize+0x10/0x10
> [  300.476202]  ? _raw_spin_unlock+0x31/0x60
> [  300.476409]  ? d_splice_alias+0x4e8/0xe30
> [  300.476619]  ext4_create+0x32e/0x550
> [  300.476815]  ? __pfx_ext4_create+0x10/0x10
> [  300.477030]  ? inode_permission+0xf4/0x650
> [  300.477239]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.477510]  ? __pfx_ext4_create+0x10/0x10
> [  300.477723]  lookup_open.isra.0+0x117c/0x1580
> [  300.477954]  ? __pfx_lookup_open.isra.0+0x10/0x10
> [  300.478200]  ? lock_acquire+0x80/0xb0
> [  300.478391]  ? path_openat+0xd40/0x2bf0
> [  300.478599]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.478873]  path_openat+0xdbf/0x2bf0
> [  300.479085]  ? __pfx_path_openat+0x10/0x10
> [  300.479298]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.479553]  ? lock_is_held_type+0xef/0x150
> [  300.479770]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.480019]  do_filp_open+0x1cc/0x420
> [  300.480213]  ? __pfx_do_filp_open+0x10/0x10
> [  300.480429]  ? lock_release+0x441/0x870
> [  300.480638]  ? do_raw_spin_unlock+0x15c/0x210
> [  300.480877]  do_sys_openat2+0x185/0x1f0
> [  300.481078]  ? __pfx_do_sys_openat2+0x10/0x10
> [  300.481306]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.481548]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.481864]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.482094]  __x64_sys_openat+0x17a/0x240
> [  300.482304]  ? __pfx___x64_sys_openat+0x10/0x10
> [  300.482539]  ? __audit_syscall_entry+0x39c/0x500
> [  300.482784]  x64_sys_call+0x1a52/0x20d0
> [  300.482988]  do_syscall_64+0x6d/0x140
> [  300.483182]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [  300.483450] RIP: 0033:0x7fd79d13e8c4
> [  300.483639] RSP: 002b:00007fd79c1fea40 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
> [  300.484009] RAX: ffffffffffffffda RBX: 00007fd794000c50 RCX: 00007fd79d13e8c4
> [  300.484357] RDX: 0000000000000241 RSI: 00007fd79c1feb80 RDI: 00000000ffffff9c
> [  300.484705] RBP: 00007fd79c1feb80 R08: 0000000000000000 R09: 0000000000000001
> [  300.485053] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
> [  300.485400] R13: 00007fd794000c50 R14: 0000000000000001 R15: 00007fd79402aba0
> [  300.485761]  </TASK>
> [  300.485918] INFO: task repro:686 blocked for more than 147 seconds.
> [  300.486225]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.486491] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.486868] task:repro           state:D stack:0     pid:686   tgid:686   ppid:680    flags:0x00000006
> [  300.487345] Call Trace:
> [  300.487476]  <TASK>
> [  300.487595]  __schedule+0xe13/0x33a0
> [  300.487790]  ? __pfx___schedule+0x10/0x10
> [  300.487995]  ? lock_release+0x441/0x870
> [  300.488196]  ? __pfx_lock_release+0x10/0x10
> [  300.488411]  ? trace_lock_acquire+0x139/0x1b0
> [  300.488640]  ? lock_acquire+0x80/0xb0
> [  300.488831]  ? schedule+0x216/0x3f0
> [  300.489019]  schedule+0xf6/0x3f0
> [  300.489193]  io_schedule+0xce/0x150
> [  300.489378]  blk_mq_get_tag+0x5e7/0xce0
> [  300.489584]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.489811]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.490082]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.490323]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.490603]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.490863]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.491108]  ? lock_acquire.part.0+0x152/0x390
> [  300.491375]  blk_mq_alloc_request+0x655/0x930
> [  300.491604]  ? __pfx_blk_mq_alloc_request+0x10/0x10
> [  300.491865]  scsi_execute_cmd+0x223/0x1040
> [  300.492081]  ? debug_smp_processor_id+0x20/0x30
> [  300.492317]  ? __pfx_scsi_execute_cmd+0x10/0x10
> [  300.492548]  ? rcu_is_watching+0x19/0xc0
> [  300.492754]  ? __kmalloc_noprof+0x34f/0x4a0
> [  300.492970]  ? ata_cmd_ioctl+0x375/0x570
> [  300.493179]  ata_cmd_ioctl+0x23c/0x570
> [  300.493378]  ? __pfx_ata_cmd_ioctl+0x10/0x10
> [  300.493597]  ? lock_release+0x441/0x870
> [  300.493802]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.494052]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.494321]  ? security_capable+0x19d/0x1e0
> [  300.494545]  ata_sas_scsi_ioctl+0x60a/0x7c0
> [  300.494758]  ? mark_lock.part.0+0xf3/0x17b0
> [  300.494979]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
> [  300.495218]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.495473]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.495715]  ? lock_is_held_type+0xef/0x150
> [  300.495935]  ? __pfx_ata_scsi_ioctl+0x10/0x10
> [  300.496161]  ata_scsi_ioctl+0x70/0x90
> [  300.496355]  scsi_ioctl+0xb6f/0x1630
> [  300.496546]  ? __might_sleep+0xa3/0x170
> [  300.496748]  ? __pfx_scsi_ioctl+0x10/0x10
> [  300.496954]  ? scsi_block_when_processing_errors+0x2c4/0x390
> [  300.497244]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
> [  300.497546]  ? __pfx___lock_acquire+0x10/0x10
> [  300.497770]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.498028]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
> [  300.498298]  ? do_vfs_ioctl+0x4fa/0x1920
> [  300.498502]  ? __pfx_do_vfs_ioctl+0x10/0x10
> [  300.498718]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.498996]  sg_ioctl+0xfb8/0x2880
> [  300.499180]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.499436]  ? lock_release+0x441/0x870
> [  300.499640]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.499842]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.500081]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.500394]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.500621]  ? trace_hardirqs_on+0x51/0x60
> [  300.500834]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
> [  300.501149]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.501395]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
> [  300.501649]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.501849]  __x64_sys_ioctl+0x1b5/0x230
> [  300.502055]  x64_sys_call+0x1209/0x20d0
> [  300.502255]  do_syscall_64+0x6d/0x140
> [  300.502447]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [  300.502702] RIP: 0033:0x7f8b9783ee5d
> [  300.502890] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
> [  300.503265] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
> [  300.503629] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
> [  300.503978] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
> [  300.504326] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
> [  300.504674] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
> [  300.505034]  </TASK>
> [  300.505153] INFO: task repro:687 blocked for more than 147 seconds.
> [  300.505459]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.505724] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.506102] task:repro           state:D stack:0     pid:687   tgid:687   ppid:683    flags:0x00000006
> [  300.506557] Call Trace:
> [  300.506686]  <TASK>
> [  300.506803]  __schedule+0xe13/0x33a0
> [  300.507000]  ? __pfx___schedule+0x10/0x10
> [  300.507206]  ? lock_release+0x441/0x870
> [  300.507419]  ? __pfx_lock_release+0x10/0x10
> [  300.507633]  ? trace_lock_acquire+0x139/0x1b0
> [  300.507865]  ? lock_acquire+0x80/0xb0
> [  300.508055]  ? schedule+0x216/0x3f0
> [  300.508243]  schedule+0xf6/0x3f0
> [  300.508418]  io_schedule+0xce/0x150
> [  300.508602]  blk_mq_get_tag+0x5e7/0xce0
> [  300.508807]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.509034]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.509306]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.509548]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.509829]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.510092]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.510335]  ? lock_acquire.part.0+0x152/0x390
> [  300.510570]  blk_mq_alloc_request+0x655/0x930
> [  300.510801]  ? __pfx_blk_mq_alloc_request+0x10/0x10
> [  300.511072]  scsi_execute_cmd+0x223/0x1040
> [  300.511290]  ? debug_smp_processor_id+0x20/0x30
> [  300.511546]  ? __pfx_scsi_execute_cmd+0x10/0x10
> [  300.511780]  ? rcu_is_watching+0x19/0xc0
> [  300.511984]  ? __kmalloc_noprof+0x34f/0x4a0
> [  300.512200]  ? ata_cmd_ioctl+0x375/0x570
> [  300.512407]  ata_cmd_ioctl+0x23c/0x570
> [  300.512607]  ? __pfx_ata_cmd_ioctl+0x10/0x10
> [  300.512827]  ? lock_release+0x441/0x870
> [  300.513032]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.513282]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.513552]  ? security_capable+0x19d/0x1e0
> [  300.513772]  ata_sas_scsi_ioctl+0x60a/0x7c0
> [  300.513985]  ? mark_lock.part.0+0xf3/0x17b0
> [  300.514200]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
> [  300.514439]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.514676]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.514919]  ? lock_is_held_type+0xef/0x150
> [  300.515136]  ? __pfx_ata_scsi_ioctl+0x10/0x10
> [  300.515374]  ata_scsi_ioctl+0x70/0x90
> [  300.515568]  scsi_ioctl+0xb6f/0x1630
> [  300.515759]  ? __might_sleep+0xa3/0x170
> [  300.515962]  ? __pfx_scsi_ioctl+0x10/0x10
> [  300.516170]  ? scsi_block_when_processing_errors+0x2c4/0x390
> [  300.516459]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
> [  300.516759]  ? __pfx___lock_acquire+0x10/0x10
> [  300.516983]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.517239]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
> [  300.517511]  ? do_vfs_ioctl+0x4fa/0x1920
> [  300.517715]  ? __pfx_do_vfs_ioctl+0x10/0x10
> [  300.517931]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.518204]  sg_ioctl+0xfb8/0x2880
> [  300.518383]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.518623]  ? lock_release+0x441/0x870
> [  300.518826]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.519034]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.519274]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.519601]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.519828]  ? trace_hardirqs_on+0x51/0x60
> [  300.520039]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
> [  300.520351]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.520598]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
> [  300.520853]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.521053]  __x64_sys_ioctl+0x1b5/0x230
> [  300.521260]  x64_sys_call+0x1209/0x20d0
> [  300.521459]  do_syscall_64+0x6d/0x140
> [  300.521651]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [  300.521907] RIP: 0033:0x7f8b9783ee5d
> [  300.522092] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
> [  300.522464] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
> [  300.522815] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
> [  300.523169] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
> [  300.523531] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
> [  300.523881] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
> [  300.524241]  </TASK>
> [  300.524359] INFO: task repro:688 blocked for more than 147 seconds.
> [  300.524666]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.524932] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.525308] task:repro           state:D stack:0     pid:688   tgid:688   ppid:685    flags:0x00000006
> [  300.525762] Call Trace:
> [  300.525893]  <TASK>
> [  300.526011]  __schedule+0xe13/0x33a0
> [  300.526207]  ? __pfx___schedule+0x10/0x10
> [  300.526413]  ? lock_release+0x441/0x870
> [  300.526614]  ? __pfx_lock_release+0x10/0x10
> [  300.526827]  ? trace_lock_acquire+0x139/0x1b0
> [  300.527060]  ? lock_acquire+0x80/0xb0
> [  300.527251]  ? schedule+0x216/0x3f0
> [  300.527450]  schedule+0xf6/0x3f0
> [  300.527624]  io_schedule+0xce/0x150
> [  300.527807]  blk_mq_get_tag+0x5e7/0xce0
> [  300.528012]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.528238]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.528508]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.528749]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.529029]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.529291]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.529533]  ? lock_acquire.part.0+0x152/0x390
> [  300.529767]  blk_mq_alloc_request+0x655/0x930
> [  300.529996]  ? __pfx_blk_mq_alloc_request+0x10/0x10
> [  300.530259]  scsi_execute_cmd+0x223/0x1040
> [  300.530476]  ? debug_smp_processor_id+0x20/0x30
> [  300.530712]  ? __pfx_scsi_execute_cmd+0x10/0x10
> [  300.530946]  ? rcu_is_watching+0x19/0xc0
> [  300.531156]  ? __kmalloc_noprof+0x34f/0x4a0
> [  300.531389]  ? ata_cmd_ioctl+0x375/0x570
> [  300.531596]  ata_cmd_ioctl+0x23c/0x570
> [  300.531795]  ? __pfx_ata_cmd_ioctl+0x10/0x10
> [  300.532014]  ? lock_release+0x441/0x870
> [  300.532219]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.532468]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.532738]  ? security_capable+0x19d/0x1e0
> [  300.532959]  ata_sas_scsi_ioctl+0x60a/0x7c0
> [  300.533172]  ? mark_lock.part.0+0xf3/0x17b0
> [  300.533386]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
> [  300.533624]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.533863]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.534105]  ? lock_is_held_type+0xef/0x150
> [  300.534323]  ? __pfx_ata_scsi_ioctl+0x10/0x10
> [  300.534547]  ata_scsi_ioctl+0x70/0x90
> [  300.534742]  scsi_ioctl+0xb6f/0x1630
> [  300.534937]  ? __might_sleep+0xa3/0x170
> [  300.535139]  ? __pfx_scsi_ioctl+0x10/0x10
> [  300.535363]  ? scsi_block_when_processing_errors+0x2c4/0x390
> [  300.535652]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
> [  300.535953]  ? __pfx___lock_acquire+0x10/0x10
> [  300.536176]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.536432]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
> [  300.536702]  ? do_vfs_ioctl+0x4fa/0x1920
> [  300.536906]  ? __pfx_do_vfs_ioctl+0x10/0x10
> [  300.537122]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.537395]  sg_ioctl+0xfb8/0x2880
> [  300.537574]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.537815]  ? lock_release+0x441/0x870
> [  300.538019]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.538221]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.538462]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.538777]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.539006]  ? trace_hardirqs_on+0x51/0x60
> [  300.539219]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
> [  300.539546]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.539792]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
> [  300.540047]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.540246]  __x64_sys_ioctl+0x1b5/0x230
> [  300.540452]  x64_sys_call+0x1209/0x20d0
> [  300.540650]  do_syscall_64+0x6d/0x140
> [  300.540842]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [  300.541097] RIP: 0033:0x7f8b9783ee5d
> [  300.541282] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
> [  300.541652] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
> [  300.541999] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
> [  300.542347] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
> [  300.542695] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
> [  300.543048] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
> [  300.543422]  </TASK>
> [  300.543540] INFO: task repro:690 blocked for more than 147 seconds.
> [  300.543847]       Not tainted 6.12.0-rc3-8e929cb546ee+ #1
> [  300.544111] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  300.544487] task:repro           state:D stack:0     pid:690   tgid:690   ppid:689    flags:0x00000006
> [  300.544939] Call Trace:
> [  300.545069]  <TASK>
> [  300.545186]  __schedule+0xe13/0x33a0
> [  300.545380]  ? __pfx___schedule+0x10/0x10
> [  300.545585]  ? lock_release+0x441/0x870
> [  300.545787]  ? __pfx_lock_release+0x10/0x10
> [  300.546000]  ? trace_lock_acquire+0x139/0x1b0
> [  300.546230]  ? lock_acquire+0x80/0xb0
> [  300.546420]  ? schedule+0x216/0x3f0
> [  300.546607]  schedule+0xf6/0x3f0
> [  300.546781]  io_schedule+0xce/0x150
> [  300.546967]  blk_mq_get_tag+0x5e7/0xce0
> [  300.547174]  ? __pfx_blk_mq_get_tag+0x10/0x10
> [  300.547416]  ? __pfx_autoremove_wake_function+0x10/0x10
> [  300.547687]  __blk_mq_alloc_requests+0x6f6/0x1720
> [  300.547929]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
> [  300.548210]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
> [  300.548471]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.548714]  ? lock_acquire.part.0+0x152/0x390
> [  300.548948]  blk_mq_alloc_request+0x655/0x930
> [  300.549181]  ? __pfx_blk_mq_alloc_request+0x10/0x10
> [  300.549443]  scsi_execute_cmd+0x223/0x1040
> [  300.549660]  ? debug_smp_processor_id+0x20/0x30
> [  300.549895]  ? __pfx_scsi_execute_cmd+0x10/0x10
> [  300.550126]  ? rcu_is_watching+0x19/0xc0
> [  300.550329]  ? __kmalloc_noprof+0x34f/0x4a0
> [  300.550543]  ? ata_cmd_ioctl+0x375/0x570
> [  300.550749]  ata_cmd_ioctl+0x23c/0x570
> [  300.550950]  ? __pfx_ata_cmd_ioctl+0x10/0x10
> [  300.551170]  ? lock_release+0x441/0x870
> [  300.551391]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.551641]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.551911]  ? security_capable+0x19d/0x1e0
> [  300.552131]  ata_sas_scsi_ioctl+0x60a/0x7c0
> [  300.552344]  ? mark_lock.part.0+0xf3/0x17b0
> [  300.552558]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
> [  300.552796]  ? __pfx_mark_lock.part.0+0x10/0x10
> [  300.553033]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.553274]  ? lock_is_held_type+0xef/0x150
> [  300.553491]  ? __pfx_ata_scsi_ioctl+0x10/0x10
> [  300.553716]  ata_scsi_ioctl+0x70/0x90
> [  300.553909]  scsi_ioctl+0xb6f/0x1630
> [  300.554099]  ? __might_sleep+0xa3/0x170
> [  300.554301]  ? __pfx_scsi_ioctl+0x10/0x10
> [  300.554510]  ? scsi_block_when_processing_errors+0x2c4/0x390
> [  300.554800]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
> [  300.555103]  ? __pfx___lock_acquire+0x10/0x10
> [  300.555350]  ? __sanitizer_cov_trace_switch+0x58/0xa0
> [  300.555608]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
> [  300.555877]  ? do_vfs_ioctl+0x4fa/0x1920
> [  300.556082]  ? __pfx_do_vfs_ioctl+0x10/0x10
> [  300.556297]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
> [  300.556569]  sg_ioctl+0xfb8/0x2880
> [  300.556747]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.556987]  ? lock_release+0x441/0x870
> [  300.557189]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.557391]  ? __this_cpu_preempt_check+0x21/0x30
> [  300.557631]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
> [  300.557945]  ? lockdep_hardirqs_on+0x89/0x110
> [  300.558171]  ? trace_hardirqs_on+0x51/0x60
> [  300.558382]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
> [  300.558695]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
> [  300.558944]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
> [  300.559201]  ? __pfx_sg_ioctl+0x10/0x10
> [  300.559427]  __x64_sys_ioctl+0x1b5/0x230
> [  300.559635]  x64_sys_call+0x1209/0x20d0
> [  300.559833]  do_syscall_64+0x6d/0x140
> [  300.560025]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [  300.560278] RIP: 0033:0x7f8b9783ee5d
> [  300.560463] RSP: 002b:00007ffc0329c228 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
> [  300.560832] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f8b9783ee5d
> [  300.561180] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
> [  300.561527] RBP: 00007ffc0329c240 R08: 00007ffc0329bfb8 R09: 00007ffc0329c240
> [  300.561877] R10: 0000000000000000 R11: 0000000000000206 R12: 00007ffc0329c398
> [  300.562227] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007f8b97aff000
> [  300.562588]  </TASK>
> [  300.562706] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
> [  300.563102]
> [  300.563102] Showing all locks held in the system:
> [  300.563420] 3 locks held by kworker/u8:0/11:
> [  300.563636]  #0: ffff88800df9a148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
> [  300.564146]  #1: ffff88800dcd7d78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
> [  300.564687]  #2: ffff88800d77a0e0 (&type->s_umount_key#52){.+.+}-{3:3}, at: super_trylock_shared+0x2b/0x110
> [  300.565179] 4 locks held by kworker/u8:1/31:
> [  300.565394]  #0: ffff8880125bd148 ((wq_completion)ext4-rsv-conversion){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
> [  300.565924]  #1: ffff88800df4fd78 ((work_completion)(&ei->i_rsv_conversion_work)){+.+.}-{0:0}, at: process_one_work+0x80
> [  300.566500]  #2: ffff888012672958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
> [  300.566948]  #3: ffff888023638c60 (&ei->i_data_sem){++++}-{3:3}, at: ext4_map_blocks+0x344/0x1390
> [  300.567407] 1 lock held by khungtaskd/33:
> [  300.567612]  #0: ffffffff8705c9c0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x73/0x3c0
> [  300.568064] 4 locks held by kworker/u8:2/36:
> [  300.568281]  #0: ffff88800df9a148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
> [  300.568774]  #1: ffff88800e4efd78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
> [  300.569316]  #2: ffff8880126740e0 (&type->s_umount_key#38){++++}-{3:3}, at: super_trylock_shared+0x2b/0x110
> [  300.569806]  #3: ffff888012670b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: do_writepages+0x1c3/0x8a0
> [  300.570281] 4 locks held by kworker/1:2/80:
> [  300.570491]  #0: ffff88800d45ad48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
> [  300.570973]  #1: ffff888011fefd78 ((work_completion)(&(&ap->scsi_rescan_task)->work)){+.+.}-{0:0}, at: process_one_work0
> [  300.571572]  #2: ffff888010a8c430 (&ap->scsi_scan_mutex){+.+.}-{3:3}, at: ata_scsi_dev_rescan+0x42/0x470
> [  300.572044]  #3: ffff888010b46378 (&dev->mutex){....}-{3:3}, at: scsi_rescan_device+0x33/0x360
> [  300.572480] 4 locks held by auditd/239:
> [  300.572674]  #0: ffff8880134ccef8 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x25e/0x380
> [  300.573087]  #1: ffff888012674408 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x14f/0x290
> [  300.573505]  #2: ffff888017c87040 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb8/0x3e0
> [  300.574025]  #3: ffff888012672958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
> [  300.574469] 3 locks held by in:imjournal/321:
> [  300.574690]  #0: ffff888012674408 (sb_writers#3){.+.+}-{0:0}, at: path_openat+0x1f7c/0x2bf0
> [  300.575118]  #1: ffff88801b708dd0 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: path_openat+0xd40/0x2bf0
> [  300.575606]  #2: ffff888012672958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
> [  300.576052]
> [  300.576138] =============================================
> "
> 
> I hope you find it useful.
> 
> Regards,
> Yi Lai
> 
> ---
> 
> If you do not need the following environment to reproduce the problem, or if
> you already have a reproduction environment set up, please ignore the information below.
> 
> How to reproduce:
> git clone https://gitlab.com/xupengfe/repro_vm_env.git
> cd repro_vm_env
> tar -xvf repro_vm_env.tar.gz
> cd repro_vm_env; ./start3.sh  // requires qemu-system-x86_64; I used v7.1.0
>   // start3.sh will load the bzImage_2241ab53cbb5cdb08a6b2d4688feb13971058f65 v6.2-rc5 kernel
>   // You can replace bzImage_xxx with whichever image you want
>   // You may need to remove the line "-drive if=pflash,format=raw,readonly=on,file=./OVMF_CODE.fd \" for other qemu versions
> You can use the command below to log in; there is no password for root.
> ssh -p 10023 root@localhost
> 
> After logging in to the VM (virtual machine) successfully, you can transfer the
> reproducer binary to the VM as shown below and reproduce the problem there:
> gcc -pthread -o repro repro.c
> scp -P 10023 repro root@localhost:/root/
> 
> Get the bzImage for the target kernel:
> Use the target kconfig and copy it to kernel_src/.config
> make olddefconfig
> make -jx bzImage           // x should be equal to or less than the number of CPUs your machine has
> 
> Point start3.sh above at the new bzImage file to load the target kernel in the VM.
> 
> 
> Tips:
> If you already have qemu-system-x86_64, please ignore the info below.
> To install qemu v7.1.0:
> git clone https://github.com/qemu/qemu.git
> cd qemu
> git checkout -f v7.1.0
> mkdir build
> cd build
> yum install -y ninja-build.x86_64
> yum -y install libslirp-devel.x86_64
> ../configure --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-sdl --enable-usb-redir --enable-slirp
> make
> make install 
> 
> On Mon, Sep 09, 2024 at 05:42:38PM +0200, Niklas Cassel wrote:
> > When ata_qc_complete() schedules a command for EH using
> > ata_qc_schedule_eh(), blk_abort_request() will be called, which leads to
> > req->q->mq_ops->timeout() / scsi_timeout() being called.
> > 
> > scsi_timeout(), if the LLDD has no abort handler (libata has no abort
> > handler), will set host byte to DID_TIME_OUT, and then call
> > scsi_eh_scmd_add() to add the command to EH.
> > 
> > Thus, when commands first enter libata's EH strategy_handler, all the
> > commands that have been added to EH will have DID_TIME_OUT set.
> > 
> > libata has its own flag (AC_ERR_TIMEOUT), that it sets for commands that
> > have not received a completion at the time of entering EH.
> > 
> > Thus, libata doesn't really care about DID_TIME_OUT at all, and currently
> > clears the host byte at the end of EH, in ata_scsi_qc_complete(), before
> > scsi_eh_finish_cmd() is called.
> > 
> > However, this clearing in ata_scsi_qc_complete() is currently only done
> > for commands that are not ATA passthrough commands.
> > 
> > Since the host byte is visible in the completion that we return to user
> > space for ATA passthrough commands, for ATA passthrough commands that got
> > completed via EH (commands with sense data), the user will incorrectly see:
> > ATA pass-through(16): transport error: Host_status=0x03 [DID_TIME_OUT]
> > 
> > Fix this by moving the clearing of the host byte (which is currently only
> > done for commands that are not ATA passthrough commands) from
> > ata_scsi_qc_complete() to the start of EH (regardless if the command is
> > ATA passthrough or not).
> > 
> > While at it, use the proper helper function to clear the host byte, rather
> > than open coding the clearing.
> > 
> > This will make sure that we:
> > -Correctly clear DID_TIME_OUT for both ATA passthrough commands and
> >  commands that are not ATA passthrough commands.
> > -Do not needlessly clear the host byte for commands that did not go via EH.
> >  ata_scsi_qc_complete() is called both for commands that are completed
> >  normally (without going via EH), and for commands that went via EH,
> >  however, only commands that went via EH will have DID_TIME_OUT set.
> > 
> > Fixes: 24aeebbf8ea9 ("scsi: ata: libata: Change ata_eh_request_sense() to not set CHECK_CONDITION")
> > Reported-by: Igor Pylypiv <ipylypiv@google.com>
> > Closes: https://lore.kernel.org/linux-ide/ZttIN8He8TOZ7Lct@google.com/
> > Signed-off-by: Niklas Cassel <cassel@kernel.org>
> > ---
> > Changes since v2: use set_host_byte() as suggested by Damien.
> > 
> >  drivers/ata/libata-eh.c   | 8 ++++++++
> >  drivers/ata/libata-scsi.c | 3 ---
> >  2 files changed, 8 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
> > index 7de97ee8e78b..703f5ce26765 100644
> > --- a/drivers/ata/libata-eh.c
> > +++ b/drivers/ata/libata-eh.c
> > @@ -630,6 +630,14 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
> >  	list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) {
> >  		struct ata_queued_cmd *qc;
> >  
> > +		/*
> > +		 * If the scmd was added to EH, via ata_qc_schedule_eh() ->
> > +		 * scsi_timeout() -> scsi_eh_scmd_add(), scsi_timeout() will
> > +		 * have set DID_TIME_OUT (since libata does not have an abort
> > +		 * handler). Thus, to clear DID_TIME_OUT, clear the host byte.
> > +		 */
> > +		set_host_byte(scmd, DID_OK);
> > +
> >  		ata_qc_for_each_raw(ap, qc, i) {
> >  			if (qc->flags & ATA_QCFLAG_ACTIVE &&
> >  			    qc->scsicmd == scmd)
> > diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
> > index 3a442f564b0d..6a90062c8b55 100644
> > --- a/drivers/ata/libata-scsi.c
> > +++ b/drivers/ata/libata-scsi.c
> > @@ -1680,9 +1680,6 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
> >  			set_status_byte(qc->scsicmd, SAM_STAT_CHECK_CONDITION);
> >  	} else if (is_error && !have_sense) {
> >  		ata_gen_ata_sense(qc);
> > -	} else {
> > -		/* Keep the SCSI ML and status byte, clear host byte. */
> > -		cmd->result &= 0x0000ffff;
> >  	}
> >  
> >  	ata_qc_done(qc);
> > -- 
> > 2.46.0
> > 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-21 12:07   ` Niklas Cassel
@ 2024-10-21 12:34     ` Damien Le Moal
  2024-10-21 15:20     ` Niklas Cassel
  1 sibling, 0 replies; 10+ messages in thread
From: Damien Le Moal @ 2024-10-21 12:34 UTC (permalink / raw)
  To: Niklas Cassel, Lai, Yi
  Cc: Hannes Reinecke, Martin K. Petersen, Igor Pylypiv, Niklas Cassel,
	linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On 10/21/24 21:07, Niklas Cassel wrote:
> Hello Yi Lai,
> 
> On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
>> Hi Niklas Cassel,
>>
>> Greetings!
>>
>> I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
>>
>> After bisection and the first bad commit is:
>> "
>> e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
>> "
> 
> It might be that your bisection results are accurate.
> 
> However, after looking at the stacktraces, I find it way more likely that
> bisection has landed on the wrong commit.
> 
> See this series that was just queued (for 6.13) a few days ago that solves a
> similar starvation:
> https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> 
> You could perhaps run with v6.14-rc4 (which should be able to trigger the bug)
> and then try v6.14-rc4 + that series applied, to see if you can still trigger
> the bug?

s/6.14/6.13 in this paragraph :)

-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-21 12:07   ` Niklas Cassel
  2024-10-21 12:34     ` Damien Le Moal
@ 2024-10-21 15:20     ` Niklas Cassel
  2024-10-22  5:44       ` Lai, Yi
  1 sibling, 1 reply; 10+ messages in thread
From: Niklas Cassel @ 2024-10-21 15:20 UTC (permalink / raw)
  To: Lai, Yi
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> Hello Yi Lai,
> 
> On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > Hi Niklas Cassel,
> > 
> > Greetings!
> > 
> > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > 
> > After bisection and the first bad commit is:
> > "
> > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > "
> 
> It might be that your bisection results are accurate.
> 
> However, after looking at the stacktraces, I find it way more likely that
> bisection has landed on the wrong commit.
> 
> See this series that was just queued (for 6.13) a few days ago that solves a
> similar starvation:
> https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> 
> You could perhaps run with v6.14-rc4 (which should be able to trigger the bug)
> and then try v6.14-rc4 + that series applied, to see if you can still trigger
> the bug?

Another patch that might be relevant:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e972b08b91ef48488bae9789f03cfedb148667fb

> That commit fixes a use-after-delete in rq_qos_wake_function().
> (We can see rq_qos_wake_function() in the stack trace before the task gets
> stuck forever in rq_qos_wait().)

> Who knows what could go wrong when accessing a deleted entry; in the
> report there was a crash, but I could imagine other surprises :)
The fix was first included in v6.12-rc4.


Kind regards,
Niklas

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-21 15:20     ` Niklas Cassel
@ 2024-10-22  5:44       ` Lai, Yi
  2024-10-22  8:48         ` Niklas Cassel
  0 siblings, 1 reply; 10+ messages in thread
From: Lai, Yi @ 2024-10-22  5:44 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Mon, Oct 21, 2024 at 05:20:12PM +0200, Niklas Cassel wrote:
> On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> > Hello Yi Lai,
> > 
> > On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > > Hi Niklas Cassel,
> > > 
> > > Greetings!
> > > 
> > > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > > 
> > > After bisection and the first bad commit is:
> > > "
> > > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > > "
> > 
> > It might be that your bisection results are accurate.
> > 
> > However, after looking at the stacktraces, I find it way more likely that
> > bisection has landed on the wrong commit.
> > 
> > See this series that was just queued (for 6.13) a few days ago that solves a
> > similar starvation:
> > https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> > 
> > You could perhaps run with v6.14-rc4 (which should be able to trigger the bug)
> > and then try v6.14-rc4 + that series applied, to see if you can still trigger
> > the bug?
> 
> Another patch that might be relevant:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e972b08b91ef48488bae9789f03cfedb148667fb
> 
> > That commit fixes a use-after-delete in rq_qos_wake_function().
> > (We can see rq_qos_wake_function() in the stack trace before the task gets
> > stuck forever in rq_qos_wait().)
> 
> > Who knows what could go wrong when accessing a deleted entry; in the
> > report there was a crash, but I could imagine other surprises :)
> The fix was first included in v6.12-rc4.
> 
>
Hi Niklas,

Thanks for the info. I have tried to reproduce the issue with the v6.12-rc4
kernel. Using the same repro binary, the issue still exists.

[  300.430914] INFO: task kworker/u8:0:11 blocked for more than 147 seconds.
[  300.431593]       Not tainted 6.12.0-rc4-kvm #1
[  300.431988] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.432640] task:kworker/u8:0    state:D stack:0     pid:11    tgid:11    ppid:2      flags:0x00004000
[  300.433431] Workqueue: writeback wb_workfn (flush-8:0)
[  300.433908] Call Trace:
[  300.434132]  <TASK>
[  300.434335]  __schedule+0xe13/0x33a0
[  300.434687]  ? __pfx___schedule+0x10/0x10
[  300.435055]  ? lock_release+0x441/0x870
[  300.435393]  ? __pfx_lock_release+0x10/0x10
[  300.435730]  ? trace_lock_acquire+0x139/0x1b0
[  300.436100]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.436533]  ? kthread_data+0x61/0xd0
[  300.436848]  schedule+0xf6/0x3f0
[  300.437123]  io_schedule+0xce/0x150
[  300.437413]  bit_wait_io+0x1f/0x100
[  300.437707]  __wait_on_bit+0x68/0x1b0
[  300.438011]  ? __pfx_bit_wait_io+0x10/0x10
[  300.438351]  out_of_line_wait_on_bit+0xe6/0x120
[  300.438750]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
[  300.439162]  ? __pfx_wake_bit_function+0x10/0x10
[  300.439540]  ? __might_sleep+0xa3/0x170
[  300.439864]  do_get_write_access+0x83b/0x1010
[  300.440241]  jbd2_journal_get_write_access+0x1e4/0x270
[  300.440655]  __ext4_journal_get_write_access+0x72/0x3b0
[  300.441238]  ext4_mb_mark_context+0x184/0xe30
[  300.441709]  ? __pfx_lock_release+0x10/0x10
[  300.442156]  ? __pfx_ext4_mb_mark_context+0x10/0x10
[  300.442650]  ? ext4_sb_block_valid+0x3a6/0x670
[  300.443134]  ext4_mb_mark_diskspace_used+0x44d/0x9e0
[  300.443595]  ? ext4_get_group_no_and_offset+0x186/0x1f0
[  300.444074]  ? __pfx_ext4_mb_mark_diskspace_used+0x10/0x10
[  300.444582]  ext4_mb_new_blocks+0x779/0x4790
[  300.444966]  ? lock_is_held_type+0xef/0x150
[  300.445349]  ? kasan_save_track+0x18/0x40
[  300.445727]  ? __pfx_ext4_mb_new_blocks+0x10/0x10
[  300.446139]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.446598]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.447066]  ? ext4_inode_to_goal_block+0x340/0x450
[  300.447486]  ext4_ext_map_blocks+0x16f0/0x6060
[  300.447877]  ? __pfx___lock_acquire+0x10/0x10
[  300.448251]  ? stack_trace_save+0x97/0xd0
[  300.448599]  ? __pfx_ext4_ext_map_blocks+0x10/0x10
[  300.449001]  ? debug_smp_processor_id+0x20/0x30
[  300.449372]  ? rcu_is_watching+0x19/0xc0
[  300.449701]  ? trace_lock_acquire+0x139/0x1b0
[  300.450065]  ? lock_acquire+0x80/0xb0
[  300.450359]  ? ext4_map_blocks+0x344/0x1390
[  300.450696]  ? __pfx_down_write+0x10/0x10
[  300.451048]  ext4_map_blocks+0x421/0x1390
[  300.451373]  ? __pfx_ext4_map_blocks+0x10/0x10
[  300.451718]  ? kasan_save_track+0x18/0x40
[  300.452030]  ? debug_smp_processor_id+0x20/0x30
[  300.452374]  ? rcu_is_watching+0x19/0xc0
[  300.452674]  ? kmem_cache_alloc_noprof+0x2e7/0x370
[  300.453040]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.453430]  ext4_do_writepages+0x18e0/0x2f30
[  300.453775]  ? __pfx_ext4_do_writepages+0x10/0x10
[  300.454115]  ? rcu_is_watching+0x19/0xc0
[  300.454411]  ? __this_cpu_preempt_check+0x21/0x30
[  300.454760]  ? lock_is_held_type+0xef/0x150
[  300.455063]  ext4_writepages+0x35c/0x7c0
[  300.455345]  ? __pfx_ext4_writepages+0x10/0x10
[  300.455658]  ? lock_release+0x441/0x870
[  300.455948]  ? do_raw_spin_unlock+0x15c/0x210
[  300.456250]  ? __pfx_ext4_writepages+0x10/0x10
[  300.456557]  do_writepages+0x1c3/0x8a0
[  300.456833]  ? __pfx_do_writepages+0x10/0x10
[  300.457127]  ? __pfx_lock_release+0x10/0x10
[  300.457413]  ? __pfx_lock_release+0x10/0x10
[  300.457711]  __writeback_single_inode+0x124/0xe10
[  300.458033]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
[  300.458385]  writeback_sb_inodes+0x63b/0x1000
[  300.458675]  ? lock_release+0x441/0x870
[  300.458956]  ? __pfx_writeback_sb_inodes+0x10/0x10
[  300.459270]  ? lock_acquire.part.0+0x152/0x390
[  300.459605]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
[  300.459959]  __writeback_inodes_wb+0xc8/0x280
[  300.460248]  wb_writeback+0x717/0xaf0
[  300.460496]  ? __pfx_wb_writeback+0x10/0x10
[  300.460772]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.461086]  ? get_nr_dirty_inodes+0x12c/0x1b0
[  300.461374]  ? lockdep_hardirqs_on+0x89/0x110
[  300.461660]  wb_workfn+0x7b5/0xc90
[  300.461888]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.462198]  ? __pfx_wb_workfn+0x10/0x10
[  300.462446]  ? rcu_is_watching+0x19/0xc0
[  300.462707]  process_one_work+0x92e/0x1b50
[  300.462994]  ? lock_acquire.part.0+0x152/0x390
[  300.463285]  ? __pfx_process_one_work+0x10/0x10
[  300.463566]  ? move_linked_works+0x1bf/0x2c0
[  300.463839]  ? __this_cpu_preempt_check+0x21/0x30
[  300.464138]  ? assign_work+0x19f/0x250
[  300.464375]  ? lock_is_held_type+0xef/0x150
[  300.464641]  worker_thread+0x68d/0xe90
[  300.464893]  kthread+0x35a/0x470
[  300.465095]  ? __pfx_worker_thread+0x10/0x10
[  300.465355]  ? __pfx_kthread+0x10/0x10
[  300.465587]  ret_from_fork+0x56/0x90
[  300.465812]  ? __pfx_kthread+0x10/0x10
[  300.466043]  ret_from_fork_asm+0x1a/0x30
[  300.466297]  </TASK>
[  300.466451] INFO: task kworker/u8:1:31 blocked for more than 147 seconds.
[  300.466858]       Not tainted 6.12.0-rc4-kvm #1
[  300.467127] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.467573] task:kworker/u8:1    state:D stack:0     pid:31    tgid:31    ppid:2      flags:0x00004000
[  300.468115] Workqueue: writeback wb_workfn (flush-8:0)
[  300.468426] Call Trace:
[  300.468579]  <TASK>
[  300.468717]  __schedule+0xe13/0x33a0
[  300.468951]  ? __pfx___schedule+0x10/0x10
[  300.469194]  ? lock_release+0x441/0x870
[  300.469433]  ? __pfx_lock_release+0x10/0x10
[  300.469686]  ? trace_lock_acquire+0x139/0x1b0
[  300.469957]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.470270]  ? kthread_data+0x61/0xd0
[  300.470492]  schedule+0xf6/0x3f0
[  300.470692]  io_schedule+0xce/0x150
[  300.470917]  blk_mq_get_tag+0x5e7/0xce0
[  300.471160]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.471417]  ? __pfx_wbt_cleanup_cb+0x10/0x10
[  300.471677]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.471971]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.472275]  ? __pfx_rq_qos_wake_function+0x10/0x10
[  300.472558]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.472863]  ? dd_limit_depth+0x21e/0x2a0
[  300.473098]  ? __pfx_dd_limit_depth+0x10/0x10
[  300.473349]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.473632]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.473926]  ? __pfx_wbt_wait+0x10/0x10
[  300.474152]  blk_mq_submit_bio+0x93c/0x2a20
[  300.474391]  ? __kasan_check_read+0x15/0x20
[  300.474645]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.474921]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.475182]  ? __this_cpu_preempt_check+0x21/0x30
[  300.475457]  ? __this_cpu_preempt_check+0x21/0x30
[  300.475727]  ? lock_release+0x441/0x870
[  300.475953]  __submit_bio+0x18d/0x500
[  300.476169]  ? __pfx___submit_bio+0x10/0x10
[  300.476407]  ? __this_cpu_preempt_check+0x21/0x30
[  300.476668]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.477018]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.477313]  ? kvm_clock_get_cycles+0x43/0x70
[  300.477566]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.477834]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.478127]  ? lock_is_held_type+0xef/0x150
[  300.478369]  submit_bio_noacct+0x61d/0x1dc0
[  300.478610]  submit_bio+0xce/0x480
[  300.478819]  ext4_io_submit+0xb0/0x150
[  300.479036]  ext4_do_writepages+0xa54/0x2f30
[  300.479284]  ? __this_cpu_preempt_check+0x21/0x30
[  300.479556]  ? __pfx_ext4_do_writepages+0x10/0x10
[  300.479817]  ? rcu_is_watching+0x19/0xc0
[  300.480042]  ? __this_cpu_preempt_check+0x21/0x30
[  300.480304]  ? lock_is_held_type+0xef/0x150
[  300.480544]  ext4_writepages+0x35c/0x7c0
[  300.480766]  ? __pfx_ext4_writepages+0x10/0x10
[  300.481019]  ? lock_release+0x441/0x870
[  300.481247]  ? do_raw_spin_unlock+0x15c/0x210
[  300.481491]  ? __pfx_ext4_writepages+0x10/0x10
[  300.481734]  do_writepages+0x1c3/0x8a0
[  300.481947]  ? __pfx_do_writepages+0x10/0x10
[  300.482181]  ? __pfx_lock_release+0x10/0x10
[  300.482409]  ? __pfx_lock_release+0x10/0x10
[  300.482646]  __writeback_single_inode+0x124/0xe10
[  300.482912]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
[  300.483192]  writeback_sb_inodes+0x63b/0x1000
[  300.483431]  ? lock_release+0x441/0x870
[  300.483652]  ? __pfx_writeback_sb_inodes+0x10/0x10
[  300.483911]  ? lock_acquire.part.0+0x152/0x390
[  300.484187]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
[  300.484478]  __writeback_inodes_wb+0xc8/0x280
[  300.484724]  wb_writeback+0x717/0xaf0
[  300.484938]  ? __pfx_wb_writeback+0x10/0x10
[  300.485172]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.485433]  ? get_nr_dirty_inodes+0x12c/0x1b0
[  300.485675]  ? lockdep_hardirqs_on+0x89/0x110
[  300.485917]  wb_workfn+0x7b5/0xc90
[  300.486111]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.486375]  ? __pfx_wb_workfn+0x10/0x10
[  300.486591]  ? rcu_is_watching+0x19/0xc0
[  300.486831]  process_one_work+0x92e/0x1b50
[  300.487055]  ? lock_acquire.part.0+0x152/0x390
[  300.487308]  ? __pfx_process_one_work+0x10/0x10
[  300.487555]  ? move_linked_works+0x1bf/0x2c0
[  300.487791]  ? __this_cpu_preempt_check+0x21/0x30
[  300.488053]  ? assign_work+0x19f/0x250
[  300.488260]  ? lock_is_held_type+0xef/0x150
[  300.488493]  worker_thread+0x68d/0xe90
[  300.488701]  ? lockdep_hardirqs_on+0x89/0x110
[  300.488960]  kthread+0x35a/0x470
[  300.489141]  ? __pfx_worker_thread+0x10/0x10
[  300.489377]  ? __pfx_kthread+0x10/0x10
[  300.489586]  ret_from_fork+0x56/0x90
[  300.489785]  ? __pfx_kthread+0x10/0x10
[  300.489993]  ret_from_fork_asm+0x1a/0x30
[  300.490221]  </TASK>
[  300.490347] INFO: task kworker/u8:2:35 blocked for more than 147 seconds.
[  300.490698]       Not tainted 6.12.0-rc4-kvm #1
[  300.490952] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.491353] task:kworker/u8:2    state:D stack:0     pid:35    tgid:35    ppid:2      flags:0x00004000
[  300.491835] Workqueue: writeback wb_workfn (flush-8:0)
[  300.492115] Call Trace:
[  300.492252]  <TASK>
[  300.492377]  __schedule+0xe13/0x33a0
[  300.492585]  ? __pfx___schedule+0x10/0x10
[  300.492811]  ? lock_release+0x441/0x870
[  300.493031]  ? __pfx_lock_release+0x10/0x10
[  300.493259]  ? trace_lock_acquire+0x139/0x1b0
[  300.493502]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.493784]  ? kthread_data+0x61/0xd0
[  300.493985]  schedule+0xf6/0x3f0
[  300.494165]  io_schedule+0xce/0x150
[  300.494358]  blk_mq_get_tag+0x5e7/0xce0
[  300.494570]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.494815]  ? __pfx___mod_timer+0x10/0x10
[  300.495036]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.495311]  ? __pfx_rq_qos_wake_function+0x10/0x10
[  300.495571]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.495852]  ? dd_limit_depth+0x21e/0x2a0
[  300.496069]  ? __pfx_dd_limit_depth+0x10/0x10
[  300.496303]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.496552]  ? mod_timer+0x29/0x40
[  300.496749]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.497028]  ? __pfx_wbt_wait+0x10/0x10
[  300.497239]  blk_mq_submit_bio+0x93c/0x2a20
[  300.497461]  ? __kasan_check_read+0x15/0x20
[  300.497691]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.497935]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.498177]  ? __this_cpu_preempt_check+0x21/0x30
[  300.498433]  ? __this_cpu_preempt_check+0x21/0x30
[  300.498683]  ? lock_release+0x441/0x870
[  300.498905]  __submit_bio+0x18d/0x500
[  300.499105]  ? __pfx___submit_bio+0x10/0x10
[  300.499329]  ? __this_cpu_preempt_check+0x21/0x30
[  300.499578]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.499908]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.500192]  ? kvm_clock_get_cycles+0x43/0x70
[  300.500430]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.500687]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.500985]  submit_bio_noacct+0x61d/0x1dc0
[  300.501216]  submit_bio+0xce/0x480
[  300.501405]  ext4_io_submit+0xb0/0x150
[  300.501611]  ext4_do_writepages+0x1103/0x2f30
[  300.501862]  ? __pfx_ext4_do_writepages+0x10/0x10
[  300.502111]  ? rcu_is_watching+0x19/0xc0
[  300.502326]  ? __this_cpu_preempt_check+0x21/0x30
[  300.502577]  ? lock_is_held_type+0xef/0x150
[  300.502820]  ext4_writepages+0x35c/0x7c0
[  300.503034]  ? __pfx_ext4_writepages+0x10/0x10
[  300.503271]  ? lock_release+0x441/0x870
[  300.503489]  ? do_raw_spin_unlock+0x15c/0x210
[  300.503725]  ? __pfx_ext4_writepages+0x10/0x10
[  300.503964]  do_writepages+0x1c3/0x8a0
[  300.504171]  ? __pfx_do_writepages+0x10/0x10
[  300.504400]  ? __pfx_lock_release+0x10/0x10
[  300.504623]  ? __pfx_lock_release+0x10/0x10
[  300.504865]  __writeback_single_inode+0x124/0xe10
[  300.505117]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
[  300.505399]  writeback_sb_inodes+0x63b/0x1000
[  300.505641]  ? lock_release+0x441/0x870
[  300.505863]  ? __pfx_writeback_sb_inodes+0x10/0x10
[  300.506123]  ? lock_acquire.part.0+0x152/0x390
[  300.506400]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
[  300.506692]  __writeback_inodes_wb+0xc8/0x280
[  300.506953]  wb_writeback+0x717/0xaf0
[  300.507164]  ? __pfx_wb_writeback+0x10/0x10
[  300.507394]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.507656]  ? get_nr_dirty_inodes+0x12c/0x1b0
[  300.507898]  ? lockdep_hardirqs_on+0x89/0x110
[  300.508140]  wb_workfn+0x7b5/0xc90
[  300.508337]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.508601]  ? __pfx_wb_workfn+0x10/0x10
[  300.508822]  ? rcu_is_watching+0x19/0xc0
[  300.509047]  process_one_work+0x92e/0x1b50
[  300.509265]  ? lock_acquire.part.0+0x152/0x390
[  300.509512]  ? __pfx_process_one_work+0x10/0x10
[  300.509752]  ? move_linked_works+0x1bf/0x2c0
[  300.509982]  ? __this_cpu_preempt_check+0x21/0x30
[  300.510238]  ? assign_work+0x19f/0x250
[  300.510440]  ? lock_is_held_type+0xef/0x150
[  300.510667]  worker_thread+0x68d/0xe90
[  300.510895]  kthread+0x35a/0x470
[  300.511073]  ? __pfx_worker_thread+0x10/0x10
[  300.511302]  ? __pfx_kthread+0x10/0x10
[  300.511507]  ret_from_fork+0x56/0x90
[  300.511702]  ? __pfx_kthread+0x10/0x10
[  300.511906]  ret_from_fork_asm+0x1a/0x30
[  300.512128]  </TASK>
[  300.512270] INFO: task jbd2/sda-8:104 blocked for more than 147 seconds.
[  300.512609]       Not tainted 6.12.0-rc4-kvm #1
[  300.512847] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.513237] task:jbd2/sda-8      state:D stack:0     pid:104   tgid:104   ppid:2      flags:0x00004000
[  300.513707] Call Trace:
[  300.513840]  <TASK>
[  300.513961]  __schedule+0xe13/0x33a0
[  300.514162]  ? __pfx___schedule+0x10/0x10
[  300.514376]  ? lock_release+0x441/0x870
[  300.514586]  ? __pfx_lock_release+0x10/0x10
[  300.514819]  ? trace_lock_acquire+0x139/0x1b0
[  300.515057]  ? lock_acquire+0x80/0xb0
[  300.515254]  ? schedule+0x216/0x3f0
[  300.515448]  schedule+0xf6/0x3f0
[  300.515629]  io_schedule+0xce/0x150
[  300.515820]  blk_mq_get_tag+0x5e7/0xce0
[  300.516033]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.516266]  ? lock_release+0x441/0x870
[  300.516471]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.516737]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.517014]  ? blk_mq_sched_try_merge+0xce/0x650
[  300.517261]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.517542]  ? dd_limit_depth+0x21e/0x2a0
[  300.517758]  ? __pfx_dd_limit_depth+0x10/0x10
[  300.517992]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.518246]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.518534]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.518818]  ? __pfx_wbt_wait+0x10/0x10
[  300.519029]  blk_mq_submit_bio+0x93c/0x2a20
[  300.519251]  ? __pfx___lock_acquire+0x10/0x10
[  300.519490]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.519736]  ? __this_cpu_preempt_check+0x21/0x30
[  300.519991]  ? __this_cpu_preempt_check+0x21/0x30
[  300.520242]  ? lock_release+0x441/0x870
[  300.520451]  __submit_bio+0x18d/0x500
[  300.520650]  ? __pfx___submit_bio+0x10/0x10
[  300.520879]  ? __this_cpu_preempt_check+0x21/0x30
[  300.521130]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.521458]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.521738]  ? kvm_clock_get_cycles+0x43/0x70
[  300.521977]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.522226]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.522504]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.522782]  submit_bio_noacct+0x61d/0x1dc0
[  300.523006]  submit_bio+0xce/0x480
[  300.523190]  submit_bh_wbc+0x58a/0x740
[  300.523397]  submit_bh+0x2b/0x40
[  300.523572]  jbd2_journal_commit_transaction+0x2a09/0x65e0
[  300.523873]  ? __pfx_jbd2_journal_commit_transaction+0x10/0x10
[  300.524176]  ? __this_cpu_preempt_check+0x21/0x30
[  300.524420]  ? lock_release+0x441/0x870
[  300.524630]  ? __this_cpu_preempt_check+0x21/0x30
[  300.524879]  ? _raw_spin_unlock_irqrestore+0x45/0x70
[  300.525135]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.525417]  ? kjournald2+0x5e7/0x790
[  300.525616]  kjournald2+0x204/0x790
[  300.525809]  ? __pfx_kjournald2+0x10/0x10
[  300.526017]  ? lockdep_hardirqs_on+0x89/0x110
[  300.526245]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.526513]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.526797]  ? __kthread_parkme+0x15d/0x230
[  300.527019]  kthread+0x35a/0x470
[  300.527192]  ? __pfx_kjournald2+0x10/0x10
[  300.527399]  ? __pfx_kthread+0x10/0x10
[  300.527597]  ret_from_fork+0x56/0x90
[  300.527787]  ? __pfx_kthread+0x10/0x10
[  300.527984]  ret_from_fork_asm+0x1a/0x30
[  300.528200]  </TASK>
[  300.528319] INFO: task kworker/u8:4:167 blocked for more than 147 seconds.
[  300.528657]       Not tainted 6.12.0-rc4-kvm #1
[  300.528891] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.529273] task:kworker/u8:4    state:D stack:0     pid:167   tgid:167   ppid:2      flags:0x00004000
[  300.529732] Workqueue: writeback wb_workfn (flush-8:0)
[  300.529998] Call Trace:
[  300.530128]  <TASK>
[  300.530246]  __schedule+0xe13/0x33a0
[  300.530443]  ? __pfx___schedule+0x10/0x10
[  300.530651]  ? lock_release+0x441/0x870
[  300.530867]  ? __pfx_lock_release+0x10/0x10
[  300.531084]  ? trace_lock_acquire+0x139/0x1b0
[  300.531316]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.531589]  ? kthread_data+0x61/0xd0
[  300.531785]  schedule+0xf6/0x3f0
[  300.531961]  io_schedule+0xce/0x150
[  300.532148]  rq_qos_wait+0x1c5/0x310
[  300.532336]  ? __pfx_wbt_cleanup_cb+0x10/0x10
[  300.532563]  ? __pfx_rq_qos_wait+0x10/0x10
[  300.532780]  ? __pfx_lock_release+0x10/0x10
[  300.533000]  ? __pfx_rq_qos_wake_function+0x10/0x10
[  300.533254]  ? __pfx_wbt_inflight_cb+0x10/0x10
[  300.533487]  ? do_raw_spin_unlock+0x15c/0x210
[  300.533719]  wbt_wait+0x1ec/0x400
[  300.533898]  ? __pfx_wbt_wait+0x10/0x10
[  300.534100]  ? __pfx_dd_bio_merge+0x10/0x10
[  300.534321]  ? __pfx_wbt_wait+0x10/0x10
[  300.534523]  __rq_qos_throttle+0x5f/0xb0
[  300.534748]  blk_mq_submit_bio+0x8ad/0x2a20
[  300.534966]  ? __kasan_check_read+0x15/0x20
[  300.535188]  ? __pfx_blk_mq_submit_bio+0x10/0x10
[  300.535426]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.535663]  ? __this_cpu_preempt_check+0x21/0x30
[  300.535912]  ? __this_cpu_preempt_check+0x21/0x30
[  300.536156]  ? lock_release+0x441/0x870
[  300.536362]  __submit_bio+0x18d/0x500
[  300.536556]  ? __pfx___submit_bio+0x10/0x10
[  300.536777]  ? __this_cpu_preempt_check+0x21/0x30
[  300.537021]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.537342]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.537616]  ? kvm_clock_get_cycles+0x43/0x70
[  300.537849]  submit_bio_noacct_nocheck+0x6a0/0xcc0
[  300.538098]  ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[  300.538375]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.538637]  submit_bio_noacct+0x61d/0x1dc0
[  300.538874]  submit_bio+0xce/0x480
[  300.539059]  submit_bh_wbc+0x58a/0x740
[  300.539263]  __block_write_full_folio+0x7a1/0xed0
[  300.539506]  ? __pfx_blkdev_get_block+0x10/0x10
[  300.539749]  block_write_full_folio+0x2e6/0x3a0
[  300.539985]  ? __pfx_blkdev_get_block+0x10/0x10
[  300.540224]  ? __pfx_blkdev_get_block+0x10/0x10
[  300.540460]  write_cache_pages+0xbc/0x140
[  300.540670]  ? __pfx_block_write_full_folio+0x10/0x10
[  300.540933]  ? __pfx_write_cache_pages+0x10/0x10
[  300.541178]  ? __this_cpu_preempt_check+0x21/0x30
[  300.541427]  blkdev_writepages+0xa5/0xf0
[  300.541634]  ? __pfx_blkdev_writepages+0x10/0x10
[  300.541874]  ? __pfx_do_raw_spin_lock+0x10/0x10
[  300.542117]  ? do_raw_spin_unlock+0x15c/0x210
[  300.542349]  ? __pfx_blkdev_writepages+0x10/0x10
[  300.542590]  do_writepages+0x1c3/0x8a0
[  300.542809]  ? __pfx_do_writepages+0x10/0x10
[  300.543017]  ? __pfx_lock_release+0x10/0x10
[  300.543216]  ? lock_release+0x441/0x870
[  300.543410]  __writeback_single_inode+0x124/0xe10
[  300.543649]  ? wbc_attach_and_unlock_inode+0x5a9/0x950
[  300.543918]  writeback_sb_inodes+0x63b/0x1000
[  300.544159]  ? __pfx_writeback_sb_inodes+0x10/0x10
[  300.544405]  ? lock_acquire.part.0+0x152/0x390
[  300.544669]  ? __sanitizer_cov_trace_const_cmp8+0x1c/0x30
[  300.544949]  __writeback_inodes_wb+0xc8/0x280
[  300.545184]  wb_writeback+0x717/0xaf0
[  300.545385]  ? __pfx_wb_writeback+0x10/0x10
[  300.545604]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.545859]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.546109]  ? get_nr_dirty_inodes+0x12c/0x1b0
[  300.546342]  ? lockdep_hardirqs_on+0x89/0x110
[  300.546574]  wb_workfn+0x76c/0xc90
[  300.546773]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.547026]  ? __pfx_wb_workfn+0x10/0x10
[  300.547234]  ? rcu_is_watching+0x19/0xc0
[  300.547452]  process_one_work+0x92e/0x1b50
[  300.547666]  ? lock_acquire.part.0+0x152/0x390
[  300.547908]  ? __pfx_process_one_work+0x10/0x10
[  300.548144]  ? move_linked_works+0x1bf/0x2c0
[  300.548370]  ? __this_cpu_preempt_check+0x21/0x30
[  300.548620]  ? assign_work+0x19f/0x250
[  300.548821]  ? lock_is_held_type+0xef/0x150
[  300.549043]  worker_thread+0x68d/0xe90
[  300.549242]  ? lockdep_hardirqs_on+0x89/0x110
[  300.549481]  kthread+0x35a/0x470
[  300.549655]  ? __pfx_worker_thread+0x10/0x10
[  300.549880]  ? __pfx_kthread+0x10/0x10
[  300.550080]  ret_from_fork+0x56/0x90
[  300.550270]  ? __pfx_kthread+0x10/0x10
[  300.550470]  ret_from_fork_asm+0x1a/0x30
[  300.550687]  </TASK>
[  300.550826] INFO: task kworker/1:3:233 blocked for more than 147 seconds.
[  300.551165]       Not tainted 6.12.0-rc4-kvm #1
[  300.551396] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.551780] task:kworker/1:3     state:D stack:0     pid:233   tgid:233   ppid:2      flags:0x00004000
[  300.552242] Workqueue: events ata_scsi_dev_rescan
[  300.552501] Call Trace:
[  300.552635]  <TASK>
[  300.552754]  __schedule+0xe13/0x33a0
[  300.552955]  ? __pfx___schedule+0x10/0x10
[  300.553165]  ? lock_release+0x441/0x870
[  300.553370]  ? __pfx_lock_release+0x10/0x10
[  300.553594]  ? _raw_spin_unlock_irq+0x3c/0x60
[  300.553825]  schedule+0xf6/0x3f0
[  300.554002]  io_schedule+0xce/0x150
[  300.554190]  blk_mq_get_tag+0x5e7/0xce0
[  300.554398]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.554629]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.554934]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.555181]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.555466]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.555740]  blk_mq_alloc_request+0x655/0x930
[  300.555973]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.556225]  ? __lock_acquire+0xd87/0x5c90
[  300.556450]  scsi_execute_cmd+0x223/0x1040
[  300.556684]  ? __this_cpu_preempt_check+0x21/0x30
[  300.556933]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.557178]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.557431]  scsi_vpd_inquiry+0xdf/0x220
[  300.557641]  ? __pfx_scsi_vpd_inquiry+0x10/0x10
[  300.557877]  ? debug_smp_processor_id+0x20/0x30
[  300.558113]  ? rcu_is_watching+0x19/0xc0
[  300.558325]  scsi_get_vpd_size+0x11e/0x310
[  300.558539]  ? __mutex_lock+0x230/0x1490
[  300.558763]  ? __pfx_scsi_get_vpd_size+0x10/0x10
[  300.559004]  ? scsi_rescan_device+0x33/0x360
[  300.559227]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.559465]  ? __this_cpu_preempt_check+0x21/0x30
[  300.559713]  ? lock_acquire.part.0+0x152/0x390
[  300.559947]  ? __pfx___mutex_lock+0x10/0x10
[  300.560172]  scsi_get_vpd_buf+0x30/0x180
[  300.560382]  scsi_attach_vpd+0xf8/0x5b0
[  300.560591]  scsi_rescan_device+0x106/0x360
[  300.560820]  ata_scsi_dev_rescan+0x1d2/0x470
[  300.561051]  process_one_work+0x92e/0x1b50
[  300.561264]  ? lock_acquire.part.0+0x152/0x390
[  300.561505]  ? __pfx_process_one_work+0x10/0x10
[  300.561741]  ? move_linked_works+0x1bf/0x2c0
[  300.561972]  ? __this_cpu_preempt_check+0x21/0x30
[  300.562220]  ? assign_work+0x19f/0x250
[  300.562421]  ? lock_is_held_type+0xef/0x150
[  300.562645]  worker_thread+0x68d/0xe90
[  300.562856]  ? lockdep_hardirqs_on+0x89/0x110
[  300.563105]  kthread+0x35a/0x470
[  300.563278]  ? __pfx_worker_thread+0x10/0x10
[  300.563501]  ? __pfx_kthread+0x10/0x10
[  300.563700]  ret_from_fork+0x56/0x90
[  300.563892]  ? __pfx_kthread+0x10/0x10
[  300.564093]  ret_from_fork_asm+0x1a/0x30
[  300.564313]  </TASK>
[  300.564432] INFO: task auditd:240 blocked for more than 147 seconds.
[  300.564749]       Not tainted 6.12.0-rc4-kvm #1
[  300.564984] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.565375] task:auditd          state:D stack:0     pid:240   tgid:240   ppid:1      flags:0x00000002
[  300.565832] Call Trace:
[  300.565963]  <TASK>
[  300.566084]  __schedule+0xe13/0x33a0
[  300.566284]  ? __pfx___schedule+0x10/0x10
[  300.566495]  ? lock_release+0x441/0x870
[  300.566700]  ? __pfx_lock_release+0x10/0x10
[  300.566932]  ? trace_lock_acquire+0x139/0x1b0
[  300.567165]  ? lock_acquire+0x80/0xb0
[  300.567362]  ? schedule+0x216/0x3f0
[  300.567553]  schedule+0xf6/0x3f0
[  300.567730]  io_schedule+0xce/0x150
[  300.567921]  bit_wait_io+0x1f/0x100
[  300.568110]  __wait_on_bit+0x68/0x1b0
[  300.568307]  ? __pfx_bit_wait_io+0x10/0x10
[  300.568528]  out_of_line_wait_on_bit+0xe6/0x120
[  300.568775]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
[  300.569042]  ? __pfx_wake_bit_function+0x10/0x10
[  300.569286]  ? __might_sleep+0xa3/0x170
[  300.569495]  do_get_write_access+0x83b/0x1010
[  300.569733]  jbd2_journal_get_write_access+0x1e4/0x270
[  300.570003]  __ext4_journal_get_write_access+0x72/0x3b0
[  300.570276]  ext4_reserve_inode_write+0x199/0x280
[  300.570522]  ? ext4_dirty_inode+0xeb/0x140
[  300.570754]  __ext4_mark_inode_dirty+0x18b/0x8c0
[  300.570999]  ? debug_smp_processor_id+0x20/0x30
[  300.571238]  ? __pfx___ext4_mark_inode_dirty+0x10/0x10
[  300.571502]  ? trace_jbd2_handle_start+0x77/0x210
[  300.571750]  ? jbd2__journal_start+0xff/0x6b0
[  300.571986]  ? __ext4_journal_start_sb+0x1f8/0x600
[  300.572237]  ? ext4_dirty_inode+0xb3/0x140
[  300.572460]  ? __pfx_ext4_dirty_inode+0x10/0x10
[  300.572702]  ext4_dirty_inode+0xeb/0x140
[  300.572911]  ? rcu_is_watching+0x19/0xc0
[  300.573121]  __mark_inode_dirty+0x1d0/0xd50
[  300.573343]  ? __pfx_generic_write_checks+0x10/0x10
[  300.573608]  generic_update_time+0xe0/0x110
[  300.573832]  file_modified+0x1af/0x1e0
[  300.574039]  ext4_buffered_write_iter+0x101/0x3e0
[  300.574292]  ext4_file_write_iter+0x8b8/0x1ab0
[  300.574524]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.574791]  ? debug_smp_processor_id+0x20/0x30
[  300.575035]  ? __pfx_ext4_file_write_iter+0x10/0x10
[  300.575288]  ? __this_cpu_preempt_check+0x21/0x30
[  300.575531]  ? lock_is_held_type+0xef/0x150
[  300.575758]  vfs_write+0xc46/0x1120
[  300.575950]  ? __pfx_ext4_file_write_iter+0x10/0x10
[  300.576211]  ? __pfx_vfs_write+0x10/0x10
[  300.576429]  ? mutex_lock_nested+0x1f/0x30
[  300.576647]  ksys_write+0x14f/0x290
[  300.576839]  ? __pfx_ksys_write+0x10/0x10
[  300.577058]  __x64_sys_write+0x7b/0xc0
[  300.577256]  ? syscall_trace_enter+0x9b/0x230
[  300.577490]  x64_sys_call+0x1685/0x20d0
[  300.577692]  do_syscall_64+0x6d/0x140
[  300.577889]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.578156] RIP: 0033:0x7fe1ebf3ebcf
[  300.578348] RSP: 002b:00007ffebc24a5e0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
[  300.578741] RAX: ffffffffffffffda RBX: 00000000000000e0 RCX: 00007fe1ebf3ebcf
[  300.579094] RDX: 00000000000000e0 RSI: 0000559099024040 RDI: 0000000000000005
[  300.579448] RBP: 0000559099024040 R08: 0000000000000000 R09: 00007fe1ebfb14e0
[  300.579802] R10: 00007fe1ebfb13e0 R11: 0000000000000293 R12: 00000000000000e0
[  300.580154] R13: 000055909901a2a0 R14: 00000000000000e0 R15: 00007fe1ebff69e0
[  300.580518]  </TASK>
[  300.580641] INFO: task in:imjournal:319 blocked for more than 147 seconds.
[  300.580981]       Not tainted 6.12.0-rc4-kvm #1
[  300.581211] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.581591] task:in:imjournal    state:D stack:0     pid:319   tgid:300   ppid:1      flags:0x00000002
[  300.582047] Call Trace:
[  300.582177]  <TASK>
[  300.582296]  __schedule+0xe13/0x33a0
[  300.582492]  ? __pfx___schedule+0x10/0x10
[  300.582700]  ? lock_release+0x441/0x870
[  300.582918]  ? __pfx_lock_release+0x10/0x10
[  300.583135]  ? trace_lock_acquire+0x139/0x1b0
[  300.583366]  ? lock_acquire+0x80/0xb0
[  300.583558]  ? schedule+0x216/0x3f0
[  300.583747]  schedule+0xf6/0x3f0
[  300.583923]  io_schedule+0xce/0x150
[  300.584109]  bit_wait_io+0x1f/0x100
[  300.584297]  __wait_on_bit+0x68/0x1b0
[  300.584491]  ? __pfx_bit_wait_io+0x10/0x10
[  300.584710]  out_of_line_wait_on_bit+0xe6/0x120
[  300.584950]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
[  300.585216]  ? __pfx_wake_bit_function+0x10/0x10
[  300.585457]  ? __might_sleep+0xa3/0x170
[  300.585662]  do_get_write_access+0x83b/0x1010
[  300.585899]  jbd2_journal_get_write_access+0x1e4/0x270
[  300.586166]  __ext4_journal_get_write_access+0x72/0x3b0
[  300.586436]  __ext4_new_inode+0xd66/0x5320
[  300.586661]  ? __pfx___ext4_new_inode+0x10/0x10
[  300.586910]  ? __pfx___dquot_initialize+0x10/0x10
[  300.587163]  ? _raw_spin_unlock+0x31/0x60
[  300.587372]  ? d_splice_alias+0x4e8/0xe30
[  300.587585]  ext4_create+0x32e/0x550
[  300.587783]  ? __pfx_ext4_create+0x10/0x10
[  300.588000]  ? inode_permission+0xf4/0x650
[  300.588215]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.588489]  ? __pfx_ext4_create+0x10/0x10
[  300.588704]  lookup_open.isra.0+0x117c/0x1580
[  300.588942]  ? __pfx_lookup_open.isra.0+0x10/0x10
[  300.589192]  ? lock_acquire+0x80/0xb0
[  300.589385]  ? path_openat+0xd40/0x2bf0
[  300.589597]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.589876]  path_openat+0xdbf/0x2bf0
[  300.590080]  ? __pfx_path_openat+0x10/0x10
[  300.590294]  ? __this_cpu_preempt_check+0x21/0x30
[  300.590540]  ? lock_is_held_type+0xef/0x150
[  300.590777]  ? __this_cpu_preempt_check+0x21/0x30
[  300.591026]  do_filp_open+0x1cc/0x420
[  300.591221]  ? __pfx_do_filp_open+0x10/0x10
[  300.591440]  ? lock_release+0x441/0x870
[  300.591652]  ? do_raw_spin_unlock+0x15c/0x210
[  300.591895]  do_sys_openat2+0x185/0x1f0
[  300.592097]  ? __pfx_do_sys_openat2+0x10/0x10
[  300.592327]  ? __this_cpu_preempt_check+0x21/0x30
[  300.592570]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.592891]  ? lockdep_hardirqs_on+0x89/0x110
[  300.593122]  __x64_sys_openat+0x17a/0x240
[  300.593332]  ? __pfx___x64_sys_openat+0x10/0x10
[  300.593569]  ? __audit_syscall_entry+0x39c/0x500
[  300.593816]  x64_sys_call+0x1a52/0x20d0
[  300.594017]  do_syscall_64+0x6d/0x140
[  300.594212]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.594470] RIP: 0033:0x7f4af113e8c4
[  300.594657] RSP: 002b:00007f4af01fea40 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
[  300.595043] RAX: ffffffffffffffda RBX: 00007f4ae8000c50 RCX: 00007f4af113e8c4
[  300.595396] RDX: 0000000000000241 RSI: 00007f4af01feb80 RDI: 00000000ffffff9c
[  300.595748] RBP: 00007f4af01feb80 R08: 0000000000000000 R09: 0000000000000001
[  300.596099] R10: 00000000000001b6 R11: 0000000000000293 R12: 0000000000000241
[  300.596451] R13: 00007f4ae8000c50 R14: 0000000000000001 R15: 00007f4ae8010540
[  300.596818]  </TASK>
[  300.596937] INFO: task rs:main Q:Reg:322 blocked for more than 147 seconds.
[  300.597279]       Not tainted 6.12.0-rc4-kvm #1
[  300.597508] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.597889] task:rs:main Q:Reg   state:D stack:0     pid:322   tgid:300   ppid:1      flags:0x00000002
[  300.598345] Call Trace:
[  300.598479]  <TASK>
[  300.598597]  __schedule+0xe13/0x33a0
[  300.598804]  ? __pfx___schedule+0x10/0x10
[  300.599013]  ? lock_release+0x441/0x870
[  300.599217]  ? __pfx_lock_release+0x10/0x10
[  300.599433]  ? trace_lock_acquire+0x139/0x1b0
[  300.599664]  ? lock_acquire+0x80/0xb0
[  300.599857]  ? schedule+0x216/0x3f0
[  300.600046]  schedule+0xf6/0x3f0
[  300.600221]  io_schedule+0xce/0x150
[  300.600408]  bit_wait_io+0x1f/0x100
[  300.600595]  __wait_on_bit+0x68/0x1b0
[  300.600791]  ? __pfx_bit_wait_io+0x10/0x10
[  300.601010]  out_of_line_wait_on_bit+0xe6/0x120
[  300.601246]  ? __pfx_out_of_line_wait_on_bit+0x10/0x10
[  300.601512]  ? __pfx_wake_bit_function+0x10/0x10
[  300.601753]  ? __might_sleep+0xa3/0x170
[  300.601959]  do_get_write_access+0x83b/0x1010
[  300.602196]  jbd2_journal_get_write_access+0x1e4/0x270
[  300.602462]  __ext4_journal_get_write_access+0x72/0x3b0
[  300.602744]  ext4_reserve_inode_write+0x199/0x280
[  300.602990]  ? ext4_dirty_inode+0xeb/0x140
[  300.603204]  __ext4_mark_inode_dirty+0x18b/0x8c0
[  300.603446]  ? debug_smp_processor_id+0x20/0x30
[  300.603683]  ? __pfx___ext4_mark_inode_dirty+0x10/0x10
[  300.603945]  ? trace_jbd2_handle_start+0x77/0x210
[  300.604192]  ? jbd2__journal_start+0xff/0x6b0
[  300.604425]  ? __ext4_journal_start_sb+0x1f8/0x600
[  300.604672]  ? ext4_dirty_inode+0xb3/0x140
[  300.604892]  ? __pfx_ext4_dirty_inode+0x10/0x10
[  300.605127]  ext4_dirty_inode+0xeb/0x140
[  300.605332]  ? rcu_is_watching+0x19/0xc0
[  300.605539]  __mark_inode_dirty+0x1d0/0xd50
[  300.605756]  ? __pfx_generic_write_checks+0x10/0x10
[  300.606011]  generic_update_time+0xe0/0x110
[  300.606232]  file_modified+0x1af/0x1e0
[  300.606432]  ext4_buffered_write_iter+0x101/0x3e0
[  300.606678]  ext4_file_write_iter+0x8b8/0x1ab0
[  300.606924]  ? __pfx_lock_acquire.part.0+0x10/0x10
[  300.607175]  ? debug_smp_processor_id+0x20/0x30
[  300.607414]  ? __pfx_ext4_file_write_iter+0x10/0x10
[  300.607667]  ? __this_cpu_preempt_check+0x21/0x30
[  300.607912]  ? lock_is_held_type+0xef/0x150
[  300.608135]  vfs_write+0xc46/0x1120
[  300.608324]  ? __pfx_ext4_file_write_iter+0x10/0x10
[  300.608577]  ? __pfx_vfs_write+0x10/0x10
[  300.608797]  ? mutex_lock_nested+0x1f/0x30
[  300.609016]  ksys_write+0x14f/0x290
[  300.609206]  ? __pfx_ksys_write+0x10/0x10
[  300.609420]  ? __audit_syscall_entry+0x39c/0x500
[  300.609663]  __x64_sys_write+0x7b/0xc0
[  300.609860]  ? syscall_trace_enter+0x14a/0x230
[  300.610093]  x64_sys_call+0x1685/0x20d0
[  300.610294]  do_syscall_64+0x6d/0x140
[  300.610490]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.610763] RIP: 0033:0x7f4af113ebcf
[  300.610950] RSP: 002b:00007f4aefbff890 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
[  300.611326] RAX: ffffffffffffffda RBX: 00007f4ae0000e70 RCX: 00007f4af113ebcf
[  300.611678] RDX: 0000000000000040 RSI: 00007f4ae0001150 RDI: 0000000000000005
[  300.612031] RBP: 00007f4ae0001150 R08: 0000000000000000 R09: 0000000000000000
[  300.612384] R10: 0000000000000007 R11: 0000000000000293 R12: 0000000000000000
[  300.612738] R13: 000055979b901330 R14: 00007f4ae0000e70 R15: 0000000000000040
[  300.613110]  </TASK>
[  300.613251] INFO: task repro:684 blocked for more than 147 seconds.
[  300.613537]       Not tainted 6.12.0-rc4-kvm #1
[  300.613749] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  300.614135] task:repro           state:D stack:0     pid:684   tgid:684   ppid:679    flags:0x00000006
[  300.614606] Call Trace:
[  300.614754]  <TASK>
[  300.614874]  __schedule+0xe13/0x33a0
[  300.615073]  ? __pfx___schedule+0x10/0x10
[  300.615283]  ? lock_release+0x441/0x870
[  300.615489]  ? __pfx_lock_release+0x10/0x10
[  300.615708]  ? trace_lock_acquire+0x139/0x1b0
[  300.615942]  ? lock_acquire+0x80/0xb0
[  300.616136]  ? schedule+0x216/0x3f0
[  300.616327]  schedule+0xf6/0x3f0
[  300.616507]  io_schedule+0xce/0x150
[  300.616695]  blk_mq_get_tag+0x5e7/0xce0
[  300.616909]  ? __pfx_blk_mq_get_tag+0x10/0x10
[  300.617146]  ? __pfx_autoremove_wake_function+0x10/0x10
[  300.617429]  __blk_mq_alloc_requests+0x6f6/0x1720
[  300.617678]  ? __sanitizer_cov_trace_const_cmp1+0x1e/0x30
[  300.617970]  ? __pfx___blk_mq_alloc_requests+0x10/0x10
[  300.618236]  ? __this_cpu_preempt_check+0x21/0x30
[  300.618480]  ? lock_acquire.part.0+0x152/0x390
[  300.618733]  blk_mq_alloc_request+0x655/0x930
[  300.618967]  ? __pfx_blk_mq_alloc_request+0x10/0x10
[  300.619234]  scsi_execute_cmd+0x223/0x1040
[  300.619455]  ? debug_smp_processor_id+0x20/0x30
[  300.619695]  ? __pfx_scsi_execute_cmd+0x10/0x10
[  300.619931]  ? rcu_is_watching+0x19/0xc0
[  300.620139]  ? __kmalloc_noprof+0x34f/0x4a0
[  300.620358]  ? ata_cmd_ioctl+0x375/0x570
[  300.620570]  ata_cmd_ioctl+0x23c/0x570
[  300.620775]  ? __pfx_ata_cmd_ioctl+0x10/0x10
[  300.621000]  ? lock_release+0x441/0x870
[  300.621209]  ? __this_cpu_preempt_check+0x21/0x30
[  300.621463]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.621740]  ? security_capable+0x19d/0x1e0
[  300.621976]  ata_sas_scsi_ioctl+0x60a/0x7c0
[  300.622195]  ? mark_lock.part.0+0xf3/0x17b0
[  300.622415]  ? __pfx_ata_sas_scsi_ioctl+0x10/0x10
[  300.622658]  ? __pfx_mark_lock.part.0+0x10/0x10
[  300.622915]  ? __this_cpu_preempt_check+0x21/0x30
[  300.623161]  ? lock_is_held_type+0xef/0x150
[  300.623382]  ? __pfx_ata_scsi_ioctl+0x10/0x10
[  300.623610]  ata_scsi_ioctl+0x70/0x90
[  300.623807]  scsi_ioctl+0xb6f/0x1630
[  300.624001]  ? __might_sleep+0xa3/0x170
[  300.624208]  ? __pfx_scsi_ioctl+0x10/0x10
[  300.624419]  ? scsi_block_when_processing_errors+0x2c4/0x390
[  300.624711]  ? __pfx_scsi_block_when_processing_errors+0x10/0x10
[  300.625022]  ? __pfx___lock_acquire+0x10/0x10
[  300.625250]  ? __sanitizer_cov_trace_switch+0x58/0xa0
[  300.625512]  ? __sanitizer_cov_trace_const_cmp2+0x1c/0x30
[  300.625787]  ? do_vfs_ioctl+0x4fa/0x1920
[  300.625997]  ? __pfx_do_vfs_ioctl+0x10/0x10
[  300.626217]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20
[  300.626497]  sg_ioctl+0xfb8/0x2880
[  300.626686]  ? __this_cpu_preempt_check+0x21/0x30
[  300.626947]  ? lock_release+0x441/0x870
[  300.627155]  ? __pfx_sg_ioctl+0x10/0x10
[  300.627366]  ? __this_cpu_preempt_check+0x21/0x30
[  300.627615]  ? seqcount_lockdep_reader_access.constprop.0+0xb4/0xd0
[  300.627932]  ? lockdep_hardirqs_on+0x89/0x110
[  300.628160]  ? trace_hardirqs_on+0x51/0x60
[  300.628381]  ? seqcount_lockdep_reader_access.constprop.0+0xc0/0xd0
[  300.628697]  ? __sanitizer_cov_trace_cmp4+0x1a/0x20
[  300.628949]  ? ktime_get_coarse_real_ts64+0xbf/0xf0
[  300.629210]  ? __pfx_sg_ioctl+0x10/0x10
[  300.629421]  __x64_sys_ioctl+0x1b5/0x230
[  300.629639]  x64_sys_call+0x1209/0x20d0
[  300.629844]  do_syscall_64+0x6d/0x140
[  300.630044]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[  300.630310] RIP: 0033:0x7fc80ee3ee5d
[  300.630502] RSP: 002b:00007fffa979c518 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[  300.630915] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fc80ee3ee5d
[  300.631281] RDX: 0000000020000080 RSI: 000000000000031f RDI: 0000000000000003
[  300.631640] RBP: 00007fffa979c530 R08: 00007fffa979c2a8 R09: 00007fffa979c530
[  300.631997] R10: 0000000000000000 R11: 0000000000000206 R12: 00007fffa979c688
[  300.632354] R13: 0000000000401972 R14: 0000000000403e08 R15: 00007fc80f193000
[  300.632722]  </TASK>
[  300.632849] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
[  300.633253]
[  300.633253] Showing all locks held in the system:
[  300.633567] 7 locks held by kworker/u8:0/11:
[  300.633790]  #0: ffff88800e4a8948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.634296]  #1: ffff88800dcd7d78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
[  300.634871]  #2: ffff888011f480e0 (&type->s_umount_key#38){++++}-{3:3}, at: super_trylock_shared+0x2b/0x110
[  300.635370]  #3: ffff888011f4ab98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: do_writepages+0x1c3/0x8a0
[  300.635848]  #4: ffff888011f4c958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.636299]  #5: ffff888023bb33c0 (&ei->i_data_sem){++++}-{3:3}, at: ext4_map_blocks+0x344/0x1390
[  300.636754]  #6: ffffe8ffffc077f8 (&lg->lg_mutex){+.+.}-{3:3}, at: ext4_mb_initialize_context+0x8a3/0xd20
[  300.637257] 4 locks held by kworker/u8:1/31:
[  300.637479]  #0: ffff88800e4a8948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.637979]  #1: ffff88800df47d78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
[  300.638532]  #2: ffff888011f480e0 (&type->s_umount_key#38){++++}-{3:3}, at: super_trylock_shared+0x2b/0x110
[  300.639046]  #3: ffff888011f4ab98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: do_writepages+0x1c3/0x8a0
[  300.639525] 1 lock held by khungtaskd/33:
[  300.639732]  #0: ffffffff8705ca00 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x73/0x3c0
[  300.640191] 4 locks held by kworker/u8:2/35:
[  300.640411]  #0: ffff88800e4a8948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.640917]  #1: ffff88800e4bfd78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
[  300.641479]  #2: ffff888011f480e0 (&type->s_umount_key#38){++++}-{3:3}, at: super_trylock_shared+0x2b/0x110
[  300.641990]  #3: ffff888011f4ab98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: do_writepages+0x1c3/0x8a0
[  300.642483] 3 locks held by kworker/u8:4/167:
[  300.642709]  #0: ffff88800e4a8948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.643228]  #1: ffff888013e07d78 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a0/0x1b0
[  300.643782]  #2: ffff88800d77e0e0 (&type->s_umount_key#52){.+.+}-{3:3}, at: super_trylock_shared+0x2b/0x110
[  300.644282] 4 locks held by kworker/1:3/233:
[  300.644503]  #0: ffff88800d45ad48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x11fc/0x1b50
[  300.644995]  #1: ffff888014ea7d78 ((work_completion)(&(&ap->scsi_rescan_task)->work)){+.+.}-{0:0}, at: process_one_work0
[  300.645610]  #2: ffff88801042c430 (&ap->scsi_scan_mutex){+.+.}-{3:3}, at: ata_scsi_dev_rescan+0x42/0x470
[  300.646096]  #3: ffff8880103e8378 (&dev->mutex){....}-{3:3}, at: scsi_rescan_device+0x33/0x360
[  300.646542] 4 locks held by auditd/240:
[  300.646757]  #0: ffff8880111580f8 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x25e/0x380
[  300.647182]  #1: ffff888011f48408 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x14f/0x290
[  300.647605]  #2: ffff888017c8d2b8 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb8/0x3e0
[  300.648137]  #3: ffff888011f4c958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.648591] 3 locks held by in:imjournal/319:
[  300.648820]  #0: ffff888011f48408 (sb_writers#3){.+.+}-{0:0}, at: path_openat+0x1f7c/0x2bf0
[  300.649260]  #1: ffff88801ae1f040 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: path_openat+0xd40/0x2bf0
[  300.649745]  #2: ffff888011f4c958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.650205] 4 locks held by rs:main Q:Reg/322:
[  300.650437]  #0: ffff888013b782b8 (&f->f_pos_lock){+.+.}-{3:3}, at: fdget_pos+0x25e/0x380
[  300.650880]  #1: ffff888011f48408 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0x14f/0x290
[  300.651306]  #2: ffff8880195183f8 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb8/0x3e0
[  300.651844]  #3: ffff888011f4c958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x107b/0x1590
[  300.652307]
[  300.652397] =============================================
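(Aside: the reports above, including the "Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings" line, come from the kernel's hung task detector. A read-only way to inspect its knobs, assuming a kernel built with CONFIG_DETECT_HUNG_TASK; the sysctl names are standard, the rest is just a sketch:)

```shell
# Collect the hung task detector settings; entries are missing on kernels
# built without CONFIG_DETECT_HUNG_TASK, so absence is reported, not fatal.
out=$(for k in hung_task_timeout_secs hung_task_warnings hung_task_panic; do
	f=/proc/sys/kernel/$k
	if [ -r "$f" ]; then
		echo "$k = $(cat "$f")"
	else
		echo "$k: not exposed on this kernel"
	fi
done)
printf '%s\n' "$out"
```

Setting hung_task_timeout_secs to 0, as the log suggests, disables the reports entirely.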


> Kind regards,
> Niklas

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-22  5:44       ` Lai, Yi
@ 2024-10-22  8:48         ` Niklas Cassel
  2024-10-22  9:55           ` Lai, Yi
  0 siblings, 1 reply; 10+ messages in thread
From: Niklas Cassel @ 2024-10-22  8:48 UTC (permalink / raw)
  To: Lai, Yi
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Tue, Oct 22, 2024 at 01:44:08PM +0800, Lai, Yi wrote:
> On Mon, Oct 21, 2024 at 05:20:12PM +0200, Niklas Cassel wrote:
> > On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> > > Hello Yi Lai,
> > > 
> > > On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > > > Hi Niklas Cassel,
> > > > 
> > > > Greetings!
> > > > 
> > > > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > > > 
> > > > After bisection and the first bad commit is:
> > > > "
> > > > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > > > "
> > > 
> > > It might be that your bisection results are accurate.
> > > 
> > > However, after looking at the stacktraces, I find it way more likely that
> > > bisection has landed on the wrong commit.
> > > 
> > > See this series that was just queued (for 6.13) a few days ago that solves a
> > > similar starvation:
> > > https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> > > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> > > 
> > > You could perhaps run with v6.12-rc4 (which should be able to trigger the bug)
> > > and then try v6.12-rc4 + that series applied, to see if you can still trigger
> > > the bug?
> > 
> > Another patch that might be relevant:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e972b08b91ef48488bae9789f03cfedb148667fb
> > 
> > Which fixes a use after delete in rq_qos_wake_function().
> > (We can see that the stack trace has rq_qos_wake_function() before
> > getting stuck forever in rq_qos_wait())
> > 
> > Who knows what could go wrong when accessing a deleted entry, in the
> > report there was a crash, but I could imagine other surprises :)
> > The fix was first included in v6.12-rc4.
> > 
> >
> Hi Niklas,
> 
> Thanks for the info. I have tried using the v6.12-rc4 kernel to reproduce
> the issue. Using the same repro binary, the issue still exists.

Thanks a lot for your help with testing!

The first series that I pointed to, which looks most likely to be related:
https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/

Is only merged in:
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block

It is not included in v6.12-rc4.
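(Whether a tag already contains a given commit can be checked mechanically with `git merge-base --is-ancestor` or `git tag --contains`. A minimal sketch using a throwaway repository; with a real kernel tree you would substitute the actual fix commit id and release tag:)

```shell
# Throwaway repo: one tagged commit, then a "fix" committed after the tag,
# so the tag cannot contain the fix. The commit ids and tag name here are
# toy values standing in for a real fix commit and release tag.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m base
git tag v1.0
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m fix
fix=$(git rev-parse HEAD)

# --is-ancestor exits 0 iff the tag already contains the commit.
if git merge-base --is-ancestor "$fix" v1.0; then
	echo "v1.0 contains the fix"
else
	echo "v1.0 does NOT contain the fix"
fi
```

Here the fix landed after the tag, so the check reports that v1.0 does not contain it.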

Would it be possible for you to test with Jens's for-6.13/block branch, please?


Kind regards,
Niklas

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-22  8:48         ` Niklas Cassel
@ 2024-10-22  9:55           ` Lai, Yi
  2024-10-22 11:04             ` Niklas Cassel
  0 siblings, 1 reply; 10+ messages in thread
From: Lai, Yi @ 2024-10-22  9:55 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Tue, Oct 22, 2024 at 10:48:04AM +0200, Niklas Cassel wrote:
> On Tue, Oct 22, 2024 at 01:44:08PM +0800, Lai, Yi wrote:
> > On Mon, Oct 21, 2024 at 05:20:12PM +0200, Niklas Cassel wrote:
> > > On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> > > > Hello Yi Lai,
> > > > 
> > > > On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > > > > Hi Niklas Cassel,
> > > > > 
> > > > > Greetings!
> > > > > 
> > > > > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > > > > 
> > > > > After bisection and the first bad commit is:
> > > > > "
> > > > > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > > > > "
> > > > 
> > > > It might be that your bisection results are accurate.
> > > > 
> > > > However, after looking at the stacktraces, I find it way more likely that
> > > > bisection has landed on the wrong commit.
> > > > 
> > > > See this series that was just queued (for 6.13) a few days ago that solves a
> > > > similar starvation:
> > > > https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> > > > 
> > > > You could perhaps run with v6.12-rc4 (which should be able to trigger the bug)
> > > > and then try v6.12-rc4 + that series applied, to see if you can still trigger
> > > > the bug?
> > >

I tried the linux-block tree
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
branch for-6.13/block, at commit c97f91b1807a7966077b69b24f28c2dbcde664e9.

Issue can still be reproduced.

> > > Another patch that might be relevant:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e972b08b91ef48488bae9789f03cfedb148667fb
> > > 
> > > Which fixes a use after delete in rq_qos_wake_function().
> > > (We can see that the stack trace has rq_qos_wake_function() before
> > > getting stuck forever in rq_qos_wait())
> > > 
> > > Who knows what could go wrong when accessing a deleted entry, in the
> > > report there was a crash, but I could imagine other surprises :)
> > > The fix was first included in v6.12-rc4.
> > > 
> > >
> > Hi Niklas,
> > 
> > Thanks for the info. I have tried using the v6.12-rc4 kernel to reproduce
> > the issue. Using the same repro binary, the issue still exists.
> 
> Thanks a lot for your help with testing!
> 
> The first series that I pointed to, which looks most likely to be related:
> https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> 
> Is only merged in:
> https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> 
> It is not included in v6.12-rc4.
> 
> Would it be possible for you to test with Jens's for-6.13/block branch, please?
> 
> 
> Kind regards,
> Niklas

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-22  9:55           ` Lai, Yi
@ 2024-10-22 11:04             ` Niklas Cassel
  2024-10-22 15:59               ` Niklas Cassel
  0 siblings, 1 reply; 10+ messages in thread
From: Niklas Cassel @ 2024-10-22 11:04 UTC (permalink / raw)
  To: Lai, Yi
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Tue, Oct 22, 2024 at 05:55:17PM +0800, Lai, Yi wrote:
> On Tue, Oct 22, 2024 at 10:48:04AM +0200, Niklas Cassel wrote:
> > On Tue, Oct 22, 2024 at 01:44:08PM +0800, Lai, Yi wrote:
> > > On Mon, Oct 21, 2024 at 05:20:12PM +0200, Niklas Cassel wrote:
> > > > On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> > > > > Hello Yi Lai,
> > > > > 
> > > > > On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > > > > > Hi Niklas Cassel,
> > > > > > 
> > > > > > Greetings!
> > > > > > 
> > > > > > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > > > > > 
> > > > > > After bisection, the first bad commit is:
> > > > > > "
> > > > > > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > > > > > "
> > > > > 
> > > > > It might be that your bisection results are accurate.
> > > > > 
> > > > > However, after looking at the stacktraces, I find it way more likely that
> > > > > bisection has landed on the wrong commit.
> > > > > 
> > > > > See this series that was just queued (for 6.13) a few days ago that solves a
> > > > > similar starvation:
> > > > > https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> > > > > 
> > > > > You could perhaps run with v6.12-rc4 (which should be able to trigger the bug)
> > > > > and then try v6.12-rc4 + that series applied, to see if you can still trigger
> > > > > the bug?
> > > >
> 
> I tried kernel linux-block
> https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
> branch for-6.13/block commit c97f91b1807a7966077b69b24f28c2dbcde664e9.
> 
> Issue can still be reproduced.

Thanks again for your testing!

I will try to reproduce using your reproducer (repro.c),
and see if I can make any sense of this.


Kind regards,
Niklas


* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-22 11:04             ` Niklas Cassel
@ 2024-10-22 15:59               ` Niklas Cassel
  2024-10-23  2:55                 ` Lai, Yi
  0 siblings, 1 reply; 10+ messages in thread
From: Niklas Cassel @ 2024-10-22 15:59 UTC (permalink / raw)
  To: Lai, Yi
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Tue, Oct 22, 2024 at 01:04:10PM +0200, Niklas Cassel wrote:
> On Tue, Oct 22, 2024 at 05:55:17PM +0800, Lai, Yi wrote:
> > On Tue, Oct 22, 2024 at 10:48:04AM +0200, Niklas Cassel wrote:
> > > On Tue, Oct 22, 2024 at 01:44:08PM +0800, Lai, Yi wrote:
> > > > On Mon, Oct 21, 2024 at 05:20:12PM +0200, Niklas Cassel wrote:
> > > > > On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> > > > > > Hello Yi Lai,
> > > > > > 
> > > > > > On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > > > > > > Hi Niklas Cassel,
> > > > > > > 
> > > > > > > Greetings!
> > > > > > > 
> > > > > > > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > > > > > > 
> > > > > > > After bisection, the first bad commit is:
> > > > > > > "
> > > > > > > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > > > > > > "
> > > > > > 
> > > > > > It might be that your bisection results are accurate.
> > > > > > 
> > > > > > However, after looking at the stacktraces, I find it way more likely that
> > > > > > bisection has landed on the wrong commit.
> > > > > > 
> > > > > > See this series that was just queued (for 6.13) a few days ago that solves a
> > > > > > similar starvation:
> > > > > > https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> > > > > > 
> > > > > > You could perhaps run with v6.12-rc4 (which should be able to trigger the bug)
> > > > > > and then try v6.12-rc4 + that series applied, to see if you can still trigger
> > > > > > the bug?
> > > > >
> > 
> > I tried kernel linux-block
> > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
> > branch for-6.13/block commit c97f91b1807a7966077b69b24f28c2dbcde664e9.
> > 
> > Issue can still be reproduced.
> 
> Thanks again for your testing!
> 
> I will try to reproduce using your reproducer (repro.c),
> and see if I can make any sense of this.

Hello Yi Lai,


An important thing that I noticed: even your good commit
(v5.11) - the first commit you marked as good in git bisect - has this:
https://github.com/laifryiee/syzkaller_logs/blob/main/241018_105830_blk_mq_get_tag/f40ddce88593482919761f74910f42f4b84c004b_dmesg.log

[  300.525626]  __schedule+0xb9f/0x2eb0^M
[  300.525843]  ? __pfx___schedule+0x10/0x10^M
[  300.526056]  ? lock_release+0x441/0x870^M
[  300.526286]  ? __pfx_lock_release+0x10/0x10^M
[  300.526512]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20^M
[  300.526801]  ? kthread_data+0x61/0xd0^M
[  300.527007]  schedule+0xf6/0x3f0^M
[  300.527225]  io_schedule+0xce/0x150^M
[  300.527408]  rq_qos_wait+0x1c5/0x310^M
[  300.527610]  ? __pfx_wbt_cleanup_cb+0x10/0x10^M
[  300.527847]  ? __pfx_rq_qos_wait+0x10/0x10^M
[  300.528070]  ? __pfx_lock_release+0x10/0x10^M
[  300.528296]  ? __pfx_rq_qos_wake_function+0x10/0x10^M
[  300.528562]  ? __pfx_wbt_inflight_cb+0x10/0x10^M
[  300.528806]  ? do_raw_spin_unlock+0x15c/0x210^M
[  300.529048]  wbt_wait+0x1ec/0x400^M

In fact, most *_dmesg.log logs in your syzlog directory have a stack trace that
contains a "blocked for more than 147 seconds" error.
Many of these logs (with the "blocked for more than 147 seconds" error)
are from commits that your bisect script classified as "good".


If we look at your own dmesg with Linux 6.12-rc3 + e5dd410acb34 reverted,
which passes your git bisect script even though it has a
"blocked for more than 147 seconds" error:
https://github.com/laifryiee/syzkaller_logs/blob/main/241018_105830_blk_mq_get_tag/8e929cb546ee42c9a61d24fae60605e9e3192354_e5dd410acb34c7341a0a93b429dcf3dabf9e3323_revert_dmesg.log
it seems that the reason is that you don't find "blk_mq_get_tag" in dmesg:
|1019_054406|/var/www/html/bzimage/bzImage_8e929cb546ee42c9a61d24fae60605e9e3192354_e5dd410acb34c7341a0a93b429dcf3dabf9e3323_revert didn't contain blk_mq_get_tag: in dmesg, pass|
|1019_054406|Bisect successfully! 8e929cb546ee42c9a61d24fae60605e9e3192354_e5dd410acb34c7341a0a93b429dcf3dabf9e3323_revert bzimage passed!|

So it seems that both v5.11 and "v6.12-rc3 + e5dd410acb34 reverted" have
problems.


I managed to reproduce using your reproducer, and with
v6.12-rc4 + e5dd410acb34 reverted, I see:
[  299.641187]  __schedule+0x1144/0x3200
[  299.641425]  ? __pfx___schedule+0x10/0x10
[  299.641651]  ? lock_release+0x4b0/0x870
[  299.641868]  ? debug_smp_processor_id+0x1b/0x30
[  299.642128]  ? __pfx_lock_release+0x10/0x10
[  299.642362]  ? srso_alias_return_thunk+0x5/0xfbef5
[  299.642640]  ? lock_acquire+0x80/0xb0
[  299.642852]  ? schedule+0x202/0x3d0
[  299.643068]  schedule+0xe7/0x3d0
[  299.643261]  io_schedule+0x96/0x100
[  299.643494]  bit_wait_io+0x17/0xf0
[  299.643703]  __wait_on_bit_lock+0x118/0x1a0
[  299.643941]  ? __pfx_bit_wait_io+0x10/0x10
[  299.644195]  out_of_line_wait_on_bit_lock+0xe8/0x120
[  299.644475]  ? __pfx_out_of_line_wait_on_bit_lock+0x10/0x10
[  299.644793]  ? __pfx_wake_bit_function+0x10/0x10
[  299.645060]  ? srso_alias_return_thunk+0x5/0xfbef5
[  299.645344]  __lock_buffer+0x72/0x80
[  299.645569]  do_get_write_access+0x7d0/0x1130
[  299.645838]  jbd2_journal_get_write_access+0x1e9/0x270
[  299.646136]  __ext4_journal_get_write_access+0x72/0x3b0
[  299.646436]  ext4_reserve_inode_write+0x145/0x280

So even with e5dd410acb34 reverted, we see a problem!



The difference is that the stack traces before and after e5dd410acb34 are
slightly different.
Before e5dd410acb34 the stack trace often looks like one of the two above.
After e5dd410acb34 the stack trace instead always contains blk_mq_get_tag (and
the hang seems to be easier to reproduce).


I think that the stack trace that contains "blk_mq_get_tag" actually indicates
a new problem though, so thank you for that!

I have a fix for it:
diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
index fa41ea57a978..3b303d4ae37a 100644
--- a/drivers/ata/libata-eh.c
+++ b/drivers/ata/libata-eh.c
@@ -651,6 +651,7 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
                        /* the scmd has an associated qc */
                        if (!(qc->flags & ATA_QCFLAG_EH)) {
                                /* which hasn't failed yet, timeout */
+                               set_host_byte(scmd, DID_TIME_OUT);
                                qc->err_mask |= AC_ERR_TIMEOUT;
                                qc->flags |= ATA_QCFLAG_EH;
                                nr_timedout++;



I will post it as a real fix for the new problem (blk_mq_get_tag) tomorrow.

This just solves the new problem though.

You probably need to do another bisection to find the problem that exists on
v5.11 and earlier kernels.


Kind regards,
Niklas


* Re: [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
  2024-10-22 15:59               ` Niklas Cassel
@ 2024-10-23  2:55                 ` Lai, Yi
  0 siblings, 0 replies; 10+ messages in thread
From: Lai, Yi @ 2024-10-23  2:55 UTC (permalink / raw)
  To: Niklas Cassel
  Cc: Damien Le Moal, Hannes Reinecke, Martin K. Petersen, Igor Pylypiv,
	Niklas Cassel, linux-ide, yi1.lai, syzkaller-bugs, linux-kernel

On Tue, Oct 22, 2024 at 05:59:13PM +0200, Niklas Cassel wrote:
> On Tue, Oct 22, 2024 at 01:04:10PM +0200, Niklas Cassel wrote:
> > On Tue, Oct 22, 2024 at 05:55:17PM +0800, Lai, Yi wrote:
> > > On Tue, Oct 22, 2024 at 10:48:04AM +0200, Niklas Cassel wrote:
> > > > On Tue, Oct 22, 2024 at 01:44:08PM +0800, Lai, Yi wrote:
> > > > > On Mon, Oct 21, 2024 at 05:20:12PM +0200, Niklas Cassel wrote:
> > > > > > On Mon, Oct 21, 2024 at 02:07:21PM +0200, Niklas Cassel wrote:
> > > > > > > Hello Yi Lai,
> > > > > > > 
> > > > > > > On Mon, Oct 21, 2024 at 06:58:59PM +0800, Lai, Yi wrote:
> > > > > > > > Hi Niklas Cassel,
> > > > > > > > 
> > > > > > > > Greetings!
> > > > > > > > 
> > > > > > > > I used Syzkaller and found that there is INFO: task hung in blk_mq_get_tag in v6.12-rc3
> > > > > > > > 
> > > > > > > > After bisection, the first bad commit is:
> > > > > > > > "
> > > > > > > > e5dd410acb34 ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data
> > > > > > > > "
> > > > > > > 
> > > > > > > It might be that your bisection results are accurate.
> > > > > > > 
> > > > > > > However, after looking at the stacktraces, I find it way more likely that
> > > > > > > bisection has landed on the wrong commit.
> > > > > > > 
> > > > > > > See this series that was just queued (for 6.13) a few days ago that solves a
> > > > > > > similar starvation:
> > > > > > > https://lore.kernel.org/linux-block/20241014092934.53630-1-songmuchun@bytedance.com/
> > > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.13/block
> > > > > > > 
> > > > > > > You could perhaps run with v6.12-rc4 (which should be able to trigger the bug)
> > > > > > > and then try v6.12-rc4 + that series applied, to see if you can still trigger
> > > > > > > the bug?
> > > > > >
> > > 
> > > I tried kernel linux-block
> > > https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
> > > branch for-6.13/block commit c97f91b1807a7966077b69b24f28c2dbcde664e9.
> > > 
> > > Issue can still be reproduced.
> > 
> > Thanks again for your testing!
> > 
> > I will try to reproduce using your reproducer (repro.c),
> > and see if I can make any sense of this.
> 
> Hello Yi Lai,
> 
> 
> An important thing that I noticed: even your good commit
> (v5.11) - the first commit you marked as good in git bisect - has this:
> https://github.com/laifryiee/syzkaller_logs/blob/main/241018_105830_blk_mq_get_tag/f40ddce88593482919761f74910f42f4b84c004b_dmesg.log
> 
> [  300.525626]  __schedule+0xb9f/0x2eb0^M
> [  300.525843]  ? __pfx___schedule+0x10/0x10^M
> [  300.526056]  ? lock_release+0x441/0x870^M
> [  300.526286]  ? __pfx_lock_release+0x10/0x10^M
> [  300.526512]  ? __sanitizer_cov_trace_const_cmp4+0x1a/0x20^M
> [  300.526801]  ? kthread_data+0x61/0xd0^M
> [  300.527007]  schedule+0xf6/0x3f0^M
> [  300.527225]  io_schedule+0xce/0x150^M
> [  300.527408]  rq_qos_wait+0x1c5/0x310^M
> [  300.527610]  ? __pfx_wbt_cleanup_cb+0x10/0x10^M
> [  300.527847]  ? __pfx_rq_qos_wait+0x10/0x10^M
> [  300.528070]  ? __pfx_lock_release+0x10/0x10^M
> [  300.528296]  ? __pfx_rq_qos_wake_function+0x10/0x10^M
> [  300.528562]  ? __pfx_wbt_inflight_cb+0x10/0x10^M
> [  300.528806]  ? do_raw_spin_unlock+0x15c/0x210^M
> [  300.529048]  wbt_wait+0x1ec/0x400^M
> 
> In fact, most *_dmesg.log logs in your syzlog directory have a stack trace that
> contains a "blocked for more than 147 seconds" error.
> Many of these logs (with the "blocked for more than 147 seconds" error)
> are from commits that your bisect script classified as "good".
> 
> 
> If we look at your own dmesg with Linux 6.12-rc3 + e5dd410acb34 reverted,
> which passes your git bisect script even though it has a
> "blocked for more than 147 seconds" error:
> https://github.com/laifryiee/syzkaller_logs/blob/main/241018_105830_blk_mq_get_tag/8e929cb546ee42c9a61d24fae60605e9e3192354_e5dd410acb34c7341a0a93b429dcf3dabf9e3323_revert_dmesg.log
> it seems that the reason is that you don't find "blk_mq_get_tag" in dmesg:
> |1019_054406|/var/www/html/bzimage/bzImage_8e929cb546ee42c9a61d24fae60605e9e3192354_e5dd410acb34c7341a0a93b429dcf3dabf9e3323_revert didn't contain blk_mq_get_tag: in dmesg, pass|
> |1019_054406|Bisect successfully! 8e929cb546ee42c9a61d24fae60605e9e3192354_e5dd410acb34c7341a0a93b429dcf3dabf9e3323_revert bzimage passed!|
> 
> So it seems that both v5.11 and "v6.12-rc3 + e5dd410acb34 reverted" have
> problems.
> 
> 
> I managed to reproduce using your reproducer, and with
> v6.12-rc4 + e5dd410acb34 reverted, I see:
> [  299.641187]  __schedule+0x1144/0x3200
> [  299.641425]  ? __pfx___schedule+0x10/0x10
> [  299.641651]  ? lock_release+0x4b0/0x870
> [  299.641868]  ? debug_smp_processor_id+0x1b/0x30
> [  299.642128]  ? __pfx_lock_release+0x10/0x10
> [  299.642362]  ? srso_alias_return_thunk+0x5/0xfbef5
> [  299.642640]  ? lock_acquire+0x80/0xb0
> [  299.642852]  ? schedule+0x202/0x3d0
> [  299.643068]  schedule+0xe7/0x3d0
> [  299.643261]  io_schedule+0x96/0x100
> [  299.643494]  bit_wait_io+0x17/0xf0
> [  299.643703]  __wait_on_bit_lock+0x118/0x1a0
> [  299.643941]  ? __pfx_bit_wait_io+0x10/0x10
> [  299.644195]  out_of_line_wait_on_bit_lock+0xe8/0x120
> [  299.644475]  ? __pfx_out_of_line_wait_on_bit_lock+0x10/0x10
> [  299.644793]  ? __pfx_wake_bit_function+0x10/0x10
> [  299.645060]  ? srso_alias_return_thunk+0x5/0xfbef5
> [  299.645344]  __lock_buffer+0x72/0x80
> [  299.645569]  do_get_write_access+0x7d0/0x1130
> [  299.645838]  jbd2_journal_get_write_access+0x1e9/0x270
> [  299.646136]  __ext4_journal_get_write_access+0x72/0x3b0
> [  299.646436]  ext4_reserve_inode_write+0x145/0x280
> 
> So even with e5dd410acb34 reverted, we see a problem!
> 
> 
> 
> The difference is that the stack traces before and after e5dd410acb34 are
> slightly different.
> Before e5dd410acb34 the stack trace often looks like one of the two above.
> After e5dd410acb34 the stack trace instead always contains blk_mq_get_tag (and
> the hang seems to be easier to reproduce).
> 
> 
> I think that the stack trace that contains "blk_mq_get_tag" actually indicates
> a new problem though, so thank you for that!
> 
> I have a fix for it:
> diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
> index fa41ea57a978..3b303d4ae37a 100644
> --- a/drivers/ata/libata-eh.c
> +++ b/drivers/ata/libata-eh.c
> @@ -651,6 +651,7 @@ void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap,
>                         /* the scmd has an associated qc */
>                         if (!(qc->flags & ATA_QCFLAG_EH)) {
>                                 /* which hasn't failed yet, timeout */
> +                               set_host_byte(scmd, DID_TIME_OUT);
>                                 qc->err_mask |= AC_ERR_TIMEOUT;
>                                 qc->flags |= ATA_QCFLAG_EH;
>                                 nr_timedout++;
> 
> 
> 
> I will post it as a real fix for the new problem (blk_mq_get_tag) tomorrow.
>

I applied this fix on top of Linux v6.12-rc4. The new problem can no longer
be reproduced.

If possible, please add a Reported-by tag.

> This just solves the new problem though.
> 
> You probably need to do another bisection to find the problem that exists on
> v5.11 and earlier kernels.

Thanks for the analysis. It indeed gives me direction to further improve the
issue reproduction and bisection flow. I will follow up.

Regards,
Yi Lai

> 
> 
> Kind regards,
> Niklas


end of thread, other threads:[~2024-10-23  2:56 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20240909154237.3656000-2-cassel@kernel.org>
2024-10-21 10:58 ` [PATCH v3] ata: libata: Clear DID_TIME_OUT for ATA PT commands with sense data Lai, Yi
2024-10-21 12:07   ` Niklas Cassel
2024-10-21 12:34     ` Damien Le Moal
2024-10-21 15:20     ` Niklas Cassel
2024-10-22  5:44       ` Lai, Yi
2024-10-22  8:48         ` Niklas Cassel
2024-10-22  9:55           ` Lai, Yi
2024-10-22 11:04             ` Niklas Cassel
2024-10-22 15:59               ` Niklas Cassel
2024-10-23  2:55                 ` Lai, Yi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox