* [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list
@ 2026-04-17  8:27 Michael Wu
  2026-04-17 15:15 ` Ming Lei
  2026-04-20  6:31 ` Michael Wu
  0 siblings, 2 replies; 5+ messages in thread

From: Michael Wu @ 2026-04-17  8:27 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, linux-kernel

Kernel: Linux version 6.18.16
Platform: Android

A three-way deadlock can occur between blk_mq_freeze_queue and
blk_mq_dispatch_list, involving percpu_ref reference counting and rwsem
synchronization:

- Task A holds io_rwsem (e.g., the F2FS write path) and enters
  __bio_queue_enter(), where it acquires a percpu_ref and waits for
  mq_freeze_depth == 0.
- Task B holds mq_freeze_depth=1 (elevator_change) and waits for
  q_usage_counter to reach zero in blk_mq_freeze_queue_wait().
- Task C is scheduled out via schedule() while waiting for io_rwsem.
  Before switching, __blk_flush_plug() triggers blk_mq_dispatch_list(),
  which takes a percpu_ref via percpu_ref_get(). If
  preempt_schedule_notrace() fires before percpu_ref_put(), Task C
  holds the reference while blocked on the rwsem.

Since Task C cannot release its percpu_ref while blocked, Task B cannot
unfreeze the queue, and Task A cannot proceed to release the io_rwsem,
creating a circular-dependency deadlock.

Change:
Fix by disabling preemption in blk_mq_dispatch_list() when called from
schedule() (from_sched=true), ensuring percpu_ref_get() and
percpu_ref_put() are atomic with respect to context switches. With
from_sched=true, blk_mq_run_hw_queue() dispatches asynchronously via
kblockd, so no driver callbacks run in this context and
preempt_disable() is safe.

Detailed scenario description:
When process 1838 performs f2fs_submit_page_write, it obtains io_rwsem
via f2fs_down_write_trace. When process 1865 performs
f2fs_down_write_trace and wants to obtain io_rwsem, it must wait for
process 1838 to release it, so it can only be scheduled out via
schedule.
Before being scheduled out, it flushes the plug via __blk_flush_plug,
so it reaches blk_mq_dispatch_list. Process 619 is changing the I/O
scheduler, calling elevator_change, which sets mq_freeze_depth=1.
After that, blk_mq_freeze_queue_wait waits for the q_usage_counter
reference count to drop to zero. Coincidentally, process 1838 reaches
__bio_queue_enter and must wait for mq_freeze_depth=0, so it can only
be woken up once the queue is unfrozen. Meanwhile process 1865, inside
blk_mq_dispatch_list after percpu_ref_get has incremented
q_usage_counter but before percpu_ref_put, is scheduled out by
preempt_schedule_notrace due to preemption, so q_usage_counter can
never reach zero.

At this point, process 1865 depends on io_rwsem to wake up, process
1838 depends on mq_freeze_depth=0 to wake up, and process 619 depends
on q_usage_counter reaching zero to wake up and unfreeze the queue
(setting mq_freeze_depth=0), resulting in a deadlock among these three
processes.
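The circular dependency among the three processes can be modeled as a wait-for graph. The following is a hypothetical, standalone userspace sketch (not kernel code); the task indices and `waits_on` edges simply encode the dependencies described above.

```c
#include <assert.h>

/* Hypothetical userspace model of the wait-for cycle described above.
 * Task indices: 0 = task 1838 (A), 1 = task 619 (B), 2 = task 1865 (C).
 * waits_on[i] is the task whose resource task i is blocked on:
 *   A waits on B (mq_freeze_depth held by the freezer),
 *   B waits on C (q_usage_counter reference),
 *   C waits on A (io_rwsem). */
static const int waits_on[3] = { 1, 2, 0 };

/* Follow wait-for edges from 'start'; return 1 if the walk comes back
 * to 'start', i.e. the tasks form a circular dependency (deadlock). */
int in_deadlock_cycle(int start)
{
    int cur = waits_on[start];
    int steps;

    for (steps = 0; steps < 3; steps++) {
        if (cur == start)
            return 1;       /* closed cycle: nobody can make progress */
        cur = waits_on[cur];
    }
    return 0;
}
```

Starting from any of the three tasks, the walk returns to its origin, which is exactly why no wake-up source remains.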
Stack traces from the deadlock:

Task 1838 (Back-P10-3) - holds io_rwsem, waiting for queue unfreeze:
Call trace:
 __switch_to+0x1a4/0x35c
 __schedule+0x8e0/0xec4
 schedule+0x54/0xf8
 __bio_queue_enter+0xbc/0x19c
 blk_mq_submit_bio+0x118/0x814
 __submit_bio+0x9c/0x234
 submit_bio_noacct_nocheck+0x10c/0x2d4
 submit_bio_noacct+0x354/0x544
 submit_bio+0x1e8/0x208
 f2fs_submit_write_bio+0x44/0xe4
 __submit_merged_bio+0x40/0x114
 f2fs_submit_page_write+0x3f0/0x7e0
 do_write_page+0x180/0x2fc
 f2fs_outplace_write_data+0x78/0x100
 f2fs_do_write_data_page+0x3b8/0x500
 f2fs_write_single_data_page+0x1ac/0x6e0
 f2fs_write_data_pages+0x838/0xdfc
 do_writepages+0xd0/0x19c
 filemap_write_and_wait_range+0x204/0x274
 f2fs_commit_atomic_write+0x54/0x960
 __f2fs_ioctl+0x2128/0x42c8
 f2fs_ioctl+0x38/0xb4
 __arm64_sys_ioctl+0xa0/0xf4

Task 619 (android.hardwar) - holds mq_freeze_depth=1, waiting for percpu_ref:
Call trace:
 __switch_to+0x1a4/0x35c
 __schedule+0x8e0/0xec4
 schedule+0x54/0xf8
 blk_mq_freeze_queue_wait+0x68/0xb0
 blk_mq_freeze_queue_nomemsave+0x68/0x7c
 elevator_change+0x70/0x14c
 elv_iosched_store+0x1b0/0x234
 queue_attr_store+0xe0/0x134
 sysfs_kf_write+0x98/0xbc
 kernfs_fop_write_iter+0x118/0x1e8
 vfs_write+0x2e8/0x448
 ksys_write+0x78/0xf0
 __arm64_sys_write+0x1c/0x2c

Task 1865 (sp-control-1) - holds percpu_ref, preempted in dispatch_list:
Call trace:
 __switch_to+0x1a4/0x35c
 __schedule+0x8e0/0xec4
 preempt_schedule_notrace+0x60/0x7c
 blk_mq_dispatch_list+0x5c0/0x690
 blk_mq_flush_plug_list+0x13c/0x170
 __blk_flush_plug+0x11c/0x17c
 schedule+0x40/0xf8
 schedule_preempt_disabled+0x24/0x40
 rwsem_down_write_slowpath+0x61c/0xc88
 down_write+0x3c/0x158
 f2fs_down_write_trace+0x30/0x84
 f2fs_submit_page_write+0x78/0x7e0
 do_write_page+0x180/0x2fc
 f2fs_outplace_write_data+0x78/0x100
 f2fs_do_write_data_page+0x3b8/0x500
 f2fs_write_single_data_page+0x1ac/0x6e0
 f2fs_write_data_pages+0x838/0xdfc
 do_writepages+0xd0/0x19c
 filemap_write_and_wait_range+0x204/0x274
 f2fs_commit_atomic_write+0x54/0x960
 __f2fs_ioctl+0x2128/0x42c8
 f2fs_ioctl+0x38/0xb4
 __arm64_sys_ioctl+0xa0/0xf4

Signed-off-by: Michael Wu <michael@allwinnertech.com>
---
 block/blk-mq.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4c5c16cce4f8f..c290bb12c1ecb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2936,6 +2936,14 @@ static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
 	*rqs = requeue_list;
 	trace_block_unplug(this_hctx->queue, depth, !from_sched);
 
+	/*
+	 * When called from schedule(), prevent preemption and interrupts
+	 * between ref_get and ref_put. This ensures percpu_ref_get() and
+	 * percpu_ref_put() are atomic with respect to context switches,
+	 * avoiding a deadlock with blk_mq_freeze_queue where a blocked
+	 * task holds a percpu_ref reference.
+	 */
+	if (from_sched)
+		local_irq_disable();
 	percpu_ref_get(&this_hctx->queue->q_usage_counter);
 	/* passthrough requests should never be issued to the I/O scheduler */
 	if (is_passthrough) {
@@ -2951,6 +2959,8 @@ static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
 		blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
 	}
 	percpu_ref_put(&this_hctx->queue->q_usage_counter);
+	if (from_sched)
+		local_irq_enable();
 }
 
 static void blk_mq_dispatch_multiple_queue_requests(struct rq_list *rqs)
-- 
2.29.0
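The q_usage_counter / mq_freeze_depth interaction at the heart of the patch can be sketched with plain C11 atomics. This is a hypothetical userspace model (the `mock_*` names are invented for illustration), not the real percpu_ref implementation, which uses per-CPU counters with an atomic slow path.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical userspace model of the queue usage counter and freeze
 * depth (not the kernel's percpu_ref). */
struct mock_queue {
    atomic_int usage;         /* models q_usage_counter */
    atomic_int freeze_depth;  /* models mq_freeze_depth */
};

/* Submitter side: take/drop a usage reference, like percpu_ref_get()
 * and percpu_ref_put() in blk_mq_dispatch_list(). */
void mock_ref_get(struct mock_queue *q) { atomic_fetch_add(&q->usage, 1); }
void mock_ref_put(struct mock_queue *q) { atomic_fetch_sub(&q->usage, 1); }

/* Freezer side (elevator_change analogue): raise freeze_depth, then
 * report whether the freeze could complete, i.e. whether usage has
 * drained to zero.  In the reported deadlock, task 1865 is blocked
 * while still holding one reference, so this stays false forever. */
int mock_freeze_would_complete(struct mock_queue *q)
{
    atomic_fetch_add(&q->freeze_depth, 1);
    return atomic_load(&q->usage) == 0;
}
```

With one outstanding reference the freeze cannot complete, mirroring why task 619 never leaves blk_mq_freeze_queue_wait() while task 1865 holds its reference.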
* Re: [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list
  2026-04-17  8:27 [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list Michael Wu
@ 2026-04-17 15:15 ` Ming Lei
  2026-04-20  6:31 ` Michael Wu
  1 sibling, 0 replies; 5+ messages in thread

From: Ming Lei @ 2026-04-17 15:15 UTC (permalink / raw)
  To: Michael Wu; +Cc: axboe, linux-block, linux-kernel

On Fri, Apr 17, 2026 at 04:27:44PM +0800, Michael Wu wrote:
> [full patch description and stack traces quoted; trimmed]
> @@ -2936,6 +2936,14 @@ static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
> [...]
> +	if (from_sched)
> +		local_irq_disable();
> 	percpu_ref_get(&this_hctx->queue->q_usage_counter);
> [...]
> 	percpu_ref_put(&this_hctx->queue->q_usage_counter);
> +	if (from_sched)
> +		local_irq_enable();
> }

This looks like strange scheduler behavior: the task is scheduled out,
as in io_schedule_prepare(), and never scheduled back in. But the code
block above cannot sleep, so the question is why it never gets a chance
to run again. Can this issue be triggered on an upstream kernel?

If that is really the cause, the fix may not work, because the task can
also be preempted before calling percpu_ref_get(), and the requests in
the plug list hold the queue usage counter too.

BTW, preempt_disable() should be enough. If it is really needed, the
proper call site may be io_schedule_prepare().

Thanks,
Ming
* Re: [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list
  2026-04-17  8:27 [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list Michael Wu
  2026-04-17 15:15 ` Ming Lei
@ 2026-04-20  6:31 ` Michael Wu
  2026-04-20  7:02 ` Ming Lei
  2026-04-23 11:28 ` Michael Wu
  1 sibling, 2 replies; 5+ messages in thread

From: Michael Wu @ 2026-04-20  6:31 UTC (permalink / raw)
  To: Ming Lei; +Cc: linux-block, linux-kernel, axboe

I'd like to add some important information:

The three processes I mentioned, Task 1838 (Back-P10-3), Task 619
(android.hardwar), and Task 1865 (sp-control-1), are all in
uninterruptible sleep. Therefore, once Task 1865 (sp-control-1) is
scheduled out via `preempt_schedule_notrace`, it cannot be scheduled
back in. Task 1865 is in uninterruptible sleep because `down_write` is
waiting for `io_rwsem`.

My analysis of the upstream kernel code has not found a fix for this
issue. The situation should exist there in theory, but I don't have a
platform to test this low-probability behavior. However, it is certain
that it occurs when an I/O scheduler switch races with concurrent F2FS
write operations.

In this situation, `io_schedule_prepare` is not used. The path taken by
Task 1865 is
`schedule -> sched_submit_work -> blk_flush_plug -> blk_mq_dispatch_list`.

As you said, this method is indeed not good, but I don't have a better
idea for handling this deadlock.
On 2026/4/17 16:27, Michael Wu wrote:
> [full original patch quoted; trimmed]

-- 
Regards,
Michael Wu
* Re: [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list
  2026-04-20  6:31 ` Michael Wu
@ 2026-04-20  7:02 ` Ming Lei
  2026-04-23 11:28 ` Michael Wu
  1 sibling, 0 replies; 5+ messages in thread

From: Ming Lei @ 2026-04-20  7:02 UTC (permalink / raw)
  To: Michael Wu; +Cc: linux-block, linux-kernel, axboe

On Mon, Apr 20, 2026 at 02:31:14PM +0800, Michael Wu wrote:
> I'd like to add some important information:
> [...]
> As you said, this method is indeed not good, but I don't have a better
> idea to handle this deadlock situation.

Now I get the idea: because blk_flush_plug() is called on a task that is
already marked as sleeping, the preempted code block never gets to run
again even though it does not sleep anywhere itself.

Can you try the following change?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c165a6e..4217aaaa8e47 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6966,7 +6966,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.
 	 */
+	preempt_disable_notrace();
 	blk_flush_plug(tsk->plug, true);
+	preempt_enable_no_resched_notrace();
 	lock_map_release(&sched_map);
 }

thanks,
Ming
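The invariant this change enforces can be sketched in userspace: the plug flush runs as one non-preemptible unit, so by the time the task is allowed to block, every temporary usage reference it took during dispatch has been dropped. This is a hypothetical model with invented names (`dispatch_one`, `flush_plug_then_sleep`), not scheduler code.

```c
#include <assert.h>
#include <stdatomic.h>

atomic_int usage;          /* models q_usage_counter                  */
atomic_int preempt_count;  /* models the task's preemption counter    */

/* blk_mq_dispatch_list() analogue: the get/put pair brackets request
 * insertion and must complete as a unit. */
void dispatch_one(void)
{
    atomic_fetch_add(&usage, 1);   /* percpu_ref_get()  */
    /* ... insert request into hctx lists ... */
    atomic_fetch_sub(&usage, 1);   /* percpu_ref_put()  */
}

/* sched_submit_work() analogue: flush the plug with preemption
 * "disabled", then check the invariant that matters for the freeze
 * path: no temporary reference may survive into the sleep. */
int flush_plug_then_sleep(int nr_reqs)
{
    atomic_fetch_add(&preempt_count, 1);  /* preempt_disable_notrace() */
    for (int i = 0; i < nr_reqs; i++)
        dispatch_one();
    atomic_fetch_sub(&preempt_count, 1);  /* preempt_enable_no_resched_notrace() */

    /* The task may only block once no temporary reference is held. */
    return atomic_load(&usage) == 0;
}
```

In the buggy scenario, preemption between the get and the put leaves `usage` elevated while the task sleeps; wrapping the whole flush in a preempt-disabled section rules that interleaving out.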
* Re: [PATCH] block: fix deadlock between blk_mq_freeze_queue and blk_mq_dispatch_list
  2026-04-20  6:31 ` Michael Wu
  2026-04-20  7:02 ` Ming Lei
@ 2026-04-23 11:28 ` Michael Wu
  1 sibling, 0 replies; 5+ messages in thread

From: Michael Wu @ 2026-04-23 11:28 UTC (permalink / raw)
  To: Ming Lei; +Cc: linux-block, linux-kernel, axboe

I apologize that, due to some special circumstances, I was unable to
receive your reply via email, so I am replying in this message instead.

I have verified that your patch does indeed resolve the issue described
above. To verify it, I ran scenario reproduction and Monkey stress
testing on my platform. So, should we choose this as the final
solution?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c165a6e..4217aaaa8e47 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6966,7 +6966,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.
 	 */
+	preempt_disable_notrace();
 	blk_flush_plug(tsk->plug, true);
+	preempt_enable_no_resched_notrace();
 	lock_map_release(&sched_map);
 }

On 2026/4/20 14:31, Michael Wu wrote:
> [previous message and full original patch quoted; trimmed]

-- 
Regards,
Michael Wu