Linux block layer
* [PATCH] sched: disable preemption around blk_flush_plug in sched_submit_work
@ 2026-04-23 12:55 Ming Lei
  2026-05-12  5:56 ` Xiaosen
  0 siblings, 1 reply; 3+ messages in thread
From: Ming Lei @ 2026-04-23 12:55 UTC (permalink / raw)
  To: Jens Axboe, linux-block; +Cc: linux-kernel, Ming Lei, Michael Wu

On preemptible kernels, a three-way deadlock can occur involving
blk_mq_freeze_queue and blk_mq_dispatch_list:

- Task A holds a filesystem lock (e.g., f2fs io_rwsem) and enters
  __bio_queue_enter(), waiting for mq_freeze_depth == 0
- Task B holds mq_freeze_depth=1 (elevator_change) and waits for
  q_usage_counter to reach zero in blk_mq_freeze_queue_wait()
- Task C is going to sleep waiting for the filesystem lock. Before
  sleeping, schedule() calls sched_submit_work() -> blk_flush_plug()
  -> blk_mq_dispatch_list(), which acquires q_usage_counter via
  percpu_ref_get(). If Task C gets preempted before percpu_ref_put(),
  it will not be scheduled back because the task is already in
  uninterruptible sleep state (TASK_UNINTERRUPTIBLE). This means it
  holds the percpu_ref indefinitely, preventing freeze from completing.

This is fundamentally an ABBA deadlock between queue freeze and the
filesystem lock, exposed by preemption creating an artificial hold
on q_usage_counter during the plug flush.

Fix by disabling preemption around blk_flush_plug() in
sched_submit_work(). The _notrace variants are used since this runs
in scheduler context. preempt_enable_no_resched_notrace() is correct
because we are already inside __schedule() and about to pick the next
task.

Fixes: 73c101011926 ("block: initial patch for on-stack per-task plugging")
Reported-by: Michael Wu <michael@allwinnertech.com>
Tested-by: Michael Wu <michael@allwinnertech.com>
Link: https://lore.kernel.org/linux-block/20260417082744.30124-1-michael@allwinnertech.com/
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 kernel/sched/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c165a6e..4217aaaa8e47 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6966,7 +6966,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.
 	 */
+	preempt_disable_notrace();
 	blk_flush_plug(tsk->plug, true);
+	preempt_enable_no_resched_notrace();

 	lock_map_release(&sched_map);
 }
--
2.53.0



* Re: [PATCH] sched: disable preemption around blk_flush_plug in sched_submit_work
  2026-04-23 12:55 [PATCH] sched: disable preemption around blk_flush_plug in sched_submit_work Ming Lei
@ 2026-05-12  5:56 ` Xiaosen
  2026-05-12  8:07   ` Ming Lei
  0 siblings, 1 reply; 3+ messages in thread
From: Xiaosen @ 2026-05-12  5:56 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, linux-block; +Cc: linux-kernel, Michael Wu

There is another deadlock caused by preemption while calling
blk_flush_plug() in sched_submit_work():

blk_mq_dispatch_list
  percpu_ref_get(&this_hctx->queue->q_usage_counter)
    percpu_ref_get_many(ref, 1)
      rcu_read_lock()
        __rcu_read_lock()
          rcu_lock_acquire()
            lock_acquire()
              preempt_schedule_irq()  --> writeback worker got preempted
                                          here and was scheduled out in D state

1. Task kworker/u32:6 had dirty pages from an f2fs node inode submitted to
the block layer, and the corresponding request was added to the plug list
of the current task.
2. Task snpe-net-run acquired gc_lock and was waiting for the request
containing a page from the node inode to complete.
3. Task kworker/u32:6 needed to acquire gc_lock to perform foreground GC.
Since gc_lock was already held by task snpe-net-run, it called
blk_flush_plug() in sched_submit_work() before sleeping to avoid
deadlocks, but it got preempted in the RCU critical section before
running the hw queue to issue the plugged requests. So the plugged
requests stayed pending in the local request list, and task kworker/u32:6
was scheduled out waiting to be woken by the release of gc_lock.
4. The result is a deadlock that manifests as an RCU stall.

I think task kworker/u32:6 should not be scheduled out before returning
from blk_flush_plug(), and this patch should fix such deadlocks.

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu:   Tasks blocked on level-0 rcu_node (CPUs 0-7): P18782/1:b..l
rcu:   (detected by 0, t=5255 jiffies, g=4338701, q=1751 ncpus=8)
task:kworker/u32:6   state:D stack:0     pid:18782 tgid:18782 ppid:2
 task_flags:0x24208060 flags:0x00000010
Workqueue: writeback wb_workfn (flush-254:55)
Call trace:
 __switch_to+0x214/0x3fc (T)
 __schedule+0xa70/0x1048
 preempt_schedule_irq+0x70/0xd4
 raw_irqentry_exit_cond_resched+0x2c/0x44
 irqentry_exit+0x38/0x64
 exit_to_kernel_mode+0x28/0x38
 el1_interrupt+0x5c/0xa8
 el1h_64_irq_handler+0x18/0x24
 el1h_64_irq+0x84/0x88
 lock_acquire+0x170/0x29c (P)
 rcu_lock_acquire+0x38/0x44
 blk_mq_dispatch_list+0x190/0x69c
 blk_mq_flush_plug_list+0x13c/0x170
 __blk_flush_plug+0x11c/0x17c
 sched_submit_work+0x7c/0xb8
 schedule+0x38/0xc4
 schedule_preempt_disabled+0x18/0x2c
 rwsem_down_write_slowpath+0x768/0x10c0
 down_write+0x98/0x240
 f2fs_down_write_trace+0x30/0x84
 f2fs_balance_fs+0x130/0x17c
 f2fs_write_single_data_page+0x42c/0x738
 f2fs_write_data_pages+0x8c0/0xe80
 do_writepages+0xd4/0x1a0
 __writeback_single_inode+0x78/0x5bc
 writeback_sb_inodes+0x2b8/0x580
 __writeback_inodes_wb+0xa0/0xf0
 wb_writeback+0x188/0x4bc
 wb_workfn+0x3ec/0x658
 process_one_work+0x284/0x62c
 worker_thread+0x260/0x3b4
 kthread+0x150/0x288
 ret_from_fork+0x10/0x20

Task name: snpe-net-run     [affinity: 0xff] pid:  10012 tgid:  10012
cpu: 2 prio: 120 start: 0xffffff8a1eafc6c0
state: 0x2[D] exit_state: 0x0 stack base: 0xffffffc0a77c0000
Stack:
 __switch_to+0x214
 __schedule+0xa70
 schedule+0x48
 schedule_timeout+0xa0
 io_schedule_timeout+0x48
 f2fs_wait_on_all_pages+0x84
 do_checkpoint+0x804
 f2fs_write_checkpoint+0x820
 f2fs_gc+0x1f0
 f2fs_balance_fs+0x14c
 f2fs_map_blocks+0xd1c
 f2fs_file_write_iter+0x3c0
 vfs_write+0x270
 ksys_write+0x78
 __arm64_sys_write+0x1c
 invoke_syscall+0x58
 el0_svc_common+0xa8
 do_el0_svc+0x1c
 el0_svc+0x40
 el0t_64_sync_handler+0x84
 el0t_64_sync+0x1c4

Regards,
Xiaosen

On 4/23/2026 8:55 PM, Ming Lei wrote:
> On preemptible kernels, a three-way deadlock can occur involving
> blk_mq_freeze_queue and blk_mq_dispatch_list:
> 
> - Task A holds a filesystem lock (e.g., f2fs io_rwsem) and enters
>   __bio_queue_enter(), waiting for mq_freeze_depth == 0
> - Task B holds mq_freeze_depth=1 (elevator_change) and waits for
>   q_usage_counter to reach zero in blk_mq_freeze_queue_wait()
> - Task C is going to sleep waiting for the filesystem lock. Before
>   sleeping, schedule() calls sched_submit_work() -> blk_flush_plug()
>   -> blk_mq_dispatch_list(), which acquires q_usage_counter via
>   percpu_ref_get(). If Task C gets preempted before percpu_ref_put(),
>   it will not be scheduled back because the task is already in
>   uninterruptible sleep state (TASK_UNINTERRUPTIBLE). This means it
>   holds the percpu_ref indefinitely, preventing freeze from completing.
> 
> This is fundamentally an ABBA deadlock between queue freeze and the
> filesystem lock, exposed by preemption creating an artificial hold
> on q_usage_counter during the plug flush.
> 
> Fix by disabling preemption around blk_flush_plug() in
> sched_submit_work(). The _notrace variants are used since this runs
> in scheduler context. preempt_enable_no_resched_notrace() is correct
> because we are already inside __schedule() and about to pick the next
> task.
> 
> Fixes: 73c101011926 ("block: initial patch for on-stack per-task plugging")
> Reported-by: Michael Wu <michael@allwinnertech.com>
> Tested-by: Michael Wu <michael@allwinnertech.com>
> Link: https://lore.kernel.org/linux-block/20260417082744.30124-1-michael@allwinnertech.com/
> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> ---
>  kernel/sched/core.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b7f77c165a6e..4217aaaa8e47 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6966,7 +6966,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
>  	 * If we are going to sleep and we have plugged IO queued,
>  	 * make sure to submit it to avoid deadlocks.
>  	 */
> +	preempt_disable_notrace();
>  	blk_flush_plug(tsk->plug, true);
> +	preempt_enable_no_resched_notrace();
> 
>  	lock_map_release(&sched_map);
>  }
> --
> 2.53.0
> 
> 



* Re: [PATCH] sched: disable preemption around blk_flush_plug in sched_submit_work
  2026-05-12  5:56 ` Xiaosen
@ 2026-05-12  8:07   ` Ming Lei
  0 siblings, 0 replies; 3+ messages in thread
From: Ming Lei @ 2026-05-12  8:07 UTC (permalink / raw)
  To: Xiaosen; +Cc: Jens Axboe, linux-block, linux-kernel, Michael Wu

On Tue, May 12, 2026 at 01:56:55PM +0800, Xiaosen wrote:
> There is another deadlock caused by preemption while calling
> blk_flush_plug() in sched_submit_work():
> 
> blk_mq_dispatch_list
>   percpu_ref_get(&this_hctx->queue->q_usage_counter)
>     percpu_ref_get_many(ref, 1)
>       rcu_read_lock()
>         __rcu_read_lock()
>           rcu_lock_acquire()
>             lock_acquire()
>               preempt_schedule_irq()  --> writeback worker got preempted
>                                           here and was scheduled out in D state
> 
> 1. Task kworker/u32:6 had dirty pages from an f2fs node inode submitted to
> the block layer, and the corresponding request was added to the plug list
> of the current task.
> 2. Task snpe-net-run acquired gc_lock and was waiting for the request
> containing a page from the node inode to complete.
> 3. Task kworker/u32:6 needed to acquire gc_lock to perform foreground GC.
> Since gc_lock was already held by task snpe-net-run, it called
> blk_flush_plug() in sched_submit_work() before sleeping to avoid
> deadlocks, but it got preempted in the RCU critical section before
> running the hw queue to issue the plugged requests. So the plugged
> requests stayed pending in the local request list, and task kworker/u32:6
> was scheduled out waiting to be woken by the release of gc_lock.
> 4. The result is a deadlock that manifests as an RCU stall.
> 
> I think task kworker/u32:6 should not be scheduled out before returning
> from blk_flush_plug(), and this patch should fix such deadlocks.

This patch isn't enough, because the current task could be preempted
anytime before calling into blk_flush_plug(), even before schedule().

One quick way is to fix schedule_preempt_disabled(); I will prepare a
formal patch later.

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8871449d3c6..18ef6ed71b4f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7336,6 +7336,7 @@ asmlinkage __visible void __sched schedule_user(void)
  */
 void __sched schedule_preempt_disabled(void)
 {
+       blk_flush_plug(current->plug, true);
        sched_preempt_enable_no_resched();
        schedule();
        preempt_disable();


Thanks,
Ming

