Linux block layer
From: Xiaosen <xiaosen.he@oss.qualcomm.com>
To: Ming Lei <tom.leiming@gmail.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Michael Wu <michael@allwinnertech.com>
Subject: Re: [PATCH] sched: disable preemption around blk_flush_plug in sched_submit_work
Date: Tue, 12 May 2026 13:56:55 +0800	[thread overview]
Message-ID: <5660795d-87de-46f5-add4-7729a02225ef@oss.qualcomm.com> (raw)
In-Reply-To: <20260423125528.2917171-1-tom.leiming@gmail.com>

There is another deadlock caused by preemption while calling
blk_flush_plug() in sched_submit_work():

blk_mq_dispatch_list
  percpu_ref_get(&this_hctx->queue->q_usage_counter)
    percpu_ref_get_many(ref, 1)
      rcu_read_lock()
        __rcu_read_lock()
        rcu_lock_acquire()
          lock_acquire()
            preempt_schedule_irq  --> the writeback worker is preempted
                                      here and scheduled out in D state
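
For reference, percpu_ref_get_many() looks roughly like the sketch
below. This is a simplified reconstruction based on
include/linux/percpu-refcount.h as I recall it; exact details vary
across kernel versions. The preemption point in the trace above is the
rcu_read_lock() at the top, reached via lock_acquire() on a
lockdep-enabled kernel:

	/*
	 * Simplified sketch, not verbatim mainline source. On a
	 * preemptible kernel, an interrupt returning to this code can
	 * call preempt_schedule_irq() inside the RCU read-side
	 * critical section, before the caller has issued the plugged
	 * requests.
	 */
	static inline void percpu_ref_get_many(struct percpu_ref *ref,
					       unsigned long nr)
	{
		unsigned long __percpu *percpu_count;

		rcu_read_lock();	/* <-- preempted here per the trace */

		if (__ref_is_percpu(ref, &percpu_count))
			this_cpu_add(*percpu_count, nr);
		else
			atomic_long_add(nr, &ref->data->count);

		rcu_read_unlock();
	}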

1. Task kworker/u32:6 had dirty pages from an f2fs node inode submitted
to the block layer, and the corresponding request was added to the plug
list of the current task.
2. Task snpe-net-run acquired gc_lock and waited for the request
containing the page from the node inode to complete.
3. Task kworker/u32:6 needed to acquire gc_lock to perform foreground
GC. Since gc_lock was already held by task snpe-net-run, it called
blk_flush_plug() in sched_submit_work() before sleeping to avoid
deadlocks, but it was preempted in an RCU critical section before
running the hw queue to issue the plugged requests. The plugged
requests therefore stayed pending in the local request list, and task
kworker/u32:6 was scheduled out, waiting to be woken by the release of
gc_lock.
4. The result is the deadlock sketched below, observed as an RCU stall.
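
The resulting wait cycle, as I read it from the two stacks below:

	kworker/u32:6 (writeback)           snpe-net-run
	-------------------------           ------------
	plugs request R (f2fs node page)    acquires gc_lock
	needs gc_lock for foreground GC     waits for R to complete
	  -> schedule()
	  -> sched_submit_work()
	  -> blk_flush_plug()
	  -> preempted before issuing R,
	     scheduled out in D state

	R is never issued, so gc_lock is never released and
	kworker/u32:6 is never woken.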

I think task kworker/u32:6 should not be scheduled out before returning
from blk_flush_plug(), and this patch should fix such deadlocks as
well.

rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu:   Tasks blocked on level-0 rcu_node (CPUs 0-7): P18782/1:b..l
rcu:   (detected by 0, t=5255 jiffies, g=4338701, q=1751 ncpus=8)
task:kworker/u32:6   state:D stack:0     pid:18782 tgid:18782 ppid:2 task_flags:0x24208060 flags:0x00000010
Workqueue: writeback wb_workfn (flush-254:55)
Call trace:
 __switch_to+0x214/0x3fc (T)
 __schedule+0xa70/0x1048
 preempt_schedule_irq+0x70/0xd4
 raw_irqentry_exit_cond_resched+0x2c/0x44
 irqentry_exit+0x38/0x64
 exit_to_kernel_mode+0x28/0x38
 el1_interrupt+0x5c/0xa8
 el1h_64_irq_handler+0x18/0x24
 el1h_64_irq+0x84/0x88
 lock_acquire+0x170/0x29c (P)
 rcu_lock_acquire+0x38/0x44
 blk_mq_dispatch_list+0x190/0x69c
 blk_mq_flush_plug_list+0x13c/0x170
 __blk_flush_plug+0x11c/0x17c
 sched_submit_work+0x7c/0xb8
 schedule+0x38/0xc4
 schedule_preempt_disabled+0x18/0x2c
 rwsem_down_write_slowpath+0x768/0x10c0
 down_write+0x98/0x240
 f2fs_down_write_trace+0x30/0x84
 f2fs_balance_fs+0x130/0x17c
 f2fs_write_single_data_page+0x42c/0x738
 f2fs_write_data_pages+0x8c0/0xe80
 do_writepages+0xd4/0x1a0
 __writeback_single_inode+0x78/0x5bc
 writeback_sb_inodes+0x2b8/0x580
 __writeback_inodes_wb+0xa0/0xf0
 wb_writeback+0x188/0x4bc
 wb_workfn+0x3ec/0x658
 process_one_work+0x284/0x62c
 worker_thread+0x260/0x3b4
 kthread+0x150/0x288
 ret_from_fork+0x10/0x20

Task name: snpe-net-run     [affinity: 0xff] pid:  10012 tgid:  10012
cpu: 2 prio: 120 start: 0xffffff8a1eafc6c0
state: 0x2[D] exit_state: 0x0 stack base: 0xffffffc0a77c0000
Stack:
 __switch_to+0x214
 __schedule+0xa70
 schedule+0x48
 schedule_timeout+0xa0
 io_schedule_timeout+0x48
 f2fs_wait_on_all_pages+0x84
 do_checkpoint+0x804
 f2fs_write_checkpoint+0x820
 f2fs_gc+0x1f0
 f2fs_balance_fs+0x14c
 f2fs_map_blocks+0xd1c
 f2fs_file_write_iter+0x3c0
 vfs_write+0x270
 ksys_write+0x78
 __arm64_sys_write+0x1c
 invoke_syscall+0x58
 el0_svc_common+0xa8
 do_el0_svc+0x1c
 el0_svc+0x40
 el0t_64_sync_handler+0x84
 el0t_64_sync+0x1c4

Regards,
Xiaosen

On 4/23/2026 8:55 PM, Ming Lei wrote:
> On preemptible kernels, a three-way deadlock can occur involving
> blk_mq_freeze_queue and blk_mq_dispatch_list:
> 
> - Task A holds a filesystem lock (e.g., f2fs io_rwsem) and enters
>   __bio_queue_enter(), waiting for mq_freeze_depth == 0
> - Task B holds mq_freeze_depth=1 (elevator_change) and waits for
>   q_usage_counter to reach zero in blk_mq_freeze_queue_wait()
> - Task C is going to sleep waiting for the filesystem lock. Before
>   sleeping, schedule() calls sched_submit_work() -> blk_flush_plug()
>   -> blk_mq_dispatch_list(), which acquires q_usage_counter via
>   percpu_ref_get(). If Task C gets preempted before percpu_ref_put(),
>   it will not be scheduled back because the task is already in
>   uninterruptible sleep state (TASK_UNINTERRUPTIBLE). This means it
>   holds the percpu_ref indefinitely, preventing freeze from completing.
> 
> This is fundamentally an ABBA deadlock between queue freeze and the
> filesystem lock, exposed by preemption creating an artificial hold
> on q_usage_counter during the plug flush.
> 
> Fix by disabling preemption around blk_flush_plug() in
> sched_submit_work(). The _notrace variants are used since this runs
> in scheduler context. preempt_enable_no_resched_notrace() is correct
> because we are already inside __schedule() and about to pick the next
> task.
> 
> Fixes: 73c101011926 ("block: initial patch for on-stack per-task plugging")
> Reported-by: Michael Wu <michael@allwinnertech.com>
> Tested-by: Michael Wu <michael@allwinnertech.com>
> Link: https://lore.kernel.org/linux-block/20260417082744.30124-1-michael@allwinnertech.com/
> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> ---
>  kernel/sched/core.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b7f77c165a6e..4217aaaa8e47 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6966,7 +6966,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
>  	 * If we are going to sleep and we have plugged IO queued,
>  	 * make sure to submit it to avoid deadlocks.
>  	 */
> +	preempt_disable_notrace();
>  	blk_flush_plug(tsk->plug, true);
> +	preempt_enable_no_resched_notrace();
> 
>  	lock_map_release(&sched_map);
>  }
> --
> 2.53.0
> 
> 

