* [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
@ 2026-05-12 8:59 Ming Lei
2026-05-12 12:04 ` Peter Zijlstra
0 siblings, 1 reply; 7+ messages in thread
From: Ming Lei @ 2026-05-12 8:59 UTC (permalink / raw)
To: Jens Axboe, linux-block, linux-kernel
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Ming Lei, Michael Wu, Xiaosen He
On preemptible kernels, a deadlock can occur when a task with plugged IO
calls schedule_preempt_disabled():
  schedule_preempt_disabled()
    sched_preempt_enable_no_resched()   // preemption now enabled
    schedule()                          // <-- preemption can happen here
      sched_submit_work()
        blk_flush_plug()
After sched_preempt_enable_no_resched() re-enables preemption, the task
can be preempted (e.g., by a higher-priority RT task) before reaching
blk_flush_plug() in sched_submit_work(). Since the task's state is
already TASK_UNINTERRUPTIBLE (set by the mutex/rwsem slowpath caller),
requests in current->plug remain unflushed for an unbounded time.
If another task depends on those plugged requests to make progress (e.g.,
to release a lock the sleeping task needs), a deadlock results:
- Task A (writeback worker): holds plugged IO, preempted before
flushing, stuck on run queue behind higher-priority work
- Task B: waiting for IO completion from Task A's plug, holds a lock
that Task A needs to be woken up
Both reported deadlocks involve mutex/rwsem slowpaths, which are the
primary callers of schedule_preempt_disabled() with non-running task
state.
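The triggering pattern can be sketched roughly as follows; this is a simplified illustration of the mutex/rwsem slowpath shape, not the actual kernel code:

```c
/*
 * Simplified sketch of the slowpath pattern that opens the window
 * (illustrative only -- locking details are abbreviated).
 */
set_current_state(TASK_UNINTERRUPTIBLE);	/* task no longer "running" */
raw_spin_unlock(&lock->wait_lock);
schedule_preempt_disabled();
	/* which expands to:
	 *   sched_preempt_enable_no_resched();  <-- preemption window opens;
	 *       an RT task may preempt here while current->plug is unflushed
	 *   schedule();                         <-- only here does
	 *       sched_submit_work() flush the plug
	 *   preempt_disable();
	 */
```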
Fix by flushing the plug in schedule_preempt_disabled() while
preemption is still disabled. This ensures the plug is empty before the
preemption window opens.
Fixes: 73c101011926 ("block: initial patch for on-stack per-task plugging")
Reported-by: Michael Wu <michael@allwinnertech.com>
Tested-by: Michael Wu <michael@allwinnertech.com>
Reported-by: Xiaosen He <xiaosen.he@oss.qualcomm.com>
Link: https://lore.kernel.org/linux-block/20260417082744.30124-1-michael@allwinnertech.com/
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
kernel/sched/core.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8871449d3c6..c1efe110c54d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7336,6 +7336,8 @@ asmlinkage __visible void __sched schedule_user(void)
  */
 void __sched schedule_preempt_disabled(void)
 {
+	if (!task_is_running(current))
+		blk_flush_plug(current->plug, true);
 	sched_preempt_enable_no_resched();
 	schedule();
 	preempt_disable();
--
2.53.0
* Re: [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
2026-05-12 8:59 [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock Ming Lei
@ 2026-05-12 12:04 ` Peter Zijlstra
2026-05-12 12:40 ` Peter Zijlstra
0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2026-05-12 12:04 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, linux-block, linux-kernel, Ingo Molnar, Juri Lelli,
Vincent Guittot, Michael Wu, Xiaosen He
On Tue, May 12, 2026 at 04:59:39PM +0800, Ming Lei wrote:
> On preemptible kernels, a deadlock can occur when a task with plugged IO
> calls schedule_preempt_disabled():
>
>   schedule_preempt_disabled()
>     sched_preempt_enable_no_resched()   // preemption now enabled
>     schedule()                          // <-- preemption can happen here
>       sched_submit_work()
>         blk_flush_plug()
>
> After sched_preempt_enable_no_resched() re-enables preemption, the task
> can be preempted (e.g., by a higher-priority RT task) before reaching
> blk_flush_plug() in sched_submit_work(). Since the task's state is
> already TASK_UNINTERRUPTIBLE (set by the mutex/rwsem slowpath caller),
> requests in current->plug remain unflushed for an unbounded time.
>
> If another task depends on those plugged requests to make progress (e.g.,
> to release a lock the sleeping task needs), a deadlock results:
>
> - Task A (writeback worker): holds plugged IO, preempted before
> flushing, stuck on run queue behind higher-priority work
> - Task B: waiting for IO completion from Task A's plug, holds a lock
> that Task A needs to be woken up
>
> Both reported deadlocks involve mutex/rwsem slowpaths, which are the
> primary callers of schedule_preempt_disabled() with non-running task
> state.
>
> Fix by flushing the plug in schedule_preempt_disabled() while
> preemption is still disabled. This ensures the plug is empty before the
> preemption window opens.
How is this different from any path calling schedule()? That would be
subject to exactly the same issue.
The patch cannot be correct.
* Re: [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
2026-05-12 12:04 ` Peter Zijlstra
@ 2026-05-12 12:40 ` Peter Zijlstra
2026-05-12 15:45 ` Ming Lei
0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2026-05-12 12:40 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, linux-block, linux-kernel, Ingo Molnar, Juri Lelli,
Vincent Guittot, Michael Wu, Xiaosen He
On Tue, May 12, 2026 at 02:04:32PM +0200, Peter Zijlstra wrote:
> On Tue, May 12, 2026 at 04:59:39PM +0800, Ming Lei wrote:
> > On preemptible kernels, a deadlock can occur when a task with plugged IO
> > calls schedule_preempt_disabled():
> >
> >   schedule_preempt_disabled()
> >     sched_preempt_enable_no_resched()   // preemption now enabled
> >     schedule()                          // <-- preemption can happen here
> >       sched_submit_work()
> >         blk_flush_plug()
> >
> > After sched_preempt_enable_no_resched() re-enables preemption, the task
> > can be preempted (e.g., by a higher-priority RT task) before reaching
> > blk_flush_plug() in sched_submit_work(). Since the task's state is
> > already TASK_UNINTERRUPTIBLE (set by the mutex/rwsem slowpath caller),
> > requests in current->plug remain unflushed for an unbounded time.
> >
> > If another task depends on those plugged requests to make progress (e.g.,
> > to release a lock the sleeping task needs), a deadlock results:
> >
> > - Task A (writeback worker): holds plugged IO, preempted before
> > flushing, stuck on run queue behind higher-priority work
> > - Task B: waiting for IO completion from Task A's plug, holds a lock
> > that Task A needs to be woken up
> >
> > Both reported deadlocks involve mutex/rwsem slowpaths, which are the
> > primary callers of schedule_preempt_disabled() with non-running task
> > state.
> >
> > Fix by flushing the plug in schedule_preempt_disabled() while
> > preemption is still disabled. This ensures the plug is empty before the
> > preemption window opens.
>
> How is this different from any path calling schedule()? That would be
> subject to exactly the same issue.
>
> The patch cannot be correct.
Also, is there a reason io_schedule_prepare() has a blk_flush_plug()
call?
  io_schedule()
    token = io_schedule_prepare()
      blk_flush_plug(current->plug, true);
    schedule()
      if (!task_is_running(tsk))
        sched_submit_work()
          blk_flush_plug(tsk->plug, true);
Why isn't the one in sched_submit_work() sufficient? This thing either
needs a comment justifying its existence, or it should get removed.
* Re: [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
2026-05-12 12:40 ` Peter Zijlstra
@ 2026-05-12 15:45 ` Ming Lei
2026-05-12 16:49 ` Peter Zijlstra
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Ming Lei @ 2026-05-12 15:45 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Jens Axboe, linux-block, linux-kernel, Ingo Molnar, Juri Lelli,
Vincent Guittot, Michael Wu, Xiaosen He, Tejun Heo,
Thomas Gleixner
On Tue, May 12, 2026 at 02:40:21PM +0200, Peter Zijlstra wrote:
> On Tue, May 12, 2026 at 02:04:32PM +0200, Peter Zijlstra wrote:
> > On Tue, May 12, 2026 at 04:59:39PM +0800, Ming Lei wrote:
> > > On preemptible kernels, a deadlock can occur when a task with plugged IO
> > > calls schedule_preempt_disabled():
> > >
> > >   schedule_preempt_disabled()
> > >     sched_preempt_enable_no_resched()   // preemption now enabled
> > >     schedule()                          // <-- preemption can happen here
> > >       sched_submit_work()
> > >         blk_flush_plug()
> > >
> > > After sched_preempt_enable_no_resched() re-enables preemption, the task
> > > can be preempted (e.g., by a higher-priority RT task) before reaching
> > > blk_flush_plug() in sched_submit_work(). Since the task's state is
> > > already TASK_UNINTERRUPTIBLE (set by the mutex/rwsem slowpath caller),
> > > requests in current->plug remain unflushed for an unbounded time.
> > >
> > > If another task depends on those plugged requests to make progress (e.g.,
> > > to release a lock the sleeping task needs), a deadlock results:
> > >
> > > - Task A (writeback worker): holds plugged IO, preempted before
> > > flushing, stuck on run queue behind higher-priority work
> > > - Task B: waiting for IO completion from Task A's plug, holds a lock
> > > that Task A needs to be woken up
> > >
> > > Both reported deadlocks involve mutex/rwsem slowpaths, which are the
> > > primary callers of schedule_preempt_disabled() with non-running task
> > > state.
> > >
> > > Fix by flushing the plug in schedule_preempt_disabled() while
> > > preemption is still disabled. This ensures the plug is empty before the
> > > preemption window opens.
> >
> > How is this different from any path calling schedule()? That would be
> > subject to exactly the same issue.
> >
> > The patch cannot be correct.
>
> Also, is there a reason io_schedule_prepare() has a blk_flush_plug()
> call?
It was added by Tejun's "[PATCHSET RFC] sched, jbd2: mark sleeps on journal->j_checkpoint_mutex as iowait":

https://lore.kernel.org/all/1477673892-28940-1-git-send-email-tj@kernel.org/#t

which fixed iowait accounting for ext4 and, at the same time, introduced the
"io_schedule_prepare() + schedule() + io_schedule_finish()" model. That model
can actually avoid this kind of issue easily, because io_schedule_prepare()
is called while the task is still in the running state.

For this f2fs issue, maybe it can be addressed by adding an io variant for
rwsem, just like mutex_lock_io(); that would cover iowait accounting too.
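For reference, the model that series added looks roughly like this (a
simplified sketch of kernel/sched/core.c; exact details may differ across
kernel versions):

```c
/* Simplified sketch of the io_schedule_prepare()/io_schedule_finish() model. */
int io_schedule_prepare(void)
{
	int old_iowait = current->in_iowait;

	current->in_iowait = 1;
	/*
	 * The plug is flushed here while the task is still TASK_RUNNING,
	 * so a preemption before the subsequent schedule() cannot strand
	 * plugged requests behind a non-running task state.
	 */
	blk_flush_plug(current->plug, true);
	return old_iowait;
}

void io_schedule_finish(int token)
{
	current->in_iowait = token;
}
```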
>   io_schedule()
>     token = io_schedule_prepare()
>       blk_flush_plug(current->plug, true);
>     schedule()
>       if (!task_is_running(tsk))
>         sched_submit_work()
>           blk_flush_plug(tsk->plug, true);
>
> Why isn't the one in sched_submit_work() sufficient? This thing either
> needs a comment justifying its existence, or get removed.
This flush was originally added in commit 73c101011926 ("block: initial patch
for on-stack per-task plugging") and commit a237c1c5bc5d ("block: let
io_schedule() flush the plug inline") by Jens, back when this preempt issue
didn't exist.

But it was later moved out to sched_submit_work() in commit 9c40cef2b799
("sched: Move blk_schedule_flush_plug() out of __schedule()") by Thomas
Gleixner, which is when this issue started to become likely.

If io_schedule_prepare() can be called in every iowait context, it looks like
the blk_flush_plug() in sched_submit_work() could be removed.
Thanks,
Ming
* Re: [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
2026-05-12 15:45 ` Ming Lei
@ 2026-05-12 16:49 ` Peter Zijlstra
2026-05-12 16:53 ` Peter Zijlstra
2026-05-12 17:16 ` Tejun Heo
2 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2026-05-12 16:49 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, linux-block, linux-kernel, Ingo Molnar, Juri Lelli,
Vincent Guittot, Michael Wu, Xiaosen He, Tejun Heo,
Thomas Gleixner
On Tue, May 12, 2026 at 11:45:14PM +0800, Ming Lei wrote:
> >   io_schedule()
> >     token = io_schedule_prepare()
> >       blk_flush_plug(current->plug, true);
> >     schedule()
> >       if (!task_is_running(tsk))
> >         sched_submit_work()
> >           blk_flush_plug(tsk->plug, true);
> >
> > Why isn't the one in sched_submit_work() sufficient? This thing either
> > needs a comment justifying its existence, or get removed.
> If io_schedule_prepare() can be called in every iowait context, looks
> blk_flush_plug() from sched_submit_work() may be removed.
No, the other way around. I don't see the point of having the one in
io_schedule_prepare(), since we'll hit the one in sched_submit_work().
* Re: [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
2026-05-12 15:45 ` Ming Lei
2026-05-12 16:49 ` Peter Zijlstra
@ 2026-05-12 16:53 ` Peter Zijlstra
2026-05-12 17:16 ` Tejun Heo
2 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2026-05-12 16:53 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, linux-block, linux-kernel, Ingo Molnar, Juri Lelli,
Vincent Guittot, Michael Wu, Xiaosen He, Tejun Heo,
Thomas Gleixner
On Tue, May 12, 2026 at 11:45:14PM +0800, Ming Lei wrote:
> On Tue, May 12, 2026 at 02:40:21PM +0200, Peter Zijlstra wrote:
> > On Tue, May 12, 2026 at 02:04:32PM +0200, Peter Zijlstra wrote:
> > > On Tue, May 12, 2026 at 04:59:39PM +0800, Ming Lei wrote:
> > > > On preemptible kernels, a deadlock can occur when a task with plugged IO
> > > > calls schedule_preempt_disabled():
> > > >
> > > >   schedule_preempt_disabled()
> > > >     sched_preempt_enable_no_resched()   // preemption now enabled
> > > >     schedule()                          // <-- preemption can happen here
> > > >       sched_submit_work()
> > > >         blk_flush_plug()
> > > >
> > > > After sched_preempt_enable_no_resched() re-enables preemption, the task
> > > > can be preempted (e.g., by a higher-priority RT task) before reaching
> > > > blk_flush_plug() in sched_submit_work(). Since the task's state is
> > > > already TASK_UNINTERRUPTIBLE (set by the mutex/rwsem slowpath caller),
> > > > requests in current->plug remain unflushed for an unbounded time.
> > > >
> > > > If another task depends on those plugged requests to make progress (e.g.,
> > > > to release a lock the sleeping task needs), a deadlock results:
> > > >
> > > > - Task A (writeback worker): holds plugged IO, preempted before
> > > > flushing, stuck on run queue behind higher-priority work
> > > > - Task B: waiting for IO completion from Task A's plug, holds a lock
> > > > that Task A needs to be woken up
> > > >
> > > > Both reported deadlocks involve mutex/rwsem slowpaths, which are the
> > > > primary callers of schedule_preempt_disabled() with non-running task
> > > > state.
> > > >
> > > > Fix by flushing the plug in schedule_preempt_disabled() while
> > > > preemption is still disabled. This ensures the plug is empty before the
> > > > preemption window opens.
> > >
> > > How is this different from any path calling schedule()? That would be
> > > subject to exactly the same issue.
> > >
> > > The patch cannot be correct.
> >
> > Also, is there a reason io_schedule_prepare() has a blk_flush_plug()
> > call?
>
> It is added in Tejun's "[PATCHSET RFC] sched, jbd2: mark sleeps on journal->j_checkpoint_mutex as iowait":
>
> https://lore.kernel.org/all/1477673892-28940-1-git-send-email-tj@kernel.org/#t
>
> which fixes iowait accounting for ext4, meantime adds the model
> "io_schedule_prepare() + schedule() + io_schedule_finish()", which actually
> can avoid this kind issue easily because io_schedule_prepare() is called
> in task running state.
>
> For this f2fs issue, maybe it can be addressed by adding rwsem io variant
> just like mutex_lock_io(), meantime iowait accounting is covered too.
So personally I detest all of iowait, it's an abomination. And I don't
see how having an iowait-specific version avoids any problem.
You can get preempted at any point between getting the IO started and
blocking.
* Re: [PATCH] sched: flush plug in schedule_preempt_disabled() to prevent deadlock
2026-05-12 15:45 ` Ming Lei
2026-05-12 16:49 ` Peter Zijlstra
2026-05-12 16:53 ` Peter Zijlstra
@ 2026-05-12 17:16 ` Tejun Heo
2 siblings, 0 replies; 7+ messages in thread
From: Tejun Heo @ 2026-05-12 17:16 UTC (permalink / raw)
To: Ming Lei
Cc: Peter Zijlstra, Jens Axboe, linux-block, linux-kernel,
Ingo Molnar, Juri Lelli, Vincent Guittot, Michael Wu, Xiaosen He,
Thomas Gleixner
Hello, Ming.
On Tue, May 12, 2026 at 11:45:14PM +0800, Ming Lei wrote:
> On Tue, May 12, 2026 at 02:40:21PM +0200, Peter Zijlstra wrote:
> > On Tue, May 12, 2026 at 02:04:32PM +0200, Peter Zijlstra wrote:
> > > On Tue, May 12, 2026 at 04:59:39PM +0800, Ming Lei wrote:
> > > > On preemptible kernels, a deadlock can occur when a task with plugged IO
> > > > calls schedule_preempt_disabled():
> > > >
> > > >   schedule_preempt_disabled()
> > > >     sched_preempt_enable_no_resched()   // preemption now enabled
> > > >     schedule()                          // <-- preemption can happen here
> > > >       sched_submit_work()
> > > >         blk_flush_plug()
> > > >
> > > > After sched_preempt_enable_no_resched() re-enables preemption, the task
> > > > can be preempted (e.g., by a higher-priority RT task) before reaching
> > > > blk_flush_plug() in sched_submit_work(). Since the task's state is
> > > > already TASK_UNINTERRUPTIBLE (set by the mutex/rwsem slowpath caller),
> > > > requests in current->plug remain unflushed for an unbounded time.
> > > >
> > > > If another task depends on those plugged requests to make progress (e.g.,
> > > > to release a lock the sleeping task needs), a deadlock results:
> > > >
> > > > - Task A (writeback worker): holds plugged IO, preempted before
> > > > flushing, stuck on run queue behind higher-priority work
> > > > - Task B: waiting for IO completion from Task A's plug, holds a lock
> > > > that Task A needs to be woken up
My memory is hazy around io_schedule, but the above reads really weird to me.
A task, regardless of its current state, stays on the runqueue when
preempted, so the condition is temporary. As soon as the preempted task can
get the CPU, it should unwind the situation. That's not a deadlock. Is the
problem that there can be a preemption-induced delay in flushing the plugs?
Thanks.
--
tejun