linux-block.vger.kernel.org archive mirror
* [PATCH v2 0/2] block/blk-mq: fix RT kernel issues and interrupt context warnings
@ 2025-12-22 20:15 Ionut Nechita (WindRiver)
  2025-12-22 20:15 ` [PATCH v2 1/2] block/blk-mq: fix RT kernel regression with queue_lock in hot path Ionut Nechita (WindRiver)
  2025-12-22 20:15 ` [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context Ionut Nechita (WindRiver)
  0 siblings, 2 replies; 6+ messages in thread
From: Ionut Nechita (WindRiver) @ 2025-12-22 20:15 UTC (permalink / raw)
  To: ming.lei
  Cc: axboe, gregkh, ionut.nechita, linux-block, linux-kernel,
	muchun.song, sashal, stable

From: Ionut Nechita <ionut.nechita@windriver.com>

This series addresses two critical issues in the block layer multiqueue
(blk-mq) subsystem when running on PREEMPT_RT kernels.

The first patch fixes a severe performance regression where queue_lock
contention in the I/O hot path causes IRQ threads to sleep on RT kernels.
Testing on a MegaRAID 12GSAS controller showed a 76% performance drop
(640 MB/s -> 153 MB/s). The fix replaces the spinlock with memory barriers
to maintain ordering without sleeping.

The second patch fixes a WARN_ON that triggers during SCSI device scanning
when blk_freeze_queue_start() calls blk_mq_run_hw_queues() synchronously
from interrupt context. The warning "WARN_ON_ONCE(!async && in_interrupt())"
is resolved by switching to asynchronous execution.

Changes in v2:
- Removed the blk_mq_cpuhp_lock patch (needs more investigation)
- Added fix for WARN_ON in interrupt context during queue freezing
- Updated commit messages for clarity

Ionut Nechita (2):
  block/blk-mq: fix RT kernel regression with queue_lock in hot path
  block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt
    context

 block/blk-mq.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

-- 
2.52.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH v2 1/2] block/blk-mq: fix RT kernel regression with queue_lock in hot path
  2025-12-22 20:15 [PATCH v2 0/2] block/blk-mq: fix RT kernel issues and interrupt context warnings Ionut Nechita (WindRiver)
@ 2025-12-22 20:15 ` Ionut Nechita (WindRiver)
  2025-12-23  2:15   ` Muchun Song
  2025-12-22 20:15 ` [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context Ionut Nechita (WindRiver)
  1 sibling, 1 reply; 6+ messages in thread
From: Ionut Nechita (WindRiver) @ 2025-12-22 20:15 UTC (permalink / raw)
  To: ming.lei
  Cc: axboe, gregkh, ionut.nechita, linux-block, linux-kernel,
	muchun.song, sashal, stable

From: Ionut Nechita <ionut.nechita@windriver.com>

Commit 679b1874eba7 ("block: fix ordering between checking
QUEUE_FLAG_QUIESCED request adding") introduced queue_lock acquisition
in blk_mq_run_hw_queue() to synchronize QUEUE_FLAG_QUIESCED checks.

On RT kernels (CONFIG_PREEMPT_RT), regular spinlocks are converted to
rt_mutex (sleeping locks). When multiple MSI-X IRQ threads process I/O
completions concurrently, they contend on queue_lock in the hot path,
causing all IRQ threads to enter D (uninterruptible sleep) state. This
serializes interrupt processing completely.
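
A simplified illustration of why this happens (the struct layout below is
a sketch of the PREEMPT_RT definition, reduced to the relevant field):

	/* With CONFIG_PREEMPT_RT, spinlock_t is a sleeping lock: */
	typedef struct spinlock {
		struct rt_mutex_base	lock;	/* rt_mutex, may sleep under contention */
	} spinlock_t;

	/* So spin_lock_irqsave(&hctx->queue->queue_lock, flags) in the
	 * completion path can block the calling IRQ thread instead of
	 * spinning. */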

Test case (MegaRAID 12GSAS with 8 MSI-X vectors on RT kernel):
- Good (v6.6.52-rt):  640 MB/s sequential read
- Bad  (v6.6.64-rt):  153 MB/s sequential read (-76% regression)
- 6-8 out of 8 MSI-X IRQ threads stuck in D-state waiting on queue_lock

The original commit message mentioned memory barriers as an alternative
approach. Use full memory barriers (smp_mb()) instead of queue_lock to
provide the same ordering guarantees without sleeping on RT kernels.

Memory barriers ensure proper synchronization:
- CPU0 either sees QUEUE_FLAG_QUIESCED cleared, OR
- CPU1 sees dispatch list/sw queue bitmap updates

This maintains correctness while avoiding lock contention that causes
RT kernel IRQ threads to sleep in the I/O completion path.
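
For illustration, the guarantee relied on here is the classic
store-buffering pattern. A minimal sketch follows; the variable names and
the explicit barrier shown on the unquiesce side are simplifications for
illustration, not the exact upstream code:

	/* CPU0: blk_mq_unquiesce_queue(), simplified */
	WRITE_ONCE(quiesced, 0);	/* clear QUEUE_FLAG_QUIESCED */
	smp_mb();			/* full barrier (or equivalent ordering) */
	if (READ_ONCE(dispatch))	/* pending requests visible? */
		run_the_hw_queue();

	/* CPU1: request insert + blk_mq_run_hw_queue() with this patch */
	WRITE_ONCE(dispatch, 1);	/* request added to dispatch list/bitmap */
	smp_mb();			/* barrier added by this patch */
	if (!READ_ONCE(quiesced))	/* blk_mq_hw_queue_need_run() re-check */
		run_the_hw_queue();

With a full barrier between the store and the load on both sides, CPU0
cannot miss the new request while CPU1 simultaneously still sees the queue
as quiesced, so at least one side runs the hw queue.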

Fixes: 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding")
Cc: stable@vger.kernel.org
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
---
 block/blk-mq.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5da948b07058..5fb8da4958d0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2292,22 +2292,19 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 
 	might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
 
+	/*
+	 * First lockless check to avoid unnecessary overhead.
+	 * Memory barrier below synchronizes with blk_mq_unquiesce_queue().
+	 */
 	need_run = blk_mq_hw_queue_need_run(hctx);
 	if (!need_run) {
-		unsigned long flags;
-
-		/*
-		 * Synchronize with blk_mq_unquiesce_queue(), because we check
-		 * if hw queue is quiesced locklessly above, we need the use
-		 * ->queue_lock to make sure we see the up-to-date status to
-		 * not miss rerunning the hw queue.
-		 */
-		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+		/* Synchronize with blk_mq_unquiesce_queue() */
+		smp_mb();
 		need_run = blk_mq_hw_queue_need_run(hctx);
-		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
-
 		if (!need_run)
 			return;
+		/* Ensure dispatch list/sw queue updates visible before execution */
+		smp_mb();
 	}
 
 	if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context
  2025-12-22 20:15 [PATCH v2 0/2] block/blk-mq: fix RT kernel issues and interrupt context warnings Ionut Nechita (WindRiver)
  2025-12-22 20:15 ` [PATCH v2 1/2] block/blk-mq: fix RT kernel regression with queue_lock in hot path Ionut Nechita (WindRiver)
@ 2025-12-22 20:15 ` Ionut Nechita (WindRiver)
  2025-12-23  1:22   ` Ming Lei
  2025-12-23  2:18   ` Muchun Song
  1 sibling, 2 replies; 6+ messages in thread
From: Ionut Nechita (WindRiver) @ 2025-12-22 20:15 UTC (permalink / raw)
  To: ming.lei
  Cc: axboe, gregkh, ionut.nechita, linux-block, linux-kernel,
	muchun.song, sashal, stable

From: Ionut Nechita <ionut.nechita@windriver.com>

Fix warning "WARN_ON_ONCE(!async && in_interrupt())" that occurs during
SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues()
synchronously from interrupt context.

The issue happens during device removal/scanning when:
1. blk_mq_destroy_queue() -> blk_queue_start_drain()
2. blk_freeze_queue_start() calls blk_mq_run_hw_queues(q, false)
3. This triggers the warning in blk_mq_run_hw_queue() when in interrupt context

Change the synchronous call to asynchronous to avoid running in interrupt context.
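
For reference, a simplified sketch of the relevant path in
blk_mq_run_hw_queue() (surrounding code elided); passing async == true
takes the deferred branch, so nothing is dispatched synchronously from
interrupt context:

	/* blk_mq_run_hw_queue(hctx, async), simplified */
	WARN_ON_ONCE(!async && in_interrupt());
	...
	if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
		blk_mq_delay_run_hw_queue(hctx, 0);	/* defer to kblockd workqueue */
		return;
	}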

Fixes: Warning in blk_mq_run_hw_queue+0x1fa/0x260
Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
---
 block/blk-mq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5fb8da4958d0..ae152f7a6933 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -128,7 +128,7 @@ void blk_freeze_queue_start(struct request_queue *q)
 		percpu_ref_kill(&q->q_usage_counter);
 		mutex_unlock(&q->mq_freeze_lock);
 		if (queue_is_mq(q))
-			blk_mq_run_hw_queues(q, false);
+			blk_mq_run_hw_queues(q, true);
 	} else {
 		mutex_unlock(&q->mq_freeze_lock);
 	}
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context
  2025-12-22 20:15 ` [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context Ionut Nechita (WindRiver)
@ 2025-12-23  1:22   ` Ming Lei
  2025-12-23  2:18   ` Muchun Song
  1 sibling, 0 replies; 6+ messages in thread
From: Ming Lei @ 2025-12-23  1:22 UTC (permalink / raw)
  To: Ionut Nechita (WindRiver)
  Cc: axboe, gregkh, ionut.nechita, linux-block, linux-kernel,
	muchun.song, sashal, stable

On Mon, Dec 22, 2025 at 10:15:41PM +0200, Ionut Nechita (WindRiver) wrote:
> From: Ionut Nechita <ionut.nechita@windriver.com>
> 
> Fix warning "WARN_ON_ONCE(!async && in_interrupt())" that occurs during
> SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues()
> synchronously from interrupt context.

Can you show the whole stack trace in the warning? The in-tree code doesn't
indicate that the queue freeze path can be called from SCSI's interrupt
context.


Thanks, 
Ming


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH v2 1/2] block/blk-mq: fix RT kernel regression with queue_lock in hot path
  2025-12-22 20:15 ` [PATCH v2 1/2] block/blk-mq: fix RT kernel regression with queue_lock in hot path Ionut Nechita (WindRiver)
@ 2025-12-23  2:15   ` Muchun Song
  0 siblings, 0 replies; 6+ messages in thread
From: Muchun Song @ 2025-12-23  2:15 UTC (permalink / raw)
  To: Ionut Nechita (WindRiver)
  Cc: axboe, gregkh, ionut.nechita, linux-block, linux-kernel, sashal,
	stable, ming.lei



On 2025/12/23 04:15, Ionut Nechita (WindRiver) wrote:
> From: Ionut Nechita <ionut.nechita@windriver.com>
>
> Commit 679b1874eba7 ("block: fix ordering between checking
> QUEUE_FLAG_QUIESCED request adding") introduced queue_lock acquisition
> in blk_mq_run_hw_queue() to synchronize QUEUE_FLAG_QUIESCED checks.
>
> On RT kernels (CONFIG_PREEMPT_RT), regular spinlocks are converted to
> rt_mutex (sleeping locks). When multiple MSI-X IRQ threads process I/O
> completions concurrently, they contend on queue_lock in the hot path,
> causing all IRQ threads to enter D (uninterruptible sleep) state. This
> serializes interrupt processing completely.
>
> Test case (MegaRAID 12GSAS with 8 MSI-X vectors on RT kernel):
> - Good (v6.6.52-rt):  640 MB/s sequential read
> - Bad  (v6.6.64-rt):  153 MB/s sequential read (-76% regression)
> - 6-8 out of 8 MSI-X IRQ threads stuck in D-state waiting on queue_lock
>
> The original commit message mentioned memory barriers as an alternative
> approach. Use full memory barriers (smp_mb()) instead of queue_lock to
> provide the same ordering guarantees without sleeping on RT kernels.
>
> Memory barriers ensure proper synchronization:
> - CPU0 either sees QUEUE_FLAG_QUIESCED cleared, OR
> - CPU1 sees dispatch list/sw queue bitmap updates
>
> This maintains correctness while avoiding lock contention that causes
> RT kernel IRQ threads to sleep in the I/O completion path.
>
> Fixes: 679b1874eba7 ("block: fix ordering between checking QUEUE_FLAG_QUIESCED request adding")
> Cc: stable@vger.kernel.org
> Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>
> ---
>   block/blk-mq.c | 19 ++++++++-----------
>   1 file changed, 8 insertions(+), 11 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 5da948b07058..5fb8da4958d0 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2292,22 +2292,19 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
>   
>   	might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
>   
> +	/*
> +	 * First lockless check to avoid unnecessary overhead.
> +	 * Memory barrier below synchronizes with blk_mq_unquiesce_queue().
> +	 */
>   	need_run = blk_mq_hw_queue_need_run(hctx);
>   	if (!need_run) {
> -		unsigned long flags;
> -
> -		/*
> -		 * Synchronize with blk_mq_unquiesce_queue(), because we check
> -		 * if hw queue is quiesced locklessly above, we need the use
> -		 * ->queue_lock to make sure we see the up-to-date status to
> -		 * not miss rerunning the hw queue.
> -		 */
> -		spin_lock_irqsave(&hctx->queue->queue_lock, flags);
> +		/* Synchronize with blk_mq_unquiesce_queue() */

Memory barriers must be used in pairs. So how does this synchronize?

> +		smp_mb();
>   		need_run = blk_mq_hw_queue_need_run(hctx);
> -		spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
> -
>   		if (!need_run)
>   			return;
> +		/* Ensure dispatch list/sw queue updates visible before execution */
> +		smp_mb();

Why do we need another barrier? What ordering does this barrier guarantee?

Thanks.
>   	}
>   
>   	if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context
  2025-12-22 20:15 ` [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context Ionut Nechita (WindRiver)
  2025-12-23  1:22   ` Ming Lei
@ 2025-12-23  2:18   ` Muchun Song
  1 sibling, 0 replies; 6+ messages in thread
From: Muchun Song @ 2025-12-23  2:18 UTC (permalink / raw)
  To: Ionut Nechita (WindRiver)
  Cc: ming.lei, axboe, gregkh, ionut.nechita, linux-block, linux-kernel,
	sashal, stable



> On Dec 23, 2025, at 04:15, Ionut Nechita (WindRiver) <djiony2011@gmail.com> wrote:
> 
> From: Ionut Nechita <ionut.nechita@windriver.com>
> 
> Fix warning "WARN_ON_ONCE(!async && in_interrupt())" that occurs during
> SCSI device scanning when blk_freeze_queue_start() calls blk_mq_run_hw_queues()
> synchronously from interrupt context.
> 
> The issue happens during device removal/scanning when:
> 1. blk_mq_destroy_queue() -> blk_queue_start_drain()
> 2. blk_freeze_queue_start() calls blk_mq_run_hw_queues(q, false)
> 3. This triggers the warning in blk_mq_run_hw_queue() when in interrupt context
> 
> Change the synchronous call to asynchronous to avoid running in interrupt context.
> 
> Fixes: Warning in blk_mq_run_hw_queue+0x1fa/0x260

You've used an incorrect format for the Fixes tag.

Thanks.

> Signed-off-by: Ionut Nechita <ionut.nechita@windriver.com>

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2025-12-23  2:19 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-22 20:15 [PATCH v2 0/2] block/blk-mq: fix RT kernel issues and interrupt context warnings Ionut Nechita (WindRiver)
2025-12-22 20:15 ` [PATCH v2 1/2] block/blk-mq: fix RT kernel regression with queue_lock in hot path Ionut Nechita (WindRiver)
2025-12-23  2:15   ` Muchun Song
2025-12-22 20:15 ` [PATCH v2 2/2] block: Fix WARN_ON in blk_mq_run_hw_queue when called from interrupt context Ionut Nechita (WindRiver)
2025-12-23  1:22   ` Ming Lei
2025-12-23  2:18   ` Muchun Song

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).