From: "Ionut Nechita (Wind River)" <ionut.nechita@windriver.com>
To: axboe@kernel.dk, linux-block@vger.kernel.org
Cc: bigeasy@linutronix.de, bvanassche@acm.org, clrkwllms@kernel.org,
rostedt@goodmis.org, ming.lei@redhat.com, muchun.song@linux.dev,
mkhalfella@purestorage.com, chris.friesen@windriver.com,
linux-kernel@vger.kernel.org, linux-rt-devel@lists.linux.dev,
linux-rt-users@vger.kernel.org, stable@vger.kernel.org,
ionut_n2001@yahoo.com, sunlightlinux@gmail.com,
Ionut Nechita <ionut.nechita@windriver.com>
Subject: [PATCH v7 0/1] block/blk-mq: use atomic_t for quiesce_depth to avoid lock contention on RT
Date: Tue, 12 May 2026 09:28:14 +0300
Message-ID: <20260512062815.10815-1-ionut.nechita@windriver.com>
From: Ionut Nechita <ionut.nechita@windriver.com>
Hi Jens,
This is v7 of the fix for the PREEMPT_RT performance regression caused by
commit 6bda857bcbb86 ("block: fix ordering between checking
QUEUE_FLAG_QUIESCED request adding").
Changes since v6 (May 6):
- Reader-side barrier in blk_mq_run_hw_queue() changed from smp_rmb() to
smp_mb(). The race closed by commit 6bda857bcbb86 is a store-buffer
pattern: one CPU inserts a request and then reads the quiesce state,
another CPU unquiesces and then reads "has pending work". A full
barrier is needed on *both* sides, not just a read barrier on the
reader, so smp_mb() now pairs with the existing writer-side
smp_mb__after_atomic(). Thanks to Bart Van Assche for pointing out
that smp_rmb() was insufficient.
- Rewrote the in-code comments and the commit message to spell out which
ordering the removed q->queue_lock acquisitions provided and how it is
preserved:
* blk_mq_quiesce_queue_nowait(): the lock only published the state
change; the actual visibility/drain guarantee comes from the
synchronize_srcu()/synchronize_rcu() in blk_mq_wait_quiesce_done()
that every caller invokes. The smp_mb__after_atomic() is kept so
the helper stays self-contained for the few callers that defer
that wait.
* blk_mq_unquiesce_queue(): write side of the store-buffer pattern,
a full barrier between the quiesce_depth store and the
blk_mq_hctx_has_pending() load (atomic_dec_if_positive() already
orders the decrement on success; the barrier is spelled out for
clarity).
* blk_mq_run_hw_queue(): read side, a full barrier between the
request insert and the quiesce-state re-check.
- Also note in the changelog that this is the memory-barrier alternative
commit 6bda857bcbb86's own changelog described (and rejected as
"harder to maintain"), and that making quiesce_depth atomic_t turns
the lockless blk_queue_quiesced() read into a clean atomic load
instead of a plain-int read racing with a spin_lock-protected update.
- Rebased on linux-next (next-20260505). No other code changes; the
atomic_t conversion and removal of QUEUE_FLAG_QUIESCED are unchanged
from v6.
Sebastian's Reviewed-by is carried over: the approach (atomic counter +
barrier instead of the spinlock) is the one he suggested and reviewed;
the only functional change in v7 is upgrading the reader-side barrier to
a full one.
Changes since v5 (Mar 3):
- Rewrote the memory-ordering comments per Bart Van Assche's review.
- Rebased on top of linux-next. No code-generation changes.
The problem: on PREEMPT_RT, the spinlock_t q->queue_lock that commit
6bda857bcbb86 added to blk_mq_run_hw_queue() converts to a sleeping
rt_mutex. blk_mq_run_hw_queue() is called from every MSI-X IRQ thread and
takes that lock even on the common "nothing pending" path, so the IRQ
threads serialise on the rt_mutex and pile up in D state (uninterruptible
sleep). On a Broadcom/LSI MegaRAID 12GSAS/PCIe
Secure SAS39xx (megaraid_sas, 128 MSI-X vectors, 120 hw queues),
throughput drops from 640 MB/s to 153 MB/s.
The fix takes the memory-barrier alternative and folds the quiesce
indicator into quiesce_depth itself: quiesce_depth becomes atomic_t,
QUEUE_FLAG_QUIESCED goes away, and no lock is left on the dispatch hot
path.
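In outline, the converted counter behaves like the sketch below. This is an illustrative user-space model (mock_* names are mine, C11 atomics stand in for the kernel's atomic_t API): "quiesced" is now simply depth > 0, and unquiescing uses a dec-if-positive so an unbalanced call is caught rather than wrapping the counter:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model of the converted counter. quiesce_depth > 0 now
 * *is* the quiesced indicator, so QUEUE_FLAG_QUIESCED and the
 * queue_lock round-trip both disappear. */
struct mock_queue {
	atomic_int quiesce_depth;
};

/* blk_queue_quiesced() analogue: a plain atomic load, no lock. */
static bool mock_queue_quiesced(struct mock_queue *q)
{
	return atomic_load(&q->quiesce_depth) > 0;
}

static void mock_quiesce_nowait(struct mock_queue *q)
{
	atomic_fetch_add(&q->quiesce_depth, 1);
	/* smp_mb__after_atomic() in the real code: order the increment
	 * before later loads, for callers that defer the SRCU wait. */
	atomic_thread_fence(memory_order_seq_cst);
}

/* atomic_dec_if_positive() analogue: decrement only if depth > 0.
 * Returns true when the queue just became unquiesced (depth hit 0). */
static bool mock_unquiesce(struct mock_queue *q)
{
	int old = atomic_load(&q->quiesce_depth);

	while (old > 0 &&
	       !atomic_compare_exchange_weak(&q->quiesce_depth,
					     &old, old - 1))
		;
	if (old <= 0)
		return false;	/* unbalanced unquiesce; WARNs in the kernel */
	/* Full barrier between the depth store and the subsequent
	 * has-pending check: the write side of the store-buffer pair. */
	atomic_thread_fence(memory_order_seq_cst);
	return old == 1;
}
```

Nesting falls out naturally: only the final decrement reports "now unquiesced" and triggers rerunning the queues, and a spurious extra unquiesce is detected instead of corrupting the state.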
v6: https://lore.kernel.org/linux-block/cover.1778048987.git.ionut.nechita@windriver.com/
Ionut Nechita (1):
block/blk-mq: use atomic_t for quiesce_depth to avoid lock contention
on RT
block/blk-core.c | 1 +
block/blk-mq-debugfs.c | 1 -
block/blk-mq.c | 69 ++++++++++++++++++++++++++----------------
include/linux/blkdev.h | 9 ++++--
4 files changed, 50 insertions(+), 30 deletions(-)
--
2.54.0
Thread overview: 3+ messages
2026-05-12 6:28 Ionut Nechita (Wind River) [this message]
2026-05-12 6:28 ` [PATCH v7 1/1] block/blk-mq: use atomic_t for quiesce_depth to avoid lock contention on RT Ionut Nechita (Wind River)
2026-05-12 17:37 ` Bart Van Assche