From: Aaron Tomlin <atomlin@atomlin.com>
To: axboe@kernel.dk, rostedt@goodmis.org, mhiramat@kernel.org,
mathieu.desnoyers@efficios.com
Cc: johannes.thumshirn@wdc.com, kch@nvidia.com, bvanassche@acm.org,
dlemoal@kernel.org, ritesh.list@gmail.com, loberman@redhat.com,
neelx@suse.com, sean@ashe.io, mproche@gmail.com,
chjohnst@gmail.com, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH v3 1/2] blk-mq: add tracepoint block_rq_tag_wait
Date: Thu, 19 Mar 2026 18:19:55 -0400
Message-ID: <20260319221956.332770-2-atomlin@atomlin.com>
In-Reply-To: <20260319221956.332770-1-atomlin@atomlin.com>

In high-performance storage environments, particularly those using RAID
controllers with shared tag sets (BLK_MQ_F_TAG_HCTX_SHARED), severe
latency spikes can occur when fast devices such as SSDs are starved of
hardware tags while sharing the same blk_mq_tag_set.

Currently, diagnosing this specific hardware queue contention is
difficult. When a thread exhausts the tag pool, blk_mq_get_tag()
forces it to block uninterruptibly via io_schedule(). While this can be
inferred from sched:sched_switch, or traced dynamically by attaching a
kprobe to blk_mq_mark_tag_wait(), there is no dedicated, out-of-the-box
observability for this event.

This patch introduces the block_rq_tag_wait tracepoint in the tag
allocation slow path. It fires immediately before the thread yields the
CPU, exposing the exact hardware context (hctx) that is starved, the
specific pool experiencing starvation (hardware or software scheduler),
and the total pool depth.

This provides storage engineers and performance-monitoring agents with a
zero-configuration, low-overhead mechanism to definitively identify
shared-tag bottlenecks and tune I/O schedulers or cgroup throttling
accordingly.
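
For illustration, once the event is enabled (e.g. via
/sys/kernel/tracing/events/block/block_rq_tag_wait/enable), a monitoring
agent can recover the fields from the event's TP_printk() format. The
sketch below parses a sample trace_pipe excerpt; the task names,
timestamps and numeric values in the sample are made up, and only the
payload format comes from this patch:

```python
import re

# Hypothetical trace_pipe excerpt: the ftrace prefix (task, CPU, flags,
# timestamp) is illustrative; the payload matches the TP_printk() format.
SAMPLE = """\
fio-1234  [002] d..1.  531.946792: block_rq_tag_wait: 8,0 hctx=2 starved on hardware tags (depth=62)
fio-1235  [003] d..1.  531.946810: block_rq_tag_wait: 8,0 hctx=3 starved on scheduler tags (depth=128)
"""

# Matches only the event payload emitted by TP_printk().
PAYLOAD = re.compile(
    r"block_rq_tag_wait: (?P<major>\d+),(?P<minor>\d+)"
    r" hctx=(?P<hctx>\d+) starved on (?P<pool>hardware|scheduler)"
    r" tags \(depth=(?P<depth>\d+)\)"
)

def parse_events(text):
    """Return one dict per block_rq_tag_wait line found in a trace dump."""
    events = []
    for line in text.splitlines():
        m = PAYLOAD.search(line)
        if not m:
            continue
        ev = m.groupdict()
        for key in ("major", "minor", "hctx", "depth"):
            ev[key] = int(ev[key])
        events.append(ev)
    return events

events = parse_events(SAMPLE)
print(events)
```

Counting events per (dev, hctx, pool) tuple over a sampling window is
then enough to tell whether the hardware or scheduler pool on a given
hctx is the bottleneck.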
Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
---
block/blk-mq-tag.c | 4 ++++
include/trace/events/block.h | 43 ++++++++++++++++++++++++++++++++++++
2 files changed, 47 insertions(+)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 33946cdb5716..66138dd043d4 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -13,6 +13,7 @@
 #include <linux/kmemleak.h>
 #include <linux/delay.h>
+#include <trace/events/block.h>
 
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-sched.h"
@@ -187,6 +188,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		if (tag != BLK_MQ_NO_TAG)
 			break;
 
+		trace_block_rq_tag_wait(data->q, data->hctx,
+					data->rq_flags & RQF_SCHED_TAGS);
+
 		bt_prev = bt;
 		io_schedule();
diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 6aa79e2d799c..71554b94e4d0 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -226,6 +226,49 @@ DECLARE_EVENT_CLASS(block_rq,
 		  IOPRIO_PRIO_LEVEL(__entry->ioprio), __entry->comm)
 );
 
+/**
+ * block_rq_tag_wait - triggered when a request is starved of a tag
+ * @q: request queue of the target device
+ * @hctx: hardware context of the request experiencing starvation
+ * @is_sched_tag: indicates whether the starved pool is the software scheduler
+ *
+ * Called immediately before the submitting context is forced to block due
+ * to the exhaustion of available tags (i.e., physical hardware driver tags
+ * or software scheduler tags). This tracepoint indicates that the context
+ * will be placed into an uninterruptible state via io_schedule() until an
+ * active request completes and relinquishes its assigned tag.
+ */
+TRACE_EVENT(block_rq_tag_wait,
+
+	TP_PROTO(struct request_queue *q, struct blk_mq_hw_ctx *hctx, bool is_sched_tag),
+
+	TP_ARGS(q, hctx, is_sched_tag),
+
+	TP_STRUCT__entry(
+		__field( dev_t,	dev		)
+		__field( u32,	hctx_id		)
+		__field( u32,	nr_tags		)
+		__field( bool,	is_sched_tag	)
+	),
+
+	TP_fast_assign(
+		__entry->dev = disk_devt(q->disk);
+		__entry->hctx_id = hctx->queue_num;
+		__entry->is_sched_tag = is_sched_tag;
+
+		if (is_sched_tag)
+			__entry->nr_tags = hctx->sched_tags->nr_tags;
+		else
+			__entry->nr_tags = hctx->tags->nr_tags;
+	),
+
+	TP_printk("%d,%d hctx=%u starved on %s tags (depth=%u)",
+		  MAJOR(__entry->dev), MINOR(__entry->dev),
+		  __entry->hctx_id,
+		  __entry->is_sched_tag ? "scheduler" : "hardware",
+		  __entry->nr_tags)
+);
+
 /**
  * block_rq_insert - insert block operation request into queue
  * @rq: block IO operation request
--
2.51.0
Thread overview: 4+ messages
2026-03-19 22:19 [PATCH v3 0/2] blk-mq: introduce tag starvation observability Aaron Tomlin
2026-03-19 22:19 ` Aaron Tomlin [this message]
2026-03-19 22:19 ` [PATCH v3 2/2] blk-mq: expose tag starvation counts via debugfs Aaron Tomlin
2026-03-20 15:08 ` Laurence Oberman