* [PATCH v2] blk-mq: add tracepoint block_rq_tag_wait
@ 2026-03-19 1:53 Aaron Tomlin
From: Aaron Tomlin @ 2026-03-19 1:53 UTC (permalink / raw)
To: axboe, rostedt, mhiramat, mathieu.desnoyers
Cc: johannes.thumshirn, kch, bvanassche, dlemoal, ritesh.list, neelx,
sean, mproche, chjohnst, linux-block, linux-kernel,
linux-trace-kernel
In high-performance storage environments, particularly when utilising
RAID controllers with shared tag sets (BLK_MQ_F_TAG_HCTX_SHARED), severe
latency spikes can occur when fast devices (e.g. SSDs) sharing the same
blk_mq_tag_set are starved of hardware tags.
Currently, diagnosing this specific hardware queue contention is
difficult. When a CPU thread exhausts the tag pool, blk_mq_get_tag()
forces the current thread to block uninterruptibly via io_schedule().
While this can be inferred via sched:sched_switch or dynamically
traced by attaching a kprobe to blk_mq_mark_tag_wait(), there is no
dedicated, out-of-the-box observability for this event.
This patch introduces the block_rq_tag_wait static tracepoint in the
tag allocation slow path. It triggers immediately before the thread
yields the CPU, exposing the exact hardware context (hctx) that is
starved, the specific pool experiencing starvation (hardware or software
scheduler), and the total pool depth.
This provides storage engineers and performance monitoring agents
with a zero-configuration, low-overhead mechanism to definitively
identify shared-tag bottlenecks and tune I/O schedulers or cgroup
throttling accordingly.
Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
---
Changes since v1 [1]:
- Improved the description of the tracepoint (Damien Le Moal)
- Removed the redundant "active requests" (Laurence Oberman)
- Introduced pool-specific starvation tracking
[1]: https://lore.kernel.org/lkml/20260317182835.258183-1-atomlin@atomlin.com/
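For quick verification, the event can be enabled through tracefs
(echo 1 > /sys/kernel/tracing/events/block/block_rq_tag_wait/enable) and
read from trace_pipe. The helper below is a hypothetical sketch of how a
monitoring agent might parse the payload produced by this patch's
TP_printk format string; it is not part of the patch itself, and the
sample device numbers are made up:

```python
import re

# Hypothetical helper (not part of this patch): parse the payload emitted
# by the block_rq_tag_wait TP_printk format string,
#   "%d,%d hctx=%u starved on %s tags (depth=%u)"
EVENT_RE = re.compile(
    r"(?P<major>\d+),(?P<minor>\d+) hctx=(?P<hctx>\d+) "
    r"starved on (?P<pool>scheduler|hardware) tags \(depth=(?P<depth>\d+)\)"
)

def parse_tag_wait(line):
    """Return the event fields as a dict, or None if the line does not match."""
    m = EVENT_RE.search(line)
    if m is None:
        return None
    return {
        "dev": (int(m["major"]), int(m["minor"])),  # MAJOR/MINOR of q->disk
        "hctx": int(m["hctx"]),                     # starved hardware context
        "sched_tags": m["pool"] == "scheduler",     # scheduler vs driver pool
        "depth": int(m["depth"]),                   # total pool depth (nr_tags)
    }

# A line shaped like the tracepoint's output (device numbers invented):
print(parse_tag_wait("259,0 hctx=3 starved on hardware tags (depth=1023)"))
```

An agent would typically feed each line read from trace_pipe through
parse_tag_wait() and count starvation events per (dev, hctx) pair.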
block/blk-mq-tag.c | 4 ++++
include/trace/events/block.h | 43 ++++++++++++++++++++++++++++++++++++
2 files changed, 47 insertions(+)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 33946cdb5716..a6691a4fe7a7 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -13,6 +13,7 @@
 #include <linux/kmemleak.h>
 
 #include <linux/delay.h>
+#include <trace/events/block.h>
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-sched.h"
@@ -187,6 +188,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		if (tag != BLK_MQ_NO_TAG)
 			break;
 
+		trace_block_rq_tag_wait(data->q, data->hctx,
+					!!(data->rq_flags & RQF_SCHED_TAGS));
+
 		bt_prev = bt;
 		io_schedule();
 
diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 6aa79e2d799c..f7708d0d7a0c 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -226,6 +226,49 @@ DECLARE_EVENT_CLASS(block_rq,
 		  IOPRIO_PRIO_LEVEL(__entry->ioprio), __entry->comm)
 );
 
+/**
+ * block_rq_tag_wait - triggered when a request is starved of a tag
+ * @q: request queue of the target device
+ * @hctx: hardware context of the request experiencing starvation
+ * @is_sched_tag: indicates whether the starved pool is the software scheduler
+ *
+ * Called immediately before the submitting context is forced to block due
+ * to the exhaustion of available tags (i.e., physical hardware driver tags
+ * or software scheduler tags). This trace point indicates that the context
+ * will be placed into an uninterruptible state via io_schedule() until an
+ * active request completes and relinquishes its assigned tag.
+ */
+TRACE_EVENT(block_rq_tag_wait,
+
+	TP_PROTO(struct request_queue *q, struct blk_mq_hw_ctx *hctx, bool is_sched_tag),
+
+	TP_ARGS(q, hctx, is_sched_tag),
+
+	TP_STRUCT__entry(
+		__field( dev_t, dev )
+		__field( u32, hctx_id )
+		__field( u32, nr_tags )
+		__field( bool, is_sched_tag )
+	),
+
+	TP_fast_assign(
+		__entry->dev = disk_devt(q->disk);
+		__entry->hctx_id = hctx->queue_num;
+		__entry->is_sched_tag = is_sched_tag;
+
+		if (__entry->is_sched_tag)
+			__entry->nr_tags = hctx->sched_tags->nr_tags;
+		else
+			__entry->nr_tags = hctx->tags->nr_tags;
+	),
+
+	TP_printk("%d,%d hctx=%u starved on %s tags (depth=%u)",
+		  MAJOR(__entry->dev), MINOR(__entry->dev),
+		  __entry->hctx_id,
+		  __entry->is_sched_tag ? "scheduler" : "hardware",
+		  __entry->nr_tags)
+);
+
 /**
  * block_rq_insert - insert block operation request into queue
  * @rq: block IO operation request
--
2.51.0
^ permalink raw reply related [flat|nested] 7+ messages in thread

* Re: [PATCH v2] blk-mq: add tracepoint block_rq_tag_wait
From: Chaitanya Kulkarni @ 2026-03-19 3:18 UTC (permalink / raw)
To: Aaron Tomlin, axboe@kernel.dk, rostedt@goodmis.org,
mhiramat@kernel.org, mathieu.desnoyers@efficios.com
Cc: johannes.thumshirn@wdc.com, Chaitanya Kulkarni,
bvanassche@acm.org, dlemoal@kernel.org, ritesh.list@gmail.com,
neelx@suse.com, sean@ashe.io, mproche@gmail.com,
chjohnst@gmail.com, linux-block@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
On 3/18/26 18:53, Aaron Tomlin wrote:
> In high-performance storage environments, particularly when utilising
> RAID controllers with shared tag sets (BLK_MQ_F_TAG_HCTX_SHARED), severe
> latency spikes can occur when fast devices (SSDs) are starved of hardware
> tags when sharing the same blk_mq_tag_set.
>
> Currently, diagnosing this specific hardware queue contention is
> difficult. When a CPU thread exhausts the tag pool, blk_mq_get_tag()
> forces the current thread to block uninterruptibly via io_schedule().
> While this can be inferred via sched:sched_switch or dynamically
> traced by attaching a kprobe to blk_mq_mark_tag_wait(), there is no
> dedicated, out-of-the-box observability for this event.
>
> This patch introduces the block_rq_tag_wait static trace point in the
> tag allocation slow-path. It triggers immediately before the thread
> yields the CPU, exposing the exact hardware context (hctx) that is
> starved, the specific pool experiencing starvation (hardware or software
> scheduler), and the total pool depth.
>
> This provides storage engineers and performance monitoring agents
> with a zero-configuration, low-overhead mechanism to definitively
> identify shared-tag bottlenecks and tune I/O schedulers or cgroup
> throttling accordingly.
>
> Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
> ---
> Changes since v1 [1]:
> - Improved the description of the trace point (Damien Le Moal)
> - Removed the redundant "active requests" (Laurence Oberman)
> - Introduced pool-specific starvation tracking
>
> [1]: https://lore.kernel.org/lkml/20260317182835.258183-1-atomlin@atomlin.com/
LGTM.
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
* Re: [PATCH v2] blk-mq: add tracepoint block_rq_tag_wait
From: Damien Le Moal @ 2026-03-19 3:31 UTC (permalink / raw)
To: Aaron Tomlin, axboe, rostedt, mhiramat, mathieu.desnoyers
Cc: johannes.thumshirn, kch, bvanassche, ritesh.list, neelx, sean,
mproche, chjohnst, linux-block, linux-kernel, linux-trace-kernel
On 3/19/26 10:53, Aaron Tomlin wrote:
> In high-performance storage environments, particularly when utilising
> RAID controllers with shared tag sets (BLK_MQ_F_TAG_HCTX_SHARED), severe
> latency spikes can occur when fast devices (SSDs) are starved of hardware
> tags when sharing the same blk_mq_tag_set.
>
> Currently, diagnosing this specific hardware queue contention is
> difficult. When a CPU thread exhausts the tag pool, blk_mq_get_tag()
> forces the current thread to block uninterruptibly via io_schedule().
> While this can be inferred via sched:sched_switch or dynamically
> traced by attaching a kprobe to blk_mq_mark_tag_wait(), there is no
> dedicated, out-of-the-box observability for this event.
>
> This patch introduces the block_rq_tag_wait static trace point in the
> tag allocation slow-path. It triggers immediately before the thread
> yields the CPU, exposing the exact hardware context (hctx) that is
> starved, the specific pool experiencing starvation (hardware or software
> scheduler), and the total pool depth.
>
> This provides storage engineers and performance monitoring agents
> with a zero-configuration, low-overhead mechanism to definitively
> identify shared-tag bottlenecks and tune I/O schedulers or cgroup
> throttling accordingly.
>
> Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
> ---
> Changes since v1 [1]:
> - Improved the description of the trace point (Damien Le Moal)
> - Removed the redundant "active requests" (Laurence Oberman)
> - Introduced pool-specific starvation tracking
>
> [1]: https://lore.kernel.org/lkml/20260317182835.258183-1-atomlin@atomlin.com/
>
> block/blk-mq-tag.c | 4 ++++
> include/trace/events/block.h | 43 ++++++++++++++++++++++++++++++++++++
> 2 files changed, 47 insertions(+)
>
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 33946cdb5716..a6691a4fe7a7 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -13,6 +13,7 @@
> #include <linux/kmemleak.h>
>
> #include <linux/delay.h>
> +#include <trace/events/block.h>
> #include "blk.h"
> #include "blk-mq.h"
> #include "blk-mq-sched.h"
> @@ -187,6 +188,9 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
> if (tag != BLK_MQ_NO_TAG)
> break;
>
> + trace_block_rq_tag_wait(data->q, data->hctx,
> + !!(data->rq_flags & RQF_SCHED_TAGS));
I do not think that the "!!" is needed here.
Other than this, this looks OK to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> +
> bt_prev = bt;
> io_schedule();
>
> diff --git a/include/trace/events/block.h b/include/trace/events/block.h
> index 6aa79e2d799c..f7708d0d7a0c 100644
> --- a/include/trace/events/block.h
> +++ b/include/trace/events/block.h
> @@ -226,6 +226,49 @@ DECLARE_EVENT_CLASS(block_rq,
> IOPRIO_PRIO_LEVEL(__entry->ioprio), __entry->comm)
> );
>
> +/**
> + * block_rq_tag_wait - triggered when a request is starved of a tag
> + * @q: request queue of the target device
> + * @hctx: hardware context of the request experiencing starvation
> + * @is_sched_tag: indicates whether the starved pool is the software scheduler
> + *
> + * Called immediately before the submitting context is forced to block due
> + * to the exhaustion of available tags (i.e., physical hardware driver tags
> + * or software scheduler tags). This trace point indicates that the context
> + * will be placed into an uninterruptible state via io_schedule() until an
> + * active request completes and relinquishes its assigned tag.
> + */
> +TRACE_EVENT(block_rq_tag_wait,
> +
> + TP_PROTO(struct request_queue *q, struct blk_mq_hw_ctx *hctx, bool is_sched_tag),
> +
> + TP_ARGS(q, hctx, is_sched_tag),
> +
> + TP_STRUCT__entry(
> + __field( dev_t, dev )
> + __field( u32, hctx_id )
> + __field( u32, nr_tags )
> + __field( bool, is_sched_tag )
> + ),
> +
> + TP_fast_assign(
> + __entry->dev = disk_devt(q->disk);
> + __entry->hctx_id = hctx->queue_num;
> + __entry->is_sched_tag = is_sched_tag;
> +
> + if (__entry->is_sched_tag)
> + __entry->nr_tags = hctx->sched_tags->nr_tags;
> + else
> + __entry->nr_tags = hctx->tags->nr_tags;
> + ),
> +
> + TP_printk("%d,%d hctx=%u starved on %s tags (depth=%u)",
> + MAJOR(__entry->dev), MINOR(__entry->dev),
> + __entry->hctx_id,
> + __entry->is_sched_tag ? "scheduler" : "hardware",
> + __entry->nr_tags)
> +);
> +
> /**
> * block_rq_insert - insert block operation request into queue
> * @rq: block IO operation request
--
Damien Le Moal
Western Digital Research
From: Johannes Thumshirn @ 2026-03-19 7:32 UTC (permalink / raw)
To: Aaron Tomlin, axboe@kernel.dk, rostedt@goodmis.org,
mhiramat@kernel.org, mathieu.desnoyers@efficios.com
Cc: kch@nvidia.com, bvanassche@acm.org, dlemoal@kernel.org,
ritesh.list@gmail.com, neelx@suse.com, sean@ashe.io,
mproche@gmail.com, chjohnst@gmail.com,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-trace-kernel@vger.kernel.org
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCH v2] blk-mq: add tracepoint block_rq_tag_wait
From: Laurence Oberman @ 2026-03-19 12:02 UTC (permalink / raw)
To: Johannes Thumshirn, Aaron Tomlin, axboe@kernel.dk,
rostedt@goodmis.org, mhiramat@kernel.org,
mathieu.desnoyers@efficios.com
Cc: kch@nvidia.com, bvanassche@acm.org, dlemoal@kernel.org,
ritesh.list@gmail.com, neelx@suse.com, sean@ashe.io,
mproche@gmail.com, chjohnst@gmail.com,
linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-trace-kernel@vger.kernel.org
On Thu, 2026-03-19 at 07:32 +0000, Johannes Thumshirn wrote:
> Looks good,
>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Looks good now.
Reviewed-by: Laurence Oberman <loberman@redhat.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
* Re: [PATCH v2] blk-mq: add tracepoint block_rq_tag_wait
From: Steven Rostedt @ 2026-03-19 21:52 UTC (permalink / raw)
To: Aaron Tomlin
Cc: axboe, mhiramat, mathieu.desnoyers, johannes.thumshirn, kch,
bvanassche, dlemoal, ritesh.list, neelx, sean, mproche, chjohnst,
linux-block, linux-kernel, linux-trace-kernel
On Wed, 18 Mar 2026 21:53:00 -0400
Aaron Tomlin <atomlin@atomlin.com> wrote:
> + TP_fast_assign(
> + __entry->dev = disk_devt(q->disk);
> + __entry->hctx_id = hctx->queue_num;
> + __entry->is_sched_tag = is_sched_tag;
> +
> + if (__entry->is_sched_tag)
Nit, but why use __entry->is_sched_tag instead of is_sched_tag.
Not sure if the compiler will optimize it (likely it will), but it seems
cleaner to use the variable directly and not the one assigned.
Perhaps the compiler is smart enough to use one register for both updates.
-- Steve
> + __entry->nr_tags = hctx->sched_tags->nr_tags;
> + else
> + __entry->nr_tags = hctx->tags->nr_tags;
> + ),
> +
* Re: [PATCH v2] blk-mq: add tracepoint block_rq_tag_wait
From: Aaron Tomlin @ 2026-03-19 22:10 UTC (permalink / raw)
To: Steven Rostedt
Cc: axboe, mhiramat, mathieu.desnoyers, johannes.thumshirn, kch,
bvanassche, dlemoal, ritesh.list, neelx, sean, mproche, chjohnst,
linux-block, linux-kernel, linux-trace-kernel
On Thu, Mar 19, 2026 at 05:52:49PM -0400, Steven Rostedt wrote:
> On Wed, 18 Mar 2026 21:53:00 -0400
> Aaron Tomlin <atomlin@atomlin.com> wrote:
>
> > + TP_fast_assign(
> > + __entry->dev = disk_devt(q->disk);
> > + __entry->hctx_id = hctx->queue_num;
> > + __entry->is_sched_tag = is_sched_tag;
> > +
> > + if (__entry->is_sched_tag)
>
> Nit, but why use __entry->is_sched_tag instead of is_sched_tag.
>
> Not sure if the compiler will optimize it (likely it will), but it seems
> cleaner to use the variable directly and not the one assigned.
>
> Perhaps the compiler is smart enough to use one register for both updates.
>
Hi Steve,
Thank you for your feedback.
That was an oversight - I'll correct it now.
Kind regards,
--
Aaron Tomlin