* [PATCH 0/3] block: avoid hctx spinlock for plug with multiple queues
From: Caleb Sander Mateos @ 2025-04-23 20:51 UTC
To: Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
blk_mq_flush_plug_list() has a fast path if all requests in the plug
are destined for the same request_queue. It calls ->queue_rqs() with the
whole batch of requests, falling back on ->queue_rq() for any requests
not handled by ->queue_rqs(). However, if the requests are destined for
multiple queues, blk_mq_flush_plug_list() has a slow path that calls
blk_mq_dispatch_list() repeatedly to filter the requests by ctx/hctx.
Each queue's requests are inserted into the hctx's dispatch list under a
spinlock, then __blk_mq_sched_dispatch_requests() takes them out of the
dispatch list (taking the spinlock again), and finally
blk_mq_dispatch_rq_list() calls ->queue_rq() on each request.
Acquiring the hctx spinlock twice and calling ->queue_rq() instead of
->queue_rqs() makes the slow path significantly more expensive. Thus,
batching more requests into a single plug (e.g. submitting them in one
io_uring_enter() syscall) can counterintuitively hurt performance by
causing the plug to span multiple queues. We have observed 2-3% of CPU
time spent just acquiring the hctx spinlock on workloads that issue
requests to multiple NVMe devices in the same io_uring SQE batches.
Add a medium path in blk_mq_flush_plug_list() for plugs that span
multiple queues but don't use an elevator and aren't flushed from a
schedule. Filter the requests by queue and call
->queue_rqs()/->queue_rq() on the list of requests destined for each
request_queue.
With this change, we no longer see any CPU time spent in _raw_spin_lock
from blk_mq_flush_plug_list(), and throughput increases accordingly.
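To illustrate the approach outside of kernel context, here is a small,
self-contained userspace C sketch of the same technique: repeatedly peel
the first request's queue out of a mixed singly linked list and
"dispatch" that queue's requests as one ordered batch, leaving the rest
for later passes. The toy_rq structure, queue ids, and function names
are purely illustrative stand-ins for struct request and its
request_queue; this is not kernel code.

/*
 * Toy illustration of per-queue batching: pull the head request's queue
 * out of a mixed list and dispatch it as one batch, preserving order.
 */
#include <stdio.h>
#include <stddef.h>

struct toy_rq {
	int queue_id;		/* stand-in for rq->q */
	struct toy_rq *next;	/* stand-in for rq->rq_next */
};

/* Move all requests matching the head's queue into *batch; return the depth. */
static unsigned int extract_queue_requests(struct toy_rq **list,
					   struct toy_rq **batch)
{
	int this_q = (*list)->queue_id;
	struct toy_rq **batch_tail = batch;
	struct toy_rq **prev = list;
	unsigned int depth = 0;

	while (*prev) {
		struct toy_rq *rq = *prev;

		if (rq->queue_id == this_q) {
			*prev = rq->next;	/* unlink from the mixed list */
			rq->next = NULL;
			*batch_tail = rq;	/* append to the batch */
			batch_tail = &rq->next;
			depth++;
		} else {
			prev = &rq->next;	/* leave it for a later pass */
		}
	}
	return depth;
}

int main(void)
{
	struct toy_rq rqs[] = {
		{ .queue_id = 0 }, { .queue_id = 1 }, { .queue_id = 0 },
		{ .queue_id = 2 }, { .queue_id = 1 }, { .queue_id = 0 },
	};
	struct toy_rq *list = NULL, **tail = &list;
	size_t i;

	for (i = 0; i < sizeof(rqs) / sizeof(rqs[0]); i++) {
		*tail = &rqs[i];
		tail = &rqs[i].next;
	}

	while (list) {
		struct toy_rq *batch = NULL;
		unsigned int depth = extract_queue_requests(&list, &batch);

		/* In the kernel, this is where ->queue_rqs() would run. */
		printf("dispatch %u request(s) for queue %d\n",
		       depth, batch->queue_id);
	}
	return 0;
}

Running it prints one dispatch line per queue (3 requests for queue 0,
2 for queue 1, 1 for queue 2), mirroring how the medium path issues one
->queue_rqs() batch per request_queue.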
Caleb Sander Mateos (3):
block: take rq_list instead of plug in dispatch functions
block: factor out blk_mq_dispatch_queue_requests() helper
block: avoid hctx spinlock for plug with multiple queues
block/blk-mq.c | 106 +++++++++++++++++++++++++++++++-------------
block/mq-deadline.c | 2 +-
2 files changed, 75 insertions(+), 33 deletions(-)
--
2.45.2
* [PATCH 1/3] block: take rq_list instead of plug in dispatch functions
From: Caleb Sander Mateos @ 2025-04-23 20:51 UTC
To: Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
blk_mq_plug_issue_direct(), __blk_mq_flush_plug_list(), and
blk_mq_dispatch_plug_list() take a struct blk_plug * but only use its
mq_list. Pass the struct rq_list * instead in preparation for calling
them with other lists of requests.
Drop "plug" from the function names as they are no longer plug-specific.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
block/blk-mq.c | 26 ++++++++++++--------------
block/mq-deadline.c | 2 +-
2 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 554380bfd002..fb514fd41d76 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2788,19 +2788,19 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
if (!blk_mq_get_budget_and_tag(rq))
return BLK_STS_RESOURCE;
return __blk_mq_issue_directly(hctx, rq, last);
}
-static void blk_mq_plug_issue_direct(struct blk_plug *plug)
+static void blk_mq_issue_direct(struct rq_list *rqs)
{
struct blk_mq_hw_ctx *hctx = NULL;
struct request *rq;
int queued = 0;
blk_status_t ret = BLK_STS_OK;
- while ((rq = rq_list_pop(&plug->mq_list))) {
- bool last = rq_list_empty(&plug->mq_list);
+ while ((rq = rq_list_pop(rqs))) {
+ bool last = rq_list_empty(rqs);
if (hctx != rq->mq_hctx) {
if (hctx) {
blk_mq_commit_rqs(hctx, queued, false);
queued = 0;
@@ -2827,29 +2827,28 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
out:
if (ret != BLK_STS_OK)
blk_mq_commit_rqs(hctx, queued, false);
}
-static void __blk_mq_flush_plug_list(struct request_queue *q,
- struct blk_plug *plug)
+static void __blk_mq_flush_list(struct request_queue *q, struct rq_list *rqs)
{
if (blk_queue_quiesced(q))
return;
- q->mq_ops->queue_rqs(&plug->mq_list);
+ q->mq_ops->queue_rqs(rqs);
}
-static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
{
struct blk_mq_hw_ctx *this_hctx = NULL;
struct blk_mq_ctx *this_ctx = NULL;
struct rq_list requeue_list = {};
unsigned int depth = 0;
bool is_passthrough = false;
LIST_HEAD(list);
do {
- struct request *rq = rq_list_pop(&plug->mq_list);
+ struct request *rq = rq_list_pop(rqs);
if (!this_hctx) {
this_hctx = rq->mq_hctx;
this_ctx = rq->mq_ctx;
is_passthrough = blk_rq_is_passthrough(rq);
@@ -2858,13 +2857,13 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
rq_list_add_tail(&requeue_list, rq);
continue;
}
list_add_tail(&rq->queuelist, &list);
depth++;
- } while (!rq_list_empty(&plug->mq_list));
+ } while (!rq_list_empty(rqs));
- plug->mq_list = requeue_list;
+ *rqs = requeue_list;
trace_block_unplug(this_hctx->queue, depth, !from_sched);
percpu_ref_get(&this_hctx->queue->q_usage_counter);
/* passthrough requests should never be issued to the I/O scheduler */
if (is_passthrough) {
@@ -2912,23 +2911,22 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
* already know at this point that all requests belong to the
* same queue, caller must ensure that's the case.
*/
if (q->mq_ops->queue_rqs) {
blk_mq_run_dispatch_ops(q,
- __blk_mq_flush_plug_list(q, plug));
+ __blk_mq_flush_list(q, &plug->mq_list));
if (rq_list_empty(&plug->mq_list))
return;
}
- blk_mq_run_dispatch_ops(q,
- blk_mq_plug_issue_direct(plug));
+ blk_mq_run_dispatch_ops(q, blk_mq_issue_direct(&plug->mq_list));
if (rq_list_empty(&plug->mq_list))
return;
}
do {
- blk_mq_dispatch_plug_list(plug, from_schedule);
+ blk_mq_dispatch_list(&plug->mq_list, from_schedule);
} while (!rq_list_empty(&plug->mq_list));
}
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 754f6b7415cd..2edf1cac06d5 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -713,11 +713,11 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
list_add_tail(&rq->queuelist, &per_prio->fifo_list[data_dir]);
}
}
/*
- * Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
+ * Called from blk_mq_insert_request() or blk_mq_dispatch_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list,
blk_insert_t flags)
{
--
2.45.2
* [PATCH 2/3] block: factor out blk_mq_dispatch_queue_requests() helper
From: Caleb Sander Mateos @ 2025-04-23 20:51 UTC
To: Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
Factor out the logic from blk_mq_flush_plug_list() that calls
->queue_rqs() with a fallback to ->queue_rq() into a helper function
blk_mq_dispatch_queue_requests(). This is in preparation for using this
code with other lists of requests.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
block/blk-mq.c | 43 ++++++++++++++++++++++---------------------
1 file changed, 22 insertions(+), 21 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index fb514fd41d76..a777cb361ee3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2834,10 +2834,31 @@ static void __blk_mq_flush_list(struct request_queue *q, struct rq_list *rqs)
if (blk_queue_quiesced(q))
return;
q->mq_ops->queue_rqs(rqs);
}
+static void blk_mq_dispatch_queue_requests(struct rq_list *rqs, unsigned depth)
+{
+ struct request_queue *q = rq_list_peek(rqs)->q;
+
+ trace_block_unplug(q, depth, true);
+
+ /*
+ * Peek first request and see if we have a ->queue_rqs() hook.
+ * If we do, we can dispatch the whole list in one go.
+ * We already know at this point that all requests belong to the
+ * same queue, caller must ensure that's the case.
+ */
+ if (q->mq_ops->queue_rqs) {
+ blk_mq_run_dispatch_ops(q, __blk_mq_flush_list(q, rqs));
+ if (rq_list_empty(rqs))
+ return;
+ }
+
+ blk_mq_run_dispatch_ops(q, blk_mq_issue_direct(rqs));
+}
+
static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
{
struct blk_mq_hw_ctx *this_hctx = NULL;
struct blk_mq_ctx *this_ctx = NULL;
struct rq_list requeue_list = {};
@@ -2881,11 +2902,10 @@ static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
percpu_ref_put(&this_hctx->queue->q_usage_counter);
}
void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
- struct request *rq;
unsigned int depth;
/*
* We may have been called recursively midway through handling
* plug->mq_list via a schedule() in the driver's queue_rq() callback.
@@ -2897,30 +2917,11 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
return;
depth = plug->rq_count;
plug->rq_count = 0;
if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
- struct request_queue *q;
-
- rq = rq_list_peek(&plug->mq_list);
- q = rq->q;
- trace_block_unplug(q, depth, true);
-
- /*
- * Peek first request and see if we have a ->queue_rqs() hook.
- * If we do, we can dispatch the whole plug list in one go. We
- * already know at this point that all requests belong to the
- * same queue, caller must ensure that's the case.
- */
- if (q->mq_ops->queue_rqs) {
- blk_mq_run_dispatch_ops(q,
- __blk_mq_flush_list(q, &plug->mq_list));
- if (rq_list_empty(&plug->mq_list))
- return;
- }
-
- blk_mq_run_dispatch_ops(q, blk_mq_issue_direct(&plug->mq_list));
+ blk_mq_dispatch_queue_requests(&plug->mq_list, depth);
if (rq_list_empty(&plug->mq_list))
return;
}
do {
--
2.45.2
* [PATCH 3/3] block: avoid hctx spinlock for plug with multiple queues
From: Caleb Sander Mateos @ 2025-04-23 20:51 UTC
To: Jens Axboe; +Cc: linux-block, linux-kernel, Caleb Sander Mateos
blk_mq_flush_plug_list() has a fast path if all requests in the plug
are destined for the same request_queue. It calls ->queue_rqs() with the
whole batch of requests, falling back on ->queue_rq() for any requests
not handled by ->queue_rqs(). However, if the requests are destined for
multiple queues, blk_mq_flush_plug_list() has a slow path that calls
blk_mq_dispatch_list() repeatedly to filter the requests by ctx/hctx.
Each queue's requests are inserted into the hctx's dispatch list under a
spinlock, then __blk_mq_sched_dispatch_requests() takes them out of the
dispatch list (taking the spinlock again), and finally
blk_mq_dispatch_rq_list() calls ->queue_rq() on each request.
Acquiring the hctx spinlock twice and calling ->queue_rq() instead of
->queue_rqs() makes the slow path significantly more expensive. Thus,
batching more requests into a single plug (e.g. submitting them in one
io_uring_enter() syscall) can counterintuitively hurt performance by
causing the plug to span multiple queues. We have observed 2-3% of CPU
time spent just acquiring the hctx spinlock on workloads that issue
requests to multiple NVMe devices in the same io_uring SQE batches.
Add a medium path in blk_mq_flush_plug_list() for plugs that span
multiple queues but don't use an elevator and aren't flushed from a
schedule. Filter the requests by queue and call
->queue_rqs()/->queue_rq() on the list of requests destined for each
request_queue.
With this change, we no longer see any CPU time spent in _raw_spin_lock
from blk_mq_flush_plug_list(), and throughput increases accordingly.
Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
block/blk-mq.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 44 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a777cb361ee3..f820c6c0cb1a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2834,10 +2834,34 @@ static void __blk_mq_flush_list(struct request_queue *q, struct rq_list *rqs)
if (blk_queue_quiesced(q))
return;
q->mq_ops->queue_rqs(rqs);
}
+static void blk_mq_extract_queue_requests(struct rq_list *rqs,
+ struct rq_list *queue_rqs,
+ unsigned *queue_depth)
+{
+ struct rq_list matched_rqs = {}, unmatched_rqs = {};
+ struct request *rq = rq_list_pop(rqs);
+ struct request_queue *this_q = rq->q;
+ unsigned depth = 1;
+
+ rq_list_add_tail(&matched_rqs, rq);
+ while ((rq = rq_list_pop(rqs))) {
+ if (rq->q == this_q) {
+ rq_list_add_tail(&matched_rqs, rq);
+ depth++;
+ } else {
+ rq_list_add_tail(&unmatched_rqs, rq);
+ }
+ }
+
+ *queue_rqs = matched_rqs;
+ *rqs = unmatched_rqs;
+ *queue_depth = depth;
+}
+
static void blk_mq_dispatch_queue_requests(struct rq_list *rqs, unsigned depth)
{
struct request_queue *q = rq_list_peek(rqs)->q;
trace_block_unplug(q, depth, true);
@@ -2900,10 +2924,24 @@ static void blk_mq_dispatch_list(struct rq_list *rqs, bool from_sched)
blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
}
percpu_ref_put(&this_hctx->queue->q_usage_counter);
}
+static void blk_mq_dispatch_multiple_queue_requests(struct rq_list *rqs)
+{
+ do {
+ struct rq_list queue_rqs;
+ unsigned depth;
+
+ blk_mq_extract_queue_requests(rqs, &queue_rqs, &depth);
+ blk_mq_dispatch_queue_requests(&queue_rqs, depth);
+ while (!rq_list_empty(&queue_rqs)) {
+ blk_mq_dispatch_list(&queue_rqs, false);
+ }
+ } while (!rq_list_empty(rqs));
+}
+
void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
unsigned int depth;
/*
@@ -2916,11 +2954,16 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
if (plug->rq_count == 0)
return;
depth = plug->rq_count;
plug->rq_count = 0;
- if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
+ if (!plug->has_elevator && !from_schedule) {
+ if (plug->multiple_queues) {
+ blk_mq_dispatch_multiple_queue_requests(&plug->mq_list);
+ return;
+ }
+
blk_mq_dispatch_queue_requests(&plug->mq_list, depth);
if (rq_list_empty(&plug->mq_list))
return;
}
--
2.45.2
* Re: [PATCH 1/3] block: take rq_list instead of plug in dispatch functions
From: Christoph Hellwig @ 2025-04-24 11:29 UTC
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH 2/3] block: factor out blk_mq_dispatch_queue_requests() helper
From: Christoph Hellwig @ 2025-04-24 11:31 UTC
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH 3/3] block: avoid hctx spinlock for plug with multiple queues
From: Christoph Hellwig @ 2025-04-24 11:38 UTC
To: Caleb Sander Mateos; +Cc: Jens Axboe, linux-block, linux-kernel
> +static void blk_mq_extract_queue_requests(struct rq_list *rqs,
> + struct rq_list *queue_rqs,
> + unsigned *queue_depth)
> +{
> + struct rq_list matched_rqs = {}, unmatched_rqs = {};
> + struct request *rq = rq_list_pop(rqs);
> + struct request_queue *this_q = rq->q;
> + unsigned depth = 1;
> +
> + rq_list_add_tail(&matched_rqs, rq);
> + while ((rq = rq_list_pop(rqs))) {
> + if (rq->q == this_q) {
> + rq_list_add_tail(&matched_rqs, rq);
> + depth++;
> + } else {
> + rq_list_add_tail(&unmatched_rqs, rq);
> + }
This might be more efficient if you keep an extra iterator and never
move the request for another queue.
> + }
> +
> + *queue_rqs = matched_rqs;
> + *rqs = unmatched_rqs;
> + *queue_depth = depth;
.. and I'd return the queue depth here instead of making it another
output argument.
> +static void blk_mq_dispatch_multiple_queue_requests(struct rq_list *rqs)
> +{
> + do {
> + struct rq_list queue_rqs;
> + unsigned depth;
> +
> + blk_mq_extract_queue_requests(rqs, &queue_rqs, &depth);
> + blk_mq_dispatch_queue_requests(&queue_rqs, depth);
> + while (!rq_list_empty(&queue_rqs)) {
> + blk_mq_dispatch_list(&queue_rqs, false);
> + }
No need for the braces in the inner while loop here.
The other caller of blk_mq_dispatch_list() loops until the list is
empty; why don't we need that here?
* Re: [PATCH 3/3] block: avoid hctx spinlock for plug with multiple queues
From: Caleb Sander Mateos @ 2025-04-25 1:00 UTC
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-kernel
On Thu, Apr 24, 2025 at 4:38 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> > +static void blk_mq_extract_queue_requests(struct rq_list *rqs,
> > + struct rq_list *queue_rqs,
> > + unsigned *queue_depth)
> > +{
> > + struct rq_list matched_rqs = {}, unmatched_rqs = {};
> > + struct request *rq = rq_list_pop(rqs);
> > + struct request_queue *this_q = rq->q;
> > + unsigned depth = 1;
> > +
> > + rq_list_add_tail(&matched_rqs, rq);
> > + while ((rq = rq_list_pop(rqs))) {
> > + if (rq->q == this_q) {
> > + rq_list_add_tail(&matched_rqs, rq);
> > + depth++;
> > + } else {
> > + rq_list_add_tail(&unmatched_rqs, rq);
> > + }
>
> This might be more efficient if you keep an extra iterator and never
> move the request for another queue.
Sure, will do.
>
> > + }
> > +
> > + *queue_rqs = matched_rqs;
> > + *rqs = unmatched_rqs;
> > + *queue_depth = depth;
>
> .. and I'd return the queue depth here instead of making it another
> output argument.
Okay.
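For concreteness, a rough, untested sketch combining both suggestions
(an extra iterator so requests for other queues are never moved, and
the depth returned directly) might look like the following. It assumes
struct rq_list remains a simple head/tail pair with requests chained
via rq->rq_next, so the field names may differ:

static unsigned blk_mq_extract_queue_requests(struct rq_list *rqs,
					      struct rq_list *queue_rqs)
{
	struct request *rq = rq_list_pop(rqs);
	struct request_queue *this_q = rq->q;
	struct request **prev = &rqs->head;
	struct rq_list matched_rqs = {};
	struct request *last = NULL;
	unsigned depth = 1;

	rq_list_add_tail(&matched_rqs, rq);
	while ((rq = *prev)) {
		if (rq->q == this_q) {
			/* unlink rq from rqs and append it to the batch */
			*prev = rq->rq_next;
			rq_list_add_tail(&matched_rqs, rq);
			depth++;
		} else {
			/* requests for other queues stay where they are */
			last = rq;
			prev = &rq->rq_next;
		}
	}

	rqs->tail = last;
	*queue_rqs = matched_rqs;
	return depth;
}

The caller would then just do
depth = blk_mq_extract_queue_requests(rqs, &queue_rqs);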
>
> > +static void blk_mq_dispatch_multiple_queue_requests(struct rq_list *rqs)
> > +{
> > + do {
> > + struct rq_list queue_rqs;
> > + unsigned depth;
> > +
> > + blk_mq_extract_queue_requests(rqs, &queue_rqs, &depth);
> > + blk_mq_dispatch_queue_requests(&queue_rqs, depth);
> > + while (!rq_list_empty(&queue_rqs)) {
> > + blk_mq_dispatch_list(&queue_rqs, false);
> > + }
>
> No need for the braces in the inner while loop here.
Old habits die hard :)
>
> The other caller of blk_mq_dispatch_list() loops until the list is
> empty; why don't we need that here?
The inner loop does keep calling blk_mq_dispatch_list() until queue_rqs
is empty, and the outer loop keeps extracting a fresh queue_rqs batch
until the entire rqs list is empty. Am I misunderstanding you?
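Annotated, and with the braces on the inner loop dropped per your other
comment, the loop as posted is:

	do {
		struct rq_list queue_rqs;
		unsigned depth;

		/* pull out all requests destined for the first request's queue */
		blk_mq_extract_queue_requests(rqs, &queue_rqs, &depth);
		/* ->queue_rqs()/->queue_rq() on that queue's batch */
		blk_mq_dispatch_queue_requests(&queue_rqs, depth);
		/* insert anything the driver didn't take */
		while (!rq_list_empty(&queue_rqs))
			blk_mq_dispatch_list(&queue_rqs, false);
	} while (!rq_list_empty(rqs));	/* repeat for the remaining queues */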
Thanks for the review,
Caleb