* cleanup request insertion parameters
@ 2023-04-11 13:33 Christoph Hellwig
2023-04-11 13:33 ` [PATCH 01/16] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
` (15 more replies)
0 siblings, 16 replies; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Hi Jens,
in the context of his latest series, Bart commented that it's too hard
to find all the spots that do a head insertion into the blk-mq dispatch
queues.  This series collapses various far too deep callchains, drops
two of the three bools and then replaces the final one with a greppable
constant.
This will create some rebase work for Bart on top of the other comments
he got, but I think it will allow us to sort out some of the request
ordering issues much better while also making the code a lot more readable.
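To illustrate the greppability point, here is a tiny stand-alone model of
the calling convention the series ends up with.  It is purely illustrative
(made-up helper names, not the kernel code), just a sketch of why a named
flag beats positional bools at the call site:
#include <stdio.h>
#define BLK_MQ_INSERT_AT_HEAD	(1U << 0)
struct request { int id; };
/* old style: positional bools, unreadable and ungreppable at call sites */
static void insert_old(struct request *rq, int at_head, int run_queue, int async)
{
	printf("rq %d: at_head=%d run_queue=%d async=%d\n",
	       rq->id, at_head, run_queue, async);
}
/* new style: one flags word with a named, greppable bit */
static void insert_new(struct request *rq, unsigned int flags)
{
	printf("rq %d: at_head=%d\n", rq->id, !!(flags & BLK_MQ_INSERT_AT_HEAD));
}
int main(void)
{
	struct request rq = { .id = 1 };
	insert_old(&rq, 1, 0, 0);		/* which bool was at_head again? */
	insert_new(&rq, BLK_MQ_INSERT_AT_HEAD);	/* self-documenting */
	return 0;
}
Grepping for BLK_MQ_INSERT_AT_HEAD then finds every head insertion, which
is exactly what was hard to do with the bools.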
Diffstat:
bfq-iosched.c | 16 +--
blk-flush.c | 11 --
blk-mq-sched.c | 110 ---------------------
blk-mq-sched.h | 6 -
blk-mq.c | 283 ++++++++++++++++++++++++++++++++++----------------------
blk-mq.h | 11 --
elevator.h | 3
kyber-iosched.c | 5
mq-deadline.c | 11 +-
9 files changed, 200 insertions(+), 256 deletions(-)
* [PATCH 01/16] blk-mq: don't plug for head insertions in blk_execute_rq_nowait
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:41 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 02/16] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
` (14 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Plugs never insert at head, so don't plug for head insertions.
Fixes: 1c2d2fff6dc0 ("block: wire-up support for passthrough plugging")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 52f8e0099c7f4b..7908d19f140815 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1299,7 +1299,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
* device, directly accessing the plug instead of using blk_mq_plug()
* should not have any consequences.
*/
- if (current->plug)
+ if (current->plug && !at_head)
blk_add_rq_to_plug(current->plug, rq);
else
blk_mq_sched_insert_request(rq, at_head, true, false);
--
2.39.2
* [PATCH 02/16] blk-mq: move more logic into blk_mq_insert_requests
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
2023-04-11 13:33 ` [PATCH 01/16] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:44 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 03/16] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
` (13 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Move all logic related to the direct insert into blk_mq_insert_requests
to clean the code flow up a bit, and to allow marking
blk_mq_try_issue_list_directly static.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq-sched.c | 17 ++---------------
block/blk-mq.c | 20 ++++++++++++++++++--
block/blk-mq.h | 4 +---
3 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 06b312c691143f..7c7de9b94aed4a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -474,23 +474,10 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
e = hctx->queue->elevator;
if (e) {
e->type->ops.insert_requests(hctx, list, false);
+ blk_mq_run_hw_queue(hctx, run_queue_async);
} else {
- /*
- * try to issue requests directly if the hw queue isn't
- * busy in case of 'none' scheduler, and this way may save
- * us one extra enqueue & dequeue to sw queue.
- */
- if (!hctx->dispatch_busy && !run_queue_async) {
- blk_mq_run_dispatch_ops(hctx->queue,
- blk_mq_try_issue_list_directly(hctx, list));
- if (list_empty(list))
- goto out;
- }
- blk_mq_insert_requests(hctx, ctx, list);
+ blk_mq_insert_requests(hctx, ctx, list, run_queue_async);
}
-
- blk_mq_run_hw_queue(hctx, run_queue_async);
- out:
percpu_ref_put(&q->q_usage_counter);
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7908d19f140815..be06fbfe879420 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -46,6 +46,9 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list);
+
static inline struct blk_mq_hw_ctx *blk_qc_to_hctx(struct request_queue *q,
blk_qc_t qc)
{
@@ -2497,12 +2500,23 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
}
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list)
+ struct list_head *list, bool run_queue_async)
{
struct request *rq;
enum hctx_type type = hctx->type;
+ /*
+ * Try to issue requests directly if the hw queue isn't busy to save an
+ * extra enqueue & dequeue to the sw queue.
+ */
+ if (!hctx->dispatch_busy && !run_queue_async) {
+ blk_mq_run_dispatch_ops(hctx->queue,
+ blk_mq_try_issue_list_directly(hctx, list));
+ if (list_empty(list))
+ goto out;
+ }
+
/*
* preemption doesn't flush plug list, so it's possible ctx->cpu is
* offline now
@@ -2516,6 +2530,8 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
list_splice_tail_init(list, &ctx->rq_lists[type]);
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
+out:
+ blk_mq_run_hw_queue(hctx, run_queue_async);
}
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2757,7 +2773,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
} while (!rq_list_empty(plug->mq_list));
}
-void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list)
{
int queued = 0;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ef59fee62780d3..89fc2bf6cb0510 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -64,9 +64,7 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list);
-void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
- struct list_head *list);
+ struct list_head *list, bool run_queue_async);
/*
* CPU -> queue mappings
--
2.39.2
* [PATCH 03/16] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
2023-04-11 13:33 ` [PATCH 01/16] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
2023-04-11 13:33 ` [PATCH 02/16] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:45 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 04/16] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
` (12 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_dispatch_plug_list is the only caller of
blk_mq_sched_insert_requests, and it makes sense to just fold it there
as blk_mq_sched_insert_requests isn't specific to I/O schedulers despite
the name.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq-sched.c | 24 ------------------------
block/blk-mq-sched.h | 3 ---
block/blk-mq.c | 17 +++++++++++++----
block/blk-mq.h | 2 --
block/mq-deadline.c | 2 +-
5 files changed, 14 insertions(+), 34 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 7c7de9b94aed4a..2fa8e7cb4866aa 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -457,30 +457,6 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
blk_mq_run_hw_queue(hctx, async);
}
-void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async)
-{
- struct elevator_queue *e;
- struct request_queue *q = hctx->queue;
-
- /*
- * blk_mq_sched_insert_requests() is called from flush plug
- * context only, and hold one usage counter to prevent queue
- * from being released.
- */
- percpu_ref_get(&q->q_usage_counter);
-
- e = hctx->queue->elevator;
- if (e) {
- e->type->ops.insert_requests(hctx, list, false);
- blk_mq_run_hw_queue(hctx, run_queue_async);
- } else {
- blk_mq_insert_requests(hctx, ctx, list, run_queue_async);
- }
- percpu_ref_put(&q->q_usage_counter);
-}
-
static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 0250139724539a..b25ad6ce41e95c 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -19,9 +19,6 @@ void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
void blk_mq_sched_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async);
-void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async);
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index be06fbfe879420..6ee05416a93f6c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2499,9 +2499,9 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
blk_mq_run_hw_queue(hctx, false);
}
-void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async)
-
+static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
+ struct blk_mq_ctx *ctx, struct list_head *list,
+ bool run_queue_async)
{
struct request *rq;
enum hctx_type type = hctx->type;
@@ -2727,7 +2727,16 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
plug->mq_list = requeue_list;
trace_block_unplug(this_hctx->queue, depth, !from_sched);
- blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
+
+ percpu_ref_get(&this_hctx->queue->q_usage_counter);
+ if (this_hctx->queue->elevator) {
+ this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
+ &list, false);
+ blk_mq_run_hw_queue(this_hctx, from_sched);
+ } else {
+ blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
+ }
+ percpu_ref_put(&this_hctx->queue->q_usage_counter);
}
void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 89fc2bf6cb0510..192784836f8a83 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -63,8 +63,6 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head);
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
-void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async);
/*
* CPU -> queue mappings
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index f10c2a0d18d411..6065c93350f84f 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -822,7 +822,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
/*
- * Called from blk_mq_sched_insert_request() or blk_mq_sched_insert_requests().
+ * Called from blk_mq_sched_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
--
2.39.2
* [PATCH 04/16] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (2 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 03/16] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:46 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 05/16] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
` (11 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_sched_insert_request is the main request insert helper and not
directly I/O scheduler related. Move blk_mq_sched_insert_request to
blk-mq.c, rename it to blk_mq_insert_request and mark it static.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq-sched.c | 73 -------------------------------------
block/blk-mq-sched.h | 3 --
block/blk-mq.c | 87 +++++++++++++++++++++++++++++++++++++++++---
block/mq-deadline.c | 2 +-
4 files changed, 82 insertions(+), 83 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 2fa8e7cb4866aa..b12dbccc031184 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -384,79 +384,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
}
EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
-static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
- struct request *rq)
-{
- /*
- * dispatch flush and passthrough rq directly
- *
- * passthrough request has to be added to hctx->dispatch directly.
- * For some reason, device may be in one situation which can't
- * handle FS request, so STS_RESOURCE is always returned and the
- * FS request will be added to hctx->dispatch. However passthrough
- * request may be required at that time for fixing the problem. If
- * passthrough request is added to scheduler queue, there isn't any
- * chance to dispatch it given we prioritize requests in hctx->dispatch.
- */
- if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
- return true;
-
- return false;
-}
-
-void blk_mq_sched_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async)
-{
- struct request_queue *q = rq->q;
- struct elevator_queue *e = q->elevator;
- struct blk_mq_ctx *ctx = rq->mq_ctx;
- struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
-
- WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
-
- if (blk_mq_sched_bypass_insert(hctx, rq)) {
- /*
- * Firstly normal IO request is inserted to scheduler queue or
- * sw queue, meantime we add flush request to dispatch queue(
- * hctx->dispatch) directly and there is at most one in-flight
- * flush request for each hw queue, so it doesn't matter to add
- * flush request to tail or front of the dispatch queue.
- *
- * Secondly in case of NCQ, flush request belongs to non-NCQ
- * command, and queueing it will fail when there is any
- * in-flight normal IO request(NCQ command). When adding flush
- * rq to the front of hctx->dispatch, it is easier to introduce
- * extra time to flush rq's latency because of S_SCHED_RESTART
- * compared with adding to the tail of dispatch queue, then
- * chance of flush merge is increased, and less flush requests
- * will be issued to controller. It is observed that ~10% time
- * is saved in blktests block/004 on disk attached to AHCI/NCQ
- * drive when adding flush rq to the front of hctx->dispatch.
- *
- * Simply queue flush rq to the front of hctx->dispatch so that
- * intensive flush workloads can benefit in case of NCQ HW.
- */
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
- LIST_HEAD(list);
-
- list_add(&rq->queuelist, &list);
- e->type->ops.insert_requests(hctx, &list, at_head);
- } else {
- spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
- spin_unlock(&ctx->lock);
- }
-
-run:
- if (run_queue)
- blk_mq_run_hw_queue(hctx, async);
-}
-
static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index b25ad6ce41e95c..11bc04af4a08ad 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -17,9 +17,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx);
void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
-void blk_mq_sched_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async);
-
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6ee05416a93f6c..05be4ae4fc0dba 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -46,6 +46,8 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+static void blk_mq_insert_request(struct request *rq, bool at_head,
+ bool run_queue, bool async);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1305,7 +1307,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
if (current->plug && !at_head)
blk_add_rq_to_plug(current->plug, rq);
else
- blk_mq_sched_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head, true, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1366,7 +1368,7 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_sched_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head, true, false);
if (blk_rq_is_poll(rq)) {
blk_rq_poll_completion(rq, &wait.done);
@@ -1440,13 +1442,13 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP)
blk_mq_request_bypass_insert(rq, false, false);
else
- blk_mq_sched_insert_request(rq, true, false, false);
+ blk_mq_insert_request(rq, true, false, false);
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_sched_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false, false, false);
}
blk_mq_run_hw_queues(q, false);
@@ -2534,6 +2536,79 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
+static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
+ struct request *rq)
+{
+ /*
+ * dispatch flush and passthrough rq directly
+ *
+ * passthrough request has to be added to hctx->dispatch directly.
+ * For some reason, device may be in one situation which can't
+ * handle FS request, so STS_RESOURCE is always returned and the
+ * FS request will be added to hctx->dispatch. However passthrough
+ * request may be required at that time for fixing the problem. If
+ * passthrough request is added to scheduler queue, there isn't any
+ * chance to dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
+ return true;
+
+ return false;
+}
+
+static void blk_mq_insert_request(struct request *rq, bool at_head,
+ bool run_queue, bool async)
+{
+ struct request_queue *q = rq->q;
+ struct elevator_queue *e = q->elevator;
+ struct blk_mq_ctx *ctx = rq->mq_ctx;
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+ WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
+
+ if (blk_mq_sched_bypass_insert(hctx, rq)) {
+ /*
+ * Firstly normal IO request is inserted to scheduler queue or
+ * sw queue, meantime we add flush request to dispatch queue(
+ * hctx->dispatch) directly and there is at most one in-flight
+ * flush request for each hw queue, so it doesn't matter to add
+ * flush request to tail or front of the dispatch queue.
+ *
+ * Secondly in case of NCQ, flush request belongs to non-NCQ
+ * command, and queueing it will fail when there is any
+ * in-flight normal IO request(NCQ command). When adding flush
+ * rq to the front of hctx->dispatch, it is easier to introduce
+ * extra time to flush rq's latency because of S_SCHED_RESTART
+ * compared with adding to the tail of dispatch queue, then
+ * chance of flush merge is increased, and less flush requests
+ * will be issued to controller. It is observed that ~10% time
+ * is saved in blktests block/004 on disk attached to AHCI/NCQ
+ * drive when adding flush rq to the front of hctx->dispatch.
+ *
+ * Simply queue flush rq to the front of hctx->dispatch so that
+ * intensive flush workloads can benefit in case of NCQ HW.
+ */
+ at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
+ blk_mq_request_bypass_insert(rq, at_head, false);
+ goto run;
+ }
+
+ if (e) {
+ LIST_HEAD(list);
+
+ list_add(&rq->queuelist, &list);
+ e->type->ops.insert_requests(hctx, &list, at_head);
+ } else {
+ spin_lock(&ctx->lock);
+ __blk_mq_insert_request(hctx, rq, at_head);
+ spin_unlock(&ctx->lock);
+ }
+
+run:
+ if (run_queue)
+ blk_mq_run_hw_queue(hctx, async);
+}
+
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
unsigned int nr_segs)
{
@@ -2625,7 +2700,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if (bypass_insert)
return BLK_STS_RESOURCE;
- blk_mq_sched_insert_request(rq, false, run_queue, false);
+ blk_mq_insert_request(rq, false, run_queue, false);
return BLK_STS_OK;
}
@@ -2977,7 +3052,7 @@ void blk_mq_submit_bio(struct bio *bio)
else if ((rq->rq_flags & RQF_ELV) ||
(rq->mq_hctx->dispatch_busy &&
(q->nr_hw_queues == 1 || !is_sync)))
- blk_mq_sched_insert_request(rq, false, true, true);
+ blk_mq_insert_request(rq, false, true, true);
else
blk_mq_run_dispatch_ops(rq->q,
blk_mq_try_issue_directly(rq->mq_hctx, rq));
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6065c93350f84f..6f7a86a70dafb4 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -822,7 +822,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
/*
- * Called from blk_mq_sched_insert_request() or blk_mq_dispatch_plug_list().
+ * Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
--
2.39.2
* [PATCH 05/16] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (3 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 04/16] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:49 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 06/16] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
` (10 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
There is no good point in keeping __blk_mq_insert_request around
for two function calls and a single caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 14 ++------------
block/blk-mq.h | 2 --
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 05be4ae4fc0dba..4f0ecd0561e48f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2465,17 +2465,6 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
}
-void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
-{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
-
- lockdep_assert_held(&ctx->lock);
-
- __blk_mq_insert_req_list(hctx, rq, at_head);
- blk_mq_hctx_mark_pending(hctx, ctx);
-}
-
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
@@ -2600,7 +2589,8 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
e->type->ops.insert_requests(hctx, &list, at_head);
} else {
spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
+ __blk_mq_insert_req_list(hctx, rq, at_head);
+ blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 192784836f8a83..c2aec5cbfa7663 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -59,8 +59,6 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head);
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
--
2.39.2
* [PATCH 06/16] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (4 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 05/16] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:18 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 07/16] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
` (9 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Remove this very small helper and fold it into the only caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 25 +++++++------------------
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4f0ecd0561e48f..438d0eca69bd81 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2448,23 +2448,6 @@ static void blk_mq_run_work_fn(struct work_struct *work)
__blk_mq_run_hw_queue(hctx);
}
-static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
- struct request *rq,
- bool at_head)
-{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
- enum hctx_type type = hctx->type;
-
- lockdep_assert_held(&ctx->lock);
-
- trace_block_rq_insert(rq);
-
- if (at_head)
- list_add(&rq->queuelist, &ctx->rq_lists[type]);
- else
- list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
-}
-
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
@@ -2588,8 +2571,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
list_add(&rq->queuelist, &list);
e->type->ops.insert_requests(hctx, &list, at_head);
} else {
+ trace_block_rq_insert(rq);
+
spin_lock(&ctx->lock);
- __blk_mq_insert_req_list(hctx, rq, at_head);
+ if (at_head)
+ list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
+ else
+ list_add_tail(&rq->queuelist,
+ &ctx->rq_lists[hctx->type]);
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
--
2.39.2
* [PATCH 07/16] blk-mq: remove blk_flush_queue_rq
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (5 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 06/16] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:51 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 08/16] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
` (8 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Just call blk_mq_add_to_requeue_list directly from the two callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 53202eff545efb..cbb5b069809117 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -138,11 +138,6 @@ static void blk_flush_restore_request(struct request *rq)
rq->end_io = rq->flush.saved_end_io;
}
-static void blk_flush_queue_rq(struct request *rq, bool add_front)
-{
- blk_mq_add_to_requeue_list(rq, add_front, true);
-}
-
static void blk_account_io_flush(struct request *rq)
{
struct block_device *part = rq->q->disk->part0;
@@ -195,7 +190,7 @@ static void blk_flush_complete_seq(struct request *rq,
case REQ_FSEQ_DATA:
list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
- blk_flush_queue_rq(rq, true);
+ blk_mq_add_to_requeue_list(rq, true, true);
break;
case REQ_FSEQ_DONE:
@@ -352,7 +347,7 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
smp_wmb();
req_ref_set(flush_rq, 1);
- blk_flush_queue_rq(flush_rq, false);
+ blk_mq_add_to_requeue_list(flush_rq, false, true);
}
static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
--
2.39.2
* [PATCH 08/16] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (6 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 07/16] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:54 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 09/16] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
` (7 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
While both passthrough and flush requests call directly into
blk_mq_request_bypass_insert, the parameters aren't the same.
Split the handling into two separate conditionals and turn the whole
function into an if/elif/elif/else flow instead of the gotos.
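For readers following along, the resulting shape of the function is roughly
the chain below.  This is a stand-alone sketch with made-up names, not the
patched kernel code; it only models the control flow:
#include <stdbool.h>
#include <stdio.h>
enum rq_kind { RQ_PASSTHROUGH, RQ_FLUSH_SEQ, RQ_FS_ELEVATOR, RQ_FS_PLAIN };
static void dispatch_insert(bool at_head) { printf("hctx->dispatch, at_head=%d\n", at_head); }
static void elevator_insert(bool at_head) { printf("elevator, at_head=%d\n", at_head); }
static void sw_queue_insert(bool at_head) { printf("sw queue, at_head=%d\n", at_head); }
/* one branch per case instead of a shared bypass helper plus goto */
static void insert_request(enum rq_kind kind, bool at_head)
{
	if (kind == RQ_PASSTHROUGH)
		dispatch_insert(at_head);	/* keep the caller's position */
	else if (kind == RQ_FLUSH_SEQ)
		dispatch_insert(true);		/* flush sequence always goes to the front */
	else if (kind == RQ_FS_ELEVATOR)
		elevator_insert(at_head);
	else
		sw_queue_insert(at_head);
}
int main(void)
{
	insert_request(RQ_PASSTHROUGH, false);
	insert_request(RQ_FLUSH_SEQ, false);
	insert_request(RQ_FS_PLAIN, true);
	return 0;
}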
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 50 ++++++++++++++++++--------------------------------
1 file changed, 18 insertions(+), 32 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 438d0eca69bd81..4bd7736c173bc8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2508,37 +2508,26 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
- struct request *rq)
-{
- /*
- * dispatch flush and passthrough rq directly
- *
- * passthrough request has to be added to hctx->dispatch directly.
- * For some reason, device may be in one situation which can't
- * handle FS request, so STS_RESOURCE is always returned and the
- * FS request will be added to hctx->dispatch. However passthrough
- * request may be required at that time for fixing the problem. If
- * passthrough request is added to scheduler queue, there isn't any
- * chance to dispatch it given we prioritize requests in hctx->dispatch.
- */
- if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
- return true;
-
- return false;
-}
-
static void blk_mq_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async)
{
struct request_queue *q = rq->q;
- struct elevator_queue *e = q->elevator;
struct blk_mq_ctx *ctx = rq->mq_ctx;
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
- WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
-
- if (blk_mq_sched_bypass_insert(hctx, rq)) {
+ if (blk_rq_is_passthrough(rq)) {
+ /*
+ * Passthrough request have to be added to hctx->dispatch
+ * directly. The device may be in a situation where it can't
+ * handle FS request, and always returns BLK_STS_RESOURCE for
+ * them, which gets them added to hctx->dispatch.
+ *
+ * If a passthrough request is required to unblock the queues,
+ * and it is added to the scheduler queue, there is no chance to
+ * dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ blk_mq_request_bypass_insert(rq, at_head, false);
+ } else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
* sw queue, meantime we add flush request to dispatch queue(
@@ -2560,16 +2549,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
+ blk_mq_request_bypass_insert(rq, true, false);
+ } else if (q->elevator) {
LIST_HEAD(list);
+ WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
+
list_add(&rq->queuelist, &list);
- e->type->ops.insert_requests(hctx, &list, at_head);
+ q->elevator->type->ops.insert_requests(hctx, &list, at_head);
} else {
trace_block_rq_insert(rq);
@@ -2583,7 +2570,6 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
spin_unlock(&ctx->lock);
}
-run:
if (run_queue)
blk_mq_run_hw_queue(hctx, async);
}
--
2.39.2
* [PATCH 09/16] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (7 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 08/16] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:56 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 10/16] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
` (6 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Split the RQF_DONTPREP and RQF_SOFTBARRIER handling into separate
branches to make the code more readable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4bd7736c173bc8..a4fc6e68c3a66b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1429,20 +1429,20 @@ static void blk_mq_requeue_work(struct work_struct *work)
spin_unlock_irq(&q->requeue_lock);
list_for_each_entry_safe(rq, next, &rq_list, queuelist) {
- if (!(rq->rq_flags & (RQF_SOFTBARRIER | RQF_DONTPREP)))
- continue;
-
- rq->rq_flags &= ~RQF_SOFTBARRIER;
- list_del_init(&rq->queuelist);
/*
* If RQF_DONTPREP, rq has contained some driver specific
* data, so insert it to hctx dispatch list to avoid any
* merge.
*/
- if (rq->rq_flags & RQF_DONTPREP)
+ if (rq->rq_flags & RQF_DONTPREP) {
+ rq->rq_flags &= ~RQF_SOFTBARRIER;
+ list_del_init(&rq->queuelist);
blk_mq_request_bypass_insert(rq, false, false);
- else
+ } else if (rq->rq_flags & RQF_SOFTBARRIER) {
+ rq->rq_flags &= ~RQF_SOFTBARRIER;
+ list_del_init(&rq->queuelist);
blk_mq_insert_request(rq, true, false, false);
+ }
}
while (!list_empty(&rq_list)) {
--
2.39.2
* [PATCH 10/16] blk-mq: factor out a blk_mq_get_budget_and_tag helper
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (8 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 09/16] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 17:57 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 11/16] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
` (5 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Factor out a helper from __blk_mq_try_issue_directly in preparation
for folding that function into its two callers.
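The helper implements a small acquire-with-rollback pattern: take the
dispatch budget first, and give it back if no driver tag can be obtained,
so callers see an all-or-nothing result.  A stand-alone model of that
pattern (plain counters instead of the real budget/tag machinery, all
names made up):
#include <stdbool.h>
#include <stdio.h>
static int budget = 1, tags;
static int get_budget(void)
{
	if (budget <= 0)
		return -1;
	budget--;
	return 0;			/* the "budget token" */
}
static void put_budget(void)
{
	budget++;
}
static bool get_driver_tag(void)
{
	if (tags <= 0)
		return false;
	tags--;
	return true;
}
/* all-or-nothing: never leak the budget when the tag allocation fails */
static bool get_budget_and_tag(void)
{
	if (get_budget() < 0)
		return false;
	if (!get_driver_tag()) {
		put_budget();		/* roll back on tag failure */
		return false;
	}
	return true;
}
int main(void)
{
	printf("issue directly: %s\n", get_budget_and_tag() ? "yes" : "no");
	printf("budget left: %d\n", budget);	/* still 1, thanks to the rollback */
	return 0;
}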
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a4fc6e68c3a66b..2995863c1680eb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2625,13 +2625,27 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
return ret;
}
+static bool blk_mq_get_budget_and_tag(struct request *rq)
+{
+ int budget_token;
+
+ budget_token = blk_mq_get_dispatch_budget(rq->q);
+ if (budget_token < 0)
+ return false;
+ blk_mq_set_rq_budget_token(rq, budget_token);
+ if (!blk_mq_get_driver_tag(rq)) {
+ blk_mq_put_dispatch_budget(rq->q, budget_token);
+ return false;
+ }
+ return true;
+}
+
static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq,
bool bypass_insert, bool last)
{
struct request_queue *q = rq->q;
bool run_queue = true;
- int budget_token;
/*
* RCU or SRCU read lock is needed before checking quiesced flag.
@@ -2649,16 +2663,8 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
goto insert;
- budget_token = blk_mq_get_dispatch_budget(q);
- if (budget_token < 0)
- goto insert;
-
- blk_mq_set_rq_budget_token(rq, budget_token);
-
- if (!blk_mq_get_driver_tag(rq)) {
- blk_mq_put_dispatch_budget(q, budget_token);
+ if (!blk_mq_get_budget_and_tag(rq))
goto insert;
- }
return __blk_mq_issue_directly(hctx, rq, last);
insert:
--
2.39.2
* [PATCH 11/16] blk-mq: fold __blk_mq_try_issue_directly into its two callers
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (9 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 10/16] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:04 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 12/16] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
` (4 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Due to the wildly different behavior based on the bypass_insert
argument, not a whole lot of code in __blk_mq_try_issue_directly is
actually shared between blk_mq_try_issue_directly and
blk_mq_request_issue_directly.  Remove __blk_mq_try_issue_directly and
fold the code into the two callers instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 72 ++++++++++++++++++++++----------------------------
1 file changed, 31 insertions(+), 41 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2995863c1680eb..68e45d5a6868b3 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2640,42 +2640,6 @@ static bool blk_mq_get_budget_and_tag(struct request *rq)
return true;
}
-static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
- struct request *rq,
- bool bypass_insert, bool last)
-{
- struct request_queue *q = rq->q;
- bool run_queue = true;
-
- /*
- * RCU or SRCU read lock is needed before checking quiesced flag.
- *
- * When queue is stopped or quiesced, ignore 'bypass_insert' from
- * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
- * and avoid driver to try to dispatch again.
- */
- if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
- run_queue = false;
- bypass_insert = false;
- goto insert;
- }
-
- if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
- goto insert;
-
- if (!blk_mq_get_budget_and_tag(rq))
- goto insert;
-
- return __blk_mq_issue_directly(hctx, rq, last);
-insert:
- if (bypass_insert)
- return BLK_STS_RESOURCE;
-
- blk_mq_insert_request(rq, false, run_queue, false);
-
- return BLK_STS_OK;
-}
-
/**
* blk_mq_try_issue_directly - Try to send a request directly to device driver.
* @hctx: Pointer of the associated hardware queue.
@@ -2689,18 +2653,44 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq)
{
- blk_status_t ret =
- __blk_mq_try_issue_directly(hctx, rq, false, true);
+ blk_status_t ret;
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, false, false, false);
+ return;
+ }
+
+ if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
+ blk_mq_insert_request(rq, false, true, false);
+ return;
+ }
- if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
+ ret = __blk_mq_issue_directly(hctx, rq, true);
+ switch (ret) {
+ case BLK_STS_OK:
+ break;
+ case BLK_STS_RESOURCE:
+ case BLK_STS_DEV_RESOURCE:
blk_mq_request_bypass_insert(rq, false, true);
- else if (ret != BLK_STS_OK)
+ break;
+ default:
blk_mq_end_request(rq, ret);
+ break;
+ }
}
static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
{
- return __blk_mq_try_issue_directly(rq->mq_hctx, rq, true, last);
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, false, false, false);
+ return BLK_STS_OK;
+ }
+
+ if (!blk_mq_get_budget_and_tag(rq))
+ return BLK_STS_RESOURCE;
+ return __blk_mq_issue_directly(hctx, rq, last);
}
static void blk_mq_plug_issue_direct(struct blk_plug *plug)
--
2.39.2
* [PATCH 12/16] blk-mq: don't run the hw_queue from blk_mq_insert_request
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (10 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 11/16] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:07 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 13/16] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
` (3 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_insert_request takes two bool parameters to control how to run
the queue at the end of the function. Move the blk_mq_run_hw_queue call
to the callers that want it instead.
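The effect on callers is a simple two-step calling convention: insert, then
run the hardware queue only where it is actually wanted.  A stand-alone
sketch of that convention (made-up names, not the kernel API):
#include <stdbool.h>
#include <stdio.h>
static void insert_request(int rq)
{
	printf("inserted rq %d\n", rq);
}
static void run_hw_queue(bool async)
{
	printf("run hw queue, async=%d\n", async);
}
int main(void)
{
	/* synchronous execute path: insert and kick the queue right away */
	insert_request(1);
	run_hw_queue(false);
	/* requeue-work style: batch the inserts, run the queue once later */
	insert_request(2);
	insert_request(3);
	run_hw_queue(true);
	return 0;
}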
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 56 ++++++++++++++++++++++++++++----------------------
1 file changed, 32 insertions(+), 24 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 68e45d5a6868b3..4cec6bae15df6b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -46,8 +46,7 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
-static void blk_mq_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async);
+static void blk_mq_insert_request(struct request *rq, bool at_head);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1294,6 +1293,8 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
*/
void blk_execute_rq_nowait(struct request *rq, bool at_head)
{
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
WARN_ON(irqs_disabled());
WARN_ON(!blk_rq_is_passthrough(rq));
@@ -1304,10 +1305,13 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
* device, directly accessing the plug instead of using blk_mq_plug()
* should not have any consequences.
*/
- if (current->plug && !at_head)
+ if (current->plug && !at_head) {
blk_add_rq_to_plug(current->plug, rq);
- else
- blk_mq_insert_request(rq, at_head, true, false);
+ return;
+ }
+
+ blk_mq_insert_request(rq, at_head);
+ blk_mq_run_hw_queue(hctx, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1357,6 +1361,7 @@ static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
*/
blk_status_t blk_execute_rq(struct request *rq, bool at_head)
{
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
struct blk_rq_wait wait = {
.done = COMPLETION_INITIALIZER_ONSTACK(wait.done),
};
@@ -1368,7 +1373,8 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head);
+ blk_mq_run_hw_queue(hctx, false);
if (blk_rq_is_poll(rq)) {
blk_rq_poll_completion(rq, &wait.done);
@@ -1441,14 +1447,14 @@ static void blk_mq_requeue_work(struct work_struct *work)
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, true, false, false);
+ blk_mq_insert_request(rq, true);
}
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
}
blk_mq_run_hw_queues(q, false);
@@ -2508,8 +2514,7 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static void blk_mq_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async)
+static void blk_mq_insert_request(struct request *rq, bool at_head)
{
struct request_queue *q = rq->q;
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -2569,9 +2574,6 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, async);
}
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2656,12 +2658,13 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
blk_status_t ret;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
return;
}
if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
- blk_mq_insert_request(rq, false, true, false);
+ blk_mq_insert_request(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
return;
}
@@ -2684,7 +2687,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
return BLK_STS_OK;
}
@@ -2964,6 +2967,7 @@ void blk_mq_submit_bio(struct bio *bio)
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
struct blk_plug *plug = blk_mq_plug(bio);
const int is_sync = op_is_sync(bio->bi_opf);
+ struct blk_mq_hw_ctx *hctx;
struct request *rq;
unsigned int nr_segs = 1;
blk_status_t ret;
@@ -3008,15 +3012,19 @@ void blk_mq_submit_bio(struct bio *bio)
return;
}
- if (plug)
+ if (plug) {
blk_add_rq_to_plug(plug, rq);
- else if ((rq->rq_flags & RQF_ELV) ||
- (rq->mq_hctx->dispatch_busy &&
- (q->nr_hw_queues == 1 || !is_sync)))
- blk_mq_insert_request(rq, false, true, true);
- else
- blk_mq_run_dispatch_ops(rq->q,
- blk_mq_try_issue_directly(rq->mq_hctx, rq));
+ return;
+ }
+
+ hctx = rq->mq_hctx;
+ if ((rq->rq_flags & RQF_ELV) ||
+ (hctx->dispatch_busy && (q->nr_hw_queues == 1 || !is_sync))) {
+ blk_mq_insert_request(rq, false);
+ blk_mq_run_hw_queue(hctx, true);
+ } else {
+ blk_mq_run_dispatch_ops(q, blk_mq_try_issue_directly(hctx, rq));
+ }
}
#ifdef CONFIG_BLK_MQ_STACKING
--
2.39.2
* [PATCH 13/16] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (11 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 12/16] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:09 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 14/16] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
` (2 subsequent siblings)
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_request_bypass_insert takes a bool parameter to control how to run
the queue at the end of the function. Move the blk_mq_run_hw_queue call
to the callers that want it instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 2 +-
block/blk-mq.c | 24 +++++++++++-------------
block/blk-mq.h | 3 +--
3 files changed, 13 insertions(+), 16 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index cbb5b069809117..5c0d06945c435a 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -427,7 +427,7 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
return;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4cec6bae15df6b..edc82ecf7f5b77 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1443,7 +1443,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_request_bypass_insert(rq, false, false);
+ blk_mq_request_bypass_insert(rq, false);
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
@@ -2458,13 +2458,11 @@ static void blk_mq_run_work_fn(struct work_struct *work)
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
* @at_head: true if the request should be inserted at the head of the list.
- * @run_queue: If we should run the hardware queue after inserting the request.
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue)
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
@@ -2474,9 +2472,6 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
else
list_add_tail(&rq->queuelist, &hctx->dispatch);
spin_unlock(&hctx->lock);
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, false);
}
static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
@@ -2531,7 +2526,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, at_head, false);
+ blk_mq_request_bypass_insert(rq, at_head);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2554,7 +2549,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- blk_mq_request_bypass_insert(rq, true, false);
+ blk_mq_request_bypass_insert(rq, true);
} else if (q->elevator) {
LIST_HEAD(list);
@@ -2674,7 +2669,8 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
break;
default:
blk_mq_end_request(rq, ret);
@@ -2721,7 +2717,8 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
goto out;
default:
blk_mq_end_request(rq, ret);
@@ -2839,8 +2836,9 @@ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false,
- list_empty(list));
+ blk_mq_request_bypass_insert(rq, false);
+ if (list_empty(list))
+ blk_mq_run_hw_queue(hctx, false);
goto out;
default:
blk_mq_end_request(rq, ret);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index c2aec5cbfa7663..cc17e942753117 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -59,8 +59,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue);
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head);
/*
* CPU -> queue mappings
--
2.39.2
* [PATCH 14/16] blk-mq: pass a flags argument to blk_mq_insert_request
2023-04-11 13:33 cleanup request insertion parameters Christoph Hellwig
` (12 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 13/16] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:11 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 15/16] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
2023-04-11 13:33 ` [PATCH 16/16] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Replace the at_head bool with a flags argument that so far only contains
a single BLK_MQ_INSERT_AT_HEAD value. This makes it much easier to grep
for head insertions into the blk-mq dispatch queues.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 19 ++++++++++---------
block/blk-mq.h | 2 ++
2 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index edc82ecf7f5b77..5a4ae0e4080d45 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -46,7 +46,7 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
-static void blk_mq_insert_request(struct request *rq, bool at_head);
+static void blk_mq_insert_request(struct request *rq, unsigned int flags);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1310,7 +1310,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
return;
}
- blk_mq_insert_request(rq, at_head);
+ blk_mq_insert_request(rq, at_head ? BLK_MQ_INSERT_AT_HEAD : 0);
blk_mq_run_hw_queue(hctx, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1373,7 +1373,7 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_insert_request(rq, at_head);
+ blk_mq_insert_request(rq, at_head ? BLK_MQ_INSERT_AT_HEAD : 0);
blk_mq_run_hw_queue(hctx, false);
if (blk_rq_is_poll(rq)) {
@@ -1447,14 +1447,14 @@ static void blk_mq_requeue_work(struct work_struct *work)
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, true);
+ blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
}
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
}
blk_mq_run_hw_queues(q, false);
@@ -2509,7 +2509,7 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static void blk_mq_insert_request(struct request *rq, bool at_head)
+static void blk_mq_insert_request(struct request *rq, unsigned int flags)
{
struct request_queue *q = rq->q;
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -2526,7 +2526,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, at_head);
+ blk_mq_request_bypass_insert(rq, flags & BLK_MQ_INSERT_AT_HEAD);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2556,12 +2556,13 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
list_add(&rq->queuelist, &list);
- q->elevator->type->ops.insert_requests(hctx, &list, at_head);
+ q->elevator->type->ops.insert_requests(hctx, &list,
+ flags & BLK_MQ_INSERT_AT_HEAD);
} else {
trace_block_rq_insert(rq);
spin_lock(&ctx->lock);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
else
list_add_tail(&rq->queuelist,
diff --git a/block/blk-mq.h b/block/blk-mq.h
index cc17e942753117..ab2f4bfa0de6a4 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -30,6 +30,8 @@ struct blk_mq_ctx {
struct kobject kobj;
} ____cacheline_aligned_in_smp;
+#define BLK_MQ_INSERT_AT_HEAD (1U << 0)
+
void blk_mq_submit_bio(struct bio *bio);
int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
unsigned int flags);
--
2.39.2
* [PATCH 15/16] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
2023-04-11 13:33 cleanup request insertation parameters Christoph Hellwig
` (13 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 14/16] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:16 ` Bart Van Assche
2023-04-11 13:33 ` [PATCH 16/16] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Replace the two boolean arguments with the same flags that are already
passed to blk_mq_insert_request. Also add the currently unused
BLK_MQ_INSERT_ASYNC support so that the flags support is complete.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 2 +-
block/blk-mq.c | 18 +++++++++---------
block/blk-mq.h | 2 +-
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 5c0d06945c435a..3588a2810820d9 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -427,7 +427,7 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
return;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5a4ae0e4080d45..c0330879aff54a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1443,7 +1443,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
@@ -2457,17 +2457,17 @@ static void blk_mq_run_work_fn(struct work_struct *work)
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
- * @at_head: true if the request should be inserted at the head of the list.
+ * @flags: BLK_MQ_INSERT_*
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head)
+void blk_mq_request_bypass_insert(struct request *rq, unsigned int flags)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
spin_lock(&hctx->lock);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_add(&rq->queuelist, &hctx->dispatch);
else
list_add_tail(&rq->queuelist, &hctx->dispatch);
@@ -2526,7 +2526,7 @@ static void blk_mq_insert_request(struct request *rq, unsigned int flags)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, flags & BLK_MQ_INSERT_AT_HEAD);
+ blk_mq_request_bypass_insert(rq, flags);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2549,7 +2549,7 @@ static void blk_mq_insert_request(struct request *rq, unsigned int flags)
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- blk_mq_request_bypass_insert(rq, true);
+ blk_mq_request_bypass_insert(rq, BLK_MQ_INSERT_AT_HEAD);
} else if (q->elevator) {
LIST_HEAD(list);
@@ -2670,7 +2670,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
break;
default:
@@ -2718,7 +2718,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
goto out;
default:
@@ -2837,7 +2837,7 @@ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
if (list_empty(list))
blk_mq_run_hw_queue(hctx, false);
goto out;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ab2f4bfa0de6a4..e28b18d72caeea 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -61,7 +61,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head);
+void blk_mq_request_bypass_insert(struct request *rq, unsigned int flags);
/*
* CPU -> queue mappings
--
2.39.2
^ permalink raw reply related [flat|nested] 36+ messages in thread
* [PATCH 16/16] blk-mq: pass the flags argument to elevator_type->insert_requests
2023-04-11 13:33 cleanup request insertation parameters Christoph Hellwig
` (14 preceding siblings ...)
2023-04-11 13:33 ` [PATCH 15/16] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-11 13:33 ` Christoph Hellwig
2023-04-11 18:13 ` Bart Van Assche
15 siblings, 1 reply; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-11 13:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Instead of passing a bool at_head, pass down the full flags from the
blk_mq_insert_request interface.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/bfq-iosched.c | 16 ++++++++--------
block/blk-mq.c | 5 ++---
block/elevator.h | 3 ++-
block/kyber-iosched.c | 5 +++--
block/mq-deadline.c | 9 +++++----
5 files changed, 20 insertions(+), 18 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index d9ed3108c17af6..db1813e7a0dd72 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6232,7 +6232,7 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
static struct bfq_queue *bfq_init_rq(struct request *rq);
static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
+ unsigned int flags)
{
struct request_queue *q = hctx->queue;
struct bfq_data *bfqd = q->elevator->elevator_data;
@@ -6255,11 +6255,10 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
trace_block_rq_insert(rq);
- if (!bfqq || at_head) {
- if (at_head)
- list_add(&rq->queuelist, &bfqd->dispatch);
- else
- list_add_tail(&rq->queuelist, &bfqd->dispatch);
+ if (flags & BLK_MQ_INSERT_AT_HEAD) {
+ list_add(&rq->queuelist, &bfqd->dispatch);
+ } else if (!bfqq) {
+ list_add_tail(&rq->queuelist, &bfqd->dispatch);
} else {
idle_timer_disabled = __bfq_insert_request(bfqd, rq);
/*
@@ -6289,14 +6288,15 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *list, bool at_head)
+ struct list_head *list,
+ unsigned int flags)
{
while (!list_empty(list)) {
struct request *rq;
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
- bfq_insert_request(hctx, rq, at_head);
+ bfq_insert_request(hctx, rq, flags);
}
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c0330879aff54a..d6fe56e1aadee2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2556,8 +2556,7 @@ static void blk_mq_insert_request(struct request *rq, unsigned int flags)
WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
list_add(&rq->queuelist, &list);
- q->elevator->type->ops.insert_requests(hctx, &list,
- flags & BLK_MQ_INSERT_AT_HEAD);
+ q->elevator->type->ops.insert_requests(hctx, &list, flags);
} else {
trace_block_rq_insert(rq);
@@ -2768,7 +2767,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
percpu_ref_get(&this_hctx->queue->q_usage_counter);
if (this_hctx->queue->elevator) {
this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
- &list, false);
+ &list, 0);
blk_mq_run_hw_queue(this_hctx, from_sched);
} else {
blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
diff --git a/block/elevator.h b/block/elevator.h
index 774a8f6b99e69e..b80b13f505ad99 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -37,7 +37,8 @@ struct elevator_mq_ops {
void (*limit_depth)(blk_opf_t, struct blk_mq_alloc_data *);
void (*prepare_request)(struct request *);
void (*finish_request)(struct request *);
- void (*insert_requests)(struct blk_mq_hw_ctx *, struct list_head *, bool);
+ void (*insert_requests)(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ unsigned int flags);
struct request *(*dispatch_request)(struct blk_mq_hw_ctx *);
bool (*has_work)(struct blk_mq_hw_ctx *);
void (*completed_request)(struct request *, u64);
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 2146969237bfed..4d822d70b01f64 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -590,7 +590,8 @@ static void kyber_prepare_request(struct request *rq)
}
static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *rq_list, bool at_head)
+ struct list_head *rq_list,
+ unsigned int flags)
{
struct kyber_hctx_data *khd = hctx->sched_data;
struct request *rq, *next;
@@ -602,7 +603,7 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
spin_lock(&kcq->lock);
trace_block_rq_insert(rq);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_move(&rq->queuelist, head);
else
list_move_tail(&rq->queuelist, head);
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6f7a86a70dafb4..90cc4345146e6a 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -768,7 +768,7 @@ static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
* add rq to rbtree and fifo
*/
static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
+ unsigned int flags)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -801,7 +801,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
trace_block_rq_insert(rq);
- if (at_head) {
+ if (flags & BLK_MQ_INSERT_AT_HEAD) {
list_add(&rq->queuelist, &per_prio->dispatch);
rq->fifo_time = jiffies;
} else {
@@ -825,7 +825,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
* Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *list, bool at_head)
+ struct list_head *list,
+ unsigned int flags)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -836,7 +837,7 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
- dd_insert_request(hctx, rq, at_head);
+ dd_insert_request(hctx, rq, flags);
}
spin_unlock(&dd->lock);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH 01/16] blk-mq: don't plug for head insertations in blk_execute_rq_nowait
2023-04-11 13:33 ` [PATCH 01/16] blk-mq: don't plug for head insertations in blk_execute_rq_nowait Christoph Hellwig
@ 2023-04-11 17:41 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:41 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Plugs never insert at head, so don't plug for head insertations.
insertations -> insertions
Anyway:
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 02/16] blk-mq: move more logic into blk_mq_insert_requests
2023-04-11 13:33 ` [PATCH 02/16] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
@ 2023-04-11 17:44 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:44 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Move all logic related to the direct insert into blk_mq_insert_requests
> to clean the code flow up a bit, and to allow marking
> blk_mq_try_issue_list_directly static.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 03/16] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list
2023-04-11 13:33 ` [PATCH 03/16] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
@ 2023-04-11 17:45 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:45 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> blk_mq_dispatch_plug_list is the only caller of
> blk_mq_sched_insert_requests, and it makes sense to just fold it there
> as blk_mq_sched_insert_requests isn't specific to I/O scheudlers despite
> the name.
scheudlers -> schedulers
Anyway:
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 04/16] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
2023-04-11 13:33 ` [PATCH 04/16] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
@ 2023-04-11 17:46 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:46 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> blk_mq_sched_insert_request is the main request insert helper and not
> directly I/O scheduler related. Move blk_mq_sched_insert_request to
> blk-mq.c, rename it to blk_mq_insert_request and mark it static.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 05/16] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request
2023-04-11 13:33 ` [PATCH 05/16] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 17:49 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:49 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> There is no good point in keeping the __blk_mq_insert_request around
> for two function calls and a single caller.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 07/16] blk-mq: remove blk_flush_queue_rq
2023-04-11 13:33 ` [PATCH 07/16] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
@ 2023-04-11 17:51 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:51 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Just call blk_mq_add_to_requeue_list directly from the two callers.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 08/16] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request
2023-04-11 13:33 ` [PATCH 08/16] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 17:54 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:54 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> While both passthrough and flush requests call directly into
> blk_mq_request_bypass_insert, the parameters aren't the same.
> Split the handling into two separate conditionals and turn the whole
> function into an if/elif/elif/else flow instead of the gotos.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 09/16] blk-mq: refactor the DONTPREP/SOFTBARRIER andling in blk_mq_requeue_work
2023-04-11 13:33 ` [PATCH 09/16] blk-mq: refactor the DONTPREP/SOFTBARRIER andling in blk_mq_requeue_work Christoph Hellwig
@ 2023-04-11 17:56 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:56 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Split the RQF_DONTPREP and RQF_SOFTBARRIER in separate branches to make
> the code more readable.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 10/16] blk-mq: factor out a blk_mq_get_budget_and_tag helper
2023-04-11 13:33 ` [PATCH 10/16] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
@ 2023-04-11 17:57 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 17:57 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Factor out a helper from __blk_mq_try_issue_directly in preparation
> of folding that function into its two callers.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 11/16] blk-mq: fold __blk_mq_try_issue_directly into its two callers
2023-04-11 13:33 ` [PATCH 11/16] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
@ 2023-04-11 18:04 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:04 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Due the wildly different behavior beased on the bypass_insert
beased -> based
> argument not a whole lot of code in __blk_mq_try_issue_directly is
> actually shared between blk_mq_try_issue_directly and
> blk_mq_request_issue_directly. Remove __blk_mq_try_issue_directly and
> fold the code into the two callers instead.
Anyway:
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 12/16] blk-mq: don't run the hw_queue from blk_mq_insert_request
2023-04-11 13:33 ` [PATCH 12/16] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 18:07 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:07 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> blk_mq_insert_request takes two bool parameters to control how to run
> the queue at the end of the function. Move the blk_mq_run_hw_queue call
> to the callers that want it instead.
I like this patch!
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 13/16] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
2023-04-11 13:33 ` [PATCH 13/16] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-11 18:09 ` Bart Van Assche
2023-04-12 5:02 ` Christoph Hellwig
0 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:09 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> if ((policy & REQ_FSEQ_DATA) &&
> !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
> - blk_mq_request_bypass_insert(rq, false, true);
> + blk_mq_request_bypass_insert(rq, false);
> return;
> }
Did you perhaps want to add a blk_mq_run_hw_queue() call in this
blk_insert_flush() code path?
Thanks,
Bart.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 14/16] blk-mq: pass a flags argument to blk_mq_insert_request
2023-04-11 13:33 ` [PATCH 14/16] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
@ 2023-04-11 18:11 ` Bart Van Assche
2023-04-12 5:02 ` Christoph Hellwig
0 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:11 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Replace the at_head bool with a flags argument that so far only contains
> a single BLK_MQ_INSERT_AT_HEAD value. This makes it much easier to grep
> for head insertations into the blk-mq dispatch queues.
insertations -> insertions
> +#define BLK_MQ_INSERT_AT_HEAD (1U << 0)
Has it been considered to introduce a new bitwise type for this flag?
That would allow sparse to detect accidental conversions from bool into
flag and vice versa.
Thanks,
Bart.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 16/16] blk-mq: pass the flags argument to elevator_type->insert_requests
2023-04-11 13:33 ` [PATCH 16/16] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
@ 2023-04-11 18:13 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:13 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Instead of passing a bool at_head, pass down the full flags from the
> blk_mq_insert_request interface.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 15/16] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
2023-04-11 13:33 ` [PATCH 15/16] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-11 18:16 ` Bart Van Assche
2023-04-12 5:04 ` Christoph Hellwig
0 siblings, 1 reply; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:16 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Replace the two boolean arguments with the same flags that are already
> passed to blk_mq_insert_request. Also add the currently unused
> BLK_MQ_INSERT_ASYNC support so that the flags support is complete.
Hmm ... which two boolean arguments? This patch only converts a single
boolean argument of blk_mq_request_bypass_insert() into an unsigned integer.
Additionally, I don't see any reference to BLK_MQ_INSERT_ASYNC in this
patch?
Once the patch description is made more clear, feel free to add:
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 06/16] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-11 13:33 ` [PATCH 06/16] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
@ 2023-04-11 18:18 ` Bart Van Assche
0 siblings, 0 replies; 36+ messages in thread
From: Bart Van Assche @ 2023-04-11 18:18 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: linux-block
On 4/11/23 06:33, Christoph Hellwig wrote:
> Remove this very small helper and fold it into the only caller.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 13/16] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
2023-04-11 18:09 ` Bart Van Assche
@ 2023-04-12 5:02 ` Christoph Hellwig
0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:02 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Christoph Hellwig, Jens Axboe, linux-block
On Tue, Apr 11, 2023 at 11:09:58AM -0700, Bart Van Assche wrote:
> On 4/11/23 06:33, Christoph Hellwig wrote:
>> if ((policy & REQ_FSEQ_DATA) &&
>> !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
>> - blk_mq_request_bypass_insert(rq, false, true);
>> + blk_mq_request_bypass_insert(rq, false);
>> return;
>> }
>
> Did you perhaps want to add a blk_mq_run_hw_queue() call in this
> blk_insert_flush() code path?
Yes. I'll fix it.
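The idea being to keep the bypass insert and kick the queue right after it,
roughly like this (untested sketch of what the respin could look like, not
what was posted):
	if ((policy & REQ_FSEQ_DATA) &&
	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
		/*
		 * Insert directly into the hctx dispatch list and run the
		 * queue here, as blk_mq_request_bypass_insert no longer
		 * runs it for us.
		 */
		blk_mq_request_bypass_insert(rq, false);
		blk_mq_run_hw_queue(rq->mq_hctx, false);
		return;
	}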
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 14/16] blk-mq: pass a flags argument to blk_mq_insert_request
2023-04-11 18:11 ` Bart Van Assche
@ 2023-04-12 5:02 ` Christoph Hellwig
0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:02 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Christoph Hellwig, Jens Axboe, linux-block
On Tue, Apr 11, 2023 at 11:11:51AM -0700, Bart Van Assche wrote:
>> +#define BLK_MQ_INSERT_AT_HEAD (1U << 0)
>
> Has it been considered to introduce a new bitwise type for this flag? That
> would allow sparse to detect accidental conversions from bool into flag and
> vice versa.
I did think about it, but decided not to bother. But now that there's
a request for it I'll add it.
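Probably something along the lines of the below, modeled on how req_flags_t
is declared (sketch only, the final naming may differ):
	/* sparse-checked type for the blk-mq insert flags */
	typedef unsigned int __bitwise blk_insert_t;
	#define BLK_MQ_INSERT_AT_HEAD	((__force blk_insert_t)0x01)
	void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags);
With that, sparse warns whenever a bool or plain integer is passed where the
insert flags are expected.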
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 15/16] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
2023-04-11 18:16 ` Bart Van Assche
@ 2023-04-12 5:04 ` Christoph Hellwig
0 siblings, 0 replies; 36+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:04 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Christoph Hellwig, Jens Axboe, linux-block
On Tue, Apr 11, 2023 at 11:16:29AM -0700, Bart Van Assche wrote:
> On 4/11/23 06:33, Christoph Hellwig wrote:
>> Replace the two boolean arguments with the same flags that are already
>> passed to blk_mq_insert_request. Also add the currently unused
>> BLK_MQ_INSERT_ASYNC support so that the flags support is complete.
>
> Hmm ... which two boolean arguments? This patch only converts a single
> boolean argument of blk_mq_request_bypass_insert() into an unsigned
> integer.
>
> Additionally, I don't see any reference to BLK_MQ_INSERT_ASYNC in this
> patch?
Sorry, this slipped over from an earlier version before I moved the
queue runs up the stack. I'll fix up the commit message.
^ permalink raw reply [flat|nested] 36+ messages in thread