* cleanup request insertion parameters v3
@ 2023-04-13 6:40 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 01/20] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Hi Jens,
in the context of his latest series Bart commented that it's too hard
to find all the spots that do a head insertion into the blk-mq dispatch
queues. This series collapses various far too deep call chains, drops
two of the three bools, and then replaces the final one with a greppable
constant.
This will create some rebase work for Bart on top of the other comments
he got, but I think it will allow us to sort out some of the request
ordering issues much better while also making the code a lot more readable.
Changes since v2:
- rework a comment
- fix a spelling mistake in a commit message
- two additional patches to clean up blk_mq_add_to_requeue_list calling
conventions as well
Changes since v1:
- add back a blk_mq_run_hw_queue in blk_insert_flush that got lost
- use a __bitwise type for the insert flags
- sort out header hell a bit
- various typo fixes
Diffstat:
b/block/bfq-iosched.c | 17 +-
b/block/blk-flush.c | 17 +-
b/block/blk-mq-cpumap.c | 1
b/block/blk-mq-debugfs.c | 2
b/block/blk-mq-pci.c | 1
b/block/blk-mq-sched.c | 112 -----------------
b/block/blk-mq-sched.h | 7 -
b/block/blk-mq-sysfs.c | 2
b/block/blk-mq-tag.c | 2
b/block/blk-mq-virtio.c | 1
b/block/blk-mq.c | 307 ++++++++++++++++++++++++++++-------------------
b/block/blk-mq.h | 77 ++++++++++-
b/block/blk-pm.c | 2
b/block/blk-stat.c | 1
b/block/blk-sysfs.c | 1
b/block/elevator.h | 4
b/block/kyber-iosched.c | 7 -
b/block/mq-deadline.c | 13 -
block/blk-mq-tag.h | 73 -----------
19 files changed, 280 insertions(+), 367 deletions(-)
* [PATCH 01/20] blk-mq: don't plug for head insertions in blk_execute_rq_nowait
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 02/20] blk-mq: remove blk-mq-tag.h Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Plugs never insert at head, so don't plug for head insertions.
Fixes: 1c2d2fff6dc0 ("block: wire-up support for passthrough plugging")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 52f8e0099c7f4b..7908d19f140815 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1299,7 +1299,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
* device, directly accessing the plug instead of using blk_mq_plug()
* should not have any consequences.
*/
- if (current->plug)
+ if (current->plug && !at_head)
blk_add_rq_to_plug(current->plug, rq);
else
blk_mq_sched_insert_request(rq, at_head, true, false);
--
2.39.2
* [PATCH 02/20] blk-mq: remove blk-mq-tag.h
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 01/20] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 03/20] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
blk-mq-tag.h is always included by blk-mq.h, and causes recursive
inclusion hell with further changes. Just merge it into blk-mq.h
instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/bfq-iosched.c | 1 -
block/blk-flush.c | 1 -
block/blk-mq-debugfs.c | 1 -
block/blk-mq-sched.c | 1 -
block/blk-mq-sched.h | 1 -
block/blk-mq-sysfs.c | 1 -
block/blk-mq-tag.c | 1 -
block/blk-mq-tag.h | 73 ------------------------------------------
block/blk-mq.c | 1 -
block/blk-mq.h | 61 ++++++++++++++++++++++++++++++++++-
block/blk-pm.c | 1 -
block/kyber-iosched.c | 1 -
block/mq-deadline.c | 1 -
13 files changed, 60 insertions(+), 85 deletions(-)
delete mode 100644 block/blk-mq-tag.h
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index d9ed3108c17af6..37f68c907ac08c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -129,7 +129,6 @@
#include "elevator.h"
#include "blk.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
#include "bfq-iosched.h"
#include "blk-wbt.h"
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 53202eff545efb..a13a1d6caa0f3e 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -73,7 +73,6 @@
#include "blk.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/* PREFLUSH/FUA sequences */
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 212a7f301e7302..ace2bcf1cf9a6f 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -12,7 +12,6 @@
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
#include "blk-rq-qos.h"
static int queue_poll_stat_show(void *data, struct seq_file *m)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 06b312c691143f..1029e8eed5eef6 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -15,7 +15,6 @@
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
#include "blk-wbt.h"
/*
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 0250139724539a..65cab6e475be8e 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -4,7 +4,6 @@
#include "elevator.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
#define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index 1b2b0d258e465f..ba84caa868dd54 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -13,7 +13,6 @@
#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
static void blk_mq_sysfs_release(struct kobject *kobj)
{
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 9eb968e14d31f8..1f8b065d72c5f2 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -14,7 +14,6 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
/*
* Recalculate wakeup batch when tag is shared by hctx.
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
deleted file mode 100644
index 91ff37e3b43dff..00000000000000
--- a/block/blk-mq-tag.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef INT_BLK_MQ_TAG_H
-#define INT_BLK_MQ_TAG_H
-
-struct blk_mq_alloc_data;
-
-extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
- unsigned int reserved_tags,
- int node, int alloc_policy);
-extern void blk_mq_free_tags(struct blk_mq_tags *tags);
-extern int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags,
- struct sbitmap_queue *breserved_tags,
- unsigned int queue_depth,
- unsigned int reserved,
- int node, int alloc_policy);
-
-extern unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
-unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
- unsigned int *offset);
-extern void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx,
- unsigned int tag);
-void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);
-extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_tags **tags,
- unsigned int depth, bool can_grow);
-extern void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,
- unsigned int size);
-extern void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);
-
-extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
-void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
- void *priv);
-void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
- void *priv);
-
-static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
- struct blk_mq_hw_ctx *hctx)
-{
- if (!hctx)
- return &bt->ws[0];
- return sbq_wait_ptr(bt, &hctx->wait_index);
-}
-
-enum {
- BLK_MQ_NO_TAG = -1U,
- BLK_MQ_TAG_MIN = 1,
- BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1,
-};
-
-extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
-extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
-
-static inline void blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
-{
- if (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
- __blk_mq_tag_busy(hctx);
-}
-
-static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
-{
- if (!(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
- return;
-
- __blk_mq_tag_idle(hctx);
-}
-
-static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags,
- unsigned int tag)
-{
- return tag < tags->nr_reserved_tags;
-}
-
-#endif
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7908d19f140815..545600be2063ac 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -37,7 +37,6 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
-#include "blk-mq-tag.h"
#include "blk-pm.h"
#include "blk-stat.h"
#include "blk-mq-sched.h"
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ef59fee62780d3..7a041fecea02e4 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -3,7 +3,6 @@
#define INT_BLK_MQ_H
#include "blk-stat.h"
-#include "blk-mq-tag.h"
struct blk_mq_tag_set;
@@ -30,6 +29,12 @@ struct blk_mq_ctx {
struct kobject kobj;
} ____cacheline_aligned_in_smp;
+enum {
+ BLK_MQ_NO_TAG = -1U,
+ BLK_MQ_TAG_MIN = 1,
+ BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1,
+};
+
void blk_mq_submit_bio(struct bio *bio);
int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
unsigned int flags);
@@ -164,6 +169,60 @@ struct blk_mq_alloc_data {
struct blk_mq_hw_ctx *hctx;
};
+struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
+ unsigned int reserved_tags, int node, int alloc_policy);
+void blk_mq_free_tags(struct blk_mq_tags *tags);
+int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags,
+ struct sbitmap_queue *breserved_tags, unsigned int queue_depth,
+ unsigned int reserved, int node, int alloc_policy);
+
+unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
+unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
+ unsigned int *offset);
+void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx,
+ unsigned int tag);
+void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);
+int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ struct blk_mq_tags **tags, unsigned int depth, bool can_grow);
+void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,
+ unsigned int size);
+void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);
+
+void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
+void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
+ void *priv);
+void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
+ void *priv);
+
+static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
+ struct blk_mq_hw_ctx *hctx)
+{
+ if (!hctx)
+ return &bt->ws[0];
+ return sbq_wait_ptr(bt, &hctx->wait_index);
+}
+
+void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
+void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
+
+static inline void blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+{
+ if (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
+ __blk_mq_tag_busy(hctx);
+}
+
+static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
+{
+ if (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
+ __blk_mq_tag_idle(hctx);
+}
+
+static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags,
+ unsigned int tag)
+{
+ return tag < tags->nr_reserved_tags;
+}
+
static inline bool blk_mq_is_shared_tags(unsigned int flags)
{
return flags & BLK_MQ_F_TAG_HCTX_SHARED;
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 2dad62cc157272..8af5ee54feb406 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -5,7 +5,6 @@
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>
#include "blk-mq.h"
-#include "blk-mq-tag.h"
/**
* blk_pm_runtime_init - Block layer runtime PM initialization routine
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 2146969237bfed..d0a4838ce7fc63 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -19,7 +19,6 @@
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
#define CREATE_TRACE_POINTS
#include <trace/events/kyber.h>
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index f10c2a0d18d411..a18526e11194ca 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -23,7 +23,6 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
-#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/*
--
2.39.2
* [PATCH 03/20] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 01/20] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
2023-04-13 6:40 ` [PATCH 02/20] blk-mq: remove blk-mq-tag.h Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 04/20] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
block/blk-mq.h needs various definitions from <linux/blk-mq.h>, so
include it there instead of relying on the source files to include
both.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-flush.c | 1 -
block/blk-mq-cpumap.c | 1 -
block/blk-mq-debugfs.c | 1 -
block/blk-mq-pci.c | 1 -
block/blk-mq-sched.c | 1 -
block/blk-mq-sysfs.c | 1 -
block/blk-mq-tag.c | 1 -
block/blk-mq-virtio.c | 1 -
block/blk-mq.c | 1 -
block/blk-mq.h | 1 +
block/blk-pm.c | 1 -
block/blk-stat.c | 1 -
block/blk-sysfs.c | 1 -
block/kyber-iosched.c | 1 -
block/mq-deadline.c | 1 -
15 files changed, 1 insertion(+), 14 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index a13a1d6caa0f3e..3c81b0af5b3964 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -68,7 +68,6 @@
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/gfp.h>
-#include <linux/blk-mq.h>
#include <linux/part_stat.h>
#include "blk.h"
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 0c612c19feb8b1..9638b25fd52124 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -12,7 +12,6 @@
#include <linux/cpu.h>
#include <linux/group_cpus.h>
-#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index ace2bcf1cf9a6f..d23a8554ec4aeb 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -7,7 +7,6 @@
#include <linux/blkdev.h>
#include <linux/debugfs.h>
-#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index a90b88fd1332ce..d47b5c73c9eb71 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -4,7 +4,6 @@
*/
#include <linux/kobject.h>
#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>
#include <linux/pci.h>
#include <linux/module.h>
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 1029e8eed5eef6..c4b2d44b2d4ebf 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -6,7 +6,6 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/blk-mq.h>
#include <linux/list_sort.h>
#include <trace/events/block.h>
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index ba84caa868dd54..156e9bb07abf1a 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -10,7 +10,6 @@
#include <linux/workqueue.h>
#include <linux/smp.h>
-#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 1f8b065d72c5f2..d6af9d431dc631 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -9,7 +9,6 @@
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/blk-mq.h>
#include <linux/delay.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq-virtio.c b/block/blk-mq-virtio.c
index 6589f076a09635..68d0945c0b08a2 100644
--- a/block/blk-mq-virtio.c
+++ b/block/blk-mq-virtio.c
@@ -3,7 +3,6 @@
* Copyright (c) 2016 Christoph Hellwig.
*/
#include <linux/device.h>
-#include <linux/blk-mq.h>
#include <linux/blk-mq-virtio.h>
#include <linux/virtio_config.h>
#include <linux/module.h>
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 545600be2063ac..29014a0f9f39b1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -32,7 +32,6 @@
#include <trace/events/block.h>
-#include <linux/blk-mq.h>
#include <linux/t10-pi.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 7a041fecea02e4..fa13b694ff27d6 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -2,6 +2,7 @@
#ifndef INT_BLK_MQ_H
#define INT_BLK_MQ_H
+#include <linux/blk-mq.h>
#include "blk-stat.h"
struct blk_mq_tag_set;
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 8af5ee54feb406..6b72b2e03fc8a8 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -1,6 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/blk-mq.h>
#include <linux/blk-pm.h>
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>
diff --git a/block/blk-stat.c b/block/blk-stat.c
index 74a1a8c32d86f8..6226405142ff95 100644
--- a/block/blk-stat.c
+++ b/block/blk-stat.c
@@ -6,7 +6,6 @@
*/
#include <linux/kernel.h>
#include <linux/rculist.h>
-#include <linux/blk-mq.h>
#include "blk-stat.h"
#include "blk-mq.h"
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 1a743b4f29582d..a642085838531f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -9,7 +9,6 @@
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
#include <linux/blktrace_api.h>
-#include <linux/blk-mq.h>
#include <linux/debugfs.h>
#include "blk.h"
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index d0a4838ce7fc63..3f9fb2090c9158 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -8,7 +8,6 @@
#include <linux/kernel.h>
#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
#include <linux/module.h>
#include <linux/sbitmap.h>
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index a18526e11194ca..af9e79050dcc1f 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -8,7 +8,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
#include <linux/bio.h>
#include <linux/module.h>
#include <linux/slab.h>
--
2.39.2
* [PATCH 04/20] blk-mq: move more logic into blk_mq_insert_requests
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 03/20] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 05/20] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Move all logic related to the direct insert (including the call to
blk_mq_run_hw_queue) into blk_mq_insert_requests to streamline the code
flow a bit and to allow marking blk_mq_try_issue_list_directly
static.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq-sched.c | 17 ++---------------
block/blk-mq.c | 20 ++++++++++++++++++--
block/blk-mq.h | 4 +---
3 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index c4b2d44b2d4ebf..811a9765b745c0 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -472,23 +472,10 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
e = hctx->queue->elevator;
if (e) {
e->type->ops.insert_requests(hctx, list, false);
+ blk_mq_run_hw_queue(hctx, run_queue_async);
} else {
- /*
- * try to issue requests directly if the hw queue isn't
- * busy in case of 'none' scheduler, and this way may save
- * us one extra enqueue & dequeue to sw queue.
- */
- if (!hctx->dispatch_busy && !run_queue_async) {
- blk_mq_run_dispatch_ops(hctx->queue,
- blk_mq_try_issue_list_directly(hctx, list));
- if (list_empty(list))
- goto out;
- }
- blk_mq_insert_requests(hctx, ctx, list);
+ blk_mq_insert_requests(hctx, ctx, list, run_queue_async);
}
-
- blk_mq_run_hw_queue(hctx, run_queue_async);
- out:
percpu_ref_put(&q->q_usage_counter);
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 29014a0f9f39b1..536f001282bb63 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,6 +44,9 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list);
+
static inline struct blk_mq_hw_ctx *blk_qc_to_hctx(struct request_queue *q,
blk_qc_t qc)
{
@@ -2495,12 +2498,23 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
}
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list)
+ struct list_head *list, bool run_queue_async)
{
struct request *rq;
enum hctx_type type = hctx->type;
+ /*
+ * Try to issue requests directly if the hw queue isn't busy to save an
+ * extra enqueue & dequeue to the sw queue.
+ */
+ if (!hctx->dispatch_busy && !run_queue_async) {
+ blk_mq_run_dispatch_ops(hctx->queue,
+ blk_mq_try_issue_list_directly(hctx, list));
+ if (list_empty(list))
+ goto out;
+ }
+
/*
* preemption doesn't flush plug list, so it's possible ctx->cpu is
* offline now
@@ -2514,6 +2528,8 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
list_splice_tail_init(list, &ctx->rq_lists[type]);
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
+out:
+ blk_mq_run_hw_queue(hctx, run_queue_async);
}
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2755,7 +2771,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
} while (!rq_list_empty(plug->mq_list));
}
-void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list)
{
int queued = 0;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index fa13b694ff27d6..5d551f9ef2d6be 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -70,9 +70,7 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list);
-void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
- struct list_head *list);
+ struct list_head *list, bool run_queue_async);
/*
* CPU -> queue mappings
--
2.39.2
* [PATCH 05/20] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 04/20] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 06/20] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
blk_mq_dispatch_plug_list is the only caller of
blk_mq_sched_insert_requests, and it makes sense to just fold it there
as blk_mq_sched_insert_requests isn't specific to I/O schedulers despite
the name.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq-sched.c | 24 ------------------------
block/blk-mq-sched.h | 3 ---
block/blk-mq.c | 17 +++++++++++++----
block/blk-mq.h | 2 --
block/mq-deadline.c | 2 +-
5 files changed, 14 insertions(+), 34 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 811a9765b745c0..9c0d231722d9ce 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -455,30 +455,6 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
blk_mq_run_hw_queue(hctx, async);
}
-void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async)
-{
- struct elevator_queue *e;
- struct request_queue *q = hctx->queue;
-
- /*
- * blk_mq_sched_insert_requests() is called from flush plug
- * context only, and hold one usage counter to prevent queue
- * from being released.
- */
- percpu_ref_get(&q->q_usage_counter);
-
- e = hctx->queue->elevator;
- if (e) {
- e->type->ops.insert_requests(hctx, list, false);
- blk_mq_run_hw_queue(hctx, run_queue_async);
- } else {
- blk_mq_insert_requests(hctx, ctx, list, run_queue_async);
- }
- percpu_ref_put(&q->q_usage_counter);
-}
-
static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 65cab6e475be8e..1ec01e9934dc45 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -18,9 +18,6 @@ void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
void blk_mq_sched_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async);
-void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async);
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 536f001282bb63..f1da4f053cc691 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2497,9 +2497,9 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
blk_mq_run_hw_queue(hctx, false);
}
-void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async)
-
+static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
+ struct blk_mq_ctx *ctx, struct list_head *list,
+ bool run_queue_async)
{
struct request *rq;
enum hctx_type type = hctx->type;
@@ -2725,7 +2725,16 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
plug->mq_list = requeue_list;
trace_block_unplug(this_hctx->queue, depth, !from_sched);
- blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
+
+ percpu_ref_get(&this_hctx->queue->q_usage_counter);
+ if (this_hctx->queue->elevator) {
+ this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
+ &list, false);
+ blk_mq_run_hw_queue(this_hctx, from_sched);
+ } else {
+ blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
+ }
+ percpu_ref_put(&this_hctx->queue->q_usage_counter);
}
void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 5d551f9ef2d6be..bd7ae5e67a526b 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -69,8 +69,6 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head);
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
-void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async);
/*
* CPU -> queue mappings
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index af9e79050dcc1f..d62a3039c8e04f 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -820,7 +820,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
/*
- * Called from blk_mq_sched_insert_request() or blk_mq_sched_insert_requests().
+ * Called from blk_mq_sched_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
--
2.39.2
* [PATCH 06/20] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 05/20] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 07/20] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
blk_mq_sched_insert_request is the main request insert helper and not
directly I/O scheduler related. Move blk_mq_sched_insert_request to
blk-mq.c, rename it to blk_mq_insert_request and mark it static.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq-sched.c | 73 -------------------------------------
block/blk-mq-sched.h | 3 --
block/blk-mq.c | 87 +++++++++++++++++++++++++++++++++++++++++---
block/mq-deadline.c | 2 +-
4 files changed, 82 insertions(+), 83 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 9c0d231722d9ce..f90fc42a88ca2f 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -382,79 +382,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
}
EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
-static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
- struct request *rq)
-{
- /*
- * dispatch flush and passthrough rq directly
- *
- * passthrough request has to be added to hctx->dispatch directly.
- * For some reason, device may be in one situation which can't
- * handle FS request, so STS_RESOURCE is always returned and the
- * FS request will be added to hctx->dispatch. However passthrough
- * request may be required at that time for fixing the problem. If
- * passthrough request is added to scheduler queue, there isn't any
- * chance to dispatch it given we prioritize requests in hctx->dispatch.
- */
- if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
- return true;
-
- return false;
-}
-
-void blk_mq_sched_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async)
-{
- struct request_queue *q = rq->q;
- struct elevator_queue *e = q->elevator;
- struct blk_mq_ctx *ctx = rq->mq_ctx;
- struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
-
- WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
-
- if (blk_mq_sched_bypass_insert(hctx, rq)) {
- /*
- * Firstly normal IO request is inserted to scheduler queue or
- * sw queue, meantime we add flush request to dispatch queue(
- * hctx->dispatch) directly and there is at most one in-flight
- * flush request for each hw queue, so it doesn't matter to add
- * flush request to tail or front of the dispatch queue.
- *
- * Secondly in case of NCQ, flush request belongs to non-NCQ
- * command, and queueing it will fail when there is any
- * in-flight normal IO request(NCQ command). When adding flush
- * rq to the front of hctx->dispatch, it is easier to introduce
- * extra time to flush rq's latency because of S_SCHED_RESTART
- * compared with adding to the tail of dispatch queue, then
- * chance of flush merge is increased, and less flush requests
- * will be issued to controller. It is observed that ~10% time
- * is saved in blktests block/004 on disk attached to AHCI/NCQ
- * drive when adding flush rq to the front of hctx->dispatch.
- *
- * Simply queue flush rq to the front of hctx->dispatch so that
- * intensive flush workloads can benefit in case of NCQ HW.
- */
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
- LIST_HEAD(list);
-
- list_add(&rq->queuelist, &list);
- e->type->ops.insert_requests(hctx, &list, at_head);
- } else {
- spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
- spin_unlock(&ctx->lock);
- }
-
-run:
- if (run_queue)
- blk_mq_run_hw_queue(hctx, async);
-}
-
static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 1ec01e9934dc45..7c3cbad17f3052 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -16,9 +16,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx);
void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
-void blk_mq_sched_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async);
-
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index f1da4f053cc691..78e54a64fe920b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,6 +44,8 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+static void blk_mq_insert_request(struct request *rq, bool at_head,
+ bool run_queue, bool async);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1303,7 +1305,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
if (current->plug && !at_head)
blk_add_rq_to_plug(current->plug, rq);
else
- blk_mq_sched_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head, true, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1364,7 +1366,7 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_sched_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head, true, false);
if (blk_rq_is_poll(rq)) {
blk_rq_poll_completion(rq, &wait.done);
@@ -1438,13 +1440,13 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP)
blk_mq_request_bypass_insert(rq, false, false);
else
- blk_mq_sched_insert_request(rq, true, false, false);
+ blk_mq_insert_request(rq, true, false, false);
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_sched_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false, false, false);
}
blk_mq_run_hw_queues(q, false);
@@ -2532,6 +2534,79 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
+static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
+ struct request *rq)
+{
+ /*
+ * dispatch flush and passthrough rq directly
+ *
+ * passthrough request has to be added to hctx->dispatch directly.
+ * For some reason, device may be in one situation which can't
+ * handle FS request, so STS_RESOURCE is always returned and the
+ * FS request will be added to hctx->dispatch. However passthrough
+ * request may be required at that time for fixing the problem. If
+ * passthrough request is added to scheduler queue, there isn't any
+ * chance to dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
+ return true;
+
+ return false;
+}
+
+static void blk_mq_insert_request(struct request *rq, bool at_head,
+ bool run_queue, bool async)
+{
+ struct request_queue *q = rq->q;
+ struct elevator_queue *e = q->elevator;
+ struct blk_mq_ctx *ctx = rq->mq_ctx;
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+ WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
+
+ if (blk_mq_sched_bypass_insert(hctx, rq)) {
+ /*
+ * Firstly normal IO request is inserted to scheduler queue or
+ * sw queue, meantime we add flush request to dispatch queue(
+ * hctx->dispatch) directly and there is at most one in-flight
+ * flush request for each hw queue, so it doesn't matter to add
+ * flush request to tail or front of the dispatch queue.
+ *
+ * Secondly in case of NCQ, flush request belongs to non-NCQ
+ * command, and queueing it will fail when there is any
+ * in-flight normal IO request(NCQ command). When adding flush
+ * rq to the front of hctx->dispatch, it is easier to introduce
+ * extra time to flush rq's latency because of S_SCHED_RESTART
+ * compared with adding to the tail of dispatch queue, then
+ * chance of flush merge is increased, and less flush requests
+ * will be issued to controller. It is observed that ~10% time
+ * is saved in blktests block/004 on disk attached to AHCI/NCQ
+ * drive when adding flush rq to the front of hctx->dispatch.
+ *
+ * Simply queue flush rq to the front of hctx->dispatch so that
+ * intensive flush workloads can benefit in case of NCQ HW.
+ */
+ at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
+ blk_mq_request_bypass_insert(rq, at_head, false);
+ goto run;
+ }
+
+ if (e) {
+ LIST_HEAD(list);
+
+ list_add(&rq->queuelist, &list);
+ e->type->ops.insert_requests(hctx, &list, at_head);
+ } else {
+ spin_lock(&ctx->lock);
+ __blk_mq_insert_request(hctx, rq, at_head);
+ spin_unlock(&ctx->lock);
+ }
+
+run:
+ if (run_queue)
+ blk_mq_run_hw_queue(hctx, async);
+}
+
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
unsigned int nr_segs)
{
@@ -2623,7 +2698,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if (bypass_insert)
return BLK_STS_RESOURCE;
- blk_mq_sched_insert_request(rq, false, run_queue, false);
+ blk_mq_insert_request(rq, false, run_queue, false);
return BLK_STS_OK;
}
@@ -2975,7 +3050,7 @@ void blk_mq_submit_bio(struct bio *bio)
else if ((rq->rq_flags & RQF_ELV) ||
(rq->mq_hctx->dispatch_busy &&
(q->nr_hw_queues == 1 || !is_sync)))
- blk_mq_sched_insert_request(rq, false, true, true);
+ blk_mq_insert_request(rq, false, true, true);
else
blk_mq_run_dispatch_ops(rq->q,
blk_mq_try_issue_directly(rq->mq_hctx, rq));
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index d62a3039c8e04f..ceae477c3571a3 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -820,7 +820,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
/*
- * Called from blk_mq_sched_insert_request() or blk_mq_dispatch_plug_list().
+ * Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 07/20] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (5 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 06/20] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 08/20] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
` (13 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
There is no point in keeping __blk_mq_insert_request around
for two function calls and a single caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 14 ++------------
block/blk-mq.h | 2 --
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 78e54a64fe920b..103caf1bae2769 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2463,17 +2463,6 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
}
-void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
-{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
-
- lockdep_assert_held(&ctx->lock);
-
- __blk_mq_insert_req_list(hctx, rq, at_head);
- blk_mq_hctx_mark_pending(hctx, ctx);
-}
-
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
@@ -2598,7 +2587,8 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
e->type->ops.insert_requests(hctx, &list, at_head);
} else {
spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
+ __blk_mq_insert_req_list(hctx, rq, at_head);
+ blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index bd7ae5e67a526b..e2d59e33046e30 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -65,8 +65,6 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head);
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 08/20] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (6 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 07/20] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 09/20] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
` (12 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Remove this very small helper and fold it into the only caller.
Note that this moves the trace_block_rq_insert out of ctx->lock, matching
the other calls to this tracepoint.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 25 +++++++------------------
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 103caf1bae2769..7e9f7d00452f11 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2446,23 +2446,6 @@ static void blk_mq_run_work_fn(struct work_struct *work)
__blk_mq_run_hw_queue(hctx);
}
-static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
- struct request *rq,
- bool at_head)
-{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
- enum hctx_type type = hctx->type;
-
- lockdep_assert_held(&ctx->lock);
-
- trace_block_rq_insert(rq);
-
- if (at_head)
- list_add(&rq->queuelist, &ctx->rq_lists[type]);
- else
- list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
-}
-
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
@@ -2586,8 +2569,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
list_add(&rq->queuelist, &list);
e->type->ops.insert_requests(hctx, &list, at_head);
} else {
+ trace_block_rq_insert(rq);
+
spin_lock(&ctx->lock);
- __blk_mq_insert_req_list(hctx, rq, at_head);
+ if (at_head)
+ list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
+ else
+ list_add_tail(&rq->queuelist,
+ &ctx->rq_lists[hctx->type]);
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 09/20] blk-mq: remove blk_flush_queue_rq
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (7 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 08/20] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 10/20] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
` (11 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Just call blk_mq_add_to_requeue_list directly from the two callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-flush.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 3c81b0af5b3964..62ef98f604fbf9 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -136,11 +136,6 @@ static void blk_flush_restore_request(struct request *rq)
rq->end_io = rq->flush.saved_end_io;
}
-static void blk_flush_queue_rq(struct request *rq, bool add_front)
-{
- blk_mq_add_to_requeue_list(rq, add_front, true);
-}
-
static void blk_account_io_flush(struct request *rq)
{
struct block_device *part = rq->q->disk->part0;
@@ -193,7 +188,7 @@ static void blk_flush_complete_seq(struct request *rq,
case REQ_FSEQ_DATA:
list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
- blk_flush_queue_rq(rq, true);
+ blk_mq_add_to_requeue_list(rq, true, true);
break;
case REQ_FSEQ_DONE:
@@ -350,7 +345,7 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
smp_wmb();
req_ref_set(flush_rq, 1);
- blk_flush_queue_rq(flush_rq, false);
+ blk_mq_add_to_requeue_list(flush_rq, false, true);
}
static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 10/20] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (8 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 09/20] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13  6:40 ` [PATCH 11/20] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
` (10 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
While both passthrough and flush requests call directly into
blk_mq_request_bypass_insert, the parameters aren't the same.
Split the handling into two separate conditionals and turn the whole
function into an if / else if / else if / else flow instead of the gotos.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 50 ++++++++++++++++++--------------------------------
1 file changed, 18 insertions(+), 32 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7e9f7d00452f11..c3de03217f4f1a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2506,37 +2506,26 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
- struct request *rq)
-{
- /*
- * dispatch flush and passthrough rq directly
- *
- * passthrough request has to be added to hctx->dispatch directly.
- * For some reason, device may be in one situation which can't
- * handle FS request, so STS_RESOURCE is always returned and the
- * FS request will be added to hctx->dispatch. However passthrough
- * request may be required at that time for fixing the problem. If
- * passthrough request is added to scheduler queue, there isn't any
- * chance to dispatch it given we prioritize requests in hctx->dispatch.
- */
- if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
- return true;
-
- return false;
-}
-
static void blk_mq_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async)
{
struct request_queue *q = rq->q;
- struct elevator_queue *e = q->elevator;
struct blk_mq_ctx *ctx = rq->mq_ctx;
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
- WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
-
- if (blk_mq_sched_bypass_insert(hctx, rq)) {
+ if (blk_rq_is_passthrough(rq)) {
+ /*
+ * Passthrough request have to be added to hctx->dispatch
+ * directly. The device may be in a situation where it can't
+ * handle FS request, and always returns BLK_STS_RESOURCE for
+ * them, which gets them added to hctx->dispatch.
+ *
+ * If a passthrough request is required to unblock the queues,
+ * and it is added to the scheduler queue, there is no chance to
+ * dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ blk_mq_request_bypass_insert(rq, at_head, false);
+ } else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
* sw queue, meantime we add flush request to dispatch queue(
@@ -2558,16 +2547,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
+ blk_mq_request_bypass_insert(rq, true, false);
+ } else if (q->elevator) {
LIST_HEAD(list);
+ WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
+
list_add(&rq->queuelist, &list);
- e->type->ops.insert_requests(hctx, &list, at_head);
+ q->elevator->type->ops.insert_requests(hctx, &list, at_head);
} else {
trace_block_rq_insert(rq);
@@ -2581,7 +2568,6 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
spin_unlock(&ctx->lock);
}
-run:
if (run_queue)
blk_mq_run_hw_queue(hctx, async);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 11/20] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (9 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 10/20] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 12/20] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
` (9 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Split the RQF_DONTPREP and RQF_SOFTBARRIER handling into separate branches
to make the code more readable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c3de03217f4f1a..d17871c237f7df 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1427,20 +1427,21 @@ static void blk_mq_requeue_work(struct work_struct *work)
spin_unlock_irq(&q->requeue_lock);
list_for_each_entry_safe(rq, next, &rq_list, queuelist) {
- if (!(rq->rq_flags & (RQF_SOFTBARRIER | RQF_DONTPREP)))
- continue;
-
- rq->rq_flags &= ~RQF_SOFTBARRIER;
- list_del_init(&rq->queuelist);
/*
- * If RQF_DONTPREP, rq has contained some driver specific
- * data, so insert it to hctx dispatch list to avoid any
- * merge.
+	 * If RQF_DONTPREP is set, the request has been started by the
+ * driver already and might have driver-specific data allocated
+ * already. Insert it into the hctx dispatch list to avoid
+ * block layer merges for the request.
*/
- if (rq->rq_flags & RQF_DONTPREP)
+ if (rq->rq_flags & RQF_DONTPREP) {
+ rq->rq_flags &= ~RQF_SOFTBARRIER;
+ list_del_init(&rq->queuelist);
blk_mq_request_bypass_insert(rq, false, false);
- else
+ } else if (rq->rq_flags & RQF_SOFTBARRIER) {
+ rq->rq_flags &= ~RQF_SOFTBARRIER;
+ list_del_init(&rq->queuelist);
blk_mq_insert_request(rq, true, false, false);
+ }
}
while (!list_empty(&rq_list)) {
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 12/20] blk-mq: factor out a blk_mq_get_budget_and_tag helper
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (10 preceding siblings ...)
2023-04-13  6:40 ` [PATCH 11/20] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 13/20] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
` (8 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Factor out a helper from __blk_mq_try_issue_directly in preparation
of folding that function into its two callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d17871c237f7df..5cb7ebefc88c14 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2624,13 +2624,27 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
return ret;
}
+static bool blk_mq_get_budget_and_tag(struct request *rq)
+{
+ int budget_token;
+
+ budget_token = blk_mq_get_dispatch_budget(rq->q);
+ if (budget_token < 0)
+ return false;
+ blk_mq_set_rq_budget_token(rq, budget_token);
+ if (!blk_mq_get_driver_tag(rq)) {
+ blk_mq_put_dispatch_budget(rq->q, budget_token);
+ return false;
+ }
+ return true;
+}
+
static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq,
bool bypass_insert, bool last)
{
struct request_queue *q = rq->q;
bool run_queue = true;
- int budget_token;
/*
* RCU or SRCU read lock is needed before checking quiesced flag.
@@ -2648,16 +2662,8 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
goto insert;
- budget_token = blk_mq_get_dispatch_budget(q);
- if (budget_token < 0)
- goto insert;
-
- blk_mq_set_rq_budget_token(rq, budget_token);
-
- if (!blk_mq_get_driver_tag(rq)) {
- blk_mq_put_dispatch_budget(q, budget_token);
+ if (!blk_mq_get_budget_and_tag(rq))
goto insert;
- }
return __blk_mq_issue_directly(hctx, rq, last);
insert:
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 13/20] blk-mq: fold __blk_mq_try_issue_directly into its two callers
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (11 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 12/20] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 14/20] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
` (7 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Due to the wildly different behavior based on the bypass_insert argument,
not a whole lot of code in __blk_mq_try_issue_directly is actually shared
between blk_mq_try_issue_directly and blk_mq_request_issue_directly.
Remove __blk_mq_try_issue_directly and fold the code into the two callers
instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 72 ++++++++++++++++++++++----------------------------
1 file changed, 31 insertions(+), 41 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5cb7ebefc88c14..c5b42476337c99 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2639,42 +2639,6 @@ static bool blk_mq_get_budget_and_tag(struct request *rq)
return true;
}
-static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
- struct request *rq,
- bool bypass_insert, bool last)
-{
- struct request_queue *q = rq->q;
- bool run_queue = true;
-
- /*
- * RCU or SRCU read lock is needed before checking quiesced flag.
- *
- * When queue is stopped or quiesced, ignore 'bypass_insert' from
- * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
- * and avoid driver to try to dispatch again.
- */
- if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
- run_queue = false;
- bypass_insert = false;
- goto insert;
- }
-
- if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
- goto insert;
-
- if (!blk_mq_get_budget_and_tag(rq))
- goto insert;
-
- return __blk_mq_issue_directly(hctx, rq, last);
-insert:
- if (bypass_insert)
- return BLK_STS_RESOURCE;
-
- blk_mq_insert_request(rq, false, run_queue, false);
-
- return BLK_STS_OK;
-}
-
/**
* blk_mq_try_issue_directly - Try to send a request directly to device driver.
* @hctx: Pointer of the associated hardware queue.
@@ -2688,18 +2652,44 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq)
{
- blk_status_t ret =
- __blk_mq_try_issue_directly(hctx, rq, false, true);
+ blk_status_t ret;
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, false, false, false);
+ return;
+ }
+
+ if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
+ blk_mq_insert_request(rq, false, true, false);
+ return;
+ }
- if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
+ ret = __blk_mq_issue_directly(hctx, rq, true);
+ switch (ret) {
+ case BLK_STS_OK:
+ break;
+ case BLK_STS_RESOURCE:
+ case BLK_STS_DEV_RESOURCE:
blk_mq_request_bypass_insert(rq, false, true);
- else if (ret != BLK_STS_OK)
+ break;
+ default:
blk_mq_end_request(rq, ret);
+ break;
+ }
}
static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
{
- return __blk_mq_try_issue_directly(rq->mq_hctx, rq, true, last);
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, false, false, false);
+ return BLK_STS_OK;
+ }
+
+ if (!blk_mq_get_budget_and_tag(rq))
+ return BLK_STS_RESOURCE;
+ return __blk_mq_issue_directly(hctx, rq, last);
}
static void blk_mq_plug_issue_direct(struct blk_plug *plug)
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 14/20] blk-mq: don't run the hw_queue from blk_mq_insert_request
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (12 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 13/20] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 15/20] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
` (6 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
blk_mq_insert_request takes two bool parameters to control how to run
the queue at the end of the function. Move the blk_mq_run_hw_queue call
to the callers that want it instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 56 ++++++++++++++++++++++++++++----------------------
1 file changed, 32 insertions(+), 24 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c5b42476337c99..d1941db1ad3c97 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,8 +44,7 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
-static void blk_mq_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async);
+static void blk_mq_insert_request(struct request *rq, bool at_head);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1292,6 +1291,8 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
*/
void blk_execute_rq_nowait(struct request *rq, bool at_head)
{
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
WARN_ON(irqs_disabled());
WARN_ON(!blk_rq_is_passthrough(rq));
@@ -1302,10 +1303,13 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
* device, directly accessing the plug instead of using blk_mq_plug()
* should not have any consequences.
*/
- if (current->plug && !at_head)
+ if (current->plug && !at_head) {
blk_add_rq_to_plug(current->plug, rq);
- else
- blk_mq_insert_request(rq, at_head, true, false);
+ return;
+ }
+
+ blk_mq_insert_request(rq, at_head);
+ blk_mq_run_hw_queue(hctx, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1355,6 +1359,7 @@ static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
*/
blk_status_t blk_execute_rq(struct request *rq, bool at_head)
{
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
struct blk_rq_wait wait = {
.done = COMPLETION_INITIALIZER_ONSTACK(wait.done),
};
@@ -1366,7 +1371,8 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head);
+ blk_mq_run_hw_queue(hctx, false);
if (blk_rq_is_poll(rq)) {
blk_rq_poll_completion(rq, &wait.done);
@@ -1440,14 +1446,14 @@ static void blk_mq_requeue_work(struct work_struct *work)
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, true, false, false);
+ blk_mq_insert_request(rq, true);
}
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
}
blk_mq_run_hw_queues(q, false);
@@ -2507,8 +2513,7 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static void blk_mq_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async)
+static void blk_mq_insert_request(struct request *rq, bool at_head)
{
struct request_queue *q = rq->q;
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -2568,9 +2573,6 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, async);
}
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2655,12 +2657,13 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
blk_status_t ret;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
return;
}
if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
- blk_mq_insert_request(rq, false, true, false);
+ blk_mq_insert_request(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
return;
}
@@ -2683,7 +2686,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
return BLK_STS_OK;
}
@@ -2963,6 +2966,7 @@ void blk_mq_submit_bio(struct bio *bio)
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
struct blk_plug *plug = blk_mq_plug(bio);
const int is_sync = op_is_sync(bio->bi_opf);
+ struct blk_mq_hw_ctx *hctx;
struct request *rq;
unsigned int nr_segs = 1;
blk_status_t ret;
@@ -3007,15 +3011,19 @@ void blk_mq_submit_bio(struct bio *bio)
return;
}
- if (plug)
+ if (plug) {
blk_add_rq_to_plug(plug, rq);
- else if ((rq->rq_flags & RQF_ELV) ||
- (rq->mq_hctx->dispatch_busy &&
- (q->nr_hw_queues == 1 || !is_sync)))
- blk_mq_insert_request(rq, false, true, true);
- else
- blk_mq_run_dispatch_ops(rq->q,
- blk_mq_try_issue_directly(rq->mq_hctx, rq));
+ return;
+ }
+
+ hctx = rq->mq_hctx;
+ if ((rq->rq_flags & RQF_ELV) ||
+ (hctx->dispatch_busy && (q->nr_hw_queues == 1 || !is_sync))) {
+ blk_mq_insert_request(rq, false);
+ blk_mq_run_hw_queue(hctx, true);
+ } else {
+ blk_mq_run_dispatch_ops(q, blk_mq_try_issue_directly(hctx, rq));
+ }
}
#ifdef CONFIG_BLK_MQ_STACKING
--
2.39.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* [PATCH 15/20] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
` (13 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 14/20] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list Christoph Hellwig
` (5 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
blk_mq_request_bypass_insert takes a bool parameter that controls whether
the hardware queue is run at the end of the function. Move the
blk_mq_run_hw_queue call into the callers that want it instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-flush.c | 4 +++-
block/blk-mq.c | 24 +++++++++++-------------
block/blk-mq.h | 3 +--
3 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 62ef98f604fbf9..3561aba8cc23f8 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -389,6 +389,7 @@ void blk_insert_flush(struct request *rq)
unsigned long fflags = q->queue_flags; /* may change, cache */
unsigned int policy = blk_flush_policy(fflags, rq);
struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
/*
* @policy now records what operations need to be done. Adjust
@@ -425,7 +426,8 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
return;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d1941db1ad3c97..cde7ba9c39bf6b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1442,7 +1442,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_request_bypass_insert(rq, false, false);
+ blk_mq_request_bypass_insert(rq, false);
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
@@ -2457,13 +2457,11 @@ static void blk_mq_run_work_fn(struct work_struct *work)
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
* @at_head: true if the request should be inserted at the head of the list.
- * @run_queue: If we should run the hardware queue after inserting the request.
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue)
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
@@ -2473,9 +2471,6 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
else
list_add_tail(&rq->queuelist, &hctx->dispatch);
spin_unlock(&hctx->lock);
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, false);
}
static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
@@ -2530,7 +2525,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, at_head, false);
+ blk_mq_request_bypass_insert(rq, at_head);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2553,7 +2548,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- blk_mq_request_bypass_insert(rq, true, false);
+ blk_mq_request_bypass_insert(rq, true);
} else if (q->elevator) {
LIST_HEAD(list);
@@ -2673,7 +2668,8 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
break;
default:
blk_mq_end_request(rq, ret);
@@ -2720,7 +2716,8 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
goto out;
default:
blk_mq_end_request(rq, ret);
@@ -2838,8 +2835,9 @@ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false,
- list_empty(list));
+ blk_mq_request_bypass_insert(rq, false);
+ if (list_empty(list))
+ blk_mq_run_hw_queue(hctx, false);
goto out;
default:
blk_mq_end_request(rq, ret);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index e2d59e33046e30..f30f99166f3870 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -65,8 +65,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue);
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head);
/*
* CPU -> queue mappings
--
2.39.2
* [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
` (14 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 15/20] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:54 ` Damien Le Moal
2023-04-13 6:40 ` [PATCH 17/20] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
` (4 subsequent siblings)
20 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
blk_mq_add_to_requeue_list takes a bool parameter that controls whether
the requeue list is kicked at the end of the function. Move the call to
blk_mq_kick_requeue_list into the callers that want it instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 6 ++++--
block/blk-mq.c | 13 +++++++------
block/blk-mq.h | 3 +--
3 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 3561aba8cc23f8..015982bd2f7c8f 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -188,7 +188,8 @@ static void blk_flush_complete_seq(struct request *rq,
case REQ_FSEQ_DATA:
list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
- blk_mq_add_to_requeue_list(rq, true, true);
+ blk_mq_add_to_requeue_list(rq, true);
+ blk_mq_kick_requeue_list(q);
break;
case REQ_FSEQ_DONE:
@@ -345,7 +346,8 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
smp_wmb();
req_ref_set(flush_rq, 1);
- blk_mq_add_to_requeue_list(flush_rq, false, true);
+ blk_mq_add_to_requeue_list(flush_rq, false);
+ blk_mq_kick_requeue_list(q);
}
static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index cde7ba9c39bf6b..db806c1a194c7b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1412,12 +1412,17 @@ static void __blk_mq_requeue_request(struct request *rq)
void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
{
+ struct request_queue *q = rq->q;
+
__blk_mq_requeue_request(rq);
/* this request will be re-inserted to io scheduler queue */
blk_mq_sched_requeue_request(rq);
- blk_mq_add_to_requeue_list(rq, true, kick_requeue_list);
+ blk_mq_add_to_requeue_list(rq, true);
+
+ if (kick_requeue_list)
+ blk_mq_kick_requeue_list(q);
}
EXPORT_SYMBOL(blk_mq_requeue_request);
@@ -1459,8 +1464,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
blk_mq_run_hw_queues(q, false);
}
-void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
- bool kick_requeue_list)
+void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
{
struct request_queue *q = rq->q;
unsigned long flags;
@@ -1479,9 +1483,6 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
list_add_tail(&rq->queuelist, &q->requeue_list);
}
spin_unlock_irqrestore(&q->requeue_lock, flags);
-
- if (kick_requeue_list)
- blk_mq_kick_requeue_list(q);
}
void blk_mq_kick_requeue_list(struct request_queue *q)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index f30f99166f3870..5d3761c5006346 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -44,8 +44,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
void blk_mq_wake_waiters(struct request_queue *q);
bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *,
unsigned int);
-void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
- bool kick_requeue_list);
+void blk_mq_add_to_requeue_list(struct request *rq, bool at_head);
void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
struct blk_mq_ctx *start);
--
2.39.2
* [PATCH 17/20] blk-mq: pass a flags argument to blk_mq_insert_request
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
` (15 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 18/20] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
` (3 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Replace the at_head bool with a flags argument that so far contains only
a single flag, BLK_MQ_INSERT_AT_HEAD. This makes it much easier to grep
for head insertions into the blk-mq dispatch queues.
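In the kernel header the type carries sparse's __bitwise/__force annotations so the checker rejects mixing the flags with ordinary integers; those annotations only matter to sparse, so a plain typedef stands in for them in this userspace sketch. The fixed array below is an illustrative stand-in for the hctx->dispatch list, not the kernel's list helpers:

```c
/* In the kernel: typedef unsigned int __bitwise blk_insert_t; */
typedef unsigned int blk_insert_t;
#define BLK_MQ_INSERT_AT_HEAD ((blk_insert_t)0x01)

/* Toy dispatch list modeled as a fixed array. */
static int dispatch[8];
static int nr;

static void bypass_insert(int rq, blk_insert_t flags)
{
	if (flags & BLK_MQ_INSERT_AT_HEAD) {
		/* list_add(): push to the front of the list */
		for (int i = nr; i > 0; i--)
			dispatch[i] = dispatch[i - 1];
		dispatch[0] = rq;
	} else {
		/* list_add_tail(): append to the list */
		dispatch[nr] = rq;
	}
	nr++;
}
```

With the named flag, `git grep BLK_MQ_INSERT_AT_HEAD` finds every head insertion, which a bare `true` argument never could.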
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq.c | 27 ++++++++++++++-------------
block/blk-mq.h | 3 +++
2 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index db806c1a194c7b..ba64c4621e29d6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,7 +44,7 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
-static void blk_mq_insert_request(struct request *rq, bool at_head);
+static void blk_mq_insert_request(struct request *rq, blk_insert_t flags);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1308,7 +1308,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
return;
}
- blk_mq_insert_request(rq, at_head);
+ blk_mq_insert_request(rq, at_head ? BLK_MQ_INSERT_AT_HEAD : 0);
blk_mq_run_hw_queue(hctx, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1371,7 +1371,7 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_insert_request(rq, at_head);
+ blk_mq_insert_request(rq, at_head ? BLK_MQ_INSERT_AT_HEAD : 0);
blk_mq_run_hw_queue(hctx, false);
if (blk_rq_is_poll(rq)) {
@@ -1451,14 +1451,14 @@ static void blk_mq_requeue_work(struct work_struct *work)
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, true);
+ blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
}
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
}
blk_mq_run_hw_queues(q, false);
@@ -2509,7 +2509,7 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static void blk_mq_insert_request(struct request *rq, bool at_head)
+static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
{
struct request_queue *q = rq->q;
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -2526,7 +2526,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, at_head);
+ blk_mq_request_bypass_insert(rq, flags & BLK_MQ_INSERT_AT_HEAD);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2556,12 +2556,13 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
list_add(&rq->queuelist, &list);
- q->elevator->type->ops.insert_requests(hctx, &list, at_head);
+ q->elevator->type->ops.insert_requests(hctx, &list,
+ flags & BLK_MQ_INSERT_AT_HEAD);
} else {
trace_block_rq_insert(rq);
spin_lock(&ctx->lock);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
else
list_add_tail(&rq->queuelist,
@@ -2653,12 +2654,12 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
blk_status_t ret;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
return;
}
if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
blk_mq_run_hw_queue(hctx, false);
return;
}
@@ -2683,7 +2684,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
return BLK_STS_OK;
}
@@ -3018,7 +3019,7 @@ void blk_mq_submit_bio(struct bio *bio)
hctx = rq->mq_hctx;
if ((rq->rq_flags & RQF_ELV) ||
(hctx->dispatch_busy && (q->nr_hw_queues == 1 || !is_sync))) {
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
blk_mq_run_hw_queue(hctx, true);
} else {
blk_mq_run_dispatch_ops(q, blk_mq_try_issue_directly(hctx, rq));
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 5d3761c5006346..273eee00524b98 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -36,6 +36,9 @@ enum {
BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1,
};
+typedef unsigned int __bitwise blk_insert_t;
+#define BLK_MQ_INSERT_AT_HEAD ((__force blk_insert_t)0x01)
+
void blk_mq_submit_bio(struct bio *bio);
int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
unsigned int flags);
--
2.39.2
* [PATCH 18/20] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
` (16 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 17/20] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 19/20] blk-mq: pass a flags argument to elevator_type->insert_requests Christoph Hellwig
` (2 subsequent siblings)
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Replace the boolean at_head argument with the same flags that are already
passed to blk_mq_insert_request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-flush.c | 2 +-
block/blk-mq.c | 18 +++++++++---------
block/blk-mq.h | 2 +-
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 015982bd2f7c8f..1d3af17619deb7 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -428,7 +428,7 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
return;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ba64c4621e29d6..ff74559d7da1fc 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1447,7 +1447,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
@@ -2457,17 +2457,17 @@ static void blk_mq_run_work_fn(struct work_struct *work)
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
- * @at_head: true if the request should be inserted at the head of the list.
+ * @flags: BLK_MQ_INSERT_*
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head)
+void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
spin_lock(&hctx->lock);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_add(&rq->queuelist, &hctx->dispatch);
else
list_add_tail(&rq->queuelist, &hctx->dispatch);
@@ -2526,7 +2526,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, flags & BLK_MQ_INSERT_AT_HEAD);
+ blk_mq_request_bypass_insert(rq, flags);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2549,7 +2549,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- blk_mq_request_bypass_insert(rq, true);
+ blk_mq_request_bypass_insert(rq, BLK_MQ_INSERT_AT_HEAD);
} else if (q->elevator) {
LIST_HEAD(list);
@@ -2670,7 +2670,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
break;
default:
@@ -2718,7 +2718,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
goto out;
default:
@@ -2837,7 +2837,7 @@ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
if (list_empty(list))
blk_mq_run_hw_queue(hctx, false);
goto out;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 273eee00524b98..bb16c0a54411b0 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -67,7 +67,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head);
+void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags);
/*
* CPU -> queue mappings
--
2.39.2
* [PATCH 19/20] blk-mq: pass a flags argument to elevator_type->insert_requests
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
` (17 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 18/20] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:40 ` [PATCH 20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list Christoph Hellwig
2023-04-13 13:11 ` cleanup request insertion parameters v3 Jens Axboe
20 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Instead of passing a bool at_head, pass down the full flags from the
blk_mq_insert_request interface.
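Propagating the flags through the elevator ops table just means widening the function-pointer signature from bool to blk_insert_t and forwarding the argument untouched; each scheduler then tests the flag itself. A minimal sketch of that shape — the ops struct and names below are illustrative, not the kernel's elevator_mq_ops:

```c
typedef unsigned int blk_insert_t;
#define BLK_MQ_INSERT_AT_HEAD ((blk_insert_t)0x01)

static int head_inserts, tail_inserts;

/* A scheduler's insert hook receives the raw flags... */
static void sched_insert(int rq, blk_insert_t flags)
{
	(void)rq;
	if (flags & BLK_MQ_INSERT_AT_HEAD)
		head_inserts++;		/* front of the dispatch list */
	else
		tail_inserts++;		/* tail of the dispatch list */
}

/* ...through an ops table whose signature now carries blk_insert_t
 * instead of a bare bool. */
struct sched_ops {
	void (*insert_request)(int rq, blk_insert_t flags);
};

static const struct sched_ops ops = {
	.insert_request	= sched_insert,
};

static void core_insert(int rq, blk_insert_t flags)
{
	ops.insert_request(rq, flags);	/* forwarded untouched */
}
```

Because the core forwards the flags verbatim, adding another BLK_MQ_INSERT_* flag later needs no further signature churn.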
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
block/bfq-iosched.c | 16 ++++++++--------
block/blk-mq.c | 5 ++---
block/elevator.h | 4 +++-
block/kyber-iosched.c | 5 +++--
block/mq-deadline.c | 9 +++++----
5 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 37f68c907ac08c..b4c4b4808c6c4c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6231,7 +6231,7 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
static struct bfq_queue *bfq_init_rq(struct request *rq);
static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
+ blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct bfq_data *bfqd = q->elevator->elevator_data;
@@ -6254,11 +6254,10 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
trace_block_rq_insert(rq);
- if (!bfqq || at_head) {
- if (at_head)
- list_add(&rq->queuelist, &bfqd->dispatch);
- else
- list_add_tail(&rq->queuelist, &bfqd->dispatch);
+ if (flags & BLK_MQ_INSERT_AT_HEAD) {
+ list_add(&rq->queuelist, &bfqd->dispatch);
+ } else if (!bfqq) {
+ list_add_tail(&rq->queuelist, &bfqd->dispatch);
} else {
idle_timer_disabled = __bfq_insert_request(bfqd, rq);
/*
@@ -6288,14 +6287,15 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *list, bool at_head)
+ struct list_head *list,
+ blk_insert_t flags)
{
while (!list_empty(list)) {
struct request *rq;
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
- bfq_insert_request(hctx, rq, at_head);
+ bfq_insert_request(hctx, rq, flags);
}
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ff74559d7da1fc..6c3db1a15dadc9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2556,8 +2556,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
list_add(&rq->queuelist, &list);
- q->elevator->type->ops.insert_requests(hctx, &list,
- flags & BLK_MQ_INSERT_AT_HEAD);
+ q->elevator->type->ops.insert_requests(hctx, &list, flags);
} else {
trace_block_rq_insert(rq);
@@ -2768,7 +2767,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
percpu_ref_get(&this_hctx->queue->q_usage_counter);
if (this_hctx->queue->elevator) {
this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
- &list, false);
+ &list, 0);
blk_mq_run_hw_queue(this_hctx, from_sched);
} else {
blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
diff --git a/block/elevator.h b/block/elevator.h
index 774a8f6b99e69e..7ca3d7b6ed8289 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -4,6 +4,7 @@
#include <linux/percpu.h>
#include <linux/hashtable.h>
+#include "blk-mq.h"
struct io_cq;
struct elevator_type;
@@ -37,7 +38,8 @@ struct elevator_mq_ops {
void (*limit_depth)(blk_opf_t, struct blk_mq_alloc_data *);
void (*prepare_request)(struct request *);
void (*finish_request)(struct request *);
- void (*insert_requests)(struct blk_mq_hw_ctx *, struct list_head *, bool);
+ void (*insert_requests)(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ blk_insert_t flags);
struct request *(*dispatch_request)(struct blk_mq_hw_ctx *);
bool (*has_work)(struct blk_mq_hw_ctx *);
void (*completed_request)(struct request *, u64);
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 3f9fb2090c9158..4155594aefc657 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -588,7 +588,8 @@ static void kyber_prepare_request(struct request *rq)
}
static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *rq_list, bool at_head)
+ struct list_head *rq_list,
+ blk_insert_t flags)
{
struct kyber_hctx_data *khd = hctx->sched_data;
struct request *rq, *next;
@@ -600,7 +601,7 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
spin_lock(&kcq->lock);
trace_block_rq_insert(rq);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_move(&rq->queuelist, head);
else
list_move_tail(&rq->queuelist, head);
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index ceae477c3571a3..5839a027e0f051 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -766,7 +766,7 @@ static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
* add rq to rbtree and fifo
*/
static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
+ blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -799,7 +799,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
trace_block_rq_insert(rq);
- if (at_head) {
+ if (flags & BLK_MQ_INSERT_AT_HEAD) {
list_add(&rq->queuelist, &per_prio->dispatch);
rq->fifo_time = jiffies;
} else {
@@ -823,7 +823,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
* Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *list, bool at_head)
+ struct list_head *list,
+ blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -834,7 +835,7 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
- dd_insert_request(hctx, rq, at_head);
+ dd_insert_request(hctx, rq, flags);
}
spin_unlock(&dd->lock);
}
--
2.39.2
* [PATCH 20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list
2023-04-13 6:40 cleanup request insertion parameters v3 Christoph Hellwig
` (18 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 19/20] blk-mq: pass a flags argument to elevator_type->insert_requests Christoph Hellwig
@ 2023-04-13 6:40 ` Christoph Hellwig
2023-04-13 6:55 ` Damien Le Moal
2023-04-13 13:11 ` cleanup request insertion parameters v3 Jens Axboe
20 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, Damien Le Moal, linux-block
Replace the boolean at_head argument with the same flags that are already
passed to blk_mq_insert_request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 4 ++--
block/blk-mq.c | 6 +++---
block/blk-mq.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 1d3af17619deb7..00dd2f61312d89 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -188,7 +188,7 @@ static void blk_flush_complete_seq(struct request *rq,
case REQ_FSEQ_DATA:
list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
- blk_mq_add_to_requeue_list(rq, true);
+ blk_mq_add_to_requeue_list(rq, BLK_MQ_INSERT_AT_HEAD);
blk_mq_kick_requeue_list(q);
break;
@@ -346,7 +346,7 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
smp_wmb();
req_ref_set(flush_rq, 1);
- blk_mq_add_to_requeue_list(flush_rq, false);
+ blk_mq_add_to_requeue_list(flush_rq, BLK_MQ_INSERT_AT_HEAD);
blk_mq_kick_requeue_list(q);
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6c3db1a15dadc9..1e35c829bdddfe 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1419,7 +1419,7 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
/* this request will be re-inserted to io scheduler queue */
blk_mq_sched_requeue_request(rq);
- blk_mq_add_to_requeue_list(rq, true);
+ blk_mq_add_to_requeue_list(rq, BLK_MQ_INSERT_AT_HEAD);
if (kick_requeue_list)
blk_mq_kick_requeue_list(q);
@@ -1464,7 +1464,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
blk_mq_run_hw_queues(q, false);
}
-void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
+void blk_mq_add_to_requeue_list(struct request *rq, blk_insert_t insert_flags)
{
struct request_queue *q = rq->q;
unsigned long flags;
@@ -1476,7 +1476,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head)
BUG_ON(rq->rq_flags & RQF_SOFTBARRIER);
spin_lock_irqsave(&q->requeue_lock, flags);
- if (at_head) {
+ if (insert_flags & BLK_MQ_INSERT_AT_HEAD) {
rq->rq_flags |= RQF_SOFTBARRIER;
list_add(&rq->queuelist, &q->requeue_list);
} else {
diff --git a/block/blk-mq.h b/block/blk-mq.h
index bb16c0a54411b0..f882677ff106a5 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -47,7 +47,7 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
void blk_mq_wake_waiters(struct request_queue *q);
bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *,
unsigned int);
-void blk_mq_add_to_requeue_list(struct request *rq, bool at_head);
+void blk_mq_add_to_requeue_list(struct request *rq, blk_insert_t insert_flags);
void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
struct blk_mq_ctx *start);
--
2.39.2
* Re: [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list
2023-04-13 6:40 ` [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list Christoph Hellwig
@ 2023-04-13 6:54 ` Damien Le Moal
2023-04-13 6:59 ` Christoph Hellwig
0 siblings, 1 reply; 26+ messages in thread
From: Damien Le Moal @ 2023-04-13 6:54 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/13/23 15:40, Christoph Hellwig wrote:
> blk_mq_add_to_requeue_list takes a bool parameter that controls whether
> the requeue list is kicked at the end of the function. Move the call to
> blk_mq_kick_requeue_list into the callers that want it instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
One nit below. Looks good otherwise.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index cde7ba9c39bf6b..db806c1a194c7b 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1412,12 +1412,17 @@ static void __blk_mq_requeue_request(struct request *rq)
>
> void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
> {
> + struct request_queue *q = rq->q;
Nit: not really needed given that it is used in one place only.
You could just call "blk_mq_kick_requeue_list(rq->q)" below.
> +
> __blk_mq_requeue_request(rq);
>
> /* this request will be re-inserted to io scheduler queue */
> blk_mq_sched_requeue_request(rq);
>
> - blk_mq_add_to_requeue_list(rq, true, kick_requeue_list);
> + blk_mq_add_to_requeue_list(rq, true);
> +
> + if (kick_requeue_list)
> + blk_mq_kick_requeue_list(q);
> }
> EXPORT_SYMBOL(blk_mq_requeue_request);
* Re: [PATCH 20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list
2023-04-13 6:40 ` [PATCH 20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list Christoph Hellwig
@ 2023-04-13 6:55 ` Damien Le Moal
0 siblings, 0 replies; 26+ messages in thread
From: Damien Le Moal @ 2023-04-13 6:55 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/13/23 15:40, Christoph Hellwig wrote:
> Replace the boolean at_head argument with the same flags that are already
> passed to blk_mq_insert_request.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
* Re: [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list
2023-04-13 6:54 ` Damien Le Moal
@ 2023-04-13 6:59 ` Christoph Hellwig
2023-04-13 7:47 ` Damien Le Moal
0 siblings, 1 reply; 26+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:59 UTC (permalink / raw)
To: Damien Le Moal
Cc: Christoph Hellwig, Jens Axboe, Bart Van Assche, linux-block
On Thu, Apr 13, 2023 at 03:54:31PM +0900, Damien Le Moal wrote:
> > void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
> > {
> > + struct request_queue *q = rq->q;
>
> Nit: not really needed given that it is used in one place only.
> You could just call "blk_mq_kick_requeue_list(rq->q)" below.
It is needed, because we can't dereference rq safely after
blk_mq_add_to_requeue_list returns.
* Re: [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list
2023-04-13 6:59 ` Christoph Hellwig
@ 2023-04-13 7:47 ` Damien Le Moal
0 siblings, 0 replies; 26+ messages in thread
From: Damien Le Moal @ 2023-04-13 7:47 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, Bart Van Assche, linux-block
On 4/13/23 15:59, Christoph Hellwig wrote:
> On Thu, Apr 13, 2023 at 03:54:31PM +0900, Damien Le Moal wrote:
>>> void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
>>> {
>>> + struct request_queue *q = rq->q;
>>
>> Nit: not really needed given that it is used in one place only.
>> You could just call "blk_mq_kick_requeue_list(rq->q)" below.
>
> It is needed, because we can't dereference rq safely after
> blk_mq_add_to_requeue_list returns.
Ah, yes, indeed. Sorry for the noise.
* Re: cleanup request insertation parameters v3
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
` (19 preceding siblings ...)
2023-04-13 6:40 ` [PATCH 20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list Christoph Hellwig
@ 2023-04-13 13:11 ` Jens Axboe
20 siblings, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2023-04-13 13:11 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Bart Van Assche, Damien Le Moal, linux-block
On Thu, 13 Apr 2023 08:40:37 +0200, Christoph Hellwig wrote:
> In the context of his latest series, Bart commented that it's too
> hard to find all the spots that do a head insertion into the blk-mq
> dispatch queues. This series collapses various far too deep call
> chains, drops two of the three bools, and then replaces the final
> one with a greppable constant.
>
> This will create some rebase work for Bart on top of the other comments
> he got, but I think this will allow us to sort out some of the request
> order issues much better while also making the code a lot more readable.
>
> [...]
Applied, thanks!
[01/20] blk-mq: don't plug for head insertions in blk_execute_rq_nowait
commit: 50947d7fe9fa6abe3ddc40769dfb02a51c58edb6
[02/20] blk-mq: remove blk-mq-tag.h
commit: bebe84ebeec4d030aa65af58376305749762e5a0
[03/20] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h
commit: 90110e04f265b95f59fbae09c228c5920b8a302f
[04/20] blk-mq: move more logic into blk_mq_insert_requests
commit: 94aa228c2a2f6edc8e9b7c4745942ea4c5978977
[05/20] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list
commit: 05a93117703e7b2e40fa9193e622079b30395bcc
[06/20] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
commit: 2bd215df791b5d36ca1d20c07683100b48310cc2
[07/20] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request
commit: a88db1e0003eda8adbe3c499b81f736d8065b952
[08/20] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
commit: 4ec5c0553c33e42f2d650785309de17d4cb8f5ba
[09/20] blk-mq: remove blk_flush_queue_rq
commit: a4fa57ffb7671c2df4ce597d03ef9f7d6d905a60
[10/20] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request
commit: 53548d2a945eb2c277332c66f57505881392e5a9
[11/20] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work
commit: a1e948b81ad21d635b99c1284f945423cb02b4c4
[12/20] blk-mq: factor out a blk_mq_get_budget_and_tag helper
commit: 2b71b8770710f2913e29053f01b6c7df1a5c7f75
[13/20] blk-mq: fold __blk_mq_try_issue_directly into its two callers
commit: e1f44ac0d7f48ec44a1eacfe637e545c408ede40
[14/20] blk-mq: don't run the hw_queue from blk_mq_insert_request
commit: f0dbe6e88e1bf4003ef778527b975ff60dbdd35a
[15/20] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
commit: 2394395cd598f6404c57ae0b63afb5d37e94924d
[16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list
commit: 214a441805b8cc090930fb00193125e22466a95a
[17/20] blk-mq: pass a flags argument to blk_mq_insert_request
commit: 710fa3789ed94ceee9675f8e189aaf3e7525269a
[18/20] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
commit: 2b5976134bfbc753dec6281da0890c5f194c00c9
[19/20] blk-mq: pass a flags argument to elevator_type->insert_requests
commit: 93fffe16f7ee18600f15838e2e8b5cf353f245c8
[20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list
commit: b12e5c6c755ae8bec44723f77f037873e3d08021
Best regards,
--
Jens Axboe
end of thread, other threads:[~2023-04-13 13:12 UTC | newest]
Thread overview: 26+ messages
2023-04-13 6:40 cleanup request insertation parameters v3 Christoph Hellwig
2023-04-13 6:40 ` [PATCH 01/20] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
2023-04-13 6:40 ` [PATCH 02/20] blk-mq: remove blk-mq-tag.h Christoph Hellwig
2023-04-13 6:40 ` [PATCH 03/20] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
2023-04-13 6:40 ` [PATCH 04/20] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
2023-04-13 6:40 ` [PATCH 05/20] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
2023-04-13 6:40 ` [PATCH 06/20] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
2023-04-13 6:40 ` [PATCH 07/20] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
2023-04-13 6:40 ` [PATCH 08/20] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
2023-04-13 6:40 ` [PATCH 09/20] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
2023-04-13 6:40 ` [PATCH 10/20] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
2023-04-13 6:40 ` [PATCH 11/20] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
2023-04-13 6:40 ` [PATCH 12/20] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
2023-04-13 6:40 ` [PATCH 13/20] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
2023-04-13 6:40 ` [PATCH 14/20] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
2023-04-13 6:40 ` [PATCH 15/20] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
2023-04-13 6:40 ` [PATCH 16/20] blk-mq: don't kick the requeue_list in blk_mq_add_to_requeue_list Christoph Hellwig
2023-04-13 6:54 ` Damien Le Moal
2023-04-13 6:59 ` Christoph Hellwig
2023-04-13 7:47 ` Damien Le Moal
2023-04-13 6:40 ` [PATCH 17/20] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
2023-04-13 6:40 ` [PATCH 18/20] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
2023-04-13 6:40 ` [PATCH 19/20] blk-mq: pass a flags argument to elevator_type->insert_requests Christoph Hellwig
2023-04-13 6:40 ` [PATCH 20/20] blk-mq: pass a flags argument to blk_mq_add_to_requeue_list Christoph Hellwig
2023-04-13 6:55 ` Damien Le Moal
2023-04-13 13:11 ` cleanup request insertation parameters v3 Jens Axboe