* cleanup request insertion parameters v2
@ 2023-04-12 5:32 Christoph Hellwig
2023-04-12 5:32 ` [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
` (17 more replies)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Hi Jens,
in the context of his latest series, Bart commented that it's too hard
to find all the spots that do a head insertion into the blk-mq dispatch
queues. This series collapses various far too deep call chains, drops
two of the three bools, and then replaces the final one with a greppable
constant.
This will create some rebase work for Bart on top of the other comments
he got, but I think it will allow us to sort out some of the request
ordering issues much better while also making the code a lot more
readable.
Changes since v1:
- add back a blk_mq_run_hw_queue in blk_insert_flush that got lost
- use a __bitwise type for the insert flags (see the sketch below)
- sort out header hell a bit
- various typo fixes
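For readers new to the pattern: below is a minimal, hypothetical sketch
of what a __bitwise insert flag looks like and why it makes head
insertions greppable. The names are illustrative only, and __bitwise /
__force are the kernel's usual sparse annotations; the actual
definitions are introduced later in the series.

	/* sparse warns when a __bitwise type is mixed with plain ints */
	typedef unsigned int __bitwise blk_insert_t;
	#define BLK_MQ_INSERT_AT_HEAD	((__force blk_insert_t)0x01)

	/* a head insertion must now spell out the constant ... */
	blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);

	/* ... so every head insertion shows up with a single grep:
	 *   $ git grep BLK_MQ_INSERT_AT_HEAD block/
	 */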
Diffstat:
b/block/bfq-iosched.c | 17 +-
b/block/blk-flush.c | 15 --
b/block/blk-mq-cpumap.c | 1
b/block/blk-mq-debugfs.c | 2
b/block/blk-mq-pci.c | 1
b/block/blk-mq-sched.c | 112 ------------------
b/block/blk-mq-sched.h | 7 -
b/block/blk-mq-sysfs.c | 2
b/block/blk-mq-tag.c | 2
b/block/blk-mq-virtio.c | 1
b/block/blk-mq.c | 285 ++++++++++++++++++++++++++++-------------------
b/block/blk-mq.h | 74 ++++++++++--
b/block/blk-pm.c | 2
b/block/blk-stat.c | 1
b/block/blk-sysfs.c | 1
b/block/elevator.h | 4
b/block/kyber-iosched.c | 7 -
b/block/mq-deadline.c | 13 --
block/blk-mq-tag.h | 73 ------------
19 files changed, 265 insertions(+), 355 deletions(-)
* [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 6:55 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 02/18] blk-mq: remove blk-mq-tag.h Christoph Hellwig
` (16 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Plugs never insert at head: a request added to the plug list is only
inserted (at the tail) once the plug is flushed, which would silently
drop the at_head semantics. So don't plug for head insertions.
Fixes: 1c2d2fff6dc0 ("block: wire-up support for passthrough plugging")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 52f8e0099c7f4b..7908d19f140815 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1299,7 +1299,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
* device, directly accessing the plug instead of using blk_mq_plug()
* should not have any consequences.
*/
- if (current->plug)
+ if (current->plug && !at_head)
blk_add_rq_to_plug(current->plug, rq);
else
blk_mq_sched_insert_request(rq, at_head, true, false);
--
2.39.2
* [PATCH 02/18] blk-mq: remove blk-mq-tag.h
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
2023-04-12 5:32 ` [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 6:57 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 03/18] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
` (15 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk-mq-tag.h is always included by blk-mq.h, and causes recursive
inclusion hell with further changes. Just merge it into blk-mq.h
instead.
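As a purely hypothetical illustration of the problem: once
block/blk-mq.h grows definitions that the tag code also needs (as later
patches in this series do), keeping the two headers separate would force
them to include each other:

	/* block/blk-mq-tag.h */
	#include "blk-mq.h"	/* for definitions added later in the series */

	/* block/blk-mq.h */
	#include "blk-mq-tag.h"	/* as it does today */

Merging the tag declarations into blk-mq.h avoids the cycle outright.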
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/bfq-iosched.c | 1 -
block/blk-flush.c | 1 -
block/blk-mq-debugfs.c | 1 -
block/blk-mq-sched.c | 1 -
block/blk-mq-sched.h | 1 -
block/blk-mq-sysfs.c | 1 -
block/blk-mq-tag.c | 1 -
block/blk-mq-tag.h | 73 ------------------------------------------
block/blk-mq.c | 1 -
block/blk-mq.h | 61 ++++++++++++++++++++++++++++++++++-
block/blk-pm.c | 1 -
block/kyber-iosched.c | 1 -
block/mq-deadline.c | 1 -
13 files changed, 60 insertions(+), 85 deletions(-)
delete mode 100644 block/blk-mq-tag.h
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index d9ed3108c17af6..37f68c907ac08c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -129,7 +129,6 @@
#include "elevator.h"
#include "blk.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
#include "bfq-iosched.h"
#include "blk-wbt.h"
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 53202eff545efb..a13a1d6caa0f3e 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -73,7 +73,6 @@
#include "blk.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/* PREFLUSH/FUA sequences */
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 212a7f301e7302..ace2bcf1cf9a6f 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -12,7 +12,6 @@
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
#include "blk-rq-qos.h"
static int queue_poll_stat_show(void *data, struct seq_file *m)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 06b312c691143f..1029e8eed5eef6 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -15,7 +15,6 @@
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
#include "blk-wbt.h"
/*
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 0250139724539a..65cab6e475be8e 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -4,7 +4,6 @@
#include "elevator.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
#define MAX_SCHED_RQ (16 * BLKDEV_DEFAULT_RQ)
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index 1b2b0d258e465f..ba84caa868dd54 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -13,7 +13,6 @@
#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
-#include "blk-mq-tag.h"
static void blk_mq_sysfs_release(struct kobject *kobj)
{
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 9eb968e14d31f8..1f8b065d72c5f2 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -14,7 +14,6 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
/*
* Recalculate wakeup batch when tag is shared by hctx.
diff --git a/block/blk-mq-tag.h b/block/blk-mq-tag.h
deleted file mode 100644
index 91ff37e3b43dff..00000000000000
--- a/block/blk-mq-tag.h
+++ /dev/null
@@ -1,73 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef INT_BLK_MQ_TAG_H
-#define INT_BLK_MQ_TAG_H
-
-struct blk_mq_alloc_data;
-
-extern struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
- unsigned int reserved_tags,
- int node, int alloc_policy);
-extern void blk_mq_free_tags(struct blk_mq_tags *tags);
-extern int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags,
- struct sbitmap_queue *breserved_tags,
- unsigned int queue_depth,
- unsigned int reserved,
- int node, int alloc_policy);
-
-extern unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
-unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
- unsigned int *offset);
-extern void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx,
- unsigned int tag);
-void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);
-extern int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_tags **tags,
- unsigned int depth, bool can_grow);
-extern void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,
- unsigned int size);
-extern void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);
-
-extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
-void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
- void *priv);
-void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
- void *priv);
-
-static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
- struct blk_mq_hw_ctx *hctx)
-{
- if (!hctx)
- return &bt->ws[0];
- return sbq_wait_ptr(bt, &hctx->wait_index);
-}
-
-enum {
- BLK_MQ_NO_TAG = -1U,
- BLK_MQ_TAG_MIN = 1,
- BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1,
-};
-
-extern void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
-extern void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
-
-static inline void blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
-{
- if (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
- __blk_mq_tag_busy(hctx);
-}
-
-static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
-{
- if (!(hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
- return;
-
- __blk_mq_tag_idle(hctx);
-}
-
-static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags,
- unsigned int tag)
-{
- return tag < tags->nr_reserved_tags;
-}
-
-#endif
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7908d19f140815..545600be2063ac 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -37,7 +37,6 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
-#include "blk-mq-tag.h"
#include "blk-pm.h"
#include "blk-stat.h"
#include "blk-mq-sched.h"
diff --git a/block/blk-mq.h b/block/blk-mq.h
index ef59fee62780d3..7a041fecea02e4 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -3,7 +3,6 @@
#define INT_BLK_MQ_H
#include "blk-stat.h"
-#include "blk-mq-tag.h"
struct blk_mq_tag_set;
@@ -30,6 +29,12 @@ struct blk_mq_ctx {
struct kobject kobj;
} ____cacheline_aligned_in_smp;
+enum {
+ BLK_MQ_NO_TAG = -1U,
+ BLK_MQ_TAG_MIN = 1,
+ BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1,
+};
+
void blk_mq_submit_bio(struct bio *bio);
int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
unsigned int flags);
@@ -164,6 +169,60 @@ struct blk_mq_alloc_data {
struct blk_mq_hw_ctx *hctx;
};
+struct blk_mq_tags *blk_mq_init_tags(unsigned int nr_tags,
+ unsigned int reserved_tags, int node, int alloc_policy);
+void blk_mq_free_tags(struct blk_mq_tags *tags);
+int blk_mq_init_bitmaps(struct sbitmap_queue *bitmap_tags,
+ struct sbitmap_queue *breserved_tags, unsigned int queue_depth,
+ unsigned int reserved, int node, int alloc_policy);
+
+unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data);
+unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
+ unsigned int *offset);
+void blk_mq_put_tag(struct blk_mq_tags *tags, struct blk_mq_ctx *ctx,
+ unsigned int tag);
+void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);
+int blk_mq_tag_update_depth(struct blk_mq_hw_ctx *hctx,
+ struct blk_mq_tags **tags, unsigned int depth, bool can_grow);
+void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,
+ unsigned int size);
+void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);
+
+void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
+void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
+ void *priv);
+void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
+ void *priv);
+
+static inline struct sbq_wait_state *bt_wait_ptr(struct sbitmap_queue *bt,
+ struct blk_mq_hw_ctx *hctx)
+{
+ if (!hctx)
+ return &bt->ws[0];
+ return sbq_wait_ptr(bt, &hctx->wait_index);
+}
+
+void __blk_mq_tag_busy(struct blk_mq_hw_ctx *);
+void __blk_mq_tag_idle(struct blk_mq_hw_ctx *);
+
+static inline void blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+{
+ if (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
+ __blk_mq_tag_busy(hctx);
+}
+
+static inline void blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
+{
+ if (hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
+ __blk_mq_tag_idle(hctx);
+}
+
+static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags,
+ unsigned int tag)
+{
+ return tag < tags->nr_reserved_tags;
+}
+
static inline bool blk_mq_is_shared_tags(unsigned int flags)
{
return flags & BLK_MQ_F_TAG_HCTX_SHARED;
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 2dad62cc157272..8af5ee54feb406 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -5,7 +5,6 @@
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>
#include "blk-mq.h"
-#include "blk-mq-tag.h"
/**
* blk_pm_runtime_init - Block layer runtime PM initialization routine
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 2146969237bfed..d0a4838ce7fc63 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -19,7 +19,6 @@
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
-#include "blk-mq-tag.h"
#define CREATE_TRACE_POINTS
#include <trace/events/kyber.h>
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index f10c2a0d18d411..a18526e11194ca 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -23,7 +23,6 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
-#include "blk-mq-tag.h"
#include "blk-mq-sched.h"
/*
--
2.39.2
* [PATCH 03/18] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
2023-04-12 5:32 ` [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
2023-04-12 5:32 ` [PATCH 02/18] blk-mq: remove blk-mq-tag.h Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 6:58 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 04/18] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
` (14 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
block/blk-mq.h needs various definitions from <linux/blk-mq.h>;
include it there instead of relying on the source files to include
both.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 1 -
block/blk-mq-cpumap.c | 1 -
block/blk-mq-debugfs.c | 1 -
block/blk-mq-pci.c | 1 -
block/blk-mq-sched.c | 1 -
block/blk-mq-sysfs.c | 1 -
block/blk-mq-tag.c | 1 -
block/blk-mq-virtio.c | 1 -
block/blk-mq.c | 1 -
block/blk-mq.h | 1 +
block/blk-pm.c | 1 -
block/blk-stat.c | 1 -
block/blk-sysfs.c | 1 -
block/kyber-iosched.c | 1 -
block/mq-deadline.c | 1 -
15 files changed, 1 insertion(+), 14 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index a13a1d6caa0f3e..3c81b0af5b3964 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -68,7 +68,6 @@
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/gfp.h>
-#include <linux/blk-mq.h>
#include <linux/part_stat.h>
#include "blk.h"
diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 0c612c19feb8b1..9638b25fd52124 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -12,7 +12,6 @@
#include <linux/cpu.h>
#include <linux/group_cpus.h>
-#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index ace2bcf1cf9a6f..d23a8554ec4aeb 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -7,7 +7,6 @@
#include <linux/blkdev.h>
#include <linux/debugfs.h>
-#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
#include "blk-mq-debugfs.h"
diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index a90b88fd1332ce..d47b5c73c9eb71 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -4,7 +4,6 @@
*/
#include <linux/kobject.h>
#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>
#include <linux/pci.h>
#include <linux/module.h>
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 1029e8eed5eef6..c4b2d44b2d4ebf 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -6,7 +6,6 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/blk-mq.h>
#include <linux/list_sort.h>
#include <trace/events/block.h>
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index ba84caa868dd54..156e9bb07abf1a 100644
--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -10,7 +10,6 @@
#include <linux/workqueue.h>
#include <linux/smp.h>
-#include <linux/blk-mq.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 1f8b065d72c5f2..d6af9d431dc631 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -9,7 +9,6 @@
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/blk-mq.h>
#include <linux/delay.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq-virtio.c b/block/blk-mq-virtio.c
index 6589f076a09635..68d0945c0b08a2 100644
--- a/block/blk-mq-virtio.c
+++ b/block/blk-mq-virtio.c
@@ -3,7 +3,6 @@
* Copyright (c) 2016 Christoph Hellwig.
*/
#include <linux/device.h>
-#include <linux/blk-mq.h>
#include <linux/blk-mq-virtio.h>
#include <linux/virtio_config.h>
#include <linux/module.h>
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 545600be2063ac..29014a0f9f39b1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -32,7 +32,6 @@
#include <trace/events/block.h>
-#include <linux/blk-mq.h>
#include <linux/t10-pi.h>
#include "blk.h"
#include "blk-mq.h"
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 7a041fecea02e4..fa13b694ff27d6 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -2,6 +2,7 @@
#ifndef INT_BLK_MQ_H
#define INT_BLK_MQ_H
+#include <linux/blk-mq.h>
#include "blk-stat.h"
struct blk_mq_tag_set;
diff --git a/block/blk-pm.c b/block/blk-pm.c
index 8af5ee54feb406..6b72b2e03fc8a8 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -1,6 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/blk-mq.h>
#include <linux/blk-pm.h>
#include <linux/blkdev.h>
#include <linux/pm_runtime.h>
diff --git a/block/blk-stat.c b/block/blk-stat.c
index 74a1a8c32d86f8..6226405142ff95 100644
--- a/block/blk-stat.c
+++ b/block/blk-stat.c
@@ -6,7 +6,6 @@
*/
#include <linux/kernel.h>
#include <linux/rculist.h>
-#include <linux/blk-mq.h>
#include "blk-stat.h"
#include "blk-mq.h"
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 1a743b4f29582d..a642085838531f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -9,7 +9,6 @@
#include <linux/blkdev.h>
#include <linux/backing-dev.h>
#include <linux/blktrace_api.h>
-#include <linux/blk-mq.h>
#include <linux/debugfs.h>
#include "blk.h"
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index d0a4838ce7fc63..3f9fb2090c9158 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -8,7 +8,6 @@
#include <linux/kernel.h>
#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
#include <linux/module.h>
#include <linux/sbitmap.h>
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index a18526e11194ca..af9e79050dcc1f 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -8,7 +8,6 @@
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
-#include <linux/blk-mq.h>
#include <linux/bio.h>
#include <linux/module.h>
#include <linux/slab.h>
--
2.39.2
* [PATCH 04/18] blk-mq: move more logic into blk_mq_insert_requests
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (2 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 03/18] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:07 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 05/18] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
` (13 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Move all logic related to the direct insert into blk_mq_insert_requests
to clean the code flow up a bit, and to allow marking
blk_mq_try_issue_list_directly static.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq-sched.c | 17 ++---------------
block/blk-mq.c | 20 ++++++++++++++++++--
block/blk-mq.h | 4 +---
3 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index c4b2d44b2d4ebf..811a9765b745c0 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -472,23 +472,10 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
e = hctx->queue->elevator;
if (e) {
e->type->ops.insert_requests(hctx, list, false);
+ blk_mq_run_hw_queue(hctx, run_queue_async);
} else {
- /*
- * try to issue requests directly if the hw queue isn't
- * busy in case of 'none' scheduler, and this way may save
- * us one extra enqueue & dequeue to sw queue.
- */
- if (!hctx->dispatch_busy && !run_queue_async) {
- blk_mq_run_dispatch_ops(hctx->queue,
- blk_mq_try_issue_list_directly(hctx, list));
- if (list_empty(list))
- goto out;
- }
- blk_mq_insert_requests(hctx, ctx, list);
+ blk_mq_insert_requests(hctx, ctx, list, run_queue_async);
}
-
- blk_mq_run_hw_queue(hctx, run_queue_async);
- out:
percpu_ref_put(&q->q_usage_counter);
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 29014a0f9f39b1..536f001282bb63 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,6 +44,9 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list);
+
static inline struct blk_mq_hw_ctx *blk_qc_to_hctx(struct request_queue *q,
blk_qc_t qc)
{
@@ -2495,12 +2498,23 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
}
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list)
+ struct list_head *list, bool run_queue_async)
{
struct request *rq;
enum hctx_type type = hctx->type;
+ /*
+ * Try to issue requests directly if the hw queue isn't busy to save an
+ * extra enqueue & dequeue to the sw queue.
+ */
+ if (!hctx->dispatch_busy && !run_queue_async) {
+ blk_mq_run_dispatch_ops(hctx->queue,
+ blk_mq_try_issue_list_directly(hctx, list));
+ if (list_empty(list))
+ goto out;
+ }
+
/*
* preemption doesn't flush plug list, so it's possible ctx->cpu is
* offline now
@@ -2514,6 +2528,8 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
list_splice_tail_init(list, &ctx->rq_lists[type]);
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
+out:
+ blk_mq_run_hw_queue(hctx, run_queue_async);
}
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2755,7 +2771,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
} while (!rq_list_empty(plug->mq_list));
}
-void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list)
{
int queued = 0;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index fa13b694ff27d6..5d551f9ef2d6be 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -70,9 +70,7 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list);
-void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
- struct list_head *list);
+ struct list_head *list, bool run_queue_async);
/*
* CPU -> queue mappings
--
2.39.2
* [PATCH 05/18] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (3 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 04/18] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:09 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
` (12 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_dispatch_plug_list is the only caller of
blk_mq_sched_insert_requests, and it makes sense to just fold it there
as blk_mq_sched_insert_requests isn't specific to I/O schedulers despite
the name.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq-sched.c | 24 ------------------------
block/blk-mq-sched.h | 3 ---
block/blk-mq.c | 17 +++++++++++++----
block/blk-mq.h | 2 --
block/mq-deadline.c | 2 +-
5 files changed, 14 insertions(+), 34 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 811a9765b745c0..9c0d231722d9ce 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -455,30 +455,6 @@ void blk_mq_sched_insert_request(struct request *rq, bool at_head,
blk_mq_run_hw_queue(hctx, async);
}
-void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async)
-{
- struct elevator_queue *e;
- struct request_queue *q = hctx->queue;
-
- /*
- * blk_mq_sched_insert_requests() is called from flush plug
- * context only, and hold one usage counter to prevent queue
- * from being released.
- */
- percpu_ref_get(&q->q_usage_counter);
-
- e = hctx->queue->elevator;
- if (e) {
- e->type->ops.insert_requests(hctx, list, false);
- blk_mq_run_hw_queue(hctx, run_queue_async);
- } else {
- blk_mq_insert_requests(hctx, ctx, list, run_queue_async);
- }
- percpu_ref_put(&q->q_usage_counter);
-}
-
static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 65cab6e475be8e..1ec01e9934dc45 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -18,9 +18,6 @@ void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
void blk_mq_sched_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async);
-void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async);
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 536f001282bb63..f1da4f053cc691 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2497,9 +2497,9 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
blk_mq_run_hw_queue(hctx, false);
}
-void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async)
-
+static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
+ struct blk_mq_ctx *ctx, struct list_head *list,
+ bool run_queue_async)
{
struct request *rq;
enum hctx_type type = hctx->type;
@@ -2725,7 +2725,16 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
plug->mq_list = requeue_list;
trace_block_unplug(this_hctx->queue, depth, !from_sched);
- blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
+
+ percpu_ref_get(&this_hctx->queue->q_usage_counter);
+ if (this_hctx->queue->elevator) {
+ this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
+ &list, false);
+ blk_mq_run_hw_queue(this_hctx, from_sched);
+ } else {
+ blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
+ }
+ percpu_ref_put(&this_hctx->queue->q_usage_counter);
}
void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 5d551f9ef2d6be..bd7ae5e67a526b 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -69,8 +69,6 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
bool at_head);
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
-void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- struct list_head *list, bool run_queue_async);
/*
* CPU -> queue mappings
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index af9e79050dcc1f..d62a3039c8e04f 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -820,7 +820,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
/*
- * Called from blk_mq_sched_insert_request() or blk_mq_sched_insert_requests().
+ * Called from blk_mq_sched_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
--
2.39.2
* [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (4 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 05/18] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:14 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 07/18] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
` (11 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_sched_insert_request is the main request insert helper and not
directly I/O scheduler related. Move blk_mq_sched_insert_request to
blk-mq.c, rename it to blk_mq_insert_request and mark it static.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq-sched.c | 73 -------------------------------------
block/blk-mq-sched.h | 3 --
block/blk-mq.c | 87 +++++++++++++++++++++++++++++++++++++++++---
block/mq-deadline.c | 2 +-
4 files changed, 82 insertions(+), 83 deletions(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 9c0d231722d9ce..f90fc42a88ca2f 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -382,79 +382,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
}
EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
-static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
- struct request *rq)
-{
- /*
- * dispatch flush and passthrough rq directly
- *
- * passthrough request has to be added to hctx->dispatch directly.
- * For some reason, device may be in one situation which can't
- * handle FS request, so STS_RESOURCE is always returned and the
- * FS request will be added to hctx->dispatch. However passthrough
- * request may be required at that time for fixing the problem. If
- * passthrough request is added to scheduler queue, there isn't any
- * chance to dispatch it given we prioritize requests in hctx->dispatch.
- */
- if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
- return true;
-
- return false;
-}
-
-void blk_mq_sched_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async)
-{
- struct request_queue *q = rq->q;
- struct elevator_queue *e = q->elevator;
- struct blk_mq_ctx *ctx = rq->mq_ctx;
- struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
-
- WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
-
- if (blk_mq_sched_bypass_insert(hctx, rq)) {
- /*
- * Firstly normal IO request is inserted to scheduler queue or
- * sw queue, meantime we add flush request to dispatch queue(
- * hctx->dispatch) directly and there is at most one in-flight
- * flush request for each hw queue, so it doesn't matter to add
- * flush request to tail or front of the dispatch queue.
- *
- * Secondly in case of NCQ, flush request belongs to non-NCQ
- * command, and queueing it will fail when there is any
- * in-flight normal IO request(NCQ command). When adding flush
- * rq to the front of hctx->dispatch, it is easier to introduce
- * extra time to flush rq's latency because of S_SCHED_RESTART
- * compared with adding to the tail of dispatch queue, then
- * chance of flush merge is increased, and less flush requests
- * will be issued to controller. It is observed that ~10% time
- * is saved in blktests block/004 on disk attached to AHCI/NCQ
- * drive when adding flush rq to the front of hctx->dispatch.
- *
- * Simply queue flush rq to the front of hctx->dispatch so that
- * intensive flush workloads can benefit in case of NCQ HW.
- */
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
- LIST_HEAD(list);
-
- list_add(&rq->queuelist, &list);
- e->type->ops.insert_requests(hctx, &list, at_head);
- } else {
- spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
- spin_unlock(&ctx->lock);
- }
-
-run:
- if (run_queue)
- blk_mq_run_hw_queue(hctx, async);
-}
-
static int blk_mq_sched_alloc_map_and_rqs(struct request_queue *q,
struct blk_mq_hw_ctx *hctx,
unsigned int hctx_idx)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 1ec01e9934dc45..7c3cbad17f3052 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -16,9 +16,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq,
void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx);
void __blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
-void blk_mq_sched_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async);
-
void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx);
int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index f1da4f053cc691..78e54a64fe920b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,6 +44,8 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+static void blk_mq_insert_request(struct request *rq, bool at_head,
+ bool run_queue, bool async);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1303,7 +1305,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
if (current->plug && !at_head)
blk_add_rq_to_plug(current->plug, rq);
else
- blk_mq_sched_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head, true, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1364,7 +1366,7 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_sched_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head, true, false);
if (blk_rq_is_poll(rq)) {
blk_rq_poll_completion(rq, &wait.done);
@@ -1438,13 +1440,13 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP)
blk_mq_request_bypass_insert(rq, false, false);
else
- blk_mq_sched_insert_request(rq, true, false, false);
+ blk_mq_insert_request(rq, true, false, false);
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_sched_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false, false, false);
}
blk_mq_run_hw_queues(q, false);
@@ -2532,6 +2534,79 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
+static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
+ struct request *rq)
+{
+ /*
+ * dispatch flush and passthrough rq directly
+ *
+ * passthrough request has to be added to hctx->dispatch directly.
+ * For some reason, device may be in one situation which can't
+ * handle FS request, so STS_RESOURCE is always returned and the
+ * FS request will be added to hctx->dispatch. However passthrough
+ * request may be required at that time for fixing the problem. If
+ * passthrough request is added to scheduler queue, there isn't any
+ * chance to dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
+ return true;
+
+ return false;
+}
+
+static void blk_mq_insert_request(struct request *rq, bool at_head,
+ bool run_queue, bool async)
+{
+ struct request_queue *q = rq->q;
+ struct elevator_queue *e = q->elevator;
+ struct blk_mq_ctx *ctx = rq->mq_ctx;
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+ WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
+
+ if (blk_mq_sched_bypass_insert(hctx, rq)) {
+ /*
+ * Firstly normal IO request is inserted to scheduler queue or
+ * sw queue, meantime we add flush request to dispatch queue(
+ * hctx->dispatch) directly and there is at most one in-flight
+ * flush request for each hw queue, so it doesn't matter to add
+ * flush request to tail or front of the dispatch queue.
+ *
+ * Secondly in case of NCQ, flush request belongs to non-NCQ
+ * command, and queueing it will fail when there is any
+ * in-flight normal IO request(NCQ command). When adding flush
+ * rq to the front of hctx->dispatch, it is easier to introduce
+ * extra time to flush rq's latency because of S_SCHED_RESTART
+ * compared with adding to the tail of dispatch queue, then
+ * chance of flush merge is increased, and less flush requests
+ * will be issued to controller. It is observed that ~10% time
+ * is saved in blktests block/004 on disk attached to AHCI/NCQ
+ * drive when adding flush rq to the front of hctx->dispatch.
+ *
+ * Simply queue flush rq to the front of hctx->dispatch so that
+ * intensive flush workloads can benefit in case of NCQ HW.
+ */
+ at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
+ blk_mq_request_bypass_insert(rq, at_head, false);
+ goto run;
+ }
+
+ if (e) {
+ LIST_HEAD(list);
+
+ list_add(&rq->queuelist, &list);
+ e->type->ops.insert_requests(hctx, &list, at_head);
+ } else {
+ spin_lock(&ctx->lock);
+ __blk_mq_insert_request(hctx, rq, at_head);
+ spin_unlock(&ctx->lock);
+ }
+
+run:
+ if (run_queue)
+ blk_mq_run_hw_queue(hctx, async);
+}
+
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
unsigned int nr_segs)
{
@@ -2623,7 +2698,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if (bypass_insert)
return BLK_STS_RESOURCE;
- blk_mq_sched_insert_request(rq, false, run_queue, false);
+ blk_mq_insert_request(rq, false, run_queue, false);
return BLK_STS_OK;
}
@@ -2975,7 +3050,7 @@ void blk_mq_submit_bio(struct bio *bio)
else if ((rq->rq_flags & RQF_ELV) ||
(rq->mq_hctx->dispatch_busy &&
(q->nr_hw_queues == 1 || !is_sync)))
- blk_mq_sched_insert_request(rq, false, true, true);
+ blk_mq_insert_request(rq, false, true, true);
else
blk_mq_run_dispatch_ops(rq->q,
blk_mq_try_issue_directly(rq->mq_hctx, rq));
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index d62a3039c8e04f..ceae477c3571a3 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -820,7 +820,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
/*
- * Called from blk_mq_sched_insert_request() or blk_mq_dispatch_plug_list().
+ * Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
struct list_head *list, bool at_head)
--
2.39.2
* [PATCH 07/18] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (5 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:15 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
` (10 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
There is no good reason to keep __blk_mq_insert_request around
for two function calls and a single caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 14 ++------------
block/blk-mq.h | 2 --
2 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 78e54a64fe920b..103caf1bae2769 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2463,17 +2463,6 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
}
-void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
-{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
-
- lockdep_assert_held(&ctx->lock);
-
- __blk_mq_insert_req_list(hctx, rq, at_head);
- blk_mq_hctx_mark_pending(hctx, ctx);
-}
-
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
@@ -2598,7 +2587,8 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
e->type->ops.insert_requests(hctx, &list, at_head);
} else {
spin_lock(&ctx->lock);
- __blk_mq_insert_request(hctx, rq, at_head);
+ __blk_mq_insert_req_list(hctx, rq, at_head);
+ blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index bd7ae5e67a526b..e2d59e33046e30 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -65,8 +65,6 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head);
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
bool run_queue);
--
2.39.2
* [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (6 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 07/18] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:16 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 09/18] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
` (9 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Remove this very small helper and fold it into the only caller.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 25 +++++++------------------
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 103caf1bae2769..7e9f7d00452f11 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2446,23 +2446,6 @@ static void blk_mq_run_work_fn(struct work_struct *work)
__blk_mq_run_hw_queue(hctx);
}
-static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
- struct request *rq,
- bool at_head)
-{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
- enum hctx_type type = hctx->type;
-
- lockdep_assert_held(&ctx->lock);
-
- trace_block_rq_insert(rq);
-
- if (at_head)
- list_add(&rq->queuelist, &ctx->rq_lists[type]);
- else
- list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
-}
-
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
@@ -2586,8 +2569,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
list_add(&rq->queuelist, &list);
e->type->ops.insert_requests(hctx, &list, at_head);
} else {
+ trace_block_rq_insert(rq);
+
spin_lock(&ctx->lock);
- __blk_mq_insert_req_list(hctx, rq, at_head);
+ if (at_head)
+ list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
+ else
+ list_add_tail(&rq->queuelist,
+ &ctx->rq_lists[hctx->type]);
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
--
2.39.2
* [PATCH 09/18] blk-mq: remove blk_flush_queue_rq
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (7 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:17 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 10/18] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
` (8 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Just call blk_mq_add_to_requeue_list directly from the two callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-flush.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 3c81b0af5b3964..62ef98f604fbf9 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -136,11 +136,6 @@ static void blk_flush_restore_request(struct request *rq)
rq->end_io = rq->flush.saved_end_io;
}
-static void blk_flush_queue_rq(struct request *rq, bool add_front)
-{
- blk_mq_add_to_requeue_list(rq, add_front, true);
-}
-
static void blk_account_io_flush(struct request *rq)
{
struct block_device *part = rq->q->disk->part0;
@@ -193,7 +188,7 @@ static void blk_flush_complete_seq(struct request *rq,
case REQ_FSEQ_DATA:
list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
- blk_flush_queue_rq(rq, true);
+ blk_mq_add_to_requeue_list(rq, true, true);
break;
case REQ_FSEQ_DONE:
@@ -350,7 +345,7 @@ static void blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq,
smp_wmb();
req_ref_set(flush_rq, 1);
- blk_flush_queue_rq(flush_rq, false);
+ blk_mq_add_to_requeue_list(flush_rq, false, true);
}
static enum rq_end_io_ret mq_flush_data_end_io(struct request *rq,
--
2.39.2
* [PATCH 10/18] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (8 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 09/18] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:22 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 11/18] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
` (7 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
While both passthrough and flush requests call directly into
blk_mq_request_bypass_insert, the parameters aren't the same.
Split the handling into two separate conditionals and turn the whole
function into an if/elif/elif/else flow instead of the gotos.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 50 ++++++++++++++++++--------------------------------
1 file changed, 18 insertions(+), 32 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7e9f7d00452f11..c3de03217f4f1a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2506,37 +2506,26 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
- struct request *rq)
-{
- /*
- * dispatch flush and passthrough rq directly
- *
- * passthrough request has to be added to hctx->dispatch directly.
- * For some reason, device may be in one situation which can't
- * handle FS request, so STS_RESOURCE is always returned and the
- * FS request will be added to hctx->dispatch. However passthrough
- * request may be required at that time for fixing the problem. If
- * passthrough request is added to scheduler queue, there isn't any
- * chance to dispatch it given we prioritize requests in hctx->dispatch.
- */
- if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
- return true;
-
- return false;
-}
-
static void blk_mq_insert_request(struct request *rq, bool at_head,
bool run_queue, bool async)
{
struct request_queue *q = rq->q;
- struct elevator_queue *e = q->elevator;
struct blk_mq_ctx *ctx = rq->mq_ctx;
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
- WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
-
- if (blk_mq_sched_bypass_insert(hctx, rq)) {
+ if (blk_rq_is_passthrough(rq)) {
+ /*
+ * Passthrough request have to be added to hctx->dispatch
+ * directly. The device may be in a situation where it can't
+ * handle FS request, and always returns BLK_STS_RESOURCE for
+ * them, which gets them added to hctx->dispatch.
+ *
+ * If a passthrough request is required to unblock the queues,
+ * and it is added to the scheduler queue, there is no chance to
+ * dispatch it given we prioritize requests in hctx->dispatch.
+ */
+ blk_mq_request_bypass_insert(rq, at_head, false);
+ } else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
* sw queue, meantime we add flush request to dispatch queue(
@@ -2558,16 +2547,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
- blk_mq_request_bypass_insert(rq, at_head, false);
- goto run;
- }
-
- if (e) {
+ blk_mq_request_bypass_insert(rq, true, false);
+ } else if (q->elevator) {
LIST_HEAD(list);
+ WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
+
list_add(&rq->queuelist, &list);
- e->type->ops.insert_requests(hctx, &list, at_head);
+ q->elevator->type->ops.insert_requests(hctx, &list, at_head);
} else {
trace_block_rq_insert(rq);
@@ -2581,7 +2568,6 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
spin_unlock(&ctx->lock);
}
-run:
if (run_queue)
blk_mq_run_hw_queue(hctx, async);
}
--
2.39.2
* [PATCH 11/18] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (9 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 10/18] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:24 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 12/18] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
` (6 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Split the RQF_DONTPREP and RQF_SOFTBARRIER handling into separate
branches to make the code more readable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c3de03217f4f1a..5dfb927d1b9145 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1427,20 +1427,20 @@ static void blk_mq_requeue_work(struct work_struct *work)
spin_unlock_irq(&q->requeue_lock);
list_for_each_entry_safe(rq, next, &rq_list, queuelist) {
- if (!(rq->rq_flags & (RQF_SOFTBARRIER | RQF_DONTPREP)))
- continue;
-
- rq->rq_flags &= ~RQF_SOFTBARRIER;
- list_del_init(&rq->queuelist);
/*
* If RQF_DONTPREP, rq has contained some driver specific
* data, so insert it to hctx dispatch list to avoid any
* merge.
*/
- if (rq->rq_flags & RQF_DONTPREP)
+ if (rq->rq_flags & RQF_DONTPREP) {
+ rq->rq_flags &= ~RQF_SOFTBARRIER;
+ list_del_init(&rq->queuelist);
blk_mq_request_bypass_insert(rq, false, false);
- else
+ } else if (rq->rq_flags & RQF_SOFTBARRIER) {
+ rq->rq_flags &= ~RQF_SOFTBARRIER;
+ list_del_init(&rq->queuelist);
blk_mq_insert_request(rq, true, false, false);
+ }
}
while (!list_empty(&rq_list)) {
--
2.39.2
* [PATCH 12/18] blk-mq: factor out a blk_mq_get_budget_and_tag helper
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (10 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 11/18] blk-mq: refactor the DONTPREP/SOFTBARRIER handling in blk_mq_requeue_work Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:26 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 13/18] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
` (5 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Factor out a helper from __blk_mq_try_issue_directly in preparation
for folding that function into its two callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5dfb927d1b9145..54bd8e30c30abd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2623,13 +2623,27 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
return ret;
}
+static bool blk_mq_get_budget_and_tag(struct request *rq)
+{
+ int budget_token;
+
+ budget_token = blk_mq_get_dispatch_budget(rq->q);
+ if (budget_token < 0)
+ return false;
+ blk_mq_set_rq_budget_token(rq, budget_token);
+ if (!blk_mq_get_driver_tag(rq)) {
+ blk_mq_put_dispatch_budget(rq->q, budget_token);
+ return false;
+ }
+ return true;
+}
+
static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq,
bool bypass_insert, bool last)
{
struct request_queue *q = rq->q;
bool run_queue = true;
- int budget_token;
/*
* RCU or SRCU read lock is needed before checking quiesced flag.
@@ -2647,16 +2661,8 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
goto insert;
- budget_token = blk_mq_get_dispatch_budget(q);
- if (budget_token < 0)
- goto insert;
-
- blk_mq_set_rq_budget_token(rq, budget_token);
-
- if (!blk_mq_get_driver_tag(rq)) {
- blk_mq_put_dispatch_budget(q, budget_token);
+ if (!blk_mq_get_budget_and_tag(rq))
goto insert;
- }
return __blk_mq_issue_directly(hctx, rq, last);
insert:
--
2.39.2
* [PATCH 13/18] blk-mq: fold __blk_mq_try_issue_directly into its two callers
2023-04-12 5:32 cleanup request insertion parameters v2 Christoph Hellwig
` (11 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 12/18] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:31 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 14/18] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
` (4 subsequent siblings)
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Due to the wildly different behavior based on the bypass_insert argument,
not a whole lot of code in __blk_mq_try_issue_directly is actually shared
between blk_mq_try_issue_directly and blk_mq_request_issue_directly.
Remove __blk_mq_try_issue_directly and fold the code into the two callers
instead.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 72 ++++++++++++++++++++++----------------------------
1 file changed, 31 insertions(+), 41 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 54bd8e30c30abd..4309debfa1ca84 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2638,42 +2638,6 @@ static bool blk_mq_get_budget_and_tag(struct request *rq)
return true;
}
-static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
- struct request *rq,
- bool bypass_insert, bool last)
-{
- struct request_queue *q = rq->q;
- bool run_queue = true;
-
- /*
- * RCU or SRCU read lock is needed before checking quiesced flag.
- *
- * When queue is stopped or quiesced, ignore 'bypass_insert' from
- * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
- * and avoid driver to try to dispatch again.
- */
- if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
- run_queue = false;
- bypass_insert = false;
- goto insert;
- }
-
- if ((rq->rq_flags & RQF_ELV) && !bypass_insert)
- goto insert;
-
- if (!blk_mq_get_budget_and_tag(rq))
- goto insert;
-
- return __blk_mq_issue_directly(hctx, rq, last);
-insert:
- if (bypass_insert)
- return BLK_STS_RESOURCE;
-
- blk_mq_insert_request(rq, false, run_queue, false);
-
- return BLK_STS_OK;
-}
-
/**
* blk_mq_try_issue_directly - Try to send a request directly to device driver.
* @hctx: Pointer of the associated hardware queue.
@@ -2687,18 +2651,44 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
struct request *rq)
{
- blk_status_t ret =
- __blk_mq_try_issue_directly(hctx, rq, false, true);
+ blk_status_t ret;
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, false, false, false);
+ return;
+ }
+
+ if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
+ blk_mq_insert_request(rq, false, true, false);
+ return;
+ }
- if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
+ ret = __blk_mq_issue_directly(hctx, rq, true);
+ switch (ret) {
+ case BLK_STS_OK:
+ break;
+ case BLK_STS_RESOURCE:
+ case BLK_STS_DEV_RESOURCE:
blk_mq_request_bypass_insert(rq, false, true);
- else if (ret != BLK_STS_OK)
+ break;
+ default:
blk_mq_end_request(rq, ret);
+ break;
+ }
}
static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
{
- return __blk_mq_try_issue_directly(rq->mq_hctx, rq, true, last);
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, false, false, false);
+ return BLK_STS_OK;
+ }
+
+ if (!blk_mq_get_budget_and_tag(rq))
+ return BLK_STS_RESOURCE;
+ return __blk_mq_issue_directly(hctx, rq, last);
}
static void blk_mq_plug_issue_direct(struct blk_plug *plug)
--
2.39.2
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 14/18] blk-mq: don't run the hw_queue from blk_mq_insert_request
2023-04-12 5:32 cleanup request insertation parameters v2 Christoph Hellwig
` (12 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 13/18] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:40 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 15/18] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
` (3 subsequent siblings)
17 siblings, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_insert_request takes two bool parameters to control how to run
the queue at the end of the function. Move the blk_mq_run_hw_queue call
to the callers that want it instead.
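For example, blk_execute_rq_nowait now does (from the first hunk below):

    blk_mq_insert_request(rq, at_head);
    blk_mq_run_hw_queue(hctx, false);

while blk_mq_requeue_work, which already batches a single
blk_mq_run_hw_queues call at the end, just drops the extra arguments.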
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-mq.c | 56 ++++++++++++++++++++++++++++----------------------
1 file changed, 32 insertions(+), 24 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4309debfa1ca84..90a0c365db9152 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,8 +44,7 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
-static void blk_mq_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async);
+static void blk_mq_insert_request(struct request *rq, bool at_head);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1292,6 +1291,8 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
*/
void blk_execute_rq_nowait(struct request *rq, bool at_head)
{
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
WARN_ON(irqs_disabled());
WARN_ON(!blk_rq_is_passthrough(rq));
@@ -1302,10 +1303,13 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
* device, directly accessing the plug instead of using blk_mq_plug()
* should not have any consequences.
*/
- if (current->plug && !at_head)
+ if (current->plug && !at_head) {
blk_add_rq_to_plug(current->plug, rq);
- else
- blk_mq_insert_request(rq, at_head, true, false);
+ return;
+ }
+
+ blk_mq_insert_request(rq, at_head);
+ blk_mq_run_hw_queue(hctx, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1355,6 +1359,7 @@ static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
*/
blk_status_t blk_execute_rq(struct request *rq, bool at_head)
{
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
struct blk_rq_wait wait = {
.done = COMPLETION_INITIALIZER_ONSTACK(wait.done),
};
@@ -1366,7 +1371,8 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_insert_request(rq, at_head, true, false);
+ blk_mq_insert_request(rq, at_head);
+ blk_mq_run_hw_queue(hctx, false);
if (blk_rq_is_poll(rq)) {
blk_rq_poll_completion(rq, &wait.done);
@@ -1439,14 +1445,14 @@ static void blk_mq_requeue_work(struct work_struct *work)
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, true, false, false);
+ blk_mq_insert_request(rq, true);
}
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
}
blk_mq_run_hw_queues(q, false);
@@ -2506,8 +2512,7 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static void blk_mq_insert_request(struct request *rq, bool at_head,
- bool run_queue, bool async)
+static void blk_mq_insert_request(struct request *rq, bool at_head)
{
struct request_queue *q = rq->q;
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -2567,9 +2572,6 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
blk_mq_hctx_mark_pending(hctx, ctx);
spin_unlock(&ctx->lock);
}
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, async);
}
static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
@@ -2654,12 +2656,13 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
blk_status_t ret;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
return;
}
if ((rq->rq_flags & RQF_ELV) || !blk_mq_get_budget_and_tag(rq)) {
- blk_mq_insert_request(rq, false, true, false);
+ blk_mq_insert_request(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
return;
}
@@ -2682,7 +2685,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
- blk_mq_insert_request(rq, false, false, false);
+ blk_mq_insert_request(rq, false);
return BLK_STS_OK;
}
@@ -2962,6 +2965,7 @@ void blk_mq_submit_bio(struct bio *bio)
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
struct blk_plug *plug = blk_mq_plug(bio);
const int is_sync = op_is_sync(bio->bi_opf);
+ struct blk_mq_hw_ctx *hctx;
struct request *rq;
unsigned int nr_segs = 1;
blk_status_t ret;
@@ -3006,15 +3010,19 @@ void blk_mq_submit_bio(struct bio *bio)
return;
}
- if (plug)
+ if (plug) {
blk_add_rq_to_plug(plug, rq);
- else if ((rq->rq_flags & RQF_ELV) ||
- (rq->mq_hctx->dispatch_busy &&
- (q->nr_hw_queues == 1 || !is_sync)))
- blk_mq_insert_request(rq, false, true, true);
- else
- blk_mq_run_dispatch_ops(rq->q,
- blk_mq_try_issue_directly(rq->mq_hctx, rq));
+ return;
+ }
+
+ hctx = rq->mq_hctx;
+ if ((rq->rq_flags & RQF_ELV) ||
+ (hctx->dispatch_busy && (q->nr_hw_queues == 1 || !is_sync))) {
+ blk_mq_insert_request(rq, false);
+ blk_mq_run_hw_queue(hctx, true);
+ } else {
+ blk_mq_run_dispatch_ops(q, blk_mq_try_issue_directly(hctx, rq));
+ }
}
#ifdef CONFIG_BLK_MQ_STACKING
--
2.39.2
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 15/18] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
2023-04-12 5:32 cleanup request insertation parameters v2 Christoph Hellwig
` (13 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 14/18] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:42 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 16/18] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
` (2 subsequent siblings)
17 siblings, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
blk_mq_request_bypass_insert takes a bool parameter to control how to run
the queue at the end of the function. Move the blk_mq_run_hw_queue call
to the callers that want it instead.
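For example, the BLK_STS_RESOURCE/BLK_STS_DEV_RESOURCE error path in
blk_mq_try_issue_directly becomes (from the hunk below):

    blk_mq_request_bypass_insert(rq, false);
    blk_mq_run_hw_queue(hctx, false);

while callers like blk_mq_requeue_work and blk_mq_insert_request now insert
without triggering a queue run at all.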
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-flush.c | 4 +++-
block/blk-mq.c | 24 +++++++++++-------------
block/blk-mq.h | 3 +--
3 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 62ef98f604fbf9..3561aba8cc23f8 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -389,6 +389,7 @@ void blk_insert_flush(struct request *rq)
unsigned long fflags = q->queue_flags; /* may change, cache */
unsigned int policy = blk_flush_policy(fflags, rq);
struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
/*
* @policy now records what operations need to be done. Adjust
@@ -425,7 +426,8 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
return;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 90a0c365db9152..0e4a02ea6ed335 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1441,7 +1441,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_request_bypass_insert(rq, false, false);
+ blk_mq_request_bypass_insert(rq, false);
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
@@ -2456,13 +2456,11 @@ static void blk_mq_run_work_fn(struct work_struct *work)
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
* @at_head: true if the request should be inserted at the head of the list.
- * @run_queue: If we should run the hardware queue after inserting the request.
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue)
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
@@ -2472,9 +2470,6 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
else
list_add_tail(&rq->queuelist, &hctx->dispatch);
spin_unlock(&hctx->lock);
-
- if (run_queue)
- blk_mq_run_hw_queue(hctx, false);
}
static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
@@ -2529,7 +2524,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, at_head, false);
+ blk_mq_request_bypass_insert(rq, at_head);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2552,7 +2547,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- blk_mq_request_bypass_insert(rq, true, false);
+ blk_mq_request_bypass_insert(rq, true);
} else if (q->elevator) {
LIST_HEAD(list);
@@ -2672,7 +2667,8 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
break;
default:
blk_mq_end_request(rq, ret);
@@ -2719,7 +2715,8 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_request_bypass_insert(rq, false);
+ blk_mq_run_hw_queue(hctx, false);
goto out;
default:
blk_mq_end_request(rq, ret);
@@ -2837,8 +2834,9 @@ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false,
- list_empty(list));
+ blk_mq_request_bypass_insert(rq, false);
+ if (list_empty(list))
+ blk_mq_run_hw_queue(hctx, false);
goto out;
default:
blk_mq_end_request(rq, ret);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index e2d59e33046e30..f30f99166f3870 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -65,8 +65,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
- bool run_queue);
+void blk_mq_request_bypass_insert(struct request *rq, bool at_head);
/*
* CPU -> queue mappings
--
2.39.2
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 16/18] blk-mq: pass a flags argument to blk_mq_insert_request
2023-04-12 5:32 cleanup request insertation parameters v2 Christoph Hellwig
` (14 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 15/18] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:45 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 17/18] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
2023-04-12 5:32 ` [PATCH 18/18] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
17 siblings, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Replace the at_head bool with a flags argument that so far only contains
a single BLK_MQ_INSERT_AT_HEAD value. This makes it much easier to grep
for head insertions into the blk-mq dispatch queues.
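A head insertion that used to read blk_mq_insert_request(rq, true) now reads
(from the requeue hunk below):

    blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);

and because blk_insert_t is a __bitwise type, sparse can flag callers that
keep passing a plain bool. Finding every head insertion is then a plain
"git grep BLK_MQ_INSERT_AT_HEAD".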
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 19 ++++++++++---------
block/blk-mq.h | 3 +++
2 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0e4a02ea6ed335..c23c32f429a0e9 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -44,7 +44,7 @@
static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
-static void blk_mq_insert_request(struct request *rq, bool at_head);
+static void blk_mq_insert_request(struct request *rq, blk_insert_t flags);
static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
struct list_head *list);
@@ -1308,7 +1308,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
return;
}
- blk_mq_insert_request(rq, at_head);
+ blk_mq_insert_request(rq, at_head ? BLK_MQ_INSERT_AT_HEAD : 0);
blk_mq_run_hw_queue(hctx, false);
}
EXPORT_SYMBOL_GPL(blk_execute_rq_nowait);
@@ -1371,7 +1371,7 @@ blk_status_t blk_execute_rq(struct request *rq, bool at_head)
rq->end_io = blk_end_sync_rq;
blk_account_io_start(rq);
- blk_mq_insert_request(rq, at_head);
+ blk_mq_insert_request(rq, at_head ? BLK_MQ_INSERT_AT_HEAD : 0);
blk_mq_run_hw_queue(hctx, false);
if (blk_rq_is_poll(rq)) {
@@ -1445,14 +1445,14 @@ static void blk_mq_requeue_work(struct work_struct *work)
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, true);
+ blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
}
}
while (!list_empty(&rq_list)) {
rq = list_entry(rq_list.next, struct request, queuelist);
list_del_init(&rq->queuelist);
- blk_mq_insert_request(rq, false);
+ blk_mq_insert_request(rq, 0);
}
blk_mq_run_hw_queues(q, false);
@@ -2507,7 +2507,7 @@ static void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx,
blk_mq_run_hw_queue(hctx, run_queue_async);
}
-static void blk_mq_insert_request(struct request *rq, bool at_head)
+static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
{
struct request_queue *q = rq->q;
struct blk_mq_ctx *ctx = rq->mq_ctx;
@@ -2524,7 +2524,7 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, at_head);
+ blk_mq_request_bypass_insert(rq, flags & BLK_MQ_INSERT_AT_HEAD);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2554,12 +2554,13 @@ static void blk_mq_insert_request(struct request *rq, bool at_head)
WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
list_add(&rq->queuelist, &list);
- q->elevator->type->ops.insert_requests(hctx, &list, at_head);
+ q->elevator->type->ops.insert_requests(hctx, &list,
+ flags & BLK_MQ_INSERT_AT_HEAD);
} else {
trace_block_rq_insert(rq);
spin_lock(&ctx->lock);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
else
list_add_tail(&rq->queuelist,
diff --git a/block/blk-mq.h b/block/blk-mq.h
index f30f99166f3870..2c165de2f3f1fe 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -36,6 +36,9 @@ enum {
BLK_MQ_TAG_MAX = BLK_MQ_NO_TAG - 1,
};
+typedef unsigned int __bitwise blk_insert_t;
+#define BLK_MQ_INSERT_AT_HEAD ((__force blk_insert_t)0x01)
+
void blk_mq_submit_bio(struct bio *bio);
int blk_mq_poll(struct request_queue *q, blk_qc_t cookie, struct io_comp_batch *iob,
unsigned int flags);
--
2.39.2
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 17/18] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
2023-04-12 5:32 cleanup request insertation parameters v2 Christoph Hellwig
` (15 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 16/18] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:46 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 18/18] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
17 siblings, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Replace the boolean at_head argument with the same flags that are already
passed to blk_mq_insert_request.
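blk_mq_insert_request can now forward its flags unchanged (from the hunk
below):

    blk_mq_request_bypass_insert(rq, flags);

and the flush handling spells its head insertion out explicitly as
blk_mq_request_bypass_insert(rq, BLK_MQ_INSERT_AT_HEAD).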
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-flush.c | 2 +-
block/blk-mq.c | 18 +++++++++---------
block/blk-mq.h | 2 +-
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 3561aba8cc23f8..fa9607160c84a2 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -426,7 +426,7 @@ void blk_insert_flush(struct request *rq)
*/
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
return;
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c23c32f429a0e9..3f1b30e59e115f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1441,7 +1441,7 @@ static void blk_mq_requeue_work(struct work_struct *work)
if (rq->rq_flags & RQF_DONTPREP) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
} else if (rq->rq_flags & RQF_SOFTBARRIER) {
rq->rq_flags &= ~RQF_SOFTBARRIER;
list_del_init(&rq->queuelist);
@@ -2455,17 +2455,17 @@ static void blk_mq_run_work_fn(struct work_struct *work)
/**
* blk_mq_request_bypass_insert - Insert a request at dispatch list.
* @rq: Pointer to request to be inserted.
- * @at_head: true if the request should be inserted at the head of the list.
+ * @flags: BLK_MQ_INSERT_*
*
* Should only be used carefully, when the caller knows we want to
* bypass a potential IO scheduler on the target device.
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head)
+void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags)
{
struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
spin_lock(&hctx->lock);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_add(&rq->queuelist, &hctx->dispatch);
else
list_add_tail(&rq->queuelist, &hctx->dispatch);
@@ -2524,7 +2524,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
* and it is added to the scheduler queue, there is no chance to
* dispatch it given we prioritize requests in hctx->dispatch.
*/
- blk_mq_request_bypass_insert(rq, flags & BLK_MQ_INSERT_AT_HEAD);
+ blk_mq_request_bypass_insert(rq, flags);
} else if (rq->rq_flags & RQF_FLUSH_SEQ) {
/*
* Firstly normal IO request is inserted to scheduler queue or
@@ -2547,7 +2547,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
* Simply queue flush rq to the front of hctx->dispatch so that
* intensive flush workloads can benefit in case of NCQ HW.
*/
- blk_mq_request_bypass_insert(rq, true);
+ blk_mq_request_bypass_insert(rq, BLK_MQ_INSERT_AT_HEAD);
} else if (q->elevator) {
LIST_HEAD(list);
@@ -2668,7 +2668,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
break;
default:
@@ -2716,7 +2716,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
blk_mq_run_hw_queue(hctx, false);
goto out;
default:
@@ -2835,7 +2835,7 @@ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
break;
case BLK_STS_RESOURCE:
case BLK_STS_DEV_RESOURCE:
- blk_mq_request_bypass_insert(rq, false);
+ blk_mq_request_bypass_insert(rq, 0);
if (list_empty(list))
blk_mq_run_hw_queue(hctx, false);
goto out;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 2c165de2f3f1fe..849b53396f78b6 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -68,7 +68,7 @@ void blk_mq_free_map_and_rqs(struct blk_mq_tag_set *set,
/*
* Internal helpers for request insertion into sw queues
*/
-void blk_mq_request_bypass_insert(struct request *rq, bool at_head);
+void blk_mq_request_bypass_insert(struct request *rq, blk_insert_t flags);
/*
* CPU -> queue mappings
--
2.39.2
^ permalink raw reply related [flat|nested] 43+ messages in thread
* [PATCH 18/18] blk-mq: pass the flags argument to elevator_type->insert_requests
2023-04-12 5:32 cleanup request insertation parameters v2 Christoph Hellwig
` (16 preceding siblings ...)
2023-04-12 5:32 ` [PATCH 17/18] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-12 5:32 ` Christoph Hellwig
2023-04-12 7:47 ` Damien Le Moal
17 siblings, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 5:32 UTC (permalink / raw)
To: Jens Axboe; +Cc: Bart Van Assche, linux-block
Instead of passing a bool at_head, pass down the full flags from the
blk_mq_insert_request interface.
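Each scheduler then checks the flag itself, e.g. in kyber (from the hunk
below):

    if (flags & BLK_MQ_INSERT_AT_HEAD)
        list_move(&rq->queuelist, head);
    else
        list_move_tail(&rq->queuelist, head);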
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
block/bfq-iosched.c | 16 ++++++++--------
block/blk-mq.c | 5 ++---
block/elevator.h | 4 +++-
block/kyber-iosched.c | 5 +++--
block/mq-deadline.c | 9 +++++----
5 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 37f68c907ac08c..b4c4b4808c6c4c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6231,7 +6231,7 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
static struct bfq_queue *bfq_init_rq(struct request *rq);
static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
+ blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct bfq_data *bfqd = q->elevator->elevator_data;
@@ -6254,11 +6254,10 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
trace_block_rq_insert(rq);
- if (!bfqq || at_head) {
- if (at_head)
- list_add(&rq->queuelist, &bfqd->dispatch);
- else
- list_add_tail(&rq->queuelist, &bfqd->dispatch);
+ if (flags & BLK_MQ_INSERT_AT_HEAD) {
+ list_add(&rq->queuelist, &bfqd->dispatch);
+ } else if (!bfqq) {
+ list_add_tail(&rq->queuelist, &bfqd->dispatch);
} else {
idle_timer_disabled = __bfq_insert_request(bfqd, rq);
/*
@@ -6288,14 +6287,15 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *list, bool at_head)
+ struct list_head *list,
+ blk_insert_t flags)
{
while (!list_empty(list)) {
struct request *rq;
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
- bfq_insert_request(hctx, rq, at_head);
+ bfq_insert_request(hctx, rq, flags);
}
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 3f1b30e59e115f..03c6fa4afcdb91 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2554,8 +2554,7 @@ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags)
WARN_ON_ONCE(rq->tag != BLK_MQ_NO_TAG);
list_add(&rq->queuelist, &list);
- q->elevator->type->ops.insert_requests(hctx, &list,
- flags & BLK_MQ_INSERT_AT_HEAD);
+ q->elevator->type->ops.insert_requests(hctx, &list, flags);
} else {
trace_block_rq_insert(rq);
@@ -2766,7 +2765,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
percpu_ref_get(&this_hctx->queue->q_usage_counter);
if (this_hctx->queue->elevator) {
this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
- &list, false);
+ &list, 0);
blk_mq_run_hw_queue(this_hctx, from_sched);
} else {
blk_mq_insert_requests(this_hctx, this_ctx, &list, from_sched);
diff --git a/block/elevator.h b/block/elevator.h
index 774a8f6b99e69e..7ca3d7b6ed8289 100644
--- a/block/elevator.h
+++ b/block/elevator.h
@@ -4,6 +4,7 @@
#include <linux/percpu.h>
#include <linux/hashtable.h>
+#include "blk-mq.h"
struct io_cq;
struct elevator_type;
@@ -37,7 +38,8 @@ struct elevator_mq_ops {
void (*limit_depth)(blk_opf_t, struct blk_mq_alloc_data *);
void (*prepare_request)(struct request *);
void (*finish_request)(struct request *);
- void (*insert_requests)(struct blk_mq_hw_ctx *, struct list_head *, bool);
+ void (*insert_requests)(struct blk_mq_hw_ctx *hctx, struct list_head *list,
+ blk_insert_t flags);
struct request *(*dispatch_request)(struct blk_mq_hw_ctx *);
bool (*has_work)(struct blk_mq_hw_ctx *);
void (*completed_request)(struct request *, u64);
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 3f9fb2090c9158..4155594aefc657 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -588,7 +588,8 @@ static void kyber_prepare_request(struct request *rq)
}
static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *rq_list, bool at_head)
+ struct list_head *rq_list,
+ blk_insert_t flags)
{
struct kyber_hctx_data *khd = hctx->sched_data;
struct request *rq, *next;
@@ -600,7 +601,7 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
spin_lock(&kcq->lock);
trace_block_rq_insert(rq);
- if (at_head)
+ if (flags & BLK_MQ_INSERT_AT_HEAD)
list_move(&rq->queuelist, head);
else
list_move_tail(&rq->queuelist, head);
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index ceae477c3571a3..5839a027e0f051 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -766,7 +766,7 @@ static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
* add rq to rbtree and fifo
*/
static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
- bool at_head)
+ blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -799,7 +799,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
trace_block_rq_insert(rq);
- if (at_head) {
+ if (flags & BLK_MQ_INSERT_AT_HEAD) {
list_add(&rq->queuelist, &per_prio->dispatch);
rq->fifo_time = jiffies;
} else {
@@ -823,7 +823,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
* Called from blk_mq_insert_request() or blk_mq_dispatch_plug_list().
*/
static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
- struct list_head *list, bool at_head)
+ struct list_head *list,
+ blk_insert_t flags)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -834,7 +835,7 @@ static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
rq = list_first_entry(list, struct request, queuelist);
list_del_init(&rq->queuelist);
- dd_insert_request(hctx, rq, at_head);
+ dd_insert_request(hctx, rq, flags);
}
spin_unlock(&dd->lock);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 43+ messages in thread
* Re: [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait
2023-04-12 5:32 ` [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
@ 2023-04-12 6:55 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 6:55 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Plugs never insert at head, so don't plug for head insertions.
>
> Fixes: 1c2d2fff6dc0 ("block: wire-up support for passthrough plugging")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
> block/blk-mq.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 52f8e0099c7f4b..7908d19f140815 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1299,7 +1299,7 @@ void blk_execute_rq_nowait(struct request *rq, bool at_head)
> * device, directly accessing the plug instead of using blk_mq_plug()
> * should not have any consequences.
> */
> - if (current->plug)
> + if (current->plug && !at_head)
> blk_add_rq_to_plug(current->plug, rq);
> else
> blk_mq_sched_insert_request(rq, at_head, true, false);
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 02/18] blk-mq: remove blk-mq-tag.h
2023-04-12 5:32 ` [PATCH 02/18] blk-mq: remove blk-mq-tag.h Christoph Hellwig
@ 2023-04-12 6:57 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 6:57 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> blk-mq-tag.h is always included by blk-mq.h, and causes recursive
> inclusion hell with further changes. Just merge it into blk-mq.h
> instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Looks good to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 03/18] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h
2023-04-12 5:32 ` [PATCH 03/18] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
@ 2023-04-12 6:58 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 6:58 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> block/blk-mq.h needs various definitions from <linux/blk-mq.h>,
> include it there instead of relying on the source files to include
> both.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Looks good to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 04/18] blk-mq: move more logic into blk_mq_insert_requests
2023-04-12 5:32 ` [PATCH 04/18] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
@ 2023-04-12 7:07 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:07 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Move all logic related to the direct insert into blk_mq_insert_requests
> to clean the code flow up a bit, and to allow marking
> blk_mq_try_issue_list_directly static.
Nit: maybe mention that blk_mq_insert_requests() will now call
blk_mq_run_hw_queue(), just to be clear (even though that is implied by the
"move all logic" statement).
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 05/18] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list
2023-04-12 5:32 ` [PATCH 05/18] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
@ 2023-04-12 7:09 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:09 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> blk_mq_dispatch_plug_list is the only caller of
> blk_mq_sched_insert_requests, and it makes sense to just fold it there
> as blk_mq_sched_insert_requests isn't specific to I/O schedulers despite
> the name.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
2023-04-12 5:32 ` [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
@ 2023-04-12 7:14 ` Damien Le Moal
2023-04-12 7:18 ` Damien Le Moal
0 siblings, 1 reply; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:14 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> blk_mq_sched_insert_request is the main request insert helper and not
> directly I/O scheduler related. Move blk_mq_sched_insert_request to
> blk-mq.c, rename it to blk_mq_insert_request and mark it static.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
[...]
> -void blk_mq_sched_insert_request(struct request *rq, bool at_head,
> - bool run_queue, bool async)
> -{
> - struct request_queue *q = rq->q;
> - struct elevator_queue *e = q->elevator;
> - struct blk_mq_ctx *ctx = rq->mq_ctx;
> - struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> -
> - WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
> -
> - if (blk_mq_sched_bypass_insert(hctx, rq)) {
Nit: given the super confusing name that blk_mq_sched_bypass_insert() has,
replacing the above if with:
if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
with the comment above it would make things even more readable I think.
Otherwise looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 07/18] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request
2023-04-12 5:32 ` [PATCH 07/18] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 7:15 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:15 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> There is no good point in keeping the __blk_mq_insert_request around
> for two function calls and a singler caller.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-12 5:32 ` [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
@ 2023-04-12 7:16 ` Damien Le Moal
2023-04-12 7:20 ` Christoph Hellwig
0 siblings, 1 reply; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:16 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Remove this very small helper and fold it into the only caller.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> ---
> block/blk-mq.c | 25 +++++++------------------
> 1 file changed, 7 insertions(+), 18 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 103caf1bae2769..7e9f7d00452f11 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2446,23 +2446,6 @@ static void blk_mq_run_work_fn(struct work_struct *work)
> __blk_mq_run_hw_queue(hctx);
> }
>
> -static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
> - struct request *rq,
> - bool at_head)
> -{
> - struct blk_mq_ctx *ctx = rq->mq_ctx;
> - enum hctx_type type = hctx->type;
> -
> - lockdep_assert_held(&ctx->lock);
> -
> - trace_block_rq_insert(rq);
> -
> - if (at_head)
> - list_add(&rq->queuelist, &ctx->rq_lists[type]);
> - else
> - list_add_tail(&rq->queuelist, &ctx->rq_lists[type]);
> -}
> -
> /**
> * blk_mq_request_bypass_insert - Insert a request at dispatch list.
> * @rq: Pointer to request to be inserted.
> @@ -2586,8 +2569,14 @@ static void blk_mq_insert_request(struct request *rq, bool at_head,
> list_add(&rq->queuelist, &list);
> e->type->ops.insert_requests(hctx, &list, at_head);
> } else {
> + trace_block_rq_insert(rq);
Shouldn't we keep the trace call under ctx->lock to preserve precise tracing?
> +
> spin_lock(&ctx->lock);
> - __blk_mq_insert_req_list(hctx, rq, at_head);
> + if (at_head)
> + list_add(&rq->queuelist, &ctx->rq_lists[hctx->type]);
> + else
> + list_add_tail(&rq->queuelist,
> + &ctx->rq_lists[hctx->type]);
> blk_mq_hctx_mark_pending(hctx, ctx);
> spin_unlock(&ctx->lock);
> }
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 09/18] blk-mq: remove blk_flush_queue_rq
2023-04-12 5:32 ` [PATCH 09/18] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
@ 2023-04-12 7:17 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:17 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Just call blk_mq_add_to_requeue_list directly from the two callers.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c
2023-04-12 7:14 ` Damien Le Moal
@ 2023-04-12 7:18 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:18 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 16:14, Damien Le Moal wrote:
> On 4/12/23 14:32, Christoph Hellwig wrote:
>> blk_mq_sched_insert_request is the main request insert helper and not
>> directly I/O scheduler related. Move blk_mq_sched_insert_request to
>> blk-mq.c, rename it to blk_mq_insert_request and mark it static.
>>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
>
> [...]
>
>> -void blk_mq_sched_insert_request(struct request *rq, bool at_head,
>> - bool run_queue, bool async)
>> -{
>> - struct request_queue *q = rq->q;
>> - struct elevator_queue *e = q->elevator;
>> - struct blk_mq_ctx *ctx = rq->mq_ctx;
>> - struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>> -
>> - WARN_ON(e && (rq->tag != BLK_MQ_NO_TAG));
>> -
>> - if (blk_mq_sched_bypass_insert(hctx, rq)) {
>
> Nit: given the super confusing name that blk_mq_sched_bypass_insert() has,
> replacing the above if with:
>
> if ((rq->rq_flags & RQF_FLUSH_SEQ) || blk_rq_is_passthrough(rq))
>
> with the comment above it would make things even more readable I think.
Ignore. Just saw you reworked this in patch 10.
>
> Otherwise looks good.
>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
>
>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-12 7:16 ` Damien Le Moal
@ 2023-04-12 7:20 ` Christoph Hellwig
2023-04-12 7:33 ` Damien Le Moal
0 siblings, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 7:20 UTC (permalink / raw)
To: Damien Le Moal
Cc: Christoph Hellwig, Jens Axboe, Bart Van Assche, linux-block
On Wed, Apr 12, 2023 at 04:16:36PM +0900, Damien Le Moal wrote:
> > } else {
> > + trace_block_rq_insert(rq);
>
> Shouldn't we keep the trace call under ctx->lock to preserve precise tracing?
ctx->lock doesn't synchronize any of the fields in the request that is
traced here.
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 10/18] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request
2023-04-12 5:32 ` [PATCH 10/18] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 7:22 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:22 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> While both passthrough and flush requests call directly into
> blk_mq_request_bypass_insert, the parameters aren't the same.
> Split the handling into two separate conditionals and turn the whole
> function into an if/elif/elif/else flow instead of the gotos.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 11/18] blk-mq: refactor the DONTPREP/SOFTBARRIER andling in blk_mq_requeue_work
2023-04-12 5:32 ` [PATCH 11/18] blk-mq: refactor the DONTPREP/SOFTBARRIER andling in blk_mq_requeue_work Christoph Hellwig
@ 2023-04-12 7:24 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:24 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Split the RQF_DONTPREP and RQF_SOFTBARRIER in separate branches to make
> the code more readable.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> ---
> block/blk-mq.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index c3de03217f4f1a..5dfb927d1b9145 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1427,20 +1427,20 @@ static void blk_mq_requeue_work(struct work_struct *work)
> spin_unlock_irq(&q->requeue_lock);
>
> list_for_each_entry_safe(rq, next, &rq_list, queuelist) {
> - if (!(rq->rq_flags & (RQF_SOFTBARRIER | RQF_DONTPREP)))
> - continue;
> -
> - rq->rq_flags &= ~RQF_SOFTBARRIER;
> - list_del_init(&rq->queuelist);
> /*
> * If RQF_DONTPREP, rq has contained some driver specific
Nit: while at it, you could fix the bad english here:
rq has contained som... -> rq has some...
Otherwise looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 12/18] blk-mq: factor out a blk_mq_get_budget_and_tag helper
2023-04-12 5:32 ` [PATCH 12/18] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
@ 2023-04-12 7:26 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:26 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Factor out a helper from __blk_mq_try_issue_directly in preparation
> of folding that function into its two callers.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 13/18] blk-mq: fold __blk_mq_try_issue_directly into its two callers
2023-04-12 5:32 ` [PATCH 13/18] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
@ 2023-04-12 7:31 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:31 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Due the wildly different behavior based on the bypass_insert argument,
s/Due/Due to
> not a whole lot of code in __blk_mq_try_issue_directly is actually shared
> between blk_mq_try_issue_directly and blk_mq_request_issue_directly.
>
> Remove __blk_mq_try_issue_directly and fold the code into the two callers
> instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Otherwise looks good to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-12 7:20 ` Christoph Hellwig
@ 2023-04-12 7:33 ` Damien Le Moal
2023-04-12 11:45 ` Christoph Hellwig
2023-04-13 6:14 ` Christoph Hellwig
0 siblings, 2 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:33 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, Bart Van Assche, linux-block
On 4/12/23 16:20, Christoph Hellwig wrote:
> On Wed, Apr 12, 2023 at 04:16:36PM +0900, Damien Le Moal wrote:
>>> } else {
>>> + trace_block_rq_insert(rq);
>>
>> Shouldn't we keep the trace call under ctx->lock to preserve precise tracing?
>
> ctx->lock doesn't synchronize any of the fields in the request that is
> traced here.
I am not worried about the values shown by the trace entries, but rather the
order of the inserts: with the trace call outside the lock, the trace may end up
showing an incorrect insertion order?
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 14/18] blk-mq: don't run the hw_queue from blk_mq_insert_request
2023-04-12 5:32 ` [PATCH 14/18] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 7:40 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:40 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> blk_mq_insert_request takes two bool parameters to control how to run
> the queue at the end of the function. Move the blk_mq_run_hw_queue call
> to the callers that want it instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
This is nice!
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 15/18] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert
2023-04-12 5:32 ` [PATCH 15/18] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-12 7:42 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:42 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> blk_mq_request_bypass_insert takes a bool parameter to control how to run
> the queue at the end of the function. Move the blk_mq_run_hw_queue call
> to the callers that want it instead.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 16/18] blk-mq: pass a flags argument to blk_mq_insert_request
2023-04-12 5:32 ` [PATCH 16/18] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
@ 2023-04-12 7:45 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:45 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Replace the at_head bool with a flags argument that so far only contains
> a single BLK_MQ_INSERT_AT_HEAD value. This makes it much easier to grep
> for head insertions into the blk-mq dispatch queues.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 17/18] blk-mq: pass a flags argument to blk_mq_request_bypass_insert
2023-04-12 5:32 ` [PATCH 17/18] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
@ 2023-04-12 7:46 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:46 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Replace the boolean at_head argument with the same flags that are already
> passed to blk_mq_insert_request.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Nice cleanup.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 18/18] blk-mq: pass the flags argument to elevator_type->insert_requests
2023-04-12 5:32 ` [PATCH 18/18] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
@ 2023-04-12 7:47 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-12 7:47 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe; +Cc: Bart Van Assche, linux-block
On 4/12/23 14:32, Christoph Hellwig wrote:
> Instead of passing a bool at_head, pass down the full flags from the
> blk_mq_insert_request interface.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-12 7:33 ` Damien Le Moal
@ 2023-04-12 11:45 ` Christoph Hellwig
2023-04-13 6:14 ` Christoph Hellwig
1 sibling, 0 replies; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-12 11:45 UTC (permalink / raw)
To: Damien Le Moal
Cc: Christoph Hellwig, Jens Axboe, Bart Van Assche, linux-block
On Wed, Apr 12, 2023 at 04:33:04PM +0900, Damien Le Moal wrote:
> On 4/12/23 16:20, Christoph Hellwig wrote:
> > On Wed, Apr 12, 2023 at 04:16:36PM +0900, Damien Le Moal wrote:
> >>> } else {
> >>> + trace_block_rq_insert(rq);
> >>
> >> Shouldn't we keep the trace call under ctx->lock to preserve precise tracing?
> >
> > ctx->lock doesn't synchronize any of the fields in the request that is
> > traced here.
>
> I am not worried about the values shown by the trace entries, but rather the
> order of the inserts: with the trace call outside the lock, the trace may end up
> showing an incorrect insertion order ?
Maybe. I can respin the series and move it back under the lock.
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-12 7:33 ` Damien Le Moal
2023-04-12 11:45 ` Christoph Hellwig
@ 2023-04-13 6:14 ` Christoph Hellwig
2023-04-13 6:16 ` Damien Le Moal
1 sibling, 1 reply; 43+ messages in thread
From: Christoph Hellwig @ 2023-04-13 6:14 UTC (permalink / raw)
To: Damien Le Moal
Cc: Christoph Hellwig, Jens Axboe, Bart Van Assche, linux-block
On Wed, Apr 12, 2023 at 04:33:04PM +0900, Damien Le Moal wrote:
> I am not worried about the values shown by the trace entries, but rather the
> order of the inserts: with the trace call outside the lock, the trace may end up
> showing an incorrect insertion order?
... turns out none of the other calls to trace_block_rq_insert is
under ctx->lock either. The I/O scheduler ones are under their
own per-request_queue locks, so maybe that counts as ordering,
but blk_mq_insert_requests doesn't lock at all.
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list into blk_mq_insert_request
2023-04-13 6:14 ` Christoph Hellwig
@ 2023-04-13 6:16 ` Damien Le Moal
0 siblings, 0 replies; 43+ messages in thread
From: Damien Le Moal @ 2023-04-13 6:16 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, Bart Van Assche, linux-block
On 4/13/23 15:14, Christoph Hellwig wrote:
> On Wed, Apr 12, 2023 at 04:33:04PM +0900, Damien Le Moal wrote:
>> I am not worried about the values shown by the trace entries, but rather the
>> order of the inserts: with the trace call outside the lock, the trace may end up
>> showing an incorrect insertion order?
>
> ... turns out none of the other calls to trace_block_rq_insert is
> under ctx->lock either. The I/O scheduler ones are under their
> own per-request_queue locks, so maybe that counts as ordering,
> but blk_mq_insert_requests doesn't lock at all.
OK. And since nobody ever complained (that I know of), I guess it is fine then.
Feel free to add:
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
^ permalink raw reply [flat|nested] 43+ messages in thread
Thread overview: 43+ messages
2023-04-12 5:32 cleanup request insertation parameters v2 Christoph Hellwig
2023-04-12 5:32 ` [PATCH 01/18] blk-mq: don't plug for head insertions in blk_execute_rq_nowait Christoph Hellwig
2023-04-12 6:55 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 02/18] blk-mq: remove blk-mq-tag.h Christoph Hellwig
2023-04-12 6:57 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 03/18] blk-mq: include <linux/blk-mq.h> in block/blk-mq.h Christoph Hellwig
2023-04-12 6:58 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 04/18] blk-mq: move more logic into blk_mq_insert_requests Christoph Hellwig
2023-04-12 7:07 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 05/18] blk-mq: fold blk_mq_sched_insert_requests into blk_mq_dispatch_plug_list Christoph Hellwig
2023-04-12 7:09 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 06/18] blk-mq: move blk_mq_sched_insert_request to blk-mq.c Christoph Hellwig
2023-04-12 7:14 ` Damien Le Moal
2023-04-12 7:18 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 07/18] blk-mq: fold __blk_mq_insert_request into blk_mq_insert_request Christoph Hellwig
2023-04-12 7:15 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 08/18] blk-mq: fold __blk_mq_insert_req_list " Christoph Hellwig
2023-04-12 7:16 ` Damien Le Moal
2023-04-12 7:20 ` Christoph Hellwig
2023-04-12 7:33 ` Damien Le Moal
2023-04-12 11:45 ` Christoph Hellwig
2023-04-13 6:14 ` Christoph Hellwig
2023-04-13 6:16 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 09/18] blk-mq: remove blk_flush_queue_rq Christoph Hellwig
2023-04-12 7:17 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 10/18] blk-mq: refactor passthrough vs flush handling in blk_mq_insert_request Christoph Hellwig
2023-04-12 7:22 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 11/18] blk-mq: refactor the DONTPREP/SOFTBARRIER andling in blk_mq_requeue_work Christoph Hellwig
2023-04-12 7:24 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 12/18] blk-mq: factor out a blk_mq_get_budget_and_tag helper Christoph Hellwig
2023-04-12 7:26 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 13/18] blk-mq: fold __blk_mq_try_issue_directly into its two callers Christoph Hellwig
2023-04-12 7:31 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 14/18] blk-mq: don't run the hw_queue from blk_mq_insert_request Christoph Hellwig
2023-04-12 7:40 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 15/18] blk-mq: don't run the hw_queue from blk_mq_request_bypass_insert Christoph Hellwig
2023-04-12 7:42 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 16/18] blk-mq: pass a flags argument to blk_mq_insert_request Christoph Hellwig
2023-04-12 7:45 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 17/18] blk-mq: pass a flags argument to blk_mq_request_bypass_insert Christoph Hellwig
2023-04-12 7:46 ` Damien Le Moal
2023-04-12 5:32 ` [PATCH 18/18] blk-mq: pass the flags argument to elevator_type->insert_requests Christoph Hellwig
2023-04-12 7:47 ` Damien Le Moal