* [RFC] more make_request optimizations
@ 2020-04-25 17:09 Christoph Hellwig
2020-04-25 17:09 ` [PATCH 01/11] block: improve the kerneldoc comments for submit_bio and generic_make_request Christoph Hellwig
` (10 more replies)
0 siblings, 11 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Hi Jens,
this fresh-off-the-press series optimizes the submit_bio /
generic_make_request path to avoid the setup and manipulation of the
on-stack bio list when issuing I/O directly to blk-mq.
Let me know what you think, this has only survived very basic testing
so far.
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 01/11] block: improve the kerneldoc comments for submit_bio and generic_make_request
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
The current documentation is a little weird, as it doesn't clearly
explain which function to use, and also has the guts of the information
on generic_make_request, which is the internal interface for stacking
drivers.
Fix this up by properly documenting submit_bio, and only documenting
the differences and the use case for generic_make_request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 35 ++++++++++++-----------------------
1 file changed, 12 insertions(+), 23 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index dffff21008886..68351ee94ad2e 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -992,28 +992,13 @@ generic_make_request_checks(struct bio *bio)
}
/**
- * generic_make_request - hand a buffer to its device driver for I/O
+ * generic_make_request - re-submit a bio to the block device layer for I/O
* @bio: The bio describing the location in memory and on the device.
*
- * generic_make_request() is used to make I/O requests of block
- * devices. It is passed a &struct bio, which describes the I/O that needs
- * to be done.
- *
- * generic_make_request() does not return any status. The
- * success/failure status of the request, along with notification of
- * completion, is delivered asynchronously through the bio->bi_end_io
- * function described (one day) else where.
- *
- * The caller of generic_make_request must make sure that bi_io_vec
- * are set to describe the memory buffer, and that bi_dev and bi_sector are
- * set to describe the device address, and the
- * bi_end_io and optionally bi_private are set to describe how
- * completion notification should be signaled.
- *
- * generic_make_request and the drivers it calls may use bi_next if this
- * bio happens to be merged with someone else, and may resubmit the bio to
- * a lower device by calling into generic_make_request recursively, which
- * means the bio should NOT be touched after the call to ->make_request_fn.
+ * This is a version of submit_bio() that shall only be used for I/O that is
+ * resubmitted to lower level drivers by stacking block drivers. All file
+ * systems and other upper level users of the block layer should use
+ * submit_bio() instead.
*/
blk_qc_t generic_make_request(struct bio *bio)
{
@@ -1152,10 +1137,14 @@ EXPORT_SYMBOL_GPL(direct_make_request);
* submit_bio - submit a bio to the block device layer for I/O
* @bio: The &struct bio which describes the I/O
*
- * submit_bio() is very similar in purpose to generic_make_request(), and
- * uses that function to do most of the work. Both are fairly rough
- * interfaces; @bio must be presetup and ready for I/O.
+ * submit_bio() is used to submit I/O requests to block devices. It is passed a
+ * fully set up &struct bio that describes the I/O that needs to be done. The
+ * bio will be sent to the device described by the bi_disk and bi_partno fields.
*
+ * The success/failure status of the request, along with notification of
+ * completion, is delivered asynchronously through the ->bi_end_io() callback
+ * in @bio. The bio must NOT be touched by the caller until ->bi_end_io() has
+ * been called.
*/
blk_qc_t submit_bio(struct bio *bio)
{
--
2.26.1
* [PATCH 02/11] block: cleanup the memory stall accounting in submit_bio
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Instead of a convoluted chain, just check for REQ_OP_READ directly,
and keep all the memory stall code together in a single unlikely
branch.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 68351ee94ad2e..81a291085c6ca 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1148,10 +1148,6 @@ EXPORT_SYMBOL_GPL(direct_make_request);
*/
blk_qc_t submit_bio(struct bio *bio)
{
- bool workingset_read = false;
- unsigned long pflags;
- blk_qc_t ret;
-
if (blkcg_punt_bio_submit(bio))
return BLK_QC_T_NONE;
@@ -1170,8 +1166,6 @@ blk_qc_t submit_bio(struct bio *bio)
if (op_is_write(bio_op(bio))) {
count_vm_events(PGPGOUT, count);
} else {
- if (bio_flagged(bio, BIO_WORKINGSET))
- workingset_read = true;
task_io_account_read(bio->bi_iter.bi_size);
count_vm_events(PGPGIN, count);
}
@@ -1187,20 +1181,24 @@ blk_qc_t submit_bio(struct bio *bio)
}
/*
- * If we're reading data that is part of the userspace
- * workingset, count submission time as memory stall. When the
- * device is congested, or the submitting cgroup IO-throttled,
- * submission can be a significant part of overall IO time.
+ * If we're reading data that is part of the userspace workingset, count
+ * submission time as memory stall. When the device is congested, or
+ * the submitting cgroup IO-throttled, submission can be a significant
+ * part of overall IO time.
*/
- if (workingset_read)
- psi_memstall_enter(&pflags);
-
- ret = generic_make_request(bio);
+ if (unlikely(bio_op(bio) == REQ_OP_READ &&
+ bio_flagged(bio, BIO_WORKINGSET))) {
+ unsigned long pflags;
+ blk_qc_t ret;
- if (workingset_read)
+ psi_memstall_enter(&pflags);
+ ret = generic_make_request(bio);
psi_memstall_leave(&pflags);
- return ret;
+ return ret;
+ }
+
+ return generic_make_request(bio);
}
EXPORT_SYMBOL(submit_bio);
--
2.26.1
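The restructured branch can be sketched in plain user-space C. The counters and names here (stall_enters, submit_rw) are invented stand-ins for the kernel's psi_memstall_enter()/psi_memstall_leave() and submit_bio(); this is an illustrative model, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Counters standing in for psi_memstall_enter()/psi_memstall_leave(). */
static int stall_enters, stall_leaves;

enum op { OP_READ, OP_WRITE };

/* Models the restructured submit_bio(): only a read of workingset
 * pages brackets the submission with memstall accounting, and the
 * whole thing lives in one (unlikely) branch instead of tracking a
 * workingset_read flag across the function. */
static void submit_rw(enum op op, bool workingset)
{
    if (op == OP_READ && workingset) {
        stall_enters++;
        /* generic_make_request(bio) would run here */
        stall_leaves++;
        return;
    }
    /* common path: generic_make_request(bio) with no PSI bracketing */
}
```

The common path pays nothing: no flag, no extra locals, one early test.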
* [PATCH 03/11] block: replace BIO_QUEUE_ENTERED with BIO_CGROUP_ACCT
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
BIO_QUEUE_ENTERED is only used for cgroup accounting now, so rename
the flag and move setting it into the cgroup code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-merge.c | 10 ----------
include/linux/blk-cgroup.h | 10 ++++++----
include/linux/blk_types.h | 2 +-
3 files changed, 7 insertions(+), 15 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index c49eb3bdd0be8..a04e991b5ded9 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -336,16 +336,6 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
/* there isn't chance to merge the splitted bio */
split->bi_opf |= REQ_NOMERGE;
- /*
- * Since we're recursing into make_request here, ensure
- * that we mark this bio as already having entered the queue.
- * If not, and the queue is going away, we can get stuck
- * forever on waiting for the queue reference to drop. But
- * that will never happen, as we're already holding a
- * reference to it.
- */
- bio_set_flag(*bio, BIO_QUEUE_ENTERED);
-
bio_chain(split, *bio);
trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
generic_make_request(*bio);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 35f8ffe92b702..4deb8bb7b6afa 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -607,12 +607,14 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
u64_stats_update_begin(&bis->sync);
/*
- * If the bio is flagged with BIO_QUEUE_ENTERED it means this
- * is a split bio and we would have already accounted for the
- * size of the bio.
+ * If the bio is flagged with BIO_CGROUP_ACCT it means this is a
+ * split bio and we would have already accounted for the size of
+ * the bio.
*/
- if (!bio_flagged(bio, BIO_QUEUE_ENTERED))
+ if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
+ bio_set_flag(bio, BIO_CGROUP_ACCT);
bis->cur.bytes[rwd] += bio->bi_iter.bi_size;
+ }
bis->cur.ios[rwd]++;
u64_stats_update_end(&bis->sync);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 31eb92876be7c..431e59d05550f 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -220,7 +220,7 @@ enum {
* throttling rules. Don't do it again. */
BIO_TRACE_COMPLETION, /* bio_endio() should trace the final completion
* of this bio. */
- BIO_QUEUE_ENTERED, /* can use blk_queue_enter_live() */
+ BIO_CGROUP_ACCT, /* has been accounted to a cgroup */
BIO_TRACKED, /* set if bio goes through the rq_qos path */
BIO_FLAG_LAST
};
--
2.26.1
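The one-shot accounting this patch moves into blkcg_bio_issue_check() can be modeled in user space. All names below (mini_bio, mini_stats, account_bio) are invented for illustration; the cgroup_acct field stands in for the BIO_CGROUP_ACCT flag:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for struct bio and the BIO_CGROUP_ACCT flag. */
struct mini_bio {
    uint32_t size;
    bool     cgroup_acct;   /* models bio_flagged(bio, BIO_CGROUP_ACCT) */
};

struct mini_stats {
    uint64_t bytes;
    uint64_t ios;
};

/* Mirrors the blkcg_bio_issue_check() hunk: bytes are charged once per
 * logical bio (the first time it is seen), but every submission of a
 * split fragment still counts as an I/O. */
static void account_bio(struct mini_stats *st, struct mini_bio *bio)
{
    if (!bio->cgroup_acct) {
        bio->cgroup_acct = true;
        st->bytes += bio->size;
    }
    st->ios++;
}
```

Setting the flag at accounting time, rather than at split time, is what lets __blk_queue_split() drop its bio_set_flag() call.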
* [PATCH 04/11] block: add a bio_queue_enter helper
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Add a little helper that passes the right nowait flag to blk_queue_enter
based on the bio flag, and terminates the bio with the right error code
if entering the queue fails.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 50 +++++++++++++++++++++++-------------------------
1 file changed, 24 insertions(+), 26 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 81a291085c6ca..7f11560bfddbb 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -440,6 +440,23 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
}
}
+static inline int bio_queue_enter(struct bio *bio)
+{
+ struct request_queue *q = bio->bi_disk->queue;
+ bool nowait = bio->bi_opf & REQ_NOWAIT;
+ int ret;
+
+ ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0);
+ if (unlikely(ret)) {
+ if (nowait && !blk_queue_dying(q))
+ bio_wouldblock_error(bio);
+ else
+ bio_io_error(bio);
+ }
+
+ return ret;
+}
+
void blk_queue_exit(struct request_queue *q)
{
percpu_ref_put(&q->q_usage_counter);
@@ -1049,10 +1066,8 @@ blk_qc_t generic_make_request(struct bio *bio)
current->bio_list = bio_list_on_stack;
do {
struct request_queue *q = bio->bi_disk->queue;
- blk_mq_req_flags_t flags = bio->bi_opf & REQ_NOWAIT ?
- BLK_MQ_REQ_NOWAIT : 0;
- if (likely(blk_queue_enter(q, flags) == 0)) {
+ if (likely(bio_queue_enter(bio) == 0)) {
struct bio_list lower, same;
/* Create a fresh bio_list for all subordinate requests */
@@ -1079,12 +1094,6 @@ blk_qc_t generic_make_request(struct bio *bio)
bio_list_merge(&bio_list_on_stack[0], &lower);
bio_list_merge(&bio_list_on_stack[0], &same);
bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]);
- } else {
- if (unlikely(!blk_queue_dying(q) &&
- (bio->bi_opf & REQ_NOWAIT)))
- bio_wouldblock_error(bio);
- else
- bio_io_error(bio);
}
bio = bio_list_pop(&bio_list_on_stack[0]);
} while (bio);
@@ -1106,30 +1115,19 @@ EXPORT_SYMBOL(generic_make_request);
blk_qc_t direct_make_request(struct bio *bio)
{
struct request_queue *q = bio->bi_disk->queue;
- bool nowait = bio->bi_opf & REQ_NOWAIT;
blk_qc_t ret;
- if (WARN_ON_ONCE(q->make_request_fn))
- goto io_error;
- if (!generic_make_request_checks(bio))
+ if (WARN_ON_ONCE(q->make_request_fn)) {
+ bio_io_error(bio);
return BLK_QC_T_NONE;
-
- if (unlikely(blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0))) {
- if (nowait && !blk_queue_dying(q))
- goto would_block;
- goto io_error;
}
-
+ if (!generic_make_request_checks(bio))
+ return BLK_QC_T_NONE;
+ if (unlikely(bio_queue_enter(bio)))
+ return BLK_QC_T_NONE;
ret = blk_mq_make_request(q, bio);
blk_queue_exit(q);
return ret;
-
-would_block:
- bio_wouldblock_error(bio);
- return BLK_QC_T_NONE;
-io_error:
- bio_io_error(bio);
- return BLK_QC_T_NONE;
}
EXPORT_SYMBOL_GPL(direct_make_request);
--
2.26.1
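The error-code selection that the helper centralizes can be sketched as a pure function. ERR_AGAIN and ERR_IO below are invented constants standing in for bio_wouldblock_error() (-EAGAIN) and bio_io_error() (-EIO); this is a model of the decision, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical error codes modeling bio_wouldblock_error() and
 * bio_io_error() respectively. */
enum { ERR_AGAIN = 11, ERR_IO = 5 };

/* Mirrors the error selection in the new bio_queue_enter() helper:
 * a REQ_NOWAIT bio that fails to enter a live (frozen) queue would
 * merely have blocked, so it is failed with EAGAIN; anything else,
 * including a dying queue, gets a hard I/O error. */
static int enter_error(bool nowait, bool queue_dying)
{
    if (nowait && !queue_dying)
        return ERR_AGAIN;
    return ERR_IO;
}
```

Previously this decision was open-coded in both generic_make_request() and direct_make_request(); the helper makes both callers one-liners.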
* [PATCH 05/11] block: refactor generic_make_request
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Split the recursion prevention loop into its own little helpers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 160 ++++++++++++++++++++++++++---------------------
1 file changed, 88 insertions(+), 72 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 7f11560bfddbb..732d5b8d3cd25 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1008,6 +1008,85 @@ generic_make_request_checks(struct bio *bio)
return false;
}
+static blk_qc_t do_make_request(struct bio *bio,
+ struct bio_list bio_list_on_stack[2])
+{
+ struct request_queue *q = bio->bi_disk->queue;
+ struct bio_list lower, same;
+ blk_qc_t ret;
+
+ /*
+ * Create a fresh bio_list for all subordinate requests.
+ */
+ bio_list_on_stack[1] = bio_list_on_stack[0];
+ bio_list_init(&bio_list_on_stack[0]);
+
+ if (unlikely(bio_queue_enter(bio) != 0))
+ return BLK_QC_T_NONE;
+ if (q->make_request_fn)
+ ret = q->make_request_fn(q, bio);
+ else
+ ret = blk_mq_make_request(q, bio);
+ blk_queue_exit(q);
+
+ /*
+ * Sort new bios into those for a lower level and those for the same
+ * level.
+ */
+ bio_list_init(&lower);
+ bio_list_init(&same);
+ while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL)
+ if (q == bio->bi_disk->queue)
+ bio_list_add(&same, bio);
+ else
+ bio_list_add(&lower, bio);
+
+ /*
+ * Now assemble so we handle the lowest level first.
+ */
+ bio_list_merge(&bio_list_on_stack[0], &lower);
+ bio_list_merge(&bio_list_on_stack[0], &same);
+ bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]);
+ return ret;
+}
+
+static blk_qc_t __generic_make_request(struct bio *bio)
+{
+ /*
+ * bio_list_on_stack[0] contains bios submitted by the current
+ * make_request_fn.
+ * bio_list_on_stack[1] contains bios that were submitted before the
+ * current make_request_fn, but that haven't been processed yet.
+ */
+ struct bio_list bio_list_on_stack[2];
+ blk_qc_t ret = BLK_QC_T_NONE;
+
+ BUG_ON(bio->bi_next);
+
+ /*
+ * The following loop may be a bit non-obvious, and so deserves some
+ * explanation: Before entering the loop, bio->bi_next is NULL (as all
+ * callers ensure that) so we have a list with a single bio. We pretend
+ * that we have just taken it off a longer list, so we assign bio_list
+ * to a pointer to the bio_list_on_stack, thus initialising the bio_list
+ * of new bios to be added. ->make_request() may indeed add some more
+ * bios through a recursive call to generic_make_request. If it did, we
+ * find a non-NULL value in bio_list and re-enter the loop from the top.
+ * In this case we really did just take the bio of the top of the list
+ * (no pretending) and so remove it from bio_list, and call into
+ * ->make_request() again.
+ */
+ bio_list_init(&bio_list_on_stack[0]);
+
+ current->bio_list = bio_list_on_stack;
+ do {
+ ret = do_make_request(bio, bio_list_on_stack);
+ } while ((bio = bio_list_pop(&bio_list_on_stack[0])));
+ current->bio_list = NULL; /* deactivate */
+
+ return ret;
+}
+
/**
* generic_make_request - re-submit a bio to the block device layer for I/O
* @bio: The bio describing the location in memory and on the device.
@@ -1019,88 +1098,25 @@ generic_make_request_checks(struct bio *bio)
*/
blk_qc_t generic_make_request(struct bio *bio)
{
- /*
- * bio_list_on_stack[0] contains bios submitted by the current
- * make_request_fn.
- * bio_list_on_stack[1] contains bios that were submitted before
- * the current make_request_fn, but that haven't been processed
- * yet.
- */
- struct bio_list bio_list_on_stack[2];
- blk_qc_t ret = BLK_QC_T_NONE;
-
if (!generic_make_request_checks(bio))
- goto out;
+ return BLK_QC_T_NONE;
/*
* We only want one ->make_request_fn to be active at a time, else
* stack usage with stacked devices could be a problem. So use
- * current->bio_list to keep a list of requests submited by a
- * make_request_fn function. current->bio_list is also used as a
- * flag to say if generic_make_request is currently active in this
- * task or not. If it is NULL, then no make_request is active. If
- * it is non-NULL, then a make_request is active, and new requests
- * should be added at the tail
+ * current->bio_list to keep a list of requests submitted by a
+ * make_request_fn function. current->bio_list is also used as a flag
+ * to say if generic_make_request is currently active in this thread or
+ * not. If it is NULL, then no make_request is active. If it is
+ * non-NULL, then a make_request is active, and new requests should be
+ * added at the tail
*/
if (current->bio_list) {
bio_list_add(¤t->bio_list[0], bio);
- goto out;
+ return BLK_QC_T_NONE;
}
- /* following loop may be a bit non-obvious, and so deserves some
- * explanation.
- * Before entering the loop, bio->bi_next is NULL (as all callers
- * ensure that) so we have a list with a single bio.
- * We pretend that we have just taken it off a longer list, so
- * we assign bio_list to a pointer to the bio_list_on_stack,
- * thus initialising the bio_list of new bios to be
- * added. ->make_request() may indeed add some more bios
- * through a recursive call to generic_make_request. If it
- * did, we find a non-NULL value in bio_list and re-enter the loop
- * from the top. In this case we really did just take the bio
- * of the top of the list (no pretending) and so remove it from
- * bio_list, and call into ->make_request() again.
- */
- BUG_ON(bio->bi_next);
- bio_list_init(&bio_list_on_stack[0]);
- current->bio_list = bio_list_on_stack;
- do {
- struct request_queue *q = bio->bi_disk->queue;
-
- if (likely(bio_queue_enter(bio) == 0)) {
- struct bio_list lower, same;
-
- /* Create a fresh bio_list for all subordinate requests */
- bio_list_on_stack[1] = bio_list_on_stack[0];
- bio_list_init(&bio_list_on_stack[0]);
- if (q->make_request_fn)
- ret = q->make_request_fn(q, bio);
- else
- ret = blk_mq_make_request(q, bio);
-
- blk_queue_exit(q);
-
- /* sort new bios into those for a lower level
- * and those for the same level
- */
- bio_list_init(&lower);
- bio_list_init(&same);
- while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL)
- if (q == bio->bi_disk->queue)
- bio_list_add(&same, bio);
- else
- bio_list_add(&lower, bio);
- /* now assemble so we handle the lowest level first */
- bio_list_merge(&bio_list_on_stack[0], &lower);
- bio_list_merge(&bio_list_on_stack[0], &same);
- bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]);
- }
- bio = bio_list_pop(&bio_list_on_stack[0]);
- } while (bio);
- current->bio_list = NULL; /* deactivate */
-
-out:
- return ret;
+ return __generic_make_request(bio);
}
EXPORT_SYMBOL(generic_make_request);
--
2.26.1
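The current->bio_list trick that this patch splits into helpers can be demonstrated with a tiny user-space model: while a "make_request" is active, recursive submissions are queued on a list instead of recursing, and the outer loop drains them. This sketch deliberately omits the lower/same-level sorting the real code does, and all names are invented:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_BIOS 16

static int queue[MAX_BIOS];
static int q_head, q_tail;
static int *active;             /* models current->bio_list */
static int dispatched[MAX_BIOS], ndispatched;

static void submit(int bio)
{
    if (active) {               /* a make_request is already running:  */
        queue[q_tail++] = bio;  /* queue instead of recursing deeper   */
        return;
    }

    active = queue;             /* mark this task as "inside submit"   */
    for (;;) {
        dispatched[ndispatched++] = bio;
        if (bio == 1) {         /* "stacking driver": resubmits children */
            submit(11);
            submit(12);
        }
        if (q_head == q_tail)
            break;
        bio = queue[q_head++];
    }
    active = NULL;              /* deactivate */
}
```

However deep the device stack, the C stack depth stays constant: the recursion is flattened into iteration, which is exactly what keeps stacked drivers from overflowing the kernel stack.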
* [PATCH 06/11] block: optimize generic_make_request for direct to blk-mq issue
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Don't bother with the on-stack bio list if we know that we directly
issue to a request-based driver that can't re-inject bios.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 29 +++++++++++++++++++----------
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 732d5b8d3cd25..e8c48203b2c55 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1008,6 +1008,18 @@ generic_make_request_checks(struct bio *bio)
return false;
}
+static inline blk_qc_t __direct_make_request(struct bio *bio)
+{
+ struct request_queue *q = bio->bi_disk->queue;
+ blk_qc_t ret;
+
+ if (unlikely(bio_queue_enter(bio)))
+ return BLK_QC_T_NONE;
+ ret = blk_mq_make_request(q, bio);
+ blk_queue_exit(q);
+ return ret;
+}
+
static blk_qc_t do_make_request(struct bio *bio,
struct bio_list bio_list_on_stack[2])
{
@@ -1116,7 +1128,10 @@ blk_qc_t generic_make_request(struct bio *bio)
return BLK_QC_T_NONE;
}
- return __generic_make_request(bio);
+ if (bio->bi_disk->queue->make_request_fn)
+ return __generic_make_request(bio);
+ return __direct_make_request(bio);
+
}
EXPORT_SYMBOL(generic_make_request);
@@ -1130,20 +1145,14 @@ EXPORT_SYMBOL(generic_make_request);
*/
blk_qc_t direct_make_request(struct bio *bio)
{
- struct request_queue *q = bio->bi_disk->queue;
- blk_qc_t ret;
-
- if (WARN_ON_ONCE(q->make_request_fn)) {
+ if (WARN_ON_ONCE(bio->bi_disk->queue->make_request_fn)) {
bio_io_error(bio);
return BLK_QC_T_NONE;
}
+
if (!generic_make_request_checks(bio))
return BLK_QC_T_NONE;
- if (unlikely(bio_queue_enter(bio)))
- return BLK_QC_T_NONE;
- ret = blk_mq_make_request(q, bio);
- blk_queue_exit(q);
- return ret;
+ return __direct_make_request(bio);
}
EXPORT_SYMBOL_GPL(direct_make_request);
--
2.26.1
* [PATCH 07/11] block: optimize do_make_request for direct to blk-mq issue
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Don't bother with reshuffling the on-stack bio list if we know that we
directly issue to a request-based driver that can't re-inject bios.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index e8c48203b2c55..d196799e68881 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1035,10 +1035,7 @@ static blk_qc_t do_make_request(struct bio *bio,
if (unlikely(bio_queue_enter(bio) != 0))
return BLK_QC_T_NONE;
- if (q->make_request_fn)
- ret = q->make_request_fn(q, bio);
- else
- ret = blk_mq_make_request(q, bio);
+ ret = q->make_request_fn(q, bio);
blk_queue_exit(q);
/*
@@ -1092,7 +1089,10 @@ static blk_qc_t __generic_make_request(struct bio *bio)
current->bio_list = bio_list_on_stack;
do {
- ret = do_make_request(bio, bio_list_on_stack);
+ if (bio->bi_disk->queue->make_request_fn)
+ ret = do_make_request(bio, bio_list_on_stack);
+ else
+ ret = __direct_make_request(bio);
} while ((bio = bio_list_pop(&bio_list_on_stack[0])));
current->bio_list = NULL; /* deactivate */
--
2.26.1
* [PATCH 08/11] block: move the call to blk_queue_enter_live out of blk_mq_get_request
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
Move the blk_queue_enter_live calls into the callers, where they can
successively be cleaned up.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index bcc3a2397d4ae..0d94437362644 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -340,8 +340,6 @@ static struct request *blk_mq_get_request(struct request_queue *q,
bool clear_ctx_on_error = false;
u64 alloc_time_ns = 0;
- blk_queue_enter_live(q);
-
/* alloc_time includes depth and tag waits */
if (blk_queue_rq_alloc_time(q))
alloc_time_ns = ktime_get_ns();
@@ -377,7 +375,6 @@ static struct request *blk_mq_get_request(struct request_queue *q,
if (tag == BLK_MQ_TAG_FAIL) {
if (clear_ctx_on_error)
data->ctx = NULL;
- blk_queue_exit(q);
return NULL;
}
@@ -407,11 +404,14 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
if (ret)
return ERR_PTR(ret);
+ blk_queue_enter_live(q);
rq = blk_mq_get_request(q, NULL, &alloc_data);
blk_queue_exit(q);
- if (!rq)
+ if (!rq) {
+ blk_queue_exit(q);
return ERR_PTR(-EWOULDBLOCK);
+ }
rq->__data_len = 0;
rq->__sector = (sector_t) -1;
@@ -456,11 +456,14 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
+ blk_queue_enter_live(q);
rq = blk_mq_get_request(q, NULL, &alloc_data);
blk_queue_exit(q);
- if (!rq)
+ if (!rq) {
+ blk_queue_exit(q);
return ERR_PTR(-EWOULDBLOCK);
+ }
return rq;
}
@@ -2011,8 +2014,10 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
rq_qos_throttle(q, bio);
data.cmd_flags = bio->bi_opf;
+ blk_queue_enter_live(q);
rq = blk_mq_get_request(q, bio, &data);
if (unlikely(!rq)) {
+ blk_queue_exit(q);
rq_qos_cleanup(q, bio);
if (bio->bi_opf & REQ_NOWAIT)
bio_wouldblock_error(bio);
--
2.26.1
* [PATCH 09/11] block: remove a pointless queue enter pair in blk_mq_alloc_request
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
No need for two queue references.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0d94437362644..fc1df2a5969a0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -404,10 +404,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
if (ret)
return ERR_PTR(ret);
- blk_queue_enter_live(q);
rq = blk_mq_get_request(q, NULL, &alloc_data);
- blk_queue_exit(q);
-
if (!rq) {
blk_queue_exit(q);
return ERR_PTR(-EWOULDBLOCK);
--
2.26.1
* [PATCH 10/11] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
No need for two queue references.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-mq.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index fc1df2a5969a0..6375ed55cdfa7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -445,24 +445,22 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
* Check if the hardware context is actually mapped to anything.
* If not tell the caller that it should skip this queue.
*/
+ ret = -EXDEV;
alloc_data.hctx = q->queue_hw_ctx[hctx_idx];
- if (!blk_mq_hw_queue_mapped(alloc_data.hctx)) {
- blk_queue_exit(q);
- return ERR_PTR(-EXDEV);
- }
+ if (!blk_mq_hw_queue_mapped(alloc_data.hctx))
+ goto out_queue_exit;
cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
- blk_queue_enter_live(q);
+ ret = -EWOULDBLOCK;
rq = blk_mq_get_request(q, NULL, &alloc_data);
- blk_queue_exit(q);
-
- if (!rq) {
- blk_queue_exit(q);
- return ERR_PTR(-EWOULDBLOCK);
- }
+ if (!rq)
+ goto out_queue_exit;
return rq;
+out_queue_exit:
+ blk_queue_exit(q);
+ return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(blk_mq_alloc_request_hctx);
--
2.26.1
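The single-exit-label shape this patch introduces is a standard kernel C idiom worth seeing in isolation. Below is a hedged user-space model with invented names (fake_enter, fake_exit, alloc) where a counter stands in for q_usage_counter; the error values merely mimic -EXDEV and -EWOULDBLOCK:

```c
#include <assert.h>
#include <stdbool.h>

/* Counter standing in for the queue's q_usage_counter. */
static int refcount;

static void fake_enter(void) { refcount++; }
static void fake_exit(void)  { refcount--; }

/* Mirrors the refactored blk_mq_alloc_request_hctx(): ret is set
 * before each check, and one out_queue_exit label drops the queue
 * reference on every failure path, instead of duplicating (or, as in
 * the old code, accidentally doubling) the cleanup at each return.
 * On success the reference travels with the returned "request". */
static int alloc(bool hctx_mapped, bool got_tag, int *out)
{
    int ret;

    fake_enter();
    ret = -1;                   /* models -EXDEV: hctx not mapped */
    if (!hctx_mapped)
        goto out_queue_exit;
    ret = -2;                   /* models -EWOULDBLOCK: no tag */
    if (!got_tag)
        goto out_queue_exit;
    *out = 42;                  /* the "request" owns the reference now */
    return 0;
out_queue_exit:
    fake_exit();
    return ret;
}
```

Note how the original code both took an extra reference pair around the allocation and, after patch 08's interim state, called blk_queue_exit() twice on failure; the single label makes the reference balance obvious by inspection.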
* [PATCH 11/11] block: allow blk_mq_make_request to consume the q_usage_counter reference
From: Christoph Hellwig @ 2020-04-25 17:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block
blk_mq_make_request currently needs to grab a q_usage_counter
reference when allocating a request. This is because the block layer
grabs one before calling blk_mq_make_request, but also releases it as
soon as blk_mq_make_request returns. Remove the blk_queue_exit call
after blk_mq_make_request returns, and instead let it consume the
reference. This works perfectly fine for the block layer caller, just
device mapper needs an extra reference as the old problem still
persists there. Open code blk_queue_enter_live in device mapper,
as there should be no other callers and this allows better documenting
why we do a non-try get.
Also remove the pointless request_queue argument to blk_mq_make_request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 7 +------
block/blk-mq.c | 17 +++++++++--------
block/blk.h | 11 -----------
drivers/md/dm.c | 13 +++++++++++--
include/linux/blk-mq.h | 2 +-
5 files changed, 22 insertions(+), 28 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index d196799e68881..1fda07af3ff3b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1010,14 +1010,9 @@ generic_make_request_checks(struct bio *bio)
static inline blk_qc_t __direct_make_request(struct bio *bio)
{
- struct request_queue *q = bio->bi_disk->queue;
- blk_qc_t ret;
-
if (unlikely(bio_queue_enter(bio)))
return BLK_QC_T_NONE;
- ret = blk_mq_make_request(q, bio);
- blk_queue_exit(q);
- return ret;
+ return blk_mq_make_request(bio);
}
static blk_qc_t do_make_request(struct bio *bio,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6375ed55cdfa7..d97f74a82e8f8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1968,7 +1968,6 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
/**
* blk_mq_make_request - Create and send a request to block device.
- * @q: Request queue pointer.
* @bio: Bio pointer.
*
* Builds up a request structure from @q and @bio and send to the device. The
@@ -1982,8 +1981,9 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
*
* Returns: Request queue cookie.
*/
-blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+blk_qc_t blk_mq_make_request(struct bio *bio)
{
+ struct request_queue *q = bio->bi_disk->queue;
const int is_sync = op_is_sync(bio->bi_opf);
const int is_flush_fua = op_is_flush(bio->bi_opf);
struct blk_mq_alloc_data data = { .flags = 0};
@@ -1997,26 +1997,24 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
__blk_queue_split(q, &bio, &nr_segs);
if (!bio_integrity_prep(bio))
- return BLK_QC_T_NONE;
+ goto queue_exit;
if (!is_flush_fua && !blk_queue_nomerges(q) &&
blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
- return BLK_QC_T_NONE;
+ goto queue_exit;
if (blk_mq_sched_bio_merge(q, bio, nr_segs))
- return BLK_QC_T_NONE;
+ goto queue_exit;
rq_qos_throttle(q, bio);
data.cmd_flags = bio->bi_opf;
- blk_queue_enter_live(q);
rq = blk_mq_get_request(q, bio, &data);
if (unlikely(!rq)) {
- blk_queue_exit(q);
rq_qos_cleanup(q, bio);
if (bio->bi_opf & REQ_NOWAIT)
bio_wouldblock_error(bio);
- return BLK_QC_T_NONE;
+ goto queue_exit;
}
trace_block_getrq(q, bio, bio->bi_opf);
@@ -2095,6 +2093,9 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
}
return cookie;
+queue_exit:
+ blk_queue_exit(q);
+ return BLK_QC_T_NONE;
}
EXPORT_SYMBOL_GPL(blk_mq_make_request); /* only for request based dm */
diff --git a/block/blk.h b/block/blk.h
index 73bd3b1c69384..f5b271a8a5016 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -62,17 +62,6 @@ void blk_free_flush_queue(struct blk_flush_queue *q);
void blk_freeze_queue(struct request_queue *q);
-static inline void blk_queue_enter_live(struct request_queue *q)
-{
- /*
- * Given that running in generic_make_request() context
- * guarantees that a live reference against q_usage_counter has
- * been established, further references under that same context
- * need not check that the queue has been frozen (marked dead).
- */
- percpu_ref_get(&q->q_usage_counter);
-}
-
static inline bool biovec_phys_mergeable(struct request_queue *q,
struct bio_vec *vec1, struct bio_vec *vec2)
{
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 0eb93da44ea2a..dc191da217f78 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1788,8 +1788,17 @@ static blk_qc_t dm_make_request(struct request_queue *q, struct bio *bio)
int srcu_idx;
struct dm_table *map;
- if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED)
- return blk_mq_make_request(q, bio);
+ if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) {
+ /*
+ * We are called with a live reference on q_usage_counter, but
+ * that one will be released as soon as we return. Grab an
+ * extra one as blk_mq_make_request expects to be able to
+ * consume a reference (which lives until the request is freed
+ * in case a request is allocated).
+ */
+ percpu_ref_get(&q->q_usage_counter);
+ return blk_mq_make_request(bio);
+ }
map = dm_get_live_table(md, &srcu_idx);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index d7307795439a4..13038954f67be 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -578,6 +578,6 @@ static inline void blk_mq_cleanup_rq(struct request *rq)
rq->q->mq_ops->cleanup_rq(rq);
}
-blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio);
+blk_qc_t blk_mq_make_request(struct bio *bio);
#endif
--
2.26.1
* Re: [PATCH 06/11] block: optimize generic_make_request for direct to blk-mq issue
2020-04-25 17:09 ` [PATCH 06/11] block: optimize generic_make_request for direct to blk-mq issue Christoph Hellwig
@ 2020-04-26 2:53 ` Ming Lei
2020-04-27 15:10 ` Christoph Hellwig
0 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2020-04-26 2:53 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block
On Sat, Apr 25, 2020 at 07:09:39PM +0200, Christoph Hellwig wrote:
> Don't bother with the on-stack bio list if we know that we directly
> issue to a request based driver that can't re-inject bios.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/blk-core.c | 29 +++++++++++++++++++----------
> 1 file changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 732d5b8d3cd25..e8c48203b2c55 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1008,6 +1008,18 @@ generic_make_request_checks(struct bio *bio)
> return false;
> }
>
> +static inline blk_qc_t __direct_make_request(struct bio *bio)
> +{
> + struct request_queue *q = bio->bi_disk->queue;
> + blk_qc_t ret;
> +
> + if (unlikely(bio_queue_enter(bio)))
> + return BLK_QC_T_NONE;
> + ret = blk_mq_make_request(q, bio);
> + blk_queue_exit(q);
> + return ret;
> +}
> +
> static blk_qc_t do_make_request(struct bio *bio,
> struct bio_list bio_list_on_stack[2])
> {
> @@ -1116,7 +1128,10 @@ blk_qc_t generic_make_request(struct bio *bio)
> return BLK_QC_T_NONE;
> }
>
> - return __generic_make_request(bio);
> + if (bio->bi_disk->queue->make_request_fn)
> + return __generic_make_request(bio);
> + return __direct_make_request(bio);
> +
blk_mq_make_request() still calls into generic_make_request(), so a bio
may be added to current->bio_list; it looks like __direct_make_request()
can no longer cover recursive bio submission.
Thanks,
Ming
* Re: [PATCH 01/11] block: improve the kerneldoc comments for submit_bio and generic_make_request
2020-04-25 17:09 ` [PATCH 01/11] block: improve the kerneldoc comments for submit_bio and generic_make_request Christoph Hellwig
@ 2020-04-26 2:59 ` Ming Lei
0 siblings, 0 replies; 15+ messages in thread
From: Ming Lei @ 2020-04-26 2:59 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block
On Sat, Apr 25, 2020 at 07:09:34PM +0200, Christoph Hellwig wrote:
> The current documentation is a little weird, as it doesn't clearly
> explain which function to use, and also has the guts of the information
> on generic_make_request, which is the internal interface for stacking
> drivers.
>
> Fix this up by properly documenting submit_bio, and only documenting
> the differences and the use case for generic_make_request.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/blk-core.c | 35 ++++++++++++-----------------------
> 1 file changed, 12 insertions(+), 23 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index dffff21008886..68351ee94ad2e 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -992,28 +992,13 @@ generic_make_request_checks(struct bio *bio)
> }
>
> /**
> - * generic_make_request - hand a buffer to its device driver for I/O
> + * generic_make_request - re-submit a bio to the block device layer for I/O
> * @bio: The bio describing the location in memory and on the device.
> *
> - * generic_make_request() is used to make I/O requests of block
> - * devices. It is passed a &struct bio, which describes the I/O that needs
> - * to be done.
> - *
> - * generic_make_request() does not return any status. The
> - * success/failure status of the request, along with notification of
> - * completion, is delivered asynchronously through the bio->bi_end_io
> - * function described (one day) else where.
> - *
> - * The caller of generic_make_request must make sure that bi_io_vec
> - * are set to describe the memory buffer, and that bi_dev and bi_sector are
> - * set to describe the device address, and the
> - * bi_end_io and optionally bi_private are set to describe how
> - * completion notification should be signaled.
> - *
> - * generic_make_request and the drivers it calls may use bi_next if this
> - * bio happens to be merged with someone else, and may resubmit the bio to
> - * a lower device by calling into generic_make_request recursively, which
> - * means the bio should NOT be touched after the call to ->make_request_fn.
> + * This is a version of submit_bio() that shall only be used for I/O that is
> + * resubmitted to lower level drivers by stacking block drivers. All file
No, generic_make_request() can be used by any block driver, not just
stacking drivers; see bio splitting, blk-throttle.c and bounce, and maybe more.
Thanks,
Ming
* Re: [PATCH 06/11] block: optimize generic_make_request for direct to blk-mq issue
2020-04-26 2:53 ` Ming Lei
@ 2020-04-27 15:10 ` Christoph Hellwig
0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2020-04-27 15:10 UTC (permalink / raw)
To: Ming Lei; +Cc: Christoph Hellwig, Jens Axboe, linux-block
On Sun, Apr 26, 2020 at 10:53:52AM +0800, Ming Lei wrote:
> blk_mq_make_request() still calls into generic_make_request(), so a bio
> may be added to current->bio_list; it looks like __direct_make_request()
> can no longer cover recursive bio submission.
True, we can't do the series as-is.
end of thread, other threads:[~2020-04-27 15:10 UTC | newest]
Thread overview: 15+ messages
2020-04-25 17:09 [RFC] more make_request optimizations Christoph Hellwig
2020-04-25 17:09 ` [PATCH 01/11] block: improve the kerneldoc comments for submit_bio and generic_make_request Christoph Hellwig
2020-04-26 2:59 ` Ming Lei
2020-04-25 17:09 ` [PATCH 02/11] block: cleanup the memory stall accounting in submit_bio Christoph Hellwig
2020-04-25 17:09 ` [PATCH 03/11] block: replace BIO_QUEUE_ENTERED with BIO_CGROUP_ACCT Christoph Hellwig
2020-04-25 17:09 ` [PATCH 04/11] block: add a bio_queue_enter helper Christoph Hellwig
2020-04-25 17:09 ` [PATCH 05/11] block: refactor generic_make_request Christoph Hellwig
2020-04-25 17:09 ` [PATCH 06/11] block: optimize generic_make_request for direct to blk-mq issue Christoph Hellwig
2020-04-26 2:53 ` Ming Lei
2020-04-27 15:10 ` Christoph Hellwig
2020-04-25 17:09 ` [PATCH 07/11] block: optimize do_make_request " Christoph Hellwig
2020-04-25 17:09 ` [PATCH 08/11] block: move the call to blk_queue_enter_live out of blk_mq_get_request Christoph Hellwig
2020-04-25 17:09 ` [PATCH 09/11] block: remove a pointless queue enter pair in blk_mq_alloc_request Christoph Hellwig
2020-04-25 17:09 ` [PATCH 10/11] block: remove a pointless queue enter pair in blk_mq_alloc_request_hctx Christoph Hellwig
2020-04-25 17:09 ` [PATCH 11/11] block: allow blk_mq_make_request to consume the q_usage_counter reference Christoph Hellwig