public inbox for linux-block@vger.kernel.org
* [PATCH 0/2] blk-mq: merge bio into sw queue before plugging
@ 2017-05-23 11:47 Ming Lei
  2017-05-23 11:47 ` [PATCH 1/2] " Ming Lei
  2017-05-23 11:47 ` [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge Ming Lei
  0 siblings, 2 replies; 7+ messages in thread
From: Ming Lei @ 2017-05-23 11:47 UTC (permalink / raw)
  To: Jens Axboe, linux-block, Christoph Hellwig; +Cc: Ming Lei

Hi,

The 1st patch moves the sw queue's bio merge to before plugging,
which fixes the sequential I/O performance regression introduced
by blk-mq.

The 2nd patch makes the sw queue's bio merge the default .bio_merge
when no I/O scheduler is used or the I/O scheduler doesn't provide
.bio_merge.

This post splits the original patch into two to make the diff easier
to read, as suggested by Christoph.

Thanks,
Ming


Ming Lei (2):
  blk-mq: merge bio into sw queue before plugging
  blk-mq: make per-sw-queue bio merge as default .bio_merge

 block/blk-mq-sched.c | 62 ++++++++++++++++++++++++++++++++++++----
 block/blk-mq-sched.h |  4 +--
 block/blk-mq.c       | 80 +++++++---------------------------------------------
 3 files changed, 68 insertions(+), 78 deletions(-)

-- 
2.9.4

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] blk-mq: merge bio into sw queue before plugging
  2017-05-23 11:47 [PATCH 0/2] blk-mq: merge bio into sw queue before plugging Ming Lei
@ 2017-05-23 11:47 ` Ming Lei
  2017-05-24  9:48   ` Christoph Hellwig
  2017-05-23 11:47 ` [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge Ming Lei
  1 sibling, 1 reply; 7+ messages in thread
From: Ming Lei @ 2017-05-23 11:47 UTC (permalink / raw)
  To: Jens Axboe, linux-block, Christoph Hellwig; +Cc: Ming Lei

Before blk-mq was introduced, I/O was merged into the elevator
before being put into the plug list, but blk-mq changed that
order and made merging into the sw queue basically impossible.
As a result, sequential I/O throughput was observed to degrade
by about 10%~20% on virtio-blk in the test[1] when mq-deadline
isn't used.

This patch moves the per-sw-queue bio merging to before plugging,
like blk_queue_bio() does, which fixes the performance regression
in this situation.

[1]. test script:
sudo fio --direct=1 --size=128G --bsrange=4k-4k --runtime=40 --numjobs=16 --ioengine=libaio --iodepth=64 --group_reporting=1 --filename=/dev/vdb --name=virtio_blk-test-$RW --rw=$RW --output-format=json

RW=read or write
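For convenience, the two runs could be driven with a small wrapper like the sketch below; it only builds and prints the fio command line, since fio availability and /dev/vdb are environment-specific:

```shell
# Hypothetical helper: construct the test command above for a given
# rw mode. Pipe the output to sh to actually run it.
virtio_blk_test_cmd() {
  echo fio --direct=1 --size=128G --bsrange=4k-4k --runtime=40 \
    --numjobs=16 --ioengine=libaio --iodepth=64 --group_reporting=1 \
    --filename=/dev/vdb --name=virtio_blk-test-"$1" --rw="$1" \
    --output-format=json
}

for RW in read write; do
  virtio_blk_test_cmd "$RW"
done
```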

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 53 +++++++++++++++++++++++++++++------------------------
 1 file changed, 29 insertions(+), 24 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a69ad122ed66..b7ca64ef15e8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1446,30 +1446,31 @@ static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
 		!blk_queue_nomerges(hctx->queue);
 }
 
-static inline bool blk_mq_merge_queue_io(struct blk_mq_hw_ctx *hctx,
-					 struct blk_mq_ctx *ctx,
-					 struct request *rq, struct bio *bio)
+/* attempt to merge bio into current sw queue */
+static inline bool blk_mq_merge_bio(struct request_queue *q, struct bio *bio)
 {
-	if (!hctx_allow_merges(hctx) || !bio_mergeable(bio)) {
-		blk_mq_bio_to_request(rq, bio);
-		spin_lock(&ctx->lock);
-insert_rq:
-		__blk_mq_insert_request(hctx, rq, false);
-		spin_unlock(&ctx->lock);
-		return false;
-	} else {
-		struct request_queue *q = hctx->queue;
+	bool ret = false;
+	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
+	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
 
-		spin_lock(&ctx->lock);
-		if (!blk_mq_attempt_merge(q, ctx, bio)) {
-			blk_mq_bio_to_request(rq, bio);
-			goto insert_rq;
-		}
+	if (!hctx_allow_merges(hctx) || !bio_mergeable(bio))
+		goto exit;
 
-		spin_unlock(&ctx->lock);
-		__blk_mq_finish_request(hctx, ctx, rq);
-		return true;
-	}
+	spin_lock(&ctx->lock);
+	ret = blk_mq_attempt_merge(q, ctx, bio);
+	spin_unlock(&ctx->lock);
+exit:
+	blk_mq_put_ctx(ctx);
+	return ret;
+}
+
+static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
+				   struct blk_mq_ctx *ctx,
+				   struct request *rq)
+{
+	spin_lock(&ctx->lock);
+	__blk_mq_insert_request(hctx, rq, false);
+	spin_unlock(&ctx->lock);
 }
 
 static blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx, struct request *rq)
@@ -1568,6 +1569,9 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	if (blk_mq_sched_bio_merge(q, bio))
 		return BLK_QC_T_NONE;
 
+	if (blk_mq_merge_bio(q, bio))
+		return BLK_QC_T_NONE;
+
 	wb_acct = wbt_wait(q->rq_wb, bio, NULL);
 
 	trace_block_getrq(q, bio, bio->bi_opf);
@@ -1649,11 +1653,12 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
 		blk_mq_sched_insert_request(rq, false, true, true, true);
-	} else if (!blk_mq_merge_queue_io(data.hctx, data.ctx, rq, bio)) {
+	} else {
 		blk_mq_put_ctx(data.ctx);
+		blk_mq_bio_to_request(rq, bio);
+		blk_mq_queue_io(data.hctx, data.ctx, rq);
 		blk_mq_run_hw_queue(data.hctx, true);
-	} else
-		blk_mq_put_ctx(data.ctx);
+	}
 
 	return cookie;
 }
-- 
2.9.4


* [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge
  2017-05-23 11:47 [PATCH 0/2] blk-mq: merge bio into sw queue before plugging Ming Lei
  2017-05-23 11:47 ` [PATCH 1/2] " Ming Lei
@ 2017-05-23 11:47 ` Ming Lei
  2017-05-24  9:58   ` Christoph Hellwig
  1 sibling, 1 reply; 7+ messages in thread
From: Ming Lei @ 2017-05-23 11:47 UTC (permalink / raw)
  To: Jens Axboe, linux-block, Christoph Hellwig; +Cc: Ming Lei

Because the per-sw-queue bio merge is basically the same as a
scheduler's .bio_merge(), this patch makes the per-sw-queue bio
merge the default .bio_merge when no scheduler is used or the
I/O scheduler doesn't provide .bio_merge().

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-sched.c | 62 +++++++++++++++++++++++++++++++++++++++++++++----
 block/blk-mq-sched.h |  4 +---
 block/blk-mq.c       | 65 ----------------------------------------------------
 3 files changed, 58 insertions(+), 73 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 1f5b692526ae..c4e2afb9d12d 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -221,19 +221,71 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
 }
 EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge);
 
+/*
+ * Reverse check our software queue for entries that we could potentially
+ * merge with. Currently includes a hand-wavy stop count of 8, to not spend
+ * too much time checking for merges.
+ */
+static bool blk_mq_attempt_merge(struct request_queue *q,
+				 struct blk_mq_ctx *ctx, struct bio *bio)
+{
+	struct request *rq;
+	int checked = 8;
+
+	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
+		bool merged = false;
+
+		if (!checked--)
+			break;
+
+		if (!blk_rq_merge_ok(rq, bio))
+			continue;
+
+		switch (blk_try_merge(rq, bio)) {
+		case ELEVATOR_BACK_MERGE:
+			if (blk_mq_sched_allow_merge(q, rq, bio))
+				merged = bio_attempt_back_merge(q, rq, bio);
+			break;
+		case ELEVATOR_FRONT_MERGE:
+			if (blk_mq_sched_allow_merge(q, rq, bio))
+				merged = bio_attempt_front_merge(q, rq, bio);
+			break;
+		case ELEVATOR_DISCARD_MERGE:
+			merged = bio_attempt_discard_merge(q, rq, bio);
+			break;
+		default:
+			continue;
+		}
+
+		if (merged)
+			ctx->rq_merged++;
+		return merged;
+	}
+
+	return false;
+}
+
 bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 {
 	struct elevator_queue *e = q->elevator;
+	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
+	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
+	bool ret = false;
 
-	if (e->type->ops.mq.bio_merge) {
-		struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
-		struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
-
+	if (e && e->type->ops.mq.bio_merge) {
 		blk_mq_put_ctx(ctx);
 		return e->type->ops.mq.bio_merge(hctx, bio);
 	}
 
-	return false;
+	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
+		/* default per sw-queue merge */
+		spin_lock(&ctx->lock);
+		ret = blk_mq_attempt_merge(q, ctx, bio);
+		spin_unlock(&ctx->lock);
+	}
+
+	blk_mq_put_ctx(ctx);
+	return ret;
 }
 
 bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq)
diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index edafb5383b7b..b87e5be5db8c 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -38,9 +38,7 @@ int blk_mq_sched_init(struct request_queue *q);
 static inline bool
 blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 {
-	struct elevator_queue *e = q->elevator;
-
-	if (!e || blk_queue_nomerges(q) || !bio_mergeable(bio))
+	if (blk_queue_nomerges(q) || !bio_mergeable(bio))
 		return false;
 
 	return __blk_mq_sched_bio_merge(q, bio);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b7ca64ef15e8..9aec650aea2a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -772,50 +772,6 @@ static void blk_mq_timeout_work(struct work_struct *work)
 	blk_queue_exit(q);
 }
 
-/*
- * Reverse check our software queue for entries that we could potentially
- * merge with. Currently includes a hand-wavy stop count of 8, to not spend
- * too much time checking for merges.
- */
-static bool blk_mq_attempt_merge(struct request_queue *q,
-				 struct blk_mq_ctx *ctx, struct bio *bio)
-{
-	struct request *rq;
-	int checked = 8;
-
-	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
-		bool merged = false;
-
-		if (!checked--)
-			break;
-
-		if (!blk_rq_merge_ok(rq, bio))
-			continue;
-
-		switch (blk_try_merge(rq, bio)) {
-		case ELEVATOR_BACK_MERGE:
-			if (blk_mq_sched_allow_merge(q, rq, bio))
-				merged = bio_attempt_back_merge(q, rq, bio);
-			break;
-		case ELEVATOR_FRONT_MERGE:
-			if (blk_mq_sched_allow_merge(q, rq, bio))
-				merged = bio_attempt_front_merge(q, rq, bio);
-			break;
-		case ELEVATOR_DISCARD_MERGE:
-			merged = bio_attempt_discard_merge(q, rq, bio);
-			break;
-		default:
-			continue;
-		}
-
-		if (merged)
-			ctx->rq_merged++;
-		return merged;
-	}
-
-	return false;
-}
-
 struct flush_busy_ctx_data {
 	struct blk_mq_hw_ctx *hctx;
 	struct list_head *list;
@@ -1446,24 +1402,6 @@ static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
 		!blk_queue_nomerges(hctx->queue);
 }
 
-/* attempt to merge bio into current sw queue */
-static inline bool blk_mq_merge_bio(struct request_queue *q, struct bio *bio)
-{
-	bool ret = false;
-	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
-	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
-
-	if (!hctx_allow_merges(hctx) || !bio_mergeable(bio))
-		goto exit;
-
-	spin_lock(&ctx->lock);
-	ret = blk_mq_attempt_merge(q, ctx, bio);
-	spin_unlock(&ctx->lock);
-exit:
-	blk_mq_put_ctx(ctx);
-	return ret;
-}
-
 static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
 				   struct blk_mq_ctx *ctx,
 				   struct request *rq)
@@ -1569,9 +1507,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	if (blk_mq_sched_bio_merge(q, bio))
 		return BLK_QC_T_NONE;
 
-	if (blk_mq_merge_bio(q, bio))
-		return BLK_QC_T_NONE;
-
 	wb_acct = wbt_wait(q->rq_wb, bio, NULL);
 
 	trace_block_getrq(q, bio, bio->bi_opf);
-- 
2.9.4


* Re: [PATCH 1/2] blk-mq: merge bio into sw queue before plugging
  2017-05-23 11:47 ` [PATCH 1/2] " Ming Lei
@ 2017-05-24  9:48   ` Christoph Hellwig
  0 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2017-05-24  9:48 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Tue, May 23, 2017 at 07:47:35PM +0800, Ming Lei wrote:
> Before blk-mq was introduced, I/O was merged into the elevator
> before being put into the plug list, but blk-mq changed that
> order and made merging into the sw queue basically impossible.
> As a result, sequential I/O throughput was observed to degrade
> by about 10%~20% on virtio-blk in the test[1] when mq-deadline
> isn't used.
> 
> This patch moves the per-sw-queue bio merging to before plugging,
> like blk_queue_bio() does, which fixes the performance regression
> in this situation.
> 
> [1]. test script:
> sudo fio --direct=1 --size=128G --bsrange=4k-4k --runtime=40 --numjobs=16 --ioengine=libaio --iodepth=64 --group_reporting=1 --filename=/dev/vdb --name=virtio_blk-test-$RW --rw=$RW --output-format=json
> 
> RW=read or write
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq.c | 53 +++++++++++++++++++++++++++++------------------------
>  1 file changed, 29 insertions(+), 24 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index a69ad122ed66..b7ca64ef15e8 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1446,30 +1446,31 @@ static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
>  		!blk_queue_nomerges(hctx->queue);
>  }
> +/* attempt to merge bio into current sw queue */
> +static inline bool blk_mq_merge_bio(struct request_queue *q, struct bio *bio)
>  {
> +	bool ret = false;
> +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
> +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
>  
> +	if (!hctx_allow_merges(hctx) || !bio_mergeable(bio))
> +		goto exit;
>  
> +	spin_lock(&ctx->lock);
> +	ret = blk_mq_attempt_merge(q, ctx, bio);
> +	spin_unlock(&ctx->lock);
> +exit:
> +	blk_mq_put_ctx(ctx);
> +	return ret;

If I wanted to nitpick, I'd say this would probably be a bit
cleaner without the goto.

But otherwise this looks fine to me:

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge
  2017-05-23 11:47 ` [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge Ming Lei
@ 2017-05-24  9:58   ` Christoph Hellwig
  2017-05-24 10:12     ` Ming Lei
  2017-05-24 10:31     ` Ming Lei
  0 siblings, 2 replies; 7+ messages in thread
From: Christoph Hellwig @ 2017-05-24  9:58 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Christoph Hellwig

On Tue, May 23, 2017 at 07:47:36PM +0800, Ming Lei wrote:
> Because the per-sw-queue bio merge is basically the same as a
> scheduler's .bio_merge(), this patch makes the per-sw-queue bio
> merge the default .bio_merge when no scheduler is used or the
> I/O scheduler doesn't provide .bio_merge().
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq-sched.c | 62 +++++++++++++++++++++++++++++++++++++++++++++----
>  block/blk-mq-sched.h |  4 +---
>  block/blk-mq.c       | 65 ----------------------------------------------------
>  3 files changed, 58 insertions(+), 73 deletions(-)
> 
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 1f5b692526ae..c4e2afb9d12d 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -221,19 +221,71 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
>  }
>  EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge);
>  
> +/*
> + * Reverse check our software queue for entries that we could potentially
> + * merge with. Currently includes a hand-wavy stop count of 8, to not spend
> + * too much time checking for merges.
> + */
> +static bool blk_mq_attempt_merge(struct request_queue *q,
> +				 struct blk_mq_ctx *ctx, struct bio *bio)
> +{
> +	struct request *rq;
> +	int checked = 8;
> +
> +	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
> +		bool merged = false;
> +
> +		if (!checked--)
> +			break;
> +
> +		if (!blk_rq_merge_ok(rq, bio))
> +			continue;
> +
> +		switch (blk_try_merge(rq, bio)) {
> +		case ELEVATOR_BACK_MERGE:
> +			if (blk_mq_sched_allow_merge(q, rq, bio))
> +				merged = bio_attempt_back_merge(q, rq, bio);
> +			break;
> +		case ELEVATOR_FRONT_MERGE:
> +			if (blk_mq_sched_allow_merge(q, rq, bio))
> +				merged = bio_attempt_front_merge(q, rq, bio);
> +			break;
> +		case ELEVATOR_DISCARD_MERGE:
> +			merged = bio_attempt_discard_merge(q, rq, bio);
> +			break;
> +		default:
> +			continue;
> +		}
> +
> +		if (merged)
> +			ctx->rq_merged++;
> +		return merged;
> +	}
> +
> +	return false;
> +}
> +
>  bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
>  {
>  	struct elevator_queue *e = q->elevator;
> +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
> +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
> +	bool ret = false;
>  
> +	if (e && e->type->ops.mq.bio_merge) {
>  		blk_mq_put_ctx(ctx);
>  		return e->type->ops.mq.bio_merge(hctx, bio);
>  	}
>  
> +	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
> +		/* default per sw-queue merge */
> +		spin_lock(&ctx->lock);
> +		ret = blk_mq_attempt_merge(q, ctx, bio);
> +		spin_unlock(&ctx->lock);
> +	}
> +
> +	blk_mq_put_ctx(ctx);
> +	return ret;

I'd rather move __blk_mq_sched_bio_merge/blk_mq_sched_bio_merge into
blk-mq.c (and drop the sched from the names) than move
blk_mq_attempt_merge out.

But apart from that, this looks fine to me.


* Re: [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge
  2017-05-24  9:58   ` Christoph Hellwig
@ 2017-05-24 10:12     ` Ming Lei
  2017-05-24 10:31     ` Ming Lei
  1 sibling, 0 replies; 7+ messages in thread
From: Ming Lei @ 2017-05-24 10:12 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Wed, May 24, 2017 at 02:58:43AM -0700, Christoph Hellwig wrote:
> On Tue, May 23, 2017 at 07:47:36PM +0800, Ming Lei wrote:
> > Because the per-sw-queue bio merge is basically the same as a
> > scheduler's .bio_merge(), this patch makes the per-sw-queue bio
> > merge the default .bio_merge when no scheduler is used or the
> > I/O scheduler doesn't provide .bio_merge().
> > 
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  block/blk-mq-sched.c | 62 +++++++++++++++++++++++++++++++++++++++++++++----
> >  block/blk-mq-sched.h |  4 +---
> >  block/blk-mq.c       | 65 ----------------------------------------------------
> >  3 files changed, 58 insertions(+), 73 deletions(-)
> > 
> > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > index 1f5b692526ae..c4e2afb9d12d 100644
> > --- a/block/blk-mq-sched.c
> > +++ b/block/blk-mq-sched.c
> > @@ -221,19 +221,71 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
> >  }
> >  EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge);
> >  
> > +/*
> > + * Reverse check our software queue for entries that we could potentially
> > + * merge with. Currently includes a hand-wavy stop count of 8, to not spend
> > + * too much time checking for merges.
> > + */
> > +static bool blk_mq_attempt_merge(struct request_queue *q,
> > +				 struct blk_mq_ctx *ctx, struct bio *bio)
> > +{
> > +	struct request *rq;
> > +	int checked = 8;
> > +
> > +	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
> > +		bool merged = false;
> > +
> > +		if (!checked--)
> > +			break;
> > +
> > +		if (!blk_rq_merge_ok(rq, bio))
> > +			continue;
> > +
> > +		switch (blk_try_merge(rq, bio)) {
> > +		case ELEVATOR_BACK_MERGE:
> > +			if (blk_mq_sched_allow_merge(q, rq, bio))
> > +				merged = bio_attempt_back_merge(q, rq, bio);
> > +			break;
> > +		case ELEVATOR_FRONT_MERGE:
> > +			if (blk_mq_sched_allow_merge(q, rq, bio))
> > +				merged = bio_attempt_front_merge(q, rq, bio);
> > +			break;
> > +		case ELEVATOR_DISCARD_MERGE:
> > +			merged = bio_attempt_discard_merge(q, rq, bio);
> > +			break;
> > +		default:
> > +			continue;
> > +		}
> > +
> > +		if (merged)
> > +			ctx->rq_merged++;
> > +		return merged;
> > +	}
> > +
> > +	return false;
> > +}
> > +
> >  bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
> >  {
> >  	struct elevator_queue *e = q->elevator;
> > +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
> > +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
> > +	bool ret = false;
> >  
> > +	if (e && e->type->ops.mq.bio_merge) {
> >  		blk_mq_put_ctx(ctx);
> >  		return e->type->ops.mq.bio_merge(hctx, bio);
> >  	}
> >  
> > +	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
> > +		/* default per sw-queue merge */
> > +		spin_lock(&ctx->lock);
> > +		ret = blk_mq_attempt_merge(q, ctx, bio);
> > +		spin_unlock(&ctx->lock);
> > +	}
> > +
> > +	blk_mq_put_ctx(ctx);
> > +	return ret;
> 
> I'd rather move __blk_mq_sched_bio_merge/blk_mq_sched_bio_merge into
> blk-mq.c (and drop the sched from the names) than move
> blk_mq_attempt_merge out.

OK, will do that in V3.

Thanks,
Ming


* Re: [PATCH 2/2] blk-mq: make per-sw-queue bio merge as default .bio_merge
  2017-05-24  9:58   ` Christoph Hellwig
  2017-05-24 10:12     ` Ming Lei
@ 2017-05-24 10:31     ` Ming Lei
  1 sibling, 0 replies; 7+ messages in thread
From: Ming Lei @ 2017-05-24 10:31 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Wed, May 24, 2017 at 02:58:43AM -0700, Christoph Hellwig wrote:
> On Tue, May 23, 2017 at 07:47:36PM +0800, Ming Lei wrote:
> > Because the per-sw-queue bio merge is basically the same as a
> > scheduler's .bio_merge(), this patch makes the per-sw-queue bio
> > merge the default .bio_merge when no scheduler is used or the
> > I/O scheduler doesn't provide .bio_merge().
> > 
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  block/blk-mq-sched.c | 62 +++++++++++++++++++++++++++++++++++++++++++++----
> >  block/blk-mq-sched.h |  4 +---
> >  block/blk-mq.c       | 65 ----------------------------------------------------
> >  3 files changed, 58 insertions(+), 73 deletions(-)
> > 
> > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > index 1f5b692526ae..c4e2afb9d12d 100644
> > --- a/block/blk-mq-sched.c
> > +++ b/block/blk-mq-sched.c
> > @@ -221,19 +221,71 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
> >  }
> >  EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge);
> >  
> > +/*
> > + * Reverse check our software queue for entries that we could potentially
> > + * merge with. Currently includes a hand-wavy stop count of 8, to not spend
> > + * too much time checking for merges.
> > + */
> > +static bool blk_mq_attempt_merge(struct request_queue *q,
> > +				 struct blk_mq_ctx *ctx, struct bio *bio)
> > +{
> > +	struct request *rq;
> > +	int checked = 8;
> > +
> > +	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
> > +		bool merged = false;
> > +
> > +		if (!checked--)
> > +			break;
> > +
> > +		if (!blk_rq_merge_ok(rq, bio))
> > +			continue;
> > +
> > +		switch (blk_try_merge(rq, bio)) {
> > +		case ELEVATOR_BACK_MERGE:
> > +			if (blk_mq_sched_allow_merge(q, rq, bio))
> > +				merged = bio_attempt_back_merge(q, rq, bio);
> > +			break;
> > +		case ELEVATOR_FRONT_MERGE:
> > +			if (blk_mq_sched_allow_merge(q, rq, bio))
> > +				merged = bio_attempt_front_merge(q, rq, bio);
> > +			break;
> > +		case ELEVATOR_DISCARD_MERGE:
> > +			merged = bio_attempt_discard_merge(q, rq, bio);
> > +			break;
> > +		default:
> > +			continue;
> > +		}
> > +
> > +		if (merged)
> > +			ctx->rq_merged++;
> > +		return merged;
> > +	}
> > +
> > +	return false;
> > +}
> > +
> >  bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
> >  {
> >  	struct elevator_queue *e = q->elevator;
> > +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
> > +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
> > +	bool ret = false;
> >  
> > +	if (e && e->type->ops.mq.bio_merge) {
> >  		blk_mq_put_ctx(ctx);
> >  		return e->type->ops.mq.bio_merge(hctx, bio);
> >  	}
> >  
> > +	if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
> > +		/* default per sw-queue merge */
> > +		spin_lock(&ctx->lock);
> > +		ret = blk_mq_attempt_merge(q, ctx, bio);
> > +		spin_unlock(&ctx->lock);
> > +	}
> > +
> > +	blk_mq_put_ctx(ctx);
> > +	return ret;
> 
> I'd rather move __blk_mq_sched_bio_merge/blk_mq_sched_bio_merge into
> blk-mq.c (and drop the sched from the names) than move
> blk_mq_attempt_merge out.

Looking at the code further, it may not be a good idea to move
__blk_mq_sched_bio_merge() into blk-mq.c, because the
'e && e->type->ops.mq.*' checks have never been put into blk-mq.c.

So how about not moving blk_mq_attempt_merge() and just defining it
as global?

Thanks,
Ming
