Linux block layer
* cleanup blk_mq_run_hw_queue v2
@ 2023-04-13  6:06 Christoph Hellwig
  2023-04-13  6:06 ` [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests Christoph Hellwig
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Christoph Hellwig @ 2023-04-13  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Damien Le Moal

Hi Jens,

this series cleans up blk_mq_run_hw_queue and related functions.

Changes since v1:
 - drop pointless blk_mq_hctx_stopped calls
 - additional cleanups

Diffstat:
 blk-mq-sched.c |   31 ++++++++++------------
 blk-mq.c       |   79 ++++++++++++++++-----------------------------------------
 2 files changed, 37 insertions(+), 73 deletions(-)

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests
  2023-04-13  6:06 cleanup blk_mq_run_hw_queue v2 Christoph Hellwig
@ 2023-04-13  6:06 ` Christoph Hellwig
  2023-04-13  6:23   ` Damien Le Moal
  2023-04-13 12:55   ` Jens Axboe
  2023-04-13  6:06 ` [PATCH 2/5] blk-mq: remove the blk_mq_hctx_stopped check in blk_mq_run_work_fn Christoph Hellwig
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 12+ messages in thread
From: Christoph Hellwig @ 2023-04-13  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Damien Le Moal

__blk_mq_sched_dispatch_requests currently has duplicated logic
for the cases where requests are on the hctx dispatch list or not.
Merge the two with a new need_dispatch variable and remove a few
pointless local variables.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-sched.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 06b312c691143f..f3257e1607a00c 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -271,9 +271,7 @@ static int blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
 
 static int __blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 {
-	struct request_queue *q = hctx->queue;
-	const bool has_sched = q->elevator;
-	int ret = 0;
+	bool need_dispatch = false;
 	LIST_HEAD(rq_list);
 
 	/*
@@ -302,23 +300,22 @@ static int __blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
 	 */
 	if (!list_empty(&rq_list)) {
 		blk_mq_sched_mark_restart_hctx(hctx);
-		if (blk_mq_dispatch_rq_list(hctx, &rq_list, 0)) {
-			if (has_sched)
-				ret = blk_mq_do_dispatch_sched(hctx);
-			else
-				ret = blk_mq_do_dispatch_ctx(hctx);
-		}
-	} else if (has_sched) {
-		ret = blk_mq_do_dispatch_sched(hctx);
-	} else if (hctx->dispatch_busy) {
-		/* dequeue request one by one from sw queue if queue is busy */
-		ret = blk_mq_do_dispatch_ctx(hctx);
+		if (!blk_mq_dispatch_rq_list(hctx, &rq_list, 0))
+			return 0;
+		need_dispatch = true;
 	} else {
-		blk_mq_flush_busy_ctxs(hctx, &rq_list);
-		blk_mq_dispatch_rq_list(hctx, &rq_list, 0);
+		need_dispatch = hctx->dispatch_busy;
 	}
 
-	return ret;
+	if (hctx->queue->elevator)
+		return blk_mq_do_dispatch_sched(hctx);
+
+	/* dequeue request one by one from sw queue if queue is busy */
+	if (need_dispatch)
+		return blk_mq_do_dispatch_ctx(hctx);
+	blk_mq_flush_busy_ctxs(hctx, &rq_list);
+	blk_mq_dispatch_rq_list(hctx, &rq_list, 0);
+	return 0;
 }
 
 void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
-- 
2.39.2
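For readers following the refactor outside the kernel tree, the branch structure this patch converges on can be modeled as a small userspace decision function. All names below are illustrative stand-ins, not kernel symbols, and the real dispatch helpers take an hctx and have side effects this sketch ignores.

```c
#include <stdbool.h>

/*
 * Userspace model of the control flow __blk_mq_sched_dispatch_requests
 * ends up with after this patch.  The enum names which dispatch strategy
 * would run; the bools stand in for the hctx state the real function
 * inspects.  Hypothetical names, not kernel API.
 */
enum dispatch_path {
	PATH_NONE,	/* list dispatch made no progress, return 0 */
	PATH_SCHED,	/* blk_mq_do_dispatch_sched() */
	PATH_CTX,	/* blk_mq_do_dispatch_ctx() */
	PATH_FLUSH,	/* flush busy ctxs, then plain list dispatch */
};

static enum dispatch_path pick_path(bool dispatch_list_nonempty,
				    bool list_dispatch_progress,
				    bool has_elevator,
				    bool dispatch_busy)
{
	bool need_dispatch = false;

	if (dispatch_list_nonempty) {
		/* the real code also marks the hctx for restart here */
		if (!list_dispatch_progress)
			return PATH_NONE;
		need_dispatch = true;
	} else {
		need_dispatch = dispatch_busy;
	}

	if (has_elevator)
		return PATH_SCHED;
	if (need_dispatch)
		return PATH_CTX;
	return PATH_FLUSH;
}
```

Compared to the pre-patch version, the elevator and dispatch_busy branches now fall through to a single shared tail instead of being duplicated for each dispatch-list state.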



* [PATCH 2/5] blk-mq: remove the blk_mq_hctx_stopped check in blk_mq_run_work_fn
  2023-04-13  6:06 cleanup blk_mq_run_hw_queue v2 Christoph Hellwig
  2023-04-13  6:06 ` [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests Christoph Hellwig
@ 2023-04-13  6:06 ` Christoph Hellwig
  2023-04-13  6:24   ` Damien Le Moal
  2023-04-13  6:06 ` [PATCH 3/5] blk-mq: move the blk_mq_hctx_stopped check in __blk_mq_delay_run_hw_queue Christoph Hellwig
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-04-13  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Damien Le Moal

blk_mq_hctx_stopped is already checked in blk_mq_sched_dispatch_requests
under blk_mq_run_dispatch_ops() protection, so remove the duplicate check.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 52f8e0099c7f4b..5289a34e68b937 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2430,15 +2430,8 @@ EXPORT_SYMBOL(blk_mq_start_stopped_hw_queues);
 
 static void blk_mq_run_work_fn(struct work_struct *work)
 {
-	struct blk_mq_hw_ctx *hctx;
-
-	hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work);
-
-	/*
-	 * If we are stopped, don't run the queue.
-	 */
-	if (blk_mq_hctx_stopped(hctx))
-		return;
+	struct blk_mq_hw_ctx *hctx =
+		container_of(work, struct blk_mq_hw_ctx, run_work.work);
 
 	__blk_mq_run_hw_queue(hctx);
 }
-- 
2.39.2



* [PATCH 3/5] blk-mq: move the blk_mq_hctx_stopped check in __blk_mq_delay_run_hw_queue
  2023-04-13  6:06 cleanup blk_mq_run_hw_queue v2 Christoph Hellwig
  2023-04-13  6:06 ` [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests Christoph Hellwig
  2023-04-13  6:06 ` [PATCH 2/5] blk-mq: remove the blk_mq_hctx_stopped check in blk_mq_run_work_fn Christoph Hellwig
@ 2023-04-13  6:06 ` Christoph Hellwig
  2023-04-13  6:26   ` Damien Le Moal
  2023-04-13  6:06 ` [PATCH 4/5] blk-mq: move the !async handling out of __blk_mq_delay_run_hw_queue Christoph Hellwig
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-04-13  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Damien Le Moal

For the in-context dispatch, blk_mq_hctx_stopped is already checked in
blk_mq_sched_dispatch_requests under blk_mq_run_dispatch_ops() protection.
For the async dispatch case, having a check before scheduling the work
still makes sense to avoid needless workqueue scheduling, so just keep
it for that case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5289a34e68b937..e0c914651f7946 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2212,9 +2212,6 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 					unsigned long msecs)
 {
-	if (unlikely(blk_mq_hctx_stopped(hctx)))
-		return;
-
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
 		if (cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
 			__blk_mq_run_hw_queue(hctx);
@@ -2222,6 +2219,8 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 		}
 	}
 
+	if (unlikely(blk_mq_hctx_stopped(hctx)))
+		return;
 	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
 				    msecs_to_jiffies(msecs));
 }
-- 
2.39.2
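As a sanity check on the reordering, the decision this patch leaves behind can be sketched as a pure userspace function (illustrative names only): the inline path no longer consults the stopped flag, while the path that schedules work still does.

```c
#include <stdbool.h>

/*
 * Which action __blk_mq_delay_run_hw_queue takes after this patch.
 * Illustrative userspace model, not kernel code.
 */
enum run_action {
	RUN_INLINE,	/* __blk_mq_run_hw_queue() directly */
	RUN_SKIP,	/* hctx stopped, do nothing */
	RUN_QUEUE_WORK,	/* kblockd_mod_delayed_work_on() */
};

static enum run_action delay_run_action(bool async, bool hctx_blocking,
					bool on_mapped_cpu, bool stopped)
{
	/*
	 * In-context dispatch: no stopped check here; the dispatch core
	 * re-checks it under blk_mq_run_dispatch_ops() protection.
	 */
	if (!async && !hctx_blocking && on_mapped_cpu)
		return RUN_INLINE;

	/* Async path: check first to avoid needless workqueue scheduling. */
	if (stopped)
		return RUN_SKIP;
	return RUN_QUEUE_WORK;
}
```

Note that a stopped hctx can still enter the inline path here; correctness relies on the later re-check, which is exactly the point the commit message makes.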



* [PATCH 4/5] blk-mq: move the !async handling out of __blk_mq_delay_run_hw_queue
  2023-04-13  6:06 cleanup blk_mq_run_hw_queue v2 Christoph Hellwig
                   ` (2 preceding siblings ...)
  2023-04-13  6:06 ` [PATCH 3/5] blk-mq: move the blk_mq_hctx_stopped check in __blk_mq_delay_run_hw_queue Christoph Hellwig
@ 2023-04-13  6:06 ` Christoph Hellwig
  2023-04-13  6:06 ` [PATCH 5/5] blk-mq: remove __blk_mq_run_hw_queue Christoph Hellwig
  2023-04-13 13:11 ` cleanup blk_mq_run_hw_queue v2 Jens Axboe
  5 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2023-04-13  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Damien Le Moal

Only blk_mq_run_hw_queue can call __blk_mq_delay_run_hw_queue with
async=false, so move the handling there.

With this __blk_mq_delay_run_hw_queue can be merged into
blk_mq_delay_run_hw_queue.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq.c | 40 +++++++++++++---------------------------
 1 file changed, 13 insertions(+), 27 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index e0c914651f7946..6eef65ac4996bf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2201,41 +2201,19 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 }
 
 /**
- * __blk_mq_delay_run_hw_queue - Run (or schedule to run) a hardware queue.
+ * blk_mq_delay_run_hw_queue - Run a hardware queue asynchronously.
  * @hctx: Pointer to the hardware queue to run.
- * @async: If we want to run the queue asynchronously.
  * @msecs: Milliseconds of delay to wait before running the queue.
  *
- * If !@async, try to run the queue now. Else, run the queue asynchronously and
- * with a delay of @msecs.
+ * Run a hardware queue asynchronously with a delay of @msecs.
  */
-static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
-					unsigned long msecs)
+void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
 {
-	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
-		if (cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
-			__blk_mq_run_hw_queue(hctx);
-			return;
-		}
-	}
-
 	if (unlikely(blk_mq_hctx_stopped(hctx)))
 		return;
 	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
 				    msecs_to_jiffies(msecs));
 }
-
-/**
- * blk_mq_delay_run_hw_queue - Run a hardware queue asynchronously.
- * @hctx: Pointer to the hardware queue to run.
- * @msecs: Milliseconds of delay to wait before running the queue.
- *
- * Run a hardware queue asynchronously with a delay of @msecs.
- */
-void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
-{
-	__blk_mq_delay_run_hw_queue(hctx, true, msecs);
-}
 EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
 
 /**
@@ -2263,8 +2241,16 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 		need_run = !blk_queue_quiesced(hctx->queue) &&
 		blk_mq_hctx_has_pending(hctx));
 
-	if (need_run)
-		__blk_mq_delay_run_hw_queue(hctx, async, 0);
+	if (!need_run)
+		return;
+
+	if (async || (hctx->flags & BLK_MQ_F_BLOCKING) ||
+	    !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
+		blk_mq_delay_run_hw_queue(hctx, 0);
+		return;
+	}
+
+	__blk_mq_run_hw_queue(hctx);
 }
 EXPORT_SYMBOL(blk_mq_run_hw_queue);
 
-- 
2.39.2
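The end state of blk_mq_run_hw_queue after this patch can likewise be condensed into a userspace model. Names are hypothetical, and the real need_run test runs under blk_mq_run_dispatch_ops (SRCU/RCU protection), which this sketch deliberately ignores.

```c
#include <stdbool.h>

/*
 * Which path blk_mq_run_hw_queue takes after this patch.
 * Illustrative model only, not kernel API.
 */
enum hwq_action {
	HWQ_NOOP,	/* quiesced queue or nothing pending */
	HWQ_DELAY_WORK,	/* blk_mq_delay_run_hw_queue(hctx, 0) */
	HWQ_RUN_INLINE,	/* direct dispatch in the calling context */
};

static enum hwq_action run_hw_queue_action(bool quiesced, bool has_pending,
					   bool async, bool hctx_blocking,
					   bool on_mapped_cpu)
{
	bool need_run = !quiesced && has_pending;

	if (!need_run)
		return HWQ_NOOP;

	/* Everything that cannot run in this context goes async. */
	if (async || hctx_blocking || !on_mapped_cpu)
		return HWQ_DELAY_WORK;

	return HWQ_RUN_INLINE;
}
```

The async=false fast path is now the only caller-visible difference from blk_mq_delay_run_hw_queue, which is what allows __blk_mq_delay_run_hw_queue to be folded away.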



* [PATCH 5/5] blk-mq: remove __blk_mq_run_hw_queue
  2023-04-13  6:06 cleanup blk_mq_run_hw_queue v2 Christoph Hellwig
                   ` (3 preceding siblings ...)
  2023-04-13  6:06 ` [PATCH 4/5] blk-mq: move the !async handling out of __blk_mq_delay_run_hw_queue Christoph Hellwig
@ 2023-04-13  6:06 ` Christoph Hellwig
  2023-04-13  6:27   ` Damien Le Moal
  2023-04-13 13:11 ` cleanup blk_mq_run_hw_queue v2 Jens Axboe
  5 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2023-04-13  6:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Damien Le Moal

__blk_mq_run_hw_queue just contains a WARN_ON_ONCE for calls from
interrupt context and a blk_mq_run_dispatch_ops-protected call to
blk_mq_sched_dispatch_requests.  Open code the call to
blk_mq_sched_dispatch_requests in both callers, and move the WARN_ON_ONCE
to blk_mq_run_hw_queue where it can be extended to all !async calls,
while the other call is from workqueue context and thus obviously does
not need the assert.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 29 +++++++++--------------------
 1 file changed, 9 insertions(+), 20 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6eef65ac4996bf..9e683f511f8ac0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2127,24 +2127,6 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *hctx, struct list_head *list,
 	return true;
 }
 
-/**
- * __blk_mq_run_hw_queue - Run a hardware queue.
- * @hctx: Pointer to the hardware queue to run.
- *
- * Send pending requests to the hardware.
- */
-static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
-{
-	/*
-	 * We can't run the queue inline with ints disabled. Ensure that
-	 * we catch bad users of this early.
-	 */
-	WARN_ON_ONCE(in_interrupt());
-
-	blk_mq_run_dispatch_ops(hctx->queue,
-			blk_mq_sched_dispatch_requests(hctx));
-}
-
 static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
 {
 	int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
@@ -2229,6 +2211,11 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 {
 	bool need_run;
 
+	/*
+	 * We can't run the queue inline with interrupts disabled.
+	 */
+	WARN_ON_ONCE(!async && in_interrupt());
+
 	/*
 	 * When queue is quiesced, we may be switching io scheduler, or
 	 * updating nr_hw_queues, or other things, and we can't run queue
@@ -2250,7 +2237,8 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
 		return;
 	}
 
-	__blk_mq_run_hw_queue(hctx);
+	blk_mq_run_dispatch_ops(hctx->queue,
+				blk_mq_sched_dispatch_requests(hctx));
 }
 EXPORT_SYMBOL(blk_mq_run_hw_queue);
 
@@ -2418,7 +2406,8 @@ static void blk_mq_run_work_fn(struct work_struct *work)
 	struct blk_mq_hw_ctx *hctx =
 		container_of(work, struct blk_mq_hw_ctx, run_work.work);
 
-	__blk_mq_run_hw_queue(hctx);
+	blk_mq_run_dispatch_ops(hctx->queue,
+				blk_mq_sched_dispatch_requests(hctx));
 }
 
 static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
-- 
2.39.2



* Re: [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests
  2023-04-13  6:06 ` [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests Christoph Hellwig
@ 2023-04-13  6:23   ` Damien Le Moal
  2023-04-13 12:55   ` Jens Axboe
  1 sibling, 0 replies; 12+ messages in thread
From: Damien Le Moal @ 2023-04-13  6:23 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block

On 4/13/23 15:06, Christoph Hellwig wrote:
> __blk_mq_sched_dispatch_requests currently has duplicated logic
> for the cases where requests are on the hctx dispatch list or not.
> Merge the two with a new need_dispatch variable and remove a few
> pointless local variables.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>




* Re: [PATCH 2/5] blk-mq: remove the blk_mq_hctx_stopped check in blk_mq_run_work_fn
  2023-04-13  6:06 ` [PATCH 2/5] blk-mq: remove the blk_mq_hctx_stopped check in blk_mq_run_work_fn Christoph Hellwig
@ 2023-04-13  6:24   ` Damien Le Moal
  0 siblings, 0 replies; 12+ messages in thread
From: Damien Le Moal @ 2023-04-13  6:24 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block

On 4/13/23 15:06, Christoph Hellwig wrote:
> blk_mq_hctx_stopped is alredy checked in blk_mq_sched_dispatch_requests
> under blk_mq_run_dispatch_ops() protetion, so remove the duplicate check.

s/protetion/protection

> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

> ---
>  block/blk-mq.c | 11 ++---------
>  1 file changed, 2 insertions(+), 9 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 52f8e0099c7f4b..5289a34e68b937 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2430,15 +2430,8 @@ EXPORT_SYMBOL(blk_mq_start_stopped_hw_queues);
>  
>  static void blk_mq_run_work_fn(struct work_struct *work)
>  {
> -	struct blk_mq_hw_ctx *hctx;
> -
> -	hctx = container_of(work, struct blk_mq_hw_ctx, run_work.work);
> -
> -	/*
> -	 * If we are stopped, don't run the queue.
> -	 */
> -	if (blk_mq_hctx_stopped(hctx))
> -		return;
> +	struct blk_mq_hw_ctx *hctx =
> +		container_of(work, struct blk_mq_hw_ctx, run_work.work);
>  
>  	__blk_mq_run_hw_queue(hctx);
>  }



* Re: [PATCH 3/5] blk-mq: move the blk_mq_hctx_stopped check in __blk_mq_delay_run_hw_queue
  2023-04-13  6:06 ` [PATCH 3/5] blk-mq: move the blk_mq_hctx_stopped check in __blk_mq_delay_run_hw_queue Christoph Hellwig
@ 2023-04-13  6:26   ` Damien Le Moal
  0 siblings, 0 replies; 12+ messages in thread
From: Damien Le Moal @ 2023-04-13  6:26 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block

On 4/13/23 15:06, Christoph Hellwig wrote:
> For the in-context dispatch, blk_mq_hctx_stopped is alredy checked in
> blk_mq_sched_dispatch_requests under blk_mq_run_dispatch_ops() protetion.
> For the async dispatch case having a check before scheduling the work
> still makes sense to avoid needless workqueue scheduling, so just keep
> it for that case.

s/alredy/already
s/protetion/protection

Otherwise looks good.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>




* Re: [PATCH 5/5] blk-mq: remove __blk_mq_run_hw_queue
  2023-04-13  6:06 ` [PATCH 5/5] blk-mq: remove __blk_mq_run_hw_queue Christoph Hellwig
@ 2023-04-13  6:27   ` Damien Le Moal
  0 siblings, 0 replies; 12+ messages in thread
From: Damien Le Moal @ 2023-04-13  6:27 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe; +Cc: linux-block

On 4/13/23 15:06, Christoph Hellwig wrote:
> __blk_mq_run_hw_queue just contains a WARN_ON_ONCE for calls from
> interrupt context and a blk_mq_run_dispatch_ops-protected call to
> blk_mq_sched_dispatch_requests.  Open code the call to
> blk_mq_sched_dispatch_requests in both callers, and move the WARN_ON_ONCE
> to blk_mq_run_hw_queue where it can be extented to all !async calls,

s/extented/extended

> while the other call is from workqueue context and thus obviously does
> not need the assert.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>




* Re: [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests
  2023-04-13  6:06 ` [PATCH 1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests Christoph Hellwig
  2023-04-13  6:23   ` Damien Le Moal
@ 2023-04-13 12:55   ` Jens Axboe
  1 sibling, 0 replies; 12+ messages in thread
From: Jens Axboe @ 2023-04-13 12:55 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, Damien Le Moal

On 4/13/23 12:06 AM, Christoph Hellwig wrote:
> __blk_mq_sched_dispatch_requests currently has duplicated logic
> for the cases where requests are on the hctx dispatch list or not.
> Merge the two with a new need_dispatch variable and remove a few
> pointless local variables.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/blk-mq-sched.c | 31 ++++++++++++++-----------------
>  1 file changed, 14 insertions(+), 17 deletions(-)
> 
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 06b312c691143f..f3257e1607a00c 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -271,9 +271,7 @@ static int blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx)
>  
>  static int __blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
>  {
> -	struct request_queue *q = hctx->queue;
> -	const bool has_sched = q->elevator;
> -	int ret = 0;
> +	bool need_dispatch = false;
>  	LIST_HEAD(rq_list);
>  
>  	/*
> @@ -302,23 +300,22 @@ static int __blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx)
>  	 */
>  	if (!list_empty(&rq_list)) {
>  		blk_mq_sched_mark_restart_hctx(hctx);
> -		if (blk_mq_dispatch_rq_list(hctx, &rq_list, 0)) {
> -			if (has_sched)
> -				ret = blk_mq_do_dispatch_sched(hctx);
> -			else
> -				ret = blk_mq_do_dispatch_ctx(hctx);
> -		}
> -	} else if (has_sched) {
> -		ret = blk_mq_do_dispatch_sched(hctx);
> -	} else if (hctx->dispatch_busy) {
> -		/* dequeue request one by one from sw queue if queue is busy */
> -		ret = blk_mq_do_dispatch_ctx(hctx);
> +		if (!blk_mq_dispatch_rq_list(hctx, &rq_list, 0)) 

There's some trailing whitespace here. Patch 5 also doesn't seem to
apply, but I'll see what that's about and comment there if there's a
concern.

-- 
Jens Axboe



* Re: cleanup blk_mq_run_hw_queue v2
  2023-04-13  6:06 cleanup blk_mq_run_hw_queue v2 Christoph Hellwig
                   ` (4 preceding siblings ...)
  2023-04-13  6:06 ` [PATCH 5/5] blk-mq: remove __blk_mq_run_hw_queue Christoph Hellwig
@ 2023-04-13 13:11 ` Jens Axboe
  5 siblings, 0 replies; 12+ messages in thread
From: Jens Axboe @ 2023-04-13 13:11 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-block, Damien Le Moal


On Thu, 13 Apr 2023 08:06:46 +0200, Christoph Hellwig wrote:
> this series cleans up blk_mq_run_hw_queue and related functions.
> 
> Changes since v1:
>  - drop pointless blk_mq_hctx_stopped calls
>  - additional cleanups
> 
> Diffstat:
>  blk-mq-sched.c |   31 ++++++++++------------
>  blk-mq.c       |   79 ++++++++++++++++-----------------------------------------
>  2 files changed, 37 insertions(+), 73 deletions(-)
> 
> [...]

Applied, thanks!

[1/5] blk-mq: cleanup __blk_mq_sched_dispatch_requests
      commit: 89ea5ceb53d14f52ecbad8393be47f382c47c37d
[2/5] blk-mq: remove the blk_mq_hctx_stopped check in blk_mq_run_work_fn
      commit: c20a1a2c1a9f5b1081121cd18be444e7610b0c6f
[3/5] blk-mq: move the blk_mq_hctx_stopped check in __blk_mq_delay_run_hw_queue
      commit: cd735e11130d4c84a073e1056aa019ca0f3305f9
[4/5] blk-mq: move the !async handling out of __blk_mq_delay_run_hw_queue
      commit: 1aa8d875b523d61347a6887e4a4ab65a6d799d40
[5/5] blk-mq: remove __blk_mq_run_hw_queue
      commit: 4d5bba5bee0aa002523125e51789e95d47794a06

Best regards,
-- 
Jens Axboe




