Linux block layer
* [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support
@ 2025-10-13 19:28 Bart Van Assche
  2025-10-13 19:28 ` [PATCH 1/2] block/mq-deadline: Introduce dd_start_request() Bart Van Assche
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Bart Van Assche @ 2025-10-13 19:28 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche

Hi Jens,

Commit c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
changed the behavior of the request flag BLK_MQ_INSERT_AT_HEAD from
dispatching a request before all other requests to dispatching it only
before other requests with the same I/O priority. This is not correct,
since BLK_MQ_INSERT_AT_HEAD is used when requeuing requests and also when
a flush request is inserted, and both types of requests should be
dispatched as soon as possible. Hence this patch series, which makes the
mq-deadline I/O scheduler again ignore the I/O priority of
BLK_MQ_INSERT_AT_HEAD requests.

Please consider this patch series for the next merge window.

Thanks,

Bart.

Bart Van Assche (2):
  block/mq-deadline: Introduce dd_start_request()
  block/mq-deadline: Switch back to a single dispatch list

 block/mq-deadline.c | 129 +++++++++++++++++++++-----------------------
 1 file changed, 61 insertions(+), 68 deletions(-)


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/2] block/mq-deadline: Introduce dd_start_request()
  2025-10-13 19:28 [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support Bart Van Assche
@ 2025-10-13 19:28 ` Bart Van Assche
  2025-10-14  4:16   ` Damien Le Moal
  2025-10-13 19:28 ` [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list Bart Van Assche
  2025-10-14 13:12 ` [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support Jens Axboe
  2 siblings, 1 reply; 9+ messages in thread
From: Bart Van Assche @ 2025-10-13 19:28 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Damien Le Moal,
	Yu Kuai, chengkaitao

Prepare for adding a second caller of this function. No functionality
has been changed.

Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Yu Kuai <yukuai@kernel.org>
Cc: chengkaitao <chengkaitao@kylinos.cn>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/mq-deadline.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 3e741d33142d..647a45f6d935 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -306,6 +306,19 @@ static bool started_after(struct deadline_data *dd, struct request *rq,
 	return time_after(start_time, latest_start);
 }
 
+static struct request *dd_start_request(struct deadline_data *dd,
+					enum dd_data_dir data_dir,
+					struct request *rq)
+{
+	u8 ioprio_class = dd_rq_ioclass(rq);
+	enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
+
+	dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
+	dd->per_prio[prio].stats.dispatched++;
+	rq->rq_flags |= RQF_STARTED;
+	return rq;
+}
+
 /*
  * deadline_dispatch_requests selects the best request according to
  * read/write expire, fifo_batch, etc and with a start time <= @latest_start.
@@ -316,8 +329,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 {
 	struct request *rq, *next_rq;
 	enum dd_data_dir data_dir;
-	enum dd_prio prio;
-	u8 ioprio_class;
 
 	lockdep_assert_held(&dd->lock);
 
@@ -411,12 +422,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	dd->batching++;
 	deadline_move_request(dd, per_prio, rq);
 done:
-	ioprio_class = dd_rq_ioclass(rq);
-	prio = ioprio_class_to_prio[ioprio_class];
-	dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
-	dd->per_prio[prio].stats.dispatched++;
-	rq->rq_flags |= RQF_STARTED;
-	return rq;
+	return dd_start_request(dd, data_dir, rq);
 }
 
 /*


* [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list
  2025-10-13 19:28 [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support Bart Van Assche
  2025-10-13 19:28 ` [PATCH 1/2] block/mq-deadline: Introduce dd_start_request() Bart Van Assche
@ 2025-10-13 19:28 ` Bart Van Assche
  2025-10-14  4:19   ` Damien Le Moal
  2025-10-14  6:10   ` Yu Kuai
  2025-10-14 13:12 ` [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support Jens Axboe
  2 siblings, 2 replies; 9+ messages in thread
From: Bart Van Assche @ 2025-10-13 19:28 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Damien Le Moal,
	Yu Kuai, chengkaitao

Commit c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
changed the behavior of the request flag BLK_MQ_INSERT_AT_HEAD from
dispatching a request before all other requests to dispatching it only
before other requests with the same I/O priority. This is not correct,
since BLK_MQ_INSERT_AT_HEAD is used when requeuing requests and also when
a flush request is inserted, and both types of requests should be
dispatched as soon as possible. Hence, make the mq-deadline I/O scheduler
again ignore the I/O priority of BLK_MQ_INSERT_AT_HEAD requests.

Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Yu Kuai <yukuai@kernel.org>
Reported-by: chengkaitao <chengkaitao@kylinos.cn>
Closes: https://lore.kernel.org/linux-block/20251009155253.14611-1-pilgrimtao@gmail.com/
Fixes: c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/mq-deadline.c | 107 +++++++++++++++++++-------------------------
 1 file changed, 47 insertions(+), 60 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 647a45f6d935..3e3719093aec 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -71,7 +71,6 @@ struct io_stats_per_prio {
  * present on both sort_list[] and fifo_list[].
  */
 struct dd_per_prio {
-	struct list_head dispatch;
 	struct rb_root sort_list[DD_DIR_COUNT];
 	struct list_head fifo_list[DD_DIR_COUNT];
 	/* Position of the most recently dispatched request. */
@@ -84,6 +83,7 @@ struct deadline_data {
 	 * run time data
 	 */
 
+	struct list_head dispatch;
 	struct dd_per_prio per_prio[DD_PRIO_COUNT];
 
 	/* Data direction of latest dispatched request. */
@@ -332,16 +332,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 
 	lockdep_assert_held(&dd->lock);
 
-	if (!list_empty(&per_prio->dispatch)) {
-		rq = list_first_entry(&per_prio->dispatch, struct request,
-				      queuelist);
-		if (started_after(dd, rq, latest_start))
-			return NULL;
-		list_del_init(&rq->queuelist);
-		data_dir = rq_data_dir(rq);
-		goto done;
-	}
-
 	/*
 	 * batches are currently reads XOR writes
 	 */
@@ -421,7 +411,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	 */
 	dd->batching++;
 	deadline_move_request(dd, per_prio, rq);
-done:
 	return dd_start_request(dd, data_dir, rq);
 }
 
@@ -469,6 +458,14 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	enum dd_prio prio;
 
 	spin_lock(&dd->lock);
+
+	if (!list_empty(&dd->dispatch)) {
+		rq = list_first_entry(&dd->dispatch, struct request, queuelist);
+		list_del_init(&rq->queuelist);
+		dd_start_request(dd, rq_data_dir(rq), rq);
+		goto unlock;
+	}
+
 	rq = dd_dispatch_prio_aged_requests(dd, now);
 	if (rq)
 		goto unlock;
@@ -557,10 +554,10 @@ static int dd_init_sched(struct request_queue *q, struct elevator_queue *eq)
 
 	eq->elevator_data = dd;
 
+	INIT_LIST_HEAD(&dd->dispatch);
 	for (prio = 0; prio <= DD_PRIO_MAX; prio++) {
 		struct dd_per_prio *per_prio = &dd->per_prio[prio];
 
-		INIT_LIST_HEAD(&per_prio->dispatch);
 		INIT_LIST_HEAD(&per_prio->fifo_list[DD_READ]);
 		INIT_LIST_HEAD(&per_prio->fifo_list[DD_WRITE]);
 		per_prio->sort_list[DD_READ] = RB_ROOT;
@@ -664,7 +661,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	trace_block_rq_insert(rq);
 
 	if (flags & BLK_MQ_INSERT_AT_HEAD) {
-		list_add(&rq->queuelist, &per_prio->dispatch);
+		list_add(&rq->queuelist, &dd->dispatch);
 		rq->fifo_time = jiffies;
 	} else {
 		deadline_add_rq_rb(per_prio, rq);
@@ -731,8 +728,7 @@ static void dd_finish_request(struct request *rq)
 
 static bool dd_has_work_for_prio(struct dd_per_prio *per_prio)
 {
-	return !list_empty_careful(&per_prio->dispatch) ||
-		!list_empty_careful(&per_prio->fifo_list[DD_READ]) ||
+	return !list_empty_careful(&per_prio->fifo_list[DD_READ]) ||
 		!list_empty_careful(&per_prio->fifo_list[DD_WRITE]);
 }
 
@@ -741,6 +737,9 @@ static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
 	struct deadline_data *dd = hctx->queue->elevator->elevator_data;
 	enum dd_prio prio;
 
+	if (!list_empty_careful(&dd->dispatch))
+		return true;
+
 	for (prio = 0; prio <= DD_PRIO_MAX; prio++)
 		if (dd_has_work_for_prio(&dd->per_prio[prio]))
 			return true;
@@ -949,49 +948,39 @@ static int dd_owned_by_driver_show(void *data, struct seq_file *m)
 	return 0;
 }
 
-#define DEADLINE_DISPATCH_ATTR(prio)					\
-static void *deadline_dispatch##prio##_start(struct seq_file *m,	\
-					     loff_t *pos)		\
-	__acquires(&dd->lock)						\
-{									\
-	struct request_queue *q = m->private;				\
-	struct deadline_data *dd = q->elevator->elevator_data;		\
-	struct dd_per_prio *per_prio = &dd->per_prio[prio];		\
-									\
-	spin_lock(&dd->lock);						\
-	return seq_list_start(&per_prio->dispatch, *pos);		\
-}									\
-									\
-static void *deadline_dispatch##prio##_next(struct seq_file *m,		\
-					    void *v, loff_t *pos)	\
-{									\
-	struct request_queue *q = m->private;				\
-	struct deadline_data *dd = q->elevator->elevator_data;		\
-	struct dd_per_prio *per_prio = &dd->per_prio[prio];		\
-									\
-	return seq_list_next(v, &per_prio->dispatch, pos);		\
-}									\
-									\
-static void deadline_dispatch##prio##_stop(struct seq_file *m, void *v)	\
-	__releases(&dd->lock)						\
-{									\
-	struct request_queue *q = m->private;				\
-	struct deadline_data *dd = q->elevator->elevator_data;		\
-									\
-	spin_unlock(&dd->lock);						\
-}									\
-									\
-static const struct seq_operations deadline_dispatch##prio##_seq_ops = { \
-	.start	= deadline_dispatch##prio##_start,			\
-	.next	= deadline_dispatch##prio##_next,			\
-	.stop	= deadline_dispatch##prio##_stop,			\
-	.show	= blk_mq_debugfs_rq_show,				\
+static void *deadline_dispatch_start(struct seq_file *m, loff_t *pos)
+	__acquires(&dd->lock)
+{
+	struct request_queue *q = m->private;
+	struct deadline_data *dd = q->elevator->elevator_data;
+
+	spin_lock(&dd->lock);
+	return seq_list_start(&dd->dispatch, *pos);
 }
 
-DEADLINE_DISPATCH_ATTR(0);
-DEADLINE_DISPATCH_ATTR(1);
-DEADLINE_DISPATCH_ATTR(2);
-#undef DEADLINE_DISPATCH_ATTR
+static void *deadline_dispatch_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	struct request_queue *q = m->private;
+	struct deadline_data *dd = q->elevator->elevator_data;
+
+	return seq_list_next(v, &dd->dispatch, pos);
+}
+
+static void deadline_dispatch_stop(struct seq_file *m, void *v)
+	__releases(&dd->lock)
+{
+	struct request_queue *q = m->private;
+	struct deadline_data *dd = q->elevator->elevator_data;
+
+	spin_unlock(&dd->lock);
+}
+
+static const struct seq_operations deadline_dispatch_seq_ops = {
+	.start	= deadline_dispatch_start,
+	.next	= deadline_dispatch_next,
+	.stop	= deadline_dispatch_stop,
+	.show	= blk_mq_debugfs_rq_show,
+};
 
 #define DEADLINE_QUEUE_DDIR_ATTRS(name)					\
 	{#name "_fifo_list", 0400,					\
@@ -1014,9 +1003,7 @@ static const struct blk_mq_debugfs_attr deadline_queue_debugfs_attrs[] = {
 	{"batching", 0400, deadline_batching_show},
 	{"starved", 0400, deadline_starved_show},
 	{"async_depth", 0400, dd_async_depth_show},
-	{"dispatch0", 0400, .seq_ops = &deadline_dispatch0_seq_ops},
-	{"dispatch1", 0400, .seq_ops = &deadline_dispatch1_seq_ops},
-	{"dispatch2", 0400, .seq_ops = &deadline_dispatch2_seq_ops},
+	{"dispatch", 0400, .seq_ops = &deadline_dispatch_seq_ops},
 	{"owned_by_driver", 0400, dd_owned_by_driver_show},
 	{"queued", 0400, dd_queued_show},
 	{},


* Re: [PATCH 1/2] block/mq-deadline: Introduce dd_start_request()
  2025-10-13 19:28 ` [PATCH 1/2] block/mq-deadline: Introduce dd_start_request() Bart Van Assche
@ 2025-10-14  4:16   ` Damien Le Moal
  2025-10-14 15:47     ` Bart Van Assche
  0 siblings, 1 reply; 9+ messages in thread
From: Damien Le Moal @ 2025-10-14  4:16 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Yu Kuai, chengkaitao

On 2025/10/14 4:28, Bart Van Assche wrote:
> Prepare for adding a second caller of this function. No functionality
> has been changed.
> 
> Cc: Damien Le Moal <dlemoal@kernel.org>
> Cc: Yu Kuai <yukuai@kernel.org>
> Cc: chengkaitao <chengkaitao@kylinos.cn>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

One nit below.

Other than that, looks fine to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

> ---
>  block/mq-deadline.c | 22 ++++++++++++++--------
>  1 file changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index 3e741d33142d..647a45f6d935 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -306,6 +306,19 @@ static bool started_after(struct deadline_data *dd, struct request *rq,
>  	return time_after(start_time, latest_start);
>  }
>  
> +static struct request *dd_start_request(struct deadline_data *dd,
> +					enum dd_data_dir data_dir,
> +					struct request *rq)

Why return the request that is passed? Not sure that is necessary.


> +{
> +	u8 ioprio_class = dd_rq_ioclass(rq);
> +	enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
> +
> +	dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
> +	dd->per_prio[prio].stats.dispatched++;
> +	rq->rq_flags |= RQF_STARTED;
> +	return rq;
> +}



-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list
  2025-10-13 19:28 ` [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list Bart Van Assche
@ 2025-10-14  4:19   ` Damien Le Moal
  2025-10-14  6:10   ` Yu Kuai
  1 sibling, 0 replies; 9+ messages in thread
From: Damien Le Moal @ 2025-10-14  4:19 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Yu Kuai, chengkaitao

On 2025/10/14 4:28, Bart Van Assche wrote:
> Commit c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
> modified the behavior of request flag BLK_MQ_INSERT_AT_HEAD from
> dispatching a request before other requests into dispatching a request
> before other requests with the same I/O priority. This is not correct since
> BLK_MQ_INSERT_AT_HEAD is used when requeuing requests and also when a flush
> request is inserted.  Both types of requests should be dispatched as soon
> as possible. Hence, make the mq-deadline I/O scheduler again ignore the I/O
> priority for BLK_MQ_INSERT_AT_HEAD requests.
> 
> Cc: Damien Le Moal <dlemoal@kernel.org>
> Cc: Yu Kuai <yukuai@kernel.org>
> Reported-by: chengkaitao <chengkaitao@kylinos.cn>
> Closes: https://lore.kernel.org/linux-block/20251009155253.14611-1-pilgrimtao@gmail.com/
> Fixes: c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Nice cleanup !

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>


-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list
  2025-10-13 19:28 ` [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list Bart Van Assche
  2025-10-14  4:19   ` Damien Le Moal
@ 2025-10-14  6:10   ` Yu Kuai
  2025-10-14 15:55     ` Bart Van Assche
  1 sibling, 1 reply; 9+ messages in thread
From: Yu Kuai @ 2025-10-14  6:10 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Yu Kuai,
	chengkaitao, yukuai (C)

Hi,

On 2025/10/14 3:28, Bart Van Assche wrote:
> Commit c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
> modified the behavior of request flag BLK_MQ_INSERT_AT_HEAD from
> dispatching a request before other requests into dispatching a request
> before other requests with the same I/O priority. This is not correct since
> BLK_MQ_INSERT_AT_HEAD is used when requeuing requests and also when a flush
> request is inserted.  Both types of requests should be dispatched as soon
> as possible. Hence, make the mq-deadline I/O scheduler again ignore the I/O
> priority for BLK_MQ_INSERT_AT_HEAD requests.
> 
> Cc: Damien Le Moal <dlemoal@kernel.org>
> Cc: Yu Kuai <yukuai@kernel.org>
> Reported-by: chengkaitao <chengkaitao@kylinos.cn>
> Closes: https://lore.kernel.org/linux-block/20251009155253.14611-1-pilgrimtao@gmail.com/
> Fixes: c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>   block/mq-deadline.c | 107 +++++++++++++++++++-------------------------
>   1 file changed, 47 insertions(+), 60 deletions(-)
> 
> diff --git a/block/mq-deadline.c b/block/mq-deadline.c
> index 647a45f6d935..3e3719093aec 100644
> --- a/block/mq-deadline.c
> +++ b/block/mq-deadline.c
> @@ -71,7 +71,6 @@ struct io_stats_per_prio {
>    * present on both sort_list[] and fifo_list[].
>    */
>   struct dd_per_prio {
> -	struct list_head dispatch;
>   	struct rb_root sort_list[DD_DIR_COUNT];
>   	struct list_head fifo_list[DD_DIR_COUNT];
>   	/* Position of the most recently dispatched request. */
> @@ -84,6 +83,7 @@ struct deadline_data {
>   	 * run time data
>   	 */
>   
> +	struct list_head dispatch;
>   	struct dd_per_prio per_prio[DD_PRIO_COUNT];
>   
>   	/* Data direction of latest dispatched request. */
> @@ -332,16 +332,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>   
>   	lockdep_assert_held(&dd->lock);
>   
> -	if (!list_empty(&per_prio->dispatch)) {
> -		rq = list_first_entry(&per_prio->dispatch, struct request,
> -				      queuelist);
> -		if (started_after(dd, rq, latest_start))
> -			return NULL;
> -		list_del_init(&rq->queuelist);
> -		data_dir = rq_data_dir(rq);
> -		goto done;
> -	}
> -
>   	/*
>   	 * batches are currently reads XOR writes
>   	 */
> @@ -421,7 +411,6 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
>   	 */
>   	dd->batching++;
>   	deadline_move_request(dd, per_prio, rq);
> -done:
>   	return dd_start_request(dd, data_dir, rq);
>   }
>   
> @@ -469,6 +458,14 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)
>   	enum dd_prio prio;
>   
>   	spin_lock(&dd->lock);
> +
> +	if (!list_empty(&dd->dispatch)) {
> +		rq = list_first_entry(&dd->dispatch, struct request, queuelist);
> +		list_del_init(&rq->queuelist);
> +		dd_start_request(dd, rq_data_dir(rq), rq);
> +		goto unlock;
> +	}
> +
>   	rq = dd_dispatch_prio_aged_requests(dd, now);
>   	if (rq)
>   		goto unlock;
> @@ -557,10 +554,10 @@ static int dd_init_sched(struct request_queue *q, struct elevator_queue *eq)
>   
>   	eq->elevator_data = dd;
>   
> +	INIT_LIST_HEAD(&dd->dispatch);
>   	for (prio = 0; prio <= DD_PRIO_MAX; prio++) {
>   		struct dd_per_prio *per_prio = &dd->per_prio[prio];
>   
> -		INIT_LIST_HEAD(&per_prio->dispatch);
>   		INIT_LIST_HEAD(&per_prio->fifo_list[DD_READ]);
>   		INIT_LIST_HEAD(&per_prio->fifo_list[DD_WRITE]);
>   		per_prio->sort_list[DD_READ] = RB_ROOT;
> @@ -664,7 +661,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>   	trace_block_rq_insert(rq);
>   
>   	if (flags & BLK_MQ_INSERT_AT_HEAD) {
> -		list_add(&rq->queuelist, &per_prio->dispatch);
> +		list_add(&rq->queuelist, &dd->dispatch);
>   		rq->fifo_time = jiffies;
>   	} else {
>   		deadline_add_rq_rb(per_prio, rq);

Do you still want this request to be accounted in the per_prio stats? I
feel we should not; otherwise, can you explain more?

Thanks,
Kuai




* Re: [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support
  2025-10-13 19:28 [PATCH 0/2] block/mq-deadline: Fix BLK_MQ_INSERT_AT_HEAD support Bart Van Assche
  2025-10-13 19:28 ` [PATCH 1/2] block/mq-deadline: Introduce dd_start_request() Bart Van Assche
  2025-10-13 19:28 ` [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list Bart Van Assche
@ 2025-10-14 13:12 ` Jens Axboe
  2 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2025-10-14 13:12 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-block, Christoph Hellwig


On Mon, 13 Oct 2025 12:28:01 -0700, Bart Van Assche wrote:
> Commit c807ab520fc3 ("block/mq-deadline: Add I/O priority support")
> modified the behavior of request flag BLK_MQ_INSERT_AT_HEAD from
> dispatching a request before other requests into dispatching a request
> before other requests with the same I/O priority. This is not correct since
> BLK_MQ_INSERT_AT_HEAD is used when requeuing requests and also when a flush
> request is inserted. Both types of requests should be dispatched as soon
> as possible. Hence this patch series that makes the mq-deadline I/O scheduler
> again ignore the I/O priority for BLK_MQ_INSERT_AT_HEAD requests.
> 
> [...]

Applied, thanks!

[1/2] block/mq-deadline: Introduce dd_start_request()
      commit: 667312e1c0c091bd6d62cabd3d6e03e0a757d87c
[2/2] block/mq-deadline: Switch back to a single dispatch list
      commit: 2f52aa87a0b7da80f50aff13904b82d24171d1a7

Best regards,
-- 
Jens Axboe





* Re: [PATCH 1/2] block/mq-deadline: Introduce dd_start_request()
  2025-10-14  4:16   ` Damien Le Moal
@ 2025-10-14 15:47     ` Bart Van Assche
  0 siblings, 0 replies; 9+ messages in thread
From: Bart Van Assche @ 2025-10-14 15:47 UTC (permalink / raw)
  To: Damien Le Moal, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Yu Kuai, chengkaitao

On 10/13/25 9:16 PM, Damien Le Moal wrote:
> On 2025/10/14 4:28, Bart Van Assche wrote:
>> +static struct request *dd_start_request(struct deadline_data *dd,
>> +					enum dd_data_dir data_dir,
>> +					struct request *rq)
> 
> Why return the request that is passed ? Not sure that is necessary.

If anyone wants to change the return type of dd_start_request() to void,
that's fine with me.

Thanks,

Bart.


* Re: [PATCH 2/2] block/mq-deadline: Switch back to a single dispatch list
  2025-10-14  6:10   ` Yu Kuai
@ 2025-10-14 15:55     ` Bart Van Assche
  0 siblings, 0 replies; 9+ messages in thread
From: Bart Van Assche @ 2025-10-14 15:55 UTC (permalink / raw)
  To: Yu Kuai, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Damien Le Moal, Yu Kuai,
	chengkaitao, yukuai (C)

On 10/13/25 11:10 PM, Yu Kuai wrote:
> Do you still want this request to be accounted in the per_prio stats? I
> feel we should not; otherwise, can you explain more?

This is something I don't have a strong opinion about. I think it is
possible to exclude AT HEAD requests from the per_prio statistics but 
I'm not sure it's worth the additional code complexity.

Thanks,

Bart.

