From: Bart Van Assche <bvanassche@acm.org>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Bart Van Assche <bvanassche@acm.org>,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 3/3] block/mq-deadline: Disable I/O prioritization in certain cases
Date: Mon,  4 Dec 2023 21:32:13 -0800	[thread overview]
Message-ID: <20231205053213.522772-4-bvanassche@acm.org> (raw)
In-Reply-To: <20231205053213.522772-1-bvanassche@acm.org>

Fix the following two issues:
- Even with prio_aging_expire set to zero, I/O priorities still affect the
  request order.
- Assigning I/O priorities with the ioprio cgroup policy breaks zoned
  storage support in the mq-deadline scheduler.

Fix both issues by disabling I/O prioritization in these two cases: make
dd_rq_ioclass() and dd_bio_ioclass() report IOPRIO_CLASS_NONE if
prio_aging_expire is zero.

Cc: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/mq-deadline.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index fe5da2ade953..6781cef0109e 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -123,14 +123,16 @@ deadline_rb_root(struct dd_per_prio *per_prio, struct request *rq)
  * Returns the I/O priority class (IOPRIO_CLASS_*) that has been assigned to a
  * request.
  */
-static u8 dd_rq_ioclass(struct request *rq)
+static u8 dd_rq_ioclass(struct deadline_data *dd, struct request *rq)
 {
-	return IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
+	return dd->prio_aging_expire ? IOPRIO_PRIO_CLASS(req_get_ioprio(rq)) :
+				       IOPRIO_CLASS_NONE;
 }
 
-static u8 dd_bio_ioclass(struct bio *bio)
+static u8 dd_bio_ioclass(struct deadline_data *dd, struct bio *bio)
 {
-	return IOPRIO_PRIO_CLASS(bio->bi_ioprio);
+	return dd->prio_aging_expire ? IOPRIO_PRIO_CLASS(bio->bi_ioprio) :
+				       IOPRIO_CLASS_NONE;
 }
 
 /*
@@ -233,7 +235,7 @@ static void dd_request_merged(struct request_queue *q, struct request *req,
 			      enum elv_merge type)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const u8 ioprio_class = dd_rq_ioclass(req);
+	const u8 ioprio_class = dd_rq_ioclass(dd, req);
 	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
 	struct dd_per_prio *per_prio = &dd->per_prio[prio];
 
@@ -253,7 +255,7 @@ static void dd_merged_requests(struct request_queue *q, struct request *req,
 			       struct request *next)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const u8 ioprio_class = dd_rq_ioclass(next);
+	const u8 ioprio_class = dd_rq_ioclass(dd, next);
 	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
 
 	lockdep_assert_held(&dd->lock);
@@ -550,7 +552,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
 	dd->batching++;
 	deadline_move_request(dd, per_prio, rq);
 done:
-	ioprio_class = dd_rq_ioclass(rq);
+	ioprio_class = dd_rq_ioclass(dd, rq);
 	prio = ioprio_class_to_prio[ioprio_class];
 	dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
 	dd->per_prio[prio].stats.dispatched++;
@@ -749,7 +751,7 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
 			    struct bio *bio)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const u8 ioprio_class = dd_bio_ioclass(bio);
+	const u8 ioprio_class = dd_bio_ioclass(dd, bio);
 	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
 	struct dd_per_prio *per_prio = &dd->per_prio[prio];
 	sector_t sector = bio_end_sector(bio);
@@ -814,7 +816,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 	 */
 	blk_req_zone_write_unlock(rq);
 
-	prio = ioprio_class_to_prio[dd_rq_ioclass(rq)];
+	prio = ioprio_class_to_prio[dd_rq_ioclass(dd, rq)];
 	per_prio = &dd->per_prio[prio];
 	if (!rq->elv.priv[0]) {
 		per_prio->stats.inserted++;
@@ -923,7 +925,7 @@ static void dd_finish_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 	struct deadline_data *dd = q->elevator->elevator_data;
-	const u8 ioprio_class = dd_rq_ioclass(rq);
+	const u8 ioprio_class = dd_rq_ioclass(dd, rq);
 	const enum dd_prio prio = ioprio_class_to_prio[ioprio_class];
 	struct dd_per_prio *per_prio = &dd->per_prio[prio];
 


Thread overview: 48+ messages
2023-12-05  5:32 [PATCH 0/3] Improve mq-deadline I/O priority support Bart Van Assche
2023-12-05  5:32 ` [PATCH 1/3] block/mq-deadline: Use dd_rq_ioclass() instead of open-coding it Bart Van Assche
2023-12-06  2:35   ` Damien Le Moal
2023-12-11 16:54   ` Christoph Hellwig
2023-12-05  5:32 ` [PATCH 2/3] block/mq-deadline: Introduce dd_bio_ioclass() Bart Van Assche
2023-12-06  2:35   ` Damien Le Moal
2023-12-11 16:55   ` Christoph Hellwig
2023-12-18 17:35     ` Bart Van Assche
2023-12-05  5:32 ` Bart Van Assche [this message]
2023-12-06  2:42   ` [PATCH 3/3] block/mq-deadline: Disable I/O prioritization in certain cases Damien Le Moal
2023-12-06  3:24     ` Bart Van Assche
2023-12-08  0:03     ` Bart Van Assche
2023-12-08  3:37       ` Damien Le Moal
2023-12-08 18:40         ` Bart Van Assche
2023-12-11  7:40           ` Damien Le Moal
2023-12-12 22:44             ` Bart Van Assche
2023-12-12 23:52               ` Damien Le Moal
2023-12-13  1:02                 ` Bart Van Assche
2023-12-13  5:29                   ` Damien Le Moal
2023-12-11 16:57   ` Christoph Hellwig
2023-12-11 17:20     ` Bart Van Assche
2023-12-12 15:40       ` Christoph Hellwig
2023-12-11 22:40     ` Damien Le Moal
2023-12-12 15:41       ` Christoph Hellwig
2023-12-12 17:15         ` Bart Van Assche
2023-12-12 17:18           ` Christoph Hellwig
2023-12-12 17:42             ` Bart Van Assche
2023-12-12 17:48               ` Christoph Hellwig
2023-12-12 18:09                 ` Bart Van Assche
2023-12-12 18:13                   ` Christoph Hellwig
2023-12-12 18:19                     ` Bart Van Assche
2023-12-12 18:26                       ` Christoph Hellwig
2023-12-12 19:03                         ` Jaegeuk Kim
2023-12-12 23:44                           ` Damien Le Moal
2023-12-13 16:49                             ` Jaegeuk Kim
2023-12-13 22:55                               ` Damien Le Moal
2023-12-13 15:56                           ` Christoph Hellwig
2023-12-13 16:41                             ` Jaegeuk Kim
2023-12-14  8:57                               ` Christoph Hellwig
2023-12-14 17:22                                 ` Jaegeuk Kim
2023-12-15  1:12                                   ` Damien Le Moal
2023-12-15  2:03                                     ` Jaegeuk Kim
2023-12-15  2:20                                       ` Keith Busch
2023-12-15  4:49                                         ` Christoph Hellwig
2023-12-14 19:32                                 ` Bart Van Assche
2023-12-14  0:08                     ` Bart Van Assche
2023-12-14  0:37                       ` Damien Le Moal
2023-12-14  8:51                         ` Christoph Hellwig
