* [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices
@ 2023-05-03 22:51 Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 01/11] block: Simplify blk_req_needs_zone_write_lock() Bart Van Assche
` (10 more replies)
0 siblings, 11 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:51 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche
Hi Jens,
This patch series improves support for zoned block devices in the mq-deadline
scheduler by preserving the order of requeued writes (REQ_OP_WRITE*).
Please consider this patch series for the next merge window.
Thank you,
Bart.
Changes compared to v3:
- Addressed Christoph's review feedback.
- Dropped patch "block: Micro-optimize blk_req_needs_zone_write_lock()".
- Added three new patches:
* block: Fix the type of the second bdev_op_is_zoned_write() argument
* block: Introduce op_is_zoned_write()
* block: mq-deadline: Reduce lock contention
Changes compared to v2:
- In the patch that micro-optimizes blk_req_needs_zone_write_lock(), inline
bdev_op_is_zoned_write() instead of modifying it.
- In patch "block: Introduce blk_rq_is_seq_zoned_write()", converted "case
REQ_OP_ZONE_APPEND" into a source code comment.
- Reworked deadline_skip_seq_writes() as suggested by Christoph.
- Dropped the patch that disabled head insertion for zoned writes.
- Dropped patch "mq-deadline: Fix a race condition related to zoned writes".
- Reworked handling of requeued requests: the 'next_rq' pointer has been
removed and instead the position of the most recently dispatched request is
tracked.
- Dropped the patches for tracking zone capacity and for restricting the number
of active zones.
Changes compared to v1:
- Left out the patches related to request insertion and requeuing since
Christoph is busy with reworking these patches.
- Added a patch for enforcing the active zone limit.
Bart Van Assche (11):
block: Simplify blk_req_needs_zone_write_lock()
block: Fix the type of the second bdev_op_is_zoned_write() argument
block: Introduce op_is_zoned_write()
block: Introduce blk_rq_is_seq_zoned_write()
block: mq-deadline: Clean up deadline_check_fifo()
block: mq-deadline: Simplify deadline_skip_seq_writes()
block: mq-deadline: Improve deadline_skip_seq_writes()
block: mq-deadline: Reduce lock contention
block: mq-deadline: Track the dispatch position
block: mq-deadline: Handle requeued requests correctly
block: mq-deadline: Fix handling of at-head zoned writes
block/blk-zoned.c | 20 +++++---
block/mq-deadline.c | 114 +++++++++++++++++++++++++++++------------
include/linux/blk-mq.h | 6 +++
include/linux/blkdev.h | 13 +++--
4 files changed, 107 insertions(+), 46 deletions(-)
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v4 01/11] block: Simplify blk_req_needs_zone_write_lock()
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
@ 2023-05-03 22:51 ` Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument Bart Van Assche
` (9 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:51 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Remove the blk_rq_is_passthrough() check because it is redundant:
blk_req_needs_zone_write_lock() also calls bdev_op_is_zoned_write()
and the latter function returns false for pass-through requests.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-zoned.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index fce9082384d6..835d9e937d4d 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -57,9 +57,6 @@ EXPORT_SYMBOL_GPL(blk_zone_cond_str);
*/
bool blk_req_needs_zone_write_lock(struct request *rq)
{
- if (blk_rq_is_passthrough(rq))
- return false;
-
if (!rq->q->disk->seq_zones_wlock)
return false;
* [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 01/11] block: Simplify blk_req_needs_zone_write_lock() Bart Van Assche
@ 2023-05-03 22:51 ` Bart Van Assche
2023-05-04 7:16 ` Johannes Thumshirn
` (2 more replies)
2023-05-03 22:52 ` [PATCH v4 03/11] block: Introduce op_is_zoned_write() Bart Van Assche
` (8 subsequent siblings)
10 siblings, 3 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:51 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei, Pankaj Raghav, Johannes Thumshirn
Change the type of the second argument of bdev_op_is_zoned_write() from
blk_opf_t to enum req_op because this function expects an operation
without flags as its second argument.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Pankaj Raghav <p.raghav@samsung.com>
Fixes: 8cafdb5ab94c ("block: adapt blk_mq_plug() to not plug for writes that require a zone lock")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
include/linux/blkdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b441e633f4dd..db24cf98ccfb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1282,7 +1282,7 @@ static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
}
static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
- blk_opf_t op)
+ enum req_op op)
{
if (!bdev_is_zoned(bdev))
return false;
* [PATCH v4 03/11] block: Introduce op_is_zoned_write()
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 01/11] block: Simplify blk_req_needs_zone_write_lock() Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-05 5:52 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write() Bart Van Assche
` (7 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Introduce a helper function for checking whether write serialization is
required if the operation will be sent to a zoned device. A second caller
for op_is_zoned_write() will be introduced in the next patch in this
series.
Suggested-by: Christoph Hellwig <hch@lst.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
include/linux/blkdev.h | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index db24cf98ccfb..a4f85781060c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1281,13 +1281,16 @@ static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
return disk_zone_no(bdev->bd_disk, sec);
}
+/* Whether write serialization is required for @op on zoned devices. */
+static inline bool op_is_zoned_write(enum req_op op)
+{
+ return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
+}
+
static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
enum req_op op)
{
- if (!bdev_is_zoned(bdev))
- return false;
-
- return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
+ return bdev_is_zoned(bdev) && op_is_zoned_write(op);
}
static inline sector_t bdev_zone_sectors(struct block_device *bdev)
* [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write()
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (2 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 03/11] block: Introduce op_is_zoned_write() Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-05 5:53 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 05/11] block: mq-deadline: Clean up deadline_check_fifo() Bart Van Assche
` (6 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Introduce the function blk_rq_is_seq_zoned_write(). This function will
be used in later patches to preserve the order of zoned writes that
require write serialization.
This patch includes an optimization: instead of using
rq->q->disk->part0->bd_queue to check whether or not the queue is
associated with a zoned block device, use rq->q->disk->queue.
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-zoned.c | 17 +++++++++++++----
include/linux/blk-mq.h | 6 ++++++
2 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 835d9e937d4d..4f44b74ba4df 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -52,6 +52,18 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond)
}
EXPORT_SYMBOL_GPL(blk_zone_cond_str);
+/**
+ * blk_rq_is_seq_zoned_write() - Check if @rq requires write serialization.
+ * @rq: Request to examine.
+ *
+ * Note: REQ_OP_ZONE_APPEND requests do not require serialization.
+ */
+bool blk_rq_is_seq_zoned_write(struct request *rq)
+{
+ return op_is_zoned_write(req_op(rq)) && blk_rq_zone_is_seq(rq);
+}
+EXPORT_SYMBOL_GPL(blk_rq_is_seq_zoned_write);
+
/*
* Return true if a request is a write requests that needs zone write locking.
*/
@@ -60,10 +72,7 @@ bool blk_req_needs_zone_write_lock(struct request *rq)
if (!rq->q->disk->seq_zones_wlock)
return false;
- if (bdev_op_is_zoned_write(rq->q->disk->part0, req_op(rq)))
- return blk_rq_zone_is_seq(rq);
-
- return false;
+ return blk_rq_is_seq_zoned_write(rq);
}
EXPORT_SYMBOL_GPL(blk_req_needs_zone_write_lock);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 06caacd77ed6..e498b85bc470 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -1164,6 +1164,7 @@ static inline unsigned int blk_rq_zone_is_seq(struct request *rq)
return disk_zone_is_seq(rq->q->disk, blk_rq_pos(rq));
}
+bool blk_rq_is_seq_zoned_write(struct request *rq);
bool blk_req_needs_zone_write_lock(struct request *rq);
bool blk_req_zone_write_trylock(struct request *rq);
void __blk_req_zone_write_lock(struct request *rq);
@@ -1194,6 +1195,11 @@ static inline bool blk_req_can_dispatch_to_zone(struct request *rq)
return !blk_req_zone_is_write_locked(rq);
}
#else /* CONFIG_BLK_DEV_ZONED */
+static inline bool blk_rq_is_seq_zoned_write(struct request *rq)
+{
+ return false;
+}
+
static inline bool blk_req_needs_zone_write_lock(struct request *rq)
{
return false;
* [PATCH v4 05/11] block: mq-deadline: Clean up deadline_check_fifo()
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (3 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write() Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 06/11] block: mq-deadline: Simplify deadline_skip_seq_writes() Bart Van Assche
` (5 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Change the return type of deadline_check_fifo() from 'int' to 'bool'.
Use time_is_before_eq_jiffies() instead of time_after_eq(). No
functionality has been changed.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 16 +++++-----------
1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 5839a027e0f0..a016cafa54b3 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -272,21 +272,15 @@ static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
}
/*
- * deadline_check_fifo returns 0 if there are no expired requests on the fifo,
- * 1 otherwise. Requires !list_empty(&dd->fifo_list[data_dir])
+ * deadline_check_fifo returns true if and only if there are expired requests
+ * in the FIFO list. Requires !list_empty(&dd->fifo_list[data_dir]).
*/
-static inline int deadline_check_fifo(struct dd_per_prio *per_prio,
- enum dd_data_dir data_dir)
+static inline bool deadline_check_fifo(struct dd_per_prio *per_prio,
+ enum dd_data_dir data_dir)
{
struct request *rq = rq_entry_fifo(per_prio->fifo_list[data_dir].next);
- /*
- * rq is expired!
- */
- if (time_after_eq(jiffies, (unsigned long)rq->fifo_time))
- return 1;
-
- return 0;
+ return time_is_before_eq_jiffies((unsigned long)rq->fifo_time);
}
/*
* [PATCH v4 06/11] block: mq-deadline: Simplify deadline_skip_seq_writes()
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (4 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 05/11] block: mq-deadline: Clean up deadline_check_fifo() Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 07/11] block: mq-deadline: Improve deadline_skip_seq_writes() Bart Van Assche
` (4 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Make the deadline_skip_seq_writes() code shorter without changing its
functionality.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index a016cafa54b3..6276afede9cd 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -304,14 +304,11 @@ static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
struct request *rq)
{
sector_t pos = blk_rq_pos(rq);
- sector_t skipped_sectors = 0;
- while (rq) {
- if (blk_rq_pos(rq) != pos + skipped_sectors)
- break;
- skipped_sectors += blk_rq_sectors(rq);
+ do {
+ pos += blk_rq_sectors(rq);
rq = deadline_latter_request(rq);
- }
+ } while (rq && blk_rq_pos(rq) == pos);
return rq;
}
* [PATCH v4 07/11] block: mq-deadline: Improve deadline_skip_seq_writes()
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (5 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 06/11] block: mq-deadline: Simplify deadline_skip_seq_writes() Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 08/11] block: mq-deadline: Reduce lock contention Bart Van Assche
` (3 subsequent siblings)
10 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Make deadline_skip_seq_writes() do what its name suggests, namely to
skip sequential writes.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6276afede9cd..dbc0feca963e 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -308,7 +308,7 @@ static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
do {
pos += blk_rq_sectors(rq);
rq = deadline_latter_request(rq);
- } while (rq && blk_rq_pos(rq) == pos);
+ } while (rq && blk_rq_pos(rq) == pos && blk_rq_is_seq_zoned_write(rq));
return rq;
}
* [PATCH v4 08/11] block: mq-deadline: Reduce lock contention
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (6 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 07/11] block: mq-deadline: Improve deadline_skip_seq_writes() Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-05 5:56 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 09/11] block: mq-deadline: Track the dispatch position Bart Van Assche
` (2 subsequent siblings)
10 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
blk_mq_free_requests() calls dd_finish_request() indirectly. Prevent
nested locking of dd->lock and dd->zone_lock by unlocking dd->lock
before calling blk_mq_free_requests().
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index dbc0feca963e..56cc29953e15 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -758,6 +758,7 @@ static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
*/
static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
blk_insert_t flags)
+ __must_hold(dd->lock)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
@@ -784,7 +785,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
}
if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
+ spin_unlock(&dd->lock);
blk_mq_free_requests(&free);
+ spin_lock(&dd->lock);
return;
}
* [PATCH v4 09/11] block: mq-deadline: Track the dispatch position
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (7 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 08/11] block: mq-deadline: Reduce lock contention Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-05 5:56 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes Bart Van Assche
10 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Track the position (sector_t) of the most recently dispatched request
instead of tracking a pointer to the next request to dispatch. This
patch is the basis for patch "Handle requeued requests correctly".
Without this patch it would be significantly more complicated to make
sure that zoned writes are dispatched in LBA order per zone.
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 45 +++++++++++++++++++++++++++++++--------------
1 file changed, 31 insertions(+), 14 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 56cc29953e15..b482b707cb37 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -74,8 +74,8 @@ struct dd_per_prio {
struct list_head dispatch;
struct rb_root sort_list[DD_DIR_COUNT];
struct list_head fifo_list[DD_DIR_COUNT];
- /* Next request in FIFO order. Read, write or both are NULL. */
- struct request *next_rq[DD_DIR_COUNT];
+ /* Position of the most recently dispatched request. */
+ sector_t latest_pos[DD_DIR_COUNT];
struct io_stats_per_prio stats;
};
@@ -156,6 +156,25 @@ deadline_latter_request(struct request *rq)
return NULL;
}
+/* Return the first request for which blk_rq_pos() >= pos. */
+static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
+ enum dd_data_dir data_dir, sector_t pos)
+{
+ struct rb_node *node = per_prio->sort_list[data_dir].rb_node;
+ struct request *rq, *res = NULL;
+
+ while (node) {
+ rq = rb_entry_rq(node);
+ if (blk_rq_pos(rq) >= pos) {
+ res = rq;
+ node = node->rb_left;
+ } else {
+ node = node->rb_right;
+ }
+ }
+ return res;
+}
+
static void
deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
{
@@ -167,11 +186,6 @@ deadline_add_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
static inline void
deadline_del_rq_rb(struct dd_per_prio *per_prio, struct request *rq)
{
- const enum dd_data_dir data_dir = rq_data_dir(rq);
-
- if (per_prio->next_rq[data_dir] == rq)
- per_prio->next_rq[data_dir] = deadline_latter_request(rq);
-
elv_rb_del(deadline_rb_root(per_prio, rq), rq);
}
@@ -251,10 +265,6 @@ static void
deadline_move_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
struct request *rq)
{
- const enum dd_data_dir data_dir = rq_data_dir(rq);
-
- per_prio->next_rq[data_dir] = deadline_latter_request(rq);
-
/*
* take it off the sort and fifo list
*/
@@ -363,7 +373,8 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
struct request *rq;
unsigned long flags;
- rq = per_prio->next_rq[data_dir];
+ rq = deadline_from_pos(per_prio, data_dir,
+ per_prio->latest_pos[data_dir]);
if (!rq)
return NULL;
@@ -426,6 +437,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
if (started_after(dd, rq, latest_start))
return NULL;
list_del_init(&rq->queuelist);
+ data_dir = rq_data_dir(rq);
goto done;
}
@@ -433,9 +445,11 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
* batches are currently reads XOR writes
*/
rq = deadline_next_request(dd, per_prio, dd->last_dir);
- if (rq && dd->batching < dd->fifo_batch)
+ if (rq && dd->batching < dd->fifo_batch) {
/* we have a next request are still entitled to batch */
+ data_dir = rq_data_dir(rq);
goto dispatch_request;
+ }
/*
* at this point we are not running a batch. select the appropriate
@@ -513,6 +527,7 @@ static struct request *__dd_dispatch_request(struct deadline_data *dd,
done:
ioprio_class = dd_rq_ioclass(rq);
prio = ioprio_class_to_prio[ioprio_class];
+ dd->per_prio[prio].latest_pos[data_dir] = blk_rq_pos(rq);
dd->per_prio[prio].stats.dispatched++;
/*
* If the request needs its target zone locked, do it.
@@ -1029,8 +1044,10 @@ static int deadline_##name##_next_rq_show(void *data, \
struct request_queue *q = data; \
struct deadline_data *dd = q->elevator->elevator_data; \
struct dd_per_prio *per_prio = &dd->per_prio[prio]; \
- struct request *rq = per_prio->next_rq[data_dir]; \
+ struct request *rq; \
\
+ rq = deadline_from_pos(per_prio, data_dir, \
+ per_prio->latest_pos[data_dir]); \
if (rq) \
__blk_mq_debugfs_rq_show(m, rq); \
return 0; \
* [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (8 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 09/11] block: mq-deadline: Track the dispatch position Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-05 5:57 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes Bart Van Assche
10 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Start dispatching from the start of a zone instead of from the starting
position of the most recently dispatched request.
If a zoned write is requeued with an LBA that is lower than already
inserted zoned writes, make sure that it is submitted first.
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 34 ++++++++++++++++++++++++++++++++--
1 file changed, 32 insertions(+), 2 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index b482b707cb37..6c196182f86c 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -156,13 +156,28 @@ deadline_latter_request(struct request *rq)
return NULL;
}
-/* Return the first request for which blk_rq_pos() >= pos. */
+/*
+ * Return the first request for which blk_rq_pos() >= @pos. For zoned devices,
+ * return the first request after the highest zone start <= @pos.
+ */
static inline struct request *deadline_from_pos(struct dd_per_prio *per_prio,
enum dd_data_dir data_dir, sector_t pos)
{
struct rb_node *node = per_prio->sort_list[data_dir].rb_node;
struct request *rq, *res = NULL;
+ if (!node)
+ return NULL;
+
+ rq = rb_entry_rq(node);
+ /*
+ * A zoned write may have been requeued with a starting position that
+ * is below that of the most recently dispatched request. Hence, for
+ * zoned writes, start searching from the start of a zone.
+ */
+ if (blk_rq_is_seq_zoned_write(rq))
+ pos -= round_down(pos, rq->q->limits.chunk_sectors);
+
while (node) {
rq = rb_entry_rq(node);
if (blk_rq_pos(rq) >= pos) {
@@ -812,6 +827,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
list_add(&rq->queuelist, &per_prio->dispatch);
rq->fifo_time = jiffies;
} else {
+ struct list_head *insert_before;
+
deadline_add_rq_rb(per_prio, rq);
if (rq_mergeable(rq)) {
@@ -824,7 +841,20 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
* set expire time and add to fifo list
*/
rq->fifo_time = jiffies + dd->fifo_expire[data_dir];
- list_add_tail(&rq->queuelist, &per_prio->fifo_list[data_dir]);
+ insert_before = &per_prio->fifo_list[data_dir];
+#ifdef CONFIG_BLK_DEV_ZONED
+ /*
+ * Insert zoned writes such that requests are sorted by
+ * position per zone.
+ */
+ if (blk_rq_is_seq_zoned_write(rq)) {
+ struct request *rq2 = deadline_latter_request(rq);
+
+ if (rq2 && blk_rq_zone_no(rq2) == blk_rq_zone_no(rq))
+ insert_before = &rq2->queuelist;
+ }
+#endif
+ list_add_tail(&rq->queuelist, insert_before);
}
}
* [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
` (9 preceding siblings ...)
2023-05-03 22:52 ` [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly Bart Van Assche
@ 2023-05-03 22:52 ` Bart Van Assche
2023-05-05 5:57 ` Christoph Hellwig
10 siblings, 1 reply; 23+ messages in thread
From: Bart Van Assche @ 2023-05-03 22:52 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Bart Van Assche,
Damien Le Moal, Ming Lei
Before dispatching a zoned write from the FIFO list, check whether there
are any zoned writes in the RB-tree with a lower LBA for the same zone.
This patch ensures that zoned writes happen in order even if at_head is
set for some writes for a zone and not for others.
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/mq-deadline.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 6c196182f86c..e556a6dd6616 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -346,7 +346,7 @@ static struct request *
deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
enum dd_data_dir data_dir)
{
- struct request *rq;
+ struct request *rq, *rb_rq, *next;
unsigned long flags;
if (list_empty(&per_prio->fifo_list[data_dir]))
@@ -364,7 +364,12 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
* zones and these zones are unlocked.
*/
spin_lock_irqsave(&dd->zone_lock, flags);
- list_for_each_entry(rq, &per_prio->fifo_list[DD_WRITE], queuelist) {
+ list_for_each_entry_safe(rq, next, &per_prio->fifo_list[DD_WRITE],
+ queuelist) {
+ /* Check whether a prior request exists for the same zone. */
+ rb_rq = deadline_from_pos(per_prio, data_dir, blk_rq_pos(rq));
+ if (rb_rq && blk_rq_pos(rb_rq) < blk_rq_pos(rq))
+ rq = rb_rq;
if (blk_req_can_dispatch_to_zone(rq) &&
(blk_queue_nonrot(rq->q) ||
!deadline_is_seq_write(dd, rq)))
* Re: [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument
2023-05-03 22:51 ` [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument Bart Van Assche
@ 2023-05-04 7:16 ` Johannes Thumshirn
2023-05-04 8:18 ` Pankaj Raghav
2023-05-05 5:52 ` Christoph Hellwig
2 siblings, 0 replies; 23+ messages in thread
From: Johannes Thumshirn @ 2023-05-04 7:16 UTC (permalink / raw)
To: Bart Van Assche, Jens Axboe
Cc: linux-block@vger.kernel.org, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei, Pankaj Raghav
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument
2023-05-03 22:51 ` [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument Bart Van Assche
2023-05-04 7:16 ` Johannes Thumshirn
@ 2023-05-04 8:18 ` Pankaj Raghav
2023-05-05 5:52 ` Christoph Hellwig
2 siblings, 0 replies; 23+ messages in thread
From: Pankaj Raghav @ 2023-05-04 8:18 UTC (permalink / raw)
To: Bart Van Assche, Jens Axboe
Cc: linux-block, Jaegeuk Kim, Christoph Hellwig, Damien Le Moal,
Ming Lei, Johannes Thumshirn
> Change the type of the second argument of bdev_op_is_zoned_write() from
> blk_opf_t into enum req_op because this function expects an operation
> without flags as second argument.
>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Pankaj Raghav <p.raghav@samsung.com>
> Fixes: 8cafdb5ab94c ("block: adapt blk_mq_plug() to not plug for writes that require a zone lock")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Looks good,
Reviewed-by: Pankaj Raghav <p.raghav@samsung.com>
* Re: [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument
2023-05-03 22:51 ` [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument Bart Van Assche
2023-05-04 7:16 ` Johannes Thumshirn
2023-05-04 8:18 ` Pankaj Raghav
@ 2023-05-05 5:52 ` Christoph Hellwig
2 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:52 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei, Pankaj Raghav, Johannes Thumshirn
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v4 03/11] block: Introduce op_is_zoned_write()
2023-05-03 22:52 ` [PATCH v4 03/11] block: Introduce op_is_zoned_write() Bart Van Assche
@ 2023-05-05 5:52 ` Christoph Hellwig
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:52 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei
On Wed, May 03, 2023 at 03:52:00PM -0700, Bart Van Assche wrote:
> Introduce a helper function for checking whether write serialization is
> required if the operation will be sent to a zoned device. A second caller
> for op_is_zoned_write() will be introduced in the next patch in this
> series.
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write()
2023-05-03 22:52 ` [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write() Bart Van Assche
@ 2023-05-05 5:53 ` Christoph Hellwig
2023-05-05 21:56 ` Bart Van Assche
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:53 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei
On Wed, May 03, 2023 at 03:52:01PM -0700, Bart Van Assche wrote:
> +bool blk_rq_is_seq_zoned_write(struct request *rq)
> +{
> + return op_is_zoned_write(req_op(rq)) && blk_rq_zone_is_seq(rq);
> +}
> +EXPORT_SYMBOL_GPL(blk_rq_is_seq_zoned_write);
Would it make more sense to just inline this function?
Otherwise looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v4 08/11] block: mq-deadline: Reduce lock contention
2023-05-03 22:52 ` [PATCH v4 08/11] block: mq-deadline: Reduce lock contention Bart Van Assche
@ 2023-05-05 5:56 ` Christoph Hellwig
2023-05-05 16:16 ` Bart Van Assche
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:56 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei
On Wed, May 03, 2023 at 03:52:05PM -0700, Bart Van Assche wrote:
> blk_mq_free_requests() calls dd_finish_request() indirectly. Prevent
> nested locking of dd->lock and dd->zone_lock by unlocking dd->lock
> before calling blk_mq_free_requests().
Do you have a reproducer for this that we could wire up in blktests?
Also please add a Fixes tag and move it to the beginning of the series.
> static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
> blk_insert_t flags)
> + __must_hold(dd->lock)
> {
> struct request_queue *q = hctx->queue;
> struct deadline_data *dd = q->elevator->elevator_data;
> @@ -784,7 +785,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
> }
>
> if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
> + spin_unlock(&dd->lock);
> blk_mq_free_requests(&free);
> + spin_lock(&dd->lock);
> return;
Given that free is a list, why don't we declare the free list in
dd_insert_requests and just pass it to dd_insert_request and then do
one single blk_mq_free_requests call after the loop?
* Re: [PATCH v4 09/11] block: mq-deadline: Track the dispatch position
2023-05-03 22:52 ` [PATCH v4 09/11] block: mq-deadline: Track the dispatch position Bart Van Assche
@ 2023-05-05 5:56 ` Christoph Hellwig
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:56 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly
2023-05-03 22:52 ` [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly Bart Van Assche
@ 2023-05-05 5:57 ` Christoph Hellwig
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:57 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes
2023-05-03 22:52 ` [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes Bart Van Assche
@ 2023-05-05 5:57 ` Christoph Hellwig
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-05-05 5:57 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Christoph Hellwig,
Damien Le Moal, Ming Lei
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v4 08/11] block: mq-deadline: Reduce lock contention
2023-05-05 5:56 ` Christoph Hellwig
@ 2023-05-05 16:16 ` Bart Van Assche
0 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-05 16:16 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Damien Le Moal, Ming Lei
On 5/4/23 22:56, Christoph Hellwig wrote:
> On Wed, May 03, 2023 at 03:52:05PM -0700, Bart Van Assche wrote:
>> blk_mq_free_requests() calls dd_finish_request() indirectly. Prevent
>> nested locking of dd->lock and dd->zone_lock by unlocking dd->lock
>> before calling blk_mq_free_requests().
>
> Do you have a reproducer for this that we could wire up in blktests?
> Also please add a Fixes tag and move it to the beginning of the series.
Hi Christoph,
I think the nested locking is triggered during every run of blktests.
Additionally, I don't think that nested locking of spinlocks is a bug
so I'm surprised to see a request to add a Fixes: tag?
>> static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>> blk_insert_t flags)
>> + __must_hold(dd->lock)
>> {
>> struct request_queue *q = hctx->queue;
>> struct deadline_data *dd = q->elevator->elevator_data;
>> @@ -784,7 +785,9 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>> }
>>
>> if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
>> + spin_unlock(&dd->lock);
>> blk_mq_free_requests(&free);
>> + spin_lock(&dd->lock);
>> return;
>
> Given that free is a list, why don't we declare the free list in
> dd_insert_requests and just pass it to dd_insert_request and then do
> one single blk_mq_free_requests call after the loop?
That sounds like an interesting approach to me. I will make this change.
Thanks,
Bart.
* Re: [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write()
2023-05-05 5:53 ` Christoph Hellwig
@ 2023-05-05 21:56 ` Bart Van Assche
0 siblings, 0 replies; 23+ messages in thread
From: Bart Van Assche @ 2023-05-05 21:56 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jens Axboe, linux-block, Jaegeuk Kim, Damien Le Moal, Ming Lei
On 5/4/23 22:53, Christoph Hellwig wrote:
> On Wed, May 03, 2023 at 03:52:01PM -0700, Bart Van Assche wrote:
>> +bool blk_rq_is_seq_zoned_write(struct request *rq)
>> +{
>> + return op_is_zoned_write(req_op(rq)) && blk_rq_zone_is_seq(rq);
>> +}
>> +EXPORT_SYMBOL_GPL(blk_rq_is_seq_zoned_write);
>
> Would it make more sense to just inline this function?
Hi Christoph,
I will declare this function as inline and move it into include/linux/blk-mq.h.
Thanks,
Bart.
end of thread [~2023-05-05 21:56 UTC | newest]
Thread overview: 23+ messages
2023-05-03 22:51 [PATCH v4 00/11] mq-deadline: Improve support for zoned block devices Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 01/11] block: Simplify blk_req_needs_zone_write_lock() Bart Van Assche
2023-05-03 22:51 ` [PATCH v4 02/11] block: Fix the type of the second bdev_op_is_zoned_write() argument Bart Van Assche
2023-05-04 7:16 ` Johannes Thumshirn
2023-05-04 8:18 ` Pankaj Raghav
2023-05-05 5:52 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 03/11] block: Introduce op_is_zoned_write() Bart Van Assche
2023-05-05 5:52 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 04/11] block: Introduce blk_rq_is_seq_zoned_write() Bart Van Assche
2023-05-05 5:53 ` Christoph Hellwig
2023-05-05 21:56 ` Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 05/11] block: mq-deadline: Clean up deadline_check_fifo() Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 06/11] block: mq-deadline: Simplify deadline_skip_seq_writes() Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 07/11] block: mq-deadline: Improve deadline_skip_seq_writes() Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 08/11] block: mq-deadline: Reduce lock contention Bart Van Assche
2023-05-05 5:56 ` Christoph Hellwig
2023-05-05 16:16 ` Bart Van Assche
2023-05-03 22:52 ` [PATCH v4 09/11] block: mq-deadline: Track the dispatch position Bart Van Assche
2023-05-05 5:56 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 10/11] block: mq-deadline: Handle requeued requests correctly Bart Van Assche
2023-05-05 5:57 ` Christoph Hellwig
2023-05-03 22:52 ` [PATCH v4 11/11] block: mq-deadline: Fix handling of at-head zoned writes Bart Van Assche
2023-05-05 5:57 ` Christoph Hellwig