* convert xen-blkfront to atomic queue limit updates v2
@ 2024-02-21 12:58 Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 1/4] xen-blkfront: set max_discard/secure erase limits to UINT_MAX Christoph Hellwig
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-21 12:58 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, Jens Axboe, xen-devel, linux-block

Hi all,

this series converts xen-blkfront to the new atomic queue limits update
API in the block tree.  I don't have a Xen setup so this is compile
tested only.
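
For anyone not yet familiar with the new API, the update side works
roughly as in the sketch below.  This is an illustrative sketch only and
not part of the series; my_example_limits_update() and the limit values
are made-up placeholders:

#include <linux/blkdev.h>

/*
 * Sketch: adjust the limits of a live queue.  queue_limits_start_update()
 * hands back a copy of the current limits and locks out concurrent
 * updaters; queue_limits_commit_update() validates and applies all
 * changes in a single step.
 */
static int my_example_limits_update(struct request_queue *q)
{
	struct queue_limits lim;

	lim = queue_limits_start_update(q);
	lim.logical_block_size = 512;	/* placeholder values */
	lim.max_hw_sectors = 2048;
	return queue_limits_commit_update(q, &lim);
}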

Changes since v1:
 - constify the info argument to blkif_set_queue_limits
 - remove a spurious word from a commit message

Diffstat:
 xen-blkfront.c |   53 +++++++++++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

* [PATCH 1/4] xen-blkfront: set max_discard/secure erase limits to UINT_MAX
  2024-02-21 12:58 convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
@ 2024-02-21 12:58 ` Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 2/4] xen-blkfront: rely on the default discard granularity Christoph Hellwig
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-21 12:58 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, Jens Axboe, xen-devel, linux-block

Currently xen-blkfront sets the max discard limit to the capacity of
the device, which is suboptimal when the capacity changes.  Just set
it to UINT_MAX, which has the same effect and is simpler.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
 drivers/block/xen-blkfront.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 4cc2884e748463..f78167cd5a6333 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -944,20 +944,18 @@ static const struct blk_mq_ops blkfront_mq_ops = {
 static void blkif_set_queue_limits(struct blkfront_info *info)
 {
 	struct request_queue *rq = info->rq;
-	struct gendisk *gd = info->gd;
 	unsigned int segments = info->max_indirect_segments ? :
 				BLKIF_MAX_SEGMENTS_PER_REQUEST;
 
 	blk_queue_flag_set(QUEUE_FLAG_VIRT, rq);
 
 	if (info->feature_discard) {
-		blk_queue_max_discard_sectors(rq, get_capacity(gd));
+		blk_queue_max_discard_sectors(rq, UINT_MAX);
 		rq->limits.discard_granularity = info->discard_granularity ?:
 						 info->physical_sector_size;
 		rq->limits.discard_alignment = info->discard_alignment;
 		if (info->feature_secdiscard)
-			blk_queue_max_secure_erase_sectors(rq,
-							   get_capacity(gd));
+			blk_queue_max_secure_erase_sectors(rq, UINT_MAX);
 	}
 
 	/* Hard sector size and max sectors impersonate the equiv. hardware. */
-- 
2.39.2


* [PATCH 2/4] xen-blkfront: rely on the default discard granularity
  2024-02-21 12:58 convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 1/4] xen-blkfront: set max_discard/secure erase limits to UINT_MAX Christoph Hellwig
@ 2024-02-21 12:58 ` Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 3/4] xen-blkfront: don't redundantly set max_segments in blkif_recover Christoph Hellwig
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-21 12:58 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, Jens Axboe, xen-devel, linux-block

The block layer now defaults the discard granularity to the physical
block size.  Take advantage of that in xen-blkfront and only set the
discard granularity if it is explicitly specified.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
 drivers/block/xen-blkfront.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index f78167cd5a6333..1258f24b285500 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -951,8 +951,8 @@ static void blkif_set_queue_limits(struct blkfront_info *info)
 
 	if (info->feature_discard) {
 		blk_queue_max_discard_sectors(rq, UINT_MAX);
-		rq->limits.discard_granularity = info->discard_granularity ?:
-						 info->physical_sector_size;
+		if (info->discard_granularity)
+			rq->limits.discard_granularity = info->discard_granularity;
 		rq->limits.discard_alignment = info->discard_alignment;
 		if (info->feature_secdiscard)
 			blk_queue_max_secure_erase_sectors(rq, UINT_MAX);
-- 
2.39.2


* [PATCH 3/4] xen-blkfront: don't redundantly set max_segments in blkif_recover
  2024-02-21 12:58 convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 1/4] xen-blkfront: set max_discard/secure erase limits to UINT_MAX Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 2/4] xen-blkfront: rely on the default discard granularity Christoph Hellwig
@ 2024-02-21 12:58 ` Christoph Hellwig
  2024-02-21 12:58 ` [PATCH 4/4] xen-blkfront: atomically update queue limits Christoph Hellwig
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-21 12:58 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, Jens Axboe, xen-devel, linux-block

blkif_set_queue_limits already sets the max_segments limit, so don't set
it a second time.  Also remove a comment about a long-fixed bug in
blk_mq_update_nr_hw_queues.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
 drivers/block/xen-blkfront.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 1258f24b285500..7664638a0abbfa 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2008,14 +2008,10 @@ static int blkif_recover(struct blkfront_info *info)
 	struct request *req, *n;
 	int rc;
 	struct bio *bio;
-	unsigned int segs;
 	struct blkfront_ring_info *rinfo;
 
 	blkfront_gather_backend_features(info);
-	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
-	segs = info->max_indirect_segments ? : BLKIF_MAX_SEGMENTS_PER_REQUEST;
-	blk_queue_max_segments(info->rq, segs / GRANTS_PER_PSEG);
 
 	for_each_rinfo(info, rinfo, r_index) {
 		rc = blkfront_setup_indirect(rinfo);
@@ -2035,7 +2031,9 @@ static int blkif_recover(struct blkfront_info *info)
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
-		BUG_ON(req->nr_phys_segments > segs);
+		BUG_ON(req->nr_phys_segments >
+		       (info->max_indirect_segments ? :
+			BLKIF_MAX_SEGMENTS_PER_REQUEST));
 		blk_mq_requeue_request(req, false);
 	}
 	blk_mq_start_stopped_hw_queues(info->rq, true);
-- 
2.39.2


* [PATCH 4/4] xen-blkfront: atomically update queue limits
  2024-02-21 12:58 convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
                   ` (2 preceding siblings ...)
  2024-02-21 12:58 ` [PATCH 3/4] xen-blkfront: don't redundantly set max_segments in blkif_recover Christoph Hellwig
@ 2024-02-21 12:58 ` Christoph Hellwig
  2024-02-27 15:46 ` convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
  2024-02-27 21:21 ` Jens Axboe
  5 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-21 12:58 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, Jens Axboe, xen-devel, linux-block

Pass the initial queue limits to blk_mq_alloc_disk, and use
blkif_set_queue_limits together with queue_limits_commit_update to
update the limits on reconnect.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
 drivers/block/xen-blkfront.c | 41 ++++++++++++++++++++----------------
 1 file changed, 23 insertions(+), 18 deletions(-)
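
As a reading aid for the diff below, here is a minimal, driver-agnostic
sketch of the allocation-time half of this conversion: the limits are
filled into a stack-local struct queue_limits and handed to
blk_mq_alloc_disk() instead of being applied to the queue afterwards.
example_alloc_disk() and the limit values are made-up placeholders, not
xen-blkfront code:

#include <linux/blk-mq.h>

/* Sketch only: allocate a gendisk with its queue limits set up front. */
static struct gendisk *example_alloc_disk(struct blk_mq_tag_set *set,
					  void *driver_data)
{
	struct queue_limits lim = { };

	lim.logical_block_size = 512;	/* placeholder values */
	lim.max_hw_sectors = 2048;
	lim.max_segments = 32;

	/* Limits are validated and applied as part of disk allocation. */
	return blk_mq_alloc_disk(set, &lim, driver_data);
}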

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 7664638a0abbfa..fd7c0ff2139cee 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -941,37 +941,35 @@ static const struct blk_mq_ops blkfront_mq_ops = {
 	.complete = blkif_complete_rq,
 };
 
-static void blkif_set_queue_limits(struct blkfront_info *info)
+static void blkif_set_queue_limits(const struct blkfront_info *info,
+		struct queue_limits *lim)
 {
-	struct request_queue *rq = info->rq;
 	unsigned int segments = info->max_indirect_segments ? :
 				BLKIF_MAX_SEGMENTS_PER_REQUEST;
 
-	blk_queue_flag_set(QUEUE_FLAG_VIRT, rq);
-
 	if (info->feature_discard) {
-		blk_queue_max_discard_sectors(rq, UINT_MAX);
+		lim->max_hw_discard_sectors = UINT_MAX;
 		if (info->discard_granularity)
-			rq->limits.discard_granularity = info->discard_granularity;
-		rq->limits.discard_alignment = info->discard_alignment;
+			lim->discard_granularity = info->discard_granularity;
+		lim->discard_alignment = info->discard_alignment;
 		if (info->feature_secdiscard)
-			blk_queue_max_secure_erase_sectors(rq, UINT_MAX);
+			lim->max_secure_erase_sectors = UINT_MAX;
 	}
 
 	/* Hard sector size and max sectors impersonate the equiv. hardware. */
-	blk_queue_logical_block_size(rq, info->sector_size);
-	blk_queue_physical_block_size(rq, info->physical_sector_size);
-	blk_queue_max_hw_sectors(rq, (segments * XEN_PAGE_SIZE) / 512);
+	lim->logical_block_size = info->sector_size;
+	lim->physical_block_size = info->physical_sector_size;
+	lim->max_hw_sectors = (segments * XEN_PAGE_SIZE) / 512;
 
 	/* Each segment in a request is up to an aligned page in size. */
-	blk_queue_segment_boundary(rq, PAGE_SIZE - 1);
-	blk_queue_max_segment_size(rq, PAGE_SIZE);
+	lim->seg_boundary_mask = PAGE_SIZE - 1;
+	lim->max_segment_size = PAGE_SIZE;
 
 	/* Ensure a merged request will fit in a single I/O ring slot. */
-	blk_queue_max_segments(rq, segments / GRANTS_PER_PSEG);
+	lim->max_segments = segments / GRANTS_PER_PSEG;
 
 	/* Make sure buffer addresses are sector-aligned. */
-	blk_queue_dma_alignment(rq, 511);
+	lim->dma_alignment = 511;
 }
 
 static const char *flush_info(struct blkfront_info *info)
@@ -1068,6 +1066,7 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 		struct blkfront_info *info, u16 sector_size,
 		unsigned int physical_sector_size)
 {
+	struct queue_limits lim = {};
 	struct gendisk *gd;
 	int nr_minors = 1;
 	int err;
@@ -1134,11 +1133,13 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	if (err)
 		goto out_release_minors;
 
-	gd = blk_mq_alloc_disk(&info->tag_set, NULL, info);
+	blkif_set_queue_limits(info, &lim);
+	gd = blk_mq_alloc_disk(&info->tag_set, &lim, info);
 	if (IS_ERR(gd)) {
 		err = PTR_ERR(gd);
 		goto out_free_tag_set;
 	}
+	blk_queue_flag_set(QUEUE_FLAG_VIRT, gd->queue);
 
 	strcpy(gd->disk_name, DEV_NAME);
 	ptr = encode_disk_name(gd->disk_name + sizeof(DEV_NAME) - 1, offset);
@@ -1160,7 +1161,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	info->gd = gd;
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
-	blkif_set_queue_limits(info);
 
 	xlvbd_flush(info);
 
@@ -2004,14 +2004,19 @@ static int blkfront_probe(struct xenbus_device *dev,
 
 static int blkif_recover(struct blkfront_info *info)
 {
+	struct queue_limits lim;
 	unsigned int r_index;
 	struct request *req, *n;
 	int rc;
 	struct bio *bio;
 	struct blkfront_ring_info *rinfo;
 
+	lim = queue_limits_start_update(info->rq);
 	blkfront_gather_backend_features(info);
-	blkif_set_queue_limits(info);
+	blkif_set_queue_limits(info, &lim);
+	rc = queue_limits_commit_update(info->rq, &lim);
+	if (rc)
+		return rc;
 
 	for_each_rinfo(info, rinfo, r_index) {
 		rc = blkfront_setup_indirect(rinfo);
-- 
2.39.2


* Re: convert xen-blkfront to atomic queue limit updates v2
  2024-02-21 12:58 convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
                   ` (3 preceding siblings ...)
  2024-02-21 12:58 ` [PATCH 4/4] xen-blkfront: atomically update queue limits Christoph Hellwig
@ 2024-02-27 15:46 ` Christoph Hellwig
  2024-02-27 21:21 ` Jens Axboe
  5 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-27 15:46 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, Jens Axboe, xen-devel, linux-block

Jens,

can you pick this up with the Xen maintainer review in place?


* Re: convert xen-blkfront to atomic queue limit updates v2
  2024-02-21 12:58 convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
                   ` (4 preceding siblings ...)
  2024-02-27 15:46 ` convert xen-blkfront to atomic queue limit updates v2 Christoph Hellwig
@ 2024-02-27 21:21 ` Jens Axboe
  5 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2024-02-27 21:21 UTC (permalink / raw)
  To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Roger Pau Monné, xen-devel, linux-block, Christoph Hellwig


On Wed, 21 Feb 2024 13:58:41 +0100, Christoph Hellwig wrote:
> this series converts xen-blkfront to the new atomic queue limits update
> API in the block tree.  I don't have a Xen setup so this is compile
> tested only.
> 
> Changes since v1:
>  - constify the info argument to blkif_set_queue_limits
>  - remove a spurious word from a commit message
> 
> [...]

Applied, thanks!

[1/4] xen-blkfront: set max_discard/secure erase limits to UINT_MAX
      commit: 4a718d7dbab873bc24034fc865d3a5442632d1fd
[2/4] xen-blkfront: rely on the default discard granularity
      commit: 738be136327a56e5a67e1942a2c318fb91914a3f
 [3/4] xen-blkfront: don't redundantly set max_segments in blkif_recover
      commit: 4f81b87d91be2a00195f85847d040c2276cac2ae
[4/4] xen-blkfront: atomically update queue limits
      commit: ba3f67c1163812b5d7ec33705c31edaa30ce6c51

Best regards,
-- 
Jens Axboe