linux-ide.vger.kernel.org archive mirror
* queue_limits fixups and tidyups v2
@ 2024-06-25 14:59 Christoph Hellwig
  2024-06-25 14:59 ` [PATCH 1/8] md: set md-specific flags for all queue limits Christoph Hellwig
                   ` (7 more replies)
  0 siblings, 8 replies; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

Hi Jens,

this series has a few fixes for the queue_limits conversion in the first
few patches, followed by a bunch more cleanups and improvements in that
area.

Changes since v1:
 - remove an incorrect use of a flag on the features field
 - add a patch to switch to __bitwise annotations for features and flags
 - update a commit log

Diffstat:
 block/bio-integrity.c     |    2 
 block/blk-map.c           |    2 
 block/blk-settings.c      |   46 +++++----------------
 block/blk-sysfs.c         |    9 ++--
 block/blk.h               |    2 
 block/genhd.c             |    2 
 drivers/ata/libata-scsi.c |    3 -
 drivers/ata/pata_macio.c  |    4 -
 drivers/md/md.c           |   13 +++--
 drivers/md/md.h           |    1 
 drivers/md/raid0.c        |    2 
 drivers/md/raid1.c        |    2 
 drivers/md/raid10.c       |    2 
 drivers/md/raid5.c        |    2 
 drivers/scsi/scsi_lib.c   |    4 -
 drivers/ufs/core/ufshcd.c |    9 ++--
 include/linux/blkdev.h    |  100 +++++++++++++++++++++++-----------------------
 17 files changed, 96 insertions(+), 109 deletions(-)

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/8] md: set md-specific flags for all queue limits
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:21   ` Damien Le Moal
  2024-06-25 14:59 ` [PATCH 2/8] block: correctly report cache type Christoph Hellwig
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi, kernel test robot

The md driver wants to enforce a number of flags to an all devices, even
when not inheriting them from the underlying devices.  To make sure these
flags survive the queue_limits_set calls that md uses to update the
queue limits without deriving them from the previous limits, add a new
md_init_stacking_limits helper that calls blk_set_stacking_limits and sets
these flags.
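
For illustration only (this snippet is not part of the patch; the
queue_limits_set() call site is assumed from the existing raid
personality code updated below), the calling pattern this enables is:

        struct queue_limits lim;
        int err;

        /* seed the stacking defaults plus the md-required feature flags */
        md_init_stacking_limits(&lim);

        /* layer the personality-specific limits on top */
        lim.io_min = mddev->chunk_sectors << 9;

        /* queue_limits_set replaces the limits wholesale, so the flags
         * only survive because they are carried in lim */
        err = queue_limits_set(mddev->gendisk->queue, &lim);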

Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/md.c     | 13 ++++++++-----
 drivers/md/md.h     |  1 +
 drivers/md/raid0.c  |  2 +-
 drivers/md/raid1.c  |  2 +-
 drivers/md/raid10.c |  2 +-
 drivers/md/raid5.c  |  2 +-
 6 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 69ea54aedd99a1..8368438e58e989 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5853,6 +5853,13 @@ static void mddev_delayed_delete(struct work_struct *ws)
 	kobject_put(&mddev->kobj);
 }
 
+void md_init_stacking_limits(struct queue_limits *lim)
+{
+	blk_set_stacking_limits(lim);
+	lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+			BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+}
+
 struct mddev *md_alloc(dev_t dev, char *name)
 {
 	/*
@@ -5871,10 +5878,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int shift;
 	int unit;
 	int error;
-	struct queue_limits lim = {
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
-	};
 
 	/*
 	 * Wait for any previous instance of this device to be completely
@@ -5914,7 +5917,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 		 */
 		mddev->hold_active = UNTIL_STOP;
 
-	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
+	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		error = PTR_ERR(disk);
 		goto out_free_mddev;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index c4d7ebf9587d07..28cb4b0b6c1740 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -893,6 +893,7 @@ extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
 
 extern int mddev_init(struct mddev *mddev);
 extern void mddev_destroy(struct mddev *mddev);
+void md_init_stacking_limits(struct queue_limits *lim);
 struct mddev *md_alloc(dev_t dev, char *name);
 void mddev_put(struct mddev *mddev);
 extern int md_run(struct mddev *mddev);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 62634e2a33bd0f..32d58752477847 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -379,7 +379,7 @@ static int raid0_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_hw_sectors = mddev->chunk_sectors;
 	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
 	lim.io_min = mddev->chunk_sectors << 9;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 1a0eba65b8a92b..04a0c2ca173245 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3194,7 +3194,7 @@ static int raid1_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
 	if (err) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 3334aa803c8380..2a9c4ee982e023 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3974,7 +3974,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0192a6323f09ba..10219205160bbf 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7708,7 +7708,7 @@ static int raid5_set_limits(struct mddev *mddev)
 	 */
 	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
 	lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 2/8] block: correctly report cache type
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
  2024-06-25 14:59 ` [PATCH 1/8] md: set md-specific flags for all queue limits Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:22   ` Damien Le Moal
  2024-06-26  7:59   ` John Garry
  2024-06-25 14:59 ` [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED Christoph Hellwig
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

Check the features flag and the override flag, otherwise we're going to
always report "write through".
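
Spelled out as an illustration (not part of the patch; both constants
are the ones used in the hunk below), the reported state should be:

        /* "write back" only if a write cache exists and is not overridden */
        bool write_back = (q->limits.features & BLK_FEAT_WRITE_CACHE) &&
                          !(q->limits.flags & BLK_FLAG_WRITE_CACHE_DISABLED);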

Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-sysfs.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 1a984179f3acc5..6db65886e7ed5a 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -453,7 +453,8 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_wc_show(struct request_queue *q, char *page)
 {
-	if (q->limits.features & BLK_FLAG_WRITE_CACHE_DISABLED)
+	if (!(q->limits.features & BLK_FEAT_WRITE_CACHE) ||
+	    (q->limits.flags & BLK_FLAG_WRITE_CACHE_DISABLED))
 		return sprintf(page, "write through\n");
 	return sprintf(page, "write back\n");
 }
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
  2024-06-25 14:59 ` [PATCH 1/8] md: set md-specific flags for all queue limits Christoph Hellwig
  2024-06-25 14:59 ` [PATCH 2/8] block: correctly report cache type Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:23   ` Damien Le Moal
  2024-06-26  8:07   ` John Garry
  2024-06-25 14:59 ` [PATCH 4/8] block: convert features and flags to __bitwise types Christoph Hellwig
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

This is a flag for ->flags and not a feature for ->features.  And fix the
one place that actually incorrectly cleared it from ->features.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-settings.c   | 18 +++++++++---------
 include/linux/blkdev.h |  2 +-
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index ec7dbe93d5c324..ed39a55c5bae7c 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -351,7 +351,7 @@ static int blk_validate_limits(struct queue_limits *lim)
 
 	if (lim->alignment_offset) {
 		lim->alignment_offset &= (lim->physical_block_size - 1);
-		lim->features &= ~BLK_FEAT_MISALIGNED;
+		lim->flags &= ~BLK_FLAG_MISALIGNED;
 	}
 
 	if (!(lim->features & BLK_FEAT_WRITE_CACHE))
@@ -564,7 +564,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	if (!(b->features & BLK_FEAT_POLL))
 		t->features &= ~BLK_FEAT_POLL;
 
-	t->flags |= (b->flags & BLK_FEAT_MISALIGNED);
+	t->flags |= (b->flags & BLK_FLAG_MISALIGNED);
 
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
@@ -603,7 +603,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 
 		/* Verify that top and bottom intervals line up */
 		if (max(top, bottom) % min(top, bottom)) {
-			t->flags |= BLK_FEAT_MISALIGNED;
+			t->flags |= BLK_FLAG_MISALIGNED;
 			ret = -1;
 		}
 	}
@@ -625,28 +625,28 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	/* Physical block size a multiple of the logical block size? */
 	if (t->physical_block_size & (t->logical_block_size - 1)) {
 		t->physical_block_size = t->logical_block_size;
-		t->flags |= BLK_FEAT_MISALIGNED;
+		t->flags |= BLK_FLAG_MISALIGNED;
 		ret = -1;
 	}
 
 	/* Minimum I/O a multiple of the physical block size? */
 	if (t->io_min & (t->physical_block_size - 1)) {
 		t->io_min = t->physical_block_size;
-		t->flags |= BLK_FEAT_MISALIGNED;
+		t->flags |= BLK_FLAG_MISALIGNED;
 		ret = -1;
 	}
 
 	/* Optimal I/O a multiple of the physical block size? */
 	if (t->io_opt & (t->physical_block_size - 1)) {
 		t->io_opt = 0;
-		t->flags |= BLK_FEAT_MISALIGNED;
+		t->flags |= BLK_FLAG_MISALIGNED;
 		ret = -1;
 	}
 
 	/* chunk_sectors a multiple of the physical block size? */
 	if ((t->chunk_sectors << 9) & (t->physical_block_size - 1)) {
 		t->chunk_sectors = 0;
-		t->flags |= BLK_FEAT_MISALIGNED;
+		t->flags |= BLK_FLAG_MISALIGNED;
 		ret = -1;
 	}
 
@@ -656,7 +656,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 
 	/* Verify that new alignment_offset is on a logical block boundary */
 	if (t->alignment_offset & (t->logical_block_size - 1)) {
-		t->flags |= BLK_FEAT_MISALIGNED;
+		t->flags |= BLK_FLAG_MISALIGNED;
 		ret = -1;
 	}
 
@@ -809,7 +809,7 @@ int bdev_alignment_offset(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
 
-	if (q->limits.flags & BLK_FEAT_MISALIGNED)
+	if (q->limits.flags & BLK_FLAG_MISALIGNED)
 		return -1;
 	if (bdev_is_partition(bdev))
 		return queue_limit_alignment_offset(&q->limits,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b2f1362c46814f..1a7e9d9c16d78b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -347,7 +347,7 @@ enum {
 	BLK_FLAG_WRITE_CACHE_DISABLED		= (1u << 0),
 
 	/* I/O topology is misaligned */
-	BLK_FEAT_MISALIGNED			= (1u << 1),
+	BLK_FLAG_MISALIGNED			= (1u << 1),
 };
 
 struct queue_limits {
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 4/8] block: convert features and flags to __bitwise types
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
                   ` (2 preceding siblings ...)
  2024-06-25 14:59 ` [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:27   ` Damien Le Moal
  2024-06-25 14:59 ` [PATCH 5/8] block: coding style fixup for blk_queue_max_guaranteed_bio Christoph Hellwig
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

... and let sparse help us catch mismatches or abuses.
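
As an illustration of what this buys us (the broken assignment below is
hypothetical and only shows the class of bug sparse can now catch):

        struct queue_limits lim = { };

        /* feature constant stored into the flags field by mistake */
        lim.flags |= BLK_FEAT_WRITE_CACHE;

        /* sparse now warns about mixing blk_features_t and blk_flags_t
         * here; plain unsigned int fields silently accepted this */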

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-sysfs.c      |  6 +--
 include/linux/blkdev.h | 85 +++++++++++++++++++++---------------------
 2 files changed, 46 insertions(+), 45 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 6db65886e7ed5a..2d033275da6ea4 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -288,7 +288,7 @@ static ssize_t queue_dma_alignment_show(struct request_queue *q, char *page)
 }
 
 static ssize_t queue_feature_store(struct request_queue *q, const char *page,
-		size_t count, unsigned int feature)
+		size_t count, blk_features_t feature)
 {
 	struct queue_limits lim;
 	unsigned long val;
@@ -418,7 +418,7 @@ static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_poll_show(struct request_queue *q, char *page)
 {
-	return queue_var_show(q->limits.features & BLK_FEAT_POLL, page);
+	return queue_var_show(!!(q->limits.features & BLK_FEAT_POLL), page);
 }
 
 static ssize_t queue_poll_store(struct request_queue *q, const char *page,
@@ -493,7 +493,7 @@ static ssize_t queue_fua_show(struct request_queue *q, char *page)
 
 static ssize_t queue_dax_show(struct request_queue *q, char *page)
 {
-	return queue_var_show(blk_queue_dax(q), page);
+	return queue_var_show(!!blk_queue_dax(q), page);
 }
 
 #define QUEUE_RO_ENTRY(_prefix, _name)			\
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1a7e9d9c16d78b..b37826b350a2e3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -283,55 +283,56 @@ static inline bool blk_op_is_passthrough(blk_opf_t op)
 }
 
 /* flags set by the driver in queue_limits.features */
-enum {
-	/* supports a volatile write cache */
-	BLK_FEAT_WRITE_CACHE			= (1u << 0),
+typedef unsigned int __bitwise blk_features_t;
 
-	/* supports passing on the FUA bit */
-	BLK_FEAT_FUA				= (1u << 1),
+/* supports a volatile write cache */
+#define BLK_FEAT_WRITE_CACHE		((__force blk_features_t)(1u << 0))
 
-	/* rotational device (hard drive or floppy) */
-	BLK_FEAT_ROTATIONAL			= (1u << 2),
+/* supports passing on the FUA bit */
+#define BLK_FEAT_FUA			((__force blk_features_t)(1u << 1))
 
-	/* contributes to the random number pool */
-	BLK_FEAT_ADD_RANDOM			= (1u << 3),
+/* rotational device (hard drive or floppy) */
+#define BLK_FEAT_ROTATIONAL		((__force blk_features_t)(1u << 2))
 
-	/* do disk/partitions IO accounting */
-	BLK_FEAT_IO_STAT			= (1u << 4),
+/* contributes to the random number pool */
+#define BLK_FEAT_ADD_RANDOM		((__force blk_features_t)(1u << 3))
 
-	/* don't modify data until writeback is done */
-	BLK_FEAT_STABLE_WRITES			= (1u << 5),
+/* do disk/partitions IO accounting */
+#define BLK_FEAT_IO_STAT		((__force blk_features_t)(1u << 4))
 
-	/* always completes in submit context */
-	BLK_FEAT_SYNCHRONOUS			= (1u << 6),
+/* don't modify data until writeback is done */
+#define BLK_FEAT_STABLE_WRITES		((__force blk_features_t)(1u << 5))
 
-	/* supports REQ_NOWAIT */
-	BLK_FEAT_NOWAIT				= (1u << 7),
+/* always completes in submit context */
+#define BLK_FEAT_SYNCHRONOUS		((__force blk_features_t)(1u << 6))
 
-	/* supports DAX */
-	BLK_FEAT_DAX				= (1u << 8),
+/* supports REQ_NOWAIT */
+#define BLK_FEAT_NOWAIT			((__force blk_features_t)(1u << 7))
 
-	/* supports I/O polling */
-	BLK_FEAT_POLL				= (1u << 9),
+/* supports DAX */
+#define BLK_FEAT_DAX			((__force blk_features_t)(1u << 8))
 
-	/* is a zoned device */
-	BLK_FEAT_ZONED				= (1u << 10),
+/* supports I/O polling */
+#define BLK_FEAT_POLL			((__force blk_features_t)(1u << 9))
 
-	/* supports Zone Reset All */
-	BLK_FEAT_ZONE_RESETALL			= (1u << 11),
+/* is a zoned device */
+#define BLK_FEAT_ZONED			((__force blk_features_t)(1u << 10))
 
-	/* supports PCI(e) p2p requests */
-	BLK_FEAT_PCI_P2PDMA			= (1u << 12),
+/* supports Zone Reset All */
+#define BLK_FEAT_ZONE_RESETALL		((__force blk_features_t)(1u << 11))
 
-	/* skip this queue in blk_mq_(un)quiesce_tagset */
-	BLK_FEAT_SKIP_TAGSET_QUIESCE		= (1u << 13),
+/* supports PCI(e) p2p requests */
+#define BLK_FEAT_PCI_P2PDMA		((__force blk_features_t)(1u << 12))
 
-	/* bounce all highmem pages */
-	BLK_FEAT_BOUNCE_HIGH			= (1u << 14),
+/* skip this queue in blk_mq_(un)quiesce_tagset */
+#define BLK_FEAT_SKIP_TAGSET_QUIESCE	((__force blk_features_t)(1u << 13))
 
-	/* undocumented magic for bcache */
-	BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE	= (1u << 15),
-};
+/* bounce all highmem pages */
+#define BLK_FEAT_BOUNCE_HIGH		((__force blk_features_t)(1u << 14))
+
+/* undocumented magic for bcache */
+#define BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE \
+	((__force blk_features_t)(1u << 15))
 
 /*
  * Flags automatically inherited when stacking limits.
@@ -342,17 +343,17 @@ enum {
 	 BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE)
 
 /* internal flags in queue_limits.flags */
-enum {
-	/* do not send FLUSH/FUA commands despite advertising a write cache */
-	BLK_FLAG_WRITE_CACHE_DISABLED		= (1u << 0),
+typedef unsigned int __bitwise blk_flags_t;
 
-	/* I/O topology is misaligned */
-	BLK_FLAG_MISALIGNED			= (1u << 1),
-};
+/* do not send FLUSH/FUA commands despite advertising a write cache */
+#define BLK_FLAG_WRITE_CACHE_DISABLED	((__force blk_flags_t)(1u << 0))
+
+/* I/O topology is misaligned */
+#define BLK_FLAG_MISALIGNED		((__force blk_flags_t)(1u << 1))
 
 struct queue_limits {
-	unsigned int		features;
-	unsigned int		flags;
+	blk_features_t		features;
+	blk_flags_t		flags;
 	unsigned long		seg_boundary_mask;
 	unsigned long		virt_boundary_mask;
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 5/8] block: coding style fixup for blk_queue_max_guaranteed_bio
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
                   ` (3 preceding siblings ...)
  2024-06-25 14:59 ` [PATCH 4/8] block: convert features and flags to __bitwise types Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:27   ` Damien Le Moal
  2024-06-25 14:59 ` [PATCH 6/8] block: remove disk_update_readahead Christoph Hellwig
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi, John Garry

"static" never goes on a line of its own.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Garry <john.g.garry@oracle.com>
---
 block/blk-settings.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index ed39a55c5bae7c..c2221b7406d46a 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -142,8 +142,7 @@ static int blk_validate_integrity_limits(struct queue_limits *lim)
  * so we assume that we can fit in at least PAGE_SIZE in a segment, apart from
  * the first and last segments.
  */
-static
-unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *lim)
+static unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *lim)
 {
 	unsigned int max_segments = min(BIO_MAX_VECS, lim->max_segments);
 	unsigned int length;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 6/8] block: remove disk_update_readahead
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
                   ` (4 preceding siblings ...)
  2024-06-25 14:59 ` [PATCH 5/8] block: coding style fixup for blk_queue_max_guaranteed_bio Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:29   ` Damien Le Moal
  2024-06-25 14:59 ` [PATCH 7/8] block: remove the fallback case in queue_dma_alignment Christoph Hellwig
  2024-06-25 14:59 ` [PATCH 8/8] block: move dma_pad_mask into queue_limits Christoph Hellwig
  7 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

Mark blk_apply_bdi_limits non-static and open code disk_update_readahead
in the only caller.
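
For context, a sketch of what blk_apply_bdi_limits() does (paraphrased
from blk-settings.c, not part of this patch), so the open-coded call in
device_add_disk() below is easier to follow:

        /* read-ahead wants at least twice the optimal I/O size */
        bdi->ra_pages = max(lim->io_opt * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
        bdi->io_pages = lim->max_sectors >> PAGE_SECTORS_SHIFT;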

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-settings.c   | 8 +-------
 block/blk.h            | 2 ++
 block/genhd.c          | 2 +-
 include/linux/blkdev.h | 1 -
 4 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index c2221b7406d46a..c692e80bb4f890 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -55,7 +55,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
 
-static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
+void blk_apply_bdi_limits(struct backing_dev_info *bdi,
 		struct queue_limits *lim)
 {
 	/*
@@ -434,12 +434,6 @@ int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
 }
 EXPORT_SYMBOL_GPL(queue_limits_set);
 
-void disk_update_readahead(struct gendisk *disk)
-{
-	blk_apply_bdi_limits(disk->bdi, &disk->queue->limits);
-}
-EXPORT_SYMBOL_GPL(disk_update_readahead);
-
 /**
  * blk_limits_io_min - set minimum request size for a device
  * @limits: the queue limits
diff --git a/block/blk.h b/block/blk.h
index d0a986d8ee507e..95e5a4f81693c4 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -358,6 +358,8 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio);
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio);
 
 int blk_set_default_limits(struct queue_limits *lim);
+void blk_apply_bdi_limits(struct backing_dev_info *bdi,
+		struct queue_limits *lim);
 int blk_dev_init(void);
 
 /*
diff --git a/block/genhd.c b/block/genhd.c
index 8f1f3c6b4d6729..4dc95a46350532 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -524,7 +524,7 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
 		disk->part0->bd_dev = MKDEV(disk->major, disk->first_minor);
 	}
 
-	disk_update_readahead(disk);
+	blk_apply_bdi_limits(disk->bdi, &disk->queue->limits);
 	disk_add_events(disk);
 	set_bit(GD_ADDED, &disk->state);
 	return 0;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b37826b350a2e3..6b88382012e958 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -973,7 +973,6 @@ static inline void blk_queue_disable_write_zeroes(struct request_queue *q)
 /*
  * Access functions for manipulating queue properties
  */
-void disk_update_readahead(struct gendisk *disk);
 extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min);
 extern void blk_limits_io_opt(struct queue_limits *limits, unsigned int opt);
 extern void blk_set_queue_depth(struct request_queue *q, unsigned int depth);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 7/8] block: remove the fallback case in queue_dma_alignment
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
                   ` (5 preceding siblings ...)
  2024-06-25 14:59 ` [PATCH 6/8] block: remove disk_update_readahead Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 15:07   ` John Garry
  2024-06-25 21:29   ` Damien Le Moal
  2024-06-25 14:59 ` [PATCH 8/8] block: move dma_pad_mask into queue_limits Christoph Hellwig
  7 siblings, 2 replies; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

Now that all updates go through blk_validate_limits the default of 511
is set at initialization time.  Also remove the unused NULL check as
calling this helper on a NULL queue can't happen (and doesn't make
much sense to start with).
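
For illustration, the initialization this relies on looks roughly like
the following (the exact check in blk_validate_limits() is assumed, not
part of this patch):

        /* every validated queue ends up with a usable alignment */
        if (!lim->dma_alignment)
                lim->dma_alignment = SECTOR_SIZE - 1;   /* i.e. 511 */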

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/blkdev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6b88382012e958..94fcbc91231208 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1394,7 +1394,7 @@ static inline bool bdev_is_zone_start(struct block_device *bdev,
 
 static inline int queue_dma_alignment(const struct request_queue *q)
 {
-	return q ? q->limits.dma_alignment : 511;
+	return q->limits.dma_alignment;
 }
 
 static inline unsigned int
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 8/8] block: move dma_pad_mask into queue_limits
  2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
                   ` (6 preceding siblings ...)
  2024-06-25 14:59 ` [PATCH 7/8] block: remove the fallback case in queue_dma_alignment Christoph Hellwig
@ 2024-06-25 14:59 ` Christoph Hellwig
  2024-06-25 21:31   ` Damien Le Moal
  7 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-25 14:59 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

dma_pad_mask is a queue_limits by all ways of looking at it, so move it
there and set it through the atomic queue limits APIs.

Add a little helper that takes the alignment and pad into account to
simply the code that is touched a bit.

Note that there never was any need for the > check in
blk_queue_update_dma_pad, this probably was just copy and paste from
dma_update_dma_alignment.
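
As an illustration of the combined use after this patch (it mirrors the
blk_lim_dma_alignment_and_pad() helper added below; the driver-side
value is just an example taken from libata):

        /* driver side, e.g. in a ->device_configure() method */
        lim->dma_pad_mask = ATA_DMA_PAD_SZ - 1;

        /* block layer side: one mask covers both constraints */
        unsigned int align = lim->dma_alignment | lim->dma_pad_mask;
        bool aligned = !(addr & align) && !(len & align);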

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio-integrity.c     |  2 +-
 block/blk-map.c           |  2 +-
 block/blk-settings.c      | 17 -----------------
 drivers/ata/libata-scsi.c |  3 +--
 drivers/ata/pata_macio.c  |  4 ++--
 drivers/scsi/scsi_lib.c   |  4 ++--
 drivers/ufs/core/ufshcd.c |  9 +++++----
 include/linux/blkdev.h    | 12 ++++++++----
 8 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index 173ffd4d623788..356ca0d3d62f5a 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -312,7 +312,7 @@ int bio_integrity_map_user(struct bio *bio, void __user *ubuf, ssize_t bytes,
 			   u32 seed)
 {
 	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
-	unsigned int align = q->dma_pad_mask | queue_dma_alignment(q);
+	unsigned int align = blk_lim_dma_alignment_and_pad(&q->limits);
 	struct page *stack_pages[UIO_FASTIOV], **pages = stack_pages;
 	struct bio_vec stack_vec[UIO_FASTIOV], *bvec = stack_vec;
 	unsigned int direction, nr_bvecs;
diff --git a/block/blk-map.c b/block/blk-map.c
index 71210cdb34426d..bce144091128f6 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -634,7 +634,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 			const struct iov_iter *iter, gfp_t gfp_mask)
 {
 	bool copy = false, map_bvec = false;
-	unsigned long align = q->dma_pad_mask | queue_dma_alignment(q);
+	unsigned long align = blk_lim_dma_alignment_and_pad(&q->limits);
 	struct bio *bio = NULL;
 	struct iov_iter i;
 	int ret = -EINVAL;
diff --git a/block/blk-settings.c b/block/blk-settings.c
index c692e80bb4f890..2e559cf97cc834 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -768,23 +768,6 @@ bool queue_limits_stack_integrity(struct queue_limits *t,
 }
 EXPORT_SYMBOL_GPL(queue_limits_stack_integrity);
 
-/**
- * blk_queue_update_dma_pad - update pad mask
- * @q:     the request queue for the device
- * @mask:  pad mask
- *
- * Update dma pad mask.
- *
- * Appending pad buffer to a request modifies the last entry of a
- * scatter list such that it includes the pad buffer.
- **/
-void blk_queue_update_dma_pad(struct request_queue *q, unsigned int mask)
-{
-	if (mask > q->dma_pad_mask)
-		q->dma_pad_mask = mask;
-}
-EXPORT_SYMBOL(blk_queue_update_dma_pad);
-
 /**
  * blk_set_queue_depth - tell the block layer about the device queue depth
  * @q:		the request queue for the device
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index cdf29b178ddc1e..682971c4cbe418 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -1024,7 +1024,6 @@ EXPORT_SYMBOL_GPL(ata_scsi_dma_need_drain);
 int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
 		struct ata_device *dev)
 {
-	struct request_queue *q = sdev->request_queue;
 	int depth = 1;
 
 	if (!ata_id_has_unload(dev->id))
@@ -1038,7 +1037,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
 		sdev->sector_size = ATA_SECT_SIZE;
 
 		/* set DMA padding */
-		blk_queue_update_dma_pad(q, ATA_DMA_PAD_SZ - 1);
+		lim->dma_pad_mask = ATA_DMA_PAD_SZ - 1;
 
 		/* make room for appending the drain */
 		lim->max_segments--;
diff --git a/drivers/ata/pata_macio.c b/drivers/ata/pata_macio.c
index 3cb455a32d9266..1b85e8bf4ef91b 100644
--- a/drivers/ata/pata_macio.c
+++ b/drivers/ata/pata_macio.c
@@ -816,7 +816,7 @@ static int pata_macio_device_configure(struct scsi_device *sdev,
 	/* OHare has issues with non cache aligned DMA on some chipsets */
 	if (priv->kind == controller_ohare) {
 		lim->dma_alignment = 31;
-		blk_queue_update_dma_pad(sdev->request_queue, 31);
+		lim->dma_pad_mask = 31;
 
 		/* Tell the world about it */
 		ata_dev_info(dev, "OHare alignment limits applied\n");
@@ -831,7 +831,7 @@ static int pata_macio_device_configure(struct scsi_device *sdev,
 	if (priv->kind == controller_sh_ata6 || priv->kind == controller_k2_ata6) {
 		/* Allright these are bad, apply restrictions */
 		lim->dma_alignment = 15;
-		blk_queue_update_dma_pad(sdev->request_queue, 15);
+		lim->dma_pad_mask = 15;
 
 		/* We enable MWI and hack cache line size directly here, this
 		 * is specific to this chipset and not normal values, we happen
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index e2f7bfb2b9e450..3958a6d14bf457 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1139,9 +1139,9 @@ blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd)
 	 */
 	count = __blk_rq_map_sg(rq->q, rq, cmd->sdb.table.sgl, &last_sg);
 
-	if (blk_rq_bytes(rq) & rq->q->dma_pad_mask) {
+	if (blk_rq_bytes(rq) & rq->q->limits.dma_pad_mask) {
 		unsigned int pad_len =
-			(rq->q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
+			(rq->q->limits.dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
 
 		last_sg->length += pad_len;
 		cmd->extra_len += pad_len;
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 0cf07194bbe89d..62d20eef13537d 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -5193,17 +5193,18 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
 }
 
 /**
- * ufshcd_slave_configure - adjust SCSI device configurations
+ * ufshcd_device_configure - adjust SCSI device configurations
  * @sdev: pointer to SCSI device
  *
  * Return: 0 (success).
  */
-static int ufshcd_slave_configure(struct scsi_device *sdev)
+static int ufshcd_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct ufs_hba *hba = shost_priv(sdev->host);
 	struct request_queue *q = sdev->request_queue;
 
-	blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
+	lim->dma_pad_mask = PRDT_DATA_BYTE_COUNT_PAD - 1;
 
 	/*
 	 * Block runtime-pm until all consumers are added.
@@ -8907,7 +8908,7 @@ static const struct scsi_host_template ufshcd_driver_template = {
 	.queuecommand		= ufshcd_queuecommand,
 	.mq_poll		= ufshcd_poll,
 	.slave_alloc		= ufshcd_slave_alloc,
-	.slave_configure	= ufshcd_slave_configure,
+	.device_configure	= ufshcd_device_configure,
 	.slave_destroy		= ufshcd_slave_destroy,
 	.change_queue_depth	= ufshcd_change_queue_depth,
 	.eh_abort_handler	= ufshcd_abort,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 94fcbc91231208..a53e3434e1a28c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -401,6 +401,7 @@ struct queue_limits {
 	 * due to possible offsets.
 	 */
 	unsigned int		dma_alignment;
+	unsigned int		dma_pad_mask;
 
 	struct blk_integrity	integrity;
 };
@@ -509,8 +510,6 @@ struct request_queue {
 	 */
 	int			id;
 
-	unsigned int		dma_pad_mask;
-
 	/*
 	 * queue settings
 	 */
@@ -981,7 +980,6 @@ extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			    sector_t offset);
 void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 		sector_t offset, const char *pfx);
-extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
 
 struct blk_independent_access_ranges *
@@ -1433,10 +1431,16 @@ static inline bool bdev_iter_is_aligned(struct block_device *bdev,
 				   bdev_logical_block_size(bdev) - 1);
 }
 
+static inline int blk_lim_dma_alignment_and_pad(struct queue_limits *lim)
+{
+	return lim->dma_alignment | lim->dma_pad_mask;
+}
+
 static inline int blk_rq_aligned(struct request_queue *q, unsigned long addr,
 				 unsigned int len)
 {
-	unsigned int alignment = queue_dma_alignment(q) | q->dma_pad_mask;
+	unsigned int alignment = blk_lim_dma_alignment_and_pad(&q->limits);
+
 	return !(addr & alignment) && !(len & alignment);
 }
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 7/8] block: remove the fallback case in queue_dma_alignment
  2024-06-25 14:59 ` [PATCH 7/8] block: remove the fallback case in queue_dma_alignment Christoph Hellwig
@ 2024-06-25 15:07   ` John Garry
  2024-06-25 21:29   ` Damien Le Moal
  1 sibling, 0 replies; 21+ messages in thread
From: John Garry @ 2024-06-25 15:07 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

On 25/06/2024 15:59, Christoph Hellwig wrote:
> Now that all updates go through blk_validate_limits the default of 511
> is set at initialization time.  Also remove the unused NULL check as
> calling this helper on a NULL queue can't happen (and doesn't make
> much sense to start with).
> 
> Signed-off-by: Christoph Hellwig<hch@lst.de>

Reviewed-by: John Garry <john.g.garry@oracle.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/8] md: set md-specific flags for all queue limits
  2024-06-25 14:59 ` [PATCH 1/8] md: set md-specific flags for all queue limits Christoph Hellwig
@ 2024-06-25 21:21   ` Damien Le Moal
  0 siblings, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:21 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi, kernel test robot

On 6/25/24 23:59, Christoph Hellwig wrote:
> The md driver wants to enforce a number of flags to an all devices, even

s/to an/for

> when not inheriting them from the underlying devices.  To make sure these
> flags survive the queue_limits_set calls that md uses to update the
> queue limits without deriving them from the previous limits, add a new
> md_init_stacking_limits helper that calls blk_set_stacking_limits and sets
> these flags.
> 
> Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Other than the above nit, looks good to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/8] block: correctly report cache type
  2024-06-25 14:59 ` [PATCH 2/8] block: correctly report cache type Christoph Hellwig
@ 2024-06-25 21:22   ` Damien Le Moal
  2024-06-26  7:59   ` John Garry
  1 sibling, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:22 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi

On 6/25/24 23:59, Christoph Hellwig wrote:
> Check the features flag and the override flag, otherwise we're going to
> always report "write through".
> 
> Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED
  2024-06-25 14:59 ` [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED Christoph Hellwig
@ 2024-06-25 21:23   ` Damien Le Moal
  2024-06-26  8:07   ` John Garry
  1 sibling, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:23 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi

On 6/25/24 23:59, Christoph Hellwig wrote:
> This is a flag for ->flags and not a feature for ->features.  And fix the
> one place that actually incorrectly cleared it from ->features.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

I think the patch title should be "rename BLK_FEAT_MISALIGNED", but not a big deal.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/8] block: convert features and flags to __bitwise types
  2024-06-25 14:59 ` [PATCH 4/8] block: convert features and flags to __bitwise types Christoph Hellwig
@ 2024-06-25 21:27   ` Damien Le Moal
  0 siblings, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:27 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi

On 6/25/24 23:59, Christoph Hellwig wrote:
> ... and let sparse help us catch mismatches or abuses.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/8] block: coding style fixup for blk_queue_max_guaranteed_bio
  2024-06-25 14:59 ` [PATCH 5/8] block: coding style fixup for blk_queue_max_guaranteed_bio Christoph Hellwig
@ 2024-06-25 21:27   ` Damien Le Moal
  0 siblings, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:27 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi, John Garry

On 6/25/24 23:59, Christoph Hellwig wrote:
> "static" never goes on a line of its own.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: John Garry <john.g.garry@oracle.com>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 6/8] block: remove disk_update_readahead
  2024-06-25 14:59 ` [PATCH 6/8] block: remove disk_update_readahead Christoph Hellwig
@ 2024-06-25 21:29   ` Damien Le Moal
  0 siblings, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:29 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi

On 6/25/24 23:59, Christoph Hellwig wrote:
> Mark blk_apply_bdi_limits non-static and open code disk_update_readahead
> in the only caller.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 7/8] block: remove the fallback case in queue_dma_alignment
  2024-06-25 14:59 ` [PATCH 7/8] block: remove the fallback case in queue_dma_alignment Christoph Hellwig
  2024-06-25 15:07   ` John Garry
@ 2024-06-25 21:29   ` Damien Le Moal
  1 sibling, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:29 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi

On 6/25/24 23:59, Christoph Hellwig wrote:
> Now that all updates go through blk_validate_limits the default of 511
> is set at initialization time.  Also remove the unused NULL check as
> calling this helper on a NULL queue can't happen (and doesn't make
> much sense to start with).
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 8/8] block: move dma_pad_mask into queue_limits
  2024-06-25 14:59 ` [PATCH 8/8] block: move dma_pad_mask into queue_limits Christoph Hellwig
@ 2024-06-25 21:31   ` Damien Le Moal
  0 siblings, 0 replies; 21+ messages in thread
From: Damien Le Moal @ 2024-06-25 21:31 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Niklas Cassel, Song Liu, Yu Kuai, Martin K. Petersen, Alim Akhtar,
	Avri Altman, Bart Van Assche, linux-block, linux-ide, linux-raid,
	linux-scsi

On 6/25/24 23:59, Christoph Hellwig wrote:
> dma_pad_mask is a queue_limits by all ways of looking at it, so move it
> there and set it through the atomic queue limits APIs.
> 
> Add a little helper that takes the alignment and pad into account to
> simply the code that is touched a bit.

s/simply/simplify

> Note that there never was any need for the > check in
> blk_queue_update_dma_pad, this probably was just copy and paste from
> dma_update_dma_alignment.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/8] block: correctly report cache type
  2024-06-25 14:59 ` [PATCH 2/8] block: correctly report cache type Christoph Hellwig
  2024-06-25 21:22   ` Damien Le Moal
@ 2024-06-26  7:59   ` John Garry
  1 sibling, 0 replies; 21+ messages in thread
From: John Garry @ 2024-06-26  7:59 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi

On 25/06/2024 15:59, Christoph Hellwig wrote:
> Check the features flag and the override flag, otherwise we're going to
> always report "write through".
> 
> Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-sysfs.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 1a984179f3acc5..6db65886e7ed5a 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -453,7 +453,8 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
>   
>   static ssize_t queue_wc_show(struct request_queue *q, char *page)
>   {
> -	if (q->limits.features & BLK_FLAG_WRITE_CACHE_DISABLED)
> +	if (!(q->limits.features & BLK_FEAT_WRITE_CACHE) ||
> +	    (q->limits.flags & BLK_FLAG_WRITE_CACHE_DISABLED))

Could you use blk_queue_write_cache() helper here to reduce duplication?

>   		return sprintf(page, "write through\n");
>   	return sprintf(page, "write back\n");
>   }



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED
  2024-06-25 14:59 ` [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED Christoph Hellwig
  2024-06-25 21:23   ` Damien Le Moal
@ 2024-06-26  8:07   ` John Garry
  1 sibling, 0 replies; 21+ messages in thread
From: John Garry @ 2024-06-26  8:07 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi


Reviewed-by: John Garry <john.g.garry@oracle.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/8] md: set md-specific flags for all queue limits
  2024-06-26 14:26 queue_limits fixups and tidyups v3 Christoph Hellwig
@ 2024-06-26 14:26 ` Christoph Hellwig
  0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2024-06-26 14:26 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Damien Le Moal, Niklas Cassel, Song Liu, Yu Kuai,
	Martin K. Petersen, Alim Akhtar, Avri Altman, Bart Van Assche,
	linux-block, linux-ide, linux-raid, linux-scsi, kernel test robot

The md driver wants to enforce a number of flags for all devices, even
when not inheriting them from the underlying devices.  To make sure these
flags survive the queue_limits_set calls that md uses to update the
queue limits without deriving them from the previous limits, add a new
md_init_stacking_limits helper that calls blk_set_stacking_limits and sets
these flags.

Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 drivers/md/md.c     | 14 +++++++++-----
 drivers/md/md.h     |  1 +
 drivers/md/raid0.c  |  2 +-
 drivers/md/raid1.c  |  2 +-
 drivers/md/raid10.c |  2 +-
 drivers/md/raid5.c  |  2 +-
 6 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 69ea54aedd99a1..0ff26a547f1afc 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5853,6 +5853,14 @@ static void mddev_delayed_delete(struct work_struct *ws)
 	kobject_put(&mddev->kobj);
 }
 
+void md_init_stacking_limits(struct queue_limits *lim)
+{
+	blk_set_stacking_limits(lim);
+	lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+			BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+}
+EXPORT_SYMBOL_GPL(md_init_stacking_limits);
+
 struct mddev *md_alloc(dev_t dev, char *name)
 {
 	/*
@@ -5871,10 +5879,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int shift;
 	int unit;
 	int error;
-	struct queue_limits lim = {
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
-	};
 
 	/*
 	 * Wait for any previous instance of this device to be completely
@@ -5914,7 +5918,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 		 */
 		mddev->hold_active = UNTIL_STOP;
 
-	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
+	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		error = PTR_ERR(disk);
 		goto out_free_mddev;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index c4d7ebf9587d07..28cb4b0b6c1740 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -893,6 +893,7 @@ extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
 
 extern int mddev_init(struct mddev *mddev);
 extern void mddev_destroy(struct mddev *mddev);
+void md_init_stacking_limits(struct queue_limits *lim);
 struct mddev *md_alloc(dev_t dev, char *name);
 void mddev_put(struct mddev *mddev);
 extern int md_run(struct mddev *mddev);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 62634e2a33bd0f..32d58752477847 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -379,7 +379,7 @@ static int raid0_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_hw_sectors = mddev->chunk_sectors;
 	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
 	lim.io_min = mddev->chunk_sectors << 9;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 1a0eba65b8a92b..04a0c2ca173245 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3194,7 +3194,7 @@ static int raid1_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
 	if (err) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 3334aa803c8380..2a9c4ee982e023 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3974,7 +3974,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0192a6323f09ba..10219205160bbf 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7708,7 +7708,7 @@ static int raid5_set_limits(struct mddev *mddev)
 	 */
 	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
 	lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2024-06-26 14:26 UTC | newest]

Thread overview: 21+ messages
2024-06-25 14:59 queue_limits fixups and tidyups v2 Christoph Hellwig
2024-06-25 14:59 ` [PATCH 1/8] md: set md-specific flags for all queue limits Christoph Hellwig
2024-06-25 21:21   ` Damien Le Moal
2024-06-25 14:59 ` [PATCH 2/8] block: correctly report cache type Christoph Hellwig
2024-06-25 21:22   ` Damien Le Moal
2024-06-26  7:59   ` John Garry
2024-06-25 14:59 ` [PATCH 3/8] block: rename BLK_FLAG_MISALIGNED Christoph Hellwig
2024-06-25 21:23   ` Damien Le Moal
2024-06-26  8:07   ` John Garry
2024-06-25 14:59 ` [PATCH 4/8] block: convert features and flags to __bitwise types Christoph Hellwig
2024-06-25 21:27   ` Damien Le Moal
2024-06-25 14:59 ` [PATCH 5/8] block: coding style fixup for blk_queue_max_guaranteed_bio Christoph Hellwig
2024-06-25 21:27   ` Damien Le Moal
2024-06-25 14:59 ` [PATCH 6/8] block: remove disk_update_readahead Christoph Hellwig
2024-06-25 21:29   ` Damien Le Moal
2024-06-25 14:59 ` [PATCH 7/8] block: remove the fallback case in queue_dma_alignment Christoph Hellwig
2024-06-25 15:07   ` John Garry
2024-06-25 21:29   ` Damien Le Moal
2024-06-25 14:59 ` [PATCH 8/8] block: move dma_pad_mask into queue_limits Christoph Hellwig
2024-06-25 21:31   ` Damien Le Moal
  -- strict thread matches above, loose matches on Subject: below --
2024-06-26 14:26 queue_limits fixups and tidyups v3 Christoph Hellwig
2024-06-26 14:26 ` [PATCH 1/8] md: set md-specific flags for all queue limits Christoph Hellwig
