linux-nvme.lists.infradead.org archive mirror
* [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues
@ 2025-06-17 13:43 Daniel Wagner
  2025-06-17 13:43 ` [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks Daniel Wagner
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Daniel Wagner @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream, Daniel Wagner

I am still working on the change requests for the "blk: honor isolcpus
configuration" series [1]. Teaching group_cpus_evenly() to use the
housekeeping mask depending on the context is not a trivial change.

The first part of the series has already been reviewed and doesn't
contain any controversial changes, so let's get those patches processed
independently.

[1] https://patch.msgid.link/20250424-isolcpus-io-queues-v6-0-9a53a870ca1f@kernel.org

Signed-off-by: Daniel Wagner <wagi@kernel.org>
---
Changes since https://patch.msgid.link/20250424-isolcpus-io-queues-v6-0-9a53a870ca1f@kernel.org
- limit the number of masks used to the number of allocated masks
- commit message improvements
- typo fixes
- formatting fixed
- collected tags

---
Daniel Wagner (5):
      lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks
      blk-mq: add number of queue calc helper
      nvme-pci: use block layer helpers to calculate num of queues
      scsi: use block layer helpers to calculate num of queues
      virtio: blk/scsi: use block layer helpers to calculate num of queues

 block/blk-mq-cpumap.c                     | 46 +++++++++++++++++++++++++++++--
 drivers/block/virtio_blk.c                |  5 ++--
 drivers/nvme/host/pci.c                   |  5 ++--
 drivers/scsi/megaraid/megaraid_sas_base.c | 15 ++++++----
 drivers/scsi/qla2xxx/qla_isr.c            | 10 +++----
 drivers/scsi/smartpqi/smartpqi_init.c     |  5 ++--
 drivers/scsi/virtio_scsi.c                |  1 +
 drivers/virtio/virtio_vdpa.c              |  9 +++---
 fs/fuse/virtio_fs.c                       |  6 ++--
 include/linux/blk-mq.h                    |  2 ++
 include/linux/group_cpus.h                |  2 +-
 kernel/irq/affinity.c                     | 11 ++++----
 lib/group_cpus.c                          | 16 +++++------
 13 files changed, 89 insertions(+), 44 deletions(-)
---
base-commit: e04c78d86a9699d136910cfc0bdcf01087e3267e
change-id: 20250617-isolcpus-queue-counters-42ba0ee77390

Best regards,
-- 
Daniel Wagner <wagi@kernel.org>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
@ 2025-06-17 13:43 ` Daniel Wagner
  2025-06-23  5:19   ` Christoph Hellwig
  2025-06-25  4:34   ` Chaitanya Kulkarni
  2025-06-17 13:43 ` [PATCH 2/5] blk-mq: add number of queue calc helper Daniel Wagner
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 14+ messages in thread
From: Daniel Wagner @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream, Daniel Wagner

group_cpus_evenly() might have allocated fewer groups than requested:

group_cpus_evenly()
  __group_cpus_evenly()
    alloc_nodes_groups()
      # the total number of allocated groups may be less than numgrps
      # when the number of active CPUs is less than numgrps

In this case, the caller will do an out-of-bounds access because it
assumes the returned masks array has numgrps entries.

Return the number of groups created so the caller can limit the access
range accordingly.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
---
 block/blk-mq-cpumap.c        |  6 +++---
 drivers/virtio/virtio_vdpa.c |  9 +++++----
 fs/fuse/virtio_fs.c          |  6 +++---
 include/linux/group_cpus.h   |  2 +-
 kernel/irq/affinity.c        | 11 +++++------
 lib/group_cpus.c             | 16 ++++++++--------
 6 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 444798c5374f48088b661b519f2638bda8556cf2..269161252add756897fce1b65cae5b2e6aebd647 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -19,9 +19,9 @@
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
 	const struct cpumask *masks;
-	unsigned int queue, cpu;
+	unsigned int queue, cpu, nr_masks;
 
-	masks = group_cpus_evenly(qmap->nr_queues);
+	masks = group_cpus_evenly(qmap->nr_queues, &nr_masks);
 	if (!masks) {
 		for_each_possible_cpu(cpu)
 			qmap->mq_map[cpu] = qmap->queue_offset;
@@ -29,7 +29,7 @@ void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 	}
 
 	for (queue = 0; queue < qmap->nr_queues; queue++) {
-		for_each_cpu(cpu, &masks[queue])
+		for_each_cpu(cpu, &masks[queue % nr_masks])
 			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
 	kfree(masks);
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 1f60c9d5cb1810a6f208c24bb2ac640d537391a0..a7b297dae4890c9d6002744b90fc133bbedb7b44 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -329,20 +329,21 @@ create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
 
 	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
 		unsigned int this_vecs = affd->set_size[i];
+		unsigned int nr_masks;
 		int j;
-		struct cpumask *result = group_cpus_evenly(this_vecs);
+		struct cpumask *result = group_cpus_evenly(this_vecs, &nr_masks);
 
 		if (!result) {
 			kfree(masks);
 			return NULL;
 		}
 
-		for (j = 0; j < this_vecs; j++)
+		for (j = 0; j < nr_masks; j++)
 			cpumask_copy(&masks[curvec + j], &result[j]);
 		kfree(result);
 
-		curvec += this_vecs;
-		usedvecs += this_vecs;
+		curvec += nr_masks;
+		usedvecs += nr_masks;
 	}
 
 	/* Fill out vectors at the end that don't need affinity */
diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index 53c2626e90e723ad88f1aee69d7507b4f197ab13..3fbfb1a2942b753643015a45fa0c5d89ff72aa2f 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -862,7 +862,7 @@ static void virtio_fs_requests_done_work(struct work_struct *work)
 static void virtio_fs_map_queues(struct virtio_device *vdev, struct virtio_fs *fs)
 {
 	const struct cpumask *mask, *masks;
-	unsigned int q, cpu;
+	unsigned int q, cpu, nr_masks;
 
 	/* First attempt to map using existing transport layer affinities
 	 * e.g. PCIe MSI-X
@@ -882,7 +882,7 @@ static void virtio_fs_map_queues(struct virtio_device *vdev, struct virtio_fs *f
 	return;
 fallback:
 	/* Attempt to map evenly in groups over the CPUs */
-	masks = group_cpus_evenly(fs->num_request_queues);
+	masks = group_cpus_evenly(fs->num_request_queues, &nr_masks);
 	/* If even this fails we default to all CPUs use first request queue */
 	if (!masks) {
 		for_each_possible_cpu(cpu)
@@ -891,7 +891,7 @@ static void virtio_fs_map_queues(struct virtio_device *vdev, struct virtio_fs *f
 	}
 
 	for (q = 0; q < fs->num_request_queues; q++) {
-		for_each_cpu(cpu, &masks[q])
+		for_each_cpu(cpu, &masks[q % nr_masks])
 			fs->mq_map[cpu] = q + VQ_REQUEST;
 	}
 	kfree(masks);
diff --git a/include/linux/group_cpus.h b/include/linux/group_cpus.h
index e42807ec61f6e8cf3787af7daa0d8686edfef0a3..9d4e5ab6c314b31c09fda82c3f6ac18f77e9de36 100644
--- a/include/linux/group_cpus.h
+++ b/include/linux/group_cpus.h
@@ -9,6 +9,6 @@
 #include <linux/kernel.h>
 #include <linux/cpu.h>
 
-struct cpumask *group_cpus_evenly(unsigned int numgrps);
+struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks);
 
 #endif
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 44a4eba80315cc098ecfa366ca1d88483641b12a..4013e6ad2b2f1cb91de12bb428b3281105f7d23b 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -69,21 +69,20 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
 	 * have multiple sets, build each sets affinity mask separately.
 	 */
 	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
-		unsigned int this_vecs = affd->set_size[i];
-		int j;
-		struct cpumask *result = group_cpus_evenly(this_vecs);
+		unsigned int nr_masks, this_vecs = affd->set_size[i];
+		struct cpumask *result = group_cpus_evenly(this_vecs, &nr_masks);
 
 		if (!result) {
 			kfree(masks);
 			return NULL;
 		}
 
-		for (j = 0; j < this_vecs; j++)
+		for (int j = 0; j < nr_masks; j++)
 			cpumask_copy(&masks[curvec + j].mask, &result[j]);
 		kfree(result);
 
-		curvec += this_vecs;
-		usedvecs += this_vecs;
+		curvec += nr_masks;
+		usedvecs += nr_masks;
 	}
 
 	/* Fill out vectors at the end that don't need affinity */
diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index ee272c4cefcc13907ce9f211f479615d2e3c9154..a075959ccb212ece84334e4859c884f4217d30b6 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -332,9 +332,11 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
 /**
  * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
  * @numgrps: number of groups
+ * @nummasks: number of initialized cpumasks
  *
  * Return: cpumask array if successful, NULL otherwise. And each element
- * includes CPUs assigned to this group
+ * includes CPUs assigned to this group. nummasks contains the number
+ * of initialized masks which can be less than numgrps.
  *
  * Try to put close CPUs from viewpoint of CPU and NUMA locality into
  * same group, and run two-stage grouping:
@@ -344,7 +346,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
  * We guarantee in the resulted grouping that all CPUs are covered, and
  * no same CPU is assigned to multiple groups
  */
-struct cpumask *group_cpus_evenly(unsigned int numgrps)
+struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks)
 {
 	unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
 	cpumask_var_t *node_to_cpumask;
@@ -386,7 +388,7 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
 				  npresmsk, nmsk, masks);
 	if (ret < 0)
-		goto fail_build_affinity;
+		goto fail_node_to_cpumask;
 	nr_present = ret;
 
 	/*
@@ -405,10 +407,6 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 	if (ret >= 0)
 		nr_others = ret;
 
- fail_build_affinity:
-	if (ret >= 0)
-		WARN_ON(nr_present + nr_others < numgrps);
-
  fail_node_to_cpumask:
 	free_node_to_cpumask(node_to_cpumask);
 
@@ -421,10 +419,11 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 		kfree(masks);
 		return NULL;
 	}
+	*nummasks = min(nr_present + nr_others, numgrps);
 	return masks;
 }
 #else /* CONFIG_SMP */
-struct cpumask *group_cpus_evenly(unsigned int numgrps)
+struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks)
 {
 	struct cpumask *masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
 
@@ -433,6 +432,7 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 
 	/* assign all CPUs(cpu 0) to the 1st group only */
 	cpumask_copy(&masks[0], cpu_possible_mask);
+	*nummasks = 1;
 	return masks;
 }
 #endif /* CONFIG_SMP */

-- 
2.49.0




* [PATCH 2/5] blk-mq: add number of queue calc helper
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
  2025-06-17 13:43 ` [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks Daniel Wagner
@ 2025-06-17 13:43 ` Daniel Wagner
  2025-06-25  4:35   ` Chaitanya Kulkarni
  2025-06-17 13:43 ` [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues Daniel Wagner
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Daniel Wagner @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream, Daniel Wagner

Add two variants of helper functions that calculate the correct number
of queues to use. Two variants are needed because some drivers base
their maximum number of queues on the possible CPU mask, while others
use the online CPU mask.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
---
 block/blk-mq-cpumap.c  | 40 ++++++++++++++++++++++++++++++++++++++++
 include/linux/blk-mq.h |  2 ++
 2 files changed, 42 insertions(+)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 269161252add756897fce1b65cae5b2e6aebd647..705da074ad6c7e88042296f21b739c6d686a72b6 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -12,10 +12,50 @@
 #include <linux/cpu.h>
 #include <linux/group_cpus.h>
 #include <linux/device/bus.h>
+#include <linux/sched/isolation.h>
 
 #include "blk.h"
 #include "blk-mq.h"
 
+static unsigned int blk_mq_num_queues(const struct cpumask *mask,
+				      unsigned int max_queues)
+{
+	unsigned int num;
+
+	num = cpumask_weight(mask);
+	return min_not_zero(num, max_queues);
+}
+
+/**
+ * blk_mq_num_possible_queues - Calc nr of queues for multiqueue devices
+ * @max_queues:	The maximum number of queues the hardware/driver
+ *		supports. If max_queues is 0, the argument is
+ *		ignored.
+ *
+ * Calculates the number of queues to be used for a multiqueue
+ * device based on the number of possible CPUs.
+ */
+unsigned int blk_mq_num_possible_queues(unsigned int max_queues)
+{
+	return blk_mq_num_queues(cpu_possible_mask, max_queues);
+}
+EXPORT_SYMBOL_GPL(blk_mq_num_possible_queues);
+
+/**
+ * blk_mq_num_online_queues - Calc nr of queues for multiqueue devices
+ * @max_queues:	The maximum number of queues the hardware/driver
+ *		supports. If max_queues is 0, the argument is
+ *		ignored.
+ *
+ * Calculates the number of queues to be used for a multiqueue
+ * device based on the number of online CPUs.
+ */
+unsigned int blk_mq_num_online_queues(unsigned int max_queues)
+{
+	return blk_mq_num_queues(cpu_online_mask, max_queues);
+}
+EXPORT_SYMBOL_GPL(blk_mq_num_online_queues);
+
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
 	const struct cpumask *masks;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index de8c85a03bb7f40501f449ae98919a5352f55db8..2a5a828f19a0ba6ff0812daf40eed67f0e12ada1 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -947,6 +947,8 @@ int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
 void blk_mq_unfreeze_queue_non_owner(struct request_queue *q);
 void blk_freeze_queue_start_non_owner(struct request_queue *q);
 
+unsigned int blk_mq_num_possible_queues(unsigned int max_queues);
+unsigned int blk_mq_num_online_queues(unsigned int max_queues);
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
 void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
 			  struct device *dev, unsigned int offset);

-- 
2.49.0




* [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
  2025-06-17 13:43 ` [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks Daniel Wagner
  2025-06-17 13:43 ` [PATCH 2/5] blk-mq: add number of queue calc helper Daniel Wagner
@ 2025-06-17 13:43 ` Daniel Wagner
  2025-06-25  4:37   ` Chaitanya Kulkarni
  2025-06-17 13:43 ` [PATCH 4/5] scsi: " Daniel Wagner
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Daniel Wagner @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream, Daniel Wagner

The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered.

To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
---
 drivers/nvme/host/pci.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8ff12e415cb5d1529d760b33f3e0cf3b8d1555f1..f134bf4f41b2581e4809e618250de7985b5c9701 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -97,7 +97,7 @@ static int io_queue_count_set(const char *val, const struct kernel_param *kp)
 	int ret;
 
 	ret = kstrtouint(val, 10, &n);
-	if (ret != 0 || n > num_possible_cpus())
+	if (ret != 0 || n > blk_mq_num_possible_queues(0))
 		return -EINVAL;
 	return param_set_uint(val, kp);
 }
@@ -2520,7 +2520,8 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
 	 */
 	if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
 		return 1;
-	return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
+	return blk_mq_num_possible_queues(0) + dev->nr_write_queues +
+		dev->nr_poll_queues;
 }
 
 static int nvme_setup_io_queues(struct nvme_dev *dev)

-- 
2.49.0




* [PATCH 4/5] scsi: use block layer helpers to calculate num of queues
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
                   ` (2 preceding siblings ...)
  2025-06-17 13:43 ` [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues Daniel Wagner
@ 2025-06-17 13:43 ` Daniel Wagner
  2025-06-25  4:37   ` Chaitanya Kulkarni
  2025-06-17 13:43 ` [PATCH 5/5] virtio: blk/scsi: " Daniel Wagner
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Daniel Wagner @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream, Daniel Wagner

The calculation of the upper limit for queues does not depend solely on
the number of online CPUs; for example, the isolcpus kernel
command-line option must also be considered.

To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
---
 drivers/scsi/megaraid/megaraid_sas_base.c | 15 +++++++++------
 drivers/scsi/qla2xxx/qla_isr.c            | 10 +++++-----
 drivers/scsi/smartpqi/smartpqi_init.c     |  5 ++---
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
index 3aac0e17cb00612ed7b6fb4a2e8745c7120fc506..0224eb97092bd938250f108522daaf9f033c1a4d 100644
--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -5967,7 +5967,8 @@ megasas_alloc_irq_vectors(struct megasas_instance *instance)
 		else
 			instance->iopoll_q_count = 0;
 
-		num_msix_req = num_online_cpus() + instance->low_latency_index_start;
+		num_msix_req = blk_mq_num_online_queues(0) +
+			instance->low_latency_index_start;
 		instance->msix_vectors = min(num_msix_req,
 				instance->msix_vectors);
 
@@ -5983,7 +5984,8 @@ megasas_alloc_irq_vectors(struct megasas_instance *instance)
 		/* Disable Balanced IOPS mode and try realloc vectors */
 		instance->perf_mode = MR_LATENCY_PERF_MODE;
 		instance->low_latency_index_start = 1;
-		num_msix_req = num_online_cpus() + instance->low_latency_index_start;
+		num_msix_req = blk_mq_num_online_queues(0) +
+			instance->low_latency_index_start;
 
 		instance->msix_vectors = min(num_msix_req,
 				instance->msix_vectors);
@@ -6239,7 +6241,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
 		intr_coalescing = (scratch_pad_1 & MR_INTR_COALESCING_SUPPORT_OFFSET) ?
 								true : false;
 		if (intr_coalescing &&
-			(num_online_cpus() >= MR_HIGH_IOPS_QUEUE_COUNT) &&
+			(blk_mq_num_online_queues(0) >= MR_HIGH_IOPS_QUEUE_COUNT) &&
 			(instance->msix_vectors == MEGASAS_MAX_MSIX_QUEUES))
 			instance->perf_mode = MR_BALANCED_PERF_MODE;
 		else
@@ -6283,7 +6285,8 @@ static int megasas_init_fw(struct megasas_instance *instance)
 		else
 			instance->low_latency_index_start = 1;
 
-		num_msix_req = num_online_cpus() + instance->low_latency_index_start;
+		num_msix_req = blk_mq_num_online_queues(0) +
+			instance->low_latency_index_start;
 
 		instance->msix_vectors = min(num_msix_req,
 				instance->msix_vectors);
@@ -6315,8 +6318,8 @@ static int megasas_init_fw(struct megasas_instance *instance)
 	megasas_setup_reply_map(instance);
 
 	dev_info(&instance->pdev->dev,
-		"current msix/online cpus\t: (%d/%d)\n",
-		instance->msix_vectors, (unsigned int)num_online_cpus());
+		"current msix/max num queues\t: (%d/%u)\n",
+		instance->msix_vectors, blk_mq_num_online_queues(0));
 	dev_info(&instance->pdev->dev,
 		"RDPQ mode\t: (%s)\n", instance->is_rdpq ? "enabled" : "disabled");
 
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index fe98c76e9be32ff03a1960f366f0d700d1168383..c4c6b5c6658c0734f7ff68bcc31b33dde87296dd 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -4533,13 +4533,13 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
 	if (USER_CTRL_IRQ(ha) || !ha->mqiobase) {
 		/* user wants to control IRQ setting for target mode */
 		ret = pci_alloc_irq_vectors(ha->pdev, min_vecs,
-		    min((u16)ha->msix_count, (u16)(num_online_cpus() + min_vecs)),
-		    PCI_IRQ_MSIX);
+			blk_mq_num_online_queues(ha->msix_count) + min_vecs,
+			PCI_IRQ_MSIX);
 	} else
 		ret = pci_alloc_irq_vectors_affinity(ha->pdev, min_vecs,
-		    min((u16)ha->msix_count, (u16)(num_online_cpus() + min_vecs)),
-		    PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
-		    &desc);
+			blk_mq_num_online_queues(ha->msix_count) + min_vecs,
+			PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
+			&desc);
 
 	if (ret < 0) {
 		ql_log(ql_log_fatal, vha, 0x00c7,
diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index 3d40a63e378d792ffc005c51cb2fdbb04e1acc5f..125944941601e683e9aa9d4fc6a346230bef904b 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -5294,15 +5294,14 @@ static void pqi_calculate_queue_resources(struct pqi_ctrl_info *ctrl_info)
 	if (is_kdump_kernel()) {
 		num_queue_groups = 1;
 	} else {
-		int num_cpus;
 		int max_queue_groups;
 
 		max_queue_groups = min(ctrl_info->max_inbound_queues / 2,
 			ctrl_info->max_outbound_queues - 1);
 		max_queue_groups = min(max_queue_groups, PQI_MAX_QUEUE_GROUPS);
 
-		num_cpus = num_online_cpus();
-		num_queue_groups = min(num_cpus, ctrl_info->max_msix_vectors);
+		num_queue_groups =
+			blk_mq_num_online_queues(ctrl_info->max_msix_vectors);
 		num_queue_groups = min(num_queue_groups, max_queue_groups);
 	}
 

-- 
2.49.0




* [PATCH 5/5] virtio: blk/scsi: use block layer helpers to calculate num of queues
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
                   ` (3 preceding siblings ...)
  2025-06-17 13:43 ` [PATCH 4/5] scsi: " Daniel Wagner
@ 2025-06-17 13:43 ` Daniel Wagner
  2025-06-25  4:38   ` Chaitanya Kulkarni
  2025-06-30  6:29 ` [PATCH 0/5] blk: introduce " Daniel Wagner
  2025-07-01 16:29 ` Jens Axboe
  6 siblings, 1 reply; 14+ messages in thread
From: Daniel Wagner @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream, Daniel Wagner

The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered.

To account for this, the block layer provides a helper function to
retrieve the maximum number of queues. Use it to set an appropriate
upper queue number limit.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Daniel Wagner <wagi@kernel.org>
---
 drivers/block/virtio_blk.c | 5 ++---
 drivers/scsi/virtio_scsi.c | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 30bca8cb7106040d3bbb11ba9e0b546510534324..e649fa67bac16b4f0c6e8e8f0e6bec111897c355 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -976,9 +976,8 @@ static int init_vq(struct virtio_blk *vblk)
 		return -EINVAL;
 	}
 
-	num_vqs = min_t(unsigned int,
-			min_not_zero(num_request_queues, nr_cpu_ids),
-			num_vqs);
+	num_vqs = blk_mq_num_possible_queues(
+			min_not_zero(num_request_queues, num_vqs));
 
 	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);
 
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 21ce3e9401929cd273fde08b0944e8b47e1e66cc..96a69edddbe5555574fc8fed1ba7c82a99df4472 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -919,6 +919,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 	/* We need to know how many queues before we allocate. */
 	num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
 	num_queues = min_t(unsigned int, nr_cpu_ids, num_queues);
+	num_queues = blk_mq_num_possible_queues(num_queues);
 
 	num_targets = virtscsi_config_get(vdev, max_target) + 1;
 

-- 
2.49.0




* Re: [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks
  2025-06-17 13:43 ` [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks Daniel Wagner
@ 2025-06-23  5:19   ` Christoph Hellwig
  2025-06-25  4:34   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 14+ messages in thread
From: Christoph Hellwig @ 2025-06-23  5:19 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: Jens Axboe, Christoph Hellwig, Keith Busch, Sagi Grimberg,
	Michael S. Tsirkin, Martin K. Petersen, Thomas Gleixner,
	Costa Shulyupin, Juri Lelli, Valentin Schneider, Waiman Long,
	Ming Lei, Frederic Weisbecker, Hannes Reinecke, linux-kernel,
	linux-block, linux-nvme, megaraidlinux.pdl, linux-scsi,
	storagedev, virtualization, GR-QLogic-Storage-Upstream

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>




* Re: [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks
  2025-06-17 13:43 ` [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks Daniel Wagner
  2025-06-23  5:19   ` Christoph Hellwig
@ 2025-06-25  4:34   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2025-06-25  4:34 UTC (permalink / raw)
  To: Daniel Wagner, Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
	storagedev@microchip.com, virtualization@lists.linux.dev,
	GR-QLogic-Storage-Upstream@marvell.com

On 6/17/25 06:43, Daniel Wagner wrote:
> group_cpus_evenly() might have allocated fewer groups than requested:
>
> group_cpus_evenly()
>    __group_cpus_evenly()
>      alloc_nodes_groups()
>        # the total number of allocated groups may be less than numgrps
>        # when the number of active CPUs is less than numgrps
>
> In this case, the caller will do an out-of-bounds access because it
> assumes the returned masks array has numgrps entries.
>
> Return the number of groups created so the caller can limit the access
> range accordingly.
>
> Acked-by: Thomas Gleixner <tglx@linutronix.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck




* Re: [PATCH 2/5] blk-mq: add number of queue calc helper
  2025-06-17 13:43 ` [PATCH 2/5] blk-mq: add number of queue calc helper Daniel Wagner
@ 2025-06-25  4:35   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2025-06-25  4:35 UTC (permalink / raw)
  To: Daniel Wagner, Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
	storagedev@microchip.com, virtualization@lists.linux.dev,
	GR-QLogic-Storage-Upstream@marvell.com

On 6/17/25 06:43, Daniel Wagner wrote:
> Add two variants of helper functions that calculate the correct number
> of queues to use. Two variants are needed because some drivers base
> their maximum number of queues on the possible CPU mask, while others
> use the online CPU mask.
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


* Re: [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues
  2025-06-17 13:43 ` [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues Daniel Wagner
@ 2025-06-25  4:37   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2025-06-25  4:37 UTC (permalink / raw)
  To: Daniel Wagner, Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
	storagedev@microchip.com, virtualization@lists.linux.dev,
	GR-QLogic-Storage-Upstream@marvell.com

On 6/17/25 06:43, Daniel Wagner wrote:
> The calculation of the upper limit for queues does not depend solely on
> the number of possible CPUs; for example, the isolcpus kernel
> command-line option must also be considered.
>
> To account for this, the block layer provides a helper function to
> retrieve the maximum number of queues. Use it to set an appropriate
> upper queue number limit.
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>

Thanks a lot for this; it really makes the code cleaner with a new helper
that shows its association with queues rather than an open-coded CPU helper.

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck




* Re: [PATCH 4/5] scsi: use block layer helpers to calculate num of queues
  2025-06-17 13:43 ` [PATCH 4/5] scsi: " Daniel Wagner
@ 2025-06-25  4:37   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2025-06-25  4:37 UTC (permalink / raw)
  To: Daniel Wagner, Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
	storagedev@microchip.com, virtualization@lists.linux.dev,
	GR-QLogic-Storage-Upstream@marvell.com

On 6/17/25 06:43, Daniel Wagner wrote:
> The calculation of the upper limit for queues does not depend solely on
> the number of online CPUs; for example, the isolcpus kernel
> command-line option must also be considered.
>
> To account for this, the block layer provides a helper function to
> retrieve the maximum number of queues. Use it to set an appropriate
> upper queue number limit.
>
> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>


Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


* Re: [PATCH 5/5] virtio: blk/scsi: use block layer helpers to calculate num of queues
  2025-06-17 13:43 ` [PATCH 5/5] virtio: blk/scsi: " Daniel Wagner
@ 2025-06-25  4:38   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2025-06-25  4:38 UTC (permalink / raw)
  To: Daniel Wagner, Jens Axboe, Christoph Hellwig
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
	storagedev@microchip.com, virtualization@lists.linux.dev,
	GR-QLogic-Storage-Upstream@marvell.com

On 6/17/25 06:43, Daniel Wagner wrote:
> The calculation of the upper limit for queues does not depend solely on
> the number of possible CPUs; for example, the isolcpus kernel
> command-line option must also be considered.
>
> To account for this, the block layer provides a helper function to
> retrieve the maximum number of queues. Use it to set an appropriate
> upper queue number limit.
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Daniel Wagner <wagi@kernel.org>
> ---
>   

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


* Re: [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
                   ` (4 preceding siblings ...)
  2025-06-17 13:43 ` [PATCH 5/5] virtio: blk/scsi: " Daniel Wagner
@ 2025-06-30  6:29 ` Daniel Wagner
  2025-07-01 16:29 ` Jens Axboe
  6 siblings, 0 replies; 14+ messages in thread
From: Daniel Wagner @ 2025-06-30  6:29 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream

Hi Jens,

On Tue, Jun 17, 2025 at 03:43:22PM +0200, Daniel Wagner wrote:
> I am still working on the change request for the "blk: honor isolcpus
> configuration" series [1]. Teaching group_cpus_evenly to use the
> housekeeping mask depending on the context is not a trivial change.
> 
> The first part of the series has already been reviewed and doesn't
> contain any controversial changes, so let's get them processed
> independently.
> 
> [1] https://patch.msgid.link/20250424-isolcpus-io-queues-v6-0-9a53a870ca1f@kernel.org

Would you mind routing this series through your tree? It touches several
different trees, though all the patches have been acked/reviewed by the
corresponding maintainers. It would be great to get a few weeks in
'next' so that this series gets some more testing.

Thanks a lot,
Daniel



* Re: [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues
  2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
                   ` (5 preceding siblings ...)
  2025-06-30  6:29 ` [PATCH 0/5] blk: introduce " Daniel Wagner
@ 2025-07-01 16:29 ` Jens Axboe
  6 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2025-07-01 16:29 UTC (permalink / raw)
  To: Christoph Hellwig, Daniel Wagner
  Cc: Keith Busch, Sagi Grimberg, Michael S. Tsirkin,
	Martin K. Petersen, Thomas Gleixner, Costa Shulyupin, Juri Lelli,
	Valentin Schneider, Waiman Long, Ming Lei, Frederic Weisbecker,
	Hannes Reinecke, linux-kernel, linux-block, linux-nvme,
	megaraidlinux.pdl, linux-scsi, storagedev, virtualization,
	GR-QLogic-Storage-Upstream


On Tue, 17 Jun 2025 15:43:22 +0200, Daniel Wagner wrote:
> I am still working on the change request for the "blk: honor isolcpus
> configuration" series [1]. Teaching group_cpus_evenly to use the
> housekeeping mask depending on the context is not a trivial change.
> 
> The first part of the series has already been reviewed and doesn't
> contain any controversial changes, so let's get them processed
> independently.
> 
> [...]

Applied, thanks!

[1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks
      commit: b6139a6abf673029008f80d42abd3848d80a9108
[2/5] blk-mq: add number of queue calc helper
      commit: 3f27c1de5df265f9d8edf0cc5d75dc92e328484a
[3/5] nvme-pci: use block layer helpers to calculate num of queues
      commit: 4082c98c1fefd276b34ba411ac59c50b336dfbb1
[4/5] scsi: use block layer helpers to calculate num of queues
      commit: 94970cfb5f10ea381df8c402d36c5023765599da
[5/5] virtio: blk/scsi: use block layer helpers to calculate num of queues
      commit: 0a50ed0574ffe853f15c3430794b5439b2e6150a

Best regards,
-- 
Jens Axboe


end of thread, other threads:[~2025-07-01 19:00 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-17 13:43 [PATCH 0/5] blk: introduce block layer helpers to calculate num of queues Daniel Wagner
2025-06-17 13:43 ` [PATCH 1/5] lib/group_cpus: Let group_cpu_evenly() return the number of initialized masks Daniel Wagner
2025-06-23  5:19   ` Christoph Hellwig
2025-06-25  4:34   ` Chaitanya Kulkarni
2025-06-17 13:43 ` [PATCH 2/5] blk-mq: add number of queue calc helper Daniel Wagner
2025-06-25  4:35   ` Chaitanya Kulkarni
2025-06-17 13:43 ` [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues Daniel Wagner
2025-06-25  4:37   ` Chaitanya Kulkarni
2025-06-17 13:43 ` [PATCH 4/5] scsi: " Daniel Wagner
2025-06-25  4:37   ` Chaitanya Kulkarni
2025-06-17 13:43 ` [PATCH 5/5] virtio: blk/scsi: " Daniel Wagner
2025-06-25  4:38   ` Chaitanya Kulkarni
2025-06-30  6:29 ` [PATCH 0/5] blk: introduce " Daniel Wagner
2025-07-01 16:29 ` Jens Axboe
