public inbox for linux-nvme@lists.infradead.org
* [PATCH v2 0/8] Introduce new max-queue-size configuration
@ 2024-01-04  9:25 Max Gurtovoy
  2024-01-04  9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
                   ` (8 more replies)
  0 siblings, 9 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

Hi Christoph/Sagi/Keith,
This patch series mainly adds an interface that lets a user configure the
maximal queue size for fabrics via port configfs. Using this interface,
a user will be able to better control system and HW resources.

Also, I've increased the maximal queue depth for RDMA controllers to
256, following a request from Guixin Liu. The new value applies only to
controllers that don't support PI.

While developing this feature I've made some minor cleanups as well.
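As a sketch of the intended usage (the port number and mount point below are illustrative; the attribute name comes from patch 7/8, and the value must be set before the port is enabled):

```shell
# Illustrative configfs usage; assumes configfs is mounted at the usual
# location and that nvmet port 1 already exists.
cd /sys/kernel/config/nvmet/ports/1
cat param_max_queue_size         # -1 until set: let the transport choose
echo 256 > param_max_queue_size  # request a 256-entry queue limit
```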

Changes from v1:
 - collected Reviewed-by signatures (Sagi and Guixin Liu)
 - removed the patches that unify fabric host and target max/min/default
   queue size definitions (Sagi)
 - align MQES and SQ size according to the NVMe Spec (patch 2/8)

Max Gurtovoy (8):
  nvme-rdma: move NVME_RDMA_IP_PORT from common file
  nvmet: compare mqes and sqsize only for IO SQ
  nvmet: set maxcmd to be per controller
  nvmet: set ctrl pi_support cap before initializing cap reg
  nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
  nvme-rdma: clamp queue size according to ctrl cap
  nvmet: introduce new max queue size configuration entry
  nvmet-rdma: set max_queue_size for RDMA transport

 drivers/nvme/host/rdma.c          | 19 ++++++++++++++-----
 drivers/nvme/target/admin-cmd.c   |  2 +-
 drivers/nvme/target/configfs.c    | 28 ++++++++++++++++++++++++++++
 drivers/nvme/target/core.c        | 18 ++++++++++++++++--
 drivers/nvme/target/discovery.c   |  2 +-
 drivers/nvme/target/fabrics-cmd.c |  5 ++---
 drivers/nvme/target/nvmet.h       |  6 ++++--
 drivers/nvme/target/passthru.c    |  2 +-
 drivers/nvme/target/rdma.c        | 10 ++++++++++
 include/linux/nvme-rdma.h         |  6 +++++-
 include/linux/nvme.h              |  2 --
 11 files changed, 82 insertions(+), 18 deletions(-)

-- 
2.18.1



^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-23  8:53   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

The correct place for this definition is the nvme rdma header file and
not the common nvme header file.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 include/linux/nvme-rdma.h | 2 ++
 include/linux/nvme.h      | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index 4dd7e6fe92fb..146dd2223a5f 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -6,6 +6,8 @@
 #ifndef _LINUX_NVME_RDMA_H
 #define _LINUX_NVME_RDMA_H
 
+#define NVME_RDMA_IP_PORT		4420
+
 #define NVME_RDMA_MAX_QUEUE_SIZE	128
 
 enum nvme_rdma_cm_fmt {
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 462c21e0e417..ee28acdb8d43 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -23,8 +23,6 @@
 
 #define NVME_DISC_SUBSYS_NAME	"nqn.2014-08.org.nvmexpress.discovery"
 
-#define NVME_RDMA_IP_PORT	4420
-
 #define NVME_NSID_ALL		0xffffffff
 
 enum nvme_subsys_type {
-- 
2.18.1




* [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
  2024-01-04  9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-22 13:06   ` Sagi Grimberg
  2024-01-23  8:53   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

According to the NVMe Spec:
"
MQES: This field indicates the maximum individual queue size that the
controller supports. For NVMe over PCIe implementations, this value
applies to the I/O Submission Queues and I/O Completion Queues that the
host creates. For NVMe over Fabrics implementations, this value applies
to only the I/O Submission Queues that the host creates.
"

Align the target code to compare mqes and sqsize as mentioned in the
NVMe Spec.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/fabrics-cmd.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index d8da840a1c0e..4d014c5d0b6a 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -157,7 +157,8 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
 		return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
 	}
 
-	if (sqsize > mqes) {
+	/* for fabrics, this value applies to only the I/O Submission Queues */
+	if (qid && sqsize > mqes) {
 		pr_warn("sqsize %u is larger than MQES supported %u cntlid %d\n",
 				sqsize, mqes, ctrl->cntlid);
 		req->error_loc = offsetof(struct nvmf_connect_command, sqsize);
-- 
2.18.1




* [PATCH 3/8] nvmet: set maxcmd to be per controller
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
  2024-01-04  9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
  2024-01-04  9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-23  8:53   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

This is a preparation for making the max queue size of a controller
dynamically configurable. Make sure that the maxcmd field stays equal to
MQES + 1, as it does today.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/admin-cmd.c | 2 +-
 drivers/nvme/target/discovery.c | 2 +-
 drivers/nvme/target/nvmet.h     | 2 +-
 drivers/nvme/target/passthru.c  | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 39cb570f833d..f5b7054a4a05 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -428,7 +428,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	id->cqes = (0x4 << 4) | 0x4;
 
 	/* no enforcement soft-limit for maxcmd - pick arbitrary high value */
-	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
 
 	id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES);
 	id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index 668d257fa986..0d5014905069 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -282,7 +282,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
 	id->lpa = (1 << 2);
 
 	/* no enforcement soft-limit for maxcmd - pick arbitrary high value */
-	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
 
 	id->sgls = cpu_to_le32(1 << 0);	/* we always support SGLs */
 	if (ctrl->ops->flags & NVMF_KEYED_SGLS)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 6c8acebe1a1a..144aca2fa6ad 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -545,7 +545,7 @@ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 
 #define NVMET_QUEUE_SIZE	1024
 #define NVMET_NR_QUEUES		128
-#define NVMET_MAX_CMD		NVMET_QUEUE_SIZE
+#define NVMET_MAX_CMD(ctrl)	(NVME_CAP_MQES(ctrl->cap) + 1)
 
 /*
  * Nice round number that makes a list of nsids fit into a page.
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index f2d963e1fe94..bb4a69d538fd 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -132,7 +132,7 @@ static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req)
 
 	id->sqes = min_t(__u8, ((0x6 << 4) | 0x6), id->sqes);
 	id->cqes = min_t(__u8, ((0x4 << 4) | 0x4), id->cqes);
-	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
 
 	/* don't support fuse commands */
 	id->fuses = 0;
-- 
2.18.1




* [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
                   ` (2 preceding siblings ...)
  2024-01-04  9:25 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-23  8:53   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

This is a preparation for setting the maximal queue size of a controller
that supports PI.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/core.c        | 1 +
 drivers/nvme/target/fabrics-cmd.c | 2 --
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index d26aa30f8702..ade5a7bd7f6d 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1411,6 +1411,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 
 	kref_init(&ctrl->ref);
 	ctrl->subsys = subsys;
+	ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
 	nvmet_init_cap(ctrl);
 	WRITE_ONCE(ctrl->aen_enabled, NVMET_AEN_CFG_OPTIONAL);
 
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 4d014c5d0b6a..08e9c6b6f551 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -252,8 +252,6 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 	if (status)
 		goto out;
 
-	ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
-
 	uuid_copy(&ctrl->hostid, &d->hostid);
 
 	ret = nvmet_setup_auth(ctrl);
-- 
2.18.1




* [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
                   ` (3 preceding siblings ...)
  2024-01-04  9:25 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-22 13:19   ` Sagi Grimberg
  2024-01-23  8:54   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

This definition will be used by controllers that are configured with
metadata support. For now, both regular and metadata controllers have
the same maximal queue size, but a later commit will increase the
maximal queue size for regular RDMA controllers to 256.
We'll keep the maximal queue size for metadata controllers at 128,
since metadata operations need more resources and 128 was found to be
the optimal size for metadata controllers based on testing.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/rdma.c | 2 ++
 include/linux/nvme-rdma.h  | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca43a6d..f298295c0b0f 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -2002,6 +2002,8 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
 
 static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
 {
+	if (ctrl->pi_support)
+		return NVME_RDMA_MAX_METADATA_QUEUE_SIZE;
 	return NVME_RDMA_MAX_QUEUE_SIZE;
 }
 
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index 146dd2223a5f..d0b9941911a1 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,7 +8,8 @@
 
 #define NVME_RDMA_IP_PORT		4420
 
-#define NVME_RDMA_MAX_QUEUE_SIZE	128
+#define NVME_RDMA_MAX_QUEUE_SIZE 128
+#define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
 
 enum nvme_rdma_cm_fmt {
 	NVME_RDMA_CM_FMT_1_0 = 0x0,
-- 
2.18.1




* [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
                   ` (4 preceding siblings ...)
  2024-01-04  9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-22 17:39   ` Sagi Grimberg
  2024-01-23  8:54   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
                   ` (2 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

If a controller is configured with metadata support, clamp the maximal
queue size to 128, since metadata operations need more resources.
Otherwise, clamp it to 256.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index bc90ec3c51b0..d81a7148fbc5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1029,11 +1029,20 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
 	}
 
-	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
-		dev_warn(ctrl->ctrl.device,
-			"ctrl sqsize %u > max queue size %u, clamping down\n",
-			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
-		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+	if (ctrl->ctrl.max_integrity_segments) {
+		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_METADATA_QUEUE_SIZE) {
+			dev_warn(ctrl->ctrl.device,
+				"ctrl sqsize %u > max queue size %u, clamping down\n",
+				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_METADATA_QUEUE_SIZE);
+			ctrl->ctrl.sqsize = NVME_RDMA_MAX_METADATA_QUEUE_SIZE - 1;
+		}
+	} else {
+		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+			dev_warn(ctrl->ctrl.device,
+				"ctrl sqsize %u > max queue size %u, clamping down\n",
+				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
+			ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+		}
 	}
 
 	if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
-- 
2.18.1




* [PATCH 7/8] nvmet: introduce new max queue size configuration entry
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
                   ` (5 preceding siblings ...)
  2024-01-04  9:25 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-22 17:44   ` Sagi Grimberg
  2024-01-23  8:55   ` Christoph Hellwig
  2024-01-04  9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
  2024-01-22 12:09 ` [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
  8 siblings, 2 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

Using this port configuration, one will be able to set the maximal queue
size to be used for any controller that is associated with the
configured port.

The default value stays 1024, but each transport will be able to set its
own value before enabling the port.

Introduce a lower limit of 16 for the minimal queue depth (the same
limit we use in the host fabrics drivers).

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Guixin Liu <kanie@linux.alibaba.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/configfs.c | 28 ++++++++++++++++++++++++++++
 drivers/nvme/target/core.c     | 17 +++++++++++++++--
 drivers/nvme/target/nvmet.h    |  4 +++-
 3 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index bd514d4c4a5b..f8df2ef715ba 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -272,6 +272,32 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
 
 CONFIGFS_ATTR(nvmet_, param_inline_data_size);
 
+static ssize_t nvmet_param_max_queue_size_show(struct config_item *item,
+		char *page)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+
+	return snprintf(page, PAGE_SIZE, "%d\n", port->max_queue_size);
+}
+
+static ssize_t nvmet_param_max_queue_size_store(struct config_item *item,
+		const char *page, size_t count)
+{
+	struct nvmet_port *port = to_nvmet_port(item);
+	int ret;
+
+	if (nvmet_is_port_enabled(port, __func__))
+		return -EACCES;
+	ret = kstrtoint(page, 0, &port->max_queue_size);
+	if (ret) {
+		pr_err("Invalid value '%s' for max_queue_size\n", page);
+		return -EINVAL;
+	}
+	return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_max_queue_size);
+
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
 		char *page)
@@ -1856,6 +1882,7 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
 	&nvmet_attr_addr_trtype,
 	&nvmet_attr_addr_tsas,
 	&nvmet_attr_param_inline_data_size,
+	&nvmet_attr_param_max_queue_size,
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	&nvmet_attr_param_pi_enable,
 #endif
@@ -1914,6 +1941,7 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
 	INIT_LIST_HEAD(&port->subsystems);
 	INIT_LIST_HEAD(&port->referrals);
 	port->inline_data_size = -1;	/* < 0 == let the transport choose */
+	port->max_queue_size = -1;	/* < 0 == let the transport choose */
 
 	port->disc_addr.portid = cpu_to_le16(portid);
 	port->disc_addr.adrfam = NVMF_ADDR_FAMILY_MAX;
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index ade5a7bd7f6d..d34520359cc9 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -358,6 +358,18 @@ int nvmet_enable_port(struct nvmet_port *port)
 	if (port->inline_data_size < 0)
 		port->inline_data_size = 0;
 
+	/*
+	 * If the transport didn't set the max_queue_size properly, then clamp
+	 * it to the target limits. Also set default values in case the
+	 * transport didn't set it at all.
+	 */
+	if (port->max_queue_size < 0)
+		port->max_queue_size = NVMET_MAX_QUEUE_SIZE;
+	else
+		port->max_queue_size = clamp_t(int, port->max_queue_size,
+					       NVMET_MIN_QUEUE_SIZE,
+					       NVMET_MAX_QUEUE_SIZE);
+
 	port->enabled = true;
 	port->tr_ops = ops;
 	return 0;
@@ -1223,9 +1235,10 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
 	ctrl->cap |= (15ULL << 24);
 	/* maximum queue entries supported: */
 	if (ctrl->ops->get_max_queue_size)
-		ctrl->cap |= ctrl->ops->get_max_queue_size(ctrl) - 1;
+		ctrl->cap |= min_t(u16, ctrl->ops->get_max_queue_size(ctrl),
+				   ctrl->port->max_queue_size) - 1;
 	else
-		ctrl->cap |= NVMET_QUEUE_SIZE - 1;
+		ctrl->cap |= ctrl->port->max_queue_size - 1;
 
 	if (nvmet_is_passthru_subsys(ctrl->subsys))
 		nvmet_passthrough_override_cap(ctrl);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 144aca2fa6ad..7c6e7e65b032 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -163,6 +163,7 @@ struct nvmet_port {
 	void				*priv;
 	bool				enabled;
 	int				inline_data_size;
+	int				max_queue_size;
 	const struct nvmet_fabrics_ops	*tr_ops;
 	bool				pi_enable;
 };
@@ -543,7 +544,8 @@ void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys,
 void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 		u8 event_info, u8 log_page);
 
-#define NVMET_QUEUE_SIZE	1024
+#define NVMET_MIN_QUEUE_SIZE	16
+#define NVMET_MAX_QUEUE_SIZE	1024
 #define NVMET_NR_QUEUES		128
 #define NVMET_MAX_CMD(ctrl)	(NVME_CAP_MQES(ctrl->cap) + 1)
 
-- 
2.18.1




* [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
                   ` (6 preceding siblings ...)
  2024-01-04  9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
@ 2024-01-04  9:25 ` Max Gurtovoy
  2024-01-22 17:44   ` Sagi Grimberg
  2024-01-23  8:55   ` Christoph Hellwig
  2024-01-22 12:09 ` [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
  8 siblings, 2 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-04  9:25 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren, Max Gurtovoy

A new port configuration was added to set max_queue_size. Clamp the user
configuration to the RDMA transport limits.

Increase the maximal queue size of RDMA controllers from 128 to 256
(the default size stays 128, same as before).

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/rdma.c | 8 ++++++++
 include/linux/nvme-rdma.h  | 3 ++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index f298295c0b0f..3a3686efe008 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct nvmet_port *nport)
 		nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
 	}
 
+	if (nport->max_queue_size < 0) {
+		nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
+	} else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
+		pr_warn("max_queue_size %u is too large, reducing to %u\n",
+			nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
+		nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
+	}
+
 	ret = inet_pton_with_scope(&init_net, af, nport->disc_addr.traddr,
 			nport->disc_addr.trsvcid, &port->addr);
 	if (ret) {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index d0b9941911a1..eb2f04d636c8 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,8 +8,9 @@
 
 #define NVME_RDMA_IP_PORT		4420
 
-#define NVME_RDMA_MAX_QUEUE_SIZE 128
+#define NVME_RDMA_MAX_QUEUE_SIZE 256
 #define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
+#define NVME_RDMA_DEFAULT_QUEUE_SIZE 128
 
 enum nvme_rdma_cm_fmt {
 	NVME_RDMA_CM_FMT_1_0 = 0x0,
-- 
2.18.1




* Re: [PATCH v2 0/8] Introduce new max-queue-size configuration
  2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
                   ` (7 preceding siblings ...)
  2024-01-04  9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
@ 2024-01-22 12:09 ` Max Gurtovoy
  8 siblings, 0 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-22 12:09 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme; +Cc: israelr, kch, oren



On 04/01/2024 11:25, Max Gurtovoy wrote:
> Hi Christoph/Sagi/Keith,

Hi Christoph/Keith,
are we considering taking this series into nvme-6.8?
Most of it was reviewed by Sagi; the rest is pretty trivial.

> This patch series is mainly for adding an interface for a user to
> configure the maximal queue size for fabrics via port configfs. Using
> this interface a user will be able to better control the system and HW
> resources.
> 
> Also, I've increased the maximal queue depth for RDMA controllers to be
> 256 after request from Guixin Liu. This new value will be valid only for
> controllers that don't support PI.
> 
> While developing this feature I've made some minor cleanups as well.
> 
> Changes from v1:
>   - collected Reviewed-by signatures (Sagi and Guixin Liu)
>   - removed the patches that unify fabric host and target max/min/default
>     queue size definitions (Sagi)
>   - align MQES and SQ size according to the NVMe Spec (patch 2/8)
> 
> Max Gurtovoy (8):
>    nvme-rdma: move NVME_RDMA_IP_PORT from common file
>    nvmet: compare mqes and sqsize only for IO SQ
>    nvmet: set maxcmd to be per controller
>    nvmet: set ctrl pi_support cap before initializing cap reg
>    nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
>    nvme-rdma: clamp queue size according to ctrl cap
>    nvmet: introduce new max queue size configuration entry
>    nvmet-rdma: set max_queue_size for RDMA transport
> 
>   drivers/nvme/host/rdma.c          | 19 ++++++++++++++-----
>   drivers/nvme/target/admin-cmd.c   |  2 +-
>   drivers/nvme/target/configfs.c    | 28 ++++++++++++++++++++++++++++
>   drivers/nvme/target/core.c        | 18 ++++++++++++++++--
>   drivers/nvme/target/discovery.c   |  2 +-
>   drivers/nvme/target/fabrics-cmd.c |  5 ++---
>   drivers/nvme/target/nvmet.h       |  6 ++++--
>   drivers/nvme/target/passthru.c    |  2 +-
>   drivers/nvme/target/rdma.c        | 10 ++++++++++
>   include/linux/nvme-rdma.h         |  6 +++++-
>   include/linux/nvme.h              |  2 --
>   11 files changed, 82 insertions(+), 18 deletions(-)
> 



* Re: [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ
  2024-01-04  9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
@ 2024-01-22 13:06   ` Sagi Grimberg
  2024-01-23  8:53   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-01-22 13:06 UTC (permalink / raw)
  To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: israelr, kch, oren

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
  2024-01-04  9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
@ 2024-01-22 13:19   ` Sagi Grimberg
  2024-01-23  8:54   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-01-22 13:19 UTC (permalink / raw)
  To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: israelr, kch, oren

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap
  2024-01-04  9:25 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
@ 2024-01-22 17:39   ` Sagi Grimberg
  2024-01-23  8:54   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-01-22 17:39 UTC (permalink / raw)
  To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: israelr, kch, oren

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 7/8] nvmet: introduce new max queue size configuration entry
  2024-01-04  9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
@ 2024-01-22 17:44   ` Sagi Grimberg
  2024-01-23  8:55   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-01-22 17:44 UTC (permalink / raw)
  To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: israelr, kch, oren

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport
  2024-01-04  9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
@ 2024-01-22 17:44   ` Sagi Grimberg
  2024-01-23  8:55   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Sagi Grimberg @ 2024-01-22 17:44 UTC (permalink / raw)
  To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: israelr, kch, oren

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>



* Re: [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file
  2024-01-04  9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
@ 2024-01-23  8:53   ` Christoph Hellwig
  0 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:53 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ
  2024-01-04  9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
  2024-01-22 13:06   ` Sagi Grimberg
@ 2024-01-23  8:53   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:53 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 3/8] nvmet: set maxcmd to be per controller
  2024-01-04  9:25 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
@ 2024-01-23  8:53   ` Christoph Hellwig
  0 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:53 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg
  2024-01-04  9:25 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
@ 2024-01-23  8:53   ` Christoph Hellwig
  0 siblings, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:53 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
  2024-01-04  9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
  2024-01-22 13:19   ` Sagi Grimberg
@ 2024-01-23  8:54   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:54 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap
  2024-01-04  9:25 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
  2024-01-22 17:39   ` Sagi Grimberg
@ 2024-01-23  8:54   ` Christoph Hellwig
  2024-01-23  9:32     ` Max Gurtovoy
  1 sibling, 1 reply; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:54 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

On Thu, Jan 04, 2024 at 11:25:47AM +0200, Max Gurtovoy wrote:
> If a controller is configured with metadata support, clamp the maximal
> queue size to be 128 since there are more resources that are needed
> for metadata operations. Otherwise, clamp it to 256.
> 
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>  drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index bc90ec3c51b0..d81a7148fbc5 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -1029,11 +1029,20 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>  			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
>  	}
>  
> +	if (ctrl->ctrl.max_integrity_segments) {
> +		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_METADATA_QUEUE_SIZE) {
> +			dev_warn(ctrl->ctrl.device,
> +				"ctrl sqsize %u > max queue size %u, clamping down\n",
> +				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_METADATA_QUEUE_SIZE);
> +			ctrl->ctrl.sqsize = NVME_RDMA_MAX_METADATA_QUEUE_SIZE - 1;
> +		}
> +	} else {
> +		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
> +			dev_warn(ctrl->ctrl.device,
> +				"ctrl sqsize %u > max queue size %u, clamping down\n",
> +				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
> +			ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
> +		}

Can you just add a local max_queue_size variable instead of
duplicating all this?




* Re: [PATCH 7/8] nvmet: introduce new max queue size configuration entry
  2024-01-04  9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
  2024-01-22 17:44   ` Sagi Grimberg
@ 2024-01-23  8:55   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:55 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport
  2024-01-04  9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
  2024-01-22 17:44   ` Sagi Grimberg
@ 2024-01-23  8:55   ` Christoph Hellwig
  1 sibling, 0 replies; 25+ messages in thread
From: Christoph Hellwig @ 2024-01-23  8:55 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: kbusch, hch, sagi, linux-nvme, israelr, kch, oren

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>



* Re: [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap
  2024-01-23  8:54   ` Christoph Hellwig
@ 2024-01-23  9:32     ` Max Gurtovoy
  0 siblings, 0 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-23  9:32 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: kbusch, sagi, linux-nvme, israelr, kch, oren



On 23/01/2024 10:54, Christoph Hellwig wrote:
> On Thu, Jan 04, 2024 at 11:25:47AM +0200, Max Gurtovoy wrote:
>> If a controller is configured with metadata support, clamp the maximal
>> queue size to be 128 since there are more resources that are needed
>> for metadata operations. Otherwise, clamp it to 256.
>>
>> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>> ---
>>   drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
>>   1 file changed, 14 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
>> index bc90ec3c51b0..d81a7148fbc5 100644
>> --- a/drivers/nvme/host/rdma.c
>> +++ b/drivers/nvme/host/rdma.c
>> @@ -1029,11 +1029,20 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>>   			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
>>   	}
>>   
>> +	if (ctrl->ctrl.max_integrity_segments) {
>> +		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_METADATA_QUEUE_SIZE) {
>> +			dev_warn(ctrl->ctrl.device,
>> +				"ctrl sqsize %u > max queue size %u, clamping down\n",
>> +				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_METADATA_QUEUE_SIZE);
>> +			ctrl->ctrl.sqsize = NVME_RDMA_MAX_METADATA_QUEUE_SIZE - 1;
>> +		}
>> +	} else {
>> +		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
>> +			dev_warn(ctrl->ctrl.device,
>> +				"ctrl sqsize %u > max queue size %u, clamping down\n",
>> +				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
>> +			ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
>> +		}
> 
> Can you just add a local max_queue_size variable instead of
> duplicating all this?
> 

Sure.
something like:


         if (ctrl->ctrl.max_integrity_segments)
                 max_queue_size = NVME_RDMA_MAX_METADATA_QUEUE_SIZE;
         else
                 max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;

         if (ctrl->ctrl.sqsize + 1 > max_queue_size) {
                 dev_warn(ctrl->ctrl.device,
                         "ctrl sqsize %u > max queue size %u, clamping down\n",
                         ctrl->ctrl.sqsize + 1, max_queue_size);
                 ctrl->ctrl.sqsize = max_queue_size - 1;
         }




* [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport
  2024-01-23 14:40 [PATCH v3 " Max Gurtovoy
@ 2024-01-23 14:40 ` Max Gurtovoy
  0 siblings, 0 replies; 25+ messages in thread
From: Max Gurtovoy @ 2024-01-23 14:40 UTC (permalink / raw)
  To: kbusch, hch, sagi, linux-nvme, kch; +Cc: oren, israelr, Max Gurtovoy

A new port configuration was added to set max_queue_size. Clamp user
configuration to RDMA transport limits.

Increase the maximal queue size of RDMA controllers from 128 to 256
(the default size stays 128, as before).

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/rdma.c | 8 ++++++++
 include/linux/nvme-rdma.h  | 3 ++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index aaf9c891b2c7..be57446f398b 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1956,6 +1956,14 @@ static int nvmet_rdma_add_port(struct nvmet_port *nport)
 		nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
 	}
 
+	if (nport->max_queue_size < 0) {
+		nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
+	} else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
+		pr_warn("max_queue_size %u is too large, reducing to %u\n",
+			nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
+		nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
+	}
+
 	ret = inet_pton_with_scope(&init_net, af, nport->disc_addr.traddr,
 			nport->disc_addr.trsvcid, &port->addr);
 	if (ret) {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index d0b9941911a1..eb2f04d636c8 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,8 +8,9 @@
 
 #define NVME_RDMA_IP_PORT		4420
 
-#define NVME_RDMA_MAX_QUEUE_SIZE 128
+#define NVME_RDMA_MAX_QUEUE_SIZE 256
 #define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
+#define NVME_RDMA_DEFAULT_QUEUE_SIZE 128
 
 enum nvme_rdma_cm_fmt {
 	NVME_RDMA_CM_FMT_1_0 = 0x0,
-- 
2.18.1




end of thread, other threads:[~2024-01-23 14:41 UTC | newest]

Thread overview: 25+ messages
-- links below jump to the message on this page --
2024-01-04  9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
2024-01-04  9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-23  8:53   ` Christoph Hellwig
2024-01-04  9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
2024-01-22 13:06   ` Sagi Grimberg
2024-01-23  8:53   ` Christoph Hellwig
2024-01-04  9:25 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-23  8:53   ` Christoph Hellwig
2024-01-04  9:25 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-23  8:53   ` Christoph Hellwig
2024-01-04  9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-22 13:19   ` Sagi Grimberg
2024-01-23  8:54   ` Christoph Hellwig
2024-01-04  9:25 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-22 17:39   ` Sagi Grimberg
2024-01-23  8:54   ` Christoph Hellwig
2024-01-23  9:32     ` Max Gurtovoy
2024-01-04  9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-22 17:44   ` Sagi Grimberg
2024-01-23  8:55   ` Christoph Hellwig
2024-01-04  9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-22 17:44   ` Sagi Grimberg
2024-01-23  8:55   ` Christoph Hellwig
2024-01-22 12:09 ` [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
  -- strict thread matches above, loose matches on Subject: below --
2024-01-23 14:40 [PATCH v3 " Max Gurtovoy
2024-01-23 14:40 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
