* [PATCH 01/10] nvme: remove unused definition
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:25 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
` (8 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
There are no users of the NVMF_AUTH_HASH_LEN macro.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
include/linux/nvme.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 44325c068b6a..462c21e0e417 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -20,7 +20,6 @@
#define NVMF_TRSVCID_SIZE 32
#define NVMF_TRADDR_SIZE 256
#define NVMF_TSAS_SIZE 256
-#define NVMF_AUTH_HASH_LEN 64
#define NVME_DISC_SUBSYS_NAME "nqn.2014-08.org.nvmexpress.discovery"
--
2.18.1
^ permalink raw reply related [flat|nested] 36+ messages in thread

* [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31 0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:25 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
` (7 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
The correct place for this definition is the nvme rdma header file and
not the common nvme header file.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
include/linux/nvme-rdma.h | 2 ++
include/linux/nvme.h | 2 --
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index 4dd7e6fe92fb..146dd2223a5f 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -6,6 +6,8 @@
#ifndef _LINUX_NVME_RDMA_H
#define _LINUX_NVME_RDMA_H
+#define NVME_RDMA_IP_PORT 4420
+
#define NVME_RDMA_MAX_QUEUE_SIZE 128
enum nvme_rdma_cm_fmt {
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 462c21e0e417..ee28acdb8d43 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -23,8 +23,6 @@
#define NVME_DISC_SUBSYS_NAME "nqn.2014-08.org.nvmexpress.discovery"
-#define NVME_RDMA_IP_PORT 4420
-
#define NVME_NSID_ALL 0xffffffff
enum nvme_subsys_type {
--
2.18.1
* [PATCH 03/10] nvme-fabrics: move queue size definitions to common header
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31 0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
2023-12-31 0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:27 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
` (6 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
These definitions will be used by host and target fabrics drivers. Move
them to a common place.
In the future we can introduce a common nvme-fabrics header to ease
the maintenance of the global nvme.h header.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/host/fabrics.h | 3 ---
include/linux/nvme.h | 4 ++++
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index fbaee5a7be19..1fd75702d5f1 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -9,9 +9,6 @@
#include <linux/in.h>
#include <linux/inet.h>
-#define NVMF_MIN_QUEUE_SIZE 16
-#define NVMF_MAX_QUEUE_SIZE 1024
-#define NVMF_DEF_QUEUE_SIZE 128
#define NVMF_DEF_RECONNECT_DELAY 10
/* default to 600 seconds of reconnect attempts before giving up */
#define NVMF_DEF_CTRL_LOSS_TMO 600
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index ee28acdb8d43..6fca2ff6d0fb 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -21,6 +21,10 @@
#define NVMF_TRADDR_SIZE 256
#define NVMF_TSAS_SIZE 256
+#define NVMF_MIN_QUEUE_SIZE 16
+#define NVMF_MAX_QUEUE_SIZE 1024
+#define NVMF_DEF_QUEUE_SIZE 128
+
#define NVME_DISC_SUBSYS_NAME "nqn.2014-08.org.nvmexpress.discovery"
#define NVME_NSID_ALL 0xffffffff
--
2.18.1
* Re: [PATCH 03/10] nvme-fabrics: move queue size definitions to common header
2023-12-31 0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
@ 2024-01-01 9:27 ` Sagi Grimberg
2024-01-01 10:06 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 9:27 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
> These definitions will be used by host and target fabrics drivers. Move
> them to a common place.
Don't see why these should in any way be common.
>
> In the future we can introduce a common nvme-fabrics header to ease on
> the maintenance of the global nvme.h header.
nvme.h is the wrong place IMO.
* Re: [PATCH 03/10] nvme-fabrics: move queue size definitions to common header
2024-01-01 9:27 ` Sagi Grimberg
@ 2024-01-01 10:06 ` Max Gurtovoy
2024-01-01 11:20 ` Sagi Grimberg
0 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-01 10:06 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 11:27, Sagi Grimberg wrote:
>
>> These definitions will be used by host and target fabrics drivers. Move
>> them to a common place.
>
> Don't see why these should in any way be common.
Why not?
Why should we define these values separately for hosts and targets?
>
>>
>> In the future we can introduce a common nvme-fabrics header to ease on
>> the maintenance of the global nvme.h header.
>
> nvme.h is the wrong place IMO.
I've mentioned that it would be better to create an
include/linux/nvme-fabrics.h file for common definitions shared by the
fabrics drivers.
But we can do it incrementally.
* Re: [PATCH 03/10] nvme-fabrics: move queue size definitions to common header
2024-01-01 10:06 ` Max Gurtovoy
@ 2024-01-01 11:20 ` Sagi Grimberg
2024-01-01 12:34 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 11:20 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
>>> These definitions will be used by host and target fabrics drivers. Move
>>> them to a common place.
>>
>> Don't see why these should in any way be common.
>
> why not ?
> why should we define these values separately for hosts and targets ?
Because the host and the target are separate systems. Don't know why
they should share the same defines for something like this.
>
>>
>>>
>>> In the future we can introduce a common nvme-fabrics header to ease on
>>> the maintenance of the global nvme.h header.
>>
>> nvme.h is the wrong place IMO.
>
> I've mentioned that it will be better to create
> include/linux/nvme-fabrics.h file for common definitions for fabrics
> drivers.
> But we can do it incrementally.
I still think that nvme.h is the wrong place for this. But if others
don't mind then fine.
* Re: [PATCH 03/10] nvme-fabrics: move queue size definitions to common header
2024-01-01 11:20 ` Sagi Grimberg
@ 2024-01-01 12:34 ` Max Gurtovoy
0 siblings, 0 replies; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-01 12:34 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 13:20, Sagi Grimberg wrote:
>
>>>> These definitions will be used by host and target fabrics drivers. Move
>>>> them to a common place.
>>>
>>> Don't see why these should in any way be common.
>>
>> why not ?
>> why should we define these values separately for hosts and targets ?
>
> Because the host and the target are separate systems. Don't know why
> they should share the same defines for something like this.
It is the same Linux kernel framework, so why should we have different
values?
We have an option today to set queue_depth in nvme connect, and now
we'll have an interface to set queue_depth on the target side as well.
Do you prefer having duplicate min/max/default queue sizes for the
fabrics host and target?
>
>>
>>>
>>>>
>>>> In the future we can introduce a common nvme-fabrics header to ease on
>>>> the maintenance of the global nvme.h header.
>>>
>>> nvme.h is the wrong place IMO.
>>
>> I've mentioned that it will be better to create
>> include/linux/nvme-fabrics.h file for common definitions for fabrics
>> drivers.
>> But we can do it incrementally.
>
> I still think that nvme.h is the wrong place for this. But if others
> don't mind then fine.
nvme.h is already overloaded and splitting it into nvme-fabrics.h is
recommended anyway, but I feel that we can do it incrementally.
* [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (2 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:28 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
` (5 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
Use the common NVMF_MAX_QUEUE_SIZE definition instead.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/core.c | 2 +-
drivers/nvme/target/nvmet.h | 3 +--
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index d26aa30f8702..128156145d29 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1225,7 +1225,7 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
if (ctrl->ops->get_max_queue_size)
ctrl->cap |= ctrl->ops->get_max_queue_size(ctrl) - 1;
else
- ctrl->cap |= NVMET_QUEUE_SIZE - 1;
+ ctrl->cap |= NVMF_MAX_QUEUE_SIZE - 1;
if (nvmet_is_passthru_subsys(ctrl->subsys))
nvmet_passthrough_override_cap(ctrl);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 6c8acebe1a1a..bf99c58aab52 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -543,9 +543,8 @@ void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys,
void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
u8 event_info, u8 log_page);
-#define NVMET_QUEUE_SIZE 1024
#define NVMET_NR_QUEUES 128
-#define NVMET_MAX_CMD NVMET_QUEUE_SIZE
+#define NVMET_MAX_CMD NVMF_MAX_QUEUE_SIZE
/*
* Nice round number that makes a list of nsids fit into a page.
--
2.18.1
* Re: [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition
2023-12-31 0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
@ 2024-01-01 9:28 ` Sagi Grimberg
2024-01-01 10:10 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 9:28 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
> Use the common NVMF_MAX_QUEUE_SIZE definition instead.
Why?
* Re: [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition
2024-01-01 9:28 ` Sagi Grimberg
@ 2024-01-01 10:10 ` Max Gurtovoy
2024-01-01 11:20 ` Sagi Grimberg
0 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-01 10:10 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 11:28, Sagi Grimberg wrote:
>
>> Use the common NVMF_MAX_QUEUE_SIZE definition instead.
>
> Why?
In the next commits we introduce a new interface to configure the
max_queue_size of a port. We need MIN/MAX/DEFAULT definitions for this
configuration.
I chose to use the existing definitions from the host drivers.
If needed, I can duplicate those for the target, but I think there is no
good reason to separate these definitions.
* Re: [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition
2024-01-01 10:10 ` Max Gurtovoy
@ 2024-01-01 11:20 ` Sagi Grimberg
0 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 11:20 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
>>> Use the common NVMF_MAX_QUEUE_SIZE definition instead.
>>
>> Why?
>
> In the next commits we introduce new interface to configure
> max_queue_size of a port. We need MIN/MAX/DEFAULT definitions for this
> configuration.
> I choose using existing definitions exist today in the hosts drivers.
> If needed, I can duplicate those to the target but I think there is no
> good reason to separate these definitions.
Aren't they separated already?
* [PATCH 05/10] nvmet: set maxcmd to be per controller
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (3 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:31 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
` (4 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
This is a preparation for having a dynamic configuration of the max queue
size for a controller. Make sure that the maxcmd field stays equal to
the MQES value + 1, as it is today.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/admin-cmd.c | 2 +-
drivers/nvme/target/discovery.c | 2 +-
drivers/nvme/target/nvmet.h | 2 +-
drivers/nvme/target/passthru.c | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 39cb570f833d..f5b7054a4a05 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -428,7 +428,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
id->cqes = (0x4 << 4) | 0x4;
/* no enforcement soft-limit for maxcmd - pick arbitrary high value */
- id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+ id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES);
id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index 668d257fa986..0d5014905069 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -282,7 +282,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
id->lpa = (1 << 2);
/* no enforcement soft-limit for maxcmd - pick arbitrary high value */
- id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+ id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
id->sgls = cpu_to_le32(1 << 0); /* we always support SGLs */
if (ctrl->ops->flags & NVMF_KEYED_SGLS)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index bf99c58aab52..a76f816edf1d 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -544,7 +544,7 @@ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
u8 event_info, u8 log_page);
#define NVMET_NR_QUEUES 128
-#define NVMET_MAX_CMD NVMF_MAX_QUEUE_SIZE
+#define NVMET_MAX_CMD(ctrl) (NVME_CAP_MQES(ctrl->cap) + 1)
/*
* Nice round number that makes a list of nsids fit into a page.
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index f2d963e1fe94..bb4a69d538fd 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -132,7 +132,7 @@ static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req)
id->sqes = min_t(__u8, ((0x6 << 4) | 0x6), id->sqes);
id->cqes = min_t(__u8, ((0x4 << 4) | 0x4), id->cqes);
- id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+ id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
/* don't support fuse commands */
id->fuses = 0;
--
2.18.1
* [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (4 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:31 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
` (3 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
This is a preparation for setting the maximal queue size of a controller
that supports PI.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/core.c | 1 +
drivers/nvme/target/fabrics-cmd.c | 2 --
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 128156145d29..f08997f58101 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1411,6 +1411,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
kref_init(&ctrl->ref);
ctrl->subsys = subsys;
+ ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
nvmet_init_cap(ctrl);
WRITE_ONCE(ctrl->aen_enabled, NVMET_AEN_CFG_OPTIONAL);
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index d8da840a1c0e..6f898aac1ad4 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -251,8 +251,6 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
if (status)
goto out;
- ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
-
uuid_copy(&ctrl->hostid, &d->hostid);
ret = nvmet_setup_auth(ctrl);
--
2.18.1
* [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (5 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:34 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
` (2 subsequent siblings)
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
This definition will be used by controllers that are configured with
metadata support. For now, both regular and metadata controllers have
the same maximal queue size, but a later commit will increase the maximal
queue size for regular RDMA controllers to 256.
We'll keep the maximal queue size for metadata controllers at 128,
since more resources are needed for metadata operations.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/rdma.c | 2 ++
include/linux/nvme-rdma.h | 3 ++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca43a6d..f298295c0b0f 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -2002,6 +2002,8 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
{
+ if (ctrl->pi_support)
+ return NVME_RDMA_MAX_METADATA_QUEUE_SIZE;
return NVME_RDMA_MAX_QUEUE_SIZE;
}
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index 146dd2223a5f..d0b9941911a1 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,7 +8,8 @@
#define NVME_RDMA_IP_PORT 4420
-#define NVME_RDMA_MAX_QUEUE_SIZE 128
+#define NVME_RDMA_MAX_QUEUE_SIZE 128
+#define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
enum nvme_rdma_cm_fmt {
NVME_RDMA_CM_FMT_1_0 = 0x0,
--
2.18.1
* Re: [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
2023-12-31 0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
@ 2024-01-01 9:34 ` Sagi Grimberg
2024-01-01 10:57 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 9:34 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
Another completely arbitrary limit...
Can this come somehow from the rdma core?
If not, it's not making things any worse, so it's fine by me.
Please comment that this limit is based on testing the devices
that support PI.
On 12/31/23 02:52, Max Gurtovoy wrote:
> This definition will be used by controllers that are configured with
> metadata support. For now, both regular and metadata controllers have
> the same maximal queue size but later commit will increase the maximal
> queue size for regular RDMA controllers to 256.
> We'll keep the maximal queue size for metadata controller to be 128
> since there are more resources that are needed for metadata operations.
>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
> drivers/nvme/target/rdma.c | 2 ++
> include/linux/nvme-rdma.h | 3 ++-
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 4597bca43a6d..f298295c0b0f 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -2002,6 +2002,8 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
>
> static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
> {
> + if (ctrl->pi_support)
> + return NVME_RDMA_MAX_METADATA_QUEUE_SIZE;
> return NVME_RDMA_MAX_QUEUE_SIZE;
> }
>
> diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
> index 146dd2223a5f..d0b9941911a1 100644
> --- a/include/linux/nvme-rdma.h
> +++ b/include/linux/nvme-rdma.h
> @@ -8,7 +8,8 @@
>
> #define NVME_RDMA_IP_PORT 4420
>
> -#define NVME_RDMA_MAX_QUEUE_SIZE 128
> +#define NVME_RDMA_MAX_QUEUE_SIZE 128
> +#define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
>
> enum nvme_rdma_cm_fmt {
> NVME_RDMA_CM_FMT_1_0 = 0x0,
* Re: [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
2024-01-01 9:34 ` Sagi Grimberg
@ 2024-01-01 10:57 ` Max Gurtovoy
2024-01-01 11:21 ` Sagi Grimberg
0 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-01 10:57 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 11:34, Sagi Grimberg wrote:
> Another completely arbitrary limit...
> Can this come somehow from the rdma core?
>
> If not, its not making things any worse. So its fine by me.
> Please comment that this limit is based on testing the devices
> that support PI.
>
We are not changing the logic for PI controllers at all in this commit.
I can say something like:
"... We'll keep the maximal queue size for metadata controllers to be
128 since there are more resources that are needed for metadata
operations and 128 is the optimal size found for metadata controllers
based on testing.
> On 12/31/23 02:52, Max Gurtovoy wrote:
>> This definition will be used by controllers that are configured with
>> metadata support. For now, both regular and metadata controllers have
>> the same maximal queue size but later commit will increase the maximal
>> queue size for regular RDMA controllers to 256.
>> We'll keep the maximal queue size for metadata controller to be 128
>> since there are more resources that are needed for metadata operations.
>>
>> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>> ---
>> drivers/nvme/target/rdma.c | 2 ++
>> include/linux/nvme-rdma.h | 3 ++-
>> 2 files changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 4597bca43a6d..f298295c0b0f 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -2002,6 +2002,8 @@ static u8 nvmet_rdma_get_mdts(const struct
>> nvmet_ctrl *ctrl)
>> static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
>> {
>> + if (ctrl->pi_support)
>> + return NVME_RDMA_MAX_METADATA_QUEUE_SIZE;
>> return NVME_RDMA_MAX_QUEUE_SIZE;
>> }
>> diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
>> index 146dd2223a5f..d0b9941911a1 100644
>> --- a/include/linux/nvme-rdma.h
>> +++ b/include/linux/nvme-rdma.h
>> @@ -8,7 +8,8 @@
>> #define NVME_RDMA_IP_PORT 4420
>> -#define NVME_RDMA_MAX_QUEUE_SIZE 128
>> +#define NVME_RDMA_MAX_QUEUE_SIZE 128
>> +#define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
>> enum nvme_rdma_cm_fmt {
>> NVME_RDMA_CM_FMT_1_0 = 0x0,
* Re: [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
2024-01-01 10:57 ` Max Gurtovoy
@ 2024-01-01 11:21 ` Sagi Grimberg
2024-01-03 22:37 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 11:21 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
>> Another completely arbitrary limit...
>> Can this come somehow from the rdma core?
>>
>> If not, its not making things any worse. So its fine by me.
>> Please comment that this limit is based on testing the devices
>> that support PI.
>>
>
> We are not changing the logic for PI controllers at all in this commit.
>
> I can say something like:
>
> "... We'll keep the maximal queue size for metadata controllers to be
> 128 since there are more resources that are needed for metadata
> operations and 128 is the optimal size found for metadata controllers
> based on testing.
That would be better, yes.
* Re: [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
2024-01-01 11:21 ` Sagi Grimberg
@ 2024-01-03 22:37 ` Max Gurtovoy
2024-01-04 8:23 ` Sagi Grimberg
0 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-03 22:37 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 13:21, Sagi Grimberg wrote:
>>> Another completely arbitrary limit...
>>> Can this come somehow from the rdma core?
>>>
>>> If not, its not making things any worse. So its fine by me.
>>> Please comment that this limit is based on testing the devices
>>> that support PI.
>>>
>>
>> We are not changing the logic for PI controllers at all in this commit.
>>
>> I can say something like:
>>
>> "... We'll keep the maximal queue size for metadata controllers to be
>> 128 since there are more resources that are needed for metadata
>> operations and 128 is the optimal size found for metadata controllers
>> based on testing.
>
> That would be better, yes.
np.
I would like to send a new version.
Any other comments for this commit besides the language for the commit
message?
* Re: [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
2024-01-03 22:37 ` Max Gurtovoy
@ 2024-01-04 8:23 ` Sagi Grimberg
0 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-04 8:23 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 1/4/24 00:37, Max Gurtovoy wrote:
>
>
> On 01/01/2024 13:21, Sagi Grimberg wrote:
>>>> Another completely arbitrary limit...
>>>> Can this come somehow from the rdma core?
>>>>
>>>> If not, its not making things any worse. So its fine by me.
>>>> Please comment that this limit is based on testing the devices
>>>> that support PI.
>>>>
>>>
>>> We are not changing the logic for PI controllers at all in this commit.
>>>
>>> I can say something like:
>>>
>>> "... We'll keep the maximal queue size for metadata controllers to
>>> be 128 since there are more resources that are needed for metadata
>>> operations and 128 is the optimal size found for metadata
>>> controllers based on testing.
>>
>> That would be better, yes.
>
> np.
> I would like to send a new version.
> any other comments for this commit beside the language for the commit
> message ?
No
* [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
* [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (6 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:35 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2023-12-31 0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
If a controller is configured with metadata support, clamp the maximal
queue size to 128, since more resources are needed for metadata
operations. Otherwise, clamp it to 256.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7c99c87688dd..a0ff406c10a9 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1029,11 +1029,20 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
}
- if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
- dev_warn(ctrl->ctrl.device,
- "ctrl sqsize %u > max queue size %u, clamping down\n",
- ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
- ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+ if (ctrl->ctrl.max_integrity_segments) {
+ if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_METADATA_QUEUE_SIZE) {
+ dev_warn(ctrl->ctrl.device,
+ "ctrl sqsize %u > max queue size %u, clamping down\n",
+ ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_METADATA_QUEUE_SIZE);
+ ctrl->ctrl.sqsize = NVME_RDMA_MAX_METADATA_QUEUE_SIZE - 1;
+ }
+ } else {
+ if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+ dev_warn(ctrl->ctrl.device,
+ "ctrl sqsize %u > max queue size %u, clamping down\n",
+ ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
+ ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+ }
}
if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
--
2.18.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
2023-12-31 0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
@ 2024-01-01 9:35 ` Sagi Grimberg
2024-01-01 17:22 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 9:35 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
> If a controller is configured with metadata support, clamp the maximal
> queue size to 128, since metadata operations require additional
> resources. Otherwise, clamp it to 256.
Does the qp allocation succeed or fail if attempting to create
a 256-entry queue with metadata?
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
2024-01-01 9:35 ` Sagi Grimberg
@ 2024-01-01 17:22 ` Max Gurtovoy
2024-01-02 7:59 ` Sagi Grimberg
0 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-01 17:22 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 11:35, Sagi Grimberg wrote:
>
>> If a controller is configured with metadata support, clamp the maximal
>> queue size to 128, since metadata operations require additional
>> resources. Otherwise, clamp it to 256.
>
> Does the qp allocation succeed or fail if attempting to create
> a 256-entry queue with metadata?
It succeeds (tried ConnectX-4 and ConnectX-6 Dx), but we allocate a lot
of metadata resources, so the scale will be lower. I don't see a reason
to allocate more than 128 entries for metadata controllers.
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
2024-01-01 17:22 ` Max Gurtovoy
@ 2024-01-02 7:59 ` Sagi Grimberg
0 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-02 7:59 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
>>> If a controller is configured with metadata support, clamp the maximal
>>> queue size to 128, since metadata operations require additional
>>> resources. Otherwise, clamp it to 256.
>>
>> Does the qp allocation succeed or fail if attempting to create
>> a 256-entry queue with metadata?
>
> It succeeds (tried ConnectX-4 and ConnectX-6 Dx), but we allocate a lot
> of metadata resources, so the scale will be lower. I don't see a reason
> to allocate more than 128 entries for metadata controllers.
Me neither, it would have been nice to get this information from the
rdma core though.
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH 09/10] nvmet: introduce new max queue size configuration entry
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (7 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:37 ` Sagi Grimberg
2024-01-02 2:07 ` Guixin Liu
2023-12-31 0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
9 siblings, 2 replies; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
Using this port configuration, one will be able to set the maximal queue
size to be used for any controller that will be associated with the
configured port.
The default value stays 1024, but each transport will be able to set
its own value before enabling the port.
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/configfs.c | 28 ++++++++++++++++++++++++++++
drivers/nvme/target/core.c | 16 ++++++++++++++--
drivers/nvme/target/nvmet.h | 1 +
3 files changed, 43 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index bd514d4c4a5b..f8df2ef715ba 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -272,6 +272,32 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
CONFIGFS_ATTR(nvmet_, param_inline_data_size);
+static ssize_t nvmet_param_max_queue_size_show(struct config_item *item,
+ char *page)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ return snprintf(page, PAGE_SIZE, "%d\n", port->max_queue_size);
+}
+
+static ssize_t nvmet_param_max_queue_size_store(struct config_item *item,
+ const char *page, size_t count)
+{
+ struct nvmet_port *port = to_nvmet_port(item);
+ int ret;
+
+ if (nvmet_is_port_enabled(port, __func__))
+ return -EACCES;
+ ret = kstrtoint(page, 0, &port->max_queue_size);
+ if (ret) {
+ pr_err("Invalid value '%s' for max_queue_size\n", page);
+ return -EINVAL;
+ }
+ return count;
+}
+
+CONFIGFS_ATTR(nvmet_, param_max_queue_size);
+
#ifdef CONFIG_BLK_DEV_INTEGRITY
static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
char *page)
@@ -1856,6 +1882,7 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
&nvmet_attr_addr_trtype,
&nvmet_attr_addr_tsas,
&nvmet_attr_param_inline_data_size,
+ &nvmet_attr_param_max_queue_size,
#ifdef CONFIG_BLK_DEV_INTEGRITY
&nvmet_attr_param_pi_enable,
#endif
@@ -1914,6 +1941,7 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
INIT_LIST_HEAD(&port->subsystems);
INIT_LIST_HEAD(&port->referrals);
port->inline_data_size = -1; /* < 0 == let the transport choose */
+ port->max_queue_size = -1; /* < 0 == let the transport choose */
port->disc_addr.portid = cpu_to_le16(portid);
port->disc_addr.adrfam = NVMF_ADDR_FAMILY_MAX;
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index f08997f58101..f7d82da4c1bc 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -358,6 +358,17 @@ int nvmet_enable_port(struct nvmet_port *port)
if (port->inline_data_size < 0)
port->inline_data_size = 0;
+ /*
+ * If the transport didn't set the max_queue_size properly, then clamp
+ * it to the global fabrics limits.
+ */
+ if (port->max_queue_size < 0)
+ port->max_queue_size = NVMF_MAX_QUEUE_SIZE;
+ else
+ port->max_queue_size = clamp_t(int, port->max_queue_size,
+ NVMF_MIN_QUEUE_SIZE,
+ NVMF_MAX_QUEUE_SIZE);
+
port->enabled = true;
port->tr_ops = ops;
return 0;
@@ -1223,9 +1234,10 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
ctrl->cap |= (15ULL << 24);
/* maximum queue entries supported: */
if (ctrl->ops->get_max_queue_size)
- ctrl->cap |= ctrl->ops->get_max_queue_size(ctrl) - 1;
+ ctrl->cap |= min_t(u16, ctrl->ops->get_max_queue_size(ctrl),
+ ctrl->port->max_queue_size) - 1;
else
- ctrl->cap |= NVMF_MAX_QUEUE_SIZE - 1;
+ ctrl->cap |= ctrl->port->max_queue_size - 1;
if (nvmet_is_passthru_subsys(ctrl->subsys))
nvmet_passthrough_override_cap(ctrl);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index a76f816edf1d..ab6459441376 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -163,6 +163,7 @@ struct nvmet_port {
void *priv;
bool enabled;
int inline_data_size;
+ int max_queue_size;
const struct nvmet_fabrics_ops *tr_ops;
bool pi_enable;
};
--
2.18.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH 09/10] nvmet: introduce new max queue size configuration entry
2023-12-31 0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
@ 2024-01-01 9:37 ` Sagi Grimberg
2024-01-02 2:07 ` Guixin Liu
1 sibling, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 9:37 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
> Using this port configuration, one will be able to set the maximal queue
> size to be used for any controller that will be associated with the
> configured port.
>
> The default value stays 1024, but each transport will be able to set
> its own value before enabling the port.
>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 09/10] nvmet: introduce new max queue size configuration entry
2023-12-31 0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-01 9:37 ` Sagi Grimberg
@ 2024-01-02 2:07 ` Guixin Liu
1 sibling, 0 replies; 36+ messages in thread
From: Guixin Liu @ 2024-01-02 2:07 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, sagi, linux-nvme; +Cc: oren, israelr, kch
lol, I have made this configurable too,
Reviewed-by: Guixin Liu <kanie@linux.alibaba.com>
On 2023/12/31 08:52, Max Gurtovoy wrote:
> Using this port configuration, one will be able to set the maximal queue
> size to be used for any controller that will be associated with the
> configured port.
>
> The default value stays 1024, but each transport will be able to set
> its own value before enabling the port.
>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
> drivers/nvme/target/configfs.c | 28 ++++++++++++++++++++++++++++
> drivers/nvme/target/core.c | 16 ++++++++++++++--
> drivers/nvme/target/nvmet.h | 1 +
> 3 files changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
> index bd514d4c4a5b..f8df2ef715ba 100644
> --- a/drivers/nvme/target/configfs.c
> +++ b/drivers/nvme/target/configfs.c
> @@ -272,6 +272,32 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
>
> CONFIGFS_ATTR(nvmet_, param_inline_data_size);
>
> +static ssize_t nvmet_param_max_queue_size_show(struct config_item *item,
> + char *page)
> +{
> + struct nvmet_port *port = to_nvmet_port(item);
> +
> + return snprintf(page, PAGE_SIZE, "%d\n", port->max_queue_size);
> +}
> +
> +static ssize_t nvmet_param_max_queue_size_store(struct config_item *item,
> + const char *page, size_t count)
> +{
> + struct nvmet_port *port = to_nvmet_port(item);
> + int ret;
> +
> + if (nvmet_is_port_enabled(port, __func__))
> + return -EACCES;
> + ret = kstrtoint(page, 0, &port->max_queue_size);
> + if (ret) {
> + pr_err("Invalid value '%s' for max_queue_size\n", page);
> + return -EINVAL;
> + }
> + return count;
> +}
> +
> +CONFIGFS_ATTR(nvmet_, param_max_queue_size);
> +
> #ifdef CONFIG_BLK_DEV_INTEGRITY
> static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
> char *page)
> @@ -1856,6 +1882,7 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
> &nvmet_attr_addr_trtype,
> &nvmet_attr_addr_tsas,
> &nvmet_attr_param_inline_data_size,
> + &nvmet_attr_param_max_queue_size,
> #ifdef CONFIG_BLK_DEV_INTEGRITY
> &nvmet_attr_param_pi_enable,
> #endif
> @@ -1914,6 +1941,7 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
> INIT_LIST_HEAD(&port->subsystems);
> INIT_LIST_HEAD(&port->referrals);
> port->inline_data_size = -1; /* < 0 == let the transport choose */
> + port->max_queue_size = -1; /* < 0 == let the transport choose */
>
> port->disc_addr.portid = cpu_to_le16(portid);
> port->disc_addr.adrfam = NVMF_ADDR_FAMILY_MAX;
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index f08997f58101..f7d82da4c1bc 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -358,6 +358,17 @@ int nvmet_enable_port(struct nvmet_port *port)
> if (port->inline_data_size < 0)
> port->inline_data_size = 0;
>
> + /*
> + * If the transport didn't set the max_queue_size properly, then clamp
> + * it to the global fabrics limits.
> + */
> + if (port->max_queue_size < 0)
> + port->max_queue_size = NVMF_MAX_QUEUE_SIZE;
> + else
> + port->max_queue_size = clamp_t(int, port->max_queue_size,
> + NVMF_MIN_QUEUE_SIZE,
> + NVMF_MAX_QUEUE_SIZE);
> +
> port->enabled = true;
> port->tr_ops = ops;
> return 0;
> @@ -1223,9 +1234,10 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
> ctrl->cap |= (15ULL << 24);
> /* maximum queue entries supported: */
> if (ctrl->ops->get_max_queue_size)
> - ctrl->cap |= ctrl->ops->get_max_queue_size(ctrl) - 1;
> + ctrl->cap |= min_t(u16, ctrl->ops->get_max_queue_size(ctrl),
> + ctrl->port->max_queue_size) - 1;
> else
> - ctrl->cap |= NVMF_MAX_QUEUE_SIZE - 1;
> + ctrl->cap |= ctrl->port->max_queue_size - 1;
>
> if (nvmet_is_passthru_subsys(ctrl->subsys))
> nvmet_passthrough_override_cap(ctrl);
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index a76f816edf1d..ab6459441376 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -163,6 +163,7 @@ struct nvmet_port {
> void *priv;
> bool enabled;
> int inline_data_size;
> + int max_queue_size;
> const struct nvmet_fabrics_ops *tr_ops;
> bool pi_enable;
> };
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
` (8 preceding siblings ...)
2023-12-31 0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
@ 2023-12-31 0:52 ` Max Gurtovoy
2024-01-01 9:39 ` Sagi Grimberg
9 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2023-12-31 0:52 UTC (permalink / raw)
To: kbusch, hch, sagi, linux-nvme; +Cc: kanie, oren, israelr, kch, Max Gurtovoy
A new port configuration was added to set max_queue_size. Clamp the user
configuration to the RDMA transport limits.
Increase the maximal queue size of RDMA controllers from 128 to 256
(the default size stays 128, the same as before).
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/rdma.c | 8 ++++++++
include/linux/nvme-rdma.h | 3 ++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index f298295c0b0f..3a3686efe008 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct nvmet_port *nport)
nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
}
+ if (nport->max_queue_size < 0) {
+ nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
+ } else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
+ pr_warn("max_queue_size %u is too large, reducing to %u\n",
+ nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
+ nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
+ }
+
ret = inet_pton_with_scope(&init_net, af, nport->disc_addr.traddr,
nport->disc_addr.trsvcid, &port->addr);
if (ret) {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index d0b9941911a1..eb2f04d636c8 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,8 +8,9 @@
#define NVME_RDMA_IP_PORT 4420
-#define NVME_RDMA_MAX_QUEUE_SIZE 128
+#define NVME_RDMA_MAX_QUEUE_SIZE 256
#define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
+#define NVME_RDMA_DEFAULT_QUEUE_SIZE 128
enum nvme_rdma_cm_fmt {
NVME_RDMA_CM_FMT_1_0 = 0x0,
--
2.18.1
^ permalink raw reply related [flat|nested] 36+ messages in thread
* Re: [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport
2023-12-31 0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
@ 2024-01-01 9:39 ` Sagi Grimberg
2024-01-03 22:42 ` Max Gurtovoy
0 siblings, 1 reply; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-01 9:39 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
> A new port configuration was added to set max_queue_size. Clamp the user
> configuration to the RDMA transport limits.
>
> Increase the maximal queue size of RDMA controllers from 128 to 256
> (the default size stays 128, the same as before).
>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
> drivers/nvme/target/rdma.c | 8 ++++++++
> include/linux/nvme-rdma.h | 3 ++-
> 2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index f298295c0b0f..3a3686efe008 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct nvmet_port *nport)
> nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
> }
>
> + if (nport->max_queue_size < 0) {
> + nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
> + } else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
> + pr_warn("max_queue_size %u is too large, reducing to %u\n",
> + nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
> + nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
> + }
> +
Not sure it's a good idea to tie the host and nvmet default values
together.
^ permalink raw reply [flat|nested] 36+ messages in thread* Re: [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport
2024-01-01 9:39 ` Sagi Grimberg
@ 2024-01-03 22:42 ` Max Gurtovoy
2024-01-04 8:27 ` Sagi Grimberg
0 siblings, 1 reply; 36+ messages in thread
From: Max Gurtovoy @ 2024-01-03 22:42 UTC (permalink / raw)
To: Sagi Grimberg, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 01/01/2024 11:39, Sagi Grimberg wrote:
>
>> A new port configuration was added to set max_queue_size. Clamp the user
>> configuration to the RDMA transport limits.
>>
>> Increase the maximal queue size of RDMA controllers from 128 to 256
>> (the default size stays 128, the same as before).
>>
>> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>> ---
>> drivers/nvme/target/rdma.c | 8 ++++++++
>> include/linux/nvme-rdma.h | 3 ++-
>> 2 files changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index f298295c0b0f..3a3686efe008 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct
>> nvmet_port *nport)
>> nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
>> }
>> + if (nport->max_queue_size < 0) {
>> + nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
>> + } else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
>> + pr_warn("max_queue_size %u is too large, reducing to %u\n",
>> + nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
>> + nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
>> + }
>> +
>
> Not sure it's a good idea to tie the host and nvmet default values
> together.
It is already tied for RDMA. I don't see a reason to change it.
I will keep the other default values for fabrics separate, as it is
today, following your review in other commits.
We can discuss it in a dedicated series since it is not related to the
feature we would like to introduce here.
^ permalink raw reply [flat|nested] 36+ messages in thread* Re: [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport
2024-01-03 22:42 ` Max Gurtovoy
@ 2024-01-04 8:27 ` Sagi Grimberg
0 siblings, 0 replies; 36+ messages in thread
From: Sagi Grimberg @ 2024-01-04 8:27 UTC (permalink / raw)
To: Max Gurtovoy, kbusch, hch, linux-nvme; +Cc: kanie, oren, israelr, kch
On 1/4/24 00:42, Max Gurtovoy wrote:
>
>
> On 01/01/2024 11:39, Sagi Grimberg wrote:
>>
>>> A new port configuration was added to set max_queue_size. Clamp the user
>>> configuration to the RDMA transport limits.
>>>
>>> Increase the maximal queue size of RDMA controllers from 128 to 256
>>> (the default size stays 128, the same as before).
>>>
>>> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
>>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>>> ---
>>> drivers/nvme/target/rdma.c | 8 ++++++++
>>> include/linux/nvme-rdma.h | 3 ++-
>>> 2 files changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>>> index f298295c0b0f..3a3686efe008 100644
>>> --- a/drivers/nvme/target/rdma.c
>>> +++ b/drivers/nvme/target/rdma.c
>>> @@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct
>>> nvmet_port *nport)
>>> nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
>>> }
>>> + if (nport->max_queue_size < 0) {
>>> + nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
>>> + } else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
>>> + pr_warn("max_queue_size %u is too large, reducing to %u\n",
>>> + nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
>>> + nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
>>> + }
>>> +
>>
>> Not sure it's a good idea to tie the host and nvmet default values
>> together.
>
> It is already tied for RDMA. I don't see a reason to change it.
> I will keep the other default values for fabrics separate, as it is
> today, following your review in other commits.
> We can discuss it in a dedicated series since it is not related to the
> feature we would like to introduce here.
OK, the discussion is too long anyway. I don't care all that much, tbh.
^ permalink raw reply [flat|nested] 36+ messages in thread