From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: <kbusch@kernel.org>, <hch@lst.de>, <sagi@grimberg.me>,
<linux-nvme@lists.infradead.org>
Cc: <israelr@nvidia.com>, <kch@nvidia.com>, <oren@nvidia.com>,
Max Gurtovoy <mgurtovoy@nvidia.com>
Subject: [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport
Date: Thu, 4 Jan 2024 11:25:49 +0200
Message-ID: <20240104092549.25721-9-mgurtovoy@nvidia.com>
In-Reply-To: <20240104092549.25721-1-mgurtovoy@nvidia.com>
A new port configuration option was added to set max_queue_size. Clamp the
user-configured value to the RDMA transport limits.

Increase the maximum queue size of RDMA controllers from 128 to 256 (the
default stays 128, same as before).
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/rdma.c | 8 ++++++++
include/linux/nvme-rdma.h | 3 ++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index f298295c0b0f..3a3686efe008 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct nvmet_port *nport)
 		nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
 	}
 
+	if (nport->max_queue_size < 0) {
+		nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
+	} else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
+		pr_warn("max_queue_size %u is too large, reducing to %u\n",
+			nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
+		nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
+	}
+
 	ret = inet_pton_with_scope(&init_net, af, nport->disc_addr.traddr,
 			nport->disc_addr.trsvcid, &port->addr);
 	if (ret) {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index d0b9941911a1..eb2f04d636c8 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,8 +8,9 @@
 
 #define NVME_RDMA_IP_PORT		4420
 
-#define NVME_RDMA_MAX_QUEUE_SIZE	128
+#define NVME_RDMA_MAX_QUEUE_SIZE	256
 #define NVME_RDMA_MAX_METADATA_QUEUE_SIZE 128
+#define NVME_RDMA_DEFAULT_QUEUE_SIZE	128
 
 enum nvme_rdma_cm_fmt {
 	NVME_RDMA_CM_FMT_1_0	= 0x0,
--
2.18.1