public inbox for linux-nvme@lists.infradead.org
From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: <kbusch@kernel.org>, <hch@lst.de>, <sagi@grimberg.me>,
	<linux-nvme@lists.infradead.org>
Cc: <kanie@linux.alibaba.com>, <oren@nvidia.com>,
	<israelr@nvidia.com>, <kch@nvidia.com>,
	Max Gurtovoy <mgurtovoy@nvidia.com>
Subject: [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
Date: Sun, 31 Dec 2023 02:52:46 +0200	[thread overview]
Message-ID: <20231231005249.18294-8-mgurtovoy@nvidia.com> (raw)
In-Reply-To: <20231231005249.18294-1-mgurtovoy@nvidia.com>

This definition will be used by controllers that are configured with
metadata support. For now, both regular and metadata controllers have
the same maximum queue size, but a later commit will increase the
maximum queue size for regular RDMA controllers to 256.
We'll keep the maximum queue size for metadata controllers at 128,
since metadata operations require more resources.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/rdma.c | 2 ++
 include/linux/nvme-rdma.h  | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca43a6d..f298295c0b0f 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -2002,6 +2002,8 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
 
 static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
 {
+	if (ctrl->pi_support)
+		return NVME_RDMA_MAX_METADATA_QUEUE_SIZE;
 	return NVME_RDMA_MAX_QUEUE_SIZE;
 }
 
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index 146dd2223a5f..d0b9941911a1 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,7 +8,8 @@
 
 #define NVME_RDMA_IP_PORT		4420
 
-#define NVME_RDMA_MAX_QUEUE_SIZE	128
+#define NVME_RDMA_MAX_QUEUE_SIZE	128
+#define NVME_RDMA_MAX_METADATA_QUEUE_SIZE	128
 
 enum nvme_rdma_cm_fmt {
 	NVME_RDMA_CM_FMT_1_0 = 0x0,
-- 
2.18.1



Thread overview: 36+ messages
2023-12-31  0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31  0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
2024-01-01  9:25   ` Sagi Grimberg
2024-01-01  9:57     ` Max Gurtovoy
2023-12-31  0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-01  9:25   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
2024-01-01  9:27   ` Sagi Grimberg
2024-01-01 10:06     ` Max Gurtovoy
2024-01-01 11:20       ` Sagi Grimberg
2024-01-01 12:34         ` Max Gurtovoy
2023-12-31  0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
2024-01-01  9:28   ` Sagi Grimberg
2024-01-01 10:10     ` Max Gurtovoy
2024-01-01 11:20       ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-01  9:31   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-01  9:31   ` Sagi Grimberg
2023-12-31  0:52 ` Max Gurtovoy [this message]
2024-01-01  9:34   ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Sagi Grimberg
2024-01-01 10:57     ` Max Gurtovoy
2024-01-01 11:21       ` Sagi Grimberg
2024-01-03 22:37         ` Max Gurtovoy
2024-01-04  8:23           ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-01  9:35   ` Sagi Grimberg
2024-01-01 17:22     ` Max Gurtovoy
2024-01-02  7:59       ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-01  9:37   ` Sagi Grimberg
2024-01-02  2:07   ` Guixin Liu
2023-12-31  0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-01  9:39   ` Sagi Grimberg
2024-01-03 22:42     ` Max Gurtovoy
2024-01-04  8:27       ` Sagi Grimberg
