From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: <kbusch@kernel.org>, <hch@lst.de>, <sagi@grimberg.me>,
<linux-nvme@lists.infradead.org>
Cc: <kanie@linux.alibaba.com>, <oren@nvidia.com>,
<israelr@nvidia.com>, <kch@nvidia.com>,
Max Gurtovoy <mgurtovoy@nvidia.com>
Subject: [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
Date: Sun, 31 Dec 2023 02:52:47 +0200
Message-ID: <20231231005249.18294-9-mgurtovoy@nvidia.com>
In-Reply-To: <20231231005249.18294-1-mgurtovoy@nvidia.com>

If a controller is configured with metadata support, clamp the maximal
queue size to 128, since metadata operations require additional
resources. Otherwise, clamp it to 256.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7c99c87688dd..a0ff406c10a9 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1029,11 +1029,20 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
 	}
 
-	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
-		dev_warn(ctrl->ctrl.device,
-			"ctrl sqsize %u > max queue size %u, clamping down\n",
-			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
-		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+	if (ctrl->ctrl.max_integrity_segments) {
+		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_METADATA_QUEUE_SIZE) {
+			dev_warn(ctrl->ctrl.device,
+				"ctrl sqsize %u > max queue size %u, clamping down\n",
+				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_METADATA_QUEUE_SIZE);
+			ctrl->ctrl.sqsize = NVME_RDMA_MAX_METADATA_QUEUE_SIZE - 1;
+		}
+	} else {
+		if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+			dev_warn(ctrl->ctrl.device,
+				"ctrl sqsize %u > max queue size %u, clamping down\n",
+				ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
+			ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+		}
 	}
 
 	if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
--
2.18.1
Thread overview: 36+ messages
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31 0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
2024-01-01 9:25 ` Sagi Grimberg
2024-01-01 9:57 ` Max Gurtovoy
2023-12-31 0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-01 9:25 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
2024-01-01 9:27 ` Sagi Grimberg
2024-01-01 10:06 ` Max Gurtovoy
2024-01-01 11:20 ` Sagi Grimberg
2024-01-01 12:34 ` Max Gurtovoy
2023-12-31 0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
2024-01-01 9:28 ` Sagi Grimberg
2024-01-01 10:10 ` Max Gurtovoy
2024-01-01 11:20 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-01 9:31 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-01 9:31 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-01 9:34 ` Sagi Grimberg
2024-01-01 10:57 ` Max Gurtovoy
2024-01-01 11:21 ` Sagi Grimberg
2024-01-03 22:37 ` Max Gurtovoy
2024-01-04 8:23 ` Sagi Grimberg
2023-12-31 0:52 ` Max Gurtovoy [this message]
2024-01-01 9:35 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Sagi Grimberg
2024-01-01 17:22 ` Max Gurtovoy
2024-01-02 7:59 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-01 9:37 ` Sagi Grimberg
2024-01-02 2:07 ` Guixin Liu
2023-12-31 0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-01 9:39 ` Sagi Grimberg
2024-01-03 22:42 ` Max Gurtovoy
2024-01-04 8:27 ` Sagi Grimberg