From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: <kbusch@kernel.org>, <hch@lst.de>, <sagi@grimberg.me>,
<linux-nvme@lists.infradead.org>
Cc: <israelr@nvidia.com>, <kch@nvidia.com>, <oren@nvidia.com>,
Max Gurtovoy <mgurtovoy@nvidia.com>
Subject: [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap
Date: Thu, 4 Jan 2024 11:25:47 +0200 [thread overview]
Message-ID: <20240104092549.25721-7-mgurtovoy@nvidia.com> (raw)
In-Reply-To: <20240104092549.25721-1-mgurtovoy@nvidia.com>

If a controller is configured with metadata support, clamp the maximum
queue size to 128, since metadata operations require additional
resources. Otherwise, clamp it to 256.

Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index bc90ec3c51b0..d81a7148fbc5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1029,11 +1029,20 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
}
- if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
- dev_warn(ctrl->ctrl.device,
- "ctrl sqsize %u > max queue size %u, clamping down\n",
- ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
- ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+ if (ctrl->ctrl.max_integrity_segments) {
+ if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_METADATA_QUEUE_SIZE) {
+ dev_warn(ctrl->ctrl.device,
+ "ctrl sqsize %u > max queue size %u, clamping down\n",
+ ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_METADATA_QUEUE_SIZE);
+ ctrl->ctrl.sqsize = NVME_RDMA_MAX_METADATA_QUEUE_SIZE - 1;
+ }
+ } else {
+ if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+ dev_warn(ctrl->ctrl.device,
+ "ctrl sqsize %u > max queue size %u, clamping down\n",
+ ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
+ ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+ }
}
if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
--
2.18.1
Thread overview: 26+ messages
2024-01-04 9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
2024-01-04 9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
2024-01-22 13:06 ` Sagi Grimberg
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-22 13:19 ` Sagi Grimberg
2024-01-23 8:54 ` Christoph Hellwig
2024-01-04 9:25 ` Max Gurtovoy [this message]
2024-01-22 17:39 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Sagi Grimberg
2024-01-23 8:54 ` Christoph Hellwig
2024-01-23 9:32 ` Max Gurtovoy
2024-01-04 9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-22 17:44 ` Sagi Grimberg
2024-01-23 8:55 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-22 17:44 ` Sagi Grimberg
2024-01-23 8:55 ` Christoph Hellwig
2024-01-22 12:09 ` [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
-- strict thread matches above, loose matches on Subject: below --
2024-01-23 14:40 [PATCH v3 " Max Gurtovoy
2024-01-23 14:40 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-24 9:03 ` Christoph Hellwig