public inbox for linux-nvme@lists.infradead.org
From: Guixin Liu <kanie@linux.alibaba.com>
To: hch@lst.de, sagi@grimberg.me, kch@nvidia.com, axboe@kernel.dk
Cc: linux-nvme@lists.infradead.org
Subject: [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize
Date: Mon, 18 Dec 2023 19:05:33 +0800	[thread overview]
Message-ID: <1702897533-49685-4-git-send-email-kanie@linux.alibaba.com> (raw)
In-Reply-To: <1702897533-49685-1-git-send-email-kanie@linux.alibaba.com>

Currently, the host is limited to creating queues with a depth of at
most 128 (NVME_RDMA_MAX_QUEUE_SIZE). To enable larger queue sizes,
derive the limit from the ib_device's max_qp_wr capability instead:
each command consumes NVME_RDMA_SEND_WR_FACTOR send WRs plus one recv
WR, so the maximum usable queue size is
max_qp_wr / (NVME_RDMA_SEND_WR_FACTOR + 1).

Also remove the now-unused NVME_RDMA_MAX_QUEUE_SIZE macro.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
 drivers/nvme/host/rdma.c  | 14 ++++++++------
 include/linux/nvme-rdma.h |  2 --
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 81e2621..982f3e4 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -489,8 +489,7 @@ static int nvme_rdma_create_cq(struct ib_device *ibdev,
 static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 {
 	struct ib_device *ibdev;
-	const int send_wr_factor = 3;			/* MR, SEND, INV */
-	const int cq_factor = send_wr_factor + 1;	/* + RECV */
+	const int cq_factor = NVME_RDMA_SEND_WR_FACTOR + 1;	/* + RECV */
 	int ret, pages_per_mr;
 
 	queue->device = nvme_rdma_find_get_device(queue->cm_id);
@@ -508,7 +507,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 	if (ret)
 		goto out_put_dev;
 
-	ret = nvme_rdma_create_qp(queue, send_wr_factor);
+	ret = nvme_rdma_create_qp(queue, NVME_RDMA_SEND_WR_FACTOR);
 	if (ret)
 		goto out_destroy_ib_cq;
 
@@ -1006,6 +1005,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 {
 	int ret;
 	bool changed;
+	int ib_max_qsize;
 
 	ret = nvme_rdma_configure_admin_queue(ctrl, new);
 	if (ret)
@@ -1030,11 +1030,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
 	}
 
-	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+	ib_max_qsize = ctrl->device->dev->attrs.max_qp_wr /
+			(NVME_RDMA_SEND_WR_FACTOR + 1);
+	if (ctrl->ctrl.sqsize + 1 > ib_max_qsize) {
 		dev_warn(ctrl->ctrl.device,
 			"ctrl sqsize %u > max queue size %u, clamping down\n",
-			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
-		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+			ctrl->ctrl.sqsize + 1, ib_max_qsize);
+		ctrl->ctrl.sqsize = ib_max_qsize - 1;
 	}
 
 	if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index c19858b..67ee770 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -6,8 +6,6 @@
 #ifndef _LINUX_NVME_RDMA_H
 #define _LINUX_NVME_RDMA_H
 
-#define NVME_RDMA_MAX_QUEUE_SIZE	128
-
 #define NVME_RDMA_SEND_WR_FACTOR 3  /* MR, SEND, INV */
 
 enum nvme_rdma_cm_fmt {
-- 
1.8.3.1




Thread overview: 9+ messages
2023-12-18 11:05 [RFC PATCH 0/3] *** use rdma device capability to limit queue size *** Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq Guixin Liu
2023-12-18 11:52   ` Sagi Grimberg
2023-12-18 11:05 ` [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:41     ` Guixin Liu
2023-12-18 11:05 ` Guixin Liu [this message]
2023-12-18 11:57   ` [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize Sagi Grimberg
2023-12-18 12:31     ` Guixin Liu
