public inbox for linux-nvme@lists.infradead.org
From: Guixin Liu <kanie@linux.alibaba.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	hch@lst.de, kch@nvidia.com, axboe@kernel.dk
Cc: linux-nvme@lists.infradead.org
Subject: Re: [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize
Date: Mon, 18 Dec 2023 20:31:32 +0800	[thread overview]
Message-ID: <3ee29484-09a3-4fc5-8738-fb1ce6f12ce5@linux.alibaba.com> (raw)
In-Reply-To: <d301da61-1076-462b-a679-4b847dd334ff@grimberg.me>


On 2023/12/18 19:57, Sagi Grimberg wrote:
>
>
> On 12/18/23 13:05, Guixin Liu wrote:
>> Currently, the host is limited to creating queues with a depth of
>> 128. To enable larger queue sizes, constrain the sqsize based on
>> the ib_device's max_qp_wr capability.
>>
>> Also remove the now-unused NVME_RDMA_MAX_QUEUE_SIZE macro.
>>
>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>> ---
>>   drivers/nvme/host/rdma.c  | 14 ++++++++------
>>   include/linux/nvme-rdma.h |  2 --
>>   2 files changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
>> index 81e2621..982f3e4 100644
>> --- a/drivers/nvme/host/rdma.c
>> +++ b/drivers/nvme/host/rdma.c
>> @@ -489,8 +489,7 @@ static int nvme_rdma_create_cq(struct ib_device *ibdev,
>>   static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
>>   {
>>       struct ib_device *ibdev;
>> -    const int send_wr_factor = 3;            /* MR, SEND, INV */
>> -    const int cq_factor = send_wr_factor + 1;    /* + RECV */
>> +    const int cq_factor = NVME_RDMA_SEND_WR_FACTOR + 1;    /* + RECV */
>>       int ret, pages_per_mr;
>>         queue->device = nvme_rdma_find_get_device(queue->cm_id);
>> @@ -508,7 +507,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
>>       if (ret)
>>           goto out_put_dev;
>> -    ret = nvme_rdma_create_qp(queue, send_wr_factor);
>> +    ret = nvme_rdma_create_qp(queue, NVME_RDMA_SEND_WR_FACTOR);
>>       if (ret)
>>           goto out_destroy_ib_cq;
>> @@ -1006,6 +1005,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>>   {
>>       int ret;
>>       bool changed;
>> +    int ib_max_qsize;
>>         ret = nvme_rdma_configure_admin_queue(ctrl, new);
>>       if (ret)
>> @@ -1030,11 +1030,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>>               ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
>>       }
>> -    if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
>> +    ib_max_qsize = ctrl->device->dev->attrs.max_qp_wr /
>> +            (NVME_RDMA_SEND_WR_FACTOR + 1);
>> +    if (ctrl->ctrl.sqsize + 1 > ib_max_qsize) {
>>           dev_warn(ctrl->ctrl.device,
>>               "ctrl sqsize %u > max queue size %u, clamping down\n",
>> -            ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
>> -        ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
>> +            ctrl->ctrl.sqsize + 1, ib_max_qsize);
>> +        ctrl->ctrl.sqsize = ib_max_qsize - 1;
>>       }
>
> This can be very large; I'm not sure why we should allow a queue of
> potentially giant depth. We should also impose a hard limit, perhaps
> aligned with the pci driver's limit.

When we run "nvme connect", the queue depth is already restricted to the
range 16..1024 by nvmf_parse_options(), so this value cannot become very
large; the maximum is 1024 in any case.




Thread overview: 9+ messages
2023-12-18 11:05 [RFC PATCH 0/3] *** use rdma device capability to limit queue size *** Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq Guixin Liu
2023-12-18 11:52   ` Sagi Grimberg
2023-12-18 11:05 ` [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:41     ` Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:31     ` Guixin Liu [this message]
