public inbox for linux-nvme@lists.infradead.org
From: Guixin Liu <kanie@linux.alibaba.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	hch@lst.de, kch@nvidia.com, axboe@kernel.dk
Cc: linux-nvme@lists.infradead.org
Subject: Re: [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size
Date: Mon, 18 Dec 2023 20:41:14 +0800	[thread overview]
Message-ID: <92139239-c9c7-4356-8bb1-dbac5ab7cbc4@linux.alibaba.com> (raw)
In-Reply-To: <b1d2b392-f6a5-4eb4-bc44-d2850c934e8c@grimberg.me>


On 2023/12/18 19:57, Sagi Grimberg wrote:
>
>> Respond with the smaller value between 1024 and the ib_device's
>> max_qp_wr as the RDMA max queue size.
>>
>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>> ---
>>   drivers/nvme/target/rdma.c | 7 ++++++-
>>   include/linux/nvme-rdma.h  | 2 ++
>>   2 files changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 8a728c5..c3884dd 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -2002,7 +2002,12 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
>>     static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_sq *nvmet_sq)
>>   {
>> -    return NVME_RDMA_MAX_QUEUE_SIZE;
>> +    struct nvmet_rdma_queue *queue =
>> +        container_of(nvmet_sq, struct nvmet_rdma_queue, nvme_sq);
>> +    int max_qp_wr = queue->dev->device->attrs.max_qp_wr;
>> +
>> +    return (u16)min_t(int, NVMET_QUEUE_SIZE,
>> +              max_qp_wr / (NVME_RDMA_SEND_WR_FACTOR + 1));
>>   }
>
> Should be folded to prev patch
OK, I will do it.
>
>>     static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
>> diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
>> index 4dd7e6f..c19858b 100644
>> --- a/include/linux/nvme-rdma.h
>> +++ b/include/linux/nvme-rdma.h
>> @@ -8,6 +8,8 @@
>>     #define NVME_RDMA_MAX_QUEUE_SIZE    128
>>   +#define NVME_RDMA_SEND_WR_FACTOR 3  /* MR, SEND, INV */
>> +
>>   enum nvme_rdma_cm_fmt {
>>       NVME_RDMA_CM_FMT_1_0 = 0x0,
>>   };


Thread overview: 9+ messages
2023-12-18 11:05 [RFC PATCH 0/3] *** use rdma device capability to limit queue size *** Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq Guixin Liu
2023-12-18 11:52   ` Sagi Grimberg
2023-12-18 11:05 ` [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:41     ` Guixin Liu [this message]
2023-12-18 11:05 ` [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:31     ` Guixin Liu
