public inbox for linux-nvme@lists.infradead.org
From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	kbusch@kernel.org, hch@lst.de, linux-nvme@lists.infradead.org
Cc: kanie@linux.alibaba.com, oren@nvidia.com, israelr@nvidia.com,
	kch@nvidia.com
Subject: Re: [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport
Date: Thu, 4 Jan 2024 00:42:30 +0200
Message-ID: <08b1645f-b4bf-4e76-8d7d-867e88f9a240@nvidia.com>
In-Reply-To: <1493ff9c-f9f5-4c54-96c5-92eddeb85516@grimberg.me>



On 01/01/2024 11:39, Sagi Grimberg wrote:
> 
>> A new port configuration was added to set max_queue_size. Clamp user
>> configuration to RDMA transport limits.
>>
>> Increase the maximum queue size of RDMA controllers from 128 to 256
>> (the default size stays 128, the same as before).
>>
>> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
>> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>> ---
>>   drivers/nvme/target/rdma.c | 8 ++++++++
>>   include/linux/nvme-rdma.h  | 3 ++-
>>   2 files changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index f298295c0b0f..3a3686efe008 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -1943,6 +1943,14 @@ static int nvmet_rdma_add_port(struct nvmet_port *nport)
>>           nport->inline_data_size = NVMET_RDMA_MAX_INLINE_DATA_SIZE;
>>       }
>> +    if (nport->max_queue_size < 0) {
>> +        nport->max_queue_size = NVME_RDMA_DEFAULT_QUEUE_SIZE;
>> +    } else if (nport->max_queue_size > NVME_RDMA_MAX_QUEUE_SIZE) {
>> +        pr_warn("max_queue_size %u is too large, reducing to %u\n",
>> +            nport->max_queue_size, NVME_RDMA_MAX_QUEUE_SIZE);
>> +        nport->max_queue_size = NVME_RDMA_MAX_QUEUE_SIZE;
>> +    }
>> +
> 
> Not sure it's a good idea to tie the host and nvmet default values
> together.

The defaults are already tied together for RDMA, and I don't see a reason
to change that. I will keep the other fabrics default values separate, as
they are today, following your review on the other commits.
We can discuss this in a dedicated series, since it is not related to the
feature we would like to introduce here.
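
For reference, a rough stand-alone sketch of the clamping semantics this
patch adds, built around the shared limits from include/linux/nvme-rdma.h
(the 256/128 values come from the commit message above; the helper name
below is only for illustration and is not the function in the patch):

/* user-space illustration, not kernel code */
#include <stdio.h>

#define NVME_RDMA_MAX_QUEUE_SIZE	256	/* raised from 128 by this series */
#define NVME_RDMA_DEFAULT_QUEUE_SIZE	128	/* default unchanged */

/* requested < 0 means the user never configured max_queue_size for the port */
static int clamp_port_queue_size(int requested)
{
	if (requested < 0)
		return NVME_RDMA_DEFAULT_QUEUE_SIZE;
	if (requested > NVME_RDMA_MAX_QUEUE_SIZE) {
		fprintf(stderr, "max_queue_size %d is too large, reducing to %d\n",
			requested, NVME_RDMA_MAX_QUEUE_SIZE);
		return NVME_RDMA_MAX_QUEUE_SIZE;
	}
	return requested;
}

int main(void)
{
	printf("unset -> %d\n", clamp_port_queue_size(-1));	/* 128 */
	printf("512   -> %d\n", clamp_port_queue_size(512));	/* 256 */
	printf("200   -> %d\n", clamp_port_queue_size(200));	/* 200 */
	return 0;
}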



Thread overview: 36+ messages
2023-12-31  0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31  0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
2024-01-01  9:25   ` Sagi Grimberg
2024-01-01  9:57     ` Max Gurtovoy
2023-12-31  0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-01  9:25   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
2024-01-01  9:27   ` Sagi Grimberg
2024-01-01 10:06     ` Max Gurtovoy
2024-01-01 11:20       ` Sagi Grimberg
2024-01-01 12:34         ` Max Gurtovoy
2023-12-31  0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
2024-01-01  9:28   ` Sagi Grimberg
2024-01-01 10:10     ` Max Gurtovoy
2024-01-01 11:20       ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-01  9:31   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-01  9:31   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-01  9:34   ` Sagi Grimberg
2024-01-01 10:57     ` Max Gurtovoy
2024-01-01 11:21       ` Sagi Grimberg
2024-01-03 22:37         ` Max Gurtovoy
2024-01-04  8:23           ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-01  9:35   ` Sagi Grimberg
2024-01-01 17:22     ` Max Gurtovoy
2024-01-02  7:59       ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-01  9:37   ` Sagi Grimberg
2024-01-02  2:07   ` Guixin Liu
2023-12-31  0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-01  9:39   ` Sagi Grimberg
2024-01-03 22:42     ` Max Gurtovoy [this message]
2024-01-04  8:27       ` Sagi Grimberg
