From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
kbusch@kernel.org, hch@lst.de, linux-nvme@lists.infradead.org
Cc: kanie@linux.alibaba.com, oren@nvidia.com, israelr@nvidia.com,
kch@nvidia.com
Subject: Re: [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap
Date: Mon, 1 Jan 2024 19:22:10 +0200 [thread overview]
Message-ID: <1633aed8-47d3-4998-b2a8-394783adf899@nvidia.com> (raw)
In-Reply-To: <764288fe-0a62-4c22-9f10-307d5a156239@grimberg.me>
On 01/01/2024 11:35, Sagi Grimberg wrote:
>
>> If a controller is configured with metadata support, clamp the maximal
>> queue size to be 128 since there are more resources that are needed
>> for metadata operations. Otherwise, clamp it to 256.
>
> Does the qp allocation succeed or fail if attempting to create a
> 256-entry queue with metadata?
It succeeds (tested on ConnectX-4 and ConnectX-6 Dx), but we allocate a lot
of metadata resources, so scalability would be lower. I don't see a reason
to allocate more than 128 entries for metadata-capable controllers.
Thread overview: 36+ messages
2023-12-31 0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31 0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
2024-01-01 9:25 ` Sagi Grimberg
2024-01-01 9:57 ` Max Gurtovoy
2023-12-31 0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-01 9:25 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
2024-01-01 9:27 ` Sagi Grimberg
2024-01-01 10:06 ` Max Gurtovoy
2024-01-01 11:20 ` Sagi Grimberg
2024-01-01 12:34 ` Max Gurtovoy
2023-12-31 0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
2024-01-01 9:28 ` Sagi Grimberg
2024-01-01 10:10 ` Max Gurtovoy
2024-01-01 11:20 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-01 9:31 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-01 9:31 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-01 9:34 ` Sagi Grimberg
2024-01-01 10:57 ` Max Gurtovoy
2024-01-01 11:21 ` Sagi Grimberg
2024-01-03 22:37 ` Max Gurtovoy
2024-01-04 8:23 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-01 9:35 ` Sagi Grimberg
2024-01-01 17:22 ` Max Gurtovoy [this message]
2024-01-02 7:59 ` Sagi Grimberg
2023-12-31 0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-01 9:37 ` Sagi Grimberg
2024-01-02 2:07 ` Guixin Liu
2023-12-31 0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-01 9:39 ` Sagi Grimberg
2024-01-03 22:42 ` Max Gurtovoy
2024-01-04 8:27 ` Sagi Grimberg