From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
linux-nvme@lists.infradead.org
Cc: israelr@nvidia.com, kch@nvidia.com, oren@nvidia.com
Subject: Re: [PATCH v2 0/8] Introduce new max-queue-size configuration
Date: Mon, 22 Jan 2024 14:09:18 +0200 [thread overview]
Message-ID: <d0c49fe2-98fa-49a2-93aa-36d1ecf63d5e@nvidia.com> (raw)
In-Reply-To: <20240104092549.25721-1-mgurtovoy@nvidia.com>
On 04/01/2024 11:25, Max Gurtovoy wrote:
> Hi Christoph/Sagi/Keith,
Hi Christoph/Keith,
Are we considering taking this series into nvme-6.8?
Most of it was reviewed by Sagi, and the rest is pretty trivial.
> This patch series mainly adds an interface for a user to configure the
> maximal queue size for fabrics via port configfs. Using this interface,
> a user will be able to better control the system and HW resources.
>
> Also, I've increased the maximal queue depth for RDMA controllers to
> 256 after a request from Guixin Liu. This new value will be valid only
> for controllers that don't support PI.
>
> While developing this feature I've made some minor cleanups as well.
>
> Changes from v1:
> - collected Reviewed-by signatures (Sagi and Guixin Liu)
> - removed the patches that unify fabric host and target max/min/default
> queue size definitions (Sagi)
> - align MQES and SQ size according to the NVMe Spec (patch 2/8)
>
> Max Gurtovoy (8):
> nvme-rdma: move NVME_RDMA_IP_PORT from common file
> nvmet: compare mqes and sqsize only for IO SQ
> nvmet: set maxcmd to be per controller
> nvmet: set ctrl pi_support cap before initializing cap reg
> nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
> nvme-rdma: clamp queue size according to ctrl cap
> nvmet: introduce new max queue size configuration entry
> nvmet-rdma: set max_queue_size for RDMA transport
>
> drivers/nvme/host/rdma.c | 19 ++++++++++++++-----
> drivers/nvme/target/admin-cmd.c | 2 +-
> drivers/nvme/target/configfs.c | 28 ++++++++++++++++++++++++++++
> drivers/nvme/target/core.c | 18 ++++++++++++++++--
> drivers/nvme/target/discovery.c | 2 +-
> drivers/nvme/target/fabrics-cmd.c | 5 ++---
> drivers/nvme/target/nvmet.h | 6 ++++--
> drivers/nvme/target/passthru.c | 2 +-
> drivers/nvme/target/rdma.c | 10 ++++++++++
> include/linux/nvme-rdma.h | 6 +++++-
> include/linux/nvme.h | 2 --
> 11 files changed, 82 insertions(+), 18 deletions(-)
>
Thread overview: 24+ messages
2024-01-04 9:25 [PATCH v2 0/8] Introduce new max-queue-size configuration Max Gurtovoy
2024-01-04 9:25 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
2024-01-22 13:06 ` Sagi Grimberg
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-23 8:53 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-22 13:19 ` Sagi Grimberg
2024-01-23 8:54 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-22 17:39 ` Sagi Grimberg
2024-01-23 8:54 ` Christoph Hellwig
2024-01-23 9:32 ` Max Gurtovoy
2024-01-04 9:25 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-22 17:44 ` Sagi Grimberg
2024-01-23 8:55 ` Christoph Hellwig
2024-01-04 9:25 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-22 17:44 ` Sagi Grimberg
2024-01-23 8:55 ` Christoph Hellwig
2024-01-22 12:09 ` Max Gurtovoy [this message]