public inbox for linux-nvme@lists.infradead.org
From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
	linux-nvme@lists.infradead.org, kch@nvidia.com
Cc: oren@nvidia.com, israelr@nvidia.com
Subject: Re: [PATCH v3 0/8] Introduce new max-queue-size configuration
Date: Mon, 26 Feb 2024 17:46:59 +0200	[thread overview]
Message-ID: <a94ee121-bafa-41be-a648-ea0a1e97520c@nvidia.com> (raw)
In-Reply-To: <20240123144032.27801-1-mgurtovoy@nvidia.com>

Hi Keith,
This series was fully reviewed by Sagi and Christoph for 6.8, but it
apparently was never picked up.

Can you please take it for the 6.9 merge window and/or merge it to
nvme-6.9?

On 23/01/2024 16:40, Max Gurtovoy wrote:
> Hi Christoph/Sagi/Keith,
> This patch series is mainly for adding an interface for a user to
> configure the maximal queue size for fabrics via port configfs. Using
> this interface a user will be able to better control the system and HW
> resources.
> 
> Also, I've increased the maximal queue depth for RDMA controllers to be
> 256 after request from Guixin Liu. This new value will be valid only for
> controllers that don't support PI.
> 
> While developing this feature I've made some minor cleanups as well.
> 
> Changes from v2:
>   - collected Reviewed-by signatures (Sagi and Christoph)
>   - added local variable to simplify code in patch 6/8 (Christoph)
> 
> Changes from v1:
>   - collected Reviewed-by signatures (Sagi and Guixin Liu)
>   - removed the patches that unify fabric host and target max/min/default
>     queue size definitions (Sagi)
>   - align MQES and SQ size according to the NVMe Spec (patch 2/8)
> 
> Max Gurtovoy (8):
>    nvme-rdma: move NVME_RDMA_IP_PORT from common file
>    nvmet: compare mqes and sqsize only for IO SQ
>    nvmet: set maxcmd to be per controller
>    nvmet: set ctrl pi_support cap before initializing cap reg
>    nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition
>    nvme-rdma: clamp queue size according to ctrl cap
>    nvmet: introduce new max queue size configuration entry
>    nvmet-rdma: set max_queue_size for RDMA transport
> 
>   drivers/nvme/host/rdma.c          | 14 ++++++++++----
>   drivers/nvme/target/admin-cmd.c   |  2 +-
>   drivers/nvme/target/configfs.c    | 28 ++++++++++++++++++++++++++++
>   drivers/nvme/target/core.c        | 18 ++++++++++++++++--
>   drivers/nvme/target/discovery.c   |  2 +-
>   drivers/nvme/target/fabrics-cmd.c |  5 ++---
>   drivers/nvme/target/nvmet.h       |  6 ++++--
>   drivers/nvme/target/passthru.c    |  2 +-
>   drivers/nvme/target/rdma.c        | 10 ++++++++++
>   include/linux/nvme-rdma.h         |  6 +++++-
>   include/linux/nvme.h              |  2 --
>   11 files changed, 78 insertions(+), 17 deletions(-)
> 
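For reference, the configfs interface described in the cover letter can be exercised from user space roughly as sketched below. This is an illustrative sketch, not taken verbatim from the patches: the attribute name `param_max_queue_size`, the port number, and the addresses are assumptions based on this series and on the existing nvmet port layout.

```shell
# Sketch: cap the maximum queue size advertised by an nvmet RDMA port.
# Assumes the nvmet and nvmet-rdma modules are loaded and configfs is
# mounted at /sys/kernel/config. The attribute name param_max_queue_size
# is an assumption based on this series.
PORT=/sys/kernel/config/nvmet/ports/1

mkdir -p "$PORT"
echo rdma         > "$PORT/addr_trtype"
echo ipv4         > "$PORT/addr_adrfam"
echo 192.168.1.10 > "$PORT/addr_traddr"   # example address
echo 4420         > "$PORT/addr_trsvcid"  # standard NVMe/RDMA port

# Limit the queue depth this port will offer to connecting hosts.
echo 128 > "$PORT/param_max_queue_size"
cat "$PORT/param_max_queue_size"
```

Per patch 6/8, a host that requests a larger queue size at connect time would then have it clamped according to the controller capability (CAP.MQES) rather than failing the connection.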



Thread overview: 12+ messages
2024-01-23 14:40 [PATCH v3 0/8] Introduce new max-queue-size configuration Max Gurtovoy
2024-01-23 14:40 ` [PATCH 1/8] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-23 14:40 ` [PATCH 2/8] nvmet: compare mqes and sqsize only for IO SQ Max Gurtovoy
2024-01-23 14:40 ` [PATCH 3/8] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-23 14:40 ` [PATCH 4/8] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-23 14:40 ` [PATCH 5/8] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-23 14:40 ` [PATCH 6/8] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-24  9:03   ` Christoph Hellwig
2024-01-23 14:40 ` [PATCH 7/8] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-23 14:40 ` [PATCH 8/8] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-02-26 15:46 ` Max Gurtovoy [this message]
2024-02-26 16:38 ` [PATCH v3 0/8] Introduce new max-queue-size configuration Keith Busch

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=a94ee121-bafa-41be-a648-ea0a1e97520c@nvidia.com \
    --to=mgurtovoy@nvidia.com \
    --cc=hch@lst.de \
    --cc=israelr@nvidia.com \
    --cc=kbusch@kernel.org \
    --cc=kch@nvidia.com \
    --cc=linux-nvme@lists.infradead.org \
    --cc=oren@nvidia.com \
    --cc=sagi@grimberg.me \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox.