public inbox for linux-nvme@lists.infradead.org
From: Guixin Liu <kanie@linux.alibaba.com>
To: Max Gurtovoy <mgurtovoy@nvidia.com>,
	kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
	linux-nvme@lists.infradead.org
Cc: oren@nvidia.com, israelr@nvidia.com, kch@nvidia.com
Subject: Re: [PATCH 09/10] nvmet: introduce new max queue size configuration entry
Date: Tue, 2 Jan 2024 10:07:26 +0800
Message-ID: <2bd8a6db-43de-4e2b-8ac2-2af1732ab50b@linux.alibaba.com>
In-Reply-To: <20231231005249.18294-10-mgurtovoy@nvidia.com>

Nice, I have made this configurable too.

Reviewed-by: Guixin Liu <kanie@linux.alibaba.com>
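
One note for anyone trying this out: the attribute has to be written
before the port is enabled (the store handler returns -EACCES
otherwise). As a side illustration, here is a small standalone sketch
of the resulting clamp behavior in nvmet_enable_port() - my own
distillation for discussion, not the patch code; the constants assume
the NVMF_MIN_QUEUE_SIZE/NVMF_MAX_QUEUE_SIZE values from the fabrics
headers (16 and 1024 at the time of writing):

    #include <stdio.h>

    #define NVMF_MIN_QUEUE_SIZE 16
    #define NVMF_MAX_QUEUE_SIZE 1024

    /* Mirrors the clamp in nvmet_enable_port(): a negative value means
     * "let the transport choose", which falls back to the global max;
     * anything else is clamped into the global fabrics limits. */
    static int effective_queue_size(int configured)
    {
        if (configured < 0)
            return NVMF_MAX_QUEUE_SIZE;
        if (configured < NVMF_MIN_QUEUE_SIZE)
            return NVMF_MIN_QUEUE_SIZE;
        if (configured > NVMF_MAX_QUEUE_SIZE)
            return NVMF_MAX_QUEUE_SIZE;
        return configured;
    }

    int main(void)
    {
        /* -1 -> 1024 (unset), 4 -> 16, 512 -> 512, 65536 -> 1024 */
        printf("%d %d %d %d\n",
               effective_queue_size(-1), effective_queue_size(4),
               effective_queue_size(512), effective_queue_size(65536));
        return 0;
    }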

On 2023/12/31 08:52, Max Gurtovoy wrote:
> Using this port configuration, one will be able to set the maximum queue
> size to be used for any controller associated with the configured port.
>
> The default value remains 1024, but each transport will be able to set
> its own value before enabling the port.
>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>   drivers/nvme/target/configfs.c | 28 ++++++++++++++++++++++++++++
>   drivers/nvme/target/core.c     | 16 ++++++++++++++--
>   drivers/nvme/target/nvmet.h    |  1 +
>   3 files changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
> index bd514d4c4a5b..f8df2ef715ba 100644
> --- a/drivers/nvme/target/configfs.c
> +++ b/drivers/nvme/target/configfs.c
> @@ -272,6 +272,32 @@ static ssize_t nvmet_param_inline_data_size_store(struct config_item *item,
>   
>   CONFIGFS_ATTR(nvmet_, param_inline_data_size);
>   
> +static ssize_t nvmet_param_max_queue_size_show(struct config_item *item,
> +		char *page)
> +{
> +	struct nvmet_port *port = to_nvmet_port(item);
> +
> +	return snprintf(page, PAGE_SIZE, "%d\n", port->max_queue_size);
> +}
> +
> +static ssize_t nvmet_param_max_queue_size_store(struct config_item *item,
> +		const char *page, size_t count)
> +{
> +	struct nvmet_port *port = to_nvmet_port(item);
> +	int ret;
> +
> +	if (nvmet_is_port_enabled(port, __func__))
> +		return -EACCES;
> +	ret = kstrtoint(page, 0, &port->max_queue_size);
> +	if (ret) {
> +		pr_err("Invalid value '%s' for max_queue_size\n", page);
> +		return -EINVAL;
> +	}
> +	return count;
> +}
> +
> +CONFIGFS_ATTR(nvmet_, param_max_queue_size);
> +
>   #ifdef CONFIG_BLK_DEV_INTEGRITY
>   static ssize_t nvmet_param_pi_enable_show(struct config_item *item,
>   		char *page)
> @@ -1856,6 +1882,7 @@ static struct configfs_attribute *nvmet_port_attrs[] = {
>   	&nvmet_attr_addr_trtype,
>   	&nvmet_attr_addr_tsas,
>   	&nvmet_attr_param_inline_data_size,
> +	&nvmet_attr_param_max_queue_size,
>   #ifdef CONFIG_BLK_DEV_INTEGRITY
>   	&nvmet_attr_param_pi_enable,
>   #endif
> @@ -1914,6 +1941,7 @@ static struct config_group *nvmet_ports_make(struct config_group *group,
>   	INIT_LIST_HEAD(&port->subsystems);
>   	INIT_LIST_HEAD(&port->referrals);
>   	port->inline_data_size = -1;	/* < 0 == let the transport choose */
> +	port->max_queue_size = -1;	/* < 0 == let the transport choose */
>   
>   	port->disc_addr.portid = cpu_to_le16(portid);
>   	port->disc_addr.adrfam = NVMF_ADDR_FAMILY_MAX;
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index f08997f58101..f7d82da4c1bc 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -358,6 +358,17 @@ int nvmet_enable_port(struct nvmet_port *port)
>   	if (port->inline_data_size < 0)
>   		port->inline_data_size = 0;
>   
> +	/*
> +	 * If the transport didn't set the max_queue_size properly, then clamp
> +	 * it to the global fabrics limits.
> +	 */
> +	if (port->max_queue_size < 0)
> +		port->max_queue_size = NVMF_MAX_QUEUE_SIZE;
> +	else
> +		port->max_queue_size = clamp_t(int, port->max_queue_size,
> +					       NVMF_MIN_QUEUE_SIZE,
> +					       NVMF_MAX_QUEUE_SIZE);
> +
>   	port->enabled = true;
>   	port->tr_ops = ops;
>   	return 0;
> @@ -1223,9 +1234,10 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
>   	ctrl->cap |= (15ULL << 24);
>   	/* maximum queue entries supported: */
>   	if (ctrl->ops->get_max_queue_size)
> -		ctrl->cap |= ctrl->ops->get_max_queue_size(ctrl) - 1;
> +		ctrl->cap |= min_t(u16, ctrl->ops->get_max_queue_size(ctrl),
> +				   ctrl->port->max_queue_size) - 1;
>   	else
> -		ctrl->cap |= NVMF_MAX_QUEUE_SIZE - 1;
> +		ctrl->cap |= ctrl->port->max_queue_size - 1;
>   
>   	if (nvmet_is_passthru_subsys(ctrl->subsys))
>   		nvmet_passthrough_override_cap(ctrl);
> diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
> index a76f816edf1d..ab6459441376 100644
> --- a/drivers/nvme/target/nvmet.h
> +++ b/drivers/nvme/target/nvmet.h
> @@ -163,6 +163,7 @@ struct nvmet_port {
>   	void				*priv;
>   	bool				enabled;
>   	int				inline_data_size;
> +	int				max_queue_size;
>   	const struct nvmet_fabrics_ops	*tr_ops;
>   	bool				pi_enable;
>   };
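
The CAP.MQES interaction also looks right to me. For reference, a
minimal sketch of what nvmet_init_cap() now reports (again my own
distillation, not the patch code): MQES is a zero-based field, so a
port limit of 1024 entries is advertised as 1023, and a transport
that reports a smaller maximum still wins:

    #include <stdint.h>
    #include <stdio.h>

    /* transport_max <= 0 stands in for a transport without a
     * get_max_queue_size() callback. */
    static uint16_t cap_mqes(int transport_max, int port_max)
    {
        if (transport_max > 0)
            return (uint16_t)((transport_max < port_max ?
                               transport_max : port_max) - 1);
        return (uint16_t)(port_max - 1);
    }

    int main(void)
    {
        printf("%u\n", cap_mqes(0, 1024));   /* 1023 */
        printf("%u\n", cap_mqes(128, 1024)); /* 127: transport limit wins */
        printf("%u\n", cap_mqes(4096, 512)); /* 511: port limit wins */
        return 0;
    }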


Thread overview: 36+ messages
2023-12-31  0:52 [PATCH v1 00/10] Introduce new max-queue-size configuration Max Gurtovoy
2023-12-31  0:52 ` [PATCH 01/10] nvme: remove unused definition Max Gurtovoy
2024-01-01  9:25   ` Sagi Grimberg
2024-01-01  9:57     ` Max Gurtovoy
2023-12-31  0:52 ` [PATCH 02/10] nvme-rdma: move NVME_RDMA_IP_PORT from common file Max Gurtovoy
2024-01-01  9:25   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 03/10] nvme-fabrics: move queue size definitions to common header Max Gurtovoy
2024-01-01  9:27   ` Sagi Grimberg
2024-01-01 10:06     ` Max Gurtovoy
2024-01-01 11:20       ` Sagi Grimberg
2024-01-01 12:34         ` Max Gurtovoy
2023-12-31  0:52 ` [PATCH 04/10] nvmet: remove NVMET_QUEUE_SIZE definition Max Gurtovoy
2024-01-01  9:28   ` Sagi Grimberg
2024-01-01 10:10     ` Max Gurtovoy
2024-01-01 11:20       ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 05/10] nvmet: set maxcmd to be per controller Max Gurtovoy
2024-01-01  9:31   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 06/10] nvmet: set ctrl pi_support cap before initializing cap reg Max Gurtovoy
2024-01-01  9:31   ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 07/10] nvme-rdma: introduce NVME_RDMA_MAX_METADATA_QUEUE_SIZE definition Max Gurtovoy
2024-01-01  9:34   ` Sagi Grimberg
2024-01-01 10:57     ` Max Gurtovoy
2024-01-01 11:21       ` Sagi Grimberg
2024-01-03 22:37         ` Max Gurtovoy
2024-01-04  8:23           ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 08/10] nvme-rdma: clamp queue size according to ctrl cap Max Gurtovoy
2024-01-01  9:35   ` Sagi Grimberg
2024-01-01 17:22     ` Max Gurtovoy
2024-01-02  7:59       ` Sagi Grimberg
2023-12-31  0:52 ` [PATCH 09/10] nvmet: introduce new max queue size configuration entry Max Gurtovoy
2024-01-01  9:37   ` Sagi Grimberg
2024-01-02  2:07   ` Guixin Liu [this message]
2023-12-31  0:52 ` [PATCH 10/10] nvmet-rdma: set max_queue_size for RDMA transport Max Gurtovoy
2024-01-01  9:39   ` Sagi Grimberg
2024-01-03 22:42     ` Max Gurtovoy
2024-01-04  8:27       ` Sagi Grimberg
