From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: <kbusch@kernel.org>, <hch@lst.de>, <sagi@grimberg.me>,
<linux-nvme@lists.infradead.org>, <kch@nvidia.com>
Cc: <oren@nvidia.com>, <israelr@nvidia.com>,
Max Gurtovoy <mgurtovoy@nvidia.com>
Subject: [PATCH 3/8] nvmet: set maxcmd to be per controller
Date: Tue, 23 Jan 2024 16:40:27 +0200 [thread overview]
Message-ID: <20240123144032.27801-4-mgurtovoy@nvidia.com> (raw)
In-Reply-To: <20240123144032.27801-1-mgurtovoy@nvidia.com>
This is preparation for supporting dynamic configuration of the max queue
size for a controller. Make sure that the maxcmd field stays equal to the
MQES value (+1), as it does today.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
drivers/nvme/target/admin-cmd.c | 2 +-
drivers/nvme/target/discovery.c | 2 +-
drivers/nvme/target/nvmet.h | 2 +-
drivers/nvme/target/passthru.c | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 39cb570f833d..f5b7054a4a05 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -428,7 +428,7 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
id->cqes = (0x4 << 4) | 0x4;
/* no enforcement soft-limit for maxcmd - pick arbitrary high value */
- id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+ id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
id->nn = cpu_to_le32(NVMET_MAX_NAMESPACES);
id->mnan = cpu_to_le32(NVMET_MAX_NAMESPACES);
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index 668d257fa986..0d5014905069 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -282,7 +282,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
id->lpa = (1 << 2);
/* no enforcement soft-limit for maxcmd - pick arbitrary high value */
- id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+ id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
id->sgls = cpu_to_le32(1 << 0); /* we always support SGLs */
if (ctrl->ops->flags & NVMF_KEYED_SGLS)
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 6c8acebe1a1a..144aca2fa6ad 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -545,7 +545,7 @@ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
#define NVMET_QUEUE_SIZE 1024
#define NVMET_NR_QUEUES 128
-#define NVMET_MAX_CMD NVMET_QUEUE_SIZE
+#define NVMET_MAX_CMD(ctrl) (NVME_CAP_MQES(ctrl->cap) + 1)
/*
* Nice round number that makes a list of nsids fit into a page.
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index f2d963e1fe94..bb4a69d538fd 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -132,7 +132,7 @@ static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req)
id->sqes = min_t(__u8, ((0x6 << 4) | 0x6), id->sqes);
id->cqes = min_t(__u8, ((0x4 << 4) | 0x4), id->cqes);
- id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+ id->maxcmd = cpu_to_le16(NVMET_MAX_CMD(ctrl));
/* don't support fuse commands */
id->fuses = 0;
--
2.18.1