public inbox for linux-nvme@lists.infradead.org
* [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems
@ 2025-09-24 20:26 Max Gurtovoy
  2025-09-24 20:26 ` [PATCH 1/4] nvmet: forbid changing ctrl ID attributes " Max Gurtovoy
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-24 20:26 UTC (permalink / raw)
  To: hch, linux-nvme, kbusch, sagi, kch; +Cc: dwagner, israelr, Max Gurtovoy

Hello,
This patch series addresses issues in the NVMe target configfs attribute
handling to ensure subsystem configuration consistency and prevent races
or invalid states once a subsystem has been discovered by a host.

The main goals of this series are:

1. Forbid changes to the controller ID min/max attribute values on already
   discovered subsystems.
2. Switch cntlid ida allocation from global to per-subsystem scope,
   matching the granularity of controller ID ranges.
3. Forbid changes to the vendor ID and subsystem vendor ID attribute values
   on already discovered subsystems.
4. Forbid changes to max_qid attribute values on already discovered
   subsystems.

This improves consistency by ensuring that user-space configuration
updates do not conflict with controller objects already instantiated in
the kernel.

Max Gurtovoy (4):
  nvmet: forbid changing ctrl ID attributes for discovered subsystems
  nvmet: make cntlid ida per subsystem
  nvmet: prevent max_qid changes for discovered subsystems
  nvmet: prevent vid/ssvid changes for discovered subsystems

 drivers/nvme/target/admin-cmd.c |   6 --
 drivers/nvme/target/configfs.c  | 140 ++++++++++++++++++++++++++------
 drivers/nvme/target/core.c      |  16 ++--
 drivers/nvme/target/nvmet.h     |   1 +
 4 files changed, 125 insertions(+), 38 deletions(-)

-- 
2.18.1



^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 1/4] nvmet: forbid changing ctrl ID attributes for discovered subsystems
  2025-09-24 20:26 [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems Max Gurtovoy
@ 2025-09-24 20:26 ` Max Gurtovoy
  2025-09-24 20:26 ` [PATCH 2/4] nvmet: make cntlid ida per subsystem Max Gurtovoy
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-24 20:26 UTC (permalink / raw)
  To: hch, linux-nvme, kbusch, sagi, kch; +Cc: dwagner, israelr, Max Gurtovoy

Controller identifiers are dynamically assigned when an NVMe host
connects to a target. The minimum and maximum allowed controller ID
values for a subsystem are configurable via configfs. Do not allow
changes to these attributes after a subsystem has already been
discovered to prevent invalid configuration.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/admin-cmd.c |  6 ----
 drivers/nvme/target/configfs.c  | 63 ++++++++++++++++++++++++---------
 drivers/nvme/target/core.c      |  6 ++++
 3 files changed, 53 insertions(+), 22 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 3e378153a781..72c741a95ac8 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -654,12 +654,6 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
 	u32 cmd_capsule_size, ctratt;
 	u16 status = 0;
 
-	if (!subsys->subsys_discovered) {
-		mutex_lock(&subsys->lock);
-		subsys->subsys_discovered = true;
-		mutex_unlock(&subsys->lock);
-	}
-
 	id = kzalloc(sizeof(*id), GFP_KERNEL);
 	if (!id) {
 		status = NVME_SC_INTERNAL;
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index e44ef69dffc2..979eb184756a 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -1348,10 +1348,30 @@ static ssize_t nvmet_subsys_attr_cntlid_min_show(struct config_item *item,
 	return snprintf(page, PAGE_SIZE, "%u\n", to_subsys(item)->cntlid_min);
 }
 
+static ssize_t
+nvmet_subsys_attr_cntlid_min_store_locked(struct nvmet_subsys *subsys,
+		u16 cntlid_min, size_t cnt)
+{
+
+	if (subsys->subsys_discovered) {
+		pr_err("Can't set minimal cntlid. %u is already assigned\n",
+		       subsys->cntlid_min);
+		return -EINVAL;
+	}
+
+	if (cntlid_min > subsys->cntlid_max)
+		return -EINVAL;
+
+	subsys->cntlid_min = cntlid_min;
+	return cnt;
+}
+
 static ssize_t nvmet_subsys_attr_cntlid_min_store(struct config_item *item,
 						  const char *page, size_t cnt)
 {
+	struct nvmet_subsys *subsys = to_subsys(item);
 	u16 cntlid_min;
+	ssize_t ret;
 
 	if (sscanf(page, "%hu\n", &cntlid_min) != 1)
 		return -EINVAL;
@@ -1360,15 +1380,11 @@ static ssize_t nvmet_subsys_attr_cntlid_min_store(struct config_item *item,
 		return -EINVAL;
 
 	down_write(&nvmet_config_sem);
-	if (cntlid_min > to_subsys(item)->cntlid_max)
-		goto out_unlock;
-	to_subsys(item)->cntlid_min = cntlid_min;
-	up_write(&nvmet_config_sem);
-	return cnt;
-
-out_unlock:
+	mutex_lock(&subsys->lock);
+	ret = nvmet_subsys_attr_cntlid_min_store_locked(subsys, cntlid_min, cnt);
+	mutex_unlock(&subsys->lock);
 	up_write(&nvmet_config_sem);
-	return -EINVAL;
+	return ret;
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_cntlid_min);
 
@@ -1378,10 +1394,29 @@ static ssize_t nvmet_subsys_attr_cntlid_max_show(struct config_item *item,
 	return snprintf(page, PAGE_SIZE, "%u\n", to_subsys(item)->cntlid_max);
 }
 
+static ssize_t
+nvmet_subsys_attr_cntlid_max_store_locked(struct nvmet_subsys *subsys,
+		u16 cntlid_max, size_t cnt)
+{
+
+	if (subsys->subsys_discovered) {
+		pr_err("Can't set maximal cntlid. %u is already assigned\n",
+		       subsys->cntlid_max);
+		return -EINVAL;
+	}
+
+	if (cntlid_max < subsys->cntlid_min)
+		return -EINVAL;
+	subsys->cntlid_max = cntlid_max;
+	return cnt;
+}
+
 static ssize_t nvmet_subsys_attr_cntlid_max_store(struct config_item *item,
 						  const char *page, size_t cnt)
 {
+	struct nvmet_subsys *subsys = to_subsys(item);
 	u16 cntlid_max;
+	ssize_t ret;
 
 	if (sscanf(page, "%hu\n", &cntlid_max) != 1)
 		return -EINVAL;
@@ -1390,15 +1425,11 @@ static ssize_t nvmet_subsys_attr_cntlid_max_store(struct config_item *item,
 		return -EINVAL;
 
 	down_write(&nvmet_config_sem);
-	if (cntlid_max < to_subsys(item)->cntlid_min)
-		goto out_unlock;
-	to_subsys(item)->cntlid_max = cntlid_max;
-	up_write(&nvmet_config_sem);
-	return cnt;
-
-out_unlock:
+	mutex_lock(&subsys->lock);
+	ret = nvmet_subsys_attr_cntlid_max_store_locked(subsys, cntlid_max, cnt);
+	mutex_unlock(&subsys->lock);
 	up_write(&nvmet_config_sem);
-	return -EINVAL;
+	return ret;
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_cntlid_max);
 
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 099a05409ac5..20e7b3d6a810 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1667,6 +1667,12 @@ struct nvmet_ctrl *nvmet_alloc_ctrl(struct nvmet_alloc_ctrl_args *args)
 	if (!ctrl->changed_ns_list)
 		goto out_free_ctrl;
 
+	if (!subsys->subsys_discovered) {
+		mutex_lock(&subsys->lock);
+		subsys->subsys_discovered = true;
+		mutex_unlock(&subsys->lock);
+	}
+
 	ctrl->sqs = kcalloc(subsys->max_qid + 1,
 			sizeof(struct nvmet_sq *),
 			GFP_KERNEL);
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 2/4] nvmet: make cntlid ida per subsystem
  2025-09-24 20:26 [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems Max Gurtovoy
  2025-09-24 20:26 ` [PATCH 1/4] nvmet: forbid changing ctrl ID attributes " Max Gurtovoy
@ 2025-09-24 20:26 ` Max Gurtovoy
  2025-09-24 20:26 ` [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems Max Gurtovoy
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-24 20:26 UTC (permalink / raw)
  To: hch, linux-nvme, kbusch, sagi, kch; +Cc: dwagner, israelr, Max Gurtovoy

Commit 15fbad96fc5f ("nvmet: Make cntlid globally unique") moved the
cntlid ida allocator to global scope. This worked until commit
94a39d61f80f ("nvmet: make ctrl-id configurable") introduced subsystem
specific cntlid_min and cntlid_max configfs attributes.
Since controller ID ranges are now per subsystem, the ida should also be
per subsystem. Fix that to ensure controller IDs are managed correctly
per subsystem.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/core.c  | 10 +++++-----
 drivers/nvme/target/nvmet.h |  1 +
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 20e7b3d6a810..b0bdb20132ab 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -22,7 +22,6 @@ struct kmem_cache *nvmet_bvec_cache;
 struct workqueue_struct *buffered_io_wq;
 struct workqueue_struct *zbd_wq;
 static LIST_HEAD(nvmet_transports);
-static DEFINE_IDA(cntlid_ida);
 
 struct workqueue_struct *nvmet_wq;
 EXPORT_SYMBOL_GPL(nvmet_wq);
@@ -1684,7 +1683,7 @@ struct nvmet_ctrl *nvmet_alloc_ctrl(struct nvmet_alloc_ctrl_args *args)
 	if (!ctrl->cqs)
 		goto out_free_sqs;
 
-	ret = ida_alloc_range(&cntlid_ida,
+	ret = ida_alloc_range(&subsys->cntlid_ida,
 			     subsys->cntlid_min, subsys->cntlid_max,
 			     GFP_KERNEL);
 	if (ret < 0) {
@@ -1747,7 +1746,7 @@ struct nvmet_ctrl *nvmet_alloc_ctrl(struct nvmet_alloc_ctrl_args *args)
 init_pr_fail:
 	mutex_unlock(&subsys->lock);
 	nvmet_stop_keep_alive_timer(ctrl);
-	ida_free(&cntlid_ida, ctrl->cntlid);
+	ida_free(&subsys->cntlid_ida, ctrl->cntlid);
 out_free_cqs:
 	kfree(ctrl->cqs);
 out_free_sqs:
@@ -1782,7 +1781,7 @@ static void nvmet_ctrl_free(struct kref *ref)
 
 	nvmet_debugfs_ctrl_free(ctrl);
 
-	ida_free(&cntlid_ida, ctrl->cntlid);
+	ida_free(&subsys->cntlid_ida, ctrl->cntlid);
 
 	nvmet_async_events_free(ctrl);
 	kfree(ctrl->sqs);
@@ -1911,6 +1910,7 @@ struct nvmet_subsys *nvmet_subsys_alloc(const char *subsysnqn,
 	xa_init(&subsys->namespaces);
 	INIT_LIST_HEAD(&subsys->ctrls);
 	INIT_LIST_HEAD(&subsys->hosts);
+	ida_init(&subsys->cntlid_ida);
 
 	ret = nvmet_debugfs_subsys_setup(subsys);
 	if (ret)
@@ -1940,6 +1940,7 @@ static void nvmet_subsys_free(struct kref *ref)
 
 	nvmet_debugfs_subsys_free(subsys);
 
+	ida_destroy(&subsys->cntlid_ida);
 	xa_destroy(&subsys->namespaces);
 	nvmet_passthru_subsys_free(subsys);
 
@@ -2024,7 +2025,6 @@ static void __exit nvmet_exit(void)
 	nvmet_exit_configfs();
 	nvmet_exit_discovery();
 	nvmet_exit_debugfs();
-	ida_destroy(&cntlid_ida);
 	destroy_workqueue(nvmet_wq);
 	destroy_workqueue(buffered_io_wq);
 	destroy_workqueue(zbd_wq);
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index e5a4571199aa..9245fe4f6cae 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -322,6 +322,7 @@ struct nvmet_subsys {
 	u32			max_nsid;
 	u16			cntlid_min;
 	u16			cntlid_max;
+	struct ida		cntlid_ida;
 
 	struct list_head	ctrls;
 
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-24 20:26 [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems Max Gurtovoy
  2025-09-24 20:26 ` [PATCH 1/4] nvmet: forbid changing ctrl ID attributes " Max Gurtovoy
  2025-09-24 20:26 ` [PATCH 2/4] nvmet: make cntlid ida per subsystem Max Gurtovoy
@ 2025-09-24 20:26 ` Max Gurtovoy
  2025-09-25  7:36   ` Daniel Wagner
  2025-09-24 20:26 ` [PATCH 4/4] nvmet: prevent vid/ssvid " Max Gurtovoy
  2025-09-24 22:13 ` [PATCH 0/4] nvmet: fix configfs attr update handling " Keith Busch
  4 siblings, 1 reply; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-24 20:26 UTC (permalink / raw)
  To: hch, linux-nvme, kbusch, sagi, kch; +Cc: dwagner, israelr, Max Gurtovoy

Disallow updates to the max_qid attribute via configfs after a
subsystem has been discovered. This prevents invalid configuration
and avoids races during controller setup. The maximal queue
identifier can now only be set on non-discovered subsystems,
ensuring consistent configuration state.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/configfs.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index 979eb184756a..ac12326e036c 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -1673,12 +1673,27 @@ static ssize_t nvmet_subsys_attr_qid_max_show(struct config_item *item,
 	return snprintf(page, PAGE_SIZE, "%u\n", to_subsys(item)->max_qid);
 }
 
+static ssize_t
+nvmet_subsys_attr_qid_max_store_locked(struct nvmet_subsys *subsys,
+		u16 qid_max, size_t cnt)
+{
+	if (subsys->subsys_discovered) {
+		pr_err("Can't set maximal qid. %u is already assigned\n",
+		       subsys->max_qid);
+		return -EINVAL;
+	}
+
+	subsys->max_qid = qid_max;
+
+	return cnt;
+}
+
 static ssize_t nvmet_subsys_attr_qid_max_store(struct config_item *item,
 					       const char *page, size_t cnt)
 {
 	struct nvmet_subsys *subsys = to_subsys(item);
-	struct nvmet_ctrl *ctrl;
 	u16 qid_max;
+	ssize_t ret;
 
 	if (sscanf(page, "%hu\n", &qid_max) != 1)
 		return -EINVAL;
@@ -1687,14 +1702,12 @@ static ssize_t nvmet_subsys_attr_qid_max_store(struct config_item *item,
 		return -EINVAL;
 
 	down_write(&nvmet_config_sem);
-	subsys->max_qid = qid_max;
-
-	/* Force reconnect */
-	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
-		ctrl->ops->delete_ctrl(ctrl);
+	mutex_lock(&subsys->lock);
+	ret = nvmet_subsys_attr_qid_max_store_locked(subsys, qid_max, cnt);
+	mutex_unlock(&subsys->lock);
 	up_write(&nvmet_config_sem);
 
-	return cnt;
+	return ret;
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_qid_max);
 
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 4/4] nvmet: prevent vid/ssvid changes for discovered subsystems
  2025-09-24 20:26 [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems Max Gurtovoy
                   ` (2 preceding siblings ...)
  2025-09-24 20:26 ` [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems Max Gurtovoy
@ 2025-09-24 20:26 ` Max Gurtovoy
  2025-09-24 22:13 ` [PATCH 0/4] nvmet: fix configfs attr update handling " Keith Busch
  4 siblings, 0 replies; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-24 20:26 UTC (permalink / raw)
  To: hch, linux-nvme, kbusch, sagi, kch; +Cc: dwagner, israelr, Max Gurtovoy

Disallow updates to the vendor_id and subsys_vendor_id attributes via
configfs after a subsystem has been discovered. This prevents invalid
configuration.
These attributes can now only be set on non-discovered subsystems,
ensuring consistent configuration state.

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/nvme/target/configfs.c | 50 +++++++++++++++++++++++++++++++---
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index ac12326e036c..dab10825ca0b 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -1439,18 +1439,38 @@ static ssize_t nvmet_subsys_attr_vendor_id_show(struct config_item *item,
 	return snprintf(page, PAGE_SIZE, "0x%x\n", to_subsys(item)->vendor_id);
 }
 
+static ssize_t
+nvmet_subsys_attr_vendor_id_store_locked(struct nvmet_subsys *subsys,
+		u16 vid, size_t count)
+{
+	if (subsys->subsys_discovered) {
+		pr_err("Can't set vendor id. %u is already assigned\n",
+		       subsys->vendor_id);
+		return -EINVAL;
+	}
+
+	subsys->vendor_id = vid;
+
+	return count;
+}
+
 static ssize_t nvmet_subsys_attr_vendor_id_store(struct config_item *item,
 		const char *page, size_t count)
 {
+	struct nvmet_subsys *subsys = to_subsys(item);
 	u16 vid;
+	ssize_t ret;
 
 	if (kstrtou16(page, 0, &vid))
 		return -EINVAL;
 
 	down_write(&nvmet_config_sem);
-	to_subsys(item)->vendor_id = vid;
+	mutex_lock(&subsys->lock);
+	ret = nvmet_subsys_attr_vendor_id_store_locked(subsys, vid, count);
+	mutex_unlock(&subsys->lock);
 	up_write(&nvmet_config_sem);
-	return count;
+
+	return ret;
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_vendor_id);
 
@@ -1461,18 +1481,40 @@ static ssize_t nvmet_subsys_attr_subsys_vendor_id_show(struct config_item *item,
 			to_subsys(item)->subsys_vendor_id);
 }
 
+static ssize_t
+nvmet_subsys_attr_subsys_vendor_id_store_locked(struct nvmet_subsys *subsys,
+		u16 ssvid, size_t count)
+{
+	if (subsys->subsys_discovered) {
+		pr_err("Can't set subsystem vendor id. %u is already assigned\n",
+		       subsys->subsys_vendor_id);
+		return -EINVAL;
+	}
+
+	subsys->subsys_vendor_id = ssvid;
+
+	return count;
+}
+
 static ssize_t nvmet_subsys_attr_subsys_vendor_id_store(struct config_item *item,
 		const char *page, size_t count)
 {
+	struct nvmet_subsys *subsys = to_subsys(item);
 	u16 ssvid;
+	ssize_t ret;
 
 	if (kstrtou16(page, 0, &ssvid))
 		return -EINVAL;
 
 	down_write(&nvmet_config_sem);
-	to_subsys(item)->subsys_vendor_id = ssvid;
+	mutex_lock(&subsys->lock);
+	ret = nvmet_subsys_attr_subsys_vendor_id_store_locked(subsys,
+							      ssvid,
+							      count);
+	mutex_unlock(&subsys->lock);
 	up_write(&nvmet_config_sem);
-	return count;
+
+	return ret;
 }
 CONFIGFS_ATTR(nvmet_subsys_, attr_subsys_vendor_id);
 
-- 
2.18.1



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems
  2025-09-24 20:26 [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems Max Gurtovoy
                   ` (3 preceding siblings ...)
  2025-09-24 20:26 ` [PATCH 4/4] nvmet: prevent vid/ssvid " Max Gurtovoy
@ 2025-09-24 22:13 ` Keith Busch
  2025-09-28 13:31   ` Max Gurtovoy
                     ` (2 more replies)
  4 siblings, 3 replies; 18+ messages in thread
From: Keith Busch @ 2025-09-24 22:13 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: hch, linux-nvme, sagi, kch, dwagner, israelr

On Wed, Sep 24, 2025 at 11:26:00PM +0300, Max Gurtovoy wrote:
> 1. Forbid changes to the controller ID min/max attribute values on already
>    discovered subsystems.
> 2. Switch cntlid ida allocation from global to per-subsystem scope,
>    matching the granularity of controller ID ranges.
> 3. Forbid changes to the vendor ID and subsystem vendor ID attribute values
>    on already discovered subsystems.
> 4. Forbid changes to max_qid attribute values on already discovered
>    subsystems.

Is there a reason these should be changeable after they're initialized
even prior to being discovered?


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-24 20:26 ` [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems Max Gurtovoy
@ 2025-09-25  7:36   ` Daniel Wagner
  2025-09-25  8:28     ` Max Gurtovoy
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Wagner @ 2025-09-25  7:36 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr

On Wed, Sep 24, 2025 at 11:26:03PM +0300, Max Gurtovoy wrote:
> Disallow updates to the max_qid attribute via configfs after a
> subsystem has been discovered. This prevents invalid configuration
> and avoids races during controller setup. The maximal queue
> identifier can now only be set on non-discovered subsystems,
> ensuring consistent configuration state.

IIUC, this change will break an existing blktests test case which checks
whether the max queue count changes on reconnect. This took a while to
fix in the host and is the very reason the disconnect call is buried in
the max queue update: blktests nvme/048

The scenario where this happens is a multi-node target where one node
after the other gets a new firmware. For some reason the max queue count
changes in the new firmware. When the host fails over to an updated
node, the host can't blindly reuse the old queue max:

555f66d0f8a3 ("nvme-fc: update hardware queues before using them")

So if we find a way to keep this test scenario alive I don't mind. But
as I said I think this will break it.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-25  7:36   ` Daniel Wagner
@ 2025-09-25  8:28     ` Max Gurtovoy
  2025-09-25 11:32       ` Daniel Wagner
  0 siblings, 1 reply; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-25  8:28 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr


On 25/09/2025 10:36, Daniel Wagner wrote:
> On Wed, Sep 24, 2025 at 11:26:03PM +0300, Max Gurtovoy wrote:
>> Disallow updates to the max_qid attribute via configfs after a
>> subsystem has been discovered. This prevents invalid configuration
>> and avoids races during controller setup. The maximal queue
>> identifier can now only be set on non-discovered subsystems,
>> ensuring consistent configuration state.
> IIUC, this change will break an existing blktests test case which checks
> whether the max queue count changes on reconnect. This took a while to
> fix in the host and is the very reason the disconnect call is buried in
> the max queue update: blktests nvme/048
>
> The scenario where this happens is a multi-node target where one node
> after the other gets a new firmware. For some reason the max queue count
> changes in the new firmware. When the host fails over to an updated
> node, the host can't blindly reuse the old queue max:
>
> 555f66d0f8a3 ("nvme-fc: update hardware queues before using them")
>
> So if we find a way to keep this test scenario alive I don't mind. But
> as I said I think this will break it.

The above mentioned commit 555f66d0f8a3 ("nvme-fc: update hardware 
queues before using them") was added to Linux-v5.15.

The qid_max support was added to Linux-v6.1 commit 3e980f5995e0 ("nvmet: 
expose max queues to configfs").

IMO the real objective of adding qid_max support is to make the 
subsystem more configurable and not for testing reconnect attempts of 
the host, as was mentioned in the commit message.

This re-connect scenario can easily be tested in a different manner, and
the target driver code is not there for testing the host driver's unique
scenarios. At least that is not its main goal.

If we allow changing the qid_max attribute at any time during the
lifecycle of the subsystem, it may lead to a bad configuration and
unexpected behavior.

For example, the following race:

1. user sets the max_qid attr to 10

2. during "nvmet_alloc_ctrl":
   ctrl->sqs = kcalloc(subsys->max_qid + 1,
                       sizeof(struct nvmet_sq *), GFP_KERNEL); /* max_qid = 10 */

3. user sets the max_qid attr to 20

4. during "nvmet_alloc_ctrl":
   ctrl->cqs = kcalloc(subsys->max_qid + 1,
                       sizeof(struct nvmet_cq *), GFP_KERNEL); /* max_qid = 20 */

All of the above happens before we call "list_add_tail(&ctrl->subsys_entry,
&subsys->ctrls);" and leads to an 11-entry sqs array, a 21-entry cqs array,
and max_qid == 20, without any reconnect attempt.

I don't know how the driver will handle it but for sure this is a 
situation we would like to avoid.

I guess we can try fixing this problem in other ways, but why should we
complicate the code so much? Let's try to prevent this scenario and
simplify the code where we can...


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-25  8:28     ` Max Gurtovoy
@ 2025-09-25 11:32       ` Daniel Wagner
  2025-09-25 12:06         ` Max Gurtovoy
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Wagner @ 2025-09-25 11:32 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr

On Thu, Sep 25, 2025 at 11:28:04AM +0300, Max Gurtovoy wrote:
> The above mentioned commit 555f66d0f8a3 ("nvme-fc: update hardware queues
> before using them") was added to Linux-v5.15.
> 
> The qid_max support was added to Linux-v6.1 commit 3e980f5995e0 ("nvmet:
> expose max queues to configfs").
> 
> IMO the real objective of adding qid_max support is to make the subsystem
> more configurable and not for testing reconnect attempts of the host, as was
> mentioned in the commit message.

The nvmet change took a bit longer to get added. My main objective when
I wrote this code was to test the reconnect attempt.

> This re-connect scenario can easily be tested in a different manner, and
> the target driver code is not there for testing the host driver's unique
> scenarios. At least that is not its main goal.

As I said, I don't mind changing/adapting the test infrastructure, but
not the test case. FWIW, it found more than the above bug over time.

> I don't know how the driver will handle it but for sure this is a situation
> we would like to avoid.

No objection.

> I guess we can try fixing this problem in other ways, but why should we
> complicate the code so much? Let's try to prevent this scenario and
> simplify the code where we can...

Sure. Maybe the failover tests should set up two controllers instead of
a single one. But not sure if this is supported at all from the nvmet or
blktests perspective.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-25 11:32       ` Daniel Wagner
@ 2025-09-25 12:06         ` Max Gurtovoy
  2025-09-25 16:02           ` Daniel Wagner
  0 siblings, 1 reply; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-25 12:06 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr


On 25/09/2025 14:32, Daniel Wagner wrote:
> On Thu, Sep 25, 2025 at 11:28:04AM +0300, Max Gurtovoy wrote:
>> The above mentioned commit 555f66d0f8a3 ("nvme-fc: update hardware queues
>> before using them") was added to Linux-v5.15.
>>
>> The qid_max support was added to Linux-v6.1 commit 3e980f5995e0 ("nvmet:
>> expose max queues to configfs").
>>
>> IMO the real objective of adding qid_max support is to make the subsystem
>> more configurable and not for testing reconnect attempts of the host, as was
>> mentioned in the commit message.
> The nvmet change took a bit longer to get added. My main objective when
> I wrote this code was to test the reconnect attempt.
>
>> This re-connect scenario can easily be tested in a different manner, and
>> the target driver code is not there for testing the host driver's unique
>> scenarios. At least that is not its main goal.
> As I said, I don't mind changing/adapting the test infrastructure, but
> not the test case. FWIW, it found more than the above bug over time.

Sure, test case can stay.

for example one can:

1. create a port/subsystem with max_qid = 20
2. connect from a host
3. destroy the subsystem/port (this will issue a 
reconnect/error_recovery flow from host)
4. create the same port/subsystem with max_qid = 10
5. reconnect attempt X will succeed and new host controller will have 10 
IO queues - tagset should be updated as it does today.

WDYT ?

>
>> I don't know how the driver will handle it but for sure this is a situation
>> we would like to avoid.
> No objection.
>
>> I guess we can try fixing this problem in other ways, but why should we
>> complicate the code so much? Let's try to prevent this scenario and
>> simplify the code where we can...
> Sure. Maybe the failover tests should set up two controllers instead of
> a single one. But not sure if this is supported at all from the nvmet or
> blktests perspective.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-25 12:06         ` Max Gurtovoy
@ 2025-09-25 16:02           ` Daniel Wagner
  2025-09-25 22:09             ` Max Gurtovoy
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Wagner @ 2025-09-25 16:02 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr

On Thu, Sep 25, 2025 at 03:06:42PM +0300, Max Gurtovoy wrote:
> Sure, test case can stay.
> 
> for example one can:
> 
> 1. create a port/subsystem with max_qid = 20
> 2. connect from a host
> 3. destroy the subsystem/port (this will issue a reconnect/error_recovery
> flow from host)
> 4. create the same port/subsystem with max_qid = 10
> 5. reconnect attempt X will succeed and new host controller will have 10 IO
> queues - tagset should be updated as it does today.
> 
> WDYT ?

I can't remember why this approach didn't work when I added the test
case. IIRC, the fc transport was way too buggy.

 set_qid_max() {
-       local subsys_name="$1"
+       local subsysnqn="$1"
        local qid_max="$2"

-       set_nvmet_attr_qid_max "${subsys_name}" "${qid_max}"
-       nvmf_check_queue_count "${subsys_name}" "${qid_max}" || return 1
-       _nvmf_wait_for_state "${subsys_name}" "live" || return 1
+       _get_nvmet_ports "${subsysnqn}" ports
+       for port in "${ports[@]}"; do
+               _remove_nvmet_subsystem_from_port "${port}" "${subsysnqn}"
+               _remove_nvmet_port "${port}"
+       done
+
+       set_nvmet_attr_qid_max "${subsysnqn}" "${qid_max}"
+
+       local p=0
+       local num_ports=1
+       while (( p < num_ports )); do
+               port="$(_create_nvmet_port)"
+               _add_nvmet_subsys_to_port "${port}" "${subsysnqn}"
+               p=$(( p + 1 ))
+       done
+
+       nvmf_check_queue_count "${subsysnqn}" "${qid_max}" || return 1
+       _nvmf_wait_for_state "${subsysnqn}" "live" || return 1

        return 0
 }


[  965.045477][ T1741] nvmet: Created nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2.
[  965.049342][   T69] nvme nvme1: reconnect: revising io queue count from 4 to 1
[  965.082787][   T69] group_mask_cpus_evenly:458 cpu_online_mask 0-7
[  965.087982][   T69] nvme nvme1: NVME-FC{0}: controller connect complete
[  965.357063][   T69] (NULL device *): {0:0} Association deleted
[  965.390686][ T1741] nvme nvme1: NVME-FC{0}: io failed due to lldd error 6
[  965.392129][   T66] nvme nvme1: NVME-FC{0}: transport association event: transport detected io error
[  965.393259][   T66] nvme nvme1: NVME-FC{0}: resetting controller
[  965.422856][   T69] (NULL device *): {0:0} Association freed
[  965.438365][ T1788] nvme nvme1: NVME-FC{0}: controller connectivity lost. Awaiting Reconnect
[  965.542093][ T1952] nvme nvme1: NVME-FC{0}: connectivity re-established. Attempting reconnect
[  965.552933][   T66] nvme nvme1: long keepalive RTT (666024 ms)
[  965.554266][   T66] nvme nvme1: failed nvme_keep_alive_end_io error=4
[  967.473132][   T69] nvme nvme1: NVME-FC{0}: create association : host wwpn 0x20001100aa000001  rport"
[  967.475251][   T64] (NULL device *): {0:0} Association created
[  967.476634][   T66] nvmet: Created nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2.
[  967.479208][   T69] nvme nvme1: reconnect: revising io queue count from 1 to 2
[  967.506681][   T69] group_mask_cpus_evenly:458 cpu_online_mask 0-7
[  967.511893][   T69] nvme nvme1: NVME-FC{0}: controller connect complete
[  967.807060][ T1985] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
[  967.984963][   T66] nvme nvme1: long keepalive RTT (668456 ms)
[  967.986014][   T66] nvme nvme1: failed nvme_keep_alive_end_io error=4
[  968.029060][  T129] (NULL device *): {0:0} Association deleted
[  968.131057][  T129] (NULL device *): {0:0} Association freed
[  968.131752][  T815] (NULL device *): Disconnect LS failed: No Association


This seems to work; there is some fallout in the test case:

root@localhost:~# ./test
nvme/048 (tr=fc) (Test queue count changes on reconnect)     [failed]
    runtime  6.040s  ...  5.735s
    --- tests/nvme/048.out      2024-04-16 16:30:22.861404878 +0000
    +++ /tmp/blktests/nodev_tr_fc/nvme/048.out.bad      2025-09-25 15:56:00.620053169 +0000
    @@ -1,3 +1,8 @@
     Running nvme/048
    +rm: cannot remove '/sys/kernel/config/nvmet//ports/0/subsystems/blktests-subsystem-1': No such file or directory
    +common/nvme: line 133: echo: write error: No such file or directory
    +common/nvme: line 111: echo: write error: No such file or directory
    +rmdir: failed to remove '/sys/kernel/config/nvmet//ports/0/ana_groups/*': No such file or directory
    +rmdir: failed to remove '/sys/kernel/config/nvmet//ports/0': No such file or directory
     disconnected 1 controller(s)
    ...
    (Run 'diff -u tests/nvme/048.out /tmp/blktests/nodev_tr_fc/nvme/048.out.bad' to see the entire diff)


But the more annoying part is yet another UAF:


[ 1090.923342][   T69] (NULL device *): {0:0} Association freed
[ 1090.924134][   T69] ==================================================================
[ 1090.925072][   T69] BUG: KASAN: slab-use-after-free in process_scheduled_works+0x27a/0x1310
[ 1090.926115][   T69] Read of size 8 at addr ffff888111efa448 by task kworker/u32:6/69
[ 1090.927054][   T69]
[ 1090.927329][   T69] CPU: 5 UID: 0 PID: 69 Comm: kworker/u32:6 Not tainted 6.17.0-rc4+ #651 PREEMPT(vc
[ 1090.927333][   T69] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[ 1090.927336][   T69] Workqueue:  0x0 (nvmet-wq)
[ 1090.927342][   T69] Call Trace:
[ 1090.927344][   T69]  <TASK>
[ 1090.927346][   T69]  dump_stack_lvl+0x60/0x80
[ 1090.927350][   T69]  print_report+0xbc/0x260
[ 1090.927353][   T69]  ? process_scheduled_works+0x27a/0x1310
[ 1090.927355][   T69]  kasan_report+0x9f/0xe0
[ 1090.927358][   T69]  ? process_scheduled_works+0x27a/0x1310
[ 1090.927361][   T69]  kasan_check_range+0x297/0x2a0
[ 1090.927363][   T69]  process_scheduled_works+0x27a/0x1310
[ 1090.927368][   T69]  ? __pfx_process_scheduled_works+0x10/0x10
[ 1090.927371][   T69]  ? lock_is_held_type+0x81/0x110
[ 1090.927374][   T69]  worker_thread+0x83a/0xca0
[ 1090.927376][   T69]  ? do_raw_spin_trylock+0xac/0x180
[ 1090.927378][   T69]  ? __pfx_do_raw_spin_trylock+0x10/0x10
[ 1090.927381][   T69]  ? __kthread_parkme+0x7d/0x1a0
[ 1090.927384][   T69]  kthread+0x540/0x660
[ 1090.927385][   T69]  ? __pfx_worker_thread+0x10/0x10
[ 1090.927387][   T69]  ? __pfx_kthread+0x10/0x10
[ 1090.927389][   T69]  ? __pfx_kthread+0x10/0x10
[ 1090.927391][   T69]  ret_from_fork+0x1c9/0x3e0
[ 1090.927393][   T69]  ? __pfx_kthread+0x10/0x10
[ 1090.927395][   T69]  ret_from_fork_asm+0x1a/0x30
[ 1090.927399][   T69]  </TASK>
[ 1090.927400][   T69]
[ 1090.943292][   T69] Allocated by task 64:
[ 1090.943818][   T69]  kasan_save_track+0x2b/0x70
[ 1090.944401][   T69]  __kasan_kmalloc+0x6a/0x80
[ 1090.944907][   T69]  __kmalloc_cache_noprof+0x25d/0x410
[ 1090.945507][   T69]  nvmet_fc_alloc_target_assoc+0xd3/0xc70 [nvmet_fc]
[ 1090.946279][   T69]  nvmet_fc_handle_ls_rqst_work+0xd89/0x2b60 [nvmet_fc]
[ 1090.947078][   T69]  process_scheduled_works+0x969/0x1310
[ 1090.947689][   T69]  worker_thread+0x83a/0xca0
[ 1090.948204][   T69]  kthread+0x540/0x660
[ 1090.948668][   T69]  ret_from_fork+0x1c9/0x3e0
[ 1090.949193][   T69]  ret_from_fork_asm+0x1a/0x30
[ 1090.949744][   T69]
[ 1090.950018][   T69] Freed by task 69:
[ 1090.950493][   T69]  kasan_save_track+0x2b/0x70
[ 1090.951015][   T69]  kasan_save_free_info+0x42/0x50
[ 1090.951579][   T69]  __kasan_slab_free+0x3d/0x50
[ 1090.952116][   T69]  kfree+0x164/0x410
[ 1090.952565][   T69]  nvmet_fc_delete_assoc_work+0x70/0x240 [nvmet_fc]
[ 1090.953331][   T69]  process_scheduled_works+0x969/0x1310
[ 1090.953953][   T69]  worker_thread+0x83a/0xca0
[ 1090.954488][   T69]  kthread+0x540/0x660
[ 1090.954941][   T69]  ret_from_fork+0x1c9/0x3e0
[ 1090.955504][   T69]  ret_from_fork_asm+0x1a/0x30
[ 1090.956078][   T69]
[ 1090.956359][   T69] Last potentially related work creation:
[ 1090.956980][   T69]  kasan_save_stack+0x2b/0x50
[ 1090.957522][   T69]  kasan_record_aux_stack+0x95/0xb0
[ 1090.958158][   T69]  insert_work+0x2c/0x1f0
[ 1090.958708][   T69]  __queue_work+0x8b3/0xae0
[ 1090.959206][   T69]  queue_work_on+0xab/0xe0
[ 1090.959703][   T69]  __nvmet_fc_free_assocs+0x13b/0x1f0 [nvmet_fc]
[ 1090.960471][   T69]  nvmet_fc_remove_port+0x1c5/0x1f0 [nvmet_fc]
[ 1090.961215][   T69]  nvmet_disable_port+0xf6/0x180 [nvmet]
[ 1090.961895][   T69]  nvmet_port_subsys_drop_link+0x188/0x1b0 [nvmet]
[ 1090.962683][   T69]  configfs_unlink+0x389/0x580
[ 1090.963207][   T69]  vfs_unlink+0x284/0x4e0
[ 1090.963722][   T69]  do_unlinkat+0x2b6/0x440
[ 1090.964245][   T69]  __x64_sys_unlinkat+0x9a/0xb0
[ 1090.964808][   T69]  do_syscall_64+0xa1/0x2e0
[ 1090.965311][   T69]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 1090.965978][   T69]
[ 1090.966272][   T69] Second to last potentially related work creation:
[ 1090.967076][   T69]  kasan_save_stack+0x2b/0x50
[ 1090.967593][   T69]  kasan_record_aux_stack+0x95/0xb0
[ 1090.968162][   T69]  insert_work+0x2c/0x1f0
[ 1090.968643][   T69]  __queue_work+0x8b3/0xae0
[ 1090.969142][   T69]  queue_work_on+0xab/0xe0
[ 1090.969663][   T69]  nvmet_fc_delete_ctrl+0x2a5/0x2e0 [nvmet_fc]
[ 1090.970397][   T69]  nvmet_port_del_ctrls+0xc7/0x100 [nvmet]
[ 1090.971083][   T69]  nvmet_port_subsys_drop_link+0x157/0x1b0 [nvmet]
[ 1090.971825][   T69]  configfs_unlink+0x389/0x580
[ 1090.972396][   T69]  vfs_unlink+0x284/0x4e0
[ 1090.972909][   T69]  do_unlinkat+0x2b6/0x440
[ 1090.973413][   T69]  __x64_sys_unlinkat+0x9a/0xb0
[ 1090.973943][   T69]  do_syscall_64+0xa1/0x2e0
[ 1090.974476][   T69]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 1090.975129][   T69]
[ 1090.975410][   T69] The buggy address belongs to the object at ffff888111efa000
[ 1090.975410][   T69]  which belongs to the cache kmalloc-2k of size 2048
[ 1090.977002][   T69] The buggy address is located 1096 bytes inside of
[ 1090.977002][   T69]  freed 2048-byte region [ffff888111efa000, ffff888111efa800)
[ 1090.978600][   T69]
[ 1090.978864][   T69] The buggy address belongs to the physical page:
[ 1090.979596][   T69] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff888111ef9000 pf8
[ 1090.980717][   T69] head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[ 1090.981655][   T69] flags: 0x17ffffc0000040(head|node=0|zone=2|lastcpupid=0x1fffff)
[ 1090.982626][   T69] page_type: f5(slab)
[ 1090.983067][   T69] raw: 0017ffffc0000040 ffff888100042f00 dead000000000122 0000000000000000
[ 1090.984053][   T69] raw: ffff888111ef9000 0000000080080005 00000000f5000000 0000000000000000
[ 1090.985165][   T69] head: 0017ffffc0000040 ffff888100042f00 dead000000000122 0000000000000000
[ 1090.986228][   T69] head: ffff888111ef9000 0000000080080005 00000000f5000000 0000000000000000
[ 1090.987286][   T69] head: 0017ffffc0000003 ffffea000447be01 00000000ffffffff 00000000ffffffff
[ 1090.988313][   T69] head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
[ 1090.989355][   T69] page dumped because: kasan: bad access detected
[ 1090.990209][   T69]
[ 1090.990557][   T69] Memory state around the buggy address:
[ 1090.991315][   T69]  ffff888111efa300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 1090.992367][   T69]  ffff888111efa380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 1090.993417][   T69] >ffff888111efa400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 1090.994529][   T69]                                               ^
[ 1090.995387][   T69]  ffff888111efa480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 1090.996312][   T69]  ffff888111efa500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 1090.997232][   T69] ==================================================================


In short, nvme/048 should be updated and the UAF needs to be addressed.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-25 16:02           ` Daniel Wagner
@ 2025-09-25 22:09             ` Max Gurtovoy
  2025-09-26  6:58               ` Daniel Wagner
  0 siblings, 1 reply; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-25 22:09 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr


On 25/09/2025 19:02, Daniel Wagner wrote:
> On Thu, Sep 25, 2025 at 03:06:42PM +0300, Max Gurtovoy wrote:
>> Sure, test case can stay.
>>
>> for example one can:
>>
>> 1. create a port/subsystem with max_qid = 20
>> 2. connect from a host
>> 3. destroy the subsystem/port (this will issue a reconnect/error_recovery
>> flow from host)
>> 4. create the same port/subsystem with max_qid = 10
>> 5. reconnect attempt X will succeed and new host controller will have 10 IO
>> queues - tagset should be updated as it does today.
>>
>> WDYT ?
> I can't remember why this approach didn't work when I added the test
> case. IIRC, the fc transport was way too buggy.
>
>   set_qid_max() {
> -       local subsys_name="$1"
> +       local subsysnqn="$1"
>          local qid_max="$2"
>
> -       set_nvmet_attr_qid_max "${subsys_name}" "${qid_max}"
> -       nvmf_check_queue_count "${subsys_name}" "${qid_max}" || return 1
> -       _nvmf_wait_for_state "${subsys_name}" "live" || return 1
> +       _get_nvmet_ports "${subsysnqn}" ports
> +       for port in "${ports[@]}"; do
> +               _remove_nvmet_subsystem_from_port "${port}" "${subsysnqn}"
> +               _remove_nvmet_port "${port}"
> +       done
> +

I'm not an expert in blktests, but it seems like we should do something 
like the following inside set_qid_max():

_nvmet_target_cleanup
_nvmet_target_setup --blkdev file (Add here --qid_max "${qid_max}" option)
nvmf_check_queue_count "${subsysnqn}" "${qid_max}" || return 1 
_nvmf_wait_for_state "${subsysnqn}" "live" || return 1

I expect the above to work with and without my patches.

If possible, let's also try it with the tcp/rdma transports.

> +       set_nvmet_attr_qid_max "${subsysnqn}" "${qid_max}"
> +
> +       local p=0
> +       local num_ports=1
> +       while (( p < num_ports )); do
> +               port="$(_create_nvmet_port)"
> +               _add_nvmet_subsys_to_port "${port}" "${subsysnqn}"
> +               p=$(( p + 1 ))
> +       done
> +
> +       nvmf_check_queue_count "${subsysnqn}" "${qid_max}" || return 1
> +       _nvmf_wait_for_state "${subsysnqn}" "live" || return 1
>
>          return 0
>   }
>
>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-25 22:09             ` Max Gurtovoy
@ 2025-09-26  6:58               ` Daniel Wagner
  2025-09-28 11:53                 ` Max Gurtovoy
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Wagner @ 2025-09-26  6:58 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr

On Fri, Sep 26, 2025 at 01:09:22AM +0300, Max Gurtovoy wrote:
> I'm not an expert in the blktests but it seems like we should do inside the
> set_qid_max() something like:
> 
> _nvmet_target_cleanup
> _nvmet_target_setup --blkdev file (Add here --qid_max "${qid_max}" option)
> nvmf_check_queue_count "${subsysnqn}" "${qid_max}" || return 1
> _nvmf_wait_for_state "${subsysnqn}" "live" || return 1
> 
> I expect the above to work with and without my patches..

My patch was just a quick hack to see if it works. Updating
__nvmet_target_setup to handle the qid_max argument is the right
approach. 
 
> If possible - lets try it also with tcp/rdma transports

You are proposing a patch that has never been run through blktests, as it
would clearly regress nvme/048. I don't see why I should be doing your
work here.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-26  6:58               ` Daniel Wagner
@ 2025-09-28 11:53                 ` Max Gurtovoy
  2025-10-01  6:04                   ` Daniel Wagner
  0 siblings, 1 reply; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-28 11:53 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr


On 26/09/2025 9:58, Daniel Wagner wrote:
> On Fri, Sep 26, 2025 at 01:09:22AM +0300, Max Gurtovoy wrote:
>> I'm not an expert in the blktests but it seems like we should do inside the
>> set_qid_max() something like:
>>
>> _nvmet_target_cleanup
>> _nvmet_target_setup --blkdev file (Add here --qid_max "${qid_max}" option)
>> nvmf_check_queue_count "${subsysnqn}" "${qid_max}" || return 1
>> _nvmf_wait_for_state "${subsysnqn}" "live" || return 1
>>
>> I expect the above to work with and without my patches..
> My patch was just a quick hack to see if it works. Updating
> __nvmet_target_setup to handle the qid_max argument is the right
> approach.

Thanks for trying this.


>   
>> If possible - lets try it also with tcp/rdma transports
> You are proposing a patch which has never run blktests, as it clearly
> would regress nvme/048. I don't see myself here to do your work.

This was not my intention :)

I've sent a patch series to blktests with the needed fixes.

I've tested nvme/048 with RDMA/TCP transports.



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems
  2025-09-24 22:13 ` [PATCH 0/4] nvmet: fix configfs attr update handling " Keith Busch
@ 2025-09-28 13:31   ` Max Gurtovoy
  2025-10-08 22:56   ` Max Gurtovoy
  2025-10-29  9:32   ` Christoph Hellwig
  2 siblings, 0 replies; 18+ messages in thread
From: Max Gurtovoy @ 2025-09-28 13:31 UTC (permalink / raw)
  To: Keith Busch; +Cc: hch, linux-nvme, sagi, kch, dwagner, israelr


On 25/09/2025 1:13, Keith Busch wrote:
> On Wed, Sep 24, 2025 at 11:26:00PM +0300, Max Gurtovoy wrote:
>> 1. Forbid changes to controller ID min/max attributes values on already
>>     discovered subsystems.
>> 2. Switch cntlid ida allocation from global to per-subsystem scope,
>>     matching the granularity of controller ID ranges.
>> 3. Forbid changes to vendor ID and subsystem vendor ID attributes values
>>     on already discovered subsystems.
>> 4. Forbid changes to max_qid attribute values on already discovered
>>     subsystems.
> Is there a reason these should be changeable after they're initialized
> even prior to being discovered?

It allows admins to modify some properties after initialization.

We might want to finalize the subsystem configuration once it is linked 
to a port for the first time (instead of when it creates its first 
controller).




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems
  2025-09-28 11:53                 ` Max Gurtovoy
@ 2025-10-01  6:04                   ` Daniel Wagner
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel Wagner @ 2025-10-01  6:04 UTC (permalink / raw)
  To: Max Gurtovoy; +Cc: hch, linux-nvme, kbusch, sagi, kch, israelr

On Sun, Sep 28, 2025 at 02:53:12PM +0300, Max Gurtovoy wrote:
> On 26/09/2025 9:58, Daniel Wagner wrote:
> > > If possible - lets try it also with tcp/rdma transports
> > You are proposing a patch which has never run blktests, as it clearly
> > would regress nvme/048. I don't see myself here to do your work.
> 
> This was not my intention :)

Ah okay, I understood it wrong then :)







^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems
  2025-09-24 22:13 ` [PATCH 0/4] nvmet: fix configfs attr update handling " Keith Busch
  2025-09-28 13:31   ` Max Gurtovoy
@ 2025-10-08 22:56   ` Max Gurtovoy
  2025-10-29  9:32   ` Christoph Hellwig
  2 siblings, 0 replies; 18+ messages in thread
From: Max Gurtovoy @ 2025-10-08 22:56 UTC (permalink / raw)
  To: Keith Busch; +Cc: hch, linux-nvme, sagi, kch, dwagner, israelr


On 25/09/2025 1:13, Keith Busch wrote:
> On Wed, Sep 24, 2025 at 11:26:00PM +0300, Max Gurtovoy wrote:
>> 1. Forbid changes to controller ID min/max attributes values on already
>>     discovered subsystems.
>> 2. Switch cntlid ida allocation from global to per-subsystem scope,
>>     matching the granularity of controller ID ranges.
>> 3. Forbid changes to vendor ID and subsystem vendor ID attributes values
>>     on already discovered subsystems.
>> 4. Forbid changes to max_qid attribute values on already discovered
>>     subsystems.
> Is there a reason these should be changeable after they're initialized
> even prior to being discovered?

I think we can make the subsystem immutable upon the first successful 
call to nvmet_port_subsys_allow_link() for the specific subsystem.

Then make all the _store() functions fail after the first linkage to an 
nvmet_port.

Any thoughts?




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems
  2025-09-24 22:13 ` [PATCH 0/4] nvmet: fix configfs attr update handling " Keith Busch
  2025-09-28 13:31   ` Max Gurtovoy
  2025-10-08 22:56   ` Max Gurtovoy
@ 2025-10-29  9:32   ` Christoph Hellwig
  2 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2025-10-29  9:32 UTC (permalink / raw)
  To: Keith Busch; +Cc: Max Gurtovoy, hch, linux-nvme, sagi, kch, dwagner, israelr

On Wed, Sep 24, 2025 at 04:13:57PM -0600, Keith Busch wrote:
> On Wed, Sep 24, 2025 at 11:26:00PM +0300, Max Gurtovoy wrote:
> > 1. Forbid changes to controller ID min/max attributes values on already
> >    discovered subsystems.
> > 2. Switch cntlid ida allocation from global to per-subsystem scope,
> >    matching the granularity of controller ID ranges.
> > 3. Forbid changes to vendor ID and subsystem vendor ID attributes values
> >    on already discovered subsystems.
> > 4. Forbid changes to max_qid attribute values on already discovered
> >    subsystems.
> 
> Is there a reason these should be changeable after they're initialized
> even prior to being discovered?

Yes, the whole idea of allowing modifications when enabled, but not
discovered seems odd.

This seems to come from:

commit 87fd4cc1c0dda038c9a3617c9d07d5159326e80f
Author: Noam Gottlieb <ngottlieb@nvidia.com>
Date:   Mon Jun 7 12:23:24 2021 +0300

    nvmet: make ver stable once connection established

I really think we should try to replace all that with an enable
check, hoping we're not going to break something that started to
rely on it.


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2025-10-29  9:32 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-24 20:26 [PATCH 0/4] nvmet: fix configfs attr update handling for discovered subsystems Max Gurtovoy
2025-09-24 20:26 ` [PATCH 1/4] nvmet: forbid changing ctrl ID attributes " Max Gurtovoy
2025-09-24 20:26 ` [PATCH 2/4] nvmet: make cntlid ida per subsystem Max Gurtovoy
2025-09-24 20:26 ` [PATCH 3/4] nvmet: prevent max_qid changes for discovered subsystems Max Gurtovoy
2025-09-25  7:36   ` Daniel Wagner
2025-09-25  8:28     ` Max Gurtovoy
2025-09-25 11:32       ` Daniel Wagner
2025-09-25 12:06         ` Max Gurtovoy
2025-09-25 16:02           ` Daniel Wagner
2025-09-25 22:09             ` Max Gurtovoy
2025-09-26  6:58               ` Daniel Wagner
2025-09-28 11:53                 ` Max Gurtovoy
2025-10-01  6:04                   ` Daniel Wagner
2025-09-24 20:26 ` [PATCH 4/4] nvmet: prevent vid/ssvid " Max Gurtovoy
2025-09-24 22:13 ` [PATCH 0/4] nvmet: fix configfs attr update handling " Keith Busch
2025-09-28 13:31   ` Max Gurtovoy
2025-10-08 22:56   ` Max Gurtovoy
2025-10-29  9:32   ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox