Linux-NVME Archive on lore.kernel.org
From: Randy Jennings <randyj@purestorage.com>
To: lsf-pc@lists.linux-foundation.org, linux-nvme@lists.infradead.org
Cc: cleech@redhat.com, mkhalfella@purestorage.com
Subject: [PATCH 3/7] nvmet: add delay debugfs file to nvmet_ctrl
Date: Thu, 30 Apr 2026 17:29:09 -0600	[thread overview]
Message-ID: <20260430232913.129271-4-randyj@purestorage.com> (raw)
In-Reply-To: <20260430232913.129271-1-randyj@purestorage.com>

From: Chris Leech <cleech@redhat.com>

Create a delay attribute file on the controller in debugfs:
/sys/kernel/debug/nvmet/<subsystem>/ctrlN/delay

Reading this file returns two numbers, a request delay count and a delay
time in ms.  Each delayed request will decrement the delay count until
it reaches 0.

Writing to this file can set both the count and the delay time at once,
or just the count to trigger more delays with the current delay time.

 # delay the next 5 requests by 5 seconds each
 echo 5 5000 > delay

 # set the delay time to 3 seconds without starting a count
 echo 0 3000 > delay

 # delay the next 5 requests by the current delay time
 echo 5 > delay

Signed-off-by: Chris Leech <cleech@redhat.com>
---
 drivers/nvme/target/Kconfig   |  9 ++++++++
 drivers/nvme/target/debugfs.c | 40 +++++++++++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h   |  4 ++++
 3 files changed, 53 insertions(+)

diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
index 4904097dfd49..cfcc652c6f9f 100644
--- a/drivers/nvme/target/Kconfig
+++ b/drivers/nvme/target/Kconfig
@@ -127,3 +127,12 @@ config NVME_TARGET_PCI_EPF
 	  capable PCI controller.
 
 	  If unsure, say N.
+
+config NVME_TARGET_DELAY_REQUESTS
+	bool "NVMe over Fabrics target request delay"
+	depends on NVME_TARGET && NVME_TARGET_DEBUGFS
+	help
+	  This is a testing feature that delays request completion in an
+	  NVMe over Fabrics target, allowing the Cancel command to be exercised.
+
+	  If unsure, say N.
diff --git a/drivers/nvme/target/debugfs.c b/drivers/nvme/target/debugfs.c
index 1300adf6c1fb..ae45aca728ea 100644
--- a/drivers/nvme/target/debugfs.c
+++ b/drivers/nvme/target/debugfs.c
@@ -170,6 +170,42 @@ static int nvmet_ctrl_instance_cirn_show(struct seq_file *m, void *p)
 }
 NVMET_DEBUGFS_ATTR(nvmet_ctrl_instance_cirn);
 
+#if IS_ENABLED(CONFIG_NVME_TARGET_DELAY_REQUESTS)
+static int nvmet_ctrl_delay_show(struct seq_file *m, void *p)
+{
+	struct nvmet_ctrl *ctrl = m->private;
+	int delay_count = atomic_read(&ctrl->delay_count);
+
+	seq_printf(m, "%u %u\n", delay_count, ctrl->delay_msec);
+	return 0;
+}
+
+static ssize_t nvmet_ctrl_delay_write(struct file *file, const char __user *buf,
+				      size_t count, loff_t *ppos)
+{
+	struct seq_file *m = file->private_data;
+	struct nvmet_ctrl *ctrl = m->private;
+	char delay_buf[22] = {};
+	unsigned int delay_count;
+	unsigned int delay_msec;
+	int n;
+
+	if (count >= sizeof(delay_buf))
+		return -EINVAL;
+	if (copy_from_user(delay_buf, buf, count))
+		return -EFAULT;
+
+	n = sscanf(delay_buf, "%u %u", &delay_count, &delay_msec);
+	if (n < 1 || n > 2)
+		return -EINVAL;
+	if (n == 2)
+		ctrl->delay_msec = delay_msec;
+	atomic_set(&ctrl->delay_count, delay_count);
+	return count;
+}
+NVMET_DEBUGFS_RW_ATTR(nvmet_ctrl_delay);
+#endif /* CONFIG_NVME_TARGET_DELAY_REQUESTS */
+
 int nvmet_debugfs_ctrl_setup(struct nvmet_ctrl *ctrl)
 {
 	char name[32];
@@ -205,6 +241,10 @@ int nvmet_debugfs_ctrl_setup(struct nvmet_ctrl *ctrl)
 			    &nvmet_ctrl_instance_ciu_fops);
 	debugfs_create_file("cirn", S_IRUSR, ctrl->debugfs_dir, ctrl,
 			    &nvmet_ctrl_instance_cirn_fops);
+#if IS_ENABLED(CONFIG_NVME_TARGET_DELAY_REQUESTS)
+	debugfs_create_file("delay", S_IRUSR | S_IWUSR, ctrl->debugfs_dir, ctrl,
+			    &nvmet_ctrl_delay_fops);
+#endif
 	return 0;
 }
 
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index acb2f0f3cdc8..beade281164a 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -313,6 +313,10 @@ struct nvmet_ctrl {
 #endif
 #ifdef CONFIG_NVME_TARGET_TCP_TLS
 	struct key		*tls_key;
+#endif
+#ifdef CONFIG_NVME_TARGET_DELAY_REQUESTS
+	atomic_t		delay_count;
+	u32			delay_msec;
 #endif
 	struct nvmet_pr_log_mgr pr_log_mgr;
 };
-- 
2.54.0



Thread overview:
2026-04-30 23:29 [PATCH 0/7] NOT FOR MERGE nvmet code to exercise CCR/CQT Randy Jennings
2026-04-30 23:29 ` [PATCH 1/7] fixup: nvme fix CCR command Randy Jennings
2026-04-30 23:29 ` [PATCH 2/7] nvmet: put all nvmet_req.execute calls behind a function name Randy Jennings
2026-04-30 23:29 ` Randy Jennings [this message]
2026-04-30 23:29 ` [PATCH 4/7] nvmet: delay requests Randy Jennings
2026-04-30 23:29 ` [PATCH 5/7] nvmet: Added debugfs fatal opcode Randy Jennings
2026-04-30 23:29 ` [PATCH 6/7] nvmet: kill nvme controller when fatal opcode is received Randy Jennings
2026-04-30 23:29 ` [PATCH 7/7] Force CCR operation to fail Randy Jennings
2026-05-10 22:39 ` [PATCH 0/7] NOT FOR MERGE nvmet code to exercise CCR/CQT Sagi Grimberg
2026-05-11 19:14   ` Randy Jennings
