Linux-NVME Archive on lore.kernel.org
From: Maurizio Lombardi <mlombard@redhat.com>
To: kbusch@kernel.org
Cc: mheyne@amazon.de, emilne@redhat.com, jmeneghi@redhat.com,
	linux-nvme@lists.infradead.org, dwagner@suse.de,
	mlombard@arkamax.eu, mkhalfella@purestorage.com,
	chaitanyak@nvidia.com, hare@kernel.org, hch@lst.de
Subject: [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller
Date: Fri,  8 May 2026 15:33:31 +0200	[thread overview]
Message-ID: <20260508133335.98612-6-mlombard@redhat.com> (raw)
In-Reply-To: <20260508133335.98612-1-mlombard@redhat.com>

Currently, there is no way to adjust the timeout values for NVMe I/O
queues on a per-controller basis.
Add an io_timeout sysfs attribute so that NVMe controllers with
different timeout requirements can have custom I/O timeouts set.
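As an illustration of the intended use (the controller name and the
timeout value below are hypothetical, not taken from this patch):

```shell
# Read the current I/O timeout of controller nvme0, reported in
# milliseconds by the show handler:
cat /sys/class/nvme/nvme0/io_timeout

# Set a 60-second (60 * 1000 = 60000 ms) I/O timeout; the store
# handler applies it to every namespace queue of this controller:
echo 60000 > /sys/class/nvme/nvme0/io_timeout
```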

Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
 drivers/nvme/host/core.c  |  2 ++
 drivers/nvme/host/nvme.h  |  1 +
 drivers/nvme/host/sysfs.c | 47 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 22be8cf5e982..fa60e10e05d5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4203,6 +4203,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 		mutex_unlock(&ctrl->namespaces_lock);
 		goto out_unlink_ns;
 	}
+	blk_queue_rq_timeout(ns->queue, ctrl->io_timeout);
 	nvme_ns_add_to_ctrl_list(ns);
 	mutex_unlock(&ctrl->namespaces_lock);
 	synchronize_srcu(&ctrl->srcu);
@@ -5141,6 +5142,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
 	ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
 	ctrl->ka_last_check_time = jiffies;
 	ctrl->admin_timeout = NVME_ADMIN_TIMEOUT;
+	ctrl->io_timeout = NVME_IO_TIMEOUT;
 
 	BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
 			PAGE_SIZE);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9da3ebebe9c8..a6d998c2e0e5 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -371,6 +371,7 @@ struct nvme_ctrl {
 	u32 ctrl_config;
 	u32 queue_count;
 	u32 admin_timeout;
+	u32 io_timeout;
 
 	u64 cap;
 	u32 max_hw_sectors;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 9456af955aff..7e6810423735 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -664,6 +664,52 @@ static ssize_t nvme_admin_timeout_store(struct device *dev,
 static DEVICE_ATTR(admin_timeout, S_IRUGO | S_IWUSR,
 	nvme_admin_timeout_show, nvme_admin_timeout_store);
 
+static ssize_t nvme_io_timeout_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%u\n", jiffies_to_msecs(ctrl->io_timeout));
+}
+
+static ssize_t nvme_io_timeout_store(struct device *dev,
+			struct device_attribute *attr,
+			const char *buf, size_t count)
+{
+	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+	struct nvme_ns *ns;
+	u32 timeout;
+	int err;
+
+	/*
+	 * Wait until the controller reaches the LIVE state
+	 * to be sure that connect_q is properly initialized.
+	 */
+	if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
+		return -EBUSY;
+
+	err = kstrtou32(buf, 10, &timeout);
+	if (err || !timeout)
+		return -EINVAL;
+
+	/* Take the namespaces_lock to avoid racing against nvme_alloc_ns() */
+	mutex_lock(&ctrl->namespaces_lock);
+
+	ctrl->io_timeout = msecs_to_jiffies(timeout);
+	list_for_each_entry(ns, &ctrl->namespaces, list)
+		blk_queue_rq_timeout(ns->queue, ctrl->io_timeout);
+
+	mutex_unlock(&ctrl->namespaces_lock);
+
+	if (ctrl->connect_q)
+		blk_queue_rq_timeout(ctrl->connect_q, ctrl->io_timeout);
+
+	return count;
+}
+
+static DEVICE_ATTR(io_timeout, S_IRUGO | S_IWUSR,
+	nvme_io_timeout_show, nvme_io_timeout_store);
+
 #ifdef CONFIG_NVME_HOST_AUTH
 static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
@@ -807,6 +853,7 @@ static struct attribute *nvme_dev_attrs[] = {
 	&dev_attr_dctype.attr,
 	&dev_attr_quirks.attr,
 	&dev_attr_admin_timeout.attr,
+	&dev_attr_io_timeout.attr,
 #ifdef CONFIG_NVME_HOST_AUTH
 	&dev_attr_dhchap_secret.attr,
 	&dev_attr_dhchap_ctrl_secret.attr,
-- 
2.54.0
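One behavioral note on the attribute above: the store path converts the
user's milliseconds to jiffies and the show path converts back, so the
value read back is rounded up to jiffy granularity. A rough model of
that round trip, assuming HZ=250 and the usual round-up conversion (a
sketch, not the exact kernel code):

```python
HZ = 250                      # assumed tick rate; a kernel config option
MSEC_PER_JIFFY = 1000 // HZ   # 4 ms per jiffy at HZ=250

def msecs_to_jiffies(ms: int) -> int:
    # The kernel rounds up, so a timeout is never shorter than requested.
    return (ms + MSEC_PER_JIFFY - 1) // MSEC_PER_JIFFY

def jiffies_to_msecs(j: int) -> int:
    return j * MSEC_PER_JIFFY

# A 10 ms request is stored as 3 jiffies and reads back as 12 ms,
# while a jiffy-aligned value such as 60000 ms round-trips exactly.
print(jiffies_to_msecs(msecs_to_jiffies(10)))     # 12
print(jiffies_to_msecs(msecs_to_jiffies(60000)))  # 60000
```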



Thread overview: 46+ messages
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 1/9] nvme: Let the blocklayer set timeouts for requests Maurizio Lombardi
2026-05-11  9:37   ` Hannes Reinecke
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
2026-05-08 16:57   ` Daniel Wagner
2026-05-10 22:10   ` Sagi Grimberg
2026-05-11  8:07   ` Christoph Hellwig
2026-05-11 11:29     ` Maurizio Lombardi
2026-05-11 12:31       ` Christoph Hellwig
2026-05-11  9:46   ` Hannes Reinecke
2026-05-11 10:05     ` Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
2026-05-08 16:57   ` Daniel Wagner
2026-05-10 22:10   ` Sagi Grimberg
2026-05-11  8:07     ` Christoph Hellwig
2026-05-11 12:54       ` Sagi Grimberg
2026-05-11 15:09         ` Keith Busch
2026-05-11 15:45           ` Maurizio Lombardi
2026-05-11 17:10             ` Keith Busch
2026-05-11  8:08   ` Christoph Hellwig
2026-05-11  9:47   ` Hannes Reinecke
2026-05-08 13:33 ` [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT Maurizio Lombardi
2026-05-10 22:10   ` Sagi Grimberg
2026-05-11  8:08   ` Christoph Hellwig
2026-05-11  9:48   ` Hannes Reinecke
2026-05-08 13:33 ` Maurizio Lombardi [this message]
2026-05-08 17:08   ` [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller Daniel Wagner
2026-05-10 22:12   ` Sagi Grimberg
2026-05-11  8:52     ` Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default Maurizio Lombardi
2026-05-11  8:10   ` Christoph Hellwig
2026-05-11 11:42     ` Maurizio Lombardi
2026-05-11 12:32       ` Christoph Hellwig
2026-05-11  9:50   ` Hannes Reinecke
2026-05-08 13:33 ` [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl Maurizio Lombardi
2026-05-10 22:15   ` Sagi Grimberg
2026-05-11  8:11   ` Christoph Hellwig
2026-05-11  9:53   ` Hannes Reinecke
2026-05-11  9:57     ` Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
2026-05-08 17:09   ` Daniel Wagner
2026-05-10 22:16   ` Sagi Grimberg
2026-05-11  8:12   ` Christoph Hellwig
2026-05-11  9:55   ` Hannes Reinecke
2026-05-08 13:33 ` [PATCH V4 9/9] nvme-core: warn on allocating admin tag set with existing queue Maurizio Lombardi
2026-05-10 22:16   ` Sagi Grimberg
