From: Maurizio Lombardi <mlombard@redhat.com>
To: kbusch@kernel.org
Cc: mheyne@amazon.de, emilne@redhat.com, jmeneghi@redhat.com,
linux-nvme@lists.infradead.org, dwagner@suse.de,
mlombard@arkamax.eu, mkhalfella@purestorage.com,
chaitanyak@nvidia.com, hare@kernel.org, hch@lst.de
Subject: [PATCH V3 4/8] nvme: add sysfs attribute to change IO timeout per nvme controller
Date: Fri, 10 Apr 2026 09:39:20 +0200 [thread overview]
Message-ID: <20260410073924.61078-5-mlombard@redhat.com> (raw)
In-Reply-To: <20260410073924.61078-1-mlombard@redhat.com>
Currently, there is no way to adjust the I/O timeout on a
per-controller basis; all NVMe I/O queues use the global default.
Add an io_timeout attribute to nvme so that different nvme controllers
which may have different timeout requirements can each have a custom
I/O timeout set.
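For example (assuming a controller named nvme0; the attribute value is
in milliseconds, matching the admin_timeout attribute):

```shell
# Read the current I/O timeout (milliseconds) for controller nvme0
cat /sys/class/nvme/nvme0/io_timeout

# Raise the I/O timeout to 60 seconds for this controller only;
# returns -EBUSY until the controller has reached the LIVE state once
echo 60000 > /sys/class/nvme/nvme0/io_timeout
```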
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 2 ++
drivers/nvme/host/nvme.h | 1 +
drivers/nvme/host/sysfs.c | 47 +++++++++++++++++++++++++++++++++++++++
3 files changed, 50 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 6dc3d273623c..1b676fd4d454 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4197,6 +4197,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
mutex_unlock(&ctrl->namespaces_lock);
goto out_unlink_ns;
}
+ blk_queue_rq_timeout(ns->queue, ctrl->io_timeout);
nvme_ns_add_to_ctrl_list(ns);
mutex_unlock(&ctrl->namespaces_lock);
synchronize_srcu(&ctrl->srcu);
@@ -5136,6 +5137,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
ctrl->ka_last_check_time = jiffies;
ctrl->admin_timeout = NVME_ADMIN_TIMEOUT;
+ ctrl->io_timeout = NVME_IO_TIMEOUT;
BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
PAGE_SIZE);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9da3ebebe9c8..a6d998c2e0e5 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -371,6 +371,7 @@ struct nvme_ctrl {
u32 ctrl_config;
u32 queue_count;
u32 admin_timeout;
+ u32 io_timeout;
u64 cap;
u32 max_hw_sectors;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 000c29bd1f1f..1e6a0eecb30e 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -664,6 +664,52 @@ static ssize_t nvme_admin_timeout_store(struct device *dev,
static DEVICE_ATTR(admin_timeout, S_IRUGO | S_IWUSR,
nvme_admin_timeout_show, nvme_admin_timeout_store);
+static ssize_t nvme_io_timeout_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", jiffies_to_msecs(ctrl->io_timeout));
+}
+
+static ssize_t nvme_io_timeout_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ struct nvme_ns *ns;
+ u32 timeout;
+ int err;
+
+ /*
+ * Wait until the controller reaches the LIVE state
+ * to be sure that connect_q is properly initialized.
+ */
+ if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
+ return -EBUSY;
+
+ err = kstrtou32(buf, 10, &timeout);
+ if (err || !timeout)
+ return -EINVAL;
+
+ /* Take the namespaces_lock to avoid racing against nvme_alloc_ns() */
+ mutex_lock(&ctrl->namespaces_lock);
+
+ ctrl->io_timeout = msecs_to_jiffies(timeout);
+ list_for_each_entry(ns, &ctrl->namespaces, list)
+ blk_queue_rq_timeout(ns->queue, ctrl->io_timeout);
+
+ if (ctrl->connect_q)
+ blk_queue_rq_timeout(ctrl->connect_q, ctrl->io_timeout);
+
+ mutex_unlock(&ctrl->namespaces_lock);
+
+ return count;
+}
+
+static DEVICE_ATTR(io_timeout, S_IRUGO | S_IWUSR,
+ nvme_io_timeout_show, nvme_io_timeout_store);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -807,6 +853,7 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_dctype.attr,
&dev_attr_quirks.attr,
&dev_attr_admin_timeout.attr,
+ &dev_attr_io_timeout.attr,
#ifdef CONFIG_NVME_HOST_AUTH
&dev_attr_dhchap_secret.attr,
&dev_attr_dhchap_ctrl_secret.attr,
--
2.53.0