* [PATCH V4 1/9] nvme: Let the blocklayer set timeouts for requests
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-11 9:37 ` Hannes Reinecke
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
` (7 subsequent siblings)
8 siblings, 1 reply; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
From: "Heyne, Maximilian" <mheyne@amazon.de>
When initializing an nvme request which is about to be sent to the block
layer, we do not need to initialize its timeout. If it is left at 0, the
block layer will use the request queue's timeout in blk_add_timer (via
nvme_start_request, which is called from nvme_*_queue_rq). These timeouts
are set up to either NVME_IO_TIMEOUT or NVME_ADMIN_TIMEOUT when the
request queues are created.
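For reference, the fallback happens in the block layer's timer setup; a
minimal paraphrase (not the exact upstream code) of blk_add_timer():

    void blk_add_timer(struct request *req)
    {
            struct request_queue *q = req->q;

            /* 0 means "unset": fall back to the queue-wide timeout */
            if (!req->timeout)
                    req->timeout = q->rq_timeout;
            /* ... arm the timer based on req->timeout ... */
    }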
Because the io_timeout of the IO queues can be modified via sysfs, the
following situation can occur:
1) NVME_IO_TIMEOUT = 30 (default module parameter)
2) nvme1n1 is probed. IO queues default timeout is 30 s
3) manually change the IO timeout to 90 s
echo 90000 > /sys/class/nvme/nvme1/nvme1n1/queue/io_timeout
4) Any call of __submit_sync_cmd on nvme1n1 to an IO queue will issue
commands with the 30 s timeout instead of the desired 90 s, which might
be more suitable for this device.
Commit 470e900c8036 ("nvme: refactor nvme_alloc_request") already
silently changed this behavior for ioctls because it unconditionally
overrides the request's timeout that was set in nvme_init_request. If it
was left unset by the user of the ioctl, it will be overridden with 0,
meaning the block layer will pick the request queue's IO timeout.
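In code terms (a paraphrase of the behavior described above, not an
exact quote of the source):

    req->timeout = timeout; /* user-supplied value; 0 lets
                             * blk_add_timer() fall back to
                             * q->rq_timeout */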
Following up on that, this patch further improves the consistency of IO
timeout usage. However, there are still uses of NVME_IO_TIMEOUT which
could be inconsistent with what is set in the device's request_queue by
the user.
Reviewed-by: Mohamed Khalfella <mkhalfella@purestorage.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index dc388e24caad..89948d0acf18 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -729,10 +729,8 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
struct nvme_ns *ns = req->q->disk->private_data;
logging_enabled = ns->head->passthru_err_log_enabled;
- req->timeout = NVME_IO_TIMEOUT;
} else { /* no queuedata implies admin queue */
logging_enabled = nr->ctrl->passthru_err_log_enabled;
- req->timeout = NVME_ADMIN_TIMEOUT;
}
if (!logging_enabled)
--
2.54.0
* Re: [PATCH V4 1/9] nvme: Let the blocklayer set timeouts for requests
2026-05-08 13:33 ` [PATCH V4 1/9] nvme: Let the blocklayer set timeouts for requests Maurizio Lombardi
@ 2026-05-11 9:37 ` Hannes Reinecke
0 siblings, 0 replies; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:37 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> From: "Heyne, Maximilian" <mheyne@amazon.de>
>
> When initializing an nvme request which is about to be sent to the block
> layer, we do not need to initialize its timeout. If it is left at 0, the
> block layer will use the request queue's timeout in blk_add_timer (via
> nvme_start_request, which is called from nvme_*_queue_rq). These timeouts
> are set up to either NVME_IO_TIMEOUT or NVME_ADMIN_TIMEOUT when the
> request queues are created.
>
> Because the io_timeout of the IO queues can be modified via sysfs, the
> following situation can occur:
>
> 1) NVME_IO_TIMEOUT = 30 (default module parameter)
> 2) nvme1n1 is probed. IO queues default timeout is 30 s
> 3) manually change the IO timeout to 90 s
> echo 90000 > /sys/class/nvme/nvme1/nvme1n1/queue/io_timeout
> 4) Any call of __submit_sync_cmd on nvme1n1 to an IO queue will issue
> commands with the 30 s timeout instead of the desired 90 s, which might
> be more suitable for this device.
>
> Commit 470e900c8036 ("nvme: refactor nvme_alloc_request") already
> silently changed this behavior for ioctls because it unconditionally
> overrides the request's timeout that was set in nvme_init_request. If it
> was left unset by the user of the ioctl, it will be overridden with 0,
> meaning the block layer will pick the request queue's IO timeout.
>
> Following up on that, this patch further improves the consistency of IO
> timeout usage. However, there are still uses of NVME_IO_TIMEOUT which
> could be inconsistent with what is set in the device's request_queue by
> the user.
>
> Reviewed-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Daniel Wagner <dwagner@suse.de>
> Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/host/core.c | 2 --
> 1 file changed, 2 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 1/9] nvme: Let the blocklayer set timeouts for requests Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-08 16:57 ` Daniel Wagner
` (3 more replies)
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
` (6 subsequent siblings)
8 siblings, 4 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Currently, there is no way to adjust the timeout value of nvme admin
queues on a per-controller basis.
Add an admin_timeout attribute to nvme so that nvme controllers with
different timeout requirements can have custom admin timeouts set.
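For example, assuming the controller is nvme0, the admin timeout could
be raised to 120 seconds with (the value is in milliseconds and is
converted to jiffies internally):

    echo 120000 > /sys/class/nvme/nvme0/admin_timeout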
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 1 +
drivers/nvme/host/nvme.h | 1 +
drivers/nvme/host/sysfs.c | 42 +++++++++++++++++++++++++++++++++++++++
3 files changed, 44 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 89948d0acf18..b1bfcd0a0e5b 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5140,6 +5140,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
ctrl->ka_last_check_time = jiffies;
+ ctrl->admin_timeout = NVME_ADMIN_TIMEOUT;
BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
PAGE_SIZE);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index ccd5e05dac98..9da3ebebe9c8 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -370,6 +370,7 @@ struct nvme_ctrl {
u16 mtfa;
u32 ctrl_config;
u32 queue_count;
+ u32 admin_timeout;
u64 cap;
u32 max_hw_sectors;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index e59758616f27..9456af955aff 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -623,6 +623,47 @@ static ssize_t quirks_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(quirks);
+static ssize_t nvme_admin_timeout_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n",
+ jiffies_to_msecs(ctrl->admin_timeout));
+}
+
+static ssize_t nvme_admin_timeout_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ u32 timeout;
+ int err;
+
+ /*
+ * Wait until the controller reaches the LIVE state
+ * to be sure that admin_q and fabrics_q are
+ * properly initialized.
+ */
+ if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
+ return -EBUSY;
+
+ err = kstrtou32(buf, 10, &timeout);
+ if (err || !timeout)
+ return -EINVAL;
+
+ ctrl->admin_timeout = msecs_to_jiffies(timeout);
+
+ blk_queue_rq_timeout(ctrl->admin_q, ctrl->admin_timeout);
+ if (ctrl->fabrics_q)
+ blk_queue_rq_timeout(ctrl->fabrics_q, ctrl->admin_timeout);
+
+ return count;
+}
+
+static DEVICE_ATTR(admin_timeout, S_IRUGO | S_IWUSR,
+ nvme_admin_timeout_show, nvme_admin_timeout_store);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -765,6 +806,7 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_cntrltype.attr,
&dev_attr_dctype.attr,
&dev_attr_quirks.attr,
+ &dev_attr_admin_timeout.attr,
#ifdef CONFIG_NVME_HOST_AUTH
&dev_attr_dhchap_secret.attr,
&dev_attr_dhchap_ctrl_secret.attr,
--
2.54.0
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
@ 2026-05-08 16:57 ` Daniel Wagner
2026-05-10 22:10 ` Sagi Grimberg
` (2 subsequent siblings)
3 siblings, 0 replies; 46+ messages in thread
From: Daniel Wagner @ 2026-05-08 16:57 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:28PM +0200, Maurizio Lombardi wrote:
> Currently, there is no way to adjust the timeout value of nvme admin
> queues on a per-controller basis.
> Add an admin_timeout attribute to nvme so that nvme controllers with
> different timeout requirements can have custom admin timeouts set.
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
2026-05-08 16:57 ` Daniel Wagner
@ 2026-05-10 22:10 ` Sagi Grimberg
2026-05-11 8:07 ` Christoph Hellwig
2026-05-11 9:46 ` Hannes Reinecke
3 siblings, 0 replies; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:10 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
2026-05-08 16:57 ` Daniel Wagner
2026-05-10 22:10 ` Sagi Grimberg
@ 2026-05-11 8:07 ` Christoph Hellwig
2026-05-11 11:29 ` Maurizio Lombardi
2026-05-11 9:46 ` Hannes Reinecke
3 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:07 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:28PM +0200, Maurizio Lombardi wrote:
> Currently, there is no way to adjust the timeout value of nvme admin
> queues on a per-controller basis.
> Add an admin_timeout attribute to nvme so that nvme controllers with
> different timeout requirements can have custom admin timeouts set.
Please use up all 73 characters for commit messages.
>
> +static ssize_t nvme_admin_timeout_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
The indent for prototype continuations should use two, not three tabs.
> + * Wait until the controller reaches the LIVE state
> + * to be sure that admin_q and fabrics_q are
> + * properly initialized.
Please use up all 80 characters for comments.
> + */
> + if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
> + return -EBUSY;
> +
> + err = kstrtou32(buf, 10, &timeout);
> + if (err || !timeout)
> + return -EINVAL;
> +
> + ctrl->admin_timeout = msecs_to_jiffies(timeout);
> +
> + blk_queue_rq_timeout(ctrl->admin_q, ctrl->admin_timeout);
> + if (ctrl->fabrics_q)
> + blk_queue_rq_timeout(ctrl->fabrics_q, ctrl->admin_timeout);
Do we really want to apply the admin timeout to the fabrics queue?
If so, can you document here why we do that?
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-11 8:07 ` Christoph Hellwig
@ 2026-05-11 11:29 ` Maurizio Lombardi
2026-05-11 12:31 ` Christoph Hellwig
0 siblings, 1 reply; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-11 11:29 UTC (permalink / raw)
To: Christoph Hellwig, Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare
On Mon May 11, 2026 at 10:07 AM CEST, Christoph Hellwig wrote:
> On Fri, May 08, 2026 at 03:33:28PM +0200, Maurizio Lombardi wrote:
>> Currently, there is no way to adjust the timeout value of nvme admin
>> queues on a per-controller basis.
>> Add an admin_timeout attribute to nvme so that nvme controllers with
>> different timeout requirements can have custom admin timeouts set.
>
> Please use up all 73 characters for commit messages.
>
>>
>> +static ssize_t nvme_admin_timeout_show(struct device *dev,
>> + struct device_attribute *attr, char *buf)
>
> The indent for prototype continuations should use two, not three tabs.
>
>> + * Wait until the controller reaches the LIVE state
>> + * to be sure that admin_q and fabrics_q are
>> + * properly initialized.
>
> Please use up all 80 characters for comments.
Ok.
>
>> + */
>> + if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
>> + return -EBUSY;
>> +
>> + err = kstrtou32(buf, 10, &timeout);
>> + if (err || !timeout)
>> + return -EINVAL;
>> +
>> + ctrl->admin_timeout = msecs_to_jiffies(timeout);
>> +
>> + blk_queue_rq_timeout(ctrl->admin_q, ctrl->admin_timeout);
>> + if (ctrl->fabrics_q)
>> + blk_queue_rq_timeout(ctrl->fabrics_q, ctrl->admin_timeout);
>
> Do we really want to apply the admin timeout to the fabrics queue?
> If so, can you document here why we do that?
Both admin_q and fabrics_q are initialized to share the same
NVME_ADMIN_TIMEOUT value, so keeping them in sync maintains
consistency.
If we didn't apply the timeout to fabrics_q, it would end up operating
under a different timeout than the standard admin_q; is there any
reason why we would want that?
Maurizio
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-11 11:29 ` Maurizio Lombardi
@ 2026-05-11 12:31 ` Christoph Hellwig
0 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 12:31 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: Christoph Hellwig, Maurizio Lombardi, kbusch, mheyne, emilne,
jmeneghi, linux-nvme, dwagner, mkhalfella, chaitanyak, hare
On Mon, May 11, 2026 at 01:29:10PM +0200, Maurizio Lombardi wrote:
> Both admin_q and fabrics_q are initialized to share the same
> NVME_ADMIN_TIMEOUT value, therefore keeping them in sync maintains the
> consistency.
>
> If we didn't apply the timeout to fabrics_q, it would end up
> operating under a different timeout than the standard admin_q, is there
> any reason why we would want that?
Not sure. I just want to make sure everything is properly documented.
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
` (2 preceding siblings ...)
2026-05-11 8:07 ` Christoph Hellwig
@ 2026-05-11 9:46 ` Hannes Reinecke
2026-05-11 10:05 ` Maurizio Lombardi
3 siblings, 1 reply; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:46 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> Currently, there is no way to adjust the timeout value of nvme admin
> queues on a per-controller basis.
> Add an admin_timeout attribute to nvme so that nvme controllers with
> different timeout requirements can have custom admin timeouts set.
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/host/core.c | 1 +
> drivers/nvme/host/nvme.h | 1 +
> drivers/nvme/host/sysfs.c | 42 +++++++++++++++++++++++++++++++++++++++
> 3 files changed, 44 insertions(+)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 89948d0acf18..b1bfcd0a0e5b 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -5140,6 +5140,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
> memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
> ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
> ctrl->ka_last_check_time = jiffies;
> + ctrl->admin_timeout = NVME_ADMIN_TIMEOUT;
Why do you remove the default timeout?
Shouldn't admin requests run with the admin timeout by default?
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller
2026-05-11 9:46 ` Hannes Reinecke
@ 2026-05-11 10:05 ` Maurizio Lombardi
0 siblings, 0 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-11 10:05 UTC (permalink / raw)
To: Hannes Reinecke, Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On Mon May 11, 2026 at 11:46 AM CEST, Hannes Reinecke wrote:
> On 5/8/26 15:33, Maurizio Lombardi wrote:
>> Currently, there is no way to adjust the timeout value of nvme admin
>> queues on a per-controller basis.
>> Add an admin_timeout attribute to nvme so that nvme controllers with
>> different timeout requirements can have custom admin timeouts set.
>>
>> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
>> ---
>> drivers/nvme/host/core.c | 1 +
>> drivers/nvme/host/nvme.h | 1 +
>> drivers/nvme/host/sysfs.c | 42 +++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 44 insertions(+)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 89948d0acf18..b1bfcd0a0e5b 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -5140,6 +5140,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
>> memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
>> ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
>> ctrl->ka_last_check_time = jiffies;
>> + ctrl->admin_timeout = NVME_ADMIN_TIMEOUT;
>
> Why do you remove the default timeout?
nvme_init_ctrl() initializes a new controller, so it must inherit the
default global NVME_ADMIN_TIMEOUT setting.
> Shouldn't admin requests run with the admin timeout by default?
They do; the default admin timeout for new controllers remains
NVME_ADMIN_TIMEOUT, and it is applied to both the admin_q and fabrics_q
queues when they are initialized.
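For context, both queues inherit the value through the shared tag set;
roughly (paraphrased from nvme_alloc_admin_tag_set() and the block
layer, not the exact code):

    /* nvme_alloc_admin_tag_set(), paraphrased */
    set->timeout = NVME_ADMIN_TIMEOUT;

    /* blk_mq_init_allocated_queue(), paraphrased: every queue created
     * from the tag set starts with the set-wide timeout */
    blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);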
Maurizio
* [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 1/9] nvme: Let the blocklayer set timeouts for requests Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 2/9] nvme: add sysfs attribute to change admin timeout per nvme controller Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-08 16:57 ` Daniel Wagner
` (3 more replies)
2026-05-08 13:33 ` [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT Maurizio Lombardi
` (5 subsequent siblings)
8 siblings, 4 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
When a controller connects, nvme_start_ctrl() emits the
"NVME_EVENT=connected" uevent and sets the NVME_CTRL_STARTED_ONCE flag.
Currently, the uevent is emitted before the flag is set.
This creates a race condition for userspace tools (like udev rules)
that might rely on the "connected" event to configure sysfs attributes.
Specifically, if a udev rule attempts to set the newly introduced
`admin_timeout` attribute immediately after receiving the uevent,
the sysfs store function might evaluate the NVME_CTRL_STARTED_ONCE
bit before it is actually set, resulting in a spurious -EBUSY error.
Swap the order of operations in nvme_start_ctrl() so that the
NVME_CTRL_STARTED_ONCE flag is set before the uevent is sent.
This guarantees that the admin_timeout can already be changed
when userspace is notified.
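Spelled out, with a hypothetical udev rule as the consumer:

    /*
     * Before this patch:
     *
     *   kernel                              userspace (udev)
     *   nvme_change_uevent("connected")
     *                                       rule fires, writes
     *                                       admin_timeout
     *                                       -> STARTED_ONCE not yet set
     *                                       -> store() returns -EBUSY
     *   set_bit(NVME_CTRL_STARTED_ONCE)
     */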
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index b1bfcd0a0e5b..22be8cf5e982 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5039,8 +5039,8 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
nvme_mpath_update(ctrl);
}
- nvme_change_uevent(ctrl, "NVME_EVENT=connected");
set_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags);
+ nvme_change_uevent(ctrl, "NVME_EVENT=connected");
}
EXPORT_SYMBOL_GPL(nvme_start_ctrl);
--
2.54.0
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
@ 2026-05-08 16:57 ` Daniel Wagner
2026-05-10 22:10 ` Sagi Grimberg
` (2 subsequent siblings)
3 siblings, 0 replies; 46+ messages in thread
From: Daniel Wagner @ 2026-05-08 16:57 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:29PM +0200, Maurizio Lombardi wrote:
> When a controller connects, nvme_start_ctrl() emits the
> "NVME_EVENT=connected" uevent and sets the NVME_CTRL_STARTED_ONCE flag.
> Currently, the uevent is emitted before the flag is set.
>
> This creates a race condition for userspace tools (like udev rules)
> that might rely on the "connected" event to configure sysfs attributes.
> Specifically, if a udev rule attempts to set the newly introduced
> `admin_timeout` attribute immediately after receiving the uevent,
> the sysfs store function might evaluate the NVME_CTRL_STARTED_ONCE
> bit before it is actually set, resulting in a spurious -EBUSY error.
>
> Swap the order of operations in nvme_start_ctrl() so that the
> NVME_CTRL_STARTED_ONCE flag is set before the uevent is sent.
> This guarantees that the admin_timeout can already be changed
> when userspace is notified.
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
2026-05-08 16:57 ` Daniel Wagner
@ 2026-05-10 22:10 ` Sagi Grimberg
2026-05-11 8:07 ` Christoph Hellwig
2026-05-11 8:08 ` Christoph Hellwig
2026-05-11 9:47 ` Hannes Reinecke
3 siblings, 1 reply; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:10 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
This should even go as a separate patch:
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-10 22:10 ` Sagi Grimberg
@ 2026-05-11 8:07 ` Christoph Hellwig
2026-05-11 12:54 ` Sagi Grimberg
0 siblings, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:07 UTC (permalink / raw)
To: Sagi Grimberg
Cc: Maurizio Lombardi, kbusch, mheyne, emilne, jmeneghi, linux-nvme,
dwagner, mlombard, mkhalfella, chaitanyak, hare, hch
On Mon, May 11, 2026 at 01:10:34AM +0300, Sagi Grimberg wrote:
> This should even go as a separate patch:
You mean a standalone fix? Because it already is a separate patch.
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-11 8:07 ` Christoph Hellwig
@ 2026-05-11 12:54 ` Sagi Grimberg
2026-05-11 15:09 ` Keith Busch
0 siblings, 1 reply; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-11 12:54 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Maurizio Lombardi, kbusch, mheyne, emilne, jmeneghi, linux-nvme,
dwagner, mlombard, mkhalfella, chaitanyak, hare
> On Mon, May 11, 2026 at 01:10:34AM +0300, Sagi Grimberg wrote:
>> This should even go as a separate patch:
> You mean a standalone fix? Because it already is a separate patch.
Yes, this is a race condition fix that can go separately.
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-11 12:54 ` Sagi Grimberg
@ 2026-05-11 15:09 ` Keith Busch
2026-05-11 15:45 ` Maurizio Lombardi
0 siblings, 1 reply; 46+ messages in thread
From: Keith Busch @ 2026-05-11 15:09 UTC (permalink / raw)
To: Sagi Grimberg
Cc: Christoph Hellwig, Maurizio Lombardi, mheyne, emilne, jmeneghi,
linux-nvme, dwagner, mlombard, mkhalfella, chaitanyak, hare
On Mon, May 11, 2026 at 03:54:04PM +0300, Sagi Grimberg wrote:
>
> > On Mon, May 11, 2026 at 01:10:34AM +0300, Sagi Grimberg wrote:
> > > This should even go as a separate patch:
> > You mean a standalone fix? Because it already is a separate patch.
>
> Yes, this is a race condition fix that can go separately.
Sure thing, I've applied patch 3/9 to nvme-7.1 as a separate bug fix.
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-11 15:09 ` Keith Busch
@ 2026-05-11 15:45 ` Maurizio Lombardi
2026-05-11 17:10 ` Keith Busch
0 siblings, 1 reply; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-11 15:45 UTC (permalink / raw)
To: Keith Busch, Sagi Grimberg
Cc: Christoph Hellwig, Maurizio Lombardi, mheyne, emilne, jmeneghi,
linux-nvme, dwagner, mlombard, mkhalfella, chaitanyak, hare
On Mon May 11, 2026 at 5:09 PM CEST, Keith Busch wrote:
> On Mon, May 11, 2026 at 03:54:04PM +0300, Sagi Grimberg wrote:
>>
>> > On Mon, May 11, 2026 at 01:10:34AM +0300, Sagi Grimberg wrote:
>> > > This should even go as a separate patch:
>> > You mean a standalone fix? Because it already is a separate patch.
>>
>> Yes, this is a race condition fix that can go separately.
>
> Sure thing, I've applied patch 3/9 to nvme-7.1 as a separate bug fix.
Oh, that was way faster than I expected.
The issue is that, in explaining the rationale behind PATCH 3, I
mentioned in its commit message the admin_timeout sysfs attribute
introduced by this patchset.
You may want to remove that reference if you plan to merge patch 3
independently of the rest of this patchset.
In the vanilla kernel, the only sysfs attribute that could be affected by
this race condition is nvme_sysfs_delete().
Maurizio
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-11 15:45 ` Maurizio Lombardi
@ 2026-05-11 17:10 ` Keith Busch
0 siblings, 0 replies; 46+ messages in thread
From: Keith Busch @ 2026-05-11 17:10 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: Sagi Grimberg, Christoph Hellwig, Maurizio Lombardi, mheyne,
emilne, jmeneghi, linux-nvme, dwagner, mkhalfella, chaitanyak,
hare
On Mon, May 11, 2026 at 05:45:36PM +0200, Maurizio Lombardi wrote:
> On Mon May 11, 2026 at 5:09 PM CEST, Keith Busch wrote:
> > On Mon, May 11, 2026 at 03:54:04PM +0300, Sagi Grimberg wrote:
> >>
> >> > On Mon, May 11, 2026 at 01:10:34AM +0300, Sagi Grimberg wrote:
> >> > > This should even go as a separate patch:
> >> > You mean a standalone fix? Because it already is a separate patch.
> >>
> >> Yes, this is a race condition fix that can go separately.
> >
> > Sure thing, I've applied patch 3/9 to nvme-7.1 as a separate bug fix.
>
> Oh, that was way faster than I expected.
>
> The issue is that, in explaining the rationale behind PATCH 3, I
> mentioned in its commit message the admin_timeout sysfs attribute
> introduced by this patchset.
>
> You may want to remove that reference if you plan to merge patch 3
> independently of the rest of this patchset.
>
> In the vanilla kernel, the only sysfs attribute that could be affected by
> this race condition is nvme_sysfs_delete().
Ah, thanks. I got to this one quicker because I had some free time for
patch maintenance. I'll fix up the message.
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
2026-05-08 16:57 ` Daniel Wagner
2026-05-10 22:10 ` Sagi Grimberg
@ 2026-05-11 8:08 ` Christoph Hellwig
2026-05-11 9:47 ` Hannes Reinecke
3 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:08 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
` (2 preceding siblings ...)
2026-05-11 8:08 ` Christoph Hellwig
@ 2026-05-11 9:47 ` Hannes Reinecke
3 siblings, 0 replies; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:47 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> When a controller connects, nvme_start_ctrl() emits the
> "NVME_EVENT=connected" uevent and sets the NVME_CTRL_STARTED_ONCE flag.
> Currently, the uevent is emitted before the flag is set.
>
> This creates a race condition for userspace tools (like udev rules)
> that might rely on the "connected" event to configure sysfs attributes.
> Specifically, if a udev rule attempts to set the newly introduced
> `admin_timeout` attribute immediately after receiving the uevent,
> the sysfs store function might evaluate the NVME_CTRL_STARTED_ONCE
> bit before it is actually set, resulting in a spurious -EBUSY error.
>
> Swap the order of operations in nvme_start_ctrl() so that the
> NVME_CTRL_STARTED_ONCE flag is set before the uevent is sent.
> This guarantees that the admin_timeout can already be changed
> when userspace is notified.
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/host/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index b1bfcd0a0e5b..22be8cf5e982 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -5039,8 +5039,8 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
> nvme_mpath_update(ctrl);
> }
>
> - nvme_change_uevent(ctrl, "NVME_EVENT=connected");
> set_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags);
> + nvme_change_uevent(ctrl, "NVME_EVENT=connected");
> }
> EXPORT_SYMBOL_GPL(nvme_start_ctrl);
>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
` (2 preceding siblings ...)
2026-05-08 13:33 ` [PATCH V4 3/9] nvme: fix race condition between connected uevent and STARTED_ONCE flag Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-10 22:10 ` Sagi Grimberg
` (2 more replies)
2026-05-08 13:33 ` [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller Maurizio Lombardi
` (4 subsequent siblings)
8 siblings, 3 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
While tearing down its queues, nvme-pci uses NVME_ADMIN_TIMEOUT as its
timeout target. Instead, use the configured admin queue's timeout value
to match the device's existing timeout setting.
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 9fd04cd7c5cb..dd1bc3807a2d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -3094,7 +3094,7 @@ static bool __nvme_delete_io_queues(struct nvme_dev *dev, u8 opcode)
unsigned long timeout;
retry:
- timeout = NVME_ADMIN_TIMEOUT;
+ timeout = dev->ctrl.admin_timeout;
while (nr_queues > 0) {
if (nvme_delete_queue(&dev->queues[nr_queues], opcode))
break;
--
2.54.0
* Re: [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT
2026-05-08 13:33 ` [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT Maurizio Lombardi
@ 2026-05-10 22:10 ` Sagi Grimberg
2026-05-11 8:08 ` Christoph Hellwig
2026-05-11 9:48 ` Hannes Reinecke
2 siblings, 0 replies; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:10 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
* Re: [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT
2026-05-08 13:33 ` [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT Maurizio Lombardi
2026-05-10 22:10 ` Sagi Grimberg
@ 2026-05-11 8:08 ` Christoph Hellwig
2026-05-11 9:48 ` Hannes Reinecke
2 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:08 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:30PM +0200, Maurizio Lombardi wrote:
> While tearing down its queues, nvme-pci uses NVME_ADMIN_TIMEOUT as its
> timeout target. Instead, use the configured admin queue's timeout value
> to match the device's existing timeout setting.
Shouldn't this go into the patch introducing the configurable timeout?
Otherwise looks good.
* Re: [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT
2026-05-08 13:33 ` [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT Maurizio Lombardi
2026-05-10 22:10 ` Sagi Grimberg
2026-05-11 8:08 ` Christoph Hellwig
@ 2026-05-11 9:48 ` Hannes Reinecke
2 siblings, 0 replies; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:48 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> While tearing down its queues, nvme-pci uses NVME_ADMIN_TIMEOUT as its
> timeout target. Instead, use the configured admin queue's timeout value
> to match the device's existing timeout setting.
>
> Reviewed-by: Daniel Wagner <dwagner@suse.de>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/host/pci.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 9fd04cd7c5cb..dd1bc3807a2d 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -3094,7 +3094,7 @@ static bool __nvme_delete_io_queues(struct nvme_dev *dev, u8 opcode)
> unsigned long timeout;
>
> retry:
> - timeout = NVME_ADMIN_TIMEOUT;
> + timeout = dev->ctrl.admin_timeout;
> while (nr_queues > 0) {
> if (nvme_delete_queue(&dev->queues[nr_queues], opcode))
> break;
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
` (3 preceding siblings ...)
2026-05-08 13:33 ` [PATCH V4 4/9] nvme: pci: use admin queue timeout over NVME_ADMIN_TIMEOUT Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-08 17:08 ` Daniel Wagner
2026-05-10 22:12 ` Sagi Grimberg
2026-05-08 13:33 ` [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default Maurizio Lombardi
` (3 subsequent siblings)
8 siblings, 2 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Currently, there is no way to adjust the timeout value of nvme I/O
queues on a per-controller basis.
Add an io_timeout attribute to nvme so that nvme controllers with
different timeout requirements can have custom I/O timeouts set.
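For example, assuming the controller is nvme0, the following would
apply a 90 second I/O timeout to all of its namespaces at once (the
value is in milliseconds):

    echo 90000 > /sys/class/nvme/nvme0/io_timeout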
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 2 ++
drivers/nvme/host/nvme.h | 1 +
drivers/nvme/host/sysfs.c | 47 +++++++++++++++++++++++++++++++++++++++
3 files changed, 50 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 22be8cf5e982..fa60e10e05d5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4203,6 +4203,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
mutex_unlock(&ctrl->namespaces_lock);
goto out_unlink_ns;
}
+ blk_queue_rq_timeout(ns->queue, ctrl->io_timeout);
nvme_ns_add_to_ctrl_list(ns);
mutex_unlock(&ctrl->namespaces_lock);
synchronize_srcu(&ctrl->srcu);
@@ -5141,6 +5142,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
ctrl->ka_last_check_time = jiffies;
ctrl->admin_timeout = NVME_ADMIN_TIMEOUT;
+ ctrl->io_timeout = NVME_IO_TIMEOUT;
BUILD_BUG_ON(NVME_DSM_MAX_RANGES * sizeof(struct nvme_dsm_range) >
PAGE_SIZE);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9da3ebebe9c8..a6d998c2e0e5 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -371,6 +371,7 @@ struct nvme_ctrl {
u32 ctrl_config;
u32 queue_count;
u32 admin_timeout;
+ u32 io_timeout;
u64 cap;
u32 max_hw_sectors;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 9456af955aff..7e6810423735 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -664,6 +664,52 @@ static ssize_t nvme_admin_timeout_store(struct device *dev,
static DEVICE_ATTR(admin_timeout, S_IRUGO | S_IWUSR,
nvme_admin_timeout_show, nvme_admin_timeout_store);
+static ssize_t nvme_io_timeout_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", jiffies_to_msecs(ctrl->io_timeout));
+}
+
+static ssize_t nvme_io_timeout_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+ struct nvme_ns *ns;
+ u32 timeout;
+ int err;
+
+ /*
+ * Wait until the controller reaches the LIVE state
+ * to be sure that connect_q is properly initialized.
+ */
+ if (!test_bit(NVME_CTRL_STARTED_ONCE, &ctrl->flags))
+ return -EBUSY;
+
+ err = kstrtou32(buf, 10, &timeout);
+ if (err || !timeout)
+ return -EINVAL;
+
+ /* Take the namespaces_lock to avoid racing against nvme_alloc_ns() */
+ mutex_lock(&ctrl->namespaces_lock);
+
+ ctrl->io_timeout = msecs_to_jiffies(timeout);
+ list_for_each_entry(ns, &ctrl->namespaces, list)
+ blk_queue_rq_timeout(ns->queue, ctrl->io_timeout);
+
+ mutex_unlock(&ctrl->namespaces_lock);
+
+ if (ctrl->connect_q)
+ blk_queue_rq_timeout(ctrl->connect_q, ctrl->io_timeout);
+
+ return count;
+}
+
+static DEVICE_ATTR(io_timeout, S_IRUGO | S_IWUSR,
+ nvme_io_timeout_show, nvme_io_timeout_store);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -807,6 +853,7 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_dctype.attr,
&dev_attr_quirks.attr,
&dev_attr_admin_timeout.attr,
+ &dev_attr_io_timeout.attr,
#ifdef CONFIG_NVME_HOST_AUTH
&dev_attr_dhchap_secret.attr,
&dev_attr_dhchap_ctrl_secret.attr,
--
2.54.0
* Re: [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller
2026-05-08 13:33 ` [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller Maurizio Lombardi
@ 2026-05-08 17:08 ` Daniel Wagner
2026-05-10 22:12 ` Sagi Grimberg
1 sibling, 0 replies; 46+ messages in thread
From: Daniel Wagner @ 2026-05-08 17:08 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:31PM +0200, Maurizio Lombardi wrote:
> Currently, there is no way to adjust the timeout value of nvme I/O
> queues on a per-controller basis.
> Add an io_timeout attribute to nvme so that nvme controllers with
> different timeout requirements can have custom I/O timeouts set.
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
I was wondering if namespaces would ever want different timeouts, but
given that up to this point it was always the default one, scoping this
change to controllers is probably enough.
Reviewed-by: Daniel Wagner <dwagner@suse.de>
* Re: [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller
2026-05-08 13:33 ` [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller Maurizio Lombardi
2026-05-08 17:08 ` Daniel Wagner
@ 2026-05-10 22:12 ` Sagi Grimberg
2026-05-11 8:52 ` Maurizio Lombardi
1 sibling, 1 reply; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:12 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 08/05/2026 16:33, Maurizio Lombardi wrote:
> Currently, there is no way to adjust the timeout value of nvme I/O
> queues on a per-controller basis.
> Add an io_timeout attribute to nvme so that nvme controllers with
> different timeout requirements can have custom I/O timeouts set.
Why is this needed? Why not simply change the timeout on
the namespaces themselves?
* Re: [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller
2026-05-10 22:12 ` Sagi Grimberg
@ 2026-05-11 8:52 ` Maurizio Lombardi
0 siblings, 0 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-11 8:52 UTC (permalink / raw)
To: Sagi Grimberg, Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On Mon May 11, 2026 at 12:12 AM CEST, Sagi Grimberg wrote:
>
>
> On 08/05/2026 16:33, Maurizio Lombardi wrote:
>> Currently, there is no way to adjust the timeout value of nvme I/O
>> queues on a per-controller basis.
>> Add an io_timeout attribute to nvme so that nvme controllers with
>> different timeout requirements can have custom I/O timeouts set.
>
> Why is this needed? Why not simply change the timeout on
> the namespaces themselves?
One reason is convenience: having an io_timeout default value at the
controller level means that a newly discovered or hot-plugged namespace
will automatically inherit the controller-specific timeout.
Additionally, for nvmeof, the controller allocates a dedicated I/O queue
(ctrl->connect_q) that doesn't have a block device representation.
The io_timeout attribute will allow userspace to change the timeout
for this internal queue.
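For context (paraphrased from nvme_alloc_io_tag_set(), not the exact
code), connect_q is created from the same tag set as the namespace
queues and therefore starts with the set-wide timeout:

    set->timeout = NVME_IO_TIMEOUT;
    /* ... */
    if (ctrl->ops->flags & NVME_F_FABRICS)
            ctrl->connect_q = blk_mq_alloc_queue(set, NULL, NULL);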
Maurizio
* [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
` (4 preceding siblings ...)
2026-05-08 13:33 ` [PATCH V4 5/9] nvme: add sysfs attribute to change IO timeout per controller Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-11 8:10 ` Christoph Hellwig
2026-05-11 9:50 ` Hannes Reinecke
2026-05-08 13:33 ` [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl Maurizio Lombardi
` (2 subsequent siblings)
8 siblings, 2 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Instead of passing NVME_IO_TIMEOUT as a parameter with every call to
nvme_wait_freeze_timeout, use the controller's preferred timeout.
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Reviewed-by: Mohamed Khalfella <mkhalfella@purestorage.com>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/apple.c | 2 +-
drivers/nvme/host/core.c | 3 ++-
drivers/nvme/host/nvme.h | 2 +-
drivers/nvme/host/pci.c | 2 +-
drivers/nvme/host/rdma.c | 2 +-
drivers/nvme/host/tcp.c | 2 +-
6 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
index 423c9c628e7b..e77c47408102 100644
--- a/drivers/nvme/host/apple.c
+++ b/drivers/nvme/host/apple.c
@@ -858,7 +858,7 @@ static void apple_nvme_disable(struct apple_nvme *anv, bool shutdown)
* doing a safe shutdown.
*/
if (!dead && shutdown && freeze)
- nvme_wait_freeze_timeout(&anv->ctrl, NVME_IO_TIMEOUT);
+ nvme_wait_freeze_timeout(&anv->ctrl);
nvme_quiesce_io_queues(&anv->ctrl);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fa60e10e05d5..5d3200a66f8e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5249,8 +5249,9 @@ void nvme_unfreeze(struct nvme_ctrl *ctrl)
}
EXPORT_SYMBOL_GPL(nvme_unfreeze);
-int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout)
+int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl)
{
+ unsigned long timeout = ctrl->io_timeout;
struct nvme_ns *ns;
int srcu_idx;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index a6d998c2e0e5..9ccaed0b9dbf 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -902,7 +902,7 @@ void nvme_sync_queues(struct nvme_ctrl *ctrl);
void nvme_sync_io_queues(struct nvme_ctrl *ctrl);
void nvme_unfreeze(struct nvme_ctrl *ctrl);
void nvme_wait_freeze(struct nvme_ctrl *ctrl);
-int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
+int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl);
void nvme_start_freeze(struct nvme_ctrl *ctrl);
static inline enum req_op nvme_req_op(struct nvme_command *cmd)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index dd1bc3807a2d..35affda088f4 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -3276,7 +3276,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
* if doing a safe shutdown.
*/
if (!dead && shutdown)
- nvme_wait_freeze_timeout(&dev->ctrl, NVME_IO_TIMEOUT);
+ nvme_wait_freeze_timeout(&dev->ctrl);
}
nvme_quiesce_io_queues(&dev->ctrl);
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index f77c960f7632..bf73135c1439 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -888,7 +888,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
if (!new) {
nvme_start_freeze(&ctrl->ctrl);
nvme_unquiesce_io_queues(&ctrl->ctrl);
- if (!nvme_wait_freeze_timeout(&ctrl->ctrl, NVME_IO_TIMEOUT)) {
+ if (!nvme_wait_freeze_timeout(&ctrl->ctrl)) {
/*
* If we timed out waiting for freeze we are likely to
* be stuck. Fail the controller initialization just
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 15d36d6a728e..0552aa8a1150 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -2208,7 +2208,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
if (!new) {
nvme_start_freeze(ctrl);
nvme_unquiesce_io_queues(ctrl);
- if (!nvme_wait_freeze_timeout(ctrl, NVME_IO_TIMEOUT)) {
+ if (!nvme_wait_freeze_timeout(ctrl)) {
/*
* If we timed out waiting for freeze we are likely to
* be stuck. Fail the controller initialization just
--
2.54.0
* Re: [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default
2026-05-08 13:33 ` [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default Maurizio Lombardi
@ 2026-05-11 8:10 ` Christoph Hellwig
2026-05-11 11:42 ` Maurizio Lombardi
2026-05-11 9:50 ` Hannes Reinecke
1 sibling, 1 reply; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:10 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:32PM +0200, Maurizio Lombardi wrote:
> Instead of passing NVME_IO_TIMEOUT as a parameter with every call to
> nvme_wait_freeze_timeout, use the controller's preferred timeout.
This should probably go very early in the series before even adding
configurable timeouts, as it just drops a pointless parameter.
The patch itself looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default
2026-05-11 8:10 ` Christoph Hellwig
@ 2026-05-11 11:42 ` Maurizio Lombardi
2026-05-11 12:32 ` Christoph Hellwig
0 siblings, 1 reply; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-11 11:42 UTC (permalink / raw)
To: Christoph Hellwig, Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare
On Mon May 11, 2026 at 10:10 AM CEST, Christoph Hellwig wrote:
> On Fri, May 08, 2026 at 03:33:32PM +0200, Maurizio Lombardi wrote:
>> Instead of passing NVME_IO_TIMEOUT as a parameter with every call to
>> nvme_wait_freeze_timeout, use the controller's preferred timeout.
>
> This should probably go very early in the series before even adding
> configurable timeouts, as it just drops a pointless parameter.
It drops the parameter but uses the timeout value stored in
ctrl->io_timeout, so it actually depends on the patch introducing
the per-controller io_timeout.
Maurizio
* Re: [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default
2026-05-11 11:42 ` Maurizio Lombardi
@ 2026-05-11 12:32 ` Christoph Hellwig
0 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 12:32 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: Christoph Hellwig, Maurizio Lombardi, kbusch, mheyne, emilne,
jmeneghi, linux-nvme, dwagner, mkhalfella, chaitanyak, hare
On Mon, May 11, 2026 at 01:42:47PM +0200, Maurizio Lombardi wrote:
> On Mon May 11, 2026 at 10:10 AM CEST, Christoph Hellwig wrote:
> > On Fri, May 08, 2026 at 03:33:32PM +0200, Maurizio Lombardi wrote:
> >> Instead of passing NVME_IO_TIMEOUT as a parameter with every call to
> >> nvme_wait_freeze_timeout, use the controller's preferred timeout.
> >
> > This should probably go very early in the series before even adding
> > configurable timeouts, as it just drops a pointless parameter.
>
> It drops the parameter but uses the timeout value stored in
> ctrl->io_timeout, so it actually depends on the patch introducing
> the per-controller io_timeout.
Well, switch to hardcoded NVME_IO_TIMEOUT early, and then replace it
with the variable timeout like the other instances of NVME_IO_TIMEOUT.
>
> Maurizio
---end quoted text---
* Re: [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default
2026-05-08 13:33 ` [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default Maurizio Lombardi
2026-05-11 8:10 ` Christoph Hellwig
@ 2026-05-11 9:50 ` Hannes Reinecke
1 sibling, 0 replies; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:50 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> Instead of passing NVME_IO_TIMEOUT as a parameter with every call to
> nvme_wait_freeze_timeout, use the controller's preferred timeout.
>
> Reviewed-by: Daniel Wagner <dwagner@suse.de>
> Reviewed-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/host/apple.c | 2 +-
> drivers/nvme/host/core.c | 3 ++-
> drivers/nvme/host/nvme.h | 2 +-
> drivers/nvme/host/pci.c | 2 +-
> drivers/nvme/host/rdma.c | 2 +-
> drivers/nvme/host/tcp.c | 2 +-
> 6 files changed, 7 insertions(+), 6 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
` (5 preceding siblings ...)
2026-05-08 13:33 ` [PATCH V4 6/9] nvme: use per controller timeout waits over depending on global default Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-10 22:15 ` Sagi Grimberg
` (2 more replies)
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
2026-05-08 13:33 ` [PATCH V4 9/9] nvme-core: warn on allocating admin tag set with existing queue Maurizio Lombardi
8 siblings, 3 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Currently, the final reference for the fabrics admin queue (fabrics_q)
is dropped inside nvme_remove_admin_tag_set(). However, the primary
admin queue (admin_q) defers dropping its final reference until
nvme_free_ctrl().
Move the blk_put_queue() call for fabrics_q from nvme_remove_admin_tag_set()
to nvme_free_ctrl(). This aligns the lifecycle management of both admin
queues, ensuring they are freed symmetrically when the controller is finally
torn down.
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5d3200a66f8e..73575d087a07 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4932,10 +4932,8 @@ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
*/
nvme_stop_keep_alive(ctrl);
blk_mq_destroy_queue(ctrl->admin_q);
- if (ctrl->ops->flags & NVME_F_FABRICS) {
+ if (ctrl->ops->flags & NVME_F_FABRICS)
blk_mq_destroy_queue(ctrl->fabrics_q);
- blk_put_queue(ctrl->fabrics_q);
- }
blk_mq_free_tag_set(ctrl->admin_tagset);
}
EXPORT_SYMBOL_GPL(nvme_remove_admin_tag_set);
@@ -5077,6 +5075,8 @@ static void nvme_free_ctrl(struct device *dev)
if (ctrl->admin_q)
blk_put_queue(ctrl->admin_q);
+ if (ctrl->fabrics_q)
+ blk_put_queue(ctrl->fabrics_q);
if (!subsys || ctrl->instance != subsys->instance)
ida_free(&nvme_instance_ida, ctrl->instance);
nvme_free_cels(ctrl);
--
2.54.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl
2026-05-08 13:33 ` [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl Maurizio Lombardi
@ 2026-05-10 22:15 ` Sagi Grimberg
2026-05-11 8:11 ` Christoph Hellwig
2026-05-11 9:53 ` Hannes Reinecke
2 siblings, 0 replies; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:15 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl
2026-05-08 13:33 ` [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl Maurizio Lombardi
2026-05-10 22:15 ` Sagi Grimberg
@ 2026-05-11 8:11 ` Christoph Hellwig
2026-05-11 9:53 ` Hannes Reinecke
2 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:11 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl
2026-05-08 13:33 ` [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl Maurizio Lombardi
2026-05-10 22:15 ` Sagi Grimberg
2026-05-11 8:11 ` Christoph Hellwig
@ 2026-05-11 9:53 ` Hannes Reinecke
2026-05-11 9:57 ` Maurizio Lombardi
2 siblings, 1 reply; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:53 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> Currently, the final reference for the fabrics admin queue (fabrics_q)
> is dropped inside nvme_remove_admin_tag_set(). However, the primary
> admin queue (admin_q) defers dropping its final reference until
> nvme_free_ctrl().
>
> Move the blk_put_queue() call for fabrics_q from nvme_remove_admin_tag_set()
> to nvme_free_ctrl(). This aligns the lifecycle management of both admin
> queues, ensuring they are freed symmetrically when the controller is finally
> torn down.
>
> Reviewed-by: Daniel Wagner <dwagner@suse.de>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/host/core.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5d3200a66f8e..73575d087a07 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -4932,10 +4932,8 @@ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
> */
> nvme_stop_keep_alive(ctrl);
> blk_mq_destroy_queue(ctrl->admin_q);
> - if (ctrl->ops->flags & NVME_F_FABRICS) {
> + if (ctrl->ops->flags & NVME_F_FABRICS)
> blk_mq_destroy_queue(ctrl->fabrics_q);
> - blk_put_queue(ctrl->fabrics_q);
> - }
> blk_mq_free_tag_set(ctrl->admin_tagset);
> }
> EXPORT_SYMBOL_GPL(nvme_remove_admin_tag_set);
> @@ -5077,6 +5075,8 @@ static void nvme_free_ctrl(struct device *dev)
>
> if (ctrl->admin_q)
> blk_put_queue(ctrl->admin_q);
> + if (ctrl->fabrics_q)
> + blk_put_queue(ctrl->fabrics_q);
> if (!subsys || ctrl->instance != subsys->instance)
> ida_free(&nvme_instance_ida, ctrl->instance);
> nvme_free_cels(ctrl);
One wonders why we check for 'flags' in the first hunk, but for the
existence of 'fabrics_q' in the second hunk.
But anyway.
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl
2026-05-11 9:53 ` Hannes Reinecke
@ 2026-05-11 9:57 ` Maurizio Lombardi
0 siblings, 0 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-11 9:57 UTC (permalink / raw)
To: Hannes Reinecke, Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On Mon May 11, 2026 at 11:53 AM CEST, Hannes Reinecke wrote:
> On 5/8/26 15:33, Maurizio Lombardi wrote:
>> Currently, the final reference for the fabrics admin queue (fabrics_q)
>> is dropped inside nvme_remove_admin_tag_set(). However, the primary
>> admin queue (admin_q) defers dropping its final reference until
>> nvme_free_ctrl().
>>
>> Move the blk_put_queue() call for fabrics_q from nvme_remove_admin_tag_set()
>> to nvme_free_ctrl(). This aligns the lifecycle management of both admin
>> queues, ensuring they are freed symmetrically when the controller is finally
>> torn down.
>>
>> Reviewed-by: Daniel Wagner <dwagner@suse.de>
>> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
>> ---
>> drivers/nvme/host/core.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 5d3200a66f8e..73575d087a07 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -4932,10 +4932,8 @@ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
>> */
>> nvme_stop_keep_alive(ctrl);
>> blk_mq_destroy_queue(ctrl->admin_q);
>> - if (ctrl->ops->flags & NVME_F_FABRICS) {
>> + if (ctrl->ops->flags & NVME_F_FABRICS)
>> blk_mq_destroy_queue(ctrl->fabrics_q);
>> - blk_put_queue(ctrl->fabrics_q);
>> - }
>> blk_mq_free_tag_set(ctrl->admin_tagset);
>> }
>> EXPORT_SYMBOL_GPL(nvme_remove_admin_tag_set);
>> @@ -5077,6 +5075,8 @@ static void nvme_free_ctrl(struct device *dev)
>>
>> if (ctrl->admin_q)
>> blk_put_queue(ctrl->admin_q);
>> + if (ctrl->fabrics_q)
>> + blk_put_queue(ctrl->fabrics_q);
>> if (!subsys || ctrl->instance != subsys->instance)
>> ida_free(&nvme_instance_ida, ctrl->instance);
>> nvme_free_cels(ctrl);
>
> One wonders why we check for 'flags' in the first hunk, but for the
> existence of 'fabrics_q' in the second hunk.
> But anyway.
That is true.
I have to send a V5 anyway to address other comments, so I will change it to use
if (ctrl->fabrics_q).
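For reference, a sketch of how the function would then look, derived from the
hunk quoted above (the comment block above nvme_stop_keep_alive() is elided;
illustrative only, not the posted V5):

	void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
	{
		nvme_stop_keep_alive(ctrl);
		blk_mq_destroy_queue(ctrl->admin_q);
		/* test the pointer itself, matching the check in nvme_free_ctrl() */
		if (ctrl->fabrics_q)
			blk_mq_destroy_queue(ctrl->fabrics_q);
		blk_mq_free_tag_set(ctrl->admin_tagset);
	}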
Maurizio
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
` (6 preceding siblings ...)
2026-05-08 13:33 ` [PATCH V4 7/9] nvme-core: align fabrics_q teardown with admin_q in nvme_free_ctrl Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-08 17:09 ` Daniel Wagner
` (3 more replies)
2026-05-08 13:33 ` [PATCH V4 9/9] nvme-core: warn on allocating admin tag set with existing queue Maurizio Lombardi
8 siblings, 4 replies; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Currently, resetting a loopback controller unconditionally invokes
nvme_alloc_admin_tag_set() inside nvme_loop_configure_admin_queue().
Doing so drops the old queue and allocates a new one. Consequently,
this reverts the admin queue's timeout (q->rq_timeout) back to the
module default (NVME_ADMIN_TIMEOUT), completely wiping out any custom
timeout values the user may have configured via sysfs and potentially
racing against the sysfs nvme_admin_timeout_store() function
that may dereference the admin_q pointer during the RESETTING state.
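The racing store path is not shown in this patch; an approximate sketch of the
handler added earlier in the series (the name follows the sysfs attribute from
patch 2/9, but the body here is assumed, not quoted):

	static ssize_t nvme_admin_timeout_store(struct device *dev,
						struct device_attribute *attr,
						const char *buf, size_t count)
	{
		struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
		unsigned int timeout_ms;
		int err;

		err = kstrtouint(buf, 0, &timeout_ms);
		if (err)
			return err;

		/*
		 * If a reset reallocates ctrl->admin_q concurrently, this
		 * dereference can hit a freed or half-initialized queue.
		 */
		ctrl->admin_q->rq_timeout = msecs_to_jiffies(timeout_ms);
		return count;
	}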
Decouple the admin tag set lifecycle from the admin queue
configuration and destruction paths, which are executed during resets.
Specifically:
* Move nvme_alloc_admin_tag_set() into nvme_loop_create_ctrl() so it
is only allocated once during the initial controller creation.
* Defer the destruction of the admin tag set to
nvme_loop_delete_ctrl_host() and the terminal error-handling
paths of nvme_loop_reset_ctrl_work() and
nvme_loop_create_ctrl().
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/target/loop.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index d98d0cdc5d6f..070d16068e6b 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -274,7 +274,6 @@ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
nvmet_cq_put(&ctrl->queues[0].nvme_cq);
- nvme_remove_admin_tag_set(&ctrl->ctrl);
}
static void nvme_loop_free_ctrl(struct nvme_ctrl *nctrl)
@@ -375,25 +374,18 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
}
ctrl->ctrl.queue_count = 1;
- error = nvme_alloc_admin_tag_set(&ctrl->ctrl, &ctrl->admin_tag_set,
- &nvme_loop_admin_mq_ops,
- sizeof(struct nvme_loop_iod) +
- NVME_INLINE_SG_CNT * sizeof(struct scatterlist));
- if (error)
- goto out_free_sq;
-
/* reset stopped state for the fresh admin queue */
clear_bit(NVME_CTRL_ADMIN_Q_STOPPED, &ctrl->ctrl.flags);
error = nvmf_connect_admin_queue(&ctrl->ctrl);
if (error)
- goto out_cleanup_tagset;
+ goto out_free_sq;
set_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
error = nvme_enable_ctrl(&ctrl->ctrl);
if (error)
- goto out_cleanup_tagset;
+ goto out_free_sq;
ctrl->ctrl.max_hw_sectors =
(NVME_LOOP_MAX_SEGMENTS - 1) << PAGE_SECTORS_SHIFT;
@@ -402,14 +394,12 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
error = nvme_init_ctrl_finish(&ctrl->ctrl, false);
if (error)
- goto out_cleanup_tagset;
+ goto out_free_sq;
return 0;
-out_cleanup_tagset:
- clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
- nvme_remove_admin_tag_set(&ctrl->ctrl);
out_free_sq:
+ clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags);
nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
nvmet_cq_put(&ctrl->queues[0].nvme_cq);
return error;
@@ -432,6 +422,7 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
static void nvme_loop_delete_ctrl_host(struct nvme_ctrl *ctrl)
{
nvme_loop_shutdown_ctrl(to_loop_ctrl(ctrl));
+ nvme_remove_admin_tag_set(ctrl);
}
static void nvme_loop_delete_ctrl(struct nvmet_ctrl *nctrl)
@@ -494,6 +485,7 @@ static void nvme_loop_reset_ctrl_work(struct work_struct *work)
nvme_cancel_admin_tagset(&ctrl->ctrl);
nvme_loop_destroy_admin_queue(ctrl);
out_disable:
+ nvme_remove_admin_tag_set(&ctrl->ctrl);
dev_warn(ctrl->ctrl.device, "Removing after reset failure\n");
nvme_uninit_ctrl(&ctrl->ctrl);
}
@@ -594,10 +586,17 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
if (!ctrl->queues)
goto out_uninit_ctrl;
- ret = nvme_loop_configure_admin_queue(ctrl);
+ ret = nvme_alloc_admin_tag_set(&ctrl->ctrl, &ctrl->admin_tag_set,
+ &nvme_loop_admin_mq_ops,
+ sizeof(struct nvme_loop_iod) +
+ NVME_INLINE_SG_CNT * sizeof(struct scatterlist));
if (ret)
goto out_free_queues;
+ ret = nvme_loop_configure_admin_queue(ctrl);
+ if (ret)
+ goto out_remove_admin_tagset;
+
if (opts->queue_size > ctrl->ctrl.maxcmd) {
/* warn if maxcmd is lower than queue_size */
dev_warn(ctrl->ctrl.device,
@@ -633,6 +632,8 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev,
nvme_quiesce_admin_queue(&ctrl->ctrl);
nvme_cancel_admin_tagset(&ctrl->ctrl);
nvme_loop_destroy_admin_queue(ctrl);
+out_remove_admin_tagset:
+ nvme_remove_admin_tag_set(&ctrl->ctrl);
out_free_queues:
kfree(ctrl->queues);
out_uninit_ctrl:
--
2.54.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
@ 2026-05-08 17:09 ` Daniel Wagner
2026-05-10 22:16 ` Sagi Grimberg
` (2 subsequent siblings)
3 siblings, 0 replies; 46+ messages in thread
From: Daniel Wagner @ 2026-05-08 17:09 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, mlombard,
mkhalfella, chaitanyak, hare, hch
On Fri, May 08, 2026 at 03:33:34PM +0200, Maurizio Lombardi wrote:
> Currently, resetting a loopback controller unconditionally invokes
> nvme_alloc_admin_tag_set() inside nvme_loop_configure_admin_queue().
> Doing so drops the old queue and allocates a new one. Consequently,
> this reverts the admin queue's timeout (q->rq_timeout) back to the
> module default (NVME_ADMIN_TIMEOUT), completely wiping out any custom
> timeout values the user may have configured via sysfs and potentially
> racing against the sysfs nvme_admin_timeout_store() function
> that may dereference the admin_q pointer during the RESETTING state.
>
> Decouple the admin tag set lifecycle from the admin queue
> configuration and destruction paths, which are executed during resets.
> Specifically:
>
> * Move nvme_alloc_admin_tag_set() into nvme_loop_create_ctrl() so it
> is only allocated once during the initial controller creation.
>
> * Defer the destruction of the admin tag set to
> nvme_loop_delete_ctrl_host() and the terminal error-handling
> paths of nvme_loop_reset_ctrl_work() and
> nvme_loop_create_ctrl().
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
2026-05-08 17:09 ` Daniel Wagner
@ 2026-05-10 22:16 ` Sagi Grimberg
2026-05-11 8:12 ` Christoph Hellwig
2026-05-11 9:55 ` Hannes Reinecke
3 siblings, 0 replies; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:16 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
2026-05-08 17:09 ` Daniel Wagner
2026-05-10 22:16 ` Sagi Grimberg
@ 2026-05-11 8:12 ` Christoph Hellwig
2026-05-11 9:55 ` Hannes Reinecke
3 siblings, 0 replies; 46+ messages in thread
From: Christoph Hellwig @ 2026-05-11 8:12 UTC (permalink / raw)
To: Maurizio Lombardi
Cc: kbusch, mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
(I still wish we could consolidate some of the alloc/reset flow a little
more instead of having to do this in every driver)
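One purely hypothetical direction for such a consolidation (nothing like this
exists in mainline; the helper name and the callback are invented): a
core-owned reset path that reuses the tag set and admin queue allocated at
create time, so each transport only supplies its (re)connect step.

	/* Hypothetical core helper, mirroring the loop driver's sequence. */
	static int nvme_reset_admin_queue(struct nvme_ctrl *ctrl,
					  int (*connect)(struct nvme_ctrl *ctrl))
	{
		int error;

		error = connect(ctrl);	/* transport-specific reconnect */
		if (error)
			return error;

		error = nvme_enable_ctrl(ctrl);
		if (error)
			return error;

		/* tag set and admin_q were allocated once at create time */
		return nvme_init_ctrl_finish(ctrl, false);
	}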
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
` (2 preceding siblings ...)
2026-05-11 8:12 ` Christoph Hellwig
@ 2026-05-11 9:55 ` Hannes Reinecke
3 siblings, 0 replies; 46+ messages in thread
From: Hannes Reinecke @ 2026-05-11 9:55 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
On 5/8/26 15:33, Maurizio Lombardi wrote:
> Currently, resetting a loopback controller unconditionally invokes
> nvme_alloc_admin_tag_set() inside nvme_loop_configure_admin_queue().
> Doing so drops the old queue and allocates a new one. Consequently,
> this reverts the admin queue's timeout (q->rq_timeout) back to the
> module default (NVME_ADMIN_TIMEOUT), completely wiping out any custom
> timeout values the user may have configured via sysfs and potentially
> racing against the sysfs nvme_admin_timeout_store() function
> that may dereference the admin_q pointer during the RESETTING state.
>
> Decouple the admin tag set lifecycle from the admin queue
> configuration and destruction paths, which are executed during resets.
> Specifically:
>
> * Move nvme_alloc_admin_tag_set() into nvme_loop_create_ctrl() so it
> is only allocated once during the initial controller creation.
>
> * Defer the destruction of the admin tag set to
> nvme_loop_delete_ctrl_host() and the terminal error-handling
> paths of nvme_loop_reset_ctrl_work() and
> nvme_loop_create_ctrl().
>
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
> drivers/nvme/target/loop.c | 31 ++++++++++++++++---------------
> 1 file changed, 16 insertions(+), 15 deletions(-)
>
Reviewed-by: Hannes Reinecke <hare@kernel.org>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH V4 9/9] nvme-core: warn on allocating admin tag set with existing queue
2026-05-08 13:33 [PATCH V4 0/9] nvme: Refactor and expose per-controller timeout configuration Maurizio Lombardi
` (7 preceding siblings ...)
2026-05-08 13:33 ` [PATCH V4 8/9] nvmet-loop: do not alloc admin tag set during reset Maurizio Lombardi
@ 2026-05-08 13:33 ` Maurizio Lombardi
2026-05-10 22:16 ` Sagi Grimberg
8 siblings, 1 reply; 46+ messages in thread
From: Maurizio Lombardi @ 2026-05-08 13:33 UTC (permalink / raw)
To: kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Currently, nvme_alloc_admin_tag_set() silently drops and releases
the existing admin_q if it is called on a controller that already
has one (e.g., during a controller reset).
However, transport drivers should not be reallocating the admin tag
set and queue during a reset. Dropping the old queue and allocating
a new one destroys user-configured timeouts and may race against
nvme_admin_timeout_store().
Since all transport drivers are now expected to preserve the admin queue
across resets, calling nvme_alloc_admin_tag_set() when ctrl->admin_q
is already populated is a bug.
Remove the silent cleanup and replace it with a WARN_ON_ONCE() to
explicitly catch any transport drivers that violate this lifecycle rule.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
---
drivers/nvme/host/core.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 73575d087a07..14876b5ec5e3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4889,12 +4889,7 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
if (ret)
return ret;
- /*
- * If a previous admin queue exists (e.g., from before a reset),
- * put it now before allocating a new one to avoid orphaning it.
- */
- if (ctrl->admin_q)
- blk_put_queue(ctrl->admin_q);
+ WARN_ON_ONCE(ctrl->admin_q);
ctrl->admin_q = blk_mq_alloc_queue(set, NULL, NULL);
if (IS_ERR(ctrl->admin_q)) {
--
2.54.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH V4 9/9] nvme-core: warn on allocating admin tag set with existing queue
2026-05-08 13:33 ` [PATCH V4 9/9] nvme-core: warn on allocating admin tag set with existing queue Maurizio Lombardi
@ 2026-05-10 22:16 ` Sagi Grimberg
0 siblings, 0 replies; 46+ messages in thread
From: Sagi Grimberg @ 2026-05-10 22:16 UTC (permalink / raw)
To: Maurizio Lombardi, kbusch
Cc: mheyne, emilne, jmeneghi, linux-nvme, dwagner, mlombard,
mkhalfella, chaitanyak, hare, hch
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
^ permalink raw reply [flat|nested] 46+ messages in thread