* [PATCH 0/7] nvme: export additional diagnostic counters via sysfs
@ 2026-01-30 18:20 Nilay Shroff
2026-01-30 18:20 ` [PATCH 1/7] nvme: export command retry count " Nilay Shroff
` (8 more replies)
0 siblings, 9 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
Hi,
The NVMe driver encounters various events and conditions during normal
operation that are either not tracked today or not exposed to userspace
via sysfs. Lack of visibility into these events can make it difficult to
diagnose subtle issues related to controller behavior, multipath
stability, and I/O reliability.
This patchset adds several diagnostic counters that provide improved
observability into NVMe behavior. These counters are intended to help
users understand events such as transient path unavailability,
controller retries/reconnect/reset, failovers, and I/O failures. They
can also be consumed by monitoring tools such as nvme-top.
Specifically, this series proposes to export the following counters via
sysfs:
- Command retry count
- Multipath failover count
- Command error count
- I/O requeue count
- I/O failure count
- Controller reset event count
- Controller reconnect event count
The patchset consists of seven patches:
Patch 1: Export command retry count
Patch 2: Export multipath failover count
Patch 3: Export command error count
Patch 4: Export I/O requeue count
Patch 5: Export I/O failure count
Patch 6: Export controller reset event count
Patch 7: Export controller reconnect event count
Please note that this patchset doesn't make any functional changes; it
only exports the relevant counters to user space via sysfs.
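For quick inspection from userspace, the counters could be read with a
small helper like the sketch below. The exact sysfs paths are
assumptions based on where this series attaches the attributes
(per-namespace counters on the namespace block device, per-controller
counters on the controller device); they are illustrative only:

```c
#include <stdio.h>

/*
 * Sketch: read one of the proposed counters.  Returns -1 if the
 * attribute is absent (e.g. an older kernel without this series).
 * Example (hypothetical) paths:
 *   /sys/block/nvme0n1/command_retry_count   (per-namespace)
 *   /sys/class/nvme/nvme0/reset_events       (per-controller)
 */
static long long read_counter(const char *path)
{
	long long val;
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (fscanf(f, "%lld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}
```

A monitor such as nvme-top would call this periodically, e.g.
read_counter("/sys/block/nvme0n1/command_retry_count").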
As usual, feedback/comments/suggestions are welcome!
Nilay Shroff (7):
nvme: export command retry count via sysfs
nvme: export multipath failover count via sysfs
nvme: export command error counters via sysfs
nvme: export I/O requeue count when no path is available via sysfs
nvme: export I/O failure count when no path is available via sysfs
nvme: export controller reset event count via sysfs
nvme: export controller reconnect event count via sysfs
drivers/nvme/host/core.c | 23 ++++++++-
drivers/nvme/host/multipath.c | 32 +++++++++++++
drivers/nvme/host/nvme.h | 12 ++++-
drivers/nvme/host/sysfs.c | 90 +++++++++++++++++++++++++++++++++++
4 files changed, 154 insertions(+), 3 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/7] nvme: export command retry count via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-01-30 20:33 ` Keith Busch
2026-01-30 18:20 ` [PATCH 2/7] nvme: export multipath failover " Nilay Shroff
` (7 subsequent siblings)
8 siblings, 1 reply; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
When Advanced Command Retry Enable (ACRE) is configured, a controller
may interrupt command execution and return a completion status
indicating command interrupted with the DNR bit cleared. In this case,
the driver retries the command based on the Command Retry Delay (CRD)
value provided in the completion status.
Currently, these command retries are handled entirely within the NVMe
driver and are not visible to userspace. As a result, there is no
observability into retry behavior, which can be a useful diagnostic
signal.
Expose the command retry count through sysfs to provide visibility
into retry activity. This information can help identify controller-side
congestion under load and enables comparison across paths in multipath
setups (for example, detecting cases where one path experiences
significantly more retries than another under identical workloads).
This exported metric is intended for diagnostics and monitoring tools
such as nvme-top, and does not change command retry behavior.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 6 ++++++
drivers/nvme/host/nvme.h | 3 ++-
drivers/nvme/host/sysfs.c | 30 ++++++++++++++++++++++++++++++
3 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 7bf228df6001..d6490cc2a8e3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -323,6 +323,7 @@ static void nvme_retry_req(struct request *req)
{
unsigned long delay = 0;
u16 crd;
+ struct nvme_ns *ns = req->q->queuedata;
/* The mask and shift result must be <= 3 */
crd = (nvme_req(req)->status & NVME_STATUS_CRD) >> 11;
@@ -330,6 +331,11 @@ static void nvme_retry_req(struct request *req)
delay = nvme_req(req)->ctrl->crdt[crd - 1] * 100;
nvme_req(req)->retries++;
+ if (ns)
+ ns->retries++;
+ else
+ nvme_req(req)->ctrl->retries++;
+
blk_mq_requeue_request(req, false);
blk_mq_delay_kick_requeue_list(req->q, delay);
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9a5f28c5103c..d8a2831ed34c 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -359,7 +359,7 @@ struct nvme_ctrl {
unsigned long ka_last_check_time;
struct work_struct fw_act_work;
unsigned long events;
-
+ u64 retries;
#ifdef CONFIG_NVME_MULTIPATH
/* asymmetric namespace access: */
u8 anacap;
@@ -535,6 +535,7 @@ struct nvme_ns {
enum nvme_ana_state ana_state;
u32 ana_grpid;
#endif
+ u64 retries;
struct list_head siblings;
struct kref kref;
struct nvme_ns_head *head;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 29430949ce2f..c1e27088a053 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -246,6 +246,17 @@ static ssize_t nuse_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(nuse);
+static ssize_t nvme_io_command_retries_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
+
+ return sysfs_emit(buf, "%llu\n", ns->retries);
+}
+static struct device_attribute dev_attr_io_command_retries =
+ __ATTR(command_retry_count, 0444,
+ nvme_io_command_retries_show, NULL);
+
static struct attribute *nvme_ns_attrs[] = {
&dev_attr_wwid.attr,
&dev_attr_uuid.attr,
@@ -263,6 +274,7 @@ static struct attribute *nvme_ns_attrs[] = {
&dev_attr_delayed_removal_secs.attr,
#endif
&dev_attr_io_passthru_err_log_enabled.attr,
+ &dev_attr_io_command_retries.attr,
NULL,
};
@@ -285,6 +297,12 @@ static umode_t nvme_ns_attrs_are_visible(struct kobject *kobj,
if (!memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
return 0;
}
+ if (a == &dev_attr_io_command_retries.attr) {
+ struct gendisk *disk = dev_to_disk(dev);
+
+ if (nvme_disk_is_ns_head(disk))
+ return 0;
+ }
#ifdef CONFIG_NVME_MULTIPATH
if (a == &dev_attr_ana_grpid.attr || a == &dev_attr_ana_state.attr) {
/* per-path attr */
@@ -601,6 +619,17 @@ static ssize_t dctype_show(struct device *dev,
}
static DEVICE_ATTR_RO(dctype);
+static ssize_t nvme_adm_command_retries_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%llu\n", ctrl->retries);
+}
+static struct device_attribute dev_attr_adm_command_retries =
+ __ATTR(command_retry_count, 0444,
+ nvme_adm_command_retries_show, NULL);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -747,6 +776,7 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_dhchap_ctrl_secret.attr,
#endif
&dev_attr_adm_passthru_err_log_enabled.attr,
+ &dev_attr_adm_command_retries.attr,
NULL
};
--
2.52.0
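As an aside, the CRD handling that this patch hooks into can be
modelled in userspace as follows. The 0x1800 mask for the CRD field
and the 100 ms CRDT granularity are taken from the driver code above;
this is a rough sketch, not the kernel implementation:

```c
#define NVME_STATUS_CRD 0x1800	/* CRD field, bits 12:11 of the status */

/*
 * Rough model of the delay selection in nvme_retry_req().
 * crdt[] holds the controller's three Command Retry Delay Time
 * values (from Identify Controller), each in units of 100 ms.
 */
static unsigned int retry_delay_ms(unsigned short status,
				   const unsigned short crdt[3])
{
	unsigned int crd = (status & NVME_STATUS_CRD) >> 11;	/* 0..3 */

	if (!crd)
		return 0;			/* no delay requested */
	return crdt[crd - 1] * 100;		/* CRDT units of 100 ms */
}
```

For example, a completion status with CRD = 2 selects the second CRDT
entry, so retry_delay_ms(0x1000, crdt) with crdt = {1, 7, 30} yields
700 ms.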
* [PATCH 2/7] nvme: export multipath failover count via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
2026-01-30 18:20 ` [PATCH 1/7] nvme: export command retry count " Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-01-30 18:20 ` [PATCH 3/7] nvme: export command error counters " Nilay Shroff
` (6 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
When an NVMe command completes with a path-specific error, the NVMe
driver may retry the command on an alternate controller or path if one
is available. These failover events indicate that I/O was redirected
away from the original path.
Currently, the number of times requests are failed over to another
available path is not visible to userspace. Exposing this information
can be useful for diagnosing path health and stability.
Export the multipath failover count through sysfs to provide visibility
into path failover behavior. This statistic can be consumed by
monitoring tools such as nvme-top to help identify paths that
consistently trigger failovers under load.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/multipath.c | 10 ++++++++++
drivers/nvme/host/nvme.h | 2 ++
drivers/nvme/host/sysfs.c | 5 +++++
3 files changed, 17 insertions(+)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 174027d1cc19..366b820e654a 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -142,6 +142,7 @@ void nvme_failover_req(struct request *req)
struct bio *bio;
nvme_mpath_clear_current_path(ns);
+ ns->failover++;
/*
* If we got back an ANA error, we know the controller is alive but not
@@ -1168,6 +1169,15 @@ static ssize_t delayed_removal_secs_store(struct device *dev,
DEVICE_ATTR_RW(delayed_removal_secs);
+static ssize_t multipath_failover_count_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
+
+ return sysfs_emit(buf, "%llu\n", ns->failover);
+}
+DEVICE_ATTR_RO(multipath_failover_count);
+
static int nvme_lookup_ana_group_desc(struct nvme_ctrl *ctrl,
struct nvme_ana_group_desc *desc, void *data)
{
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index d8a2831ed34c..119ba2344039 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -534,6 +534,7 @@ struct nvme_ns {
#ifdef CONFIG_NVME_MULTIPATH
enum nvme_ana_state ana_state;
u32 ana_grpid;
+ u64 failover;
#endif
u64 retries;
struct list_head siblings;
@@ -1001,6 +1002,7 @@ extern struct device_attribute dev_attr_ana_state;
extern struct device_attribute dev_attr_queue_depth;
extern struct device_attribute dev_attr_numa_nodes;
extern struct device_attribute dev_attr_delayed_removal_secs;
+extern struct device_attribute dev_attr_multipath_failover_count;
extern struct device_attribute subsys_attr_iopolicy;
static inline bool nvme_disk_is_ns_head(struct gendisk *disk)
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index c1e27088a053..8fc593c36b74 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -272,6 +272,7 @@ static struct attribute *nvme_ns_attrs[] = {
&dev_attr_queue_depth.attr,
&dev_attr_numa_nodes.attr,
&dev_attr_delayed_removal_secs.attr,
+ &dev_attr_multipath_failover_count.attr,
#endif
&dev_attr_io_passthru_err_log_enabled.attr,
&dev_attr_io_command_retries.attr,
@@ -321,6 +322,10 @@ static umode_t nvme_ns_attrs_are_visible(struct kobject *kobj,
if (!nvme_disk_is_ns_head(disk))
return 0;
}
+ if (a == &dev_attr_multipath_failover_count.attr) {
+ if (nvme_disk_is_ns_head(dev_to_disk(dev)))
+ return 0;
+ }
#endif
return a->mode;
}
--
2.52.0
* [PATCH 3/7] nvme: export command error counters via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
2026-01-30 18:20 ` [PATCH 1/7] nvme: export command retry count " Nilay Shroff
2026-01-30 18:20 ` [PATCH 2/7] nvme: export multipath failover " Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-01-30 18:20 ` [PATCH 4/7] nvme: export I/O requeue count when no path is available " Nilay Shroff
` (5 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
When an NVMe command completes with an error status, the driver
logs the error to the kernel log. However, these messages may be
lost or overwritten over time since dmesg is a circular buffer.
Expose per-path and per-controller command error counters through
sysfs to
provide persistent visibility into error occurrences. This allows
users to observe the total number of commands that have failed on
a given path over time, which can be useful for diagnosing path
health and stability.
These counters can also be consumed by observability tools such as
nvme-top to provide additional insight into NVMe error behavior.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 16 ++++++++++++++--
drivers/nvme/host/nvme.h | 2 ++
drivers/nvme/host/sysfs.c | 29 +++++++++++++++++++++++++++++
3 files changed, 45 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d6490cc2a8e3..d16a3f4cc466 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -440,11 +440,23 @@ static inline void nvme_end_req_zoned(struct request *req)
static inline void __nvme_end_req(struct request *req)
{
+ struct nvme_ns *ns = req->q->queuedata;
+
if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) {
- if (blk_rq_is_passthrough(req))
+ if (blk_rq_is_passthrough(req)) {
nvme_log_err_passthru(req);
- else
+ if (ns) {
+ ns->errors++;
+ } else {
+ struct nvme_request *nr = nvme_req(req);
+ struct nvme_ctrl *ctrl = nr->ctrl;
+
+ ctrl->errors++;
+ }
+ } else {
nvme_log_error(req);
+ ns->errors++;
+ }
}
nvme_end_req_zoned(req);
nvme_trace_bio_complete(req);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 119ba2344039..b7e46bdd2d59 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -360,6 +360,7 @@ struct nvme_ctrl {
struct work_struct fw_act_work;
unsigned long events;
u64 retries;
+ u64 errors;
#ifdef CONFIG_NVME_MULTIPATH
/* asymmetric namespace access: */
u8 anacap;
@@ -537,6 +538,7 @@ struct nvme_ns {
u64 failover;
#endif
u64 retries;
+ u64 errors;
struct list_head siblings;
struct kref kref;
struct nvme_ns_head *head;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 8fc593c36b74..41218fa5081f 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -6,6 +6,7 @@
*/
#include <linux/nvme-auth.h>
+#include <linux/blkdev.h>
#include "nvme.h"
#include "fabrics.h"
@@ -257,6 +258,16 @@ static struct device_attribute dev_attr_io_command_retries =
__ATTR(command_retry_count, 0444,
nvme_io_command_retries_show, NULL);
+static ssize_t nvme_io_errors_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
+
+ return sysfs_emit(buf, "%llu\n", ns->errors);
+}
+static struct device_attribute dev_attr_io_errors =
+ __ATTR(command_error_count, 0444, nvme_io_errors_show, NULL);
+
static struct attribute *nvme_ns_attrs[] = {
&dev_attr_wwid.attr,
&dev_attr_uuid.attr,
@@ -276,6 +287,7 @@ static struct attribute *nvme_ns_attrs[] = {
#endif
&dev_attr_io_passthru_err_log_enabled.attr,
&dev_attr_io_command_retries.attr,
+ &dev_attr_io_errors.attr,
NULL,
};
@@ -301,6 +313,12 @@ static umode_t nvme_ns_attrs_are_visible(struct kobject *kobj,
if (a == &dev_attr_io_command_retries.attr) {
struct gendisk *disk = dev_to_disk(dev);
+ if (nvme_disk_is_ns_head(disk))
+ return 0;
+ }
+ if (a == &dev_attr_io_errors.attr) {
+ struct gendisk *disk = dev_to_disk(dev);
+
if (nvme_disk_is_ns_head(disk))
return 0;
}
@@ -635,6 +653,16 @@ static struct device_attribute dev_attr_adm_command_retries =
__ATTR(command_retry_count, 0444,
nvme_adm_command_retries_show, NULL);
+static ssize_t nvme_adm_errors_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%llu\n", ctrl->errors);
+}
+static struct device_attribute dev_attr_adm_errors =
+ __ATTR(command_error_count, 0444, nvme_adm_errors_show, NULL);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -782,6 +810,7 @@ static struct attribute *nvme_dev_attrs[] = {
#endif
&dev_attr_adm_passthru_err_log_enabled.attr,
&dev_attr_adm_command_retries.attr,
+ &dev_attr_adm_errors.attr,
NULL
};
--
2.52.0
* [PATCH 4/7] nvme: export I/O requeue count when no path is available via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
` (2 preceding siblings ...)
2026-01-30 18:20 ` [PATCH 3/7] nvme: export command error counters " Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-01-30 18:20 ` [PATCH 5/7] nvme: export I/O failure " Nilay Shroff
` (4 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
When the NVMe namespace head determines that there is no currently
available path to handle I/O (for example, while a controller is
resetting/connecting or due to a transient link failure), incoming
I/Os are added to the requeue list.
Currently, there is no visibility into how many I/Os have been requeued
in this situation. Add a new sysfs counter, requeue_no_usable_path,
to expose the number of I/Os that were requeued due to the absence of
an available path.
This statistic can help users understand I/O slowdowns or stalls caused
by temporary path unavailability, and can be consumed by monitoring
tools such as nvme-top for real-time observability.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/multipath.c | 11 +++++++++++
drivers/nvme/host/nvme.h | 2 ++
drivers/nvme/host/sysfs.c | 5 +++++
3 files changed, 18 insertions(+)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 366b820e654a..4e5f8523ca40 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -539,6 +539,7 @@ static void nvme_ns_head_submit_bio(struct bio *bio)
spin_lock_irq(&head->requeue_lock);
bio_list_add(&head->requeue_list, bio);
spin_unlock_irq(&head->requeue_lock);
+ head->requeue_no_usable_path++;
} else {
dev_warn_ratelimited(dev, "no available path - failing I/O\n");
@@ -1178,6 +1179,16 @@ static ssize_t multipath_failover_count_show(struct device *dev,
}
DEVICE_ATTR_RO(multipath_failover_count);
+static ssize_t requeue_no_usable_path_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct gendisk *disk = dev_to_disk(dev);
+ struct nvme_ns_head *head = disk->private_data;
+
+ return sysfs_emit(buf, "%llu\n", head->requeue_no_usable_path);
+}
+DEVICE_ATTR_RO(requeue_no_usable_path);
+
static int nvme_lookup_ana_group_desc(struct nvme_ctrl *ctrl,
struct nvme_ana_group_desc *desc, void *data)
{
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index b7e46bdd2d59..5836e4c557a2 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -509,6 +509,7 @@ struct nvme_ns_head {
unsigned long flags;
struct delayed_work remove_work;
unsigned int delayed_removal_secs;
+ u64 requeue_no_usable_path;
#define NVME_NSHEAD_DISK_LIVE 0
#define NVME_NSHEAD_QUEUE_IF_NO_PATH 1
struct nvme_ns __rcu *current_path[];
@@ -1005,6 +1006,7 @@ extern struct device_attribute dev_attr_queue_depth;
extern struct device_attribute dev_attr_numa_nodes;
extern struct device_attribute dev_attr_delayed_removal_secs;
extern struct device_attribute dev_attr_multipath_failover_count;
+extern struct device_attribute dev_attr_requeue_no_usable_path;
extern struct device_attribute subsys_attr_iopolicy;
static inline bool nvme_disk_is_ns_head(struct gendisk *disk)
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 41218fa5081f..84d33445a578 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -284,6 +284,7 @@ static struct attribute *nvme_ns_attrs[] = {
&dev_attr_numa_nodes.attr,
&dev_attr_delayed_removal_secs.attr,
&dev_attr_multipath_failover_count.attr,
+ &dev_attr_requeue_no_usable_path.attr,
#endif
&dev_attr_io_passthru_err_log_enabled.attr,
&dev_attr_io_command_retries.attr,
@@ -344,6 +345,10 @@ static umode_t nvme_ns_attrs_are_visible(struct kobject *kobj,
if (nvme_disk_is_ns_head(dev_to_disk(dev)))
return 0;
}
+ if (a == &dev_attr_requeue_no_usable_path.attr) {
+ if (!nvme_disk_is_ns_head(dev_to_disk(dev)))
+ return 0;
+ }
#endif
return a->mode;
}
--
2.52.0
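A monitoring tool would typically sample a counter such as
requeue_no_usable_path periodically and look at deltas rather than
absolute values. A minimal sketch, assuming a counter that only moves
backwards when it is reset to zero (e.g. after a controller teardown):

```c
#include <stdint.h>

/*
 * Per-second rate from two samples of a monotonically increasing
 * counter.  If the counter went backwards it was reset, so count
 * events from zero instead of producing a bogus negative delta.
 */
static double counter_rate(uint64_t prev, uint64_t cur, double interval_s)
{
	uint64_t delta = cur < prev ? cur : cur - prev;

	return delta / interval_s;
}
```

For example, two samples of 100 and 160 taken 30 seconds apart give a
rate of 2 requeues per second.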
* [PATCH 5/7] nvme: export I/O failure count when no path is available via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
` (3 preceding siblings ...)
2026-01-30 18:20 ` [PATCH 4/7] nvme: export I/O requeue count when no path is available " Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-01-30 18:20 ` [PATCH 6/7] nvme: export controller reset event count " Nilay Shroff
` (3 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
When I/O is submitted to the NVMe namespace head and no available path
can handle the request, the driver fails the I/O immediately. Currently,
such failures are only reported via kernel log messages, which may be
lost over time since dmesg is a circular buffer.
Add a new sysfs counter, fail_no_available_path, to expose the number of
I/Os that failed due to the absence of an available path. This provides
persistent visibility into path-related I/O failures and can help users
diagnose the cause of I/O errors.
This counter can also be consumed by monitoring tools such as nvme-top.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/multipath.c | 11 +++++++++++
drivers/nvme/host/nvme.h | 2 ++
drivers/nvme/host/sysfs.c | 1 +
3 files changed, 14 insertions(+)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 4e5f8523ca40..85641e8852ad 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -544,6 +544,7 @@ static void nvme_ns_head_submit_bio(struct bio *bio)
dev_warn_ratelimited(dev, "no available path - failing I/O\n");
bio_io_error(bio);
+ head->fail_no_available_path++;
}
srcu_read_unlock(&head->srcu, srcu_idx);
@@ -1189,6 +1190,16 @@ static ssize_t requeue_no_usable_path_show(struct device *dev,
}
DEVICE_ATTR_RO(requeue_no_usable_path);
+static ssize_t fail_no_available_path_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct gendisk *disk = dev_to_disk(dev);
+ struct nvme_ns_head *head = disk->private_data;
+
+ return sysfs_emit(buf, "%llu\n", head->fail_no_available_path);
+}
+DEVICE_ATTR_RO(fail_no_available_path);
+
static int nvme_lookup_ana_group_desc(struct nvme_ctrl *ctrl,
struct nvme_ana_group_desc *desc, void *data)
{
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 5836e4c557a2..66bd4db1fe0f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -510,6 +510,7 @@ struct nvme_ns_head {
struct delayed_work remove_work;
unsigned int delayed_removal_secs;
u64 requeue_no_usable_path;
+ u64 fail_no_available_path;
#define NVME_NSHEAD_DISK_LIVE 0
#define NVME_NSHEAD_QUEUE_IF_NO_PATH 1
struct nvme_ns __rcu *current_path[];
@@ -1007,6 +1008,7 @@ extern struct device_attribute dev_attr_numa_nodes;
extern struct device_attribute dev_attr_delayed_removal_secs;
extern struct device_attribute dev_attr_multipath_failover_count;
extern struct device_attribute dev_attr_requeue_no_usable_path;
+extern struct device_attribute dev_attr_fail_no_available_path;
extern struct device_attribute subsys_attr_iopolicy;
static inline bool nvme_disk_is_ns_head(struct gendisk *disk)
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 84d33445a578..c23d9a0ba3f4 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -285,6 +285,7 @@ static struct attribute *nvme_ns_attrs[] = {
&dev_attr_delayed_removal_secs.attr,
&dev_attr_multipath_failover_count.attr,
&dev_attr_requeue_no_usable_path.attr,
+ &dev_attr_fail_no_available_path.attr,
#endif
&dev_attr_io_passthru_err_log_enabled.attr,
&dev_attr_io_command_retries.attr,
--
2.52.0
* [PATCH 6/7] nvme: export controller reset event count via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
` (4 preceding siblings ...)
2026-01-30 18:20 ` [PATCH 5/7] nvme: export I/O failure " Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-01-30 18:20 ` [PATCH 7/7] nvme: export controller reconnect " Nilay Shroff
` (2 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
The NVMe controller transitions into the RESETTING state during error
recovery, link instability, firmware activation, or when a reset is
explicitly triggered by the user.
Expose a controller reset event count via sysfs to provide visibility
into these RESETTING state transitions. Observing the frequency of reset
events can help users identify issues such as PCIe errors or unstable
fabric links.
This counter can also be consumed by monitoring tools such as nvme-top
to improve controller-level observability.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 1 +
drivers/nvme/host/nvme.h | 1 +
drivers/nvme/host/sysfs.c | 10 ++++++++++
3 files changed, 12 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d16a3f4cc466..bb74834c4ed8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -597,6 +597,7 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
case NVME_CTRL_NEW:
case NVME_CTRL_LIVE:
changed = true;
+ ctrl->nr_reset++;
fallthrough;
default:
break;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 66bd4db1fe0f..d76e42fee01f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -361,6 +361,7 @@ struct nvme_ctrl {
unsigned long events;
u64 retries;
u64 errors;
+ u32 nr_reset;
#ifdef CONFIG_NVME_MULTIPATH
/* asymmetric namespace access: */
u8 anacap;
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index c23d9a0ba3f4..e1ef44e69768 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -669,6 +669,15 @@ static ssize_t nvme_adm_errors_show(struct device *dev,
struct device_attribute dev_attr_adm_errors =
__ATTR(command_error_count, 0444, nvme_adm_errors_show, NULL);
+static ssize_t reset_events_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", ctrl->nr_reset);
+}
+static DEVICE_ATTR_RO(reset_events);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -817,6 +826,7 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_adm_passthru_err_log_enabled.attr,
&dev_attr_adm_command_retries.attr,
&dev_attr_adm_errors.attr,
+ &dev_attr_reset_events.attr,
NULL
};
--
2.52.0
* [PATCH 7/7] nvme: export controller reconnect event count via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
` (5 preceding siblings ...)
2026-01-30 18:20 ` [PATCH 6/7] nvme: export controller reset event count " Nilay Shroff
@ 2026-01-30 18:20 ` Nilay Shroff
2026-02-02 22:56 ` [PATCH 0/7] nvme: export additional diagnostic counters " Hannes Reinecke
2026-02-03 12:26 ` Ming Lei
8 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-01-30 18:20 UTC (permalink / raw)
To: linux-nvme
Cc: kbusch, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce,
Nilay Shroff
When an NVMe-oF link goes down, the driver attempts to recover the
connection by repeatedly trying to reconnect to the target at configured
intervals. A maximum number of reconnect attempts is also configured,
after which recovery stops and the host controller is removed if the
connection cannot be re-established.
The driver maintains a counter, nr_reconnects, which is incremented on
each reconnect attempt. Currently, this counter is only reported via
kernel log messages and is not exposed to userspace. Since dmesg is a
circular buffer, this information may be lost over time.
Expose the nr_reconnects counter via a new sysfs attribute,
reconnect_events, to provide persistent visibility into the number of
reconnect
attempts made by the host. This information can help users diagnose
unstable links or connectivity issues.
This counter can also be consumed by monitoring tools such as nvme-top
to improve controller-level observability.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/sysfs.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index e1ef44e69768..c78c494f4ee0 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -678,6 +678,15 @@ static ssize_t reset_events_show(struct device *dev,
}
static DEVICE_ATTR_RO(reset_events);
+static ssize_t reconnect_events_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", ctrl->nr_reconnects);
+}
+static DEVICE_ATTR_RO(reconnect_events);
+
#ifdef CONFIG_NVME_HOST_AUTH
static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -827,6 +836,7 @@ static struct attribute *nvme_dev_attrs[] = {
&dev_attr_adm_command_retries.attr,
&dev_attr_adm_errors.attr,
&dev_attr_reset_events.attr,
+ &dev_attr_reconnect_events.attr,
NULL
};
--
2.52.0
* Re: [PATCH 1/7] nvme: export command retry count via sysfs
2026-01-30 18:20 ` [PATCH 1/7] nvme: export command retry count " Nilay Shroff
@ 2026-01-30 20:33 ` Keith Busch
2026-02-02 13:33 ` Nilay Shroff
0 siblings, 1 reply; 14+ messages in thread
From: Keith Busch @ 2026-01-30 20:33 UTC (permalink / raw)
To: Nilay Shroff
Cc: linux-nvme, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce
On Fri, Jan 30, 2026 at 11:50:18PM +0530, Nilay Shroff wrote:
> nvme_req(req)->retries++;
> + if (ns)
> + ns->retries++;
> + else
> + nvme_req(req)->ctrl->retries++;
I don't think admin commands ever retry with this driver, so probably
not worth tracking it.
And as unlikely as it is to happen, might want to ensure it doesn't
wrap back to zero. The size_add() function handles that.
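The kernel helper Keith refers to saturates at the maximum value
instead of wrapping. An illustrative userspace analogue of that
behavior (not the actual <linux/overflow.h> implementation):

```c
#include <stdint.h>

/*
 * Saturating add: unsigned addition wraps on overflow, so detect the
 * wrap (sum ended up smaller than an operand) and clamp at the max.
 */
static uint64_t sat_add_u64(uint64_t a, uint64_t b)
{
	uint64_t sum = a + b;

	return sum < a ? UINT64_MAX : sum;
}
```

With this, a counter that reaches the maximum simply sticks there
rather than silently restarting from zero.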
* Re: [PATCH 1/7] nvme: export command retry count via sysfs
2026-01-30 20:33 ` Keith Busch
@ 2026-02-02 13:33 ` Nilay Shroff
0 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-02-02 13:33 UTC (permalink / raw)
To: Keith Busch; +Cc: linux-nvme, axboe, hch, sagi, hare, dwagner, wenxiong, gjoyce
On 1/31/26 2:03 AM, Keith Busch wrote:
> On Fri, Jan 30, 2026 at 11:50:18PM +0530, Nilay Shroff wrote:
>> nvme_req(req)->retries++;
>> + if (ns)
>> + ns->retries++;
>> + else
>> + nvme_req(req)->ctrl->retries++;
>
> I don't think admin commands ever retry with this driver, so probably
> not worth tracking it.
Yeah got it. I will drop retry counter for admin commands.
>
> And as unlikely as it is to happen, might want to ensure it doesn't
> wrap back to zero. The size_add() function handles that.
Sure, makes sense.
I will fix these in next patch revision.
Thanks,
--Nilay
* Re: [PATCH 0/7] nvme: export additional diagnostic counters via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
` (6 preceding siblings ...)
2026-01-30 18:20 ` [PATCH 7/7] nvme: export controller reconnect " Nilay Shroff
@ 2026-02-02 22:56 ` Hannes Reinecke
2026-02-03 9:07 ` Nilay Shroff
2026-02-03 12:26 ` Ming Lei
8 siblings, 1 reply; 14+ messages in thread
From: Hannes Reinecke @ 2026-02-02 22:56 UTC (permalink / raw)
To: Nilay Shroff, linux-nvme
Cc: kbusch, axboe, hch, sagi, dwagner, wenxiong, gjoyce
On 1/30/26 19:20, Nilay Shroff wrote:
> Hi,
>
> The NVMe driver encounters various events and conditions during normal
> operation that are either not tracked today or not exposed to userspace
> via sysfs. Lack of visibility into these events can make it difficult to
> diagnose subtle issues related to controller behavior, multipath
> stability, and I/O reliability.
>
> This patchset adds several diagnostic counters that provide improved
> observability into NVMe behavior. These counters are intended to help
> users understand events such as transient path unavailability,
> controller retries/reconnect/reset, failovers, and I/O failures. They
> can also be consumed by monitoring tools such as nvme-top.
>
> Specifically, this series proposes to export the following counters via
> sysfs:
> - Command retry count
> - Multipath failover count
> - Command error count
> - I/O requeue count
> - I/O failure count
> - Controller reset event counts
> - Controller reconnect counts
>
> The patchset consists of seven patches:
> Patch 1: Export command retry count
> Patch 2: Export multipath failover count
> Patch 3: Export command error count
> Patch 4: Export I/O requeue count
> Patch 5: Export I/O failure count
> Patch 6: Export controller reset event counts
> Patch 7: Export controller reconnect event count
>
> Please note that this patchset doesn't make any functional change but
> rather export relevant counters to user space via sysfs.
>
> As usual, feedback/comments/suggestions are welcome!
>
While I do agree with the general idea, I do wonder whether debugfs
would not be a better-suited place for all of this. Having all of
this information in sysfs will clutter it by quite a bit, plus we
do have the usual issues with ABI stability if we ever see the need
to change (or, heaven forbid, remove) any of these counters.
(And when doing so it might be an idea to add a 'version' entry
to debugfs such that we can manage userspace expectations.)
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH 0/7] nvme: export additional diagnostic counters via sysfs
2026-02-02 22:56 ` [PATCH 0/7] nvme: export additional diagnostic counters " Hannes Reinecke
@ 2026-02-03 9:07 ` Nilay Shroff
0 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-02-03 9:07 UTC (permalink / raw)
To: Hannes Reinecke, linux-nvme
Cc: kbusch, axboe, hch, sagi, dwagner, wenxiong, gjoyce
On 2/3/26 4:26 AM, Hannes Reinecke wrote:
> On 1/30/26 19:20, Nilay Shroff wrote:
>> Hi,
>>
>> The NVMe driver encounters various events and conditions during normal
>> operation that are either not tracked today or not exposed to userspace
>> via sysfs. Lack of visibility into these events can make it difficult to
>> diagnose subtle issues related to controller behavior, multipath
>> stability, and I/O reliability.
>>
>> This patchset adds several diagnostic counters that provide improved
>> observability into NVMe behavior. These counters are intended to help
>> users understand events such as transient path unavailability,
>> controller retries/reconnect/reset, failovers, and I/O failures. They
>> can also be consumed by monitoring tools such as nvme-top.
>>
>> Specifically, this series proposes to export the following counters via
>> sysfs:
>> - Command retry count
>> - Multipath failover count
>> - Command error count
>> - I/O requeue count
>> - I/O failure count
>> - Controller reset event counts
>> - Controller reconnect counts
>>
>> The patchset consists of seven patches:
>> Patch 1: Export command retry count
>> Patch 2: Export multipath failover count
>> Patch 3: Export command error count
>> Patch 4: Export I/O requeue count
>> Patch 5: Export I/O failure count
>> Patch 6: Export controller reset event counts
>> Patch 7: Export controller reconnect event count
>>
>> Please note that this patchset doesn't make any functional change but
>> rather export relevant counters to user space via sysfs.
>>
>> As usual, feedback/comments/suggestions are welcome!
>>
>
> While I do agree with the general idea, I do wonder whether debugfs
> would not be a better-suited place for all of this. Having all of
> this information in sysfs will clutter it by quite a bit, plus we
> do have the usual issues with ABI stability if we ever see the need
> to change (or, heaven forbid, remove) any of these counters.
>
> (And when doing so it might be an idea to add a 'version' entry
> to debugfs such that we can manage userspace expectations.)
>
I understand the concern regarding ABI stability and potential sysfs clutter.
However, one of the challenges with relying on debugfs is that it is not always
guaranteed to be available or enabled in production environments. As a result,
exposing these statistics exclusively via debugfs could limit their usefulness
for real-world deployments.
These counters are intended to be consumed by user-space tools such as nvme-cli/
nvme-top, which are often used in production systems for monitoring and diagnostics.
Depending on debugfs in such cases may therefore not be reliable.
In fact, there has been prior discussion expressing reservations about relying on
debugfs for NVMe-related statistics. For example, Daniel previously raised concerns
in the context of nvme-cli:
https://lore.kernel.org/all/803f429d-60f3-4af7-9535-37a2038e53c1@flourine.local/
Given this, IMO, exposing a carefully scoped and well-documented set of diagnostic
counters via sysfs seems more appropriate for long-term usability, provided we
remain mindful of ABI stability considerations.
Thanks,
--Nilay
* Re: [PATCH 0/7] nvme: export additional diagnostic counters via sysfs
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
` (7 preceding siblings ...)
2026-02-02 22:56 ` [PATCH 0/7] nvme: export additional diagnostic counters " Hannes Reinecke
@ 2026-02-03 12:26 ` Ming Lei
2026-02-03 13:03 ` Nilay Shroff
8 siblings, 1 reply; 14+ messages in thread
From: Ming Lei @ 2026-02-03 12:26 UTC (permalink / raw)
To: Nilay Shroff
Cc: linux-nvme, kbusch, axboe, hch, sagi, hare, dwagner, wenxiong,
gjoyce, Ming Lei
On Sat, Jan 31, 2026 at 2:22 AM Nilay Shroff <nilay@linux.ibm.com> wrote:
>
> Hi,
>
> The NVMe driver encounters various events and conditions during normal
> operation that are either not tracked today or not exposed to userspace
> via sysfs. Lack of visibility into these events can make it difficult to
Not sure if it is true, you may get tons of results by googling `linux
bpf observe`.
Thanks,
* Re: [PATCH 0/7] nvme: export additional diagnostic counters via sysfs
2026-02-03 12:26 ` Ming Lei
@ 2026-02-03 13:03 ` Nilay Shroff
0 siblings, 0 replies; 14+ messages in thread
From: Nilay Shroff @ 2026-02-03 13:03 UTC (permalink / raw)
To: Ming Lei
Cc: linux-nvme, kbusch, axboe, hch, sagi, hare, dwagner, wenxiong,
gjoyce
On 2/3/26 5:56 PM, Ming Lei wrote:
> On Sat, Jan 31, 2026 at 2:22 AM Nilay Shroff <nilay@linux.ibm.com> wrote:
>>
>> Hi,
>>
>> The NVMe driver encounters various events and conditions during normal
>> operation that are either not tracked today or not exposed to userspace
>> via sysfs. Lack of visibility into these events can make it difficult to
>
> Not sure if it is true, you may get tons of results by googling `linux
> bpf observe`.
>
Yeah true, and the intent was not to suggest that _no_ NVMe events
are tracked today. The goal of this patchset is to export specific event and
statistic counters that are either not tracked at all or not exposed to userspace
in a consistent way, and that could be useful for analyzing NVMe device and
multipath behavior.
So this patchset proposes to track these counters (command retry, failover, errors,
I/O requeue, controller reset/reconnect, etc.) and export them via sysfs. These
counters are intended to be consumed by tools such as nvme-cli/nvme-top. For
nvme-cli, in particular, sysfs is currently the only practical and supported
interface for consuming such statistics.
Once these counters exist in the kernel, they can of course also be observed via
BPF if desired. However, the primary motivation here is to make these events
explicitly tracked and reliably available, rather than requiring ad-hoc instrumentation
to infer them.
Thanks,
--Nilay
Thread overview: 14+ messages
2026-01-30 18:20 [PATCH 0/7] nvme: export additional diagnostic counters via sysfs Nilay Shroff
2026-01-30 18:20 ` [PATCH 1/7] nvme: export command retry count " Nilay Shroff
2026-01-30 20:33 ` Keith Busch
2026-02-02 13:33 ` Nilay Shroff
2026-01-30 18:20 ` [PATCH 2/7] nvme: export multipath failover " Nilay Shroff
2026-01-30 18:20 ` [PATCH 3/7] nvme: export command error counters " Nilay Shroff
2026-01-30 18:20 ` [PATCH 4/7] nvme: export I/O requeue count when no path is available " Nilay Shroff
2026-01-30 18:20 ` [PATCH 5/7] nvme: export I/O failure " Nilay Shroff
2026-01-30 18:20 ` [PATCH 6/7] nvme: export controller reset event count " Nilay Shroff
2026-01-30 18:20 ` [PATCH 7/7] nvme: export controller reconnect " Nilay Shroff
2026-02-02 22:56 ` [PATCH 0/7] nvme: export additional diagnostic counters " Hannes Reinecke
2026-02-03 9:07 ` Nilay Shroff
2026-02-03 12:26 ` Ming Lei
2026-02-03 13:03 ` Nilay Shroff