linux-nvme.lists.infradead.org archive mirror
* [PATCHv2 0/4] nvme-cli: enhance the visibility of multipath using show-topology command
@ 2025-08-12 12:56 Nilay Shroff
  2025-08-12 12:56 ` [PATCHv2 1/4] nvme: support <device> option in " Nilay Shroff
                   ` (3 more replies)
  0 siblings, 4 replies; 26+ messages in thread
From: Nilay Shroff @ 2025-08-12 12:56 UTC (permalink / raw)
  To: linux-nvme; +Cc: dwagner, hare, kbusch, gjoyce

Hi,

There has been recent work to improve the visibility of NVMe multipath
configurations. The native NVMe multipath kernel driver supports three
I/O path selection policies: numa, round-robin, and queue-depth. However,
until now, users have had no insight into which path is selected by the
multipath logic for forwarding I/O.

To address this, an RFC was proposed [1] and has since been merged into
the Linux kernel as of v6.15. Following that, libnvme was extended to
discover multiple paths to a shared NVMe namespace [2]. This patchset
builds on those efforts to improve userspace observability of multipath
configurations via nvme-cli.

This patchset contains four patches:
The first patch in the series fixes the <device> filter option, which
lets users filter the output of the nvme show-topology command by the
specified device.

The second patch in the series enhances the nvme show-topology command to
display multipath configuration, including all discovered paths to a
namespace, along with details such as ANA state, NUMA nodes, and queue
depth. The show-topology "--ranking" option is extended with a new sub-
option: multipath.

The third patch in the series adds common table APIs for printing nvme cli
command output in tabular format.  With these APIs, developers no longer
need to pre-calculate column or row widths. The output is consistently
aligned and easy to read.

The fourth patch adds support for printing show-topology in tabular form
leveraging the introduced table APIs to produce well-aligned, easy-to-read
output.

As usual, any feedback/suggestions are most welcome!

Thanks!

Changes from v1:
  - Added the third patch in the series that implements the common table
    APIs for printing nvme cli command output in tabular format
    (Daniel Wagner)
  - Added the fourth patch in the series which adds the support for 
    printing show-topology in tabular form (Daniel Wagner)
Link to v1: https://lore.kernel.org/all/20250704135001.292763-1-nilay@linux.ibm.com/    

Nilay Shroff (4):
  nvme: support <device> option in show-topology command
  nvme: extend show-topology command to add support for multipath
  nvme: add common APIs for printing tabular format output
  nvme: add support for printing show-topology in tabular form

 nvme-print-binary.c |   1 +
 nvme-print-json.c   |  25 +++-
 nvme-print-stdout.c | 233 ++++++++++++++++++++++++++++++++++++-
 nvme-print.c        |   9 +-
 nvme-print.h        |   3 +
 nvme.c              |  29 ++++-
 nvme.h              |   2 +
 util/meson.build    |   3 +-
 util/table.c        | 278 ++++++++++++++++++++++++++++++++++++++++++++
 util/table.h        | 117 +++++++++++++++++++
 10 files changed, 685 insertions(+), 15 deletions(-)
 create mode 100644 util/table.c
 create mode 100644 util/table.h

-- 
2.50.1




* [PATCHv2 1/4] nvme: support <device> option in show-topology command
  2025-08-12 12:56 [PATCHv2 0/4] nvme-cli: enhance the visibility of multipath using show-topology command Nilay Shroff
@ 2025-08-12 12:56 ` Nilay Shroff
  2025-08-18  7:12   ` Hannes Reinecke
  2025-08-12 12:56 ` [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath Nilay Shroff
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-12 12:56 UTC (permalink / raw)
  To: linux-nvme; +Cc: dwagner, hare, kbusch, gjoyce

Although the help text for the nvme show-topology command indicates
support for a <device> option, this option has no effect in practice
— specifying an NVMe device name does not filter the output.

This commit adds proper support for the <device> option, enabling users
to filter the topology output based on the specified NVMe device.

Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
 nvme-print-stdout.c |  9 +++++++++
 nvme.c              | 17 ++++++++++++++++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
index c597b608..eb56349a 100644
--- a/nvme-print-stdout.c
+++ b/nvme-print-stdout.c
@@ -5683,6 +5683,15 @@ static void stdout_simple_topology(nvme_root_t r,
 
 	nvme_for_each_host(r, h) {
 		nvme_for_each_subsystem(h, s) {
+			bool no_ctrl = true;
+			nvme_ctrl_t c;
+
+			nvme_subsystem_for_each_ctrl(s, c)
+				no_ctrl = false;
+
+			if (no_ctrl)
+				continue;
+
 			if (!first)
 				printf("\n");
 			first = false;
diff --git a/nvme.c b/nvme.c
index 27dac37b..c6779cf4 100644
--- a/nvme.c
+++ b/nvme.c
@@ -10155,6 +10155,8 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
 	const char *ranking = "Ranking order: namespace|ctrl";
 	nvme_print_flags_t flags;
 	_cleanup_nvme_root_ nvme_root_t r = NULL;
+	char *devname = NULL;
+	nvme_scan_filter_t filter = NULL;
 	enum nvme_cli_topo_ranking rank;
 	int err;
 
@@ -10197,7 +10199,20 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
 		return -errno;
 	}
 
-	err = nvme_scan_topology(r, NULL, NULL);
+	if (optind < argc)
+		devname = basename(argv[optind++]);
+
+	if (devname) {
+		int subsys_id, nsid;
+
+		if (sscanf(devname, "nvme%dn%d", &subsys_id, &nsid) != 2) {
+			nvme_show_error("Invalid device name %s\n", devname);
+			return -EINVAL;
+		}
+		filter = nvme_match_device_filter;
+	}
+
+	err = nvme_scan_topology(r, filter, (void *)devname);
 	if (err < 0) {
 		nvme_show_error("Failed to scan topology: %s", nvme_strerror(errno));
 		return err;
-- 
2.50.1




* [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-12 12:56 [PATCHv2 0/4] nvme-cli: enhance the visibility of multipath using show-topology command Nilay Shroff
  2025-08-12 12:56 ` [PATCHv2 1/4] nvme: support <device> option in " Nilay Shroff
@ 2025-08-12 12:56 ` Nilay Shroff
  2025-08-18  7:22   ` Hannes Reinecke
  2025-08-12 12:56 ` [PATCHv2 3/4] nvme: add common APIs for printing tabular format output Nilay Shroff
  2025-08-12 12:56 ` [PATCHv2 4/4] nvme: add support for printing show-topology in tabular form Nilay Shroff
  3 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-12 12:56 UTC (permalink / raw)
  To: linux-nvme; +Cc: dwagner, hare, kbusch, gjoyce

This commit enhances the show-topology command by adding support for
NVMe multipath. With this change, users can now list all paths to a
namespace from its corresponding head node device. Each NVMe path
entry then also includes additional details such as ANA state, NUMA
node, and queue depth, improving visibility into multipath configs.
This information can be particularly helpful for debugging and
analyzing NVMe multipath setups.

To support this functionality, the "--ranking" option of the nvme
show-topology command has been extended with a new sub-option:
"multipath".

Since this enhancement is specific to NVMe multipath, the iopolicy
configured under each subsystem is now always displayed. Previously,
iopolicy was shown only with nvme show-topology verbose output, but
it is now included by default to improve usability and provide better
context when reviewing multipath configurations via show-topology.

With this update, users can view the multipath topology of a multi-
controller/multi-port NVMe disk using:

$ nvme show-topology -r multipath

nvme-subsys2 - NQN=nvmet_subsystem
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
               iopolicy=numa

          _ _ _<head-node>
         /              _ _ _ <ana-state>
        /              /      _ _ _ <numa-node-list>
       /              /      /  _ _ _<queue-depth>
      |              /      /  /
 +- nvme2n1 (ns 1)  /      /  /
 \                 |      |  |
  +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
  +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live

Please note that the annotations shown above (e.g., <head-node>,
<ana-state>, <numa-node-list>, and <queue-depth>) are included for
clarity only and are not part of the actual output.
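
With the JSON changes in this patch, each namespace entry takes roughly
the following shape (an illustrative sketch assembled from the keys
added below, with made-up values; not verbatim tool output):

```json
{
  "NSID": 1,
  "Name": "nvme2n1",
  "Paths": [
    {
      "Path": "nvme2c2n1",
      "ANAState": "optimized",
      "NUMANodes": "1,2",
      "Qdepth": 0,
      "Controller": [
        {
          "Name": "nvme2",
          "Transport": "tcp",
          "Address": "traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1",
          "State": "live"
        }
      ]
    }
  ]
}
```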

Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
 nvme-print-binary.c |  1 +
 nvme-print-json.c   | 25 ++++++++++++++++------
 nvme-print-stdout.c | 52 +++++++++++++++++++++++++++++++++++++++++----
 nvme-print.c        |  4 +++-
 nvme-print.h        |  1 +
 nvme.c              |  4 +++-
 nvme.h              |  1 +
 7 files changed, 76 insertions(+), 12 deletions(-)

diff --git a/nvme-print-binary.c b/nvme-print-binary.c
index 351513b9..17def142 100644
--- a/nvme-print-binary.c
+++ b/nvme-print-binary.c
@@ -428,6 +428,7 @@ static struct print_ops binary_print_ops = {
 	.print_nvme_subsystem_list	= NULL,
 	.topology_ctrl			= NULL,
 	.topology_namespace		= NULL,
+	.topology_multipath		= NULL,
 
 	/* status and error messages */
 	.connect_msg			= NULL,
diff --git a/nvme-print-json.c b/nvme-print-json.c
index 829ba718..5ea968c9 100644
--- a/nvme-print-json.c
+++ b/nvme-print-json.c
@@ -4750,19 +4750,30 @@ static unsigned int json_subsystem_topology_multipath(nvme_subsystem_t s,
 
 		ns_attrs = json_create_object();
 		obj_add_int(ns_attrs, "NSID", nvme_ns_get_nsid(n));
+		obj_add_str(ns_attrs, "Name", nvme_ns_get_name(n));
 
 		paths = json_create_array();
 		nvme_namespace_for_each_path(n, p) {
 			struct json_object *path_attrs;
-
-			nvme_ctrl_t c = nvme_path_get_ctrl(p);
+			struct json_object *ctrls, *ctrl_attrs;
+			nvme_ctrl_t c;
 
 			path_attrs = json_create_object();
-			obj_add_str(path_attrs, "Name", nvme_ctrl_get_name(c));
-			obj_add_str(path_attrs, "Transport", nvme_ctrl_get_transport(c));
-			obj_add_str(path_attrs, "Address", nvme_ctrl_get_address(c));
-			obj_add_str(path_attrs, "State", nvme_ctrl_get_state(c));
+			obj_add_str(path_attrs, "Path", nvme_path_get_name(p));
 			obj_add_str(path_attrs, "ANAState", nvme_path_get_ana_state(p));
+			obj_add_str(path_attrs, "NUMANodes", nvme_path_get_numa_nodes(p));
+			obj_add_int(path_attrs, "Qdepth", nvme_path_get_queue_depth(p));
+
+			c = nvme_path_get_ctrl(p);
+			ctrls = json_create_array();
+			ctrl_attrs = json_create_object();
+			obj_add_str(ctrl_attrs, "Name", nvme_ctrl_get_name(c));
+			obj_add_str(ctrl_attrs, "Transport", nvme_ctrl_get_transport(c));
+			obj_add_str(ctrl_attrs, "Address", nvme_ctrl_get_address(c));
+			obj_add_str(ctrl_attrs, "State", nvme_ctrl_get_state(c));
+			array_add_obj(ctrls, ctrl_attrs);
+			obj_add_array(path_attrs, "Controller", ctrls);
+
 			array_add_obj(paths, path_attrs);
 		}
 		obj_add_array(ns_attrs, "Paths", paths);
@@ -4787,6 +4798,7 @@ static void json_print_nvme_subsystem_topology(nvme_subsystem_t s,
 
 			ns_attrs = json_create_object();
 			obj_add_int(ns_attrs, "NSID", nvme_ns_get_nsid(n));
+			obj_add_str(ns_attrs, "Name", nvme_ns_get_name(n));
 
 			ctrl = json_create_array();
 			ctrl_attrs = json_create_object();
@@ -5546,6 +5558,7 @@ static struct print_ops json_print_ops = {
 	.print_nvme_subsystem_list	= json_print_nvme_subsystem_list,
 	.topology_ctrl			= json_simple_topology,
 	.topology_namespace		= json_simple_topology,
+	.topology_multipath		= json_simple_topology,
 
 	/* status and error messages */
 	.connect_msg			= json_connect_msg,
diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
index eb56349a..ef9ee881 100644
--- a/nvme-print-stdout.c
+++ b/nvme-print-stdout.c
@@ -1126,6 +1126,8 @@ static void stdout_subsys_config(nvme_subsystem_t s)
 	       nvme_subsystem_get_nqn(s));
 	printf("%*s   hostnqn=%s\n", len, " ",
 	       nvme_host_get_hostnqn(nvme_subsystem_get_host(s)));
+	printf("%*s   iopolicy=%s\n", len, " ",
+		nvme_subsystem_get_iopolicy(s));
 
 	if (stdout_print_ops.flags & VERBOSE) {
 		printf("%*s   model=%s\n", len, " ",
@@ -1134,8 +1136,6 @@ static void stdout_subsys_config(nvme_subsystem_t s)
 			nvme_subsystem_get_serial(s));
 		printf("%*s   firmware=%s\n", len, " ",
 			nvme_subsystem_get_fw_rev(s));
-		printf("%*s   iopolicy=%s\n", len, " ",
-			nvme_subsystem_get_iopolicy(s));
 		printf("%*s   type=%s\n", len, " ",
 			nvme_subsystem_get_type(s));
 	}
@@ -5615,7 +5615,7 @@ static void stdout_subsystem_topology_multipath(nvme_subsystem_t s,
 				       nvme_path_get_ana_state(p));
 			}
 		}
-	} else {
+	} else if (ranking == NVME_CLI_TOPO_CTRL) {
 		/* NVME_CLI_TOPO_CTRL */
 		nvme_subsystem_for_each_ctrl(s, c) {
 			printf(" +- %s %s %s\n",
@@ -5636,6 +5636,27 @@ static void stdout_subsystem_topology_multipath(nvme_subsystem_t s,
 				}
 			}
 		}
+	} else {
+		/* NVME_CLI_TOPO_MULTIPATH */
+		nvme_subsystem_for_each_ns(s, n) {
+			printf(" +- %s (ns %d)\n",
+					nvme_ns_get_name(n),
+					nvme_ns_get_nsid(n));
+			printf(" \\\n");
+			nvme_namespace_for_each_path(n, p) {
+				c = nvme_path_get_ctrl(p);
+
+				printf("  +- %s %s %s %d %s %s %s %s\n",
+						nvme_path_get_name(p),
+						nvme_path_get_ana_state(p),
+						nvme_path_get_numa_nodes(p),
+						nvme_path_get_queue_depth(p),
+						nvme_ctrl_get_name(c),
+						nvme_ctrl_get_transport(c),
+						nvme_ctrl_get_address(c),
+						nvme_ctrl_get_state(c));
+			}
+		}
 	}
 }
 
@@ -5657,7 +5678,7 @@ static void stdout_subsystem_topology(nvme_subsystem_t s,
 				       nvme_ctrl_get_state(c));
 			}
 		}
-	} else {
+	} else if (ranking == NVME_CLI_TOPO_CTRL) {
 		/* NVME_CLI_TOPO_CTRL */
 		nvme_subsystem_for_each_ctrl(s, c) {
 			printf(" +- %s %s %s\n",
@@ -5671,6 +5692,23 @@ static void stdout_subsystem_topology(nvme_subsystem_t s,
 				       nvme_ctrl_get_state(c));
 			}
 		}
+	} else {
+		/* NVME_CLI_TOPO_MULTIPATH */
+		nvme_subsystem_for_each_ctrl(s, c) {
+			nvme_ctrl_for_each_ns(c, n) {
+				c = nvme_ns_get_ctrl(n);
+
+				printf(" +- %s (ns %d)\n",
+						nvme_ns_get_name(n),
+						nvme_ns_get_nsid(n));
+				printf(" \\\n");
+				printf("  +- %s %s %s %s\n",
+						nvme_ctrl_get_name(c),
+						nvme_ctrl_get_transport(c),
+						nvme_ctrl_get_address(c),
+						nvme_ctrl_get_state(c));
+			}
+		}
 	}
 }
 
@@ -5717,6 +5755,11 @@ static void stdout_topology_ctrl(nvme_root_t r)
 	stdout_simple_topology(r, NVME_CLI_TOPO_CTRL);
 }
 
+static void stdout_topology_multipath(nvme_root_t r)
+{
+	stdout_simple_topology(r, NVME_CLI_TOPO_MULTIPATH);
+}
+
 static void stdout_message(bool error, const char *msg, va_list ap)
 {
 	vfprintf(error ? stderr : stdout, msg, ap);
@@ -6286,6 +6329,7 @@ static struct print_ops stdout_print_ops = {
 	.print_nvme_subsystem_list	= stdout_subsystem_list,
 	.topology_ctrl			= stdout_topology_ctrl,
 	.topology_namespace		= stdout_topology_namespace,
+	.topology_multipath		= stdout_topology_multipath,
 
 	/* status and error messages */
 	.connect_msg			= stdout_connect_msg,
diff --git a/nvme-print.c b/nvme-print.c
index 3a71dffc..473a6814 100644
--- a/nvme-print.c
+++ b/nvme-print.c
@@ -1551,8 +1551,10 @@ void nvme_show_topology(nvme_root_t r,
 {
 	if (ranking == NVME_CLI_TOPO_NAMESPACE)
 		nvme_print(topology_namespace, flags, r);
-	else
+	else if (ranking == NVME_CLI_TOPO_CTRL)
 		nvme_print(topology_ctrl, flags, r);
+	else
+		nvme_print(topology_multipath, flags, r);
 }
 
 void nvme_show_message(bool error, const char *msg, ...)
diff --git a/nvme-print.h b/nvme-print.h
index 4ccf5e63..0f23b711 100644
--- a/nvme-print.h
+++ b/nvme-print.h
@@ -106,6 +106,7 @@ struct print_ops {
 	void (*print_nvme_subsystem_list)(nvme_root_t r, bool show_ana);
 	void (*topology_ctrl)(nvme_root_t r);
 	void (*topology_namespace)(nvme_root_t r);
+	void (*topology_multipath)(nvme_root_t r);
 
 	/* status and error messages */
 	void (*connect_msg)(nvme_ctrl_t c);
diff --git a/nvme.c b/nvme.c
index c6779cf4..86fb1d4b 100644
--- a/nvme.c
+++ b/nvme.c
@@ -10152,7 +10152,7 @@ static int tls_key(int argc, char **argv, struct command *command, struct plugin
 static int show_topology_cmd(int argc, char **argv, struct command *command, struct plugin *plugin)
 {
 	const char *desc = "Show the topology\n";
-	const char *ranking = "Ranking order: namespace|ctrl";
+	const char *ranking = "Ranking order: namespace|ctrl|multipath";
 	nvme_print_flags_t flags;
 	_cleanup_nvme_root_ nvme_root_t r = NULL;
 	char *devname = NULL;
@@ -10188,6 +10188,8 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
 		rank = NVME_CLI_TOPO_NAMESPACE;
 	} else if (!strcmp(cfg.ranking, "ctrl")) {
 		rank = NVME_CLI_TOPO_CTRL;
+	} else if (!strcmp(cfg.ranking, "multipath")) {
+		rank = NVME_CLI_TOPO_MULTIPATH;
 	} else {
 		nvme_show_error("Invalid ranking argument: %s", cfg.ranking);
 		return -EINVAL;
diff --git a/nvme.h b/nvme.h
index 248c5c7c..f02f39ca 100644
--- a/nvme.h
+++ b/nvme.h
@@ -46,6 +46,7 @@ typedef uint32_t nvme_print_flags_t;
 enum nvme_cli_topo_ranking {
 	NVME_CLI_TOPO_NAMESPACE,
 	NVME_CLI_TOPO_CTRL,
+	NVME_CLI_TOPO_MULTIPATH,
 };
 
 #define SYS_NVME "/sys/class/nvme"
-- 
2.50.1




* [PATCHv2 3/4] nvme: add common APIs for printing tabular format output
  2025-08-12 12:56 [PATCHv2 0/4] nvme-cli: enhance the visibility of multipath using show-topology command Nilay Shroff
  2025-08-12 12:56 ` [PATCHv2 1/4] nvme: support <device> option in " Nilay Shroff
  2025-08-12 12:56 ` [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath Nilay Shroff
@ 2025-08-12 12:56 ` Nilay Shroff
  2025-08-18  7:27   ` Hannes Reinecke
  2025-08-12 12:56 ` [PATCHv2 4/4] nvme: add support for printing show-topology in tabular form Nilay Shroff
  3 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-12 12:56 UTC (permalink / raw)
  To: linux-nvme; +Cc: dwagner, hare, kbusch, gjoyce

Some nvme-cli commands, such as nvme list and nvme list -v, support output
in tabular format. Currently, the table output is not well aligned because
column widths are fixed at print time, regardless of the length of the data
in each column. This often results in uneven and hard-to-read output. For
any new CLI command that requires tabular output, developers must manually
specify the column width and row value width, which is both error-prone and
inconsistent.

This patch introduces a set of common table APIs that:
- Automatically calculate column widths based on the content
- Maintain proper alignment regardless of value length
- Simplify adding tabular output support to new and existing commands

The new APIs are:
1. table_init() — Allocate a table instance.
2. table_add_columns() — Add column definitions (struct table_column),
   including name and alignment (LEFT, RIGHT, CENTERED).
3. table_get_row_id() — Reserve a row index for inserting values.
4. table_add_row() — Add a row to the table.
5. table_print() — Print the table with auto-calculated widths.
6. table_free() — Free resources allocated for the table.

For adding values, the following setter APIs are provided, each
supporting alignment types (LEFT, RIGHT, or CENTERED):
- table_set_value_str()
- table_set_value_int()
- table_set_value_unsigned()
- table_set_value_long()
- table_set_value_unsigned_long()

Usage steps:
1. Call table_init() to create a table handle.
2. Define an array of struct table_column specifying column names and
   alignment, then call table_add_columns().
3. Obtain a row ID using table_get_row_id() and set values using the
   appropriate table_set_value_*() setter functions.
4. Add the completed row using table_add_row().
5. Repeat steps 3–4 for each additional row.
6. Call table_print() to display the table.
7. Call table_free() to release table resources.

With these APIs, developers no longer need to pre-calculate column or row
widths. The output is consistently aligned and easy to read.

Suggested-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
 util/meson.build |   3 +-
 util/table.c     | 278 +++++++++++++++++++++++++++++++++++++++++++++++
 util/table.h     | 117 ++++++++++++++++++++
 3 files changed, 397 insertions(+), 1 deletion(-)
 create mode 100644 util/table.c
 create mode 100644 util/table.h

diff --git a/util/meson.build b/util/meson.build
index 75aed49c..5b402b1a 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -8,7 +8,8 @@ sources += [
   'util/sighdl.c',
   'util/suffix.c',
   'util/types.c',
-  'util/utils.c'
+  'util/utils.c',
+  'util/table.c'
 ]
 
 if json_c_dep.found()
diff --git a/util/table.c b/util/table.c
new file mode 100644
index 00000000..48918b37
--- /dev/null
+++ b/util/table.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * table.c : Common APIs for printing tabular format output.
+ *
+ * Copyright (c) 2025 Nilay Shroff, IBM
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <string.h>
+
+#include "table.h"
+
+static int table_get_value_width(struct value *v)
+{
+	char buf[64];
+	int len = 0;
+
+	switch (v->type) {
+	case FMT_STRING:
+		len = strlen((const char *)v->s);
+		break;
+	case FMT_INT:
+		len = snprintf(buf, sizeof(buf), "%d", v->i);
+		break;
+	case FMT_UNSIGNED:
+		len = snprintf(buf, sizeof(buf), "%u", v->u);
+		break;
+	case FMT_LONG:
+		len = snprintf(buf, sizeof(buf), "%ld", v->ld);
+		break;
+	case FMT_UNSIGNED_LONG:
+		len = snprintf(buf, sizeof(buf), "%lu", v->lu);
+		break;
+	default:
+		fprintf(stderr, "Invalid print format!\n");
+		break;
+	}
+	return len;
+}
+
+static void table_print_centered(struct value *val, int width, enum fmt_type type)
+{
+	int i, len, left_pad, right_pad;
+	char buf[64];
+
+	if (type == FMT_STRING)
+		len = strlen(val->s);
+	else if (type == FMT_INT)
+		len = snprintf(buf, sizeof(buf), "%d", val->i);
+	else if (type == FMT_UNSIGNED)
+		len = snprintf(buf, sizeof(buf), "%u", val->u);
+	else if (type == FMT_LONG)
+		len = snprintf(buf, sizeof(buf), "%ld", val->ld);
+	else if (type == FMT_UNSIGNED_LONG)
+		len = snprintf(buf, sizeof(buf), "%lu", val->lu);
+	else {
+		fprintf(stderr, "Invalid format!\n");
+		return;
+	}
+
+	left_pad = (width - len) / 2;
+	right_pad = width - len - left_pad;
+
+	/* add left padding */
+	for (i = 0; i < left_pad; i++)
+		putchar(' ');
+
+	/* print value */
+	if (type == FMT_STRING)
+		printf("%s ", val->s);
+	else if (type == FMT_INT)
+		printf("%d ", val->i);
+	else if (type == FMT_UNSIGNED)
+		printf("%u ", val->u);
+	else if (type == FMT_LONG)
+		printf("%ld ", val->ld);
+	else if (type == FMT_UNSIGNED_LONG)
+		printf("%lu ", val->lu);
+
+	/* add right padding */
+	for (i = 0; i < right_pad; i++)
+		putchar(' ');
+}
+
+static void table_print_columns(const struct table *t)
+{
+	int col, j, width;
+	struct table_column *c;
+	struct value v;
+
+	for (col = 0; col < t->num_columns; col++) {
+		c = &t->columns[col];
+		width = c->width;
+		if (c->align == LEFT)
+			width *= -1;
+
+		if (c->align == CENTERED) {
+			v.s = c->name;
+			v.align = c->align;
+			table_print_centered(&v, width, FMT_STRING);
+		} else
+			printf("%*s ", width, c->name);
+	}
+
+	printf("\n");
+
+	for (col = 0; col < t->num_columns; col++) {
+		for (j = 0; j < t->columns[col].width; j++)
+			putchar('-');
+		printf(" ");
+	}
+
+	printf("\n");
+}
+
+static void table_print_rows(const struct table *t)
+{
+	int row, col;
+	struct table_column *c;
+	struct table_row *r;
+	int width;
+	struct value *v;
+
+	for (row = 0; row < t->num_rows; row++) {
+		for (col = 0; col < t->num_columns; col++) {
+			c = &t->columns[col];
+			r = &t->rows[row];
+			v = &r->val[col];
+
+			width = c->width;
+			if (v->align == LEFT)
+				width *= -1;
+
+			if (v->align == CENTERED)
+				table_print_centered(v, width, v->type);
+			else {
+				switch (v->type) {
+				case FMT_STRING:
+					printf("%*s ", width, v->s);
+					break;
+
+				case FMT_INT:
+					printf("%*d ", width, v->i);
+					break;
+
+				case FMT_UNSIGNED:
+					printf("%*u ", width, v->u);
+					break;
+
+				case FMT_LONG:
+					printf("%*ld ", width, v->ld);
+					break;
+
+				case FMT_UNSIGNED_LONG:
+					printf("%*lu ", width, v->lu);
+					break;
+
+				default:
+					fprintf(stderr, "Invalid format!\n");
+					break;
+				}
+			}
+		}
+		printf("\n");
+	}
+}
+
+void table_print(struct table *t)
+{
+	/* first print columns */
+	table_print_columns(t);
+
+	/* next print rows */
+	table_print_rows(t);
+}
+
+int table_get_row_id(struct table *t)
+{
+	int row = t->num_rows;
+
+	t->rows = reallocarray(t->rows, (row + 1), sizeof(struct table_row));
+	if (!t->rows)
+		return -ENOMEM;
+
+	t->rows[row].val = calloc(t->num_columns, sizeof(struct value));
+	if (!t->rows[row].val) {
+		free(t->rows);
+		return -ENOMEM;
+	}
+
+	t->num_rows++;
+	return row;
+}
+
+void table_add_row(struct table *t, int row_id)
+{
+	int col, max_width, width;
+	struct table_row *row = &t->rows[row_id];
+
+	for (col = 0; col < t->num_columns; col++) {
+		max_width = t->columns[col].width;
+		width = table_get_value_width(&row->val[col]);
+		if (width > max_width)
+			t->columns[col].width = width;
+	}
+}
+
+struct table *table_init(void)
+{
+	struct table *t;
+
+	t = calloc(1, sizeof(struct table));
+	if (!t)
+		return NULL;
+
+	return t;
+}
+
+int table_add_columns(struct table *t, struct table_column *c, int num_columns)
+{
+	int col;
+
+	t->columns = calloc(num_columns, sizeof(struct table_column));
+	if (!t->columns)
+		return -ENOMEM;
+
+	t->num_columns = num_columns;
+
+	for (col = 0; col < num_columns; col++) {
+		t->columns[col].name = strdup(c[col].name);
+		if (!t->columns[col].name)
+			goto free_col;
+
+		t->columns[col].align = c[col].align;
+		t->columns[col].width = strlen(t->columns[col].name);
+	}
+
+	t->rows = NULL;
+	t->num_rows = 0;
+
+	return 0;
+free_col:
+	while (--col >= 0)
+		free(t->columns[col].name);
+	free(t->columns);
+	return -ENOMEM;
+}
+
+void table_free(struct table *t)
+{
+	int row, col;
+	struct table_row *r;
+	struct value *v;
+
+	/* free rows */
+	for (row = 0; row < t->num_rows; row++) {
+		r = &t->rows[row];
+		for (col = 0; col < t->num_columns; col++) {
+			v = &r->val[col];
+
+			if (v->type == FMT_STRING)
+				free(v->s);
+		}
+		free(r->val);
+	}
+	free(t->rows);
+
+	/* free columns */
+	for (col = 0; col < t->num_columns; col++)
+		free(t->columns[col].name);
+	free(t->columns);
+
+	/* free table */
+	free(t);
+}
diff --git a/util/table.h b/util/table.h
new file mode 100644
index 00000000..7477eba4
--- /dev/null
+++ b/util/table.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _TABLE_H_
+#define _TABLE_H_
+
+enum fmt_type {
+	FMT_STRING,
+	FMT_INT,
+	FMT_UNSIGNED,
+	FMT_LONG,
+	FMT_UNSIGNED_LONG,
+};
+
+enum alignment {
+	RIGHT,
+	LEFT,
+	CENTERED
+};
+
+struct value {
+	union {
+		char *s;
+		int i;
+		unsigned int u;
+		long ld;
+		unsigned long lu;
+	};
+	enum alignment align;
+	enum fmt_type type;
+};
+
+struct table_row {
+	struct value *val;
+};
+
+struct table_column {
+	char *name;
+	enum alignment align;
+	int width;		/* auto populated */
+};
+
+struct table {
+	struct table_column *columns;
+	int num_columns;
+	struct table_row *rows;
+	int num_rows;
+};
+
+static inline int table_set_value_str(struct table *t, int col,
+		int row, const char *str, enum alignment align)
+{
+	struct table_row *r = &t->rows[row];
+	struct value *v = &r->val[col];
+	char *s;
+
+	s = strdup(str);
+	if (!s)
+		return -ENOMEM;
+
+	v->s = s;
+	v->align = align;
+	v->type = FMT_STRING;
+
+	return 0;
+}
+
+static inline void table_set_value_int(struct table *t, int col,
+		int row, int i, enum alignment align)
+{
+	struct table_row *r = &t->rows[row];
+	struct value *v = &r->val[col];
+
+	v->i = i;
+	v->align = align;
+	v->type = FMT_INT;
+}
+
+static inline void table_set_value_unsigned(struct table *t, int col,
+		int row, unsigned int u, enum alignment align)
+{
+	struct table_row *r = &t->rows[row];
+	struct value *v = &r->val[col];
+
+	v->u = u;
+	v->align = align;
+	v->type = FMT_UNSIGNED;
+}
+
+static inline void table_set_value_long(struct table *t, int col,
+		int row, long ld, enum alignment align)
+{
+	struct table_row *r = &t->rows[row];
+	struct value *v = &r->val[col];
+
+	v->ld = ld;
+	v->align = align;
+	v->type = FMT_LONG;
+}
+
+static inline void table_set_value_unsigned_long(struct table *t, int col,
+		int row, unsigned long lu, enum alignment align)
+{
+	struct table_row *r = &t->rows[row];
+	struct value *v = &r->val[col];
+
+	v->lu = lu;
+	v->align = align;
+	v->type = FMT_UNSIGNED_LONG;
+}
+
+struct table *table_init(void);
+int table_add_columns(struct table *t, struct table_column *c, int num_columns);
+int table_get_row_id(struct table *t);
+void table_add_row(struct table *t, int row);
+void table_print(struct table *t);
+void table_free(struct table *t);
+
+#endif /* _TABLE_H_ */
-- 
2.50.1




* [PATCHv2 4/4] nvme: add support for printing show-topology in tabular form
  2025-08-12 12:56 [PATCHv2 0/4] nvme-cli: enhance the visibility of multipath using show-topology command Nilay Shroff
                   ` (2 preceding siblings ...)
  2025-08-12 12:56 ` [PATCHv2 3/4] nvme: add common APIs for printing tabular format output Nilay Shroff
@ 2025-08-12 12:56 ` Nilay Shroff
  3 siblings, 0 replies; 26+ messages in thread
From: Nilay Shroff @ 2025-08-12 12:56 UTC (permalink / raw)
  To: linux-nvme; +Cc: dwagner, hare, kbusch, gjoyce

The nvme CLI command show-topology currently prints the topology
output in a tree format. However, in some cases it is more convenient
and easier to read or interpret the output when displayed in a tabular
form.

This patch adds support for printing the show-topology output in a
tabular format. To achieve this, the --output-format option, which
previously supported only normal and json formats, is extended to
include a new tabular format.

With this change, the user can now choose to print the topology in any
of the following formats: normal, json, or tabular. The new tabular
output leverages the recently introduced table APIs to produce well-
aligned, easy-to-read output.

Suggested-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
 nvme-print-stdout.c | 172 ++++++++++++++++++++++++++++++++++++++++++++
 nvme-print.c        |   5 ++
 nvme-print.h        |   2 +
 nvme.c              |   8 ++-
 nvme.h              |   1 +
 5 files changed, 187 insertions(+), 1 deletion(-)

diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
index ef9ee881..fef688a3 100644
--- a/nvme-print-stdout.c
+++ b/nvme-print-stdout.c
@@ -20,6 +20,7 @@
 #include "nvme-models.h"
 #include "util/suffix.h"
 #include "util/types.h"
+#include "util/table.h"
 #include "logging.h"
 #include "common.h"
 
@@ -5589,6 +5590,88 @@ static void stdout_list_items(nvme_root_t r)
 		stdout_simple_list(r);
 }
 
+static void stdout_tabular_subsystem_topology_multipath(nvme_subsystem_t s)
+{
+	nvme_ns_t n;
+	nvme_path_t p;
+	nvme_ctrl_t c;
+	int row;
+	bool first;
+	struct table *t;
+	struct table_column columns[] = {
+		{"NSHead", LEFT, 0},
+		{"NSID", LEFT, 0},
+		{"NSPath", LEFT, 0},
+		{"ANAState", LEFT, 0},
+		{"Nodes", LEFT, 0},
+		{"Qdepth", LEFT, 0},
+		{"Controller", LEFT, 0},
+		{"TrType", LEFT, 0},
+		{"Address", LEFT, 0},
+		{"State", LEFT, 0},
+	};
+
+	t = table_init();
+	if (!t) {
+		printf("Failed to init table\n");
+		return;
+	}
+	if (table_add_columns(t, columns, ARRAY_SIZE(columns)) < 0) {
+		printf("Failed to add columns\n");
+		goto free_tbl;
+	}
+
+	nvme_subsystem_for_each_ns(s, n) {
+		first = true;
+		nvme_namespace_for_each_path(n, p) {
+			c = nvme_path_get_ctrl(p);
+
+			row = table_get_row_id(t);
+			if (row < 0) {
+				printf("Failed to add row\n");
+				goto free_tbl;
+			}
+			/* For the first row we print the actual NSHead name;
+			 * for subsequent rows we print an arrow ("-->")
+			 * symbol for NSHead. This makes it visually obvious
+			 * that subsequent entries (if present) are paths
+			 * under the first NSHead.
+			 */
+			if (first)
+				table_set_value_str(t, 0, row,
+						nvme_ns_get_name(n), LEFT);
+			else
+				table_set_value_str(t, 0, row,
+						"-->", CENTERED);
+
+			table_set_value_int(t, 1, row,
+					nvme_ns_get_nsid(n), CENTERED);
+			table_set_value_str(t, 2, row,
+					nvme_path_get_name(p), LEFT);
+			table_set_value_str(t, 3, row,
+					nvme_path_get_ana_state(p), LEFT);
+			table_set_value_str(t, 4, row,
+					nvme_path_get_numa_nodes(p), CENTERED);
+			table_set_value_int(t, 5, row,
+					nvme_path_get_queue_depth(p), CENTERED);
+			table_set_value_str(t, 6, row,
+					nvme_ctrl_get_name(c), LEFT);
+			table_set_value_str(t, 7, row,
+					nvme_ctrl_get_transport(c), LEFT);
+			table_set_value_str(t, 8, row,
+					nvme_ctrl_get_address(c), LEFT);
+			table_set_value_str(t, 9, row,
+					nvme_ctrl_get_state(c), LEFT);
+
+			table_add_row(t, row);
+			first = false;
+		}
+	}
+	table_print(t);
+free_tbl:
+	table_free(t);
+}
+
 static void stdout_subsystem_topology_multipath(nvme_subsystem_t s,
 						     enum nvme_cli_topo_ranking ranking)
 {
@@ -5660,6 +5743,62 @@ static void stdout_subsystem_topology_multipath(nvme_subsystem_t s,
 	}
 }
 
+static void stdout_tabular_subsystem_topology(nvme_subsystem_t s)
+{
+	nvme_ctrl_t c;
+	nvme_ns_t n;
+	int row;
+	struct table *t;
+	struct table_column columns[] = {
+		{"Namespace", LEFT, 0},
+		{"NSID", LEFT, 0},
+		{"Controller", LEFT, 0},
+		{"Trtype", LEFT, 0},
+		{"Address", LEFT, 0},
+		{"State", LEFT, 0},
+	};
+
+	t = table_init();
+	if (!t) {
+		printf("Failed to init table\n");
+		return;
+	}
+
+	if (table_add_columns(t, columns, ARRAY_SIZE(columns)) < 0) {
+		printf("Failed to add columns\n");
+		goto free_tbl;
+	}
+
+	nvme_subsystem_for_each_ctrl(s, c) {
+		nvme_ctrl_for_each_ns(c, n) {
+			c = nvme_ns_get_ctrl(n);
+
+			row = table_get_row_id(t);
+			if (row < 0) {
+				printf("Failed to add row\n");
+				goto free_tbl;
+			}
+			table_set_value_str(t, 0, row,
+					nvme_ns_get_name(n), LEFT);
+			table_set_value_int(t, 1, row,
+					nvme_ns_get_nsid(n), CENTERED);
+			table_set_value_str(t, 2, row,
+					nvme_ctrl_get_name(c), LEFT);
+			table_set_value_str(t, 3, row,
+					nvme_ctrl_get_transport(c), LEFT);
+			table_set_value_str(t, 4, row,
+					nvme_ctrl_get_address(c), LEFT);
+			table_set_value_str(t, 5, row,
+					nvme_ctrl_get_state(c), LEFT);
+
+			table_add_row(t, row);
+		}
+	}
+	table_print(t);
+free_tbl:
+	table_free(t);
+}
+
 static void stdout_subsystem_topology(nvme_subsystem_t s,
 					   enum nvme_cli_topo_ranking ranking)
 {
@@ -5712,6 +5851,38 @@ static void stdout_subsystem_topology(nvme_subsystem_t s,
 	}
 }
 
+static void stdout_topology_tabular(nvme_root_t r)
+{
+	nvme_host_t h;
+	nvme_subsystem_t s;
+	bool first = true;
+
+	nvme_for_each_host(r, h) {
+		nvme_for_each_subsystem(h, s) {
+			bool no_ctrl = true;
+			nvme_ctrl_t c;
+
+			nvme_subsystem_for_each_ctrl(s, c)
+				no_ctrl = false;
+
+			if (no_ctrl)
+				continue;
+
+			if (!first)
+				printf("\n");
+			first = false;
+
+			stdout_subsys_config(s);
+			printf("\n");
+
+			if (nvme_is_multipath(s))
+				stdout_tabular_subsystem_topology_multipath(s);
+			else
+				stdout_tabular_subsystem_topology(s);
+		}
+	}
+}
+
 static void stdout_simple_topology(nvme_root_t r,
 				   enum nvme_cli_topo_ranking ranking)
 {
@@ -6330,6 +6501,7 @@ static struct print_ops stdout_print_ops = {
 	.topology_ctrl			= stdout_topology_ctrl,
 	.topology_namespace		= stdout_topology_namespace,
 	.topology_multipath		= stdout_topology_multipath,
+	.topology_tabular		= stdout_topology_tabular,
 
 	/* status and error messages */
 	.connect_msg			= stdout_connect_msg,
diff --git a/nvme-print.c b/nvme-print.c
index 473a6814..d1af8284 100644
--- a/nvme-print.c
+++ b/nvme-print.c
@@ -1557,6 +1557,11 @@ void nvme_show_topology(nvme_root_t r,
 		nvme_print(topology_multipath, flags, r);
 }
 
+void nvme_show_topology_tabular(nvme_root_t r, nvme_print_flags_t flags)
+{
+	nvme_print(topology_tabular, flags, r);
+}
+
 void nvme_show_message(bool error, const char *msg, ...)
 {
 	struct print_ops *ops = nvme_print_ops(NORMAL);
diff --git a/nvme-print.h b/nvme-print.h
index 0f23b711..a7982566 100644
--- a/nvme-print.h
+++ b/nvme-print.h
@@ -107,6 +107,7 @@ struct print_ops {
 	void (*topology_ctrl)(nvme_root_t r);
 	void (*topology_namespace)(nvme_root_t r);
 	void (*topology_multipath)(nvme_root_t r);
+	void (*topology_tabular)(nvme_root_t r);
 
 	/* status and error messages */
 	void (*connect_msg)(nvme_ctrl_t c);
@@ -251,6 +252,7 @@ void nvme_show_list_ns(struct nvme_ns_list *ns_list,
 void nvme_show_topology(nvme_root_t t,
 			enum nvme_cli_topo_ranking ranking,
 			nvme_print_flags_t flags);
+void nvme_show_topology_tabular(nvme_root_t t, nvme_print_flags_t flags);
 
 void nvme_feature_show(enum nvme_features_id fid, int sel, unsigned int result);
 void nvme_feature_show_fields(enum nvme_features_id fid, unsigned int result, unsigned char *buf);
diff --git a/nvme.c b/nvme.c
index 86fb1d4b..c221bf24 100644
--- a/nvme.c
+++ b/nvme.c
@@ -544,6 +544,8 @@ int validate_output_format(const char *format, nvme_print_flags_t *flags)
 #endif /* CONFIG_JSONC */
 	else if (!strcmp(format, "binary"))
 		f = BINARY;
+	else if (!strcmp(format, "tabular"))
+		f = TABULAR;
 	else
 		return -EINVAL;
 
@@ -10152,6 +10154,7 @@ static int tls_key(int argc, char **argv, struct command *command, struct plugin
 static int show_topology_cmd(int argc, char **argv, struct command *command, struct plugin *plugin)
 {
 	const char *desc = "Show the topology\n";
+	const char *output_format = "Output format: normal|json|binary|tabular";
 	const char *ranking = "Ranking order: namespace|ctrl|multipath";
 	nvme_print_flags_t flags;
 	_cleanup_nvme_root_ nvme_root_t r = NULL;
@@ -10220,7 +10223,10 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
 		return err;
 	}
 
-	nvme_show_topology(r, rank, flags);
+	if (flags & TABULAR)
+		nvme_show_topology_tabular(r, flags);
+	else
+		nvme_show_topology(r, rank, flags);
 
 	return err;
 }
diff --git a/nvme.h b/nvme.h
index f02f39ca..c4f0f0cf 100644
--- a/nvme.h
+++ b/nvme.h
@@ -39,6 +39,7 @@ enum nvme_print_flags {
 	JSON		= 1 << 1,	/* display in json format */
 	VS		= 1 << 2,	/* hex dump vendor specific data areas */
 	BINARY		= 1 << 3,	/* binary dump raw bytes */
+	TABULAR		= 1 << 4,	/* prints aligned columns for easy reading */
 };
 
 typedef uint32_t nvme_print_flags_t;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 26+ messages in thread
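
[Editor's note: the --output-format handling added in the patch above can be
sketched in isolation. This is an illustrative subset mirroring the
validate_output_format() change, not the exact nvme-cli code; the FMT_*
names are hypothetical.]

```c
#include <errno.h>
#include <string.h>

/* Illustrative subset of nvme-cli's print flags; the values mirror the
 * enum nvme_print_flags change in nvme.h, but the names are made up. */
enum {
	FMT_NORMAL  = 0,
	FMT_JSON    = 1 << 1,
	FMT_BINARY  = 1 << 3,
	FMT_TABULAR = 1 << 4,	/* aligned columns for easy reading */
};

/* Map an --output-format string to a flag; -EINVAL if unknown. */
static int parse_output_format(const char *format, unsigned int *flags)
{
	if (!strcmp(format, "normal"))
		*flags = FMT_NORMAL;
	else if (!strcmp(format, "json"))
		*flags = FMT_JSON;
	else if (!strcmp(format, "binary"))
		*flags = FMT_BINARY;
	else if (!strcmp(format, "tabular"))
		*flags = FMT_TABULAR;
	else
		return -EINVAL;
	return 0;
}
```

A caller then branches on the flag, as show_topology_cmd() does with
`flags & TABULAR` in the patch above.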

* Re: [PATCHv2 1/4] nvme: support <device> option in show-topology command
  2025-08-12 12:56 ` [PATCHv2 1/4] nvme: support <device> option in " Nilay Shroff
@ 2025-08-18  7:12   ` Hannes Reinecke
  2025-08-19  4:43     ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-18  7:12 UTC (permalink / raw)
  To: Nilay Shroff, linux-nvme; +Cc: dwagner, kbusch, gjoyce

On 8/12/25 14:56, Nilay Shroff wrote:
> Although the help text for the nvme show-topology command indicates
> support for a <device> option, this option has no effect in practice
> — specifying an NVMe device name does not filter the output.
> 
> This commit adds proper support for the <device> option, enabling users
> to filter the topology output based on the specified NVMe device.
> 
> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
> ---
>   nvme-print-stdout.c |  9 +++++++++
>   nvme.c              | 17 ++++++++++++++++-
>   2 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
> index c597b608..eb56349a 100644
> --- a/nvme-print-stdout.c
> +++ b/nvme-print-stdout.c
> @@ -5683,6 +5683,15 @@ static void stdout_simple_topology(nvme_root_t r,
>   
>   	nvme_for_each_host(r, h) {
>   		nvme_for_each_subsystem(h, s) {
> +			bool no_ctrl = true;
> +			nvme_ctrl_t c;
> +
> +			nvme_subsystem_for_each_ctrl(s, c)
> +				no_ctrl = false;
> +
> +			if (no_ctrl)
> +				continue;
> +
>   			if (!first)
>   				printf("\n");
>   			first = false;
> diff --git a/nvme.c b/nvme.c
> index 27dac37b..c6779cf4 100644
> --- a/nvme.c
> +++ b/nvme.c
> @@ -10155,6 +10155,8 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
>   	const char *ranking = "Ranking order: namespace|ctrl";
>   	nvme_print_flags_t flags;
>   	_cleanup_nvme_root_ nvme_root_t r = NULL;
> +	char *devname = NULL;
> +	nvme_scan_filter_t filter = NULL;
>   	enum nvme_cli_topo_ranking rank;
>   	int err;
>   
> @@ -10197,7 +10199,20 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
>   		return -errno;
>   	}
>   
> -	err = nvme_scan_topology(r, NULL, NULL);
> +	if (optind < argc)
> +		devname = basename(argv[optind++]);
> +
> +	if (devname) {
> +		int subsys_id, nsid;
> +
> +		if (sscanf(devname, "nvme%dn%d", &subsys_id, &nsid) != 2) {
> +			nvme_show_error("Invalid device name %s\n", devname);
> +			return -EINVAL;
> +		}
> +		filter = nvme_match_device_filter;
> +	}
> +
> +	err = nvme_scan_topology(r, filter, (void *)devname);
>   	if (err < 0) {
>   		nvme_show_error("Failed to scan topology: %s", nvme_strerror(errno));
>   		return err;

While I welcome this change, we've had discussions that
'nvme show-topology <device>' really should _not_ print out
anything if the device does not exist.
Have you checked this for your patch?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread
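
[Editor's note: the device-name check in the quoted hunk can be illustrated
standalone. The helper name below is hypothetical; the real code performs
the sscanf() check inline before selecting nvme_match_device_filter.]

```c
#include <errno.h>
#include <stdio.h>

/* Accept only head-node style names, "nvme<subsys>n<nsid>"
 * (e.g. "nvme0n1"). Hypothetical helper mirroring the sscanf()
 * check in the quoted show_topology_cmd() hunk. */
static int check_topology_devname(const char *devname)
{
	int subsys_id, nsid;

	if (sscanf(devname, "nvme%dn%d", &subsys_id, &nsid) != 2)
		return -EINVAL;
	return 0;
}
```

Note that sscanf() only anchors the prefix: a name such as "nvme0n1p1"
would still match, so the sketch keeps the quoted patch's behaviour
rather than tightening it.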

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-12 12:56 ` [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath Nilay Shroff
@ 2025-08-18  7:22   ` Hannes Reinecke
  2025-08-19  4:49     ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-18  7:22 UTC (permalink / raw)
  To: Nilay Shroff, linux-nvme; +Cc: dwagner, kbusch, gjoyce

On 8/12/25 14:56, Nilay Shroff wrote:
> This commit enhances the show-topology command by adding support for
> NVMe multipath. With this change, users can now list all paths to a
> namespace from its corresponding head node device. Each NVMe path
> entry then also includes additional details such as ANA state, NUMA
> node, and queue depth, improving visibility into multipath configs.
> This information can be particularly helpful for debugging and
> analyzing NVMe multipath setups.
> 
> To support this functionality, the "--ranking" option of the nvme
> show-topology command has been extended with a new sub-option:
> "multipath".
> 
> Since this enhancement is specific to NVMe multipath, the iopolicy
> configured under each subsystem is now always displayed. Previously,
> iopolicy was shown only with nvme show-topology verbose output, but
> it is now included by default to improve usability and provide better
> context when reviewing multipath configurations via show-topology.
> 
> With this update, users can view the multipath topology of a
> multi-controller/port NVMe disk using:
> 
> $ nvme show-topology -r multipath
> 
> nvme-subsys2 - NQN=nvmet_subsystem
>                 hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
>                 iopolicy=numa
> 
>            _ _ _<head-node>
>           /              _ _ _ <ana-state>
>          /              /      _ _ _ <numa-node-list>
>         /              /      /  _ _ _<queue-depth>
>        |              /      /  /
>   +- nvme2n1 (ns 1)  /      /  /
>   \                 |      |  |
>    +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
>    +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live
> 
> Please note that the annotations shown above (e.g., <numa-node-list>,
> <ana-state>, <head-node>, and <queue-depth>) are included for clarity
> only and are not part of the actual output.
> 

Hmm. Why do we have the values for 'numa-node-list' and 'queue-depth'
both in here? They are tied to the selected IO policy, and pretty
meaningless if that IO policy is not selected.
Please include only the values relevant for the selected IO policy;
this will increase readability of the resulting status string.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 3/4] nvme: add common APIs for printing tabular format output
  2025-08-12 12:56 ` [PATCHv2 3/4] nvme: add common APIs for printing tabular format output Nilay Shroff
@ 2025-08-18  7:27   ` Hannes Reinecke
  2025-08-19  8:56     ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-18  7:27 UTC (permalink / raw)
  To: Nilay Shroff, linux-nvme; +Cc: dwagner, kbusch, gjoyce

On 8/12/25 14:56, Nilay Shroff wrote:
> Some nvme-cli commands, such as nvme list and nvme list -v, support output
> in tabular format. Currently, the table output is not well aligned because
> column widths are fixed at print time, regardless of the length of the data
> in each column. This often results in uneven and hard-to-read output. For
> any new CLI command that requires tabular output, developers must manually
> specify the column width and row value width, which is both error-prone and
> inconsistent.
> 
> This patch introduces a set of common table APIs that:
> - Automatically calculate column widths based on the content
> - Maintain proper alignment regardless of value length
> - Simplify adding tabular output support to new and existing commands
> 
> The new APIs are:
> 1. table_init() — Allocate a table instance.
> 2. table_add_columns() — Add column definitions (struct table_column),
>     including name and alignment (LEFT, RIGHT, CENTERED).
> 3. table_get_row_id() — Reserve a row index for inserting values.
> 4. table_add_row() — Add a row to the table.
> 5. table_print() — Print the table with auto-calculated widths.
> 6. table_free() — Free resources allocated for the table.
> 
> For adding values, the following setter APIs are provided, each
> supporting alignment types (LEFT, RIGHT, or CENTERED):
> - table_set_value_str()
> - table_set_value_int()
> - table_set_value_unsigned()
> - table_set_value_long()
> - table_set_value_unsigned_long()
> 
> Usage steps:
> 1. Call table_init() to create a table handle.
> 2. Define an array of struct table_column specifying column names and
>     alignment, then call table_add_columns().
>> 3. Obtain a row ID using table_get_row_id() and set values using the
>>     appropriate table_set_value_*() setter API.
>> 4. Add the completed row using table_add_row().
>> 5. Repeat steps 3–4 for each additional row.
>> 6. Call table_print() to display the table.
>> 7. Call table_free() to release table resources.
> 
> With these APIs, developers no longer need to pre-calculate column or row
> widths. The output is consistently aligned and easy to read.
> 
> Suggested-by: Daniel Wagner <dwagner@suse.de>
> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
> ---
>   util/meson.build |   3 +-
>   util/table.c     | 278 +++++++++++++++++++++++++++++++++++++++++++++++
>   util/table.h     | 117 ++++++++++++++++++++
>   3 files changed, 397 insertions(+), 1 deletion(-)
>   create mode 100644 util/table.c
>   create mode 100644 util/table.h
> 
Sheesh. Can o'worms.

Displaying a table on an ASCII terminal is black magic (check
manpage for ncurses ...), and I'm pretty certain that printing
spaces for alignment is _not_ the way to go.
If you really want to implement this then at the very least use
tabs. Ideally check with the ncurses manpage how one can display
a table with the correct escape characters.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread
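
[Editor's note: for context on the alignment question, the auto-width idea
from the patch description can be sketched in a few lines of plain C —
widths computed from content before printing, with space padding. All names
here are illustrative, not util/table.c's actual API.]

```c
#include <stdio.h>
#include <string.h>

#define NCOLS 3

/* Column width = longest cell in that column (header row included),
 * computed up front so callers never hard-code field widths. */
static void compute_widths(int nrows, const char *cells[][NCOLS],
			   size_t width[NCOLS])
{
	for (int c = 0; c < NCOLS; c++) {
		width[c] = 0;
		for (int r = 0; r < nrows; r++) {
			size_t len = strlen(cells[r][c]);
			if (len > width[c])
				width[c] = len;
		}
	}
}

/* LEFT-aligned print using the computed widths ("%-*s" padding). */
static void print_row(const char *row[NCOLS], const size_t width[NCOLS])
{
	for (int c = 0; c < NCOLS; c++)
		printf("%-*s  ", (int)width[c], row[c]);
	putchar('\n');
}
```

With a header row {"Namespace", "NSID", "State"} and data rows
{"nvme2n1", "1", "live"} and {"nvme2c3n1", "12", "connecting"}, the
computed widths are 9, 4, and 10, so every row lines up regardless of
value length.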

* Re: [PATCHv2 1/4] nvme: support <device> option in show-topology command
  2025-08-18  7:12   ` Hannes Reinecke
@ 2025-08-19  4:43     ` Nilay Shroff
  2025-08-19  6:11       ` Hannes Reinecke
  0 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-19  4:43 UTC (permalink / raw)
  To: Hannes Reinecke, linux-nvme; +Cc: dwagner, kbusch, gjoyce



On 8/18/25 12:42 PM, Hannes Reinecke wrote:
> On 8/12/25 14:56, Nilay Shroff wrote:
>> Although the help text for the nvme show-topology command indicates
>> support for a <device> option, this option has no effect in practice
>> — specifying an NVMe device name does not filter the output.
>>
>> This commit adds proper support for the <device> option, enabling users
>> to filter the topology output based on the specified NVMe device.
>>
>> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
>> ---
>>   nvme-print-stdout.c |  9 +++++++++
>>   nvme.c              | 17 ++++++++++++++++-
>>   2 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
>> index c597b608..eb56349a 100644
>> --- a/nvme-print-stdout.c
>> +++ b/nvme-print-stdout.c
>> @@ -5683,6 +5683,15 @@ static void stdout_simple_topology(nvme_root_t r,
>>         nvme_for_each_host(r, h) {
>>           nvme_for_each_subsystem(h, s) {
>> +            bool no_ctrl = true;
>> +            nvme_ctrl_t c;
>> +
>> +            nvme_subsystem_for_each_ctrl(s, c)
>> +                no_ctrl = false;
>> +
>> +            if (no_ctrl)
>> +                continue;
>> +
>>               if (!first)
>>                   printf("\n");
>>               first = false;
>> diff --git a/nvme.c b/nvme.c
>> index 27dac37b..c6779cf4 100644
>> --- a/nvme.c
>> +++ b/nvme.c
>> @@ -10155,6 +10155,8 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
>>       const char *ranking = "Ranking order: namespace|ctrl";
>>       nvme_print_flags_t flags;
>>       _cleanup_nvme_root_ nvme_root_t r = NULL;
>> +    char *devname = NULL;
>> +    nvme_scan_filter_t filter = NULL;
>>       enum nvme_cli_topo_ranking rank;
>>       int err;
>>   @@ -10197,7 +10199,20 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
>>           return -errno;
>>       }
>>   -    err = nvme_scan_topology(r, NULL, NULL);
>> +    if (optind < argc)
>> +        devname = basename(argv[optind++]);
>> +
>> +    if (devname) {
>> +        int subsys_id, nsid;
>> +
>> +        if (sscanf(devname, "nvme%dn%d", &subsys_id, &nsid) != 2) {
>> +            nvme_show_error("Invalid device name %s\n", devname);
>> +            return -EINVAL;
>> +        }
>> +        filter = nvme_match_device_filter;
>> +    }
>> +
>> +    err = nvme_scan_topology(r, filter, (void *)devname);
>>       if (err < 0) {
>>           nvme_show_error("Failed to scan topology: %s", nvme_strerror(errno));
>>           return err;
> 
> While I welcome this change, we've had discussions that
> 'nvme show-topology <device>' really should _not_ print out
> anything if the device does not exist.
> Have you checked this for your patch?
> 
Yes I checked that. If <device> doesn't exist then 
nvme show-topology would print nothing.

Thanks,
--Nilay




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-18  7:22   ` Hannes Reinecke
@ 2025-08-19  4:49     ` Nilay Shroff
  2025-08-19  6:15       ` Hannes Reinecke
  0 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-19  4:49 UTC (permalink / raw)
  To: Hannes Reinecke, linux-nvme; +Cc: dwagner, kbusch, gjoyce



On 8/18/25 12:52 PM, Hannes Reinecke wrote:
> On 8/12/25 14:56, Nilay Shroff wrote:
>> This commit enhances the show-topology command by adding support for
>> NVMe multipath. With this change, users can now list all paths to a
>> namespace from its corresponding head node device. Each NVMe path
>> entry then also includes additional details such as ANA state, NUMA
>> node, and queue depth, improving visibility into multipath configs.
>> This information can be particularly helpful for debugging and
>> analyzing NVMe multipath setups.
>>
>> To support this functionality, the "--ranking" option of the nvme
>> show-topology command has been extended with a new sub-option:
>> "multipath".
>>
>> Since this enhancement is specific to NVMe multipath, the iopolicy
>> configured under each subsystem is now always displayed. Previously,
>> iopolicy was shown only with nvme show-topology verbose output, but
>> it is now included by default to improve usability and provide better
>> context when reviewing multipath configurations via show-topology.
>>
>> With this update, users can view the multipath topology of a
>> multi-controller/port NVMe disk using:
>>
>> $ nvme show-topology -r multipath
>>
>> nvme-subsys2 - NQN=nvmet_subsystem
>>                 hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
>>                 iopolicy=numa
>>
>>            _ _ _<head-node>
>>           /              _ _ _ <ana-state>
>>          /              /      _ _ _ <numa-node-list>
>>         /              /      /  _ _ _<queue-depth>
>>        |              /      /  /
>>   +- nvme2n1 (ns 1)  /      /  /
>>   \                 |      |  |
>>    +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
>>    +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live
>>
>> Please note that the annotations shown above (e.g., <numa-node-list>,
>> <ana-state>, <head-node>, and <queue-depth>) are included for clarity
>> only and are not part of the actual output.
>>
> 
> Hmm. Why do we have the values for 'numa-node-list' and 'queue-depth'
> both in here? They are tied to the selected IO policy, and pretty
> meaningless if that IO policy is not selected.
> Please include only the values relevant for the selected IO policy;
> this will increase readability of the resulting status string.
> 
Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
I'll update this in the next patch.

Thanks,
--Nilay





^ permalink raw reply	[flat|nested] 26+ messages in thread
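
[Editor's note: the per-iopolicy column selection Nilay outlines above could
look roughly like this. The names are hypothetical, not the eventual patch.]

```c
#include <string.h>

/* Which per-path fields to display for a given subsystem iopolicy:
 * NUMA nodes only for "numa", queue depth only for "queue-depth",
 * neither for "round-robin". Hypothetical sketch of the plan above. */
struct mpath_fields {
	int show_numa_nodes;
	int show_queue_depth;
};

static struct mpath_fields fields_for_iopolicy(const char *iopolicy)
{
	struct mpath_fields f = { 0, 0 };

	if (!strcmp(iopolicy, "numa"))
		f.show_numa_nodes = 1;
	else if (!strcmp(iopolicy, "queue-depth"))
		f.show_queue_depth = 1;
	return f;
}
```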

* Re: [PATCHv2 1/4] nvme: support <device> option in show-topology command
  2025-08-19  4:43     ` Nilay Shroff
@ 2025-08-19  6:11       ` Hannes Reinecke
  0 siblings, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-19  6:11 UTC (permalink / raw)
  To: Nilay Shroff, linux-nvme; +Cc: dwagner, kbusch, gjoyce

On 8/19/25 06:43, Nilay Shroff wrote:
> 
> 
> On 8/18/25 12:42 PM, Hannes Reinecke wrote:
>> On 8/12/25 14:56, Nilay Shroff wrote:
>>> Although the help text for the nvme show-topology command indicates
>>> support for a <device> option, this option has no effect in practice
>>> — specifying an NVMe device name does not filter the output.
>>>
>>> This commit adds proper support for the <device> option, enabling users
>>> to filter the topology output based on the specified NVMe device.
>>>
>>> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
>>> ---
>>>    nvme-print-stdout.c |  9 +++++++++
>>>    nvme.c              | 17 ++++++++++++++++-
>>>    2 files changed, 25 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
>>> index c597b608..eb56349a 100644
>>> --- a/nvme-print-stdout.c
>>> +++ b/nvme-print-stdout.c
>>> @@ -5683,6 +5683,15 @@ static void stdout_simple_topology(nvme_root_t r,
>>>          nvme_for_each_host(r, h) {
>>>            nvme_for_each_subsystem(h, s) {
>>> +            bool no_ctrl = true;
>>> +            nvme_ctrl_t c;
>>> +
>>> +            nvme_subsystem_for_each_ctrl(s, c)
>>> +                no_ctrl = false;
>>> +
>>> +            if (no_ctrl)
>>> +                continue;
>>> +
>>>                if (!first)
>>>                    printf("\n");
>>>                first = false;
>>> diff --git a/nvme.c b/nvme.c
>>> index 27dac37b..c6779cf4 100644
>>> --- a/nvme.c
>>> +++ b/nvme.c
>>> @@ -10155,6 +10155,8 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
>>>        const char *ranking = "Ranking order: namespace|ctrl";
>>>        nvme_print_flags_t flags;
>>>        _cleanup_nvme_root_ nvme_root_t r = NULL;
>>> +    char *devname = NULL;
>>> +    nvme_scan_filter_t filter = NULL;
>>>        enum nvme_cli_topo_ranking rank;
>>>        int err;
>>>    @@ -10197,7 +10199,20 @@ static int show_topology_cmd(int argc, char **argv, struct command *command, str
>>>            return -errno;
>>>        }
>>>    -    err = nvme_scan_topology(r, NULL, NULL);
>>> +    if (optind < argc)
>>> +        devname = basename(argv[optind++]);
>>> +
>>> +    if (devname) {
>>> +        int subsys_id, nsid;
>>> +
>>> +        if (sscanf(devname, "nvme%dn%d", &subsys_id, &nsid) != 2) {
>>> +            nvme_show_error("Invalid device name %s\n", devname);
>>> +            return -EINVAL;
>>> +        }
>>> +        filter = nvme_match_device_filter;
>>> +    }
>>> +
>>> +    err = nvme_scan_topology(r, filter, (void *)devname);
>>>        if (err < 0) {
>>>            nvme_show_error("Failed to scan topology: %s", nvme_strerror(errno));
>>>            return err;
>>
>> While I welcome this change, we've had discussions that
>> 'nvme show-topology <device>' really should _not_ print out
>> anything if the device does not exist.
>> Have you checked this for your patch?
>>
> Yes I checked that. If <device> doesn't exist then
> nvme show-topology would print nothing.
> 
Thanks.
So you can add:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-19  4:49     ` Nilay Shroff
@ 2025-08-19  6:15       ` Hannes Reinecke
  2025-08-19 10:31         ` Nilay Shroff
  2025-08-20  8:17         ` Daniel Wagner
  0 siblings, 2 replies; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-19  6:15 UTC (permalink / raw)
  To: Nilay Shroff, linux-nvme; +Cc: dwagner, kbusch, gjoyce

On 8/19/25 06:49, Nilay Shroff wrote:
> 
> 
> On 8/18/25 12:52 PM, Hannes Reinecke wrote:
>> On 8/12/25 14:56, Nilay Shroff wrote:
>>> This commit enhances the show-topology command by adding support for
>>> NVMe multipath. With this change, users can now list all paths to a
>>> namespace from its corresponding head node device. Each NVMe path
>>> entry then also includes additional details such as ANA state, NUMA
>>> node, and queue depth, improving visibility into multipath configs.
>>> This information can be particularly helpful for debugging and
>>> analyzing NVMe multipath setups.
>>>
>>> To support this functionality, the "--ranking" option of the nvme
>>> show-topology command has been extended with a new sub-option:
>>> "multipath".
>>>
>>> Since this enhancement is specific to NVMe multipath, the iopolicy
>>> configured under each subsystem is now always displayed. Previously,
>>> iopolicy was shown only with nvme show-topology verbose output, but
>>> it is now included by default to improve usability and provide better
>>> context when reviewing multipath configurations via show-topology.
>>>
>>> With this update, users can view the multipath topology of a
>>> multi-controller/port NVMe disk using:
>>>
>>> $ nvme show-topology -r multipath
>>>
>>> nvme-subsys2 - NQN=nvmet_subsystem
>>>                  hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
>>>                  iopolicy=numa
>>>
>>>             _ _ _<head-node>
>>>            /              _ _ _ <ana-state>
>>>           /              /      _ _ _ <numa-node-list>
>>>          /              /      /  _ _ _<queue-depth>
>>>         |              /      /  /
>>>    +- nvme2n1 (ns 1)  /      /  /
>>>    \                 |      |  |
>>>     +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
>>>     +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live
>>>
>>> Please note that the annotations shown above (e.g., <numa-node-list>,
>>> <ana-state>, <head-node>, and <queue-depth>) are included for clarity
>>> only and are not part of the actual output.
>>>
>>
>> Hmm. Why do we have the values for 'numa-node-list' and 'queue-depth'
>> both in here? They are tied to the selected IO policy, and pretty
>> meaningless if that IO policy is not selected.
>> Please include only the values relevant for the selected IO policy;
>> this will increase readability of the resulting status string.
>>
> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
> I'll update this in the next patch.
> 
Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
the number of fields will differ (making parsing harder).

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 3/4] nvme: add common APIs for printing tabular format output
  2025-08-18  7:27   ` Hannes Reinecke
@ 2025-08-19  8:56     ` Nilay Shroff
  2025-08-20  8:23       ` Daniel Wagner
  0 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-19  8:56 UTC (permalink / raw)
  To: Hannes Reinecke, linux-nvme; +Cc: dwagner, kbusch, gjoyce



On 8/18/25 12:57 PM, Hannes Reinecke wrote:
> On 8/12/25 14:56, Nilay Shroff wrote:
>> Some nvme-cli commands, such as nvme list and nvme list -v, support output
>> in tabular format. Currently, the table output is not well aligned because
>> column widths are fixed at print time, regardless of the length of the data
>> in each column. This often results in uneven and hard-to-read output. For
>> any new CLI command that requires tabular output, developers must manually
>> specify the column width and row value width, which is both error-prone and
>> inconsistent.
>>
>> This patch introduces a set of common table APIs that:
>> - Automatically calculate column widths based on the content
>> - Maintain proper alignment regardless of value length
>> - Simplify adding tabular output support to new and existing commands
>>
>> The new APIs are:
>> 1. table_init() — Allocate a table instance.
>> 2. table_add_columns() — Add column definitions (struct table_column),
>>     including name and alignment (LEFT, RIGHT, CENTERED).
>> 3. table_get_row_id() — Reserve a row index for inserting values.
>> 4. table_add_row() — Add a row to the table.
>> 5. table_print() — Print the table with auto-calculated widths.
>> 6. table_free() — Free resources allocated for the table.
>>
>> For adding values, the following setter APIs are provided, each
>> supporting alignment types (LEFT, RIGHT, or CENTERED):
>> - table_set_value_str()
>> - table_set_value_int()
>> - table_set_value_unsigned()
>> - table_set_value_long()
>> - table_set_value_unsigned_long()
>>
>> Usage steps:
>> 1. Call table_init() to create a table handle.
>> 2. Define an array of struct table_column specifying column names and
>>     alignment, then call table_add_columns().
>> 3. Obtain a row ID using table_get_row_id() and set values using the
>>     appropriate table_set_value_*() setter function.
>> 4. Add the completed row using table_add_row().
>> 5. Repeat steps 3–4 for each additional row.
>> 6. Call table_print() to display the table.
>> 7. Call table_free() to release table resources.
>>
>> With these APIs, developers no longer need to pre-calculate column or row
>> widths. The output is consistently aligned and easy to read.
>>
>> Suggested-by: Daniel Wagner <dwagner@suse.de>
>> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
>> ---
>>   util/meson.build |   3 +-
>>   util/table.c     | 278 +++++++++++++++++++++++++++++++++++++++++++++++
>>   util/table.h     | 117 ++++++++++++++++++++
>>   3 files changed, 397 insertions(+), 1 deletion(-)
>>   create mode 100644 util/table.c
>>   create mode 100644 util/table.h
>>
> Sheesh. Can o'worms.
> 
> Displaying a table on an ASCII terminal is black magic (check
> manpage for ncurses ...), and I'm pretty certain that printing
> spaces for alignment is _not_ the way to go.
> If you really want to implement this then at the very least use
> tabs. Ideally check with the ncurses manpage how one can display
> a table with the correct escape characters.
> 
Thanks for the feedback. You're absolutely right that terminal alignment
can get tricky, and ncurses does provide a more feature-complete way to
handle interactive table layouts.

However, for this patch, the intent was much narrower — to make existing
nvme-cli tabular output (e.g., nvme list, nvme list -v and nvme show-topology)
more consistent and readable without requiring each command to manually set
column/row widths. The current implementation just calculates the max width
per column and pads with spaces, which works for the non-interactive, plain-text
output we typically expect from CLI tools (including when the output is
redirected to a file).

Using tabs was considered, but since most values (especially long PCI/target
addresses, UUIDs, or model names) don’t line up cleanly on tab stops, the result
was often misaligned depending on the terminal settings. Spaces gave us 
deterministic alignment across all environments.

That said, I completely agree ncurses (or similar) would be the right direction
if we wanted dynamic, interactive table display (resizable, scrollable, etc.). 
For nvme-cli, which is mostly non-interactive, my goal was to keep things simple
while improving readability.

However, if you think it’s worth exploring a tabs-based approach instead,
I can prototype that and compare the output side-by-side. Otherwise, perhaps
we can clarify in the commit message that this is meant for non-interactive,
plain ASCII output only. What do you think?

For your reference, below is the output of the nvme list -v command
before and after this patch:

### Without this patch:

$ nvme list -v 
Subsystem        Subsystem-NQN                                                                                    Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys0     nqn.2018-01.com.wdc:guid:E8238FA6BF53-0001-001B444A495BF972                                      nvme0

Device           Cntlid SN                   MN                                       FR       TxPort Address        Slot   Subsystem    Namespaces      
---------------- ------ -------------------- ---------------------------------------- -------- ------ -------------- ------ ------------ ----------------
nvme0    8215   2136HZ464910         WDC PC SN730 SDBQNTY-512G-1001           11170101 pcie   0000:04:00.0          nvme-subsys0 nvme0n1

Device            Generic           NSID       Usage                      Format           Controllers     
----------------- ----------------- ---------- -------------------------- ---------------- ----------------
/dev/nvme0n1 /dev/ng0n1   0x1        512.11  GB / 512.11  GB    512   B +  0 B   nvme0


### With this patch applied and converting nvme list code to use the new table API:

$ nvme list -v 
Subsystem    Subsystem-NQN                                               Controllers 
------------ ----------------------------------------------------------- ----------- 
nvme-subsys0 nqn.2018-01.com.wdc:guid:E8238FA6BF53-0001-001B444A495BF972 nvme0       

Device Cntlid SN           MN                             FR       TxPort Address      Slot Subsystem    Namespaces 
------ ------ ------------ ------------------------------ -------- ------ ------------ ---- ------------ ---------- 
nvme0  8215   2136HZ464910 WDC PC SN730 SDBQNTY-512G-1001 11170101 pcie   0000:04:00.0      nvme-subsys0 nvme0n1    

Device       Generic    NSID USAGE                                             Format         Controllers 
------------ ---------- ---- ------------------------------------------------- -------------- ----------- 
/dev/nvme0n1 /dev/ng0n1  0x1 512.11 GB / 512.11 GB ( 476.94 GiB /  476.94 GiB) 512   B +  0 B nvme0       

Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-19  6:15       ` Hannes Reinecke
@ 2025-08-19 10:31         ` Nilay Shroff
  2025-08-19 11:05           ` Hannes Reinecke
  2025-08-20  8:17         ` Daniel Wagner
  1 sibling, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-19 10:31 UTC (permalink / raw)
  To: Hannes Reinecke, linux-nvme; +Cc: dwagner, kbusch, gjoyce



On 8/19/25 11:45 AM, Hannes Reinecke wrote:
> On 8/19/25 06:49, Nilay Shroff wrote:
>>
>>
>> On 8/18/25 12:52 PM, Hannes Reinecke wrote:
>>> On 8/12/25 14:56, Nilay Shroff wrote:
>>>> This commit enhances the show-topology command by adding support for
>>>> NVMe multipath. With this change, users can now list all paths to a
>>>> namespace from its corresponding head node device. Each NVMe path
>>>> entry then also includes additional details such as ANA state, NUMA
>>>> node, and queue depth, improving visibility into multipath configs.
>>>> This information can be particularly helpful for debugging and
>>>> analyzing NVMe multipath setups.
>>>>
>>>> To support this functionality, the "--ranking" option of the nvme
>>>> show-topology command has been extended with a new sub-option:
>>>> "multipath".
>>>>
>>>> Since this enhancement is specific to NVMe multipath, the iopolicy
>>>> configured under each subsystem is now always displayed. Previously,
>>>> iopolicy was shown only with nvme show-topology verbose output, but
>>>> it is now included by default to improve usability and provide better
>>>> context when reviewing multipath configurations via show-topology.
>>>>
>>>> With this update, users can view the multipath topology of a multi
>>>> controller/port NVMe disk using:
>>>>
>>>> $ nvme show-topology -r multipath
>>>>
>>>> nvme-subsys2 - NQN=nvmet_subsystem
>>>>                  hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
>>>>                  iopolicy=numa
>>>>
>>>>             _ _ _<head-node>
>>>>            /              _ _ _ <ana-state>
>>>>           /              /      _ _ _ <numa-node-list>
>>>>          /              /      /  _ _ _<queue-depth>
>>>>         |              /      /  /
>>>>    +- nvme2n1 (ns 1)  /      /  /
>>>>    \                 |      |  |
>>>>     +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
>>>>     +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live
>>>>
>>>> Please note that the annotations shown above (e.g., <numa-node-list>,
>>>> <ana-state>, <head-node>, and <queue-depth>) are included for clarity
>>>> only and are not part of the actual output.
>>>>
>>>
>>> Hmm. Why do we have the values for 'numa-node-list' and 'queue-depth'
>>> both in here? They are tied to the selected IO policy, and pretty
>>> meaningless if that IO policy is not selected.
>>> Please include only the values relevant for the selected IO policy;
>>> this will increase readability of the resulting status string.
>>>
>> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
>> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
>> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
>> I'll update this in the next patch.
>>
> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
> the number of fields will be different (and making parsing harder).
> 
Okay so then how about printing <numa-node> for round-robin policy as well?

I looked at the NVMe path selection code for the round-robin iopolicy, and it
appears the kernel uses the NUMA node ID of the I/O submitting CPU as the
reference for path selection.
For example, on a system with two NUMA nodes (0 and 1) and two NVMe paths
(PA and PB):
- If an I/O from node 0 selects PA, that choice is cached.
- The next time the kernel receives an I/O from node 0, it retrieves the
  cached path value, finds that the last path chosen was PA, and then
  selects the next available path, PB, to forward the I/O.

This way the kernel alternates between PA and PB in round-robin fashion.
So the selection is still tied to the submitting NUMA node, just with path
rotation layered on top. Given that, I think it makes sense to also print
<numa-node> for round-robin iopolicy, to keep field consistency and still
provide meaningful context. Agreed?

Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-19 10:31         ` Nilay Shroff
@ 2025-08-19 11:05           ` Hannes Reinecke
  2025-08-19 11:30             ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-19 11:05 UTC (permalink / raw)
  To: Nilay Shroff, linux-nvme; +Cc: dwagner, kbusch, gjoyce

On 8/19/25 12:31, Nilay Shroff wrote:
> 
> 
> On 8/19/25 11:45 AM, Hannes Reinecke wrote:
>> On 8/19/25 06:49, Nilay Shroff wrote:
>>>
>>>
>>> On 8/18/25 12:52 PM, Hannes Reinecke wrote:
>>>> On 8/12/25 14:56, Nilay Shroff wrote:
>>>>> This commit enhances the show-topology command by adding support for
>>>>> NVMe multipath. With this change, users can now list all paths to a
>>>>> namespace from its corresponding head node device. Each NVMe path
>>>>> entry then also includes additional details such as ANA state, NUMA
>>>>> node, and queue depth, improving visibility into multipath configs.
>>>>> This information can be particularly helpful for debugging and
>>>>> analyzing NVMe multipath setups.
>>>>>
>>>>> To support this functionality, the "--ranking" option of the nvme
>>>>> show-topology command has been extended with a new sub-option:
>>>>> "multipath".
>>>>>
>>>>> Since this enhancement is specific to NVMe multipath, the iopolicy
>>>>> configured under each subsystem is now always displayed. Previously,
>>>>> iopolicy was shown only with nvme show-topology verbose output, but
>>>>> it is now included by default to improve usability and provide better
>>>>> context when reviewing multipath configurations via show-topology.
>>>>>
>>>>> With this update, users can view the multipath topology of a multi
>>>>> controller/port NVMe disk using:
>>>>>
>>>>> $ nvme show-topology -r multipath
>>>>>
>>>>> nvme-subsys2 - NQN=nvmet_subsystem
>>>>>                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
>>>>>                   iopolicy=numa
>>>>>
>>>>>              _ _ _<head-node>
>>>>>             /              _ _ _ <ana-state>
>>>>>            /              /      _ _ _ <numa-node-list>
>>>>>           /              /      /  _ _ _<queue-depth>
>>>>>          |              /      /  /
>>>>>     +- nvme2n1 (ns 1)  /      /  /
>>>>>     \                 |      |  |
>>>>>      +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
>>>>>      +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live
>>>>>
>>>>> Please note that the annotations shown above (e.g., <numa-node-list>,
>>>>> <ana-state>, <head-node>, and <queue-depth>) are included for clarity
>>>>> only and are not part of the actual output.
>>>>>
>>>>
>>>> Hmm. Why do we have the values for 'numa-node-list' and 'queue-depth'
>>>> both in here? They are tied to the selected IO policy, and pretty
>>>> meaningless if that IO policy is not selected.
>>>> Please include only the values relevant for the selected IO policy;
>>>> this will increase readability of the resulting status string.
>>>>
>>> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
>>> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
>>> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
>>> I'll update this in the next patch.
>>>
>> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
>> the number of fields will be different (and making parsing harder).
>>
> Okay so then how about printing <numa-node> for round-robin policy as well?
> 
> I looked at the NVMe path selection code for the round-robin iopolicy, and it
> appears the kernel uses the NUMA node ID of the I/O submitting CPU as the
> reference for path selection.
> For example, on a system with two NUMA nodes (0 and 1) and two NVMe paths
> (PA and PB):
> - If an I/O from node 0 selects PA, that choice is cached.
> - The next time the kernel receives an I/O from node 0, it retrieves the
>    cached path value, finds that the last path chosen was PA, and then
>    selects the next available path, PB, to forward the I/O.
> 
> This way the kernel alternates between PA and PB in round-robin fashion.
> So the selection is still tied to the submitting NUMA node, just with path
> rotation layered on top. Given that, I think it makes sense to also print
> <numa-node> for round-robin iopolicy, to keep field consistency and still
> provide meaningful context. Agreed?
> 
Can't we print the current path? That would be more meaningful than just 
printing the NUMA node ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-19 11:05           ` Hannes Reinecke
@ 2025-08-19 11:30             ` Nilay Shroff
  0 siblings, 0 replies; 26+ messages in thread
From: Nilay Shroff @ 2025-08-19 11:30 UTC (permalink / raw)
  To: Hannes Reinecke, linux-nvme; +Cc: dwagner, kbusch, gjoyce



On 8/19/25 4:35 PM, Hannes Reinecke wrote:
> On 8/19/25 12:31, Nilay Shroff wrote:
>>
>>
>> On 8/19/25 11:45 AM, Hannes Reinecke wrote:
>>> On 8/19/25 06:49, Nilay Shroff wrote:
>>>>
>>>>
>>>> On 8/18/25 12:52 PM, Hannes Reinecke wrote:
>>>>> On 8/12/25 14:56, Nilay Shroff wrote:
>>>>>> This commit enhances the show-topology command by adding support for
>>>>>> NVMe multipath. With this change, users can now list all paths to a
>>>>>> namespace from its corresponding head node device. Each NVMe path
>>>>>> entry then also includes additional details such as ANA state, NUMA
>>>>>> node, and queue depth, improving visibility into multipath configs.
>>>>>> This information can be particularly helpful for debugging and
>>>>>> analyzing NVMe multipath setups.
>>>>>>
>>>>>> To support this functionality, the "--ranking" option of the nvme
>>>>>> show-topology command has been extended with a new sub-option:
>>>>>> "multipath".
>>>>>>
>>>>>> Since this enhancement is specific to NVMe multipath, the iopolicy
>>>>>> configured under each subsystem is now always displayed. Previously,
>>>>>> iopolicy was shown only with nvme show-topology verbose output, but
>>>>>> it is now included by default to improve usability and provide better
>>>>>> context when reviewing multipath configurations via show-topology.
>>>>>>
>>>>>> With this update, users can view the multipath topology of a multi
>>>>>> controller/port NVMe disk using:
>>>>>>
>>>>>> $ nvme show-topology -r multipath
>>>>>>
>>>>>> nvme-subsys2 - NQN=nvmet_subsystem
>>>>>>                   hostnqn=nqn.2014-08.org.nvmexpress:uuid:12b49f6e-0276-4746-b10c-56815b7e6dc2
>>>>>>                   iopolicy=numa
>>>>>>
>>>>>>              _ _ _<head-node>
>>>>>>             /              _ _ _ <ana-state>
>>>>>>            /              /      _ _ _ <numa-node-list>
>>>>>>           /              /      /  _ _ _<queue-depth>
>>>>>>          |              /      /  /
>>>>>>     +- nvme2n1 (ns 1)  /      /  /
>>>>>>     \                 |      |  |
>>>>>>      +- nvme2c2n1 optimized 1,2 0 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
>>>>>>      +- nvme2c3n1 optimized 3,4 0 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live
>>>>>>
>>>>>> Please note that the annotations shown above (e.g., <numa-node-list>,
>>>>>> <ana-state>, <head-node>, and <queue-depth>) are included for clarity
>>>>>> only and are not part of the actual output.
>>>>>>
>>>>>
>>>>> Hmm. Why do we have the values for 'numa-node-list' and 'queue-depth'
>>>>> both in here? They are tied to the selected IO policy, and pretty
>>>>> meaningless if that IO policy is not selected.
>>>>> Please include only the values relevant for the selected IO policy;
>>>>> this will increase readability of the resulting status string.
>>>>>
>>>> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
>>>> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
>>>> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
>>>> I'll update this in the next patch.
>>>>
>>> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
>>> the number of fields will be different (and making parsing harder).
>>>
>> Okay so then how about printing <numa-node> for round-robin policy as well?
>>
>> I looked at the NVMe path selection code for the round-robin iopolicy, and it
>> appears the kernel uses the NUMA node ID of the I/O submitting CPU as the
>> reference for path selection.
>> For example, on a system with two NUMA nodes (0 and 1) and two NVMe paths
>> (PA and PB):
>> - If an I/O from node 0 selects PA, that choice is cached.
>> - The next time the kernel receives an I/O from node 0, it retrieves the
>>    cached path value, finds that the last path chosen was PA, and then
>>    selects the next available path, PB, to forward the I/O.
>>
>> This way the kernel alternates between PA and PB in round-robin fashion.
>> So the selection is still tied to the submitting NUMA node, just with path
>> rotation layered on top. Given that, I think it makes sense to also print
>> <numa-node> for round-robin iopolicy, to keep field consistency and still
>> provide meaningful context. Agreed?
>>
> Can't we print the current path? That would be more meaningful than just printing the NUMA node ...
> 
The current path is already printed in the output. For instance, it is
apparent in the output below:

              _ _ _<head-node>
             /              _ _ _ <ana-state>
            /              /     _ _ _ <numa-node>
           /              /     /  
          |              /     /  
     +- nvme2n1 (ns 1)  /     /  
     \                 |     |  
      +- nvme2c2n1 optimized 1 nvme2 tcp traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live
      +- nvme2c3n1 optimized 2 nvme3 tcp traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live


In the above output, nvme2c2n1 is the current path for NUMA node 1, and
nvme2c3n1 is the current path for NUMA node 2.

Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-19  6:15       ` Hannes Reinecke
  2025-08-19 10:31         ` Nilay Shroff
@ 2025-08-20  8:17         ` Daniel Wagner
  2025-08-20  8:30           ` Hannes Reinecke
  1 sibling, 1 reply; 26+ messages in thread
From: Daniel Wagner @ 2025-08-20  8:17 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: Nilay Shroff, linux-nvme, kbusch, gjoyce

On Tue, Aug 19, 2025 at 08:15:09AM +0200, Hannes Reinecke wrote:
> > Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
> > is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
> > And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
> > I'll update this in the next patch.
> > 
> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
> the number of fields will be different (and making parsing harder).

What type of parser do you mean, a carbon-based one or a computer? I
strongly recommend using the JSON output for the latter.

I would really prefer that the default stdout is easy for a human to
read, not for screen scrapers.

Thanks,
Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 3/4] nvme: add common APIs for printing tabular format output
  2025-08-19  8:56     ` Nilay Shroff
@ 2025-08-20  8:23       ` Daniel Wagner
  0 siblings, 0 replies; 26+ messages in thread
From: Daniel Wagner @ 2025-08-20  8:23 UTC (permalink / raw)
  To: Nilay Shroff; +Cc: Hannes Reinecke, linux-nvme, kbusch, gjoyce

On Tue, Aug 19, 2025 at 02:26:19PM +0530, Nilay Shroff wrote:
> > Sheesh. Can o'worms.
> > 
> > Displaying a table on an ASCII terminal is black magic (check
> > manpage for ncurses ...), and I'm pretty certain that printing
> > spaces for alignment is _not_ the way to go.
> > If you really want to implement this then at the very least use
> > tabs. Ideally check with the ncurses manpage how one can display
> > a table with the correct escape characters.
> > 
> Thanks for the feedback. You're absolutely right that terminal alignment
> can get tricky, and ncurses does provide a more feature-complete way to
> handle interactive table layouts.
> 
> However, for this patch, the intent was much narrower — to make existing
> nvme-cli tabular output (e.g., nvme list, nvme list -v and nvme show-topology)
> more consistent and readable without requiring each command to manually set
> column/row widths. The current implementation just calculates the max width
> per column and pads with spaces, which works for the non-interactive, plain-text
> output we typically expect from CLI tools (including when the output is
> redirected to a file).

I agree with Nilay here. It might not be a perfect solution, but the
current code produces misaligned tables (as the examples show). Another
good thing about this patch is that it removes the formatting rules from
the printf statements, so we only have to touch one place. After that, we
could add optional ncurses support if someone fancies doing so.

Thanks,
Daniel


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-20  8:17         ` Daniel Wagner
@ 2025-08-20  8:30           ` Hannes Reinecke
  2025-08-20 11:59             ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2025-08-20  8:30 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: Nilay Shroff, linux-nvme, kbusch, gjoyce

On 8/20/25 10:17, Daniel Wagner wrote:
> On Tue, Aug 19, 2025 at 08:15:09AM +0200, Hannes Reinecke wrote:
>>> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
>>> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
>>> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
>>> I'll update this in the next patch.
>>>
>> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
>> the number of fields will be different (and making parsing harder).
> 
> What type of parser do you mean, a carbon-based one or a computer? I
> strongly recommend using the JSON output for the latter.
> 
> I would really prefer that the default stdout is easy for a human to
> read, not for screen scrapers.
> 
That was my intention, too. And my prime objection was to have a
sequence of raw numbers, and requiring the user to figure out what
these numbers are for.

But really, I'm not sure if we should print out values from the various
I/O policies. For NUMA it probably makes sense, but for round-robin and
queue-depths the values are extremely volatile, so I wonder what benefit
for the user is here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-20  8:30           ` Hannes Reinecke
@ 2025-08-20 11:59             ` Nilay Shroff
  2025-09-01  9:21               ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-08-20 11:59 UTC (permalink / raw)
  To: Hannes Reinecke, Daniel Wagner; +Cc: linux-nvme, kbusch, gjoyce



On 8/20/25 2:00 PM, Hannes Reinecke wrote:
> On 8/20/25 10:17, Daniel Wagner wrote:
>> On Tue, Aug 19, 2025 at 08:15:09AM +0200, Hannes Reinecke wrote:
>>>> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
>>>> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
>>>> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
>>>> I'll update this in the next patch.
>>>>
>>> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
>>> the number of fields will be different (and making parsing harder).
>>
>> What type of parser do you mean, a carbon-based one or a computer? I
>> strongly recommend using the JSON output for the latter.
>>
>> I would really prefer that the default stdout is easy for a human to
>> read, not for screen scrapers.
>>
> That was my intention, too. And my prime objection was to have a
> sequence of raw numbers, and requiring the user to figure out what
> these numbers are for.
> 
Yes, you are right — those sequences of raw numbers did look confusing,
and that’s why Daniel suggested adding annotations. However, I found it
difficult to annotate the numbers in the tree-style output of show-topology.
To address this, we decided to also support printing topology in a tabular
format, which is much easier for humans to interpret.

As you can see in the last patch of this series (4/4), we now support showing
the topology in a tabular format (when the user selects it). For reference:

# ./nvme show-topology -o tabular 
nvme-subsys1 - NQN=nvmet_subsystem
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:779c3f46-2bb7-4e82-8f46-2b3e795e54f6
               iopolicy=numa

NSHead  NSID NSPath    ANAState  Nodes Qdepth Controller TrType Address                                          State 
------- ---- --------- --------- ----- ------ ---------- ------ ------------------------------------------------ ----- 
nvme1n1  1   nvme1c1n1 optimized  0,1     0    nvme1      tcp    traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live  
  -->    1   nvme1c2n1 optimized  2,3     0    nvme2      tcp    traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live  

As you can see above, this format is much clearer and more human-friendly.
(Please note that the above table is printed using the new table APIs
introduced in patch 3/4.)

> But really, I'm not sure if we should print out values from the various
> I/O policies. For NUMA it probably makes sense, but for round-robin and
> queue-depths the values are extremely volatile, so I wonder what benefit
> for the user is here.
> 

I think the qdepth output could still be useful. For example, if I/Os are
queuing up on one path (perhaps because that path is slower), then the Qdepth
value might help indicate something unusual or explain why one path is being
chosen over another.

That said, if we all agree that tools or scripts should ideally rely on JSON
output for parsing, then the tabular output could be simplified further:

- For numa iopolicy: print <Nodes> and exclude <Qdepth>.
- For queue-depth iopolicy: print <Qdepth> and exclude <Nodes>.
- For round-robin iopolicy: exclude both <Nodes> and <Qdepth>.

Does this sound reasonable? Or do we still want to avoid printing 
<Qdepth> even for queue-depth iopolicy?

Thanks,
--Nilay


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-08-20 11:59             ` Nilay Shroff
@ 2025-09-01  9:21               ` Nilay Shroff
  2025-09-01 16:36                 ` Daniel Wagner
  0 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-09-01  9:21 UTC (permalink / raw)
  To: Hannes Reinecke, Daniel Wagner; +Cc: linux-nvme, kbusch, gjoyce

Hi Daniel and Hannes,

Just a gentle ping on this one...

Do you agree with the reasoning I suggested for filtering
columns based on iopolicy? If we're all in agreement, I'll
send out the next patchset with the appropriate changes.

Thanks,
--Nilay

On 8/20/25 5:29 PM, Nilay Shroff wrote:
> 
> 
> On 8/20/25 2:00 PM, Hannes Reinecke wrote:
>> On 8/20/25 10:17, Daniel Wagner wrote:
>>> On Tue, Aug 19, 2025 at 08:15:09AM +0200, Hannes Reinecke wrote:
>>>>> Okay makes sense, so we'd print <numa-node> and exclude <queue-depth> if iopolicy
>>>>> is numa. For 'queue-depth' iopolicy, we'd print <queue-depth> and exclude <numa-node>.
>>>>> And for 'round-robin' iopolicy, we'd neither print <numa-node> nor <queue-depth>.
>>>>> I'll update this in the next patch.
>>>>>
>>>> Hmm. I'd rather have _some_ value for 'round-robin', too, as otherwise
>>>> the number of fields will be different (and making parsing harder).
>>>
>>> What type of parser do you mean, a carbon based one or a computer? I
>>> strongly recommend to use the JSON output for the later.
>>>
>>> I would really prefer that the default stdout is easy for a human to
>>> read not screen scrappers.
>>>
>> That was my intention, too. And my prime objection was to have a
>> sequence of raw numbers, and requiring the user to figure out what
>> these numbers are for.
>>
> Yes, you are right — those sequences of raw numbers did look confusing,
> and that’s why Daniel suggested adding annotations. However, I found it
> difficult to annotate the numbers in the tree-style output of show-topology.
> To address this, we decided to also support printing topology in a tabular
> format, which is much easier for humans to interpret.
> 
> As you can see in the last patch of this series (4/4), we now support showing
> the topology in a tabular format (when the user selects it). For reference:
> 
> # ./nvme show-topology -o tabular 
> nvme-subsys1 - NQN=nvmet_subsystem
>                hostnqn=nqn.2014-08.org.nvmexpress:uuid:779c3f46-2bb7-4e82-8f46-2b3e795e54f6
>                iopolicy=numa
> 
> NSHead  NSID NSPath    ANAState  Nodes Qdepth Controller TrType Address                                          State 
> ------- ---- --------- --------- ----- ------ ---------- ------ ------------------------------------------------ ----- 
> nvme1n1  1   nvme1c1n1 optimized  0,1     0    nvme1      tcp    traddr=127.0.0.2,trsvcid=4460,src_addr=127.0.0.1 live  
>   -->    1   nvme1c2n1 optimized  2,3     0    nvme2      tcp    traddr=127.0.0.3,trsvcid=4460,src_addr=127.0.0.1 live  
> 
> So as you can see above, this format is much clearer and more human-friendly.
> (Please note that the above table is printed using the new table APIs
> introduced in patch 3/4.)
> 
>> But really, I'm not sure if we should print out values from the various
>> I/O policies. For NUMA it probably makes sense, but for round-robin and
>> queue-depths the values are extremely volatile, so I wonder what benefit
>> for the user is here.
>>
> 
> I think the qdepth output could still be useful. For example, if I/Os are
> queuing up on one path (perhaps because that path is slower), then the Qdepth
> value might help indicate something unusual or explain why one path is being
> chosen over another.
> 
> That said, if we all agree that tools or scripts should ideally rely on JSON
> output for parsing, then the tabular output could be simplified further:
> 
> - For numa iopolicy: print <Nodes> and exclude <Qdepth>.
> - For queue-depth iopolicy: print <Qdepth> and exclude <Nodes>.
> - For round-robin iopolicy: exclude both <Nodes> and <Qdepth>.
> 
> Does this sound reasonable? Or do we still want to avoid printing 
> <Qdepth> even for queue-depth iopolicy?
> 
> Thanks,
> --Nilay



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-09-01  9:21               ` Nilay Shroff
@ 2025-09-01 16:36                 ` Daniel Wagner
  2025-09-02  6:26                   ` Hannes Reinecke
  0 siblings, 1 reply; 26+ messages in thread
From: Daniel Wagner @ 2025-09-01 16:36 UTC (permalink / raw)
  To: Nilay Shroff; +Cc: Hannes Reinecke, linux-nvme, kbusch, gjoyce

Hi Nilay,

On Mon, Sep 01, 2025 at 02:51:09PM +0530, Nilay Shroff wrote:
> Hi Daniel and Hannes,
> 
> Just a gentle ping on this one...
> 
> Do you agree with the reasoning I suggested for filtering
> columns based on iopolicy? If we're all in agreement, then
> I'll send out the next patchset with the appropriate changes.

I was waiting for Hannes' input here as he was in the discussion.

> >> But really, I'm not sure if we should print out values from the various
> >> I/O policies. For NUMA it probably makes sense, but for round-robin and
> >> queue-depths the values are extremely volatile, so I wonder what benefit
> >> for the user is here.
> >>
> > 
> > I think the qdepth output could still be useful. For example, if I/Os are
> > queuing up on one path (perhaps because that path is slower), then the Qdepth
> > value might help indicate something unusual or explain why one path is being
> > chosen over another.
> > 
> > That said, if we all agree that tools or scripts should ideally rely on JSON
> > output for parsing, then the tabular output could be simplified further:
> > 
> > - For numa iopolicy: print <Nodes> and exclude <Qdepth>.
> > - For queue-depth iopolicy: print <Qdepth> and exclude <Nodes>.
> > - For round-robin iopolicy: exclude both <Nodes> and <Qdepth>.

Looks reasonable to me.

> > Does this sound reasonable? Or do we still want to avoid printing 
> > <Qdepth> even for queue-depth iopolicy?

I am fine with printing the qdepth value as long as it is documented what it
means. IIRC there are other tools which just show a snapshot for some
statistics.

BTW, some discussion on github regarding something like a
'monitor' feature: https://github.com/linux-nvme/nvme-cli/issues/2189
Might be something which could be considered here as well.

Thanks,
Daniel



* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-09-01 16:36                 ` Daniel Wagner
@ 2025-09-02  6:26                   ` Hannes Reinecke
  2025-09-03  4:22                     ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2025-09-02  6:26 UTC (permalink / raw)
  To: Daniel Wagner, Nilay Shroff; +Cc: linux-nvme, kbusch, gjoyce

On 9/1/25 18:36, Daniel Wagner wrote:
> Hi Nilay,
> 
> On Mon, Sep 01, 2025 at 02:51:09PM +0530, Nilay Shroff wrote:
>> Hi Daniel and Hannes,
>>
>> Just a gentle ping on this one...
>>
>> Do you agree with the reasoning I suggested for filtering
>> columns based on iopolicy? If we're all in agreement, then
>> I'll send out the next patchset with the appropriate changes.
> 
> I was waiting for Hannes' input here as he was in the discussion.
> 
Yeah, and he is fine with the latest changes. Probably should've been
a bit more vocal about it :-)

>>>> But really, I'm not sure if we should print out values from the various
>>>> I/O policies. For NUMA it probably makes sense, but for round-robin and
>>>> queue-depths the values are extremely volatile, so I wonder what benefit
>>>> for the user is here.
>>>>
>>>
>>> I think the qdepth output could still be useful. For example, if I/Os are
>>> queuing up on one path (perhaps because that path is slower), then the Qdepth
>>> value might help indicate something unusual or explain why one path is being
>>> chosen over another.
>>>
>>> That said, if we all agree that tools or scripts should ideally rely on JSON
>>> output for parsing, then the tabular output could be simplified further:
>>>
>>> - For numa iopolicy: print <Nodes> and exclude <Qdepth>.
>>> - For queue-depth iopolicy: print <Qdepth> and exclude <Nodes>.
>>> - For round-robin iopolicy: exclude both <Nodes> and <Qdepth>.
> 
> Looks reasonable to me.
> 
Yep, that's fine.

>>> Does this sound reasonable? Or do we still want to avoid printing
>>> <Qdepth> even for queue-depth iopolicy?
> 
> I am fine with printing the qdepth value as long as it is documented what it
> means. IIRC there are other tools which just show a snapshot for some
> statistics.
> 
> BTW, some discussion on github regarding something like a
> 'monitor' feature: https://github.com/linux-nvme/nvme-cli/issues/2189
> Might be something which could be considered here as well.

Well, that might be tricky. The current 'tree' structure is built
once when the program starts up. Any changes to that structure after
that are not tracked, and there are (currently) no hooks for updating
it. So having a 'monitor' function will get tricky.

To do that we would either need a udev event receiver (using uevents
to update the tree structure) or looking into fanotify()/inotify().
Then the tree structure would always be up-to-date and things like
'monitor' will be possible.
Will be hell for the python bindings, mind :-(

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-09-02  6:26                   ` Hannes Reinecke
@ 2025-09-03  4:22                     ` Nilay Shroff
  2025-09-03  7:24                       ` Daniel Wagner
  0 siblings, 1 reply; 26+ messages in thread
From: Nilay Shroff @ 2025-09-03  4:22 UTC (permalink / raw)
  To: Hannes Reinecke, Daniel Wagner; +Cc: linux-nvme, kbusch, gjoyce



On 9/2/25 11:56 AM, Hannes Reinecke wrote:
> On 9/1/25 18:36, Daniel Wagner wrote:
>> Hi Nilay,
>>
>> On Mon, Sep 01, 2025 at 02:51:09PM +0530, Nilay Shroff wrote:
>>> Hi Daniel and Hannes,
>>>
>>> Just a gentle ping on this one...
>>>
>>> Do you agree with the reasoning I suggested for filtering
>>> columns based on iopolicy? If we're all in agreement, then
>>> I'll send out the next patchset with the appropriate changes.
>>
>> I was waiting for Hannes' input here as he was in the discussion.
>>
> Yeah, and he is fine with the latest changes. Probably should've been
> a bit more vocal about it :-)
> 
>>>>> But really, I'm not sure if we should print out values from the various
>>>>> I/O policies. For NUMA it probably makes sense, but for round-robin and
>>>>> queue-depths the values are extremely volatile, so I wonder what benefit
>>>>> for the user is here.
>>>>>
>>>>
>>>> I think the qdepth output could still be useful. For example, if I/Os are
>>>> queuing up on one path (perhaps because that path is slower), then the Qdepth
>>>> value might help indicate something unusual or explain why one path is being
>>>> chosen over another.
>>>>
>>>> That said, if we all agree that tools or scripts should ideally rely on JSON
>>>> output for parsing, then the tabular output could be simplified further:
>>>>
>>>> - For numa iopolicy: print <Nodes> and exclude <Qdepth>.
>>>> - For queue-depth iopolicy: print <Qdepth> and exclude <Nodes>.
>>>> - For round-robin iopolicy: exclude both <Nodes> and <Qdepth>.
>>
>> Looks reasonable to me.
>>
> Yep, that's fine.
> 
>>>> Does this sound reasonable? Or do we still want to avoid printing
>>>> <Qdepth> even for queue-depth iopolicy?
>>
>> I am fine with printing the qdepth value as long as it is documented what it
>> means. IIRC there are other tools which just show a snapshot for some
>> statistics.
>>
>> BTW, some discussion on github regarding something like a
>> 'monitor' feature: https://github.com/linux-nvme/nvme-cli/issues/2189
>> Might be something which could be considered here as well.
> 
> Well, that might be tricky. The current 'tree' structure is built
> once when the program starts up. Any changes to that structure after
> that are not tracked, and there are (currently) no hooks for updating
> it. So having a 'monitor' function will get tricky.
> 
> To do that we would either need a udev event receiver (using uevents
> to update the tree structure) or looking into fanotify()/inotify().
> Then the tree structure would always be up-to-date and things like
> 'monitor' will be possible.
> Will be hell for the python bindings, mind :-(
> 
Thanks, that explanation helps. I see the difference — today nvme-cli
just builds the tree once and prints a snapshot, so volatile values like
Qdepth are inherently a “point in time” view. A monitor-style feature would
instead keep the tree live and update it as ANA state, path health, or qdepth
change (probably similar to how iostat or top refresh continuously). That
definitely makes sense for those kinds of fields, though as you say it would
require new hooks (udev/inotify) and rework of the in-memory tree handling.

For this patchset, I’d prefer to keep the scope limited and just filter the
tabular output based on iopolicy (NUMA -> Nodes, qdepth -> Qdepth, RR -> none).
That way the snapshot view stays useful. 

Having said that, the “monitor” idea seems worth pursuing separately as a
longer-term feature. For users who want a live view today, they can already
run something like "watch -n1 nvme show-topology ..." to refresh the snapshot
every second. A built-in --monitor feature would be nicer and more efficient,
but I think that’s orthogonal and can be discussed separately. And I believe
a "monitor" feature would also be useful for implementing an nvme-top command
(as I recall, Daniel and I discussed it during LSFMM 2025).

Thanks,
--Nilay



* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-09-03  4:22                     ` Nilay Shroff
@ 2025-09-03  7:24                       ` Daniel Wagner
  2025-09-03 12:19                         ` Nilay Shroff
  0 siblings, 1 reply; 26+ messages in thread
From: Daniel Wagner @ 2025-09-03  7:24 UTC (permalink / raw)
  To: Nilay Shroff; +Cc: Hannes Reinecke, linux-nvme, kbusch, gjoyce

Hi Nilay,

On Wed, Sep 03, 2025 at 09:52:13AM +0530, Nilay Shroff wrote:
> Having said that, the “monitor” idea seems worth pursuing separately as a
> longer-term feature. For users who want a live view today, they can already
> run something like "watch -n1 nvme show-topology ..." to refresh the snapshot
> every second. A built-in --monitor feature would be nicer and more efficient,
> but I think that’s orthogonal and can be discussed separately. And I believe
> a "monitor" feature would also be useful for implementing an nvme-top command
> (as I recall, Daniel and I discussed it during LSFMM 2025).

Yes, I remember the discussion, me trying to convince you to implement it :)

I've created an issue on github, so we can track this feature:

https://github.com/linux-nvme/nvme-cli/issues/2904

Thanks,
Daniel



* Re: [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath
  2025-09-03  7:24                       ` Daniel Wagner
@ 2025-09-03 12:19                         ` Nilay Shroff
  0 siblings, 0 replies; 26+ messages in thread
From: Nilay Shroff @ 2025-09-03 12:19 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: Hannes Reinecke, linux-nvme, kbusch, gjoyce



On 9/3/25 12:54 PM, Daniel Wagner wrote:
> Hi Nilay,
> 
> On Wed, Sep 03, 2025 at 09:52:13AM +0530, Nilay Shroff wrote:
>> Having said that, the “monitor” idea seems worth pursuing separately as a
>> longer-term feature. For users who want a live view today, they can already
>> run something like "watch -n1 nvme show-topology ..." to refresh the snapshot
>> every second. A built-in --monitor feature would be nicer and more efficient,
>> but I think that’s orthogonal and can be discussed separately. And I believe
>> a "monitor" feature would also be useful for implementing an nvme-top command
>> (as I recall, Daniel and I discussed it during LSFMM 2025).
> 
> Yes, I remember the discussion, me trying to convince you to implement it :)
Yeah I was convinced at the time :)

> I've created an issue on github, so we can track this feature:
> 
> https://github.com/linux-nvme/nvme-cli/issues/2904
> 
Okay good. I'll work on it once this patchset is merged.

Thanks,
--Nilay




end of thread, other threads:[~2025-09-03 13:16 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-12 12:56 [PATCHv2 0/4] nvme-cli: enhance the visibility of multipath using show-topology command Nilay Shroff
2025-08-12 12:56 ` [PATCHv2 1/4] nvme: support <device> option in " Nilay Shroff
2025-08-18  7:12   ` Hannes Reinecke
2025-08-19  4:43     ` Nilay Shroff
2025-08-19  6:11       ` Hannes Reinecke
2025-08-12 12:56 ` [PATCHv2 2/4] nvme: extend show-topology command to add support for multipath Nilay Shroff
2025-08-18  7:22   ` Hannes Reinecke
2025-08-19  4:49     ` Nilay Shroff
2025-08-19  6:15       ` Hannes Reinecke
2025-08-19 10:31         ` Nilay Shroff
2025-08-19 11:05           ` Hannes Reinecke
2025-08-19 11:30             ` Nilay Shroff
2025-08-20  8:17         ` Daniel Wagner
2025-08-20  8:30           ` Hannes Reinecke
2025-08-20 11:59             ` Nilay Shroff
2025-09-01  9:21               ` Nilay Shroff
2025-09-01 16:36                 ` Daniel Wagner
2025-09-02  6:26                   ` Hannes Reinecke
2025-09-03  4:22                     ` Nilay Shroff
2025-09-03  7:24                       ` Daniel Wagner
2025-09-03 12:19                         ` Nilay Shroff
2025-08-12 12:56 ` [PATCHv2 3/4] nvme: add common APIs for printing tabular format output Nilay Shroff
2025-08-18  7:27   ` Hannes Reinecke
2025-08-19  8:56     ` Nilay Shroff
2025-08-20  8:23       ` Daniel Wagner
2025-08-12 12:56 ` [PATCHv2 4/4] nvme: add support for printing show-topology in tabular form Nilay Shroff
