From: Nilay Shroff <nilay@linux.ibm.com>
To: linux-nvme@lists.infradead.org
Cc: dwagner@suse.de, msmurthy@imap.linux.ibm.com, gjoyce@ibm.com
Subject: [PATCH] nvme-list: fix verbose JSON output for 'nvme list' command
Date: Mon, 21 Jul 2025 15:59:11 +0530	[thread overview]
Message-ID: <20250721102946.1003295-1-nilay@linux.ibm.com> (raw)

The verbose JSON output of the nvme list command is currently incorrect in
both multipath and non-multipath configurations. Specifically, it prints
empty Namespaces[] and Paths[] arrays in the wrong places, leading to
confusing and invalid output. For example, on a system with a single NVMe
disk, a single controller, and one namespace created, 'nvme list --verbose
--output json' prints the following:

With multipath disabled:

{
  "Devices":[
    {
      ...
      "Subsystems":[
        {
          ...
          "Controllers":[
            {
              "Controller":"nvme0",
	      ...
              "Namespaces":[
                {
                  "NameSpace":"nvme0n1",
		  ...
                }
              ],
              "Paths":[] <---- Incorrct: Path should not be present
            }
          ],
          "Namespaces":[] <-----Incorrect: Namespaces should not be here
        }
      ]
    }
  ]
}

With multipath enabled, the output changes, but still has misplaced or
empty fields:

{
  "Devices":[
    {
      ...
      "Subsystems":[
        {
          ...
          "Controllers":[
            {
              "Controller":"nvme0",
	      ...
              "Namespaces":[] <-----Incorrect: Namespaces should not be here
              "Paths":[
                {
                  "Path":"nvme0c0n1",
                  "ANAState":"optimized"
                }
              ]
            }
          ],
          "Namespaces":[
            {
              "NameSpace":"nvme0n1",
	      ...
            }
          ]
        }
      ]
    }
  ]
}

As shown above, the JSON-formatted output is incorrect in both the
multipath and non-multipath scenarios.

The existing JSON formatting logic doesn't differentiate between multipath
and non-multipath configurations (the helper this patch uses to tell the
two apart is shown below). As a result:
- "Paths" is printed even when multipath is disabled.
- "Namespaces" appears at incorrect levels in the output tree.

This patch updates the logic for verbose JSON output in nvme list to
properly reflect the system configuration (the resulting dispatch is
sketched after this list):
- When multipath is enabled, each namespace entry includes its associated
  paths and controller attributes.
- When multipath is disabled, namespaces are shown directly under the
  controller, and the "Paths" array is omitted.
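
In json_detail_list(), each subsystem's output is now built by one of two
dedicated helpers, selected by the multipath check above:

	if (nvme_is_multipath(s))
		json_print_detail_list_multipath(s, jss);
	else
		json_print_detail_list(s, jss);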

After this fix, the JSON-formatted output looks as follows:

Scenario 1: multipath is enabled

{
  "Devices":[
    {
      ...
      "Subsystems":[
        {
          ...
          "Namespaces":[
            {
              "NameSpace":"nvme0n1",
              ...
              "Paths":[
                {
                  "Path":"nvme0c0n1",
                  "ANAState":"optimized",
                  "Controller":"nvme0",
                  ...
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}

Scenario 2: multipath is disabled

{
  "Devices":[
    {
      ...
      "Subsystems":[
        {
          ...
          "Controllers":[
            {
              "Controller":"nvme0",
              ...
              "Namespaces":[
                {
                  "NameSpace":"nvme0n1",
                  ...
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}

This fix ensures the JSON output is semantically accurate and easier to
consume by tools that parse the output of 'nvme list --verbose --output
json'.

Reported-by: Maram Srimannarayana Murthy <msmurthy@imap.linux.ibm.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
 nvme-print-json.c   | 170 ++++++++++++++++++++++++++------------------
 nvme-print-stdout.c |  12 ----
 nvme.h              |  12 ++++
 3 files changed, 113 insertions(+), 81 deletions(-)

diff --git a/nvme-print-json.c b/nvme-print-json.c
index a4b8df9f..3d656baa 100644
--- a/nvme-print-json.c
+++ b/nvme-print-json.c
@@ -4423,6 +4423,103 @@ static void json_support_log(struct nvme_supported_log_pages *support_log,
 	json_print(r);
 }
 
+static void json_print_detail_list_multipath(nvme_subsystem_t s,
+		struct json_object *jss)
+{
+	nvme_ns_t n;
+	nvme_path_t p;
+	struct json_object *jnss = json_create_array();
+
+	nvme_subsystem_for_each_ns(s, n) {
+		struct json_object *jns = json_create_object();
+		struct json_object *jpaths = json_create_array();
+
+		int lba = nvme_ns_get_lba_size(n);
+		uint64_t nsze = nvme_ns_get_lba_count(n) * lba;
+		uint64_t nuse = nvme_ns_get_lba_util(n) * lba;
+
+		obj_add_str(jns, "NameSpace", nvme_ns_get_name(n));
+		obj_add_str(jns, "Generic", nvme_ns_get_generic_name(n));
+		obj_add_int(jns, "NSID", nvme_ns_get_nsid(n));
+		obj_add_uint64(jns, "UsedBytes", nuse);
+		obj_add_uint64(jns, "MaximumLBA", nvme_ns_get_lba_count(n));
+		obj_add_uint64(jns, "PhysicalSize", nsze);
+		obj_add_int(jns, "SectorSize", lba);
+
+		nvme_namespace_for_each_path(n, p) {
+			nvme_ctrl_t c;
+			struct json_object *jpath = json_create_object();
+
+			obj_add_str(jpath, "Path", nvme_path_get_name(p));
+			obj_add_str(jpath, "ANAState", nvme_path_get_ana_state(p));
+
+			/*
+			 * For multipath, each path maps to one controller.
+			 * So get the controller from the path and then add
+			 * controller  attributes.
+			 */
+			c = nvme_path_get_ctrl(p);
+			obj_add_str(jpath, "Controller", nvme_ctrl_get_name(c));
+			obj_add_str(jpath, "Cntlid", nvme_ctrl_get_cntlid(c));
+			obj_add_str(jpath, "SerialNumber", nvme_ctrl_get_serial(c));
+			obj_add_str(jpath, "ModelNumber", nvme_ctrl_get_model(c));
+			obj_add_str(jpath, "Firmware", nvme_ctrl_get_firmware(c));
+			obj_add_str(jpath, "Transport", nvme_ctrl_get_transport(c));
+			obj_add_str(jpath, "Address", nvme_ctrl_get_address(c));
+			obj_add_str(jpath, "Slot", nvme_ctrl_get_phy_slot(c));
+
+			array_add_obj(jpaths, jpath);
+		}
+
+		obj_add_obj(jns, "Paths", jpaths);
+		array_add_obj(jnss, jns);
+	}
+	obj_add_obj(jss, "Namespaces", jnss);
+}
+
+static void json_print_detail_list(nvme_subsystem_t s, struct json_object *jss)
+{
+	nvme_ctrl_t c;
+	nvme_ns_t n;
+	struct json_object *jctrls = json_create_array();
+
+	nvme_subsystem_for_each_ctrl(s, c) {
+		struct json_object *jctrl = json_create_object();
+		struct json_object *jnss = json_create_array();
+
+		obj_add_str(jctrl, "Controller", nvme_ctrl_get_name(c));
+		obj_add_str(jctrl, "Cntlid", nvme_ctrl_get_cntlid(c));
+		obj_add_str(jctrl, "SerialNumber", nvme_ctrl_get_serial(c));
+		obj_add_str(jctrl, "ModelNumber", nvme_ctrl_get_model(c));
+		obj_add_str(jctrl, "Firmware", nvme_ctrl_get_firmware(c));
+		obj_add_str(jctrl, "Transport", nvme_ctrl_get_transport(c));
+		obj_add_str(jctrl, "Address", nvme_ctrl_get_address(c));
+		obj_add_str(jctrl, "Slot", nvme_ctrl_get_phy_slot(c));
+
+		nvme_ctrl_for_each_ns(c, n) {
+			struct json_object *jns = json_create_object();
+			int lba = nvme_ns_get_lba_size(n);
+			uint64_t nsze = nvme_ns_get_lba_count(n) * lba;
+			uint64_t nuse = nvme_ns_get_lba_util(n) * lba;
+
+			obj_add_str(jns, "NameSpace", nvme_ns_get_name(n));
+			obj_add_str(jns, "Generic", nvme_ns_get_generic_name(n));
+			obj_add_int(jns, "NSID", nvme_ns_get_nsid(n));
+			obj_add_uint64(jns, "UsedBytes", nuse);
+			obj_add_uint64(jns, "MaximumLBA", nvme_ns_get_lba_count(n));
+			obj_add_uint64(jns, "PhysicalSize", nsze);
+			obj_add_int(jns, "SectorSize", lba);
+
+			array_add_obj(jnss, jns);
+		}
+
+		obj_add_obj(jctrl, "Namespaces", jnss);
+		array_add_obj(jctrls, jctrl);
+	}
+
+	obj_add_obj(jss, "Controllers", jctrls);
+}
+
 static void json_detail_list(nvme_root_t t)
 {
 	struct json_object *r = json_create_object();
@@ -4430,9 +4527,6 @@ static void json_detail_list(nvme_root_t t)
 
 	nvme_host_t h;
 	nvme_subsystem_t s;
-	nvme_ctrl_t c;
-	nvme_path_t p;
-	nvme_ns_t n;
 
 	nvme_for_each_host(t, h) {
 		struct json_object *hss = json_create_object();
@@ -4446,76 +4540,14 @@ static void json_detail_list(nvme_root_t t)
 
 		nvme_for_each_subsystem(h, s) {
 			struct json_object *jss = json_create_object();
-			struct json_object *jctrls = json_create_array();
-			struct json_object *jnss = json_create_array();
 
 			obj_add_str(jss, "Subsystem", nvme_subsystem_get_name(s));
 			obj_add_str(jss, "SubsystemNQN", nvme_subsystem_get_nqn(s));
 
-			nvme_subsystem_for_each_ctrl(s, c) {
-				struct json_object *jctrl = json_create_object();
-				struct json_object *jnss = json_create_array();
-				struct json_object *jpaths = json_create_array();
-
-				obj_add_str(jctrl, "Controller", nvme_ctrl_get_name(c));
-				obj_add_str(jctrl, "Cntlid", nvme_ctrl_get_cntlid(c));
-				obj_add_str(jctrl, "SerialNumber", nvme_ctrl_get_serial(c));
-				obj_add_str(jctrl, "ModelNumber", nvme_ctrl_get_model(c));
-				obj_add_str(jctrl, "Firmware", nvme_ctrl_get_firmware(c));
-				obj_add_str(jctrl, "Transport", nvme_ctrl_get_transport(c));
-				obj_add_str(jctrl, "Address", nvme_ctrl_get_address(c));
-				obj_add_str(jctrl, "Slot", nvme_ctrl_get_phy_slot(c));
-
-				nvme_ctrl_for_each_ns(c, n) {
-					struct json_object *jns = json_create_object();
-					int lba = nvme_ns_get_lba_size(n);
-					uint64_t nsze = nvme_ns_get_lba_count(n) * lba;
-					uint64_t nuse = nvme_ns_get_lba_util(n) * lba;
-
-					obj_add_str(jns, "NameSpace", nvme_ns_get_name(n));
-					obj_add_str(jns, "Generic", nvme_ns_get_generic_name(n));
-					obj_add_int(jns, "NSID", nvme_ns_get_nsid(n));
-					obj_add_uint64(jns, "UsedBytes", nuse);
-					obj_add_uint64(jns, "MaximumLBA", nvme_ns_get_lba_count(n));
-					obj_add_uint64(jns, "PhysicalSize", nsze);
-					obj_add_int(jns, "SectorSize", lba);
-
-					array_add_obj(jnss, jns);
-				}
-				obj_add_obj(jctrl, "Namespaces", jnss);
-
-				nvme_ctrl_for_each_path(c, p) {
-					struct json_object *jpath = json_create_object();
-
-					obj_add_str(jpath, "Path", nvme_path_get_name(p));
-					obj_add_str(jpath, "ANAState", nvme_path_get_ana_state(p));
-
-					array_add_obj(jpaths, jpath);
-				}
-				obj_add_obj(jctrl, "Paths", jpaths);
-
-				array_add_obj(jctrls, jctrl);
-			}
-			obj_add_obj(jss, "Controllers", jctrls);
-
-			nvme_subsystem_for_each_ns(s, n) {
-				struct json_object *jns = json_create_object();
-
-				int lba = nvme_ns_get_lba_size(n);
-				uint64_t nsze = nvme_ns_get_lba_count(n) * lba;
-				uint64_t nuse = nvme_ns_get_lba_util(n) * lba;
-
-				obj_add_str(jns, "NameSpace", nvme_ns_get_name(n));
-				obj_add_str(jns, "Generic", nvme_ns_get_generic_name(n));
-				obj_add_int(jns, "NSID", nvme_ns_get_nsid(n));
-				obj_add_uint64(jns, "UsedBytes", nuse);
-				obj_add_uint64(jns, "MaximumLBA", nvme_ns_get_lba_count(n));
-				obj_add_uint64(jns, "PhysicalSize", nsze);
-				obj_add_int(jns, "SectorSize", lba);
-
-				array_add_obj(jnss, jns);
-			}
-			obj_add_obj(jss, "Namespaces", jnss);
+			if (nvme_is_multipath(s))
+				json_print_detail_list_multipath(s, jss);
+			else
+				json_print_detail_list(s, jss);
 
 			array_add_obj(jsslist, jss);
 		}
diff --git a/nvme-print-stdout.c b/nvme-print-stdout.c
index 7943b91c..52d8d2b1 100644
--- a/nvme-print-stdout.c
+++ b/nvme-print-stdout.c
@@ -5589,18 +5589,6 @@ static void stdout_list_items(nvme_root_t r)
 		stdout_simple_list(r);
 }
 
-static bool nvme_is_multipath(nvme_subsystem_t s)
-{
-	nvme_ns_t n;
-	nvme_path_t p;
-
-	nvme_subsystem_for_each_ns(s, n)
-		nvme_namespace_for_each_path(n, p)
-			return true;
-
-	return false;
-}
-
 static void stdout_subsystem_topology_multipath(nvme_subsystem_t s,
 						     enum nvme_cli_topo_ranking ranking)
 {
diff --git a/nvme.h b/nvme.h
index 1a15fc18..248c5c7c 100644
--- a/nvme.h
+++ b/nvme.h
@@ -122,6 +122,18 @@ static inline nvme_mi_ep_t dev_mi_ep(struct nvme_dev *dev)
 	return dev->mi.ep;
 }
 
+static inline bool nvme_is_multipath(nvme_subsystem_t s)
+{
+	nvme_ns_t n;
+	nvme_path_t p;
+
+	nvme_subsystem_for_each_ns(s, n)
+		nvme_namespace_for_each_path(n, p)
+			return true;
+
+	return false;
+}
+
 void register_extension(struct plugin *plugin);
 
 /*
-- 
2.50.1



Thread overview: 6+ messages
2025-07-21 10:29 Nilay Shroff [this message]
2025-07-24 14:18 ` [PATCH] nvme-list: fix verbose JSON output for 'nvme list' command Daniel Wagner
2025-08-04 15:25   ` Daniel Wagner
2025-08-05  4:30     ` Nilay Shroff
2025-08-05  7:27       ` Daniel Wagner
2025-08-05 10:18         ` Nilay Shroff
