public inbox for linux-hyperv@vger.kernel.org
* [PATCH v4 0/7] mshv: Debugfs interface for mshv_root
@ 2026-01-21 21:46 Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 1/7] mshv: Ignore second stats page map result failure Nuno Das Neves
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

Expose hypervisor, logical processor, partition, and virtual processor
statistics via debugfs. These are provided by mapping 'stats' pages via
hypercall.

Patch #1: Update hv_call_map_stats_page() to return success when
          HV_STATS_AREA_PARENT is unavailable, which is the case on some
          hypervisor versions, where it can fall back to HV_STATS_AREA_SELF
Patch #2: Use struct hv_stats_page pointers instead of void *
Patch #3: Make mshv_vp_stats_map/unmap() more flexible to use with debugfs code
Patch #4: Always map vp stats page regardless of scheduler, to reuse in debugfs
Patch #5: Update the hv_stats_page definition and VpRootDispatchThreadBlocked
Patch #6: Introduce the definitions needed for the various stats pages
Patch #7: Add mshv_debugfs.c, and integrate it with the mshv_root driver to
          expose the partition and VP stats.

---
Changes in v4:
- Put the counter definitions in static arrays in hv_counters.c, instead of as
  enums in hvhdk.h [Michael]
- Due to the above, add an additional patch (#5) to simplify hv_stats_page, and
  retain the enum definition at the top of mshv_root_main.c for use with
  VpRootDispatchThreadBlocked. That is the only remaining use of the counter
  enum.
- Due to the above, use num_present_cpus() as the number of LPs to map stats
  pages for - this number shouldn't change at runtime because the hypervisor
  doesn't support hotplug for the root partition.

Changes in v3:
- Add 3 small refactor/cleanup patches (patches 2,3,4) from Stanislav. These
  simplify some of the debugfs code, and fix issues with mapping VP stats on
  L1VH.
- Fix cleanup of parent stats dentries on module removal (via squashing some
  internal patches into patch #6) [Praveen]
- Remove unused goto label [Stanislav, kernel bot]
- Use struct hv_stats_page * instead of void * in mshv_debugfs.c [Stanislav]
- Remove some redundant variables [Stanislav]
- Rename debugfs dentry fields for brevity [Stanislav]
- Use ERR_CAST() for the dentry error pointer returned from
  lp_debugfs_stats_create() [Stanislav]
- Fix leak of pages allocated for lp stats mappings by storing them in an array
  [Michael]
- Add comments to clarify PARENT vs SELF usage and edge cases [Michael]
- Add VpLoadAvg for x86 and print the stat [Michael]
- Add NUM_STATS_AREAS for array sizing in mshv_debugfs.c [Michael]

Changes in v2:
- Remove unnecessary pr_debug_once() in patch 1 [Stanislav Kinsburskii]
- CONFIG_X86 -> CONFIG_X86_64 in patch 2 [Stanislav Kinsburskii]

---
Nuno Das Neves (3):
  mshv: Update hv_stats_page definitions
  mshv: Add data for printing stats page counters
  mshv: Add debugfs to view hypervisor statistics

Purna Pavan Chandra Aekkaladevi (1):
  mshv: Ignore second stats page map result failure

Stanislav Kinsburskii (3):
  mshv: Use typed hv_stats_page pointers
  mshv: Improve mshv_vp_stats_map/unmap(), add them to mshv_root.h
  mshv: Always map child vp stats pages regardless of scheduler type

 drivers/hv/Makefile            |   1 +
 drivers/hv/hv_counters.c       | 489 +++++++++++++++++++++++
 drivers/hv/hv_synic.c          | 177 +++++++++
 drivers/hv/mshv_debugfs.c      | 703 +++++++++++++++++++++++++++++++++
 drivers/hv/mshv_root.h         |  49 ++-
 drivers/hv/mshv_root_hv_call.c |  64 ++-
 drivers/hv/mshv_root_main.c    | 135 ++++---
 include/hyperv/hvhdk.h         |   8 +
 8 files changed, 1564 insertions(+), 62 deletions(-)
 create mode 100644 drivers/hv/hv_counters.c
 create mode 100644 drivers/hv/hv_synic.c
 create mode 100644 drivers/hv/mshv_debugfs.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v4 1/7] mshv: Ignore second stats page map result failure
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 2/7] mshv: Use typed hv_stats_page pointers Nuno Das Neves
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

From: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>

Older versions of the hypervisor do not have a concept of separate SELF
and PARENT stats areas. In this case, mapping the HV_STATS_AREA_SELF page
is sufficient - it's the only page and it contains all available stats.

Mapping HV_STATS_AREA_PARENT returns HV_STATUS_INVALID_PARAMETER, which
currently causes module init to fail on older hypervisor versions.

Detect this case and gracefully fall back to populating
stats_pages[HV_STATS_AREA_PARENT] with the already-mapped SELF page.

Add comments to clarify the behavior, including a clarification of why
this isn't needed for hv_call_map_stats_page2() which always supports
PARENT and SELF areas.

Signed-off-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Reviewed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
 drivers/hv/mshv_root_hv_call.c | 52 +++++++++++++++++++++++++++++++---
 drivers/hv/mshv_root_main.c    |  3 ++
 2 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
index 598eaff4ff29..1f93b94d7580 100644
--- a/drivers/hv/mshv_root_hv_call.c
+++ b/drivers/hv/mshv_root_hv_call.c
@@ -813,6 +813,13 @@ hv_call_notify_port_ring_empty(u32 sint_index)
 	return hv_result_to_errno(status);
 }
 
+/*
+ * Equivalent of hv_call_map_stats_page() for cases when the caller provides
+ * the map location.
+ *
+ * NOTE: This is a newer hypercall that always supports SELF and PARENT stats
+ * areas, unlike hv_call_map_stats_page().
+ */
 static int hv_call_map_stats_page2(enum hv_stats_object_type type,
 				   const union hv_stats_object_identity *identity,
 				   u64 map_location)
@@ -855,6 +862,34 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type,
 	return ret;
 }
 
+static int
+hv_stats_get_area_type(enum hv_stats_object_type type,
+		       const union hv_stats_object_identity *identity)
+{
+	switch (type) {
+	case HV_STATS_OBJECT_HYPERVISOR:
+		return identity->hv.stats_area_type;
+	case HV_STATS_OBJECT_LOGICAL_PROCESSOR:
+		return identity->lp.stats_area_type;
+	case HV_STATS_OBJECT_PARTITION:
+		return identity->partition.stats_area_type;
+	case HV_STATS_OBJECT_VP:
+		return identity->vp.stats_area_type;
+	}
+
+	return -EINVAL;
+}
+
+/*
+ * Map a stats page, where the page location is provided by the hypervisor.
+ *
+ * NOTE: The concept of separate SELF and PARENT stats areas does not exist on
+ * older hypervisor versions. All the available stats information can be found
+ * on the SELF page. When attempting to map the PARENT area on a hypervisor
+ * that doesn't support it, return "success" but with a NULL address. The
+ * caller should check for this case and instead fall back to the SELF area
+ * alone.
+ */
 static int hv_call_map_stats_page(enum hv_stats_object_type type,
 				  const union hv_stats_object_identity *identity,
 				  void **addr)
@@ -863,7 +898,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type,
 	struct hv_input_map_stats_page *input;
 	struct hv_output_map_stats_page *output;
 	u64 status, pfn;
-	int ret = 0;
+	int hv_status, ret = 0;
 
 	do {
 		local_irq_save(flags);
@@ -878,11 +913,20 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type,
 		pfn = output->map_location;
 
 		local_irq_restore(flags);
-		if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
-			ret = hv_result_to_errno(status);
+
+		hv_status = hv_result(status);
+		if (hv_status != HV_STATUS_INSUFFICIENT_MEMORY) {
 			if (hv_result_success(status))
 				break;
-			return ret;
+
+			if (hv_stats_get_area_type(type, identity) == HV_STATS_AREA_PARENT &&
+			    hv_status == HV_STATUS_INVALID_PARAMETER) {
+				*addr = NULL;
+				return 0;
+			}
+
+			hv_status_debug(status, "\n");
+			return hv_result_to_errno(status);
 		}
 
 		ret = hv_call_deposit_pages(NUMA_NO_NODE,
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 1134a82c7881..1777778f84b8 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -992,6 +992,9 @@ static int mshv_vp_stats_map(u64 partition_id, u32 vp_index,
 	if (err)
 		goto unmap_self;
 
+	if (!stats_pages[HV_STATS_AREA_PARENT])
+		stats_pages[HV_STATS_AREA_PARENT] = stats_pages[HV_STATS_AREA_SELF];
+
 	return 0;
 
 unmap_self:
-- 
2.34.1



* [PATCH v4 2/7] mshv: Use typed hv_stats_page pointers
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 1/7] mshv: Ignore second stats page map result failure Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 3/7] mshv: Improve mshv_vp_stats_map/unmap(), add them to mshv_root.h Nuno Das Neves
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>

Refactor all relevant functions to use struct hv_stats_page pointers
instead of void pointers for stats page mapping and unmapping, thus
improving type safety and code clarity across the Hyper-V stats mapping
APIs.

Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
---
 drivers/hv/mshv_root.h         |  5 +++--
 drivers/hv/mshv_root_hv_call.c | 12 +++++++-----
 drivers/hv/mshv_root_main.c    |  8 ++++----
 3 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
index 3c1d88b36741..05ba1f716f9e 100644
--- a/drivers/hv/mshv_root.h
+++ b/drivers/hv/mshv_root.h
@@ -307,8 +307,9 @@ int hv_call_disconnect_port(u64 connection_partition_id,
 int hv_call_notify_port_ring_empty(u32 sint_index);
 int hv_map_stats_page(enum hv_stats_object_type type,
 		      const union hv_stats_object_identity *identity,
-		      void **addr);
-int hv_unmap_stats_page(enum hv_stats_object_type type, void *page_addr,
+		      struct hv_stats_page **addr);
+int hv_unmap_stats_page(enum hv_stats_object_type type,
+			struct hv_stats_page *page_addr,
 			const union hv_stats_object_identity *identity);
 int hv_call_modify_spa_host_access(u64 partition_id, struct page **pages,
 				   u64 page_struct_count, u32 host_access,
diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
index 1f93b94d7580..daee036e48bc 100644
--- a/drivers/hv/mshv_root_hv_call.c
+++ b/drivers/hv/mshv_root_hv_call.c
@@ -890,9 +890,10 @@ hv_stats_get_area_type(enum hv_stats_object_type type,
  * caller should check for this case and instead fall back to the SELF area
  * alone.
  */
-static int hv_call_map_stats_page(enum hv_stats_object_type type,
-				  const union hv_stats_object_identity *identity,
-				  void **addr)
+static int
+hv_call_map_stats_page(enum hv_stats_object_type type,
+		       const union hv_stats_object_identity *identity,
+		       struct hv_stats_page **addr)
 {
 	unsigned long flags;
 	struct hv_input_map_stats_page *input;
@@ -942,7 +943,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type,
 
 int hv_map_stats_page(enum hv_stats_object_type type,
 		      const union hv_stats_object_identity *identity,
-		      void **addr)
+		      struct hv_stats_page **addr)
 {
 	int ret;
 	struct page *allocated_page = NULL;
@@ -990,7 +991,8 @@ static int hv_call_unmap_stats_page(enum hv_stats_object_type type,
 	return hv_result_to_errno(status);
 }
 
-int hv_unmap_stats_page(enum hv_stats_object_type type, void *page_addr,
+int hv_unmap_stats_page(enum hv_stats_object_type type,
+			struct hv_stats_page *page_addr,
 			const union hv_stats_object_identity *identity)
 {
 	int ret;
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 1777778f84b8..be5ad0fbfbee 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -957,7 +957,7 @@ mshv_vp_release(struct inode *inode, struct file *filp)
 }
 
 static void mshv_vp_stats_unmap(u64 partition_id, u32 vp_index,
-				void *stats_pages[])
+				struct hv_stats_page *stats_pages[])
 {
 	union hv_stats_object_identity identity = {
 		.vp.partition_id = partition_id,
@@ -972,7 +972,7 @@ static void mshv_vp_stats_unmap(u64 partition_id, u32 vp_index,
 }
 
 static int mshv_vp_stats_map(u64 partition_id, u32 vp_index,
-			     void *stats_pages[])
+			     struct hv_stats_page *stats_pages[])
 {
 	union hv_stats_object_identity identity = {
 		.vp.partition_id = partition_id,
@@ -1010,7 +1010,7 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 	struct mshv_create_vp args;
 	struct mshv_vp *vp;
 	struct page *intercept_msg_page, *register_page, *ghcb_page;
-	void *stats_pages[2];
+	struct hv_stats_page *stats_pages[2];
 	long ret;
 
 	if (copy_from_user(&args, arg, sizeof(args)))
@@ -1729,7 +1729,7 @@ static void destroy_partition(struct mshv_partition *partition)
 
 			if (hv_scheduler_type == HV_SCHEDULER_TYPE_ROOT)
 				mshv_vp_stats_unmap(partition->pt_id, vp->vp_index,
-						    (void **)vp->vp_stats_pages);
+						    vp->vp_stats_pages);
 
 			if (vp->vp_register_page) {
 				(void)hv_unmap_vp_state_page(partition->pt_id,
-- 
2.34.1



* [PATCH v4 3/7] mshv: Improve mshv_vp_stats_map/unmap(), add them to mshv_root.h
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 1/7] mshv: Ignore second stats page map result failure Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 2/7] mshv: Use typed hv_stats_page pointers Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 4/7] mshv: Always map child vp stats pages regardless of scheduler type Nuno Das Neves
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>

These functions are currently only used to map child partition VP stats
on the root partition. However, they will soon be used on L1VH, and also
for mapping the host's own VP stats.

Introduce a helper is_l1vh_parent() to determine whether we are mapping
our own VP stats. In this case, do not attempt to map the PARENT area.
Note this is a different case from mapping PARENT on an older hypervisor,
where it is not available at all, so it must be handled separately.

On unmap, pass the stats pages since on L1VH the kernel allocates them
and they must be freed in hv_unmap_stats_page().

Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
---
 drivers/hv/mshv_root.h      | 10 ++++++
 drivers/hv/mshv_root_main.c | 61 ++++++++++++++++++++++++++-----------
 2 files changed, 54 insertions(+), 17 deletions(-)

diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
index 05ba1f716f9e..e4912b0618fa 100644
--- a/drivers/hv/mshv_root.h
+++ b/drivers/hv/mshv_root.h
@@ -254,6 +254,16 @@ struct mshv_partition *mshv_partition_get(struct mshv_partition *partition);
 void mshv_partition_put(struct mshv_partition *partition);
 struct mshv_partition *mshv_partition_find(u64 partition_id) __must_hold(RCU);
 
+static inline bool is_l1vh_parent(u64 partition_id)
+{
+	return hv_l1vh_partition() && (partition_id == HV_PARTITION_ID_SELF);
+}
+
+int mshv_vp_stats_map(u64 partition_id, u32 vp_index,
+		      struct hv_stats_page **stats_pages);
+void mshv_vp_stats_unmap(u64 partition_id, u32 vp_index,
+			 struct hv_stats_page **stats_pages);
+
 /* hypercalls */
 
 int hv_call_withdraw_memory(u64 count, int node, u64 partition_id);
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index be5ad0fbfbee..faca3cc63e79 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -956,23 +956,36 @@ mshv_vp_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static void mshv_vp_stats_unmap(u64 partition_id, u32 vp_index,
-				struct hv_stats_page *stats_pages[])
+void mshv_vp_stats_unmap(u64 partition_id, u32 vp_index,
+			 struct hv_stats_page *stats_pages[])
 {
 	union hv_stats_object_identity identity = {
 		.vp.partition_id = partition_id,
 		.vp.vp_index = vp_index,
 	};
+	int err;
 
 	identity.vp.stats_area_type = HV_STATS_AREA_SELF;
-	hv_unmap_stats_page(HV_STATS_OBJECT_VP, NULL, &identity);
-
-	identity.vp.stats_area_type = HV_STATS_AREA_PARENT;
-	hv_unmap_stats_page(HV_STATS_OBJECT_VP, NULL, &identity);
+	err = hv_unmap_stats_page(HV_STATS_OBJECT_VP,
+				  stats_pages[HV_STATS_AREA_SELF],
+				  &identity);
+	if (err)
+		pr_err("%s: failed to unmap partition %llu vp %u self stats, err: %d\n",
+		       __func__, partition_id, vp_index, err);
+
+	if (stats_pages[HV_STATS_AREA_PARENT] != stats_pages[HV_STATS_AREA_SELF]) {
+		identity.vp.stats_area_type = HV_STATS_AREA_PARENT;
+		err = hv_unmap_stats_page(HV_STATS_OBJECT_VP,
+					  stats_pages[HV_STATS_AREA_PARENT],
+					  &identity);
+		if (err)
+			pr_err("%s: failed to unmap partition %llu vp %u parent stats, err: %d\n",
+			       __func__, partition_id, vp_index, err);
+	}
 }
 
-static int mshv_vp_stats_map(u64 partition_id, u32 vp_index,
-			     struct hv_stats_page *stats_pages[])
+int mshv_vp_stats_map(u64 partition_id, u32 vp_index,
+		      struct hv_stats_page *stats_pages[])
 {
 	union hv_stats_object_identity identity = {
 		.vp.partition_id = partition_id,
@@ -983,23 +996,37 @@ static int mshv_vp_stats_map(u64 partition_id, u32 vp_index,
 	identity.vp.stats_area_type = HV_STATS_AREA_SELF;
 	err = hv_map_stats_page(HV_STATS_OBJECT_VP, &identity,
 				&stats_pages[HV_STATS_AREA_SELF]);
-	if (err)
+	if (err) {
+		pr_err("%s: failed to map partition %llu vp %u self stats, err: %d\n",
+		       __func__, partition_id, vp_index, err);
 		return err;
+	}
 
-	identity.vp.stats_area_type = HV_STATS_AREA_PARENT;
-	err = hv_map_stats_page(HV_STATS_OBJECT_VP, &identity,
-				&stats_pages[HV_STATS_AREA_PARENT]);
-	if (err)
-		goto unmap_self;
-
-	if (!stats_pages[HV_STATS_AREA_PARENT])
+	/*
+	 * L1VH partition cannot access its vp stats in parent area.
+	 */
+	if (is_l1vh_parent(partition_id)) {
 		stats_pages[HV_STATS_AREA_PARENT] = stats_pages[HV_STATS_AREA_SELF];
+	} else {
+		identity.vp.stats_area_type = HV_STATS_AREA_PARENT;
+		err = hv_map_stats_page(HV_STATS_OBJECT_VP, &identity,
+					&stats_pages[HV_STATS_AREA_PARENT]);
+		if (err) {
+			pr_err("%s: failed to map partition %llu vp %u parent stats, err: %d\n",
+			       __func__, partition_id, vp_index, err);
+			goto unmap_self;
+		}
+		if (!stats_pages[HV_STATS_AREA_PARENT])
+			stats_pages[HV_STATS_AREA_PARENT] = stats_pages[HV_STATS_AREA_SELF];
+	}
 
 	return 0;
 
 unmap_self:
 	identity.vp.stats_area_type = HV_STATS_AREA_SELF;
-	hv_unmap_stats_page(HV_STATS_OBJECT_VP, NULL, &identity);
+	hv_unmap_stats_page(HV_STATS_OBJECT_VP,
+			    stats_pages[HV_STATS_AREA_SELF],
+			    &identity);
 	return err;
 }
 
-- 
2.34.1



* [PATCH v4 4/7] mshv: Always map child vp stats pages regardless of scheduler type
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
                   ` (2 preceding siblings ...)
  2026-01-21 21:46 ` [PATCH v4 3/7] mshv: Improve mshv_vp_stats_map/unmap(), add them to mshv_root.h Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 5/7] mshv: Update hv_stats_page definitions Nuno Das Neves
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

From: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>

Currently vp->vp_stats_pages is only used by the root scheduler for fast
interrupt injection.

Soon, vp_stats_pages will also be needed for exposing child VP stats to
userspace via debugfs. Mapping the pages a second time to a different
address causes an error on L1VH.

Remove the scheduler requirement and always map the vp stats pages.

Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
---
 drivers/hv/mshv_root_main.c | 25 ++++++++-----------------
 1 file changed, 8 insertions(+), 17 deletions(-)

diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index faca3cc63e79..fbfc9e7d9fa4 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -1077,16 +1077,10 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 			goto unmap_register_page;
 	}
 
-	/*
-	 * This mapping of the stats page is for detecting if dispatch thread
-	 * is blocked - only relevant for root scheduler
-	 */
-	if (hv_scheduler_type == HV_SCHEDULER_TYPE_ROOT) {
-		ret = mshv_vp_stats_map(partition->pt_id, args.vp_index,
-					stats_pages);
-		if (ret)
-			goto unmap_ghcb_page;
-	}
+	ret = mshv_vp_stats_map(partition->pt_id, args.vp_index,
+				stats_pages);
+	if (ret)
+		goto unmap_ghcb_page;
 
 	vp = kzalloc(sizeof(*vp), GFP_KERNEL);
 	if (!vp)
@@ -1110,8 +1104,7 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 	if (mshv_partition_encrypted(partition) && is_ghcb_mapping_available())
 		vp->vp_ghcb_page = page_to_virt(ghcb_page);
 
-	if (hv_scheduler_type == HV_SCHEDULER_TYPE_ROOT)
-		memcpy(vp->vp_stats_pages, stats_pages, sizeof(stats_pages));
+	memcpy(vp->vp_stats_pages, stats_pages, sizeof(stats_pages));
 
 	/*
 	 * Keep anon_inode_getfd last: it installs fd in the file struct and
@@ -1133,8 +1126,7 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 free_vp:
 	kfree(vp);
 unmap_stats_pages:
-	if (hv_scheduler_type == HV_SCHEDULER_TYPE_ROOT)
-		mshv_vp_stats_unmap(partition->pt_id, args.vp_index, stats_pages);
+	mshv_vp_stats_unmap(partition->pt_id, args.vp_index, stats_pages);
 unmap_ghcb_page:
 	if (mshv_partition_encrypted(partition) && is_ghcb_mapping_available())
 		hv_unmap_vp_state_page(partition->pt_id, args.vp_index,
@@ -1754,9 +1746,8 @@ static void destroy_partition(struct mshv_partition *partition)
 			if (!vp)
 				continue;
 
-			if (hv_scheduler_type == HV_SCHEDULER_TYPE_ROOT)
-				mshv_vp_stats_unmap(partition->pt_id, vp->vp_index,
-						    vp->vp_stats_pages);
+			mshv_vp_stats_unmap(partition->pt_id, vp->vp_index,
+					    vp->vp_stats_pages);
 
 			if (vp->vp_register_page) {
 				(void)hv_unmap_vp_state_page(partition->pt_id,
-- 
2.34.1



* [PATCH v4 5/7] mshv: Update hv_stats_page definitions
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
                   ` (3 preceding siblings ...)
  2026-01-21 21:46 ` [PATCH v4 4/7] mshv: Always map child vp stats pages regardless of scheduler type Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-22  1:22   ` Stanislav Kinsburskii
  2026-01-21 21:46 ` [PATCH v4 6/7] mshv: Add data for printing stats page counters Nuno Das Neves
  2026-01-21 21:46 ` [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics Nuno Das Neves
  6 siblings, 1 reply; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

hv_stats_page belongs in hvhdk.h, so move it there.

It does not require a union to access the data for different counters;
just use a single u64 array for simplicity and to match the Windows
definitions.

While at it, correct the ARM64 value for VpRootDispatchThreadBlocked.

Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
---
 drivers/hv/mshv_root_main.c | 22 ++++++----------------
 include/hyperv/hvhdk.h      |  8 ++++++++
 2 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index fbfc9e7d9fa4..12825666e21b 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -39,23 +39,14 @@ MODULE_AUTHOR("Microsoft");
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Microsoft Hyper-V root partition VMM interface /dev/mshv");
 
-/* TODO move this to another file when debugfs code is added */
 enum hv_stats_vp_counters {			/* HV_THREAD_COUNTER */
 #if defined(CONFIG_X86)
-	VpRootDispatchThreadBlocked			= 202,
+	VpRootDispatchThreadBlocked = 202,
 #elif defined(CONFIG_ARM64)
-	VpRootDispatchThreadBlocked			= 94,
+	VpRootDispatchThreadBlocked = 95,
 #endif
-	VpStatsMaxCounter
 };
 
-struct hv_stats_page {
-	union {
-		u64 vp_cntrs[VpStatsMaxCounter];		/* VP counters */
-		u8 data[HV_HYP_PAGE_SIZE];
-	};
-} __packed;
-
 struct mshv_root mshv_root;
 
 enum hv_scheduler_type hv_scheduler_type;
@@ -485,12 +476,11 @@ static u64 mshv_vp_interrupt_pending(struct mshv_vp *vp)
 static bool mshv_vp_dispatch_thread_blocked(struct mshv_vp *vp)
 {
 	struct hv_stats_page **stats = vp->vp_stats_pages;
-	u64 *self_vp_cntrs = stats[HV_STATS_AREA_SELF]->vp_cntrs;
-	u64 *parent_vp_cntrs = stats[HV_STATS_AREA_PARENT]->vp_cntrs;
+	u64 *self_vp_cntrs = stats[HV_STATS_AREA_SELF]->data;
+	u64 *parent_vp_cntrs = stats[HV_STATS_AREA_PARENT]->data;
 
-	if (self_vp_cntrs[VpRootDispatchThreadBlocked])
-		return self_vp_cntrs[VpRootDispatchThreadBlocked];
-	return parent_vp_cntrs[VpRootDispatchThreadBlocked];
+	return parent_vp_cntrs[VpRootDispatchThreadBlocked] ||
+	       self_vp_cntrs[VpRootDispatchThreadBlocked];
 }
 
 static int
diff --git a/include/hyperv/hvhdk.h b/include/hyperv/hvhdk.h
index 469186df7826..ac501969105c 100644
--- a/include/hyperv/hvhdk.h
+++ b/include/hyperv/hvhdk.h
@@ -10,6 +10,14 @@
 #include "hvhdk_mini.h"
 #include "hvgdk.h"
 
+/*
+ * Hypervisor statistics page format
+ */
+struct hv_stats_page {
+	u64 data[HV_HYP_PAGE_SIZE / sizeof(u64)];
+} __packed;
+
+
 /* Bits for dirty mask of hv_vp_register_page */
 #define HV_X64_REGISTER_CLASS_GENERAL	0
 #define HV_X64_REGISTER_CLASS_IP	1
-- 
2.34.1



* [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
                   ` (4 preceding siblings ...)
  2026-01-21 21:46 ` [PATCH v4 5/7] mshv: Update hv_stats_page definitions Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-22  1:18   ` Stanislav Kinsburskii
  2026-01-23 17:09   ` Michael Kelley
  2026-01-21 21:46 ` [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics Nuno Das Neves
  6 siblings, 2 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves

Introduce hv_counters.c, containing static data corresponding to
HV_*_COUNTER enums in the hypervisor source. Defining the enum
members as an array instead makes more sense, since it will be
iterated over to print counter information to debugfs.

Include hypervisor, logical processor, partition, and virtual
processor counters.

Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
---
 drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 488 insertions(+)
 create mode 100644 drivers/hv/hv_counters.c

diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
new file mode 100644
index 000000000000..a8e07e72cc29
--- /dev/null
+++ b/drivers/hv/hv_counters.c
@@ -0,0 +1,488 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2026, Microsoft Corporation.
+ *
+ * Data for printing stats page counters via debugfs.
+ *
+ * Authors: Microsoft Linux virtualization team
+ */
+
+struct hv_counter_entry {
+	char *name;
+	int idx;
+};
+
+/* HV_HYPERVISOR_COUNTER */
+static struct hv_counter_entry hv_hypervisor_counters[] = {
+	{ "HvLogicalProcessors", 1 },
+	{ "HvPartitions", 2 },
+	{ "HvTotalPages", 3 },
+	{ "HvVirtualProcessors", 4 },
+	{ "HvMonitoredNotifications", 5 },
+	{ "HvModernStandbyEntries", 6 },
+	{ "HvPlatformIdleTransitions", 7 },
+	{ "HvHypervisorStartupCost", 8 },
+
+	{ "HvIOSpacePages", 10 },
+	{ "HvNonEssentialPagesForDump", 11 },
+	{ "HvSubsumedPages", 12 },
+};
+
+/* HV_CPU_COUNTER */
+static struct hv_counter_entry hv_lp_counters[] = {
+	{ "LpGlobalTime", 1 },
+	{ "LpTotalRunTime", 2 },
+	{ "LpHypervisorRunTime", 3 },
+	{ "LpHardwareInterrupts", 4 },
+	{ "LpContextSwitches", 5 },
+	{ "LpInterProcessorInterrupts", 6 },
+	{ "LpSchedulerInterrupts", 7 },
+	{ "LpTimerInterrupts", 8 },
+	{ "LpInterProcessorInterruptsSent", 9 },
+	{ "LpProcessorHalts", 10 },
+	{ "LpMonitorTransitionCost", 11 },
+	{ "LpContextSwitchTime", 12 },
+	{ "LpC1TransitionsCount", 13 },
+	{ "LpC1RunTime", 14 },
+	{ "LpC2TransitionsCount", 15 },
+	{ "LpC2RunTime", 16 },
+	{ "LpC3TransitionsCount", 17 },
+	{ "LpC3RunTime", 18 },
+	{ "LpRootVpIndex", 19 },
+	{ "LpIdleSequenceNumber", 20 },
+	{ "LpGlobalTscCount", 21 },
+	{ "LpActiveTscCount", 22 },
+	{ "LpIdleAccumulation", 23 },
+	{ "LpReferenceCycleCount0", 24 },
+	{ "LpActualCycleCount0", 25 },
+	{ "LpReferenceCycleCount1", 26 },
+	{ "LpActualCycleCount1", 27 },
+	{ "LpProximityDomainId", 28 },
+	{ "LpPostedInterruptNotifications", 29 },
+	{ "LpBranchPredictorFlushes", 30 },
+#if IS_ENABLED(CONFIG_X86_64)
+	{ "LpL1DataCacheFlushes", 31 },
+	{ "LpImmediateL1DataCacheFlushes", 32 },
+	{ "LpMbFlushes", 33 },
+	{ "LpCounterRefreshSequenceNumber", 34 },
+	{ "LpCounterRefreshReferenceTime", 35 },
+	{ "LpIdleAccumulationSnapshot", 36 },
+	{ "LpActiveTscCountSnapshot", 37 },
+	{ "LpHwpRequestContextSwitches", 38 },
+	{ "LpPlaceholder1", 39 },
+	{ "LpPlaceholder2", 40 },
+	{ "LpPlaceholder3", 41 },
+	{ "LpPlaceholder4", 42 },
+	{ "LpPlaceholder5", 43 },
+	{ "LpPlaceholder6", 44 },
+	{ "LpPlaceholder7", 45 },
+	{ "LpPlaceholder8", 46 },
+	{ "LpPlaceholder9", 47 },
+	{ "LpSchLocalRunListSize", 48 },
+	{ "LpReserveGroupId", 49 },
+	{ "LpRunningPriority", 50 },
+	{ "LpPerfmonInterruptCount", 51 },
+#elif IS_ENABLED(CONFIG_ARM64)
+	{ "LpCounterRefreshSequenceNumber", 31 },
+	{ "LpCounterRefreshReferenceTime", 32 },
+	{ "LpIdleAccumulationSnapshot", 33 },
+	{ "LpActiveTscCountSnapshot", 34 },
+	{ "LpHwpRequestContextSwitches", 35 },
+	{ "LpPlaceholder2", 36 },
+	{ "LpPlaceholder3", 37 },
+	{ "LpPlaceholder4", 38 },
+	{ "LpPlaceholder5", 39 },
+	{ "LpPlaceholder6", 40 },
+	{ "LpPlaceholder7", 41 },
+	{ "LpPlaceholder8", 42 },
+	{ "LpPlaceholder9", 43 },
+	{ "LpSchLocalRunListSize", 44 },
+	{ "LpReserveGroupId", 45 },
+	{ "LpRunningPriority", 46 },
+#endif
+};
+
+/* HV_PROCESS_COUNTER */
+static struct hv_counter_entry hv_partition_counters[] = {
+	{ "PtVirtualProcessors", 1 },
+
+	{ "PtTlbSize", 3 },
+	{ "PtAddressSpaces", 4 },
+	{ "PtDepositedPages", 5 },
+	{ "PtGpaPages", 6 },
+	{ "PtGpaSpaceModifications", 7 },
+	{ "PtVirtualTlbFlushEntires", 8 },
+	{ "PtRecommendedTlbSize", 9 },
+	{ "PtGpaPages4K", 10 },
+	{ "PtGpaPages2M", 11 },
+	{ "PtGpaPages1G", 12 },
+	{ "PtGpaPages512G", 13 },
+	{ "PtDevicePages4K", 14 },
+	{ "PtDevicePages2M", 15 },
+	{ "PtDevicePages1G", 16 },
+	{ "PtDevicePages512G", 17 },
+	{ "PtAttachedDevices", 18 },
+	{ "PtDeviceInterruptMappings", 19 },
+	{ "PtIoTlbFlushes", 20 },
+	{ "PtIoTlbFlushCost", 21 },
+	{ "PtDeviceInterruptErrors", 22 },
+	{ "PtDeviceDmaErrors", 23 },
+	{ "PtDeviceInterruptThrottleEvents", 24 },
+	{ "PtSkippedTimerTicks", 25 },
+	{ "PtPartitionId", 26 },
+#if IS_ENABLED(CONFIG_X86_64)
+	{ "PtNestedTlbSize", 27 },
+	{ "PtRecommendedNestedTlbSize", 28 },
+	{ "PtNestedTlbFreeListSize", 29 },
+	{ "PtNestedTlbTrimmedPages", 30 },
+	{ "PtPagesShattered", 31 },
+	{ "PtPagesRecombined", 32 },
+	{ "PtHwpRequestValue", 33 },
+	{ "PtAutoSuspendEnableTime", 34 },
+	{ "PtAutoSuspendTriggerTime", 35 },
+	{ "PtAutoSuspendDisableTime", 36 },
+	{ "PtPlaceholder1", 37 },
+	{ "PtPlaceholder2", 38 },
+	{ "PtPlaceholder3", 39 },
+	{ "PtPlaceholder4", 40 },
+	{ "PtPlaceholder5", 41 },
+	{ "PtPlaceholder6", 42 },
+	{ "PtPlaceholder7", 43 },
+	{ "PtPlaceholder8", 44 },
+	{ "PtHypervisorStateTransferGeneration", 45 },
+	{ "PtNumberofActiveChildPartitions", 46 },
+#elif IS_ENABLED(CONFIG_ARM64)
+	{ "PtHwpRequestValue", 27 },
+	{ "PtAutoSuspendEnableTime", 28 },
+	{ "PtAutoSuspendTriggerTime", 29 },
+	{ "PtAutoSuspendDisableTime", 30 },
+	{ "PtPlaceholder1", 31 },
+	{ "PtPlaceholder2", 32 },
+	{ "PtPlaceholder3", 33 },
+	{ "PtPlaceholder4", 34 },
+	{ "PtPlaceholder5", 35 },
+	{ "PtPlaceholder6", 36 },
+	{ "PtPlaceholder7", 37 },
+	{ "PtPlaceholder8", 38 },
+	{ "PtHypervisorStateTransferGeneration", 39 },
+	{ "PtNumberofActiveChildPartitions", 40 },
+#endif
+};
+
+/* HV_THREAD_COUNTER */
+static struct hv_counter_entry hv_vp_counters[] = {
+	{ "VpTotalRunTime", 1 },
+	{ "VpHypervisorRunTime", 2 },
+	{ "VpRemoteNodeRunTime", 3 },
+	{ "VpNormalizedRunTime", 4 },
+	{ "VpIdealCpu", 5 },
+
+	{ "VpHypercallsCount", 7 },
+	{ "VpHypercallsTime", 8 },
+#if IS_ENABLED(CONFIG_X86_64)
+	{ "VpPageInvalidationsCount", 9 },
+	{ "VpPageInvalidationsTime", 10 },
+	{ "VpControlRegisterAccessesCount", 11 },
+	{ "VpControlRegisterAccessesTime", 12 },
+	{ "VpIoInstructionsCount", 13 },
+	{ "VpIoInstructionsTime", 14 },
+	{ "VpHltInstructionsCount", 15 },
+	{ "VpHltInstructionsTime", 16 },
+	{ "VpMwaitInstructionsCount", 17 },
+	{ "VpMwaitInstructionsTime", 18 },
+	{ "VpCpuidInstructionsCount", 19 },
+	{ "VpCpuidInstructionsTime", 20 },
+	{ "VpMsrAccessesCount", 21 },
+	{ "VpMsrAccessesTime", 22 },
+	{ "VpOtherInterceptsCount", 23 },
+	{ "VpOtherInterceptsTime", 24 },
+	{ "VpExternalInterruptsCount", 25 },
+	{ "VpExternalInterruptsTime", 26 },
+	{ "VpPendingInterruptsCount", 27 },
+	{ "VpPendingInterruptsTime", 28 },
+	{ "VpEmulatedInstructionsCount", 29 },
+	{ "VpEmulatedInstructionsTime", 30 },
+	{ "VpDebugRegisterAccessesCount", 31 },
+	{ "VpDebugRegisterAccessesTime", 32 },
+	{ "VpPageFaultInterceptsCount", 33 },
+	{ "VpPageFaultInterceptsTime", 34 },
+	{ "VpGuestPageTableMaps", 35 },
+	{ "VpLargePageTlbFills", 36 },
+	{ "VpSmallPageTlbFills", 37 },
+	{ "VpReflectedGuestPageFaults", 38 },
+	{ "VpApicMmioAccesses", 39 },
+	{ "VpIoInterceptMessages", 40 },
+	{ "VpMemoryInterceptMessages", 41 },
+	{ "VpApicEoiAccesses", 42 },
+	{ "VpOtherMessages", 43 },
+	{ "VpPageTableAllocations", 44 },
+	{ "VpLogicalProcessorMigrations", 45 },
+	{ "VpAddressSpaceEvictions", 46 },
+	{ "VpAddressSpaceSwitches", 47 },
+	{ "VpAddressDomainFlushes", 48 },
+	{ "VpAddressSpaceFlushes", 49 },
+	{ "VpGlobalGvaRangeFlushes", 50 },
+	{ "VpLocalGvaRangeFlushes", 51 },
+	{ "VpPageTableEvictions", 52 },
+	{ "VpPageTableReclamations", 53 },
+	{ "VpPageTableResets", 54 },
+	{ "VpPageTableValidations", 55 },
+	{ "VpApicTprAccesses", 56 },
+	{ "VpPageTableWriteIntercepts", 57 },
+	{ "VpSyntheticInterrupts", 58 },
+	{ "VpVirtualInterrupts", 59 },
+	{ "VpApicIpisSent", 60 },
+	{ "VpApicSelfIpisSent", 61 },
+	{ "VpGpaSpaceHypercalls", 62 },
+	{ "VpLogicalProcessorHypercalls", 63 },
+	{ "VpLongSpinWaitHypercalls", 64 },
+	{ "VpOtherHypercalls", 65 },
+	{ "VpSyntheticInterruptHypercalls", 66 },
+	{ "VpVirtualInterruptHypercalls", 67 },
+	{ "VpVirtualMmuHypercalls", 68 },
+	{ "VpVirtualProcessorHypercalls", 69 },
+	{ "VpHardwareInterrupts", 70 },
+	{ "VpNestedPageFaultInterceptsCount", 71 },
+	{ "VpNestedPageFaultInterceptsTime", 72 },
+	{ "VpPageScans", 73 },
+	{ "VpLogicalProcessorDispatches", 74 },
+	{ "VpWaitingForCpuTime", 75 },
+	{ "VpExtendedHypercalls", 76 },
+	{ "VpExtendedHypercallInterceptMessages", 77 },
+	{ "VpMbecNestedPageTableSwitches", 78 },
+	{ "VpOtherReflectedGuestExceptions", 79 },
+	{ "VpGlobalIoTlbFlushes", 80 },
+	{ "VpGlobalIoTlbFlushCost", 81 },
+	{ "VpLocalIoTlbFlushes", 82 },
+	{ "VpLocalIoTlbFlushCost", 83 },
+	{ "VpHypercallsForwardedCount", 84 },
+	{ "VpHypercallsForwardingTime", 85 },
+	{ "VpPageInvalidationsForwardedCount", 86 },
+	{ "VpPageInvalidationsForwardingTime", 87 },
+	{ "VpControlRegisterAccessesForwardedCount", 88 },
+	{ "VpControlRegisterAccessesForwardingTime", 89 },
+	{ "VpIoInstructionsForwardedCount", 90 },
+	{ "VpIoInstructionsForwardingTime", 91 },
+	{ "VpHltInstructionsForwardedCount", 92 },
+	{ "VpHltInstructionsForwardingTime", 93 },
+	{ "VpMwaitInstructionsForwardedCount", 94 },
+	{ "VpMwaitInstructionsForwardingTime", 95 },
+	{ "VpCpuidInstructionsForwardedCount", 96 },
+	{ "VpCpuidInstructionsForwardingTime", 97 },
+	{ "VpMsrAccessesForwardedCount", 98 },
+	{ "VpMsrAccessesForwardingTime", 99 },
+	{ "VpOtherInterceptsForwardedCount", 100 },
+	{ "VpOtherInterceptsForwardingTime", 101 },
+	{ "VpExternalInterruptsForwardedCount", 102 },
+	{ "VpExternalInterruptsForwardingTime", 103 },
+	{ "VpPendingInterruptsForwardedCount", 104 },
+	{ "VpPendingInterruptsForwardingTime", 105 },
+	{ "VpEmulatedInstructionsForwardedCount", 106 },
+	{ "VpEmulatedInstructionsForwardingTime", 107 },
+	{ "VpDebugRegisterAccessesForwardedCount", 108 },
+	{ "VpDebugRegisterAccessesForwardingTime", 109 },
+	{ "VpPageFaultInterceptsForwardedCount", 110 },
+	{ "VpPageFaultInterceptsForwardingTime", 111 },
+	{ "VpVmclearEmulationCount", 112 },
+	{ "VpVmclearEmulationTime", 113 },
+	{ "VpVmptrldEmulationCount", 114 },
+	{ "VpVmptrldEmulationTime", 115 },
+	{ "VpVmptrstEmulationCount", 116 },
+	{ "VpVmptrstEmulationTime", 117 },
+	{ "VpVmreadEmulationCount", 118 },
+	{ "VpVmreadEmulationTime", 119 },
+	{ "VpVmwriteEmulationCount", 120 },
+	{ "VpVmwriteEmulationTime", 121 },
+	{ "VpVmxoffEmulationCount", 122 },
+	{ "VpVmxoffEmulationTime", 123 },
+	{ "VpVmxonEmulationCount", 124 },
+	{ "VpVmxonEmulationTime", 125 },
+	{ "VpNestedVMEntriesCount", 126 },
+	{ "VpNestedVMEntriesTime", 127 },
+	{ "VpNestedSLATSoftPageFaultsCount", 128 },
+	{ "VpNestedSLATSoftPageFaultsTime", 129 },
+	{ "VpNestedSLATHardPageFaultsCount", 130 },
+	{ "VpNestedSLATHardPageFaultsTime", 131 },
+	{ "VpInvEptAllContextEmulationCount", 132 },
+	{ "VpInvEptAllContextEmulationTime", 133 },
+	{ "VpInvEptSingleContextEmulationCount", 134 },
+	{ "VpInvEptSingleContextEmulationTime", 135 },
+	{ "VpInvVpidAllContextEmulationCount", 136 },
+	{ "VpInvVpidAllContextEmulationTime", 137 },
+	{ "VpInvVpidSingleContextEmulationCount", 138 },
+	{ "VpInvVpidSingleContextEmulationTime", 139 },
+	{ "VpInvVpidSingleAddressEmulationCount", 140 },
+	{ "VpInvVpidSingleAddressEmulationTime", 141 },
+	{ "VpNestedTlbPageTableReclamations", 142 },
+	{ "VpNestedTlbPageTableEvictions", 143 },
+	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
+	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
+	{ "VpPostedInterruptNotifications", 146 },
+	{ "VpPostedInterruptScans", 147 },
+	{ "VpTotalCoreRunTime", 148 },
+	{ "VpMaximumRunTime", 149 },
+	{ "VpHwpRequestContextSwitches", 150 },
+	{ "VpWaitingForCpuTimeBucket0", 151 },
+	{ "VpWaitingForCpuTimeBucket1", 152 },
+	{ "VpWaitingForCpuTimeBucket2", 153 },
+	{ "VpWaitingForCpuTimeBucket3", 154 },
+	{ "VpWaitingForCpuTimeBucket4", 155 },
+	{ "VpWaitingForCpuTimeBucket5", 156 },
+	{ "VpWaitingForCpuTimeBucket6", 157 },
+	{ "VpVmloadEmulationCount", 158 },
+	{ "VpVmloadEmulationTime", 159 },
+	{ "VpVmsaveEmulationCount", 160 },
+	{ "VpVmsaveEmulationTime", 161 },
+	{ "VpGifInstructionEmulationCount", 162 },
+	{ "VpGifInstructionEmulationTime", 163 },
+	{ "VpEmulatedErrataSvmInstructions", 164 },
+	{ "VpPlaceholder1", 165 },
+	{ "VpPlaceholder2", 166 },
+	{ "VpPlaceholder3", 167 },
+	{ "VpPlaceholder4", 168 },
+	{ "VpPlaceholder5", 169 },
+	{ "VpPlaceholder6", 170 },
+	{ "VpPlaceholder7", 171 },
+	{ "VpPlaceholder8", 172 },
+	{ "VpContentionTime", 173 },
+	{ "VpWakeUpTime", 174 },
+	{ "VpSchedulingPriority", 175 },
+	{ "VpRdpmcInstructionsCount", 176 },
+	{ "VpRdpmcInstructionsTime", 177 },
+	{ "VpPerfmonPmuMsrAccessesCount", 178 },
+	{ "VpPerfmonLbrMsrAccessesCount", 179 },
+	{ "VpPerfmonIptMsrAccessesCount", 180 },
+	{ "VpPerfmonInterruptCount", 181 },
+	{ "VpVtl1DispatchCount", 182 },
+	{ "VpVtl2DispatchCount", 183 },
+	{ "VpVtl2DispatchBucket0", 184 },
+	{ "VpVtl2DispatchBucket1", 185 },
+	{ "VpVtl2DispatchBucket2", 186 },
+	{ "VpVtl2DispatchBucket3", 187 },
+	{ "VpVtl2DispatchBucket4", 188 },
+	{ "VpVtl2DispatchBucket5", 189 },
+	{ "VpVtl2DispatchBucket6", 190 },
+	{ "VpVtl1RunTime", 191 },
+	{ "VpVtl2RunTime", 192 },
+	{ "VpIommuHypercalls", 193 },
+	{ "VpCpuGroupHypercalls", 194 },
+	{ "VpVsmHypercalls", 195 },
+	{ "VpEventLogHypercalls", 196 },
+	{ "VpDeviceDomainHypercalls", 197 },
+	{ "VpDepositHypercalls", 198 },
+	{ "VpSvmHypercalls", 199 },
+	{ "VpBusLockAcquisitionCount", 200 },
+	{ "VpLoadAvg", 201 },
+	{ "VpRootDispatchThreadBlocked", 202 },
+	{ "VpIdleCpuTime", 203 },
+	{ "VpWaitingForCpuTimeBucket7", 204 },
+	{ "VpWaitingForCpuTimeBucket8", 205 },
+	{ "VpWaitingForCpuTimeBucket9", 206 },
+	{ "VpWaitingForCpuTimeBucket10", 207 },
+	{ "VpWaitingForCpuTimeBucket11", 208 },
+	{ "VpWaitingForCpuTimeBucket12", 209 },
+	{ "VpHierarchicalSuspendTime", 210 },
+	{ "VpExpressSchedulingAttempts", 211 },
+	{ "VpExpressSchedulingCount", 212 },
+	{ "VpBusLockAcquisitionTime", 213 },
+#elif IS_ENABLED(CONFIG_ARM64)
+	{ "VpSysRegAccessesCount", 9 },
+	{ "VpSysRegAccessesTime", 10 },
+	{ "VpSmcInstructionsCount", 11 },
+	{ "VpSmcInstructionsTime", 12 },
+	{ "VpOtherInterceptsCount", 13 },
+	{ "VpOtherInterceptsTime", 14 },
+	{ "VpExternalInterruptsCount", 15 },
+	{ "VpExternalInterruptsTime", 16 },
+	{ "VpPendingInterruptsCount", 17 },
+	{ "VpPendingInterruptsTime", 18 },
+	{ "VpGuestPageTableMaps", 19 },
+	{ "VpLargePageTlbFills", 20 },
+	{ "VpSmallPageTlbFills", 21 },
+	{ "VpReflectedGuestPageFaults", 22 },
+	{ "VpMemoryInterceptMessages", 23 },
+	{ "VpOtherMessages", 24 },
+	{ "VpLogicalProcessorMigrations", 25 },
+	{ "VpAddressDomainFlushes", 26 },
+	{ "VpAddressSpaceFlushes", 27 },
+	{ "VpSyntheticInterrupts", 28 },
+	{ "VpVirtualInterrupts", 29 },
+	{ "VpApicSelfIpisSent", 30 },
+	{ "VpGpaSpaceHypercalls", 31 },
+	{ "VpLogicalProcessorHypercalls", 32 },
+	{ "VpLongSpinWaitHypercalls", 33 },
+	{ "VpOtherHypercalls", 34 },
+	{ "VpSyntheticInterruptHypercalls", 35 },
+	{ "VpVirtualInterruptHypercalls", 36 },
+	{ "VpVirtualMmuHypercalls", 37 },
+	{ "VpVirtualProcessorHypercalls", 38 },
+	{ "VpHardwareInterrupts", 39 },
+	{ "VpNestedPageFaultInterceptsCount", 40 },
+	{ "VpNestedPageFaultInterceptsTime", 41 },
+	{ "VpLogicalProcessorDispatches", 42 },
+	{ "VpWaitingForCpuTime", 43 },
+	{ "VpExtendedHypercalls", 44 },
+	{ "VpExtendedHypercallInterceptMessages", 45 },
+	{ "VpMbecNestedPageTableSwitches", 46 },
+	{ "VpOtherReflectedGuestExceptions", 47 },
+	{ "VpGlobalIoTlbFlushes", 48 },
+	{ "VpGlobalIoTlbFlushCost", 49 },
+	{ "VpLocalIoTlbFlushes", 50 },
+	{ "VpLocalIoTlbFlushCost", 51 },
+	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
+	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
+	{ "VpPostedInterruptNotifications", 54 },
+	{ "VpPostedInterruptScans", 55 },
+	{ "VpTotalCoreRunTime", 56 },
+	{ "VpMaximumRunTime", 57 },
+	{ "VpWaitingForCpuTimeBucket0", 58 },
+	{ "VpWaitingForCpuTimeBucket1", 59 },
+	{ "VpWaitingForCpuTimeBucket2", 60 },
+	{ "VpWaitingForCpuTimeBucket3", 61 },
+	{ "VpWaitingForCpuTimeBucket4", 62 },
+	{ "VpWaitingForCpuTimeBucket5", 63 },
+	{ "VpWaitingForCpuTimeBucket6", 64 },
+	{ "VpHwpRequestContextSwitches", 65 },
+	{ "VpPlaceholder2", 66 },
+	{ "VpPlaceholder3", 67 },
+	{ "VpPlaceholder4", 68 },
+	{ "VpPlaceholder5", 69 },
+	{ "VpPlaceholder6", 70 },
+	{ "VpPlaceholder7", 71 },
+	{ "VpPlaceholder8", 72 },
+	{ "VpContentionTime", 73 },
+	{ "VpWakeUpTime", 74 },
+	{ "VpSchedulingPriority", 75 },
+	{ "VpVtl1DispatchCount", 76 },
+	{ "VpVtl2DispatchCount", 77 },
+	{ "VpVtl2DispatchBucket0", 78 },
+	{ "VpVtl2DispatchBucket1", 79 },
+	{ "VpVtl2DispatchBucket2", 80 },
+	{ "VpVtl2DispatchBucket3", 81 },
+	{ "VpVtl2DispatchBucket4", 82 },
+	{ "VpVtl2DispatchBucket5", 83 },
+	{ "VpVtl2DispatchBucket6", 84 },
+	{ "VpVtl1RunTime", 85 },
+	{ "VpVtl2RunTime", 86 },
+	{ "VpIommuHypercalls", 87 },
+	{ "VpCpuGroupHypercalls", 88 },
+	{ "VpVsmHypercalls", 89 },
+	{ "VpEventLogHypercalls", 90 },
+	{ "VpDeviceDomainHypercalls", 91 },
+	{ "VpDepositHypercalls", 92 },
+	{ "VpSvmHypercalls", 93 },
+	{ "VpLoadAvg", 94 },
+	{ "VpRootDispatchThreadBlocked", 95 },
+	{ "VpIdleCpuTime", 96 },
+	{ "VpWaitingForCpuTimeBucket7", 97 },
+	{ "VpWaitingForCpuTimeBucket8", 98 },
+	{ "VpWaitingForCpuTimeBucket9", 99 },
+	{ "VpWaitingForCpuTimeBucket10", 100 },
+	{ "VpWaitingForCpuTimeBucket11", 101 },
+	{ "VpWaitingForCpuTimeBucket12", 102 },
+	{ "VpHierarchicalSuspendTime", 103 },
+	{ "VpExpressSchedulingAttempts", 104 },
+	{ "VpExpressSchedulingCount", 105 },
+#endif
+};
+
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics
  2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
                   ` (5 preceding siblings ...)
  2026-01-21 21:46 ` [PATCH v4 6/7] mshv: Add data for printing stats page counters Nuno Das Neves
@ 2026-01-21 21:46 ` Nuno Das Neves
  2026-01-23 17:09   ` Michael Kelley
  6 siblings, 1 reply; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-21 21:46 UTC (permalink / raw)
  To: linux-hyperv, linux-kernel, mhklinux, skinsburskii
  Cc: kys, haiyangz, wei.liu, decui, longli, prapal, mrathor,
	paekkaladevi, Nuno Das Neves, Jinank Jain

Introduce a debugfs interface to expose root and child partition stats
when running with mshv_root.

Create a debugfs directory "mshv" containing 'stats' files organized by
type and id. A stats file contains a number of counters depending on
its type, e.g. this excerpt from a VP stats file:

TotalRunTime                  : 1997602722
HypervisorRunTime             : 649671371
RemoteNodeRunTime             : 0
NormalizedRunTime             : 1997602721
IdealCpu                      : 0
HypercallsCount               : 1708169
HypercallsTime                : 111914774
PageInvalidationsCount        : 0
PageInvalidationsTime         : 0

On a root partition with some active child partitions, the entire
directory structure may look like:

mshv/
  stats             # hypervisor stats
  lp/               # logical processors
    0/              # LP id
      stats         # LP 0 stats
    1/
    2/
    3/
  partition/        # partition stats
    1/              # root partition id
      stats         # root partition stats
      vp/           # root virtual processors
        0/          # root VP id
          stats     # root VP 0 stats
        1/
        2/
        3/
    42/             # child partition id
      stats         # child partition stats
      vp/           # child VPs
        0/          # child VP id
          stats     # child VP 0 stats
        1/
    43/
    55/

On L1VH, some stats are not present as it does not own the hardware
like the root partition does:
- The hypervisor and lp stats are not present
- L1VH's partition directory is named "self" because it can't get its
  own id
- Some of L1VH's partition and VP stats fields are not populated, because
  it can't map its own HV_STATS_AREA_PARENT page.

Co-developed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
Co-developed-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Co-developed-by: Mukesh Rathor <mrathor@linux.microsoft.com>
Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com>
Co-developed-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>
Signed-off-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>
Co-developed-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
Reviewed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
 drivers/hv/Makefile         |   1 +
 drivers/hv/hv_counters.c    |   1 +
 drivers/hv/hv_synic.c       | 177 +++++++++
 drivers/hv/mshv_debugfs.c   | 703 ++++++++++++++++++++++++++++++++++++
 drivers/hv/mshv_root.h      |  34 ++
 drivers/hv/mshv_root_main.c |  26 +-
 6 files changed, 940 insertions(+), 2 deletions(-)
 create mode 100644 drivers/hv/hv_synic.c
 create mode 100644 drivers/hv/mshv_debugfs.c

diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile
index a49f93c2d245..2593711c3628 100644
--- a/drivers/hv/Makefile
+++ b/drivers/hv/Makefile
@@ -15,6 +15,7 @@ hv_vmbus-$(CONFIG_HYPERV_TESTING)	+= hv_debugfs.o
 hv_utils-y := hv_util.o hv_kvp.o hv_snapshot.o hv_utils_transport.o
 mshv_root-y := mshv_root_main.o mshv_synic.o mshv_eventfd.o mshv_irq.o \
 	       mshv_root_hv_call.o mshv_portid_table.o mshv_regions.o
+mshv_root-$(CONFIG_DEBUG_FS) += mshv_debugfs.o
 mshv_vtl-y := mshv_vtl_main.o
 
 # Code that must be built-in
diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
index a8e07e72cc29..45ff3d663e56 100644
--- a/drivers/hv/hv_counters.c
+++ b/drivers/hv/hv_counters.c
@@ -3,6 +3,7 @@
  * Copyright (c) 2026, Microsoft Corporation.
  *
  * Data for printing stats page counters via debugfs.
+ * Included directly in mshv_debugfs.c.
  *
  * Authors: Microsoft Linux virtualization team
  */
diff --git a/drivers/hv/hv_synic.c b/drivers/hv/hv_synic.c
new file mode 100644
index 000000000000..cc81d78887f2
--- /dev/null
+++ b/drivers/hv/hv_synic.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2025, Microsoft Corporation.
+ *
+ * Authors: Microsoft Linux virtualization team
+ */
+
+/*
+ * SynIC usage by configuration:
+ *
+ * guest:
+ *	vmbus only, nothing else
+ *
+ * vtl:
+ *	mshv_vtl uses the intercept SINT, VTL2_VMBUS_SINT_INDEX
+ *	(7, not defined in hvgdk_mini)
+ *	vmbus
+ *
+ * bare-metal root:
+ *	mshv_root, no vmbus
+ *
+ * nested root:
+ *	mshv_root uses L1
+ *	vmbus uses L0 (NESTED regs)
+ *
+ * l1vh:
+ *	mshv_root and vmbus use the same regs
+ */
+
+struct hv_synic_page {
+	u64 msr;
+	void *ptr;
+	struct kref refcount;
+};
+
+void *hv_get_synic_page(u32 msr)
+{
+	struct hv_synic_page *page_obj;
+
+	page_obj = kmalloc(sizeof(*page_obj), GFP_KERNEL);
+	if (!page_obj)
+		return NULL;
+
+	page_obj->msr = msr;
+	page_obj->ptr = NULL;
+	kref_init(&page_obj->refcount);
+
+	return page_obj;
+}
+
+
+/* Pair a synic page pointer with its refcount */
+#define HV_SYNIC_PAGE_STRUCT(type, name)	\
+	struct {				\
+		type *ptr;			\
+		refcount_t ref_count;		\
+	} name
+
+struct hv_percpu_synic_cxt {
+	HV_SYNIC_PAGE_STRUCT(struct hv_message_page, hv_simp);
+	HV_SYNIC_PAGE_STRUCT(struct hv_synic_event_flags_page, hv_siefp);
+	HV_SYNIC_PAGE_STRUCT(struct hv_synic_event_ring_page, hv_sierp);
+};
+
+int hv_setup_sint(u32 sint_msr)
+{
+	union hv_synic_sint sint;
+
+	/* TODO: validate sint_msr */
+
+	sint.as_uint64 = hv_get_msr(sint_msr);
+	sint.vector = vmbus_interrupt;
+	sint.masked = false;
+	sint.auto_eoi = hv_recommend_using_aeoi();
+
+	hv_set_msr(sint_msr, sint.as_uint64);
+
+	return 0;
+}
+
+void *hv_setup_synic_page(u32 msr)
+{
+	void *addr;
+	/* SIMP/SIEFP-style synic page MSR layout: enable bit + page number */
+	union {
+		u64 as_uint64;
+		struct {
+			u64 enabled:1;
+			u64 reserved:11;
+			u64 gpa:52;
+		};
+	} synic_page;
+
+	/* TODO: validate msr */
+
+	synic_page.as_uint64 = hv_get_msr(msr);
+	synic_page.enabled = 1;
+
+	if (ms_hyperv.paravisor_present || hv_root_partition()) {
+		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
+		u64 base = (synic_page.gpa << HV_HYP_PAGE_SHIFT) &
+			    ~ms_hyperv.shared_gpa_boundary;
+		addr = (void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
+		if (!addr) {
+			pr_err("%s: Fail to map synic page from %#x.\n",
+			       __func__, msr);
+			return NULL;
+		}
+	} else {
+		addr = (void *)__get_free_page(GFP_KERNEL);
+		if (!addr)
+			return NULL;
+
+		memset(addr, 0, PAGE_SIZE);
+		synic_page.gpa = virt_to_phys(addr) >> HV_HYP_PAGE_SHIFT;
+	}
+	hv_set_msr(msr, synic_page.as_uint64);
+
+	return addr;
+}
+
+/*
+ * hv_hyp_synic_enable_regs - Initialize the Synthetic Interrupt Controller
+ * with the hypervisor.
+ */
+void hv_hyp_synic_enable_regs(unsigned int cpu)
+{
+	struct hv_per_cpu_context *hv_cpu =
+		per_cpu_ptr(hv_context.cpu_context, cpu);
+	union hv_synic_simp simp;
+	union hv_synic_siefp siefp;
+	union hv_synic_sint shared_sint;
+
+	/* Setup the Synic's message page with the hypervisor. */
+	simp.as_uint64 = hv_get_msr(HV_MSR_SIMP);
+	simp.simp_enabled = 1;
+
+	if (ms_hyperv.paravisor_present || hv_root_partition()) {
+		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
+		u64 base = (simp.base_simp_gpa << HV_HYP_PAGE_SHIFT) &
+				~ms_hyperv.shared_gpa_boundary;
+		hv_cpu->hyp_synic_message_page =
+			(void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
+		if (!hv_cpu->hyp_synic_message_page)
+			pr_err("Fail to map synic message page.\n");
+	} else {
+		simp.base_simp_gpa = virt_to_phys(hv_cpu->hyp_synic_message_page)
+			>> HV_HYP_PAGE_SHIFT;
+	}
+
+	hv_set_msr(HV_MSR_SIMP, simp.as_uint64);
+
+	/* Setup the Synic's event page with the hypervisor. */
+	siefp.as_uint64 = hv_get_msr(HV_MSR_SIEFP);
+	siefp.siefp_enabled = 1;
+
+	if (ms_hyperv.paravisor_present || hv_root_partition()) {
+		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
+		u64 base = (siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT) &
+				~ms_hyperv.shared_gpa_boundary;
+		hv_cpu->hyp_synic_event_page =
+			(void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
+		if (!hv_cpu->hyp_synic_event_page)
+			pr_err("Fail to map synic event page.\n");
+	} else {
+		siefp.base_siefp_gpa = virt_to_phys(hv_cpu->hyp_synic_event_page)
+			>> HV_HYP_PAGE_SHIFT;
+	}
+
+	hv_set_msr(HV_MSR_SIEFP, siefp.as_uint64);
+	hv_enable_coco_interrupt(cpu, vmbus_interrupt, true);
+
+	/* Setup the shared SINT. */
+	if (vmbus_irq != -1)
+		enable_percpu_irq(vmbus_irq, 0);
+	shared_sint.as_uint64 = hv_get_msr(HV_MSR_SINT0 + VMBUS_MESSAGE_SINT);
+
+	shared_sint.vector = vmbus_interrupt;
+	shared_sint.masked = false;
+	shared_sint.auto_eoi = hv_recommend_using_aeoi();
+	hv_set_msr(HV_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+}
+
+static void hv_hyp_synic_enable_interrupts(void)
+{
+	union hv_synic_scontrol sctrl;
+
+	/* Enable the global synic bit */
+	sctrl.as_uint64 = hv_get_msr(HV_MSR_SCONTROL);
+	sctrl.enable = 1;
+
+	hv_set_msr(HV_MSR_SCONTROL, sctrl.as_uint64);
+}
diff --git a/drivers/hv/mshv_debugfs.c b/drivers/hv/mshv_debugfs.c
new file mode 100644
index 000000000000..72eb0ae44e4b
--- /dev/null
+++ b/drivers/hv/mshv_debugfs.c
@@ -0,0 +1,703 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2026, Microsoft Corporation.
+ *
+ * The /sys/kernel/debug/mshv directory contents.
+ * Contains various statistics data, provided by the hypervisor.
+ *
+ * Authors: Microsoft Linux virtualization team
+ */
+
+#include <linux/debugfs.h>
+#include <linux/stringify.h>
+#include <asm/mshyperv.h>
+#include <linux/slab.h>
+
+#include "mshv.h"
+#include "mshv_root.h"
+
+#include "hv_counters.c"
+
+#define U32_BUF_SZ 11
+#define U64_BUF_SZ 21
+#define NUM_STATS_AREAS (HV_STATS_AREA_PARENT + 1)
+
+static struct dentry *mshv_debugfs;
+static struct dentry *mshv_debugfs_partition;
+static struct dentry *mshv_debugfs_lp;
+static struct dentry **parent_vp_stats;
+static struct dentry *parent_partition_stats;
+
+static u64 mshv_lps_count;
+static struct hv_stats_page **mshv_lps_stats;
+
+static int lp_stats_show(struct seq_file *m, void *v)
+{
+	const struct hv_stats_page *stats = m->private;
+	struct hv_counter_entry *entry = hv_lp_counters;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(hv_lp_counters); i++, entry++)
+		seq_printf(m, "%-29s: %llu\n", entry->name,
+			   stats->data[entry->idx]);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(lp_stats);
+
+static void mshv_lp_stats_unmap(u32 lp_index)
+{
+	union hv_stats_object_identity identity = {
+		.lp.lp_index = lp_index,
+		.lp.stats_area_type = HV_STATS_AREA_SELF,
+	};
+	int err;
+
+	err = hv_unmap_stats_page(HV_STATS_OBJECT_LOGICAL_PROCESSOR,
+				  mshv_lps_stats[lp_index], &identity);
+	if (err)
+		pr_err("%s: failed to unmap logical processor %u stats, err: %d\n",
+		       __func__, lp_index, err);
+}
+
+static struct hv_stats_page * __init mshv_lp_stats_map(u32 lp_index)
+{
+	union hv_stats_object_identity identity = {
+		.lp.lp_index = lp_index,
+		.lp.stats_area_type = HV_STATS_AREA_SELF,
+	};
+	struct hv_stats_page *stats;
+	int err;
+
+	err = hv_map_stats_page(HV_STATS_OBJECT_LOGICAL_PROCESSOR, &identity,
+				&stats);
+	if (err) {
+		pr_err("%s: failed to map logical processor %u stats, err: %d\n",
+		       __func__, lp_index, err);
+		return ERR_PTR(err);
+	}
+	mshv_lps_stats[lp_index] = stats;
+
+	return stats;
+}
+
+static struct hv_stats_page * __init lp_debugfs_stats_create(u32 lp_index,
+							     struct dentry *parent)
+{
+	struct dentry *dentry;
+	struct hv_stats_page *stats;
+
+	stats = mshv_lp_stats_map(lp_index);
+	if (IS_ERR(stats))
+		return stats;
+
+	dentry = debugfs_create_file("stats", 0400, parent,
+				     stats, &lp_stats_fops);
+	if (IS_ERR(dentry)) {
+		mshv_lp_stats_unmap(lp_index);
+		return ERR_CAST(dentry);
+	}
+	return stats;
+}
+
+static int __init lp_debugfs_create(u32 lp_index, struct dentry *parent)
+{
+	struct dentry *idx;
+	char lp_idx_str[U32_BUF_SZ];
+	struct hv_stats_page *stats;
+	int err;
+
+	sprintf(lp_idx_str, "%u", lp_index);
+
+	idx = debugfs_create_dir(lp_idx_str, parent);
+	if (IS_ERR(idx))
+		return PTR_ERR(idx);
+
+	stats = lp_debugfs_stats_create(lp_index, idx);
+	if (IS_ERR(stats)) {
+		err = PTR_ERR(stats);
+		goto remove_debugfs_lp_idx;
+	}
+
+	return 0;
+
+remove_debugfs_lp_idx:
+	debugfs_remove_recursive(idx);
+	return err;
+}
+
+static void mshv_debugfs_lp_remove(void)
+{
+	int lp_index;
+
+	debugfs_remove_recursive(mshv_debugfs_lp);
+
+	for (lp_index = 0; lp_index < mshv_lps_count; lp_index++)
+		mshv_lp_stats_unmap(lp_index);
+
+	kfree(mshv_lps_stats);
+	mshv_lps_stats = NULL;
+}
+
+static int __init mshv_debugfs_lp_create(struct dentry *parent)
+{
+	struct dentry *lp_dir;
+	int err, lp_index;
+
+	mshv_lps_stats = kcalloc(mshv_lps_count,
+				 sizeof(*mshv_lps_stats),
+				 GFP_KERNEL_ACCOUNT);
+
+	if (!mshv_lps_stats)
+		return -ENOMEM;
+
+	lp_dir = debugfs_create_dir("lp", parent);
+	if (IS_ERR(lp_dir)) {
+		err = PTR_ERR(lp_dir);
+		goto free_lp_stats;
+	}
+
+	for (lp_index = 0; lp_index < mshv_lps_count; lp_index++) {
+		err = lp_debugfs_create(lp_index, lp_dir);
+		if (err)
+			goto remove_debugfs_lps;
+	}
+
+	mshv_debugfs_lp = lp_dir;
+
+	return 0;
+
+remove_debugfs_lps:
+	for (lp_index -= 1; lp_index >= 0; lp_index--)
+		mshv_lp_stats_unmap(lp_index);
+	debugfs_remove_recursive(lp_dir);
+free_lp_stats:
+	kfree(mshv_lps_stats);
+
+	return err;
+}
+
+static int vp_stats_show(struct seq_file *m, void *v)
+{
+	const struct hv_stats_page **pstats = m->private;
+	struct hv_counter_entry *entry = hv_vp_counters;
+	int i;
+
+	/*
+	 * For VP and partition stats, there may be two stats areas mapped,
+	 * SELF and PARENT. These refer to the privilege level of the data in
+	 * each page. Some fields may be 0 in SELF and nonzero in PARENT, or
+	 * vice versa.
+	 *
+	 * Hence, prioritize printing from the PARENT page (more privileged
+	 * data), but use the value from the SELF page if the PARENT value is
+	 * 0.
+	 */
+
+	for (i = 0; i < ARRAY_SIZE(hv_vp_counters); i++, entry++) {
+		u64 parent_val = pstats[HV_STATS_AREA_PARENT]->data[entry->idx];
+		u64 self_val = pstats[HV_STATS_AREA_SELF]->data[entry->idx];
+
+		seq_printf(m, "%-43s: %llu\n", entry->name,
+			   parent_val ? parent_val : self_val);
+	}
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(vp_stats);
+
+static void vp_debugfs_remove(struct dentry *vp_stats)
+{
+	debugfs_remove_recursive(vp_stats->d_parent);
+}
+
+static int vp_debugfs_create(u64 partition_id, u32 vp_index,
+			     struct hv_stats_page **pstats,
+			     struct dentry **vp_stats_ptr,
+			     struct dentry *parent)
+{
+	struct dentry *vp_idx_dir, *d;
+	char vp_idx_str[U32_BUF_SZ];
+	int err;
+
+	sprintf(vp_idx_str, "%u", vp_index);
+
+	vp_idx_dir = debugfs_create_dir(vp_idx_str, parent);
+	if (IS_ERR(vp_idx_dir))
+		return PTR_ERR(vp_idx_dir);
+
+	d = debugfs_create_file("stats", 0400, vp_idx_dir,
+				pstats, &vp_stats_fops);
+	if (IS_ERR(d)) {
+		err = PTR_ERR(d);
+		goto remove_debugfs_vp_idx;
+	}
+
+	*vp_stats_ptr = d;
+
+	return 0;
+
+remove_debugfs_vp_idx:
+	debugfs_remove_recursive(vp_idx_dir);
+	return err;
+}
+
+static int partition_stats_show(struct seq_file *m, void *v)
+{
+	const struct hv_stats_page **pstats = m->private;
+	struct hv_counter_entry *entry = hv_partition_counters;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(hv_partition_counters); i++, entry++) {
+		u64 parent_val = pstats[HV_STATS_AREA_PARENT]->data[entry->idx];
+		u64 self_val = pstats[HV_STATS_AREA_SELF]->data[entry->idx];
+
+		seq_printf(m, "%-32s: %llu\n", entry->name,
+			   parent_val ? parent_val : self_val);
+	}
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(partition_stats);
+
+static void mshv_partition_stats_unmap(u64 partition_id,
+				       struct hv_stats_page *stats_page,
+				       enum hv_stats_area_type stats_area_type)
+{
+	union hv_stats_object_identity identity = {
+		.partition.partition_id = partition_id,
+		.partition.stats_area_type = stats_area_type,
+	};
+	int err;
+
+	err = hv_unmap_stats_page(HV_STATS_OBJECT_PARTITION, stats_page,
+				  &identity);
+	if (err)
+		pr_err("%s: failed to unmap partition %lld %s stats, err: %d\n",
+		       __func__, partition_id,
+		       (stats_area_type == HV_STATS_AREA_SELF) ? "self" : "parent",
+		       err);
+}
+
+static struct hv_stats_page *mshv_partition_stats_map(u64 partition_id,
+						      enum hv_stats_area_type stats_area_type)
+{
+	union hv_stats_object_identity identity = {
+		.partition.partition_id = partition_id,
+		.partition.stats_area_type = stats_area_type,
+	};
+	struct hv_stats_page *stats;
+	int err;
+
+	err = hv_map_stats_page(HV_STATS_OBJECT_PARTITION, &identity, &stats);
+	if (err) {
+		pr_err("%s: failed to map partition %lld %s stats, err: %d\n",
+		       __func__, partition_id,
+		       (stats_area_type == HV_STATS_AREA_SELF) ? "self" : "parent",
+		       err);
+		return ERR_PTR(err);
+	}
+	return stats;
+}
+
+static int mshv_debugfs_partition_stats_create(u64 partition_id,
+					    struct dentry **partition_stats_ptr,
+					    struct dentry *parent)
+{
+	struct dentry *dentry;
+	struct hv_stats_page **pstats;
+	int err;
+
+	pstats = kcalloc(NUM_STATS_AREAS, sizeof(struct hv_stats_page *),
+			 GFP_KERNEL_ACCOUNT);
+	if (!pstats)
+		return -ENOMEM;
+
+	pstats[HV_STATS_AREA_SELF] = mshv_partition_stats_map(partition_id,
+							      HV_STATS_AREA_SELF);
+	if (IS_ERR(pstats[HV_STATS_AREA_SELF])) {
+		err = PTR_ERR(pstats[HV_STATS_AREA_SELF]);
+		goto cleanup;
+	}
+
+	/*
+	 * L1VH partition cannot access its partition stats in parent area.
+	 */
+	if (is_l1vh_parent(partition_id)) {
+		pstats[HV_STATS_AREA_PARENT] = pstats[HV_STATS_AREA_SELF];
+	} else {
+		pstats[HV_STATS_AREA_PARENT] = mshv_partition_stats_map(partition_id,
+									HV_STATS_AREA_PARENT);
+		if (IS_ERR(pstats[HV_STATS_AREA_PARENT])) {
+			err = PTR_ERR(pstats[HV_STATS_AREA_PARENT]);
+			goto unmap_self;
+		}
+		if (!pstats[HV_STATS_AREA_PARENT])
+			pstats[HV_STATS_AREA_PARENT] = pstats[HV_STATS_AREA_SELF];
+	}
+
+	dentry = debugfs_create_file("stats", 0400, parent,
+				     pstats, &partition_stats_fops);
+	if (IS_ERR(dentry)) {
+		err = PTR_ERR(dentry);
+		goto unmap_partition_stats;
+	}
+
+	*partition_stats_ptr = dentry;
+	return 0;
+
+unmap_partition_stats:
+	if (pstats[HV_STATS_AREA_PARENT] != pstats[HV_STATS_AREA_SELF])
+		mshv_partition_stats_unmap(partition_id, pstats[HV_STATS_AREA_PARENT],
+					   HV_STATS_AREA_PARENT);
+unmap_self:
+	mshv_partition_stats_unmap(partition_id, pstats[HV_STATS_AREA_SELF],
+				   HV_STATS_AREA_SELF);
+cleanup:
+	kfree(pstats);
+	return err;
+}
+
+static void partition_debugfs_remove(u64 partition_id, struct dentry *dentry)
+{
+	struct hv_stats_page **pstats = dentry->d_inode->i_private;
+
+	debugfs_remove_recursive(dentry->d_parent);
+
+	if (pstats[HV_STATS_AREA_PARENT] != pstats[HV_STATS_AREA_SELF]) {
+		mshv_partition_stats_unmap(partition_id,
+					   pstats[HV_STATS_AREA_PARENT],
+					   HV_STATS_AREA_PARENT);
+	}
+
+	mshv_partition_stats_unmap(partition_id,
+				   pstats[HV_STATS_AREA_SELF],
+				   HV_STATS_AREA_SELF);
+
+	kfree(pstats);
+}
+
+static int partition_debugfs_create(u64 partition_id,
+				    struct dentry **vp_dir_ptr,
+				    struct dentry **partition_stats_ptr,
+				    struct dentry *parent)
+{
+	char part_id_str[U64_BUF_SZ];
+	struct dentry *part_id_dir, *vp_dir;
+	int err;
+
+	if (is_l1vh_parent(partition_id))
+		sprintf(part_id_str, "self");
+	else
+		sprintf(part_id_str, "%llu", partition_id);
+
+	part_id_dir = debugfs_create_dir(part_id_str, parent);
+	if (IS_ERR(part_id_dir))
+		return PTR_ERR(part_id_dir);
+
+	vp_dir = debugfs_create_dir("vp", part_id_dir);
+	if (IS_ERR(vp_dir)) {
+		err = PTR_ERR(vp_dir);
+		goto remove_debugfs_partition_id;
+	}
+
+	err = mshv_debugfs_partition_stats_create(partition_id,
+						  partition_stats_ptr,
+						  part_id_dir);
+	if (err)
+		goto remove_debugfs_partition_id;
+
+	*vp_dir_ptr = vp_dir;
+
+	return 0;
+
+remove_debugfs_partition_id:
+	debugfs_remove_recursive(part_id_dir);
+	return err;
+}
+
+static void parent_vp_debugfs_remove(u32 vp_index,
+				     struct dentry *vp_stats_ptr)
+{
+	struct hv_stats_page **pstats;
+
+	pstats = vp_stats_ptr->d_inode->i_private;
+	vp_debugfs_remove(vp_stats_ptr);
+	mshv_vp_stats_unmap(hv_current_partition_id, vp_index, pstats);
+	kfree(pstats);
+}
+
+static void mshv_debugfs_parent_partition_remove(void)
+{
+	int idx;
+
+	for_each_online_cpu(idx)
+		parent_vp_debugfs_remove(hv_vp_index[idx],
+					 parent_vp_stats[idx]);
+
+	partition_debugfs_remove(hv_current_partition_id,
+				 parent_partition_stats);
+	kfree(parent_vp_stats);
+	parent_vp_stats = NULL;
+	parent_partition_stats = NULL;
+}
+
+static int __init parent_vp_debugfs_create(u32 vp_index,
+					   struct dentry **vp_stats_ptr,
+					   struct dentry *parent)
+{
+	struct hv_stats_page **pstats;
+	int err;
+
+	pstats = kcalloc(NUM_STATS_AREAS, sizeof(*pstats), GFP_KERNEL_ACCOUNT);
+	if (!pstats)
+		return -ENOMEM;
+
+	err = mshv_vp_stats_map(hv_current_partition_id, vp_index, pstats);
+	if (err)
+		goto cleanup;
+
+	err = vp_debugfs_create(hv_current_partition_id, vp_index, pstats,
+				vp_stats_ptr, parent);
+	if (err)
+		goto unmap_vp_stats;
+
+	return 0;
+
+unmap_vp_stats:
+	mshv_vp_stats_unmap(hv_current_partition_id, vp_index, pstats);
+cleanup:
+	kfree(pstats);
+	return err;
+}
+
+static int __init mshv_debugfs_parent_partition_create(void)
+{
+	struct dentry *vp_dir;
+	int err, idx, i;
+
+	mshv_debugfs_partition = debugfs_create_dir("partition",
+						     mshv_debugfs);
+	if (IS_ERR(mshv_debugfs_partition))
+		return PTR_ERR(mshv_debugfs_partition);
+
+	err = partition_debugfs_create(hv_current_partition_id,
+				       &vp_dir,
+				       &parent_partition_stats,
+				       mshv_debugfs_partition);
+	if (err)
+		goto remove_debugfs_partition;
+
+	parent_vp_stats = kcalloc(num_possible_cpus(),
+				  sizeof(*parent_vp_stats),
+				  GFP_KERNEL);
+	if (!parent_vp_stats) {
+		err = -ENOMEM;
+		goto remove_debugfs_partition;
+	}
+
+	for_each_online_cpu(idx) {
+		err = parent_vp_debugfs_create(hv_vp_index[idx],
+					       &parent_vp_stats[idx],
+					       vp_dir);
+		if (err)
+			goto remove_debugfs_partition_vp;
+	}
+
+	return 0;
+
+remove_debugfs_partition_vp:
+	for_each_online_cpu(i) {
+		if (i >= idx)
+			break;
+		parent_vp_debugfs_remove(hv_vp_index[i], parent_vp_stats[i]);
+	}
+	partition_debugfs_remove(hv_current_partition_id,
+				 parent_partition_stats);
+
+	kfree(parent_vp_stats);
+	parent_vp_stats = NULL;
+	parent_partition_stats = NULL;
+
+remove_debugfs_partition:
+	debugfs_remove_recursive(mshv_debugfs_partition);
+	mshv_debugfs_partition = NULL;
+	return err;
+}
+
+static int hv_stats_show(struct seq_file *m, void *v)
+{
+	const struct hv_stats_page *stats = m->private;
+	struct hv_counter_entry *entry = hv_hypervisor_counters;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(hv_hypervisor_counters); i++, entry++)
+		seq_printf(m, "%-25s: %llu\n", entry->name,
+			   stats->data[entry->idx]);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(hv_stats);
+
+static void mshv_hv_stats_unmap(void)
+{
+	union hv_stats_object_identity identity = {
+		.hv.stats_area_type = HV_STATS_AREA_SELF,
+	};
+	int err;
+
+	err = hv_unmap_stats_page(HV_STATS_OBJECT_HYPERVISOR, NULL, &identity);
+	if (err)
+		pr_err("%s: failed to unmap hypervisor stats: %d\n",
+		       __func__, err);
+}
+
+static void * __init mshv_hv_stats_map(void)
+{
+	union hv_stats_object_identity identity = {
+		.hv.stats_area_type = HV_STATS_AREA_SELF,
+	};
+	struct hv_stats_page *stats;
+	int err;
+
+	err = hv_map_stats_page(HV_STATS_OBJECT_HYPERVISOR, &identity, &stats);
+	if (err) {
+		pr_err("%s: failed to map hypervisor stats: %d\n",
+		       __func__, err);
+		return ERR_PTR(err);
+	}
+	return stats;
+}
+
+static int __init mshv_debugfs_hv_stats_create(struct dentry *parent)
+{
+	struct dentry *dentry;
+	struct hv_stats_page *stats;
+	int err;
+
+	stats = mshv_hv_stats_map();
+	if (IS_ERR(stats))
+		return PTR_ERR(stats);
+
+	dentry = debugfs_create_file("stats", 0400, parent,
+				     stats, &hv_stats_fops);
+	if (IS_ERR(dentry)) {
+		err = PTR_ERR(dentry);
+		pr_err("%s: failed to create hypervisor stats dentry: %d\n",
+		       __func__, err);
+		goto unmap_hv_stats;
+	}
+
+	mshv_lps_count = num_present_cpus();
+
+	return 0;
+
+unmap_hv_stats:
+	mshv_hv_stats_unmap();
+	return err;
+}
+
+int mshv_debugfs_vp_create(struct mshv_vp *vp)
+{
+	struct mshv_partition *p = vp->vp_partition;
+
+	if (!mshv_debugfs)
+		return 0;
+
+	return vp_debugfs_create(p->pt_id, vp->vp_index,
+				 vp->vp_stats_pages,
+				 &vp->vp_stats_dentry,
+				 p->pt_vp_dentry);
+}
+
+void mshv_debugfs_vp_remove(struct mshv_vp *vp)
+{
+	if (!mshv_debugfs)
+		return;
+
+	vp_debugfs_remove(vp->vp_stats_dentry);
+}
+
+int mshv_debugfs_partition_create(struct mshv_partition *partition)
+{
+	if (!mshv_debugfs)
+		return 0;
+
+	return partition_debugfs_create(partition->pt_id,
+					&partition->pt_vp_dentry,
+					&partition->pt_stats_dentry,
+					mshv_debugfs_partition);
+}
+
+void mshv_debugfs_partition_remove(struct mshv_partition *partition)
+{
+	if (!mshv_debugfs)
+		return;
+
+	partition_debugfs_remove(partition->pt_id,
+				 partition->pt_stats_dentry);
+}
+
+int __init mshv_debugfs_init(void)
+{
+	int err;
+
+	mshv_debugfs = debugfs_create_dir("mshv", NULL);
+	if (IS_ERR(mshv_debugfs)) {
+		pr_err("%s: failed to create debugfs directory\n", __func__);
+		return PTR_ERR(mshv_debugfs);
+	}
+
+	if (hv_root_partition()) {
+		err = mshv_debugfs_hv_stats_create(mshv_debugfs);
+		if (err)
+			goto remove_mshv_dir;
+
+		err = mshv_debugfs_lp_create(mshv_debugfs);
+		if (err)
+			goto unmap_hv_stats;
+	}
+
+	err = mshv_debugfs_parent_partition_create();
+	if (err)
+		goto unmap_lp_stats;
+
+	return 0;
+
+unmap_lp_stats:
+	if (hv_root_partition()) {
+		mshv_debugfs_lp_remove();
+		mshv_debugfs_lp = NULL;
+	}
+unmap_hv_stats:
+	if (hv_root_partition())
+		mshv_hv_stats_unmap();
+remove_mshv_dir:
+	debugfs_remove_recursive(mshv_debugfs);
+	mshv_debugfs = NULL;
+	return err;
+}
+
+void mshv_debugfs_exit(void)
+{
+	mshv_debugfs_parent_partition_remove();
+
+	if (hv_root_partition()) {
+		mshv_debugfs_lp_remove();
+		mshv_debugfs_lp = NULL;
+		mshv_hv_stats_unmap();
+	}
+
+	debugfs_remove_recursive(mshv_debugfs);
+	mshv_debugfs = NULL;
+	mshv_debugfs_partition = NULL;
+}
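[Editor's note: the hunk above completes the new mshv_debugfs.c. The table-driven printing in hv_stats_show() can be exercised standalone in userspace; the sketch below uses made-up counter names, a stub stats page, and snprintf in place of seq_printf — none of it is the real hypervisor layout.]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Userspace sketch of the table-driven printing done by hv_stats_show():
 * a static (name, slot index) table selects which entries of a raw
 * counter page get printed. All names and values here are made up.
 */
struct counter_entry {
	const char *name;
	int idx;
};

static const struct counter_entry demo_counters[] = {
	{ "HvLogicalProcessors", 1 },
	{ "HvPartitions",        2 },
	{ "HvTotalPages",        3 },
};

/* Stand-in for the hypervisor-provided stats page; slot 0 is unused. */
static const unsigned long long demo_page[16] = { 0, 8, 2, 4096 };

/* Format one "name: value" line per table entry; returns bytes written. */
static int format_counters(char *buf, size_t len)
{
	size_t off = 0;
	size_t i;

	for (i = 0; i < sizeof(demo_counters) / sizeof(demo_counters[0]); i++) {
		int n = snprintf(buf + off, len - off, "%-25s: %llu\n",
				 demo_counters[i].name,
				 demo_page[demo_counters[i].idx]);

		if (n < 0 || (size_t)n >= len - off)
			return -1;
		off += (size_t)n;
	}
	return (int)off;
}
```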
diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
index e4912b0618fa..7332d9af8373 100644
--- a/drivers/hv/mshv_root.h
+++ b/drivers/hv/mshv_root.h
@@ -52,6 +52,9 @@ struct mshv_vp {
 		unsigned int kicked_by_hv;
 		wait_queue_head_t vp_suspend_queue;
 	} run;
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+	struct dentry *vp_stats_dentry;
+#endif
 };
 
 #define vp_fmt(fmt) "p%lluvp%u: " fmt
@@ -136,6 +139,10 @@ struct mshv_partition {
 	u64 isolation_type;
 	bool import_completed;
 	bool pt_initialized;
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+	struct dentry *pt_stats_dentry;
+	struct dentry *pt_vp_dentry;
+#endif
 };
 
 #define pt_fmt(fmt) "p%llu: " fmt
@@ -327,6 +334,33 @@ int hv_call_modify_spa_host_access(u64 partition_id, struct page **pages,
 int hv_call_get_partition_property_ex(u64 partition_id, u64 property_code, u64 arg,
 				      void *property_value, size_t property_value_sz);
 
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+int __init mshv_debugfs_init(void);
+void mshv_debugfs_exit(void);
+
+int mshv_debugfs_partition_create(struct mshv_partition *partition);
+void mshv_debugfs_partition_remove(struct mshv_partition *partition);
+int mshv_debugfs_vp_create(struct mshv_vp *vp);
+void mshv_debugfs_vp_remove(struct mshv_vp *vp);
+#else
+static inline int __init mshv_debugfs_init(void)
+{
+	return 0;
+}
+static inline void mshv_debugfs_exit(void) { }
+
+static inline int mshv_debugfs_partition_create(struct mshv_partition *partition)
+{
+	return 0;
+}
+static inline void mshv_debugfs_partition_remove(struct mshv_partition *partition) { }
+static inline int mshv_debugfs_vp_create(struct mshv_vp *vp)
+{
+	return 0;
+}
+static inline void mshv_debugfs_vp_remove(struct mshv_vp *vp) { }
+#endif
+
 extern struct mshv_root mshv_root;
 extern enum hv_scheduler_type hv_scheduler_type;
 extern u8 * __percpu *hv_synic_eventring_tail;
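[Editor's note: the mshv_root.h hunk above follows the usual kernel convention of real declarations under CONFIG_DEBUG_FS and inline no-op stubs otherwise, so call sites need no #ifdef. A plain-C mock of that pattern, with DEMO_DEBUG_FS standing in for the Kconfig symbol:]

```c
#include <assert.h>

/*
 * Mock of the header's stub pattern: real function when the feature is
 * configured in, inline no-op stub otherwise. DEMO_DEBUG_FS is a
 * stand-in for CONFIG_DEBUG_FS, not a real Kconfig symbol.
 */
#define DEMO_DEBUG_FS 1

#if DEMO_DEBUG_FS
static int demo_debugfs_init(void)
{
	return 42;	/* pretend real initialization happened */
}
#else
static inline int demo_debugfs_init(void)
{
	return 0;	/* stub: always succeeds, does nothing */
}
#endif
```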
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 12825666e21b..f4654fb8cd23 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -1096,6 +1096,10 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 
 	memcpy(vp->vp_stats_pages, stats_pages, sizeof(stats_pages));
 
+	ret = mshv_debugfs_vp_create(vp);
+	if (ret)
+		goto put_partition;
+
 	/*
 	 * Keep anon_inode_getfd last: it installs fd in the file struct and
 	 * thus makes the state accessible in user space.
@@ -1103,7 +1107,7 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 	ret = anon_inode_getfd("mshv_vp", &mshv_vp_fops, vp,
 			       O_RDWR | O_CLOEXEC);
 	if (ret < 0)
-		goto put_partition;
+		goto remove_debugfs_vp;
 
 	/* already exclusive with the partition mutex for all ioctls */
 	partition->pt_vp_count++;
@@ -1111,6 +1115,8 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
 
 	return ret;
 
+remove_debugfs_vp:
+	mshv_debugfs_vp_remove(vp);
 put_partition:
 	mshv_partition_put(partition);
 free_vp:
@@ -1553,10 +1559,16 @@ mshv_partition_ioctl_initialize(struct mshv_partition *partition)
 	if (ret)
 		goto withdraw_mem;
 
+	ret = mshv_debugfs_partition_create(partition);
+	if (ret)
+		goto finalize_partition;
+
 	partition->pt_initialized = true;
 
 	return 0;
 
+finalize_partition:
+	hv_call_finalize_partition(partition->pt_id);
 withdraw_mem:
 	hv_call_withdraw_memory(U64_MAX, NUMA_NO_NODE, partition->pt_id);
 
@@ -1736,6 +1748,7 @@ static void destroy_partition(struct mshv_partition *partition)
 			if (!vp)
 				continue;
 
+			mshv_debugfs_vp_remove(vp);
 			mshv_vp_stats_unmap(partition->pt_id, vp->vp_index,
 					    vp->vp_stats_pages);
 
@@ -1769,6 +1782,8 @@ static void destroy_partition(struct mshv_partition *partition)
 			partition->pt_vp_array[i] = NULL;
 		}
 
+		mshv_debugfs_partition_remove(partition);
+
 		/* Deallocates and unmaps everything including vcpus, GPA mappings etc */
 		hv_call_finalize_partition(partition->pt_id);
 
@@ -2314,10 +2329,14 @@ static int __init mshv_parent_partition_init(void)
 
 	mshv_init_vmm_caps(dev);
 
-	ret = mshv_irqfd_wq_init();
+	ret = mshv_debugfs_init();
 	if (ret)
 		goto exit_partition;
 
+	ret = mshv_irqfd_wq_init();
+	if (ret)
+		goto exit_debugfs;
+
 	spin_lock_init(&mshv_root.pt_ht_lock);
 	hash_init(mshv_root.pt_htable);
 
@@ -2325,6 +2344,8 @@ static int __init mshv_parent_partition_init(void)
 
 	return 0;
 
+exit_debugfs:
+	mshv_debugfs_exit();
 exit_partition:
 	if (hv_root_partition())
 		mshv_root_partition_exit();
@@ -2341,6 +2362,7 @@ static void __exit mshv_parent_partition_exit(void)
 {
 	hv_setup_mshv_handler(NULL);
 	mshv_port_table_fini();
+	mshv_debugfs_exit();
 	misc_deregister(&mshv_dev);
 	mshv_irqfd_wq_cleanup();
 	if (hv_root_partition())
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-21 21:46 ` [PATCH v4 6/7] mshv: Add data for printing stats page counters Nuno Das Neves
@ 2026-01-22  1:18   ` Stanislav Kinsburskii
  2026-01-22 18:21     ` Nuno Das Neves
  2026-01-23 17:09   ` Michael Kelley
  1 sibling, 1 reply; 22+ messages in thread
From: Stanislav Kinsburskii @ 2026-01-22  1:18 UTC (permalink / raw)
  To: Nuno Das Neves
  Cc: linux-hyperv, linux-kernel, mhklinux, kys, haiyangz, wei.liu,
	decui, longli, prapal, mrathor, paekkaladevi

On Wed, Jan 21, 2026 at 01:46:22PM -0800, Nuno Das Neves wrote:
> Introduce hv_counters.c, containing static data corresponding to
> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> members as an array instead makes more sense, since it will be
> iterated over to print counter information to debugfs.
> 
> Include hypervisor, logical processor, partition, and virtual
> processor counters.
> 
> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> ---
>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 488 insertions(+)
>  create mode 100644 drivers/hv/hv_counters.c
> 
> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
> new file mode 100644
> index 000000000000..a8e07e72cc29
> --- /dev/null
> +++ b/drivers/hv/hv_counters.c
> @@ -0,0 +1,488 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2026, Microsoft Corporation.
> + *
> + * Data for printing stats page counters via debugfs.
> + *
> + * Authors: Microsoft Linux virtualization team
> + */
> +
> +struct hv_counter_entry {
> +	char *name;
> +	int idx;
> +};

This structure looks redundant to me mostly because of the "idx".
It looks like what you need here is an array of pointers to strings, like
below:

static const char *hv_hypervisor_counters[] = {
        NULL, /* 0 is unused */
	"HvLogicalProcessors",
	"HvPartitions",
	"HvTotalPages",
	"HvVirtualProcessors",
	"HvMonitoredNotifications",
	"HvModernStandbyEntries",
	"HvPlatformIdleTransitions",
	"HvHypervisorStartupCost",
	NULL, /* 9 is unused */
	"HvIOSpacePages",
	...
};

which can be iterated like this:

for (idx = 0; idx < ARRAY_SIZE(hv_hypervisor_counters); idx++) {
    const char *name = hv_hypervisor_counters[idx];
    if (!name)
	continue;
    /* print */
    ...
}

What do you think?
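
[Editor's note: as a standalone illustration of the suggestion above — demo names only, not the real counter set — the NULL-hole array and its skip-on-hole iteration compile and run in userspace:]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Demo of the sparse string-array layout: the counter number indexes
 * the array directly, and NULL marks unused slots that the print loop
 * skips. Names are a tiny made-up subset.
 */
static const char *demo_names[] = {
	NULL,			/* 0 is unused */
	"HvLogicalProcessors",
	"HvPartitions",
	NULL,			/* 3 is unused in this demo */
	"HvVirtualProcessors",
};

/* Count the printable entries, skipping NULL holes. */
static int count_printable(void)
{
	int n = 0;
	size_t idx;

	for (idx = 0; idx < sizeof(demo_names) / sizeof(demo_names[0]); idx++) {
		if (!demo_names[idx])
			continue;	/* hole: no counter at this index */
		n++;
	}
	return n;
}
```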

Thanks,
Stanislav

> +
> +/* HV_HYPERVISOR_COUNTER */
> +static struct hv_counter_entry hv_hypervisor_counters[] = {
> +	{ "HvLogicalProcessors", 1 },
> +	{ "HvPartitions", 2 },
> +	{ "HvTotalPages", 3 },
> +	{ "HvVirtualProcessors", 4 },
> +	{ "HvMonitoredNotifications", 5 },
> +	{ "HvModernStandbyEntries", 6 },
> +	{ "HvPlatformIdleTransitions", 7 },
> +	{ "HvHypervisorStartupCost", 8 },
> +
> +	{ "HvIOSpacePages", 10 },
> +	{ "HvNonEssentialPagesForDump", 11 },
> +	{ "HvSubsumedPages", 12 },
> +};
> +
> +/* HV_CPU_COUNTER */
> +static struct hv_counter_entry hv_lp_counters[] = {
> +	{ "LpGlobalTime", 1 },
> +	{ "LpTotalRunTime", 2 },
> +	{ "LpHypervisorRunTime", 3 },
> +	{ "LpHardwareInterrupts", 4 },
> +	{ "LpContextSwitches", 5 },
> +	{ "LpInterProcessorInterrupts", 6 },
> +	{ "LpSchedulerInterrupts", 7 },
> +	{ "LpTimerInterrupts", 8 },
> +	{ "LpInterProcessorInterruptsSent", 9 },
> +	{ "LpProcessorHalts", 10 },
> +	{ "LpMonitorTransitionCost", 11 },
> +	{ "LpContextSwitchTime", 12 },
> +	{ "LpC1TransitionsCount", 13 },
> +	{ "LpC1RunTime", 14 },
> +	{ "LpC2TransitionsCount", 15 },
> +	{ "LpC2RunTime", 16 },
> +	{ "LpC3TransitionsCount", 17 },
> +	{ "LpC3RunTime", 18 },
> +	{ "LpRootVpIndex", 19 },
> +	{ "LpIdleSequenceNumber", 20 },
> +	{ "LpGlobalTscCount", 21 },
> +	{ "LpActiveTscCount", 22 },
> +	{ "LpIdleAccumulation", 23 },
> +	{ "LpReferenceCycleCount0", 24 },
> +	{ "LpActualCycleCount0", 25 },
> +	{ "LpReferenceCycleCount1", 26 },
> +	{ "LpActualCycleCount1", 27 },
> +	{ "LpProximityDomainId", 28 },
> +	{ "LpPostedInterruptNotifications", 29 },
> +	{ "LpBranchPredictorFlushes", 30 },
> +#if IS_ENABLED(CONFIG_X86_64)
> +	{ "LpL1DataCacheFlushes", 31 },
> +	{ "LpImmediateL1DataCacheFlushes", 32 },
> +	{ "LpMbFlushes", 33 },
> +	{ "LpCounterRefreshSequenceNumber", 34 },
> +	{ "LpCounterRefreshReferenceTime", 35 },
> +	{ "LpIdleAccumulationSnapshot", 36 },
> +	{ "LpActiveTscCountSnapshot", 37 },
> +	{ "LpHwpRequestContextSwitches", 38 },
> +	{ "LpPlaceholder1", 39 },
> +	{ "LpPlaceholder2", 40 },
> +	{ "LpPlaceholder3", 41 },
> +	{ "LpPlaceholder4", 42 },
> +	{ "LpPlaceholder5", 43 },
> +	{ "LpPlaceholder6", 44 },
> +	{ "LpPlaceholder7", 45 },
> +	{ "LpPlaceholder8", 46 },
> +	{ "LpPlaceholder9", 47 },
> +	{ "LpSchLocalRunListSize", 48 },
> +	{ "LpReserveGroupId", 49 },
> +	{ "LpRunningPriority", 50 },
> +	{ "LpPerfmonInterruptCount", 51 },
> +#elif IS_ENABLED(CONFIG_ARM64)
> +	{ "LpCounterRefreshSequenceNumber", 31 },
> +	{ "LpCounterRefreshReferenceTime", 32 },
> +	{ "LpIdleAccumulationSnapshot", 33 },
> +	{ "LpActiveTscCountSnapshot", 34 },
> +	{ "LpHwpRequestContextSwitches", 35 },
> +	{ "LpPlaceholder2", 36 },
> +	{ "LpPlaceholder3", 37 },
> +	{ "LpPlaceholder4", 38 },
> +	{ "LpPlaceholder5", 39 },
> +	{ "LpPlaceholder6", 40 },
> +	{ "LpPlaceholder7", 41 },
> +	{ "LpPlaceholder8", 42 },
> +	{ "LpPlaceholder9", 43 },
> +	{ "LpSchLocalRunListSize", 44 },
> +	{ "LpReserveGroupId", 45 },
> +	{ "LpRunningPriority", 46 },
> +#endif
> +};
> +
> +/* HV_PROCESS_COUNTER */
> +static struct hv_counter_entry hv_partition_counters[] = {
> +	{ "PtVirtualProcessors", 1 },
> +
> +	{ "PtTlbSize", 3 },
> +	{ "PtAddressSpaces", 4 },
> +	{ "PtDepositedPages", 5 },
> +	{ "PtGpaPages", 6 },
> +	{ "PtGpaSpaceModifications", 7 },
> +	{ "PtVirtualTlbFlushEntires", 8 },
> +	{ "PtRecommendedTlbSize", 9 },
> +	{ "PtGpaPages4K", 10 },
> +	{ "PtGpaPages2M", 11 },
> +	{ "PtGpaPages1G", 12 },
> +	{ "PtGpaPages512G", 13 },
> +	{ "PtDevicePages4K", 14 },
> +	{ "PtDevicePages2M", 15 },
> +	{ "PtDevicePages1G", 16 },
> +	{ "PtDevicePages512G", 17 },
> +	{ "PtAttachedDevices", 18 },
> +	{ "PtDeviceInterruptMappings", 19 },
> +	{ "PtIoTlbFlushes", 20 },
> +	{ "PtIoTlbFlushCost", 21 },
> +	{ "PtDeviceInterruptErrors", 22 },
> +	{ "PtDeviceDmaErrors", 23 },
> +	{ "PtDeviceInterruptThrottleEvents", 24 },
> +	{ "PtSkippedTimerTicks", 25 },
> +	{ "PtPartitionId", 26 },
> +#if IS_ENABLED(CONFIG_X86_64)
> +	{ "PtNestedTlbSize", 27 },
> +	{ "PtRecommendedNestedTlbSize", 28 },
> +	{ "PtNestedTlbFreeListSize", 29 },
> +	{ "PtNestedTlbTrimmedPages", 30 },
> +	{ "PtPagesShattered", 31 },
> +	{ "PtPagesRecombined", 32 },
> +	{ "PtHwpRequestValue", 33 },
> +	{ "PtAutoSuspendEnableTime", 34 },
> +	{ "PtAutoSuspendTriggerTime", 35 },
> +	{ "PtAutoSuspendDisableTime", 36 },
> +	{ "PtPlaceholder1", 37 },
> +	{ "PtPlaceholder2", 38 },
> +	{ "PtPlaceholder3", 39 },
> +	{ "PtPlaceholder4", 40 },
> +	{ "PtPlaceholder5", 41 },
> +	{ "PtPlaceholder6", 42 },
> +	{ "PtPlaceholder7", 43 },
> +	{ "PtPlaceholder8", 44 },
> +	{ "PtHypervisorStateTransferGeneration", 45 },
> +	{ "PtNumberofActiveChildPartitions", 46 },
> +#elif IS_ENABLED(CONFIG_ARM64)
> +	{ "PtHwpRequestValue", 27 },
> +	{ "PtAutoSuspendEnableTime", 28 },
> +	{ "PtAutoSuspendTriggerTime", 29 },
> +	{ "PtAutoSuspendDisableTime", 30 },
> +	{ "PtPlaceholder1", 31 },
> +	{ "PtPlaceholder2", 32 },
> +	{ "PtPlaceholder3", 33 },
> +	{ "PtPlaceholder4", 34 },
> +	{ "PtPlaceholder5", 35 },
> +	{ "PtPlaceholder6", 36 },
> +	{ "PtPlaceholder7", 37 },
> +	{ "PtPlaceholder8", 38 },
> +	{ "PtHypervisorStateTransferGeneration", 39 },
> +	{ "PtNumberofActiveChildPartitions", 40 },
> +#endif
> +};
> +
> +/* HV_THREAD_COUNTER */
> +static struct hv_counter_entry hv_vp_counters[] = {
> +	{ "VpTotalRunTime", 1 },
> +	{ "VpHypervisorRunTime", 2 },
> +	{ "VpRemoteNodeRunTime", 3 },
> +	{ "VpNormalizedRunTime", 4 },
> +	{ "VpIdealCpu", 5 },
> +
> +	{ "VpHypercallsCount", 7 },
> +	{ "VpHypercallsTime", 8 },
> +#if IS_ENABLED(CONFIG_X86_64)
> +	{ "VpPageInvalidationsCount", 9 },
> +	{ "VpPageInvalidationsTime", 10 },
> +	{ "VpControlRegisterAccessesCount", 11 },
> +	{ "VpControlRegisterAccessesTime", 12 },
> +	{ "VpIoInstructionsCount", 13 },
> +	{ "VpIoInstructionsTime", 14 },
> +	{ "VpHltInstructionsCount", 15 },
> +	{ "VpHltInstructionsTime", 16 },
> +	{ "VpMwaitInstructionsCount", 17 },
> +	{ "VpMwaitInstructionsTime", 18 },
> +	{ "VpCpuidInstructionsCount", 19 },
> +	{ "VpCpuidInstructionsTime", 20 },
> +	{ "VpMsrAccessesCount", 21 },
> +	{ "VpMsrAccessesTime", 22 },
> +	{ "VpOtherInterceptsCount", 23 },
> +	{ "VpOtherInterceptsTime", 24 },
> +	{ "VpExternalInterruptsCount", 25 },
> +	{ "VpExternalInterruptsTime", 26 },
> +	{ "VpPendingInterruptsCount", 27 },
> +	{ "VpPendingInterruptsTime", 28 },
> +	{ "VpEmulatedInstructionsCount", 29 },
> +	{ "VpEmulatedInstructionsTime", 30 },
> +	{ "VpDebugRegisterAccessesCount", 31 },
> +	{ "VpDebugRegisterAccessesTime", 32 },
> +	{ "VpPageFaultInterceptsCount", 33 },
> +	{ "VpPageFaultInterceptsTime", 34 },
> +	{ "VpGuestPageTableMaps", 35 },
> +	{ "VpLargePageTlbFills", 36 },
> +	{ "VpSmallPageTlbFills", 37 },
> +	{ "VpReflectedGuestPageFaults", 38 },
> +	{ "VpApicMmioAccesses", 39 },
> +	{ "VpIoInterceptMessages", 40 },
> +	{ "VpMemoryInterceptMessages", 41 },
> +	{ "VpApicEoiAccesses", 42 },
> +	{ "VpOtherMessages", 43 },
> +	{ "VpPageTableAllocations", 44 },
> +	{ "VpLogicalProcessorMigrations", 45 },
> +	{ "VpAddressSpaceEvictions", 46 },
> +	{ "VpAddressSpaceSwitches", 47 },
> +	{ "VpAddressDomainFlushes", 48 },
> +	{ "VpAddressSpaceFlushes", 49 },
> +	{ "VpGlobalGvaRangeFlushes", 50 },
> +	{ "VpLocalGvaRangeFlushes", 51 },
> +	{ "VpPageTableEvictions", 52 },
> +	{ "VpPageTableReclamations", 53 },
> +	{ "VpPageTableResets", 54 },
> +	{ "VpPageTableValidations", 55 },
> +	{ "VpApicTprAccesses", 56 },
> +	{ "VpPageTableWriteIntercepts", 57 },
> +	{ "VpSyntheticInterrupts", 58 },
> +	{ "VpVirtualInterrupts", 59 },
> +	{ "VpApicIpisSent", 60 },
> +	{ "VpApicSelfIpisSent", 61 },
> +	{ "VpGpaSpaceHypercalls", 62 },
> +	{ "VpLogicalProcessorHypercalls", 63 },
> +	{ "VpLongSpinWaitHypercalls", 64 },
> +	{ "VpOtherHypercalls", 65 },
> +	{ "VpSyntheticInterruptHypercalls", 66 },
> +	{ "VpVirtualInterruptHypercalls", 67 },
> +	{ "VpVirtualMmuHypercalls", 68 },
> +	{ "VpVirtualProcessorHypercalls", 69 },
> +	{ "VpHardwareInterrupts", 70 },
> +	{ "VpNestedPageFaultInterceptsCount", 71 },
> +	{ "VpNestedPageFaultInterceptsTime", 72 },
> +	{ "VpPageScans", 73 },
> +	{ "VpLogicalProcessorDispatches", 74 },
> +	{ "VpWaitingForCpuTime", 75 },
> +	{ "VpExtendedHypercalls", 76 },
> +	{ "VpExtendedHypercallInterceptMessages", 77 },
> +	{ "VpMbecNestedPageTableSwitches", 78 },
> +	{ "VpOtherReflectedGuestExceptions", 79 },
> +	{ "VpGlobalIoTlbFlushes", 80 },
> +	{ "VpGlobalIoTlbFlushCost", 81 },
> +	{ "VpLocalIoTlbFlushes", 82 },
> +	{ "VpLocalIoTlbFlushCost", 83 },
> +	{ "VpHypercallsForwardedCount", 84 },
> +	{ "VpHypercallsForwardingTime", 85 },
> +	{ "VpPageInvalidationsForwardedCount", 86 },
> +	{ "VpPageInvalidationsForwardingTime", 87 },
> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
> +	{ "VpIoInstructionsForwardedCount", 90 },
> +	{ "VpIoInstructionsForwardingTime", 91 },
> +	{ "VpHltInstructionsForwardedCount", 92 },
> +	{ "VpHltInstructionsForwardingTime", 93 },
> +	{ "VpMwaitInstructionsForwardedCount", 94 },
> +	{ "VpMwaitInstructionsForwardingTime", 95 },
> +	{ "VpCpuidInstructionsForwardedCount", 96 },
> +	{ "VpCpuidInstructionsForwardingTime", 97 },
> +	{ "VpMsrAccessesForwardedCount", 98 },
> +	{ "VpMsrAccessesForwardingTime", 99 },
> +	{ "VpOtherInterceptsForwardedCount", 100 },
> +	{ "VpOtherInterceptsForwardingTime", 101 },
> +	{ "VpExternalInterruptsForwardedCount", 102 },
> +	{ "VpExternalInterruptsForwardingTime", 103 },
> +	{ "VpPendingInterruptsForwardedCount", 104 },
> +	{ "VpPendingInterruptsForwardingTime", 105 },
> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
> +	{ "VpVmclearEmulationCount", 112 },
> +	{ "VpVmclearEmulationTime", 113 },
> +	{ "VpVmptrldEmulationCount", 114 },
> +	{ "VpVmptrldEmulationTime", 115 },
> +	{ "VpVmptrstEmulationCount", 116 },
> +	{ "VpVmptrstEmulationTime", 117 },
> +	{ "VpVmreadEmulationCount", 118 },
> +	{ "VpVmreadEmulationTime", 119 },
> +	{ "VpVmwriteEmulationCount", 120 },
> +	{ "VpVmwriteEmulationTime", 121 },
> +	{ "VpVmxoffEmulationCount", 122 },
> +	{ "VpVmxoffEmulationTime", 123 },
> +	{ "VpVmxonEmulationCount", 124 },
> +	{ "VpVmxonEmulationTime", 125 },
> +	{ "VpNestedVMEntriesCount", 126 },
> +	{ "VpNestedVMEntriesTime", 127 },
> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
> +	{ "VpInvEptAllContextEmulationCount", 132 },
> +	{ "VpInvEptAllContextEmulationTime", 133 },
> +	{ "VpInvEptSingleContextEmulationCount", 134 },
> +	{ "VpInvEptSingleContextEmulationTime", 135 },
> +	{ "VpInvVpidAllContextEmulationCount", 136 },
> +	{ "VpInvVpidAllContextEmulationTime", 137 },
> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
> +	{ "VpNestedTlbPageTableReclamations", 142 },
> +	{ "VpNestedTlbPageTableEvictions", 143 },
> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
> +	{ "VpPostedInterruptNotifications", 146 },
> +	{ "VpPostedInterruptScans", 147 },
> +	{ "VpTotalCoreRunTime", 148 },
> +	{ "VpMaximumRunTime", 149 },
> +	{ "VpHwpRequestContextSwitches", 150 },
> +	{ "VpWaitingForCpuTimeBucket0", 151 },
> +	{ "VpWaitingForCpuTimeBucket1", 152 },
> +	{ "VpWaitingForCpuTimeBucket2", 153 },
> +	{ "VpWaitingForCpuTimeBucket3", 154 },
> +	{ "VpWaitingForCpuTimeBucket4", 155 },
> +	{ "VpWaitingForCpuTimeBucket5", 156 },
> +	{ "VpWaitingForCpuTimeBucket6", 157 },
> +	{ "VpVmloadEmulationCount", 158 },
> +	{ "VpVmloadEmulationTime", 159 },
> +	{ "VpVmsaveEmulationCount", 160 },
> +	{ "VpVmsaveEmulationTime", 161 },
> +	{ "VpGifInstructionEmulationCount", 162 },
> +	{ "VpGifInstructionEmulationTime", 163 },
> +	{ "VpEmulatedErrataSvmInstructions", 164 },
> +	{ "VpPlaceholder1", 165 },
> +	{ "VpPlaceholder2", 166 },
> +	{ "VpPlaceholder3", 167 },
> +	{ "VpPlaceholder4", 168 },
> +	{ "VpPlaceholder5", 169 },
> +	{ "VpPlaceholder6", 170 },
> +	{ "VpPlaceholder7", 171 },
> +	{ "VpPlaceholder8", 172 },
> +	{ "VpContentionTime", 173 },
> +	{ "VpWakeUpTime", 174 },
> +	{ "VpSchedulingPriority", 175 },
> +	{ "VpRdpmcInstructionsCount", 176 },
> +	{ "VpRdpmcInstructionsTime", 177 },
> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
> +	{ "VpPerfmonInterruptCount", 181 },
> +	{ "VpVtl1DispatchCount", 182 },
> +	{ "VpVtl2DispatchCount", 183 },
> +	{ "VpVtl2DispatchBucket0", 184 },
> +	{ "VpVtl2DispatchBucket1", 185 },
> +	{ "VpVtl2DispatchBucket2", 186 },
> +	{ "VpVtl2DispatchBucket3", 187 },
> +	{ "VpVtl2DispatchBucket4", 188 },
> +	{ "VpVtl2DispatchBucket5", 189 },
> +	{ "VpVtl2DispatchBucket6", 190 },
> +	{ "VpVtl1RunTime", 191 },
> +	{ "VpVtl2RunTime", 192 },
> +	{ "VpIommuHypercalls", 193 },
> +	{ "VpCpuGroupHypercalls", 194 },
> +	{ "VpVsmHypercalls", 195 },
> +	{ "VpEventLogHypercalls", 196 },
> +	{ "VpDeviceDomainHypercalls", 197 },
> +	{ "VpDepositHypercalls", 198 },
> +	{ "VpSvmHypercalls", 199 },
> +	{ "VpBusLockAcquisitionCount", 200 },
> +	{ "VpLoadAvg", 201 },
> +	{ "VpRootDispatchThreadBlocked", 202 },
> +	{ "VpIdleCpuTime", 203 },
> +	{ "VpWaitingForCpuTimeBucket7", 204 },
> +	{ "VpWaitingForCpuTimeBucket8", 205 },
> +	{ "VpWaitingForCpuTimeBucket9", 206 },
> +	{ "VpWaitingForCpuTimeBucket10", 207 },
> +	{ "VpWaitingForCpuTimeBucket11", 208 },
> +	{ "VpWaitingForCpuTimeBucket12", 209 },
> +	{ "VpHierarchicalSuspendTime", 210 },
> +	{ "VpExpressSchedulingAttempts", 211 },
> +	{ "VpExpressSchedulingCount", 212 },
> +	{ "VpBusLockAcquisitionTime", 213 },
> +#elif IS_ENABLED(CONFIG_ARM64)
> +	{ "VpSysRegAccessesCount", 9 },
> +	{ "VpSysRegAccessesTime", 10 },
> +	{ "VpSmcInstructionsCount", 11 },
> +	{ "VpSmcInstructionsTime", 12 },
> +	{ "VpOtherInterceptsCount", 13 },
> +	{ "VpOtherInterceptsTime", 14 },
> +	{ "VpExternalInterruptsCount", 15 },
> +	{ "VpExternalInterruptsTime", 16 },
> +	{ "VpPendingInterruptsCount", 17 },
> +	{ "VpPendingInterruptsTime", 18 },
> +	{ "VpGuestPageTableMaps", 19 },
> +	{ "VpLargePageTlbFills", 20 },
> +	{ "VpSmallPageTlbFills", 21 },
> +	{ "VpReflectedGuestPageFaults", 22 },
> +	{ "VpMemoryInterceptMessages", 23 },
> +	{ "VpOtherMessages", 24 },
> +	{ "VpLogicalProcessorMigrations", 25 },
> +	{ "VpAddressDomainFlushes", 26 },
> +	{ "VpAddressSpaceFlushes", 27 },
> +	{ "VpSyntheticInterrupts", 28 },
> +	{ "VpVirtualInterrupts", 29 },
> +	{ "VpApicSelfIpisSent", 30 },
> +	{ "VpGpaSpaceHypercalls", 31 },
> +	{ "VpLogicalProcessorHypercalls", 32 },
> +	{ "VpLongSpinWaitHypercalls", 33 },
> +	{ "VpOtherHypercalls", 34 },
> +	{ "VpSyntheticInterruptHypercalls", 35 },
> +	{ "VpVirtualInterruptHypercalls", 36 },
> +	{ "VpVirtualMmuHypercalls", 37 },
> +	{ "VpVirtualProcessorHypercalls", 38 },
> +	{ "VpHardwareInterrupts", 39 },
> +	{ "VpNestedPageFaultInterceptsCount", 40 },
> +	{ "VpNestedPageFaultInterceptsTime", 41 },
> +	{ "VpLogicalProcessorDispatches", 42 },
> +	{ "VpWaitingForCpuTime", 43 },
> +	{ "VpExtendedHypercalls", 44 },
> +	{ "VpExtendedHypercallInterceptMessages", 45 },
> +	{ "VpMbecNestedPageTableSwitches", 46 },
> +	{ "VpOtherReflectedGuestExceptions", 47 },
> +	{ "VpGlobalIoTlbFlushes", 48 },
> +	{ "VpGlobalIoTlbFlushCost", 49 },
> +	{ "VpLocalIoTlbFlushes", 50 },
> +	{ "VpLocalIoTlbFlushCost", 51 },
> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
> +	{ "VpPostedInterruptNotifications", 54 },
> +	{ "VpPostedInterruptScans", 55 },
> +	{ "VpTotalCoreRunTime", 56 },
> +	{ "VpMaximumRunTime", 57 },
> +	{ "VpWaitingForCpuTimeBucket0", 58 },
> +	{ "VpWaitingForCpuTimeBucket1", 59 },
> +	{ "VpWaitingForCpuTimeBucket2", 60 },
> +	{ "VpWaitingForCpuTimeBucket3", 61 },
> +	{ "VpWaitingForCpuTimeBucket4", 62 },
> +	{ "VpWaitingForCpuTimeBucket5", 63 },
> +	{ "VpWaitingForCpuTimeBucket6", 64 },
> +	{ "VpHwpRequestContextSwitches", 65 },
> +	{ "VpPlaceholder2", 66 },
> +	{ "VpPlaceholder3", 67 },
> +	{ "VpPlaceholder4", 68 },
> +	{ "VpPlaceholder5", 69 },
> +	{ "VpPlaceholder6", 70 },
> +	{ "VpPlaceholder7", 71 },
> +	{ "VpPlaceholder8", 72 },
> +	{ "VpContentionTime", 73 },
> +	{ "VpWakeUpTime", 74 },
> +	{ "VpSchedulingPriority", 75 },
> +	{ "VpVtl1DispatchCount", 76 },
> +	{ "VpVtl2DispatchCount", 77 },
> +	{ "VpVtl2DispatchBucket0", 78 },
> +	{ "VpVtl2DispatchBucket1", 79 },
> +	{ "VpVtl2DispatchBucket2", 80 },
> +	{ "VpVtl2DispatchBucket3", 81 },
> +	{ "VpVtl2DispatchBucket4", 82 },
> +	{ "VpVtl2DispatchBucket5", 83 },
> +	{ "VpVtl2DispatchBucket6", 84 },
> +	{ "VpVtl1RunTime", 85 },
> +	{ "VpVtl2RunTime", 86 },
> +	{ "VpIommuHypercalls", 87 },
> +	{ "VpCpuGroupHypercalls", 88 },
> +	{ "VpVsmHypercalls", 89 },
> +	{ "VpEventLogHypercalls", 90 },
> +	{ "VpDeviceDomainHypercalls", 91 },
> +	{ "VpDepositHypercalls", 92 },
> +	{ "VpSvmHypercalls", 93 },
> +	{ "VpLoadAvg", 94 },
> +	{ "VpRootDispatchThreadBlocked", 95 },
> +	{ "VpIdleCpuTime", 96 },
> +	{ "VpWaitingForCpuTimeBucket7", 97 },
> +	{ "VpWaitingForCpuTimeBucket8", 98 },
> +	{ "VpWaitingForCpuTimeBucket9", 99 },
> +	{ "VpWaitingForCpuTimeBucket10", 100 },
> +	{ "VpWaitingForCpuTimeBucket11", 101 },
> +	{ "VpWaitingForCpuTimeBucket12", 102 },
> +	{ "VpHierarchicalSuspendTime", 103 },
> +	{ "VpExpressSchedulingAttempts", 104 },
> +	{ "VpExpressSchedulingCount", 105 },
> +#endif
> +};
> +
> -- 
> 2.34.1

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 5/7] mshv: Update hv_stats_page definitions
  2026-01-21 21:46 ` [PATCH v4 5/7] mshv: Update hv_stats_page definitions Nuno Das Neves
@ 2026-01-22  1:22   ` Stanislav Kinsburskii
  0 siblings, 0 replies; 22+ messages in thread
From: Stanislav Kinsburskii @ 2026-01-22  1:22 UTC (permalink / raw)
  To: Nuno Das Neves
  Cc: linux-hyperv, linux-kernel, mhklinux, kys, haiyangz, wei.liu,
	decui, longli, prapal, mrathor, paekkaladevi

On Wed, Jan 21, 2026 at 01:46:21PM -0800, Nuno Das Neves wrote:
> hv_stats_page belongs in hvhdk.h, move it there.
> 
> It does not require a union to access the data for different counters,
> just use a single u64 array for simplicity and to match the Windows
> definitions.
> 
> While at it, correct the ARM64 value for VpRootDispatchThreadBlocked.
> 
> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
>
> ---
>  drivers/hv/mshv_root_main.c | 22 ++++++----------------
>  include/hyperv/hvhdk.h      |  8 ++++++++
>  2 files changed, 14 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
> index fbfc9e7d9fa4..12825666e21b 100644
> --- a/drivers/hv/mshv_root_main.c
> +++ b/drivers/hv/mshv_root_main.c
> @@ -39,23 +39,14 @@ MODULE_AUTHOR("Microsoft");
>  MODULE_LICENSE("GPL");
>  MODULE_DESCRIPTION("Microsoft Hyper-V root partition VMM interface /dev/mshv");
>  
> -/* TODO move this to another file when debugfs code is added */
>  enum hv_stats_vp_counters {			/* HV_THREAD_COUNTER */

Given the changes you are making for printing VP counters in the
subsequent patches, this enum looks redundant.
I'd suggest replacing it with a simple define for the thread-blocked
counter.

But nonetheless:

Reviewed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>


>  #if defined(CONFIG_X86)
> -	VpRootDispatchThreadBlocked			= 202,
> +	VpRootDispatchThreadBlocked = 202,
>  #elif defined(CONFIG_ARM64)
> -	VpRootDispatchThreadBlocked			= 94,
> +	VpRootDispatchThreadBlocked = 95,
>  #endif
> -	VpStatsMaxCounter
>  };
>  
> -struct hv_stats_page {
> -	union {
> -		u64 vp_cntrs[VpStatsMaxCounter];		/* VP counters */
> -		u8 data[HV_HYP_PAGE_SIZE];
> -	};
> -} __packed;
> -
>  struct mshv_root mshv_root;
>  
>  enum hv_scheduler_type hv_scheduler_type;
> @@ -485,12 +476,11 @@ static u64 mshv_vp_interrupt_pending(struct mshv_vp *vp)
>  static bool mshv_vp_dispatch_thread_blocked(struct mshv_vp *vp)
>  {
>  	struct hv_stats_page **stats = vp->vp_stats_pages;
> -	u64 *self_vp_cntrs = stats[HV_STATS_AREA_SELF]->vp_cntrs;
> -	u64 *parent_vp_cntrs = stats[HV_STATS_AREA_PARENT]->vp_cntrs;
> +	u64 *self_vp_cntrs = stats[HV_STATS_AREA_SELF]->data;
> +	u64 *parent_vp_cntrs = stats[HV_STATS_AREA_PARENT]->data;
>  
> -	if (self_vp_cntrs[VpRootDispatchThreadBlocked])
> -		return self_vp_cntrs[VpRootDispatchThreadBlocked];
> -	return parent_vp_cntrs[VpRootDispatchThreadBlocked];
> +	return parent_vp_cntrs[VpRootDispatchThreadBlocked] ||
> +	       self_vp_cntrs[VpRootDispatchThreadBlocked];
>  }
>  
>  static int
> diff --git a/include/hyperv/hvhdk.h b/include/hyperv/hvhdk.h
> index 469186df7826..ac501969105c 100644
> --- a/include/hyperv/hvhdk.h
> +++ b/include/hyperv/hvhdk.h
> @@ -10,6 +10,14 @@
>  #include "hvhdk_mini.h"
>  #include "hvgdk.h"
>  
> +/*
> + * Hypervisor statistics page format
> + */
> +struct hv_stats_page {
> +	u64 data[HV_HYP_PAGE_SIZE / sizeof(u64)];
> +} __packed;
> +
> +
>  /* Bits for dirty mask of hv_vp_register_page */
>  #define HV_X64_REGISTER_CLASS_GENERAL	0
>  #define HV_X64_REGISTER_CLASS_IP	1
> -- 
> 2.34.1

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-22  1:18   ` Stanislav Kinsburskii
@ 2026-01-22 18:21     ` Nuno Das Neves
  2026-01-22 18:52       ` Michael Kelley
  2026-01-23 22:28       ` Stanislav Kinsburskii
  0 siblings, 2 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-22 18:21 UTC (permalink / raw)
  To: Stanislav Kinsburskii
  Cc: linux-hyperv, linux-kernel, mhklinux, kys, haiyangz, wei.liu,
	decui, longli, prapal, mrathor, paekkaladevi

On 1/21/2026 5:18 PM, Stanislav Kinsburskii wrote:
> On Wed, Jan 21, 2026 at 01:46:22PM -0800, Nuno Das Neves wrote:
>> Introduce hv_counters.c, containing static data corresponding to
>> HV_*_COUNTER enums in the hypervisor source. Defining the enum
>> members as an array instead makes more sense, since it will be
>> iterated over to print counter information to debugfs.
>>
>> Include hypervisor, logical processor, partition, and virtual
>> processor counters.
>>
>> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
>> ---
>>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 488 insertions(+)
>>  create mode 100644 drivers/hv/hv_counters.c
>>
>> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
>> new file mode 100644
>> index 000000000000..a8e07e72cc29
>> --- /dev/null
>> +++ b/drivers/hv/hv_counters.c
>> @@ -0,0 +1,488 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Copyright (c) 2026, Microsoft Corporation.
>> + *
>> + * Data for printing stats page counters via debugfs.
>> + *
>> + * Authors: Microsoft Linux virtualization team
>> + */
>> +
>> +struct hv_counter_entry {
>> +	char *name;
>> +	int idx;
>> +};
> 
> This structure looks redundant to me mostly because of the "idx".
> It looks like what you need here is an array of pointers to strings, like
> below:
> 
> static const char *hv_hypervisor_counters[] = {
>         NULL, /* 0 is unused */
> 	"HvLogicalProcessors",
> 	"HvPartitions",
> 	"HvTotalPages",
> 	"HvVirtualProcessors",
> 	"HvMonitoredNotifications",
> 	"HvModernStandbyEntries",
> 	"HvPlatformIdleTransitions",
> 	"HvHypervisorStartupCost",
> 	NULL, /* 9 is unused */
> 	"HvIOSpacePages",
> 	...
> };
> 
> which can be iterated like this:
> 
> for (idx = 0; idx < ARRAY_SIZE(hv_hypervisor_counters); idx++) {
>     const char *name = hv_hypervisor_counters[idx];
>     if (!name)
> 	continue;
>     /* print */
>     ...
> }
> 
> What do you think?

It's an elegant option, given the values are almost uniformly
tightly packed. It also saves a fair bit of space - around 2.5 KB.

For my taste, I do like being able to visually verify the
correctness of any given member. That way whenever I look at it, I
don't have to blindly trust that the list was previously set up
correctly, or count the lines to check if a given value is correct.
Not a big deal, but it does introduce some friction.

We could also use a designated initializer list:

static const char *hv_hypervisor_counters[] = {
	[1] = "HvLogicalProcessors",
	[2] = "HvPartitions",
	[3] = "HvTotalPages",
	[4] = "HvVirtualProcessors",
	[5] = "HvMonitoredNotifications",
	[6] = "HvModernStandbyEntries",
	[7] = "HvPlatformIdleTransitions",
	[8] = "HvHypervisorStartupCost",

	[10] = "HvIOSpacePages",
	...
};

The indices are explicit, so it's easy to visually verify that any
particular part of the list is correct. It's functionally identical
to your approach, so it saves the same amount of space, and the
explicit NULLs are unnecessary so it's more straightforward to
transform from the Windows source in case of any gaps that are
harder to notice later on in the list.

How does that sound?

Nuno

> 
> Thanks,
> Stanislav
> 
>> +
>> +/* HV_HYPERVISOR_COUNTER */
>> +static struct hv_counter_entry hv_hypervisor_counters[] = {
>> +	{ "HvLogicalProcessors", 1 },
>> +	{ "HvPartitions", 2 },
>> +	{ "HvTotalPages", 3 },
>> +	{ "HvVirtualProcessors", 4 },
>> +	{ "HvMonitoredNotifications", 5 },
>> +	{ "HvModernStandbyEntries", 6 },
>> +	{ "HvPlatformIdleTransitions", 7 },
>> +	{ "HvHypervisorStartupCost", 8 },
>> +
>> +	{ "HvIOSpacePages", 10 },
>> +	{ "HvNonEssentialPagesForDump", 11 },
>> +	{ "HvSubsumedPages", 12 },
>> +};
>> +
>> +/* HV_CPU_COUNTER */
>> +static struct hv_counter_entry hv_lp_counters[] = {
>> +	{ "LpGlobalTime", 1 },
>> +	{ "LpTotalRunTime", 2 },
>> +	{ "LpHypervisorRunTime", 3 },
>> +	{ "LpHardwareInterrupts", 4 },
>> +	{ "LpContextSwitches", 5 },
>> +	{ "LpInterProcessorInterrupts", 6 },
>> +	{ "LpSchedulerInterrupts", 7 },
>> +	{ "LpTimerInterrupts", 8 },
>> +	{ "LpInterProcessorInterruptsSent", 9 },
>> +	{ "LpProcessorHalts", 10 },
>> +	{ "LpMonitorTransitionCost", 11 },
>> +	{ "LpContextSwitchTime", 12 },
>> +	{ "LpC1TransitionsCount", 13 },
>> +	{ "LpC1RunTime", 14 },
>> +	{ "LpC2TransitionsCount", 15 },
>> +	{ "LpC2RunTime", 16 },
>> +	{ "LpC3TransitionsCount", 17 },
>> +	{ "LpC3RunTime", 18 },
>> +	{ "LpRootVpIndex", 19 },
>> +	{ "LpIdleSequenceNumber", 20 },
>> +	{ "LpGlobalTscCount", 21 },
>> +	{ "LpActiveTscCount", 22 },
>> +	{ "LpIdleAccumulation", 23 },
>> +	{ "LpReferenceCycleCount0", 24 },
>> +	{ "LpActualCycleCount0", 25 },
>> +	{ "LpReferenceCycleCount1", 26 },
>> +	{ "LpActualCycleCount1", 27 },
>> +	{ "LpProximityDomainId", 28 },
>> +	{ "LpPostedInterruptNotifications", 29 },
>> +	{ "LpBranchPredictorFlushes", 30 },
>> +#if IS_ENABLED(CONFIG_X86_64)
>> +	{ "LpL1DataCacheFlushes", 31 },
>> +	{ "LpImmediateL1DataCacheFlushes", 32 },
>> +	{ "LpMbFlushes", 33 },
>> +	{ "LpCounterRefreshSequenceNumber", 34 },
>> +	{ "LpCounterRefreshReferenceTime", 35 },
>> +	{ "LpIdleAccumulationSnapshot", 36 },
>> +	{ "LpActiveTscCountSnapshot", 37 },
>> +	{ "LpHwpRequestContextSwitches", 38 },
>> +	{ "LpPlaceholder1", 39 },
>> +	{ "LpPlaceholder2", 40 },
>> +	{ "LpPlaceholder3", 41 },
>> +	{ "LpPlaceholder4", 42 },
>> +	{ "LpPlaceholder5", 43 },
>> +	{ "LpPlaceholder6", 44 },
>> +	{ "LpPlaceholder7", 45 },
>> +	{ "LpPlaceholder8", 46 },
>> +	{ "LpPlaceholder9", 47 },
>> +	{ "LpSchLocalRunListSize", 48 },
>> +	{ "LpReserveGroupId", 49 },
>> +	{ "LpRunningPriority", 50 },
>> +	{ "LpPerfmonInterruptCount", 51 },
>> +#elif IS_ENABLED(CONFIG_ARM64)
>> +	{ "LpCounterRefreshSequenceNumber", 31 },
>> +	{ "LpCounterRefreshReferenceTime", 32 },
>> +	{ "LpIdleAccumulationSnapshot", 33 },
>> +	{ "LpActiveTscCountSnapshot", 34 },
>> +	{ "LpHwpRequestContextSwitches", 35 },
>> +	{ "LpPlaceholder2", 36 },
>> +	{ "LpPlaceholder3", 37 },
>> +	{ "LpPlaceholder4", 38 },
>> +	{ "LpPlaceholder5", 39 },
>> +	{ "LpPlaceholder6", 40 },
>> +	{ "LpPlaceholder7", 41 },
>> +	{ "LpPlaceholder8", 42 },
>> +	{ "LpPlaceholder9", 43 },
>> +	{ "LpSchLocalRunListSize", 44 },
>> +	{ "LpReserveGroupId", 45 },
>> +	{ "LpRunningPriority", 46 },
>> +#endif
>> +};
>> +
>> +/* HV_PROCESS_COUNTER */
>> +static struct hv_counter_entry hv_partition_counters[] = {
>> +	{ "PtVirtualProcessors", 1 },
>> +
>> +	{ "PtTlbSize", 3 },
>> +	{ "PtAddressSpaces", 4 },
>> +	{ "PtDepositedPages", 5 },
>> +	{ "PtGpaPages", 6 },
>> +	{ "PtGpaSpaceModifications", 7 },
>> +	{ "PtVirtualTlbFlushEntires", 8 },
>> +	{ "PtRecommendedTlbSize", 9 },
>> +	{ "PtGpaPages4K", 10 },
>> +	{ "PtGpaPages2M", 11 },
>> +	{ "PtGpaPages1G", 12 },
>> +	{ "PtGpaPages512G", 13 },
>> +	{ "PtDevicePages4K", 14 },
>> +	{ "PtDevicePages2M", 15 },
>> +	{ "PtDevicePages1G", 16 },
>> +	{ "PtDevicePages512G", 17 },
>> +	{ "PtAttachedDevices", 18 },
>> +	{ "PtDeviceInterruptMappings", 19 },
>> +	{ "PtIoTlbFlushes", 20 },
>> +	{ "PtIoTlbFlushCost", 21 },
>> +	{ "PtDeviceInterruptErrors", 22 },
>> +	{ "PtDeviceDmaErrors", 23 },
>> +	{ "PtDeviceInterruptThrottleEvents", 24 },
>> +	{ "PtSkippedTimerTicks", 25 },
>> +	{ "PtPartitionId", 26 },
>> +#if IS_ENABLED(CONFIG_X86_64)
>> +	{ "PtNestedTlbSize", 27 },
>> +	{ "PtRecommendedNestedTlbSize", 28 },
>> +	{ "PtNestedTlbFreeListSize", 29 },
>> +	{ "PtNestedTlbTrimmedPages", 30 },
>> +	{ "PtPagesShattered", 31 },
>> +	{ "PtPagesRecombined", 32 },
>> +	{ "PtHwpRequestValue", 33 },
>> +	{ "PtAutoSuspendEnableTime", 34 },
>> +	{ "PtAutoSuspendTriggerTime", 35 },
>> +	{ "PtAutoSuspendDisableTime", 36 },
>> +	{ "PtPlaceholder1", 37 },
>> +	{ "PtPlaceholder2", 38 },
>> +	{ "PtPlaceholder3", 39 },
>> +	{ "PtPlaceholder4", 40 },
>> +	{ "PtPlaceholder5", 41 },
>> +	{ "PtPlaceholder6", 42 },
>> +	{ "PtPlaceholder7", 43 },
>> +	{ "PtPlaceholder8", 44 },
>> +	{ "PtHypervisorStateTransferGeneration", 45 },
>> +	{ "PtNumberofActiveChildPartitions", 46 },
>> +#elif IS_ENABLED(CONFIG_ARM64)
>> +	{ "PtHwpRequestValue", 27 },
>> +	{ "PtAutoSuspendEnableTime", 28 },
>> +	{ "PtAutoSuspendTriggerTime", 29 },
>> +	{ "PtAutoSuspendDisableTime", 30 },
>> +	{ "PtPlaceholder1", 31 },
>> +	{ "PtPlaceholder2", 32 },
>> +	{ "PtPlaceholder3", 33 },
>> +	{ "PtPlaceholder4", 34 },
>> +	{ "PtPlaceholder5", 35 },
>> +	{ "PtPlaceholder6", 36 },
>> +	{ "PtPlaceholder7", 37 },
>> +	{ "PtPlaceholder8", 38 },
>> +	{ "PtHypervisorStateTransferGeneration", 39 },
>> +	{ "PtNumberofActiveChildPartitions", 40 },
>> +#endif
>> +};
>> +
>> +/* HV_THREAD_COUNTER */
>> +static struct hv_counter_entry hv_vp_counters[] = {
>> +	{ "VpTotalRunTime", 1 },
>> +	{ "VpHypervisorRunTime", 2 },
>> +	{ "VpRemoteNodeRunTime", 3 },
>> +	{ "VpNormalizedRunTime", 4 },
>> +	{ "VpIdealCpu", 5 },
>> +
>> +	{ "VpHypercallsCount", 7 },
>> +	{ "VpHypercallsTime", 8 },
>> +#if IS_ENABLED(CONFIG_X86_64)
>> +	{ "VpPageInvalidationsCount", 9 },
>> +	{ "VpPageInvalidationsTime", 10 },
>> +	{ "VpControlRegisterAccessesCount", 11 },
>> +	{ "VpControlRegisterAccessesTime", 12 },
>> +	{ "VpIoInstructionsCount", 13 },
>> +	{ "VpIoInstructionsTime", 14 },
>> +	{ "VpHltInstructionsCount", 15 },
>> +	{ "VpHltInstructionsTime", 16 },
>> +	{ "VpMwaitInstructionsCount", 17 },
>> +	{ "VpMwaitInstructionsTime", 18 },
>> +	{ "VpCpuidInstructionsCount", 19 },
>> +	{ "VpCpuidInstructionsTime", 20 },
>> +	{ "VpMsrAccessesCount", 21 },
>> +	{ "VpMsrAccessesTime", 22 },
>> +	{ "VpOtherInterceptsCount", 23 },
>> +	{ "VpOtherInterceptsTime", 24 },
>> +	{ "VpExternalInterruptsCount", 25 },
>> +	{ "VpExternalInterruptsTime", 26 },
>> +	{ "VpPendingInterruptsCount", 27 },
>> +	{ "VpPendingInterruptsTime", 28 },
>> +	{ "VpEmulatedInstructionsCount", 29 },
>> +	{ "VpEmulatedInstructionsTime", 30 },
>> +	{ "VpDebugRegisterAccessesCount", 31 },
>> +	{ "VpDebugRegisterAccessesTime", 32 },
>> +	{ "VpPageFaultInterceptsCount", 33 },
>> +	{ "VpPageFaultInterceptsTime", 34 },
>> +	{ "VpGuestPageTableMaps", 35 },
>> +	{ "VpLargePageTlbFills", 36 },
>> +	{ "VpSmallPageTlbFills", 37 },
>> +	{ "VpReflectedGuestPageFaults", 38 },
>> +	{ "VpApicMmioAccesses", 39 },
>> +	{ "VpIoInterceptMessages", 40 },
>> +	{ "VpMemoryInterceptMessages", 41 },
>> +	{ "VpApicEoiAccesses", 42 },
>> +	{ "VpOtherMessages", 43 },
>> +	{ "VpPageTableAllocations", 44 },
>> +	{ "VpLogicalProcessorMigrations", 45 },
>> +	{ "VpAddressSpaceEvictions", 46 },
>> +	{ "VpAddressSpaceSwitches", 47 },
>> +	{ "VpAddressDomainFlushes", 48 },
>> +	{ "VpAddressSpaceFlushes", 49 },
>> +	{ "VpGlobalGvaRangeFlushes", 50 },
>> +	{ "VpLocalGvaRangeFlushes", 51 },
>> +	{ "VpPageTableEvictions", 52 },
>> +	{ "VpPageTableReclamations", 53 },
>> +	{ "VpPageTableResets", 54 },
>> +	{ "VpPageTableValidations", 55 },
>> +	{ "VpApicTprAccesses", 56 },
>> +	{ "VpPageTableWriteIntercepts", 57 },
>> +	{ "VpSyntheticInterrupts", 58 },
>> +	{ "VpVirtualInterrupts", 59 },
>> +	{ "VpApicIpisSent", 60 },
>> +	{ "VpApicSelfIpisSent", 61 },
>> +	{ "VpGpaSpaceHypercalls", 62 },
>> +	{ "VpLogicalProcessorHypercalls", 63 },
>> +	{ "VpLongSpinWaitHypercalls", 64 },
>> +	{ "VpOtherHypercalls", 65 },
>> +	{ "VpSyntheticInterruptHypercalls", 66 },
>> +	{ "VpVirtualInterruptHypercalls", 67 },
>> +	{ "VpVirtualMmuHypercalls", 68 },
>> +	{ "VpVirtualProcessorHypercalls", 69 },
>> +	{ "VpHardwareInterrupts", 70 },
>> +	{ "VpNestedPageFaultInterceptsCount", 71 },
>> +	{ "VpNestedPageFaultInterceptsTime", 72 },
>> +	{ "VpPageScans", 73 },
>> +	{ "VpLogicalProcessorDispatches", 74 },
>> +	{ "VpWaitingForCpuTime", 75 },
>> +	{ "VpExtendedHypercalls", 76 },
>> +	{ "VpExtendedHypercallInterceptMessages", 77 },
>> +	{ "VpMbecNestedPageTableSwitches", 78 },
>> +	{ "VpOtherReflectedGuestExceptions", 79 },
>> +	{ "VpGlobalIoTlbFlushes", 80 },
>> +	{ "VpGlobalIoTlbFlushCost", 81 },
>> +	{ "VpLocalIoTlbFlushes", 82 },
>> +	{ "VpLocalIoTlbFlushCost", 83 },
>> +	{ "VpHypercallsForwardedCount", 84 },
>> +	{ "VpHypercallsForwardingTime", 85 },
>> +	{ "VpPageInvalidationsForwardedCount", 86 },
>> +	{ "VpPageInvalidationsForwardingTime", 87 },
>> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
>> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
>> +	{ "VpIoInstructionsForwardedCount", 90 },
>> +	{ "VpIoInstructionsForwardingTime", 91 },
>> +	{ "VpHltInstructionsForwardedCount", 92 },
>> +	{ "VpHltInstructionsForwardingTime", 93 },
>> +	{ "VpMwaitInstructionsForwardedCount", 94 },
>> +	{ "VpMwaitInstructionsForwardingTime", 95 },
>> +	{ "VpCpuidInstructionsForwardedCount", 96 },
>> +	{ "VpCpuidInstructionsForwardingTime", 97 },
>> +	{ "VpMsrAccessesForwardedCount", 98 },
>> +	{ "VpMsrAccessesForwardingTime", 99 },
>> +	{ "VpOtherInterceptsForwardedCount", 100 },
>> +	{ "VpOtherInterceptsForwardingTime", 101 },
>> +	{ "VpExternalInterruptsForwardedCount", 102 },
>> +	{ "VpExternalInterruptsForwardingTime", 103 },
>> +	{ "VpPendingInterruptsForwardedCount", 104 },
>> +	{ "VpPendingInterruptsForwardingTime", 105 },
>> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
>> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
>> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
>> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
>> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
>> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
>> +	{ "VpVmclearEmulationCount", 112 },
>> +	{ "VpVmclearEmulationTime", 113 },
>> +	{ "VpVmptrldEmulationCount", 114 },
>> +	{ "VpVmptrldEmulationTime", 115 },
>> +	{ "VpVmptrstEmulationCount", 116 },
>> +	{ "VpVmptrstEmulationTime", 117 },
>> +	{ "VpVmreadEmulationCount", 118 },
>> +	{ "VpVmreadEmulationTime", 119 },
>> +	{ "VpVmwriteEmulationCount", 120 },
>> +	{ "VpVmwriteEmulationTime", 121 },
>> +	{ "VpVmxoffEmulationCount", 122 },
>> +	{ "VpVmxoffEmulationTime", 123 },
>> +	{ "VpVmxonEmulationCount", 124 },
>> +	{ "VpVmxonEmulationTime", 125 },
>> +	{ "VpNestedVMEntriesCount", 126 },
>> +	{ "VpNestedVMEntriesTime", 127 },
>> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
>> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
>> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
>> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
>> +	{ "VpInvEptAllContextEmulationCount", 132 },
>> +	{ "VpInvEptAllContextEmulationTime", 133 },
>> +	{ "VpInvEptSingleContextEmulationCount", 134 },
>> +	{ "VpInvEptSingleContextEmulationTime", 135 },
>> +	{ "VpInvVpidAllContextEmulationCount", 136 },
>> +	{ "VpInvVpidAllContextEmulationTime", 137 },
>> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
>> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
>> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
>> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
>> +	{ "VpNestedTlbPageTableReclamations", 142 },
>> +	{ "VpNestedTlbPageTableEvictions", 143 },
>> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
>> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
>> +	{ "VpPostedInterruptNotifications", 146 },
>> +	{ "VpPostedInterruptScans", 147 },
>> +	{ "VpTotalCoreRunTime", 148 },
>> +	{ "VpMaximumRunTime", 149 },
>> +	{ "VpHwpRequestContextSwitches", 150 },
>> +	{ "VpWaitingForCpuTimeBucket0", 151 },
>> +	{ "VpWaitingForCpuTimeBucket1", 152 },
>> +	{ "VpWaitingForCpuTimeBucket2", 153 },
>> +	{ "VpWaitingForCpuTimeBucket3", 154 },
>> +	{ "VpWaitingForCpuTimeBucket4", 155 },
>> +	{ "VpWaitingForCpuTimeBucket5", 156 },
>> +	{ "VpWaitingForCpuTimeBucket6", 157 },
>> +	{ "VpVmloadEmulationCount", 158 },
>> +	{ "VpVmloadEmulationTime", 159 },
>> +	{ "VpVmsaveEmulationCount", 160 },
>> +	{ "VpVmsaveEmulationTime", 161 },
>> +	{ "VpGifInstructionEmulationCount", 162 },
>> +	{ "VpGifInstructionEmulationTime", 163 },
>> +	{ "VpEmulatedErrataSvmInstructions", 164 },
>> +	{ "VpPlaceholder1", 165 },
>> +	{ "VpPlaceholder2", 166 },
>> +	{ "VpPlaceholder3", 167 },
>> +	{ "VpPlaceholder4", 168 },
>> +	{ "VpPlaceholder5", 169 },
>> +	{ "VpPlaceholder6", 170 },
>> +	{ "VpPlaceholder7", 171 },
>> +	{ "VpPlaceholder8", 172 },
>> +	{ "VpContentionTime", 173 },
>> +	{ "VpWakeUpTime", 174 },
>> +	{ "VpSchedulingPriority", 175 },
>> +	{ "VpRdpmcInstructionsCount", 176 },
>> +	{ "VpRdpmcInstructionsTime", 177 },
>> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
>> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
>> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
>> +	{ "VpPerfmonInterruptCount", 181 },
>> +	{ "VpVtl1DispatchCount", 182 },
>> +	{ "VpVtl2DispatchCount", 183 },
>> +	{ "VpVtl2DispatchBucket0", 184 },
>> +	{ "VpVtl2DispatchBucket1", 185 },
>> +	{ "VpVtl2DispatchBucket2", 186 },
>> +	{ "VpVtl2DispatchBucket3", 187 },
>> +	{ "VpVtl2DispatchBucket4", 188 },
>> +	{ "VpVtl2DispatchBucket5", 189 },
>> +	{ "VpVtl2DispatchBucket6", 190 },
>> +	{ "VpVtl1RunTime", 191 },
>> +	{ "VpVtl2RunTime", 192 },
>> +	{ "VpIommuHypercalls", 193 },
>> +	{ "VpCpuGroupHypercalls", 194 },
>> +	{ "VpVsmHypercalls", 195 },
>> +	{ "VpEventLogHypercalls", 196 },
>> +	{ "VpDeviceDomainHypercalls", 197 },
>> +	{ "VpDepositHypercalls", 198 },
>> +	{ "VpSvmHypercalls", 199 },
>> +	{ "VpBusLockAcquisitionCount", 200 },
>> +	{ "VpLoadAvg", 201 },
>> +	{ "VpRootDispatchThreadBlocked", 202 },
>> +	{ "VpIdleCpuTime", 203 },
>> +	{ "VpWaitingForCpuTimeBucket7", 204 },
>> +	{ "VpWaitingForCpuTimeBucket8", 205 },
>> +	{ "VpWaitingForCpuTimeBucket9", 206 },
>> +	{ "VpWaitingForCpuTimeBucket10", 207 },
>> +	{ "VpWaitingForCpuTimeBucket11", 208 },
>> +	{ "VpWaitingForCpuTimeBucket12", 209 },
>> +	{ "VpHierarchicalSuspendTime", 210 },
>> +	{ "VpExpressSchedulingAttempts", 211 },
>> +	{ "VpExpressSchedulingCount", 212 },
>> +	{ "VpBusLockAcquisitionTime", 213 },
>> +#elif IS_ENABLED(CONFIG_ARM64)
>> +	{ "VpSysRegAccessesCount", 9 },
>> +	{ "VpSysRegAccessesTime", 10 },
>> +	{ "VpSmcInstructionsCount", 11 },
>> +	{ "VpSmcInstructionsTime", 12 },
>> +	{ "VpOtherInterceptsCount", 13 },
>> +	{ "VpOtherInterceptsTime", 14 },
>> +	{ "VpExternalInterruptsCount", 15 },
>> +	{ "VpExternalInterruptsTime", 16 },
>> +	{ "VpPendingInterruptsCount", 17 },
>> +	{ "VpPendingInterruptsTime", 18 },
>> +	{ "VpGuestPageTableMaps", 19 },
>> +	{ "VpLargePageTlbFills", 20 },
>> +	{ "VpSmallPageTlbFills", 21 },
>> +	{ "VpReflectedGuestPageFaults", 22 },
>> +	{ "VpMemoryInterceptMessages", 23 },
>> +	{ "VpOtherMessages", 24 },
>> +	{ "VpLogicalProcessorMigrations", 25 },
>> +	{ "VpAddressDomainFlushes", 26 },
>> +	{ "VpAddressSpaceFlushes", 27 },
>> +	{ "VpSyntheticInterrupts", 28 },
>> +	{ "VpVirtualInterrupts", 29 },
>> +	{ "VpApicSelfIpisSent", 30 },
>> +	{ "VpGpaSpaceHypercalls", 31 },
>> +	{ "VpLogicalProcessorHypercalls", 32 },
>> +	{ "VpLongSpinWaitHypercalls", 33 },
>> +	{ "VpOtherHypercalls", 34 },
>> +	{ "VpSyntheticInterruptHypercalls", 35 },
>> +	{ "VpVirtualInterruptHypercalls", 36 },
>> +	{ "VpVirtualMmuHypercalls", 37 },
>> +	{ "VpVirtualProcessorHypercalls", 38 },
>> +	{ "VpHardwareInterrupts", 39 },
>> +	{ "VpNestedPageFaultInterceptsCount", 40 },
>> +	{ "VpNestedPageFaultInterceptsTime", 41 },
>> +	{ "VpLogicalProcessorDispatches", 42 },
>> +	{ "VpWaitingForCpuTime", 43 },
>> +	{ "VpExtendedHypercalls", 44 },
>> +	{ "VpExtendedHypercallInterceptMessages", 45 },
>> +	{ "VpMbecNestedPageTableSwitches", 46 },
>> +	{ "VpOtherReflectedGuestExceptions", 47 },
>> +	{ "VpGlobalIoTlbFlushes", 48 },
>> +	{ "VpGlobalIoTlbFlushCost", 49 },
>> +	{ "VpLocalIoTlbFlushes", 50 },
>> +	{ "VpLocalIoTlbFlushCost", 51 },
>> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
>> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
>> +	{ "VpPostedInterruptNotifications", 54 },
>> +	{ "VpPostedInterruptScans", 55 },
>> +	{ "VpTotalCoreRunTime", 56 },
>> +	{ "VpMaximumRunTime", 57 },
>> +	{ "VpWaitingForCpuTimeBucket0", 58 },
>> +	{ "VpWaitingForCpuTimeBucket1", 59 },
>> +	{ "VpWaitingForCpuTimeBucket2", 60 },
>> +	{ "VpWaitingForCpuTimeBucket3", 61 },
>> +	{ "VpWaitingForCpuTimeBucket4", 62 },
>> +	{ "VpWaitingForCpuTimeBucket5", 63 },
>> +	{ "VpWaitingForCpuTimeBucket6", 64 },
>> +	{ "VpHwpRequestContextSwitches", 65 },
>> +	{ "VpPlaceholder2", 66 },
>> +	{ "VpPlaceholder3", 67 },
>> +	{ "VpPlaceholder4", 68 },
>> +	{ "VpPlaceholder5", 69 },
>> +	{ "VpPlaceholder6", 70 },
>> +	{ "VpPlaceholder7", 71 },
>> +	{ "VpPlaceholder8", 72 },
>> +	{ "VpContentionTime", 73 },
>> +	{ "VpWakeUpTime", 74 },
>> +	{ "VpSchedulingPriority", 75 },
>> +	{ "VpVtl1DispatchCount", 76 },
>> +	{ "VpVtl2DispatchCount", 77 },
>> +	{ "VpVtl2DispatchBucket0", 78 },
>> +	{ "VpVtl2DispatchBucket1", 79 },
>> +	{ "VpVtl2DispatchBucket2", 80 },
>> +	{ "VpVtl2DispatchBucket3", 81 },
>> +	{ "VpVtl2DispatchBucket4", 82 },
>> +	{ "VpVtl2DispatchBucket5", 83 },
>> +	{ "VpVtl2DispatchBucket6", 84 },
>> +	{ "VpVtl1RunTime", 85 },
>> +	{ "VpVtl2RunTime", 86 },
>> +	{ "VpIommuHypercalls", 87 },
>> +	{ "VpCpuGroupHypercalls", 88 },
>> +	{ "VpVsmHypercalls", 89 },
>> +	{ "VpEventLogHypercalls", 90 },
>> +	{ "VpDeviceDomainHypercalls", 91 },
>> +	{ "VpDepositHypercalls", 92 },
>> +	{ "VpSvmHypercalls", 93 },
>> +	{ "VpLoadAvg", 94 },
>> +	{ "VpRootDispatchThreadBlocked", 95 },
>> +	{ "VpIdleCpuTime", 96 },
>> +	{ "VpWaitingForCpuTimeBucket7", 97 },
>> +	{ "VpWaitingForCpuTimeBucket8", 98 },
>> +	{ "VpWaitingForCpuTimeBucket9", 99 },
>> +	{ "VpWaitingForCpuTimeBucket10", 100 },
>> +	{ "VpWaitingForCpuTimeBucket11", 101 },
>> +	{ "VpWaitingForCpuTimeBucket12", 102 },
>> +	{ "VpHierarchicalSuspendTime", 103 },
>> +	{ "VpExpressSchedulingAttempts", 104 },
>> +	{ "VpExpressSchedulingCount", 105 },
>> +#endif
>> +};
>> +
>> -- 
>> 2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-22 18:21     ` Nuno Das Neves
@ 2026-01-22 18:52       ` Michael Kelley
  2026-01-23 22:28       ` Stanislav Kinsburskii
  1 sibling, 0 replies; 22+ messages in thread
From: Michael Kelley @ 2026-01-22 18:52 UTC (permalink / raw)
  To: Nuno Das Neves, Stanislav Kinsburskii
  Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com

From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Thursday, January 22, 2026 10:21 AM
> 
> On 1/21/2026 5:18 PM, Stanislav Kinsburskii wrote:
> > On Wed, Jan 21, 2026 at 01:46:22PM -0800, Nuno Das Neves wrote:
> >> Introduce hv_counters.c, containing static data corresponding to
> >> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> >> members as an array instead makes more sense, since it will be
> >> iterated over to print counter information to debugfs.
> >>
> >> Include hypervisor, logical processor, partition, and virtual
> >> processor counters.
> >>
> >> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> >> ---
> >>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 488 insertions(+)
> >>  create mode 100644 drivers/hv/hv_counters.c
> >>
> >> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
> >> new file mode 100644
> >> index 000000000000..a8e07e72cc29
> >> --- /dev/null
> >> +++ b/drivers/hv/hv_counters.c
> >> @@ -0,0 +1,488 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/*
> >> + * Copyright (c) 2026, Microsoft Corporation.
> >> + *
> >> + * Data for printing stats page counters via debugfs.
> >> + *
> >> + * Authors: Microsoft Linux virtualization team
> >> + */
> >> +
> >> +struct hv_counter_entry {
> >> +	char *name;
> >> +	int idx;
> >> +};
> >
> > This structure looks redundant to me mostly because of the "idx".
> > It looks what you need here is an arry of pointers to strings, like
> > below:
> >
> > static const char *hv_hypervisor_counters[] = {
> >         NULL, /* 0 is unused */
> > 	"HvLogicalProcessors",
> > 	"HvPartitions",
> > 	"HvTotalPages",
> > 	"HvVirtualProcessors",
> > 	"HvMonitoredNotifications",
> > 	"HvModernStandbyEntries",
> > 	"HvPlatformIdleTransitions",
> > 	"HvHypervisorStartupCost",
> > 	NULL, /* 9 is unused */
> > 	"HvIOSpacePages",
> > 	...
> > };
> >
> > which can be iterated like this:
> >
> > for (idx = 0; idx < ARRAY_SIZE(hv_hypervisor_counters); idx++) {
> >     const char *name = hv_hypervisor_counters[idx];
> >     if (!name)
> > 	continue;
> >     /* print */
> >     ...
> > }
> >
> > What do you think?
> 
> It's an elegant option, given the values are almost uniformly
> tightly packed. It also saves a fair bit of space - around 2.5 KB.
> 
> For my taste, I do like being able to visually verify the
> correctness of any given member. That way whenever I look at it, I
> don't have to blindly trust that the list was previously set up
> correctly, or count the lines to check if a given value is correct.
> Not a big deal, but it does introduce some friction.
> 
> We could also use a designated initializer list:
> 
> static const char *hv_hypervisor_counters[] = {
> 	[1] = "HvLogicalProcessors",
> 	[2] = "HvPartitions",
> 	[3] = "HvTotalPages",
> 	[4] = "HvVirtualProcessors",
> 	[5] = "HvMonitoredNotifications",
> 	[6] = "HvModernStandbyEntries",
> 	[7] = "HvPlatformIdleTransitions",
> 	[8] = "HvHypervisorStartupCost",
> 
> 	[10] = "HvIOSpacePages",
> 	...
> };
> 
> The indices are explicit, so it's easy to visually verify that any
> particular part of the list is correct. It's functionally identical
> to your approach, so it saves the same amount of space, and the
> explicit NULLs are unnecessary so it's more straightforward to
> transform from the Windows source in case of any gaps that are
> harder to notice later on in the list.
> 
> How does that sound?
> 
> Nuno

That's pretty nice and checks all the boxes. +1 for me.

FYI, I have some additional comments coming for the overall
series -- hope to have those posted tomorrow at the latest.

Michael

> 
> >
> > Thanks,
> > Stanislav
> >
> >> +
> >> +/* HV_HYPERVISOR_COUNTER */
> >> +static struct hv_counter_entry hv_hypervisor_counters[] = {
> >> +	{ "HvLogicalProcessors", 1 },
> >> +	{ "HvPartitions", 2 },
> >> +	{ "HvTotalPages", 3 },
> >> +	{ "HvVirtualProcessors", 4 },
> >> +	{ "HvMonitoredNotifications", 5 },
> >> +	{ "HvModernStandbyEntries", 6 },
> >> +	{ "HvPlatformIdleTransitions", 7 },
> >> +	{ "HvHypervisorStartupCost", 8 },
> >> +
> >> +	{ "HvIOSpacePages", 10 },
> >> +	{ "HvNonEssentialPagesForDump", 11 },
> >> +	{ "HvSubsumedPages", 12 },
> >> +};
> >> +
> >> +/* HV_CPU_COUNTER */
> >> +static struct hv_counter_entry hv_lp_counters[] = {
> >> +	{ "LpGlobalTime", 1 },
> >> +	{ "LpTotalRunTime", 2 },
> >> +	{ "LpHypervisorRunTime", 3 },
> >> +	{ "LpHardwareInterrupts", 4 },
> >> +	{ "LpContextSwitches", 5 },
> >> +	{ "LpInterProcessorInterrupts", 6 },
> >> +	{ "LpSchedulerInterrupts", 7 },
> >> +	{ "LpTimerInterrupts", 8 },
> >> +	{ "LpInterProcessorInterruptsSent", 9 },
> >> +	{ "LpProcessorHalts", 10 },
> >> +	{ "LpMonitorTransitionCost", 11 },
> >> +	{ "LpContextSwitchTime", 12 },
> >> +	{ "LpC1TransitionsCount", 13 },
> >> +	{ "LpC1RunTime", 14 },
> >> +	{ "LpC2TransitionsCount", 15 },
> >> +	{ "LpC2RunTime", 16 },
> >> +	{ "LpC3TransitionsCount", 17 },
> >> +	{ "LpC3RunTime", 18 },
> >> +	{ "LpRootVpIndex", 19 },
> >> +	{ "LpIdleSequenceNumber", 20 },
> >> +	{ "LpGlobalTscCount", 21 },
> >> +	{ "LpActiveTscCount", 22 },
> >> +	{ "LpIdleAccumulation", 23 },
> >> +	{ "LpReferenceCycleCount0", 24 },
> >> +	{ "LpActualCycleCount0", 25 },
> >> +	{ "LpReferenceCycleCount1", 26 },
> >> +	{ "LpActualCycleCount1", 27 },
> >> +	{ "LpProximityDomainId", 28 },
> >> +	{ "LpPostedInterruptNotifications", 29 },
> >> +	{ "LpBranchPredictorFlushes", 30 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "LpL1DataCacheFlushes", 31 },
> >> +	{ "LpImmediateL1DataCacheFlushes", 32 },
> >> +	{ "LpMbFlushes", 33 },
> >> +	{ "LpCounterRefreshSequenceNumber", 34 },
> >> +	{ "LpCounterRefreshReferenceTime", 35 },
> >> +	{ "LpIdleAccumulationSnapshot", 36 },
> >> +	{ "LpActiveTscCountSnapshot", 37 },
> >> +	{ "LpHwpRequestContextSwitches", 38 },
> >> +	{ "LpPlaceholder1", 39 },
> >> +	{ "LpPlaceholder2", 40 },
> >> +	{ "LpPlaceholder3", 41 },
> >> +	{ "LpPlaceholder4", 42 },
> >> +	{ "LpPlaceholder5", 43 },
> >> +	{ "LpPlaceholder6", 44 },
> >> +	{ "LpPlaceholder7", 45 },
> >> +	{ "LpPlaceholder8", 46 },
> >> +	{ "LpPlaceholder9", 47 },
> >> +	{ "LpSchLocalRunListSize", 48 },
> >> +	{ "LpReserveGroupId", 49 },
> >> +	{ "LpRunningPriority", 50 },
> >> +	{ "LpPerfmonInterruptCount", 51 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "LpCounterRefreshSequenceNumber", 31 },
> >> +	{ "LpCounterRefreshReferenceTime", 32 },
> >> +	{ "LpIdleAccumulationSnapshot", 33 },
> >> +	{ "LpActiveTscCountSnapshot", 34 },
> >> +	{ "LpHwpRequestContextSwitches", 35 },
> >> +	{ "LpPlaceholder2", 36 },
> >> +	{ "LpPlaceholder3", 37 },
> >> +	{ "LpPlaceholder4", 38 },
> >> +	{ "LpPlaceholder5", 39 },
> >> +	{ "LpPlaceholder6", 40 },
> >> +	{ "LpPlaceholder7", 41 },
> >> +	{ "LpPlaceholder8", 42 },
> >> +	{ "LpPlaceholder9", 43 },
> >> +	{ "LpSchLocalRunListSize", 44 },
> >> +	{ "LpReserveGroupId", 45 },
> >> +	{ "LpRunningPriority", 46 },
> >> +#endif
> >> +};
> >> +
> >> +/* HV_PROCESS_COUNTER */
> >> +static struct hv_counter_entry hv_partition_counters[] = {
> >> +	{ "PtVirtualProcessors", 1 },
> >> +
> >> +	{ "PtTlbSize", 3 },
> >> +	{ "PtAddressSpaces", 4 },
> >> +	{ "PtDepositedPages", 5 },
> >> +	{ "PtGpaPages", 6 },
> >> +	{ "PtGpaSpaceModifications", 7 },
> >> +	{ "PtVirtualTlbFlushEntires", 8 },
> >> +	{ "PtRecommendedTlbSize", 9 },
> >> +	{ "PtGpaPages4K", 10 },
> >> +	{ "PtGpaPages2M", 11 },
> >> +	{ "PtGpaPages1G", 12 },
> >> +	{ "PtGpaPages512G", 13 },
> >> +	{ "PtDevicePages4K", 14 },
> >> +	{ "PtDevicePages2M", 15 },
> >> +	{ "PtDevicePages1G", 16 },
> >> +	{ "PtDevicePages512G", 17 },
> >> +	{ "PtAttachedDevices", 18 },
> >> +	{ "PtDeviceInterruptMappings", 19 },
> >> +	{ "PtIoTlbFlushes", 20 },
> >> +	{ "PtIoTlbFlushCost", 21 },
> >> +	{ "PtDeviceInterruptErrors", 22 },
> >> +	{ "PtDeviceDmaErrors", 23 },
> >> +	{ "PtDeviceInterruptThrottleEvents", 24 },
> >> +	{ "PtSkippedTimerTicks", 25 },
> >> +	{ "PtPartitionId", 26 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "PtNestedTlbSize", 27 },
> >> +	{ "PtRecommendedNestedTlbSize", 28 },
> >> +	{ "PtNestedTlbFreeListSize", 29 },
> >> +	{ "PtNestedTlbTrimmedPages", 30 },
> >> +	{ "PtPagesShattered", 31 },
> >> +	{ "PtPagesRecombined", 32 },
> >> +	{ "PtHwpRequestValue", 33 },
> >> +	{ "PtAutoSuspendEnableTime", 34 },
> >> +	{ "PtAutoSuspendTriggerTime", 35 },
> >> +	{ "PtAutoSuspendDisableTime", 36 },
> >> +	{ "PtPlaceholder1", 37 },
> >> +	{ "PtPlaceholder2", 38 },
> >> +	{ "PtPlaceholder3", 39 },
> >> +	{ "PtPlaceholder4", 40 },
> >> +	{ "PtPlaceholder5", 41 },
> >> +	{ "PtPlaceholder6", 42 },
> >> +	{ "PtPlaceholder7", 43 },
> >> +	{ "PtPlaceholder8", 44 },
> >> +	{ "PtHypervisorStateTransferGeneration", 45 },
> >> +	{ "PtNumberofActiveChildPartitions", 46 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "PtHwpRequestValue", 27 },
> >> +	{ "PtAutoSuspendEnableTime", 28 },
> >> +	{ "PtAutoSuspendTriggerTime", 29 },
> >> +	{ "PtAutoSuspendDisableTime", 30 },
> >> +	{ "PtPlaceholder1", 31 },
> >> +	{ "PtPlaceholder2", 32 },
> >> +	{ "PtPlaceholder3", 33 },
> >> +	{ "PtPlaceholder4", 34 },
> >> +	{ "PtPlaceholder5", 35 },
> >> +	{ "PtPlaceholder6", 36 },
> >> +	{ "PtPlaceholder7", 37 },
> >> +	{ "PtPlaceholder8", 38 },
> >> +	{ "PtHypervisorStateTransferGeneration", 39 },
> >> +	{ "PtNumberofActiveChildPartitions", 40 },
> >> +#endif
> >> +};
> >> +
> >> +/* HV_THREAD_COUNTER */
> >> +static struct hv_counter_entry hv_vp_counters[] = {
> >> +	{ "VpTotalRunTime", 1 },
> >> +	{ "VpHypervisorRunTime", 2 },
> >> +	{ "VpRemoteNodeRunTime", 3 },
> >> +	{ "VpNormalizedRunTime", 4 },
> >> +	{ "VpIdealCpu", 5 },
> >> +
> >> +	{ "VpHypercallsCount", 7 },
> >> +	{ "VpHypercallsTime", 8 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "VpPageInvalidationsCount", 9 },
> >> +	{ "VpPageInvalidationsTime", 10 },
> >> +	{ "VpControlRegisterAccessesCount", 11 },
> >> +	{ "VpControlRegisterAccessesTime", 12 },
> >> +	{ "VpIoInstructionsCount", 13 },
> >> +	{ "VpIoInstructionsTime", 14 },
> >> +	{ "VpHltInstructionsCount", 15 },
> >> +	{ "VpHltInstructionsTime", 16 },
> >> +	{ "VpMwaitInstructionsCount", 17 },
> >> +	{ "VpMwaitInstructionsTime", 18 },
> >> +	{ "VpCpuidInstructionsCount", 19 },
> >> +	{ "VpCpuidInstructionsTime", 20 },
> >> +	{ "VpMsrAccessesCount", 21 },
> >> +	{ "VpMsrAccessesTime", 22 },
> >> +	{ "VpOtherInterceptsCount", 23 },
> >> +	{ "VpOtherInterceptsTime", 24 },
> >> +	{ "VpExternalInterruptsCount", 25 },
> >> +	{ "VpExternalInterruptsTime", 26 },
> >> +	{ "VpPendingInterruptsCount", 27 },
> >> +	{ "VpPendingInterruptsTime", 28 },
> >> +	{ "VpEmulatedInstructionsCount", 29 },
> >> +	{ "VpEmulatedInstructionsTime", 30 },
> >> +	{ "VpDebugRegisterAccessesCount", 31 },
> >> +	{ "VpDebugRegisterAccessesTime", 32 },
> >> +	{ "VpPageFaultInterceptsCount", 33 },
> >> +	{ "VpPageFaultInterceptsTime", 34 },
> >> +	{ "VpGuestPageTableMaps", 35 },
> >> +	{ "VpLargePageTlbFills", 36 },
> >> +	{ "VpSmallPageTlbFills", 37 },
> >> +	{ "VpReflectedGuestPageFaults", 38 },
> >> +	{ "VpApicMmioAccesses", 39 },
> >> +	{ "VpIoInterceptMessages", 40 },
> >> +	{ "VpMemoryInterceptMessages", 41 },
> >> +	{ "VpApicEoiAccesses", 42 },
> >> +	{ "VpOtherMessages", 43 },
> >> +	{ "VpPageTableAllocations", 44 },
> >> +	{ "VpLogicalProcessorMigrations", 45 },
> >> +	{ "VpAddressSpaceEvictions", 46 },
> >> +	{ "VpAddressSpaceSwitches", 47 },
> >> +	{ "VpAddressDomainFlushes", 48 },
> >> +	{ "VpAddressSpaceFlushes", 49 },
> >> +	{ "VpGlobalGvaRangeFlushes", 50 },
> >> +	{ "VpLocalGvaRangeFlushes", 51 },
> >> +	{ "VpPageTableEvictions", 52 },
> >> +	{ "VpPageTableReclamations", 53 },
> >> +	{ "VpPageTableResets", 54 },
> >> +	{ "VpPageTableValidations", 55 },
> >> +	{ "VpApicTprAccesses", 56 },
> >> +	{ "VpPageTableWriteIntercepts", 57 },
> >> +	{ "VpSyntheticInterrupts", 58 },
> >> +	{ "VpVirtualInterrupts", 59 },
> >> +	{ "VpApicIpisSent", 60 },
> >> +	{ "VpApicSelfIpisSent", 61 },
> >> +	{ "VpGpaSpaceHypercalls", 62 },
> >> +	{ "VpLogicalProcessorHypercalls", 63 },
> >> +	{ "VpLongSpinWaitHypercalls", 64 },
> >> +	{ "VpOtherHypercalls", 65 },
> >> +	{ "VpSyntheticInterruptHypercalls", 66 },
> >> +	{ "VpVirtualInterruptHypercalls", 67 },
> >> +	{ "VpVirtualMmuHypercalls", 68 },
> >> +	{ "VpVirtualProcessorHypercalls", 69 },
> >> +	{ "VpHardwareInterrupts", 70 },
> >> +	{ "VpNestedPageFaultInterceptsCount", 71 },
> >> +	{ "VpNestedPageFaultInterceptsTime", 72 },
> >> +	{ "VpPageScans", 73 },
> >> +	{ "VpLogicalProcessorDispatches", 74 },
> >> +	{ "VpWaitingForCpuTime", 75 },
> >> +	{ "VpExtendedHypercalls", 76 },
> >> +	{ "VpExtendedHypercallInterceptMessages", 77 },
> >> +	{ "VpMbecNestedPageTableSwitches", 78 },
> >> +	{ "VpOtherReflectedGuestExceptions", 79 },
> >> +	{ "VpGlobalIoTlbFlushes", 80 },
> >> +	{ "VpGlobalIoTlbFlushCost", 81 },
> >> +	{ "VpLocalIoTlbFlushes", 82 },
> >> +	{ "VpLocalIoTlbFlushCost", 83 },
> >> +	{ "VpHypercallsForwardedCount", 84 },
> >> +	{ "VpHypercallsForwardingTime", 85 },
> >> +	{ "VpPageInvalidationsForwardedCount", 86 },
> >> +	{ "VpPageInvalidationsForwardingTime", 87 },
> >> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
> >> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
> >> +	{ "VpIoInstructionsForwardedCount", 90 },
> >> +	{ "VpIoInstructionsForwardingTime", 91 },
> >> +	{ "VpHltInstructionsForwardedCount", 92 },
> >> +	{ "VpHltInstructionsForwardingTime", 93 },
> >> +	{ "VpMwaitInstructionsForwardedCount", 94 },
> >> +	{ "VpMwaitInstructionsForwardingTime", 95 },
> >> +	{ "VpCpuidInstructionsForwardedCount", 96 },
> >> +	{ "VpCpuidInstructionsForwardingTime", 97 },
> >> +	{ "VpMsrAccessesForwardedCount", 98 },
> >> +	{ "VpMsrAccessesForwardingTime", 99 },
> >> +	{ "VpOtherInterceptsForwardedCount", 100 },
> >> +	{ "VpOtherInterceptsForwardingTime", 101 },
> >> +	{ "VpExternalInterruptsForwardedCount", 102 },
> >> +	{ "VpExternalInterruptsForwardingTime", 103 },
> >> +	{ "VpPendingInterruptsForwardedCount", 104 },
> >> +	{ "VpPendingInterruptsForwardingTime", 105 },
> >> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
> >> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
> >> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
> >> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
> >> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
> >> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
> >> +	{ "VpVmclearEmulationCount", 112 },
> >> +	{ "VpVmclearEmulationTime", 113 },
> >> +	{ "VpVmptrldEmulationCount", 114 },
> >> +	{ "VpVmptrldEmulationTime", 115 },
> >> +	{ "VpVmptrstEmulationCount", 116 },
> >> +	{ "VpVmptrstEmulationTime", 117 },
> >> +	{ "VpVmreadEmulationCount", 118 },
> >> +	{ "VpVmreadEmulationTime", 119 },
> >> +	{ "VpVmwriteEmulationCount", 120 },
> >> +	{ "VpVmwriteEmulationTime", 121 },
> >> +	{ "VpVmxoffEmulationCount", 122 },
> >> +	{ "VpVmxoffEmulationTime", 123 },
> >> +	{ "VpVmxonEmulationCount", 124 },
> >> +	{ "VpVmxonEmulationTime", 125 },
> >> +	{ "VpNestedVMEntriesCount", 126 },
> >> +	{ "VpNestedVMEntriesTime", 127 },
> >> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
> >> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
> >> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
> >> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
> >> +	{ "VpInvEptAllContextEmulationCount", 132 },
> >> +	{ "VpInvEptAllContextEmulationTime", 133 },
> >> +	{ "VpInvEptSingleContextEmulationCount", 134 },
> >> +	{ "VpInvEptSingleContextEmulationTime", 135 },
> >> +	{ "VpInvVpidAllContextEmulationCount", 136 },
> >> +	{ "VpInvVpidAllContextEmulationTime", 137 },
> >> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
> >> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
> >> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
> >> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
> >> +	{ "VpNestedTlbPageTableReclamations", 142 },
> >> +	{ "VpNestedTlbPageTableEvictions", 143 },
> >> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
> >> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
> >> +	{ "VpPostedInterruptNotifications", 146 },
> >> +	{ "VpPostedInterruptScans", 147 },
> >> +	{ "VpTotalCoreRunTime", 148 },
> >> +	{ "VpMaximumRunTime", 149 },
> >> +	{ "VpHwpRequestContextSwitches", 150 },
> >> +	{ "VpWaitingForCpuTimeBucket0", 151 },
> >> +	{ "VpWaitingForCpuTimeBucket1", 152 },
> >> +	{ "VpWaitingForCpuTimeBucket2", 153 },
> >> +	{ "VpWaitingForCpuTimeBucket3", 154 },
> >> +	{ "VpWaitingForCpuTimeBucket4", 155 },
> >> +	{ "VpWaitingForCpuTimeBucket5", 156 },
> >> +	{ "VpWaitingForCpuTimeBucket6", 157 },
> >> +	{ "VpVmloadEmulationCount", 158 },
> >> +	{ "VpVmloadEmulationTime", 159 },
> >> +	{ "VpVmsaveEmulationCount", 160 },
> >> +	{ "VpVmsaveEmulationTime", 161 },
> >> +	{ "VpGifInstructionEmulationCount", 162 },
> >> +	{ "VpGifInstructionEmulationTime", 163 },
> >> +	{ "VpEmulatedErrataSvmInstructions", 164 },
> >> +	{ "VpPlaceholder1", 165 },
> >> +	{ "VpPlaceholder2", 166 },
> >> +	{ "VpPlaceholder3", 167 },
> >> +	{ "VpPlaceholder4", 168 },
> >> +	{ "VpPlaceholder5", 169 },
> >> +	{ "VpPlaceholder6", 170 },
> >> +	{ "VpPlaceholder7", 171 },
> >> +	{ "VpPlaceholder8", 172 },
> >> +	{ "VpContentionTime", 173 },
> >> +	{ "VpWakeUpTime", 174 },
> >> +	{ "VpSchedulingPriority", 175 },
> >> +	{ "VpRdpmcInstructionsCount", 176 },
> >> +	{ "VpRdpmcInstructionsTime", 177 },
> >> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
> >> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
> >> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
> >> +	{ "VpPerfmonInterruptCount", 181 },
> >> +	{ "VpVtl1DispatchCount", 182 },
> >> +	{ "VpVtl2DispatchCount", 183 },
> >> +	{ "VpVtl2DispatchBucket0", 184 },
> >> +	{ "VpVtl2DispatchBucket1", 185 },
> >> +	{ "VpVtl2DispatchBucket2", 186 },
> >> +	{ "VpVtl2DispatchBucket3", 187 },
> >> +	{ "VpVtl2DispatchBucket4", 188 },
> >> +	{ "VpVtl2DispatchBucket5", 189 },
> >> +	{ "VpVtl2DispatchBucket6", 190 },
> >> +	{ "VpVtl1RunTime", 191 },
> >> +	{ "VpVtl2RunTime", 192 },
> >> +	{ "VpIommuHypercalls", 193 },
> >> +	{ "VpCpuGroupHypercalls", 194 },
> >> +	{ "VpVsmHypercalls", 195 },
> >> +	{ "VpEventLogHypercalls", 196 },
> >> +	{ "VpDeviceDomainHypercalls", 197 },
> >> +	{ "VpDepositHypercalls", 198 },
> >> +	{ "VpSvmHypercalls", 199 },
> >> +	{ "VpBusLockAcquisitionCount", 200 },
> >> +	{ "VpLoadAvg", 201 },
> >> +	{ "VpRootDispatchThreadBlocked", 202 },
> >> +	{ "VpIdleCpuTime", 203 },
> >> +	{ "VpWaitingForCpuTimeBucket7", 204 },
> >> +	{ "VpWaitingForCpuTimeBucket8", 205 },
> >> +	{ "VpWaitingForCpuTimeBucket9", 206 },
> >> +	{ "VpWaitingForCpuTimeBucket10", 207 },
> >> +	{ "VpWaitingForCpuTimeBucket11", 208 },
> >> +	{ "VpWaitingForCpuTimeBucket12", 209 },
> >> +	{ "VpHierarchicalSuspendTime", 210 },
> >> +	{ "VpExpressSchedulingAttempts", 211 },
> >> +	{ "VpExpressSchedulingCount", 212 },
> >> +	{ "VpBusLockAcquisitionTime", 213 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "VpSysRegAccessesCount", 9 },
> >> +	{ "VpSysRegAccessesTime", 10 },
> >> +	{ "VpSmcInstructionsCount", 11 },
> >> +	{ "VpSmcInstructionsTime", 12 },
> >> +	{ "VpOtherInterceptsCount", 13 },
> >> +	{ "VpOtherInterceptsTime", 14 },
> >> +	{ "VpExternalInterruptsCount", 15 },
> >> +	{ "VpExternalInterruptsTime", 16 },
> >> +	{ "VpPendingInterruptsCount", 17 },
> >> +	{ "VpPendingInterruptsTime", 18 },
> >> +	{ "VpGuestPageTableMaps", 19 },
> >> +	{ "VpLargePageTlbFills", 20 },
> >> +	{ "VpSmallPageTlbFills", 21 },
> >> +	{ "VpReflectedGuestPageFaults", 22 },
> >> +	{ "VpMemoryInterceptMessages", 23 },
> >> +	{ "VpOtherMessages", 24 },
> >> +	{ "VpLogicalProcessorMigrations", 25 },
> >> +	{ "VpAddressDomainFlushes", 26 },
> >> +	{ "VpAddressSpaceFlushes", 27 },
> >> +	{ "VpSyntheticInterrupts", 28 },
> >> +	{ "VpVirtualInterrupts", 29 },
> >> +	{ "VpApicSelfIpisSent", 30 },
> >> +	{ "VpGpaSpaceHypercalls", 31 },
> >> +	{ "VpLogicalProcessorHypercalls", 32 },
> >> +	{ "VpLongSpinWaitHypercalls", 33 },
> >> +	{ "VpOtherHypercalls", 34 },
> >> +	{ "VpSyntheticInterruptHypercalls", 35 },
> >> +	{ "VpVirtualInterruptHypercalls", 36 },
> >> +	{ "VpVirtualMmuHypercalls", 37 },
> >> +	{ "VpVirtualProcessorHypercalls", 38 },
> >> +	{ "VpHardwareInterrupts", 39 },
> >> +	{ "VpNestedPageFaultInterceptsCount", 40 },
> >> +	{ "VpNestedPageFaultInterceptsTime", 41 },
> >> +	{ "VpLogicalProcessorDispatches", 42 },
> >> +	{ "VpWaitingForCpuTime", 43 },
> >> +	{ "VpExtendedHypercalls", 44 },
> >> +	{ "VpExtendedHypercallInterceptMessages", 45 },
> >> +	{ "VpMbecNestedPageTableSwitches", 46 },
> >> +	{ "VpOtherReflectedGuestExceptions", 47 },
> >> +	{ "VpGlobalIoTlbFlushes", 48 },
> >> +	{ "VpGlobalIoTlbFlushCost", 49 },
> >> +	{ "VpLocalIoTlbFlushes", 50 },
> >> +	{ "VpLocalIoTlbFlushCost", 51 },
> >> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
> >> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
> >> +	{ "VpPostedInterruptNotifications", 54 },
> >> +	{ "VpPostedInterruptScans", 55 },
> >> +	{ "VpTotalCoreRunTime", 56 },
> >> +	{ "VpMaximumRunTime", 57 },
> >> +	{ "VpWaitingForCpuTimeBucket0", 58 },
> >> +	{ "VpWaitingForCpuTimeBucket1", 59 },
> >> +	{ "VpWaitingForCpuTimeBucket2", 60 },
> >> +	{ "VpWaitingForCpuTimeBucket3", 61 },
> >> +	{ "VpWaitingForCpuTimeBucket4", 62 },
> >> +	{ "VpWaitingForCpuTimeBucket5", 63 },
> >> +	{ "VpWaitingForCpuTimeBucket6", 64 },
> >> +	{ "VpHwpRequestContextSwitches", 65 },
> >> +	{ "VpPlaceholder2", 66 },
> >> +	{ "VpPlaceholder3", 67 },
> >> +	{ "VpPlaceholder4", 68 },
> >> +	{ "VpPlaceholder5", 69 },
> >> +	{ "VpPlaceholder6", 70 },
> >> +	{ "VpPlaceholder7", 71 },
> >> +	{ "VpPlaceholder8", 72 },
> >> +	{ "VpContentionTime", 73 },
> >> +	{ "VpWakeUpTime", 74 },
> >> +	{ "VpSchedulingPriority", 75 },
> >> +	{ "VpVtl1DispatchCount", 76 },
> >> +	{ "VpVtl2DispatchCount", 77 },
> >> +	{ "VpVtl2DispatchBucket0", 78 },
> >> +	{ "VpVtl2DispatchBucket1", 79 },
> >> +	{ "VpVtl2DispatchBucket2", 80 },
> >> +	{ "VpVtl2DispatchBucket3", 81 },
> >> +	{ "VpVtl2DispatchBucket4", 82 },
> >> +	{ "VpVtl2DispatchBucket5", 83 },
> >> +	{ "VpVtl2DispatchBucket6", 84 },
> >> +	{ "VpVtl1RunTime", 85 },
> >> +	{ "VpVtl2RunTime", 86 },
> >> +	{ "VpIommuHypercalls", 87 },
> >> +	{ "VpCpuGroupHypercalls", 88 },
> >> +	{ "VpVsmHypercalls", 89 },
> >> +	{ "VpEventLogHypercalls", 90 },
> >> +	{ "VpDeviceDomainHypercalls", 91 },
> >> +	{ "VpDepositHypercalls", 92 },
> >> +	{ "VpSvmHypercalls", 93 },
> >> +	{ "VpLoadAvg", 94 },
> >> +	{ "VpRootDispatchThreadBlocked", 95 },
> >> +	{ "VpIdleCpuTime", 96 },
> >> +	{ "VpWaitingForCpuTimeBucket7", 97 },
> >> +	{ "VpWaitingForCpuTimeBucket8", 98 },
> >> +	{ "VpWaitingForCpuTimeBucket9", 99 },
> >> +	{ "VpWaitingForCpuTimeBucket10", 100 },
> >> +	{ "VpWaitingForCpuTimeBucket11", 101 },
> >> +	{ "VpWaitingForCpuTimeBucket12", 102 },
> >> +	{ "VpHierarchicalSuspendTime", 103 },
> >> +	{ "VpExpressSchedulingAttempts", 104 },
> >> +	{ "VpExpressSchedulingCount", 105 },
> >> +#endif
> >> +};
> >> +
> >> --
> >> 2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-21 21:46 ` [PATCH v4 6/7] mshv: Add data for printing stats page counters Nuno Das Neves
  2026-01-22  1:18   ` Stanislav Kinsburskii
@ 2026-01-23 17:09   ` Michael Kelley
  2026-01-23 19:04     ` Nuno Das Neves
  1 sibling, 1 reply; 22+ messages in thread
From: Michael Kelley @ 2026-01-23 17:09 UTC (permalink / raw)
  To: Nuno Das Neves, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, skinsburskii@linux.microsoft.com
  Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com

From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
> 
> Introduce hv_counters.c, containing static data corresponding to
> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> members as an array instead makes more sense, since it will be
> iterated over to print counter information to debugfs.

I would have expected the filename to be mshv_counters.c, so that the association
with the MS hypervisor is clear. And the file is inextricably linked to mshv_debugfs.c,
which of course has the "mshv_" prefix. Or is there some thinking I'm not aware of
for using the "hv_" prefix?

Also, I see in Patch 7 of this series that hv_counters.c is #included as a .c file
in mshv_debugfs.c. Is there a reason for doing the #include instead of adding
hv_counters.c to the Makefile and building it on its own? You would need to
add a handful of extern statements to mshv_root.h so that the tables are
referenceable from mshv_debugfs.c. But that would seem to be the more
normal way of doing things.  #including a .c file is unusual.

See one more comment on the last line of this patch ...

> 
> Include hypervisor, logical processor, partition, and virtual
> processor counters.
> 
> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> ---
>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 488 insertions(+)
>  create mode 100644 drivers/hv/hv_counters.c
> 
> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
> new file mode 100644
> index 000000000000..a8e07e72cc29
> --- /dev/null
> +++ b/drivers/hv/hv_counters.c
> @@ -0,0 +1,488 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2026, Microsoft Corporation.
> + *
> + * Data for printing stats page counters via debugfs.
> + *
> + * Authors: Microsoft Linux virtualization team
> + */
> +
> +struct hv_counter_entry {
> +	char *name;
> +	int idx;
> +};
> +
> +/* HV_HYPERVISOR_COUNTER */
> +static struct hv_counter_entry hv_hypervisor_counters[] = {
> +	{ "HvLogicalProcessors", 1 },
> +	{ "HvPartitions", 2 },
> +	{ "HvTotalPages", 3 },
> +	{ "HvVirtualProcessors", 4 },
> +	{ "HvMonitoredNotifications", 5 },
> +	{ "HvModernStandbyEntries", 6 },
> +	{ "HvPlatformIdleTransitions", 7 },
> +	{ "HvHypervisorStartupCost", 8 },
> +
> +	{ "HvIOSpacePages", 10 },
> +	{ "HvNonEssentialPagesForDump", 11 },
> +	{ "HvSubsumedPages", 12 },
> +};
> +
> +/* HV_CPU_COUNTER */
> +static struct hv_counter_entry hv_lp_counters[] = {
> +	{ "LpGlobalTime", 1 },
> +	{ "LpTotalRunTime", 2 },
> +	{ "LpHypervisorRunTime", 3 },
> +	{ "LpHardwareInterrupts", 4 },
> +	{ "LpContextSwitches", 5 },
> +	{ "LpInterProcessorInterrupts", 6 },
> +	{ "LpSchedulerInterrupts", 7 },
> +	{ "LpTimerInterrupts", 8 },
> +	{ "LpInterProcessorInterruptsSent", 9 },
> +	{ "LpProcessorHalts", 10 },
> +	{ "LpMonitorTransitionCost", 11 },
> +	{ "LpContextSwitchTime", 12 },
> +	{ "LpC1TransitionsCount", 13 },
> +	{ "LpC1RunTime", 14 },
> +	{ "LpC2TransitionsCount", 15 },
> +	{ "LpC2RunTime", 16 },
> +	{ "LpC3TransitionsCount", 17 },
> +	{ "LpC3RunTime", 18 },
> +	{ "LpRootVpIndex", 19 },
> +	{ "LpIdleSequenceNumber", 20 },
> +	{ "LpGlobalTscCount", 21 },
> +	{ "LpActiveTscCount", 22 },
> +	{ "LpIdleAccumulation", 23 },
> +	{ "LpReferenceCycleCount0", 24 },
> +	{ "LpActualCycleCount0", 25 },
> +	{ "LpReferenceCycleCount1", 26 },
> +	{ "LpActualCycleCount1", 27 },
> +	{ "LpProximityDomainId", 28 },
> +	{ "LpPostedInterruptNotifications", 29 },
> +	{ "LpBranchPredictorFlushes", 30 },
> +#if IS_ENABLED(CONFIG_X86_64)
> +	{ "LpL1DataCacheFlushes", 31 },
> +	{ "LpImmediateL1DataCacheFlushes", 32 },
> +	{ "LpMbFlushes", 33 },
> +	{ "LpCounterRefreshSequenceNumber", 34 },
> +	{ "LpCounterRefreshReferenceTime", 35 },
> +	{ "LpIdleAccumulationSnapshot", 36 },
> +	{ "LpActiveTscCountSnapshot", 37 },
> +	{ "LpHwpRequestContextSwitches", 38 },
> +	{ "LpPlaceholder1", 39 },
> +	{ "LpPlaceholder2", 40 },
> +	{ "LpPlaceholder3", 41 },
> +	{ "LpPlaceholder4", 42 },
> +	{ "LpPlaceholder5", 43 },
> +	{ "LpPlaceholder6", 44 },
> +	{ "LpPlaceholder7", 45 },
> +	{ "LpPlaceholder8", 46 },
> +	{ "LpPlaceholder9", 47 },
> +	{ "LpSchLocalRunListSize", 48 },
> +	{ "LpReserveGroupId", 49 },
> +	{ "LpRunningPriority", 50 },
> +	{ "LpPerfmonInterruptCount", 51 },
> +#elif IS_ENABLED(CONFIG_ARM64)
> +	{ "LpCounterRefreshSequenceNumber", 31 },
> +	{ "LpCounterRefreshReferenceTime", 32 },
> +	{ "LpIdleAccumulationSnapshot", 33 },
> +	{ "LpActiveTscCountSnapshot", 34 },
> +	{ "LpHwpRequestContextSwitches", 35 },
> +	{ "LpPlaceholder2", 36 },
> +	{ "LpPlaceholder3", 37 },
> +	{ "LpPlaceholder4", 38 },
> +	{ "LpPlaceholder5", 39 },
> +	{ "LpPlaceholder6", 40 },
> +	{ "LpPlaceholder7", 41 },
> +	{ "LpPlaceholder8", 42 },
> +	{ "LpPlaceholder9", 43 },
> +	{ "LpSchLocalRunListSize", 44 },
> +	{ "LpReserveGroupId", 45 },
> +	{ "LpRunningPriority", 46 },
> +#endif
> +};
> +
> +/* HV_PROCESS_COUNTER */
> +static struct hv_counter_entry hv_partition_counters[] = {
> +	{ "PtVirtualProcessors", 1 },
> +
> +	{ "PtTlbSize", 3 },
> +	{ "PtAddressSpaces", 4 },
> +	{ "PtDepositedPages", 5 },
> +	{ "PtGpaPages", 6 },
> +	{ "PtGpaSpaceModifications", 7 },
> +	{ "PtVirtualTlbFlushEntires", 8 },
> +	{ "PtRecommendedTlbSize", 9 },
> +	{ "PtGpaPages4K", 10 },
> +	{ "PtGpaPages2M", 11 },
> +	{ "PtGpaPages1G", 12 },
> +	{ "PtGpaPages512G", 13 },
> +	{ "PtDevicePages4K", 14 },
> +	{ "PtDevicePages2M", 15 },
> +	{ "PtDevicePages1G", 16 },
> +	{ "PtDevicePages512G", 17 },
> +	{ "PtAttachedDevices", 18 },
> +	{ "PtDeviceInterruptMappings", 19 },
> +	{ "PtIoTlbFlushes", 20 },
> +	{ "PtIoTlbFlushCost", 21 },
> +	{ "PtDeviceInterruptErrors", 22 },
> +	{ "PtDeviceDmaErrors", 23 },
> +	{ "PtDeviceInterruptThrottleEvents", 24 },
> +	{ "PtSkippedTimerTicks", 25 },
> +	{ "PtPartitionId", 26 },
> +#if IS_ENABLED(CONFIG_X86_64)
> +	{ "PtNestedTlbSize", 27 },
> +	{ "PtRecommendedNestedTlbSize", 28 },
> +	{ "PtNestedTlbFreeListSize", 29 },
> +	{ "PtNestedTlbTrimmedPages", 30 },
> +	{ "PtPagesShattered", 31 },
> +	{ "PtPagesRecombined", 32 },
> +	{ "PtHwpRequestValue", 33 },
> +	{ "PtAutoSuspendEnableTime", 34 },
> +	{ "PtAutoSuspendTriggerTime", 35 },
> +	{ "PtAutoSuspendDisableTime", 36 },
> +	{ "PtPlaceholder1", 37 },
> +	{ "PtPlaceholder2", 38 },
> +	{ "PtPlaceholder3", 39 },
> +	{ "PtPlaceholder4", 40 },
> +	{ "PtPlaceholder5", 41 },
> +	{ "PtPlaceholder6", 42 },
> +	{ "PtPlaceholder7", 43 },
> +	{ "PtPlaceholder8", 44 },
> +	{ "PtHypervisorStateTransferGeneration", 45 },
> +	{ "PtNumberofActiveChildPartitions", 46 },
> +#elif IS_ENABLED(CONFIG_ARM64)
> +	{ "PtHwpRequestValue", 27 },
> +	{ "PtAutoSuspendEnableTime", 28 },
> +	{ "PtAutoSuspendTriggerTime", 29 },
> +	{ "PtAutoSuspendDisableTime", 30 },
> +	{ "PtPlaceholder1", 31 },
> +	{ "PtPlaceholder2", 32 },
> +	{ "PtPlaceholder3", 33 },
> +	{ "PtPlaceholder4", 34 },
> +	{ "PtPlaceholder5", 35 },
> +	{ "PtPlaceholder6", 36 },
> +	{ "PtPlaceholder7", 37 },
> +	{ "PtPlaceholder8", 38 },
> +	{ "PtHypervisorStateTransferGeneration", 39 },
> +	{ "PtNumberofActiveChildPartitions", 40 },
> +#endif
> +};
> +
> +/* HV_THREAD_COUNTER */
> +static struct hv_counter_entry hv_vp_counters[] = {
> +	{ "VpTotalRunTime", 1 },
> +	{ "VpHypervisorRunTime", 2 },
> +	{ "VpRemoteNodeRunTime", 3 },
> +	{ "VpNormalizedRunTime", 4 },
> +	{ "VpIdealCpu", 5 },
> +
> +	{ "VpHypercallsCount", 7 },
> +	{ "VpHypercallsTime", 8 },
> +#if IS_ENABLED(CONFIG_X86_64)
> +	{ "VpPageInvalidationsCount", 9 },
> +	{ "VpPageInvalidationsTime", 10 },
> +	{ "VpControlRegisterAccessesCount", 11 },
> +	{ "VpControlRegisterAccessesTime", 12 },
> +	{ "VpIoInstructionsCount", 13 },
> +	{ "VpIoInstructionsTime", 14 },
> +	{ "VpHltInstructionsCount", 15 },
> +	{ "VpHltInstructionsTime", 16 },
> +	{ "VpMwaitInstructionsCount", 17 },
> +	{ "VpMwaitInstructionsTime", 18 },
> +	{ "VpCpuidInstructionsCount", 19 },
> +	{ "VpCpuidInstructionsTime", 20 },
> +	{ "VpMsrAccessesCount", 21 },
> +	{ "VpMsrAccessesTime", 22 },
> +	{ "VpOtherInterceptsCount", 23 },
> +	{ "VpOtherInterceptsTime", 24 },
> +	{ "VpExternalInterruptsCount", 25 },
> +	{ "VpExternalInterruptsTime", 26 },
> +	{ "VpPendingInterruptsCount", 27 },
> +	{ "VpPendingInterruptsTime", 28 },
> +	{ "VpEmulatedInstructionsCount", 29 },
> +	{ "VpEmulatedInstructionsTime", 30 },
> +	{ "VpDebugRegisterAccessesCount", 31 },
> +	{ "VpDebugRegisterAccessesTime", 32 },
> +	{ "VpPageFaultInterceptsCount", 33 },
> +	{ "VpPageFaultInterceptsTime", 34 },
> +	{ "VpGuestPageTableMaps", 35 },
> +	{ "VpLargePageTlbFills", 36 },
> +	{ "VpSmallPageTlbFills", 37 },
> +	{ "VpReflectedGuestPageFaults", 38 },
> +	{ "VpApicMmioAccesses", 39 },
> +	{ "VpIoInterceptMessages", 40 },
> +	{ "VpMemoryInterceptMessages", 41 },
> +	{ "VpApicEoiAccesses", 42 },
> +	{ "VpOtherMessages", 43 },
> +	{ "VpPageTableAllocations", 44 },
> +	{ "VpLogicalProcessorMigrations", 45 },
> +	{ "VpAddressSpaceEvictions", 46 },
> +	{ "VpAddressSpaceSwitches", 47 },
> +	{ "VpAddressDomainFlushes", 48 },
> +	{ "VpAddressSpaceFlushes", 49 },
> +	{ "VpGlobalGvaRangeFlushes", 50 },
> +	{ "VpLocalGvaRangeFlushes", 51 },
> +	{ "VpPageTableEvictions", 52 },
> +	{ "VpPageTableReclamations", 53 },
> +	{ "VpPageTableResets", 54 },
> +	{ "VpPageTableValidations", 55 },
> +	{ "VpApicTprAccesses", 56 },
> +	{ "VpPageTableWriteIntercepts", 57 },
> +	{ "VpSyntheticInterrupts", 58 },
> +	{ "VpVirtualInterrupts", 59 },
> +	{ "VpApicIpisSent", 60 },
> +	{ "VpApicSelfIpisSent", 61 },
> +	{ "VpGpaSpaceHypercalls", 62 },
> +	{ "VpLogicalProcessorHypercalls", 63 },
> +	{ "VpLongSpinWaitHypercalls", 64 },
> +	{ "VpOtherHypercalls", 65 },
> +	{ "VpSyntheticInterruptHypercalls", 66 },
> +	{ "VpVirtualInterruptHypercalls", 67 },
> +	{ "VpVirtualMmuHypercalls", 68 },
> +	{ "VpVirtualProcessorHypercalls", 69 },
> +	{ "VpHardwareInterrupts", 70 },
> +	{ "VpNestedPageFaultInterceptsCount", 71 },
> +	{ "VpNestedPageFaultInterceptsTime", 72 },
> +	{ "VpPageScans", 73 },
> +	{ "VpLogicalProcessorDispatches", 74 },
> +	{ "VpWaitingForCpuTime", 75 },
> +	{ "VpExtendedHypercalls", 76 },
> +	{ "VpExtendedHypercallInterceptMessages", 77 },
> +	{ "VpMbecNestedPageTableSwitches", 78 },
> +	{ "VpOtherReflectedGuestExceptions", 79 },
> +	{ "VpGlobalIoTlbFlushes", 80 },
> +	{ "VpGlobalIoTlbFlushCost", 81 },
> +	{ "VpLocalIoTlbFlushes", 82 },
> +	{ "VpLocalIoTlbFlushCost", 83 },
> +	{ "VpHypercallsForwardedCount", 84 },
> +	{ "VpHypercallsForwardingTime", 85 },
> +	{ "VpPageInvalidationsForwardedCount", 86 },
> +	{ "VpPageInvalidationsForwardingTime", 87 },
> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
> +	{ "VpIoInstructionsForwardedCount", 90 },
> +	{ "VpIoInstructionsForwardingTime", 91 },
> +	{ "VpHltInstructionsForwardedCount", 92 },
> +	{ "VpHltInstructionsForwardingTime", 93 },
> +	{ "VpMwaitInstructionsForwardedCount", 94 },
> +	{ "VpMwaitInstructionsForwardingTime", 95 },
> +	{ "VpCpuidInstructionsForwardedCount", 96 },
> +	{ "VpCpuidInstructionsForwardingTime", 97 },
> +	{ "VpMsrAccessesForwardedCount", 98 },
> +	{ "VpMsrAccessesForwardingTime", 99 },
> +	{ "VpOtherInterceptsForwardedCount", 100 },
> +	{ "VpOtherInterceptsForwardingTime", 101 },
> +	{ "VpExternalInterruptsForwardedCount", 102 },
> +	{ "VpExternalInterruptsForwardingTime", 103 },
> +	{ "VpPendingInterruptsForwardedCount", 104 },
> +	{ "VpPendingInterruptsForwardingTime", 105 },
> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
> +	{ "VpVmclearEmulationCount", 112 },
> +	{ "VpVmclearEmulationTime", 113 },
> +	{ "VpVmptrldEmulationCount", 114 },
> +	{ "VpVmptrldEmulationTime", 115 },
> +	{ "VpVmptrstEmulationCount", 116 },
> +	{ "VpVmptrstEmulationTime", 117 },
> +	{ "VpVmreadEmulationCount", 118 },
> +	{ "VpVmreadEmulationTime", 119 },
> +	{ "VpVmwriteEmulationCount", 120 },
> +	{ "VpVmwriteEmulationTime", 121 },
> +	{ "VpVmxoffEmulationCount", 122 },
> +	{ "VpVmxoffEmulationTime", 123 },
> +	{ "VpVmxonEmulationCount", 124 },
> +	{ "VpVmxonEmulationTime", 125 },
> +	{ "VpNestedVMEntriesCount", 126 },
> +	{ "VpNestedVMEntriesTime", 127 },
> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
> +	{ "VpInvEptAllContextEmulationCount", 132 },
> +	{ "VpInvEptAllContextEmulationTime", 133 },
> +	{ "VpInvEptSingleContextEmulationCount", 134 },
> +	{ "VpInvEptSingleContextEmulationTime", 135 },
> +	{ "VpInvVpidAllContextEmulationCount", 136 },
> +	{ "VpInvVpidAllContextEmulationTime", 137 },
> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
> +	{ "VpNestedTlbPageTableReclamations", 142 },
> +	{ "VpNestedTlbPageTableEvictions", 143 },
> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
> +	{ "VpPostedInterruptNotifications", 146 },
> +	{ "VpPostedInterruptScans", 147 },
> +	{ "VpTotalCoreRunTime", 148 },
> +	{ "VpMaximumRunTime", 149 },
> +	{ "VpHwpRequestContextSwitches", 150 },
> +	{ "VpWaitingForCpuTimeBucket0", 151 },
> +	{ "VpWaitingForCpuTimeBucket1", 152 },
> +	{ "VpWaitingForCpuTimeBucket2", 153 },
> +	{ "VpWaitingForCpuTimeBucket3", 154 },
> +	{ "VpWaitingForCpuTimeBucket4", 155 },
> +	{ "VpWaitingForCpuTimeBucket5", 156 },
> +	{ "VpWaitingForCpuTimeBucket6", 157 },
> +	{ "VpVmloadEmulationCount", 158 },
> +	{ "VpVmloadEmulationTime", 159 },
> +	{ "VpVmsaveEmulationCount", 160 },
> +	{ "VpVmsaveEmulationTime", 161 },
> +	{ "VpGifInstructionEmulationCount", 162 },
> +	{ "VpGifInstructionEmulationTime", 163 },
> +	{ "VpEmulatedErrataSvmInstructions", 164 },
> +	{ "VpPlaceholder1", 165 },
> +	{ "VpPlaceholder2", 166 },
> +	{ "VpPlaceholder3", 167 },
> +	{ "VpPlaceholder4", 168 },
> +	{ "VpPlaceholder5", 169 },
> +	{ "VpPlaceholder6", 170 },
> +	{ "VpPlaceholder7", 171 },
> +	{ "VpPlaceholder8", 172 },
> +	{ "VpContentionTime", 173 },
> +	{ "VpWakeUpTime", 174 },
> +	{ "VpSchedulingPriority", 175 },
> +	{ "VpRdpmcInstructionsCount", 176 },
> +	{ "VpRdpmcInstructionsTime", 177 },
> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
> +	{ "VpPerfmonInterruptCount", 181 },
> +	{ "VpVtl1DispatchCount", 182 },
> +	{ "VpVtl2DispatchCount", 183 },
> +	{ "VpVtl2DispatchBucket0", 184 },
> +	{ "VpVtl2DispatchBucket1", 185 },
> +	{ "VpVtl2DispatchBucket2", 186 },
> +	{ "VpVtl2DispatchBucket3", 187 },
> +	{ "VpVtl2DispatchBucket4", 188 },
> +	{ "VpVtl2DispatchBucket5", 189 },
> +	{ "VpVtl2DispatchBucket6", 190 },
> +	{ "VpVtl1RunTime", 191 },
> +	{ "VpVtl2RunTime", 192 },
> +	{ "VpIommuHypercalls", 193 },
> +	{ "VpCpuGroupHypercalls", 194 },
> +	{ "VpVsmHypercalls", 195 },
> +	{ "VpEventLogHypercalls", 196 },
> +	{ "VpDeviceDomainHypercalls", 197 },
> +	{ "VpDepositHypercalls", 198 },
> +	{ "VpSvmHypercalls", 199 },
> +	{ "VpBusLockAcquisitionCount", 200 },
> +	{ "VpLoadAvg", 201 },
> +	{ "VpRootDispatchThreadBlocked", 202 },
> +	{ "VpIdleCpuTime", 203 },
> +	{ "VpWaitingForCpuTimeBucket7", 204 },
> +	{ "VpWaitingForCpuTimeBucket8", 205 },
> +	{ "VpWaitingForCpuTimeBucket9", 206 },
> +	{ "VpWaitingForCpuTimeBucket10", 207 },
> +	{ "VpWaitingForCpuTimeBucket11", 208 },
> +	{ "VpWaitingForCpuTimeBucket12", 209 },
> +	{ "VpHierarchicalSuspendTime", 210 },
> +	{ "VpExpressSchedulingAttempts", 211 },
> +	{ "VpExpressSchedulingCount", 212 },
> +	{ "VpBusLockAcquisitionTime", 213 },
> +#elif IS_ENABLED(CONFIG_ARM64)
> +	{ "VpSysRegAccessesCount", 9 },
> +	{ "VpSysRegAccessesTime", 10 },
> +	{ "VpSmcInstructionsCount", 11 },
> +	{ "VpSmcInstructionsTime", 12 },
> +	{ "VpOtherInterceptsCount", 13 },
> +	{ "VpOtherInterceptsTime", 14 },
> +	{ "VpExternalInterruptsCount", 15 },
> +	{ "VpExternalInterruptsTime", 16 },
> +	{ "VpPendingInterruptsCount", 17 },
> +	{ "VpPendingInterruptsTime", 18 },
> +	{ "VpGuestPageTableMaps", 19 },
> +	{ "VpLargePageTlbFills", 20 },
> +	{ "VpSmallPageTlbFills", 21 },
> +	{ "VpReflectedGuestPageFaults", 22 },
> +	{ "VpMemoryInterceptMessages", 23 },
> +	{ "VpOtherMessages", 24 },
> +	{ "VpLogicalProcessorMigrations", 25 },
> +	{ "VpAddressDomainFlushes", 26 },
> +	{ "VpAddressSpaceFlushes", 27 },
> +	{ "VpSyntheticInterrupts", 28 },
> +	{ "VpVirtualInterrupts", 29 },
> +	{ "VpApicSelfIpisSent", 30 },
> +	{ "VpGpaSpaceHypercalls", 31 },
> +	{ "VpLogicalProcessorHypercalls", 32 },
> +	{ "VpLongSpinWaitHypercalls", 33 },
> +	{ "VpOtherHypercalls", 34 },
> +	{ "VpSyntheticInterruptHypercalls", 35 },
> +	{ "VpVirtualInterruptHypercalls", 36 },
> +	{ "VpVirtualMmuHypercalls", 37 },
> +	{ "VpVirtualProcessorHypercalls", 38 },
> +	{ "VpHardwareInterrupts", 39 },
> +	{ "VpNestedPageFaultInterceptsCount", 40 },
> +	{ "VpNestedPageFaultInterceptsTime", 41 },
> +	{ "VpLogicalProcessorDispatches", 42 },
> +	{ "VpWaitingForCpuTime", 43 },
> +	{ "VpExtendedHypercalls", 44 },
> +	{ "VpExtendedHypercallInterceptMessages", 45 },
> +	{ "VpMbecNestedPageTableSwitches", 46 },
> +	{ "VpOtherReflectedGuestExceptions", 47 },
> +	{ "VpGlobalIoTlbFlushes", 48 },
> +	{ "VpGlobalIoTlbFlushCost", 49 },
> +	{ "VpLocalIoTlbFlushes", 50 },
> +	{ "VpLocalIoTlbFlushCost", 51 },
> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
> +	{ "VpPostedInterruptNotifications", 54 },
> +	{ "VpPostedInterruptScans", 55 },
> +	{ "VpTotalCoreRunTime", 56 },
> +	{ "VpMaximumRunTime", 57 },
> +	{ "VpWaitingForCpuTimeBucket0", 58 },
> +	{ "VpWaitingForCpuTimeBucket1", 59 },
> +	{ "VpWaitingForCpuTimeBucket2", 60 },
> +	{ "VpWaitingForCpuTimeBucket3", 61 },
> +	{ "VpWaitingForCpuTimeBucket4", 62 },
> +	{ "VpWaitingForCpuTimeBucket5", 63 },
> +	{ "VpWaitingForCpuTimeBucket6", 64 },
> +	{ "VpHwpRequestContextSwitches", 65 },
> +	{ "VpPlaceholder2", 66 },
> +	{ "VpPlaceholder3", 67 },
> +	{ "VpPlaceholder4", 68 },
> +	{ "VpPlaceholder5", 69 },
> +	{ "VpPlaceholder6", 70 },
> +	{ "VpPlaceholder7", 71 },
> +	{ "VpPlaceholder8", 72 },
> +	{ "VpContentionTime", 73 },
> +	{ "VpWakeUpTime", 74 },
> +	{ "VpSchedulingPriority", 75 },
> +	{ "VpVtl1DispatchCount", 76 },
> +	{ "VpVtl2DispatchCount", 77 },
> +	{ "VpVtl2DispatchBucket0", 78 },
> +	{ "VpVtl2DispatchBucket1", 79 },
> +	{ "VpVtl2DispatchBucket2", 80 },
> +	{ "VpVtl2DispatchBucket3", 81 },
> +	{ "VpVtl2DispatchBucket4", 82 },
> +	{ "VpVtl2DispatchBucket5", 83 },
> +	{ "VpVtl2DispatchBucket6", 84 },
> +	{ "VpVtl1RunTime", 85 },
> +	{ "VpVtl2RunTime", 86 },
> +	{ "VpIommuHypercalls", 87 },
> +	{ "VpCpuGroupHypercalls", 88 },
> +	{ "VpVsmHypercalls", 89 },
> +	{ "VpEventLogHypercalls", 90 },
> +	{ "VpDeviceDomainHypercalls", 91 },
> +	{ "VpDepositHypercalls", 92 },
> +	{ "VpSvmHypercalls", 93 },
> +	{ "VpLoadAvg", 94 },
> +	{ "VpRootDispatchThreadBlocked", 95 },
> +	{ "VpIdleCpuTime", 96 },
> +	{ "VpWaitingForCpuTimeBucket7", 97 },
> +	{ "VpWaitingForCpuTimeBucket8", 98 },
> +	{ "VpWaitingForCpuTimeBucket9", 99 },
> +	{ "VpWaitingForCpuTimeBucket10", 100 },
> +	{ "VpWaitingForCpuTimeBucket11", 101 },
> +	{ "VpWaitingForCpuTimeBucket12", 102 },
> +	{ "VpHierarchicalSuspendTime", 103 },
> +	{ "VpExpressSchedulingAttempts", 104 },
> +	{ "VpExpressSchedulingCount", 105 },
> +#endif
> +};
> +

The patch puts a blank line at the end of the new hv_counters.c file. When using
"git am" to apply this patch, I get this warning:

.git/rebase-apply/patch:499: new blank line at EOF.
+
warning: 1 line adds whitespace errors.

Line 499 is that blank line at the end of the new file. If I modify the patch so it
doesn't add the blank line, "git am" applies the patch with no warning. This
should probably be fixed.
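For what it's worth, the same warning can be reproduced locally with git's own
whitespace checking before sending, e.g. (file names here are made up for
illustration):

```shell
# Reproduce git's "new blank line at EOF" whitespace warning with
# `git diff --check`; old.txt/new.txt are illustrative stand-ins.
printf 'line\n'   > old.txt
printf 'line\n\n' > new.txt          # trailing blank line at EOF
git diff --no-index --check old.txt new.txt || true
```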

Michael


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics
  2026-01-21 21:46 ` [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics Nuno Das Neves
@ 2026-01-23 17:09   ` Michael Kelley
  2026-01-23 21:11     ` Nuno Das Neves
  0 siblings, 1 reply; 22+ messages in thread
From: Michael Kelley @ 2026-01-23 17:09 UTC (permalink / raw)
  To: Nuno Das Neves, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, skinsburskii@linux.microsoft.com
  Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com, Jinank Jain

From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
> 
> Introduce a debugfs interface to expose root and child partition stats
> when running with mshv_root.
> 
> Create a debugfs directory "mshv" containing 'stats' files organized by
> type and id. A stats file contains a number of counters depending on
> its type. e.g. an excerpt from a VP stats file:
> 
> TotalRunTime                  : 1997602722
> HypervisorRunTime             : 649671371
> RemoteNodeRunTime             : 0
> NormalizedRunTime             : 1997602721
> IdealCpu                      : 0
> HypercallsCount               : 1708169
> HypercallsTime                : 111914774
> PageInvalidationsCount        : 0
> PageInvalidationsTime         : 0
> 
> On a root partition with some active child partitions, the entire
> directory structure may look like:
> 
> mshv/
>   stats             # hypervisor stats
>   lp/               # logical processors
>     0/              # LP id
>       stats         # LP 0 stats
>     1/
>     2/
>     3/
>   partition/        # partition stats
>     1/              # root partition id
>       stats         # root partition stats
>       vp/           # root virtual processors
>         0/          # root VP id
>           stats     # root VP 0 stats
>         1/
>         2/
>         3/
>     42/             # child partition id
>       stats         # child partition stats
>       vp/           # child VPs
>         0/          # child VP id
>           stats     # child VP 0 stats
>         1/
>     43/
>     55/
> 
> On L1VH, some stats are not present as it does not own the hardware
> like the root partition does:
> - The hypervisor and lp stats are not present
> - L1VH's partition directory is named "self" because it can't get its
>   own id
> - Some of L1VH's partition and VP stats fields are not populated, because
>   it can't map its own HV_STATS_AREA_PARENT page.
> 
> Co-developed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> Co-developed-by: Praveen K Paladugu <prapal@linux.microsoft.com>
> Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
> Co-developed-by: Mukesh Rathor <mrathor@linux.microsoft.com>
> Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com>
> Co-developed-by: Purna Pavan Chandra Aekkaladevi
> <paekkaladevi@linux.microsoft.com>
> Signed-off-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>
> Co-developed-by: Jinank Jain <jinankjain@microsoft.com>
> Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> Reviewed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> ---
>  drivers/hv/Makefile         |   1 +
>  drivers/hv/hv_counters.c    |   1 +
>  drivers/hv/hv_synic.c       | 177 +++++++++

This new file hv_synic.c seems to be spurious.  It looks like you unintentionally
picked up this new file from the build tree where you were creating the patches
for this series.

>  drivers/hv/mshv_debugfs.c   | 703 ++++++++++++++++++++++++++++++++++++
>  drivers/hv/mshv_root.h      |  34 ++
>  drivers/hv/mshv_root_main.c |  26 +-
>  6 files changed, 940 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/hv/hv_synic.c
>  create mode 100644 drivers/hv/mshv_debugfs.c
> 
> diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile
> index a49f93c2d245..2593711c3628 100644
> --- a/drivers/hv/Makefile
> +++ b/drivers/hv/Makefile
> @@ -15,6 +15,7 @@ hv_vmbus-$(CONFIG_HYPERV_TESTING)	+= hv_debugfs.o
>  hv_utils-y := hv_util.o hv_kvp.o hv_snapshot.o hv_utils_transport.o
>  mshv_root-y := mshv_root_main.o mshv_synic.o mshv_eventfd.o mshv_irq.o \
>  	       mshv_root_hv_call.o mshv_portid_table.o mshv_regions.o
> +mshv_root-$(CONFIG_DEBUG_FS) += mshv_debugfs.o
>  mshv_vtl-y := mshv_vtl_main.o
> 
>  # Code that must be built-in
> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
> index a8e07e72cc29..45ff3d663e56 100644
> --- a/drivers/hv/hv_counters.c
> +++ b/drivers/hv/hv_counters.c
> @@ -3,6 +3,7 @@
>   * Copyright (c) 2026, Microsoft Corporation.
>   *
>   * Data for printing stats page counters via debugfs.
> + * Included directly in mshv_debugfs.c.
>   *
>   * Authors: Microsoft Linux virtualization team
>   */
> diff --git a/drivers/hv/hv_synic.c b/drivers/hv/hv_synic.c
> new file mode 100644
> index 000000000000..cc81d78887f2
> --- /dev/null
> +++ b/drivers/hv/hv_synic.c
> @@ -0,0 +1,177 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2025, Microsoft Corporation.
> + *
> + * Authors: Microsoft Linux virtualization team
> + */
> +
> +/*
> +	root	l1vh	vtl
> +vmbus
> +
> +guest
> +vmbus, nothing else
> +
> +vtl
> +mshv_vtl uses intercept SINT, VTL2_VMBUS_SINT_INDEX (7, not in hvgdk_mini lol)
> +vmbus
> +
> +bm root
> +mshv_root, no vmbus
> +
> +nested root
> +mshv_root uses L1
> +vmbus uses L0 (NESTED regs)
> +
> +l1vh
> +mshv_root and vmbus use same regs
> +
> +*/
> +
> +struct hv_synic_page {
> +	u64 msr;
> +	void *ptr;
> +	struct kref refcount;
> +};
> +
> +void *hv_get_synic_page(u32 msr) {
> +	struct hv_synic_page *page_obj;
> +	page_obj = kmalloc
> +}
> +
> +
> +#define HV_SYNIC_PAGE_STRUCT(type, name) \
> +struct
> +
> +/* UGH */
> +struct hv_percpu_synic_cxt {
> +	struct {
> +		struct hv_message_page *ptr;
> +		refcount_t pt_ref_count;
> +	} hv_simp;
> +	struct hv_message_page *hv_simp;
> +	struct hv_synic_event_flags_page *hv_siefp;
> +	struct hv_synic_event_ring_page *hv_sierp;
> +};
> +
> +int hv_setup_sint(u32 sint_msr)
> +{
> +	union hv_synic_sint sint;
> +
> +	// TODO validate sint_msr
> +
> +	sint.as_uint64 = hv_get_msr(sint_msr);
> +	sint.vector = vmbus_interrupt;
> +	sint.masked = false;
> +	sint.auto_eoi = hv_recommend_using_aeoi();
> +
> +	hv_set_msr(sint_msr, sint.as_uint64);
> +
> +	return 0;
> +}
> +
> +void *hv_setup_synic_page(u32 msr)
> +{
> +	void *addr;
> +	struct hv_synic_page synic_page;
> +
> +	// TODO validate msr
> +
> +	synic_page.as_uint64 = hv_get_msr(msr);
> +	synic_page.enabled = 1;
> +
> +	if (ms_hyperv.paravisor_present || hv_root_partition()) {
> +		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
> +		u64 base = (synic_page.gpa << HV_HYP_PAGE_SHIFT) &
> +			    ~ms_hyperv.shared_gpa_boundary;
> +		addr = (void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
> +		if (!addr) {
> +			pr_err("%s: Fail to map synic page from %#x.\n",
> +			       __func__, msr);
> +			return NULL;
> +		}
> +	} else {
> +		addr = (void *)__get_free_page(GFP_KERNEL);
> +		if (!page)
> +			return NULL;
> +
> +		memset(page, 0, PAGE_SIZE);
> +		synic_page.gpa = virt_to_phys(addr) >> HV_HYP_PAGE_SHIFT;
> +	}
> +	hv_set_msr(msr, synic_page.as_uint64);
> +
> +	return addr;
> +}
> +
> +/*
> + * hv_hyp_synic_enable_regs - Initialize the Synthetic Interrupt Controller
> + * with the hypervisor.
> + */
> +void hv_hyp_synic_enable_regs(unsigned int cpu)
> +{
> +	struct hv_per_cpu_context *hv_cpu =
> +		per_cpu_ptr(hv_context.cpu_context, cpu);
> +	union hv_synic_simp simp;
> +	union hv_synic_siefp siefp;
> +	union hv_synic_sint shared_sint;
> +
> +	/* Setup the Synic's message page with the hypervisor. */
> +	simp.as_uint64 = hv_get_msr(HV_MSR_SIMP);
> +	simp.simp_enabled = 1;
> +
> +	if (ms_hyperv.paravisor_present || hv_root_partition()) {
> +		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
> +		u64 base = (simp.base_simp_gpa << HV_HYP_PAGE_SHIFT) &
> +				~ms_hyperv.shared_gpa_boundary;
> +		hv_cpu->hyp_synic_message_page =
> +			(void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
> +		if (!hv_cpu->hyp_synic_message_page)
> +			pr_err("Fail to map synic message page.\n");
> +	} else {
> +		simp.base_simp_gpa = virt_to_phys(hv_cpu->hyp_synic_message_page)
> +			>> HV_HYP_PAGE_SHIFT;
> +	}
> +
> +	hv_set_msr(HV_MSR_SIMP, simp.as_uint64);
> +
> +	/* Setup the Synic's event page with the hypervisor. */
> +	siefp.as_uint64 = hv_get_msr(HV_MSR_SIEFP);
> +	siefp.siefp_enabled = 1;
> +
> +	if (ms_hyperv.paravisor_present || hv_root_partition()) {
> +		/* Mask out vTOM bit. ioremap_cache() maps decrypted */
> +		u64 base = (siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT) &
> +				~ms_hyperv.shared_gpa_boundary;
> +		hv_cpu->hyp_synic_event_page =
> +			(void *)ioremap_cache(base, HV_HYP_PAGE_SIZE);
> +		if (!hv_cpu->hyp_synic_event_page)
> +			pr_err("Fail to map synic event page.\n");
> +	} else {
> +		siefp.base_siefp_gpa = virt_to_phys(hv_cpu->hyp_synic_event_page)
> +			>> HV_HYP_PAGE_SHIFT;
> +	}
> +
> +	hv_set_msr(HV_MSR_SIEFP, siefp.as_uint64);
> +	hv_enable_coco_interrupt(cpu, vmbus_interrupt, true);
> +
> +	/* Setup the shared SINT. */
> +	if (vmbus_irq != -1)
> +		enable_percpu_irq(vmbus_irq, 0);
> +	shared_sint.as_uint64 = hv_get_msr(HV_MSR_SINT0 + VMBUS_MESSAGE_SINT);
> +
> +	shared_sint.vector = vmbus_interrupt;
> +	shared_sint.masked = false;
> +	shared_sint.auto_eoi = hv_recommend_using_aeoi();
> +	hv_set_msr(HV_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
> +}
> +
> +static void hv_hyp_synic_enable_interrupts(void)
> +{
> +	union hv_synic_scontrol sctrl;
> +
> +	/* Enable the global synic bit */
> +	sctrl.as_uint64 = hv_get_msr(HV_MSR_SCONTROL);
> +	sctrl.enable = 1;
> +
> +	hv_set_msr(HV_MSR_SCONTROL, sctrl.as_uint64);
> +}
> diff --git a/drivers/hv/mshv_debugfs.c b/drivers/hv/mshv_debugfs.c
> new file mode 100644
> index 000000000000..72eb0ae44e4b
> --- /dev/null
> +++ b/drivers/hv/mshv_debugfs.c
> @@ -0,0 +1,703 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (c) 2026, Microsoft Corporation.
> + *
> + * The /sys/kernel/debug/mshv directory contents.
> + * Contains various statistics data, provided by the hypervisor.
> + *
> + * Authors: Microsoft Linux virtualization team
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/stringify.h>
> +#include <asm/mshyperv.h>
> +#include <linux/slab.h>
> +
> +#include "mshv.h"
> +#include "mshv_root.h"
> +
> +#include "hv_counters.c"
> +
> +#define U32_BUF_SZ 11
> +#define U64_BUF_SZ 21
> +#define NUM_STATS_AREAS (HV_STATS_AREA_PARENT + 1)

This is sort of weak in that it doesn't really guard against
changes in the enum that defines HV_STATS_AREA_PARENT.
It would work if it were defined as part of the enum, but then
you are changing the code coming from the Windows world,
which I know is a different problem.

The enum is part of the hypervisor ABI and hence isn't likely to
change, but it still feels funny to define NUM_STATS_AREAS like
this. I would suggest dropping this and just using
HV_STATS_AREA_COUNT for the memory allocations even
though doing so will allocate space for a stats area pointer
that isn't used by this code. It's only a few bytes.

> +
> +static struct dentry *mshv_debugfs;
> +static struct dentry *mshv_debugfs_partition;
> +static struct dentry *mshv_debugfs_lp;
> +static struct dentry **parent_vp_stats;
> +static struct dentry *parent_partition_stats;
> +
> +static u64 mshv_lps_count;
> +static struct hv_stats_page **mshv_lps_stats;
> +
> +static int lp_stats_show(struct seq_file *m, void *v)
> +{
> +	const struct hv_stats_page *stats = m->private;
> +	struct hv_counter_entry *entry = hv_lp_counters;
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(hv_lp_counters); i++, entry++)
> +		seq_printf(m, "%-29s: %llu\n", entry->name,
> +			   stats->data[entry->idx]);
> +
> +	return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(lp_stats);
> +
> +static void mshv_lp_stats_unmap(u32 lp_index)
> +{
> +	union hv_stats_object_identity identity = {
> +		.lp.lp_index = lp_index,
> +		.lp.stats_area_type = HV_STATS_AREA_SELF,
> +	};
> +	int err;
> +
> +	err = hv_unmap_stats_page(HV_STATS_OBJECT_LOGICAL_PROCESSOR,
> +				  mshv_lps_stats[lp_index], &identity);
> +	if (err)
> +		pr_err("%s: failed to unmap logical processor %u stats, err: %d\n",
> +		       __func__, lp_index, err);

Perhaps set mshv_lps_stats[lp_index] to NULL?  I don't think it's actually
required, but similar code later in this file sets some pointers to NULL
just for good hygiene.

> +}
> +
> +static struct hv_stats_page * __init mshv_lp_stats_map(u32 lp_index)
> +{
> +	union hv_stats_object_identity identity = {
> +		.lp.lp_index = lp_index,
> +		.lp.stats_area_type = HV_STATS_AREA_SELF,
> +	};
> +	struct hv_stats_page *stats;
> +	int err;
> +
> +	err = hv_map_stats_page(HV_STATS_OBJECT_LOGICAL_PROCESSOR, &identity,
> +				&stats);
> +	if (err) {
> +		pr_err("%s: failed to map logical processor %u stats, err: %d\n",
> +		       __func__, lp_index, err);
> +		return ERR_PTR(err);
> +	}
> +	mshv_lps_stats[lp_index] = stats;
> +
> +	return stats;
> +}
> +
> +static struct hv_stats_page * __init lp_debugfs_stats_create(u32 lp_index,
> +							     struct dentry *parent)
> +{
> +	struct dentry *dentry;
> +	struct hv_stats_page *stats;
> +
> +	stats = mshv_lp_stats_map(lp_index);
> +	if (IS_ERR(stats))
> +		return stats;
> +
> +	dentry = debugfs_create_file("stats", 0400, parent,
> +				     stats, &lp_stats_fops);
> +	if (IS_ERR(dentry)) {
> +		mshv_lp_stats_unmap(lp_index);
> +		return ERR_CAST(dentry);
> +	}
> +	return stats;
> +}
> +
> +static int __init lp_debugfs_create(u32 lp_index, struct dentry *parent)
> +{
> +	struct dentry *idx;
> +	char lp_idx_str[U32_BUF_SZ];
> +	struct hv_stats_page *stats;
> +	int err;
> +
> +	sprintf(lp_idx_str, "%u", lp_index);
> +
> +	idx = debugfs_create_dir(lp_idx_str, parent);
> +	if (IS_ERR(idx))
> +		return PTR_ERR(idx);
> +
> +	stats = lp_debugfs_stats_create(lp_index, idx);
> +	if (IS_ERR(stats)) {
> +		err = PTR_ERR(stats);
> +		goto remove_debugfs_lp_idx;
> +	}
> +
> +	return 0;
> +
> +remove_debugfs_lp_idx:
> +	debugfs_remove_recursive(idx);
> +	return err;
> +}
> +
> +static void mshv_debugfs_lp_remove(void)
> +{
> +	int lp_index;
> +
> +	debugfs_remove_recursive(mshv_debugfs_lp);
> +
> +	for (lp_index = 0; lp_index < mshv_lps_count; lp_index++)
> +		mshv_lp_stats_unmap(lp_index);
> +
> +	kfree(mshv_lps_stats);
> +	mshv_lps_stats = NULL;
> +}
> +
> +static int __init mshv_debugfs_lp_create(struct dentry *parent)
> +{
> +	struct dentry *lp_dir;
> +	int err, lp_index;
> +
> +	mshv_lps_stats = kcalloc(mshv_lps_count,
> +				 sizeof(*mshv_lps_stats),
> +				 GFP_KERNEL_ACCOUNT);
> +
> +	if (!mshv_lps_stats)
> +		return -ENOMEM;
> +
> +	lp_dir = debugfs_create_dir("lp", parent);
> +	if (IS_ERR(lp_dir)) {
> +		err = PTR_ERR(lp_dir);
> +		goto free_lp_stats;
> +	}
> +
> +	for (lp_index = 0; lp_index < mshv_lps_count; lp_index++) {
> +		err = lp_debugfs_create(lp_index, lp_dir);
> +		if (err)
> +			goto remove_debugfs_lps;
> +	}
> +
> +	mshv_debugfs_lp = lp_dir;
> +
> +	return 0;
> +
> +remove_debugfs_lps:
> +	for (lp_index -= 1; lp_index >= 0; lp_index--)
> +		mshv_lp_stats_unmap(lp_index);
> +	debugfs_remove_recursive(lp_dir);
> +free_lp_stats:
> +	kfree(mshv_lps_stats);

Set mshv_lps_stats to NULL?

> +
> +	return err;
> +}
> +
> +static int vp_stats_show(struct seq_file *m, void *v)
> +{
> +	const struct hv_stats_page **pstats = m->private;
> +	struct hv_counter_entry *entry = hv_vp_counters;
> +	int i;
> +
> +	/*
> +	 * For VP and partition stats, there may be two stats areas mapped,
> +	 * SELF and PARENT. These refer to the privilege level of the data in
> +	 * each page. Some fields may be 0 in SELF and nonzero in PARENT, or
> +	 * vice versa.
> +	 *
> +	 * Hence, prioritize printing from the PARENT page (more privileged
> +	 * data), but use the value from the SELF page if the PARENT value is
> +	 * 0.
> +	 */
> +
> +	for (i = 0; i < ARRAY_SIZE(hv_vp_counters); i++, entry++) {
> +		u64 parent_val = pstats[HV_STATS_AREA_PARENT]->data[entry->idx];
> +		u64 self_val = pstats[HV_STATS_AREA_SELF]->data[entry->idx];
> +
> +		seq_printf(m, "%-43s: %llu\n", entry->name,
> +			   parent_val ? parent_val : self_val);
> +	}
> +
> +	return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(vp_stats);
> +
> +static void vp_debugfs_remove(struct dentry *vp_stats)
> +{
> +	debugfs_remove_recursive(vp_stats->d_parent);
> +}
> +
> +static int vp_debugfs_create(u64 partition_id, u32 vp_index,
> +			     struct hv_stats_page **pstats,
> +			     struct dentry **vp_stats_ptr,
> +			     struct dentry *parent)
> +{
> +	struct dentry *vp_idx_dir, *d;
> +	char vp_idx_str[U32_BUF_SZ];
> +	int err;
> +
> +	sprintf(vp_idx_str, "%u", vp_index);
> +
> +	vp_idx_dir = debugfs_create_dir(vp_idx_str, parent);
> +	if (IS_ERR(vp_idx_dir))
> +		return PTR_ERR(vp_idx_dir);
> +
> +	d = debugfs_create_file("stats", 0400, vp_idx_dir,
> +				     pstats, &vp_stats_fops);
> +	if (IS_ERR(d)) {
> +		err = PTR_ERR(d);
> +		goto remove_debugfs_vp_idx;
> +	}
> +
> +	*vp_stats_ptr = d;
> +
> +	return 0;
> +
> +remove_debugfs_vp_idx:
> +	debugfs_remove_recursive(vp_idx_dir);
> +	return err;
> +}
> +
> +static int partition_stats_show(struct seq_file *m, void *v)
> +{
> +	const struct hv_stats_page **pstats = m->private;
> +	struct hv_counter_entry *entry = hv_partition_counters;
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(hv_partition_counters); i++, entry++) {
> +		u64 parent_val = pstats[HV_STATS_AREA_PARENT]->data[entry->idx];
> +		u64 self_val = pstats[HV_STATS_AREA_SELF]->data[entry->idx];
> +
> +		seq_printf(m, "%-32s: %llu\n", entry->name,
> +			   parent_val ? parent_val : self_val);
> +	}
> +
> +	return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(partition_stats);
> +
> +static void mshv_partition_stats_unmap(u64 partition_id,
> +				       struct hv_stats_page *stats_page,
> +				       enum hv_stats_area_type stats_area_type)
> +{
> +	union hv_stats_object_identity identity = {
> +		.partition.partition_id = partition_id,
> +		.partition.stats_area_type = stats_area_type,
> +	};
> +	int err;
> +
> +	err = hv_unmap_stats_page(HV_STATS_OBJECT_PARTITION, stats_page,
> +				  &identity);
> +	if (err)
> +		pr_err("%s: failed to unmap partition %lld %s stats, err: %d\n",
> +		       __func__, partition_id,
> +		       (stats_area_type == HV_STATS_AREA_SELF) ? "self" : "parent",
> +		       err);
> +}
> +
> +static struct hv_stats_page *mshv_partition_stats_map(u64 partition_id,
> +						      enum hv_stats_area_type stats_area_type)
> +{
> +	union hv_stats_object_identity identity = {
> +		.partition.partition_id = partition_id,
> +		.partition.stats_area_type = stats_area_type,
> +	};
> +	struct hv_stats_page *stats;
> +	int err;
> +
> +	err = hv_map_stats_page(HV_STATS_OBJECT_PARTITION, &identity, &stats);
> +	if (err) {
> +		pr_err("%s: failed to map partition %lld %s stats, err: %d\n",
> +		       __func__, partition_id,
> +		       (stats_area_type == HV_STATS_AREA_SELF) ? "self" : "parent",
> +		       err);
> +		return ERR_PTR(err);
> +	}
> +	return stats;
> +}
> +
> +static int mshv_debugfs_partition_stats_create(u64 partition_id,
> +					    struct dentry **partition_stats_ptr,
> +					    struct dentry *parent)
> +{
> +	struct dentry *dentry;
> +	struct hv_stats_page **pstats;
> +	int err;
> +
> +	pstats = kcalloc(NUM_STATS_AREAS, sizeof(struct hv_stats_page *),
> +			 GFP_KERNEL_ACCOUNT);
> +	if (!pstats)
> +		return -ENOMEM;
> +
> +	pstats[HV_STATS_AREA_SELF] = mshv_partition_stats_map(partition_id,
> +							      HV_STATS_AREA_SELF);
> +	if (IS_ERR(pstats[HV_STATS_AREA_SELF])) {
> +		err = PTR_ERR(pstats[HV_STATS_AREA_SELF]);
> +		goto cleanup;
> +	}
> +
> +	/*
> +	 * L1VH partition cannot access its partition stats in parent area.
> +	 */
> +	if (is_l1vh_parent(partition_id)) {
> +		pstats[HV_STATS_AREA_PARENT] = pstats[HV_STATS_AREA_SELF];
> +	} else {
> +		pstats[HV_STATS_AREA_PARENT] = mshv_partition_stats_map(partition_id,
> +									HV_STATS_AREA_PARENT);
> +		if (IS_ERR(pstats[HV_STATS_AREA_PARENT])) {
> +			err = PTR_ERR(pstats[HV_STATS_AREA_PARENT]);
> +			goto unmap_self;
> +		}
> +		if (!pstats[HV_STATS_AREA_PARENT])
> +			pstats[HV_STATS_AREA_PARENT] = pstats[HV_STATS_AREA_SELF];
> +	}
> +
> +	dentry = debugfs_create_file("stats", 0400, parent,
> +				     pstats, &partition_stats_fops);
> +	if (IS_ERR(dentry)) {
> +		err = PTR_ERR(dentry);
> +		goto unmap_partition_stats;
> +	}
> +
> +	*partition_stats_ptr = dentry;
> +	return 0;
> +
> +unmap_partition_stats:
> +	if (pstats[HV_STATS_AREA_PARENT] != pstats[HV_STATS_AREA_SELF])
> +		mshv_partition_stats_unmap(partition_id, pstats[HV_STATS_AREA_PARENT],
> +					   HV_STATS_AREA_PARENT);
> +unmap_self:
> +	mshv_partition_stats_unmap(partition_id, pstats[HV_STATS_AREA_SELF],
> +				   HV_STATS_AREA_SELF);
> +cleanup:
> +	kfree(pstats);
> +	return err;
> +}
> +
> +static void partition_debugfs_remove(u64 partition_id, struct dentry *dentry)
> +{
> +	struct hv_stats_page **pstats = NULL;
> +
> +	pstats = dentry->d_inode->i_private;
> +
> +	debugfs_remove_recursive(dentry->d_parent);
> +
> +	if (pstats[HV_STATS_AREA_PARENT] != pstats[HV_STATS_AREA_SELF]) {
> +		mshv_partition_stats_unmap(partition_id,
> +					   pstats[HV_STATS_AREA_PARENT],
> +					   HV_STATS_AREA_PARENT);
> +	}
> +
> +	mshv_partition_stats_unmap(partition_id,
> +				   pstats[HV_STATS_AREA_SELF],
> +				   HV_STATS_AREA_SELF);
> +
> +	kfree(pstats);
> +}
> +
> +static int partition_debugfs_create(u64 partition_id,
> +				    struct dentry **vp_dir_ptr,
> +				    struct dentry **partition_stats_ptr,
> +				    struct dentry *parent)
> +{
> +	char part_id_str[U64_BUF_SZ];
> +	struct dentry *part_id_dir, *vp_dir;
> +	int err;
> +
> +	if (is_l1vh_parent(partition_id))
> +		sprintf(part_id_str, "self");
> +	else
> +		sprintf(part_id_str, "%llu", partition_id);
> +
> +	part_id_dir = debugfs_create_dir(part_id_str, parent);
> +	if (IS_ERR(part_id_dir))
> +		return PTR_ERR(part_id_dir);
> +
> +	vp_dir = debugfs_create_dir("vp", part_id_dir);
> +	if (IS_ERR(vp_dir)) {
> +		err = PTR_ERR(vp_dir);
> +		goto remove_debugfs_partition_id;
> +	}
> +
> +	err = mshv_debugfs_partition_stats_create(partition_id,
> +						  partition_stats_ptr,
> +						  part_id_dir);
> +	if (err)
> +		goto remove_debugfs_partition_id;
> +
> +	*vp_dir_ptr = vp_dir;
> +
> +	return 0;
> +
> +remove_debugfs_partition_id:
> +	debugfs_remove_recursive(part_id_dir);
> +	return err;
> +}
> +
> +static void parent_vp_debugfs_remove(u32 vp_index,
> +				     struct dentry *vp_stats_ptr)
> +{
> +	struct hv_stats_page **pstats;
> +
> +	pstats = vp_stats_ptr->d_inode->i_private;
> +	vp_debugfs_remove(vp_stats_ptr);
> +	mshv_vp_stats_unmap(hv_current_partition_id, vp_index, pstats);
> +	kfree(pstats);
> +}
> +
> +static void mshv_debugfs_parent_partition_remove(void)
> +{
> +	int idx;
> +
> +	for_each_online_cpu(idx)
> +		parent_vp_debugfs_remove(idx,

The first parameter here ("idx") should be translated through the
hv_vp_index[] array, as is done in mshv_debugfs_parent_partition_create().

> +					 parent_vp_stats[idx]);
> +
> +	partition_debugfs_remove(hv_current_partition_id,
> +				 parent_partition_stats);
> +	kfree(parent_vp_stats);
> +	parent_vp_stats = NULL;
> +	parent_partition_stats = NULL;
> +

Extra blank line.

> +}
> +
> +static int __init parent_vp_debugfs_create(u32 vp_index,
> +					   struct dentry **vp_stats_ptr,
> +					   struct dentry *parent)
> +{
> +	struct hv_stats_page **pstats;
> +	int err;
> +
> +	pstats = kcalloc(2, sizeof(struct hv_stats_page *), GFP_KERNEL_ACCOUNT);

Another case of using "2" that should be changed.

> +	if (!pstats)
> +		return -ENOMEM;
> +
> +	err = mshv_vp_stats_map(hv_current_partition_id, vp_index, pstats);
> +	if (err)
> +		goto cleanup;
> +
> +	err = vp_debugfs_create(hv_current_partition_id, vp_index, pstats,
> +				vp_stats_ptr, parent);
> +	if (err)
> +		goto unmap_vp_stats;
> +
> +	return 0;
> +
> +unmap_vp_stats:
> +	mshv_vp_stats_unmap(hv_current_partition_id, vp_index, pstats);
> +cleanup:
> +	kfree(pstats);
> +	return err;
> +}
> +
> +static int __init mshv_debugfs_parent_partition_create(void)
> +{
> +	struct dentry *vp_dir;
> +	int err, idx, i;
> +
> +	mshv_debugfs_partition = debugfs_create_dir("partition",
> +						     mshv_debugfs);
> +	if (IS_ERR(mshv_debugfs_partition))
> +		return PTR_ERR(mshv_debugfs_partition);
> +
> +	err = partition_debugfs_create(hv_current_partition_id,
> +				       &vp_dir,
> +				       &parent_partition_stats,
> +				       mshv_debugfs_partition);
> +	if (err)
> +		goto remove_debugfs_partition;
> +
> +	parent_vp_stats = kcalloc(num_possible_cpus(),

num_possible_cpus() should not be used to allocate an array that is
then indexed by the Linux CPU number. Use nr_cpu_ids instead when
allocating the array. See commit 16b18fdf6bc7 for the full explanation.
As explained in that commit message, using num_possible_cpus()
doesn't break things now, but it might in the future.

> +				  sizeof(*parent_vp_stats),
> +				  GFP_KERNEL);
> +	if (!parent_vp_stats) {
> +		err = -ENOMEM;
> +		goto remove_debugfs_partition;
> +	}
> +
> +	for_each_online_cpu(idx) {
> +		err = parent_vp_debugfs_create(hv_vp_index[idx],
> +					       &parent_vp_stats[idx],
> +					       vp_dir);
> +		if (err)
> +			goto remove_debugfs_partition_vp;
> +	}
> +
> +	return 0;
> +
> +remove_debugfs_partition_vp:
> +	for_each_online_cpu(i) {
> +		if (i >= idx)
> +			break;
> +		parent_vp_debugfs_remove(i, parent_vp_stats[i]);
> +	}
> +	partition_debugfs_remove(hv_current_partition_id,
> +				 parent_partition_stats);
> +
> +	kfree(parent_vp_stats);
> +	parent_vp_stats = NULL;
> +	parent_partition_stats = NULL;
> +
> +remove_debugfs_partition:
> +	debugfs_remove_recursive(mshv_debugfs_partition);
> +	mshv_debugfs_partition = NULL;
> +	return err;
> +}
> +
> +static int hv_stats_show(struct seq_file *m, void *v)
> +{
> +	const struct hv_stats_page *stats = m->private;
> +	struct hv_counter_entry *entry = hv_hypervisor_counters;
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(hv_hypervisor_counters); i++, entry++)
> +		seq_printf(m, "%-25s: %llu\n", entry->name,
> +			   stats->data[entry->idx]);
> +
> +	return 0;
> +}
> +DEFINE_SHOW_ATTRIBUTE(hv_stats);
> +
> +static void mshv_hv_stats_unmap(void)
> +{
> +	union hv_stats_object_identity identity = {
> +		.hv.stats_area_type = HV_STATS_AREA_SELF,
> +	};
> +	int err;
> +
> +	err = hv_unmap_stats_page(HV_STATS_OBJECT_HYPERVISOR, NULL, &identity);
> +	if (err)
> +		pr_err("%s: failed to unmap hypervisor stats: %d\n",
> +		       __func__, err);
> +}
> +
> +static void * __init mshv_hv_stats_map(void)
> +{
> +	union hv_stats_object_identity identity = {
> +		.hv.stats_area_type = HV_STATS_AREA_SELF,
> +	};
> +	struct hv_stats_page *stats;
> +	int err;
> +
> +	err = hv_map_stats_page(HV_STATS_OBJECT_HYPERVISOR, &identity, &stats);
> +	if (err) {
> +		pr_err("%s: failed to map hypervisor stats: %d\n",
> +		       __func__, err);
> +		return ERR_PTR(err);
> +	}
> +	return stats;
> +}
> +
> +static int __init mshv_debugfs_hv_stats_create(struct dentry *parent)
> +{
> +	struct dentry *dentry;
> +	u64 *stats;
> +	int err;
> +
> +	stats = mshv_hv_stats_map();
> +	if (IS_ERR(stats))
> +		return PTR_ERR(stats);
> +
> +	dentry = debugfs_create_file("stats", 0400, parent,
> +				     stats, &hv_stats_fops);
> +	if (IS_ERR(dentry)) {
> +		err = PTR_ERR(dentry);
> +		pr_err("%s: failed to create hypervisor stats dentry: %d\n",
> +		       __func__, err);
> +		goto unmap_hv_stats;
> +	}
> +
> +	mshv_lps_count = num_present_cpus();

This method of setting mshv_lps_count, and the iteration through the lp_index
in mshv_debugfs_lp_create() and mshv_debugfs_lp_remove(), seems risky. The
lp_index gets passed to the hypervisor, so it must be the hypervisor's concept
of the lp_index. Is that always guaranteed to be the same as Linux's numbering
of the present CPUs? There may be edge cases where it is not. For example, what
if Linux in the root partition were booted with the "nosmt" kernel boot option,
such that Linux ignores all the 2nd hyper-threads in a core? Could that create
a numbering mismatch?

Note that for vp_index, we have the hv_vp_index[] array for translating from
Linux's concept of a CPU number to Hyper-V's concept of vp_index. For
example, mshv_debugfs_parent_partition_create() correctly goes through
this translation. And presumably when the VMM code does the
MSHV_CREATE_VP ioctl, it is passing in a hypervisor vp_index.

Everything may work fine "as is" for the moment, but the lp functions here
are still conflating the hypervisor's LP numbering with Linux's CPU numbering,
and that seems like a recipe for trouble somewhere down the road. I'm
not sure how the hypervisor interprets the "lp_index" part of the identity
argument passed to a hypercall, so I'm not sure what the fix is.

> +
> +	return 0;
> +
> +unmap_hv_stats:
> +	mshv_hv_stats_unmap();
> +	return err;
> +}
> +
> +int mshv_debugfs_vp_create(struct mshv_vp *vp)
> +{
> +	struct mshv_partition *p = vp->vp_partition;
> +
> +	if (!mshv_debugfs)
> +		return 0;
> +
> +	return vp_debugfs_create(p->pt_id, vp->vp_index,
> +				 vp->vp_stats_pages,
> +				 &vp->vp_stats_dentry,
> +				 p->pt_vp_dentry);
> +}
> +
> +void mshv_debugfs_vp_remove(struct mshv_vp *vp)
> +{
> +	if (!mshv_debugfs)
> +		return;
> +
> +	vp_debugfs_remove(vp->vp_stats_dentry);
> +}
> +
> +int mshv_debugfs_partition_create(struct mshv_partition *partition)
> +{
> +	int err;
> +
> +	if (!mshv_debugfs)
> +		return 0;
> +
> +	err = partition_debugfs_create(partition->pt_id,
> +				       &partition->pt_vp_dentry,
> +				       &partition->pt_stats_dentry,
> +				       mshv_debugfs_partition);
> +	if (err)
> +		return err;
> +
> +	return 0;
> +}
> +
> +void mshv_debugfs_partition_remove(struct mshv_partition *partition)
> +{
> +	if (!mshv_debugfs)
> +		return;
> +
> +	partition_debugfs_remove(partition->pt_id,
> +				 partition->pt_stats_dentry);
> +}
> +
> +int __init mshv_debugfs_init(void)
> +{
> +	int err;
> +
> +	mshv_debugfs = debugfs_create_dir("mshv", NULL);
> +	if (IS_ERR(mshv_debugfs)) {
> +		pr_err("%s: failed to create debugfs directory\n", __func__);
> +		return PTR_ERR(mshv_debugfs);
> +	}
> +
> +	if (hv_root_partition()) {
> +		err = mshv_debugfs_hv_stats_create(mshv_debugfs);
> +		if (err)
> +			goto remove_mshv_dir;
> +
> +		err = mshv_debugfs_lp_create(mshv_debugfs);
> +		if (err)
> +			goto unmap_hv_stats;
> +	}
> +
> +	err = mshv_debugfs_parent_partition_create();
> +	if (err)
> +		goto unmap_lp_stats;
> +
> +	return 0;
> +
> +unmap_lp_stats:
> +	if (hv_root_partition()) {
> +		mshv_debugfs_lp_remove();
> +		mshv_debugfs_lp = NULL;
> +	}
> +unmap_hv_stats:
> +	if (hv_root_partition())
> +		mshv_hv_stats_unmap();
> +remove_mshv_dir:
> +	debugfs_remove_recursive(mshv_debugfs);
> +	mshv_debugfs = NULL;
> +	return err;
> +}
> +
> +void mshv_debugfs_exit(void)
> +{
> +	mshv_debugfs_parent_partition_remove();
> +
> +	if (hv_root_partition()) {
> +		mshv_debugfs_lp_remove();
> +		mshv_debugfs_lp = NULL;
> +		mshv_hv_stats_unmap();
> +	}
> +
> +	debugfs_remove_recursive(mshv_debugfs);
> +	mshv_debugfs = NULL;
> +	mshv_debugfs_partition = NULL;
> +}
> diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h
> index e4912b0618fa..7332d9af8373 100644
> --- a/drivers/hv/mshv_root.h
> +++ b/drivers/hv/mshv_root.h
> @@ -52,6 +52,9 @@ struct mshv_vp {
>  		unsigned int kicked_by_hv;
>  		wait_queue_head_t vp_suspend_queue;
>  	} run;
> +#if IS_ENABLED(CONFIG_DEBUG_FS)
> +	struct dentry *vp_stats_dentry;
> +#endif
>  };
> 
>  #define vp_fmt(fmt) "p%lluvp%u: " fmt
> @@ -136,6 +139,10 @@ struct mshv_partition {
>  	u64 isolation_type;
>  	bool import_completed;
>  	bool pt_initialized;
> +#if IS_ENABLED(CONFIG_DEBUG_FS)
> +	struct dentry *pt_stats_dentry;
> +	struct dentry *pt_vp_dentry;
> +#endif
>  };
> 
>  #define pt_fmt(fmt) "p%llu: " fmt
> @@ -327,6 +334,33 @@ int hv_call_modify_spa_host_access(u64 partition_id, struct
> page **pages,
>  int hv_call_get_partition_property_ex(u64 partition_id, u64 property_code, u64 arg,
>  				      void *property_value, size_t property_value_sz);
> 
> +#if IS_ENABLED(CONFIG_DEBUG_FS)
> +int __init mshv_debugfs_init(void);
> +void mshv_debugfs_exit(void);
> +
> +int mshv_debugfs_partition_create(struct mshv_partition *partition);
> +void mshv_debugfs_partition_remove(struct mshv_partition *partition);
> +int mshv_debugfs_vp_create(struct mshv_vp *vp);
> +void mshv_debugfs_vp_remove(struct mshv_vp *vp);
> +#else
> +static inline int __init mshv_debugfs_init(void)
> +{
> +	return 0;
> +}
> +static inline void mshv_debugfs_exit(void) { }
> +
> +static inline int mshv_debugfs_partition_create(struct mshv_partition *partition)
> +{
> +	return 0;
> +}
> +static inline void mshv_debugfs_partition_remove(struct mshv_partition *partition) { }
> +static inline int mshv_debugfs_vp_create(struct mshv_vp *vp)
> +{
> +	return 0;
> +}
> +static inline void mshv_debugfs_vp_remove(struct mshv_vp *vp) { }
> +#endif
> +
>  extern struct mshv_root mshv_root;
>  extern enum hv_scheduler_type hv_scheduler_type;
>  extern u8 * __percpu *hv_synic_eventring_tail;
> diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
> index 12825666e21b..f4654fb8cd23 100644
> --- a/drivers/hv/mshv_root_main.c
> +++ b/drivers/hv/mshv_root_main.c
> @@ -1096,6 +1096,10 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
> 
>  	memcpy(vp->vp_stats_pages, stats_pages, sizeof(stats_pages));
> 
> +	ret = mshv_debugfs_vp_create(vp);
> +	if (ret)
> +		goto put_partition;
> +
>  	/*
>  	 * Keep anon_inode_getfd last: it installs fd in the file struct and
>  	 * thus makes the state accessible in user space.
> @@ -1103,7 +1107,7 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
>  	ret = anon_inode_getfd("mshv_vp", &mshv_vp_fops, vp,
>  			       O_RDWR | O_CLOEXEC);
>  	if (ret < 0)
> -		goto put_partition;
> +		goto remove_debugfs_vp;
> 
>  	/* already exclusive with the partition mutex for all ioctls */
>  	partition->pt_vp_count++;
> @@ -1111,6 +1115,8 @@ mshv_partition_ioctl_create_vp(struct mshv_partition *partition,
> 
>  	return ret;
> 
> +remove_debugfs_vp:
> +	mshv_debugfs_vp_remove(vp);
>  put_partition:
>  	mshv_partition_put(partition);
>  free_vp:
> @@ -1553,10 +1559,16 @@ mshv_partition_ioctl_initialize(struct mshv_partition *partition)
>  	if (ret)
>  		goto withdraw_mem;
> 
> +	ret = mshv_debugfs_partition_create(partition);
> +	if (ret)
> +		goto finalize_partition;
> +
>  	partition->pt_initialized = true;
> 
>  	return 0;
> 
> +finalize_partition:
> +	hv_call_finalize_partition(partition->pt_id);
>  withdraw_mem:
>  	hv_call_withdraw_memory(U64_MAX, NUMA_NO_NODE, partition->pt_id);
> 
> @@ -1736,6 +1748,7 @@ static void destroy_partition(struct mshv_partition *partition)
>  			if (!vp)
>  				continue;
> 
> +			mshv_debugfs_vp_remove(vp);
>  			mshv_vp_stats_unmap(partition->pt_id, vp->vp_index,
>  					    vp->vp_stats_pages);
> 
> @@ -1769,6 +1782,8 @@ static void destroy_partition(struct mshv_partition *partition)
>  			partition->pt_vp_array[i] = NULL;
>  		}
> 
> +		mshv_debugfs_partition_remove(partition);
> +
>  		/* Deallocates and unmaps everything including vcpus, GPA mappings etc */
>  		hv_call_finalize_partition(partition->pt_id);
> 
> @@ -2314,10 +2329,14 @@ static int __init mshv_parent_partition_init(void)
> 
>  	mshv_init_vmm_caps(dev);
> 
> -	ret = mshv_irqfd_wq_init();
> +	ret = mshv_debugfs_init();
>  	if (ret)
>  		goto exit_partition;
> 
> +	ret = mshv_irqfd_wq_init();
> +	if (ret)
> +		goto exit_debugfs;
> +
>  	spin_lock_init(&mshv_root.pt_ht_lock);
>  	hash_init(mshv_root.pt_htable);
> 
> @@ -2325,6 +2344,8 @@ static int __init mshv_parent_partition_init(void)
> 
>  	return 0;
> 
> +exit_debugfs:
> +	mshv_debugfs_exit();
>  exit_partition:
>  	if (hv_root_partition())
>  		mshv_root_partition_exit();
> @@ -2341,6 +2362,7 @@ static void __exit mshv_parent_partition_exit(void)
>  {
>  	hv_setup_mshv_handler(NULL);
>  	mshv_port_table_fini();
> +	mshv_debugfs_exit();
>  	misc_deregister(&mshv_dev);
>  	mshv_irqfd_wq_cleanup();
>  	if (hv_root_partition())
> --
> 2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-23 17:09   ` Michael Kelley
@ 2026-01-23 19:04     ` Nuno Das Neves
  2026-01-23 19:10       ` Michael Kelley
  2026-01-23 22:31       ` Stanislav Kinsburskii
  0 siblings, 2 replies; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-23 19:04 UTC (permalink / raw)
  To: Michael Kelley, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, skinsburskii@linux.microsoft.com
  Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com

On 1/23/2026 9:09 AM, Michael Kelley wrote:
> From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
>>
>> Introduce hv_counters.c, containing static data corresponding to
>> HV_*_COUNTER enums in the hypervisor source. Defining the enum
>> members as an array instead makes more sense, since it will be
>> iterated over to print counter information to debugfs.
> 
> I would have expected the filename to be mshv_counters.c, so that the association
> with the MS hypervisor is clear. And the file is inextricably linked to mshv_debugfs.c,
> which of course has the "mshv_" prefix. Or is there some thinking I'm not aware of
> for using the "hv_" prefix?
> 
Good question - I originally thought of using hv_ because the definitions inside are
part of the hypervisor ABI, and hence also have the hv_ prefix.

However, you have a good point, and I'm not opposed to changing it.

Maybe, just to be super explicit: "mshv_debugfs_counters.c"?

> Also, I see in Patch 7 of this series that hv_counters.c is #included as a .c file
> in mshv_debugfs.c. Is there a reason for doing the #include instead of adding
> hv_counters.c to the Makefile and building it on its own? You would need to
> add a handful of extern statements to mshv_root.h so that the tables are
> referenceable from mshv_debugfs.c. But that would seem to be the more
> normal way of doing things.  #including a .c file is unusual.
> 

Yes... I thought I could avoid noise in mshv_root.h and the Makefile, since
it's only relevant for mshv_debugfs.c. However, I could see this file (whether
as .c or .h) being misused and included elsewhere inadvertently, which would
duplicate the tables, so maybe doing it the normal way is a better idea, even
if mshv_debugfs.c is likely the only user.
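
For reference, a minimal sketch of that "normal way" (hypothetical
fragments, not taken from the patch — the struct would move from
hv_counters.c into mshv_root.h alongside the externs):

```c
/* In drivers/hv/Makefile, hv_counters.c would be built on its own,
 * e.g.:  mshv_root-y += hv_counters.o
 *
 * In mshv_root.h, the tables become referenceable from
 * mshv_debugfs.c via extern declarations: */
struct hv_counter_entry {
	char *name;
	int idx;
};

extern struct hv_counter_entry hv_hypervisor_counters[];
extern struct hv_counter_entry hv_lp_counters[];
extern struct hv_counter_entry hv_partition_counters[];
extern struct hv_counter_entry hv_vp_counters[];
```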

> See one more comment on the last line of this patch ...
> 
>>
>> Include hypervisor, logical processor, partition, and virtual
>> processor counters.
>>
>> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
>> ---
>>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 488 insertions(+)
>>  create mode 100644 drivers/hv/hv_counters.c
>>
>> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
>> new file mode 100644
>> index 000000000000..a8e07e72cc29
>> --- /dev/null
>> +++ b/drivers/hv/hv_counters.c
>> @@ -0,0 +1,488 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Copyright (c) 2026, Microsoft Corporation.
>> + *
>> + * Data for printing stats page counters via debugfs.
>> + *
>> + * Authors: Microsoft Linux virtualization team
>> + */
>> +
>> +struct hv_counter_entry {
>> +	char *name;
>> +	int idx;
>> +};
>> +
>> +/* HV_HYPERVISOR_COUNTER */
>> +static struct hv_counter_entry hv_hypervisor_counters[] = {
>> +	{ "HvLogicalProcessors", 1 },
>> +	{ "HvPartitions", 2 },
>> +	{ "HvTotalPages", 3 },
>> +	{ "HvVirtualProcessors", 4 },
>> +	{ "HvMonitoredNotifications", 5 },
>> +	{ "HvModernStandbyEntries", 6 },
>> +	{ "HvPlatformIdleTransitions", 7 },
>> +	{ "HvHypervisorStartupCost", 8 },
>> +
>> +	{ "HvIOSpacePages", 10 },
>> +	{ "HvNonEssentialPagesForDump", 11 },
>> +	{ "HvSubsumedPages", 12 },
>> +};
>> +
>> +/* HV_CPU_COUNTER */
>> +static struct hv_counter_entry hv_lp_counters[] = {
>> +	{ "LpGlobalTime", 1 },
>> +	{ "LpTotalRunTime", 2 },
>> +	{ "LpHypervisorRunTime", 3 },
>> +	{ "LpHardwareInterrupts", 4 },
>> +	{ "LpContextSwitches", 5 },
>> +	{ "LpInterProcessorInterrupts", 6 },
>> +	{ "LpSchedulerInterrupts", 7 },
>> +	{ "LpTimerInterrupts", 8 },
>> +	{ "LpInterProcessorInterruptsSent", 9 },
>> +	{ "LpProcessorHalts", 10 },
>> +	{ "LpMonitorTransitionCost", 11 },
>> +	{ "LpContextSwitchTime", 12 },
>> +	{ "LpC1TransitionsCount", 13 },
>> +	{ "LpC1RunTime", 14 },
>> +	{ "LpC2TransitionsCount", 15 },
>> +	{ "LpC2RunTime", 16 },
>> +	{ "LpC3TransitionsCount", 17 },
>> +	{ "LpC3RunTime", 18 },
>> +	{ "LpRootVpIndex", 19 },
>> +	{ "LpIdleSequenceNumber", 20 },
>> +	{ "LpGlobalTscCount", 21 },
>> +	{ "LpActiveTscCount", 22 },
>> +	{ "LpIdleAccumulation", 23 },
>> +	{ "LpReferenceCycleCount0", 24 },
>> +	{ "LpActualCycleCount0", 25 },
>> +	{ "LpReferenceCycleCount1", 26 },
>> +	{ "LpActualCycleCount1", 27 },
>> +	{ "LpProximityDomainId", 28 },
>> +	{ "LpPostedInterruptNotifications", 29 },
>> +	{ "LpBranchPredictorFlushes", 30 },
>> +#if IS_ENABLED(CONFIG_X86_64)
>> +	{ "LpL1DataCacheFlushes", 31 },
>> +	{ "LpImmediateL1DataCacheFlushes", 32 },
>> +	{ "LpMbFlushes", 33 },
>> +	{ "LpCounterRefreshSequenceNumber", 34 },
>> +	{ "LpCounterRefreshReferenceTime", 35 },
>> +	{ "LpIdleAccumulationSnapshot", 36 },
>> +	{ "LpActiveTscCountSnapshot", 37 },
>> +	{ "LpHwpRequestContextSwitches", 38 },
>> +	{ "LpPlaceholder1", 39 },
>> +	{ "LpPlaceholder2", 40 },
>> +	{ "LpPlaceholder3", 41 },
>> +	{ "LpPlaceholder4", 42 },
>> +	{ "LpPlaceholder5", 43 },
>> +	{ "LpPlaceholder6", 44 },
>> +	{ "LpPlaceholder7", 45 },
>> +	{ "LpPlaceholder8", 46 },
>> +	{ "LpPlaceholder9", 47 },
>> +	{ "LpSchLocalRunListSize", 48 },
>> +	{ "LpReserveGroupId", 49 },
>> +	{ "LpRunningPriority", 50 },
>> +	{ "LpPerfmonInterruptCount", 51 },
>> +#elif IS_ENABLED(CONFIG_ARM64)
>> +	{ "LpCounterRefreshSequenceNumber", 31 },
>> +	{ "LpCounterRefreshReferenceTime", 32 },
>> +	{ "LpIdleAccumulationSnapshot", 33 },
>> +	{ "LpActiveTscCountSnapshot", 34 },
>> +	{ "LpHwpRequestContextSwitches", 35 },
>> +	{ "LpPlaceholder2", 36 },
>> +	{ "LpPlaceholder3", 37 },
>> +	{ "LpPlaceholder4", 38 },
>> +	{ "LpPlaceholder5", 39 },
>> +	{ "LpPlaceholder6", 40 },
>> +	{ "LpPlaceholder7", 41 },
>> +	{ "LpPlaceholder8", 42 },
>> +	{ "LpPlaceholder9", 43 },
>> +	{ "LpSchLocalRunListSize", 44 },
>> +	{ "LpReserveGroupId", 45 },
>> +	{ "LpRunningPriority", 46 },
>> +#endif
>> +};
>> +
>> +/* HV_PROCESS_COUNTER */
>> +static struct hv_counter_entry hv_partition_counters[] = {
>> +	{ "PtVirtualProcessors", 1 },
>> +
>> +	{ "PtTlbSize", 3 },
>> +	{ "PtAddressSpaces", 4 },
>> +	{ "PtDepositedPages", 5 },
>> +	{ "PtGpaPages", 6 },
>> +	{ "PtGpaSpaceModifications", 7 },
>> +	{ "PtVirtualTlbFlushEntires", 8 },
>> +	{ "PtRecommendedTlbSize", 9 },
>> +	{ "PtGpaPages4K", 10 },
>> +	{ "PtGpaPages2M", 11 },
>> +	{ "PtGpaPages1G", 12 },
>> +	{ "PtGpaPages512G", 13 },
>> +	{ "PtDevicePages4K", 14 },
>> +	{ "PtDevicePages2M", 15 },
>> +	{ "PtDevicePages1G", 16 },
>> +	{ "PtDevicePages512G", 17 },
>> +	{ "PtAttachedDevices", 18 },
>> +	{ "PtDeviceInterruptMappings", 19 },
>> +	{ "PtIoTlbFlushes", 20 },
>> +	{ "PtIoTlbFlushCost", 21 },
>> +	{ "PtDeviceInterruptErrors", 22 },
>> +	{ "PtDeviceDmaErrors", 23 },
>> +	{ "PtDeviceInterruptThrottleEvents", 24 },
>> +	{ "PtSkippedTimerTicks", 25 },
>> +	{ "PtPartitionId", 26 },
>> +#if IS_ENABLED(CONFIG_X86_64)
>> +	{ "PtNestedTlbSize", 27 },
>> +	{ "PtRecommendedNestedTlbSize", 28 },
>> +	{ "PtNestedTlbFreeListSize", 29 },
>> +	{ "PtNestedTlbTrimmedPages", 30 },
>> +	{ "PtPagesShattered", 31 },
>> +	{ "PtPagesRecombined", 32 },
>> +	{ "PtHwpRequestValue", 33 },
>> +	{ "PtAutoSuspendEnableTime", 34 },
>> +	{ "PtAutoSuspendTriggerTime", 35 },
>> +	{ "PtAutoSuspendDisableTime", 36 },
>> +	{ "PtPlaceholder1", 37 },
>> +	{ "PtPlaceholder2", 38 },
>> +	{ "PtPlaceholder3", 39 },
>> +	{ "PtPlaceholder4", 40 },
>> +	{ "PtPlaceholder5", 41 },
>> +	{ "PtPlaceholder6", 42 },
>> +	{ "PtPlaceholder7", 43 },
>> +	{ "PtPlaceholder8", 44 },
>> +	{ "PtHypervisorStateTransferGeneration", 45 },
>> +	{ "PtNumberofActiveChildPartitions", 46 },
>> +#elif IS_ENABLED(CONFIG_ARM64)
>> +	{ "PtHwpRequestValue", 27 },
>> +	{ "PtAutoSuspendEnableTime", 28 },
>> +	{ "PtAutoSuspendTriggerTime", 29 },
>> +	{ "PtAutoSuspendDisableTime", 30 },
>> +	{ "PtPlaceholder1", 31 },
>> +	{ "PtPlaceholder2", 32 },
>> +	{ "PtPlaceholder3", 33 },
>> +	{ "PtPlaceholder4", 34 },
>> +	{ "PtPlaceholder5", 35 },
>> +	{ "PtPlaceholder6", 36 },
>> +	{ "PtPlaceholder7", 37 },
>> +	{ "PtPlaceholder8", 38 },
>> +	{ "PtHypervisorStateTransferGeneration", 39 },
>> +	{ "PtNumberofActiveChildPartitions", 40 },
>> +#endif
>> +};
>> +
>> +/* HV_THREAD_COUNTER */
>> +static struct hv_counter_entry hv_vp_counters[] = {
>> +	{ "VpTotalRunTime", 1 },
>> +	{ "VpHypervisorRunTime", 2 },
>> +	{ "VpRemoteNodeRunTime", 3 },
>> +	{ "VpNormalizedRunTime", 4 },
>> +	{ "VpIdealCpu", 5 },
>> +
>> +	{ "VpHypercallsCount", 7 },
>> +	{ "VpHypercallsTime", 8 },
>> +#if IS_ENABLED(CONFIG_X86_64)
>> +	{ "VpPageInvalidationsCount", 9 },
>> +	{ "VpPageInvalidationsTime", 10 },
>> +	{ "VpControlRegisterAccessesCount", 11 },
>> +	{ "VpControlRegisterAccessesTime", 12 },
>> +	{ "VpIoInstructionsCount", 13 },
>> +	{ "VpIoInstructionsTime", 14 },
>> +	{ "VpHltInstructionsCount", 15 },
>> +	{ "VpHltInstructionsTime", 16 },
>> +	{ "VpMwaitInstructionsCount", 17 },
>> +	{ "VpMwaitInstructionsTime", 18 },
>> +	{ "VpCpuidInstructionsCount", 19 },
>> +	{ "VpCpuidInstructionsTime", 20 },
>> +	{ "VpMsrAccessesCount", 21 },
>> +	{ "VpMsrAccessesTime", 22 },
>> +	{ "VpOtherInterceptsCount", 23 },
>> +	{ "VpOtherInterceptsTime", 24 },
>> +	{ "VpExternalInterruptsCount", 25 },
>> +	{ "VpExternalInterruptsTime", 26 },
>> +	{ "VpPendingInterruptsCount", 27 },
>> +	{ "VpPendingInterruptsTime", 28 },
>> +	{ "VpEmulatedInstructionsCount", 29 },
>> +	{ "VpEmulatedInstructionsTime", 30 },
>> +	{ "VpDebugRegisterAccessesCount", 31 },
>> +	{ "VpDebugRegisterAccessesTime", 32 },
>> +	{ "VpPageFaultInterceptsCount", 33 },
>> +	{ "VpPageFaultInterceptsTime", 34 },
>> +	{ "VpGuestPageTableMaps", 35 },
>> +	{ "VpLargePageTlbFills", 36 },
>> +	{ "VpSmallPageTlbFills", 37 },
>> +	{ "VpReflectedGuestPageFaults", 38 },
>> +	{ "VpApicMmioAccesses", 39 },
>> +	{ "VpIoInterceptMessages", 40 },
>> +	{ "VpMemoryInterceptMessages", 41 },
>> +	{ "VpApicEoiAccesses", 42 },
>> +	{ "VpOtherMessages", 43 },
>> +	{ "VpPageTableAllocations", 44 },
>> +	{ "VpLogicalProcessorMigrations", 45 },
>> +	{ "VpAddressSpaceEvictions", 46 },
>> +	{ "VpAddressSpaceSwitches", 47 },
>> +	{ "VpAddressDomainFlushes", 48 },
>> +	{ "VpAddressSpaceFlushes", 49 },
>> +	{ "VpGlobalGvaRangeFlushes", 50 },
>> +	{ "VpLocalGvaRangeFlushes", 51 },
>> +	{ "VpPageTableEvictions", 52 },
>> +	{ "VpPageTableReclamations", 53 },
>> +	{ "VpPageTableResets", 54 },
>> +	{ "VpPageTableValidations", 55 },
>> +	{ "VpApicTprAccesses", 56 },
>> +	{ "VpPageTableWriteIntercepts", 57 },
>> +	{ "VpSyntheticInterrupts", 58 },
>> +	{ "VpVirtualInterrupts", 59 },
>> +	{ "VpApicIpisSent", 60 },
>> +	{ "VpApicSelfIpisSent", 61 },
>> +	{ "VpGpaSpaceHypercalls", 62 },
>> +	{ "VpLogicalProcessorHypercalls", 63 },
>> +	{ "VpLongSpinWaitHypercalls", 64 },
>> +	{ "VpOtherHypercalls", 65 },
>> +	{ "VpSyntheticInterruptHypercalls", 66 },
>> +	{ "VpVirtualInterruptHypercalls", 67 },
>> +	{ "VpVirtualMmuHypercalls", 68 },
>> +	{ "VpVirtualProcessorHypercalls", 69 },
>> +	{ "VpHardwareInterrupts", 70 },
>> +	{ "VpNestedPageFaultInterceptsCount", 71 },
>> +	{ "VpNestedPageFaultInterceptsTime", 72 },
>> +	{ "VpPageScans", 73 },
>> +	{ "VpLogicalProcessorDispatches", 74 },
>> +	{ "VpWaitingForCpuTime", 75 },
>> +	{ "VpExtendedHypercalls", 76 },
>> +	{ "VpExtendedHypercallInterceptMessages", 77 },
>> +	{ "VpMbecNestedPageTableSwitches", 78 },
>> +	{ "VpOtherReflectedGuestExceptions", 79 },
>> +	{ "VpGlobalIoTlbFlushes", 80 },
>> +	{ "VpGlobalIoTlbFlushCost", 81 },
>> +	{ "VpLocalIoTlbFlushes", 82 },
>> +	{ "VpLocalIoTlbFlushCost", 83 },
>> +	{ "VpHypercallsForwardedCount", 84 },
>> +	{ "VpHypercallsForwardingTime", 85 },
>> +	{ "VpPageInvalidationsForwardedCount", 86 },
>> +	{ "VpPageInvalidationsForwardingTime", 87 },
>> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
>> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
>> +	{ "VpIoInstructionsForwardedCount", 90 },
>> +	{ "VpIoInstructionsForwardingTime", 91 },
>> +	{ "VpHltInstructionsForwardedCount", 92 },
>> +	{ "VpHltInstructionsForwardingTime", 93 },
>> +	{ "VpMwaitInstructionsForwardedCount", 94 },
>> +	{ "VpMwaitInstructionsForwardingTime", 95 },
>> +	{ "VpCpuidInstructionsForwardedCount", 96 },
>> +	{ "VpCpuidInstructionsForwardingTime", 97 },
>> +	{ "VpMsrAccessesForwardedCount", 98 },
>> +	{ "VpMsrAccessesForwardingTime", 99 },
>> +	{ "VpOtherInterceptsForwardedCount", 100 },
>> +	{ "VpOtherInterceptsForwardingTime", 101 },
>> +	{ "VpExternalInterruptsForwardedCount", 102 },
>> +	{ "VpExternalInterruptsForwardingTime", 103 },
>> +	{ "VpPendingInterruptsForwardedCount", 104 },
>> +	{ "VpPendingInterruptsForwardingTime", 105 },
>> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
>> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
>> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
>> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
>> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
>> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
>> +	{ "VpVmclearEmulationCount", 112 },
>> +	{ "VpVmclearEmulationTime", 113 },
>> +	{ "VpVmptrldEmulationCount", 114 },
>> +	{ "VpVmptrldEmulationTime", 115 },
>> +	{ "VpVmptrstEmulationCount", 116 },
>> +	{ "VpVmptrstEmulationTime", 117 },
>> +	{ "VpVmreadEmulationCount", 118 },
>> +	{ "VpVmreadEmulationTime", 119 },
>> +	{ "VpVmwriteEmulationCount", 120 },
>> +	{ "VpVmwriteEmulationTime", 121 },
>> +	{ "VpVmxoffEmulationCount", 122 },
>> +	{ "VpVmxoffEmulationTime", 123 },
>> +	{ "VpVmxonEmulationCount", 124 },
>> +	{ "VpVmxonEmulationTime", 125 },
>> +	{ "VpNestedVMEntriesCount", 126 },
>> +	{ "VpNestedVMEntriesTime", 127 },
>> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
>> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
>> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
>> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
>> +	{ "VpInvEptAllContextEmulationCount", 132 },
>> +	{ "VpInvEptAllContextEmulationTime", 133 },
>> +	{ "VpInvEptSingleContextEmulationCount", 134 },
>> +	{ "VpInvEptSingleContextEmulationTime", 135 },
>> +	{ "VpInvVpidAllContextEmulationCount", 136 },
>> +	{ "VpInvVpidAllContextEmulationTime", 137 },
>> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
>> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
>> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
>> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
>> +	{ "VpNestedTlbPageTableReclamations", 142 },
>> +	{ "VpNestedTlbPageTableEvictions", 143 },
>> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
>> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
>> +	{ "VpPostedInterruptNotifications", 146 },
>> +	{ "VpPostedInterruptScans", 147 },
>> +	{ "VpTotalCoreRunTime", 148 },
>> +	{ "VpMaximumRunTime", 149 },
>> +	{ "VpHwpRequestContextSwitches", 150 },
>> +	{ "VpWaitingForCpuTimeBucket0", 151 },
>> +	{ "VpWaitingForCpuTimeBucket1", 152 },
>> +	{ "VpWaitingForCpuTimeBucket2", 153 },
>> +	{ "VpWaitingForCpuTimeBucket3", 154 },
>> +	{ "VpWaitingForCpuTimeBucket4", 155 },
>> +	{ "VpWaitingForCpuTimeBucket5", 156 },
>> +	{ "VpWaitingForCpuTimeBucket6", 157 },
>> +	{ "VpVmloadEmulationCount", 158 },
>> +	{ "VpVmloadEmulationTime", 159 },
>> +	{ "VpVmsaveEmulationCount", 160 },
>> +	{ "VpVmsaveEmulationTime", 161 },
>> +	{ "VpGifInstructionEmulationCount", 162 },
>> +	{ "VpGifInstructionEmulationTime", 163 },
>> +	{ "VpEmulatedErrataSvmInstructions", 164 },
>> +	{ "VpPlaceholder1", 165 },
>> +	{ "VpPlaceholder2", 166 },
>> +	{ "VpPlaceholder3", 167 },
>> +	{ "VpPlaceholder4", 168 },
>> +	{ "VpPlaceholder5", 169 },
>> +	{ "VpPlaceholder6", 170 },
>> +	{ "VpPlaceholder7", 171 },
>> +	{ "VpPlaceholder8", 172 },
>> +	{ "VpContentionTime", 173 },
>> +	{ "VpWakeUpTime", 174 },
>> +	{ "VpSchedulingPriority", 175 },
>> +	{ "VpRdpmcInstructionsCount", 176 },
>> +	{ "VpRdpmcInstructionsTime", 177 },
>> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
>> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
>> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
>> +	{ "VpPerfmonInterruptCount", 181 },
>> +	{ "VpVtl1DispatchCount", 182 },
>> +	{ "VpVtl2DispatchCount", 183 },
>> +	{ "VpVtl2DispatchBucket0", 184 },
>> +	{ "VpVtl2DispatchBucket1", 185 },
>> +	{ "VpVtl2DispatchBucket2", 186 },
>> +	{ "VpVtl2DispatchBucket3", 187 },
>> +	{ "VpVtl2DispatchBucket4", 188 },
>> +	{ "VpVtl2DispatchBucket5", 189 },
>> +	{ "VpVtl2DispatchBucket6", 190 },
>> +	{ "VpVtl1RunTime", 191 },
>> +	{ "VpVtl2RunTime", 192 },
>> +	{ "VpIommuHypercalls", 193 },
>> +	{ "VpCpuGroupHypercalls", 194 },
>> +	{ "VpVsmHypercalls", 195 },
>> +	{ "VpEventLogHypercalls", 196 },
>> +	{ "VpDeviceDomainHypercalls", 197 },
>> +	{ "VpDepositHypercalls", 198 },
>> +	{ "VpSvmHypercalls", 199 },
>> +	{ "VpBusLockAcquisitionCount", 200 },
>> +	{ "VpLoadAvg", 201 },
>> +	{ "VpRootDispatchThreadBlocked", 202 },
>> +	{ "VpIdleCpuTime", 203 },
>> +	{ "VpWaitingForCpuTimeBucket7", 204 },
>> +	{ "VpWaitingForCpuTimeBucket8", 205 },
>> +	{ "VpWaitingForCpuTimeBucket9", 206 },
>> +	{ "VpWaitingForCpuTimeBucket10", 207 },
>> +	{ "VpWaitingForCpuTimeBucket11", 208 },
>> +	{ "VpWaitingForCpuTimeBucket12", 209 },
>> +	{ "VpHierarchicalSuspendTime", 210 },
>> +	{ "VpExpressSchedulingAttempts", 211 },
>> +	{ "VpExpressSchedulingCount", 212 },
>> +	{ "VpBusLockAcquisitionTime", 213 },
>> +#elif IS_ENABLED(CONFIG_ARM64)
>> +	{ "VpSysRegAccessesCount", 9 },
>> +	{ "VpSysRegAccessesTime", 10 },
>> +	{ "VpSmcInstructionsCount", 11 },
>> +	{ "VpSmcInstructionsTime", 12 },
>> +	{ "VpOtherInterceptsCount", 13 },
>> +	{ "VpOtherInterceptsTime", 14 },
>> +	{ "VpExternalInterruptsCount", 15 },
>> +	{ "VpExternalInterruptsTime", 16 },
>> +	{ "VpPendingInterruptsCount", 17 },
>> +	{ "VpPendingInterruptsTime", 18 },
>> +	{ "VpGuestPageTableMaps", 19 },
>> +	{ "VpLargePageTlbFills", 20 },
>> +	{ "VpSmallPageTlbFills", 21 },
>> +	{ "VpReflectedGuestPageFaults", 22 },
>> +	{ "VpMemoryInterceptMessages", 23 },
>> +	{ "VpOtherMessages", 24 },
>> +	{ "VpLogicalProcessorMigrations", 25 },
>> +	{ "VpAddressDomainFlushes", 26 },
>> +	{ "VpAddressSpaceFlushes", 27 },
>> +	{ "VpSyntheticInterrupts", 28 },
>> +	{ "VpVirtualInterrupts", 29 },
>> +	{ "VpApicSelfIpisSent", 30 },
>> +	{ "VpGpaSpaceHypercalls", 31 },
>> +	{ "VpLogicalProcessorHypercalls", 32 },
>> +	{ "VpLongSpinWaitHypercalls", 33 },
>> +	{ "VpOtherHypercalls", 34 },
>> +	{ "VpSyntheticInterruptHypercalls", 35 },
>> +	{ "VpVirtualInterruptHypercalls", 36 },
>> +	{ "VpVirtualMmuHypercalls", 37 },
>> +	{ "VpVirtualProcessorHypercalls", 38 },
>> +	{ "VpHardwareInterrupts", 39 },
>> +	{ "VpNestedPageFaultInterceptsCount", 40 },
>> +	{ "VpNestedPageFaultInterceptsTime", 41 },
>> +	{ "VpLogicalProcessorDispatches", 42 },
>> +	{ "VpWaitingForCpuTime", 43 },
>> +	{ "VpExtendedHypercalls", 44 },
>> +	{ "VpExtendedHypercallInterceptMessages", 45 },
>> +	{ "VpMbecNestedPageTableSwitches", 46 },
>> +	{ "VpOtherReflectedGuestExceptions", 47 },
>> +	{ "VpGlobalIoTlbFlushes", 48 },
>> +	{ "VpGlobalIoTlbFlushCost", 49 },
>> +	{ "VpLocalIoTlbFlushes", 50 },
>> +	{ "VpLocalIoTlbFlushCost", 51 },
>> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
>> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
>> +	{ "VpPostedInterruptNotifications", 54 },
>> +	{ "VpPostedInterruptScans", 55 },
>> +	{ "VpTotalCoreRunTime", 56 },
>> +	{ "VpMaximumRunTime", 57 },
>> +	{ "VpWaitingForCpuTimeBucket0", 58 },
>> +	{ "VpWaitingForCpuTimeBucket1", 59 },
>> +	{ "VpWaitingForCpuTimeBucket2", 60 },
>> +	{ "VpWaitingForCpuTimeBucket3", 61 },
>> +	{ "VpWaitingForCpuTimeBucket4", 62 },
>> +	{ "VpWaitingForCpuTimeBucket5", 63 },
>> +	{ "VpWaitingForCpuTimeBucket6", 64 },
>> +	{ "VpHwpRequestContextSwitches", 65 },
>> +	{ "VpPlaceholder2", 66 },
>> +	{ "VpPlaceholder3", 67 },
>> +	{ "VpPlaceholder4", 68 },
>> +	{ "VpPlaceholder5", 69 },
>> +	{ "VpPlaceholder6", 70 },
>> +	{ "VpPlaceholder7", 71 },
>> +	{ "VpPlaceholder8", 72 },
>> +	{ "VpContentionTime", 73 },
>> +	{ "VpWakeUpTime", 74 },
>> +	{ "VpSchedulingPriority", 75 },
>> +	{ "VpVtl1DispatchCount", 76 },
>> +	{ "VpVtl2DispatchCount", 77 },
>> +	{ "VpVtl2DispatchBucket0", 78 },
>> +	{ "VpVtl2DispatchBucket1", 79 },
>> +	{ "VpVtl2DispatchBucket2", 80 },
>> +	{ "VpVtl2DispatchBucket3", 81 },
>> +	{ "VpVtl2DispatchBucket4", 82 },
>> +	{ "VpVtl2DispatchBucket5", 83 },
>> +	{ "VpVtl2DispatchBucket6", 84 },
>> +	{ "VpVtl1RunTime", 85 },
>> +	{ "VpVtl2RunTime", 86 },
>> +	{ "VpIommuHypercalls", 87 },
>> +	{ "VpCpuGroupHypercalls", 88 },
>> +	{ "VpVsmHypercalls", 89 },
>> +	{ "VpEventLogHypercalls", 90 },
>> +	{ "VpDeviceDomainHypercalls", 91 },
>> +	{ "VpDepositHypercalls", 92 },
>> +	{ "VpSvmHypercalls", 93 },
>> +	{ "VpLoadAvg", 94 },
>> +	{ "VpRootDispatchThreadBlocked", 95 },
>> +	{ "VpIdleCpuTime", 96 },
>> +	{ "VpWaitingForCpuTimeBucket7", 97 },
>> +	{ "VpWaitingForCpuTimeBucket8", 98 },
>> +	{ "VpWaitingForCpuTimeBucket9", 99 },
>> +	{ "VpWaitingForCpuTimeBucket10", 100 },
>> +	{ "VpWaitingForCpuTimeBucket11", 101 },
>> +	{ "VpWaitingForCpuTimeBucket12", 102 },
>> +	{ "VpHierarchicalSuspendTime", 103 },
>> +	{ "VpExpressSchedulingAttempts", 104 },
>> +	{ "VpExpressSchedulingCount", 105 },
>> +#endif
>> +};
>> +
> 
> The patch puts a blank line at the end of the new hv_counters.c file. When using
> "git am" to apply this patch, I get this warning:
> 
> .git/rebase-apply/patch:499: new blank line at EOF.
> +
> warning: 1 line adds whitespace errors.
> 
> Line 499 is that blank line at the end of the new file. If I modify the patch to remove
> the adding of the blank line, "git am" will apply the patch with no warning. This
> should probably be fixed.
> 
Thanks for pointing that out, I'll fix it!

> Michael


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-23 19:04     ` Nuno Das Neves
@ 2026-01-23 19:10       ` Michael Kelley
  2026-01-23 22:31       ` Stanislav Kinsburskii
  1 sibling, 0 replies; 22+ messages in thread
From: Michael Kelley @ 2026-01-23 19:10 UTC (permalink / raw)
  To: Nuno Das Neves, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, skinsburskii@linux.microsoft.com
  Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com

From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Friday, January 23, 2026 11:05 AM
> 
> On 1/23/2026 9:09 AM, Michael Kelley wrote:
> > From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
> >>
> >> Introduce hv_counters.c, containing static data corresponding to
> >> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> >> members as an array instead makes more sense, since it will be
> >> iterated over to print counter information to debugfs.
> >
> > I would have expected the filename to be mshv_counters.c, so that the association
> > with the MS hypervisor is clear. And the file is inextricably linked to mshv_debugfs.c,
> > which of course has the "mshv_" prefix. Or is there some thinking I'm not aware of
> > for using the "hv_" prefix?
> >
> Good question - I originally thought of using hv_ because the definitions inside are
> part of the hypervisor ABI, and hence also have the hv_ prefix.
> 
> However you have a good point, and I'm not opposed to changing it.
> 
> Maybe to just be super explicit: "mshv_debugfs_counters.c" ?

That sounds good to me.

Michael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics
  2026-01-23 17:09   ` Michael Kelley
@ 2026-01-23 21:11     ` Nuno Das Neves
  2026-01-24  4:14       ` Michael Kelley
  0 siblings, 1 reply; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-23 21:11 UTC (permalink / raw)
  To: Michael Kelley, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, skinsburskii@linux.microsoft.com
  Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com, Jinank Jain

On 1/23/2026 9:09 AM, Michael Kelley wrote:
> From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
>>
>> Introduce a debugfs interface to expose root and child partition stats
>> when running with mshv_root.
>>
>> Create a debugfs directory "mshv" containing 'stats' files organized by
>> type and id. A stats file contains a number of counters depending on
>> its type. e.g. an excerpt from a VP stats file:
>>
>> TotalRunTime                  : 1997602722
>> HypervisorRunTime             : 649671371
>> RemoteNodeRunTime             : 0
>> NormalizedRunTime             : 1997602721
>> IdealCpu                      : 0
>> HypercallsCount               : 1708169
>> HypercallsTime                : 111914774
>> PageInvalidationsCount        : 0
>> PageInvalidationsTime         : 0
>>
>> On a root partition with some active child partitions, the entire
>> directory structure may look like:
>>
>> mshv/
>>   stats             # hypervisor stats
>>   lp/               # logical processors
>>     0/              # LP id
>>       stats         # LP 0 stats
>>     1/
>>     2/
>>     3/
>>   partition/        # partition stats
>>     1/              # root partition id
>>       stats         # root partition stats
>>       vp/           # root virtual processors
>>         0/          # root VP id
>>           stats     # root VP 0 stats
>>         1/
>>         2/
>>         3/
>>     42/             # child partition id
>>       stats         # child partition stats
>>       vp/           # child VPs
>>         0/          # child VP id
>>           stats     # child VP 0 stats
>>         1/
>>     43/
>>     55/
>>
>> On L1VH, some stats are not present as it does not own the hardware
>> like the root partition does:
>> - The hypervisor and lp stats are not present
>> - L1VH's partition directory is named "self" because it can't get its
>>   own id
>> - Some of L1VH's partition and VP stats fields are not populated, because
>>   it can't map its own HV_STATS_AREA_PARENT page.
>>
>> Co-developed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
>> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
>> Co-developed-by: Praveen K Paladugu <prapal@linux.microsoft.com>
>> Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
>> Co-developed-by: Mukesh Rathor <mrathor@linux.microsoft.com>
>> Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com>
>> Co-developed-by: Purna Pavan Chandra Aekkaladevi
>> <paekkaladevi@linux.microsoft.com>
>> Signed-off-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>
>> Co-developed-by: Jinank Jain <jinankjain@microsoft.com>
>> Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
>> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
>> Reviewed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
>> ---
>>  drivers/hv/Makefile         |   1 +
>>  drivers/hv/hv_counters.c    |   1 +
>>  drivers/hv/hv_synic.c       | 177 +++++++++
> 
> This new file hv_synic.c seems to be spurious.  It looks like you unintentionally
> picked up this new file from the build tree where you were creating the patches
> for this series.
> 

Oh, that's embarrassing! Yes, it's a half-baked, unrelated work-in-progress...
Please ignore!

<snip>
>> diff --git a/drivers/hv/mshv_debugfs.c b/drivers/hv/mshv_debugfs.c
>> new file mode 100644
>> index 000000000000..72eb0ae44e4b
>> --- /dev/null
>> +++ b/drivers/hv/mshv_debugfs.c
>> @@ -0,0 +1,703 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Copyright (c) 2026, Microsoft Corporation.
>> + *
>> + * The /sys/kernel/debug/mshv directory contents.
>> + * Contains various statistics data, provided by the hypervisor.
>> + *
>> + * Authors: Microsoft Linux virtualization team
>> + */
>> +
>> +#include <linux/debugfs.h>
>> +#include <linux/stringify.h>
>> +#include <asm/mshyperv.h>
>> +#include <linux/slab.h>
>> +
>> +#include "mshv.h"
>> +#include "mshv_root.h"
>> +
>> +#include "hv_counters.c"
>> +
>> +#define U32_BUF_SZ 11
>> +#define U64_BUF_SZ 21
>> +#define NUM_STATS_AREAS (HV_STATS_AREA_PARENT + 1)
> 
> This is sort of weak in that it doesn't really guard against
> changes in the enum that defines HV_STATS_AREA_PARENT.
> It would work if it were defined as part of the enum, but then
> you are changing the code coming from the Windows world,
> which I know is a different problem.
> 
> The enum is part of the hypervisor ABI and hence isn't likely to
> change, but it still feels funny to define NUM_STATS_AREAS like
> this. I would suggest dropping this and just using
> HV_STATS_AREA_COUNT for the memory allocations even
> though doing so will allocate space for a stats area pointer
> that isn't used by this code. It's only a few bytes.
> 

That would work, but then I'd want to have a comment explaining
that the decision is intentional, otherwise I think it's just as
confusing to have unexplained wasted space.

Alternatively, the usage of SELF and PARENT (but not INTERNAL)
could be made explicit by a compile-time check, and a comment to
clarify:

/* Only support SELF and PARENT areas */
#define NUM_STATS_AREAS 2
static_assert(HV_STATS_AREA_SELF == 0 && HV_STATS_AREA_PARENT == 1,
	      "SELF and PARENT areas must be usable as indices into an array of size NUM_STATS_AREAS");

>> +
>> +static struct dentry *mshv_debugfs;
>> +static struct dentry *mshv_debugfs_partition;
>> +static struct dentry *mshv_debugfs_lp;
>> +static struct dentry **parent_vp_stats;
>> +static struct dentry *parent_partition_stats;
>> +
>> +static u64 mshv_lps_count;
>> +static struct hv_stats_page **mshv_lps_stats;
>> +
>> +static int lp_stats_show(struct seq_file *m, void *v)
>> +{
>> +	const struct hv_stats_page *stats = m->private;
>> +	struct hv_counter_entry *entry = hv_lp_counters;
>> +	int i;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(hv_lp_counters); i++, entry++)
>> +		seq_printf(m, "%-29s: %llu\n", entry->name,
>> +			   stats->data[entry->idx]);
>> +
>> +	return 0;
>> +}
>> +DEFINE_SHOW_ATTRIBUTE(lp_stats);
>> +
>> +static void mshv_lp_stats_unmap(u32 lp_index)
>> +{
>> +	union hv_stats_object_identity identity = {
>> +		.lp.lp_index = lp_index,
>> +		.lp.stats_area_type = HV_STATS_AREA_SELF,
>> +	};
>> +	int err;
>> +
>> +	err = hv_unmap_stats_page(HV_STATS_OBJECT_LOGICAL_PROCESSOR,
>> +				  mshv_lps_stats[lp_index], &identity);
>> +	if (err)
>> +		pr_err("%s: failed to unmap logical processor %u stats, err: %d\n",
>> +		       __func__, lp_index, err);
> 
> Perhaps set mshv_lps_stats[lp_index] to NULL?  I don't think it's actually
> required, but similar code later in this file sets some pointers to NULL
> just as good hygiene.
> 

Good idea, I'll do that.

>> +}
>> +
<snip>
>> +
>> +static int __init mshv_debugfs_lp_create(struct dentry *parent)
>> +{
>> +	struct dentry *lp_dir;
>> +	int err, lp_index;
>> +
>> +	mshv_lps_stats = kcalloc(mshv_lps_count,
>> +				 sizeof(*mshv_lps_stats),
>> +				 GFP_KERNEL_ACCOUNT);
>> +
>> +	if (!mshv_lps_stats)
>> +		return -ENOMEM;
>> +
>> +	lp_dir = debugfs_create_dir("lp", parent);
>> +	if (IS_ERR(lp_dir)) {
>> +		err = PTR_ERR(lp_dir);
>> +		goto free_lp_stats;
>> +	}
>> +
>> +	for (lp_index = 0; lp_index < mshv_lps_count; lp_index++) {
>> +		err = lp_debugfs_create(lp_index, lp_dir);
>> +		if (err)
>> +			goto remove_debugfs_lps;
>> +	}
>> +
>> +	mshv_debugfs_lp = lp_dir;
>> +
>> +	return 0;
>> +
>> +remove_debugfs_lps:
>> +	for (lp_index -= 1; lp_index >= 0; lp_index--)
>> +		mshv_lp_stats_unmap(lp_index);
>> +	debugfs_remove_recursive(lp_dir);
>> +free_lp_stats:
>> +	kfree(mshv_lps_stats);
> 
> Set mshv_lps_stats to NULL?
> 

Agreed, thanks.

>> +
>> +	return err;
>> +}
>> +
<snip>
>> +
>> +static void mshv_debugfs_parent_partition_remove(void)
>> +{
>> +	int idx;
>> +
>> +	for_each_online_cpu(idx)
>> +		parent_vp_debugfs_remove(idx,
> 
> The first parameter here ("idx") should be translated through the
> hv_vp_index[] array like is done in mshv_debugfs_parent_partition_create().
> 

Ok, thanks

>> +					 parent_vp_stats[idx]);
>> +
>> +	partition_debugfs_remove(hv_current_partition_id,
>> +				 parent_partition_stats);
>> +	kfree(parent_vp_stats);
>> +	parent_vp_stats = NULL;
>> +	parent_partition_stats = NULL;
>> +
> 
> Extra blank line.
> 

Ack

>> +}
>> +
>> +static int __init parent_vp_debugfs_create(u32 vp_index,
>> +					   struct dentry **vp_stats_ptr,
>> +					   struct dentry *parent)
>> +{
>> +	struct hv_stats_page **pstats;
>> +	int err;
>> +
>> +	pstats = kcalloc(2, sizeof(struct hv_stats_page *), GFP_KERNEL_ACCOUNT);
> 
> Another case of using "2" that should be changed.
>

Ack

>> +	if (!pstats)
>> +		return -ENOMEM;
>> +
>> +	err = mshv_vp_stats_map(hv_current_partition_id, vp_index, pstats);
>> +	if (err)
>> +		goto cleanup;
>> +
>> +	err = vp_debugfs_create(hv_current_partition_id, vp_index, pstats,
>> +				vp_stats_ptr, parent);
>> +	if (err)
>> +		goto unmap_vp_stats;
>> +
>> +	return 0;
>> +
>> +unmap_vp_stats:
>> +	mshv_vp_stats_unmap(hv_current_partition_id, vp_index, pstats);
>> +cleanup:
>> +	kfree(pstats);
>> +	return err;
>> +}
>> +
>> +static int __init mshv_debugfs_parent_partition_create(void)
>> +{
>> +	struct dentry *vp_dir;
>> +	int err, idx, i;
>> +
>> +	mshv_debugfs_partition = debugfs_create_dir("partition",
>> +						     mshv_debugfs);
>> +	if (IS_ERR(mshv_debugfs_partition))
>> +		return PTR_ERR(mshv_debugfs_partition);
>> +
>> +	err = partition_debugfs_create(hv_current_partition_id,
>> +				       &vp_dir,
>> +				       &parent_partition_stats,
>> +				       mshv_debugfs_partition);
>> +	if (err)
>> +		goto remove_debugfs_partition;
>> +
>> +	parent_vp_stats = kcalloc(num_possible_cpus(),
> 
> num_possible_cpus() should not be used to allocate an array that is
> then indexed by the Linux CPU number. Use nr_cpu_ids instead when
> allocating the array. See commit 16b18fdf6bc7 for the full explanation.
> As explained in that commit message, using num_possible_cpus()
> doesn't break things now, but it might in the future.
> 
Thanks, will do

>> +				  sizeof(*parent_vp_stats),
>> +				  GFP_KERNEL);
>> +	if (!parent_vp_stats) {
>> +		err = -ENOMEM;
>> +		goto remove_debugfs_partition;
>> +	}
>> +
>> +	for_each_online_cpu(idx) {
>> +		err = parent_vp_debugfs_create(hv_vp_index[idx],
>> +					       &parent_vp_stats[idx],
>> +					       vp_dir);
>> +		if (err)
>> +			goto remove_debugfs_partition_vp;
>> +	}
>> +
>> +	return 0;
>> +
>> +remove_debugfs_partition_vp:
>> +	for_each_online_cpu(i) {
>> +		if (i >= idx)
>> +			break;
>> +		parent_vp_debugfs_remove(i, parent_vp_stats[i]);
>> +	}
>> +	partition_debugfs_remove(hv_current_partition_id,
>> +				 parent_partition_stats);
>> +
>> +	kfree(parent_vp_stats);
>> +	parent_vp_stats = NULL;
>> +	parent_partition_stats = NULL;
>> +
>> +remove_debugfs_partition:
>> +	debugfs_remove_recursive(mshv_debugfs_partition);
>> +	mshv_debugfs_partition = NULL;
>> +	return err;
>> +}
>> +
>> +static int hv_stats_show(struct seq_file *m, void *v)
>> +{
>> +	const struct hv_stats_page *stats = m->private;
>> +	struct hv_counter_entry *entry = hv_hypervisor_counters;
>> +	int i;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(hv_hypervisor_counters); i++, entry++)
>> +		seq_printf(m, "%-25s: %llu\n", entry->name,
>> +			   stats->data[entry->idx]);
>> +
>> +	return 0;
>> +}
>> +DEFINE_SHOW_ATTRIBUTE(hv_stats);
>> +
>> +static void mshv_hv_stats_unmap(void)
>> +{
>> +	union hv_stats_object_identity identity = {
>> +		.hv.stats_area_type = HV_STATS_AREA_SELF,
>> +	};
>> +	int err;
>> +
>> +	err = hv_unmap_stats_page(HV_STATS_OBJECT_HYPERVISOR, NULL, &identity);
>> +	if (err)
>> +		pr_err("%s: failed to unmap hypervisor stats: %d\n",
>> +		       __func__, err);
>> +}
>> +
>> +static void * __init mshv_hv_stats_map(void)
>> +{
>> +	union hv_stats_object_identity identity = {
>> +		.hv.stats_area_type = HV_STATS_AREA_SELF,
>> +	};
>> +	struct hv_stats_page *stats;
>> +	int err;
>> +
>> +	err = hv_map_stats_page(HV_STATS_OBJECT_HYPERVISOR, &identity, &stats);
>> +	if (err) {
>> +		pr_err("%s: failed to map hypervisor stats: %d\n",
>> +		       __func__, err);
>> +		return ERR_PTR(err);
>> +	}
>> +	return stats;
>> +}
>> +
>> +static int __init mshv_debugfs_hv_stats_create(struct dentry *parent)
>> +{
>> +	struct dentry *dentry;
>> +	u64 *stats;
>> +	int err;
>> +
>> +	stats = mshv_hv_stats_map();
>> +	if (IS_ERR(stats))
>> +		return PTR_ERR(stats);
>> +
>> +	dentry = debugfs_create_file("stats", 0400, parent,
>> +				     stats, &hv_stats_fops);
>> +	if (IS_ERR(dentry)) {
>> +		err = PTR_ERR(dentry);
>> +		pr_err("%s: failed to create hypervisor stats dentry: %d\n",
>> +		       __func__, err);
>> +		goto unmap_hv_stats;
>> +	}
>> +
>> +	mshv_lps_count = num_present_cpus();
> 
> This method of setting mshv_lps_count, and the iteration through the lp_index
> in mshv_debugfs_lp_create() and mshv_debugfs_lp_remove(), seems risky. The
> lp_index gets passed to the hypervisor, so it must be the hypervisor's concept
> of the lp_index. Is that always guaranteed to be the same as Linux's numbering
> of the present CPUs? There may be edge cases where it is not. For example, what
> if Linux in the root partition were booted with the "nosmt" kernel boot option,
> such that Linux ignores all the 2nd hyper-threads in a core? Could that create
> a numbering mismatch?
> 

Ah, this previously came from the hypervisor stats page, via the
HvLogicalProcessors counter. But I removed the enum, so I thought
num_present_cpus() would be a reasonable way to get the number of LPs;
I think I was mistaken.

For context, there is a fix to how LP and VP numbers are assigned in
hv_smp_prepare_cpus(), but it's part of a future patchset. That fix ensures the
LP indices are dense. The code looks like:

	/* create dense LPs from 0-N for all apicids */
        i = next_smallest_apicid(apicids, 0);
        for (lpidx = 1; i != INT_MAX; lpidx++) {
                node = __apicid_to_node[i];
                if (node == NUMA_NO_NODE)
                        node = 0;

                /* params: node num, lp index, apic id */
                ret = hv_call_add_logical_proc(node, lpidx, i);
                BUG_ON(ret);

                i = next_smallest_apicid(apicids, i);
        }

	/* create a VP for each present CPU */
        lpidx = 1;         /* skip BSP cpu 0 */
        for_each_present_cpu(i) {
                if (i == 0)
                        continue;

                /* params: node num, domid, vp index, lp index */
                ret = hv_call_create_vp(numa_cpu_node(i),
                                        hv_current_partition_id, lpidx, lpidx);
                BUG_ON(ret);
                lpidx++;
        }

For what it's worth, with that fix in place I tested with "nosmt" and things
worked as I would expect: all LPs were displayed in debugfs, but every second
LP was not in use by Linux, as evidenced by e.g. the number of timer
interrupts not going up:
LpTimerInterrupts            : 1

Also, only every second VP was created (0, 2, 4, 6...) since the others aren't
in the present mask at boot.

> Note that for vp_index, we have the hv_vp_index[] array for translating from
> Linux's concept of a CPU number to Hyper-V's concept of vp_index. For
> example, mshv_debugfs_parent_partition_create() correctly goes through
> this translation. And presumably when the VMM code does the
> MSHV_CREATE_VP ioctl, it is passing in a hypervisor vp_index.
> 
> Everything may work fine "as is" for the moment, but the lp functions here
> are still conflating the hypervisor's LP numbering with Linux's CPU numbering,
> and that seems like a recipe for trouble somewhere down the road. I'm
> not sure how the hypervisor interprets the "lp_index" part of the identity
> argument passed to a hypercall, so I'm not sure what the fix is.
> 

The simplest thing for now might be to bring back that enum value
HvLogicalProcessors just for this one usage. I'll admit I'm not familiar with
all the nuances, so there are probably still some edge cases.

>> +
>> +	return 0;
>> +
>> +unmap_hv_stats:
>> +	mshv_hv_stats_unmap();
>> +	return err;
>> +}
>> +
<snip>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-22 18:21     ` Nuno Das Neves
  2026-01-22 18:52       ` Michael Kelley
@ 2026-01-23 22:28       ` Stanislav Kinsburskii
  1 sibling, 0 replies; 22+ messages in thread
From: Stanislav Kinsburskii @ 2026-01-23 22:28 UTC (permalink / raw)
  To: Nuno Das Neves
  Cc: linux-hyperv, linux-kernel, mhklinux, kys, haiyangz, wei.liu,
	decui, longli, prapal, mrathor, paekkaladevi

On Thu, Jan 22, 2026 at 10:21:17AM -0800, Nuno Das Neves wrote:
> On 1/21/2026 5:18 PM, Stanislav Kinsburskii wrote:
> > On Wed, Jan 21, 2026 at 01:46:22PM -0800, Nuno Das Neves wrote:
> >> Introduce hv_counters.c, containing static data corresponding to
> >> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> >> members as an array instead makes more sense, since it will be
> >> iterated over to print counter information to debugfs.
> >>
> >> Include hypervisor, logical processor, partition, and virtual
> >> processor counters.
> >>
> >> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> >> ---
> >>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 488 insertions(+)
> >>  create mode 100644 drivers/hv/hv_counters.c
> >>
> >> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
> >> new file mode 100644
> >> index 000000000000..a8e07e72cc29
> >> --- /dev/null
> >> +++ b/drivers/hv/hv_counters.c
> >> @@ -0,0 +1,488 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/*
> >> + * Copyright (c) 2026, Microsoft Corporation.
> >> + *
> >> + * Data for printing stats page counters via debugfs.
> >> + *
> >> + * Authors: Microsoft Linux virtualization team
> >> + */
> >> +
> >> +struct hv_counter_entry {
> >> +	char *name;
> >> +	int idx;
> >> +};
> > 
> > This structure looks redundant to me mostly because of the "idx".
> > It looks what you need here is an arry of pointers to strings, like
> > below:
> > 
> > static const char *hv_hypervisor_counters[] = {
> >         NULL, /* 0 is unused */
> > 	"HvLogicalProcessors",
> > 	"HvPartitions",
> > 	"HvTotalPages",
> > 	"HvVirtualProcessors",
> > 	"HvMonitoredNotifications",
> > 	"HvModernStandbyEntries",
> > 	"HvPlatformIdleTransitions",
> > 	"HvHypervisorStartupCost",
> > 	NULL, /* 9 is unused */
> > 	"HvIOSpacePages",
> > 	...
> > };
> > 
> > which can be iterated like this:
> > 
> > for (idx = 0; idx < ARRAY_SIZE(hv_hypervisor_counters); idx++) {
> >     const char *name = hv_hypervisor_counters[idx];
> >     if (!name)
> > 	continue;
> >     /* print */
> >     ...
> > }
> > 
> > What do you think?
> 
> It's an elegant option, given the values are almost uniformly
> tightly packed. It also saves a fair bit of space - around 2.5Kb.
> 
> For my taste, I do like being able to visually verify the
> correctness of any given member. That way whenever I look at it, I
> don't have to blindly trust that the list was previously set up
> correctly, or count the lines to check if a given value is correct.
> Not a big deal, but it does introduce some friction.
> 
> We could also use a designated initializer list:
> 
> static const char *hv_hypervisor_counters[] = {
> 	[1] = "HvLogicalProcessors",
> 	[2] = "HvPartitions",
> 	[3] = "HvTotalPages",
> 	[4] = "HvVirtualProcessors",
> 	[5] = "HvMonitoredNotifications",
> 	[6] = "HvModernStandbyEntries",
> 	[7] = "HvPlatformIdleTransitions",
> 	[8] = "HvHypervisorStartupCost",
> 
> 	[10] = "HvIOSpacePages",
> 	...
> };
> 
> The indices are explicit, so it's easy to visually verify that any
> particular part of the list is correct. It's functionally identical
> to your approach, so it saves the same amount of space. And since the
> explicit NULLs are unnecessary, it's more straightforward to transform
> from the Windows source, especially for gaps later in the list that
> would otherwise be easy to miss.
> 
> How does that sound?
> 

Fine by me.

Thanks,
Stanislav

> Nuno
> 
> > 
> > Thanks,
> > Stanislav
> > 
> >> +
> >> +/* HV_HYPERVISOR_COUNTER */
> >> +static struct hv_counter_entry hv_hypervisor_counters[] = {
> >> +	{ "HvLogicalProcessors", 1 },
> >> +	{ "HvPartitions", 2 },
> >> +	{ "HvTotalPages", 3 },
> >> +	{ "HvVirtualProcessors", 4 },
> >> +	{ "HvMonitoredNotifications", 5 },
> >> +	{ "HvModernStandbyEntries", 6 },
> >> +	{ "HvPlatformIdleTransitions", 7 },
> >> +	{ "HvHypervisorStartupCost", 8 },
> >> +
> >> +	{ "HvIOSpacePages", 10 },
> >> +	{ "HvNonEssentialPagesForDump", 11 },
> >> +	{ "HvSubsumedPages", 12 },
> >> +};
> >> +
> >> +/* HV_CPU_COUNTER */
> >> +static struct hv_counter_entry hv_lp_counters[] = {
> >> +	{ "LpGlobalTime", 1 },
> >> +	{ "LpTotalRunTime", 2 },
> >> +	{ "LpHypervisorRunTime", 3 },
> >> +	{ "LpHardwareInterrupts", 4 },
> >> +	{ "LpContextSwitches", 5 },
> >> +	{ "LpInterProcessorInterrupts", 6 },
> >> +	{ "LpSchedulerInterrupts", 7 },
> >> +	{ "LpTimerInterrupts", 8 },
> >> +	{ "LpInterProcessorInterruptsSent", 9 },
> >> +	{ "LpProcessorHalts", 10 },
> >> +	{ "LpMonitorTransitionCost", 11 },
> >> +	{ "LpContextSwitchTime", 12 },
> >> +	{ "LpC1TransitionsCount", 13 },
> >> +	{ "LpC1RunTime", 14 },
> >> +	{ "LpC2TransitionsCount", 15 },
> >> +	{ "LpC2RunTime", 16 },
> >> +	{ "LpC3TransitionsCount", 17 },
> >> +	{ "LpC3RunTime", 18 },
> >> +	{ "LpRootVpIndex", 19 },
> >> +	{ "LpIdleSequenceNumber", 20 },
> >> +	{ "LpGlobalTscCount", 21 },
> >> +	{ "LpActiveTscCount", 22 },
> >> +	{ "LpIdleAccumulation", 23 },
> >> +	{ "LpReferenceCycleCount0", 24 },
> >> +	{ "LpActualCycleCount0", 25 },
> >> +	{ "LpReferenceCycleCount1", 26 },
> >> +	{ "LpActualCycleCount1", 27 },
> >> +	{ "LpProximityDomainId", 28 },
> >> +	{ "LpPostedInterruptNotifications", 29 },
> >> +	{ "LpBranchPredictorFlushes", 30 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "LpL1DataCacheFlushes", 31 },
> >> +	{ "LpImmediateL1DataCacheFlushes", 32 },
> >> +	{ "LpMbFlushes", 33 },
> >> +	{ "LpCounterRefreshSequenceNumber", 34 },
> >> +	{ "LpCounterRefreshReferenceTime", 35 },
> >> +	{ "LpIdleAccumulationSnapshot", 36 },
> >> +	{ "LpActiveTscCountSnapshot", 37 },
> >> +	{ "LpHwpRequestContextSwitches", 38 },
> >> +	{ "LpPlaceholder1", 39 },
> >> +	{ "LpPlaceholder2", 40 },
> >> +	{ "LpPlaceholder3", 41 },
> >> +	{ "LpPlaceholder4", 42 },
> >> +	{ "LpPlaceholder5", 43 },
> >> +	{ "LpPlaceholder6", 44 },
> >> +	{ "LpPlaceholder7", 45 },
> >> +	{ "LpPlaceholder8", 46 },
> >> +	{ "LpPlaceholder9", 47 },
> >> +	{ "LpSchLocalRunListSize", 48 },
> >> +	{ "LpReserveGroupId", 49 },
> >> +	{ "LpRunningPriority", 50 },
> >> +	{ "LpPerfmonInterruptCount", 51 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "LpCounterRefreshSequenceNumber", 31 },
> >> +	{ "LpCounterRefreshReferenceTime", 32 },
> >> +	{ "LpIdleAccumulationSnapshot", 33 },
> >> +	{ "LpActiveTscCountSnapshot", 34 },
> >> +	{ "LpHwpRequestContextSwitches", 35 },
> >> +	{ "LpPlaceholder2", 36 },
> >> +	{ "LpPlaceholder3", 37 },
> >> +	{ "LpPlaceholder4", 38 },
> >> +	{ "LpPlaceholder5", 39 },
> >> +	{ "LpPlaceholder6", 40 },
> >> +	{ "LpPlaceholder7", 41 },
> >> +	{ "LpPlaceholder8", 42 },
> >> +	{ "LpPlaceholder9", 43 },
> >> +	{ "LpSchLocalRunListSize", 44 },
> >> +	{ "LpReserveGroupId", 45 },
> >> +	{ "LpRunningPriority", 46 },
> >> +#endif
> >> +};
> >> +
> >> +/* HV_PROCESS_COUNTER */
> >> +static struct hv_counter_entry hv_partition_counters[] = {
> >> +	{ "PtVirtualProcessors", 1 },
> >> +
> >> +	{ "PtTlbSize", 3 },
> >> +	{ "PtAddressSpaces", 4 },
> >> +	{ "PtDepositedPages", 5 },
> >> +	{ "PtGpaPages", 6 },
> >> +	{ "PtGpaSpaceModifications", 7 },
> >> +	{ "PtVirtualTlbFlushEntires", 8 },
> >> +	{ "PtRecommendedTlbSize", 9 },
> >> +	{ "PtGpaPages4K", 10 },
> >> +	{ "PtGpaPages2M", 11 },
> >> +	{ "PtGpaPages1G", 12 },
> >> +	{ "PtGpaPages512G", 13 },
> >> +	{ "PtDevicePages4K", 14 },
> >> +	{ "PtDevicePages2M", 15 },
> >> +	{ "PtDevicePages1G", 16 },
> >> +	{ "PtDevicePages512G", 17 },
> >> +	{ "PtAttachedDevices", 18 },
> >> +	{ "PtDeviceInterruptMappings", 19 },
> >> +	{ "PtIoTlbFlushes", 20 },
> >> +	{ "PtIoTlbFlushCost", 21 },
> >> +	{ "PtDeviceInterruptErrors", 22 },
> >> +	{ "PtDeviceDmaErrors", 23 },
> >> +	{ "PtDeviceInterruptThrottleEvents", 24 },
> >> +	{ "PtSkippedTimerTicks", 25 },
> >> +	{ "PtPartitionId", 26 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "PtNestedTlbSize", 27 },
> >> +	{ "PtRecommendedNestedTlbSize", 28 },
> >> +	{ "PtNestedTlbFreeListSize", 29 },
> >> +	{ "PtNestedTlbTrimmedPages", 30 },
> >> +	{ "PtPagesShattered", 31 },
> >> +	{ "PtPagesRecombined", 32 },
> >> +	{ "PtHwpRequestValue", 33 },
> >> +	{ "PtAutoSuspendEnableTime", 34 },
> >> +	{ "PtAutoSuspendTriggerTime", 35 },
> >> +	{ "PtAutoSuspendDisableTime", 36 },
> >> +	{ "PtPlaceholder1", 37 },
> >> +	{ "PtPlaceholder2", 38 },
> >> +	{ "PtPlaceholder3", 39 },
> >> +	{ "PtPlaceholder4", 40 },
> >> +	{ "PtPlaceholder5", 41 },
> >> +	{ "PtPlaceholder6", 42 },
> >> +	{ "PtPlaceholder7", 43 },
> >> +	{ "PtPlaceholder8", 44 },
> >> +	{ "PtHypervisorStateTransferGeneration", 45 },
> >> +	{ "PtNumberofActiveChildPartitions", 46 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "PtHwpRequestValue", 27 },
> >> +	{ "PtAutoSuspendEnableTime", 28 },
> >> +	{ "PtAutoSuspendTriggerTime", 29 },
> >> +	{ "PtAutoSuspendDisableTime", 30 },
> >> +	{ "PtPlaceholder1", 31 },
> >> +	{ "PtPlaceholder2", 32 },
> >> +	{ "PtPlaceholder3", 33 },
> >> +	{ "PtPlaceholder4", 34 },
> >> +	{ "PtPlaceholder5", 35 },
> >> +	{ "PtPlaceholder6", 36 },
> >> +	{ "PtPlaceholder7", 37 },
> >> +	{ "PtPlaceholder8", 38 },
> >> +	{ "PtHypervisorStateTransferGeneration", 39 },
> >> +	{ "PtNumberofActiveChildPartitions", 40 },
> >> +#endif
> >> +};
> >> +
> >> +/* HV_THREAD_COUNTER */
> >> +static struct hv_counter_entry hv_vp_counters[] = {
> >> +	{ "VpTotalRunTime", 1 },
> >> +	{ "VpHypervisorRunTime", 2 },
> >> +	{ "VpRemoteNodeRunTime", 3 },
> >> +	{ "VpNormalizedRunTime", 4 },
> >> +	{ "VpIdealCpu", 5 },
> >> +
> >> +	{ "VpHypercallsCount", 7 },
> >> +	{ "VpHypercallsTime", 8 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "VpPageInvalidationsCount", 9 },
> >> +	{ "VpPageInvalidationsTime", 10 },
> >> +	{ "VpControlRegisterAccessesCount", 11 },
> >> +	{ "VpControlRegisterAccessesTime", 12 },
> >> +	{ "VpIoInstructionsCount", 13 },
> >> +	{ "VpIoInstructionsTime", 14 },
> >> +	{ "VpHltInstructionsCount", 15 },
> >> +	{ "VpHltInstructionsTime", 16 },
> >> +	{ "VpMwaitInstructionsCount", 17 },
> >> +	{ "VpMwaitInstructionsTime", 18 },
> >> +	{ "VpCpuidInstructionsCount", 19 },
> >> +	{ "VpCpuidInstructionsTime", 20 },
> >> +	{ "VpMsrAccessesCount", 21 },
> >> +	{ "VpMsrAccessesTime", 22 },
> >> +	{ "VpOtherInterceptsCount", 23 },
> >> +	{ "VpOtherInterceptsTime", 24 },
> >> +	{ "VpExternalInterruptsCount", 25 },
> >> +	{ "VpExternalInterruptsTime", 26 },
> >> +	{ "VpPendingInterruptsCount", 27 },
> >> +	{ "VpPendingInterruptsTime", 28 },
> >> +	{ "VpEmulatedInstructionsCount", 29 },
> >> +	{ "VpEmulatedInstructionsTime", 30 },
> >> +	{ "VpDebugRegisterAccessesCount", 31 },
> >> +	{ "VpDebugRegisterAccessesTime", 32 },
> >> +	{ "VpPageFaultInterceptsCount", 33 },
> >> +	{ "VpPageFaultInterceptsTime", 34 },
> >> +	{ "VpGuestPageTableMaps", 35 },
> >> +	{ "VpLargePageTlbFills", 36 },
> >> +	{ "VpSmallPageTlbFills", 37 },
> >> +	{ "VpReflectedGuestPageFaults", 38 },
> >> +	{ "VpApicMmioAccesses", 39 },
> >> +	{ "VpIoInterceptMessages", 40 },
> >> +	{ "VpMemoryInterceptMessages", 41 },
> >> +	{ "VpApicEoiAccesses", 42 },
> >> +	{ "VpOtherMessages", 43 },
> >> +	{ "VpPageTableAllocations", 44 },
> >> +	{ "VpLogicalProcessorMigrations", 45 },
> >> +	{ "VpAddressSpaceEvictions", 46 },
> >> +	{ "VpAddressSpaceSwitches", 47 },
> >> +	{ "VpAddressDomainFlushes", 48 },
> >> +	{ "VpAddressSpaceFlushes", 49 },
> >> +	{ "VpGlobalGvaRangeFlushes", 50 },
> >> +	{ "VpLocalGvaRangeFlushes", 51 },
> >> +	{ "VpPageTableEvictions", 52 },
> >> +	{ "VpPageTableReclamations", 53 },
> >> +	{ "VpPageTableResets", 54 },
> >> +	{ "VpPageTableValidations", 55 },
> >> +	{ "VpApicTprAccesses", 56 },
> >> +	{ "VpPageTableWriteIntercepts", 57 },
> >> +	{ "VpSyntheticInterrupts", 58 },
> >> +	{ "VpVirtualInterrupts", 59 },
> >> +	{ "VpApicIpisSent", 60 },
> >> +	{ "VpApicSelfIpisSent", 61 },
> >> +	{ "VpGpaSpaceHypercalls", 62 },
> >> +	{ "VpLogicalProcessorHypercalls", 63 },
> >> +	{ "VpLongSpinWaitHypercalls", 64 },
> >> +	{ "VpOtherHypercalls", 65 },
> >> +	{ "VpSyntheticInterruptHypercalls", 66 },
> >> +	{ "VpVirtualInterruptHypercalls", 67 },
> >> +	{ "VpVirtualMmuHypercalls", 68 },
> >> +	{ "VpVirtualProcessorHypercalls", 69 },
> >> +	{ "VpHardwareInterrupts", 70 },
> >> +	{ "VpNestedPageFaultInterceptsCount", 71 },
> >> +	{ "VpNestedPageFaultInterceptsTime", 72 },
> >> +	{ "VpPageScans", 73 },
> >> +	{ "VpLogicalProcessorDispatches", 74 },
> >> +	{ "VpWaitingForCpuTime", 75 },
> >> +	{ "VpExtendedHypercalls", 76 },
> >> +	{ "VpExtendedHypercallInterceptMessages", 77 },
> >> +	{ "VpMbecNestedPageTableSwitches", 78 },
> >> +	{ "VpOtherReflectedGuestExceptions", 79 },
> >> +	{ "VpGlobalIoTlbFlushes", 80 },
> >> +	{ "VpGlobalIoTlbFlushCost", 81 },
> >> +	{ "VpLocalIoTlbFlushes", 82 },
> >> +	{ "VpLocalIoTlbFlushCost", 83 },
> >> +	{ "VpHypercallsForwardedCount", 84 },
> >> +	{ "VpHypercallsForwardingTime", 85 },
> >> +	{ "VpPageInvalidationsForwardedCount", 86 },
> >> +	{ "VpPageInvalidationsForwardingTime", 87 },
> >> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
> >> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
> >> +	{ "VpIoInstructionsForwardedCount", 90 },
> >> +	{ "VpIoInstructionsForwardingTime", 91 },
> >> +	{ "VpHltInstructionsForwardedCount", 92 },
> >> +	{ "VpHltInstructionsForwardingTime", 93 },
> >> +	{ "VpMwaitInstructionsForwardedCount", 94 },
> >> +	{ "VpMwaitInstructionsForwardingTime", 95 },
> >> +	{ "VpCpuidInstructionsForwardedCount", 96 },
> >> +	{ "VpCpuidInstructionsForwardingTime", 97 },
> >> +	{ "VpMsrAccessesForwardedCount", 98 },
> >> +	{ "VpMsrAccessesForwardingTime", 99 },
> >> +	{ "VpOtherInterceptsForwardedCount", 100 },
> >> +	{ "VpOtherInterceptsForwardingTime", 101 },
> >> +	{ "VpExternalInterruptsForwardedCount", 102 },
> >> +	{ "VpExternalInterruptsForwardingTime", 103 },
> >> +	{ "VpPendingInterruptsForwardedCount", 104 },
> >> +	{ "VpPendingInterruptsForwardingTime", 105 },
> >> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
> >> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
> >> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
> >> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
> >> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
> >> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
> >> +	{ "VpVmclearEmulationCount", 112 },
> >> +	{ "VpVmclearEmulationTime", 113 },
> >> +	{ "VpVmptrldEmulationCount", 114 },
> >> +	{ "VpVmptrldEmulationTime", 115 },
> >> +	{ "VpVmptrstEmulationCount", 116 },
> >> +	{ "VpVmptrstEmulationTime", 117 },
> >> +	{ "VpVmreadEmulationCount", 118 },
> >> +	{ "VpVmreadEmulationTime", 119 },
> >> +	{ "VpVmwriteEmulationCount", 120 },
> >> +	{ "VpVmwriteEmulationTime", 121 },
> >> +	{ "VpVmxoffEmulationCount", 122 },
> >> +	{ "VpVmxoffEmulationTime", 123 },
> >> +	{ "VpVmxonEmulationCount", 124 },
> >> +	{ "VpVmxonEmulationTime", 125 },
> >> +	{ "VpNestedVMEntriesCount", 126 },
> >> +	{ "VpNestedVMEntriesTime", 127 },
> >> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
> >> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
> >> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
> >> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
> >> +	{ "VpInvEptAllContextEmulationCount", 132 },
> >> +	{ "VpInvEptAllContextEmulationTime", 133 },
> >> +	{ "VpInvEptSingleContextEmulationCount", 134 },
> >> +	{ "VpInvEptSingleContextEmulationTime", 135 },
> >> +	{ "VpInvVpidAllContextEmulationCount", 136 },
> >> +	{ "VpInvVpidAllContextEmulationTime", 137 },
> >> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
> >> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
> >> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
> >> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
> >> +	{ "VpNestedTlbPageTableReclamations", 142 },
> >> +	{ "VpNestedTlbPageTableEvictions", 143 },
> >> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
> >> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
> >> +	{ "VpPostedInterruptNotifications", 146 },
> >> +	{ "VpPostedInterruptScans", 147 },
> >> +	{ "VpTotalCoreRunTime", 148 },
> >> +	{ "VpMaximumRunTime", 149 },
> >> +	{ "VpHwpRequestContextSwitches", 150 },
> >> +	{ "VpWaitingForCpuTimeBucket0", 151 },
> >> +	{ "VpWaitingForCpuTimeBucket1", 152 },
> >> +	{ "VpWaitingForCpuTimeBucket2", 153 },
> >> +	{ "VpWaitingForCpuTimeBucket3", 154 },
> >> +	{ "VpWaitingForCpuTimeBucket4", 155 },
> >> +	{ "VpWaitingForCpuTimeBucket5", 156 },
> >> +	{ "VpWaitingForCpuTimeBucket6", 157 },
> >> +	{ "VpVmloadEmulationCount", 158 },
> >> +	{ "VpVmloadEmulationTime", 159 },
> >> +	{ "VpVmsaveEmulationCount", 160 },
> >> +	{ "VpVmsaveEmulationTime", 161 },
> >> +	{ "VpGifInstructionEmulationCount", 162 },
> >> +	{ "VpGifInstructionEmulationTime", 163 },
> >> +	{ "VpEmulatedErrataSvmInstructions", 164 },
> >> +	{ "VpPlaceholder1", 165 },
> >> +	{ "VpPlaceholder2", 166 },
> >> +	{ "VpPlaceholder3", 167 },
> >> +	{ "VpPlaceholder4", 168 },
> >> +	{ "VpPlaceholder5", 169 },
> >> +	{ "VpPlaceholder6", 170 },
> >> +	{ "VpPlaceholder7", 171 },
> >> +	{ "VpPlaceholder8", 172 },
> >> +	{ "VpContentionTime", 173 },
> >> +	{ "VpWakeUpTime", 174 },
> >> +	{ "VpSchedulingPriority", 175 },
> >> +	{ "VpRdpmcInstructionsCount", 176 },
> >> +	{ "VpRdpmcInstructionsTime", 177 },
> >> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
> >> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
> >> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
> >> +	{ "VpPerfmonInterruptCount", 181 },
> >> +	{ "VpVtl1DispatchCount", 182 },
> >> +	{ "VpVtl2DispatchCount", 183 },
> >> +	{ "VpVtl2DispatchBucket0", 184 },
> >> +	{ "VpVtl2DispatchBucket1", 185 },
> >> +	{ "VpVtl2DispatchBucket2", 186 },
> >> +	{ "VpVtl2DispatchBucket3", 187 },
> >> +	{ "VpVtl2DispatchBucket4", 188 },
> >> +	{ "VpVtl2DispatchBucket5", 189 },
> >> +	{ "VpVtl2DispatchBucket6", 190 },
> >> +	{ "VpVtl1RunTime", 191 },
> >> +	{ "VpVtl2RunTime", 192 },
> >> +	{ "VpIommuHypercalls", 193 },
> >> +	{ "VpCpuGroupHypercalls", 194 },
> >> +	{ "VpVsmHypercalls", 195 },
> >> +	{ "VpEventLogHypercalls", 196 },
> >> +	{ "VpDeviceDomainHypercalls", 197 },
> >> +	{ "VpDepositHypercalls", 198 },
> >> +	{ "VpSvmHypercalls", 199 },
> >> +	{ "VpBusLockAcquisitionCount", 200 },
> >> +	{ "VpLoadAvg", 201 },
> >> +	{ "VpRootDispatchThreadBlocked", 202 },
> >> +	{ "VpIdleCpuTime", 203 },
> >> +	{ "VpWaitingForCpuTimeBucket7", 204 },
> >> +	{ "VpWaitingForCpuTimeBucket8", 205 },
> >> +	{ "VpWaitingForCpuTimeBucket9", 206 },
> >> +	{ "VpWaitingForCpuTimeBucket10", 207 },
> >> +	{ "VpWaitingForCpuTimeBucket11", 208 },
> >> +	{ "VpWaitingForCpuTimeBucket12", 209 },
> >> +	{ "VpHierarchicalSuspendTime", 210 },
> >> +	{ "VpExpressSchedulingAttempts", 211 },
> >> +	{ "VpExpressSchedulingCount", 212 },
> >> +	{ "VpBusLockAcquisitionTime", 213 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "VpSysRegAccessesCount", 9 },
> >> +	{ "VpSysRegAccessesTime", 10 },
> >> +	{ "VpSmcInstructionsCount", 11 },
> >> +	{ "VpSmcInstructionsTime", 12 },
> >> +	{ "VpOtherInterceptsCount", 13 },
> >> +	{ "VpOtherInterceptsTime", 14 },
> >> +	{ "VpExternalInterruptsCount", 15 },
> >> +	{ "VpExternalInterruptsTime", 16 },
> >> +	{ "VpPendingInterruptsCount", 17 },
> >> +	{ "VpPendingInterruptsTime", 18 },
> >> +	{ "VpGuestPageTableMaps", 19 },
> >> +	{ "VpLargePageTlbFills", 20 },
> >> +	{ "VpSmallPageTlbFills", 21 },
> >> +	{ "VpReflectedGuestPageFaults", 22 },
> >> +	{ "VpMemoryInterceptMessages", 23 },
> >> +	{ "VpOtherMessages", 24 },
> >> +	{ "VpLogicalProcessorMigrations", 25 },
> >> +	{ "VpAddressDomainFlushes", 26 },
> >> +	{ "VpAddressSpaceFlushes", 27 },
> >> +	{ "VpSyntheticInterrupts", 28 },
> >> +	{ "VpVirtualInterrupts", 29 },
> >> +	{ "VpApicSelfIpisSent", 30 },
> >> +	{ "VpGpaSpaceHypercalls", 31 },
> >> +	{ "VpLogicalProcessorHypercalls", 32 },
> >> +	{ "VpLongSpinWaitHypercalls", 33 },
> >> +	{ "VpOtherHypercalls", 34 },
> >> +	{ "VpSyntheticInterruptHypercalls", 35 },
> >> +	{ "VpVirtualInterruptHypercalls", 36 },
> >> +	{ "VpVirtualMmuHypercalls", 37 },
> >> +	{ "VpVirtualProcessorHypercalls", 38 },
> >> +	{ "VpHardwareInterrupts", 39 },
> >> +	{ "VpNestedPageFaultInterceptsCount", 40 },
> >> +	{ "VpNestedPageFaultInterceptsTime", 41 },
> >> +	{ "VpLogicalProcessorDispatches", 42 },
> >> +	{ "VpWaitingForCpuTime", 43 },
> >> +	{ "VpExtendedHypercalls", 44 },
> >> +	{ "VpExtendedHypercallInterceptMessages", 45 },
> >> +	{ "VpMbecNestedPageTableSwitches", 46 },
> >> +	{ "VpOtherReflectedGuestExceptions", 47 },
> >> +	{ "VpGlobalIoTlbFlushes", 48 },
> >> +	{ "VpGlobalIoTlbFlushCost", 49 },
> >> +	{ "VpLocalIoTlbFlushes", 50 },
> >> +	{ "VpLocalIoTlbFlushCost", 51 },
> >> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
> >> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
> >> +	{ "VpPostedInterruptNotifications", 54 },
> >> +	{ "VpPostedInterruptScans", 55 },
> >> +	{ "VpTotalCoreRunTime", 56 },
> >> +	{ "VpMaximumRunTime", 57 },
> >> +	{ "VpWaitingForCpuTimeBucket0", 58 },
> >> +	{ "VpWaitingForCpuTimeBucket1", 59 },
> >> +	{ "VpWaitingForCpuTimeBucket2", 60 },
> >> +	{ "VpWaitingForCpuTimeBucket3", 61 },
> >> +	{ "VpWaitingForCpuTimeBucket4", 62 },
> >> +	{ "VpWaitingForCpuTimeBucket5", 63 },
> >> +	{ "VpWaitingForCpuTimeBucket6", 64 },
> >> +	{ "VpHwpRequestContextSwitches", 65 },
> >> +	{ "VpPlaceholder2", 66 },
> >> +	{ "VpPlaceholder3", 67 },
> >> +	{ "VpPlaceholder4", 68 },
> >> +	{ "VpPlaceholder5", 69 },
> >> +	{ "VpPlaceholder6", 70 },
> >> +	{ "VpPlaceholder7", 71 },
> >> +	{ "VpPlaceholder8", 72 },
> >> +	{ "VpContentionTime", 73 },
> >> +	{ "VpWakeUpTime", 74 },
> >> +	{ "VpSchedulingPriority", 75 },
> >> +	{ "VpVtl1DispatchCount", 76 },
> >> +	{ "VpVtl2DispatchCount", 77 },
> >> +	{ "VpVtl2DispatchBucket0", 78 },
> >> +	{ "VpVtl2DispatchBucket1", 79 },
> >> +	{ "VpVtl2DispatchBucket2", 80 },
> >> +	{ "VpVtl2DispatchBucket3", 81 },
> >> +	{ "VpVtl2DispatchBucket4", 82 },
> >> +	{ "VpVtl2DispatchBucket5", 83 },
> >> +	{ "VpVtl2DispatchBucket6", 84 },
> >> +	{ "VpVtl1RunTime", 85 },
> >> +	{ "VpVtl2RunTime", 86 },
> >> +	{ "VpIommuHypercalls", 87 },
> >> +	{ "VpCpuGroupHypercalls", 88 },
> >> +	{ "VpVsmHypercalls", 89 },
> >> +	{ "VpEventLogHypercalls", 90 },
> >> +	{ "VpDeviceDomainHypercalls", 91 },
> >> +	{ "VpDepositHypercalls", 92 },
> >> +	{ "VpSvmHypercalls", 93 },
> >> +	{ "VpLoadAvg", 94 },
> >> +	{ "VpRootDispatchThreadBlocked", 95 },
> >> +	{ "VpIdleCpuTime", 96 },
> >> +	{ "VpWaitingForCpuTimeBucket7", 97 },
> >> +	{ "VpWaitingForCpuTimeBucket8", 98 },
> >> +	{ "VpWaitingForCpuTimeBucket9", 99 },
> >> +	{ "VpWaitingForCpuTimeBucket10", 100 },
> >> +	{ "VpWaitingForCpuTimeBucket11", 101 },
> >> +	{ "VpWaitingForCpuTimeBucket12", 102 },
> >> +	{ "VpHierarchicalSuspendTime", 103 },
> >> +	{ "VpExpressSchedulingAttempts", 104 },
> >> +	{ "VpExpressSchedulingCount", 105 },
> >> +#endif
> >> +};
> >> +
> >> -- 
> >> 2.34.1

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-23 19:04     ` Nuno Das Neves
  2026-01-23 19:10       ` Michael Kelley
@ 2026-01-23 22:31       ` Stanislav Kinsburskii
  2026-01-24  0:13         ` Nuno Das Neves
  1 sibling, 1 reply; 22+ messages in thread
From: Stanislav Kinsburskii @ 2026-01-23 22:31 UTC (permalink / raw)
  To: Nuno Das Neves
  Cc: Michael Kelley, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, kys@microsoft.com,
	haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	longli@microsoft.com, prapal@linux.microsoft.com,
	mrathor@linux.microsoft.com, paekkaladevi@linux.microsoft.com

On Fri, Jan 23, 2026 at 11:04:52AM -0800, Nuno Das Neves wrote:
> On 1/23/2026 9:09 AM, Michael Kelley wrote:
> > From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
> >>
> >> Introduce hv_counters.c, containing static data corresponding to
> >> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> >> members as an array instead makes more sense, since it will be
> >> iterated over to print counter information to debugfs.
> > 
> > I would have expected the filename to be mshv_counters.c, so that the association
> > with the MS hypervisor is clear. And the file is inextricably linked to mshv_debugfs.c,
> > which of course has the "mshv_" prefix. Or is there some thinking I'm not aware of
> > for using the "hv_" prefix?
> > 
> Good question - I originally thought of using hv_ because the definitions inside are
> part of the hypervisor ABI, and hence also have the hv_ prefix.
> 
> However you have a good point, and I'm not opposed to changing it.
> 
> Maybe to just be super explicit: "mshv_debugfs_counters.c" ?
> 

This is redundant from my POV.
If these counters are only used by mshv_debugfs.c, then they should
rather be a part of that file.
What was the reason for moving them elsewhere?

Thanks,
Stanislav

> > Also, I see in Patch 7 of this series that hv_counters.c is #included as a .c file
> > in mshv_debugfs.c. Is there a reason for doing the #include instead of adding
> > hv_counters.c to the Makefile and building it on its own? You would need to
> > add a handful of extern statements to mshv_root.h so that the tables are
> > referenceable from mshv_debugfs.c. But that would seem to be the more
> > normal way of doing things.  #including a .c file is unusual.
> > 
> 
> Yes...I thought I could avoid noise in mshv_root.h and the Makefile, since it's
> only relevant for mshv_debugfs.c. However I could see this file (whether as .c or
> .h) being misused and included elsewhere inadvertently, which would duplicate the
> tables, so maybe doing it the normal way is a better idea, even if mshv_debugfs.c
> is likely the only user.
> 
> > See one more comment on the last line of this patch ...
> > 
> >>
> >> Include hypervisor, logical processor, partition, and virtual
> >> processor counters.
> >>
> >> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> >> ---
> >>  drivers/hv/hv_counters.c | 488 +++++++++++++++++++++++++++++++++++++++
> >>  1 file changed, 488 insertions(+)
> >>  create mode 100644 drivers/hv/hv_counters.c
> >>
> >> diff --git a/drivers/hv/hv_counters.c b/drivers/hv/hv_counters.c
> >> new file mode 100644
> >> index 000000000000..a8e07e72cc29
> >> --- /dev/null
> >> +++ b/drivers/hv/hv_counters.c
> >> @@ -0,0 +1,488 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/*
> >> + * Copyright (c) 2026, Microsoft Corporation.
> >> + *
> >> + * Data for printing stats page counters via debugfs.
> >> + *
> >> + * Authors: Microsoft Linux virtualization team
> >> + */
> >> +
> >> +struct hv_counter_entry {
> >> +	char *name;
> >> +	int idx;
> >> +};
> >> +
> >> +/* HV_HYPERVISOR_COUNTER */
> >> +static struct hv_counter_entry hv_hypervisor_counters[] = {
> >> +	{ "HvLogicalProcessors", 1 },
> >> +	{ "HvPartitions", 2 },
> >> +	{ "HvTotalPages", 3 },
> >> +	{ "HvVirtualProcessors", 4 },
> >> +	{ "HvMonitoredNotifications", 5 },
> >> +	{ "HvModernStandbyEntries", 6 },
> >> +	{ "HvPlatformIdleTransitions", 7 },
> >> +	{ "HvHypervisorStartupCost", 8 },
> >> +
> >> +	{ "HvIOSpacePages", 10 },
> >> +	{ "HvNonEssentialPagesForDump", 11 },
> >> +	{ "HvSubsumedPages", 12 },
> >> +};
> >> +
> >> +/* HV_CPU_COUNTER */
> >> +static struct hv_counter_entry hv_lp_counters[] = {
> >> +	{ "LpGlobalTime", 1 },
> >> +	{ "LpTotalRunTime", 2 },
> >> +	{ "LpHypervisorRunTime", 3 },
> >> +	{ "LpHardwareInterrupts", 4 },
> >> +	{ "LpContextSwitches", 5 },
> >> +	{ "LpInterProcessorInterrupts", 6 },
> >> +	{ "LpSchedulerInterrupts", 7 },
> >> +	{ "LpTimerInterrupts", 8 },
> >> +	{ "LpInterProcessorInterruptsSent", 9 },
> >> +	{ "LpProcessorHalts", 10 },
> >> +	{ "LpMonitorTransitionCost", 11 },
> >> +	{ "LpContextSwitchTime", 12 },
> >> +	{ "LpC1TransitionsCount", 13 },
> >> +	{ "LpC1RunTime", 14 },
> >> +	{ "LpC2TransitionsCount", 15 },
> >> +	{ "LpC2RunTime", 16 },
> >> +	{ "LpC3TransitionsCount", 17 },
> >> +	{ "LpC3RunTime", 18 },
> >> +	{ "LpRootVpIndex", 19 },
> >> +	{ "LpIdleSequenceNumber", 20 },
> >> +	{ "LpGlobalTscCount", 21 },
> >> +	{ "LpActiveTscCount", 22 },
> >> +	{ "LpIdleAccumulation", 23 },
> >> +	{ "LpReferenceCycleCount0", 24 },
> >> +	{ "LpActualCycleCount0", 25 },
> >> +	{ "LpReferenceCycleCount1", 26 },
> >> +	{ "LpActualCycleCount1", 27 },
> >> +	{ "LpProximityDomainId", 28 },
> >> +	{ "LpPostedInterruptNotifications", 29 },
> >> +	{ "LpBranchPredictorFlushes", 30 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "LpL1DataCacheFlushes", 31 },
> >> +	{ "LpImmediateL1DataCacheFlushes", 32 },
> >> +	{ "LpMbFlushes", 33 },
> >> +	{ "LpCounterRefreshSequenceNumber", 34 },
> >> +	{ "LpCounterRefreshReferenceTime", 35 },
> >> +	{ "LpIdleAccumulationSnapshot", 36 },
> >> +	{ "LpActiveTscCountSnapshot", 37 },
> >> +	{ "LpHwpRequestContextSwitches", 38 },
> >> +	{ "LpPlaceholder1", 39 },
> >> +	{ "LpPlaceholder2", 40 },
> >> +	{ "LpPlaceholder3", 41 },
> >> +	{ "LpPlaceholder4", 42 },
> >> +	{ "LpPlaceholder5", 43 },
> >> +	{ "LpPlaceholder6", 44 },
> >> +	{ "LpPlaceholder7", 45 },
> >> +	{ "LpPlaceholder8", 46 },
> >> +	{ "LpPlaceholder9", 47 },
> >> +	{ "LpSchLocalRunListSize", 48 },
> >> +	{ "LpReserveGroupId", 49 },
> >> +	{ "LpRunningPriority", 50 },
> >> +	{ "LpPerfmonInterruptCount", 51 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "LpCounterRefreshSequenceNumber", 31 },
> >> +	{ "LpCounterRefreshReferenceTime", 32 },
> >> +	{ "LpIdleAccumulationSnapshot", 33 },
> >> +	{ "LpActiveTscCountSnapshot", 34 },
> >> +	{ "LpHwpRequestContextSwitches", 35 },
> >> +	{ "LpPlaceholder2", 36 },
> >> +	{ "LpPlaceholder3", 37 },
> >> +	{ "LpPlaceholder4", 38 },
> >> +	{ "LpPlaceholder5", 39 },
> >> +	{ "LpPlaceholder6", 40 },
> >> +	{ "LpPlaceholder7", 41 },
> >> +	{ "LpPlaceholder8", 42 },
> >> +	{ "LpPlaceholder9", 43 },
> >> +	{ "LpSchLocalRunListSize", 44 },
> >> +	{ "LpReserveGroupId", 45 },
> >> +	{ "LpRunningPriority", 46 },
> >> +#endif
> >> +};
> >> +
> >> +/* HV_PROCESS_COUNTER */
> >> +static struct hv_counter_entry hv_partition_counters[] = {
> >> +	{ "PtVirtualProcessors", 1 },
> >> +
> >> +	{ "PtTlbSize", 3 },
> >> +	{ "PtAddressSpaces", 4 },
> >> +	{ "PtDepositedPages", 5 },
> >> +	{ "PtGpaPages", 6 },
> >> +	{ "PtGpaSpaceModifications", 7 },
> >> +	{ "PtVirtualTlbFlushEntires", 8 },
> >> +	{ "PtRecommendedTlbSize", 9 },
> >> +	{ "PtGpaPages4K", 10 },
> >> +	{ "PtGpaPages2M", 11 },
> >> +	{ "PtGpaPages1G", 12 },
> >> +	{ "PtGpaPages512G", 13 },
> >> +	{ "PtDevicePages4K", 14 },
> >> +	{ "PtDevicePages2M", 15 },
> >> +	{ "PtDevicePages1G", 16 },
> >> +	{ "PtDevicePages512G", 17 },
> >> +	{ "PtAttachedDevices", 18 },
> >> +	{ "PtDeviceInterruptMappings", 19 },
> >> +	{ "PtIoTlbFlushes", 20 },
> >> +	{ "PtIoTlbFlushCost", 21 },
> >> +	{ "PtDeviceInterruptErrors", 22 },
> >> +	{ "PtDeviceDmaErrors", 23 },
> >> +	{ "PtDeviceInterruptThrottleEvents", 24 },
> >> +	{ "PtSkippedTimerTicks", 25 },
> >> +	{ "PtPartitionId", 26 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "PtNestedTlbSize", 27 },
> >> +	{ "PtRecommendedNestedTlbSize", 28 },
> >> +	{ "PtNestedTlbFreeListSize", 29 },
> >> +	{ "PtNestedTlbTrimmedPages", 30 },
> >> +	{ "PtPagesShattered", 31 },
> >> +	{ "PtPagesRecombined", 32 },
> >> +	{ "PtHwpRequestValue", 33 },
> >> +	{ "PtAutoSuspendEnableTime", 34 },
> >> +	{ "PtAutoSuspendTriggerTime", 35 },
> >> +	{ "PtAutoSuspendDisableTime", 36 },
> >> +	{ "PtPlaceholder1", 37 },
> >> +	{ "PtPlaceholder2", 38 },
> >> +	{ "PtPlaceholder3", 39 },
> >> +	{ "PtPlaceholder4", 40 },
> >> +	{ "PtPlaceholder5", 41 },
> >> +	{ "PtPlaceholder6", 42 },
> >> +	{ "PtPlaceholder7", 43 },
> >> +	{ "PtPlaceholder8", 44 },
> >> +	{ "PtHypervisorStateTransferGeneration", 45 },
> >> +	{ "PtNumberofActiveChildPartitions", 46 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "PtHwpRequestValue", 27 },
> >> +	{ "PtAutoSuspendEnableTime", 28 },
> >> +	{ "PtAutoSuspendTriggerTime", 29 },
> >> +	{ "PtAutoSuspendDisableTime", 30 },
> >> +	{ "PtPlaceholder1", 31 },
> >> +	{ "PtPlaceholder2", 32 },
> >> +	{ "PtPlaceholder3", 33 },
> >> +	{ "PtPlaceholder4", 34 },
> >> +	{ "PtPlaceholder5", 35 },
> >> +	{ "PtPlaceholder6", 36 },
> >> +	{ "PtPlaceholder7", 37 },
> >> +	{ "PtPlaceholder8", 38 },
> >> +	{ "PtHypervisorStateTransferGeneration", 39 },
> >> +	{ "PtNumberofActiveChildPartitions", 40 },
> >> +#endif
> >> +};
> >> +
> >> +/* HV_THREAD_COUNTER */
> >> +static struct hv_counter_entry hv_vp_counters[] = {
> >> +	{ "VpTotalRunTime", 1 },
> >> +	{ "VpHypervisorRunTime", 2 },
> >> +	{ "VpRemoteNodeRunTime", 3 },
> >> +	{ "VpNormalizedRunTime", 4 },
> >> +	{ "VpIdealCpu", 5 },
> >> +
> >> +	{ "VpHypercallsCount", 7 },
> >> +	{ "VpHypercallsTime", 8 },
> >> +#if IS_ENABLED(CONFIG_X86_64)
> >> +	{ "VpPageInvalidationsCount", 9 },
> >> +	{ "VpPageInvalidationsTime", 10 },
> >> +	{ "VpControlRegisterAccessesCount", 11 },
> >> +	{ "VpControlRegisterAccessesTime", 12 },
> >> +	{ "VpIoInstructionsCount", 13 },
> >> +	{ "VpIoInstructionsTime", 14 },
> >> +	{ "VpHltInstructionsCount", 15 },
> >> +	{ "VpHltInstructionsTime", 16 },
> >> +	{ "VpMwaitInstructionsCount", 17 },
> >> +	{ "VpMwaitInstructionsTime", 18 },
> >> +	{ "VpCpuidInstructionsCount", 19 },
> >> +	{ "VpCpuidInstructionsTime", 20 },
> >> +	{ "VpMsrAccessesCount", 21 },
> >> +	{ "VpMsrAccessesTime", 22 },
> >> +	{ "VpOtherInterceptsCount", 23 },
> >> +	{ "VpOtherInterceptsTime", 24 },
> >> +	{ "VpExternalInterruptsCount", 25 },
> >> +	{ "VpExternalInterruptsTime", 26 },
> >> +	{ "VpPendingInterruptsCount", 27 },
> >> +	{ "VpPendingInterruptsTime", 28 },
> >> +	{ "VpEmulatedInstructionsCount", 29 },
> >> +	{ "VpEmulatedInstructionsTime", 30 },
> >> +	{ "VpDebugRegisterAccessesCount", 31 },
> >> +	{ "VpDebugRegisterAccessesTime", 32 },
> >> +	{ "VpPageFaultInterceptsCount", 33 },
> >> +	{ "VpPageFaultInterceptsTime", 34 },
> >> +	{ "VpGuestPageTableMaps", 35 },
> >> +	{ "VpLargePageTlbFills", 36 },
> >> +	{ "VpSmallPageTlbFills", 37 },
> >> +	{ "VpReflectedGuestPageFaults", 38 },
> >> +	{ "VpApicMmioAccesses", 39 },
> >> +	{ "VpIoInterceptMessages", 40 },
> >> +	{ "VpMemoryInterceptMessages", 41 },
> >> +	{ "VpApicEoiAccesses", 42 },
> >> +	{ "VpOtherMessages", 43 },
> >> +	{ "VpPageTableAllocations", 44 },
> >> +	{ "VpLogicalProcessorMigrations", 45 },
> >> +	{ "VpAddressSpaceEvictions", 46 },
> >> +	{ "VpAddressSpaceSwitches", 47 },
> >> +	{ "VpAddressDomainFlushes", 48 },
> >> +	{ "VpAddressSpaceFlushes", 49 },
> >> +	{ "VpGlobalGvaRangeFlushes", 50 },
> >> +	{ "VpLocalGvaRangeFlushes", 51 },
> >> +	{ "VpPageTableEvictions", 52 },
> >> +	{ "VpPageTableReclamations", 53 },
> >> +	{ "VpPageTableResets", 54 },
> >> +	{ "VpPageTableValidations", 55 },
> >> +	{ "VpApicTprAccesses", 56 },
> >> +	{ "VpPageTableWriteIntercepts", 57 },
> >> +	{ "VpSyntheticInterrupts", 58 },
> >> +	{ "VpVirtualInterrupts", 59 },
> >> +	{ "VpApicIpisSent", 60 },
> >> +	{ "VpApicSelfIpisSent", 61 },
> >> +	{ "VpGpaSpaceHypercalls", 62 },
> >> +	{ "VpLogicalProcessorHypercalls", 63 },
> >> +	{ "VpLongSpinWaitHypercalls", 64 },
> >> +	{ "VpOtherHypercalls", 65 },
> >> +	{ "VpSyntheticInterruptHypercalls", 66 },
> >> +	{ "VpVirtualInterruptHypercalls", 67 },
> >> +	{ "VpVirtualMmuHypercalls", 68 },
> >> +	{ "VpVirtualProcessorHypercalls", 69 },
> >> +	{ "VpHardwareInterrupts", 70 },
> >> +	{ "VpNestedPageFaultInterceptsCount", 71 },
> >> +	{ "VpNestedPageFaultInterceptsTime", 72 },
> >> +	{ "VpPageScans", 73 },
> >> +	{ "VpLogicalProcessorDispatches", 74 },
> >> +	{ "VpWaitingForCpuTime", 75 },
> >> +	{ "VpExtendedHypercalls", 76 },
> >> +	{ "VpExtendedHypercallInterceptMessages", 77 },
> >> +	{ "VpMbecNestedPageTableSwitches", 78 },
> >> +	{ "VpOtherReflectedGuestExceptions", 79 },
> >> +	{ "VpGlobalIoTlbFlushes", 80 },
> >> +	{ "VpGlobalIoTlbFlushCost", 81 },
> >> +	{ "VpLocalIoTlbFlushes", 82 },
> >> +	{ "VpLocalIoTlbFlushCost", 83 },
> >> +	{ "VpHypercallsForwardedCount", 84 },
> >> +	{ "VpHypercallsForwardingTime", 85 },
> >> +	{ "VpPageInvalidationsForwardedCount", 86 },
> >> +	{ "VpPageInvalidationsForwardingTime", 87 },
> >> +	{ "VpControlRegisterAccessesForwardedCount", 88 },
> >> +	{ "VpControlRegisterAccessesForwardingTime", 89 },
> >> +	{ "VpIoInstructionsForwardedCount", 90 },
> >> +	{ "VpIoInstructionsForwardingTime", 91 },
> >> +	{ "VpHltInstructionsForwardedCount", 92 },
> >> +	{ "VpHltInstructionsForwardingTime", 93 },
> >> +	{ "VpMwaitInstructionsForwardedCount", 94 },
> >> +	{ "VpMwaitInstructionsForwardingTime", 95 },
> >> +	{ "VpCpuidInstructionsForwardedCount", 96 },
> >> +	{ "VpCpuidInstructionsForwardingTime", 97 },
> >> +	{ "VpMsrAccessesForwardedCount", 98 },
> >> +	{ "VpMsrAccessesForwardingTime", 99 },
> >> +	{ "VpOtherInterceptsForwardedCount", 100 },
> >> +	{ "VpOtherInterceptsForwardingTime", 101 },
> >> +	{ "VpExternalInterruptsForwardedCount", 102 },
> >> +	{ "VpExternalInterruptsForwardingTime", 103 },
> >> +	{ "VpPendingInterruptsForwardedCount", 104 },
> >> +	{ "VpPendingInterruptsForwardingTime", 105 },
> >> +	{ "VpEmulatedInstructionsForwardedCount", 106 },
> >> +	{ "VpEmulatedInstructionsForwardingTime", 107 },
> >> +	{ "VpDebugRegisterAccessesForwardedCount", 108 },
> >> +	{ "VpDebugRegisterAccessesForwardingTime", 109 },
> >> +	{ "VpPageFaultInterceptsForwardedCount", 110 },
> >> +	{ "VpPageFaultInterceptsForwardingTime", 111 },
> >> +	{ "VpVmclearEmulationCount", 112 },
> >> +	{ "VpVmclearEmulationTime", 113 },
> >> +	{ "VpVmptrldEmulationCount", 114 },
> >> +	{ "VpVmptrldEmulationTime", 115 },
> >> +	{ "VpVmptrstEmulationCount", 116 },
> >> +	{ "VpVmptrstEmulationTime", 117 },
> >> +	{ "VpVmreadEmulationCount", 118 },
> >> +	{ "VpVmreadEmulationTime", 119 },
> >> +	{ "VpVmwriteEmulationCount", 120 },
> >> +	{ "VpVmwriteEmulationTime", 121 },
> >> +	{ "VpVmxoffEmulationCount", 122 },
> >> +	{ "VpVmxoffEmulationTime", 123 },
> >> +	{ "VpVmxonEmulationCount", 124 },
> >> +	{ "VpVmxonEmulationTime", 125 },
> >> +	{ "VpNestedVMEntriesCount", 126 },
> >> +	{ "VpNestedVMEntriesTime", 127 },
> >> +	{ "VpNestedSLATSoftPageFaultsCount", 128 },
> >> +	{ "VpNestedSLATSoftPageFaultsTime", 129 },
> >> +	{ "VpNestedSLATHardPageFaultsCount", 130 },
> >> +	{ "VpNestedSLATHardPageFaultsTime", 131 },
> >> +	{ "VpInvEptAllContextEmulationCount", 132 },
> >> +	{ "VpInvEptAllContextEmulationTime", 133 },
> >> +	{ "VpInvEptSingleContextEmulationCount", 134 },
> >> +	{ "VpInvEptSingleContextEmulationTime", 135 },
> >> +	{ "VpInvVpidAllContextEmulationCount", 136 },
> >> +	{ "VpInvVpidAllContextEmulationTime", 137 },
> >> +	{ "VpInvVpidSingleContextEmulationCount", 138 },
> >> +	{ "VpInvVpidSingleContextEmulationTime", 139 },
> >> +	{ "VpInvVpidSingleAddressEmulationCount", 140 },
> >> +	{ "VpInvVpidSingleAddressEmulationTime", 141 },
> >> +	{ "VpNestedTlbPageTableReclamations", 142 },
> >> +	{ "VpNestedTlbPageTableEvictions", 143 },
> >> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 144 },
> >> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 145 },
> >> +	{ "VpPostedInterruptNotifications", 146 },
> >> +	{ "VpPostedInterruptScans", 147 },
> >> +	{ "VpTotalCoreRunTime", 148 },
> >> +	{ "VpMaximumRunTime", 149 },
> >> +	{ "VpHwpRequestContextSwitches", 150 },
> >> +	{ "VpWaitingForCpuTimeBucket0", 151 },
> >> +	{ "VpWaitingForCpuTimeBucket1", 152 },
> >> +	{ "VpWaitingForCpuTimeBucket2", 153 },
> >> +	{ "VpWaitingForCpuTimeBucket3", 154 },
> >> +	{ "VpWaitingForCpuTimeBucket4", 155 },
> >> +	{ "VpWaitingForCpuTimeBucket5", 156 },
> >> +	{ "VpWaitingForCpuTimeBucket6", 157 },
> >> +	{ "VpVmloadEmulationCount", 158 },
> >> +	{ "VpVmloadEmulationTime", 159 },
> >> +	{ "VpVmsaveEmulationCount", 160 },
> >> +	{ "VpVmsaveEmulationTime", 161 },
> >> +	{ "VpGifInstructionEmulationCount", 162 },
> >> +	{ "VpGifInstructionEmulationTime", 163 },
> >> +	{ "VpEmulatedErrataSvmInstructions", 164 },
> >> +	{ "VpPlaceholder1", 165 },
> >> +	{ "VpPlaceholder2", 166 },
> >> +	{ "VpPlaceholder3", 167 },
> >> +	{ "VpPlaceholder4", 168 },
> >> +	{ "VpPlaceholder5", 169 },
> >> +	{ "VpPlaceholder6", 170 },
> >> +	{ "VpPlaceholder7", 171 },
> >> +	{ "VpPlaceholder8", 172 },
> >> +	{ "VpContentionTime", 173 },
> >> +	{ "VpWakeUpTime", 174 },
> >> +	{ "VpSchedulingPriority", 175 },
> >> +	{ "VpRdpmcInstructionsCount", 176 },
> >> +	{ "VpRdpmcInstructionsTime", 177 },
> >> +	{ "VpPerfmonPmuMsrAccessesCount", 178 },
> >> +	{ "VpPerfmonLbrMsrAccessesCount", 179 },
> >> +	{ "VpPerfmonIptMsrAccessesCount", 180 },
> >> +	{ "VpPerfmonInterruptCount", 181 },
> >> +	{ "VpVtl1DispatchCount", 182 },
> >> +	{ "VpVtl2DispatchCount", 183 },
> >> +	{ "VpVtl2DispatchBucket0", 184 },
> >> +	{ "VpVtl2DispatchBucket1", 185 },
> >> +	{ "VpVtl2DispatchBucket2", 186 },
> >> +	{ "VpVtl2DispatchBucket3", 187 },
> >> +	{ "VpVtl2DispatchBucket4", 188 },
> >> +	{ "VpVtl2DispatchBucket5", 189 },
> >> +	{ "VpVtl2DispatchBucket6", 190 },
> >> +	{ "VpVtl1RunTime", 191 },
> >> +	{ "VpVtl2RunTime", 192 },
> >> +	{ "VpIommuHypercalls", 193 },
> >> +	{ "VpCpuGroupHypercalls", 194 },
> >> +	{ "VpVsmHypercalls", 195 },
> >> +	{ "VpEventLogHypercalls", 196 },
> >> +	{ "VpDeviceDomainHypercalls", 197 },
> >> +	{ "VpDepositHypercalls", 198 },
> >> +	{ "VpSvmHypercalls", 199 },
> >> +	{ "VpBusLockAcquisitionCount", 200 },
> >> +	{ "VpLoadAvg", 201 },
> >> +	{ "VpRootDispatchThreadBlocked", 202 },
> >> +	{ "VpIdleCpuTime", 203 },
> >> +	{ "VpWaitingForCpuTimeBucket7", 204 },
> >> +	{ "VpWaitingForCpuTimeBucket8", 205 },
> >> +	{ "VpWaitingForCpuTimeBucket9", 206 },
> >> +	{ "VpWaitingForCpuTimeBucket10", 207 },
> >> +	{ "VpWaitingForCpuTimeBucket11", 208 },
> >> +	{ "VpWaitingForCpuTimeBucket12", 209 },
> >> +	{ "VpHierarchicalSuspendTime", 210 },
> >> +	{ "VpExpressSchedulingAttempts", 211 },
> >> +	{ "VpExpressSchedulingCount", 212 },
> >> +	{ "VpBusLockAcquisitionTime", 213 },
> >> +#elif IS_ENABLED(CONFIG_ARM64)
> >> +	{ "VpSysRegAccessesCount", 9 },
> >> +	{ "VpSysRegAccessesTime", 10 },
> >> +	{ "VpSmcInstructionsCount", 11 },
> >> +	{ "VpSmcInstructionsTime", 12 },
> >> +	{ "VpOtherInterceptsCount", 13 },
> >> +	{ "VpOtherInterceptsTime", 14 },
> >> +	{ "VpExternalInterruptsCount", 15 },
> >> +	{ "VpExternalInterruptsTime", 16 },
> >> +	{ "VpPendingInterruptsCount", 17 },
> >> +	{ "VpPendingInterruptsTime", 18 },
> >> +	{ "VpGuestPageTableMaps", 19 },
> >> +	{ "VpLargePageTlbFills", 20 },
> >> +	{ "VpSmallPageTlbFills", 21 },
> >> +	{ "VpReflectedGuestPageFaults", 22 },
> >> +	{ "VpMemoryInterceptMessages", 23 },
> >> +	{ "VpOtherMessages", 24 },
> >> +	{ "VpLogicalProcessorMigrations", 25 },
> >> +	{ "VpAddressDomainFlushes", 26 },
> >> +	{ "VpAddressSpaceFlushes", 27 },
> >> +	{ "VpSyntheticInterrupts", 28 },
> >> +	{ "VpVirtualInterrupts", 29 },
> >> +	{ "VpApicSelfIpisSent", 30 },
> >> +	{ "VpGpaSpaceHypercalls", 31 },
> >> +	{ "VpLogicalProcessorHypercalls", 32 },
> >> +	{ "VpLongSpinWaitHypercalls", 33 },
> >> +	{ "VpOtherHypercalls", 34 },
> >> +	{ "VpSyntheticInterruptHypercalls", 35 },
> >> +	{ "VpVirtualInterruptHypercalls", 36 },
> >> +	{ "VpVirtualMmuHypercalls", 37 },
> >> +	{ "VpVirtualProcessorHypercalls", 38 },
> >> +	{ "VpHardwareInterrupts", 39 },
> >> +	{ "VpNestedPageFaultInterceptsCount", 40 },
> >> +	{ "VpNestedPageFaultInterceptsTime", 41 },
> >> +	{ "VpLogicalProcessorDispatches", 42 },
> >> +	{ "VpWaitingForCpuTime", 43 },
> >> +	{ "VpExtendedHypercalls", 44 },
> >> +	{ "VpExtendedHypercallInterceptMessages", 45 },
> >> +	{ "VpMbecNestedPageTableSwitches", 46 },
> >> +	{ "VpOtherReflectedGuestExceptions", 47 },
> >> +	{ "VpGlobalIoTlbFlushes", 48 },
> >> +	{ "VpGlobalIoTlbFlushCost", 49 },
> >> +	{ "VpLocalIoTlbFlushes", 50 },
> >> +	{ "VpLocalIoTlbFlushCost", 51 },
> >> +	{ "VpFlushGuestPhysicalAddressSpaceHypercalls", 52 },
> >> +	{ "VpFlushGuestPhysicalAddressListHypercalls", 53 },
> >> +	{ "VpPostedInterruptNotifications", 54 },
> >> +	{ "VpPostedInterruptScans", 55 },
> >> +	{ "VpTotalCoreRunTime", 56 },
> >> +	{ "VpMaximumRunTime", 57 },
> >> +	{ "VpWaitingForCpuTimeBucket0", 58 },
> >> +	{ "VpWaitingForCpuTimeBucket1", 59 },
> >> +	{ "VpWaitingForCpuTimeBucket2", 60 },
> >> +	{ "VpWaitingForCpuTimeBucket3", 61 },
> >> +	{ "VpWaitingForCpuTimeBucket4", 62 },
> >> +	{ "VpWaitingForCpuTimeBucket5", 63 },
> >> +	{ "VpWaitingForCpuTimeBucket6", 64 },
> >> +	{ "VpHwpRequestContextSwitches", 65 },
> >> +	{ "VpPlaceholder2", 66 },
> >> +	{ "VpPlaceholder3", 67 },
> >> +	{ "VpPlaceholder4", 68 },
> >> +	{ "VpPlaceholder5", 69 },
> >> +	{ "VpPlaceholder6", 70 },
> >> +	{ "VpPlaceholder7", 71 },
> >> +	{ "VpPlaceholder8", 72 },
> >> +	{ "VpContentionTime", 73 },
> >> +	{ "VpWakeUpTime", 74 },
> >> +	{ "VpSchedulingPriority", 75 },
> >> +	{ "VpVtl1DispatchCount", 76 },
> >> +	{ "VpVtl2DispatchCount", 77 },
> >> +	{ "VpVtl2DispatchBucket0", 78 },
> >> +	{ "VpVtl2DispatchBucket1", 79 },
> >> +	{ "VpVtl2DispatchBucket2", 80 },
> >> +	{ "VpVtl2DispatchBucket3", 81 },
> >> +	{ "VpVtl2DispatchBucket4", 82 },
> >> +	{ "VpVtl2DispatchBucket5", 83 },
> >> +	{ "VpVtl2DispatchBucket6", 84 },
> >> +	{ "VpVtl1RunTime", 85 },
> >> +	{ "VpVtl2RunTime", 86 },
> >> +	{ "VpIommuHypercalls", 87 },
> >> +	{ "VpCpuGroupHypercalls", 88 },
> >> +	{ "VpVsmHypercalls", 89 },
> >> +	{ "VpEventLogHypercalls", 90 },
> >> +	{ "VpDeviceDomainHypercalls", 91 },
> >> +	{ "VpDepositHypercalls", 92 },
> >> +	{ "VpSvmHypercalls", 93 },
> >> +	{ "VpLoadAvg", 94 },
> >> +	{ "VpRootDispatchThreadBlocked", 95 },
> >> +	{ "VpIdleCpuTime", 96 },
> >> +	{ "VpWaitingForCpuTimeBucket7", 97 },
> >> +	{ "VpWaitingForCpuTimeBucket8", 98 },
> >> +	{ "VpWaitingForCpuTimeBucket9", 99 },
> >> +	{ "VpWaitingForCpuTimeBucket10", 100 },
> >> +	{ "VpWaitingForCpuTimeBucket11", 101 },
> >> +	{ "VpWaitingForCpuTimeBucket12", 102 },
> >> +	{ "VpHierarchicalSuspendTime", 103 },
> >> +	{ "VpExpressSchedulingAttempts", 104 },
> >> +	{ "VpExpressSchedulingCount", 105 },
> >> +#endif
> >> +};
> >> +
> > 
> > The patch puts a blank line at the end of the new hv_counters.c file. When using
> > "git am" to apply this patch, I get this warning:
> > 
> > .git/rebase-apply/patch:499: new blank line at EOF.
> > +
> > warning: 1 line adds whitespace errors.
> > 
> > Line 499 is that blank line at the end of the new file. If I modify the patch to remove
> > the adding of the blank line, "git am" will apply the patch with no warning. This
> > should probably be fixed.
> > 
> Thanks for pointing that out, I'll fix it!
> 
> > Michael

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-23 22:31       ` Stanislav Kinsburskii
@ 2026-01-24  0:13         ` Nuno Das Neves
  2026-01-24  0:44           ` Michael Kelley
  0 siblings, 1 reply; 22+ messages in thread
From: Nuno Das Neves @ 2026-01-24  0:13 UTC (permalink / raw)
  To: Stanislav Kinsburskii
  Cc: Michael Kelley, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, kys@microsoft.com,
	haiyangz@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	longli@microsoft.com, prapal@linux.microsoft.com,
	mrathor@linux.microsoft.com, paekkaladevi@linux.microsoft.com

On 1/23/2026 2:31 PM, Stanislav Kinsburskii wrote:
> On Fri, Jan 23, 2026 at 11:04:52AM -0800, Nuno Das Neves wrote:
>> On 1/23/2026 9:09 AM, Michael Kelley wrote:
>>> From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
>>>>
>>>> Introduce hv_counters.c, containing static data corresponding to
>>>> HV_*_COUNTER enums in the hypervisor source. Defining the enum
>>>> members as an array instead makes more sense, since it will be
>>>> iterated over to print counter information to debugfs.
>>>
>>> I would have expected the filename to be mshv_counters.c, so that the association
>>> with the MS hypervisor is clear. And the file is inextricably linked to mshv_debugfs.c,
>>> which of course has the "mshv_" prefix. Or is there some thinking I'm not aware of
>>> for using the "hv_" prefix?
>>>
>> Good question - I originally thought of using hv_ because the definitions inside are
>> part of the hypervisor ABI, and hence also have the hv_ prefix.
>>
>> However you have a good point, and I'm not opposed to changing it.
>>
>> Maybe to just be super explicit: "mshv_debugfs_counters.c" ?
>>
> 
> This is redundant from my POV.
> If these counters are only used by mshv_debugfs.c, then they should rather be
> a part of this file.
> What was the reason to move them elsewhere?
> 

Just a matter of taste - so there aren't ~450 lines of definitions at the beginning of
mshv_debugfs.c. But I'm not fussed. If you think it's better to just prepend the
definitions to mshv_debugfs.c, then that's an easy change.

Nuno

> Thanks,
> Stanislav
> 
>>> Also, I see in Patch 7 of this series that hv_counters.c is #included as a .c file
>>> in mshv_debugfs.c. Is there a reason for doing the #include instead of adding
>>> hv_counters.c to the Makefile and building it on its own? You would need to
>>> add a handful of extern statements to mshv_root.h so that the tables are
>>> referenceable from mshv_debugfs.c. But that would seem to be the more
>>> normal way of doing things.  #including a .c file is unusual.
>>>
>>
>> Yes...I thought I could avoid noise in mshv_root.h and the Makefile, since it's
>> only relevant for mshv_debugfs.c. However I could see this file (whether as .c or
>> .h) being misused and included elsewhere inadvertently, which would duplicate the
>> tables, so maybe doing it the normal way is a better idea, even if mshv_debugfs.c
>> is likely the only user.
>>
>>> See one more comment on the last line of this patch ...
>>>

<snip>


* RE: [PATCH v4 6/7] mshv: Add data for printing stats page counters
  2026-01-24  0:13         ` Nuno Das Neves
@ 2026-01-24  0:44           ` Michael Kelley
  0 siblings, 0 replies; 22+ messages in thread
From: Michael Kelley @ 2026-01-24  0:44 UTC (permalink / raw)
  To: Nuno Das Neves, Stanislav Kinsburskii
  Cc: linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com

From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Friday, January 23, 2026 4:13 PM
> 
> On 1/23/2026 2:31 PM, Stanislav Kinsburskii wrote:
> > On Fri, Jan 23, 2026 at 11:04:52AM -0800, Nuno Das Neves wrote:
> >> On 1/23/2026 9:09 AM, Michael Kelley wrote:
> >>> From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
> >>>>
> >>>> Introduce hv_counters.c, containing static data corresponding to
> >>>> HV_*_COUNTER enums in the hypervisor source. Defining the enum
> >>>> members as an array instead makes more sense, since it will be
> >>>> iterated over to print counter information to debugfs.
> >>>
> >>> I would have expected the filename to be mshv_counters.c, so that the association
> >>> with the MS hypervisor is clear. And the file is inextricably linked to mshv_debugfs.c,
> >>> which of course has the "mshv_" prefix. Or is there some thinking I'm not aware of
> >>> for using the "hv_" prefix?
> >>>
> >> Good question - I originally thought of using hv_ because the definitions inside are
> >> part of the hypervisor ABI, and hence also have the hv_ prefix.
> >>
> >> However you have a good point, and I'm not opposed to changing it.
> >>
> >> Maybe to just be super explicit: "mshv_debugfs_counters.c" ?
> >>
> >
> > This is redundant from my POV.
> > If these counters are only used by mshv_debugfs.c, then they should rather be
> > a part of this file.
> > What was the reason to move them elsewhere?
> >
> 
> Just a matter of taste - so there aren't ~450 lines of definitions at the beginning of
> mshv_debugfs.c. But I'm not fussed. If you think it's better to just prepend the
> definitions to mshv_debugfs.c, then that's an easy change.
> 
> Nuno

FWIW, I preferred the separate file so that the main debugfs code
isn't burdened with 450 lines of definitions that aren't going to be
edited/revised/improved via the typical processes. The current
mshv_debugfs.c is a reasonable 700 lines of code without all the
definitions.

But it's not a big deal for me either way.

Michael

> 
> > Thanks,
> > Stanislav
> >
> >>> Also, I see in Patch 7 of this series that hv_counters.c is #included as a .c file
> >>> in mshv_debugfs.c. Is there a reason for doing the #include instead of adding
> >>> hv_counters.c to the Makefile and building it on its own? You would need to
> >>> add a handful of extern statements to mshv_root.h so that the tables are
> >>> referenceable from mshv_debugfs.c. But that would seem to be the more
> >>> normal way of doing things.  #including a .c file is unusual.
> >>>
> >>
> >> Yes...I thought I could avoid noise in mshv_root.h and the Makefile, since it's
> >> only relevant for mshv_debugfs.c. However I could see this file (whether as .c or
> >> .h) being misused and included elsewhere inadvertently, which would duplicate the
> >> tables, so maybe doing it the normal way is a better idea, even if mshv_debugfs.c
> >> is likely the only user.
> >>
> >>> See one more comment on the last line of this patch ...
> >>>
> 
> <snip>



* RE: [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics
  2026-01-23 21:11     ` Nuno Das Neves
@ 2026-01-24  4:14       ` Michael Kelley
  0 siblings, 0 replies; 22+ messages in thread
From: Michael Kelley @ 2026-01-24  4:14 UTC (permalink / raw)
  To: Nuno Das Neves, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, skinsburskii@linux.microsoft.com
  Cc: kys@microsoft.com, haiyangz@microsoft.com, wei.liu@kernel.org,
	decui@microsoft.com, longli@microsoft.com,
	prapal@linux.microsoft.com, mrathor@linux.microsoft.com,
	paekkaladevi@linux.microsoft.com

From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Friday, January 23, 2026 1:11 PM
> 
> On 1/23/2026 9:09 AM, Michael Kelley wrote:
> > From: Nuno Das Neves <nunodasneves@linux.microsoft.com> Sent: Wednesday, January 21, 2026 1:46 PM
> >>
> >> Introduce a debugfs interface to expose root and child partition stats
> >> when running with mshv_root.
> >>
> >> Create a debugfs directory "mshv" containing 'stats' files organized by
> >> type and id. A stats file contains a number of counters depending on
> >> its type. e.g. an excerpt from a VP stats file:
> >>
> >> TotalRunTime                  : 1997602722
> >> HypervisorRunTime             : 649671371
> >> RemoteNodeRunTime             : 0
> >> NormalizedRunTime             : 1997602721
> >> IdealCpu                      : 0
> >> HypercallsCount               : 1708169
> >> HypercallsTime                : 111914774
> >> PageInvalidationsCount        : 0
> >> PageInvalidationsTime         : 0
> >>
> >> On a root partition with some active child partitions, the entire
> >> directory structure may look like:
> >>
> >> mshv/
> >>   stats             # hypervisor stats
> >>   lp/               # logical processors
> >>     0/              # LP id
> >>       stats         # LP 0 stats
> >>     1/
> >>     2/
> >>     3/
> >>   partition/        # partition stats
> >>     1/              # root partition id
> >>       stats         # root partition stats
> >>       vp/           # root virtual processors
> >>         0/          # root VP id
> >>           stats     # root VP 0 stats
> >>         1/
> >>         2/
> >>         3/
> >>     42/             # child partition id
> >>       stats         # child partition stats
> >>       vp/           # child VPs
> >>         0/          # child VP id
> >>           stats     # child VP 0 stats
> >>         1/
> >>     43/
> >>     55/
> >>
> >> On L1VH, some stats are not present as it does not own the hardware
> >> like the root partition does:
> >> - The hypervisor and lp stats are not present
> >> - L1VH's partition directory is named "self" because it can't get its
> >>   own id
> >> - Some of L1VH's partition and VP stats fields are not populated, because
> >>   it can't map its own HV_STATS_AREA_PARENT page.
> >>
> >> Co-developed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> >> Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> >> Co-developed-by: Praveen K Paladugu <prapal@linux.microsoft.com>
> >> Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
> >> Co-developed-by: Mukesh Rathor <mrathor@linux.microsoft.com>
> >> Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com>
> >> Co-developed-by: Purna Pavan Chandra Aekkaladevi
> >> <paekkaladevi@linux.microsoft.com>
> >> Signed-off-by: Purna Pavan Chandra Aekkaladevi <paekkaladevi@linux.microsoft.com>
> >> Co-developed-by: Jinank Jain <jinankjain@microsoft.com>
> >> Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
> >> Signed-off-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
> >> Reviewed-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
> >> ---
> >>  drivers/hv/Makefile         |   1 +
> >>  drivers/hv/hv_counters.c    |   1 +
> >>  drivers/hv/hv_synic.c       | 177 +++++++++
> >
> > This new file hv_synic.c seems to be spurious.  It looks like you unintentionally
> > picked up this new file from the build tree where you were creating the patches
> > for this series.
> >
> 
> Oh, that's embarrassing! Yes, it's a half-baked, unrelated work-in-progress...
> Please ignore!
> 
> <snip>
> >> diff --git a/drivers/hv/mshv_debugfs.c b/drivers/hv/mshv_debugfs.c
> >> new file mode 100644
> >> index 000000000000..72eb0ae44e4b
> >> --- /dev/null
> >> +++ b/drivers/hv/mshv_debugfs.c
> >> @@ -0,0 +1,703 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +/*
> >> + * Copyright (c) 2026, Microsoft Corporation.
> >> + *
> >> + * The /sys/kernel/debug/mshv directory contents.
> >> + * Contains various statistics data, provided by the hypervisor.
> >> + *
> >> + * Authors: Microsoft Linux virtualization team
> >> + */
> >> +
> >> +#include <linux/debugfs.h>
> >> +#include <linux/stringify.h>
> >> +#include <asm/mshyperv.h>
> >> +#include <linux/slab.h>
> >> +
> >> +#include "mshv.h"
> >> +#include "mshv_root.h"
> >> +
> >> +#include "hv_counters.c"
> >> +
> >> +#define U32_BUF_SZ 11
> >> +#define U64_BUF_SZ 21
> >> +#define NUM_STATS_AREAS (HV_STATS_AREA_PARENT + 1)
> >
> > This is sort of weak in that it doesn't really guard against
> > changes in the enum that defines HV_STATS_AREA_PARENT.
> > It would work if it were defined as part of the enum, but then
> > you are changing the code coming from the Windows world,
> > which I know is a different problem.
> >
> > The enum is part of the hypervisor ABI and hence isn't likely to
> > change, but it still feels funny to define NUM_STATS_AREAS like
> > this. I would suggest dropping this and just using
> > HV_STATS_AREA_COUNT for the memory allocations even
> > though doing so will allocate space for a stats area pointer
> > that isn't used by this code. It's only a few bytes.
> >
> 
> That would work, but then I'd want to have a comment explaining
> that the decision is intentional, otherwise I think it's just as
> confusing to have unexplained wasted space.

Yes, that's true.

> 
> Alternatively, the usage of SELF and PARENT (but not INTERNAL)
> could be made explicit by a compile-time check, and a comment to
> clarify:
> 
> /* Only support SELF and PARENT areas */
> #define NUM_STATS_AREAS 2
> static_assert(HV_STATS_AREA_SELF == 0 && HV_STATS_AREA_PARENT == 1,
> 	      "SELF and PARENT areas must be usable as indices into an array of size NUM_STATS_AREAS");

Works for me. 

[snip]

> >> +static int __init mshv_debugfs_hv_stats_create(struct dentry *parent)
> >> +{
> >> +	struct dentry *dentry;
> >> +	u64 *stats;
> >> +	int err;
> >> +
> >> +	stats = mshv_hv_stats_map();
> >> +	if (IS_ERR(stats))
> >> +		return PTR_ERR(stats);
> >> +
> >> +	dentry = debugfs_create_file("stats", 0400, parent,
> >> +				     stats, &hv_stats_fops);
> >> +	if (IS_ERR(dentry)) {
> >> +		err = PTR_ERR(dentry);
> >> +		pr_err("%s: failed to create hypervisor stats dentry: %d\n",
> >> +		       __func__, err);
> >> +		goto unmap_hv_stats;
> >> +	}
> >> +
> >> +	mshv_lps_count = num_present_cpus();
> >
> > This method of setting mshv_lps_count, and the iteration through the lp_index
> > in mshv_debugfs_lp_create() and mshv_debugfs_lp_remove(), seems risky. The
> > lp_index gets passed to the hypervisor, so it must be the hypervisor's concept
> > of the lp_index. Is that always guaranteed to be the same as Linux's numbering
> > of the present CPUs? There may be edge cases where it is not. For example, what
> > if Linux in the root partition were booted with the "nosmt" kernel boot option,
> > such that Linux ignores all the 2nd hyper-threads in a core? Could that create
> > a numbering mismatch?
> >
> 
> Ah, this was using the hypervisor stats page before; HvLogicalProcessors. But
> I removed the enum, so I thought this would be a reasonable way to get the number
> of LPs, but I think I'm mistaken.
> 
> For context, there is a fix to how LP and VP numbers are assigned in
> hv_smp_prepare_cpus(), but it's part of a future patchset. That fix ensures the
> LP indices are dense. The code looks like:
> 
> 	/* create dense LPs from 0-N for all apicids */
>         i = next_smallest_apicid(apicids, 0);
>         for (lpidx = 1; i != INT_MAX; lpidx++) {
>                 node = __apicid_to_node[i];
>                 if (node == NUMA_NO_NODE)
>                         node = 0;
> 
>                 /* params: node num, lp index, apic id */
>                 ret = hv_call_add_logical_proc(node, lpidx, i);
>                 BUG_ON(ret);
> 
>                 i = next_smallest_apicid(apicids, i);
>         }
> 
> 	/* create a VP for each present CPU */
>         lpidx = 1;         /* skip BSP cpu 0 */
>         for_each_present_cpu(i) {
>                 if (i == 0)
>                         continue;
> 
>                 /* params: node num, domid, vp index, lp index */
>                 ret = hv_call_create_vp(numa_cpu_node(i),
>                                         hv_current_partition_id, lpidx, lpidx);
>                 BUG_ON(ret);
>                 lpidx++;
>         }
> 
> For what it's worth, with that fix^ I tested with "nosmt" and things worked as I
> would expect: All LPs were displayed in debugfs, but every second LP was not in
> use by Linux, as evidenced by e.g. the number of timer interrupts not going up:
> LpTimerInterrupts            : 1
> 
> Also, only every second VP was created (0, 2, 4, 6...) since the others aren't
> in the present mask at boot.

OK -- I'm glad to hear that someone has thought about this potential
issue. next_smallest_apicid() looks like it should handle the gaps that can occur
in APIC IDs, as I pointed out in my separate email to you about an issue unrelated
to this patch set.

> 
> > Note that for vp_index, we have the hv_vp_index[] array for translating from
> > Linux's concept of a CPU number to Hyper-V's concept of vp_index. For
> > example, mshv_debugfs_parent_partition_create() correctly goes through
> > this translation. And presumably when the VMM code does the
> > MSHV_CREATE_VP ioctl, it is passing in a hypervisor vp_index.
> >
> > Everything may work fine "as is" for the moment, but the lp functions here
> > are still conflating the hypervisor's LP numbering with Linux's CPU numbering,
> > and that seems like a recipe for trouble somewhere down the road. I'm
> > not sure how the hypervisor interprets the "lp_index" part of the identity
> > argument passed to a hypercall, so I'm not sure what the fix is.
> >
> 
> The simplest thing for now might be to bring back that enum value
> HvLogicalProcessors just for this one usage. I'll admit I'm not familiar with
> all the nuances here so there are still probably edge cases here.

Yes, that seems like it would make sense. At least it would have the
hypervisor reporting how many LPs it thinks there are, instead of
getting Linux's view, which might be different. Guaranteeing that the
LP indices are dense can come later since in all likelihood they are
dense by default. It's the potential for an atypical corner case that
I was worried about.

Michael

> 
> >> +
> >> +	return 0;
> >> +
> >> +unmap_hv_stats:
> >> +	mshv_hv_stats_unmap();
> >> +	return err;
> >> +}
> >> +
> <snip>



end of thread, other threads:[~2026-01-24  4:14 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-21 21:46 [PATCH v4 0/7] mshv: Debugfs interface for mshv_root Nuno Das Neves
2026-01-21 21:46 ` [PATCH v4 1/7] mshv: Ignore second stats page map result failure Nuno Das Neves
2026-01-21 21:46 ` [PATCH v4 2/7] mshv: Use typed hv_stats_page pointers Nuno Das Neves
2026-01-21 21:46 ` [PATCH v4 3/7] mshv: Improve mshv_vp_stats_map/unmap(), add them to mshv_root.h Nuno Das Neves
2026-01-21 21:46 ` [PATCH v4 4/7] mshv: Always map child vp stats pages regardless of scheduler type Nuno Das Neves
2026-01-21 21:46 ` [PATCH v4 5/7] mshv: Update hv_stats_page definitions Nuno Das Neves
2026-01-22  1:22   ` Stanislav Kinsburskii
2026-01-21 21:46 ` [PATCH v4 6/7] mshv: Add data for printing stats page counters Nuno Das Neves
2026-01-22  1:18   ` Stanislav Kinsburskii
2026-01-22 18:21     ` Nuno Das Neves
2026-01-22 18:52       ` Michael Kelley
2026-01-23 22:28       ` Stanislav Kinsburskii
2026-01-23 17:09   ` Michael Kelley
2026-01-23 19:04     ` Nuno Das Neves
2026-01-23 19:10       ` Michael Kelley
2026-01-23 22:31       ` Stanislav Kinsburskii
2026-01-24  0:13         ` Nuno Das Neves
2026-01-24  0:44           ` Michael Kelley
2026-01-21 21:46 ` [PATCH v4 7/7] mshv: Add debugfs to view hypervisor statistics Nuno Das Neves
2026-01-23 17:09   ` Michael Kelley
2026-01-23 21:11     ` Nuno Das Neves
2026-01-24  4:14       ` Michael Kelley

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox