Kernel KVM virtualization development
* [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code
@ 2025-06-10 19:54 Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 01/14] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
                   ` (14 more replies)
  0 siblings, 15 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Copy KVM selftests' X86_PROPERTY_* infrastructure (multi-bit CPUID
fields), and use the properties to clean up various warts.  The SEV code
in particular makes things much harder than they need to be.

Note, this applies on kvm-x86/next.

v2:
 - Avoid tabs immediately after #defines. [Dapeng]
 - Squash the arch events vs. GP counters fixes into one patch. [Dapeng]
 - Mask available arch events based on the enumerated bit vector width. [Dapeng]
 - Add a missing space in a printf argument. [Liam]
 - Collect reviews. [Dapeng, Liam]

v1: https://lore.kernel.org/all/20250529221929.3807680-1-seanjc@google.com

Sean Christopherson (14):
  x86: Encode X86_FEATURE_* definitions using a structure
  x86: Add X86_PROPERTY_* framework to retrieve CPUID values
  x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()
  x86: Implement get_supported_xcr0() using
    X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}
  x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES
  x86/pmu: Mark all arch events as available on AMD, and rename fields
  x86/pmu: Mark Intel architectural event available iff X <=
    CPUID.0xA.EAX[31:24]
  x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
  x86/sev: Use VC_VECTOR from processor.h
  x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled
  x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F
  x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location
  x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled
  x86: Move SEV MSR definitions to msr.h

 lib/x86/amd_sev.c   |  48 ++-----
 lib/x86/amd_sev.h   |  29 ----
 lib/x86/msr.h       |   6 +
 lib/x86/pmu.c       |  25 ++--
 lib/x86/pmu.h       |   8 +-
 lib/x86/processor.h | 312 ++++++++++++++++++++++++++++++++------------
 x86/amd_sev.c       |  63 ++-------
 x86/la57.c          |   2 +-
 x86/pmu.c           |   9 +-
 x86/xsave.c         |  11 +-
 10 files changed, 273 insertions(+), 240 deletions(-)


base-commit: 0293b912a7e7c019ed0144ad9ee62c09b0b61de2
-- 
2.50.0.rc0.642.g800a2b2222-goog


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [kvm-unit-tests PATCH v2 01/14] x86: Encode X86_FEATURE_* definitions using a structure
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 02/14] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Encode X86_FEATURE_* macros using a new "struct x86_cpu_feature" instead
of manually packing the values into a u64.  Using a structure eliminates
open-coded shifts and masks, and is largely self-documenting.
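As a sanity check on the encoding above, here is a minimal standalone sketch (illustrative only, not part of the patch) showing that a structure with these field widths still occupies exactly 64 bits, so it can be passed by value as cheaply as the old packed u64:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone mirror of the patch's struct x86_cpu_feature; the field
 * names match the diff, but this program is purely illustrative. */
struct x86_cpu_feature {
	uint32_t function;
	uint16_t index;
	uint8_t  reg;
	uint8_t  bit;
};

/* 4 + 2 + 1 + 1 bytes with no padding: still a single 64-bit value. */
_Static_assert(sizeof(struct x86_cpu_feature) == 8,
	       "feature must fit in 64 bits");

enum { EAX, EBX, ECX, EDX };

/* Example definition, equivalent to X86_FEATURE_XSAVE in the diff. */
static const struct x86_cpu_feature feature_xsave = {
	.function = 0x1, .index = 0, .reg = ECX, .bit = 26,
};
```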

Opportunistically replace single tabs with single spaces after #define
for relevant code; the existing code uses a mix of both, and a single
space is far more common.

Note, the code and naming scheme are stolen from KVM selftests.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/processor.h | 171 ++++++++++++++++++++++++--------------------
 1 file changed, 95 insertions(+), 76 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 5bc9ef89..d86fa0cf 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -6,6 +6,7 @@
 #include "msr.h"
 #include <bitops.h>
 #include <stdint.h>
+#include <util.h>
 
 #define CANONICAL_48_VAL 0xffffaaaaaaaaaaaaull
 #define CANONICAL_57_VAL 0xffaaaaaaaaaaaaaaull
@@ -232,100 +233,118 @@ static inline bool is_intel(void)
 	return strcmp((char *)name, "GenuineIntel") == 0;
 }
 
-#define	CPUID(a, b, c, d) ((((unsigned long long) a) << 32) | (b << 16) | \
-			  (c << 8) | d)
-
 /*
- * Each X86_FEATURE_XXX definition is 64-bit and contains the following
- * CPUID meta-data:
- *
- * 	[63:32] :  input value for EAX
- * 	[31:16] :  input value for ECX
- * 	[15:8]  :  output register
- * 	[7:0]   :  bit position in output register
+ * Pack the information into a 64-bit value so that each X86_FEATURE_XXX can be
+ * passed by value with no overhead.
  */
+struct x86_cpu_feature {
+	u32	function;
+	u16	index;
+	u8	reg;
+	u8	bit;
+};
+
+#define X86_CPU_FEATURE(fn, idx, gpr, __bit)					\
+({										\
+	struct x86_cpu_feature feature = {					\
+		.function = fn,							\
+		.index = idx,							\
+		.reg = gpr,							\
+		.bit = __bit,							\
+	};									\
+										\
+	static_assert((fn & 0xc0000000) == 0 ||					\
+		      (fn & 0xc0000000) == 0x40000000 ||			\
+		      (fn & 0xc0000000) == 0x80000000 ||			\
+		      (fn & 0xc0000000) == 0xc0000000);				\
+	static_assert(idx < BIT(sizeof(feature.index) * BITS_PER_BYTE));	\
+	feature;								\
+})
 
 /*
  * Basic Leafs, a.k.a. Intel defined
  */
-#define	X86_FEATURE_MWAIT		(CPUID(0x1, 0, ECX, 3))
-#define	X86_FEATURE_VMX			(CPUID(0x1, 0, ECX, 5))
-#define	X86_FEATURE_PDCM		(CPUID(0x1, 0, ECX, 15))
-#define	X86_FEATURE_PCID		(CPUID(0x1, 0, ECX, 17))
-#define X86_FEATURE_X2APIC		(CPUID(0x1, 0, ECX, 21))
-#define	X86_FEATURE_MOVBE		(CPUID(0x1, 0, ECX, 22))
-#define	X86_FEATURE_TSC_DEADLINE_TIMER	(CPUID(0x1, 0, ECX, 24))
-#define	X86_FEATURE_XSAVE		(CPUID(0x1, 0, ECX, 26))
-#define	X86_FEATURE_OSXSAVE		(CPUID(0x1, 0, ECX, 27))
-#define	X86_FEATURE_RDRAND		(CPUID(0x1, 0, ECX, 30))
-#define	X86_FEATURE_MCE			(CPUID(0x1, 0, EDX, 7))
-#define	X86_FEATURE_APIC		(CPUID(0x1, 0, EDX, 9))
-#define	X86_FEATURE_CLFLUSH		(CPUID(0x1, 0, EDX, 19))
-#define	X86_FEATURE_DS			(CPUID(0x1, 0, EDX, 21))
-#define	X86_FEATURE_XMM			(CPUID(0x1, 0, EDX, 25))
-#define	X86_FEATURE_XMM2		(CPUID(0x1, 0, EDX, 26))
-#define	X86_FEATURE_TSC_ADJUST		(CPUID(0x7, 0, EBX, 1))
-#define	X86_FEATURE_HLE			(CPUID(0x7, 0, EBX, 4))
-#define	X86_FEATURE_SMEP		(CPUID(0x7, 0, EBX, 7))
-#define	X86_FEATURE_INVPCID		(CPUID(0x7, 0, EBX, 10))
-#define	X86_FEATURE_RTM			(CPUID(0x7, 0, EBX, 11))
-#define	X86_FEATURE_SMAP		(CPUID(0x7, 0, EBX, 20))
-#define	X86_FEATURE_PCOMMIT		(CPUID(0x7, 0, EBX, 22))
-#define	X86_FEATURE_CLFLUSHOPT		(CPUID(0x7, 0, EBX, 23))
-#define	X86_FEATURE_CLWB		(CPUID(0x7, 0, EBX, 24))
-#define X86_FEATURE_INTEL_PT		(CPUID(0x7, 0, EBX, 25))
-#define	X86_FEATURE_UMIP		(CPUID(0x7, 0, ECX, 2))
-#define	X86_FEATURE_PKU			(CPUID(0x7, 0, ECX, 3))
-#define	X86_FEATURE_LA57		(CPUID(0x7, 0, ECX, 16))
-#define	X86_FEATURE_RDPID		(CPUID(0x7, 0, ECX, 22))
-#define	X86_FEATURE_SHSTK		(CPUID(0x7, 0, ECX, 7))
-#define	X86_FEATURE_IBT			(CPUID(0x7, 0, EDX, 20))
-#define	X86_FEATURE_SPEC_CTRL		(CPUID(0x7, 0, EDX, 26))
-#define	X86_FEATURE_FLUSH_L1D		(CPUID(0x7, 0, EDX, 28))
-#define	X86_FEATURE_ARCH_CAPABILITIES	(CPUID(0x7, 0, EDX, 29))
-#define	X86_FEATURE_PKS			(CPUID(0x7, 0, ECX, 31))
-#define	X86_FEATURE_LAM			(CPUID(0x7, 1, EAX, 26))
+#define X86_FEATURE_MWAIT		X86_CPU_FEATURE(0x1, 0, ECX, 3)
+#define X86_FEATURE_VMX			X86_CPU_FEATURE(0x1, 0, ECX, 5)
+#define X86_FEATURE_PDCM		X86_CPU_FEATURE(0x1, 0, ECX, 15)
+#define X86_FEATURE_PCID		X86_CPU_FEATURE(0x1, 0, ECX, 17)
+#define X86_FEATURE_X2APIC		X86_CPU_FEATURE(0x1, 0, ECX, 21)
+#define X86_FEATURE_MOVBE		X86_CPU_FEATURE(0x1, 0, ECX, 22)
+#define X86_FEATURE_TSC_DEADLINE_TIMER	X86_CPU_FEATURE(0x1, 0, ECX, 24)
+#define X86_FEATURE_XSAVE		X86_CPU_FEATURE(0x1, 0, ECX, 26)
+#define X86_FEATURE_OSXSAVE		X86_CPU_FEATURE(0x1, 0, ECX, 27)
+#define X86_FEATURE_RDRAND		X86_CPU_FEATURE(0x1, 0, ECX, 30)
+#define X86_FEATURE_MCE			X86_CPU_FEATURE(0x1, 0, EDX, 7)
+#define X86_FEATURE_APIC		X86_CPU_FEATURE(0x1, 0, EDX, 9)
+#define X86_FEATURE_CLFLUSH		X86_CPU_FEATURE(0x1, 0, EDX, 19)
+#define X86_FEATURE_DS			X86_CPU_FEATURE(0x1, 0, EDX, 21)
+#define X86_FEATURE_XMM			X86_CPU_FEATURE(0x1, 0, EDX, 25)
+#define X86_FEATURE_XMM2		X86_CPU_FEATURE(0x1, 0, EDX, 26)
+#define X86_FEATURE_TSC_ADJUST		X86_CPU_FEATURE(0x7, 0, EBX, 1)
+#define X86_FEATURE_HLE			X86_CPU_FEATURE(0x7, 0, EBX, 4)
+#define X86_FEATURE_SMEP		X86_CPU_FEATURE(0x7, 0, EBX, 7)
+#define X86_FEATURE_INVPCID		X86_CPU_FEATURE(0x7, 0, EBX, 10)
+#define X86_FEATURE_RTM			X86_CPU_FEATURE(0x7, 0, EBX, 11)
+#define X86_FEATURE_SMAP		X86_CPU_FEATURE(0x7, 0, EBX, 20)
+#define X86_FEATURE_PCOMMIT		X86_CPU_FEATURE(0x7, 0, EBX, 22)
+#define X86_FEATURE_CLFLUSHOPT		X86_CPU_FEATURE(0x7, 0, EBX, 23)
+#define X86_FEATURE_CLWB		X86_CPU_FEATURE(0x7, 0, EBX, 24)
+#define X86_FEATURE_INTEL_PT		X86_CPU_FEATURE(0x7, 0, EBX, 25)
+#define X86_FEATURE_UMIP		X86_CPU_FEATURE(0x7, 0, ECX, 2)
+#define X86_FEATURE_PKU			X86_CPU_FEATURE(0x7, 0, ECX, 3)
+#define X86_FEATURE_LA57		X86_CPU_FEATURE(0x7, 0, ECX, 16)
+#define X86_FEATURE_RDPID		X86_CPU_FEATURE(0x7, 0, ECX, 22)
+#define X86_FEATURE_SHSTK		X86_CPU_FEATURE(0x7, 0, ECX, 7)
+#define X86_FEATURE_IBT			X86_CPU_FEATURE(0x7, 0, EDX, 20)
+#define X86_FEATURE_SPEC_CTRL		X86_CPU_FEATURE(0x7, 0, EDX, 26)
+#define X86_FEATURE_FLUSH_L1D		X86_CPU_FEATURE(0x7, 0, EDX, 28)
+#define X86_FEATURE_ARCH_CAPABILITIES	X86_CPU_FEATURE(0x7, 0, EDX, 29)
+#define X86_FEATURE_PKS			X86_CPU_FEATURE(0x7, 0, ECX, 31)
+#define X86_FEATURE_LAM			X86_CPU_FEATURE(0x7, 1, EAX, 26)
 
 /*
  * KVM defined leafs
  */
-#define	KVM_FEATURE_ASYNC_PF		(CPUID(0x40000001, 0, EAX, 4))
-#define	KVM_FEATURE_ASYNC_PF_INT	(CPUID(0x40000001, 0, EAX, 14))
+#define KVM_FEATURE_ASYNC_PF		X86_CPU_FEATURE(0x40000001, 0, EAX, 4)
+#define KVM_FEATURE_ASYNC_PF_INT	X86_CPU_FEATURE(0x40000001, 0, EAX, 14)
 
 /*
  * Extended Leafs, a.k.a. AMD defined
  */
-#define	X86_FEATURE_SVM			(CPUID(0x80000001, 0, ECX, 2))
-#define	X86_FEATURE_PERFCTR_CORE	(CPUID(0x80000001, 0, ECX, 23))
-#define	X86_FEATURE_NX			(CPUID(0x80000001, 0, EDX, 20))
-#define	X86_FEATURE_GBPAGES		(CPUID(0x80000001, 0, EDX, 26))
-#define	X86_FEATURE_RDTSCP		(CPUID(0x80000001, 0, EDX, 27))
-#define	X86_FEATURE_LM			(CPUID(0x80000001, 0, EDX, 29))
-#define	X86_FEATURE_RDPRU		(CPUID(0x80000008, 0, EBX, 4))
-#define	X86_FEATURE_AMD_IBPB		(CPUID(0x80000008, 0, EBX, 12))
-#define	X86_FEATURE_NPT			(CPUID(0x8000000A, 0, EDX, 0))
-#define	X86_FEATURE_LBRV		(CPUID(0x8000000A, 0, EDX, 1))
-#define	X86_FEATURE_NRIPS		(CPUID(0x8000000A, 0, EDX, 3))
-#define X86_FEATURE_TSCRATEMSR		(CPUID(0x8000000A, 0, EDX, 4))
-#define X86_FEATURE_PAUSEFILTER		(CPUID(0x8000000A, 0, EDX, 10))
-#define X86_FEATURE_PFTHRESHOLD		(CPUID(0x8000000A, 0, EDX, 12))
-#define	X86_FEATURE_VGIF		(CPUID(0x8000000A, 0, EDX, 16))
-#define X86_FEATURE_VNMI		(CPUID(0x8000000A, 0, EDX, 25))
-#define	X86_FEATURE_AMD_PMU_V2		(CPUID(0x80000022, 0, EAX, 0))
+#define X86_FEATURE_SVM			X86_CPU_FEATURE(0x80000001, 0, ECX, 2)
+#define X86_FEATURE_PERFCTR_CORE	X86_CPU_FEATURE(0x80000001, 0, ECX, 23)
+#define X86_FEATURE_NX			X86_CPU_FEATURE(0x80000001, 0, EDX, 20)
+#define X86_FEATURE_GBPAGES		X86_CPU_FEATURE(0x80000001, 0, EDX, 26)
+#define X86_FEATURE_RDTSCP		X86_CPU_FEATURE(0x80000001, 0, EDX, 27)
+#define X86_FEATURE_LM			X86_CPU_FEATURE(0x80000001, 0, EDX, 29)
+#define X86_FEATURE_RDPRU		X86_CPU_FEATURE(0x80000008, 0, EBX, 4)
+#define X86_FEATURE_AMD_IBPB		X86_CPU_FEATURE(0x80000008, 0, EBX, 12)
+#define X86_FEATURE_NPT			X86_CPU_FEATURE(0x8000000A, 0, EDX, 0)
+#define X86_FEATURE_LBRV		X86_CPU_FEATURE(0x8000000A, 0, EDX, 1)
+#define X86_FEATURE_NRIPS		X86_CPU_FEATURE(0x8000000A, 0, EDX, 3)
+#define X86_FEATURE_TSCRATEMSR		X86_CPU_FEATURE(0x8000000A, 0, EDX, 4)
+#define X86_FEATURE_PAUSEFILTER		X86_CPU_FEATURE(0x8000000A, 0, EDX, 10)
+#define X86_FEATURE_PFTHRESHOLD		X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
+#define X86_FEATURE_VGIF		X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
+#define X86_FEATURE_VNMI		X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
+#define X86_FEATURE_AMD_PMU_V2		X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
 
-static inline bool this_cpu_has(u64 feature)
+static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
 {
-	u32 input_eax = feature >> 32;
-	u32 input_ecx = (feature >> 16) & 0xffff;
-	u32 output_reg = (feature >> 8) & 0xff;
-	u8 bit = feature & 0xff;
-	struct cpuid c;
-	u32 *tmp;
+	union {
+		struct cpuid cpuid;
+		u32 gprs[4];
+	} c;
 
-	c = cpuid_indexed(input_eax, input_ecx);
-	tmp = (u32 *)&c;
+	c.cpuid = cpuid_indexed(function, index);
 
-	return ((*(tmp + (output_reg % 32))) & (1 << bit));
+	return (c.gprs[reg] & GENMASK(hi, lo)) >> lo;
+}
+
+static inline bool this_cpu_has(struct x86_cpu_feature feature)
+{
+	return __this_cpu_has(feature.function, feature.index,
+			      feature.reg, feature.bit, feature.bit);
 }
 
 struct far_pointer32 {
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 02/14] x86: Add X86_PROPERTY_* framework to retrieve CPUID values
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 01/14] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 03/14] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Introduce X86_PROPERTY_* to allow retrieving values/properties from CPUID
leafs, e.g. MAXPHYADDR from CPUID.0x80000008.  Use the same core code as
X86_FEATURE_*; the primary difference is that properties are multi-bit
values, whereas features enumerate a single bit.

Add this_cpu_has_p() to allow querying whether or not a property exists
based on the maximum leaf associated with the property, e.g. MAXPHYADDR
doesn't exist if the max leaf for 0x8000_xxxx is less than 0x8000_0008.
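The core of a property read is just an extraction of bits [hi:lo] from one CPUID output register. A standalone sketch of that extraction (hypothetical helper name; the patch's GENMASK()-based code builds the same mask), using plausible example register values rather than real CPUID output:

```c
#include <assert.h>
#include <stdint.h>

/* Extract bits [hi:lo] from a 32-bit CPUID output register, inclusive
 * on both ends, mirroring what this_cpu_property() does in the patch. */
static uint32_t extract_bits(uint32_t reg, uint8_t lo, uint8_t hi)
{
	uint32_t width = (uint32_t)hi - lo + 1;
	uint32_t mask = width == 32 ? ~0u : (1u << width) - 1;

	return (reg >> lo) & mask;
}
```

For example, if CPUID.0x80000008.EAX were 0x00003028 (a made-up value), X86_PROPERTY_MAX_PHY_ADDR (bits 7:0) would read 40 and X86_PROPERTY_MAX_VIRT_ADDR (bits 15:8) would read 48.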

Use the new property infrastructure in cpuid_maxphyaddr() to prove that
the code works as intended.  Future patches will convert additional code.

Note, the code, nomenclature, changelog, etc. are all stolen from KVM
selftests.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/processor.h | 109 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 102 insertions(+), 7 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index d86fa0cf..e6bd964f 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -218,13 +218,6 @@ static inline struct cpuid cpuid(u32 function)
 	return cpuid_indexed(function, 0);
 }
 
-static inline u8 cpuid_maxphyaddr(void)
-{
-	if (raw_cpuid(0x80000000, 0).a < 0x80000008)
-	return 36;
-	return raw_cpuid(0x80000008, 0).a & 0xff;
-}
-
 static inline bool is_intel(void)
 {
 	struct cpuid c = cpuid(0);
@@ -329,6 +322,74 @@ struct x86_cpu_feature {
 #define X86_FEATURE_VNMI		X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
 #define X86_FEATURE_AMD_PMU_V2		X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
 
+/*
+ * Same idea as X86_FEATURE_XXX, but X86_PROPERTY_XXX retrieves a multi-bit
+ * value/property as opposed to a single-bit feature.  Again, pack the info
+ * into a 64-bit value to pass by value with no overhead on 64-bit builds.
+ */
+struct x86_cpu_property {
+	u32	function;
+	u8	index;
+	u8	reg;
+	u8	lo_bit;
+	u8	hi_bit;
+};
+#define X86_CPU_PROPERTY(fn, idx, gpr, low_bit, high_bit)			\
+({										\
+	struct x86_cpu_property property = {					\
+		.function = fn,							\
+		.index = idx,							\
+		.reg = gpr,							\
+		.lo_bit = low_bit,						\
+		.hi_bit = high_bit,						\
+	};									\
+										\
+	static_assert(low_bit < high_bit);					\
+	static_assert((fn & 0xc0000000) == 0 ||					\
+		      (fn & 0xc0000000) == 0x40000000 ||			\
+		      (fn & 0xc0000000) == 0x80000000 ||			\
+		      (fn & 0xc0000000) == 0xc0000000);				\
+	static_assert(idx < BIT(sizeof(property.index) * BITS_PER_BYTE));	\
+	property;								\
+})
+
+#define X86_PROPERTY_MAX_BASIC_LEAF		X86_CPU_PROPERTY(0, 0, EAX, 0, 31)
+#define X86_PROPERTY_PMU_VERSION		X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
+#define X86_PROPERTY_PMU_NR_GP_COUNTERS		X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
+#define X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH	X86_CPU_PROPERTY(0xa, 0, EAX, 16, 23)
+#define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH	X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
+#define X86_PROPERTY_PMU_EVENTS_MASK		X86_CPU_PROPERTY(0xa, 0, EBX, 0, 7)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK	X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
+#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS	X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH	X86_CPU_PROPERTY(0xa, 0, EDX, 5, 12)
+
+#define X86_PROPERTY_SUPPORTED_XCR0_LO		X86_CPU_PROPERTY(0xd,  0, EAX,  0, 31)
+#define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0	X86_CPU_PROPERTY(0xd,  0, EBX,  0, 31)
+#define X86_PROPERTY_XSTATE_MAX_SIZE		X86_CPU_PROPERTY(0xd,  0, ECX,  0, 31)
+#define X86_PROPERTY_SUPPORTED_XCR0_HI		X86_CPU_PROPERTY(0xd,  0, EDX,  0, 31)
+
+#define X86_PROPERTY_XSTATE_TILE_SIZE		X86_CPU_PROPERTY(0xd, 18, EAX,  0, 31)
+#define X86_PROPERTY_XSTATE_TILE_OFFSET		X86_CPU_PROPERTY(0xd, 18, EBX,  0, 31)
+#define X86_PROPERTY_AMX_MAX_PALETTE_TABLES	X86_CPU_PROPERTY(0x1d, 0, EAX,  0, 31)
+#define X86_PROPERTY_AMX_TOTAL_TILE_BYTES	X86_CPU_PROPERTY(0x1d, 1, EAX,  0, 15)
+#define X86_PROPERTY_AMX_BYTES_PER_TILE		X86_CPU_PROPERTY(0x1d, 1, EAX, 16, 31)
+#define X86_PROPERTY_AMX_BYTES_PER_ROW		X86_CPU_PROPERTY(0x1d, 1, EBX, 0,  15)
+#define X86_PROPERTY_AMX_NR_TILE_REGS		X86_CPU_PROPERTY(0x1d, 1, EBX, 16, 31)
+#define X86_PROPERTY_AMX_MAX_ROWS		X86_CPU_PROPERTY(0x1d, 1, ECX, 0,  15)
+
+#define X86_PROPERTY_MAX_KVM_LEAF		X86_CPU_PROPERTY(0x40000000, 0, EAX, 0, 31)
+
+#define X86_PROPERTY_MAX_EXT_LEAF		X86_CPU_PROPERTY(0x80000000, 0, EAX, 0, 31)
+#define X86_PROPERTY_MAX_PHY_ADDR		X86_CPU_PROPERTY(0x80000008, 0, EAX, 0, 7)
+#define X86_PROPERTY_MAX_VIRT_ADDR		X86_CPU_PROPERTY(0x80000008, 0, EAX, 8, 15)
+#define X86_PROPERTY_GUEST_MAX_PHY_ADDR		X86_CPU_PROPERTY(0x80000008, 0, EAX, 16, 23)
+#define X86_PROPERTY_SEV_C_BIT			X86_CPU_PROPERTY(0x8000001F, 0, EBX, 0, 5)
+#define X86_PROPERTY_PHYS_ADDR_REDUCTION	X86_CPU_PROPERTY(0x8000001F, 0, EBX, 6, 11)
+#define X86_PROPERTY_NR_PERFCTR_CORE		X86_CPU_PROPERTY(0x80000022, 0, EBX, 0, 3)
+#define X86_PROPERTY_NR_PERFCTR_NB		X86_CPU_PROPERTY(0x80000022, 0, EBX, 10, 15)
+
+#define X86_PROPERTY_MAX_CENTAUR_LEAF		X86_CPU_PROPERTY(0xC0000000, 0, EAX, 0, 31)
+
 static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
 {
 	union {
@@ -347,6 +408,40 @@ static inline bool this_cpu_has(struct x86_cpu_feature feature)
 			      feature.reg, feature.bit, feature.bit);
 }
 
+static inline uint32_t this_cpu_property(struct x86_cpu_property property)
+{
+	return __this_cpu_has(property.function, property.index,
+			      property.reg, property.lo_bit, property.hi_bit);
+}
+
+static __always_inline bool this_cpu_has_p(struct x86_cpu_property property)
+{
+	uint32_t max_leaf;
+
+	switch (property.function & 0xc0000000) {
+	case 0:
+		max_leaf = this_cpu_property(X86_PROPERTY_MAX_BASIC_LEAF);
+		break;
+	case 0x40000000:
+		max_leaf = this_cpu_property(X86_PROPERTY_MAX_KVM_LEAF);
+		break;
+	case 0x80000000:
+		max_leaf = this_cpu_property(X86_PROPERTY_MAX_EXT_LEAF);
+		break;
+	case 0xc0000000:
+		max_leaf = this_cpu_property(X86_PROPERTY_MAX_CENTAUR_LEAF);
+	}
+	return max_leaf >= property.function;
+}
+
+static inline u8 cpuid_maxphyaddr(void)
+{
+	if (!this_cpu_has_p(X86_PROPERTY_MAX_PHY_ADDR))
+		return 36;
+
+	return this_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);
+}
+
 struct far_pointer32 {
 	u32 offset;
 	u16 selector;
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 03/14] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 01/14] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 02/14] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 04/14] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() instead of open coding a
*very* rough equivalent.  Default to a maximum virtual address width of
48 bits instead of 64 bits to better match real x86 CPUs (and Intel and
AMD architectures).
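The canonical check itself is a sign-extension round trip: shift the address up so the highest implemented virtual-address bit lands in bit 63, arithmetically shift back down, and compare. A minimal standalone sketch (hypothetical helper, with the width passed in rather than read from CPUID):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* An address is canonical iff bits [63:va_width-1] are all equal, i.e.
 * iff sign-extending from the top implemented bit is a no-op. */
static bool is_canonical_va(uint64_t addr, int va_width)
{
	int shift = 64 - va_width;

	return ((int64_t)(addr << shift) >> shift) == (int64_t)addr;
}
```

With va_width = 48, 0xffffaaaaaaaaaaaa (the series' CANONICAL_48_VAL) passes, while 0x0000800000000000 fails because bit 47 is set but bits 63:48 are clear.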

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/processor.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index e6bd964f..10391cc0 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -1022,9 +1022,14 @@ static inline void write_pkru(u32 pkru)
 
 static inline bool is_canonical(u64 addr)
 {
-	int va_width = (raw_cpuid(0x80000008, 0).a & 0xff00) >> 8;
-	int shift_amt = 64 - va_width;
+	int va_width, shift_amt;
 
+	if (this_cpu_has_p(X86_PROPERTY_MAX_VIRT_ADDR))
+		va_width = this_cpu_property(X86_PROPERTY_MAX_VIRT_ADDR);
+	else
+		va_width = 48;
+
+	shift_amt = 64 - va_width;
 	return (s64)(addr << shift_amt) >> shift_amt == addr;
 }
 
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 04/14] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (2 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 03/14] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 05/14] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Use X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} to implement get_supported_xcr0().

Opportunistically rename the helper and move it to processor.h.
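A standalone sketch of the combine step (hypothetical helper name): the deleted helper's `r.a + ((u64)r.d << 32)` and the new OR of the LO/HI properties are equivalent because the two 32-bit halves occupy disjoint bit ranges.

```c
#include <assert.h>
#include <stdint.h>

/* Combine CPUID.0xD.0.EAX (low 32 bits) and CPUID.0xD.0.EDX (high 32
 * bits) into the 64-bit supported-XCR0 mask; illustrative only. */
static uint64_t supported_xcr0(uint32_t lo, uint32_t hi)
{
	return (uint64_t)lo | ((uint64_t)hi << 32);
}
```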

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/processor.h |  9 +++++++++
 x86/xsave.c         | 11 +----------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 10391cc0..b3ea6881 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -442,6 +442,15 @@ static inline u8 cpuid_maxphyaddr(void)
 	return this_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);
 }
 
+static inline u64 this_cpu_supported_xcr0(void)
+{
+	if (!this_cpu_has_p(X86_PROPERTY_SUPPORTED_XCR0_LO))
+		return 0;
+
+	return (u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_LO) |
+	       ((u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
+}
+
 struct far_pointer32 {
 	u32 offset;
 	u16 selector;
diff --git a/x86/xsave.c b/x86/xsave.c
index 5d80f245..cc8e3a0a 100644
--- a/x86/xsave.c
+++ b/x86/xsave.c
@@ -8,15 +8,6 @@
 #define uint64_t unsigned long long
 #endif
 
-static uint64_t get_supported_xcr0(void)
-{
-    struct cpuid r;
-    r = cpuid_indexed(0xd, 0);
-    printf("eax %x, ebx %x, ecx %x, edx %x\n",
-            r.a, r.b, r.c, r.d);
-    return r.a + ((u64)r.d << 32);
-}
-
 #define XCR_XFEATURE_ENABLED_MASK       0x00000000
 #define XCR_XFEATURE_ILLEGAL_MASK       0x00000010
 
@@ -33,7 +24,7 @@ static void test_xsave(void)
 
     printf("Legal instruction testing:\n");
 
-    supported_xcr0 = get_supported_xcr0();
+    supported_xcr0 = this_cpu_supported_xcr0();
     printf("Supported XCR0 bits: %#lx\n", supported_xcr0);
 
     test_bits = XSTATE_FP | XSTATE_SSE;
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 05/14] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (3 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 04/14] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields Sean Christopherson
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Add a definition for X86_PROPERTY_INTEL_PT_NR_RANGES, and use it instead
of open coding equivalent logic in the LA57 testcase that verifies the
canonical address behavior of PT MSRs.

No functional change intended.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/processor.h | 3 +++
 x86/la57.c          | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index b3ea6881..e3b3df89 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -370,6 +370,9 @@ struct x86_cpu_property {
 
 #define X86_PROPERTY_XSTATE_TILE_SIZE		X86_CPU_PROPERTY(0xd, 18, EAX,  0, 31)
 #define X86_PROPERTY_XSTATE_TILE_OFFSET		X86_CPU_PROPERTY(0xd, 18, EBX,  0, 31)
+
+#define X86_PROPERTY_INTEL_PT_NR_RANGES		X86_CPU_PROPERTY(0x14, 1, EAX,  0, 2)
+
 #define X86_PROPERTY_AMX_MAX_PALETTE_TABLES	X86_CPU_PROPERTY(0x1d, 0, EAX,  0, 31)
 #define X86_PROPERTY_AMX_TOTAL_TILE_BYTES	X86_CPU_PROPERTY(0x1d, 1, EAX,  0, 15)
 #define X86_PROPERTY_AMX_BYTES_PER_TILE		X86_CPU_PROPERTY(0x1d, 1, EAX, 16, 31)
diff --git a/x86/la57.c b/x86/la57.c
index d93e286c..aaf9d974 100644
--- a/x86/la57.c
+++ b/x86/la57.c
@@ -288,7 +288,7 @@ static void __test_canonical_checks(bool force_emulation)
 
 	/* PT filter ranges */
 	if (this_cpu_has(X86_FEATURE_INTEL_PT)) {
-		int n_ranges = cpuid_indexed(0x14, 0x1).a & 0x7;
+		int n_ranges = this_cpu_property(X86_PROPERTY_INTEL_PT_NR_RANGES);
 		int i;
 
 		for (i = 0 ; i < n_ranges ; i++) {
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (4 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 05/14] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-11  1:32   ` Mi, Dapeng
                     ` (2 more replies)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24] Sean Christopherson
                   ` (8 subsequent siblings)
  14 siblings, 3 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Mark all arch events as available on AMD, as AMD PMUs don't provide the
"not available" CPUID field, and the number of GP counters has nothing to
do with which architectural events are available/supported.

Rename gp_counter_mask_length to arch_event_mask_length, and
pmu_gp_counter_is_available() to pmu_arch_event_is_available(), to
reflect what the field and helper actually track.

Cc: Dapeng Mi <dapeng1.mi@linux.intel.com>
Fixes: b883751a ("x86/pmu: Update testcases to cover AMD PMU")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/pmu.c | 10 +++++-----
 lib/x86/pmu.h |  8 ++++----
 x86/pmu.c     |  8 ++++----
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index d06e9455..d37c874c 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -18,10 +18,10 @@ void pmu_init(void)
 
 		pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
 		pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
-		pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
+		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
 
-		/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
-		pmu.gp_counter_available = ~cpuid_10.b;
+		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
+		pmu.arch_event_available = ~cpuid_10.b;
 
 		if (this_cpu_has(X86_FEATURE_PDCM))
 			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
@@ -50,8 +50,8 @@ void pmu_init(void)
 			pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
 		}
 		pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
-		pmu.gp_counter_mask_length = pmu.nr_gp_counters;
-		pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
+		pmu.arch_event_mask_length = 32;
+		pmu.arch_event_available = -1u;
 
 		if (this_cpu_has_perf_global_status()) {
 			pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
index f07fbd93..c7dc68c1 100644
--- a/lib/x86/pmu.h
+++ b/lib/x86/pmu.h
@@ -63,8 +63,8 @@ struct pmu_caps {
 	u8 fixed_counter_width;
 	u8 nr_gp_counters;
 	u8 gp_counter_width;
-	u8 gp_counter_mask_length;
-	u32 gp_counter_available;
+	u8 arch_event_mask_length;
+	u32 arch_event_available;
 	u32 msr_gp_counter_base;
 	u32 msr_gp_event_select_base;
 
@@ -110,9 +110,9 @@ static inline bool this_cpu_has_perf_global_status(void)
 	return pmu.version > 1;
 }
 
-static inline bool pmu_gp_counter_is_available(int i)
+static inline bool pmu_arch_event_is_available(int i)
 {
-	return pmu.gp_counter_available & BIT(i);
+	return pmu.arch_event_available & BIT(i);
 }
 
 static inline u64 pmu_lbr_version(void)
diff --git a/x86/pmu.c b/x86/pmu.c
index 45c6db3c..e79122ed 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -436,7 +436,7 @@ static void check_gp_counters(void)
 	int i;
 
 	for (i = 0; i < gp_events_size; i++)
-		if (pmu_gp_counter_is_available(i))
+		if (pmu_arch_event_is_available(i))
 			check_gp_counter(&gp_events[i]);
 		else
 			printf("GP event '%s' is disabled\n",
@@ -463,7 +463,7 @@ static void check_counters_many(void)
 	int i, n;
 
 	for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
-		if (!pmu_gp_counter_is_available(i))
+		if (!pmu_arch_event_is_available(i))
 			continue;
 
 		cnt[n].ctr = MSR_GP_COUNTERx(n);
@@ -902,7 +902,7 @@ static void set_ref_cycle_expectations(void)
 	uint64_t t0, t1, t2, t3;
 
 	/* Bit 2 enumerates the availability of reference cycles events. */
-	if (!pmu.nr_gp_counters || !pmu_gp_counter_is_available(2))
+	if (!pmu.nr_gp_counters || !pmu_arch_event_is_available(2))
 		return;
 
 	t0 = fenced_rdtsc();
@@ -992,7 +992,7 @@ int main(int ac, char **av)
 	printf("PMU version:         %d\n", pmu.version);
 	printf("GP counters:         %d\n", pmu.nr_gp_counters);
 	printf("GP counter width:    %d\n", pmu.gp_counter_width);
-	printf("Mask length:         %d\n", pmu.gp_counter_mask_length);
+	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
 	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
 	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
 
-- 
2.50.0.rc0.642.g800a2b2222-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24]
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (5 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-11  1:35   ` Mi, Dapeng
  2025-06-11 12:10   ` Liam Merwick
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
                   ` (7 subsequent siblings)
  14 siblings, 2 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Mask the set of available architectural events based on the bit vector
length to avoid marking reserved/undefined events as available.  Per the
SDM:

  EAX Bits 31-24: Length of EBX bit vector to enumerate architectural
                  performance monitoring events. Architectural event x is
                  supported if EBX[x]=0 && EAX[31:24]>x.

Suggested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/pmu.c | 3 ++-
 x86/pmu.c     | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index d37c874c..92707698 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -21,7 +21,8 @@ void pmu_init(void)
 		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
 
 		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
-		pmu.arch_event_available = ~cpuid_10.b;
+		pmu.arch_event_available = ~cpuid_10.b &
+					   (BIT(pmu.arch_event_mask_length) - 1);
 
 		if (this_cpu_has(X86_FEATURE_PDCM))
 			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
diff --git a/x86/pmu.c b/x86/pmu.c
index e79122ed..3987311c 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -993,6 +993,7 @@ int main(int ac, char **av)
 	printf("GP counters:         %d\n", pmu.nr_gp_counters);
 	printf("GP counter width:    %d\n", pmu.gp_counter_width);
 	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
+	printf("Arch Events (mask):  0x%x\n", pmu.arch_event_available);
 	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
 	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
 
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (6 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24] Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-13  6:25   ` Sandipan Das
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 09/14] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Use the recently introduced X86_PROPERTY_PMU_* macros to get PMU
information instead of open coding equivalent functionality.

No functional change intended.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/pmu.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 92707698..fb46b196 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -7,21 +7,19 @@ void pmu_init(void)
 	pmu.is_intel = is_intel();
 
 	if (pmu.is_intel) {
-		struct cpuid cpuid_10 = cpuid(10);
-
-		pmu.version = cpuid_10.a & 0xff;
+		pmu.version = this_cpu_property(X86_PROPERTY_PMU_VERSION);
 
 		if (pmu.version > 1) {
-			pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
-			pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
+			pmu.nr_fixed_counters = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+			pmu.fixed_counter_width = this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH);
 		}
 
-		pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
-		pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
-		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
+		pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+		pmu.gp_counter_width = this_cpu_property(X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH);
+		pmu.arch_event_mask_length = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
 
 		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
-		pmu.arch_event_available = ~cpuid_10.b &
+		pmu.arch_event_available = ~this_cpu_property(X86_PROPERTY_PMU_EVENTS_MASK) &
 					   (BIT(pmu.arch_event_mask_length) - 1);
 
 		if (this_cpu_has(X86_FEATURE_PDCM))
@@ -39,7 +37,7 @@ void pmu_init(void)
 			/* Performance Monitoring Version 2 Supported */
 			if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) {
 				pmu.version = 2;
-				pmu.nr_gp_counters = cpuid(0x80000022).b & 0xf;
+				pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_NR_PERFCTR_CORE);
 			} else {
 				pmu.nr_gp_counters = AMD64_NUM_COUNTERS_CORE;
 			}
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 09/14] x86/sev: Use VC_VECTOR from processor.h
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (7 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Use VC_VECTOR (defined in processor.h along with all other known vectors)
and drop the one-off SEV_ES_VC_HANDLER_VECTOR macro.

No functional change intended.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/amd_sev.c | 4 ++--
 lib/x86/amd_sev.h | 6 ------
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 66722141..6c0a66ac 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -111,9 +111,9 @@ efi_status_t setup_amd_sev_es(void)
 	 */
 	sidt(&idtr);
 	idt = (idt_entry_t *)idtr.base;
-	vc_handler_idt = idt[SEV_ES_VC_HANDLER_VECTOR];
+	vc_handler_idt = idt[VC_VECTOR];
 	vc_handler_idt.selector = KERNEL_CS;
-	boot_idt[SEV_ES_VC_HANDLER_VECTOR] = vc_handler_idt;
+	boot_idt[VC_VECTOR] = vc_handler_idt;
 
 	return EFI_SUCCESS;
 }
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index ed6e3385..ca7216d4 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -39,12 +39,6 @@
 bool amd_sev_enabled(void);
 efi_status_t setup_amd_sev(void);
 
-/*
- * AMD Programmer's Manual Volume 2
- *   - Section "#VC Exception"
- */
-#define SEV_ES_VC_HANDLER_VECTOR 29
-
 /*
  * AMD Programmer's Manual Volume 2
  *   - Section "GHCB"
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (8 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 09/14] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-11 12:28   ` Liam Merwick
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Skip the AMD SEV test if SEV is unsupported, as KVM-unit-tests typically
don't report failures when a feature is missing.

Opportunistically use amd_sev_enabled() instead of duplicating all of its
functionality.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 x86/amd_sev.c | 51 +++++++--------------------------------------------
 1 file changed, 7 insertions(+), 44 deletions(-)

diff --git a/x86/amd_sev.c b/x86/amd_sev.c
index 7757d4f8..4ec45543 100644
--- a/x86/amd_sev.c
+++ b/x86/amd_sev.c
@@ -15,51 +15,10 @@
 #include "x86/amd_sev.h"
 #include "msr.h"
 
-#define EXIT_SUCCESS 0
-#define EXIT_FAILURE 1
-
 #define TESTDEV_IO_PORT 0xe0
 
 static char st1[] = "abcdefghijklmnop";
 
-static int test_sev_activation(void)
-{
-	struct cpuid cpuid_out;
-	u64 msr_out;
-
-	printf("SEV activation test is loaded.\n");
-
-	/* Tests if CPUID function to check SEV is implemented */
-	cpuid_out = cpuid(CPUID_FN_LARGEST_EXT_FUNC_NUM);
-	printf("CPUID Fn8000_0000[EAX]: 0x%08x\n", cpuid_out.a);
-	if (cpuid_out.a < CPUID_FN_ENCRYPT_MEM_CAPAB) {
-		printf("CPUID does not support FN%08x\n",
-		       CPUID_FN_ENCRYPT_MEM_CAPAB);
-		return EXIT_FAILURE;
-	}
-
-	/* Tests if SEV is supported */
-	cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
-	printf("CPUID Fn8000_001F[EAX]: 0x%08x\n", cpuid_out.a);
-	printf("CPUID Fn8000_001F[EBX]: 0x%08x\n", cpuid_out.b);
-	if (!(cpuid_out.a & SEV_SUPPORT_MASK)) {
-		printf("SEV is not supported.\n");
-		return EXIT_FAILURE;
-	}
-	printf("SEV is supported\n");
-
-	/* Tests if SEV is enabled */
-	msr_out = rdmsr(MSR_SEV_STATUS);
-	printf("MSR C001_0131[EAX]: 0x%08lx\n", msr_out & 0xffffffff);
-	if (!(msr_out & SEV_ENABLED_MASK)) {
-		printf("SEV is not enabled.\n");
-		return EXIT_FAILURE;
-	}
-	printf("SEV is enabled\n");
-
-	return EXIT_SUCCESS;
-}
-
 static void test_sev_es_activation(void)
 {
 	if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
@@ -88,10 +47,14 @@ static void test_stringio(void)
 
 int main(void)
 {
-	int rtn;
-	rtn = test_sev_activation();
-	report(rtn == EXIT_SUCCESS, "SEV activation test.");
+	if (!amd_sev_enabled()) {
+		report_skip("AMD SEV not enabled\n");
+		goto out;
+	}
+
 	test_sev_es_activation();
 	test_stringio();
+
+out:
 	return report_summary();
 }
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (9 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-11 12:38   ` Liam Merwick
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Define proper X86_FEATURE_* flags for CPUID 0x8000001F, and use them
instead of open coding equivalent checks in amd_sev_{,es_}enabled().

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/amd_sev.c   | 32 +++++---------------------------
 lib/x86/amd_sev.h   |  3 ---
 lib/x86/processor.h |  9 +++++++++
 3 files changed, 14 insertions(+), 30 deletions(-)

diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 6c0a66ac..b7cefd0f 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -17,31 +17,15 @@ static unsigned short amd_sev_c_bit_pos;
 
 bool amd_sev_enabled(void)
 {
-	struct cpuid cpuid_out;
 	static bool sev_enabled;
 	static bool initialized = false;
 
 	/* Check CPUID and MSR for SEV status and store it for future function calls. */
 	if (!initialized) {
-		sev_enabled = false;
 		initialized = true;
 
-		/* Test if we can query SEV features */
-		cpuid_out = cpuid(CPUID_FN_LARGEST_EXT_FUNC_NUM);
-		if (cpuid_out.a < CPUID_FN_ENCRYPT_MEM_CAPAB) {
-			return sev_enabled;
-		}
-
-		/* Test if SEV is supported */
-		cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
-		if (!(cpuid_out.a & SEV_SUPPORT_MASK)) {
-			return sev_enabled;
-		}
-
-		/* Test if SEV is enabled */
-		if (rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK) {
-			sev_enabled = true;
-		}
+		sev_enabled = this_cpu_has(X86_FEATURE_SEV) &&
+			      rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK;
 	}
 
 	return sev_enabled;
@@ -72,17 +56,11 @@ bool amd_sev_es_enabled(void)
 	static bool initialized = false;
 
 	if (!initialized) {
-		sev_es_enabled = false;
 		initialized = true;
 
-		if (!amd_sev_enabled()) {
-			return sev_es_enabled;
-		}
-
-		/* Test if SEV-ES is enabled */
-		if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
-			sev_es_enabled = true;
-		}
+		sev_es_enabled = amd_sev_enabled() &&
+				 this_cpu_has(X86_FEATURE_SEV_ES) &&
+				 rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK;
 	}
 
 	return sev_es_enabled;
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index ca7216d4..defcda75 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -21,12 +21,9 @@
 
 /*
  * AMD Programmer's Manual Volume 3
- *   - Section "Function 8000_0000h - Maximum Extended Function Number and Vendor String"
  *   - Section "Function 8000_001Fh - Encrypted Memory Capabilities"
  */
-#define CPUID_FN_LARGEST_EXT_FUNC_NUM 0x80000000
 #define CPUID_FN_ENCRYPT_MEM_CAPAB    0x8000001f
-#define SEV_SUPPORT_MASK              0b10
 
 /*
  * AMD Programmer's Manual Volume 2
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index e3b3df89..1adfd027 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -320,6 +320,15 @@ struct x86_cpu_feature {
 #define X86_FEATURE_PFTHRESHOLD		X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
 #define X86_FEATURE_VGIF		X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
 #define X86_FEATURE_VNMI		X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
+#define X86_FEATURE_SME			X86_CPU_FEATURE(0x8000001F, 0, EAX,  0)
+#define X86_FEATURE_SEV			X86_CPU_FEATURE(0x8000001F, 0, EAX,  1)
+#define X86_FEATURE_VM_PAGE_FLUSH	X86_CPU_FEATURE(0x8000001F, 0, EAX,  2)
+#define X86_FEATURE_SEV_ES		X86_CPU_FEATURE(0x8000001F, 0, EAX,  3)
+#define X86_FEATURE_SEV_SNP		X86_CPU_FEATURE(0x8000001F, 0, EAX,  4)
+#define X86_FEATURE_V_TSC_AUX		X86_CPU_FEATURE(0x8000001F, 0, EAX,  9)
+#define X86_FEATURE_SME_COHERENT	X86_CPU_FEATURE(0x8000001F, 0, EAX, 10)
+#define X86_FEATURE_DEBUG_SWAP		X86_CPU_FEATURE(0x8000001F, 0, EAX, 14)
+#define X86_FEATURE_SVSM		X86_CPU_FEATURE(0x8000001F, 0, EAX, 28)
 #define X86_FEATURE_AMD_PMU_V2		X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
 
 /*
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (10 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-11 12:58   ` Liam Merwick
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 13/14] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Use X86_PROPERTY_SEV_C_BIT instead of open coding equivalent functionality,
and delete the overly-verbose CPUID_FN_ENCRYPT_MEM_CAPAB macro.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/amd_sev.c | 10 +---------
 lib/x86/amd_sev.h |  6 ------
 2 files changed, 1 insertion(+), 15 deletions(-)

diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index b7cefd0f..da0e2077 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -33,19 +33,11 @@ bool amd_sev_enabled(void)
 
 efi_status_t setup_amd_sev(void)
 {
-	struct cpuid cpuid_out;
-
 	if (!amd_sev_enabled()) {
 		return EFI_UNSUPPORTED;
 	}
 
-	/*
-	 * Extract C-Bit position from ebx[5:0]
-	 * AMD64 Architecture Programmer's Manual Volume 3
-	 *   - Section " Function 8000_001Fh - Encrypted Memory Capabilities"
-	 */
-	cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
-	amd_sev_c_bit_pos = (unsigned short)(cpuid_out.b & 0x3f);
+	amd_sev_c_bit_pos = this_cpu_property(X86_PROPERTY_SEV_C_BIT);
 
 	return EFI_SUCCESS;
 }
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index defcda75..daa33a05 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -19,12 +19,6 @@
 #include "asm/page.h"
 #include "efi.h"
 
-/*
- * AMD Programmer's Manual Volume 3
- *   - Section "Function 8000_001Fh - Encrypted Memory Capabilities"
- */
-#define CPUID_FN_ENCRYPT_MEM_CAPAB    0x8000001f
-
 /*
  * AMD Programmer's Manual Volume 2
  *   - Section "SEV_STATUS MSR"
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 13/14] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (11 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 14/14] x86: Move SEV MSR definitions to msr.h Sean Christopherson
  2025-06-25 22:25 ` [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Use amd_sev_es_enabled() in the SEV string I/O test instead of manually
checking the SEV_STATUS MSR.

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 x86/amd_sev.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/x86/amd_sev.c b/x86/amd_sev.c
index 4ec45543..3e80d28b 100644
--- a/x86/amd_sev.c
+++ b/x86/amd_sev.c
@@ -19,15 +19,6 @@
 
 static char st1[] = "abcdefghijklmnop";
 
-static void test_sev_es_activation(void)
-{
-	if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
-		printf("SEV-ES is enabled.\n");
-	} else {
-		printf("SEV-ES is not enabled.\n");
-	}
-}
-
 static void test_stringio(void)
 {
 	int st1_len = sizeof(st1) - 1;
@@ -52,7 +43,8 @@ int main(void)
 		goto out;
 	}
 
-	test_sev_es_activation();
+	printf("SEV-ES is %senabled.\n", amd_sev_es_enabled() ? "" : "not ");
+
 	test_stringio();
 
 out:
-- 
2.50.0.rc0.642.g800a2b2222-goog



* [kvm-unit-tests PATCH v2 14/14] x86: Move SEV MSR definitions to msr.h
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (12 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 13/14] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
@ 2025-06-10 19:54 ` Sean Christopherson
  2025-06-11 15:41   ` Liam Merwick
  2025-06-25 22:25 ` [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
  14 siblings, 1 reply; 26+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Dapeng Mi, Sean Christopherson, Liam Merwick

Move the SEV MSR definitions to msr.h so that they're available for non-EFI
builds.  There is nothing EFI specific about the architectural definitions.

Opportunistically massage the names to align with existing style.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/amd_sev.c |  6 +++---
 lib/x86/amd_sev.h | 14 --------------
 lib/x86/msr.h     |  6 ++++++
 3 files changed, 9 insertions(+), 17 deletions(-)

diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index da0e2077..7c6d2804 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -25,7 +25,7 @@ bool amd_sev_enabled(void)
 		initialized = true;
 
 		sev_enabled = this_cpu_has(X86_FEATURE_SEV) &&
-			      rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK;
+			      rdmsr(MSR_SEV_STATUS) & SEV_STATUS_SEV_ENABLED;
 	}
 
 	return sev_enabled;
@@ -52,7 +52,7 @@ bool amd_sev_es_enabled(void)
 
 		sev_es_enabled = amd_sev_enabled() &&
 				 this_cpu_has(X86_FEATURE_SEV_ES) &&
-				 rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK;
+				 rdmsr(MSR_SEV_STATUS) & SEV_STATUS_SEV_ES_ENABLED;
 	}
 
 	return sev_es_enabled;
@@ -100,7 +100,7 @@ void setup_ghcb_pte(pgd_t *page_table)
 	pteval_t *pte;
 
 	/* Read the current GHCB page addr */
-	ghcb_addr = rdmsr(SEV_ES_GHCB_MSR_INDEX);
+	ghcb_addr = rdmsr(MSR_SEV_ES_GHCB);
 
 	/* Search Level 1 page table entry for GHCB page */
 	pte = get_pte_level(page_table, (void *)ghcb_addr, 1);
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index daa33a05..9d587e2d 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -19,23 +19,9 @@
 #include "asm/page.h"
 #include "efi.h"
 
-/*
- * AMD Programmer's Manual Volume 2
- *   - Section "SEV_STATUS MSR"
- */
-#define MSR_SEV_STATUS      0xc0010131
-#define SEV_ENABLED_MASK    0b1
-#define SEV_ES_ENABLED_MASK 0b10
-
 bool amd_sev_enabled(void);
 efi_status_t setup_amd_sev(void);
 
-/*
- * AMD Programmer's Manual Volume 2
- *   - Section "GHCB"
- */
-#define SEV_ES_GHCB_MSR_INDEX 0xc0010130
-
 bool amd_sev_es_enabled(void);
 efi_status_t setup_amd_sev_es(void);
 void setup_ghcb_pte(pgd_t *page_table);
diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 658d237f..ccfd6bdd 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -523,4 +523,10 @@
 #define MSR_VM_IGNNE                    0xc0010115
 #define MSR_VM_HSAVE_PA                 0xc0010117
 
+#define MSR_SEV_STATUS			0xc0010131
+#define SEV_STATUS_SEV_ENABLED		BIT(0)
+#define SEV_STATUS_SEV_ES_ENABLED	BIT(1)
+
+#define MSR_SEV_ES_GHCB			0xc0010130
+
 #endif /* _X86_MSR_H_ */
-- 
2.50.0.rc0.642.g800a2b2222-goog



* Re: [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields Sean Christopherson
@ 2025-06-11  1:32   ` Mi, Dapeng
  2025-06-11 12:02   ` Liam Merwick
  2025-06-13  6:24   ` Sandipan Das
  2 siblings, 0 replies; 26+ messages in thread
From: Mi, Dapeng @ 2025-06-11  1:32 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Liam Merwick


On 6/11/2025 3:54 AM, Sean Christopherson wrote:
> Mark all arch events as available on AMD, as AMD PMUs don't provide the
> "not available" CPUID field, and the number of GP counters has nothing to
> do with which architectural events are available/supported.
>
> Rename gp_counter_mask_length to arch_event_mask_length, and
> pmu_gp_counter_is_available() to pmu_arch_event_is_available(), to
> reflect what the field and helper actually track.
>
> Cc: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Fixes: b883751a ("x86/pmu: Update testcases to cover AMD PMU")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  lib/x86/pmu.c | 10 +++++-----
>  lib/x86/pmu.h |  8 ++++----
>  x86/pmu.c     |  8 ++++----
>  3 files changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index d06e9455..d37c874c 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -18,10 +18,10 @@ void pmu_init(void)
>  
>  		pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
>  		pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
> -		pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
> +		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
>  
> -		/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
> -		pmu.gp_counter_available = ~cpuid_10.b;
> +		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
> +		pmu.arch_event_available = ~cpuid_10.b;
>  
>  		if (this_cpu_has(X86_FEATURE_PDCM))
>  			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> @@ -50,8 +50,8 @@ void pmu_init(void)
>  			pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
>  		}
>  		pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
> -		pmu.gp_counter_mask_length = pmu.nr_gp_counters;
> -		pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
> +		pmu.arch_event_mask_length = 32;
> +		pmu.arch_event_available = -1u;
>  
>  		if (this_cpu_has_perf_global_status()) {
>  			pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
> diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
> index f07fbd93..c7dc68c1 100644
> --- a/lib/x86/pmu.h
> +++ b/lib/x86/pmu.h
> @@ -63,8 +63,8 @@ struct pmu_caps {
>  	u8 fixed_counter_width;
>  	u8 nr_gp_counters;
>  	u8 gp_counter_width;
> -	u8 gp_counter_mask_length;
> -	u32 gp_counter_available;
> +	u8 arch_event_mask_length;
> +	u32 arch_event_available;
>  	u32 msr_gp_counter_base;
>  	u32 msr_gp_event_select_base;
>  
> @@ -110,9 +110,9 @@ static inline bool this_cpu_has_perf_global_status(void)
>  	return pmu.version > 1;
>  }
>  
> -static inline bool pmu_gp_counter_is_available(int i)
> +static inline bool pmu_arch_event_is_available(int i)
>  {
> -	return pmu.gp_counter_available & BIT(i);
> +	return pmu.arch_event_available & BIT(i);
>  }
>  
>  static inline u64 pmu_lbr_version(void)
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 45c6db3c..e79122ed 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -436,7 +436,7 @@ static void check_gp_counters(void)
>  	int i;
>  
>  	for (i = 0; i < gp_events_size; i++)
> -		if (pmu_gp_counter_is_available(i))
> +		if (pmu_arch_event_is_available(i))
>  			check_gp_counter(&gp_events[i]);
>  		else
>  			printf("GP event '%s' is disabled\n",
> @@ -463,7 +463,7 @@ static void check_counters_many(void)
>  	int i, n;
>  
>  	for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
> -		if (!pmu_gp_counter_is_available(i))
> +		if (!pmu_arch_event_is_available(i))
>  			continue;
>  
>  		cnt[n].ctr = MSR_GP_COUNTERx(n);
> @@ -902,7 +902,7 @@ static void set_ref_cycle_expectations(void)
>  	uint64_t t0, t1, t2, t3;
>  
>  	/* Bit 2 enumerates the availability of reference cycles events. */
> -	if (!pmu.nr_gp_counters || !pmu_gp_counter_is_available(2))
> +	if (!pmu.nr_gp_counters || !pmu_arch_event_is_available(2))
>  		return;
>  
>  	t0 = fenced_rdtsc();
> @@ -992,7 +992,7 @@ int main(int ac, char **av)
>  	printf("PMU version:         %d\n", pmu.version);
>  	printf("GP counters:         %d\n", pmu.nr_gp_counters);
>  	printf("GP counter width:    %d\n", pmu.gp_counter_width);
> -	printf("Mask length:         %d\n", pmu.gp_counter_mask_length);
> +	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
>  	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
>  	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
>  

Tested this patch on Intel platform (Sapphire Rapids). No issue found.

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>

Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>




* Re: [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24]
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24] Sean Christopherson
@ 2025-06-11  1:35   ` Mi, Dapeng
  2025-06-11 12:10   ` Liam Merwick
  1 sibling, 0 replies; 26+ messages in thread
From: Mi, Dapeng @ 2025-06-11  1:35 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Liam Merwick


On 6/11/2025 3:54 AM, Sean Christopherson wrote:
> Mask the set of available architectural events based on the bit vector
> length to avoid marking reserved/undefined events as available.  Per the
> SDM:
>
>   EAX Bits 31-24: Length of EBX bit vector to enumerate architectural
>                   performance monitoring events. Architectural event x is
>                   supported if EBX[x]=0 && EAX[31:24]>x.
>
> Suggested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  lib/x86/pmu.c | 3 ++-
>  x86/pmu.c     | 1 +
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index d37c874c..92707698 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -21,7 +21,8 @@ void pmu_init(void)
>  		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
>  
>  		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
> -		pmu.arch_event_available = ~cpuid_10.b;
> +		pmu.arch_event_available = ~cpuid_10.b &
> +					   (BIT(pmu.arch_event_mask_length) - 1);
>  
>  		if (this_cpu_has(X86_FEATURE_PDCM))
>  			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> diff --git a/x86/pmu.c b/x86/pmu.c
> index e79122ed..3987311c 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -993,6 +993,7 @@ int main(int ac, char **av)
>  	printf("GP counters:         %d\n", pmu.nr_gp_counters);
>  	printf("GP counter width:    %d\n", pmu.gp_counter_width);
>  	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
> +	printf("Arch Events (mask):  0x%x\n", pmu.arch_event_available);
>  	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
>  	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
>  

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>




* Re: [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields Sean Christopherson
  2025-06-11  1:32   ` Mi, Dapeng
@ 2025-06-11 12:02   ` Liam Merwick
  2025-06-13  6:24   ` Sandipan Das
  2 siblings, 0 replies; 26+ messages in thread
From: Liam Merwick @ 2025-06-11 12:02 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi



On 10/06/2025 20:54, Sean Christopherson wrote:
> Mark all arch events as available on AMD, as AMD PMUs don't provide the
> "not available" CPUID field, and the number of GP counters has nothing to
> do with which architectural events are available/supported.
> 
> Rename gp_counter_mask_length to arch_event_mask_length, and
> pmu_gp_counter_is_available() to pmu_arch_event_is_available(), to
> reflect what the field and helper actually track.
> 
> Cc: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Fixes: b883751a ("x86/pmu: Update testcases to cover AMD PMU")
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>

> ---
>   lib/x86/pmu.c | 10 +++++-----
>   lib/x86/pmu.h |  8 ++++----
>   x86/pmu.c     |  8 ++++----
>   3 files changed, 13 insertions(+), 13 deletions(-)
> 
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index d06e9455..d37c874c 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -18,10 +18,10 @@ void pmu_init(void)
>   
>   		pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
>   		pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
> -		pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
> +		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
>   
> -		/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
> -		pmu.gp_counter_available = ~cpuid_10.b;
> +		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
> +		pmu.arch_event_available = ~cpuid_10.b;
>   
>   		if (this_cpu_has(X86_FEATURE_PDCM))
>   			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> @@ -50,8 +50,8 @@ void pmu_init(void)
>   			pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
>   		}
>   		pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
> -		pmu.gp_counter_mask_length = pmu.nr_gp_counters;
> -		pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
> +		pmu.arch_event_mask_length = 32;
> +		pmu.arch_event_available = -1u;
>   
>   		if (this_cpu_has_perf_global_status()) {
>   			pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
> diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
> index f07fbd93..c7dc68c1 100644
> --- a/lib/x86/pmu.h
> +++ b/lib/x86/pmu.h
> @@ -63,8 +63,8 @@ struct pmu_caps {
>   	u8 fixed_counter_width;
>   	u8 nr_gp_counters;
>   	u8 gp_counter_width;
> -	u8 gp_counter_mask_length;
> -	u32 gp_counter_available;
> +	u8 arch_event_mask_length;
> +	u32 arch_event_available;
>   	u32 msr_gp_counter_base;
>   	u32 msr_gp_event_select_base;
>   
> @@ -110,9 +110,9 @@ static inline bool this_cpu_has_perf_global_status(void)
>   	return pmu.version > 1;
>   }
>   
> -static inline bool pmu_gp_counter_is_available(int i)
> +static inline bool pmu_arch_event_is_available(int i)
>   {
> -	return pmu.gp_counter_available & BIT(i);
> +	return pmu.arch_event_available & BIT(i);
>   }
>   
>   static inline u64 pmu_lbr_version(void)
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 45c6db3c..e79122ed 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -436,7 +436,7 @@ static void check_gp_counters(void)
>   	int i;
>   
>   	for (i = 0; i < gp_events_size; i++)
> -		if (pmu_gp_counter_is_available(i))
> +		if (pmu_arch_event_is_available(i))
>   			check_gp_counter(&gp_events[i]);
>   		else
>   			printf("GP event '%s' is disabled\n",
> @@ -463,7 +463,7 @@ static void check_counters_many(void)
>   	int i, n;
>   
>   	for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
> -		if (!pmu_gp_counter_is_available(i))
> +		if (!pmu_arch_event_is_available(i))
>   			continue;
>   
>   		cnt[n].ctr = MSR_GP_COUNTERx(n);
> @@ -902,7 +902,7 @@ static void set_ref_cycle_expectations(void)
>   	uint64_t t0, t1, t2, t3;
>   
>   	/* Bit 2 enumerates the availability of reference cycles events. */
> -	if (!pmu.nr_gp_counters || !pmu_gp_counter_is_available(2))
> +	if (!pmu.nr_gp_counters || !pmu_arch_event_is_available(2))
>   		return;
>   
>   	t0 = fenced_rdtsc();
> @@ -992,7 +992,7 @@ int main(int ac, char **av)
>   	printf("PMU version:         %d\n", pmu.version);
>   	printf("GP counters:         %d\n", pmu.nr_gp_counters);
>   	printf("GP counter width:    %d\n", pmu.gp_counter_width);
> -	printf("Mask length:         %d\n", pmu.gp_counter_mask_length);
> +	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
>   	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
>   	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
>   



* Re: [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24]
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24] Sean Christopherson
  2025-06-11  1:35   ` Mi, Dapeng
@ 2025-06-11 12:10   ` Liam Merwick
  1 sibling, 0 replies; 26+ messages in thread
From: Liam Merwick @ 2025-06-11 12:10 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi



On 10/06/2025 20:54, Sean Christopherson wrote:
> Mask the set of available architectural events based on the bit vector
> length to avoid marking reserved/undefined events as available.  Per the
> SDM:
> 
>    EAX Bits 31-24: Length of EBX bit vector to enumerate architectural
>                    performance monitoring events. Architectural event x is
>                    supported if EBX[x]=0 && EAX[31:24]>x.
> 
> Suggested-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>


> ---
>   lib/x86/pmu.c | 3 ++-
>   x86/pmu.c     | 1 +
>   2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index d37c874c..92707698 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -21,7 +21,8 @@ void pmu_init(void)
>   		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
>   
>   		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
> -		pmu.arch_event_available = ~cpuid_10.b;
> +		pmu.arch_event_available = ~cpuid_10.b &
> +					   (BIT(pmu.arch_event_mask_length) - 1);
>   
>   		if (this_cpu_has(X86_FEATURE_PDCM))
>   			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> diff --git a/x86/pmu.c b/x86/pmu.c
> index e79122ed..3987311c 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -993,6 +993,7 @@ int main(int ac, char **av)
>   	printf("GP counters:         %d\n", pmu.nr_gp_counters);
>   	printf("GP counter width:    %d\n", pmu.gp_counter_width);
>   	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
> +	printf("Arch Events (mask):  0x%x\n", pmu.arch_event_available);
>   	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
>   	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
>   



* Re: [kvm-unit-tests PATCH v2 10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
@ 2025-06-11 12:28   ` Liam Merwick
  0 siblings, 0 replies; 26+ messages in thread
From: Liam Merwick @ 2025-06-11 12:28 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, liam.merwick



On 10/06/2025 20:54, Sean Christopherson wrote:
> Skip the AMD SEV test if SEV is unsupported, as KVM-Unit-Tests typically
> don't report failures if a feature is missing.
> 
> Opportunistically use amd_sev_enabled() instead of duplicating all of its
> functionality.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>


> ---
>   x86/amd_sev.c | 51 +++++++--------------------------------------------
>   1 file changed, 7 insertions(+), 44 deletions(-)
> 
> diff --git a/x86/amd_sev.c b/x86/amd_sev.c
> index 7757d4f8..4ec45543 100644
> --- a/x86/amd_sev.c
> +++ b/x86/amd_sev.c
> @@ -15,51 +15,10 @@
>   #include "x86/amd_sev.h"
>   #include "msr.h"
>   
> -#define EXIT_SUCCESS 0
> -#define EXIT_FAILURE 1
> -
>   #define TESTDEV_IO_PORT 0xe0
>   
>   static char st1[] = "abcdefghijklmnop";
>   
> -static int test_sev_activation(void)
> -{
> -	struct cpuid cpuid_out;
> -	u64 msr_out;
> -
> -	printf("SEV activation test is loaded.\n");
> -
> -	/* Tests if CPUID function to check SEV is implemented */
> -	cpuid_out = cpuid(CPUID_FN_LARGEST_EXT_FUNC_NUM);
> -	printf("CPUID Fn8000_0000[EAX]: 0x%08x\n", cpuid_out.a);
> -	if (cpuid_out.a < CPUID_FN_ENCRYPT_MEM_CAPAB) {
> -		printf("CPUID does not support FN%08x\n",
> -		       CPUID_FN_ENCRYPT_MEM_CAPAB);
> -		return EXIT_FAILURE;
> -	}
> -
> -	/* Tests if SEV is supported */
> -	cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
> -	printf("CPUID Fn8000_001F[EAX]: 0x%08x\n", cpuid_out.a);
> -	printf("CPUID Fn8000_001F[EBX]: 0x%08x\n", cpuid_out.b);
> -	if (!(cpuid_out.a & SEV_SUPPORT_MASK)) {
> -		printf("SEV is not supported.\n");
> -		return EXIT_FAILURE;
> -	}
> -	printf("SEV is supported\n");
> -
> -	/* Tests if SEV is enabled */
> -	msr_out = rdmsr(MSR_SEV_STATUS);
> -	printf("MSR C001_0131[EAX]: 0x%08lx\n", msr_out & 0xffffffff);
> -	if (!(msr_out & SEV_ENABLED_MASK)) {
> -		printf("SEV is not enabled.\n");
> -		return EXIT_FAILURE;
> -	}
> -	printf("SEV is enabled\n");
> -
> -	return EXIT_SUCCESS;
> -}
> -
>   static void test_sev_es_activation(void)
>   {
>   	if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
> @@ -88,10 +47,14 @@ static void test_stringio(void)
>   
>   int main(void)
>   {
> -	int rtn;
> -	rtn = test_sev_activation();
> -	report(rtn == EXIT_SUCCESS, "SEV activation test.");
> +	if (!amd_sev_enabled()) {
> +		report_skip("AMD SEV not enabled\n");
> +		goto out;
> +	}
> +
>   	test_sev_es_activation();
>   	test_stringio();
> +
> +out:
>   	return report_summary();
>   }



* Re: [kvm-unit-tests PATCH v2 11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
@ 2025-06-11 12:38   ` Liam Merwick
  0 siblings, 0 replies; 26+ messages in thread
From: Liam Merwick @ 2025-06-11 12:38 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, liam.merwick



On 10/06/2025 20:54, Sean Christopherson wrote:
> Define proper X86_FEATURE_* flags for CPUID 0x8000001F, and use them
> instead of open coding equivalent checks in amd_sev_{,es_}enabled().
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>


> ---
>   lib/x86/amd_sev.c   | 32 +++++---------------------------
>   lib/x86/amd_sev.h   |  3 ---
>   lib/x86/processor.h |  9 +++++++++
>   3 files changed, 14 insertions(+), 30 deletions(-)
> 
> diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
> index 6c0a66ac..b7cefd0f 100644
> --- a/lib/x86/amd_sev.c
> +++ b/lib/x86/amd_sev.c
> @@ -17,31 +17,15 @@ static unsigned short amd_sev_c_bit_pos;
>   
>   bool amd_sev_enabled(void)
>   {
> -	struct cpuid cpuid_out;
>   	static bool sev_enabled;
>   	static bool initialized = false;
>   
>   	/* Check CPUID and MSR for SEV status and store it for future function calls. */
>   	if (!initialized) {
> -		sev_enabled = false;
>   		initialized = true;
>   
> -		/* Test if we can query SEV features */
> -		cpuid_out = cpuid(CPUID_FN_LARGEST_EXT_FUNC_NUM);
> -		if (cpuid_out.a < CPUID_FN_ENCRYPT_MEM_CAPAB) {
> -			return sev_enabled;
> -		}
> -
> -		/* Test if SEV is supported */
> -		cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
> -		if (!(cpuid_out.a & SEV_SUPPORT_MASK)) {
> -			return sev_enabled;
> -		}
> -
> -		/* Test if SEV is enabled */
> -		if (rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK) {
> -			sev_enabled = true;
> -		}
> +		sev_enabled = this_cpu_has(X86_FEATURE_SEV) &&
> +			      rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK;
>   	}
>   
>   	return sev_enabled;
> @@ -72,17 +56,11 @@ bool amd_sev_es_enabled(void)
>   	static bool initialized = false;
>   
>   	if (!initialized) {
> -		sev_es_enabled = false;
>   		initialized = true;
>   
> -		if (!amd_sev_enabled()) {
> -			return sev_es_enabled;
> -		}
> -
> -		/* Test if SEV-ES is enabled */
> -		if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
> -			sev_es_enabled = true;
> -		}
> +		sev_es_enabled = amd_sev_enabled() &&
> +				 this_cpu_has(X86_FEATURE_SEV_ES) &&
> +				 rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK;
>   	}
>   
>   	return sev_es_enabled;
> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
> index ca7216d4..defcda75 100644
> --- a/lib/x86/amd_sev.h
> +++ b/lib/x86/amd_sev.h
> @@ -21,12 +21,9 @@
>   
>   /*
>    * AMD Programmer's Manual Volume 3
> - *   - Section "Function 8000_0000h - Maximum Extended Function Number and Vendor String"
>    *   - Section "Function 8000_001Fh - Encrypted Memory Capabilities"
>    */
> -#define CPUID_FN_LARGEST_EXT_FUNC_NUM 0x80000000
>   #define CPUID_FN_ENCRYPT_MEM_CAPAB    0x8000001f
> -#define SEV_SUPPORT_MASK              0b10
>   
>   /*
>    * AMD Programmer's Manual Volume 2
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index e3b3df89..1adfd027 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -320,6 +320,15 @@ struct x86_cpu_feature {
>   #define X86_FEATURE_PFTHRESHOLD		X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
>   #define X86_FEATURE_VGIF		X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
>   #define X86_FEATURE_VNMI		X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
> +#define X86_FEATURE_SME			X86_CPU_FEATURE(0x8000001F, 0, EAX,  0)
> +#define X86_FEATURE_SEV			X86_CPU_FEATURE(0x8000001F, 0, EAX,  1)
> +#define X86_FEATURE_VM_PAGE_FLUSH	X86_CPU_FEATURE(0x8000001F, 0, EAX,  2)
> +#define X86_FEATURE_SEV_ES		X86_CPU_FEATURE(0x8000001F, 0, EAX,  3)
> +#define X86_FEATURE_SEV_SNP		X86_CPU_FEATURE(0x8000001F, 0, EAX,  4)
> +#define X86_FEATURE_V_TSC_AUX		X86_CPU_FEATURE(0x8000001F, 0, EAX,  9)
> +#define X86_FEATURE_SME_COHERENT	X86_CPU_FEATURE(0x8000001F, 0, EAX, 10)
> +#define X86_FEATURE_DEBUG_SWAP		X86_CPU_FEATURE(0x8000001F, 0, EAX, 14)
> +#define X86_FEATURE_SVSM		X86_CPU_FEATURE(0x8000001F, 0, EAX, 28)
>   #define X86_FEATURE_AMD_PMU_V2		X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
>   
>   /*



* Re: [kvm-unit-tests PATCH v2 12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
@ 2025-06-11 12:58   ` Liam Merwick
  0 siblings, 0 replies; 26+ messages in thread
From: Liam Merwick @ 2025-06-11 12:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, liam.merwick



On 10/06/2025 20:54, Sean Christopherson wrote:
> Use X86_PROPERTY_SEV_C_BIT instead of open coding equivalent functionality,
> and delete the overly-verbose CPUID_FN_ENCRYPT_MEM_CAPAB macro.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Liam Merwick <liam.merwick@oracle.com>


> ---
>   lib/x86/amd_sev.c | 10 +---------
>   lib/x86/amd_sev.h |  6 ------
>   2 files changed, 1 insertion(+), 15 deletions(-)
> 
> diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
> index b7cefd0f..da0e2077 100644
> --- a/lib/x86/amd_sev.c
> +++ b/lib/x86/amd_sev.c
> @@ -33,19 +33,11 @@ bool amd_sev_enabled(void)
>   
>   efi_status_t setup_amd_sev(void)
>   {
> -	struct cpuid cpuid_out;
> -
>   	if (!amd_sev_enabled()) {
>   		return EFI_UNSUPPORTED;
>   	}
>   
> -	/*
> -	 * Extract C-Bit position from ebx[5:0]
> -	 * AMD64 Architecture Programmer's Manual Volume 3
> -	 *   - Section " Function 8000_001Fh - Encrypted Memory Capabilities"
> -	 */
> -	cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
> -	amd_sev_c_bit_pos = (unsigned short)(cpuid_out.b & 0x3f);
> +	amd_sev_c_bit_pos = this_cpu_property(X86_PROPERTY_SEV_C_BIT);
>   
>   	return EFI_SUCCESS;
>   }
> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
> index defcda75..daa33a05 100644
> --- a/lib/x86/amd_sev.h
> +++ b/lib/x86/amd_sev.h
> @@ -19,12 +19,6 @@
>   #include "asm/page.h"
>   #include "efi.h"
>   
> -/*
> - * AMD Programmer's Manual Volume 3
> - *   - Section "Function 8000_001Fh - Encrypted Memory Capabilities"
> - */
> -#define CPUID_FN_ENCRYPT_MEM_CAPAB    0x8000001f
> -
>   /*
>    * AMD Programmer's Manual Volume 2
>    *   - Section "SEV_STATUS MSR"



* Re: [kvm-unit-tests PATCH v2 14/14] x86: Move SEV MSR definitions to msr.h
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 14/14] x86: Move SEV MSR definitions to msr.h Sean Christopherson
@ 2025-06-11 15:41   ` Liam Merwick
  0 siblings, 0 replies; 26+ messages in thread
From: Liam Merwick @ 2025-06-11 15:41 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, liam.merwick



On 10/06/2025 20:54, Sean Christopherson wrote:
> Move the SEV MSR definitions to msr.h so that they're available for non-EFI
> builds.  There is nothing EFI specific about the architectural definitions.
> 
> Opportunistically massage the names to align with existing style.
> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   lib/x86/amd_sev.c |  6 +++---
>   lib/x86/amd_sev.h | 14 --------------
>   lib/x86/msr.h     |  6 ++++++
>   3 files changed, 9 insertions(+), 17 deletions(-)
> 
> diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
> index da0e2077..7c6d2804 100644
> --- a/lib/x86/amd_sev.c
> +++ b/lib/x86/amd_sev.c


Maybe msr.h should be explicitly #included?

either way
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>

> @@ -25,7 +25,7 @@ bool amd_sev_enabled(void)
>   		initialized = true;
>   
>   		sev_enabled = this_cpu_has(X86_FEATURE_SEV) &&
> -			      rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK;
> +			      rdmsr(MSR_SEV_STATUS) & SEV_STATUS_SEV_ENABLED;
>   	}
>   
>   	return sev_enabled;
> @@ -52,7 +52,7 @@ bool amd_sev_es_enabled(void)
>   
>   		sev_es_enabled = amd_sev_enabled() &&
>   				 this_cpu_has(X86_FEATURE_SEV_ES) &&
> -				 rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK;
> +				 rdmsr(MSR_SEV_STATUS) & SEV_STATUS_SEV_ES_ENABLED;
>   	}
>   
>   	return sev_es_enabled;
> @@ -100,7 +100,7 @@ void setup_ghcb_pte(pgd_t *page_table)
>   	pteval_t *pte;
>   
>   	/* Read the current GHCB page addr */
> -	ghcb_addr = rdmsr(SEV_ES_GHCB_MSR_INDEX);
> +	ghcb_addr = rdmsr(MSR_SEV_ES_GHCB);
>   
>   	/* Search Level 1 page table entry for GHCB page */
>   	pte = get_pte_level(page_table, (void *)ghcb_addr, 1);
> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
> index daa33a05..9d587e2d 100644
> --- a/lib/x86/amd_sev.h
> +++ b/lib/x86/amd_sev.h
> @@ -19,23 +19,9 @@
>   #include "asm/page.h"
>   #include "efi.h"
>   
> -/*
> - * AMD Programmer's Manual Volume 2
> - *   - Section "SEV_STATUS MSR"
> - */
> -#define MSR_SEV_STATUS      0xc0010131
> -#define SEV_ENABLED_MASK    0b1
> -#define SEV_ES_ENABLED_MASK 0b10
> -
>   bool amd_sev_enabled(void);
>   efi_status_t setup_amd_sev(void);
>   
> -/*
> - * AMD Programmer's Manual Volume 2
> - *   - Section "GHCB"
> - */
> -#define SEV_ES_GHCB_MSR_INDEX 0xc0010130
> -
>   bool amd_sev_es_enabled(void);
>   efi_status_t setup_amd_sev_es(void);
>   void setup_ghcb_pte(pgd_t *page_table);
> diff --git a/lib/x86/msr.h b/lib/x86/msr.h
> index 658d237f..ccfd6bdd 100644
> --- a/lib/x86/msr.h
> +++ b/lib/x86/msr.h
> @@ -523,4 +523,10 @@
>   #define MSR_VM_IGNNE                    0xc0010115
>   #define MSR_VM_HSAVE_PA                 0xc0010117
>   
> +#define MSR_SEV_STATUS			0xc0010131
> +#define SEV_STATUS_SEV_ENABLED		BIT(0)
> +#define SEV_STATUS_SEV_ES_ENABLED	BIT(1)
> +
> +#define MSR_SEV_ES_GHCB			0xc0010130
> +
>   #endif /* _X86_MSR_H_ */



* Re: [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields Sean Christopherson
  2025-06-11  1:32   ` Mi, Dapeng
  2025-06-11 12:02   ` Liam Merwick
@ 2025-06-13  6:24   ` Sandipan Das
  2 siblings, 0 replies; 26+ messages in thread
From: Sandipan Das @ 2025-06-13  6:24 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, Liam Merwick

On 6/11/2025 1:24 AM, Sean Christopherson wrote:
> Mark all arch events as available on AMD, as AMD PMUs don't provide the
> "not available" CPUID field, and the number of GP counters has nothing to
> do with which architectural events are available/supported.
> 
> Rename gp_counter_mask_length to arch_event_mask_length, and
> pmu_gp_counter_is_available() to pmu_arch_event_is_available(), to
> reflect what the field and helper actually track.
> 
> Cc: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Fixes: b883751a ("x86/pmu: Update testcases to cover AMD PMU")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  lib/x86/pmu.c | 10 +++++-----
>  lib/x86/pmu.h |  8 ++++----
>  x86/pmu.c     |  8 ++++----
>  3 files changed, 13 insertions(+), 13 deletions(-)
> 

Tested-by: Sandipan Das <sandipan.das@amd.com>

> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index d06e9455..d37c874c 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -18,10 +18,10 @@ void pmu_init(void)
>  
>  		pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
>  		pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
> -		pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
> +		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
>  
> -		/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
> -		pmu.gp_counter_available = ~cpuid_10.b;
> +		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
> +		pmu.arch_event_available = ~cpuid_10.b;
>  
>  		if (this_cpu_has(X86_FEATURE_PDCM))
>  			pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> @@ -50,8 +50,8 @@ void pmu_init(void)
>  			pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
>  		}
>  		pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
> -		pmu.gp_counter_mask_length = pmu.nr_gp_counters;
> -		pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
> +		pmu.arch_event_mask_length = 32;
> +		pmu.arch_event_available = -1u;
>  
>  		if (this_cpu_has_perf_global_status()) {
>  			pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
> diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
> index f07fbd93..c7dc68c1 100644
> --- a/lib/x86/pmu.h
> +++ b/lib/x86/pmu.h
> @@ -63,8 +63,8 @@ struct pmu_caps {
>  	u8 fixed_counter_width;
>  	u8 nr_gp_counters;
>  	u8 gp_counter_width;
> -	u8 gp_counter_mask_length;
> -	u32 gp_counter_available;
> +	u8 arch_event_mask_length;
> +	u32 arch_event_available;
>  	u32 msr_gp_counter_base;
>  	u32 msr_gp_event_select_base;
>  
> @@ -110,9 +110,9 @@ static inline bool this_cpu_has_perf_global_status(void)
>  	return pmu.version > 1;
>  }
>  
> -static inline bool pmu_gp_counter_is_available(int i)
> +static inline bool pmu_arch_event_is_available(int i)
>  {
> -	return pmu.gp_counter_available & BIT(i);
> +	return pmu.arch_event_available & BIT(i);
>  }
>  
>  static inline u64 pmu_lbr_version(void)
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 45c6db3c..e79122ed 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -436,7 +436,7 @@ static void check_gp_counters(void)
>  	int i;
>  
>  	for (i = 0; i < gp_events_size; i++)
> -		if (pmu_gp_counter_is_available(i))
> +		if (pmu_arch_event_is_available(i))
>  			check_gp_counter(&gp_events[i]);
>  		else
>  			printf("GP event '%s' is disabled\n",
> @@ -463,7 +463,7 @@ static void check_counters_many(void)
>  	int i, n;
>  
>  	for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
> -		if (!pmu_gp_counter_is_available(i))
> +		if (!pmu_arch_event_is_available(i))
>  			continue;
>  
>  		cnt[n].ctr = MSR_GP_COUNTERx(n);
> @@ -902,7 +902,7 @@ static void set_ref_cycle_expectations(void)
>  	uint64_t t0, t1, t2, t3;
>  
>  	/* Bit 2 enumerates the availability of reference cycles events. */
> -	if (!pmu.nr_gp_counters || !pmu_gp_counter_is_available(2))
> +	if (!pmu.nr_gp_counters || !pmu_arch_event_is_available(2))
>  		return;
>  
>  	t0 = fenced_rdtsc();
> @@ -992,7 +992,7 @@ int main(int ac, char **av)
>  	printf("PMU version:         %d\n", pmu.version);
>  	printf("GP counters:         %d\n", pmu.nr_gp_counters);
>  	printf("GP counter width:    %d\n", pmu.gp_counter_width);
> -	printf("Mask length:         %d\n", pmu.gp_counter_mask_length);
> +	printf("Event Mask length:   %d\n", pmu.arch_event_mask_length);
>  	printf("Fixed counters:      %d\n", pmu.nr_fixed_counters);
>  	printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
>  



* Re: [kvm-unit-tests PATCH v2 08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
@ 2025-06-13  6:25   ` Sandipan Das
  0 siblings, 0 replies; 26+ messages in thread
From: Sandipan Das @ 2025-06-13  6:25 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, Liam Merwick

On 6/11/2025 1:24 AM, Sean Christopherson wrote:
> Use the recently introduced X86_PROPERTY_PMU_* macros to get PMU
> information instead of open coding equivalent functionality.
> 
> No functional change intended.
> 
> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  lib/x86/pmu.c | 18 ++++++++----------
>  1 file changed, 8 insertions(+), 10 deletions(-)
> 

Tested-by: Sandipan Das <sandipan.das@amd.com>

> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index 92707698..fb46b196 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -7,21 +7,19 @@ void pmu_init(void)
>  	pmu.is_intel = is_intel();
>  
>  	if (pmu.is_intel) {
> -		struct cpuid cpuid_10 = cpuid(10);
> -
> -		pmu.version = cpuid_10.a & 0xff;
> +		pmu.version = this_cpu_property(X86_PROPERTY_PMU_VERSION);
>  
>  		if (pmu.version > 1) {
> -			pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
> -			pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
> +			pmu.nr_fixed_counters = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
> +			pmu.fixed_counter_width = this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH);
>  		}
>  
> -		pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
> -		pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
> -		pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
> +		pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
> +		pmu.gp_counter_width = this_cpu_property(X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH);
> +		pmu.arch_event_mask_length = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
>  
>  		/* CPUID.0xA.EBX bit is '1' if an arch event is NOT available. */
> -		pmu.arch_event_available = ~cpuid_10.b &
> +		pmu.arch_event_available = ~this_cpu_property(X86_PROPERTY_PMU_EVENTS_MASK) &
>  					   (BIT(pmu.arch_event_mask_length) - 1);
>  
>  		if (this_cpu_has(X86_FEATURE_PDCM))
> @@ -39,7 +37,7 @@ void pmu_init(void)
>  			/* Performance Monitoring Version 2 Supported */
>  			if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) {
>  				pmu.version = 2;
> -				pmu.nr_gp_counters = cpuid(0x80000022).b & 0xf;
> +				pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_NR_PERFCTR_CORE);
>  			} else {
>  				pmu.nr_gp_counters = AMD64_NUM_COUNTERS_CORE;
>  			}



* Re: [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code
  2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
                   ` (13 preceding siblings ...)
  2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 14/14] x86: Move SEV MSR definitions to msr.h Sean Christopherson
@ 2025-06-25 22:25 ` Sean Christopherson
  14 siblings, 0 replies; 26+ messages in thread
From: Sean Christopherson @ 2025-06-25 22:25 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, Dapeng Mi, Liam Merwick

On Tue, 10 Jun 2025 12:54:01 -0700, Sean Christopherson wrote:
> Copy KVM selftests' X86_PROPERTY_* infrastructure (multi-bit CPUID
> fields), and use the properties to clean up various warts.  The SEV code
> in particular makes things much harder than they need to be.
> 
> Note, this applies on kvm-x86 next.
> 
> v2:
>  - Avoid tabs immediately after #defines. [Dapeng]
>  - Squash the arch events vs. GP counters fixes into one patch. [Dapeng]
>  - Mask available arch events based on enumerated bit vector width. [Dapeng]
>  - Add a missing space in a printf argument. [Liam]
>  - Collect reviews. [Dapeng, Liam]
> 
> [...]

Applied to kvm-x86 next, thanks!

[01/14] x86: Encode X86_FEATURE_* definitions using a structure
        https://github.com/kvm-x86/kvm-unit-tests/commit/361f623cb12e
[02/14] x86: Add X86_PROPERTY_* framework to retrieve CPUID values
        https://github.com/kvm-x86/kvm-unit-tests/commit/77ea6ad194b2
[03/14] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()
        https://github.com/kvm-x86/kvm-unit-tests/commit/9a3266bf023e
[04/14] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}
        https://github.com/kvm-x86/kvm-unit-tests/commit/587db1e85faa
[05/14] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES
        https://github.com/kvm-x86/kvm-unit-tests/commit/25e295a5bb8f
[06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields
        https://github.com/kvm-x86/kvm-unit-tests/commit/6c9e1907ecaa
[07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24]
        https://github.com/kvm-x86/kvm-unit-tests/commit/92dc5f7ab459
[08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
        https://github.com/kvm-x86/kvm-unit-tests/commit/215e67c112bc
[09/14] x86/sev: Use VC_VECTOR from processor.h
        https://github.com/kvm-x86/kvm-unit-tests/commit/5d80d64dc482
[10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled
        https://github.com/kvm-x86/kvm-unit-tests/commit/031a0b02be0a
[11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F
        https://github.com/kvm-x86/kvm-unit-tests/commit/b643ae6207da
[12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location
        https://github.com/kvm-x86/kvm-unit-tests/commit/38147316d147
[13/14] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled
        https://github.com/kvm-x86/kvm-unit-tests/commit/8f6aee89b941
[14/14] x86: Move SEV MSR definitions to msr.h
        https://github.com/kvm-x86/kvm-unit-tests/commit/cebc6ef778a7

--
https://github.com/kvm-x86/kvm-unit-tests/tree/next


end of thread, other threads:[~2025-06-25 22:27 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-06-10 19:54 [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 01/14] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 02/14] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 03/14] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 04/14] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 05/14] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 06/14] x86/pmu: Mark all arch events as available on AMD, and rename fields Sean Christopherson
2025-06-11  1:32   ` Mi, Dapeng
2025-06-11 12:02   ` Liam Merwick
2025-06-13  6:24   ` Sandipan Das
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 07/14] x86/pmu: Mark Intel architectural event available iff X <= CPUID.0xA.EAX[31:24] Sean Christopherson
2025-06-11  1:35   ` Mi, Dapeng
2025-06-11 12:10   ` Liam Merwick
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 08/14] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
2025-06-13  6:25   ` Sandipan Das
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 09/14] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 10/14] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
2025-06-11 12:28   ` Liam Merwick
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 11/14] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
2025-06-11 12:38   ` Liam Merwick
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 12/14] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
2025-06-11 12:58   ` Liam Merwick
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 13/14] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
2025-06-10 19:54 ` [kvm-unit-tests PATCH v2 14/14] x86: Move SEV MSR definitions to msr.h Sean Christopherson
2025-06-11 15:41   ` Liam Merwick
2025-06-25 22:25 ` [kvm-unit-tests PATCH v2 00/14] x86: Add CPUID properties, clean up related code Sean Christopherson
