* [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code
@ 2025-05-29 22:19 Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
` (16 more replies)
0 siblings, 17 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Copy KVM selftests' X86_PROPERTY_* infrastructure (multi-bit CPUID
fields), and use the properties to clean up various warts. The SEV code
in particular makes things much harder than they need to be (I went down
this rabbit hole purely because the stupid MSR_SEV_STATUS definition was
buried behind CONFIG_EFI=y, *sigh*).
The first patch is a common change to add static_assert() as a wrapper
to _Static_assert(). Forcing developers to provide an error message just
leads to useless error messages.
Compile tested on arm64, riscv64, and s390x.
Sean Christopherson (16):
lib: Add and use static_assert() convenience wrappers
x86: Encode X86_FEATURE_* definitions using a structure
x86: Add X86_PROPERTY_* framework to retrieve CPUID values
x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()
x86: Implement get_supported_xcr0() using
X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}
x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES
x86/pmu: Rename pmu_gp_counter_is_available() to
pmu_arch_event_is_available()
x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length
x86/pmu: Mark all arch events as available on AMD
x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
x86/sev: Use VC_VECTOR from processor.h
x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled
x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F
x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location
x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled
x86: Move SEV MSR definitions to msr.h
lib/riscv/asm/isa.h | 4 +-
lib/s390x/asm/arch_def.h | 6 +-
lib/s390x/fault.c | 3 +-
lib/util.h | 3 +
lib/x86/amd_sev.c | 48 ++----
lib/x86/amd_sev.h | 29 ----
lib/x86/msr.h | 6 +
lib/x86/pmu.c | 22 ++-
lib/x86/pmu.h | 8 +-
lib/x86/processor.h | 312 ++++++++++++++++++++++++++++-----------
x86/amd_sev.c | 63 ++------
x86/la57.c | 2 +-
x86/lam.c | 4 +-
x86/pmu.c | 8 +-
x86/xsave.c | 11 +-
15 files changed, 284 insertions(+), 245 deletions(-)
base-commit: 72d110d8286baf1b355301cc8c8bdb42be2663fb
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-30 6:03 ` Andrew Jones
` (2 more replies)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
` (15 subsequent siblings)
16 siblings, 3 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Add static_assert() to wrap _Static_assert() with stringification of the
tested expression as the assert message. In most cases, the failed
expression is far more helpful than a human-generated message (usually
because the developer is forced to add _something_ for the message).
For API consistency, provide a double-underscore variant for specifying a
custom message.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/riscv/asm/isa.h | 4 +++-
lib/s390x/asm/arch_def.h | 6 ++++--
lib/s390x/fault.c | 3 ++-
lib/util.h | 3 +++
x86/lam.c | 4 ++--
5 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/lib/riscv/asm/isa.h b/lib/riscv/asm/isa.h
index df874173..fb3af67d 100644
--- a/lib/riscv/asm/isa.h
+++ b/lib/riscv/asm/isa.h
@@ -1,7 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef _ASMRISCV_ISA_H_
#define _ASMRISCV_ISA_H_
+
#include <bitops.h>
+#include <util.h>
#include <asm/setup.h>
/*
@@ -14,7 +16,7 @@ enum {
ISA_SSTC,
ISA_MAX,
};
-_Static_assert(ISA_MAX <= __riscv_xlen, "Need to increase thread_info.isa");
+__static_assert(ISA_MAX <= __riscv_xlen, "Need to increase thread_info.isa");
static inline bool cpu_has_extension(int cpu, int ext)
{
diff --git a/lib/s390x/asm/arch_def.h b/lib/s390x/asm/arch_def.h
index 03adcd3c..4c11df74 100644
--- a/lib/s390x/asm/arch_def.h
+++ b/lib/s390x/asm/arch_def.h
@@ -8,6 +8,8 @@
#ifndef _ASMS390X_ARCH_DEF_H_
#define _ASMS390X_ARCH_DEF_H_
+#include <util.h>
+
struct stack_frame {
struct stack_frame *back_chain;
uint64_t reserved;
@@ -62,7 +64,7 @@ struct psw {
};
uint64_t addr;
};
-_Static_assert(sizeof(struct psw) == 16, "PSW size");
+static_assert(sizeof(struct psw) == 16);
#define PSW(m, a) ((struct psw){ .mask = (m), .addr = (uint64_t)(a) })
@@ -194,7 +196,7 @@ struct lowcore {
uint8_t pad_0x1400[0x1800 - 0x1400]; /* 0x1400 */
uint8_t pgm_int_tdb[0x1900 - 0x1800]; /* 0x1800 */
} __attribute__ ((__packed__));
-_Static_assert(sizeof(struct lowcore) == 0x1900, "Lowcore size");
+static_assert(sizeof(struct lowcore) == 0x1900);
extern struct lowcore lowcore;
diff --git a/lib/s390x/fault.c b/lib/s390x/fault.c
index a882d5d9..ad5a5f66 100644
--- a/lib/s390x/fault.c
+++ b/lib/s390x/fault.c
@@ -9,6 +9,7 @@
*/
#include <libcflat.h>
#include <bitops.h>
+#include <util.h>
#include <asm/arch_def.h>
#include <asm/page.h>
#include <fault.h>
@@ -40,7 +41,7 @@ static void print_decode_pgm_prot(union teid teid)
"LAP",
"IEP",
};
- _Static_assert(ARRAY_SIZE(prot_str) == PROT_NUM_CODES, "ESOP2 prot codes");
+ static_assert(ARRAY_SIZE(prot_str) == PROT_NUM_CODES);
int prot_code = teid_esop2_prot_code(teid);
printf("Type: %s\n", prot_str[prot_code]);
diff --git a/lib/util.h b/lib/util.h
index f86af6d3..00d0b47d 100644
--- a/lib/util.h
+++ b/lib/util.h
@@ -8,6 +8,9 @@
* This work is licensed under the terms of the GNU LGPL, version 2.
*/
+#define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
+#define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
+
/*
* parse_keyval extracts the integer from a string formatted as
* string=integer. This is useful for passing expected values to
diff --git a/x86/lam.c b/x86/lam.c
index a1c98949..ad91deaf 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -13,6 +13,7 @@
#include "libcflat.h"
#include "processor.h"
#include "desc.h"
+#include <util.h>
#include "vmalloc.h"
#include "alloc_page.h"
#include "vm.h"
@@ -236,8 +237,7 @@ static void test_lam_user(void)
* address for both LAM48 and LAM57.
*/
vaddr = alloc_pages_flags(0, AREA_NORMAL);
- _Static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL,
- "Identical mapping range check");
+ static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL);
/*
* Note, LAM doesn't have a global control bit to turn on/off LAM
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 6:08 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 03/16] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
` (14 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Encode X86_FEATURE_* macros using a new "struct x86_cpu_feature" instead
of manually packing the values into a u64. Using a structure eliminates
open-coded shifts and masks, and is largely self-documenting.
Note, the code and naming scheme are stolen from KVM selftests.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/processor.h | 171 ++++++++++++++++++++++++--------------------
1 file changed, 95 insertions(+), 76 deletions(-)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index a0be04c5..3ac6711d 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -6,6 +6,7 @@
#include "msr.h"
#include <bitops.h>
#include <stdint.h>
+#include <util.h>
#define CANONICAL_48_VAL 0xffffaaaaaaaaaaaaull
#define CANONICAL_57_VAL 0xffaaaaaaaaaaaaaaull
@@ -232,100 +233,118 @@ static inline bool is_intel(void)
return strcmp((char *)name, "GenuineIntel") == 0;
}
-#define CPUID(a, b, c, d) ((((unsigned long long) a) << 32) | (b << 16) | \
- (c << 8) | d)
-
/*
- * Each X86_FEATURE_XXX definition is 64-bit and contains the following
- * CPUID meta-data:
- *
- * [63:32] : input value for EAX
- * [31:16] : input value for ECX
- * [15:8] : output register
- * [7:0] : bit position in output register
+ * Pack the information into a 64-bit value so that each X86_FEATURE_XXX can be
+ * passed by value with no overhead.
*/
+struct x86_cpu_feature {
+ u32 function;
+ u16 index;
+ u8 reg;
+ u8 bit;
+};
+
+#define X86_CPU_FEATURE(fn, idx, gpr, __bit) \
+({ \
+ struct x86_cpu_feature feature = { \
+ .function = fn, \
+ .index = idx, \
+ .reg = gpr, \
+ .bit = __bit, \
+ }; \
+ \
+ static_assert((fn & 0xc0000000) == 0 || \
+ (fn & 0xc0000000) == 0x40000000 || \
+ (fn & 0xc0000000) == 0x80000000 || \
+ (fn & 0xc0000000) == 0xc0000000); \
+ static_assert(idx < BIT(sizeof(feature.index) * BITS_PER_BYTE)); \
+ feature; \
+})
/*
* Basic Leafs, a.k.a. Intel defined
*/
-#define X86_FEATURE_MWAIT (CPUID(0x1, 0, ECX, 3))
-#define X86_FEATURE_VMX (CPUID(0x1, 0, ECX, 5))
-#define X86_FEATURE_PDCM (CPUID(0x1, 0, ECX, 15))
-#define X86_FEATURE_PCID (CPUID(0x1, 0, ECX, 17))
-#define X86_FEATURE_X2APIC (CPUID(0x1, 0, ECX, 21))
-#define X86_FEATURE_MOVBE (CPUID(0x1, 0, ECX, 22))
-#define X86_FEATURE_TSC_DEADLINE_TIMER (CPUID(0x1, 0, ECX, 24))
-#define X86_FEATURE_XSAVE (CPUID(0x1, 0, ECX, 26))
-#define X86_FEATURE_OSXSAVE (CPUID(0x1, 0, ECX, 27))
-#define X86_FEATURE_RDRAND (CPUID(0x1, 0, ECX, 30))
-#define X86_FEATURE_MCE (CPUID(0x1, 0, EDX, 7))
-#define X86_FEATURE_APIC (CPUID(0x1, 0, EDX, 9))
-#define X86_FEATURE_CLFLUSH (CPUID(0x1, 0, EDX, 19))
-#define X86_FEATURE_DS (CPUID(0x1, 0, EDX, 21))
-#define X86_FEATURE_XMM (CPUID(0x1, 0, EDX, 25))
-#define X86_FEATURE_XMM2 (CPUID(0x1, 0, EDX, 26))
-#define X86_FEATURE_TSC_ADJUST (CPUID(0x7, 0, EBX, 1))
-#define X86_FEATURE_HLE (CPUID(0x7, 0, EBX, 4))
-#define X86_FEATURE_SMEP (CPUID(0x7, 0, EBX, 7))
-#define X86_FEATURE_INVPCID (CPUID(0x7, 0, EBX, 10))
-#define X86_FEATURE_RTM (CPUID(0x7, 0, EBX, 11))
-#define X86_FEATURE_SMAP (CPUID(0x7, 0, EBX, 20))
-#define X86_FEATURE_PCOMMIT (CPUID(0x7, 0, EBX, 22))
-#define X86_FEATURE_CLFLUSHOPT (CPUID(0x7, 0, EBX, 23))
-#define X86_FEATURE_CLWB (CPUID(0x7, 0, EBX, 24))
-#define X86_FEATURE_INTEL_PT (CPUID(0x7, 0, EBX, 25))
-#define X86_FEATURE_UMIP (CPUID(0x7, 0, ECX, 2))
-#define X86_FEATURE_PKU (CPUID(0x7, 0, ECX, 3))
-#define X86_FEATURE_LA57 (CPUID(0x7, 0, ECX, 16))
-#define X86_FEATURE_RDPID (CPUID(0x7, 0, ECX, 22))
-#define X86_FEATURE_SHSTK (CPUID(0x7, 0, ECX, 7))
-#define X86_FEATURE_IBT (CPUID(0x7, 0, EDX, 20))
-#define X86_FEATURE_SPEC_CTRL (CPUID(0x7, 0, EDX, 26))
-#define X86_FEATURE_FLUSH_L1D (CPUID(0x7, 0, EDX, 28))
-#define X86_FEATURE_ARCH_CAPABILITIES (CPUID(0x7, 0, EDX, 29))
-#define X86_FEATURE_PKS (CPUID(0x7, 0, ECX, 31))
-#define X86_FEATURE_LAM (CPUID(0x7, 1, EAX, 26))
+#define X86_FEATURE_MWAIT X86_CPU_FEATURE(0x1, 0, ECX, 3)
+#define X86_FEATURE_VMX X86_CPU_FEATURE(0x1, 0, ECX, 5)
+#define X86_FEATURE_PDCM X86_CPU_FEATURE(0x1, 0, ECX, 15)
+#define X86_FEATURE_PCID X86_CPU_FEATURE(0x1, 0, ECX, 17)
+#define X86_FEATURE_X2APIC X86_CPU_FEATURE(0x1, 0, ECX, 21)
+#define X86_FEATURE_MOVBE X86_CPU_FEATURE(0x1, 0, ECX, 22)
+#define X86_FEATURE_TSC_DEADLINE_TIMER X86_CPU_FEATURE(0x1, 0, ECX, 24)
+#define X86_FEATURE_XSAVE X86_CPU_FEATURE(0x1, 0, ECX, 26)
+#define X86_FEATURE_OSXSAVE X86_CPU_FEATURE(0x1, 0, ECX, 27)
+#define X86_FEATURE_RDRAND X86_CPU_FEATURE(0x1, 0, ECX, 30)
+#define X86_FEATURE_MCE X86_CPU_FEATURE(0x1, 0, EDX, 7)
+#define X86_FEATURE_APIC X86_CPU_FEATURE(0x1, 0, EDX, 9)
+#define X86_FEATURE_CLFLUSH X86_CPU_FEATURE(0x1, 0, EDX, 19)
+#define X86_FEATURE_DS X86_CPU_FEATURE(0x1, 0, EDX, 21)
+#define X86_FEATURE_XMM X86_CPU_FEATURE(0x1, 0, EDX, 25)
+#define X86_FEATURE_XMM2 X86_CPU_FEATURE(0x1, 0, EDX, 26)
+#define X86_FEATURE_TSC_ADJUST X86_CPU_FEATURE(0x7, 0, EBX, 1)
+#define X86_FEATURE_HLE X86_CPU_FEATURE(0x7, 0, EBX, 4)
+#define X86_FEATURE_SMEP X86_CPU_FEATURE(0x7, 0, EBX, 7)
+#define X86_FEATURE_INVPCID X86_CPU_FEATURE(0x7, 0, EBX, 10)
+#define X86_FEATURE_RTM X86_CPU_FEATURE(0x7, 0, EBX, 11)
+#define X86_FEATURE_SMAP X86_CPU_FEATURE(0x7, 0, EBX, 20)
+#define X86_FEATURE_PCOMMIT X86_CPU_FEATURE(0x7, 0, EBX, 22)
+#define X86_FEATURE_CLFLUSHOPT X86_CPU_FEATURE(0x7, 0, EBX, 23)
+#define X86_FEATURE_CLWB X86_CPU_FEATURE(0x7, 0, EBX, 24)
+#define X86_FEATURE_INTEL_PT X86_CPU_FEATURE(0x7, 0, EBX, 25)
+#define X86_FEATURE_UMIP X86_CPU_FEATURE(0x7, 0, ECX, 2)
+#define X86_FEATURE_PKU X86_CPU_FEATURE(0x7, 0, ECX, 3)
+#define X86_FEATURE_LA57 X86_CPU_FEATURE(0x7, 0, ECX, 16)
+#define X86_FEATURE_RDPID X86_CPU_FEATURE(0x7, 0, ECX, 22)
+#define X86_FEATURE_SHSTK X86_CPU_FEATURE(0x7, 0, ECX, 7)
+#define X86_FEATURE_IBT X86_CPU_FEATURE(0x7, 0, EDX, 20)
+#define X86_FEATURE_SPEC_CTRL X86_CPU_FEATURE(0x7, 0, EDX, 26)
+#define X86_FEATURE_FLUSH_L1D X86_CPU_FEATURE(0x7, 0, EDX, 28)
+#define X86_FEATURE_ARCH_CAPABILITIES X86_CPU_FEATURE(0x7, 0, EDX, 29)
+#define X86_FEATURE_PKS X86_CPU_FEATURE(0x7, 0, ECX, 31)
+#define X86_FEATURE_LAM X86_CPU_FEATURE(0x7, 1, EAX, 26)
/*
* KVM defined leafs
*/
-#define KVM_FEATURE_ASYNC_PF (CPUID(0x40000001, 0, EAX, 4))
-#define KVM_FEATURE_ASYNC_PF_INT (CPUID(0x40000001, 0, EAX, 14))
+#define KVM_FEATURE_ASYNC_PF X86_CPU_FEATURE(0x40000001, 0, EAX, 4)
+#define KVM_FEATURE_ASYNC_PF_INT X86_CPU_FEATURE(0x40000001, 0, EAX, 14)
/*
* Extended Leafs, a.k.a. AMD defined
*/
-#define X86_FEATURE_SVM (CPUID(0x80000001, 0, ECX, 2))
-#define X86_FEATURE_PERFCTR_CORE (CPUID(0x80000001, 0, ECX, 23))
-#define X86_FEATURE_NX (CPUID(0x80000001, 0, EDX, 20))
-#define X86_FEATURE_GBPAGES (CPUID(0x80000001, 0, EDX, 26))
-#define X86_FEATURE_RDTSCP (CPUID(0x80000001, 0, EDX, 27))
-#define X86_FEATURE_LM (CPUID(0x80000001, 0, EDX, 29))
-#define X86_FEATURE_RDPRU (CPUID(0x80000008, 0, EBX, 4))
-#define X86_FEATURE_AMD_IBPB (CPUID(0x80000008, 0, EBX, 12))
-#define X86_FEATURE_NPT (CPUID(0x8000000A, 0, EDX, 0))
-#define X86_FEATURE_LBRV (CPUID(0x8000000A, 0, EDX, 1))
-#define X86_FEATURE_NRIPS (CPUID(0x8000000A, 0, EDX, 3))
-#define X86_FEATURE_TSCRATEMSR (CPUID(0x8000000A, 0, EDX, 4))
-#define X86_FEATURE_PAUSEFILTER (CPUID(0x8000000A, 0, EDX, 10))
-#define X86_FEATURE_PFTHRESHOLD (CPUID(0x8000000A, 0, EDX, 12))
-#define X86_FEATURE_VGIF (CPUID(0x8000000A, 0, EDX, 16))
-#define X86_FEATURE_VNMI (CPUID(0x8000000A, 0, EDX, 25))
-#define X86_FEATURE_AMD_PMU_V2 (CPUID(0x80000022, 0, EAX, 0))
+#define X86_FEATURE_SVM X86_CPU_FEATURE(0x80000001, 0, ECX, 2)
+#define X86_FEATURE_PERFCTR_CORE X86_CPU_FEATURE(0x80000001, 0, ECX, 23)
+#define X86_FEATURE_NX X86_CPU_FEATURE(0x80000001, 0, EDX, 20)
+#define X86_FEATURE_GBPAGES X86_CPU_FEATURE(0x80000001, 0, EDX, 26)
+#define X86_FEATURE_RDTSCP X86_CPU_FEATURE(0x80000001, 0, EDX, 27)
+#define X86_FEATURE_LM X86_CPU_FEATURE(0x80000001, 0, EDX, 29)
+#define X86_FEATURE_RDPRU X86_CPU_FEATURE(0x80000008, 0, EBX, 4)
+#define X86_FEATURE_AMD_IBPB X86_CPU_FEATURE(0x80000008, 0, EBX, 12)
+#define X86_FEATURE_NPT X86_CPU_FEATURE(0x8000000A, 0, EDX, 0)
+#define X86_FEATURE_LBRV X86_CPU_FEATURE(0x8000000A, 0, EDX, 1)
+#define X86_FEATURE_NRIPS X86_CPU_FEATURE(0x8000000A, 0, EDX, 3)
+#define X86_FEATURE_TSCRATEMSR X86_CPU_FEATURE(0x8000000A, 0, EDX, 4)
+#define X86_FEATURE_PAUSEFILTER X86_CPU_FEATURE(0x8000000A, 0, EDX, 10)
+#define X86_FEATURE_PFTHRESHOLD X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
+#define X86_FEATURE_VGIF X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
+#define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
+#define X86_FEATURE_AMD_PMU_V2 X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
-static inline bool this_cpu_has(u64 feature)
+static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
{
- u32 input_eax = feature >> 32;
- u32 input_ecx = (feature >> 16) & 0xffff;
- u32 output_reg = (feature >> 8) & 0xff;
- u8 bit = feature & 0xff;
- struct cpuid c;
- u32 *tmp;
+ union {
+ struct cpuid cpuid;
+ u32 gprs[4];
+ } c;
- c = cpuid_indexed(input_eax, input_ecx);
- tmp = (u32 *)&c;
+ c.cpuid = cpuid_indexed(function, index);
- return ((*(tmp + (output_reg % 32))) & (1 << bit));
+ return (c.gprs[reg] & GENMASK(hi, lo)) >> lo;
+}
+
+static inline bool this_cpu_has(struct x86_cpu_feature feature)
+{
+ return __this_cpu_has(feature.function, feature.index,
+ feature.reg, feature.bit, feature.bit);
}
struct far_pointer32 {
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 03/16] x86: Add X86_PROPERTY_* framework to retrieve CPUID values
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 6:14 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 04/16] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
` (13 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Introduce X86_PROPERTY_* to allow retrieving values/properties from CPUID
leafs, e.g. MAXPHYADDR from CPUID.0x80000008. Use the same core code as
X86_FEATURE_*; the primary difference is that properties are multi-bit
values, whereas features enumerate a single bit.
Add this_cpu_has_p() to allow querying whether or not a property exists
based on the maximum leaf associated with the property, e.g. MAXPHYADDR
doesn't exist if the max leaf for 0x8000_xxxx is less than 0x8000_0008.
Use the new property infrastructure in cpuid_maxphyaddr() to prove that
the code works as intended. Future patches will convert additional code.
Note, the code, nomenclature, changelog, etc. are all stolen from KVM
selftests.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/processor.h | 109 +++++++++++++++++++++++++++++++++++++++++---
1 file changed, 102 insertions(+), 7 deletions(-)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 3ac6711d..6b61a38b 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -218,13 +218,6 @@ static inline struct cpuid cpuid(u32 function)
return cpuid_indexed(function, 0);
}
-static inline u8 cpuid_maxphyaddr(void)
-{
- if (raw_cpuid(0x80000000, 0).a < 0x80000008)
- return 36;
- return raw_cpuid(0x80000008, 0).a & 0xff;
-}
-
static inline bool is_intel(void)
{
struct cpuid c = cpuid(0);
@@ -329,6 +322,74 @@ struct x86_cpu_feature {
#define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
#define X86_FEATURE_AMD_PMU_V2 X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
+/*
+ * Same idea as X86_FEATURE_XXX, but X86_PROPERTY_XXX retrieves a multi-bit
+ * value/property as opposed to a single-bit feature. Again, pack the info
+ * into a 64-bit value to pass by value with no overhead on 64-bit builds.
+ */
+struct x86_cpu_property {
+ u32 function;
+ u8 index;
+ u8 reg;
+ u8 lo_bit;
+ u8 hi_bit;
+};
+#define X86_CPU_PROPERTY(fn, idx, gpr, low_bit, high_bit) \
+({ \
+ struct x86_cpu_property property = { \
+ .function = fn, \
+ .index = idx, \
+ .reg = gpr, \
+ .lo_bit = low_bit, \
+ .hi_bit = high_bit, \
+ }; \
+ \
+ static_assert(low_bit < high_bit); \
+ static_assert((fn & 0xc0000000) == 0 || \
+ (fn & 0xc0000000) == 0x40000000 || \
+ (fn & 0xc0000000) == 0x80000000 || \
+ (fn & 0xc0000000) == 0xc0000000); \
+ static_assert(idx < BIT(sizeof(property.index) * BITS_PER_BYTE)); \
+ property; \
+})
+
+#define X86_PROPERTY_MAX_BASIC_LEAF X86_CPU_PROPERTY(0, 0, EAX, 0, 31)
+#define X86_PROPERTY_PMU_VERSION X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
+#define X86_PROPERTY_PMU_NR_GP_COUNTERS X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
+#define X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH X86_CPU_PROPERTY(0xa, 0, EAX, 16, 23)
+#define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
+#define X86_PROPERTY_PMU_EVENTS_MASK X86_CPU_PROPERTY(0xa, 0, EBX, 0, 7)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
+#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
+#define X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH X86_CPU_PROPERTY(0xa, 0, EDX, 5, 12)
+
+#define X86_PROPERTY_SUPPORTED_XCR0_LO X86_CPU_PROPERTY(0xd, 0, EAX, 0, 31)
+#define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0 X86_CPU_PROPERTY(0xd, 0, EBX, 0, 31)
+#define X86_PROPERTY_XSTATE_MAX_SIZE X86_CPU_PROPERTY(0xd, 0, ECX, 0, 31)
+#define X86_PROPERTY_SUPPORTED_XCR0_HI X86_CPU_PROPERTY(0xd, 0, EDX, 0, 31)
+
+#define X86_PROPERTY_XSTATE_TILE_SIZE X86_CPU_PROPERTY(0xd, 18, EAX, 0, 31)
+#define X86_PROPERTY_XSTATE_TILE_OFFSET X86_CPU_PROPERTY(0xd, 18, EBX, 0, 31)
+#define X86_PROPERTY_AMX_MAX_PALETTE_TABLES X86_CPU_PROPERTY(0x1d, 0, EAX, 0, 31)
+#define X86_PROPERTY_AMX_TOTAL_TILE_BYTES X86_CPU_PROPERTY(0x1d, 1, EAX, 0, 15)
+#define X86_PROPERTY_AMX_BYTES_PER_TILE X86_CPU_PROPERTY(0x1d, 1, EAX, 16, 31)
+#define X86_PROPERTY_AMX_BYTES_PER_ROW X86_CPU_PROPERTY(0x1d, 1, EBX, 0, 15)
+#define X86_PROPERTY_AMX_NR_TILE_REGS X86_CPU_PROPERTY(0x1d, 1, EBX, 16, 31)
+#define X86_PROPERTY_AMX_MAX_ROWS X86_CPU_PROPERTY(0x1d, 1, ECX, 0, 15)
+
+#define X86_PROPERTY_MAX_KVM_LEAF X86_CPU_PROPERTY(0x40000000, 0, EAX, 0, 31)
+
+#define X86_PROPERTY_MAX_EXT_LEAF X86_CPU_PROPERTY(0x80000000, 0, EAX, 0, 31)
+#define X86_PROPERTY_MAX_PHY_ADDR X86_CPU_PROPERTY(0x80000008, 0, EAX, 0, 7)
+#define X86_PROPERTY_MAX_VIRT_ADDR X86_CPU_PROPERTY(0x80000008, 0, EAX, 8, 15)
+#define X86_PROPERTY_GUEST_MAX_PHY_ADDR X86_CPU_PROPERTY(0x80000008, 0, EAX, 16, 23)
+#define X86_PROPERTY_SEV_C_BIT X86_CPU_PROPERTY(0x8000001F, 0, EBX, 0, 5)
+#define X86_PROPERTY_PHYS_ADDR_REDUCTION X86_CPU_PROPERTY(0x8000001F, 0, EBX, 6, 11)
+#define X86_PROPERTY_NR_PERFCTR_CORE X86_CPU_PROPERTY(0x80000022, 0, EBX, 0, 3)
+#define X86_PROPERTY_NR_PERFCTR_NB X86_CPU_PROPERTY(0x80000022, 0, EBX, 10, 15)
+
+#define X86_PROPERTY_MAX_CENTAUR_LEAF X86_CPU_PROPERTY(0xC0000000, 0, EAX, 0, 31)
+
static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
{
union {
@@ -347,6 +408,40 @@ static inline bool this_cpu_has(struct x86_cpu_feature feature)
feature.reg, feature.bit, feature.bit);
}
+static inline uint32_t this_cpu_property(struct x86_cpu_property property)
+{
+ return __this_cpu_has(property.function, property.index,
+ property.reg, property.lo_bit, property.hi_bit);
+}
+
+static __always_inline bool this_cpu_has_p(struct x86_cpu_property property)
+{
+ uint32_t max_leaf;
+
+ switch (property.function & 0xc0000000) {
+ case 0:
+ max_leaf = this_cpu_property(X86_PROPERTY_MAX_BASIC_LEAF);
+ break;
+ case 0x40000000:
+ max_leaf = this_cpu_property(X86_PROPERTY_MAX_KVM_LEAF);
+ break;
+ case 0x80000000:
+ max_leaf = this_cpu_property(X86_PROPERTY_MAX_EXT_LEAF);
+ break;
+ case 0xc0000000:
+ max_leaf = this_cpu_property(X86_PROPERTY_MAX_CENTAUR_LEAF);
+ }
+ return max_leaf >= property.function;
+}
+
+static inline u8 cpuid_maxphyaddr(void)
+{
+ if (!this_cpu_has_p(X86_PROPERTY_MAX_PHY_ADDR))
+ return 36;
+
+ return this_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);
+}
+
struct far_pointer32 {
u32 offset;
u16 selector;
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 04/16] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (2 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 03/16] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 6:16 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 05/16] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
` (12 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() instead of open coding a
*very* rough equivalent. Default to a maximum virtual address width of
48 bits instead of 64 bits to better match real x86 CPUs (and the Intel
and AMD architectures).
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/processor.h | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6b61a38b..8c6f28a3 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -1022,9 +1022,14 @@ static inline void write_pkru(u32 pkru)
static inline bool is_canonical(u64 addr)
{
- int va_width = (raw_cpuid(0x80000008, 0).a & 0xff00) >> 8;
- int shift_amt = 64 - va_width;
+ int va_width, shift_amt;
+ if (this_cpu_has_p(X86_PROPERTY_MAX_VIRT_ADDR))
+ va_width = this_cpu_property(X86_PROPERTY_MAX_VIRT_ADDR);
+ else
+ va_width = 48;
+
+ shift_amt = 64 - va_width;
return (s64)(addr << shift_amt) >> shift_amt == addr;
}
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 05/16] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (3 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 04/16] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 6:18 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 06/16] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
` (11 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Use X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} to implement get_supported_xcr0().
Opportunistically rename the helper and move it to processor.h.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/processor.h | 9 +++++++++
x86/xsave.c | 11 +----------
2 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 8c6f28a3..cbfd2ee1 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -442,6 +442,15 @@ static inline u8 cpuid_maxphyaddr(void)
return this_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);
}
+static inline u64 this_cpu_supported_xcr0(void)
+{
+ if (!this_cpu_has_p(X86_PROPERTY_SUPPORTED_XCR0_LO))
+ return 0;
+
+ return (u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_LO) |
+ ((u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
+}
+
struct far_pointer32 {
u32 offset;
u16 selector;
diff --git a/x86/xsave.c b/x86/xsave.c
index 5d80f245..cc8e3a0a 100644
--- a/x86/xsave.c
+++ b/x86/xsave.c
@@ -8,15 +8,6 @@
#define uint64_t unsigned long long
#endif
-static uint64_t get_supported_xcr0(void)
-{
- struct cpuid r;
- r = cpuid_indexed(0xd, 0);
- printf("eax %x, ebx %x, ecx %x, edx %x\n",
- r.a, r.b, r.c, r.d);
- return r.a + ((u64)r.d << 32);
-}
-
#define XCR_XFEATURE_ENABLED_MASK 0x00000000
#define XCR_XFEATURE_ILLEGAL_MASK 0x00000010
@@ -33,7 +24,7 @@ static void test_xsave(void)
printf("Legal instruction testing:\n");
- supported_xcr0 = get_supported_xcr0();
+ supported_xcr0 = this_cpu_supported_xcr0();
printf("Supported XCR0 bits: %#lx\n", supported_xcr0);
test_bits = XSTATE_FP | XSTATE_SSE;
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 06/16] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (4 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 05/16] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 6:21 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available() Sean Christopherson
` (10 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Add a definition for X86_PROPERTY_INTEL_PT_NR_RANGES, and use it instead
of open coding equivalent logic in the LA57 testcase that verifies the
canonical address behavior of PT MSRs.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/processor.h | 3 +++
x86/la57.c | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index cbfd2ee1..3b02a966 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -370,6 +370,9 @@ struct x86_cpu_property {
#define X86_PROPERTY_XSTATE_TILE_SIZE X86_CPU_PROPERTY(0xd, 18, EAX, 0, 31)
#define X86_PROPERTY_XSTATE_TILE_OFFSET X86_CPU_PROPERTY(0xd, 18, EBX, 0, 31)
+
+#define X86_PROPERTY_INTEL_PT_NR_RANGES X86_CPU_PROPERTY(0x14, 1, EAX, 0, 2)
+
#define X86_PROPERTY_AMX_MAX_PALETTE_TABLES X86_CPU_PROPERTY(0x1d, 0, EAX, 0, 31)
#define X86_PROPERTY_AMX_TOTAL_TILE_BYTES X86_CPU_PROPERTY(0x1d, 1, EAX, 0, 15)
#define X86_PROPERTY_AMX_BYTES_PER_TILE X86_CPU_PROPERTY(0x1d, 1, EAX, 16, 31)
diff --git a/x86/la57.c b/x86/la57.c
index 41764110..1161a5bf 100644
--- a/x86/la57.c
+++ b/x86/la57.c
@@ -288,7 +288,7 @@ static void __test_canonical_checks(bool force_emulation)
/* PT filter ranges */
if (this_cpu_has(X86_FEATURE_INTEL_PT)) {
- int n_ranges = cpuid_indexed(0x14, 0x1).a & 0x7;
+ int n_ranges = this_cpu_property(X86_PROPERTY_INTEL_PT_NR_RANGES);
int i;
for (i = 0 ; i < n_ranges ; i++) {
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available()
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (5 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 06/16] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 7:09 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 08/16] x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length Sean Christopherson
` (9 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available() to
reflect what the field and helper actually track. The availability of
architectural events has nothing to do with the GP counters themselves.
No functional change intended.
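A minimal sketch of the semantics behind the rename (names and wrapping are hypothetical, not the exact kvm-unit-tests code): CPUID.0xA.EBX sets a bit when an architectural event is NOT available, so the stored mask is the complement of EBX, and the helper simply tests one bit of that mask.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch only: CPUID.0xA.EBX has a bit set for each architectural
 * event that is NOT available, so the cached availability mask is
 * the complement of EBX.
 */
static inline uint32_t arch_event_available_mask(uint32_t cpuid_0xa_ebx)
{
	return ~cpuid_0xa_ebx;
}

/* True if architectural event 'event' is enumerated as available. */
static inline int pmu_arch_event_is_available(uint32_t mask, int event)
{
	return !!(mask & (1u << event));
}
```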
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/pmu.c | 4 ++--
lib/x86/pmu.h | 6 +++---
x86/pmu.c | 6 +++---
3 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index d06e9455..599168ac 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -21,7 +21,7 @@ void pmu_init(void)
pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
- pmu.gp_counter_available = ~cpuid_10.b;
+ pmu.arch_event_available = ~cpuid_10.b;
if (this_cpu_has(X86_FEATURE_PDCM))
pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
@@ -51,7 +51,7 @@ void pmu_init(void)
}
pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
pmu.gp_counter_mask_length = pmu.nr_gp_counters;
- pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
+ pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
if (this_cpu_has_perf_global_status()) {
pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
index f07fbd93..d0ad280a 100644
--- a/lib/x86/pmu.h
+++ b/lib/x86/pmu.h
@@ -64,7 +64,7 @@ struct pmu_caps {
u8 nr_gp_counters;
u8 gp_counter_width;
u8 gp_counter_mask_length;
- u32 gp_counter_available;
+ u32 arch_event_available;
u32 msr_gp_counter_base;
u32 msr_gp_event_select_base;
@@ -110,9 +110,9 @@ static inline bool this_cpu_has_perf_global_status(void)
return pmu.version > 1;
}
-static inline bool pmu_gp_counter_is_available(int i)
+static inline bool pmu_arch_event_is_available(int i)
{
- return pmu.gp_counter_available & BIT(i);
+ return pmu.arch_event_available & BIT(i);
}
static inline u64 pmu_lbr_version(void)
diff --git a/x86/pmu.c b/x86/pmu.c
index 8cf26b12..0ce34433 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -436,7 +436,7 @@ static void check_gp_counters(void)
int i;
for (i = 0; i < gp_events_size; i++)
- if (pmu_gp_counter_is_available(i))
+ if (pmu_arch_event_is_available(i))
check_gp_counter(&gp_events[i]);
else
printf("GP event '%s' is disabled\n",
@@ -463,7 +463,7 @@ static void check_counters_many(void)
int i, n;
for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
- if (!pmu_gp_counter_is_available(i))
+ if (!pmu_arch_event_is_available(i))
continue;
cnt[n].ctr = MSR_GP_COUNTERx(n);
@@ -902,7 +902,7 @@ static void set_ref_cycle_expectations(void)
uint64_t t0, t1, t2, t3;
/* Bit 2 enumerates the availability of reference cycles events. */
- if (!pmu.nr_gp_counters || !pmu_gp_counter_is_available(2))
+ if (!pmu.nr_gp_counters || !pmu_arch_event_is_available(2))
return;
if (this_cpu_has_perf_global_ctrl())
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 08/16] x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (6 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available() Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 7:22 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 09/16] x86/pmu: Mark all arch events as available on AMD Sean Christopherson
` (8 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Rename gp_counter_mask_length to arch_event_mask_length to reflect what
the field actually tracks. The availability of architectural events has
nothing to do with the GP counters themselves.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/pmu.c | 4 ++--
lib/x86/pmu.h | 2 +-
x86/pmu.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 599168ac..b97e2c4a 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -18,7 +18,7 @@ void pmu_init(void)
pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
- pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
+ pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
pmu.arch_event_available = ~cpuid_10.b;
@@ -50,7 +50,7 @@ void pmu_init(void)
pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
}
pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
- pmu.gp_counter_mask_length = pmu.nr_gp_counters;
+ pmu.arch_event_mask_length = pmu.nr_gp_counters;
pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
if (this_cpu_has_perf_global_status()) {
diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
index d0ad280a..c7dc68c1 100644
--- a/lib/x86/pmu.h
+++ b/lib/x86/pmu.h
@@ -63,7 +63,7 @@ struct pmu_caps {
u8 fixed_counter_width;
u8 nr_gp_counters;
u8 gp_counter_width;
- u8 gp_counter_mask_length;
+ u8 arch_event_mask_length;
u32 arch_event_available;
u32 msr_gp_counter_base;
u32 msr_gp_event_select_base;
diff --git a/x86/pmu.c b/x86/pmu.c
index 0ce34433..63eae3db 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -992,7 +992,7 @@ int main(int ac, char **av)
printf("PMU version: %d\n", pmu.version);
printf("GP counters: %d\n", pmu.nr_gp_counters);
printf("GP counter width: %d\n", pmu.gp_counter_width);
- printf("Mask length: %d\n", pmu.gp_counter_mask_length);
+ printf("Event Mask length: %d\n", pmu.arch_event_mask_length);
printf("Fixed counters: %d\n", pmu.nr_fixed_counters);
printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 09/16] x86/pmu: Mark all arch events as available on AMD
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (7 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 08/16] x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 10/16] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
` (7 subsequent siblings)
16 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Mark all arch events as available on AMD, as AMD PMUs don't provide the
"not available" CPUID field, and the number of GP counters has nothing to
do with which events are supported.
Fixes: b883751a ("x86/pmu: Update testcases to cover AMD PMU")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/pmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index b97e2c4a..44449372 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -50,8 +50,8 @@ void pmu_init(void)
pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
}
pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
- pmu.arch_event_mask_length = pmu.nr_gp_counters;
- pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
+ pmu.arch_event_mask_length = 32;
+ pmu.arch_event_available = -1u;
if (this_cpu_has_perf_global_status()) {
pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 10/16] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (8 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 09/16] x86/pmu: Mark all arch events as available on AMD Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 7:29 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 11/16] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
` (6 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Use the recently introduced X86_PROPERTY_PMU_* macros to get PMU
information instead of open coding equivalent functionality.
No functional change intended.
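The open-coded shifts being replaced are all instances of the same pattern; a sketch of the bitfield extraction that an X86_PROPERTY_* lookup performs (the helper name and signature here are assumptions for illustration): a property names a CPUID leaf/register plus an inclusive bit range, and the value is the shifted, masked field.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch only: extract bits [hi:lo] (inclusive) of a CPUID output
 * register, as a property lookup would. 1ull avoids UB when the
 * range covers all 32 bits.
 */
static inline uint32_t cpuid_field(uint32_t reg, uint8_t lo, uint8_t hi)
{
	return (reg >> lo) & (uint32_t)((1ull << (hi - lo + 1)) - 1);
}
```

For example, `pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff` corresponds to extracting bits [15:8] of CPUID.0xA.EAX.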
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/pmu.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
index 44449372..c7f7da14 100644
--- a/lib/x86/pmu.c
+++ b/lib/x86/pmu.c
@@ -7,21 +7,19 @@ void pmu_init(void)
pmu.is_intel = is_intel();
if (pmu.is_intel) {
- struct cpuid cpuid_10 = cpuid(10);
-
- pmu.version = cpuid_10.a & 0xff;
+ pmu.version = this_cpu_property(X86_PROPERTY_PMU_VERSION);
if (pmu.version > 1) {
- pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
- pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
+ pmu.nr_fixed_counters = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ pmu.fixed_counter_width = this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH);
}
- pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
- pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
- pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
+ pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+ pmu.gp_counter_width = this_cpu_property(X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH);
+ pmu.arch_event_mask_length = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
/* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
- pmu.arch_event_available = ~cpuid_10.b;
+ pmu.arch_event_available = ~this_cpu_property(X86_PROPERTY_PMU_EVENTS_MASK);
if (this_cpu_has(X86_FEATURE_PDCM))
pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
@@ -38,7 +36,7 @@ void pmu_init(void)
/* Performance Monitoring Version 2 Supported */
if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) {
pmu.version = 2;
- pmu.nr_gp_counters = cpuid(0x80000022).b & 0xf;
+ pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_NR_PERFCTR_CORE);
} else {
pmu.nr_gp_counters = AMD64_NUM_COUNTERS_CORE;
}
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 11/16] x86/sev: Use VC_VECTOR from processor.h
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (9 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 10/16] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 7:25 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 12/16] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
` (5 subsequent siblings)
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Use VC_VECTOR (defined in processor.h along with all other known vectors)
and drop the one-off SEV_ES_VC_HANDLER_VECTOR macro.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/amd_sev.c | 4 ++--
lib/x86/amd_sev.h | 6 ------
2 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 66722141..6c0a66ac 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -111,9 +111,9 @@ efi_status_t setup_amd_sev_es(void)
*/
sidt(&idtr);
idt = (idt_entry_t *)idtr.base;
- vc_handler_idt = idt[SEV_ES_VC_HANDLER_VECTOR];
+ vc_handler_idt = idt[VC_VECTOR];
vc_handler_idt.selector = KERNEL_CS;
- boot_idt[SEV_ES_VC_HANDLER_VECTOR] = vc_handler_idt;
+ boot_idt[VC_VECTOR] = vc_handler_idt;
return EFI_SUCCESS;
}
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index ed6e3385..ca7216d4 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -39,12 +39,6 @@
bool amd_sev_enabled(void);
efi_status_t setup_amd_sev(void);
-/*
- * AMD Programmer's Manual Volume 2
- * - Section "#VC Exception"
- */
-#define SEV_ES_VC_HANDLER_VECTOR 29
-
/*
* AMD Programmer's Manual Volume 2
* - Section "GHCB"
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 12/16] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (10 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 11/16] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 13/16] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
` (4 subsequent siblings)
16 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Skip the AMD SEV test if SEV is unsupported, as KVM-Unit-Tests typically
don't report failures if a feature is missing.
Opportunistically use amd_sev_enabled() instead of duplicating all of its
functionality.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
x86/amd_sev.c | 51 +++++++--------------------------------------------
1 file changed, 7 insertions(+), 44 deletions(-)
diff --git a/x86/amd_sev.c b/x86/amd_sev.c
index 7757d4f8..4ec45543 100644
--- a/x86/amd_sev.c
+++ b/x86/amd_sev.c
@@ -15,51 +15,10 @@
#include "x86/amd_sev.h"
#include "msr.h"
-#define EXIT_SUCCESS 0
-#define EXIT_FAILURE 1
-
#define TESTDEV_IO_PORT 0xe0
static char st1[] = "abcdefghijklmnop";
-static int test_sev_activation(void)
-{
- struct cpuid cpuid_out;
- u64 msr_out;
-
- printf("SEV activation test is loaded.\n");
-
- /* Tests if CPUID function to check SEV is implemented */
- cpuid_out = cpuid(CPUID_FN_LARGEST_EXT_FUNC_NUM);
- printf("CPUID Fn8000_0000[EAX]: 0x%08x\n", cpuid_out.a);
- if (cpuid_out.a < CPUID_FN_ENCRYPT_MEM_CAPAB) {
- printf("CPUID does not support FN%08x\n",
- CPUID_FN_ENCRYPT_MEM_CAPAB);
- return EXIT_FAILURE;
- }
-
- /* Tests if SEV is supported */
- cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
- printf("CPUID Fn8000_001F[EAX]: 0x%08x\n", cpuid_out.a);
- printf("CPUID Fn8000_001F[EBX]: 0x%08x\n", cpuid_out.b);
- if (!(cpuid_out.a & SEV_SUPPORT_MASK)) {
- printf("SEV is not supported.\n");
- return EXIT_FAILURE;
- }
- printf("SEV is supported\n");
-
- /* Tests if SEV is enabled */
- msr_out = rdmsr(MSR_SEV_STATUS);
- printf("MSR C001_0131[EAX]: 0x%08lx\n", msr_out & 0xffffffff);
- if (!(msr_out & SEV_ENABLED_MASK)) {
- printf("SEV is not enabled.\n");
- return EXIT_FAILURE;
- }
- printf("SEV is enabled\n");
-
- return EXIT_SUCCESS;
-}
-
static void test_sev_es_activation(void)
{
if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
@@ -88,10 +47,14 @@ static void test_stringio(void)
int main(void)
{
- int rtn;
- rtn = test_sev_activation();
- report(rtn == EXIT_SUCCESS, "SEV activation test.");
+ if (!amd_sev_enabled()) {
+ report_skip("AMD SEV not enabled\n");
+ goto out;
+ }
+
test_sev_es_activation();
test_stringio();
+
+out:
return report_summary();
}
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 13/16] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (11 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 12/16] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 14/16] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
` (3 subsequent siblings)
16 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Define proper X86_FEATURE_* flags for CPUID 0x8000001F, and use them
instead of open coding equivalent checks in amd_sev_{,es_}enabled().
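Conceptually, an X86_FEATURE_* flag is a single-bit property: a CPUID leaf/index/register plus one bit position. A hedged sketch of the check (helper name assumed, not the actual this_cpu_has() implementation):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch only: a feature check is a single-bit test on the chosen
 * CPUID output register, e.g. X86_FEATURE_SEV is CPUID
 * 0x8000001F.EAX bit 1.
 */
static inline int cpu_feature_set(uint32_t reg, uint8_t bit)
{
	return !!(reg & (1u << bit));
}
```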
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/amd_sev.c | 32 +++++---------------------------
lib/x86/amd_sev.h | 3 ---
lib/x86/processor.h | 9 +++++++++
3 files changed, 14 insertions(+), 30 deletions(-)
diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 6c0a66ac..4e89c84c 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -17,31 +17,15 @@ static unsigned short amd_sev_c_bit_pos;
bool amd_sev_enabled(void)
{
- struct cpuid cpuid_out;
static bool sev_enabled;
static bool initialized = false;
/* Check CPUID and MSR for SEV status and store it for future function calls. */
if (!initialized) {
- sev_enabled = false;
initialized = true;
- /* Test if we can query SEV features */
- cpuid_out = cpuid(CPUID_FN_LARGEST_EXT_FUNC_NUM);
- if (cpuid_out.a < CPUID_FN_ENCRYPT_MEM_CAPAB) {
- return sev_enabled;
- }
-
- /* Test if SEV is supported */
- cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
- if (!(cpuid_out.a & SEV_SUPPORT_MASK)) {
- return sev_enabled;
- }
-
- /* Test if SEV is enabled */
- if (rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK) {
- sev_enabled = true;
- }
+ sev_enabled = this_cpu_has(X86_FEATURE_SEV)
+ rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK);
}
return sev_enabled;
@@ -72,17 +56,11 @@ bool amd_sev_es_enabled(void)
static bool initialized = false;
if (!initialized) {
- sev_es_enabled = false;
initialized = true;
- if (!amd_sev_enabled()) {
- return sev_es_enabled;
- }
-
- /* Test if SEV-ES is enabled */
- if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
- sev_es_enabled = true;
- }
+ sev_es_enabled = amd_sev_enabled() &&
+ this_cpu_has(X86_FEATURE_SEV_ES) &&
+ rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK;
}
return sev_es_enabled;
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index ca7216d4..defcda75 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -21,12 +21,9 @@
/*
* AMD Programmer's Manual Volume 3
- * - Section "Function 8000_0000h - Maximum Extended Function Number and Vendor String"
* - Section "Function 8000_001Fh - Encrypted Memory Capabilities"
*/
-#define CPUID_FN_LARGEST_EXT_FUNC_NUM 0x80000000
#define CPUID_FN_ENCRYPT_MEM_CAPAB 0x8000001f
-#define SEV_SUPPORT_MASK 0b10
/*
* AMD Programmer's Manual Volume 2
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 3b02a966..b656ebf6 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -320,6 +320,15 @@ struct x86_cpu_feature {
#define X86_FEATURE_PFTHRESHOLD X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
#define X86_FEATURE_VGIF X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
#define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
+#define X86_FEATURE_SME X86_CPU_FEATURE(0x8000001F, 0, EAX, 0)
+#define X86_FEATURE_SEV X86_CPU_FEATURE(0x8000001F, 0, EAX, 1)
+#define X86_FEATURE_VM_PAGE_FLUSH X86_CPU_FEATURE(0x8000001F, 0, EAX, 2)
+#define X86_FEATURE_SEV_ES X86_CPU_FEATURE(0x8000001F, 0, EAX, 3)
+#define X86_FEATURE_SEV_SNP X86_CPU_FEATURE(0x8000001F, 0, EAX, 4)
+#define X86_FEATURE_V_TSC_AUX X86_CPU_FEATURE(0x8000001F, 0, EAX, 9)
+#define X86_FEATURE_SME_COHERENT X86_CPU_FEATURE(0x8000001F, 0, EAX, 10)
+#define X86_FEATURE_DEBUG_SWAP X86_CPU_FEATURE(0x8000001F, 0, EAX, 14)
+#define X86_FEATURE_SVSM X86_CPU_FEATURE(0x8000001F, 0, EAX, 28)
#define X86_FEATURE_AMD_PMU_V2 X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
/*
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 14/16] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (12 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 13/16] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 15/16] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
` (2 subsequent siblings)
16 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Use X86_PROPERTY_SEV_C_BIT instead of open coding equivalent functionality,
and delete the overly-verbose CPUID_FN_ENCRYPT_MEM_CAPAB macro.
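The replaced open-coded extraction reads the C-bit position from CPUID 0x8000001F.EBX[5:0]; a small sketch of that decode (function name is illustrative, not the property helper itself):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch only: CPUID 0x8000001F.EBX[5:0] enumerates the page-table
 * bit used to mark a page as encrypted (the SEV "C-bit").
 */
static inline unsigned short sev_c_bit_pos(uint32_t cpuid_8000001f_ebx)
{
	return (unsigned short)(cpuid_8000001f_ebx & 0x3f);
}
```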
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/amd_sev.c | 10 +---------
lib/x86/amd_sev.h | 6 ------
2 files changed, 1 insertion(+), 15 deletions(-)
diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 4e89c84c..416e4423 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -33,19 +33,11 @@ bool amd_sev_enabled(void)
efi_status_t setup_amd_sev(void)
{
- struct cpuid cpuid_out;
-
if (!amd_sev_enabled()) {
return EFI_UNSUPPORTED;
}
- /*
- * Extract C-Bit position from ebx[5:0]
- * AMD64 Architecture Programmer's Manual Volume 3
- * - Section " Function 8000_001Fh - Encrypted Memory Capabilities"
- */
- cpuid_out = cpuid(CPUID_FN_ENCRYPT_MEM_CAPAB);
- amd_sev_c_bit_pos = (unsigned short)(cpuid_out.b & 0x3f);
+ amd_sev_c_bit_pos = this_cpu_property(X86_PROPERTY_SEV_C_BIT);
return EFI_SUCCESS;
}
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index defcda75..daa33a05 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -19,12 +19,6 @@
#include "asm/page.h"
#include "efi.h"
-/*
- * AMD Programmer's Manual Volume 3
- * - Section "Function 8000_001Fh - Encrypted Memory Capabilities"
- */
-#define CPUID_FN_ENCRYPT_MEM_CAPAB 0x8000001f
-
/*
* AMD Programmer's Manual Volume 2
* - Section "SEV_STATUS MSR"
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 15/16] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (13 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 14/16] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-30 16:22 ` Liam Merwick
2025-05-29 22:19 ` [kvm-unit-tests PATCH 16/16] x86: Move SEV MSR definitions to msr.h Sean Christopherson
2025-06-10 19:42 ` [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
16 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Use amd_sev_es_enabled() in the SEV string I/O test instead of manually
checking the SEV_STATUS MSR.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
x86/amd_sev.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/x86/amd_sev.c b/x86/amd_sev.c
index 4ec45543..7c207a07 100644
--- a/x86/amd_sev.c
+++ b/x86/amd_sev.c
@@ -19,15 +19,6 @@
static char st1[] = "abcdefghijklmnop";
-static void test_sev_es_activation(void)
-{
- if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
- printf("SEV-ES is enabled.\n");
- } else {
- printf("SEV-ES is not enabled.\n");
- }
-}
-
static void test_stringio(void)
{
int st1_len = sizeof(st1) - 1;
@@ -52,7 +43,8 @@ int main(void)
goto out;
}
- test_sev_es_activation();
+ printf("SEV-ES is %senabled.\n", amd_sev_es_enabled() ? "" : "not");
+
test_stringio();
out:
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [kvm-unit-tests PATCH 16/16] x86: Move SEV MSR definitions to msr.h
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (14 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 15/16] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
@ 2025-05-29 22:19 ` Sean Christopherson
2025-06-10 19:42 ` [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
16 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, Sean Christopherson
Move the SEV MSR definitions to msr.h so that they're available for non-EFI
builds. There is nothing EFI specific about the architectural definitions.
Opportunistically massage the names to align with existing style.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
lib/x86/amd_sev.c | 8 ++++----
lib/x86/amd_sev.h | 14 --------------
lib/x86/msr.h | 6 ++++++
3 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 416e4423..7c6d2804 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -24,8 +24,8 @@ bool amd_sev_enabled(void)
if (!initialized) {
initialized = true;
- sev_enabled = this_cpu_has(X86_FEATURE_SEV)
- rdmsr(MSR_SEV_STATUS) & SEV_ENABLED_MASK);
+ sev_enabled = this_cpu_has(X86_FEATURE_SEV) &&
+ rdmsr(MSR_SEV_STATUS) & SEV_STATUS_SEV_ENABLED;
}
return sev_enabled;
@@ -52,7 +52,7 @@ bool amd_sev_es_enabled(void)
sev_es_enabled = amd_sev_enabled() &&
this_cpu_has(X86_FEATURE_SEV_ES) &&
- rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK;
+ rdmsr(MSR_SEV_STATUS) & SEV_STATUS_SEV_ES_ENABLED;
}
return sev_es_enabled;
@@ -100,7 +100,7 @@ void setup_ghcb_pte(pgd_t *page_table)
pteval_t *pte;
/* Read the current GHCB page addr */
- ghcb_addr = rdmsr(SEV_ES_GHCB_MSR_INDEX);
+ ghcb_addr = rdmsr(MSR_SEV_ES_GHCB);
/* Search Level 1 page table entry for GHCB page */
pte = get_pte_level(page_table, (void *)ghcb_addr, 1);
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index daa33a05..9d587e2d 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -19,23 +19,9 @@
#include "asm/page.h"
#include "efi.h"
-/*
- * AMD Programmer's Manual Volume 2
- * - Section "SEV_STATUS MSR"
- */
-#define MSR_SEV_STATUS 0xc0010131
-#define SEV_ENABLED_MASK 0b1
-#define SEV_ES_ENABLED_MASK 0b10
-
bool amd_sev_enabled(void);
efi_status_t setup_amd_sev(void);
-/*
- * AMD Programmer's Manual Volume 2
- * - Section "GHCB"
- */
-#define SEV_ES_GHCB_MSR_INDEX 0xc0010130
-
bool amd_sev_es_enabled(void);
efi_status_t setup_amd_sev_es(void);
void setup_ghcb_pte(pgd_t *page_table);
diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index 658d237f..ccfd6bdd 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -523,4 +523,10 @@
#define MSR_VM_IGNNE 0xc0010115
#define MSR_VM_HSAVE_PA 0xc0010117
+#define MSR_SEV_STATUS 0xc0010131
+#define SEV_STATUS_SEV_ENABLED BIT(0)
+#define SEV_STATUS_SEV_ES_ENABLED BIT(1)
+
+#define MSR_SEV_ES_GHCB 0xc0010130
+
#endif /* _X86_MSR_H_ */
--
2.49.0.1204.g71687c7c1d-goog
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
@ 2025-05-30 6:03 ` Andrew Jones
2025-05-30 9:01 ` Janosch Frank
2025-06-10 6:04 ` Mi, Dapeng
2 siblings, 0 replies; 34+ messages in thread
From: Andrew Jones @ 2025-05-30 6:03 UTC (permalink / raw)
To: Sean Christopherson
Cc: Janosch Frank, Claudio Imbrenda, Nico Böhr, Paolo Bonzini,
kvm-riscv, linux-s390, kvm
On Thu, May 29, 2025 at 03:19:14PM -0700, Sean Christopherson wrote:
> Add static_assert() to wrap _Static_assert() with stringification of the
> tested expression as the assert message. In most cases, the failed
> expression is far more helpful than a human-generated message (usually
> because the developer is forced to add _something_ for the message).
>
> For API consistency, provide a double-underscore variant for specifying a
> custom message.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/riscv/asm/isa.h | 4 +++-
> lib/s390x/asm/arch_def.h | 6 ++++--
> lib/s390x/fault.c | 3 ++-
> lib/util.h | 3 +++
> x86/lam.c | 4 ++--
> 5 files changed, 14 insertions(+), 6 deletions(-)
>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
2025-05-30 6:03 ` Andrew Jones
@ 2025-05-30 9:01 ` Janosch Frank
2025-06-10 6:04 ` Mi, Dapeng
2 siblings, 0 replies; 34+ messages in thread
From: Janosch Frank @ 2025-05-30 9:01 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Claudio Imbrenda,
Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/25 12:19 AM, Sean Christopherson wrote:
> Add static_assert() to wrap _Static_assert() with stringification of the
> tested expression as the assert message. In most cases, the failed
> expression is far more helpful than a human-generated message (usually
> because the developer is forced to add _something_ for the message).
>
> For API consistency, provide a double-underscore variant for specifying a
> custom message.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [kvm-unit-tests PATCH 15/16] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled
2025-05-29 22:19 ` [kvm-unit-tests PATCH 15/16] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
@ 2025-05-30 16:22 ` Liam Merwick
0 siblings, 0 replies; 34+ messages in thread
From: Liam Merwick @ 2025-05-30 16:22 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm, liam.merwick
On 29/05/2025 23:19, Sean Christopherson wrote:
> Use amd_sev_es_enabled() in the SEV string I/O test instead manually
> checking the SEV_STATUS MSR.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> x86/amd_sev.c | 12 ++----------
> 1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/x86/amd_sev.c b/x86/amd_sev.c
> index 4ec45543..7c207a07 100644
> --- a/x86/amd_sev.c
> +++ b/x86/amd_sev.c
> @@ -19,15 +19,6 @@
>
> static char st1[] = "abcdefghijklmnop";
>
> -static void test_sev_es_activation(void)
> -{
> - if (rdmsr(MSR_SEV_STATUS) & SEV_ES_ENABLED_MASK) {
> - printf("SEV-ES is enabled.\n");
> - } else {
> - printf("SEV-ES is not enabled.\n");
> - }
> -}
> -
> static void test_stringio(void)
> {
> int st1_len = sizeof(st1) - 1;
> @@ -52,7 +43,8 @@ int main(void)
> goto out;
> }
>
> - test_sev_es_activation();
> + printf("SEV-ES is %senabled.\n", amd_sev_es_enabled() ? "" : "not");
Add a space after 'not' to avoid "notenabled"
Otherwise
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
> +
> test_stringio();
>
> out:
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
2025-05-30 6:03 ` Andrew Jones
2025-05-30 9:01 ` Janosch Frank
@ 2025-06-10 6:04 ` Mi, Dapeng
2 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 6:04 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Add static_assert() to wrap _Static_assert() with stringification of the
> tested expression as the assert message. In most cases, the failed
> expression is far more helpful than a human-generated message (usually
> because the developer is forced to add _something_ for the message).
>
> For API consistency, provide a double-underscore variant for specifying a
> custom message.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/riscv/asm/isa.h | 4 +++-
> lib/s390x/asm/arch_def.h | 6 ++++--
> lib/s390x/fault.c | 3 ++-
> lib/util.h | 3 +++
> x86/lam.c | 4 ++--
> 5 files changed, 14 insertions(+), 6 deletions(-)
>
> diff --git a/lib/riscv/asm/isa.h b/lib/riscv/asm/isa.h
> index df874173..fb3af67d 100644
> --- a/lib/riscv/asm/isa.h
> +++ b/lib/riscv/asm/isa.h
> @@ -1,7 +1,9 @@
> /* SPDX-License-Identifier: GPL-2.0-only */
> #ifndef _ASMRISCV_ISA_H_
> #define _ASMRISCV_ISA_H_
> +
> #include <bitops.h>
> +#include <util.h>
> #include <asm/setup.h>
>
> /*
> @@ -14,7 +16,7 @@ enum {
> ISA_SSTC,
> ISA_MAX,
> };
> -_Static_assert(ISA_MAX <= __riscv_xlen, "Need to increase thread_info.isa");
> +__static_assert(ISA_MAX <= __riscv_xlen, "Need to increase thread_info.isa");
>
> static inline bool cpu_has_extension(int cpu, int ext)
> {
> diff --git a/lib/s390x/asm/arch_def.h b/lib/s390x/asm/arch_def.h
> index 03adcd3c..4c11df74 100644
> --- a/lib/s390x/asm/arch_def.h
> +++ b/lib/s390x/asm/arch_def.h
> @@ -8,6 +8,8 @@
> #ifndef _ASMS390X_ARCH_DEF_H_
> #define _ASMS390X_ARCH_DEF_H_
>
> +#include <util.h>
> +
> struct stack_frame {
> struct stack_frame *back_chain;
> uint64_t reserved;
> @@ -62,7 +64,7 @@ struct psw {
> };
> uint64_t addr;
> };
> -_Static_assert(sizeof(struct psw) == 16, "PSW size");
> +static_assert(sizeof(struct psw) == 16);
>
> #define PSW(m, a) ((struct psw){ .mask = (m), .addr = (uint64_t)(a) })
>
> @@ -194,7 +196,7 @@ struct lowcore {
> uint8_t pad_0x1400[0x1800 - 0x1400]; /* 0x1400 */
> uint8_t pgm_int_tdb[0x1900 - 0x1800]; /* 0x1800 */
> } __attribute__ ((__packed__));
> -_Static_assert(sizeof(struct lowcore) == 0x1900, "Lowcore size");
> +static_assert(sizeof(struct lowcore) == 0x1900);
>
> extern struct lowcore lowcore;
>
> diff --git a/lib/s390x/fault.c b/lib/s390x/fault.c
> index a882d5d9..ad5a5f66 100644
> --- a/lib/s390x/fault.c
> +++ b/lib/s390x/fault.c
> @@ -9,6 +9,7 @@
> */
> #include <libcflat.h>
> #include <bitops.h>
> +#include <util.h>
> #include <asm/arch_def.h>
> #include <asm/page.h>
> #include <fault.h>
> @@ -40,7 +41,7 @@ static void print_decode_pgm_prot(union teid teid)
> "LAP",
> "IEP",
> };
> - _Static_assert(ARRAY_SIZE(prot_str) == PROT_NUM_CODES, "ESOP2 prot codes");
> + static_assert(ARRAY_SIZE(prot_str) == PROT_NUM_CODES);
> int prot_code = teid_esop2_prot_code(teid);
>
> printf("Type: %s\n", prot_str[prot_code]);
> diff --git a/lib/util.h b/lib/util.h
> index f86af6d3..00d0b47d 100644
> --- a/lib/util.h
> +++ b/lib/util.h
> @@ -8,6 +8,9 @@
> * This work is licensed under the terms of the GNU LGPL, version 2.
> */
>
> +#define static_assert(expr, ...) __static_assert(expr, ##__VA_ARGS__, #expr)
> +#define __static_assert(expr, msg, ...) _Static_assert(expr, msg)
> +
> /*
> * parse_keyval extracts the integer from a string formatted as
> * string=integer. This is useful for passing expected values to
> diff --git a/x86/lam.c b/x86/lam.c
> index a1c98949..ad91deaf 100644
> --- a/x86/lam.c
> +++ b/x86/lam.c
> @@ -13,6 +13,7 @@
> #include "libcflat.h"
> #include "processor.h"
> #include "desc.h"
> +#include <util.h>
> #include "vmalloc.h"
> #include "alloc_page.h"
> #include "vm.h"
> @@ -236,8 +237,7 @@ static void test_lam_user(void)
> * address for both LAM48 and LAM57.
> */
> vaddr = alloc_pages_flags(0, AREA_NORMAL);
> - _Static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL,
> - "Identical mapping range check");
> + static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL);
>
> /*
> * Note, LAM doesn't have a global control bit to turn on/off LAM
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure
2025-05-29 22:19 ` [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
@ 2025-06-10 6:08 ` Mi, Dapeng
2025-06-10 13:56 ` Sean Christopherson
0 siblings, 1 reply; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 6:08 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Encode X86_FEATURE_* macros using a new "struct x86_cpu_feature" instead
> of manually packing the values into a u64. Using a structure eliminates
> open-coded shifts and masks, and is largely self-documenting.
>
> Note, the code and naming scheme are stolen from KVM selftests.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/processor.h | 171 ++++++++++++++++++++++++--------------------
> 1 file changed, 95 insertions(+), 76 deletions(-)
>
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index a0be04c5..3ac6711d 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -6,6 +6,7 @@
> #include "msr.h"
> #include <bitops.h>
> #include <stdint.h>
> +#include <util.h>
>
> #define CANONICAL_48_VAL 0xffffaaaaaaaaaaaaull
> #define CANONICAL_57_VAL 0xffaaaaaaaaaaaaaaull
> @@ -232,100 +233,118 @@ static inline bool is_intel(void)
> return strcmp((char *)name, "GenuineIntel") == 0;
> }
>
> -#define CPUID(a, b, c, d) ((((unsigned long long) a) << 32) | (b << 16) | \
> - (c << 8) | d)
> -
> /*
> - * Each X86_FEATURE_XXX definition is 64-bit and contains the following
> - * CPUID meta-data:
> - *
> - * [63:32] : input value for EAX
> - * [31:16] : input value for ECX
> - * [15:8] : output register
> - * [7:0] : bit position in output register
> + * Pack the information into a 64-bit value so that each X86_FEATURE_XXX can be
> + * passed by value with no overhead.
> */
> +struct x86_cpu_feature {
> + u32 function;
> + u16 index;
> + u8 reg;
> + u8 bit;
> +};
> +
> +#define X86_CPU_FEATURE(fn, idx, gpr, __bit) \
> +({ \
> + struct x86_cpu_feature feature = { \
> + .function = fn, \
> + .index = idx, \
> + .reg = gpr, \
> + .bit = __bit, \
> + }; \
> + \
> + static_assert((fn & 0xc0000000) == 0 || \
> + (fn & 0xc0000000) == 0x40000000 || \
> + (fn & 0xc0000000) == 0x80000000 || \
> + (fn & 0xc0000000) == 0xc0000000); \
> + static_assert(idx < BIT(sizeof(feature.index) * BITS_PER_BYTE)); \
> + feature; \
> +})
>
> /*
> * Basic Leafs, a.k.a. Intel defined
> */
> -#define X86_FEATURE_MWAIT (CPUID(0x1, 0, ECX, 3))
> -#define X86_FEATURE_VMX (CPUID(0x1, 0, ECX, 5))
> -#define X86_FEATURE_PDCM (CPUID(0x1, 0, ECX, 15))
> -#define X86_FEATURE_PCID (CPUID(0x1, 0, ECX, 17))
> -#define X86_FEATURE_X2APIC (CPUID(0x1, 0, ECX, 21))
> -#define X86_FEATURE_MOVBE (CPUID(0x1, 0, ECX, 22))
> -#define X86_FEATURE_TSC_DEADLINE_TIMER (CPUID(0x1, 0, ECX, 24))
> -#define X86_FEATURE_XSAVE (CPUID(0x1, 0, ECX, 26))
> -#define X86_FEATURE_OSXSAVE (CPUID(0x1, 0, ECX, 27))
> -#define X86_FEATURE_RDRAND (CPUID(0x1, 0, ECX, 30))
> -#define X86_FEATURE_MCE (CPUID(0x1, 0, EDX, 7))
> -#define X86_FEATURE_APIC (CPUID(0x1, 0, EDX, 9))
> -#define X86_FEATURE_CLFLUSH (CPUID(0x1, 0, EDX, 19))
> -#define X86_FEATURE_DS (CPUID(0x1, 0, EDX, 21))
> -#define X86_FEATURE_XMM (CPUID(0x1, 0, EDX, 25))
> -#define X86_FEATURE_XMM2 (CPUID(0x1, 0, EDX, 26))
> -#define X86_FEATURE_TSC_ADJUST (CPUID(0x7, 0, EBX, 1))
> -#define X86_FEATURE_HLE (CPUID(0x7, 0, EBX, 4))
> -#define X86_FEATURE_SMEP (CPUID(0x7, 0, EBX, 7))
> -#define X86_FEATURE_INVPCID (CPUID(0x7, 0, EBX, 10))
> -#define X86_FEATURE_RTM (CPUID(0x7, 0, EBX, 11))
> -#define X86_FEATURE_SMAP (CPUID(0x7, 0, EBX, 20))
> -#define X86_FEATURE_PCOMMIT (CPUID(0x7, 0, EBX, 22))
> -#define X86_FEATURE_CLFLUSHOPT (CPUID(0x7, 0, EBX, 23))
> -#define X86_FEATURE_CLWB (CPUID(0x7, 0, EBX, 24))
> -#define X86_FEATURE_INTEL_PT (CPUID(0x7, 0, EBX, 25))
> -#define X86_FEATURE_UMIP (CPUID(0x7, 0, ECX, 2))
> -#define X86_FEATURE_PKU (CPUID(0x7, 0, ECX, 3))
> -#define X86_FEATURE_LA57 (CPUID(0x7, 0, ECX, 16))
> -#define X86_FEATURE_RDPID (CPUID(0x7, 0, ECX, 22))
> -#define X86_FEATURE_SHSTK (CPUID(0x7, 0, ECX, 7))
> -#define X86_FEATURE_IBT (CPUID(0x7, 0, EDX, 20))
> -#define X86_FEATURE_SPEC_CTRL (CPUID(0x7, 0, EDX, 26))
> -#define X86_FEATURE_FLUSH_L1D (CPUID(0x7, 0, EDX, 28))
> -#define X86_FEATURE_ARCH_CAPABILITIES (CPUID(0x7, 0, EDX, 29))
> -#define X86_FEATURE_PKS (CPUID(0x7, 0, ECX, 31))
> -#define X86_FEATURE_LAM (CPUID(0x7, 1, EAX, 26))
> +#define X86_FEATURE_MWAIT X86_CPU_FEATURE(0x1, 0, ECX, 3)
> +#define X86_FEATURE_VMX X86_CPU_FEATURE(0x1, 0, ECX, 5)
> +#define X86_FEATURE_PDCM X86_CPU_FEATURE(0x1, 0, ECX, 15)
> +#define X86_FEATURE_PCID X86_CPU_FEATURE(0x1, 0, ECX, 17)
> +#define X86_FEATURE_X2APIC X86_CPU_FEATURE(0x1, 0, ECX, 21)
> +#define X86_FEATURE_MOVBE X86_CPU_FEATURE(0x1, 0, ECX, 22)
> +#define X86_FEATURE_TSC_DEADLINE_TIMER X86_CPU_FEATURE(0x1, 0, ECX, 24)
> +#define X86_FEATURE_XSAVE X86_CPU_FEATURE(0x1, 0, ECX, 26)
> +#define X86_FEATURE_OSXSAVE X86_CPU_FEATURE(0x1, 0, ECX, 27)
> +#define X86_FEATURE_RDRAND X86_CPU_FEATURE(0x1, 0, ECX, 30)
> +#define X86_FEATURE_MCE X86_CPU_FEATURE(0x1, 0, EDX, 7)
> +#define X86_FEATURE_APIC X86_CPU_FEATURE(0x1, 0, EDX, 9)
> +#define X86_FEATURE_CLFLUSH X86_CPU_FEATURE(0x1, 0, EDX, 19)
> +#define X86_FEATURE_DS X86_CPU_FEATURE(0x1, 0, EDX, 21)
> +#define X86_FEATURE_XMM X86_CPU_FEATURE(0x1, 0, EDX, 25)
> +#define X86_FEATURE_XMM2 X86_CPU_FEATURE(0x1, 0, EDX, 26)
> +#define X86_FEATURE_TSC_ADJUST X86_CPU_FEATURE(0x7, 0, EBX, 1)
> +#define X86_FEATURE_HLE X86_CPU_FEATURE(0x7, 0, EBX, 4)
> +#define X86_FEATURE_SMEP X86_CPU_FEATURE(0x7, 0, EBX, 7)
> +#define X86_FEATURE_INVPCID X86_CPU_FEATURE(0x7, 0, EBX, 10)
> +#define X86_FEATURE_RTM X86_CPU_FEATURE(0x7, 0, EBX, 11)
> +#define X86_FEATURE_SMAP X86_CPU_FEATURE(0x7, 0, EBX, 20)
> +#define X86_FEATURE_PCOMMIT X86_CPU_FEATURE(0x7, 0, EBX, 22)
> +#define X86_FEATURE_CLFLUSHOPT X86_CPU_FEATURE(0x7, 0, EBX, 23)
> +#define X86_FEATURE_CLWB X86_CPU_FEATURE(0x7, 0, EBX, 24)
> +#define X86_FEATURE_INTEL_PT X86_CPU_FEATURE(0x7, 0, EBX, 25)
> +#define X86_FEATURE_UMIP X86_CPU_FEATURE(0x7, 0, ECX, 2)
> +#define X86_FEATURE_PKU X86_CPU_FEATURE(0x7, 0, ECX, 3)
> +#define X86_FEATURE_LA57 X86_CPU_FEATURE(0x7, 0, ECX, 16)
> +#define X86_FEATURE_RDPID X86_CPU_FEATURE(0x7, 0, ECX, 22)
> +#define X86_FEATURE_SHSTK X86_CPU_FEATURE(0x7, 0, ECX, 7)
> +#define X86_FEATURE_IBT X86_CPU_FEATURE(0x7, 0, EDX, 20)
> +#define X86_FEATURE_SPEC_CTRL X86_CPU_FEATURE(0x7, 0, EDX, 26)
> +#define X86_FEATURE_FLUSH_L1D X86_CPU_FEATURE(0x7, 0, EDX, 28)
> +#define X86_FEATURE_ARCH_CAPABILITIES X86_CPU_FEATURE(0x7, 0, EDX, 29)
> +#define X86_FEATURE_PKS X86_CPU_FEATURE(0x7, 0, ECX, 31)
> +#define X86_FEATURE_LAM X86_CPU_FEATURE(0x7, 1, EAX, 26)
>
> /*
> * KVM defined leafs
> */
> -#define KVM_FEATURE_ASYNC_PF (CPUID(0x40000001, 0, EAX, 4))
> -#define KVM_FEATURE_ASYNC_PF_INT (CPUID(0x40000001, 0, EAX, 14))
> +#define KVM_FEATURE_ASYNC_PF X86_CPU_FEATURE(0x40000001, 0, EAX, 4)
> +#define KVM_FEATURE_ASYNC_PF_INT X86_CPU_FEATURE(0x40000001, 0, EAX, 14)
>
> /*
> * Extended Leafs, a.k.a. AMD defined
> */
> -#define X86_FEATURE_SVM (CPUID(0x80000001, 0, ECX, 2))
> -#define X86_FEATURE_PERFCTR_CORE (CPUID(0x80000001, 0, ECX, 23))
> -#define X86_FEATURE_NX (CPUID(0x80000001, 0, EDX, 20))
> -#define X86_FEATURE_GBPAGES (CPUID(0x80000001, 0, EDX, 26))
> -#define X86_FEATURE_RDTSCP (CPUID(0x80000001, 0, EDX, 27))
> -#define X86_FEATURE_LM (CPUID(0x80000001, 0, EDX, 29))
> -#define X86_FEATURE_RDPRU (CPUID(0x80000008, 0, EBX, 4))
> -#define X86_FEATURE_AMD_IBPB (CPUID(0x80000008, 0, EBX, 12))
> -#define X86_FEATURE_NPT (CPUID(0x8000000A, 0, EDX, 0))
> -#define X86_FEATURE_LBRV (CPUID(0x8000000A, 0, EDX, 1))
> -#define X86_FEATURE_NRIPS (CPUID(0x8000000A, 0, EDX, 3))
> -#define X86_FEATURE_TSCRATEMSR (CPUID(0x8000000A, 0, EDX, 4))
> -#define X86_FEATURE_PAUSEFILTER (CPUID(0x8000000A, 0, EDX, 10))
> -#define X86_FEATURE_PFTHRESHOLD (CPUID(0x8000000A, 0, EDX, 12))
> -#define X86_FEATURE_VGIF (CPUID(0x8000000A, 0, EDX, 16))
> -#define X86_FEATURE_VNMI (CPUID(0x8000000A, 0, EDX, 25))
> -#define X86_FEATURE_AMD_PMU_V2 (CPUID(0x80000022, 0, EAX, 0))
> +#define X86_FEATURE_SVM X86_CPU_FEATURE(0x80000001, 0, ECX, 2)
> +#define X86_FEATURE_PERFCTR_CORE X86_CPU_FEATURE(0x80000001, 0, ECX, 23)
> +#define X86_FEATURE_NX X86_CPU_FEATURE(0x80000001, 0, EDX, 20)
> +#define X86_FEATURE_GBPAGES X86_CPU_FEATURE(0x80000001, 0, EDX, 26)
> +#define X86_FEATURE_RDTSCP X86_CPU_FEATURE(0x80000001, 0, EDX, 27)
> +#define X86_FEATURE_LM X86_CPU_FEATURE(0x80000001, 0, EDX, 29)
> +#define X86_FEATURE_RDPRU X86_CPU_FEATURE(0x80000008, 0, EBX, 4)
> +#define X86_FEATURE_AMD_IBPB X86_CPU_FEATURE(0x80000008, 0, EBX, 12)
> +#define X86_FEATURE_NPT X86_CPU_FEATURE(0x8000000A, 0, EDX, 0)
> +#define X86_FEATURE_LBRV X86_CPU_FEATURE(0x8000000A, 0, EDX, 1)
> +#define X86_FEATURE_NRIPS X86_CPU_FEATURE(0x8000000A, 0, EDX, 3)
> +#define X86_FEATURE_TSCRATEMSR X86_CPU_FEATURE(0x8000000A, 0, EDX, 4)
> +#define X86_FEATURE_PAUSEFILTER X86_CPU_FEATURE(0x8000000A, 0, EDX, 10)
> +#define X86_FEATURE_PFTHRESHOLD X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
> +#define X86_FEATURE_VGIF X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
> +#define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
The code looks good to me except for the indentation style (mixed tabs and
spaces). Although it's not introduced by this patch, we may as well make it
consistent while we're at it.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
> +#define X86_FEATURE_AMD_PMU_V2 X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
>
> -static inline bool this_cpu_has(u64 feature)
> +static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
> {
> - u32 input_eax = feature >> 32;
> - u32 input_ecx = (feature >> 16) & 0xffff;
> - u32 output_reg = (feature >> 8) & 0xff;
> - u8 bit = feature & 0xff;
> - struct cpuid c;
> - u32 *tmp;
> + union {
> + struct cpuid cpuid;
> + u32 gprs[4];
> + } c;
>
> - c = cpuid_indexed(input_eax, input_ecx);
> - tmp = (u32 *)&c;
> + c.cpuid = cpuid_indexed(function, index);
>
> - return ((*(tmp + (output_reg % 32))) & (1 << bit));
> + return (c.gprs[reg] & GENMASK(hi, lo)) >> lo;
> +}
> +
> +static inline bool this_cpu_has(struct x86_cpu_feature feature)
> +{
> + return __this_cpu_has(feature.function, feature.index,
> + feature.reg, feature.bit, feature.bit);
> }
>
> struct far_pointer32 {
* Re: [kvm-unit-tests PATCH 03/16] x86: Add X86_PROPERTY_* framework to retrieve CPUID values
2025-05-29 22:19 ` [kvm-unit-tests PATCH 03/16] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
@ 2025-06-10 6:14 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 6:14 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Introduce X86_PROPERTY_* to allow retrieving values/properties from CPUID
> leafs, e.g. MAXPHYADDR from CPUID.0x80000008. Use the same core code as
> X86_FEATURE_*, the primary difference is that properties are multi-bit
> values, whereas features enumerate a single bit.
>
> Add this_cpu_has_p() to allow querying whether or not a property exists
> based on the maximum leaf associated with the property, e.g. MAXPHYADDR
> doesn't exist if the max leaf for 0x8000_xxxx is less than 0x8000_0008.
>
> Use the new property infrastructure in cpuid_maxphyaddr() to prove that
> the code works as intended. Future patches will convert additional code.
>
> Note, the code, nomenclature, changelog, etc. are all stolen from KVM
> selftests.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/processor.h | 109 +++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 102 insertions(+), 7 deletions(-)
>
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index 3ac6711d..6b61a38b 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -218,13 +218,6 @@ static inline struct cpuid cpuid(u32 function)
> return cpuid_indexed(function, 0);
> }
>
> -static inline u8 cpuid_maxphyaddr(void)
> -{
> - if (raw_cpuid(0x80000000, 0).a < 0x80000008)
> - return 36;
> - return raw_cpuid(0x80000008, 0).a & 0xff;
> -}
> -
> static inline bool is_intel(void)
> {
> struct cpuid c = cpuid(0);
> @@ -329,6 +322,74 @@ struct x86_cpu_feature {
> #define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
> #define X86_FEATURE_AMD_PMU_V2 X86_CPU_FEATURE(0x80000022, 0, EAX, 0)
>
> +/*
> + * Same idea as X86_FEATURE_XXX, but X86_PROPERTY_XXX retrieves a multi-bit
> + * value/property as opposed to a single-bit feature. Again, pack the info
> + * into a 64-bit value to pass by value with no overhead on 64-bit builds.
> + */
> +struct x86_cpu_property {
> + u32 function;
> + u8 index;
> + u8 reg;
> + u8 lo_bit;
> + u8 hi_bit;
> +};
> +#define X86_CPU_PROPERTY(fn, idx, gpr, low_bit, high_bit) \
> +({ \
> + struct x86_cpu_property property = { \
> + .function = fn, \
> + .index = idx, \
> + .reg = gpr, \
> + .lo_bit = low_bit, \
> + .hi_bit = high_bit, \
> + }; \
> + \
> + static_assert(low_bit < high_bit); \
> + static_assert((fn & 0xc0000000) == 0 || \
> + (fn & 0xc0000000) == 0x40000000 || \
> + (fn & 0xc0000000) == 0x80000000 || \
> + (fn & 0xc0000000) == 0xc0000000); \
> + static_assert(idx < BIT(sizeof(property.index) * BITS_PER_BYTE)); \
> + property; \
> +})
> +
> +#define X86_PROPERTY_MAX_BASIC_LEAF X86_CPU_PROPERTY(0, 0, EAX, 0, 31)
> +#define X86_PROPERTY_PMU_VERSION X86_CPU_PROPERTY(0xa, 0, EAX, 0, 7)
> +#define X86_PROPERTY_PMU_NR_GP_COUNTERS X86_CPU_PROPERTY(0xa, 0, EAX, 8, 15)
> +#define X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH X86_CPU_PROPERTY(0xa, 0, EAX, 16, 23)
> +#define X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH X86_CPU_PROPERTY(0xa, 0, EAX, 24, 31)
> +#define X86_PROPERTY_PMU_EVENTS_MASK X86_CPU_PROPERTY(0xa, 0, EBX, 0, 7)
> +#define X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK X86_CPU_PROPERTY(0xa, 0, ECX, 0, 31)
> +#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)
> +#define X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH X86_CPU_PROPERTY(0xa, 0, EDX, 5, 12)
> +
> +#define X86_PROPERTY_SUPPORTED_XCR0_LO X86_CPU_PROPERTY(0xd, 0, EAX, 0, 31)
> +#define X86_PROPERTY_XSTATE_MAX_SIZE_XCR0 X86_CPU_PROPERTY(0xd, 0, EBX, 0, 31)
> +#define X86_PROPERTY_XSTATE_MAX_SIZE X86_CPU_PROPERTY(0xd, 0, ECX, 0, 31)
> +#define X86_PROPERTY_SUPPORTED_XCR0_HI X86_CPU_PROPERTY(0xd, 0, EDX, 0, 31)
> +
> +#define X86_PROPERTY_XSTATE_TILE_SIZE X86_CPU_PROPERTY(0xd, 18, EAX, 0, 31)
> +#define X86_PROPERTY_XSTATE_TILE_OFFSET X86_CPU_PROPERTY(0xd, 18, EBX, 0, 31)
> +#define X86_PROPERTY_AMX_MAX_PALETTE_TABLES X86_CPU_PROPERTY(0x1d, 0, EAX, 0, 31)
> +#define X86_PROPERTY_AMX_TOTAL_TILE_BYTES X86_CPU_PROPERTY(0x1d, 1, EAX, 0, 15)
> +#define X86_PROPERTY_AMX_BYTES_PER_TILE X86_CPU_PROPERTY(0x1d, 1, EAX, 16, 31)
> +#define X86_PROPERTY_AMX_BYTES_PER_ROW X86_CPU_PROPERTY(0x1d, 1, EBX, 0, 15)
> +#define X86_PROPERTY_AMX_NR_TILE_REGS X86_CPU_PROPERTY(0x1d, 1, EBX, 16, 31)
> +#define X86_PROPERTY_AMX_MAX_ROWS X86_CPU_PROPERTY(0x1d, 1, ECX, 0, 15)
> +
> +#define X86_PROPERTY_MAX_KVM_LEAF X86_CPU_PROPERTY(0x40000000, 0, EAX, 0, 31)
> +
> +#define X86_PROPERTY_MAX_EXT_LEAF X86_CPU_PROPERTY(0x80000000, 0, EAX, 0, 31)
> +#define X86_PROPERTY_MAX_PHY_ADDR X86_CPU_PROPERTY(0x80000008, 0, EAX, 0, 7)
> +#define X86_PROPERTY_MAX_VIRT_ADDR X86_CPU_PROPERTY(0x80000008, 0, EAX, 8, 15)
> +#define X86_PROPERTY_GUEST_MAX_PHY_ADDR X86_CPU_PROPERTY(0x80000008, 0, EAX, 16, 23)
> +#define X86_PROPERTY_SEV_C_BIT X86_CPU_PROPERTY(0x8000001F, 0, EBX, 0, 5)
> +#define X86_PROPERTY_PHYS_ADDR_REDUCTION X86_CPU_PROPERTY(0x8000001F, 0, EBX, 6, 11)
> +#define X86_PROPERTY_NR_PERFCTR_CORE X86_CPU_PROPERTY(0x80000022, 0, EBX, 0, 3)
> +#define X86_PROPERTY_NR_PERFCTR_NB X86_CPU_PROPERTY(0x80000022, 0, EBX, 10, 15)
> +
> +#define X86_PROPERTY_MAX_CENTAUR_LEAF X86_CPU_PROPERTY(0xC0000000, 0, EAX, 0, 31)
> +
> static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
> {
> union {
> @@ -347,6 +408,40 @@ static inline bool this_cpu_has(struct x86_cpu_feature feature)
> feature.reg, feature.bit, feature.bit);
> }
>
> +static inline uint32_t this_cpu_property(struct x86_cpu_property property)
> +{
> + return __this_cpu_has(property.function, property.index,
> + property.reg, property.lo_bit, property.hi_bit);
> +}
> +
> +static __always_inline bool this_cpu_has_p(struct x86_cpu_property property)
> +{
> + uint32_t max_leaf;
> +
> + switch (property.function & 0xc0000000) {
> + case 0:
> + max_leaf = this_cpu_property(X86_PROPERTY_MAX_BASIC_LEAF);
> + break;
> + case 0x40000000:
> + max_leaf = this_cpu_property(X86_PROPERTY_MAX_KVM_LEAF);
> + break;
> + case 0x80000000:
> + max_leaf = this_cpu_property(X86_PROPERTY_MAX_EXT_LEAF);
> + break;
> + case 0xc0000000:
> + max_leaf = this_cpu_property(X86_PROPERTY_MAX_CENTAUR_LEAF);
> + }
> + return max_leaf >= property.function;
> +}
> +
> +static inline u8 cpuid_maxphyaddr(void)
> +{
> + if (!this_cpu_has_p(X86_PROPERTY_MAX_PHY_ADDR))
> + return 36;
> +
> + return this_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);
> +}
> +
> struct far_pointer32 {
> u32 offset;
> u16 selector;
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 04/16] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical()
2025-05-29 22:19 ` [kvm-unit-tests PATCH 04/16] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
@ 2025-06-10 6:16 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 6:16 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() instead of open coding a
> *very* rough equivalent. Default to a maximum virtual address width of
> 48 bits instead of 64 bits to better match real x86 CPUs (and Intel and
> AMD architectures).
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/processor.h | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index 6b61a38b..8c6f28a3 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -1022,9 +1022,14 @@ static inline void write_pkru(u32 pkru)
>
> static inline bool is_canonical(u64 addr)
> {
> - int va_width = (raw_cpuid(0x80000008, 0).a & 0xff00) >> 8;
> - int shift_amt = 64 - va_width;
> + int va_width, shift_amt;
>
> + if (this_cpu_has_p(X86_PROPERTY_MAX_VIRT_ADDR))
> + va_width = this_cpu_property(X86_PROPERTY_MAX_VIRT_ADDR);
> + else
> + va_width = 48;
> +
> + shift_amt = 64 - va_width;
> return (s64)(addr << shift_amt) >> shift_amt == addr;
> }
>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 05/16] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI}
2025-05-29 22:19 ` [kvm-unit-tests PATCH 05/16] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
@ 2025-06-10 6:18 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 6:18 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Use X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} to implement get_supported_xcr0().
>
> Opportunistically rename the helper and move it to processor.h.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/processor.h | 9 +++++++++
> x86/xsave.c | 11 +----------
> 2 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index 8c6f28a3..cbfd2ee1 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -442,6 +442,15 @@ static inline u8 cpuid_maxphyaddr(void)
> return this_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);
> }
>
> +static inline u64 this_cpu_supported_xcr0(void)
> +{
> + if (!this_cpu_has_p(X86_PROPERTY_SUPPORTED_XCR0_LO))
> + return 0;
> +
> + return (u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_LO) |
> + ((u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
> +}
> +
> struct far_pointer32 {
> u32 offset;
> u16 selector;
> diff --git a/x86/xsave.c b/x86/xsave.c
> index 5d80f245..cc8e3a0a 100644
> --- a/x86/xsave.c
> +++ b/x86/xsave.c
> @@ -8,15 +8,6 @@
> #define uint64_t unsigned long long
> #endif
>
> -static uint64_t get_supported_xcr0(void)
> -{
> - struct cpuid r;
> - r = cpuid_indexed(0xd, 0);
> - printf("eax %x, ebx %x, ecx %x, edx %x\n",
> - r.a, r.b, r.c, r.d);
> - return r.a + ((u64)r.d << 32);
> -}
> -
> #define XCR_XFEATURE_ENABLED_MASK 0x00000000
> #define XCR_XFEATURE_ILLEGAL_MASK 0x00000010
>
> @@ -33,7 +24,7 @@ static void test_xsave(void)
>
> printf("Legal instruction testing:\n");
>
> - supported_xcr0 = get_supported_xcr0();
> + supported_xcr0 = this_cpu_supported_xcr0();
> printf("Supported XCR0 bits: %#lx\n", supported_xcr0);
>
> test_bits = XSTATE_FP | XSTATE_SSE;
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 06/16] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES
2025-05-29 22:19 ` [kvm-unit-tests PATCH 06/16] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
@ 2025-06-10 6:21 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 6:21 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Add a definition for X86_PROPERTY_INTEL_PT_NR_RANGES, and use it instead
> of open coding equivalent logic in the LA57 testcase that verifies the
> canonical address behavior of PT MSRs.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/processor.h | 3 +++
> x86/la57.c | 2 +-
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index cbfd2ee1..3b02a966 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -370,6 +370,9 @@ struct x86_cpu_property {
>
> #define X86_PROPERTY_XSTATE_TILE_SIZE X86_CPU_PROPERTY(0xd, 18, EAX, 0, 31)
> #define X86_PROPERTY_XSTATE_TILE_OFFSET X86_CPU_PROPERTY(0xd, 18, EBX, 0, 31)
> +
> +#define X86_PROPERTY_INTEL_PT_NR_RANGES X86_CPU_PROPERTY(0x14, 1, EAX, 0, 2)
> +
> #define X86_PROPERTY_AMX_MAX_PALETTE_TABLES X86_CPU_PROPERTY(0x1d, 0, EAX, 0, 31)
> #define X86_PROPERTY_AMX_TOTAL_TILE_BYTES X86_CPU_PROPERTY(0x1d, 1, EAX, 0, 15)
> #define X86_PROPERTY_AMX_BYTES_PER_TILE X86_CPU_PROPERTY(0x1d, 1, EAX, 16, 31)
> diff --git a/x86/la57.c b/x86/la57.c
> index 41764110..1161a5bf 100644
> --- a/x86/la57.c
> +++ b/x86/la57.c
> @@ -288,7 +288,7 @@ static void __test_canonical_checks(bool force_emulation)
>
> /* PT filter ranges */
> if (this_cpu_has(X86_FEATURE_INTEL_PT)) {
> - int n_ranges = cpuid_indexed(0x14, 0x1).a & 0x7;
> + int n_ranges = this_cpu_property(X86_PROPERTY_INTEL_PT_NR_RANGES);
> int i;
>
> for (i = 0 ; i < n_ranges ; i++) {
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available()
2025-05-29 22:19 ` [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available() Sean Christopherson
@ 2025-06-10 7:09 ` Mi, Dapeng
2025-06-10 16:16 ` Sean Christopherson
0 siblings, 1 reply; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 7:09 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available() to
> reflect what the field and helper actually track. The availability of
> architectural events has nothing to do with the GP counters themselves.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/pmu.c | 4 ++--
> lib/x86/pmu.h | 6 +++---
> x86/pmu.c | 6 +++---
> 3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index d06e9455..599168ac 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -21,7 +21,7 @@ void pmu_init(void)
> pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
>
> /* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
We need to modify the comment as well.
> - pmu.gp_counter_available = ~cpuid_10.b;
> + pmu.arch_event_available = ~cpuid_10.b;
>
> if (this_cpu_has(X86_FEATURE_PDCM))
> pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> @@ -51,7 +51,7 @@ void pmu_init(void)
> }
> pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
> pmu.gp_counter_mask_length = pmu.nr_gp_counters;
> - pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
> + pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
"available architectural events" and "available GP counters" are two
different things. I know this would be changed in later patch 09/16, but
it's really confusing. Could we merge the later patch 09/16 into this patch?
>
> if (this_cpu_has_perf_global_status()) {
> pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
> diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
> index f07fbd93..d0ad280a 100644
> --- a/lib/x86/pmu.h
> +++ b/lib/x86/pmu.h
> @@ -64,7 +64,7 @@ struct pmu_caps {
> u8 nr_gp_counters;
> u8 gp_counter_width;
> u8 gp_counter_mask_length;
> - u32 gp_counter_available;
> + u32 arch_event_available;
> u32 msr_gp_counter_base;
> u32 msr_gp_event_select_base;
>
> @@ -110,9 +110,9 @@ static inline bool this_cpu_has_perf_global_status(void)
> return pmu.version > 1;
> }
>
> -static inline bool pmu_gp_counter_is_available(int i)
> +static inline bool pmu_arch_event_is_available(int i)
> {
> - return pmu.gp_counter_available & BIT(i);
> + return pmu.arch_event_available & BIT(i);
> }
>
> static inline u64 pmu_lbr_version(void)
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 8cf26b12..0ce34433 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -436,7 +436,7 @@ static void check_gp_counters(void)
> int i;
>
> for (i = 0; i < gp_events_size; i++)
> - if (pmu_gp_counter_is_available(i))
> + if (pmu_arch_event_is_available(i))
> check_gp_counter(&gp_events[i]);
> else
> printf("GP event '%s' is disabled\n",
> @@ -463,7 +463,7 @@ static void check_counters_many(void)
> int i, n;
>
> for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
> - if (!pmu_gp_counter_is_available(i))
> + if (!pmu_arch_event_is_available(i))
> continue;
The intent of check_counters_many() is to verify all available GP and fixed
counters can count correctly at the same time. So we should select another
available event to verify the counter instead of skipping the counter if an
event is not available.
Maybe like this.
diff --git a/x86/pmu.c b/x86/pmu.c
index 63eae3db..013fdfce 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -457,18 +457,34 @@ static void check_fixed_counters(void)
}
}
+static struct pmu_event *get_one_event(int idx)
+{
+ int i;
+
+ if (pmu_arch_event_is_available(idx))
+ return &gp_events[idx % gp_events_size];
+
+ for (i = 0; i < gp_events_size; i++) {
+ if (pmu_arch_event_is_available(i))
+ return &gp_events[i];
+ }
+
+ return NULL;
+}
+
static void check_counters_many(void)
{
+ struct pmu_event *evt;
pmu_counter_t cnt[48];
int i, n;
for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
- if (!pmu_arch_event_is_available(i))
+ evt = get_one_event(i);
+ if (!evt)
continue;
cnt[n].ctr = MSR_GP_COUNTERx(n);
- cnt[n].config = EVNTSEL_OS | EVNTSEL_USR |
- gp_events[i % gp_events_size].unit_sel;
+ cnt[n].config = EVNTSEL_OS | EVNTSEL_USR | evt->unit_sel;
n++;
}
for (i = 0; i < fixed_counters_num; i++) {
>
> cnt[n].ctr = MSR_GP_COUNTERx(n);
> @@ -902,7 +902,7 @@ static void set_ref_cycle_expectations(void)
> uint64_t t0, t1, t2, t3;
>
> /* Bit 2 enumerates the availability of reference cycles events. */
> - if (!pmu.nr_gp_counters || !pmu_gp_counter_is_available(2))
> + if (!pmu.nr_gp_counters || !pmu_arch_event_is_available(2))
> return;
>
> if (this_cpu_has_perf_global_ctrl())
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [kvm-unit-tests PATCH 08/16] x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length
2025-05-29 22:19 ` [kvm-unit-tests PATCH 08/16] x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length Sean Christopherson
@ 2025-06-10 7:22 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 7:22 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Rename gp_counter_mask_length to arch_event_mask_length to reflect what
> the field actually tracks. The availability of architectural events has
> nothing to do with the GP counters themselves.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/pmu.c | 4 ++--
> lib/x86/pmu.h | 2 +-
> x86/pmu.c | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index 599168ac..b97e2c4a 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -18,7 +18,7 @@ void pmu_init(void)
>
> pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
> pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
> - pmu.gp_counter_mask_length = (cpuid_10.a >> 24) & 0xff;
> + pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
>
> /* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
> pmu.arch_event_available = ~cpuid_10.b;
Better to change to "pmu.arch_event_available = ~cpuid_10.b &
(BIT(pmu.arch_event_mask_length) - 1)" to follow the SDM. Some newly
introduced architectural events, like the topdown metrics events, don't
exist on older platforms.
> @@ -50,7 +50,7 @@ void pmu_init(void)
> pmu.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
> }
> pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
> - pmu.gp_counter_mask_length = pmu.nr_gp_counters;
> + pmu.arch_event_mask_length = pmu.nr_gp_counters;
> pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
>
> if (this_cpu_has_perf_global_status()) {
> diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
> index d0ad280a..c7dc68c1 100644
> --- a/lib/x86/pmu.h
> +++ b/lib/x86/pmu.h
> @@ -63,7 +63,7 @@ struct pmu_caps {
> u8 fixed_counter_width;
> u8 nr_gp_counters;
> u8 gp_counter_width;
> - u8 gp_counter_mask_length;
> + u8 arch_event_mask_length;
> u32 arch_event_available;
> u32 msr_gp_counter_base;
> u32 msr_gp_event_select_base;
> diff --git a/x86/pmu.c b/x86/pmu.c
> index 0ce34433..63eae3db 100644
> --- a/x86/pmu.c
> +++ b/x86/pmu.c
> @@ -992,7 +992,7 @@ int main(int ac, char **av)
> printf("PMU version: %d\n", pmu.version);
> printf("GP counters: %d\n", pmu.nr_gp_counters);
> printf("GP counter width: %d\n", pmu.gp_counter_width);
> - printf("Mask length: %d\n", pmu.gp_counter_mask_length);
> + printf("Event Mask length: %d\n", pmu.arch_event_mask_length);
> printf("Fixed counters: %d\n", pmu.nr_fixed_counters);
> printf("Fixed counter width: %d\n", pmu.fixed_counter_width);
>
* Re: [kvm-unit-tests PATCH 11/16] x86/sev: Use VC_VECTOR from processor.h
2025-05-29 22:19 ` [kvm-unit-tests PATCH 11/16] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
@ 2025-06-10 7:25 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 7:25 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Use VC_VECTOR (defined in processor.h along with all other known vectors)
> and drop the one-off SEV_ES_VC_HANDLER_VECTOR macro.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/amd_sev.c | 4 ++--
> lib/x86/amd_sev.h | 6 ------
> 2 files changed, 2 insertions(+), 8 deletions(-)
>
> diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
> index 66722141..6c0a66ac 100644
> --- a/lib/x86/amd_sev.c
> +++ b/lib/x86/amd_sev.c
> @@ -111,9 +111,9 @@ efi_status_t setup_amd_sev_es(void)
> */
> sidt(&idtr);
> idt = (idt_entry_t *)idtr.base;
> - vc_handler_idt = idt[SEV_ES_VC_HANDLER_VECTOR];
> + vc_handler_idt = idt[VC_VECTOR];
> vc_handler_idt.selector = KERNEL_CS;
> - boot_idt[SEV_ES_VC_HANDLER_VECTOR] = vc_handler_idt;
> + boot_idt[VC_VECTOR] = vc_handler_idt;
>
> return EFI_SUCCESS;
> }
> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
> index ed6e3385..ca7216d4 100644
> --- a/lib/x86/amd_sev.h
> +++ b/lib/x86/amd_sev.h
> @@ -39,12 +39,6 @@
> bool amd_sev_enabled(void);
> efi_status_t setup_amd_sev(void);
>
> -/*
> - * AMD Programmer's Manual Volume 2
> - * - Section "#VC Exception"
> - */
> -#define SEV_ES_VC_HANDLER_VECTOR 29
> -
> /*
> * AMD Programmer's Manual Volume 2
> * - Section "GHCB"
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 10/16] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information
2025-05-29 22:19 ` [kvm-unit-tests PATCH 10/16] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
@ 2025-06-10 7:29 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-10 7:29 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> Use the recently introduced X86_PROPERTY_PMU_* macros to get PMU
> information instead of open coding equivalent functionality.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> lib/x86/pmu.c | 18 ++++++++----------
> 1 file changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c
> index 44449372..c7f7da14 100644
> --- a/lib/x86/pmu.c
> +++ b/lib/x86/pmu.c
> @@ -7,21 +7,19 @@ void pmu_init(void)
> pmu.is_intel = is_intel();
>
> if (pmu.is_intel) {
> - struct cpuid cpuid_10 = cpuid(10);
> -
> - pmu.version = cpuid_10.a & 0xff;
> + pmu.version = this_cpu_property(X86_PROPERTY_PMU_VERSION);
>
> if (pmu.version > 1) {
> - pmu.nr_fixed_counters = cpuid_10.d & 0x1f;
> - pmu.fixed_counter_width = (cpuid_10.d >> 5) & 0xff;
> + pmu.nr_fixed_counters = this_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
> + pmu.fixed_counter_width = this_cpu_property(X86_PROPERTY_PMU_FIXED_COUNTERS_BIT_WIDTH);
> }
>
> - pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff;
> - pmu.gp_counter_width = (cpuid_10.a >> 16) & 0xff;
> - pmu.arch_event_mask_length = (cpuid_10.a >> 24) & 0xff;
> + pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
> + pmu.gp_counter_width = this_cpu_property(X86_PROPERTY_PMU_GP_COUNTERS_BIT_WIDTH);
> + pmu.arch_event_mask_length = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
>
> /* CPUID.0xA.EBX bit is '1' if a counter is NOT available. */
> - pmu.arch_event_available = ~cpuid_10.b;
> + pmu.arch_event_available = ~this_cpu_property(X86_PROPERTY_PMU_EVENTS_MASK);
>
> if (this_cpu_has(X86_FEATURE_PDCM))
> pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
> @@ -38,7 +36,7 @@ void pmu_init(void)
> /* Performance Monitoring Version 2 Supported */
> if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) {
> pmu.version = 2;
> - pmu.nr_gp_counters = cpuid(0x80000022).b & 0xf;
> + pmu.nr_gp_counters = this_cpu_property(X86_PROPERTY_NR_PERFCTR_CORE);
> } else {
> pmu.nr_gp_counters = AMD64_NUM_COUNTERS_CORE;
> }
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* Re: [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure
2025-06-10 6:08 ` Mi, Dapeng
@ 2025-06-10 13:56 ` Sean Christopherson
0 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-06-10 13:56 UTC (permalink / raw)
To: Dapeng Mi
Cc: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini, kvm-riscv, linux-s390, kvm
On Tue, Jun 10, 2025, Dapeng Mi wrote:
> On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> > +#define X86_FEATURE_SVM X86_CPU_FEATURE(0x80000001, 0, ECX, 2)
> > +#define X86_FEATURE_PERFCTR_CORE X86_CPU_FEATURE(0x80000001, 0, ECX, 23)
> > +#define X86_FEATURE_NX X86_CPU_FEATURE(0x80000001, 0, EDX, 20)
> > +#define X86_FEATURE_GBPAGES X86_CPU_FEATURE(0x80000001, 0, EDX, 26)
> > +#define X86_FEATURE_RDTSCP X86_CPU_FEATURE(0x80000001, 0, EDX, 27)
> > +#define X86_FEATURE_LM X86_CPU_FEATURE(0x80000001, 0, EDX, 29)
> > +#define X86_FEATURE_RDPRU X86_CPU_FEATURE(0x80000008, 0, EBX, 4)
> > +#define X86_FEATURE_AMD_IBPB X86_CPU_FEATURE(0x80000008, 0, EBX, 12)
> > +#define X86_FEATURE_NPT X86_CPU_FEATURE(0x8000000A, 0, EDX, 0)
> > +#define X86_FEATURE_LBRV X86_CPU_FEATURE(0x8000000A, 0, EDX, 1)
> > +#define X86_FEATURE_NRIPS X86_CPU_FEATURE(0x8000000A, 0, EDX, 3)
> > +#define X86_FEATURE_TSCRATEMSR X86_CPU_FEATURE(0x8000000A, 0, EDX, 4)
> > +#define X86_FEATURE_PAUSEFILTER X86_CPU_FEATURE(0x8000000A, 0, EDX, 10)
> > +#define X86_FEATURE_PFTHRESHOLD X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
> > +#define X86_FEATURE_VGIF X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
> > +#define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
>
> The code looks good to me except the indent style (mixed tab and space).
> Although it's not introduced by this patch, we'd better make them identical
> by this chance.
Agreed, that is weird. I didn't notice it in the code, but looking at this diff
again, it really stands out.
* Re: [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available()
2025-06-10 7:09 ` Mi, Dapeng
@ 2025-06-10 16:16 ` Sean Christopherson
2025-06-11 0:41 ` Mi, Dapeng
0 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2025-06-10 16:16 UTC (permalink / raw)
To: Dapeng Mi
Cc: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini, kvm-riscv, linux-s390, kvm
On Tue, Jun 10, 2025, Dapeng Mi wrote:
> On 5/30/2025 6:19 AM, Sean Christopherson wrote:
> > @@ -51,7 +51,7 @@ void pmu_init(void)
> > }
> > pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
> > pmu.gp_counter_mask_length = pmu.nr_gp_counters;
> > - pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
> > + pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
>
> "available architectural events" and "available GP counters" are two
> different things. I know this would be changed in later patch 09/16, but
> it's really confusing. Could we merge the later patch 09/16 into this patch?
Ya. I was trying to not mix too many things in one patch, but looking at this
again, I 100% agree that squashing 7-9 into one patch is better overall.
> > @@ -463,7 +463,7 @@ static void check_counters_many(void)
> > int i, n;
> >
> > for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
> > - if (!pmu_gp_counter_is_available(i))
> > + if (!pmu_arch_event_is_available(i))
> > continue;
>
> The intent of check_counters_many() is to verify all available GP and fixed
> counters can count correctly at the same time. So we should select another
> available event to verify the counter instead of skipping the counter if an
> event is not available.
Agreed, but I'm going to defer that for now, this series already wanders in too
many directions. Definitely feel free to post a patch.
* Re: [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
` (15 preceding siblings ...)
2025-05-29 22:19 ` [kvm-unit-tests PATCH 16/16] x86: Move SEV MSR definitions to msr.h Sean Christopherson
@ 2025-06-10 19:42 ` Sean Christopherson
16 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2025-06-10 19:42 UTC (permalink / raw)
To: Sean Christopherson, Andrew Jones, Janosch Frank,
Claudio Imbrenda, Nico Böhr, Paolo Bonzini
Cc: kvm-riscv, linux-s390, kvm
On Thu, 29 May 2025 15:19:13 -0700, Sean Christopherson wrote:
> Copy KVM selftests' X86_PROPERTY_* infrastructure (multi-bit CPUID
> fields), and use the properties to clean up various warts. The SEV code
> in particular makes things much harder than they need to be (I went down
> this rabbit hole purely because the stupid MSR_SEV_STATUS definition was
> buried behind CONFIG_EFI=y, *sigh*).
>
> The first patch is a common change to add static_assert() as a wrapper
> to _Static_assert(). Forcing code to provide an error message just leads
> to useless error messages.
>
> [...]
To avoid spamming non-x86 folks with noise, applied patch 1 to kvm-x86 next.
I'll send a v2 for the rest.
[01/16] lib: Add and use static_assert() convenience wrappers
https://github.com/kvm-x86/kvm-unit-tests/commit/863e0b90fb88
--
https://github.com/kvm-x86/kvm-unit-tests/tree/next
* Re: [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available()
2025-06-10 16:16 ` Sean Christopherson
@ 2025-06-11 0:41 ` Mi, Dapeng
0 siblings, 0 replies; 34+ messages in thread
From: Mi, Dapeng @ 2025-06-11 0:41 UTC (permalink / raw)
To: Sean Christopherson
Cc: Andrew Jones, Janosch Frank, Claudio Imbrenda, Nico Böhr,
Paolo Bonzini, kvm-riscv, linux-s390, kvm
On 6/11/2025 12:16 AM, Sean Christopherson wrote:
> On Tue, Jun 10, 2025, Dapeng Mi wrote:
>> On 5/30/2025 6:19 AM, Sean Christopherson wrote:
>>> @@ -51,7 +51,7 @@ void pmu_init(void)
>>> }
>>> pmu.gp_counter_width = PMC_DEFAULT_WIDTH;
>>> pmu.gp_counter_mask_length = pmu.nr_gp_counters;
>>> - pmu.gp_counter_available = (1u << pmu.nr_gp_counters) - 1;
>>> + pmu.arch_event_available = (1u << pmu.nr_gp_counters) - 1;
>> "available architectural events" and "available GP counters" are two
>> different things. I know this would be changed in later patch 09/16, but
>> it's really confusing. Could we merge the later patch 09/16 into this patch?
> Ya. I was trying to not mix too many things in one patch, but looking at this
> again, I 100% agree that squashing 7-9 into one patch is better overall.
>
>>> @@ -463,7 +463,7 @@ static void check_counters_many(void)
>>> int i, n;
>>>
>>> for (i = 0, n = 0; n < pmu.nr_gp_counters; i++) {
>>> - if (!pmu_gp_counter_is_available(i))
>>> + if (!pmu_arch_event_is_available(i))
>>> continue;
>> The intent of check_counters_many() is to verify all available GP and fixed
>> counters can count correctly at the same time. So we should select another
>> available event to verify the counter instead of skipping the counter if an
>> event is not available.
> Agreed, but I'm going to defer that for now, this series already wanders in too
> many directions. Definitely feel free to post a patch.
Sure. Thanks.
end of thread [~2025-06-11 0:41 UTC | newest]
Thread overview: 34+ messages
2025-05-29 22:19 [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 01/16] lib: Add and use static_assert() convenience wrappers Sean Christopherson
2025-05-30 6:03 ` Andrew Jones
2025-05-30 9:01 ` Janosch Frank
2025-06-10 6:04 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 02/16] x86: Encode X86_FEATURE_* definitions using a structure Sean Christopherson
2025-06-10 6:08 ` Mi, Dapeng
2025-06-10 13:56 ` Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 03/16] x86: Add X86_PROPERTY_* framework to retrieve CPUID values Sean Christopherson
2025-06-10 6:14 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 04/16] x86: Use X86_PROPERTY_MAX_VIRT_ADDR in is_canonical() Sean Christopherson
2025-06-10 6:16 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 05/16] x86: Implement get_supported_xcr0() using X86_PROPERTY_SUPPORTED_XCR0_{LO,HI} Sean Christopherson
2025-06-10 6:18 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 06/16] x86: Add and use X86_PROPERTY_INTEL_PT_NR_RANGES Sean Christopherson
2025-06-10 6:21 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 07/16] x86/pmu: Rename pmu_gp_counter_is_available() to pmu_arch_event_is_available() Sean Christopherson
2025-06-10 7:09 ` Mi, Dapeng
2025-06-10 16:16 ` Sean Christopherson
2025-06-11 0:41 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 08/16] x86/pmu: Rename gp_counter_mask_length to arch_event_mask_length Sean Christopherson
2025-06-10 7:22 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 09/16] x86/pmu: Mark all arch events as available on AMD Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 10/16] x86/pmu: Use X86_PROPERTY_PMU_* macros to retrieve PMU information Sean Christopherson
2025-06-10 7:29 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 11/16] x86/sev: Use VC_VECTOR from processor.h Sean Christopherson
2025-06-10 7:25 ` Mi, Dapeng
2025-05-29 22:19 ` [kvm-unit-tests PATCH 12/16] x86/sev: Skip the AMD SEV test if SEV is unsupported/disabled Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 13/16] x86/sev: Define and use X86_FEATURE_* flags for CPUID 0x8000001F Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 14/16] x86/sev: Use X86_PROPERTY_SEV_C_BIT to get the AMD SEV C-bit location Sean Christopherson
2025-05-29 22:19 ` [kvm-unit-tests PATCH 15/16] x86/sev: Use amd_sev_es_enabled() to detect if SEV-ES is enabled Sean Christopherson
2025-05-30 16:22 ` Liam Merwick
2025-05-29 22:19 ` [kvm-unit-tests PATCH 16/16] x86: Move SEV MSR definitions to msr.h Sean Christopherson
2025-06-10 19:42 ` [kvm-unit-tests PATCH 00/16] x86: Add CPUID properties, clean up related code Sean Christopherson