* [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions
@ 2025-09-16 17:22 Jon Kohler
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
This series modernizes VMX definitions to align with the canonical ones
in the Linux kernel source. Currently, kvm-unit-tests uses custom VMX
constant definitions that have grown organically and diverged from the
kernel, increasing the overhead of moving between the two code bases.
This alignment provides several benefits:
- Reduces maintenance overhead by using authoritative definitions
- Eliminates potential bugs from definition mismatches
- Makes the test suite more consistent with kernel code
- Simplifies future updates when new VMX features are added
Given the lines touched, I've broken this up into two groups within the
series:
Group 1: Import various headers from Linux kernel 6.16 (P01-04)
Headers were brought in verbatim, apart from minor tweaks for includes
and the like.
Group 2: Mechanically replace existing constants with equivalents (P05-17)
Replace custom VMX constant definitions in x86/vmx.h with Linux kernel
equivalents from lib/linux/vmx.h. This systematic replacement covers:
- Pin-based VM-execution controls (PIN_* -> PIN_BASED_*)
- CPU-based VM-execution controls (CPU_* -> CPU_BASED_*, SECONDARY_EXEC_*)
- VM-exit controls (EXI_* -> VM_EXIT_*)
- VM-entry controls (ENT_* -> VM_ENTRY_*)
- VMCS field names (custom enum -> standard Linux enum)
- VMX exit reasons (VMX_* -> EXIT_REASON_*)
- Interrupt/exception type definitions
All functional behavior is preserved; only the constant names and
definitions change to match the Linux kernel. All existing VMX tests
pass with no functional changes.
There is still a bit of bulk in x86/vmx.h, which can be addressed in
future patches as needed.
Jon Kohler (17):
lib: add linux vmx.h clone from 6.16
lib: add linux trapnr.h clone from 6.16
lib: add vmxfeatures.h clone from 6.16
lib: define __aligned() in compiler.h
x86/vmx: basic integration for new vmx.h
x86/vmx: switch to new vmx.h EPT violation defs
x86/vmx: switch to new vmx.h EPT RWX defs
x86/vmx: switch to new vmx.h EPT access and dirty defs
x86/vmx: switch to new vmx.h EPT capability and memory type defs
x86/vmx: switch to new vmx.h primary processor-based VM-execution
controls
x86/vmx: switch to new vmx.h secondary execution control bit
x86/vmx: switch to new vmx.h secondary execution controls
x86/vmx: switch to new vmx.h pin based VM-execution controls
x86/vmx: switch to new vmx.h exit controls
x86/vmx: switch to new vmx.h entry controls
x86/vmx: switch to new vmx.h interrupt defs
x86/vmx: align exit reasons with Linux uapi
lib/linux/compiler.h | 1 +
lib/linux/trapnr.h | 44 ++
lib/linux/vmx.h | 672 ++++++++++++++++++
lib/linux/vmxfeatures.h | 93 +++
lib/x86/msr.h | 14 +
x86/vmx.c | 230 +++---
x86/vmx.h | 356 ++--------
x86/vmx_tests.c | 1489 ++++++++++++++++++++++-----------------
8 files changed, 1876 insertions(+), 1023 deletions(-)
create mode 100644 lib/linux/trapnr.h
create mode 100644 lib/linux/vmx.h
create mode 100644 lib/linux/vmxfeatures.h
base-commit: 890498d834b68104e79b57a801fa11fc6ce82846
--
2.43.0
* [kvm-unit-tests PATCH 01/17] lib: add linux vmx.h clone from 6.16
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm, Jon Kohler
Add Linux's arch/x86/include/asm/vmx.h from [1] into lib/linux/vmx.h.
This copy will replace most (if not all) of the existing vmx.h in
follow-up commits, and will allow kvm-unit-tests to align directly with
how Linux defines vmx.h.
[1] e6a8578 ("KVM: TDX: Detect unexpected SEPT violations due to pending SPTEs")
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/linux/vmx.h | 672 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 672 insertions(+)
create mode 100644 lib/linux/vmx.h
diff --git a/lib/linux/vmx.h b/lib/linux/vmx.h
new file mode 100644
index 00000000..cca7d664
--- /dev/null
+++ b/lib/linux/vmx.h
@@ -0,0 +1,672 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * vmx.h: VMX Architecture related definitions
+ * Copyright (c) 2004, Intel Corporation.
+ *
+ * A few random additions are:
+ * Copyright (C) 2006 Qumranet
+ * Avi Kivity <avi@qumranet.com>
+ * Yaniv Kamay <yaniv@qumranet.com>
+ */
+#ifndef VMX_H
+#define VMX_H
+
+
+#include <linux/bitops.h>
+#include <linux/bug.h>
+#include <linux/types.h>
+
+#include <uapi/asm/vmx.h>
+#include <asm/trapnr.h>
+#include <asm/vmxfeatures.h>
+
+#define VMCS_CONTROL_BIT(x) BIT(VMX_FEATURE_##x & 0x1f)
+
+/*
+ * Definitions of Primary Processor-Based VM-Execution Controls.
+ */
+#define CPU_BASED_INTR_WINDOW_EXITING VMCS_CONTROL_BIT(INTR_WINDOW_EXITING)
+#define CPU_BASED_USE_TSC_OFFSETTING VMCS_CONTROL_BIT(USE_TSC_OFFSETTING)
+#define CPU_BASED_HLT_EXITING VMCS_CONTROL_BIT(HLT_EXITING)
+#define CPU_BASED_INVLPG_EXITING VMCS_CONTROL_BIT(INVLPG_EXITING)
+#define CPU_BASED_MWAIT_EXITING VMCS_CONTROL_BIT(MWAIT_EXITING)
+#define CPU_BASED_RDPMC_EXITING VMCS_CONTROL_BIT(RDPMC_EXITING)
+#define CPU_BASED_RDTSC_EXITING VMCS_CONTROL_BIT(RDTSC_EXITING)
+#define CPU_BASED_CR3_LOAD_EXITING VMCS_CONTROL_BIT(CR3_LOAD_EXITING)
+#define CPU_BASED_CR3_STORE_EXITING VMCS_CONTROL_BIT(CR3_STORE_EXITING)
+#define CPU_BASED_ACTIVATE_TERTIARY_CONTROLS VMCS_CONTROL_BIT(TERTIARY_CONTROLS)
+#define CPU_BASED_CR8_LOAD_EXITING VMCS_CONTROL_BIT(CR8_LOAD_EXITING)
+#define CPU_BASED_CR8_STORE_EXITING VMCS_CONTROL_BIT(CR8_STORE_EXITING)
+#define CPU_BASED_TPR_SHADOW VMCS_CONTROL_BIT(VIRTUAL_TPR)
+#define CPU_BASED_NMI_WINDOW_EXITING VMCS_CONTROL_BIT(NMI_WINDOW_EXITING)
+#define CPU_BASED_MOV_DR_EXITING VMCS_CONTROL_BIT(MOV_DR_EXITING)
+#define CPU_BASED_UNCOND_IO_EXITING VMCS_CONTROL_BIT(UNCOND_IO_EXITING)
+#define CPU_BASED_USE_IO_BITMAPS VMCS_CONTROL_BIT(USE_IO_BITMAPS)
+#define CPU_BASED_MONITOR_TRAP_FLAG VMCS_CONTROL_BIT(MONITOR_TRAP_FLAG)
+#define CPU_BASED_USE_MSR_BITMAPS VMCS_CONTROL_BIT(USE_MSR_BITMAPS)
+#define CPU_BASED_MONITOR_EXITING VMCS_CONTROL_BIT(MONITOR_EXITING)
+#define CPU_BASED_PAUSE_EXITING VMCS_CONTROL_BIT(PAUSE_EXITING)
+#define CPU_BASED_ACTIVATE_SECONDARY_CONTROLS VMCS_CONTROL_BIT(SEC_CONTROLS)
+
+#define CPU_BASED_ALWAYSON_WITHOUT_TRUE_MSR 0x0401e172
+
+/*
+ * Definitions of Secondary Processor-Based VM-Execution Controls.
+ */
+#define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES VMCS_CONTROL_BIT(VIRT_APIC_ACCESSES)
+#define SECONDARY_EXEC_ENABLE_EPT VMCS_CONTROL_BIT(EPT)
+#define SECONDARY_EXEC_DESC VMCS_CONTROL_BIT(DESC_EXITING)
+#define SECONDARY_EXEC_ENABLE_RDTSCP VMCS_CONTROL_BIT(RDTSCP)
+#define SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE VMCS_CONTROL_BIT(VIRTUAL_X2APIC)
+#define SECONDARY_EXEC_ENABLE_VPID VMCS_CONTROL_BIT(VPID)
+#define SECONDARY_EXEC_WBINVD_EXITING VMCS_CONTROL_BIT(WBINVD_EXITING)
+#define SECONDARY_EXEC_UNRESTRICTED_GUEST VMCS_CONTROL_BIT(UNRESTRICTED_GUEST)
+#define SECONDARY_EXEC_APIC_REGISTER_VIRT VMCS_CONTROL_BIT(APIC_REGISTER_VIRT)
+#define SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY VMCS_CONTROL_BIT(VIRT_INTR_DELIVERY)
+#define SECONDARY_EXEC_PAUSE_LOOP_EXITING VMCS_CONTROL_BIT(PAUSE_LOOP_EXITING)
+#define SECONDARY_EXEC_RDRAND_EXITING VMCS_CONTROL_BIT(RDRAND_EXITING)
+#define SECONDARY_EXEC_ENABLE_INVPCID VMCS_CONTROL_BIT(INVPCID)
+#define SECONDARY_EXEC_ENABLE_VMFUNC VMCS_CONTROL_BIT(VMFUNC)
+#define SECONDARY_EXEC_SHADOW_VMCS VMCS_CONTROL_BIT(SHADOW_VMCS)
+#define SECONDARY_EXEC_ENCLS_EXITING VMCS_CONTROL_BIT(ENCLS_EXITING)
+#define SECONDARY_EXEC_RDSEED_EXITING VMCS_CONTROL_BIT(RDSEED_EXITING)
+#define SECONDARY_EXEC_ENABLE_PML VMCS_CONTROL_BIT(PAGE_MOD_LOGGING)
+#define SECONDARY_EXEC_EPT_VIOLATION_VE VMCS_CONTROL_BIT(EPT_VIOLATION_VE)
+#define SECONDARY_EXEC_PT_CONCEAL_VMX VMCS_CONTROL_BIT(PT_CONCEAL_VMX)
+#define SECONDARY_EXEC_ENABLE_XSAVES VMCS_CONTROL_BIT(XSAVES)
+#define SECONDARY_EXEC_MODE_BASED_EPT_EXEC VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC)
+#define SECONDARY_EXEC_PT_USE_GPA VMCS_CONTROL_BIT(PT_USE_GPA)
+#define SECONDARY_EXEC_TSC_SCALING VMCS_CONTROL_BIT(TSC_SCALING)
+#define SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE VMCS_CONTROL_BIT(USR_WAIT_PAUSE)
+#define SECONDARY_EXEC_BUS_LOCK_DETECTION VMCS_CONTROL_BIT(BUS_LOCK_DETECTION)
+#define SECONDARY_EXEC_NOTIFY_VM_EXITING VMCS_CONTROL_BIT(NOTIFY_VM_EXITING)
+
+/*
+ * Definitions of Tertiary Processor-Based VM-Execution Controls.
+ */
+#define TERTIARY_EXEC_IPI_VIRT VMCS_CONTROL_BIT(IPI_VIRT)
+
+#define PIN_BASED_EXT_INTR_MASK VMCS_CONTROL_BIT(INTR_EXITING)
+#define PIN_BASED_NMI_EXITING VMCS_CONTROL_BIT(NMI_EXITING)
+#define PIN_BASED_VIRTUAL_NMIS VMCS_CONTROL_BIT(VIRTUAL_NMIS)
+#define PIN_BASED_VMX_PREEMPTION_TIMER VMCS_CONTROL_BIT(PREEMPTION_TIMER)
+#define PIN_BASED_POSTED_INTR VMCS_CONTROL_BIT(POSTED_INTR)
+
+#define PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR 0x00000016
+
+#define VM_EXIT_SAVE_DEBUG_CONTROLS 0x00000004
+#define VM_EXIT_HOST_ADDR_SPACE_SIZE 0x00000200
+#define VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL 0x00001000
+#define VM_EXIT_ACK_INTR_ON_EXIT 0x00008000
+#define VM_EXIT_SAVE_IA32_PAT 0x00040000
+#define VM_EXIT_LOAD_IA32_PAT 0x00080000
+#define VM_EXIT_SAVE_IA32_EFER 0x00100000
+#define VM_EXIT_LOAD_IA32_EFER 0x00200000
+#define VM_EXIT_SAVE_VMX_PREEMPTION_TIMER 0x00400000
+#define VM_EXIT_CLEAR_BNDCFGS 0x00800000
+#define VM_EXIT_PT_CONCEAL_PIP 0x01000000
+#define VM_EXIT_CLEAR_IA32_RTIT_CTL 0x02000000
+
+#define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR 0x00036dff
+
+#define VM_ENTRY_LOAD_DEBUG_CONTROLS 0x00000004
+#define VM_ENTRY_IA32E_MODE 0x00000200
+#define VM_ENTRY_SMM 0x00000400
+#define VM_ENTRY_DEACT_DUAL_MONITOR 0x00000800
+#define VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL 0x00002000
+#define VM_ENTRY_LOAD_IA32_PAT 0x00004000
+#define VM_ENTRY_LOAD_IA32_EFER 0x00008000
+#define VM_ENTRY_LOAD_BNDCFGS 0x00010000
+#define VM_ENTRY_PT_CONCEAL_PIP 0x00020000
+#define VM_ENTRY_LOAD_IA32_RTIT_CTL 0x00040000
+
+#define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR 0x000011ff
+
+/* VMFUNC functions */
+#define VMFUNC_CONTROL_BIT(x) BIT((VMX_FEATURE_##x & 0x1f) - 28)
+
+#define VMX_VMFUNC_EPTP_SWITCHING VMFUNC_CONTROL_BIT(EPTP_SWITCHING)
+#define VMFUNC_EPTP_ENTRIES 512
+
+#define VMX_BASIC_32BIT_PHYS_ADDR_ONLY BIT_ULL(48)
+#define VMX_BASIC_DUAL_MONITOR_TREATMENT BIT_ULL(49)
+#define VMX_BASIC_INOUT BIT_ULL(54)
+#define VMX_BASIC_TRUE_CTLS BIT_ULL(55)
+
+static inline u32 vmx_basic_vmcs_revision_id(u64 vmx_basic)
+{
+ return vmx_basic & GENMASK_ULL(30, 0);
+}
+
+static inline u32 vmx_basic_vmcs_size(u64 vmx_basic)
+{
+ return (vmx_basic & GENMASK_ULL(44, 32)) >> 32;
+}
+
+static inline u32 vmx_basic_vmcs_mem_type(u64 vmx_basic)
+{
+ return (vmx_basic & GENMASK_ULL(53, 50)) >> 50;
+}
+
+static inline u64 vmx_basic_encode_vmcs_info(u32 revision, u16 size, u8 memtype)
+{
+ return revision | ((u64)size << 32) | ((u64)memtype << 50);
+}
+
+#define VMX_MISC_SAVE_EFER_LMA BIT_ULL(5)
+#define VMX_MISC_ACTIVITY_HLT BIT_ULL(6)
+#define VMX_MISC_ACTIVITY_SHUTDOWN BIT_ULL(7)
+#define VMX_MISC_ACTIVITY_WAIT_SIPI BIT_ULL(8)
+#define VMX_MISC_INTEL_PT BIT_ULL(14)
+#define VMX_MISC_RDMSR_IN_SMM BIT_ULL(15)
+#define VMX_MISC_VMXOFF_BLOCK_SMI BIT_ULL(28)
+#define VMX_MISC_VMWRITE_SHADOW_RO_FIELDS BIT_ULL(29)
+#define VMX_MISC_ZERO_LEN_INS BIT_ULL(30)
+#define VMX_MISC_MSR_LIST_MULTIPLIER 512
+
+static inline int vmx_misc_preemption_timer_rate(u64 vmx_misc)
+{
+ return vmx_misc & GENMASK_ULL(4, 0);
+}
+
+static inline int vmx_misc_cr3_count(u64 vmx_misc)
+{
+ return (vmx_misc & GENMASK_ULL(24, 16)) >> 16;
+}
+
+static inline int vmx_misc_max_msr(u64 vmx_misc)
+{
+ return (vmx_misc & GENMASK_ULL(27, 25)) >> 25;
+}
+
+static inline int vmx_misc_mseg_revid(u64 vmx_misc)
+{
+ return (vmx_misc & GENMASK_ULL(63, 32)) >> 32;
+}
+
+/* VMCS Encodings */
+enum vmcs_field {
+ VIRTUAL_PROCESSOR_ID = 0x00000000,
+ POSTED_INTR_NV = 0x00000002,
+ LAST_PID_POINTER_INDEX = 0x00000008,
+ GUEST_ES_SELECTOR = 0x00000800,
+ GUEST_CS_SELECTOR = 0x00000802,
+ GUEST_SS_SELECTOR = 0x00000804,
+ GUEST_DS_SELECTOR = 0x00000806,
+ GUEST_FS_SELECTOR = 0x00000808,
+ GUEST_GS_SELECTOR = 0x0000080a,
+ GUEST_LDTR_SELECTOR = 0x0000080c,
+ GUEST_TR_SELECTOR = 0x0000080e,
+ GUEST_INTR_STATUS = 0x00000810,
+ GUEST_PML_INDEX = 0x00000812,
+ HOST_ES_SELECTOR = 0x00000c00,
+ HOST_CS_SELECTOR = 0x00000c02,
+ HOST_SS_SELECTOR = 0x00000c04,
+ HOST_DS_SELECTOR = 0x00000c06,
+ HOST_FS_SELECTOR = 0x00000c08,
+ HOST_GS_SELECTOR = 0x00000c0a,
+ HOST_TR_SELECTOR = 0x00000c0c,
+ IO_BITMAP_A = 0x00002000,
+ IO_BITMAP_A_HIGH = 0x00002001,
+ IO_BITMAP_B = 0x00002002,
+ IO_BITMAP_B_HIGH = 0x00002003,
+ MSR_BITMAP = 0x00002004,
+ MSR_BITMAP_HIGH = 0x00002005,
+ VM_EXIT_MSR_STORE_ADDR = 0x00002006,
+ VM_EXIT_MSR_STORE_ADDR_HIGH = 0x00002007,
+ VM_EXIT_MSR_LOAD_ADDR = 0x00002008,
+ VM_EXIT_MSR_LOAD_ADDR_HIGH = 0x00002009,
+ VM_ENTRY_MSR_LOAD_ADDR = 0x0000200a,
+ VM_ENTRY_MSR_LOAD_ADDR_HIGH = 0x0000200b,
+ PML_ADDRESS = 0x0000200e,
+ PML_ADDRESS_HIGH = 0x0000200f,
+ TSC_OFFSET = 0x00002010,
+ TSC_OFFSET_HIGH = 0x00002011,
+ VIRTUAL_APIC_PAGE_ADDR = 0x00002012,
+ VIRTUAL_APIC_PAGE_ADDR_HIGH = 0x00002013,
+ APIC_ACCESS_ADDR = 0x00002014,
+ APIC_ACCESS_ADDR_HIGH = 0x00002015,
+ POSTED_INTR_DESC_ADDR = 0x00002016,
+ POSTED_INTR_DESC_ADDR_HIGH = 0x00002017,
+ VM_FUNCTION_CONTROL = 0x00002018,
+ VM_FUNCTION_CONTROL_HIGH = 0x00002019,
+ EPT_POINTER = 0x0000201a,
+ EPT_POINTER_HIGH = 0x0000201b,
+ EOI_EXIT_BITMAP0 = 0x0000201c,
+ EOI_EXIT_BITMAP0_HIGH = 0x0000201d,
+ EOI_EXIT_BITMAP1 = 0x0000201e,
+ EOI_EXIT_BITMAP1_HIGH = 0x0000201f,
+ EOI_EXIT_BITMAP2 = 0x00002020,
+ EOI_EXIT_BITMAP2_HIGH = 0x00002021,
+ EOI_EXIT_BITMAP3 = 0x00002022,
+ EOI_EXIT_BITMAP3_HIGH = 0x00002023,
+ EPTP_LIST_ADDRESS = 0x00002024,
+ EPTP_LIST_ADDRESS_HIGH = 0x00002025,
+ VMREAD_BITMAP = 0x00002026,
+ VMREAD_BITMAP_HIGH = 0x00002027,
+ VMWRITE_BITMAP = 0x00002028,
+ VMWRITE_BITMAP_HIGH = 0x00002029,
+ VE_INFORMATION_ADDRESS = 0x0000202A,
+ VE_INFORMATION_ADDRESS_HIGH = 0x0000202B,
+ XSS_EXIT_BITMAP = 0x0000202C,
+ XSS_EXIT_BITMAP_HIGH = 0x0000202D,
+ ENCLS_EXITING_BITMAP = 0x0000202E,
+ ENCLS_EXITING_BITMAP_HIGH = 0x0000202F,
+ TSC_MULTIPLIER = 0x00002032,
+ TSC_MULTIPLIER_HIGH = 0x00002033,
+ TERTIARY_VM_EXEC_CONTROL = 0x00002034,
+ TERTIARY_VM_EXEC_CONTROL_HIGH = 0x00002035,
+ SHARED_EPT_POINTER = 0x0000203C,
+ PID_POINTER_TABLE = 0x00002042,
+ PID_POINTER_TABLE_HIGH = 0x00002043,
+ GUEST_PHYSICAL_ADDRESS = 0x00002400,
+ GUEST_PHYSICAL_ADDRESS_HIGH = 0x00002401,
+ VMCS_LINK_POINTER = 0x00002800,
+ VMCS_LINK_POINTER_HIGH = 0x00002801,
+ GUEST_IA32_DEBUGCTL = 0x00002802,
+ GUEST_IA32_DEBUGCTL_HIGH = 0x00002803,
+ GUEST_IA32_PAT = 0x00002804,
+ GUEST_IA32_PAT_HIGH = 0x00002805,
+ GUEST_IA32_EFER = 0x00002806,
+ GUEST_IA32_EFER_HIGH = 0x00002807,
+ GUEST_IA32_PERF_GLOBAL_CTRL = 0x00002808,
+ GUEST_IA32_PERF_GLOBAL_CTRL_HIGH= 0x00002809,
+ GUEST_PDPTR0 = 0x0000280a,
+ GUEST_PDPTR0_HIGH = 0x0000280b,
+ GUEST_PDPTR1 = 0x0000280c,
+ GUEST_PDPTR1_HIGH = 0x0000280d,
+ GUEST_PDPTR2 = 0x0000280e,
+ GUEST_PDPTR2_HIGH = 0x0000280f,
+ GUEST_PDPTR3 = 0x00002810,
+ GUEST_PDPTR3_HIGH = 0x00002811,
+ GUEST_BNDCFGS = 0x00002812,
+ GUEST_BNDCFGS_HIGH = 0x00002813,
+ GUEST_IA32_RTIT_CTL = 0x00002814,
+ GUEST_IA32_RTIT_CTL_HIGH = 0x00002815,
+ HOST_IA32_PAT = 0x00002c00,
+ HOST_IA32_PAT_HIGH = 0x00002c01,
+ HOST_IA32_EFER = 0x00002c02,
+ HOST_IA32_EFER_HIGH = 0x00002c03,
+ HOST_IA32_PERF_GLOBAL_CTRL = 0x00002c04,
+ HOST_IA32_PERF_GLOBAL_CTRL_HIGH = 0x00002c05,
+ PIN_BASED_VM_EXEC_CONTROL = 0x00004000,
+ CPU_BASED_VM_EXEC_CONTROL = 0x00004002,
+ EXCEPTION_BITMAP = 0x00004004,
+ PAGE_FAULT_ERROR_CODE_MASK = 0x00004006,
+ PAGE_FAULT_ERROR_CODE_MATCH = 0x00004008,
+ CR3_TARGET_COUNT = 0x0000400a,
+ VM_EXIT_CONTROLS = 0x0000400c,
+ VM_EXIT_MSR_STORE_COUNT = 0x0000400e,
+ VM_EXIT_MSR_LOAD_COUNT = 0x00004010,
+ VM_ENTRY_CONTROLS = 0x00004012,
+ VM_ENTRY_MSR_LOAD_COUNT = 0x00004014,
+ VM_ENTRY_INTR_INFO_FIELD = 0x00004016,
+ VM_ENTRY_EXCEPTION_ERROR_CODE = 0x00004018,
+ VM_ENTRY_INSTRUCTION_LEN = 0x0000401a,
+ TPR_THRESHOLD = 0x0000401c,
+ SECONDARY_VM_EXEC_CONTROL = 0x0000401e,
+ PLE_GAP = 0x00004020,
+ PLE_WINDOW = 0x00004022,
+ NOTIFY_WINDOW = 0x00004024,
+ VM_INSTRUCTION_ERROR = 0x00004400,
+ VM_EXIT_REASON = 0x00004402,
+ VM_EXIT_INTR_INFO = 0x00004404,
+ VM_EXIT_INTR_ERROR_CODE = 0x00004406,
+ IDT_VECTORING_INFO_FIELD = 0x00004408,
+ IDT_VECTORING_ERROR_CODE = 0x0000440a,
+ VM_EXIT_INSTRUCTION_LEN = 0x0000440c,
+ VMX_INSTRUCTION_INFO = 0x0000440e,
+ GUEST_ES_LIMIT = 0x00004800,
+ GUEST_CS_LIMIT = 0x00004802,
+ GUEST_SS_LIMIT = 0x00004804,
+ GUEST_DS_LIMIT = 0x00004806,
+ GUEST_FS_LIMIT = 0x00004808,
+ GUEST_GS_LIMIT = 0x0000480a,
+ GUEST_LDTR_LIMIT = 0x0000480c,
+ GUEST_TR_LIMIT = 0x0000480e,
+ GUEST_GDTR_LIMIT = 0x00004810,
+ GUEST_IDTR_LIMIT = 0x00004812,
+ GUEST_ES_AR_BYTES = 0x00004814,
+ GUEST_CS_AR_BYTES = 0x00004816,
+ GUEST_SS_AR_BYTES = 0x00004818,
+ GUEST_DS_AR_BYTES = 0x0000481a,
+ GUEST_FS_AR_BYTES = 0x0000481c,
+ GUEST_GS_AR_BYTES = 0x0000481e,
+ GUEST_LDTR_AR_BYTES = 0x00004820,
+ GUEST_TR_AR_BYTES = 0x00004822,
+ GUEST_INTERRUPTIBILITY_INFO = 0x00004824,
+ GUEST_ACTIVITY_STATE = 0x00004826,
+ GUEST_SYSENTER_CS = 0x0000482A,
+ VMX_PREEMPTION_TIMER_VALUE = 0x0000482E,
+ HOST_IA32_SYSENTER_CS = 0x00004c00,
+ CR0_GUEST_HOST_MASK = 0x00006000,
+ CR4_GUEST_HOST_MASK = 0x00006002,
+ CR0_READ_SHADOW = 0x00006004,
+ CR4_READ_SHADOW = 0x00006006,
+ CR3_TARGET_VALUE0 = 0x00006008,
+ CR3_TARGET_VALUE1 = 0x0000600a,
+ CR3_TARGET_VALUE2 = 0x0000600c,
+ CR3_TARGET_VALUE3 = 0x0000600e,
+ EXIT_QUALIFICATION = 0x00006400,
+ GUEST_LINEAR_ADDRESS = 0x0000640a,
+ GUEST_CR0 = 0x00006800,
+ GUEST_CR3 = 0x00006802,
+ GUEST_CR4 = 0x00006804,
+ GUEST_ES_BASE = 0x00006806,
+ GUEST_CS_BASE = 0x00006808,
+ GUEST_SS_BASE = 0x0000680a,
+ GUEST_DS_BASE = 0x0000680c,
+ GUEST_FS_BASE = 0x0000680e,
+ GUEST_GS_BASE = 0x00006810,
+ GUEST_LDTR_BASE = 0x00006812,
+ GUEST_TR_BASE = 0x00006814,
+ GUEST_GDTR_BASE = 0x00006816,
+ GUEST_IDTR_BASE = 0x00006818,
+ GUEST_DR7 = 0x0000681a,
+ GUEST_RSP = 0x0000681c,
+ GUEST_RIP = 0x0000681e,
+ GUEST_RFLAGS = 0x00006820,
+ GUEST_PENDING_DBG_EXCEPTIONS = 0x00006822,
+ GUEST_SYSENTER_ESP = 0x00006824,
+ GUEST_SYSENTER_EIP = 0x00006826,
+ HOST_CR0 = 0x00006c00,
+ HOST_CR3 = 0x00006c02,
+ HOST_CR4 = 0x00006c04,
+ HOST_FS_BASE = 0x00006c06,
+ HOST_GS_BASE = 0x00006c08,
+ HOST_TR_BASE = 0x00006c0a,
+ HOST_GDTR_BASE = 0x00006c0c,
+ HOST_IDTR_BASE = 0x00006c0e,
+ HOST_IA32_SYSENTER_ESP = 0x00006c10,
+ HOST_IA32_SYSENTER_EIP = 0x00006c12,
+ HOST_RSP = 0x00006c14,
+ HOST_RIP = 0x00006c16,
+};
+
+/*
+ * Interruption-information format
+ */
+#define INTR_INFO_VECTOR_MASK 0xff /* 7:0 */
+#define INTR_INFO_INTR_TYPE_MASK 0x700 /* 10:8 */
+#define INTR_INFO_DELIVER_CODE_MASK 0x800 /* 11 */
+#define INTR_INFO_UNBLOCK_NMI 0x1000 /* 12 */
+#define INTR_INFO_VALID_MASK 0x80000000 /* 31 */
+#define INTR_INFO_RESVD_BITS_MASK 0x7ffff000
+
+#define VECTORING_INFO_VECTOR_MASK INTR_INFO_VECTOR_MASK
+#define VECTORING_INFO_TYPE_MASK INTR_INFO_INTR_TYPE_MASK
+#define VECTORING_INFO_DELIVER_CODE_MASK INTR_INFO_DELIVER_CODE_MASK
+#define VECTORING_INFO_VALID_MASK INTR_INFO_VALID_MASK
+
+#define INTR_TYPE_EXT_INTR (EVENT_TYPE_EXTINT << 8) /* external interrupt */
+#define INTR_TYPE_RESERVED (EVENT_TYPE_RESERVED << 8) /* reserved */
+#define INTR_TYPE_NMI_INTR (EVENT_TYPE_NMI << 8) /* NMI */
+#define INTR_TYPE_HARD_EXCEPTION (EVENT_TYPE_HWEXC << 8) /* processor exception */
+#define INTR_TYPE_SOFT_INTR (EVENT_TYPE_SWINT << 8) /* software interrupt */
+#define INTR_TYPE_PRIV_SW_EXCEPTION (EVENT_TYPE_PRIV_SWEXC << 8) /* ICE breakpoint */
+#define INTR_TYPE_SOFT_EXCEPTION (EVENT_TYPE_SWEXC << 8) /* software exception */
+#define INTR_TYPE_OTHER_EVENT (EVENT_TYPE_OTHER << 8) /* other event */
+
+/* GUEST_INTERRUPTIBILITY_INFO flags. */
+#define GUEST_INTR_STATE_STI 0x00000001
+#define GUEST_INTR_STATE_MOV_SS 0x00000002
+#define GUEST_INTR_STATE_SMI 0x00000004
+#define GUEST_INTR_STATE_NMI 0x00000008
+#define GUEST_INTR_STATE_ENCLAVE_INTR 0x00000010
+
+/* GUEST_ACTIVITY_STATE flags */
+#define GUEST_ACTIVITY_ACTIVE 0
+#define GUEST_ACTIVITY_HLT 1
+#define GUEST_ACTIVITY_SHUTDOWN 2
+#define GUEST_ACTIVITY_WAIT_SIPI 3
+
+/*
+ * Exit Qualifications for MOV for Control Register Access
+ */
+#define CONTROL_REG_ACCESS_NUM 0x7 /* 2:0, number of control reg.*/
+#define CONTROL_REG_ACCESS_TYPE 0x30 /* 5:4, access type */
+#define CONTROL_REG_ACCESS_REG 0xf00 /* 10:8, general purpose reg. */
+#define LMSW_SOURCE_DATA_SHIFT 16
+#define LMSW_SOURCE_DATA (0xFFFF << LMSW_SOURCE_DATA_SHIFT) /* 16:31 lmsw source */
+#define REG_EAX (0 << 8)
+#define REG_ECX (1 << 8)
+#define REG_EDX (2 << 8)
+#define REG_EBX (3 << 8)
+#define REG_ESP (4 << 8)
+#define REG_EBP (5 << 8)
+#define REG_ESI (6 << 8)
+#define REG_EDI (7 << 8)
+#define REG_R8 (8 << 8)
+#define REG_R9 (9 << 8)
+#define REG_R10 (10 << 8)
+#define REG_R11 (11 << 8)
+#define REG_R12 (12 << 8)
+#define REG_R13 (13 << 8)
+#define REG_R14 (14 << 8)
+#define REG_R15 (15 << 8)
+
+/*
+ * Exit Qualifications for MOV for Debug Register Access
+ */
+#define DEBUG_REG_ACCESS_NUM 0x7 /* 2:0, number of debug reg. */
+#define DEBUG_REG_ACCESS_TYPE 0x10 /* 4, direction of access */
+#define TYPE_MOV_TO_DR (0 << 4)
+#define TYPE_MOV_FROM_DR (1 << 4)
+#define DEBUG_REG_ACCESS_REG(eq) (((eq) >> 8) & 0xf) /* 11:8, general purpose reg. */
+
+
+/*
+ * Exit Qualifications for APIC-Access
+ */
+#define APIC_ACCESS_OFFSET 0xfff /* 11:0, offset within the APIC page */
+#define APIC_ACCESS_TYPE 0xf000 /* 15:12, access type */
+#define TYPE_LINEAR_APIC_INST_READ (0 << 12)
+#define TYPE_LINEAR_APIC_INST_WRITE (1 << 12)
+#define TYPE_LINEAR_APIC_INST_FETCH (2 << 12)
+#define TYPE_LINEAR_APIC_EVENT (3 << 12)
+#define TYPE_PHYSICAL_APIC_EVENT (10 << 12)
+#define TYPE_PHYSICAL_APIC_INST (15 << 12)
+
+/* segment AR in VMCS -- these are different from what LAR reports */
+#define VMX_SEGMENT_AR_L_MASK (1 << 13)
+
+#define VMX_AR_TYPE_ACCESSES_MASK 1
+#define VMX_AR_TYPE_READABLE_MASK (1 << 1)
+#define VMX_AR_TYPE_WRITEABLE_MASK (1 << 2)
+#define VMX_AR_TYPE_CODE_MASK (1 << 3)
+#define VMX_AR_TYPE_MASK 0x0f
+#define VMX_AR_TYPE_BUSY_64_TSS 11
+#define VMX_AR_TYPE_BUSY_32_TSS 11
+#define VMX_AR_TYPE_BUSY_16_TSS 3
+#define VMX_AR_TYPE_LDT 2
+
+#define VMX_AR_UNUSABLE_MASK (1 << 16)
+#define VMX_AR_S_MASK (1 << 4)
+#define VMX_AR_P_MASK (1 << 7)
+#define VMX_AR_L_MASK (1 << 13)
+#define VMX_AR_DB_MASK (1 << 14)
+#define VMX_AR_G_MASK (1 << 15)
+#define VMX_AR_DPL_SHIFT 5
+#define VMX_AR_DPL(ar) (((ar) >> VMX_AR_DPL_SHIFT) & 3)
+
+#define VMX_AR_RESERVD_MASK 0xfffe0f00
+
+#define TSS_PRIVATE_MEMSLOT (KVM_USER_MEM_SLOTS + 0)
+#define APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (KVM_USER_MEM_SLOTS + 1)
+#define IDENTITY_PAGETABLE_PRIVATE_MEMSLOT (KVM_USER_MEM_SLOTS + 2)
+
+#define VMX_NR_VPIDS (1 << 16)
+#define VMX_VPID_EXTENT_INDIVIDUAL_ADDR 0
+#define VMX_VPID_EXTENT_SINGLE_CONTEXT 1
+#define VMX_VPID_EXTENT_ALL_CONTEXT 2
+#define VMX_VPID_EXTENT_SINGLE_NON_GLOBAL 3
+
+#define VMX_EPT_EXTENT_CONTEXT 1
+#define VMX_EPT_EXTENT_GLOBAL 2
+#define VMX_EPT_EXTENT_SHIFT 24
+
+#define VMX_EPT_EXECUTE_ONLY_BIT (1ull)
+#define VMX_EPT_PAGE_WALK_4_BIT (1ull << 6)
+#define VMX_EPT_PAGE_WALK_5_BIT (1ull << 7)
+#define VMX_EPTP_UC_BIT (1ull << 8)
+#define VMX_EPTP_WB_BIT (1ull << 14)
+#define VMX_EPT_2MB_PAGE_BIT (1ull << 16)
+#define VMX_EPT_1GB_PAGE_BIT (1ull << 17)
+#define VMX_EPT_INVEPT_BIT (1ull << 20)
+#define VMX_EPT_AD_BIT (1ull << 21)
+#define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25)
+#define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26)
+
+#define VMX_VPID_INVVPID_BIT (1ull << 0) /* (32 - 32) */
+#define VMX_VPID_EXTENT_INDIVIDUAL_ADDR_BIT (1ull << 8) /* (40 - 32) */
+#define VMX_VPID_EXTENT_SINGLE_CONTEXT_BIT (1ull << 9) /* (41 - 32) */
+#define VMX_VPID_EXTENT_GLOBAL_CONTEXT_BIT (1ull << 10) /* (42 - 32) */
+#define VMX_VPID_EXTENT_SINGLE_NON_GLOBAL_BIT (1ull << 11) /* (43 - 32) */
+
+#define VMX_EPT_MT_EPTE_SHIFT 3
+#define VMX_EPTP_PWL_MASK 0x38ull
+#define VMX_EPTP_PWL_4 0x18ull
+#define VMX_EPTP_PWL_5 0x20ull
+#define VMX_EPTP_AD_ENABLE_BIT (1ull << 6)
+/* The EPTP memtype is encoded in bits 2:0, i.e. doesn't need to be shifted. */
+#define VMX_EPTP_MT_MASK 0x7ull
+#define VMX_EPTP_MT_WB X86_MEMTYPE_WB
+#define VMX_EPTP_MT_UC X86_MEMTYPE_UC
+#define VMX_EPT_READABLE_MASK 0x1ull
+#define VMX_EPT_WRITABLE_MASK 0x2ull
+#define VMX_EPT_EXECUTABLE_MASK 0x4ull
+#define VMX_EPT_IPAT_BIT (1ull << 6)
+#define VMX_EPT_ACCESS_BIT (1ull << 8)
+#define VMX_EPT_DIRTY_BIT (1ull << 9)
+#define VMX_EPT_SUPPRESS_VE_BIT (1ull << 63)
+#define VMX_EPT_RWX_MASK (VMX_EPT_READABLE_MASK | \
+ VMX_EPT_WRITABLE_MASK | \
+ VMX_EPT_EXECUTABLE_MASK)
+#define VMX_EPT_MT_MASK (7ull << VMX_EPT_MT_EPTE_SHIFT)
+
+static inline u8 vmx_eptp_page_walk_level(u64 eptp)
+{
+ u64 encoded_level = eptp & VMX_EPTP_PWL_MASK;
+
+ if (encoded_level == VMX_EPTP_PWL_5)
+ return 5;
+
+ /* @eptp must be pre-validated by the caller. */
+ WARN_ON_ONCE(encoded_level != VMX_EPTP_PWL_4);
+ return 4;
+}
+
+/* The mask to use to trigger an EPT Misconfiguration in order to track MMIO */
+#define VMX_EPT_MISCONFIG_WX_VALUE (VMX_EPT_WRITABLE_MASK | \
+ VMX_EPT_EXECUTABLE_MASK)
+
+#define VMX_EPT_IDENTITY_PAGETABLE_ADDR 0xfffbc000ul
+
+struct vmx_msr_entry {
+ u32 index;
+ u32 reserved;
+ u64 value;
+} __aligned(16);
+
+/*
+ * Exit Qualifications for entry failure during or after loading guest state
+ */
+enum vm_entry_failure_code {
+ ENTRY_FAIL_DEFAULT = 0,
+ ENTRY_FAIL_PDPTE = 2,
+ ENTRY_FAIL_NMI = 3,
+ ENTRY_FAIL_VMCS_LINK_PTR = 4,
+};
+
+/*
+ * Exit Qualifications for EPT Violations
+ */
+#define EPT_VIOLATION_ACC_READ BIT(0)
+#define EPT_VIOLATION_ACC_WRITE BIT(1)
+#define EPT_VIOLATION_ACC_INSTR BIT(2)
+#define EPT_VIOLATION_PROT_READ BIT(3)
+#define EPT_VIOLATION_PROT_WRITE BIT(4)
+#define EPT_VIOLATION_PROT_EXEC BIT(5)
+#define EPT_VIOLATION_EXEC_FOR_RING3_LIN BIT(6)
+#define EPT_VIOLATION_PROT_MASK (EPT_VIOLATION_PROT_READ | \
+ EPT_VIOLATION_PROT_WRITE | \
+ EPT_VIOLATION_PROT_EXEC)
+#define EPT_VIOLATION_GVA_IS_VALID BIT(7)
+#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
+
+#define EPT_VIOLATION_RWX_TO_PROT(__epte) (((__epte) & VMX_EPT_RWX_MASK) << 3)
+
+static_assert(EPT_VIOLATION_RWX_TO_PROT(VMX_EPT_RWX_MASK) ==
+ (EPT_VIOLATION_PROT_READ | EPT_VIOLATION_PROT_WRITE | EPT_VIOLATION_PROT_EXEC));
+
+/*
+ * Exit Qualifications for NOTIFY VM EXIT
+ */
+#define NOTIFY_VM_CONTEXT_INVALID BIT(0)
+
+/*
+ * VM-instruction error numbers
+ */
+enum vm_instruction_error_number {
+ VMXERR_VMCALL_IN_VMX_ROOT_OPERATION = 1,
+ VMXERR_VMCLEAR_INVALID_ADDRESS = 2,
+ VMXERR_VMCLEAR_VMXON_POINTER = 3,
+ VMXERR_VMLAUNCH_NONCLEAR_VMCS = 4,
+ VMXERR_VMRESUME_NONLAUNCHED_VMCS = 5,
+ VMXERR_VMRESUME_AFTER_VMXOFF = 6,
+ VMXERR_ENTRY_INVALID_CONTROL_FIELD = 7,
+ VMXERR_ENTRY_INVALID_HOST_STATE_FIELD = 8,
+ VMXERR_VMPTRLD_INVALID_ADDRESS = 9,
+ VMXERR_VMPTRLD_VMXON_POINTER = 10,
+ VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID = 11,
+ VMXERR_UNSUPPORTED_VMCS_COMPONENT = 12,
+ VMXERR_VMWRITE_READ_ONLY_VMCS_COMPONENT = 13,
+ VMXERR_VMXON_IN_VMX_ROOT_OPERATION = 15,
+ VMXERR_ENTRY_INVALID_EXECUTIVE_VMCS_POINTER = 16,
+ VMXERR_ENTRY_NONLAUNCHED_EXECUTIVE_VMCS = 17,
+ VMXERR_ENTRY_EXECUTIVE_VMCS_POINTER_NOT_VMXON_POINTER = 18,
+ VMXERR_VMCALL_NONCLEAR_VMCS = 19,
+ VMXERR_VMCALL_INVALID_VM_EXIT_CONTROL_FIELDS = 20,
+ VMXERR_VMCALL_INCORRECT_MSEG_REVISION_ID = 22,
+ VMXERR_VMXOFF_UNDER_DUAL_MONITOR_TREATMENT_OF_SMIS_AND_SMM = 23,
+ VMXERR_VMCALL_INVALID_SMM_MONITOR_FEATURES = 24,
+ VMXERR_ENTRY_INVALID_VM_EXECUTION_CONTROL_FIELDS_IN_EXECUTIVE_VMCS = 25,
+ VMXERR_ENTRY_EVENTS_BLOCKED_BY_MOV_SS = 26,
+ VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID = 28,
+};
+
+/*
+ * VM-instruction errors that can be encountered on VM-Enter, used to trace
+ * nested VM-Enter failures reported by hardware. Errors unique to VM-Enter
+ * from a SMI Transfer Monitor are not included as things have gone seriously
+ * sideways if we get one of those...
+ */
+#define VMX_VMENTER_INSTRUCTION_ERRORS \
+ { VMXERR_VMLAUNCH_NONCLEAR_VMCS, "VMLAUNCH_NONCLEAR_VMCS" }, \
+ { VMXERR_VMRESUME_NONLAUNCHED_VMCS, "VMRESUME_NONLAUNCHED_VMCS" }, \
+ { VMXERR_VMRESUME_AFTER_VMXOFF, "VMRESUME_AFTER_VMXOFF" }, \
+ { VMXERR_ENTRY_INVALID_CONTROL_FIELD, "VMENTRY_INVALID_CONTROL_FIELD" }, \
+ { VMXERR_ENTRY_INVALID_HOST_STATE_FIELD, "VMENTRY_INVALID_HOST_STATE_FIELD" }, \
+ { VMXERR_ENTRY_EVENTS_BLOCKED_BY_MOV_SS, "VMENTRY_EVENTS_BLOCKED_BY_MOV_SS" }
+
+enum vmx_l1d_flush_state {
+ VMENTER_L1D_FLUSH_AUTO,
+ VMENTER_L1D_FLUSH_NEVER,
+ VMENTER_L1D_FLUSH_COND,
+ VMENTER_L1D_FLUSH_ALWAYS,
+ VMENTER_L1D_FLUSH_EPT_DISABLED,
+ VMENTER_L1D_FLUSH_NOT_REQUIRED,
+};
+
+extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
+
+struct vmx_ve_information {
+ u32 exit_reason;
+ u32 delivery;
+ u64 exit_qualification;
+ u64 guest_linear_address;
+ u64 guest_physical_address;
+ u16 eptp_index;
+};
+
+#endif
--
2.43.0
* [kvm-unit-tests PATCH 02/17] lib: add linux trapnr.h clone from 6.16
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm, Jon Kohler
Add Linux's arch/x86/include/asm/trapnr.h from [1] into
lib/linux/trapnr.h, to allow definitions in vmx.h to resolve.
[1] 8df7193 ("x86/trapnr: Add event type macros to <asm/trapnr.h>")
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/linux/trapnr.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 lib/linux/trapnr.h
diff --git a/lib/linux/trapnr.h b/lib/linux/trapnr.h
new file mode 100644
index 00000000..8d1154cd
--- /dev/null
+++ b/lib/linux/trapnr.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_TRAPNR_H
+#define _ASM_X86_TRAPNR_H
+
+/*
+ * Event type codes used by FRED, Intel VT-x and AMD SVM
+ */
+#define EVENT_TYPE_EXTINT 0 // External interrupt
+#define EVENT_TYPE_RESERVED 1
+#define EVENT_TYPE_NMI 2 // NMI
+#define EVENT_TYPE_HWEXC 3 // Hardware originated traps, exceptions
+#define EVENT_TYPE_SWINT 4 // INT n
+#define EVENT_TYPE_PRIV_SWEXC 5 // INT1
+#define EVENT_TYPE_SWEXC 6 // INTO, INT3
+#define EVENT_TYPE_OTHER 7 // FRED SYSCALL/SYSENTER, VT-x MTF
+
+/* Interrupts/Exceptions */
+
+#define X86_TRAP_DE 0 /* Divide-by-zero */
+#define X86_TRAP_DB 1 /* Debug */
+#define X86_TRAP_NMI 2 /* Non-maskable Interrupt */
+#define X86_TRAP_BP 3 /* Breakpoint */
+#define X86_TRAP_OF 4 /* Overflow */
+#define X86_TRAP_BR 5 /* Bound Range Exceeded */
+#define X86_TRAP_UD 6 /* Invalid Opcode */
+#define X86_TRAP_NM 7 /* Device Not Available */
+#define X86_TRAP_DF 8 /* Double Fault */
+#define X86_TRAP_OLD_MF 9 /* Coprocessor Segment Overrun */
+#define X86_TRAP_TS 10 /* Invalid TSS */
+#define X86_TRAP_NP 11 /* Segment Not Present */
+#define X86_TRAP_SS 12 /* Stack Segment Fault */
+#define X86_TRAP_GP 13 /* General Protection Fault */
+#define X86_TRAP_PF 14 /* Page Fault */
+#define X86_TRAP_SPURIOUS 15 /* Spurious Interrupt */
+#define X86_TRAP_MF 16 /* x87 Floating-Point Exception */
+#define X86_TRAP_AC 17 /* Alignment Check */
+#define X86_TRAP_MC 18 /* Machine Check */
+#define X86_TRAP_XF 19 /* SIMD Floating-Point Exception */
+#define X86_TRAP_VE 20 /* Virtualization Exception */
+#define X86_TRAP_CP 21 /* Control Protection Exception */
+#define X86_TRAP_VC 29 /* VMM Communication Exception */
+#define X86_TRAP_IRET 32 /* IRET Exception */
+
+#endif
--
2.43.0
* [kvm-unit-tests PATCH 03/17] lib: add vmxfeatures.h clone from 6.16
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 01/17] lib: add linux vmx.h clone from 6.16 Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 02/17] lib: add linux trapnr.h " Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 04/17] lib: define __aligned() in compiler.h Jon Kohler
` (14 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm, Jon Kohler
Import Linux's arch/x86/include/asm/vmxfeatures.h from [1] as
lib/linux/vmxfeatures.h, so that definitions in vmx.h resolve.
[1] 78ce84b ("x86/cpufeatures: Flip the /proc/cpuinfo appearance logic")
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/linux/vmxfeatures.h | 93 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 93 insertions(+)
create mode 100644 lib/linux/vmxfeatures.h
diff --git a/lib/linux/vmxfeatures.h b/lib/linux/vmxfeatures.h
new file mode 100644
index 00000000..09b1d7e6
--- /dev/null
+++ b/lib/linux/vmxfeatures.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_VMXFEATURES_H
+#define _ASM_X86_VMXFEATURES_H
+
+/*
+ * Defines VMX CPU feature bits
+ */
+#define NVMXINTS 5 /* N 32-bit words worth of info */
+
+/*
+ * Note: If the comment begins with a quoted string, that string is used
+ * in /proc/cpuinfo instead of the macro name. Otherwise, this feature bit
+ * is not displayed in /proc/cpuinfo at all.
+ */
+
+/* Pin-Based VM-Execution Controls, EPT/VPID, APIC and VM-Functions, word 0 */
+#define VMX_FEATURE_INTR_EXITING ( 0*32+ 0) /* VM-Exit on vectored interrupts */
+#define VMX_FEATURE_NMI_EXITING ( 0*32+ 3) /* VM-Exit on NMIs */
+#define VMX_FEATURE_VIRTUAL_NMIS ( 0*32+ 5) /* "vnmi" NMI virtualization */
+#define VMX_FEATURE_PREEMPTION_TIMER ( 0*32+ 6) /* "preemption_timer" VMX Preemption Timer */
+#define VMX_FEATURE_POSTED_INTR ( 0*32+ 7) /* "posted_intr" Posted Interrupts */
+
+/* EPT/VPID features, scattered to bits 16-23 */
+#define VMX_FEATURE_INVVPID ( 0*32+ 16) /* "invvpid" INVVPID is supported */
+#define VMX_FEATURE_EPT_EXECUTE_ONLY ( 0*32+ 17) /* "ept_x_only" EPT entries can be execute only */
+#define VMX_FEATURE_EPT_AD ( 0*32+ 18) /* "ept_ad" EPT Accessed/Dirty bits */
+#define VMX_FEATURE_EPT_1GB ( 0*32+ 19) /* "ept_1gb" 1GB EPT pages */
+#define VMX_FEATURE_EPT_5LEVEL ( 0*32+ 20) /* "ept_5level" 5-level EPT paging */
+
+/* Aggregated APIC features 24-27 */
+#define VMX_FEATURE_FLEXPRIORITY ( 0*32+ 24) /* "flexpriority" TPR shadow + virt APIC */
+#define VMX_FEATURE_APICV ( 0*32+ 25) /* "apicv" TPR shadow + APIC reg virt + virt intr delivery + posted interrupts */
+
+/* VM-Functions, shifted to bits 28-31 */
+#define VMX_FEATURE_EPTP_SWITCHING ( 0*32+ 28) /* "eptp_switching" EPTP switching (in guest) */
+
+/* Primary Processor-Based VM-Execution Controls, word 1 */
+#define VMX_FEATURE_INTR_WINDOW_EXITING ( 1*32+ 2) /* VM-Exit if INTRs are unblocked in guest */
+#define VMX_FEATURE_USE_TSC_OFFSETTING ( 1*32+ 3) /* "tsc_offset" Offset hardware TSC when read in guest */
+#define VMX_FEATURE_HLT_EXITING ( 1*32+ 7) /* VM-Exit on HLT */
+#define VMX_FEATURE_INVLPG_EXITING ( 1*32+ 9) /* VM-Exit on INVLPG */
+#define VMX_FEATURE_MWAIT_EXITING ( 1*32+ 10) /* VM-Exit on MWAIT */
+#define VMX_FEATURE_RDPMC_EXITING ( 1*32+ 11) /* VM-Exit on RDPMC */
+#define VMX_FEATURE_RDTSC_EXITING ( 1*32+ 12) /* VM-Exit on RDTSC */
+#define VMX_FEATURE_CR3_LOAD_EXITING ( 1*32+ 15) /* VM-Exit on writes to CR3 */
+#define VMX_FEATURE_CR3_STORE_EXITING ( 1*32+ 16) /* VM-Exit on reads from CR3 */
+#define VMX_FEATURE_TERTIARY_CONTROLS ( 1*32+ 17) /* Enable Tertiary VM-Execution Controls */
+#define VMX_FEATURE_CR8_LOAD_EXITING ( 1*32+ 19) /* VM-Exit on writes to CR8 */
+#define VMX_FEATURE_CR8_STORE_EXITING ( 1*32+ 20) /* VM-Exit on reads from CR8 */
+#define VMX_FEATURE_VIRTUAL_TPR ( 1*32+ 21) /* "vtpr" TPR virtualization, a.k.a. TPR shadow */
+#define VMX_FEATURE_NMI_WINDOW_EXITING ( 1*32+ 22) /* VM-Exit if NMIs are unblocked in guest */
+#define VMX_FEATURE_MOV_DR_EXITING ( 1*32+ 23) /* VM-Exit on accesses to debug registers */
+#define VMX_FEATURE_UNCOND_IO_EXITING ( 1*32+ 24) /* VM-Exit on *all* IN{S} and OUT{S}*/
+#define VMX_FEATURE_USE_IO_BITMAPS ( 1*32+ 25) /* VM-Exit based on I/O port */
+#define VMX_FEATURE_MONITOR_TRAP_FLAG ( 1*32+ 27) /* "mtf" VMX single-step VM-Exits */
+#define VMX_FEATURE_USE_MSR_BITMAPS ( 1*32+ 28) /* VM-Exit based on MSR index */
+#define VMX_FEATURE_MONITOR_EXITING ( 1*32+ 29) /* VM-Exit on MONITOR (MWAIT's accomplice) */
+#define VMX_FEATURE_PAUSE_EXITING ( 1*32+ 30) /* VM-Exit on PAUSE (unconditionally) */
+#define VMX_FEATURE_SEC_CONTROLS ( 1*32+ 31) /* Enable Secondary VM-Execution Controls */
+
+/* Secondary Processor-Based VM-Execution Controls, word 2 */
+#define VMX_FEATURE_VIRT_APIC_ACCESSES ( 2*32+ 0) /* "vapic" Virtualize memory mapped APIC accesses */
+#define VMX_FEATURE_EPT ( 2*32+ 1) /* "ept" Extended Page Tables, a.k.a. Two-Dimensional Paging */
+#define VMX_FEATURE_DESC_EXITING ( 2*32+ 2) /* VM-Exit on {S,L}*DT instructions */
+#define VMX_FEATURE_RDTSCP ( 2*32+ 3) /* Enable RDTSCP in guest */
+#define VMX_FEATURE_VIRTUAL_X2APIC ( 2*32+ 4) /* Virtualize X2APIC for the guest */
+#define VMX_FEATURE_VPID ( 2*32+ 5) /* "vpid" Virtual Processor ID (TLB ASID modifier) */
+#define VMX_FEATURE_WBINVD_EXITING ( 2*32+ 6) /* VM-Exit on WBINVD */
+#define VMX_FEATURE_UNRESTRICTED_GUEST ( 2*32+ 7) /* "unrestricted_guest" Allow Big Real Mode and other "invalid" states */
+#define VMX_FEATURE_APIC_REGISTER_VIRT ( 2*32+ 8) /* "vapic_reg" Hardware emulation of reads to the virtual-APIC */
+#define VMX_FEATURE_VIRT_INTR_DELIVERY ( 2*32+ 9) /* "vid" Evaluation and delivery of pending virtual interrupts */
+#define VMX_FEATURE_PAUSE_LOOP_EXITING ( 2*32+ 10) /* "ple" Conditionally VM-Exit on PAUSE at CPL0 */
+#define VMX_FEATURE_RDRAND_EXITING ( 2*32+ 11) /* VM-Exit on RDRAND*/
+#define VMX_FEATURE_INVPCID ( 2*32+ 12) /* Enable INVPCID in guest */
+#define VMX_FEATURE_VMFUNC ( 2*32+ 13) /* Enable VM-Functions (leaf dependent) */
+#define VMX_FEATURE_SHADOW_VMCS ( 2*32+ 14) /* "shadow_vmcs" VMREAD/VMWRITE in guest can access shadow VMCS */
+#define VMX_FEATURE_ENCLS_EXITING ( 2*32+ 15) /* VM-Exit on ENCLS (leaf dependent) */
+#define VMX_FEATURE_RDSEED_EXITING ( 2*32+ 16) /* VM-Exit on RDSEED */
+#define VMX_FEATURE_PAGE_MOD_LOGGING ( 2*32+ 17) /* "pml" Log dirty pages into buffer */
+#define VMX_FEATURE_EPT_VIOLATION_VE ( 2*32+ 18) /* "ept_violation_ve" Conditionally reflect EPT violations as #VE exceptions */
+#define VMX_FEATURE_PT_CONCEAL_VMX ( 2*32+ 19) /* Suppress VMX indicators in Processor Trace */
+#define VMX_FEATURE_XSAVES ( 2*32+ 20) /* Enable XSAVES and XRSTORS in guest */
+#define VMX_FEATURE_MODE_BASED_EPT_EXEC ( 2*32+ 22) /* "ept_mode_based_exec" Enable separate EPT EXEC bits for supervisor vs. user */
+#define VMX_FEATURE_PT_USE_GPA ( 2*32+ 24) /* Processor Trace logs GPAs */
+#define VMX_FEATURE_TSC_SCALING ( 2*32+ 25) /* "tsc_scaling" Scale hardware TSC when read in guest */
+#define VMX_FEATURE_USR_WAIT_PAUSE ( 2*32+ 26) /* "usr_wait_pause" Enable TPAUSE, UMONITOR, UMWAIT in guest */
+#define VMX_FEATURE_ENCLV_EXITING ( 2*32+ 28) /* VM-Exit on ENCLV (leaf dependent) */
+#define VMX_FEATURE_BUS_LOCK_DETECTION ( 2*32+ 30) /* VM-Exit when bus lock caused */
+#define VMX_FEATURE_NOTIFY_VM_EXITING ( 2*32+ 31) /* "notify_vm_exiting" VM-Exit when no event windows after notify window */
+
+/* Tertiary Processor-Based VM-Execution Controls, word 3 */
+#define VMX_FEATURE_IPI_VIRT ( 3*32+ 4) /* "ipi_virt" Enable IPI virtualization */
+#endif /* _ASM_X86_VMXFEATURES_H */
--
2.43.0
* [kvm-unit-tests PATCH 04/17] lib: define __aligned() in compiler.h
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (2 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 03/17] lib: add vmxfeatures.h " Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 05/17] x86/vmx: basic integration for new vmx.h Jon Kohler
` (13 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm, Andrew Jones, Thomas Huth, Jon Kohler
Add __aligned() to compiler.h, copied from Linux 6.16's
include/linux/compiler_attributes.h to support __aligned(16) in vmx.h.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/linux/compiler.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/linux/compiler.h b/lib/linux/compiler.h
index 8e62aae0..5a1c66f4 100644
--- a/lib/linux/compiler.h
+++ b/lib/linux/compiler.h
@@ -54,6 +54,7 @@
#define __always_inline __inline __attribute__ ((__always_inline__))
#define noinline __attribute__((noinline))
+#define __aligned(x) __attribute__((__aligned__(x)))
#define __unused __attribute__((__unused__))
static __always_inline void __read_once_size(const volatile void *p, void *res, int size)
--
2.43.0
* [kvm-unit-tests PATCH 05/17] x86/vmx: basic integration for new vmx.h
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (3 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 04/17] lib: define __aligned() in compiler.h Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 06/17] x86/vmx: switch to new vmx.h EPT violation defs Jon Kohler
` (12 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm, Jon Kohler
Integrate Linux's vmx.h into vmx.c / vmx_tests.c, and do misc cleanup
to remove conflicting definitions from the original vmx.h.
Make minor modifications to the new vmx.h, updating its includes so it
fits into the KUT repository as a standalone header file.
Replace WARN_ON_ONCE in vmx_eptp_page_walk_level with report_info.
Rename struct vmcs_field to struct vmcs_field_struct to avoid a conflict
with the new vmx.h's enum vmcs_field.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/linux/vmx.h | 15 +++++----
x86/vmx.c | 16 +++++-----
x86/vmx.h | 82 -------------------------------------------------
x86/vmx_tests.c | 6 +---
4 files changed, 17 insertions(+), 102 deletions(-)
diff --git a/lib/linux/vmx.h b/lib/linux/vmx.h
index cca7d664..5973bd86 100644
--- a/lib/linux/vmx.h
+++ b/lib/linux/vmx.h
@@ -12,13 +12,10 @@
#define VMX_H
-#include <linux/bitops.h>
-#include <linux/bug.h>
-#include <linux/types.h>
-
-#include <uapi/asm/vmx.h>
-#include <asm/trapnr.h>
-#include <asm/vmxfeatures.h>
+#include "bitops.h"
+#include "libcflat.h"
+#include "trapnr.h"
+#include "util.h"
#define VMCS_CONTROL_BIT(x) BIT(VMX_FEATURE_##x & 0x1f)
@@ -552,7 +549,9 @@ static inline u8 vmx_eptp_page_walk_level(u64 eptp)
return 5;
/* @eptp must be pre-validated by the caller. */
- WARN_ON_ONCE(encoded_level != VMX_EPTP_PWL_4);
+ if (encoded_level != VMX_EPTP_PWL_4)
+ report_info("encoded_level %ld != VMX_EPTP_PWL_4", encoded_level);
+
return 4;
}
diff --git a/x86/vmx.c b/x86/vmx.c
index c803eaa6..e79781f2 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -28,6 +28,8 @@
* Author : Arthur Chunqi Li <yzt356@gmail.com>
*/
+#include <linux/vmx.h>
+
#include "libcflat.h"
#include "processor.h"
#include "alloc_page.h"
@@ -83,7 +85,7 @@ static volatile u32 stage;
static jmp_buf abort_target;
-struct vmcs_field {
+struct vmcs_field_struct {
u64 mask;
u64 encoding;
};
@@ -91,7 +93,7 @@ struct vmcs_field {
#define MASK(_bits) GENMASK_ULL((_bits) - 1, 0)
#define MASK_NATURAL MASK(sizeof(unsigned long) * 8)
-static struct vmcs_field vmcs_fields[] = {
+static struct vmcs_field_struct vmcs_fields[] = {
{ MASK(16), VPID },
{ MASK(16), PINV },
{ MASK(16), EPTP_IDX },
@@ -250,12 +252,12 @@ enum vmcs_field_type {
VMCS_FIELD_TYPES,
};
-static inline int vmcs_field_type(struct vmcs_field *f)
+static inline int vmcs_field_type(struct vmcs_field_struct *f)
{
return (f->encoding >> VMCS_FIELD_TYPE_SHIFT) & 0x3;
}
-static int vmcs_field_readonly(struct vmcs_field *f)
+static int vmcs_field_readonly(struct vmcs_field_struct *f)
{
u64 ia32_vmx_misc;
@@ -264,7 +266,7 @@ static int vmcs_field_readonly(struct vmcs_field *f)
(vmcs_field_type(f) == VMCS_FIELD_TYPE_READ_ONLY_DATA);
}
-static inline u64 vmcs_field_value(struct vmcs_field *f, u8 cookie)
+static inline u64 vmcs_field_value(struct vmcs_field_struct *f, u8 cookie)
{
u64 value;
@@ -276,12 +278,12 @@ static inline u64 vmcs_field_value(struct vmcs_field *f, u8 cookie)
return value & f->mask;
}
-static void set_vmcs_field(struct vmcs_field *f, u8 cookie)
+static void set_vmcs_field(struct vmcs_field_struct *f, u8 cookie)
{
vmcs_write(f->encoding, vmcs_field_value(f, cookie));
}
-static bool check_vmcs_field(struct vmcs_field *f, u8 cookie)
+static bool check_vmcs_field(struct vmcs_field_struct *f, u8 cookie)
{
u64 expected;
u64 actual;
diff --git a/x86/vmx.h b/x86/vmx.h
index 9cd90488..41346252 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -204,7 +204,6 @@ enum Encoding {
GUEST_SEL_LDTR = 0x080cul,
GUEST_SEL_TR = 0x080eul,
GUEST_INT_STATUS = 0x0810ul,
- GUEST_PML_INDEX = 0x0812ul,
/* 16-Bit Host State Fields */
HOST_SEL_ES = 0x0c00ul,
@@ -216,28 +215,17 @@ enum Encoding {
HOST_SEL_TR = 0x0c0cul,
/* 64-Bit Control Fields */
- IO_BITMAP_A = 0x2000ul,
- IO_BITMAP_B = 0x2002ul,
- MSR_BITMAP = 0x2004ul,
EXIT_MSR_ST_ADDR = 0x2006ul,
EXIT_MSR_LD_ADDR = 0x2008ul,
ENTER_MSR_LD_ADDR = 0x200aul,
VMCS_EXEC_PTR = 0x200cul,
- TSC_OFFSET = 0x2010ul,
TSC_OFFSET_HI = 0x2011ul,
APIC_VIRT_ADDR = 0x2012ul,
APIC_ACCS_ADDR = 0x2014ul,
- POSTED_INTR_DESC_ADDR = 0x2016ul,
EPTP = 0x201aul,
EPTP_HI = 0x201bul,
- VMREAD_BITMAP = 0x2026ul,
VMREAD_BITMAP_HI = 0x2027ul,
- VMWRITE_BITMAP = 0x2028ul,
VMWRITE_BITMAP_HI = 0x2029ul,
- EOI_EXIT_BITMAP0 = 0x201cul,
- EOI_EXIT_BITMAP1 = 0x201eul,
- EOI_EXIT_BITMAP2 = 0x2020ul,
- EOI_EXIT_BITMAP3 = 0x2022ul,
PMLADDR = 0x200eul,
PMLADDR_HI = 0x200ful,
@@ -254,7 +242,6 @@ enum Encoding {
GUEST_PAT = 0x2804ul,
GUEST_PERF_GLOBAL_CTRL = 0x2808ul,
GUEST_PDPTE = 0x280aul,
- GUEST_BNDCFGS = 0x2812ul,
/* 64-Bit Host State */
HOST_PAT = 0x2c00ul,
@@ -267,7 +254,6 @@ enum Encoding {
EXC_BITMAP = 0x4004ul,
PF_ERROR_MASK = 0x4006ul,
PF_ERROR_MATCH = 0x4008ul,
- CR3_TARGET_COUNT = 0x400aul,
EXI_CONTROLS = 0x400cul,
EXI_MSR_ST_CNT = 0x400eul,
EXI_MSR_LD_CNT = 0x4010ul,
@@ -276,7 +262,6 @@ enum Encoding {
ENT_INTR_INFO = 0x4016ul,
ENT_INTR_ERROR = 0x4018ul,
ENT_INST_LEN = 0x401aul,
- TPR_THRESHOLD = 0x401cul,
CPU_EXEC_CTRL1 = 0x401eul,
/* 32-Bit R/O Data Fields */
@@ -311,7 +296,6 @@ enum Encoding {
GUEST_INTR_STATE = 0x4824ul,
GUEST_ACTV_STATE = 0x4826ul,
GUEST_SMBASE = 0x4828ul,
- GUEST_SYSENTER_CS = 0x482aul,
PREEMPT_TIMER_VALUE = 0x482eul,
/* 32-Bit Host State Fields */
@@ -320,8 +304,6 @@ enum Encoding {
/* Natural-Width Control Fields */
CR0_MASK = 0x6000ul,
CR4_MASK = 0x6002ul,
- CR0_READ_SHADOW = 0x6004ul,
- CR4_READ_SHADOW = 0x6006ul,
CR3_TARGET_0 = 0x6008ul,
CR3_TARGET_1 = 0x600aul,
CR3_TARGET_2 = 0x600cul,
@@ -333,12 +315,8 @@ enum Encoding {
IO_RSI = 0x6404ul,
IO_RDI = 0x6406ul,
IO_RIP = 0x6408ul,
- GUEST_LINEAR_ADDRESS = 0x640aul,
/* Natural-Width Guest State Fields */
- GUEST_CR0 = 0x6800ul,
- GUEST_CR3 = 0x6802ul,
- GUEST_CR4 = 0x6804ul,
GUEST_BASE_ES = 0x6806ul,
GUEST_BASE_CS = 0x6808ul,
GUEST_BASE_SS = 0x680aul,
@@ -349,18 +327,9 @@ enum Encoding {
GUEST_BASE_TR = 0x6814ul,
GUEST_BASE_GDTR = 0x6816ul,
GUEST_BASE_IDTR = 0x6818ul,
- GUEST_DR7 = 0x681aul,
- GUEST_RSP = 0x681cul,
- GUEST_RIP = 0x681eul,
- GUEST_RFLAGS = 0x6820ul,
GUEST_PENDING_DEBUG = 0x6822ul,
- GUEST_SYSENTER_ESP = 0x6824ul,
- GUEST_SYSENTER_EIP = 0x6826ul,
/* Natural-Width Host State Fields */
- HOST_CR0 = 0x6c00ul,
- HOST_CR3 = 0x6c02ul,
- HOST_CR4 = 0x6c04ul,
HOST_BASE_FS = 0x6c06ul,
HOST_BASE_GS = 0x6c08ul,
HOST_BASE_TR = 0x6c0aul,
@@ -368,8 +337,6 @@ enum Encoding {
HOST_BASE_IDTR = 0x6c0eul,
HOST_SYSENTER_ESP = 0x6c10ul,
HOST_SYSENTER_EIP = 0x6c12ul,
- HOST_RSP = 0x6c14ul,
- HOST_RIP = 0x6c16ul
};
#define VMX_ENTRY_FAILURE (1ul << 31)
@@ -528,61 +495,12 @@ enum Intr_type {
#define INTR_INFO_INTR_TYPE_SHIFT 8
-#define INTR_TYPE_EXT_INTR (0 << 8) /* external interrupt */
-#define INTR_TYPE_RESERVED (1 << 8) /* reserved */
-#define INTR_TYPE_NMI_INTR (2 << 8) /* NMI */
-#define INTR_TYPE_HARD_EXCEPTION (3 << 8) /* processor exception */
-#define INTR_TYPE_SOFT_INTR (4 << 8) /* software interrupt */
-#define INTR_TYPE_PRIV_SW_EXCEPTION (5 << 8) /* priv. software exception */
-#define INTR_TYPE_SOFT_EXCEPTION (6 << 8) /* software exception */
-#define INTR_TYPE_OTHER_EVENT (7 << 8) /* other event */
-
/*
* Guest interruptibility state
*/
-#define GUEST_INTR_STATE_STI (1 << 0)
#define GUEST_INTR_STATE_MOVSS (1 << 1)
-#define GUEST_INTR_STATE_SMI (1 << 2)
-#define GUEST_INTR_STATE_NMI (1 << 3)
#define GUEST_INTR_STATE_ENCLAVE (1 << 4)
-/*
- * VM-instruction error numbers
- */
-enum vm_instruction_error_number {
- VMXERR_VMCALL_IN_VMX_ROOT_OPERATION = 1,
- VMXERR_VMCLEAR_INVALID_ADDRESS = 2,
- VMXERR_VMCLEAR_VMXON_POINTER = 3,
- VMXERR_VMLAUNCH_NONCLEAR_VMCS = 4,
- VMXERR_VMRESUME_NONLAUNCHED_VMCS = 5,
- VMXERR_VMRESUME_AFTER_VMXOFF = 6,
- VMXERR_ENTRY_INVALID_CONTROL_FIELD = 7,
- VMXERR_ENTRY_INVALID_HOST_STATE_FIELD = 8,
- VMXERR_VMPTRLD_INVALID_ADDRESS = 9,
- VMXERR_VMPTRLD_VMXON_POINTER = 10,
- VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID = 11,
- VMXERR_UNSUPPORTED_VMCS_COMPONENT = 12,
- VMXERR_VMWRITE_READ_ONLY_VMCS_COMPONENT = 13,
- VMXERR_VMXON_IN_VMX_ROOT_OPERATION = 15,
- VMXERR_ENTRY_INVALID_EXECUTIVE_VMCS_POINTER = 16,
- VMXERR_ENTRY_NONLAUNCHED_EXECUTIVE_VMCS = 17,
- VMXERR_ENTRY_EXECUTIVE_VMCS_POINTER_NOT_VMXON_POINTER = 18,
- VMXERR_VMCALL_NONCLEAR_VMCS = 19,
- VMXERR_VMCALL_INVALID_VM_EXIT_CONTROL_FIELDS = 20,
- VMXERR_VMCALL_INCORRECT_MSEG_REVISION_ID = 22,
- VMXERR_VMXOFF_UNDER_DUAL_MONITOR_TREATMENT_OF_SMIS_AND_SMM = 23,
- VMXERR_VMCALL_INVALID_SMM_MONITOR_FEATURES = 24,
- VMXERR_ENTRY_INVALID_VM_EXECUTION_CONTROL_FIELDS_IN_EXECUTIVE_VMCS = 25,
- VMXERR_ENTRY_EVENTS_BLOCKED_BY_MOV_SS = 26,
- VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID = 28,
-};
-
-enum vm_entry_failure_code {
- ENTRY_FAIL_DEFAULT = 0,
- ENTRY_FAIL_PDPTE = 2,
- ENTRY_FAIL_NMI = 3,
- ENTRY_FAIL_VMCS_LINK_PTR = 4,
-};
#define SAVE_GPR \
"xchg %rax, regs\n\t" \
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 0b3cfe50..dbcb6cae 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -5,6 +5,7 @@
*/
#include <asm/debugreg.h>
+#include <linux/vmx.h>
#include "vmx.h"
#include "msr.h"
@@ -1962,11 +1963,6 @@ static int dbgctls_exit_handler(union exit_reason exit_reason)
return VMX_TEST_VMEXIT;
}
-struct vmx_msr_entry {
- u32 index;
- u32 reserved;
- u64 value;
-} __attribute__((packed));
#define MSR_MAGIC 0x31415926
struct vmx_msr_entry *exit_msr_store, *entry_msr_load, *exit_msr_load;
--
2.43.0
* [kvm-unit-tests PATCH 06/17] x86/vmx: switch to new vmx.h EPT violation defs
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (4 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 05/17] x86/vmx: basic integration for new vmx.h Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 07/17] x86/vmx: switch to new vmx.h EPT RWX defs Jon Kohler
` (11 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's EPT violation definitions, making it easier
to move between the two code bases.
Fix a few small formatting issues along the way.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.h | 11 -----
x86/vmx_tests.c | 127 +++++++++++++++++++++++++++++-------------------
2 files changed, 77 insertions(+), 61 deletions(-)
diff --git a/x86/vmx.h b/x86/vmx.h
index 41346252..9b076b0c 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -618,17 +618,6 @@ enum Intr_type {
#define EPT_ADDR_MASK GENMASK_ULL(51, 12)
#define PAGE_MASK_2M (~(PAGE_SIZE_2M-1))
-#define EPT_VLT_RD (1ull << 0)
-#define EPT_VLT_WR (1ull << 1)
-#define EPT_VLT_FETCH (1ull << 2)
-#define EPT_VLT_PERM_RD (1ull << 3)
-#define EPT_VLT_PERM_WR (1ull << 4)
-#define EPT_VLT_PERM_EX (1ull << 5)
-#define EPT_VLT_PERM_USER_EX (1ull << 6)
-#define EPT_VLT_PERMS (EPT_VLT_PERM_RD | EPT_VLT_PERM_WR | \
- EPT_VLT_PERM_EX)
-#define EPT_VLT_LADDR_VLD (1ull << 7)
-#define EPT_VLT_PADDR (1ull << 8)
#define EPT_VLT_GUEST_USER (1ull << 9)
#define EPT_VLT_GUEST_RW (1ull << 10)
#define EPT_VLT_GUEST_EX (1ull << 11)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index dbcb6cae..a09b687f 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1443,8 +1443,9 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
check_ept_ad(pml4, guest_cr3, (unsigned long)data_page1, 0,
have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0);
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page1);
- if (exit_qual == (EPT_VLT_WR | EPT_VLT_LADDR_VLD |
- EPT_VLT_PADDR))
+ if (exit_qual == (EPT_VIOLATION_ACC_WRITE |
+ EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED))
vmx_inc_test_stage();
set_ept_pte(pml4, (unsigned long)data_page1,
1, data_page1_pte | (EPT_PRESENT));
@@ -1454,16 +1455,16 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
check_ept_ad(pml4, guest_cr3, (unsigned long)data_page1, 0,
have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0);
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page1);
- if (exit_qual == (EPT_VLT_RD |
- (have_ad ? EPT_VLT_WR : 0) |
- EPT_VLT_LADDR_VLD))
+ if (exit_qual == (EPT_VIOLATION_ACC_READ |
+ (have_ad ? EPT_VIOLATION_ACC_WRITE : 0) |
+ EPT_VIOLATION_GVA_IS_VALID))
vmx_inc_test_stage();
set_ept_pte(pml4, guest_pte_addr, 2,
data_page1_pte_pte | (EPT_PRESENT));
invept(INVEPT_SINGLE, eptp);
break;
case 5:
- if (exit_qual & EPT_VLT_RD)
+ if (exit_qual & EPT_VIOLATION_ACC_READ)
vmx_inc_test_stage();
TEST_ASSERT(get_ept_pte(pml4, (unsigned long)pci_physaddr,
1, &memaddr_pte));
@@ -1471,7 +1472,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
invept(INVEPT_SINGLE, eptp);
break;
case 6:
- if (exit_qual & EPT_VLT_WR)
+ if (exit_qual & EPT_VIOLATION_ACC_WRITE)
vmx_inc_test_stage();
TEST_ASSERT(get_ept_pte(pml4, (unsigned long)pci_physaddr,
1, &memaddr_pte));
@@ -2283,14 +2284,14 @@ do { \
(expected & flag) ? "" : "un"); \
} while (0)
- DIAGNOSE(EPT_VLT_RD);
- DIAGNOSE(EPT_VLT_WR);
- DIAGNOSE(EPT_VLT_FETCH);
- DIAGNOSE(EPT_VLT_PERM_RD);
- DIAGNOSE(EPT_VLT_PERM_WR);
- DIAGNOSE(EPT_VLT_PERM_EX);
- DIAGNOSE(EPT_VLT_LADDR_VLD);
- DIAGNOSE(EPT_VLT_PADDR);
+ DIAGNOSE(EPT_VIOLATION_ACC_READ);
+ DIAGNOSE(EPT_VIOLATION_ACC_WRITE);
+ DIAGNOSE(EPT_VIOLATION_ACC_INSTR);
+ DIAGNOSE(EPT_VIOLATION_PROT_READ);
+ DIAGNOSE(EPT_VIOLATION_PROT_WRITE);
+ DIAGNOSE(EPT_VIOLATION_PROT_EXEC);
+ DIAGNOSE(EPT_VIOLATION_GVA_IS_VALID);
+ DIAGNOSE(EPT_VIOLATION_GVA_TRANSLATED);
#undef DIAGNOSE
}
@@ -2357,7 +2358,7 @@ static void do_ept_violation(bool leaf, enum ept_access_op op,
/* Mask undefined bits (which may later be defined in certain cases). */
qual &= ~(EPT_VLT_GUEST_USER | EPT_VLT_GUEST_RW | EPT_VLT_GUEST_EX |
- EPT_VLT_PERM_USER_EX);
+ EPT_VIOLATION_EXEC_FOR_RING3_LIN);
diagnose_ept_violation_qual(expected_qual, qual);
TEST_EXPECT_EQ(expected_qual, qual);
@@ -2419,18 +2420,20 @@ static void ept_access_violation(unsigned long access, enum ept_access_op op,
u64 expected_qual)
{
ept_violation(EPT_PRESENT, access, op,
- expected_qual | EPT_VLT_LADDR_VLD | EPT_VLT_PADDR);
+ expected_qual | EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED);
}
/*
* For translations that don't involve a GVA, that is physical address (paddr)
- * accesses, EPT violations don't set the flag EPT_VLT_PADDR. For a typical
- * guest memory access, the hardware does GVA -> GPA -> HPA. However, certain
- * translations don't involve GVAs, such as when the hardware does the guest
- * page table walk. For example, in translating GVA_1 -> GPA_1, the guest MMU
- * might try to set an A bit on a guest PTE. If the GPA_2 that the PTE resides
- * on isn't present in the EPT, then the EPT violation will be for GPA_2 and
- * the EPT_VLT_PADDR bit will be clear in the exit qualification.
+ * accesses, EPT violations don't set the flag EPT_VIOLATION_GVA_TRANSLATED.
+ * For a typical guest memory access, the hardware does GVA -> GPA -> HPA.
+ * However, certain translations don't involve GVAs, such as when the hardware
+ * does the guest page table walk. For example, in translating GVA_1 -> GPA_1,
+ * the guest MMU might try to set an A bit on a guest PTE. If the GPA_2 that
+ * the PTE resides on isn't present in the EPT, then the EPT violation will be
+ * for GPA_2 and the EPT_VIOLATION_GVA_TRANSLATED bit will be clear in the exit
+ * qualification.
*
* Note that paddr violations can also be triggered by loading PAE page tables
* with wonky addresses. We don't test that yet.
@@ -2449,7 +2452,7 @@ static void ept_access_violation(unsigned long access, enum ept_access_op op,
* Is a violation expected during the paddr access?
*
* @expected_qual Expected qualification for the EPT violation.
- * EPT_VLT_PADDR should be clear.
+ * EPT_VIOLATION_GVA_TRANSLATED should be clear.
*/
static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
enum ept_access_op op, bool expect_violation,
@@ -2492,7 +2495,7 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
if (expect_violation) {
do_ept_violation(/*leaf=*/true, op,
- expected_qual | EPT_VLT_LADDR_VLD, gpa);
+ expected_qual | EPT_VIOLATION_GVA_IS_VALID, gpa);
ept_untwiddle(gpa, /*level=*/1, orig_epte);
do_ept_access_op(op);
} else {
@@ -2611,9 +2614,10 @@ static void ept_misconfig_at_level_mkhuge_op(bool mkhuge, int level,
/*
* broken:
* According to description of exit qual for EPT violation,
- * EPT_VLT_LADDR_VLD indicates if GUEST_LINEAR_ADDRESS is valid.
+ * EPT_VIOLATION_GVA_IS_VALID indicates if GUEST_LINEAR_ADDRESS is
+ * valid.
* However, I can't find anything that says GUEST_LINEAR_ADDRESS ought
- * to be set for msiconfig.
+ * to be set for misconfig.
*/
TEST_EXPECT_EQ(vmcs_read(GUEST_LINEAR_ADDRESS),
(unsigned long) (
@@ -2664,7 +2668,9 @@ static void ept_reserved_bit_at_level_nohuge(int level, int bit)
/* Making the entry non-present turns reserved bits into ignored. */
ept_violation_at_level(level, EPT_PRESENT, 1ul << bit, OP_READ,
- EPT_VLT_RD | EPT_VLT_LADDR_VLD | EPT_VLT_PADDR);
+ EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED);
}
static void ept_reserved_bit_at_level_huge(int level, int bit)
@@ -2674,7 +2680,9 @@ static void ept_reserved_bit_at_level_huge(int level, int bit)
/* Making the entry non-present turns reserved bits into ignored. */
ept_violation_at_level(level, EPT_PRESENT, 1ul << bit, OP_READ,
- EPT_VLT_RD | EPT_VLT_LADDR_VLD | EPT_VLT_PADDR);
+ EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED);
}
static void ept_reserved_bit_at_level(int level, int bit)
@@ -2684,7 +2692,9 @@ static void ept_reserved_bit_at_level(int level, int bit)
/* Making the entry non-present turns reserved bits into ignored. */
ept_violation_at_level(level, EPT_PRESENT, 1ul << bit, OP_READ,
- EPT_VLT_RD | EPT_VLT_LADDR_VLD | EPT_VLT_PADDR);
+ EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED);
}
static void ept_reserved_bit(int bit)
@@ -2787,9 +2797,9 @@ static void ept_access_test_not_present(void)
{
ept_access_test_setup();
/* --- */
- ept_access_violation(0, OP_READ, EPT_VLT_RD);
- ept_access_violation(0, OP_WRITE, EPT_VLT_WR);
- ept_access_violation(0, OP_EXEC, EPT_VLT_FETCH);
+ ept_access_violation(0, OP_READ, EPT_VIOLATION_ACC_READ);
+ ept_access_violation(0, OP_WRITE, EPT_VIOLATION_ACC_WRITE);
+ ept_access_violation(0, OP_EXEC, EPT_VIOLATION_ACC_INSTR);
}
static void ept_access_test_read_only(void)
@@ -2798,8 +2808,10 @@ static void ept_access_test_read_only(void)
/* r-- */
ept_access_allowed(EPT_RA, OP_READ);
- ept_access_violation(EPT_RA, OP_WRITE, EPT_VLT_WR | EPT_VLT_PERM_RD);
- ept_access_violation(EPT_RA, OP_EXEC, EPT_VLT_FETCH | EPT_VLT_PERM_RD);
+ ept_access_violation(EPT_RA, OP_WRITE, EPT_VIOLATION_ACC_WRITE |
+ EPT_VIOLATION_PROT_READ);
+ ept_access_violation(EPT_RA, OP_EXEC, EPT_VIOLATION_ACC_INSTR |
+ EPT_VIOLATION_PROT_READ);
}
static void ept_access_test_write_only(void)
@@ -2816,7 +2828,9 @@ static void ept_access_test_read_write(void)
ept_access_allowed(EPT_RA | EPT_WA, OP_READ);
ept_access_allowed(EPT_RA | EPT_WA, OP_WRITE);
ept_access_violation(EPT_RA | EPT_WA, OP_EXEC,
- EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_WR);
+ EPT_VIOLATION_ACC_INSTR |
+ EPT_VIOLATION_PROT_READ |
+ EPT_VIOLATION_PROT_WRITE);
}
@@ -2826,9 +2840,11 @@ static void ept_access_test_execute_only(void)
/* --x */
if (ept_execute_only_supported()) {
ept_access_violation(EPT_EA, OP_READ,
- EPT_VLT_RD | EPT_VLT_PERM_EX);
+ EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_PROT_EXEC);
ept_access_violation(EPT_EA, OP_WRITE,
- EPT_VLT_WR | EPT_VLT_PERM_EX);
+ EPT_VIOLATION_ACC_WRITE |
+ EPT_VIOLATION_PROT_EXEC);
ept_access_allowed(EPT_EA, OP_EXEC);
} else {
ept_access_misconfig(EPT_EA);
@@ -2841,7 +2857,9 @@ static void ept_access_test_read_execute(void)
/* r-x */
ept_access_allowed(EPT_RA | EPT_EA, OP_READ);
ept_access_violation(EPT_RA | EPT_EA, OP_WRITE,
- EPT_VLT_WR | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
+ EPT_VIOLATION_ACC_WRITE |
+ EPT_VIOLATION_PROT_READ |
+ EPT_VIOLATION_PROT_EXEC);
ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC);
}
@@ -2936,14 +2954,17 @@ static void ept_access_test_paddr_not_present_ad_disabled(void)
ept_access_test_setup();
ept_disable_ad_bits();
- ept_access_violation_paddr(0, PT_AD_MASK, OP_READ, EPT_VLT_RD);
- ept_access_violation_paddr(0, PT_AD_MASK, OP_WRITE, EPT_VLT_RD);
- ept_access_violation_paddr(0, PT_AD_MASK, OP_EXEC, EPT_VLT_RD);
+ ept_access_violation_paddr(0, PT_AD_MASK, OP_READ,
+ EPT_VIOLATION_ACC_READ);
+ ept_access_violation_paddr(0, PT_AD_MASK, OP_WRITE,
+ EPT_VIOLATION_ACC_READ);
+ ept_access_violation_paddr(0, PT_AD_MASK, OP_EXEC,
+ EPT_VIOLATION_ACC_READ);
}
static void ept_access_test_paddr_not_present_ad_enabled(void)
{
- u64 qual = EPT_VLT_RD | EPT_VLT_WR;
+ u64 qual = EPT_VIOLATION_ACC_READ | EPT_VIOLATION_ACC_WRITE;
ept_access_test_setup();
ept_enable_ad_bits_or_skip_test();
@@ -2961,7 +2982,8 @@ static void ept_access_test_paddr_read_only_ad_disabled(void)
* translation of the GPA to host physical address) a read+write
* if the A/D bits have to be set.
*/
- u64 qual = EPT_VLT_WR | EPT_VLT_RD | EPT_VLT_PERM_RD;
+ u64 qual = EPT_VIOLATION_ACC_WRITE | EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_PROT_READ;
ept_access_test_setup();
ept_disable_ad_bits();
@@ -2987,7 +3009,8 @@ static void ept_access_test_paddr_read_only_ad_enabled(void)
* structures are considered writes as far as EPT translation
* is concerned.
*/
- u64 qual = EPT_VLT_WR | EPT_VLT_RD | EPT_VLT_PERM_RD;
+ u64 qual = EPT_VIOLATION_ACC_WRITE | EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_PROT_READ;
ept_access_test_setup();
ept_enable_ad_bits_or_skip_test();
@@ -3029,7 +3052,8 @@ static void ept_access_test_paddr_read_execute_ad_disabled(void)
* translation of the GPA to host physical address) a read+write
* if the A/D bits have to be set.
*/
- u64 qual = EPT_VLT_WR | EPT_VLT_RD | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX;
+ u64 qual = EPT_VIOLATION_ACC_WRITE | EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_PROT_READ | EPT_VIOLATION_PROT_EXEC;
ept_access_test_setup();
ept_disable_ad_bits();
@@ -3055,7 +3079,8 @@ static void ept_access_test_paddr_read_execute_ad_enabled(void)
* structures are considered writes as far as EPT translation
* is concerned.
*/
- u64 qual = EPT_VLT_WR | EPT_VLT_RD | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX;
+ u64 qual = EPT_VIOLATION_ACC_WRITE | EPT_VIOLATION_ACC_READ |
+ EPT_VIOLATION_PROT_READ | EPT_VIOLATION_PROT_EXEC;
ept_access_test_setup();
ept_enable_ad_bits_or_skip_test();
@@ -3089,8 +3114,10 @@ static void ept_access_test_force_2m_page(void)
TEST_ASSERT_EQ(ept_2m_supported(), true);
ept_allowed_at_level_mkhuge(true, 2, 0, 0, OP_READ);
ept_violation_at_level_mkhuge(true, 2, EPT_PRESENT, EPT_RA, OP_WRITE,
- EPT_VLT_WR | EPT_VLT_PERM_RD |
- EPT_VLT_LADDR_VLD | EPT_VLT_PADDR);
+ EPT_VIOLATION_ACC_WRITE |
+ EPT_VIOLATION_PROT_READ |
+ EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED);
ept_misconfig_at_level_mkhuge(true, 2, EPT_PRESENT, EPT_WA);
}
--
2.43.0
* [kvm-unit-tests PATCH 07/17] x86/vmx: switch to new vmx.h EPT RWX defs
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 06/17] x86/vmx: switch to new vmx.h EPT violation defs Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 08/17] x86/vmx: switch to new vmx.h EPT access and dirty defs Jon Kohler
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's EPT RWX definitions, which makes it easier
to cross-reference between this code base and the kernel.
Fix a few small formatting issues along the way.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 14 +--
x86/vmx.h | 4 -
x86/vmx_tests.c | 245 +++++++++++++++++++++++++++++++-----------------
3 files changed, 165 insertions(+), 98 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index e79781f2..6b7dca34 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -823,7 +823,7 @@ static void split_large_ept_entry(unsigned long *ptep, int level)
int i;
pte = *ptep;
- assert(pte & EPT_PRESENT);
+ assert(pte & VMX_EPT_RWX_MASK);
assert(pte & EPT_LARGE_PAGE);
assert(level == 2 || level == 3);
@@ -870,15 +870,17 @@ void install_ept_entry(unsigned long *pml4,
for (level = EPT_PAGE_LEVEL; level > pte_level; --level) {
offset = (guest_addr >> EPT_LEVEL_SHIFT(level))
& EPT_PGDIR_MASK;
- if (!(pt[offset] & (EPT_PRESENT))) {
+ if (!(pt[offset] & (VMX_EPT_RWX_MASK))) {
unsigned long *new_pt = pt_page;
if (!new_pt)
new_pt = alloc_page();
else
pt_page = 0;
memset(new_pt, 0, PAGE_SIZE);
- pt[offset] = virt_to_phys(new_pt)
- | EPT_RA | EPT_WA | EPT_EA;
+ pt[offset] = virt_to_phys(new_pt) |
+ VMX_EPT_READABLE_MASK |
+ VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK;
} else if (pt[offset] & EPT_LARGE_PAGE)
split_large_ept_entry(&pt[offset], level);
pt = phys_to_virt(pt[offset] & EPT_ADDR_MASK);
@@ -965,7 +967,7 @@ bool get_ept_pte(unsigned long *pml4, unsigned long guest_addr, int level,
break;
if (l < 4 && (iter_pte & EPT_LARGE_PAGE))
return false;
- if (!(iter_pte & (EPT_PRESENT)))
+ if (!(iter_pte & (VMX_EPT_RWX_MASK)))
return false;
pt = (unsigned long *)(iter_pte & EPT_ADDR_MASK);
}
@@ -1089,7 +1091,7 @@ void set_ept_pte(unsigned long *pml4, unsigned long guest_addr,
offset = (guest_addr >> EPT_LEVEL_SHIFT(l)) & EPT_PGDIR_MASK;
if (l == level)
break;
- assert(pt[offset] & EPT_PRESENT);
+ assert(pt[offset] & VMX_EPT_RWX_MASK);
pt = (unsigned long *)(pt[offset] & EPT_ADDR_MASK);
}
offset = (guest_addr >> EPT_LEVEL_SHIFT(l)) & EPT_PGDIR_MASK;
diff --git a/x86/vmx.h b/x86/vmx.h
index 9b076b0c..3f792d4a 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -578,10 +578,6 @@ enum Intr_type {
#define EPT_MEM_TYPE_WP 5ul
#define EPT_MEM_TYPE_WB 6ul
-#define EPT_RA 1ul
-#define EPT_WA 2ul
-#define EPT_EA 4ul
-#define EPT_PRESENT (EPT_RA | EPT_WA | EPT_EA)
#define EPT_ACCESS_FLAG (1ul << 8)
#define EPT_DIRTY_FLAG (1ul << 9)
#define EPT_LARGE_PAGE (1ul << 7)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index a09b687f..eda9e88a 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1101,7 +1101,9 @@ static int setup_ept(bool enable_ad)
*/
setup_ept_range(pml4, 0, end_of_memory, 0,
!enable_ad && ept_2m_supported(),
- EPT_WA | EPT_RA | EPT_EA);
+ VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK);
return 0;
}
@@ -1180,7 +1182,9 @@ static int ept_init_common(bool have_ad)
*((u32 *)data_page1) = MAGIC_VAL_1;
*((u32 *)data_page2) = MAGIC_VAL_2;
install_ept(pml4, (unsigned long)data_page1, (unsigned long)data_page2,
- EPT_RA | EPT_WA | EPT_EA);
+ VMX_EPT_READABLE_MASK |
+ VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK);
apic_version = apic_read(APIC_LVR);
@@ -1360,29 +1364,33 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
*((u32 *)data_page2) == MAGIC_VAL_2) {
vmx_inc_test_stage();
install_ept(pml4, (unsigned long)data_page2,
- (unsigned long)data_page2,
- EPT_RA | EPT_WA | EPT_EA);
+ (unsigned long)data_page2,
+ VMX_EPT_READABLE_MASK |
+ VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK);
} else
report_fail("EPT basic framework - write");
break;
case 1:
install_ept(pml4, (unsigned long)data_page1,
- (unsigned long)data_page1, EPT_WA);
+ (unsigned long)data_page1, VMX_EPT_WRITABLE_MASK);
invept(INVEPT_SINGLE, eptp);
break;
case 2:
install_ept(pml4, (unsigned long)data_page1,
- (unsigned long)data_page1,
- EPT_RA | EPT_WA | EPT_EA |
- (2 << EPT_MEM_TYPE_SHIFT));
+ (unsigned long)data_page1,
+ VMX_EPT_READABLE_MASK |
+ VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK |
+ (2 << EPT_MEM_TYPE_SHIFT));
invept(INVEPT_SINGLE, eptp);
break;
case 3:
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page1);
TEST_ASSERT(get_ept_pte(pml4, (unsigned long)data_page1,
1, &data_page1_pte));
- set_ept_pte(pml4, (unsigned long)data_page1,
- 1, data_page1_pte & ~EPT_PRESENT);
+ set_ept_pte(pml4, (unsigned long)data_page1,
+ 1, data_page1_pte & ~VMX_EPT_RWX_MASK);
invept(INVEPT_SINGLE, eptp);
break;
case 4:
@@ -1391,7 +1399,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
TEST_ASSERT(get_ept_pte(pml4, guest_pte_addr, 2, &data_page1_pte_pte));
set_ept_pte(pml4, guest_pte_addr, 2,
- data_page1_pte_pte & ~EPT_PRESENT);
+ data_page1_pte_pte & ~VMX_EPT_RWX_MASK);
invept(INVEPT_SINGLE, eptp);
break;
case 5:
@@ -1418,8 +1426,10 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
case 2:
vmx_inc_test_stage();
install_ept(pml4, (unsigned long)data_page1,
- (unsigned long)data_page1,
- EPT_RA | EPT_WA | EPT_EA);
+ (unsigned long)data_page1,
+ VMX_EPT_READABLE_MASK |
+ VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK);
invept(INVEPT_SINGLE, eptp);
break;
// Should not reach here
@@ -1448,7 +1458,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
EPT_VIOLATION_GVA_TRANSLATED))
vmx_inc_test_stage();
set_ept_pte(pml4, (unsigned long)data_page1,
- 1, data_page1_pte | (EPT_PRESENT));
+ 1, data_page1_pte | (VMX_EPT_RWX_MASK));
invept(INVEPT_SINGLE, eptp);
break;
case 4:
@@ -1460,7 +1470,8 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
EPT_VIOLATION_GVA_IS_VALID))
vmx_inc_test_stage();
set_ept_pte(pml4, guest_pte_addr, 2,
- data_page1_pte_pte | (EPT_PRESENT));
+ data_page1_pte_pte |
+ (VMX_EPT_RWX_MASK));
invept(INVEPT_SINGLE, eptp);
break;
case 5:
@@ -1468,7 +1479,8 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
vmx_inc_test_stage();
TEST_ASSERT(get_ept_pte(pml4, (unsigned long)pci_physaddr,
1, &memaddr_pte));
- set_ept_pte(pml4, memaddr_pte, 1, memaddr_pte | EPT_RA);
+ set_ept_pte(pml4, memaddr_pte, 1,
+ memaddr_pte | VMX_EPT_READABLE_MASK);
invept(INVEPT_SINGLE, eptp);
break;
case 6:
@@ -1476,7 +1488,9 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
vmx_inc_test_stage();
TEST_ASSERT(get_ept_pte(pml4, (unsigned long)pci_physaddr,
1, &memaddr_pte));
- set_ept_pte(pml4, memaddr_pte, 1, memaddr_pte | EPT_RA | EPT_WA);
+ set_ept_pte(pml4, memaddr_pte, 1,
+ memaddr_pte | VMX_EPT_READABLE_MASK |
+ VMX_EPT_WRITABLE_MASK);
invept(INVEPT_SINGLE, eptp);
break;
default:
@@ -2419,7 +2433,7 @@ static void ept_violation(unsigned long clear, unsigned long set,
static void ept_access_violation(unsigned long access, enum ept_access_op op,
u64 expected_qual)
{
- ept_violation(EPT_PRESENT, access, op,
+ ept_violation(VMX_EPT_RWX_MASK, access, op,
expected_qual | EPT_VIOLATION_GVA_IS_VALID |
EPT_VIOLATION_GVA_TRANSLATED);
}
@@ -2489,9 +2503,9 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
* otherwise our level=1 twiddling below will fail. We use the
* identity map (gpa = gpa) since page tables are shared with the host.
*/
- install_ept(pml4, gpa, gpa, EPT_PRESENT);
+ install_ept(pml4, gpa, gpa, VMX_EPT_RWX_MASK);
orig_epte = ept_twiddle(gpa, /*mkhuge=*/0, /*level=*/1,
- /*clear=*/EPT_PRESENT, /*set=*/ept_access);
+ /*clear=*/VMX_EPT_RWX_MASK, /*set=*/ept_access);
if (expect_violation) {
do_ept_violation(/*leaf=*/true, op,
@@ -2588,7 +2602,7 @@ static void ept_ignored_bit(int bit)
static void ept_access_allowed(unsigned long access, enum ept_access_op op)
{
- ept_allowed(EPT_PRESENT, access, op);
+ ept_allowed(VMX_EPT_RWX_MASK, access, op);
}
@@ -2658,7 +2672,7 @@ static void ept_misconfig(unsigned long clear, unsigned long set)
static void ept_access_misconfig(unsigned long access)
{
- ept_misconfig(EPT_PRESENT, access);
+ ept_misconfig(VMX_EPT_RWX_MASK, access);
}
static void ept_reserved_bit_at_level_nohuge(int level, int bit)
@@ -2667,7 +2681,7 @@ static void ept_reserved_bit_at_level_nohuge(int level, int bit)
ept_misconfig_at_level_mkhuge(false, level, 0, 1ul << bit);
/* Making the entry non-present turns reserved bits into ignored. */
- ept_violation_at_level(level, EPT_PRESENT, 1ul << bit, OP_READ,
+ ept_violation_at_level(level, VMX_EPT_RWX_MASK, 1ul << bit, OP_READ,
EPT_VIOLATION_ACC_READ |
EPT_VIOLATION_GVA_IS_VALID |
EPT_VIOLATION_GVA_TRANSLATED);
@@ -2679,7 +2693,7 @@ static void ept_reserved_bit_at_level_huge(int level, int bit)
ept_misconfig_at_level_mkhuge(true, level, 0, 1ul << bit);
/* Making the entry non-present turns reserved bits into ignored. */
- ept_violation_at_level(level, EPT_PRESENT, 1ul << bit, OP_READ,
+ ept_violation_at_level(level, VMX_EPT_RWX_MASK, 1ul << bit, OP_READ,
EPT_VIOLATION_ACC_READ |
EPT_VIOLATION_GVA_IS_VALID |
EPT_VIOLATION_GVA_TRANSLATED);
@@ -2691,7 +2705,7 @@ static void ept_reserved_bit_at_level(int level, int bit)
ept_misconfig_at_level(level, 0, 1ul << bit);
/* Making the entry non-present turns reserved bits into ignored. */
- ept_violation_at_level(level, EPT_PRESENT, 1ul << bit, OP_READ,
+ ept_violation_at_level(level, VMX_EPT_RWX_MASK, 1ul << bit, OP_READ,
EPT_VIOLATION_ACC_READ |
EPT_VIOLATION_GVA_IS_VALID |
EPT_VIOLATION_GVA_TRANSLATED);
@@ -2787,7 +2801,7 @@ static void ept_access_test_setup(void)
*/
TEST_ASSERT(get_ept_pte(pml4, data->gpa, 4, &pte) && pte == 0);
TEST_ASSERT(get_ept_pte(pml4, data->gpa + size - 1, 4, &pte) && pte == 0);
- install_ept(pml4, data->hpa, data->gpa, EPT_PRESENT);
+ install_ept(pml4, data->hpa, data->gpa, VMX_EPT_RWX_MASK);
data->hva[0] = MAGIC_VAL_1;
memcpy(&data->hva[1], &ret42_start, &ret42_end - &ret42_start);
@@ -2807,10 +2821,12 @@ static void ept_access_test_read_only(void)
ept_access_test_setup();
/* r-- */
- ept_access_allowed(EPT_RA, OP_READ);
- ept_access_violation(EPT_RA, OP_WRITE, EPT_VIOLATION_ACC_WRITE |
+ ept_access_allowed(VMX_EPT_READABLE_MASK, OP_READ);
+ ept_access_violation(VMX_EPT_READABLE_MASK, OP_WRITE,
+ EPT_VIOLATION_ACC_WRITE |
EPT_VIOLATION_PROT_READ);
- ept_access_violation(EPT_RA, OP_EXEC, EPT_VIOLATION_ACC_INSTR |
+ ept_access_violation(VMX_EPT_READABLE_MASK, OP_EXEC,
+ EPT_VIOLATION_ACC_INSTR |
EPT_VIOLATION_PROT_READ);
}
@@ -2818,16 +2834,19 @@ static void ept_access_test_write_only(void)
{
ept_access_test_setup();
/* -w- */
- ept_access_misconfig(EPT_WA);
+ ept_access_misconfig(VMX_EPT_WRITABLE_MASK);
}
static void ept_access_test_read_write(void)
{
ept_access_test_setup();
/* rw- */
- ept_access_allowed(EPT_RA | EPT_WA, OP_READ);
- ept_access_allowed(EPT_RA | EPT_WA, OP_WRITE);
- ept_access_violation(EPT_RA | EPT_WA, OP_EXEC,
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK,
+ OP_READ);
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK,
+ OP_WRITE);
+ ept_access_violation(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK,
+ OP_EXEC,
EPT_VIOLATION_ACC_INSTR |
EPT_VIOLATION_PROT_READ |
EPT_VIOLATION_PROT_WRITE);
@@ -2839,15 +2858,15 @@ static void ept_access_test_execute_only(void)
ept_access_test_setup();
/* --x */
if (ept_execute_only_supported()) {
- ept_access_violation(EPT_EA, OP_READ,
+ ept_access_violation(VMX_EPT_EXECUTABLE_MASK, OP_READ,
EPT_VIOLATION_ACC_READ |
EPT_VIOLATION_PROT_EXEC);
- ept_access_violation(EPT_EA, OP_WRITE,
+ ept_access_violation(VMX_EPT_EXECUTABLE_MASK, OP_WRITE,
EPT_VIOLATION_ACC_WRITE |
EPT_VIOLATION_PROT_EXEC);
- ept_access_allowed(EPT_EA, OP_EXEC);
+ ept_access_allowed(VMX_EPT_EXECUTABLE_MASK, OP_EXEC);
} else {
- ept_access_misconfig(EPT_EA);
+ ept_access_misconfig(VMX_EPT_EXECUTABLE_MASK);
}
}
@@ -2855,28 +2874,34 @@ static void ept_access_test_read_execute(void)
{
ept_access_test_setup();
/* r-x */
- ept_access_allowed(EPT_RA | EPT_EA, OP_READ);
- ept_access_violation(EPT_RA | EPT_EA, OP_WRITE,
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_EXECUTABLE_MASK,
+ OP_READ);
+ ept_access_violation(VMX_EPT_READABLE_MASK | VMX_EPT_EXECUTABLE_MASK,
+ OP_WRITE,
EPT_VIOLATION_ACC_WRITE |
EPT_VIOLATION_PROT_READ |
EPT_VIOLATION_PROT_EXEC);
- ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC);
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_EXECUTABLE_MASK,
+ OP_EXEC);
}
static void ept_access_test_write_execute(void)
{
ept_access_test_setup();
/* -wx */
- ept_access_misconfig(EPT_WA | EPT_EA);
+ ept_access_misconfig(VMX_EPT_WRITABLE_MASK | VMX_EPT_EXECUTABLE_MASK);
}
static void ept_access_test_read_write_execute(void)
{
ept_access_test_setup();
/* rwx */
- ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_READ);
- ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_WRITE);
- ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC);
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, OP_READ);
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, OP_WRITE);
+ ept_access_allowed(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, OP_EXEC);
}
static void ept_access_test_reserved_bits(void)
@@ -2989,17 +3014,20 @@ static void ept_access_test_paddr_read_only_ad_disabled(void)
ept_disable_ad_bits();
/* Can't update A bit, so all accesses fail. */
- ept_access_violation_paddr(EPT_RA, 0, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA, 0, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, 0, OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, 0, OP_WRITE, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, 0, OP_EXEC, qual);
/* AD bits disabled, so only writes try to update the D bit. */
- ept_access_allowed_paddr(EPT_RA, PT_ACCESSED_MASK, OP_READ);
- ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_WRITE, qual);
- ept_access_allowed_paddr(EPT_RA, PT_ACCESSED_MASK, OP_EXEC);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK, PT_ACCESSED_MASK,
+ OP_READ);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_ACCESSED_MASK,
+ OP_WRITE, qual);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK, PT_ACCESSED_MASK,
+ OP_EXEC);
/* Both A and D already set, so read-only is OK. */
- ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_READ);
- ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_WRITE);
- ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_EXEC);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK, PT_AD_MASK, OP_READ);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK, PT_AD_MASK, OP_WRITE);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK, PT_AD_MASK, OP_EXEC);
}
static void ept_access_test_paddr_read_only_ad_enabled(void)
@@ -3015,33 +3043,42 @@ static void ept_access_test_paddr_read_only_ad_enabled(void)
ept_access_test_setup();
ept_enable_ad_bits_or_skip_test();
- ept_access_violation_paddr(EPT_RA, 0, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA, 0, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA, 0, OP_EXEC, qual);
- ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_EXEC, qual);
- ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, 0, OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, 0, OP_WRITE, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_ACCESSED_MASK,
+ OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_ACCESSED_MASK,
+ OP_WRITE, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_ACCESSED_MASK,
+ OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_AD_MASK, OP_READ,
+ qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_AD_MASK, OP_WRITE,
+ qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK, PT_AD_MASK, OP_EXEC,
+ qual);
}
static void ept_access_test_paddr_read_write(void)
{
ept_access_test_setup();
/* Read-write access to paging structure. */
- ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_READ);
- ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_WRITE);
- ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_EXEC);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK, 0,
+ OP_READ);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK, 0,
+ OP_WRITE);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK | VMX_EPT_WRITABLE_MASK, 0,
+ OP_EXEC);
}
static void ept_access_test_paddr_read_write_execute(void)
{
ept_access_test_setup();
/* RWX access to paging structure. */
- ept_access_allowed_paddr(EPT_PRESENT, 0, OP_READ);
- ept_access_allowed_paddr(EPT_PRESENT, 0, OP_WRITE);
- ept_access_allowed_paddr(EPT_PRESENT, 0, OP_EXEC);
+ ept_access_allowed_paddr(VMX_EPT_RWX_MASK, 0, OP_READ);
+ ept_access_allowed_paddr(VMX_EPT_RWX_MASK, 0, OP_WRITE);
+ ept_access_allowed_paddr(VMX_EPT_RWX_MASK, 0, OP_EXEC);
}
static void ept_access_test_paddr_read_execute_ad_disabled(void)
@@ -3059,17 +3096,31 @@ static void ept_access_test_paddr_read_execute_ad_disabled(void)
ept_disable_ad_bits();
/* Can't update A bit, so all accesses fail. */
- ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, 0, OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, 0, OP_WRITE,
+ qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, 0, OP_EXEC,
+ qual);
/* AD bits disabled, so only writes try to update the D bit. */
- ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_READ);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_WRITE, qual);
- ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_EXEC);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_ACCESSED_MASK,
+ OP_READ);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_ACCESSED_MASK,
+ OP_WRITE, qual);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_ACCESSED_MASK,
+ OP_EXEC);
/* Both A and D already set, so read-only is OK. */
- ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_READ);
- ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_WRITE);
- ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_EXEC);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_AD_MASK, OP_READ);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_AD_MASK, OP_WRITE);
+ ept_access_allowed_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_AD_MASK, OP_EXEC);
}
static void ept_access_test_paddr_read_execute_ad_enabled(void)
@@ -3085,15 +3136,31 @@ static void ept_access_test_paddr_read_execute_ad_enabled(void)
ept_access_test_setup();
ept_enable_ad_bits_or_skip_test();
- ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_EXEC, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_EXEC, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_READ, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_WRITE, qual);
- ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, 0, OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, 0, OP_WRITE,
+ qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_ACCESSED_MASK,
+ OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_ACCESSED_MASK,
+ OP_WRITE, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_ACCESSED_MASK,
+ OP_EXEC, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_AD_MASK,
+ OP_READ, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_AD_MASK,
+ OP_WRITE, qual);
+ ept_access_violation_paddr(VMX_EPT_READABLE_MASK |
+ VMX_EPT_EXECUTABLE_MASK, PT_AD_MASK,
+ OP_EXEC, qual);
}
static void ept_access_test_paddr_not_present_page_fault(void)
@@ -3113,12 +3180,14 @@ static void ept_access_test_force_2m_page(void)
TEST_ASSERT_EQ(ept_2m_supported(), true);
ept_allowed_at_level_mkhuge(true, 2, 0, 0, OP_READ);
- ept_violation_at_level_mkhuge(true, 2, EPT_PRESENT, EPT_RA, OP_WRITE,
+ ept_violation_at_level_mkhuge(true, 2, VMX_EPT_RWX_MASK,
+ VMX_EPT_READABLE_MASK, OP_WRITE,
EPT_VIOLATION_ACC_WRITE |
EPT_VIOLATION_PROT_READ |
EPT_VIOLATION_GVA_IS_VALID |
EPT_VIOLATION_GVA_TRANSLATED);
- ept_misconfig_at_level_mkhuge(true, 2, EPT_PRESENT, EPT_WA);
+ ept_misconfig_at_level_mkhuge(true, 2, VMX_EPT_RWX_MASK,
+ VMX_EPT_WRITABLE_MASK);
}
static bool invvpid_valid(u64 type, u64 vpid, u64 gla)
--
2.43.0
* [kvm-unit-tests PATCH 08/17] x86/vmx: switch to new vmx.h EPT access and dirty defs
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 07/17] x86/vmx: switch to new vmx.h EPT RWX defs Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 09/17] x86/vmx: switch to new vmx.h EPT capability and memory type defs Jon Kohler
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's EPT definitions for access and dirty bits,
which makes it easier to cross-reference between this code base and the
kernel.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 20 +++++++++++---------
x86/vmx.h | 3 ---
x86/vmx_tests.c | 43 ++++++++++++++++++++++++-------------------
3 files changed, 35 insertions(+), 31 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index 6b7dca34..a3c6c60b 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -986,7 +986,7 @@ static void clear_ept_ad_pte(unsigned long *pml4, unsigned long guest_addr)
for (l = EPT_PAGE_LEVEL; ; --l) {
offset = (guest_addr >> EPT_LEVEL_SHIFT(l)) & EPT_PGDIR_MASK;
- pt[offset] &= ~(EPT_ACCESS_FLAG|EPT_DIRTY_FLAG);
+ pt[offset] &= ~(VMX_EPT_ACCESS_BIT | VMX_EPT_DIRTY_BIT);
pte = pt[offset];
if (l == 1 || (l < 4 && (pte & EPT_LARGE_PAGE)))
break;
@@ -1043,12 +1043,14 @@ void check_ept_ad(unsigned long *pml4, u64 guest_cr3,
}
if (!bad_pt_ad) {
- bad_pt_ad |= (ept_pte & (EPT_ACCESS_FLAG|EPT_DIRTY_FLAG)) != expected_pt_ad;
+ bad_pt_ad |=
+ (ept_pte & (VMX_EPT_ACCESS_BIT | VMX_EPT_DIRTY_BIT)) !=
+ expected_pt_ad;
if (bad_pt_ad)
report_fail("EPT - guest level %d page table A=%d/D=%d",
l,
- !!(expected_pt_ad & EPT_ACCESS_FLAG),
- !!(expected_pt_ad & EPT_DIRTY_FLAG));
+ !!(expected_pt_ad & VMX_EPT_ACCESS_BIT),
+ !!(expected_pt_ad & VMX_EPT_DIRTY_BIT));
}
pte = pt[offset];
@@ -1061,8 +1063,8 @@ void check_ept_ad(unsigned long *pml4, u64 guest_cr3,
if (!bad_pt_ad)
report_pass("EPT - guest page table structures A=%d/D=%d",
- !!(expected_pt_ad & EPT_ACCESS_FLAG),
- !!(expected_pt_ad & EPT_DIRTY_FLAG));
+ !!(expected_pt_ad & VMX_EPT_ACCESS_BIT),
+ !!(expected_pt_ad & VMX_EPT_DIRTY_BIT));
offset = (guest_addr >> EPT_LEVEL_SHIFT(l)) & EPT_PGDIR_MASK;
offset_in_page = guest_addr & ((1 << EPT_LEVEL_SHIFT(l)) - 1);
@@ -1072,10 +1074,10 @@ void check_ept_ad(unsigned long *pml4, u64 guest_cr3,
report_fail("EPT - guest physical address is not mapped");
return;
}
- report((ept_pte & (EPT_ACCESS_FLAG | EPT_DIRTY_FLAG)) == expected_gpa_ad,
+ report((ept_pte & (VMX_EPT_ACCESS_BIT | VMX_EPT_DIRTY_BIT)) == expected_gpa_ad,
"EPT - guest physical address A=%d/D=%d",
- !!(expected_gpa_ad & EPT_ACCESS_FLAG),
- !!(expected_gpa_ad & EPT_DIRTY_FLAG));
+ !!(expected_gpa_ad & VMX_EPT_ACCESS_BIT),
+ !!(expected_gpa_ad & VMX_EPT_DIRTY_BIT));
}
void set_ept_pte(unsigned long *pml4, unsigned long guest_addr,
diff --git a/x86/vmx.h b/x86/vmx.h
index 3f792d4a..65012e0e 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -570,7 +570,6 @@ enum Intr_type {
#define EPTP_PG_WALK_LEN_MASK 0x38ul
#define EPTP_RESERV_BITS_MASK 0x1ful
#define EPTP_RESERV_BITS_SHIFT 0x7ul
-#define EPTP_AD_FLAG (1ul << 6)
#define EPT_MEM_TYPE_UC 0ul
#define EPT_MEM_TYPE_WC 1ul
@@ -578,8 +577,6 @@ enum Intr_type {
#define EPT_MEM_TYPE_WP 5ul
#define EPT_MEM_TYPE_WB 6ul
-#define EPT_ACCESS_FLAG (1ul << 8)
-#define EPT_DIRTY_FLAG (1ul << 9)
#define EPT_LARGE_PAGE (1ul << 7)
#define EPT_MEM_TYPE_SHIFT 3ul
#define EPT_MEM_TYPE_MASK 0x7ul
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index eda9e88a..f7ea411f 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1066,7 +1066,7 @@ static int __setup_ept(u64 hpa, bool enable_ad)
eptp |= (3 << EPTP_PG_WALK_LEN_SHIFT);
eptp |= hpa;
if (enable_ad)
- eptp |= EPTP_AD_FLAG;
+ eptp |= VMX_EPTP_AD_ENABLE_BIT;
vmcs_write(EPTP, eptp);
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0)| CPU_SECONDARY);
@@ -1141,19 +1141,19 @@ static int enable_unrestricted_guest(bool need_valid_ept)
static void ept_enable_ad_bits(void)
{
- eptp |= EPTP_AD_FLAG;
+ eptp |= VMX_EPTP_AD_ENABLE_BIT;
vmcs_write(EPTP, eptp);
}
static void ept_disable_ad_bits(void)
{
- eptp &= ~EPTP_AD_FLAG;
+ eptp &= ~VMX_EPTP_AD_ENABLE_BIT;
vmcs_write(EPTP, eptp);
}
static int ept_ad_enabled(void)
{
- return eptp & EPTP_AD_FLAG;
+ return eptp & VMX_EPTP_AD_ENABLE_BIT;
}
static void ept_enable_ad_bits_or_skip_test(void)
@@ -1350,12 +1350,15 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
case 0:
check_ept_ad(pml4, guest_cr3,
(unsigned long)data_page1,
- have_ad ? EPT_ACCESS_FLAG : 0,
- have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0);
+ have_ad ? VMX_EPT_ACCESS_BIT : 0,
+ have_ad ? VMX_EPT_ACCESS_BIT |
+ VMX_EPT_DIRTY_BIT : 0);
check_ept_ad(pml4, guest_cr3,
(unsigned long)data_page2,
- have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0,
- have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0);
+ have_ad ? VMX_EPT_ACCESS_BIT |
+ VMX_EPT_DIRTY_BIT : 0,
+ have_ad ? VMX_EPT_ACCESS_BIT |
+ VMX_EPT_DIRTY_BIT : 0);
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page1);
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page2);
if (have_ad)
@@ -1451,7 +1454,8 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
switch(vmx_get_test_stage()) {
case 3:
check_ept_ad(pml4, guest_cr3, (unsigned long)data_page1, 0,
- have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0);
+ have_ad ? VMX_EPT_ACCESS_BIT |
+ VMX_EPT_DIRTY_BIT : 0);
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page1);
if (exit_qual == (EPT_VIOLATION_ACC_WRITE |
EPT_VIOLATION_GVA_IS_VALID |
@@ -1463,7 +1467,8 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
break;
case 4:
check_ept_ad(pml4, guest_cr3, (unsigned long)data_page1, 0,
- have_ad ? EPT_ACCESS_FLAG | EPT_DIRTY_FLAG : 0);
+ have_ad ? VMX_EPT_ACCESS_BIT |
+ VMX_EPT_DIRTY_BIT : 0);
clear_ept_ad(pml4, guest_cr3, (unsigned long)data_page1);
if (exit_qual == (EPT_VIOLATION_ACC_READ |
(have_ad ? EPT_VIOLATION_ACC_WRITE : 0) |
@@ -2517,11 +2522,11 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
if (ept_ad_enabled()) {
for (i = EPT_PAGE_LEVEL; i > 0; i--) {
TEST_ASSERT(get_ept_pte(pml4, gpa, i, &epte));
- TEST_ASSERT(epte & EPT_ACCESS_FLAG);
+ TEST_ASSERT(epte & VMX_EPT_ACCESS_BIT);
if (i == 1)
- TEST_ASSERT(epte & EPT_DIRTY_FLAG);
+ TEST_ASSERT(epte & VMX_EPT_DIRTY_BIT);
else
- TEST_ASSERT_EQ(epte & EPT_DIRTY_FLAG, 0);
+ TEST_ASSERT_EQ(epte & VMX_EPT_DIRTY_BIT, 0);
}
}
@@ -4783,7 +4788,7 @@ static void test_eptp_ad_bit(u64 eptp, bool is_ctrl_valid)
{
vmcs_write(EPTP, eptp);
report_prefix_pushf("Enable-EPT enabled; EPT accessed and dirty flag %s",
- (eptp & EPTP_AD_FLAG) ? "1": "0");
+ (eptp & VMX_EPTP_AD_ENABLE_BIT) ? "1" : "0");
if (is_ctrl_valid)
test_vmx_valid_controls();
else
@@ -4872,20 +4877,20 @@ static void test_ept_eptp(void)
*/
if (ept_ad_bits_supported()) {
report_info("Processor supports accessed and dirty flag");
- eptp &= ~EPTP_AD_FLAG;
+ eptp &= ~VMX_EPTP_AD_ENABLE_BIT;
test_eptp_ad_bit(eptp, true);
- eptp |= EPTP_AD_FLAG;
+ eptp |= VMX_EPTP_AD_ENABLE_BIT;
test_eptp_ad_bit(eptp, true);
} else {
report_info("Processor does not supports accessed and dirty flag");
- eptp &= ~EPTP_AD_FLAG;
+ eptp &= ~VMX_EPTP_AD_ENABLE_BIT;
test_eptp_ad_bit(eptp, true);
- eptp |= EPTP_AD_FLAG;
+ eptp |= VMX_EPTP_AD_ENABLE_BIT;
test_eptp_ad_bit(eptp, false);
- eptp &= ~EPTP_AD_FLAG;
+ eptp &= ~VMX_EPTP_AD_ENABLE_BIT;
}
/*
--
2.43.0
* [kvm-unit-tests PATCH 09/17] x86/vmx: switch to new vmx.h EPT capability and memory type defs
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 08/17] x86/vmx: switch to new vmx.h EPT access and dirty defs Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 10/17] x86/vmx: switch to new vmx.h primary processor-based VM-execution controls Jon Kohler
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to new vmx.h's EPT definitions for capability and memory type,
which makes it easier to grok from one code base to another.
Pick up new lib/x86/msr.h definitions for memory types, from:
e7e80b6 ("x86/cpu: KVM: Add common defines for architectural memory types (PAT, MTRRs, etc.)")
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/x86/msr.h | 14 ++++++++++++++
x86/vmx.c | 18 +++++++++---------
x86/vmx.h | 33 ++++++++++-----------------------
x86/vmx_tests.c | 14 +++++++-------
4 files changed, 40 insertions(+), 39 deletions(-)
diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index cc4cb855..06a3b34a 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -31,6 +31,20 @@
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR)
+/*
+ * Architectural memory types that are common to MTRRs, PAT, VMX MSRs, etc.
+ * Most MSRs support/allow only a subset of memory types, but the values
+ * themselves are common across all relevant MSRs.
+ */
+#define X86_MEMTYPE_UC 0ull /* Uncacheable, a.k.a. Strong Uncacheable */
+#define X86_MEMTYPE_WC 1ull /* Write Combining */
+/* RESERVED 2 */
+/* RESERVED 3 */
+#define X86_MEMTYPE_WT 4ull /* Write Through */
+#define X86_MEMTYPE_WP 5ull /* Write Protected */
+#define X86_MEMTYPE_WB 6ull /* Write Back */
+#define X86_MEMTYPE_UC_MINUS 7ull /* Weak Uncacheabled (PAT only) */
+
/* Intel MSRs. Some also available on other CPUs */
#define MSR_IA32_SPEC_CTRL 0x00000048
#define SPEC_CTRL_IBRS BIT(0)
diff --git a/x86/vmx.c b/x86/vmx.c
index a3c6c60b..df9a23c7 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1641,15 +1641,15 @@ static void test_vmx_caps(void)
"MSR_IA32_VMX_VMCS_ENUM");
fixed0 = -1ull;
- fixed0 &= ~(EPT_CAP_EXEC_ONLY |
- EPT_CAP_PWL4 |
- EPT_CAP_PWL5 |
- EPT_CAP_UC |
- EPT_CAP_WB |
- EPT_CAP_2M_PAGE |
- EPT_CAP_1G_PAGE |
- EPT_CAP_INVEPT |
- EPT_CAP_AD_FLAG |
+ fixed0 &= ~(VMX_EPT_EXECUTE_ONLY_BIT |
+ VMX_EPT_PAGE_WALK_4_BIT |
+ VMX_EPT_PAGE_WALK_5_BIT |
+ VMX_EPTP_UC_BIT |
+ VMX_EPTP_WB_BIT |
+ VMX_EPT_2MB_PAGE_BIT |
+ VMX_EPT_1GB_PAGE_BIT |
+ VMX_EPT_INVEPT_BIT |
+ VMX_EPT_AD_BIT |
EPT_CAP_ADV_EPT_INFO |
EPT_CAP_INVEPT_SINGLE |
EPT_CAP_INVEPT_ALL |
diff --git a/x86/vmx.h b/x86/vmx.h
index 65012e0e..4d13ad91 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -571,27 +571,14 @@ enum Intr_type {
#define EPTP_RESERV_BITS_MASK 0x1ful
#define EPTP_RESERV_BITS_SHIFT 0x7ul
-#define EPT_MEM_TYPE_UC 0ul
#define EPT_MEM_TYPE_WC 1ul
#define EPT_MEM_TYPE_WT 4ul
#define EPT_MEM_TYPE_WP 5ul
-#define EPT_MEM_TYPE_WB 6ul
#define EPT_LARGE_PAGE (1ul << 7)
-#define EPT_MEM_TYPE_SHIFT 3ul
-#define EPT_MEM_TYPE_MASK 0x7ul
#define EPT_IGNORE_PAT (1ul << 6)
#define EPT_SUPPRESS_VE (1ull << 63)
-#define EPT_CAP_EXEC_ONLY (1ull << 0)
-#define EPT_CAP_PWL4 (1ull << 6)
-#define EPT_CAP_PWL5 (1ull << 7)
-#define EPT_CAP_UC (1ull << 8)
-#define EPT_CAP_WB (1ull << 14)
-#define EPT_CAP_2M_PAGE (1ull << 16)
-#define EPT_CAP_1G_PAGE (1ull << 17)
-#define EPT_CAP_INVEPT (1ull << 20)
-#define EPT_CAP_AD_FLAG (1ull << 21)
#define EPT_CAP_ADV_EPT_INFO (1ull << 22)
#define EPT_CAP_INVEPT_SINGLE (1ull << 25)
#define EPT_CAP_INVEPT_ALL (1ull << 26)
@@ -662,12 +649,12 @@ extern union vmx_ept_vpid ept_vpid;
static inline bool ept_2m_supported(void)
{
- return ept_vpid.val & EPT_CAP_2M_PAGE;
+ return ept_vpid.val & VMX_EPT_2MB_PAGE_BIT;
}
static inline bool ept_1g_supported(void)
{
- return ept_vpid.val & EPT_CAP_1G_PAGE;
+ return ept_vpid.val & VMX_EPT_1GB_PAGE_BIT;
}
static inline bool ept_huge_pages_supported(int level)
@@ -682,31 +669,31 @@ static inline bool ept_huge_pages_supported(int level)
static inline bool ept_execute_only_supported(void)
{
- return ept_vpid.val & EPT_CAP_EXEC_ONLY;
+ return ept_vpid.val & VMX_EPT_EXECUTE_ONLY_BIT;
}
static inline bool ept_ad_bits_supported(void)
{
- return ept_vpid.val & EPT_CAP_AD_FLAG;
+ return ept_vpid.val & VMX_EPT_AD_BIT;
}
static inline bool is_4_level_ept_supported(void)
{
- return ept_vpid.val & EPT_CAP_PWL4;
+ return ept_vpid.val & VMX_EPT_PAGE_WALK_4_BIT;
}
static inline bool is_5_level_ept_supported(void)
{
- return ept_vpid.val & EPT_CAP_PWL5;
+ return ept_vpid.val & VMX_EPT_PAGE_WALK_5_BIT;
}
static inline bool is_ept_memtype_supported(int type)
{
- if (type == EPT_MEM_TYPE_UC)
- return ept_vpid.val & EPT_CAP_UC;
+ if (type == VMX_EPTP_MT_UC)
+ return ept_vpid.val & VMX_EPTP_UC_BIT;
- if (type == EPT_MEM_TYPE_WB)
- return ept_vpid.val & EPT_CAP_WB;
+ if (type == VMX_EPTP_MT_WB)
+ return ept_vpid.val & VMX_EPTP_WB_BIT;
return false;
}
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index f7ea411f..5ca4b79b 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1050,7 +1050,7 @@ static int __setup_ept(u64 hpa, bool enable_ad)
printf("\tEPT is not supported\n");
return 1;
}
- if (!is_ept_memtype_supported(EPT_MEM_TYPE_WB)) {
+ if (!is_ept_memtype_supported(VMX_EPTP_MT_WB)) {
printf("\tWB memtype for EPT walks not supported\n");
return 1;
}
@@ -1062,7 +1062,7 @@ static int __setup_ept(u64 hpa, bool enable_ad)
return 1;
}
- eptp = EPT_MEM_TYPE_WB;
+ eptp = VMX_EPTP_MT_WB;
eptp |= (3 << EPTP_PG_WALK_LEN_SHIFT);
eptp |= hpa;
if (enable_ad)
@@ -1385,7 +1385,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
VMX_EPT_READABLE_MASK |
VMX_EPT_WRITABLE_MASK |
VMX_EPT_EXECUTABLE_MASK |
- (2 << EPT_MEM_TYPE_SHIFT));
+ (2 << VMX_EPT_MT_EPTE_SHIFT));
invept(INVEPT_SINGLE, eptp);
break;
case 3:
@@ -4838,10 +4838,10 @@ static void test_ept_eptp(void)
eptp = vmcs_read(EPTP);
for (i = 0; i < 8; i++) {
- eptp = (eptp & ~EPT_MEM_TYPE_MASK) | i;
+ eptp = (eptp & ~VMX_EPTP_MT_MASK) | i;
vmcs_write(EPTP, eptp);
- report_prefix_pushf("Enable-EPT enabled; EPT memory type %lu",
- eptp & EPT_MEM_TYPE_MASK);
+ report_prefix_pushf("Enable-EPT enabled; EPT memory type %llu",
+ eptp & VMX_EPTP_MT_MASK);
if (is_ept_memtype_supported(i))
test_vmx_valid_controls();
else
@@ -4849,7 +4849,7 @@ static void test_ept_eptp(void)
report_prefix_pop();
}
- eptp = (eptp & ~EPT_MEM_TYPE_MASK) | 6ul;
+ eptp = (eptp & ~VMX_EPTP_MT_MASK) | 6ul;
/*
* Page walk length (bits 5:3). Note, the value in VMCS.EPTP "is 1
--
2.43.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [kvm-unit-tests PATCH 10/17] x86/vmx: switch to new vmx.h primary processor-based VM-execution controls
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (8 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 09/17] x86/vmx: switch to new vmx.h EPT capability and memory type defs Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 11/17] x86/vmx: switch to new vmx.h secondary execution control bit Jon Kohler
` (7 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to new vmx.h's primary processor-based VM-execution controls,
which makes it easier to grok from one code base to another.
Save the "activate secondary controls" bit (bit 31) for the next patch.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
lib/linux/vmx.h | 1 +
x86/vmx.c | 2 +-
x86/vmx.h | 19 ------
x86/vmx_tests.c | 157 ++++++++++++++++++++++++++----------------------
4 files changed, 87 insertions(+), 92 deletions(-)
diff --git a/lib/linux/vmx.h b/lib/linux/vmx.h
index 5973bd86..f3c2aacc 100644
--- a/lib/linux/vmx.h
+++ b/lib/linux/vmx.h
@@ -16,6 +16,7 @@
#include "libcflat.h"
#include "trapnr.h"
#include "util.h"
+#include "vmxfeatures.h"
#define VMCS_CONTROL_BIT(x) BIT(VMX_FEATURE_##x & 0x1f)
diff --git a/x86/vmx.c b/x86/vmx.c
index df9a23c7..c1845cea 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1258,7 +1258,7 @@ int init_vmcs(struct vmcs **vmcs)
ctrl_exit = EXI_LOAD_EFER | EXI_HOST_64 | EXI_LOAD_PAT;
ctrl_enter = (ENT_LOAD_EFER | ENT_GUEST_64);
/* DIsable IO instruction VMEXIT now */
- ctrl_cpu[0] &= (~(CPU_IO | CPU_IO_BITMAP));
+ ctrl_cpu[0] &= (~(CPU_BASED_UNCOND_IO_EXITING | CPU_BASED_USE_IO_BITMAPS));
ctrl_cpu[1] = 0;
ctrl_pin = (ctrl_pin | ctrl_pin_rev.set) & ctrl_pin_rev.clr;
diff --git a/x86/vmx.h b/x86/vmx.h
index 4d13ad91..a83d08b8 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -436,25 +436,6 @@ enum Ctrl_pin {
};
enum Ctrl0 {
- CPU_INTR_WINDOW = 1ul << 2,
- CPU_USE_TSC_OFFSET = 1ul << 3,
- CPU_HLT = 1ul << 7,
- CPU_INVLPG = 1ul << 9,
- CPU_MWAIT = 1ul << 10,
- CPU_RDPMC = 1ul << 11,
- CPU_RDTSC = 1ul << 12,
- CPU_CR3_LOAD = 1ul << 15,
- CPU_CR3_STORE = 1ul << 16,
- CPU_CR8_LOAD = 1ul << 19,
- CPU_CR8_STORE = 1ul << 20,
- CPU_TPR_SHADOW = 1ul << 21,
- CPU_NMI_WINDOW = 1ul << 22,
- CPU_IO = 1ul << 24,
- CPU_IO_BITMAP = 1ul << 25,
- CPU_MTF = 1ul << 27,
- CPU_MSR_BITMAP = 1ul << 28,
- CPU_MONITOR = 1ul << 29,
- CPU_PAUSE = 1ul << 30,
CPU_SECONDARY = 1ul << 31,
};
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 5ca4b79b..55d151a4 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -266,7 +266,7 @@ static void msr_bmp_init(void)
msr_bitmap = alloc_page();
ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
- ctrl_cpu0 |= CPU_MSR_BITMAP;
+ ctrl_cpu0 |= CPU_BASED_USE_MSR_BITMAPS;
vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu0);
vmcs_write(MSR_BITMAP, (u64)msr_bitmap);
}
@@ -275,13 +275,13 @@ static void *get_msr_bitmap(void)
{
void *msr_bitmap;
- if (vmcs_read(CPU_EXEC_CTRL0) & CPU_MSR_BITMAP) {
+ if (vmcs_read(CPU_EXEC_CTRL0) & CPU_BASED_USE_MSR_BITMAPS) {
msr_bitmap = (void *)vmcs_read(MSR_BITMAP);
} else {
msr_bitmap = alloc_page();
memset(msr_bitmap, 0xff, PAGE_SIZE);
vmcs_write(MSR_BITMAP, (u64)msr_bitmap);
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_MSR_BITMAP);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_USE_MSR_BITMAPS);
}
return msr_bitmap;
@@ -643,8 +643,8 @@ static int iobmp_init(struct vmcs *vmcs)
io_bitmap_a = alloc_page();
io_bitmap_b = alloc_page();
ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
- ctrl_cpu0 |= CPU_IO_BITMAP;
- ctrl_cpu0 &= (~CPU_IO);
+ ctrl_cpu0 |= CPU_BASED_USE_IO_BITMAPS;
+ ctrl_cpu0 &= (~CPU_BASED_UNCOND_IO_EXITING);
vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu0);
vmcs_write(IO_BITMAP_A, (u64)io_bitmap_a);
vmcs_write(IO_BITMAP_B, (u64)io_bitmap_b);
@@ -754,7 +754,8 @@ static int iobmp_exit_handler(union exit_reason exit_reason)
case 9:
case 10:
ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
- vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu0 & ~CPU_IO);
+ vmcs_write(CPU_EXEC_CTRL0,
+ ctrl_cpu0 & ~CPU_BASED_UNCOND_IO_EXITING);
vmx_inc_test_stage();
break;
default:
@@ -770,12 +771,14 @@ static int iobmp_exit_handler(union exit_reason exit_reason)
switch (vmx_get_test_stage()) {
case 9:
ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
- ctrl_cpu0 |= CPU_IO | CPU_IO_BITMAP;
+ ctrl_cpu0 |= CPU_BASED_UNCOND_IO_EXITING |
+ CPU_BASED_USE_IO_BITMAPS;
vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu0);
break;
case 10:
ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
- ctrl_cpu0 = (ctrl_cpu0 & ~CPU_IO_BITMAP) | CPU_IO;
+ ctrl_cpu0 = (ctrl_cpu0 & ~CPU_BASED_USE_IO_BITMAPS) |
+ CPU_BASED_UNCOND_IO_EXITING;
vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu0);
break;
default:
@@ -886,22 +889,25 @@ struct insn_table {
*/
static struct insn_table insn_table[] = {
// Flags for Primary Processor-Based VM-Execution Controls
- {"HLT", CPU_HLT, insn_hlt, INSN_CPU0, 12, 0, 0, 0},
- {"INVLPG", CPU_INVLPG, insn_invlpg, INSN_CPU0, 14,
+ {"HLT", CPU_BASED_HLT_EXITING, insn_hlt, INSN_CPU0, 12, 0, 0, 0},
+ {"INVLPG", CPU_BASED_INVLPG_EXITING, insn_invlpg, INSN_CPU0, 14,
0x12345678, 0, FIELD_EXIT_QUAL},
- {"MWAIT", CPU_MWAIT, insn_mwait, INSN_CPU0, 36, 0, 0, 0, this_cpu_has_mwait},
- {"RDPMC", CPU_RDPMC, insn_rdpmc, INSN_CPU0, 15, 0, 0, 0, this_cpu_has_pmu},
- {"RDTSC", CPU_RDTSC, insn_rdtsc, INSN_CPU0, 16, 0, 0, 0},
- {"CR3 load", CPU_CR3_LOAD, insn_cr3_load, INSN_CPU0, 28, 0x3, 0,
- FIELD_EXIT_QUAL},
- {"CR3 store", CPU_CR3_STORE, insn_cr3_store, INSN_CPU0, 28, 0x13, 0,
- FIELD_EXIT_QUAL},
- {"CR8 load", CPU_CR8_LOAD, insn_cr8_load, INSN_CPU0, 28, 0x8, 0,
- FIELD_EXIT_QUAL},
- {"CR8 store", CPU_CR8_STORE, insn_cr8_store, INSN_CPU0, 28, 0x18, 0,
- FIELD_EXIT_QUAL},
- {"MONITOR", CPU_MONITOR, insn_monitor, INSN_CPU0, 39, 0, 0, 0, this_cpu_has_mwait},
- {"PAUSE", CPU_PAUSE, insn_pause, INSN_CPU0, 40, 0, 0, 0},
+ {"MWAIT", CPU_BASED_MWAIT_EXITING, insn_mwait, INSN_CPU0, 36, 0, 0, 0,
+ this_cpu_has_mwait},
+ {"RDPMC", CPU_BASED_RDPMC_EXITING, insn_rdpmc, INSN_CPU0, 15, 0, 0, 0,
+ this_cpu_has_pmu},
+ {"RDTSC", CPU_BASED_RDTSC_EXITING, insn_rdtsc, INSN_CPU0, 16, 0, 0, 0},
+ {"CR3 load", CPU_BASED_CR3_LOAD_EXITING, insn_cr3_load, INSN_CPU0, 28,
+ 0x3, 0, FIELD_EXIT_QUAL},
+ {"CR3 store", CPU_BASED_CR3_STORE_EXITING, insn_cr3_store, INSN_CPU0,
+ 28, 0x13, 0, FIELD_EXIT_QUAL},
+ {"CR8 load", CPU_BASED_CR8_LOAD_EXITING, insn_cr8_load, INSN_CPU0, 28,
+ 0x8, 0, FIELD_EXIT_QUAL},
+ {"CR8 store", CPU_BASED_CR8_STORE_EXITING, insn_cr8_store, INSN_CPU0,
+ 28, 0x18, 0, FIELD_EXIT_QUAL},
+ {"MONITOR", CPU_BASED_MONITOR_EXITING, insn_monitor, INSN_CPU0, 39,
+ 0, 0, 0, this_cpu_has_mwait},
+ {"PAUSE", CPU_BASED_PAUSE_EXITING, insn_pause, INSN_CPU0, 40, 0, 0, 0},
// Flags for Secondary Processor-Based VM-Execution Controls
{"WBINVD", CPU_WBINVD, insn_wbinvd, INSN_CPU1, 54, 0, 0, 0},
{"DESC_TABLE (SGDT)", CPU_DESC_TABLE, insn_sgdt, INSN_CPU1, 46, 0, 0, 0},
@@ -3814,10 +3820,10 @@ static void test_vmcs_addr_reference(u32 control_bit, enum Encoding field,
*/
static void test_io_bitmaps(void)
{
- test_vmcs_addr_reference(CPU_IO_BITMAP, IO_BITMAP_A,
+ test_vmcs_addr_reference(CPU_BASED_USE_IO_BITMAPS, IO_BITMAP_A,
"I/O bitmap A", "Use I/O bitmaps",
PAGE_SIZE, false, true);
- test_vmcs_addr_reference(CPU_IO_BITMAP, IO_BITMAP_B,
+ test_vmcs_addr_reference(CPU_BASED_USE_IO_BITMAPS, IO_BITMAP_B,
"I/O bitmap B", "Use I/O bitmaps",
PAGE_SIZE, false, true);
}
@@ -3830,7 +3836,7 @@ static void test_io_bitmaps(void)
*/
static void test_msr_bitmap(void)
{
- test_vmcs_addr_reference(CPU_MSR_BITMAP, MSR_BITMAP,
+ test_vmcs_addr_reference(CPU_BASED_USE_MSR_BITMAPS, MSR_BITMAP,
"MSR bitmap", "Use MSR bitmaps",
PAGE_SIZE, false, true);
}
@@ -3851,8 +3857,9 @@ static void test_apic_virt_addr(void)
* what we're trying to achieve and fails vmentry.
*/
u32 cpu_ctrls0 = vmcs_read(CPU_EXEC_CTRL0);
- vmcs_write(CPU_EXEC_CTRL0, cpu_ctrls0 | CPU_CR8_LOAD | CPU_CR8_STORE);
- test_vmcs_addr_reference(CPU_TPR_SHADOW, APIC_VIRT_ADDR,
+ vmcs_write(CPU_EXEC_CTRL0, cpu_ctrls0 | CPU_BASED_CR8_LOAD_EXITING |
+ CPU_BASED_CR8_STORE_EXITING);
+ test_vmcs_addr_reference(CPU_BASED_TPR_SHADOW, APIC_VIRT_ADDR,
"virtual-APIC address", "Use TPR shadow",
PAGE_SIZE, false, true);
vmcs_write(CPU_EXEC_CTRL0, cpu_ctrls0);
@@ -3924,18 +3931,18 @@ static void test_apic_virtual_ctls(void)
/*
* First test
*/
- if (!((ctrl_cpu_rev[0].clr & (CPU_SECONDARY | CPU_TPR_SHADOW)) ==
- (CPU_SECONDARY | CPU_TPR_SHADOW)))
+ if (!((ctrl_cpu_rev[0].clr & (CPU_SECONDARY | CPU_BASED_TPR_SHADOW)) ==
+ (CPU_SECONDARY | CPU_BASED_TPR_SHADOW)))
return;
primary |= CPU_SECONDARY;
- primary &= ~CPU_TPR_SHADOW;
+ primary &= ~CPU_BASED_TPR_SHADOW;
vmcs_write(CPU_EXEC_CTRL0, primary);
while (1) {
for (j = 1; j < 8; j++) {
secondary &= ~(CPU_VIRT_X2APIC | CPU_APIC_REG_VIRT | CPU_VINTD);
- if (primary & CPU_TPR_SHADOW) {
+ if (primary & CPU_BASED_TPR_SHADOW) {
is_ctrl_valid = true;
} else {
if (! set_bit_pattern(j, &secondary))
@@ -3958,7 +3965,7 @@ static void test_apic_virtual_ctls(void)
break;
i++;
- primary |= CPU_TPR_SHADOW;
+ primary |= CPU_BASED_TPR_SHADOW;
vmcs_write(CPU_EXEC_CTRL0, primary);
strcpy(str, "enabled");
}
@@ -4017,7 +4024,8 @@ static void test_virtual_intr_ctls(void)
(ctrl_pin_rev.clr & PIN_EXTINT)))
return;
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY | CPU_TPR_SHADOW);
+ vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY |
+ CPU_BASED_TPR_SHADOW);
vmcs_write(CPU_EXEC_CTRL1, secondary & ~CPU_VINTD);
vmcs_write(PIN_CONTROLS, pin & ~PIN_EXTINT);
report_prefix_pushf("Virtualize interrupt-delivery disabled; external-interrupt exiting disabled");
@@ -4086,7 +4094,8 @@ static void test_posted_intr(void)
(ctrl_exit_rev.clr & EXI_INTA)))
return;
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY | CPU_TPR_SHADOW);
+ vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY |
+ CPU_BASED_TPR_SHADOW);
/*
* Test virtual-interrupt-delivery and acknowledge-interrupt-on-exit
@@ -4237,7 +4246,7 @@ static void try_tpr_threshold_and_vtpr(unsigned threshold, unsigned vtpr)
u32 primary = vmcs_read(CPU_EXEC_CTRL0);
u32 secondary = vmcs_read(CPU_EXEC_CTRL1);
- if ((primary & CPU_TPR_SHADOW) &&
+ if ((primary & CPU_BASED_TPR_SHADOW) &&
(!(primary & CPU_SECONDARY) ||
!(secondary & (CPU_VINTD | CPU_VIRT_APIC_ACCESSES))))
valid = (threshold & 0xf) <= ((vtpr >> 4) & 0xf);
@@ -4571,7 +4580,7 @@ static void try_tpr_threshold(unsigned threshold)
u32 primary = vmcs_read(CPU_EXEC_CTRL0);
u32 secondary = vmcs_read(CPU_EXEC_CTRL1);
- if ((primary & CPU_TPR_SHADOW) && !((primary & CPU_SECONDARY) &&
+ if ((primary & CPU_BASED_TPR_SHADOW) && !((primary & CPU_SECONDARY) &&
(secondary & CPU_VINTD)))
valid = !(threshold >> 4);
@@ -4627,18 +4636,20 @@ static void test_tpr_threshold(void)
u64 threshold = vmcs_read(TPR_THRESHOLD);
void *virtual_apic_page;
- if (!(ctrl_cpu_rev[0].clr & CPU_TPR_SHADOW))
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_TPR_SHADOW))
return;
virtual_apic_page = alloc_page();
memset(virtual_apic_page, 0xff, PAGE_SIZE);
vmcs_write(APIC_VIRT_ADDR, virt_to_phys(virtual_apic_page));
- vmcs_write(CPU_EXEC_CTRL0, primary & ~(CPU_TPR_SHADOW | CPU_SECONDARY));
+ vmcs_write(CPU_EXEC_CTRL0, primary & ~(CPU_BASED_TPR_SHADOW |
+ CPU_SECONDARY));
report_prefix_pushf("Use TPR shadow disabled, secondary controls disabled");
test_tpr_threshold_values();
report_prefix_pop();
- vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) | CPU_TPR_SHADOW);
+ vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) |
+ CPU_BASED_TPR_SHADOW);
report_prefix_pushf("Use TPR shadow enabled, secondary controls disabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4727,7 +4738,7 @@ static void test_nmi_ctrls(void)
cpu_ctrls0 = vmcs_read(CPU_EXEC_CTRL0);
test_pin_ctrls = pin_ctrls & ~(PIN_NMI | PIN_VIRT_NMI);
- test_cpu_ctrls0 = cpu_ctrls0 & ~CPU_NMI_WINDOW;
+ test_cpu_ctrls0 = cpu_ctrls0 & ~CPU_BASED_NMI_WINDOW_EXITING;
vmcs_write(PIN_CONTROLS, test_pin_ctrls);
report_prefix_pushf("NMI-exiting disabled, virtual-NMIs disabled");
@@ -4749,13 +4760,14 @@ static void test_nmi_ctrls(void)
test_vmx_valid_controls();
report_prefix_pop();
- if (!(ctrl_cpu_rev[0].clr & CPU_NMI_WINDOW)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_NMI_WINDOW_EXITING)) {
report_info("NMI-window exiting is not supported, skipping...");
goto done;
}
vmcs_write(PIN_CONTROLS, test_pin_ctrls);
- vmcs_write(CPU_EXEC_CTRL0, test_cpu_ctrls0 | CPU_NMI_WINDOW);
+ vmcs_write(CPU_EXEC_CTRL0, test_cpu_ctrls0 |
+ CPU_BASED_NMI_WINDOW_EXITING);
report_prefix_pushf("Virtual-NMIs disabled, NMI-window-exiting enabled");
test_vmx_invalid_controls();
report_prefix_pop();
@@ -4767,7 +4779,8 @@ static void test_nmi_ctrls(void)
report_prefix_pop();
vmcs_write(PIN_CONTROLS, test_pin_ctrls | (PIN_NMI | PIN_VIRT_NMI));
- vmcs_write(CPU_EXEC_CTRL0, test_cpu_ctrls0 | CPU_NMI_WINDOW);
+ vmcs_write(CPU_EXEC_CTRL0, test_cpu_ctrls0 |
+ CPU_BASED_NMI_WINDOW_EXITING);
report_prefix_pushf("Virtual-NMIs enabled, NMI-window-exiting enabled");
test_vmx_valid_controls();
report_prefix_pop();
@@ -5121,14 +5134,14 @@ static void enable_mtf(void)
{
u32 ctrl0 = vmcs_read(CPU_EXEC_CTRL0);
- vmcs_write(CPU_EXEC_CTRL0, ctrl0 | CPU_MTF);
+ vmcs_write(CPU_EXEC_CTRL0, ctrl0 | CPU_BASED_MONITOR_TRAP_FLAG);
}
static void disable_mtf(void)
{
u32 ctrl0 = vmcs_read(CPU_EXEC_CTRL0);
- vmcs_write(CPU_EXEC_CTRL0, ctrl0 & ~CPU_MTF);
+ vmcs_write(CPU_EXEC_CTRL0, ctrl0 & ~CPU_BASED_MONITOR_TRAP_FLAG);
}
static void enable_tf(void)
@@ -5159,7 +5172,7 @@ static void vmx_mtf_test(void)
unsigned long pending_dbg;
handler old_gp, old_db;
- if (!(ctrl_cpu_rev[0].clr & CPU_MTF)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_MONITOR_TRAP_FLAG)) {
report_skip("%s : \"Monitor trap flag\" exec control not supported", __func__);
return;
}
@@ -5262,7 +5275,7 @@ static void vmx_mtf_pdpte_test(void)
if (setup_ept(false))
return;
- if (!(ctrl_cpu_rev[0].clr & CPU_MTF)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_MONITOR_TRAP_FLAG)) {
report_skip("%s : \"Monitor trap flag\" exec control not supported", __func__);
return;
}
@@ -6185,13 +6198,13 @@ static enum Config_type configure_apic_reg_virt_test(
}
if (apic_reg_virt_config->use_tpr_shadow) {
- if (!(ctrl_cpu_rev[0].clr & CPU_TPR_SHADOW)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_TPR_SHADOW)) {
printf("VM-execution control \"use TPR shadow\" NOT supported.\n");
return CONFIG_TYPE_UNSUPPORTED;
}
- cpu_exec_ctrl0 |= CPU_TPR_SHADOW;
+ cpu_exec_ctrl0 |= CPU_BASED_TPR_SHADOW;
} else {
- cpu_exec_ctrl0 &= ~CPU_TPR_SHADOW;
+ cpu_exec_ctrl0 &= ~CPU_BASED_TPR_SHADOW;
}
if (apic_reg_virt_config->apic_register_virtualization) {
@@ -6968,9 +6981,9 @@ static enum Config_type configure_virt_x2apic_mode_test(
/* x2apic-specific VMCS config */
if (virt_x2apic_mode_config->use_msr_bitmaps) {
/* virt_x2apic_mode_test() checks for MSR bitmaps support */
- cpu_exec_ctrl0 |= CPU_MSR_BITMAP;
+ cpu_exec_ctrl0 |= CPU_BASED_USE_MSR_BITMAPS;
} else {
- cpu_exec_ctrl0 &= ~CPU_MSR_BITMAP;
+ cpu_exec_ctrl0 &= ~CPU_BASED_USE_MSR_BITMAPS;
}
if (virt_x2apic_mode_config->virtual_interrupt_delivery) {
@@ -7035,10 +7048,10 @@ static void virt_x2apic_mode_test(void)
* - "Virtual-APIC address", indicated by "use TPR shadow"
* - "MSR-bitmap address", indicated by "use MSR bitmaps"
*/
- if (!(ctrl_cpu_rev[0].clr & CPU_TPR_SHADOW)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_TPR_SHADOW)) {
report_skip("%s : \"Use TPR shadow\" exec control not supported", __func__);
return;
- } else if (!(ctrl_cpu_rev[0].clr & CPU_MSR_BITMAP)) {
+ } else if (!(ctrl_cpu_rev[0].clr & CPU_BASED_USE_MSR_BITMAPS)) {
report_skip("%s : \"Use MSR bitmaps\" exec control not supported", __func__);
return;
}
@@ -8673,7 +8686,7 @@ static void vmx_nmi_window_test(void)
return;
}
- if (!(ctrl_cpu_rev[0].clr & CPU_NMI_WINDOW)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_NMI_WINDOW_EXITING)) {
report_skip("%s : \"NMI-window exiting\" exec control not supported", __func__);
return;
}
@@ -8692,7 +8705,7 @@ static void vmx_nmi_window_test(void)
* RIP will not advance.
*/
report_prefix_push("active, no blocking");
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_NMI_WINDOW);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_NMI_WINDOW_EXITING);
enter_guest();
verify_nmi_window_exit(nop_addr);
report_prefix_pop();
@@ -8764,7 +8777,7 @@ static void vmx_nmi_window_test(void)
report_prefix_pop();
}
- vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_NMI_WINDOW);
+ vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_NMI_WINDOW_EXITING);
enter_guest();
report_prefix_pop();
}
@@ -8804,7 +8817,7 @@ static void vmx_intr_window_test(void)
unsigned int orig_db_gate_type;
void *db_fault_addr = get_idt_addr(&boot_idt[DB_VECTOR]);
- if (!(ctrl_cpu_rev[0].clr & CPU_INTR_WINDOW)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_INTR_WINDOW_EXITING)) {
report_skip("%s : \"Interrupt-window exiting\" exec control not supported", __func__);
return;
}
@@ -8830,7 +8843,7 @@ static void vmx_intr_window_test(void)
* point to the vmcall instruction.
*/
report_prefix_push("active, no blocking, RFLAGS.IF=1");
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_INTR_WINDOW);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_INTR_WINDOW_EXITING);
vmcs_write(GUEST_RFLAGS, X86_EFLAGS_FIXED | X86_EFLAGS_IF);
enter_guest();
verify_intr_window_exit(vmcall_addr);
@@ -8857,11 +8870,11 @@ static void vmx_intr_window_test(void)
* VM-exits. Then, advance past the VMCALL and set the
* "interrupt-window exiting" VM-execution control again.
*/
- vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_INTR_WINDOW);
+ vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_INTR_WINDOW_EXITING);
enter_guest();
skip_exit_vmcall();
nop_addr = vmcs_read(GUEST_RIP);
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_INTR_WINDOW);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_INTR_WINDOW_EXITING);
/*
* Ask for "interrupt-window exiting" in a MOV-SS shadow with
@@ -8932,7 +8945,7 @@ static void vmx_intr_window_test(void)
}
boot_idt[DB_VECTOR].type = orig_db_gate_type;
- vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_INTR_WINDOW);
+ vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_INTR_WINDOW_EXITING);
enter_guest();
report_prefix_pop();
}
@@ -8956,14 +8969,14 @@ static void vmx_store_tsc_test(void)
struct vmx_msr_entry msr_entry = { .index = MSR_IA32_TSC };
u64 low, high;
- if (!(ctrl_cpu_rev[0].clr & CPU_USE_TSC_OFFSET)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_USE_TSC_OFFSETTING)) {
report_skip("%s : \"Use TSC offsetting\" exec control not supported", __func__);
return;
}
test_set_guest(vmx_store_tsc_test_guest);
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_USE_TSC_OFFSET);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_USE_TSC_OFFSETTING);
vmcs_write(EXI_MSR_ST_CNT, 1);
vmcs_write(EXIT_MSR_ST_ADDR, virt_to_phys(&msr_entry));
vmcs_write(TSC_OFFSET, GUEST_TSC_OFFSET);
@@ -9506,7 +9519,7 @@ static void enable_vid(void)
vmcs_write(EOI_EXIT_BITMAP2, 0x0);
vmcs_write(EOI_EXIT_BITMAP3, 0x0);
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_SECONDARY | CPU_TPR_SHADOW);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_SECONDARY | CPU_BASED_TPR_SHADOW);
vmcs_set_bits(CPU_EXEC_CTRL1, CPU_VINTD | CPU_VIRT_X2APIC);
}
@@ -10388,7 +10401,7 @@ static void vmx_vmcs_shadow_test(void)
shadow->hdr.shadow_vmcs = 1;
TEST_ASSERT(!vmcs_clear(shadow));
- vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_RDTSC);
+ vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_RDTSC_EXITING);
vmcs_set_bits(CPU_EXEC_CTRL0, CPU_SECONDARY);
vmcs_set_bits(CPU_EXEC_CTRL1, CPU_SHADOW_VMCS);
@@ -10423,7 +10436,7 @@ static void vmx_vmcs_shadow_test(void)
*/
static void reset_guest_tsc_to_zero(void)
{
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_USE_TSC_OFFSET);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_USE_TSC_OFFSETTING);
vmcs_write(TSC_OFFSET, -rdtsc());
}
@@ -10446,7 +10459,7 @@ static unsigned long long host_time_to_guest_time(unsigned long long t)
TEST_ASSERT(!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
!(vmcs_read(CPU_EXEC_CTRL1) & CPU_USE_TSC_SCALING));
- if (vmcs_read(CPU_EXEC_CTRL0) & CPU_USE_TSC_OFFSET)
+ if (vmcs_read(CPU_EXEC_CTRL0) & CPU_BASED_USE_TSC_OFFSETTING)
t += vmcs_read(TSC_OFFSET);
return t;
@@ -10470,7 +10483,7 @@ static void rdtsc_vmexit_diff_test(void)
int fail = 0;
int i;
- if (!(ctrl_cpu_rev[0].clr & CPU_USE_TSC_OFFSET))
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_USE_TSC_OFFSETTING))
test_skip("CPU doesn't support the 'use TSC offsetting' processor-based VM-execution control.\n");
test_set_guest(rdtsc_vmexit_diff_test_guest);
@@ -10691,9 +10704,9 @@ static void __vmx_pf_exception_test(invalidate_tlb_t inv_fn, void *data,
/* Intercept INVLPG when to perform TLB invalidation from L1 (this). */
if (inv_fn)
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_INVLPG);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_INVLPG_EXITING);
else
- vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_INVLPG);
+ vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_INVLPG_EXITING);
enter_guest();
--
2.43.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [kvm-unit-tests PATCH 11/17] x86/vmx: switch to new vmx.h secondary execution control bit
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (9 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 10/17] x86/vmx: switch to new vmx.h primary processor-based VM-execution controls Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 12/17] x86/vmx: switch to new vmx.h secondary execution controls Jon Kohler
` (6 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to new vmx.h's "activate secondary controls" bit (bit 31), which makes
it easier to grok from one code base to another.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 4 +-
x86/vmx.h | 6 +--
x86/vmx_tests.c | 102 ++++++++++++++++++++++++++++--------------------
3 files changed, 63 insertions(+), 49 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index c1845cea..f3368a4a 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1107,7 +1107,7 @@ static void init_vmcs_ctrl(void)
vmcs_write(PIN_CONTROLS, ctrl_pin);
/* Disable VMEXIT of IO instruction */
vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu[0]);
- if (ctrl_cpu_rev[0].set & CPU_SECONDARY) {
+ if (ctrl_cpu_rev[0].set & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) {
ctrl_cpu[1] = (ctrl_cpu[1] | ctrl_cpu_rev[1].set) &
ctrl_cpu_rev[1].clr;
vmcs_write(CPU_EXEC_CTRL1, ctrl_cpu[1]);
@@ -1296,7 +1296,7 @@ static void init_vmx_caps(void)
: MSR_IA32_VMX_ENTRY_CTLS);
ctrl_cpu_rev[0].val = rdmsr(basic_msr.ctrl ? MSR_IA32_VMX_TRUE_PROC
: MSR_IA32_VMX_PROCBASED_CTLS);
- if ((ctrl_cpu_rev[0].clr & CPU_SECONDARY) != 0)
+ if ((ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) != 0)
ctrl_cpu_rev[1].val = rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2);
else
ctrl_cpu_rev[1].val = 0;
diff --git a/x86/vmx.h b/x86/vmx.h
index a83d08b8..16332247 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -435,10 +435,6 @@ enum Ctrl_pin {
PIN_POST_INTR = 1ul << 7,
};
-enum Ctrl0 {
- CPU_SECONDARY = 1ul << 31,
-};
-
enum Ctrl1 {
CPU_VIRT_APIC_ACCESSES = 1ul << 0,
CPU_EPT = 1ul << 1,
@@ -689,7 +685,7 @@ static inline bool is_invept_type_supported(u64 type)
static inline bool is_vpid_supported(void)
{
- return (ctrl_cpu_rev[0].clr & CPU_SECONDARY) &&
+ return (ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
(ctrl_cpu_rev[1].clr & CPU_VPID);
}
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 55d151a4..f092c22d 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -931,7 +931,7 @@ static int insn_intercept_init(struct vmcs *vmcs)
{
u32 ctrl_cpu, cur_insn;
- ctrl_cpu = ctrl_cpu_rev[0].set | CPU_SECONDARY;
+ ctrl_cpu = ctrl_cpu_rev[0].set | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
ctrl_cpu &= ctrl_cpu_rev[0].clr;
vmcs_write(CPU_EXEC_CTRL0, ctrl_cpu);
vmcs_write(CPU_EXEC_CTRL1, ctrl_cpu_rev[1].set);
@@ -1051,7 +1051,7 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
*/
static int __setup_ept(u64 hpa, bool enable_ad)
{
- if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
!(ctrl_cpu_rev[1].clr & CPU_EPT)) {
printf("\tEPT is not supported\n");
return 1;
@@ -1075,7 +1075,8 @@ static int __setup_ept(u64 hpa, bool enable_ad)
eptp |= VMX_EPTP_AD_ENABLE_BIT;
vmcs_write(EPTP, eptp);
- vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0)| CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1)| CPU_EPT);
return 0;
@@ -1129,7 +1130,7 @@ static void setup_dummy_ept(void)
static int enable_unrestricted_guest(bool need_valid_ept)
{
- if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
!(ctrl_cpu_rev[1].clr & CPU_URG) ||
!(ctrl_cpu_rev[1].clr & CPU_EPT))
return 1;
@@ -1139,7 +1140,8 @@ static int enable_unrestricted_guest(bool need_valid_ept)
else
setup_dummy_ept();
- vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) | CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | CPU_URG);
return 0;
@@ -1547,7 +1549,7 @@ static int pml_init(struct vmcs *vmcs)
if (r == VMX_TEST_EXIT)
return r;
- if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
!(ctrl_cpu_rev[1].clr & CPU_PML)) {
printf("\tPML is not supported");
return VMX_TEST_EXIT;
@@ -2100,7 +2102,7 @@ static int disable_rdtscp_init(struct vmcs *vmcs)
{
u32 ctrl_cpu1;
- if (ctrl_cpu_rev[0].clr & CPU_SECONDARY) {
+ if (ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) {
ctrl_cpu1 = vmcs_read(CPU_EXEC_CTRL1);
ctrl_cpu1 &= ~CPU_RDTSCP;
vmcs_write(CPU_EXEC_CTRL1, ctrl_cpu1);
@@ -3643,13 +3645,14 @@ static void test_secondary_processor_based_ctls(void)
u32 secondary;
unsigned bit;
- if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY))
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS))
return;
primary = vmcs_read(CPU_EXEC_CTRL0);
secondary = vmcs_read(CPU_EXEC_CTRL1);
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0, primary |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
printf("\nMSR_IA32_VMX_PROCBASED_CTLS2: %lx\n", ctrl_cpu_rev[1].val);
for (bit = 0; bit < 32; bit++)
test_rsvd_ctl_bit("secondary processor-based controls",
@@ -3659,7 +3662,8 @@ static void test_secondary_processor_based_ctls(void)
* When the "activate secondary controls" VM-execution control
* is clear, there are no checks on the secondary controls.
*/
- vmcs_write(CPU_EXEC_CTRL0, primary & ~CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0,
+ primary & ~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, ~0);
report(vmlaunch(),
"Secondary processor-based controls ignored");
@@ -3788,7 +3792,8 @@ static void test_vmcs_addr_reference(u32 control_bit, enum Encoding field,
if (control_primary) {
vmcs_write(CPU_EXEC_CTRL0, primary | control_bit);
} else {
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0, primary |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, secondary | control_bit);
}
@@ -3800,7 +3805,8 @@ static void test_vmcs_addr_reference(u32 control_bit, enum Encoding field,
if (control_primary) {
vmcs_write(CPU_EXEC_CTRL0, primary & ~control_bit);
} else {
- vmcs_write(CPU_EXEC_CTRL0, primary & ~CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0,
+ primary & ~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, secondary & ~control_bit);
}
@@ -3931,11 +3937,12 @@ static void test_apic_virtual_ctls(void)
/*
* First test
*/
- if (!((ctrl_cpu_rev[0].clr & (CPU_SECONDARY | CPU_BASED_TPR_SHADOW)) ==
- (CPU_SECONDARY | CPU_BASED_TPR_SHADOW)))
+ if (!((ctrl_cpu_rev[0].clr &
+ (CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | CPU_BASED_TPR_SHADOW)) ==
+ (CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | CPU_BASED_TPR_SHADOW)))
return;
- primary |= CPU_SECONDARY;
+ primary |= CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
primary &= ~CPU_BASED_TPR_SHADOW;
vmcs_write(CPU_EXEC_CTRL0, primary);
@@ -3980,7 +3987,8 @@ static void test_apic_virtual_ctls(void)
if (!((ctrl_cpu_rev[1].clr & apic_virt_ctls) == apic_virt_ctls))
return;
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0,
+ primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
secondary &= ~CPU_VIRT_APIC_ACCESSES;
vmcs_write(CPU_EXEC_CTRL1, secondary & ~CPU_VIRT_X2APIC);
report_prefix_pushf("Virtualize x2APIC mode disabled; virtualize APIC access disabled");
@@ -4024,7 +4032,8 @@ static void test_virtual_intr_ctls(void)
(ctrl_pin_rev.clr & PIN_EXTINT)))
return;
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY |
+ vmcs_write(CPU_EXEC_CTRL0,
+ primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS |
CPU_BASED_TPR_SHADOW);
vmcs_write(CPU_EXEC_CTRL1, secondary & ~CPU_VINTD);
vmcs_write(PIN_CONTROLS, pin & ~PIN_EXTINT);
@@ -4094,7 +4103,8 @@ static void test_posted_intr(void)
(ctrl_exit_rev.clr & EXI_INTA)))
return;
- vmcs_write(CPU_EXEC_CTRL0, primary | CPU_SECONDARY |
+ vmcs_write(CPU_EXEC_CTRL0,
+ primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS |
CPU_BASED_TPR_SHADOW);
/*
@@ -4211,7 +4221,8 @@ static void test_vpid(void)
return;
}
- vmcs_write(CPU_EXEC_CTRL0, saved_primary | CPU_SECONDARY);
+ vmcs_write(CPU_EXEC_CTRL0,
+ saved_primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, saved_secondary & ~CPU_VPID);
vmcs_write(VPID, vpid);
report_prefix_pushf("VPID disabled; VPID value %x", vpid);
@@ -4247,7 +4258,7 @@ static void try_tpr_threshold_and_vtpr(unsigned threshold, unsigned vtpr)
u32 secondary = vmcs_read(CPU_EXEC_CTRL1);
if ((primary & CPU_BASED_TPR_SHADOW) &&
- (!(primary & CPU_SECONDARY) ||
+ (!(primary & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
!(secondary & (CPU_VINTD | CPU_VIRT_APIC_ACCESSES))))
valid = (threshold & 0xf) <= ((vtpr >> 4) & 0xf);
@@ -4340,7 +4351,7 @@ static void test_invalid_event_injection(void)
*/
/* Assert that unrestricted guest is disabled or unsupported */
- assert(!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
+ assert(!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
!(secondary_save & CPU_URG));
ent_intr_info = ent_intr_info_base | INTR_TYPE_HARD_EXCEPTION |
@@ -4580,7 +4591,8 @@ static void try_tpr_threshold(unsigned threshold)
u32 primary = vmcs_read(CPU_EXEC_CTRL0);
u32 secondary = vmcs_read(CPU_EXEC_CTRL1);
- if ((primary & CPU_BASED_TPR_SHADOW) && !((primary & CPU_SECONDARY) &&
+ if ((primary & CPU_BASED_TPR_SHADOW) &&
+ !((primary & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
(secondary & CPU_VINTD)))
valid = !(threshold >> 4);
@@ -4644,7 +4656,7 @@ static void test_tpr_threshold(void)
vmcs_write(APIC_VIRT_ADDR, virt_to_phys(virtual_apic_page));
vmcs_write(CPU_EXEC_CTRL0, primary & ~(CPU_BASED_TPR_SHADOW |
- CPU_SECONDARY));
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS));
report_prefix_pushf("Use TPR shadow disabled, secondary controls disabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4654,7 +4666,7 @@ static void test_tpr_threshold(void)
test_tpr_threshold_values();
report_prefix_pop();
- if (!((ctrl_cpu_rev[0].clr & CPU_SECONDARY) &&
+ if (!((ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
(ctrl_cpu_rev[1].clr & (CPU_VINTD | CPU_VIRT_APIC_ACCESSES))))
goto out;
u32 secondary = vmcs_read(CPU_EXEC_CTRL1);
@@ -4666,7 +4678,8 @@ static void test_tpr_threshold(void)
report_prefix_pop();
vmcs_write(CPU_EXEC_CTRL0,
- vmcs_read(CPU_EXEC_CTRL0) | CPU_SECONDARY);
+ vmcs_read(CPU_EXEC_CTRL0) |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
report_prefix_pushf("Use TPR shadow enabled; secondary controls enabled; virtual-interrupt delivery enabled; virtualize APIC accesses disabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4674,14 +4687,16 @@ static void test_tpr_threshold(void)
if (ctrl_cpu_rev[1].clr & CPU_VIRT_APIC_ACCESSES) {
vmcs_write(CPU_EXEC_CTRL0,
- vmcs_read(CPU_EXEC_CTRL0) & ~CPU_SECONDARY);
+ vmcs_read(CPU_EXEC_CTRL0) &
+ ~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1, CPU_VIRT_APIC_ACCESSES);
report_prefix_pushf("Use TPR shadow enabled; secondary controls disabled; virtual-interrupt delivery enabled; virtualize APIC accesses enabled");
test_tpr_threshold_values();
report_prefix_pop();
vmcs_write(CPU_EXEC_CTRL0,
- vmcs_read(CPU_EXEC_CTRL0) | CPU_SECONDARY);
+ vmcs_read(CPU_EXEC_CTRL0) |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
report_prefix_pushf("Use TPR shadow enabled; secondary controls enabled; virtual-interrupt delivery enabled; virtualize APIC accesses enabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4691,7 +4706,8 @@ static void test_tpr_threshold(void)
(CPU_VINTD | CPU_VIRT_APIC_ACCESSES)) ==
(CPU_VINTD | CPU_VIRT_APIC_ACCESSES)) {
vmcs_write(CPU_EXEC_CTRL0,
- vmcs_read(CPU_EXEC_CTRL0) & ~CPU_SECONDARY);
+ vmcs_read(CPU_EXEC_CTRL0) &
+ ~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1,
CPU_VINTD | CPU_VIRT_APIC_ACCESSES);
report_prefix_pushf("Use TPR shadow enabled; secondary controls disabled; virtual-interrupt delivery enabled; virtualize APIC accesses enabled");
@@ -4699,7 +4715,8 @@ static void test_tpr_threshold(void)
report_prefix_pop();
vmcs_write(CPU_EXEC_CTRL0,
- vmcs_read(CPU_EXEC_CTRL0) | CPU_SECONDARY);
+ vmcs_read(CPU_EXEC_CTRL0) |
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
report_prefix_pushf("Use TPR shadow enabled; secondary controls enabled; virtual-interrupt delivery enabled; virtualize APIC accesses enabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4995,13 +5012,13 @@ static void test_pml(void)
u32 primary = primary_saved;
u32 secondary = secondary_saved;
- if (!((ctrl_cpu_rev[0].clr & CPU_SECONDARY) &&
+ if (!((ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
(ctrl_cpu_rev[1].clr & CPU_EPT) && (ctrl_cpu_rev[1].clr & CPU_PML))) {
report_skip("%s : \"Secondary execution\" or \"enable EPT\" or \"enable PML\" control not supported", __func__);
return;
}
- primary |= CPU_SECONDARY;
+ primary |= CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
vmcs_write(CPU_EXEC_CTRL0, primary);
secondary &= ~(CPU_PML | CPU_EPT);
vmcs_write(CPU_EXEC_CTRL1, secondary);
@@ -6178,13 +6195,13 @@ static enum Config_type configure_apic_reg_virt_test(
virtualize_apic_accesses_incorrectly_on;
if (apic_reg_virt_config->activate_secondary_controls) {
- if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)) {
printf("VM-execution control \"activate secondary controls\" NOT supported.\n");
return CONFIG_TYPE_UNSUPPORTED;
}
- cpu_exec_ctrl0 |= CPU_SECONDARY;
+ cpu_exec_ctrl0 |= CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
} else {
- cpu_exec_ctrl0 &= ~CPU_SECONDARY;
+ cpu_exec_ctrl0 &= ~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
}
if (apic_reg_virt_config->virtualize_apic_accesses) {
@@ -9519,7 +9536,8 @@ static void enable_vid(void)
vmcs_write(EOI_EXIT_BITMAP2, 0x0);
vmcs_write(EOI_EXIT_BITMAP3, 0x0);
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_SECONDARY | CPU_BASED_TPR_SHADOW);
+ vmcs_set_bits(CPU_EXEC_CTRL0,
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | CPU_BASED_TPR_SHADOW);
vmcs_set_bits(CPU_EXEC_CTRL1, CPU_VINTD | CPU_VIRT_X2APIC);
}
@@ -9696,7 +9714,7 @@ static void vmx_apic_passthrough(bool set_irq_line_from_thread)
report_skip("%s : No test device enabled", __func__);
return;
}
- u64 cpu_ctrl_0 = CPU_SECONDARY;
+ u64 cpu_ctrl_0 = CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
u64 cpu_ctrl_1 = 0;
disable_intercept_for_x2apic_msrs();
@@ -10015,7 +10033,7 @@ static void sipi_test_ap_thread(void *data)
struct vmcs *ap_vmcs;
u64 *ap_vmxon_region;
void *ap_stack, *ap_syscall_stack;
- u64 cpu_ctrl_0 = CPU_SECONDARY;
+ u64 cpu_ctrl_0 = CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
u64 cpu_ctrl_1 = 0;
/* Enter VMX operation (i.e. exec VMXON) */
@@ -10081,7 +10099,7 @@ static void vmx_sipi_signal_test(void)
return;
}
- u64 cpu_ctrl_0 = CPU_SECONDARY;
+ u64 cpu_ctrl_0 = CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
u64 cpu_ctrl_1 = 0;
/* passthrough lapic to L2 */
@@ -10372,7 +10390,7 @@ static void vmx_vmcs_shadow_test(void)
u8 *bitmap[2];
struct vmcs *shadow;
- if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY)) {
+ if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)) {
report_skip("%s : \"Activate secondary controls\" not supported", __func__);
return;
}
@@ -10402,7 +10420,7 @@ static void vmx_vmcs_shadow_test(void)
TEST_ASSERT(!vmcs_clear(shadow));
vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_RDTSC_EXITING);
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_SECONDARY);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_set_bits(CPU_EXEC_CTRL1, CPU_SHADOW_VMCS);
vmcs_write(VMCS_LINK_PTR, virt_to_phys(shadow));
@@ -10456,7 +10474,7 @@ static void rdtsc_vmexit_diff_test_guest(void)
*/
static unsigned long long host_time_to_guest_time(unsigned long long t)
{
- TEST_ASSERT(!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
+ TEST_ASSERT(!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
!(vmcs_read(CPU_EXEC_CTRL1) & CPU_USE_TSC_SCALING));
if (vmcs_read(CPU_EXEC_CTRL0) & CPU_BASED_USE_TSC_OFFSETTING)
@@ -10801,7 +10819,7 @@ static void __vmx_pf_vpid_test(invalidate_tlb_t inv_fn, u16 vpid)
if (!is_invvpid_supported())
test_skip("INVVPID unsupported");
- vmcs_set_bits(CPU_EXEC_CTRL0, CPU_SECONDARY);
+ vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_set_bits(CPU_EXEC_CTRL1, CPU_VPID);
vmcs_write(VPID, vpid);
--
2.43.0
* [kvm-unit-tests PATCH 12/17] x86/vmx: switch to new vmx.h secondary execution controls
@ 2025-09-16 17:22 ` Jon Kohler
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the secondary execution control definitions in the new
vmx.h, which makes it easier to grok from one code base to the other.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 6 +-
x86/vmx.h | 20 +----
x86/vmx_tests.c | 220 ++++++++++++++++++++++++++++--------------------
3 files changed, 136 insertions(+), 110 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index f3368a4a..dc52efa7 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1300,7 +1300,8 @@ static void init_vmx_caps(void)
ctrl_cpu_rev[1].val = rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2);
else
ctrl_cpu_rev[1].val = 0;
- if ((ctrl_cpu_rev[1].clr & (CPU_EPT | CPU_VPID)) != 0)
+ if ((ctrl_cpu_rev[1].clr &
+ (SECONDARY_EXEC_ENABLE_EPT | SECONDARY_EXEC_ENABLE_VPID)) != 0)
ept_vpid.val = rdmsr(MSR_IA32_VMX_EPT_VPID_CAP);
else
ept_vpid.val = 0;
@@ -1607,7 +1608,8 @@ static void test_vmx_caps(void)
"MSR_IA32_VMX_BASIC");
val = rdmsr(MSR_IA32_VMX_MISC);
- report((!(ctrl_cpu_rev[1].clr & CPU_URG) || val & (1ul << 5)) &&
+ report((!(ctrl_cpu_rev[1].clr &
+ SECONDARY_EXEC_UNRESTRICTED_GUEST) || val & (1ul << 5)) &&
((val >> 16) & 0x1ff) <= 256 &&
(val & 0x80007e00) == 0,
"MSR_IA32_VMX_MISC");
diff --git a/x86/vmx.h b/x86/vmx.h
index 16332247..36e784a7 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -435,24 +435,6 @@ enum Ctrl_pin {
PIN_POST_INTR = 1ul << 7,
};
-enum Ctrl1 {
- CPU_VIRT_APIC_ACCESSES = 1ul << 0,
- CPU_EPT = 1ul << 1,
- CPU_DESC_TABLE = 1ul << 2,
- CPU_RDTSCP = 1ul << 3,
- CPU_VIRT_X2APIC = 1ul << 4,
- CPU_VPID = 1ul << 5,
- CPU_WBINVD = 1ul << 6,
- CPU_URG = 1ul << 7,
- CPU_APIC_REG_VIRT = 1ul << 8,
- CPU_VINTD = 1ul << 9,
- CPU_RDRAND = 1ul << 11,
- CPU_SHADOW_VMCS = 1ul << 14,
- CPU_RDSEED = 1ul << 16,
- CPU_PML = 1ul << 17,
- CPU_USE_TSC_SCALING = 1ul << 25,
-};
-
enum Intr_type {
VMX_INTR_TYPE_EXT_INTR = 0,
VMX_INTR_TYPE_NMI_INTR = 2,
@@ -686,7 +668,7 @@ static inline bool is_invept_type_supported(u64 type)
static inline bool is_vpid_supported(void)
{
return (ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
- (ctrl_cpu_rev[1].clr & CPU_VPID);
+ (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_ENABLE_VPID);
}
static inline bool is_invvpid_supported(void)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index f092c22d..ba50f2ee 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -909,17 +909,27 @@ static struct insn_table insn_table[] = {
0, 0, 0, this_cpu_has_mwait},
{"PAUSE", CPU_BASED_PAUSE_EXITING, insn_pause, INSN_CPU0, 40, 0, 0, 0},
// Flags for Secondary Processor-Based VM-Execution Controls
- {"WBINVD", CPU_WBINVD, insn_wbinvd, INSN_CPU1, 54, 0, 0, 0},
- {"DESC_TABLE (SGDT)", CPU_DESC_TABLE, insn_sgdt, INSN_CPU1, 46, 0, 0, 0},
- {"DESC_TABLE (LGDT)", CPU_DESC_TABLE, insn_lgdt, INSN_CPU1, 46, 0, 0, 0},
- {"DESC_TABLE (SIDT)", CPU_DESC_TABLE, insn_sidt, INSN_CPU1, 46, 0, 0, 0},
- {"DESC_TABLE (LIDT)", CPU_DESC_TABLE, insn_lidt, INSN_CPU1, 46, 0, 0, 0},
- {"DESC_TABLE (SLDT)", CPU_DESC_TABLE, insn_sldt, INSN_CPU1, 47, 0, 0, 0},
- {"DESC_TABLE (LLDT)", CPU_DESC_TABLE, insn_lldt, INSN_CPU1, 47, 0, 0, 0},
- {"DESC_TABLE (STR)", CPU_DESC_TABLE, insn_str, INSN_CPU1, 47, 0, 0, 0},
+ {"WBINVD", SECONDARY_EXEC_WBINVD_EXITING, insn_wbinvd, INSN_CPU1, 54,
+ 0, 0, 0},
+ {"DESC_TABLE (SGDT)", SECONDARY_EXEC_DESC, insn_sgdt, INSN_CPU1, 46,
+ 0, 0, 0},
+ {"DESC_TABLE (LGDT)", SECONDARY_EXEC_DESC, insn_lgdt, INSN_CPU1, 46,
+ 0, 0, 0},
+ {"DESC_TABLE (SIDT)", SECONDARY_EXEC_DESC, insn_sidt, INSN_CPU1, 46,
+ 0, 0, 0},
+ {"DESC_TABLE (LIDT)", SECONDARY_EXEC_DESC, insn_lidt, INSN_CPU1, 46,
+ 0, 0, 0},
+ {"DESC_TABLE (SLDT)", SECONDARY_EXEC_DESC, insn_sldt, INSN_CPU1, 47,
+ 0, 0, 0},
+ {"DESC_TABLE (LLDT)", SECONDARY_EXEC_DESC, insn_lldt, INSN_CPU1, 47,
+ 0, 0, 0},
+ {"DESC_TABLE (STR)", SECONDARY_EXEC_DESC, insn_str, INSN_CPU1, 47,
+ 0, 0, 0},
/* LTR causes a #GP if done with a busy selector, so it is not tested. */
- {"RDRAND", CPU_RDRAND, insn_rdrand, INSN_CPU1, VMX_RDRAND, 0, 0, 0},
- {"RDSEED", CPU_RDSEED, insn_rdseed, INSN_CPU1, VMX_RDSEED, 0, 0, 0},
+ {"RDRAND", SECONDARY_EXEC_RDRAND_EXITING, insn_rdrand, INSN_CPU1,
+ VMX_RDRAND, 0, 0, 0},
+ {"RDSEED", SECONDARY_EXEC_RDSEED_EXITING, insn_rdseed, INSN_CPU1,
+ VMX_RDSEED, 0, 0, 0},
// Instructions always trap
{"CPUID", 0, insn_cpuid, INSN_ALWAYS_TRAP, 10, 0, 0, 0},
{"INVD", 0, insn_invd, INSN_ALWAYS_TRAP, 13, 0, 0, 0},
@@ -1052,7 +1062,7 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
static int __setup_ept(u64 hpa, bool enable_ad)
{
if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
- !(ctrl_cpu_rev[1].clr & CPU_EPT)) {
+ !(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_ENABLE_EPT)) {
printf("\tEPT is not supported\n");
return 1;
}
@@ -1077,7 +1087,8 @@ static int __setup_ept(u64 hpa, bool enable_ad)
vmcs_write(EPTP, eptp);
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) |
CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1)| CPU_EPT);
+ vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) |
+ SECONDARY_EXEC_ENABLE_EPT);
return 0;
}
@@ -1131,8 +1142,8 @@ static void setup_dummy_ept(void)
static int enable_unrestricted_guest(bool need_valid_ept)
{
if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
- !(ctrl_cpu_rev[1].clr & CPU_URG) ||
- !(ctrl_cpu_rev[1].clr & CPU_EPT))
+ !(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_UNRESTRICTED_GUEST) ||
+ !(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_ENABLE_EPT))
return 1;
if (need_valid_ept)
@@ -1142,7 +1153,8 @@ static int enable_unrestricted_guest(bool need_valid_ept)
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) |
CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | CPU_URG);
+ vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) |
+ SECONDARY_EXEC_UNRESTRICTED_GUEST);
return 0;
}
@@ -1550,7 +1562,7 @@ static int pml_init(struct vmcs *vmcs)
return r;
if (!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
- !(ctrl_cpu_rev[1].clr & CPU_PML)) {
+ !(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_ENABLE_PML)) {
printf("\tPML is not supported");
return VMX_TEST_EXIT;
}
@@ -1559,7 +1571,7 @@ static int pml_init(struct vmcs *vmcs)
vmcs_write(PMLADDR, (u64)pml_log);
vmcs_write(GUEST_PML_INDEX, PML_INDEX - 1);
- ctrl_cpu = vmcs_read(CPU_EXEC_CTRL1) | CPU_PML;
+ ctrl_cpu = vmcs_read(CPU_EXEC_CTRL1) | SECONDARY_EXEC_ENABLE_PML;
vmcs_write(CPU_EXEC_CTRL1, ctrl_cpu);
return VMX_TEST_START;
@@ -2104,7 +2116,7 @@ static int disable_rdtscp_init(struct vmcs *vmcs)
if (ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) {
ctrl_cpu1 = vmcs_read(CPU_EXEC_CTRL1);
- ctrl_cpu1 &= ~CPU_RDTSCP;
+ ctrl_cpu1 &= ~SECONDARY_EXEC_ENABLE_RDTSCP;
vmcs_write(CPU_EXEC_CTRL1, ctrl_cpu1);
}
@@ -3885,7 +3897,8 @@ static void test_apic_access_addr(void)
vmcs_write(APIC_ACCS_ADDR, virt_to_phys(apic_access_page));
- test_vmcs_addr_reference(CPU_VIRT_APIC_ACCESSES, APIC_ACCS_ADDR,
+ test_vmcs_addr_reference(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES,
+ APIC_ACCS_ADDR,
"APIC-access address",
"virtualize APIC-accesses", PAGE_SIZE,
true, false);
@@ -3896,9 +3909,9 @@ static bool set_bit_pattern(u8 mask, u32 *secondary)
u8 i;
bool flag = false;
u32 test_bits[3] = {
- CPU_VIRT_X2APIC,
- CPU_APIC_REG_VIRT,
- CPU_VINTD
+ SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE,
+ SECONDARY_EXEC_APIC_REGISTER_VIRT,
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY
};
for (i = 0; i < ARRAY_SIZE(test_bits); i++) {
@@ -3948,7 +3961,9 @@ static void test_apic_virtual_ctls(void)
while (1) {
for (j = 1; j < 8; j++) {
- secondary &= ~(CPU_VIRT_X2APIC | CPU_APIC_REG_VIRT | CPU_VINTD);
+ secondary &= ~(SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
+ SECONDARY_EXEC_APIC_REGISTER_VIRT |
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
if (primary & CPU_BASED_TPR_SHADOW) {
is_ctrl_valid = true;
} else {
@@ -3959,8 +3974,12 @@ static void test_apic_virtual_ctls(void)
}
vmcs_write(CPU_EXEC_CTRL1, secondary);
- report_prefix_pushf("Use TPR shadow %s, virtualize x2APIC mode %s, APIC-register virtualization %s, virtual-interrupt delivery %s",
- str, (secondary & CPU_VIRT_X2APIC) ? "enabled" : "disabled", (secondary & CPU_APIC_REG_VIRT) ? "enabled" : "disabled", (secondary & CPU_VINTD) ? "enabled" : "disabled");
+ report_prefix_pushf(
+ "Use TPR shadow %s, virtualize x2APIC mode %s, APIC-register virtualization %s, virtual-interrupt delivery %s",
+ str,
+ (secondary & SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) ? "enabled" : "disabled",
+ (secondary & SECONDARY_EXEC_APIC_REGISTER_VIRT) ? "enabled" : "disabled",
+ (secondary & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) ? "enabled" : "disabled");
if (is_ctrl_valid)
test_vmx_valid_controls();
else
@@ -3980,7 +3999,8 @@ static void test_apic_virtual_ctls(void)
/*
* Second test
*/
- u32 apic_virt_ctls = (CPU_VIRT_X2APIC | CPU_VIRT_APIC_ACCESSES);
+ u32 apic_virt_ctls = (SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
+ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
primary = saved_primary;
secondary = saved_secondary;
@@ -3989,23 +4009,27 @@ static void test_apic_virtual_ctls(void)
vmcs_write(CPU_EXEC_CTRL0,
primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- secondary &= ~CPU_VIRT_APIC_ACCESSES;
- vmcs_write(CPU_EXEC_CTRL1, secondary & ~CPU_VIRT_X2APIC);
+ secondary &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+ vmcs_write(CPU_EXEC_CTRL1,
+ secondary & ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
report_prefix_pushf("Virtualize x2APIC mode disabled; virtualize APIC access disabled");
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(CPU_EXEC_CTRL1, secondary | CPU_VIRT_APIC_ACCESSES);
+ vmcs_write(CPU_EXEC_CTRL1,
+ secondary | SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
report_prefix_pushf("Virtualize x2APIC mode disabled; virtualize APIC access enabled");
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(CPU_EXEC_CTRL1, secondary | CPU_VIRT_X2APIC);
+ vmcs_write(CPU_EXEC_CTRL1,
+ secondary | SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
report_prefix_pushf("Virtualize x2APIC mode enabled; virtualize APIC access enabled");
test_vmx_invalid_controls();
report_prefix_pop();
- vmcs_write(CPU_EXEC_CTRL1, secondary & ~CPU_VIRT_APIC_ACCESSES);
+ vmcs_write(CPU_EXEC_CTRL1,
+ secondary & ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
report_prefix_pushf("Virtualize x2APIC mode enabled; virtualize APIC access disabled");
test_vmx_valid_controls();
report_prefix_pop();
@@ -4028,20 +4052,22 @@ static void test_virtual_intr_ctls(void)
u32 secondary = saved_secondary;
u32 pin = saved_pin;
- if (!((ctrl_cpu_rev[1].clr & CPU_VINTD) &&
+ if (!((ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
(ctrl_pin_rev.clr & PIN_EXTINT)))
return;
vmcs_write(CPU_EXEC_CTRL0,
primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS |
CPU_BASED_TPR_SHADOW);
- vmcs_write(CPU_EXEC_CTRL1, secondary & ~CPU_VINTD);
+ vmcs_write(CPU_EXEC_CTRL1,
+ secondary & ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
vmcs_write(PIN_CONTROLS, pin & ~PIN_EXTINT);
report_prefix_pushf("Virtualize interrupt-delivery disabled; external-interrupt exiting disabled");
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(CPU_EXEC_CTRL1, secondary | CPU_VINTD);
+ vmcs_write(CPU_EXEC_CTRL1,
+ secondary | SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
report_prefix_pushf("Virtualize interrupt-delivery enabled; external-interrupt exiting disabled");
test_vmx_invalid_controls();
report_prefix_pop();
@@ -4099,7 +4125,7 @@ static void test_posted_intr(void)
int i;
if (!((ctrl_pin_rev.clr & PIN_POST_INTR) &&
- (ctrl_cpu_rev[1].clr & CPU_VINTD) &&
+ (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
(ctrl_exit_rev.clr & EXI_INTA)))
return;
@@ -4112,13 +4138,13 @@ static void test_posted_intr(void)
*/
pin |= PIN_POST_INTR;
vmcs_write(PIN_CONTROLS, pin);
- secondary &= ~CPU_VINTD;
+ secondary &= ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Process-posted-interrupts enabled; virtual-interrupt-delivery disabled");
test_vmx_invalid_controls();
report_prefix_pop();
- secondary |= CPU_VINTD;
+ secondary |= SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Process-posted-interrupts enabled; virtual-interrupt-delivery enabled");
test_vmx_invalid_controls();
@@ -4136,13 +4162,13 @@ static void test_posted_intr(void)
test_vmx_valid_controls();
report_prefix_pop();
- secondary &= ~CPU_VINTD;
+ secondary &= ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Process-posted-interrupts enabled; virtual-interrupt-delivery disabled; acknowledge-interrupt-on-exit enabled");
test_vmx_invalid_controls();
report_prefix_pop();
- secondary |= CPU_VINTD;
+ secondary |= SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Process-posted-interrupts enabled; virtual-interrupt-delivery enabled; acknowledge-interrupt-on-exit enabled");
test_vmx_valid_controls();
@@ -4223,13 +4249,15 @@ static void test_vpid(void)
vmcs_write(CPU_EXEC_CTRL0,
saved_primary | CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- vmcs_write(CPU_EXEC_CTRL1, saved_secondary & ~CPU_VPID);
+ vmcs_write(CPU_EXEC_CTRL1,
+ saved_secondary & ~SECONDARY_EXEC_ENABLE_VPID);
vmcs_write(VPID, vpid);
report_prefix_pushf("VPID disabled; VPID value %x", vpid);
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(CPU_EXEC_CTRL1, saved_secondary | CPU_VPID);
+ vmcs_write(CPU_EXEC_CTRL1,
+ saved_secondary | SECONDARY_EXEC_ENABLE_VPID);
report_prefix_pushf("VPID enabled; VPID value %x", vpid);
test_vmx_invalid_controls();
report_prefix_pop();
@@ -4259,7 +4287,8 @@ static void try_tpr_threshold_and_vtpr(unsigned threshold, unsigned vtpr)
if ((primary & CPU_BASED_TPR_SHADOW) &&
(!(primary & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
- !(secondary & (CPU_VINTD | CPU_VIRT_APIC_ACCESSES))))
+ !(secondary & (SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
+ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))))
valid = (threshold & 0xf) <= ((vtpr >> 4) & 0xf);
set_vtpr(vtpr);
@@ -4352,7 +4381,7 @@ static void test_invalid_event_injection(void)
/* Assert that unrestricted guest is disabled or unsupported */
assert(!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
- !(secondary_save & CPU_URG));
+ !(secondary_save & SECONDARY_EXEC_UNRESTRICTED_GUEST));
ent_intr_info = ent_intr_info_base | INTR_TYPE_HARD_EXCEPTION |
GP_VECTOR;
@@ -4593,7 +4622,7 @@ static void try_tpr_threshold(unsigned threshold)
if ((primary & CPU_BASED_TPR_SHADOW) &&
!((primary & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
- (secondary & CPU_VINTD)))
+ (secondary & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)))
valid = !(threshold >> 4);
set_vtpr(-1);
@@ -4667,12 +4696,14 @@ static void test_tpr_threshold(void)
report_prefix_pop();
if (!((ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
- (ctrl_cpu_rev[1].clr & (CPU_VINTD | CPU_VIRT_APIC_ACCESSES))))
+ (ctrl_cpu_rev[1].clr & (SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
+ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))))
goto out;
u32 secondary = vmcs_read(CPU_EXEC_CTRL1);
- if (ctrl_cpu_rev[1].clr & CPU_VINTD) {
- vmcs_write(CPU_EXEC_CTRL1, CPU_VINTD);
+ if (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) {
+ vmcs_write(CPU_EXEC_CTRL1,
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
report_prefix_pushf("Use TPR shadow enabled; secondary controls disabled; virtual-interrupt delivery enabled; virtualize APIC accesses disabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4685,11 +4716,12 @@ static void test_tpr_threshold(void)
report_prefix_pop();
}
- if (ctrl_cpu_rev[1].clr & CPU_VIRT_APIC_ACCESSES) {
+ if (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES) {
vmcs_write(CPU_EXEC_CTRL0,
vmcs_read(CPU_EXEC_CTRL0) &
~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- vmcs_write(CPU_EXEC_CTRL1, CPU_VIRT_APIC_ACCESSES);
+ vmcs_write(CPU_EXEC_CTRL1,
+ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
report_prefix_pushf("Use TPR shadow enabled; secondary controls disabled; virtual-interrupt delivery enabled; virtualize APIC accesses enabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4703,13 +4735,14 @@ static void test_tpr_threshold(void)
}
if ((ctrl_cpu_rev[1].clr &
- (CPU_VINTD | CPU_VIRT_APIC_ACCESSES)) ==
- (CPU_VINTD | CPU_VIRT_APIC_ACCESSES)) {
+ (SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY | SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) ==
+ (SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY | SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
vmcs_write(CPU_EXEC_CTRL0,
vmcs_read(CPU_EXEC_CTRL0) &
~CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
vmcs_write(CPU_EXEC_CTRL1,
- CPU_VINTD | CPU_VIRT_APIC_ACCESSES);
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
+ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
report_prefix_pushf("Use TPR shadow enabled; secondary controls disabled; virtual-interrupt delivery enabled; virtualize APIC accesses enabled");
test_tpr_threshold_values();
report_prefix_pop();
@@ -4961,29 +4994,30 @@ static void test_ept_eptp(void)
report_prefix_pop();
}
- secondary &= ~(CPU_EPT | CPU_URG);
+ secondary &= ~(SECONDARY_EXEC_ENABLE_EPT |
+ SECONDARY_EXEC_UNRESTRICTED_GUEST);
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Enable-EPT disabled, unrestricted-guest disabled");
test_vmx_valid_controls();
report_prefix_pop();
- if (!(ctrl_cpu_rev[1].clr & CPU_URG))
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_UNRESTRICTED_GUEST))
goto skip_unrestricted_guest;
- secondary |= CPU_URG;
+ secondary |= SECONDARY_EXEC_UNRESTRICTED_GUEST;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Enable-EPT disabled, unrestricted-guest enabled");
test_vmx_invalid_controls();
report_prefix_pop();
- secondary |= CPU_EPT;
+ secondary |= SECONDARY_EXEC_ENABLE_EPT;
setup_dummy_ept();
report_prefix_pushf("Enable-EPT enabled, unrestricted-guest enabled");
test_vmx_valid_controls();
report_prefix_pop();
skip_unrestricted_guest:
- secondary &= ~CPU_URG;
+ secondary &= ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Enable-EPT enabled, unrestricted-guest disabled");
test_vmx_valid_controls();
@@ -5013,38 +5047,40 @@ static void test_pml(void)
u32 secondary = secondary_saved;
if (!((ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) &&
- (ctrl_cpu_rev[1].clr & CPU_EPT) && (ctrl_cpu_rev[1].clr & CPU_PML))) {
+ (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_ENABLE_EPT) &&
+ (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_ENABLE_PML))) {
report_skip("%s : \"Secondary execution\" or \"enable EPT\" or \"enable PML\" control not supported", __func__);
return;
}
primary |= CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
vmcs_write(CPU_EXEC_CTRL0, primary);
- secondary &= ~(CPU_PML | CPU_EPT);
+ secondary &= ~(SECONDARY_EXEC_ENABLE_PML | SECONDARY_EXEC_ENABLE_EPT);
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("enable-PML disabled, enable-EPT disabled");
test_vmx_valid_controls();
report_prefix_pop();
- secondary |= CPU_PML;
+ secondary |= SECONDARY_EXEC_ENABLE_PML;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("enable-PML enabled, enable-EPT disabled");
test_vmx_invalid_controls();
report_prefix_pop();
- secondary |= CPU_EPT;
+ secondary |= SECONDARY_EXEC_ENABLE_EPT;
setup_dummy_ept();
report_prefix_pushf("enable-PML enabled, enable-EPT enabled");
test_vmx_valid_controls();
report_prefix_pop();
- secondary &= ~CPU_PML;
+ secondary &= ~SECONDARY_EXEC_ENABLE_PML;
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("enable-PML disabled, enable EPT enabled");
test_vmx_valid_controls();
report_prefix_pop();
- test_vmcs_addr_reference(CPU_PML, PMLADDR, "PML address", "PML",
+ test_vmcs_addr_reference(SECONDARY_EXEC_ENABLE_PML,
+ PMLADDR, "PML address", "PML",
PAGE_SIZE, false, false);
vmcs_write(CPU_EXEC_CTRL0, primary_saved);
@@ -5297,13 +5333,14 @@ static void vmx_mtf_pdpte_test(void)
return;
}
- if (!(ctrl_cpu_rev[1].clr & CPU_URG)) {
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_UNRESTRICTED_GUEST)) {
report_skip("%s : \"Unrestricted guest\" exec control not supported", __func__);
return;
}
vmcs_write(EXC_BITMAP, ~0);
- vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | CPU_URG);
+ vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) |
+ SECONDARY_EXEC_UNRESTRICTED_GUEST);
/*
* Copy the guest code to an identity-mapped page.
@@ -6205,13 +6242,13 @@ static enum Config_type configure_apic_reg_virt_test(
}
if (apic_reg_virt_config->virtualize_apic_accesses) {
- if (!(ctrl_cpu_rev[1].clr & CPU_VIRT_APIC_ACCESSES)) {
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
printf("VM-execution control \"virtualize APIC accesses\" NOT supported.\n");
return CONFIG_TYPE_UNSUPPORTED;
}
- cpu_exec_ctrl1 |= CPU_VIRT_APIC_ACCESSES;
+ cpu_exec_ctrl1 |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
} else {
- cpu_exec_ctrl1 &= ~CPU_VIRT_APIC_ACCESSES;
+ cpu_exec_ctrl1 &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
}
if (apic_reg_virt_config->use_tpr_shadow) {
@@ -6225,23 +6262,23 @@ static enum Config_type configure_apic_reg_virt_test(
}
if (apic_reg_virt_config->apic_register_virtualization) {
- if (!(ctrl_cpu_rev[1].clr & CPU_APIC_REG_VIRT)) {
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_APIC_REGISTER_VIRT)) {
printf("VM-execution control \"APIC-register virtualization\" NOT supported.\n");
return CONFIG_TYPE_UNSUPPORTED;
}
- cpu_exec_ctrl1 |= CPU_APIC_REG_VIRT;
+ cpu_exec_ctrl1 |= SECONDARY_EXEC_APIC_REGISTER_VIRT;
} else {
- cpu_exec_ctrl1 &= ~CPU_APIC_REG_VIRT;
+ cpu_exec_ctrl1 &= ~SECONDARY_EXEC_APIC_REGISTER_VIRT;
}
if (apic_reg_virt_config->virtualize_x2apic_mode) {
- if (!(ctrl_cpu_rev[1].clr & CPU_VIRT_X2APIC)) {
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
printf("VM-execution control \"virtualize x2APIC mode\" NOT supported.\n");
return CONFIG_TYPE_UNSUPPORTED;
}
- cpu_exec_ctrl1 |= CPU_VIRT_X2APIC;
+ cpu_exec_ctrl1 |= SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
} else {
- cpu_exec_ctrl1 &= ~CPU_VIRT_X2APIC;
+ cpu_exec_ctrl1 &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
}
vmcs_write(CPU_EXEC_CTRL0, cpu_exec_ctrl0);
@@ -6255,8 +6292,8 @@ static enum Config_type configure_apic_reg_virt_test(
static bool cpu_has_apicv(void)
{
- return ((ctrl_cpu_rev[1].clr & CPU_APIC_REG_VIRT) &&
- (ctrl_cpu_rev[1].clr & CPU_VINTD) &&
+ return ((ctrl_cpu_rev[1].clr & SECONDARY_EXEC_APIC_REGISTER_VIRT) &&
+ (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
(ctrl_pin_rev.clr & PIN_POST_INTR));
}
@@ -6277,7 +6314,7 @@ static void apic_reg_virt_test(void)
}
control = cpu_exec_ctrl1;
- control &= ~CPU_VINTD;
+ control &= ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
vmcs_write(CPU_EXEC_CTRL1, control);
test_set_guest(apic_reg_virt_guest);
@@ -7004,13 +7041,13 @@ static enum Config_type configure_virt_x2apic_mode_test(
}
if (virt_x2apic_mode_config->virtual_interrupt_delivery) {
- if (!(ctrl_cpu_rev[1].clr & CPU_VINTD)) {
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)) {
report_skip("%s : \"virtual-interrupt delivery\" exec control not supported", __func__);
return CONFIG_TYPE_UNSUPPORTED;
}
- cpu_exec_ctrl1 |= CPU_VINTD;
+ cpu_exec_ctrl1 |= SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
} else {
- cpu_exec_ctrl1 &= ~CPU_VINTD;
+ cpu_exec_ctrl1 &= ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
}
vmcs_write(CPU_EXEC_CTRL0, cpu_exec_ctrl0);
@@ -8089,7 +8126,7 @@ static void test_guest_segment_sel_fields(void)
cpu_ctrl1_saved = vmcs_read(CPU_EXEC_CTRL1);
ar_saved = vmcs_read(GUEST_AR_SS);
/* Turn off "unrestricted guest" vm-execution control */
- vmcs_write(CPU_EXEC_CTRL1, cpu_ctrl1_saved & ~CPU_URG);
+ vmcs_write(CPU_EXEC_CTRL1, cpu_ctrl1_saved & ~SECONDARY_EXEC_UNRESTRICTED_GUEST);
cs_rpl_bits = vmcs_read(GUEST_SEL_CS) & 0x3;
sel_saved = vmcs_read(GUEST_SEL_SS);
TEST_INVALID_SEG_SEL(GUEST_SEL_SS, ((sel_saved & ~0x3) | (~cs_rpl_bits & 0x3)));
@@ -8302,13 +8339,17 @@ static void vmentry_unrestricted_guest_test(void)
test_set_guest(unrestricted_guest_main);
setup_unrestricted_guest();
- test_guest_state("Unrestricted guest test", false, CPU_URG, "CPU_URG");
+ test_guest_state("Unrestricted guest test", false,
+ SECONDARY_EXEC_UNRESTRICTED_GUEST,
+ "SECONDARY_EXEC_UNRESTRICTED_GUEST");
/*
* Let the guest finish execution as a regular guest
*/
unsetup_unrestricted_guest();
- vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) & ~CPU_URG);
+ vmcs_write(CPU_EXEC_CTRL1,
+ vmcs_read(CPU_EXEC_CTRL1) &
+ ~SECONDARY_EXEC_UNRESTRICTED_GUEST);
enter_guest();
}
@@ -9538,7 +9579,8 @@ static void enable_vid(void)
vmcs_set_bits(CPU_EXEC_CTRL0,
CPU_BASED_ACTIVATE_SECONDARY_CONTROLS | CPU_BASED_TPR_SHADOW);
- vmcs_set_bits(CPU_EXEC_CTRL1, CPU_VINTD | CPU_VIRT_X2APIC);
+ vmcs_set_bits(CPU_EXEC_CTRL1, SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
+ SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
}
#define PI_VECTOR 255
@@ -10395,7 +10437,7 @@ static void vmx_vmcs_shadow_test(void)
return;
}
- if (!(ctrl_cpu_rev[1].clr & CPU_SHADOW_VMCS)) {
+ if (!(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_SHADOW_VMCS)) {
report_skip("%s : \"VMCS shadowing\" not supported", __func__);
return;
}
@@ -10421,7 +10463,7 @@ static void vmx_vmcs_shadow_test(void)
vmcs_clear_bits(CPU_EXEC_CTRL0, CPU_BASED_RDTSC_EXITING);
vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- vmcs_set_bits(CPU_EXEC_CTRL1, CPU_SHADOW_VMCS);
+ vmcs_set_bits(CPU_EXEC_CTRL1, SECONDARY_EXEC_SHADOW_VMCS);
vmcs_write(VMCS_LINK_PTR, virt_to_phys(shadow));
report_prefix_push("valid link pointer");
@@ -10475,7 +10517,7 @@ static void rdtsc_vmexit_diff_test_guest(void)
static unsigned long long host_time_to_guest_time(unsigned long long t)
{
TEST_ASSERT(!(ctrl_cpu_rev[0].clr & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ||
- !(vmcs_read(CPU_EXEC_CTRL1) & CPU_USE_TSC_SCALING));
+ !(vmcs_read(CPU_EXEC_CTRL1) & SECONDARY_EXEC_TSC_SCALING));
if (vmcs_read(CPU_EXEC_CTRL0) & CPU_BASED_USE_TSC_OFFSETTING)
t += vmcs_read(TSC_OFFSET);
@@ -10783,7 +10825,7 @@ static void invalidate_tlb_no_vpid(void *data)
static void vmx_pf_no_vpid_test(void)
{
if (is_vpid_supported())
- vmcs_clear_bits(CPU_EXEC_CTRL1, CPU_VPID);
+ vmcs_clear_bits(CPU_EXEC_CTRL1, SECONDARY_EXEC_ENABLE_VPID);
__vmx_pf_exception_test(invalidate_tlb_no_vpid, NULL,
vmx_pf_exception_test_guest);
@@ -10820,7 +10862,7 @@ static void __vmx_pf_vpid_test(invalidate_tlb_t inv_fn, u16 vpid)
test_skip("INVVPID unsupported");
vmcs_set_bits(CPU_EXEC_CTRL0, CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
- vmcs_set_bits(CPU_EXEC_CTRL1, CPU_VPID);
+ vmcs_set_bits(CPU_EXEC_CTRL1, SECONDARY_EXEC_ENABLE_VPID);
vmcs_write(VPID, vpid);
__vmx_pf_exception_test(inv_fn, &vpid, vmx_pf_exception_test_guest);
--
2.43.0
* [kvm-unit-tests PATCH 13/17] x86/vmx: switch to new vmx.h pin based VM-execution controls
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (11 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 12/17] x86/vmx: switch to new vmx.h secondary execution controls Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 14/17] x86/vmx: switch to new vmx.h exit controls Jon Kohler
` (4 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's pin-based VM-execution controls, making it
easier to grok from one code base to the other.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 4 +-
x86/vmx.h | 8 ----
x86/vmx_tests.c | 125 +++++++++++++++++++++++++++---------------------
3 files changed, 74 insertions(+), 63 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index dc52efa7..25a8d9f8 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1254,7 +1254,9 @@ int init_vmcs(struct vmcs **vmcs)
/* All settings to pin/exit/enter/cpu
control fields should be placed here */
- ctrl_pin |= PIN_EXTINT | PIN_NMI | PIN_VIRT_NMI;
+ ctrl_pin |= PIN_BASED_EXT_INTR_MASK |
+ PIN_BASED_NMI_EXITING |
+ PIN_BASED_VIRTUAL_NMIS;
ctrl_exit = EXI_LOAD_EFER | EXI_HOST_64 | EXI_LOAD_PAT;
ctrl_enter = (ENT_LOAD_EFER | ENT_GUEST_64);
/* DIsable IO instruction VMEXIT now */
diff --git a/x86/vmx.h b/x86/vmx.h
index 36e784a7..e0e23ab6 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -427,14 +427,6 @@ enum Ctrl_ent {
ENT_LOAD_BNDCFGS = 1UL << 16
};
-enum Ctrl_pin {
- PIN_EXTINT = 1ul << 0,
- PIN_NMI = 1ul << 3,
- PIN_VIRT_NMI = 1ul << 5,
- PIN_PREEMPT = 1ul << 6,
- PIN_POST_INTR = 1ul << 7,
-};
-
enum Intr_type {
VMX_INTR_TYPE_EXT_INTR = 0,
VMX_INTR_TYPE_NMI_INTR = 2,
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index ba50f2ee..1ea5d35b 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -128,11 +128,12 @@ u64 saved_rip;
static int preemption_timer_init(struct vmcs *vmcs)
{
- if (!(ctrl_pin_rev.clr & PIN_PREEMPT)) {
+ if (!(ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER)) {
printf("\tPreemption timer is not supported\n");
return VMX_TEST_EXIT;
}
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) | PIN_PREEMPT);
+ vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) |
+ PIN_BASED_VMX_PREEMPTION_TIMER);
preempt_val = 10000000;
vmcs_write(PREEMPT_TIMER_VALUE, preempt_val);
preempt_scale = rdmsr(MSR_IA32_VMX_MISC) & 0x1F;
@@ -194,7 +195,8 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
"preemption timer during hlt");
vmx_set_test_stage(4);
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) & ~PIN_PREEMPT);
+ vmcs_read(PIN_CONTROLS) &
+ ~PIN_BASED_VMX_PREEMPTION_TIMER);
vmcs_write(EXI_CONTROLS,
vmcs_read(EXI_CONTROLS) & ~EXI_SAVE_PREEMPT);
vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
@@ -236,7 +238,8 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
/* fall through */
case 4:
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) | PIN_PREEMPT);
+ vmcs_read(PIN_CONTROLS) |
+ PIN_BASED_VMX_PREEMPTION_TIMER);
vmcs_write(PREEMPT_TIMER_VALUE, 0);
saved_rip = guest_rip + insn_len;
return VMX_TEST_RESUME;
@@ -255,7 +258,8 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
report_fail("Unknown exit reason, 0x%x", exit_reason.full);
print_vmexit_info(exit_reason);
}
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) & ~PIN_PREEMPT);
+ vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) &
+ ~PIN_BASED_VMX_PREEMPTION_TIMER);
return VMX_TEST_VMEXIT;
}
@@ -1618,7 +1622,8 @@ static void timer_isr(isr_regs_t *regs)
static int interrupt_init(struct vmcs *vmcs)
{
msr_bmp_init();
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) & ~PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) &
+ ~PIN_BASED_EXT_INTR_MASK);
handle_irq(TIMER_VECTOR, timer_isr);
return VMX_TEST_START;
}
@@ -1727,17 +1732,20 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
case 2:
case 5:
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) | PIN_EXTINT);
+ vmcs_read(PIN_CONTROLS) |
+ PIN_BASED_EXT_INTR_MASK);
break;
case 7:
vmcs_write(EXI_CONTROLS, vmcs_read(EXI_CONTROLS) | EXI_INTA);
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) | PIN_EXTINT);
+ vmcs_read(PIN_CONTROLS) |
+ PIN_BASED_EXT_INTR_MASK);
break;
case 1:
case 3:
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) & ~PIN_EXTINT);
+ vmcs_read(PIN_CONTROLS) &
+ ~PIN_BASED_EXT_INTR_MASK);
break;
case 4:
case 6:
@@ -1788,9 +1796,9 @@ static int nmi_hlt_init(struct vmcs *vmcs)
msr_bmp_init();
handle_irq(NMI_VECTOR, nmi_isr);
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) & ~PIN_NMI);
+ vmcs_read(PIN_CONTROLS) & ~PIN_BASED_NMI_EXITING);
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) & ~PIN_VIRT_NMI);
+ vmcs_read(PIN_CONTROLS) & ~PIN_BASED_VIRTUAL_NMIS);
return VMX_TEST_START;
}
@@ -1860,9 +1868,9 @@ static int nmi_hlt_exit_handler(union exit_reason exit_reason)
}
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) | PIN_NMI);
+ vmcs_read(PIN_CONTROLS) | PIN_BASED_NMI_EXITING);
vmcs_write(PIN_CONTROLS,
- vmcs_read(PIN_CONTROLS) | PIN_VIRT_NMI);
+ vmcs_read(PIN_CONTROLS) | PIN_BASED_VIRTUAL_NMIS);
vmcs_write(GUEST_RIP, guest_rip + insn_len);
break;
@@ -4053,7 +4061,7 @@ static void test_virtual_intr_ctls(void)
u32 pin = saved_pin;
if (!((ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
- (ctrl_pin_rev.clr & PIN_EXTINT)))
+ (ctrl_pin_rev.clr & PIN_BASED_EXT_INTR_MASK)))
return;
vmcs_write(CPU_EXEC_CTRL0,
@@ -4061,7 +4069,7 @@ static void test_virtual_intr_ctls(void)
CPU_BASED_TPR_SHADOW);
vmcs_write(CPU_EXEC_CTRL1,
secondary & ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
- vmcs_write(PIN_CONTROLS, pin & ~PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS, pin & ~PIN_BASED_EXT_INTR_MASK);
report_prefix_pushf("Virtualize interrupt-delivery disabled; external-interrupt exiting disabled");
test_vmx_valid_controls();
report_prefix_pop();
@@ -4072,12 +4080,12 @@ static void test_virtual_intr_ctls(void)
test_vmx_invalid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, pin | PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS, pin | PIN_BASED_EXT_INTR_MASK);
report_prefix_pushf("Virtualize interrupt-delivery enabled; external-interrupt exiting enabled");
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, pin & ~PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS, pin & ~PIN_BASED_EXT_INTR_MASK);
report_prefix_pushf("Virtualize interrupt-delivery enabled; external-interrupt exiting disabled");
test_vmx_invalid_controls();
report_prefix_pop();
@@ -4124,9 +4132,9 @@ static void test_posted_intr(void)
u16 vec;
int i;
- if (!((ctrl_pin_rev.clr & PIN_POST_INTR) &&
- (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
- (ctrl_exit_rev.clr & EXI_INTA)))
+ if (!((ctrl_pin_rev.clr & PIN_BASED_POSTED_INTR) &&
+ (ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
+ (ctrl_exit_rev.clr & EXI_INTA)))
return;
vmcs_write(CPU_EXEC_CTRL0,
@@ -4136,7 +4144,7 @@ static void test_posted_intr(void)
/*
* Test virtual-interrupt-delivery and acknowledge-interrupt-on-exit
*/
- pin |= PIN_POST_INTR;
+ pin |= PIN_BASED_POSTED_INTR;
vmcs_write(PIN_CONTROLS, pin);
secondary &= ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
vmcs_write(CPU_EXEC_CTRL1, secondary);
@@ -4777,8 +4785,9 @@ static void test_nmi_ctrls(void)
{
u32 pin_ctrls, cpu_ctrls0, test_pin_ctrls, test_cpu_ctrls0;
- if ((ctrl_pin_rev.clr & (PIN_NMI | PIN_VIRT_NMI)) !=
- (PIN_NMI | PIN_VIRT_NMI)) {
+ if ((ctrl_pin_rev.clr &
+ (PIN_BASED_NMI_EXITING | PIN_BASED_VIRTUAL_NMIS)) !=
+ (PIN_BASED_NMI_EXITING | PIN_BASED_VIRTUAL_NMIS)) {
report_skip("%s : NMI exiting and/or Virtual NMIs not supported", __func__);
return;
}
@@ -4787,7 +4796,7 @@ static void test_nmi_ctrls(void)
pin_ctrls = vmcs_read(PIN_CONTROLS);
cpu_ctrls0 = vmcs_read(CPU_EXEC_CTRL0);
- test_pin_ctrls = pin_ctrls & ~(PIN_NMI | PIN_VIRT_NMI);
+ test_pin_ctrls = pin_ctrls & ~(PIN_BASED_NMI_EXITING | PIN_BASED_VIRTUAL_NMIS);
test_cpu_ctrls0 = cpu_ctrls0 & ~CPU_BASED_NMI_WINDOW_EXITING;
vmcs_write(PIN_CONTROLS, test_pin_ctrls);
@@ -4795,17 +4804,19 @@ static void test_nmi_ctrls(void)
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, test_pin_ctrls | PIN_VIRT_NMI);
+ vmcs_write(PIN_CONTROLS, test_pin_ctrls | PIN_BASED_VIRTUAL_NMIS);
report_prefix_pushf("NMI-exiting disabled, virtual-NMIs enabled");
test_vmx_invalid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, test_pin_ctrls | (PIN_NMI | PIN_VIRT_NMI));
+ vmcs_write(PIN_CONTROLS,
+ test_pin_ctrls | (PIN_BASED_NMI_EXITING |
+ PIN_BASED_VIRTUAL_NMIS));
report_prefix_pushf("NMI-exiting enabled, virtual-NMIs enabled");
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, test_pin_ctrls | PIN_NMI);
+ vmcs_write(PIN_CONTROLS, test_pin_ctrls | PIN_BASED_NMI_EXITING);
report_prefix_pushf("NMI-exiting enabled, virtual-NMIs disabled");
test_vmx_valid_controls();
report_prefix_pop();
@@ -4828,14 +4839,16 @@ static void test_nmi_ctrls(void)
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, test_pin_ctrls | (PIN_NMI | PIN_VIRT_NMI));
+ vmcs_write(PIN_CONTROLS, test_pin_ctrls |
+ (PIN_BASED_NMI_EXITING | PIN_BASED_VIRTUAL_NMIS));
vmcs_write(CPU_EXEC_CTRL0, test_cpu_ctrls0 |
CPU_BASED_NMI_WINDOW_EXITING);
report_prefix_pushf("Virtual-NMIs enabled, NMI-window-exiting enabled");
test_vmx_valid_controls();
report_prefix_pop();
- vmcs_write(PIN_CONTROLS, test_pin_ctrls | (PIN_NMI | PIN_VIRT_NMI));
+ vmcs_write(PIN_CONTROLS, test_pin_ctrls |
+ (PIN_BASED_NMI_EXITING | PIN_BASED_VIRTUAL_NMIS));
vmcs_write(CPU_EXEC_CTRL0, test_cpu_ctrls0);
report_prefix_pushf("Virtual-NMIs enabled, NMI-window-exiting disabled");
test_vmx_valid_controls();
@@ -5101,12 +5114,12 @@ static void test_vmx_preemption_timer(void)
u32 exit = saved_exit;
if (!((ctrl_exit_rev.clr & EXI_SAVE_PREEMPT) ||
- (ctrl_pin_rev.clr & PIN_PREEMPT))) {
+ (ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER))) {
report_skip("%s : \"Save-VMX-preemption-timer\" and/or \"Enable-VMX-preemption-timer\" control not supported", __func__);
return;
}
- pin |= PIN_PREEMPT;
+ pin |= PIN_BASED_VMX_PREEMPTION_TIMER;
vmcs_write(PIN_CONTROLS, pin);
exit &= ~EXI_SAVE_PREEMPT;
vmcs_write(EXI_CONTROLS, exit);
@@ -5120,7 +5133,7 @@ static void test_vmx_preemption_timer(void)
test_vmx_valid_controls();
report_prefix_pop();
- pin &= ~PIN_PREEMPT;
+ pin &= ~PIN_BASED_VMX_PREEMPTION_TIMER;
vmcs_write(PIN_CONTROLS, pin);
report_prefix_pushf("enable-VMX-preemption-timer disabled, save-VMX-preemption-timer enabled");
test_vmx_invalid_controls();
@@ -6294,7 +6307,7 @@ static bool cpu_has_apicv(void)
{
return ((ctrl_cpu_rev[1].clr & SECONDARY_EXEC_APIC_REGISTER_VIRT) &&
(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
- (ctrl_pin_rev.clr & PIN_POST_INTR));
+ (ctrl_pin_rev.clr & PIN_BASED_POSTED_INTR));
}
/* Validates APIC register access across valid virtualization configurations. */
@@ -8665,7 +8678,7 @@ static void vmx_pending_event_test_core(bool guest_hlt)
vmx_pending_event_guest_run = false;
test_set_guest(vmx_pending_event_guest);
- vmcs_set_bits(PIN_CONTROLS, PIN_EXTINT);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_EXT_INTR_MASK);
enter_guest();
skip_exit_vmcall();
@@ -8739,7 +8752,7 @@ static void vmx_nmi_window_test(void)
u64 nop_addr;
void *db_fault_addr = get_idt_addr(&boot_idt[DB_VECTOR]);
- if (!(ctrl_pin_rev.clr & PIN_VIRT_NMI)) {
+ if (!(ctrl_pin_rev.clr & PIN_BASED_VIRTUAL_NMIS)) {
report_skip("%s : \"Virtual NMIs\" exec control not supported", __func__);
return;
}
@@ -8753,7 +8766,7 @@ static void vmx_nmi_window_test(void)
report_prefix_push("NMI-window");
test_set_guest(vmx_nmi_window_test_guest);
- vmcs_set_bits(PIN_CONTROLS, PIN_VIRT_NMI);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_VIRTUAL_NMIS);
enter_guest();
skip_exit_vmcall();
nop_addr = vmcs_read(GUEST_RIP);
@@ -9064,13 +9077,13 @@ static void vmx_preemption_timer_zero_test_guest(void)
static void vmx_preemption_timer_zero_activate_preemption_timer(void)
{
- vmcs_set_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
vmcs_write(PREEMPT_TIMER_VALUE, 0);
}
static void vmx_preemption_timer_zero_advance_past_vmcall(void)
{
- vmcs_clear_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
enter_guest();
skip_exit_vmcall();
}
@@ -9114,7 +9127,7 @@ static void vmx_preemption_timer_zero_test(void)
handler old_db;
u32 reason;
- if (!(ctrl_pin_rev.clr & PIN_PREEMPT)) {
+ if (!(ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER)) {
report_skip("%s : \"Activate VMX-preemption timer\" pin control not supported", __func__);
return;
}
@@ -9165,7 +9178,7 @@ static void vmx_preemption_timer_zero_test(void)
report(reason == VMX_EXC_NMI, "Exit reason is 0x%x (expected 0x%x)",
reason, VMX_EXC_NMI);
- vmcs_clear_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
enter_guest();
handle_exception(DB_VECTOR, old_db);
@@ -9229,7 +9242,7 @@ static void vmx_preemption_timer_tf_test(void)
u32 reason;
int i;
- if (!(ctrl_pin_rev.clr & PIN_PREEMPT)) {
+ if (!(ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER)) {
report_skip("%s : \"Activate VMX-preemption timer\" pin control not supported", __func__);
return;
}
@@ -9243,7 +9256,7 @@ static void vmx_preemption_timer_tf_test(void)
skip_exit_vmcall();
vmx_set_test_stage(1);
- vmcs_set_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
vmcs_write(PREEMPT_TIMER_VALUE, 50000);
vmcs_write(GUEST_RFLAGS, X86_EFLAGS_FIXED | X86_EFLAGS_TF);
@@ -9268,7 +9281,7 @@ static void vmx_preemption_timer_tf_test(void)
report(reason == VMX_PREEMPT, "No single-step traps skipped");
vmx_set_test_stage(2);
- vmcs_clear_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
enter_guest();
handle_exception(DB_VECTOR, old_db);
@@ -9320,7 +9333,7 @@ static void vmx_preemption_timer_expiry_test(void)
u64 tsc_deadline;
u32 reason;
- if (!(ctrl_pin_rev.clr & PIN_PREEMPT)) {
+ if (!(ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER)) {
report_skip("%s : \"Activate VMX-preemption timer\" pin control not supported", __func__);
return;
}
@@ -9334,7 +9347,7 @@ static void vmx_preemption_timer_expiry_test(void)
preemption_timer_value =
VMX_PREEMPTION_TIMER_EXPIRY_CYCLES >> misc.pt_bit;
- vmcs_set_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
vmcs_write(PREEMPT_TIMER_VALUE, preemption_timer_value);
vmx_set_test_stage(0);
@@ -9349,7 +9362,7 @@ static void vmx_preemption_timer_expiry_test(void)
"Last stored guest TSC (%lu) < TSC deadline (%lu)",
vmx_preemption_timer_expiry_finish, tsc_deadline);
- vmcs_clear_bits(PIN_CONTROLS, PIN_PREEMPT);
+ vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
vmx_set_test_stage(1);
enter_guest();
}
@@ -9570,7 +9583,7 @@ static void enable_vid(void)
virtual_apic_page = alloc_page();
vmcs_write(APIC_VIRT_ADDR, (u64)virtual_apic_page);
- vmcs_set_bits(PIN_CONTROLS, PIN_EXTINT);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_EXT_INTR_MASK);
vmcs_write(EOI_EXIT_BITMAP0, 0x0);
vmcs_write(EOI_EXIT_BITMAP1, 0x0);
@@ -9589,7 +9602,7 @@ static void enable_posted_interrupts(void)
{
void *pi_desc = alloc_page();
- vmcs_set_bits(PIN_CONTROLS, PIN_POST_INTR);
+ vmcs_set_bits(PIN_CONTROLS, PIN_BASED_POSTED_INTR);
vmcs_set_bits(EXI_CONTROLS, EXI_INTA);
vmcs_write(PINV, PI_VECTOR);
vmcs_write(POSTED_INTR_DESC_ADDR, (u64)pi_desc);
@@ -9761,7 +9774,8 @@ static void vmx_apic_passthrough(bool set_irq_line_from_thread)
disable_intercept_for_x2apic_msrs();
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) & ~PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS,
+ vmcs_read(PIN_CONTROLS) & ~PIN_BASED_EXT_INTR_MASK);
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) | cpu_ctrl_0);
vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | cpu_ctrl_1);
@@ -9823,7 +9837,7 @@ static void vmx_apic_passthrough_tpr_threshold_test(void)
int ipi_vector = 0xe1;
disable_intercept_for_x2apic_msrs();
- vmcs_clear_bits(PIN_CONTROLS, PIN_EXTINT);
+ vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_EXT_INTR_MASK);
/* Raise L0 TPR-threshold by queueing vector in LAPIC IRR */
cli();
@@ -10094,7 +10108,8 @@ static void sipi_test_ap_thread(void *data)
/* passthrough lapic to L2 */
disable_intercept_for_x2apic_msrs();
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) & ~PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS,
+ vmcs_read(PIN_CONTROLS) & ~PIN_BASED_EXT_INTR_MASK);
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) | cpu_ctrl_0);
vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | cpu_ctrl_1);
@@ -10146,7 +10161,8 @@ static void vmx_sipi_signal_test(void)
/* passthrough lapic to L2 */
disable_intercept_for_x2apic_msrs();
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) & ~PIN_EXTINT);
+ vmcs_write(PIN_CONTROLS,
+ vmcs_read(PIN_CONTROLS) & ~PIN_BASED_EXT_INTR_MASK);
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0) | cpu_ctrl_0);
vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1) | cpu_ctrl_1);
@@ -10577,11 +10593,12 @@ static void rdtsc_vmexit_diff_test(void)
static int invalid_msr_init(struct vmcs *vmcs)
{
- if (!(ctrl_pin_rev.clr & PIN_PREEMPT)) {
+ if (!(ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER)) {
printf("\tPreemption timer is not supported\n");
return VMX_TEST_EXIT;
}
- vmcs_write(PIN_CONTROLS, vmcs_read(PIN_CONTROLS) | PIN_PREEMPT);
+ vmcs_write(PIN_CONTROLS,
+ vmcs_read(PIN_CONTROLS) | PIN_BASED_VMX_PREEMPTION_TIMER);
preempt_val = 10000000;
vmcs_write(PREEMPT_TIMER_VALUE, preempt_val);
preempt_scale = rdmsr(MSR_IA32_VMX_MISC) & 0x1F;
--
2.43.0
* [kvm-unit-tests PATCH 14/17] x86/vmx: switch to new vmx.h exit controls
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (12 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 13/17] x86/vmx: switch to new vmx.h pin based VM-execution controls Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 15/17] x86/vmx: switch to new vmx.h entry controls Jon Kohler
` (3 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's VM-exit controls, making it easier to grok
from one code base to the other.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 6 ++--
x86/vmx.h | 12 -------
x86/vmx_tests.c | 86 +++++++++++++++++++++++++++----------------------
3 files changed, 52 insertions(+), 52 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index 25a8d9f8..bd16e833 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1132,7 +1132,7 @@ static void init_vmcs_host(void)
vmcs_write(HOST_CR4, read_cr4());
vmcs_write(HOST_SYSENTER_EIP, (u64)(&entry_sysenter));
vmcs_write(HOST_SYSENTER_CS, KERNEL_CS);
- if (ctrl_exit_rev.clr & EXI_LOAD_PAT)
+ if (ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT)
vmcs_write(HOST_PAT, rdmsr(MSR_IA32_CR_PAT));
/* 26.2.3 */
@@ -1257,7 +1257,9 @@ int init_vmcs(struct vmcs **vmcs)
ctrl_pin |= PIN_BASED_EXT_INTR_MASK |
PIN_BASED_NMI_EXITING |
PIN_BASED_VIRTUAL_NMIS;
- ctrl_exit = EXI_LOAD_EFER | EXI_HOST_64 | EXI_LOAD_PAT;
+ ctrl_exit = VM_EXIT_LOAD_IA32_EFER |
+ VM_EXIT_HOST_ADDR_SPACE_SIZE |
+ VM_EXIT_LOAD_IA32_PAT;
ctrl_enter = (ENT_LOAD_EFER | ENT_GUEST_64);
/* DIsable IO instruction VMEXIT now */
ctrl_cpu[0] &= (~(CPU_BASED_UNCOND_IO_EXITING | CPU_BASED_USE_IO_BITMAPS));
diff --git a/x86/vmx.h b/x86/vmx.h
index e0e23ab6..30503ff4 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -406,18 +406,6 @@ enum Reason {
VMX_XRSTORS = 64,
};
-enum Ctrl_exi {
- EXI_SAVE_DBGCTLS = 1UL << 2,
- EXI_HOST_64 = 1UL << 9,
- EXI_LOAD_PERF = 1UL << 12,
- EXI_INTA = 1UL << 15,
- EXI_SAVE_PAT = 1UL << 18,
- EXI_LOAD_PAT = 1UL << 19,
- EXI_SAVE_EFER = 1UL << 20,
- EXI_LOAD_EFER = 1UL << 21,
- EXI_SAVE_PREEMPT = 1UL << 22,
-};
-
enum Ctrl_ent {
ENT_LOAD_DBGCTLS = 1UL << 2,
ENT_GUEST_64 = 1UL << 9,
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 1ea5d35b..77a63a3e 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -138,7 +138,7 @@ static int preemption_timer_init(struct vmcs *vmcs)
vmcs_write(PREEMPT_TIMER_VALUE, preempt_val);
preempt_scale = rdmsr(MSR_IA32_VMX_MISC) & 0x1F;
- if (!(ctrl_exit_rev.clr & EXI_SAVE_PREEMPT))
+ if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER))
printf("\tSave preemption value is not supported\n");
return VMX_TEST_START;
@@ -147,7 +147,7 @@ static int preemption_timer_init(struct vmcs *vmcs)
static void preemption_timer_main(void)
{
tsc_val = rdtsc();
- if (ctrl_exit_rev.clr & EXI_SAVE_PREEMPT) {
+ if (ctrl_exit_rev.clr & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER) {
vmx_set_test_stage(0);
vmcall();
if (vmx_get_test_stage() == 1)
@@ -198,7 +198,8 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
vmcs_read(PIN_CONTROLS) &
~PIN_BASED_VMX_PREEMPTION_TIMER);
vmcs_write(EXI_CONTROLS,
- vmcs_read(EXI_CONTROLS) & ~EXI_SAVE_PREEMPT);
+ vmcs_read(EXI_CONTROLS) &
+ ~VM_EXIT_SAVE_VMX_PREEMPTION_TIMER);
vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
return VMX_TEST_RESUME;
case 4:
@@ -220,7 +221,8 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
vmx_set_test_stage(1);
vmcs_write(PREEMPT_TIMER_VALUE, preempt_val);
ctrl_exit = (vmcs_read(EXI_CONTROLS) |
- EXI_SAVE_PREEMPT) & ctrl_exit_rev.clr;
+ VM_EXIT_SAVE_VMX_PREEMPTION_TIMER) &
+ ctrl_exit_rev.clr;
vmcs_write(EXI_CONTROLS, ctrl_exit);
return VMX_TEST_RESUME;
case 1:
@@ -312,8 +314,8 @@ static int test_ctrl_pat_init(struct vmcs *vmcs)
u64 ctrl_exi;
msr_bmp_init();
- if (!(ctrl_exit_rev.clr & EXI_SAVE_PAT) &&
- !(ctrl_exit_rev.clr & EXI_LOAD_PAT) &&
+ if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_IA32_PAT) &&
+ !(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT) &&
!(ctrl_enter_rev.clr & ENT_LOAD_PAT)) {
printf("\tSave/load PAT is not supported\n");
return 1;
@@ -322,7 +324,8 @@ static int test_ctrl_pat_init(struct vmcs *vmcs)
ctrl_ent = vmcs_read(ENT_CONTROLS);
ctrl_exi = vmcs_read(EXI_CONTROLS);
ctrl_ent |= ctrl_enter_rev.clr & ENT_LOAD_PAT;
- ctrl_exi |= ctrl_exit_rev.clr & (EXI_SAVE_PAT | EXI_LOAD_PAT);
+ ctrl_exi |= ctrl_exit_rev.clr & (VM_EXIT_SAVE_IA32_PAT |
+ VM_EXIT_LOAD_IA32_PAT);
vmcs_write(ENT_CONTROLS, ctrl_ent);
vmcs_write(EXI_CONTROLS, ctrl_exi);
ia32_pat = rdmsr(MSR_IA32_CR_PAT);
@@ -360,13 +363,13 @@ static int test_ctrl_pat_exit_handler(union exit_reason exit_reason)
switch (exit_reason.basic) {
case VMX_VMCALL:
guest_pat = vmcs_read(GUEST_PAT);
- if (!(ctrl_exit_rev.clr & EXI_SAVE_PAT)) {
+ if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_IA32_PAT)) {
printf("\tEXI_SAVE_PAT is not supported\n");
vmcs_write(GUEST_PAT, 0x6);
} else {
report(guest_pat == 0x6, "Exit save PAT");
}
- if (!(ctrl_exit_rev.clr & EXI_LOAD_PAT))
+ if (!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT))
printf("\tEXI_LOAD_PAT is not supported\n");
else
report(rdmsr(MSR_IA32_CR_PAT) == ia32_pat,
@@ -388,7 +391,9 @@ static int test_ctrl_efer_init(struct vmcs *vmcs)
msr_bmp_init();
ctrl_ent = vmcs_read(ENT_CONTROLS) | ENT_LOAD_EFER;
- ctrl_exi = vmcs_read(EXI_CONTROLS) | EXI_SAVE_EFER | EXI_LOAD_EFER;
+ ctrl_exi = vmcs_read(EXI_CONTROLS) |
+ VM_EXIT_SAVE_IA32_EFER |
+ VM_EXIT_LOAD_IA32_EFER;
vmcs_write(ENT_CONTROLS, ctrl_ent & ctrl_enter_rev.clr);
vmcs_write(EXI_CONTROLS, ctrl_exi & ctrl_exit_rev.clr);
ia32_efer = rdmsr(MSR_EFER);
@@ -426,13 +431,13 @@ static int test_ctrl_efer_exit_handler(union exit_reason exit_reason)
switch (exit_reason.basic) {
case VMX_VMCALL:
guest_efer = vmcs_read(GUEST_EFER);
- if (!(ctrl_exit_rev.clr & EXI_SAVE_EFER)) {
+ if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_IA32_EFER)) {
printf("\tEXI_SAVE_EFER is not supported\n");
vmcs_write(GUEST_EFER, ia32_efer);
} else {
report(guest_efer == ia32_efer, "Exit save EFER");
}
- if (!(ctrl_exit_rev.clr & EXI_LOAD_EFER)) {
+ if (!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_EFER)) {
printf("\tEXI_LOAD_EFER is not supported\n");
wrmsr(MSR_EFER, ia32_efer ^ EFER_NX);
} else {
@@ -1736,7 +1741,9 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
PIN_BASED_EXT_INTR_MASK);
break;
case 7:
- vmcs_write(EXI_CONTROLS, vmcs_read(EXI_CONTROLS) | EXI_INTA);
+ vmcs_write(EXI_CONTROLS,
+ vmcs_read(EXI_CONTROLS) |
+ VM_EXIT_ACK_INTR_ON_EXIT);
vmcs_write(PIN_CONTROLS,
vmcs_read(PIN_CONTROLS) |
PIN_BASED_EXT_INTR_MASK);
@@ -1764,7 +1771,7 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
vmcs_write(GUEST_RIP, guest_rip + insn_len);
return VMX_TEST_RESUME;
case VMX_EXTINT:
- if (vmcs_read(EXI_CONTROLS) & EXI_INTA) {
+ if (vmcs_read(EXI_CONTROLS) & VM_EXIT_ACK_INTR_ON_EXIT) {
int vector = vmcs_read(EXI_INTR_INFO) & 0xff;
handle_external_interrupt(vector);
} else {
@@ -1916,7 +1923,8 @@ static int dbgctls_init(struct vmcs *vmcs)
vmcs_write(GUEST_DEBUGCTL, 0x2);
vmcs_write(ENT_CONTROLS, vmcs_read(ENT_CONTROLS) | ENT_LOAD_DBGCTLS);
- vmcs_write(EXI_CONTROLS, vmcs_read(EXI_CONTROLS) | EXI_SAVE_DBGCTLS);
+ vmcs_write(EXI_CONTROLS,
+ vmcs_read(EXI_CONTROLS) | VM_EXIT_SAVE_DEBUG_CONTROLS);
return VMX_TEST_START;
}
@@ -1940,7 +1948,7 @@ static void dbgctls_main(void)
report(vmx_get_test_stage() == 1, "Save debug controls");
if (ctrl_enter_rev.set & ENT_LOAD_DBGCTLS ||
- ctrl_exit_rev.set & EXI_SAVE_DBGCTLS) {
+ ctrl_exit_rev.set & VM_EXIT_SAVE_DEBUG_CONTROLS) {
printf("\tDebug controls are always loaded/saved\n");
return;
}
@@ -1992,7 +2000,8 @@ static int dbgctls_exit_handler(union exit_reason exit_reason)
vmcs_write(ENT_CONTROLS,
vmcs_read(ENT_CONTROLS) & ~ENT_LOAD_DBGCTLS);
vmcs_write(EXI_CONTROLS,
- vmcs_read(EXI_CONTROLS) & ~EXI_SAVE_DBGCTLS);
+ vmcs_read(EXI_CONTROLS) &
+ ~VM_EXIT_SAVE_DEBUG_CONTROLS);
break;
case 3:
if (dr7 == 0x400 && debugctl == 0 &&
@@ -4134,7 +4143,7 @@ static void test_posted_intr(void)
if (!((ctrl_pin_rev.clr & PIN_BASED_POSTED_INTR) &&
(ctrl_cpu_rev[1].clr & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY) &&
- (ctrl_exit_rev.clr & EXI_INTA)))
+ (ctrl_exit_rev.clr & VM_EXIT_ACK_INTR_ON_EXIT)))
return;
vmcs_write(CPU_EXEC_CTRL0,
@@ -4158,13 +4167,13 @@ static void test_posted_intr(void)
test_vmx_invalid_controls();
report_prefix_pop();
- exit_ctl &= ~EXI_INTA;
+ exit_ctl &= ~VM_EXIT_ACK_INTR_ON_EXIT;
vmcs_write(EXI_CONTROLS, exit_ctl);
report_prefix_pushf("Process-posted-interrupts enabled; virtual-interrupt-delivery enabled; acknowledge-interrupt-on-exit disabled");
test_vmx_invalid_controls();
report_prefix_pop();
- exit_ctl |= EXI_INTA;
+ exit_ctl |= VM_EXIT_ACK_INTR_ON_EXIT;
vmcs_write(EXI_CONTROLS, exit_ctl);
report_prefix_pushf("Process-posted-interrupts enabled; virtual-interrupt-delivery enabled; acknowledge-interrupt-on-exit enabled");
test_vmx_valid_controls();
@@ -5113,7 +5122,7 @@ static void test_vmx_preemption_timer(void)
u32 pin = saved_pin;
u32 exit = saved_exit;
- if (!((ctrl_exit_rev.clr & EXI_SAVE_PREEMPT) ||
+ if (!((ctrl_exit_rev.clr & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER) ||
(ctrl_pin_rev.clr & PIN_BASED_VMX_PREEMPTION_TIMER))) {
report_skip("%s : \"Save-VMX-preemption-timer\" and/or \"Enable-VMX-preemption-timer\" control not supported", __func__);
return;
@@ -5121,13 +5130,13 @@ static void test_vmx_preemption_timer(void)
pin |= PIN_BASED_VMX_PREEMPTION_TIMER;
vmcs_write(PIN_CONTROLS, pin);
- exit &= ~EXI_SAVE_PREEMPT;
+ exit &= ~VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
vmcs_write(EXI_CONTROLS, exit);
report_prefix_pushf("enable-VMX-preemption-timer enabled, save-VMX-preemption-timer disabled");
test_vmx_valid_controls();
report_prefix_pop();
- exit |= EXI_SAVE_PREEMPT;
+ exit |= VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
vmcs_write(EXI_CONTROLS, exit);
report_prefix_pushf("enable-VMX-preemption-timer enabled, save-VMX-preemption-timer enabled");
test_vmx_valid_controls();
@@ -5139,7 +5148,7 @@ static void test_vmx_preemption_timer(void)
test_vmx_invalid_controls();
report_prefix_pop();
- exit &= ~EXI_SAVE_PREEMPT;
+ exit &= ~VM_EXIT_SAVE_VMX_PREEMPTION_TIMER;
vmcs_write(EXI_CONTROLS, exit);
report_prefix_pushf("enable-VMX-preemption-timer disabled, save-VMX-preemption-timer disabled");
test_vmx_valid_controls();
@@ -7284,10 +7293,10 @@ static void test_efer_one(u32 fld, const char * fld_name, u64 efer,
bool ok;
ok = true;
- if (ctrl_fld == EXI_CONTROLS && (ctrl & EXI_LOAD_EFER)) {
- if (!!(efer & EFER_LMA) != !!(ctrl & EXI_HOST_64))
+ if (ctrl_fld == EXI_CONTROLS && (ctrl & VM_EXIT_LOAD_IA32_EFER)) {
+ if (!!(efer & EFER_LMA) != !!(ctrl & VM_EXIT_HOST_ADDR_SPACE_SIZE))
ok = false;
- if (!!(efer & EFER_LME) != !!(ctrl & EXI_HOST_64))
+ if (!!(efer & EFER_LME) != !!(ctrl & VM_EXIT_HOST_ADDR_SPACE_SIZE))
ok = false;
}
if (ctrl_fld == ENT_CONTROLS && (ctrl & ENT_LOAD_EFER)) {
@@ -7425,8 +7434,8 @@ test_entry_exit_mode:
static void test_host_efer(void)
{
test_efer(HOST_EFER, "HOST_EFER", EXI_CONTROLS,
- ctrl_exit_rev.clr & EXI_LOAD_EFER,
- EXI_HOST_64);
+ ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_EFER,
+ VM_EXIT_HOST_ADDR_SPACE_SIZE);
}
/*
@@ -7514,7 +7523,7 @@ static void test_pat(u32 field, const char * field_name, u32 ctrl_field,
test_guest_state("ENT_LOAD_PAT enabled", !!error,
val, "GUEST_PAT");
- if (!(ctrl_exit_rev.clr & EXI_LOAD_PAT))
+ if (!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT))
wrmsr(MSR_IA32_CR_PAT, pat_msr_saved);
}
@@ -7539,12 +7548,12 @@ static void test_load_host_pat(void)
/*
* "load IA32_PAT" VM-exit control
*/
- if (!(ctrl_exit_rev.clr & EXI_LOAD_PAT)) {
+ if (!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT)) {
report_skip("%s : \"Load-IA32-PAT\" exit control not supported", __func__);
return;
}
- test_pat(HOST_PAT, "HOST_PAT", EXI_CONTROLS, EXI_LOAD_PAT);
+ test_pat(HOST_PAT, "HOST_PAT", EXI_CONTROLS, VM_EXIT_LOAD_IA32_PAT);
}
union cpuidA_eax {
@@ -7698,13 +7707,14 @@ static void test_load_host_perf_global_ctrl(void)
return;
}
- if (!(ctrl_exit_rev.clr & EXI_LOAD_PERF)) {
+ if (!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)) {
report_skip("%s : \"Load IA32_PERF_GLOBAL_CTRL\" exit control not supported", __func__);
return;
}
test_perf_global_ctrl(HOST_PERF_GLOBAL_CTRL, "HOST_PERF_GLOBAL_CTRL",
- EXI_CONTROLS, "EXI_CONTROLS", EXI_LOAD_PERF);
+ EXI_CONTROLS, "EXI_CONTROLS",
+ VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL);
}
@@ -7853,7 +7863,7 @@ static void test_host_segment_regs(void)
selector_saved = vmcs_read(HOST_SEL_SS);
vmcs_write(HOST_SEL_SS, 0);
report_prefix_pushf("HOST_SEL_SS 0");
- if (vmcs_read(EXI_CONTROLS) & EXI_HOST_64) {
+ if (vmcs_read(EXI_CONTROLS) & VM_EXIT_HOST_ADDR_SPACE_SIZE) {
test_vmx_vmlaunch(0);
} else {
test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
@@ -7899,7 +7909,7 @@ static void test_host_addr_size(void)
u64 rip_saved = vmcs_read(HOST_RIP);
u64 entry_ctrl_saved = vmcs_read(ENT_CONTROLS);
- assert(vmcs_read(EXI_CONTROLS) & EXI_HOST_64);
+ assert(vmcs_read(EXI_CONTROLS) & VM_EXIT_HOST_ADDR_SPACE_SIZE);
assert(cr4_saved & X86_CR4_PAE);
vmcs_write(ENT_CONTROLS, entry_ctrl_saved | ENT_GUEST_64);
@@ -9603,7 +9613,7 @@ static void enable_posted_interrupts(void)
void *pi_desc = alloc_page();
vmcs_set_bits(PIN_CONTROLS, PIN_BASED_POSTED_INTR);
- vmcs_set_bits(EXI_CONTROLS, EXI_INTA);
+ vmcs_set_bits(EXI_CONTROLS, VM_EXIT_ACK_INTR_ON_EXIT);
vmcs_write(PINV, PI_VECTOR);
vmcs_write(POSTED_INTR_DESC_ADDR, (u64)pi_desc);
}
@@ -10603,7 +10613,7 @@ static int invalid_msr_init(struct vmcs *vmcs)
vmcs_write(PREEMPT_TIMER_VALUE, preempt_val);
preempt_scale = rdmsr(MSR_IA32_VMX_MISC) & 0x1F;
- if (!(ctrl_exit_rev.clr & EXI_SAVE_PREEMPT))
+ if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER))
printf("\tSave preemption value is not supported\n");
vmcs_write(ENT_MSR_LD_CNT, 1);
--
2.43.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [kvm-unit-tests PATCH 15/17] x86/vmx: switch to new vmx.h entry controls
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (13 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 14/17] x86/vmx: switch to new vmx.h exit controls Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 16/17] x86/vmx: switch to new vmx.h interrupt defs Jon Kohler
` (2 subsequent siblings)
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's entry controls, which makes it easier to
follow the code from one code base to the other.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 6 +--
x86/vmx.h | 9 -----
x86/vmx_tests.c | 97 ++++++++++++++++++++++++++-----------------------
3 files changed, 55 insertions(+), 57 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index bd16e833..7be93a72 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1167,11 +1167,11 @@ static void init_vmcs_guest(void)
guest_cr0 = read_cr0();
guest_cr4 = read_cr4();
guest_cr3 = read_cr3();
- if (ctrl_enter & ENT_GUEST_64) {
+ if (ctrl_enter & VM_ENTRY_IA32E_MODE) {
guest_cr0 |= X86_CR0_PG;
guest_cr4 |= X86_CR4_PAE;
}
- if ((ctrl_enter & ENT_GUEST_64) == 0)
+ if ((ctrl_enter & VM_ENTRY_IA32E_MODE) == 0)
guest_cr4 &= (~X86_CR4_PCIDE);
if (guest_cr0 & X86_CR0_PG)
guest_cr0 |= X86_CR0_PE;
@@ -1260,7 +1260,7 @@ int init_vmcs(struct vmcs **vmcs)
ctrl_exit = VM_EXIT_LOAD_IA32_EFER |
VM_EXIT_HOST_ADDR_SPACE_SIZE |
VM_EXIT_LOAD_IA32_PAT;
- ctrl_enter = (ENT_LOAD_EFER | ENT_GUEST_64);
+ ctrl_enter = (VM_ENTRY_LOAD_IA32_EFER | VM_ENTRY_IA32E_MODE);
/* DIsable IO instruction VMEXIT now */
ctrl_cpu[0] &= (~(CPU_BASED_UNCOND_IO_EXITING | CPU_BASED_USE_IO_BITMAPS));
ctrl_cpu[1] = 0;
diff --git a/x86/vmx.h b/x86/vmx.h
index 30503ff4..8bb49d8e 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -406,15 +406,6 @@ enum Reason {
VMX_XRSTORS = 64,
};
-enum Ctrl_ent {
- ENT_LOAD_DBGCTLS = 1UL << 2,
- ENT_GUEST_64 = 1UL << 9,
- ENT_LOAD_PERF = 1UL << 13,
- ENT_LOAD_PAT = 1UL << 14,
- ENT_LOAD_EFER = 1UL << 15,
- ENT_LOAD_BNDCFGS = 1UL << 16
-};
-
enum Intr_type {
VMX_INTR_TYPE_EXT_INTR = 0,
VMX_INTR_TYPE_NMI_INTR = 2,
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 77a63a3e..2f9858a3 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -316,14 +316,14 @@ static int test_ctrl_pat_init(struct vmcs *vmcs)
msr_bmp_init();
if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_IA32_PAT) &&
!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT) &&
- !(ctrl_enter_rev.clr & ENT_LOAD_PAT)) {
+ !(ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_PAT)) {
printf("\tSave/load PAT is not supported\n");
return 1;
}
ctrl_ent = vmcs_read(ENT_CONTROLS);
ctrl_exi = vmcs_read(EXI_CONTROLS);
- ctrl_ent |= ctrl_enter_rev.clr & ENT_LOAD_PAT;
+ ctrl_ent |= ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_PAT;
ctrl_exi |= ctrl_exit_rev.clr & (VM_EXIT_SAVE_IA32_PAT |
VM_EXIT_LOAD_IA32_PAT);
vmcs_write(ENT_CONTROLS, ctrl_ent);
@@ -339,7 +339,7 @@ static void test_ctrl_pat_main(void)
u64 guest_ia32_pat;
guest_ia32_pat = rdmsr(MSR_IA32_CR_PAT);
- if (!(ctrl_enter_rev.clr & ENT_LOAD_PAT))
+ if (!(ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_PAT))
printf("\tENT_LOAD_PAT is not supported.\n");
else {
if (guest_ia32_pat != 0) {
@@ -350,7 +350,7 @@ static void test_ctrl_pat_main(void)
wrmsr(MSR_IA32_CR_PAT, 0x6);
vmcall();
guest_ia32_pat = rdmsr(MSR_IA32_CR_PAT);
- if (ctrl_enter_rev.clr & ENT_LOAD_PAT)
+ if (ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_PAT)
report(guest_ia32_pat == ia32_pat, "Entry load PAT");
}
@@ -390,7 +390,7 @@ static int test_ctrl_efer_init(struct vmcs *vmcs)
u64 ctrl_exi;
msr_bmp_init();
- ctrl_ent = vmcs_read(ENT_CONTROLS) | ENT_LOAD_EFER;
+ ctrl_ent = vmcs_read(ENT_CONTROLS) | VM_ENTRY_LOAD_IA32_EFER;
ctrl_exi = vmcs_read(EXI_CONTROLS) |
VM_EXIT_SAVE_IA32_EFER |
VM_EXIT_LOAD_IA32_EFER;
@@ -407,7 +407,7 @@ static void test_ctrl_efer_main(void)
u64 guest_ia32_efer;
guest_ia32_efer = rdmsr(MSR_EFER);
- if (!(ctrl_enter_rev.clr & ENT_LOAD_EFER))
+ if (!(ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_EFER))
printf("\tENT_LOAD_EFER is not supported.\n");
else {
if (guest_ia32_efer != (ia32_efer ^ EFER_NX)) {
@@ -418,7 +418,7 @@ static void test_ctrl_efer_main(void)
wrmsr(MSR_EFER, ia32_efer);
vmcall();
guest_ia32_efer = rdmsr(MSR_EFER);
- if (ctrl_enter_rev.clr & ENT_LOAD_EFER)
+ if (ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_EFER)
report(guest_ia32_efer == ia32_efer, "Entry load EFER");
}
@@ -1922,7 +1922,8 @@ static int dbgctls_init(struct vmcs *vmcs)
vmcs_write(GUEST_DR7, 0x404);
vmcs_write(GUEST_DEBUGCTL, 0x2);
- vmcs_write(ENT_CONTROLS, vmcs_read(ENT_CONTROLS) | ENT_LOAD_DBGCTLS);
+ vmcs_write(ENT_CONTROLS,
+ vmcs_read(ENT_CONTROLS) | VM_ENTRY_LOAD_DEBUG_CONTROLS);
vmcs_write(EXI_CONTROLS,
vmcs_read(EXI_CONTROLS) | VM_EXIT_SAVE_DEBUG_CONTROLS);
@@ -1947,7 +1948,7 @@ static void dbgctls_main(void)
vmcall();
report(vmx_get_test_stage() == 1, "Save debug controls");
- if (ctrl_enter_rev.set & ENT_LOAD_DBGCTLS ||
+ if (ctrl_enter_rev.set & VM_ENTRY_LOAD_DEBUG_CONTROLS ||
ctrl_exit_rev.set & VM_EXIT_SAVE_DEBUG_CONTROLS) {
printf("\tDebug controls are always loaded/saved\n");
return;
@@ -1998,7 +1999,8 @@ static int dbgctls_exit_handler(union exit_reason exit_reason)
vmcs_write(GUEST_DEBUGCTL, 0x2);
vmcs_write(ENT_CONTROLS,
- vmcs_read(ENT_CONTROLS) & ~ENT_LOAD_DBGCTLS);
+ vmcs_read(ENT_CONTROLS) &
+ ~VM_ENTRY_LOAD_DEBUG_CONTROLS);
vmcs_write(EXI_CONTROLS,
vmcs_read(EXI_CONTROLS) &
~VM_EXIT_SAVE_DEBUG_CONTROLS);
@@ -5382,7 +5384,7 @@ static void vmx_mtf_pdpte_test(void)
* when the guest started out in long mode.
*/
ent_ctls = vmcs_read(ENT_CONTROLS);
- vmcs_write(ENT_CONTROLS, ent_ctls & ~ENT_GUEST_64);
+ vmcs_write(ENT_CONTROLS, ent_ctls & ~VM_ENTRY_IA32E_MODE);
guest_efer = vmcs_read(GUEST_EFER);
vmcs_write(GUEST_EFER, guest_efer & ~(EFER_LMA | EFER_LME));
@@ -7299,11 +7301,11 @@ static void test_efer_one(u32 fld, const char * fld_name, u64 efer,
if (!!(efer & EFER_LME) != !!(ctrl & VM_EXIT_HOST_ADDR_SPACE_SIZE))
ok = false;
}
- if (ctrl_fld == ENT_CONTROLS && (ctrl & ENT_LOAD_EFER)) {
+ if (ctrl_fld == ENT_CONTROLS && (ctrl & VM_ENTRY_LOAD_IA32_EFER)) {
/* Check LMA too since CR0.PG is set. */
- if (!!(efer & EFER_LMA) != !!(ctrl & ENT_GUEST_64))
+ if (!!(efer & EFER_LMA) != !!(ctrl & VM_ENTRY_IA32E_MODE))
ok = false;
- if (!!(efer & EFER_LME) != !!(ctrl & ENT_GUEST_64))
+ if (!!(efer & EFER_LME) != !!(ctrl & VM_ENTRY_IA32E_MODE))
ok = false;
}
@@ -7312,7 +7314,7 @@ static void test_efer_one(u32 fld, const char * fld_name, u64 efer,
* Perhaps write the test in assembly and make sure it
* can be run in either mode?
*/
- if (fld == GUEST_EFER && ok && !(ctrl & ENT_GUEST_64))
+ if (fld == GUEST_EFER && ok && !(ctrl & VM_ENTRY_IA32E_MODE))
return;
vmcs_write(ctrl_fld, ctrl);
@@ -7446,15 +7448,15 @@ static void test_host_efer(void)
*/
static void test_guest_efer(void)
{
- if (!(ctrl_enter_rev.clr & ENT_LOAD_EFER)) {
+ if (!(ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_EFER)) {
report_skip("%s : \"Load-IA32-EFER\" entry control not supported", __func__);
return;
}
vmcs_write(GUEST_EFER, rdmsr(MSR_EFER));
test_efer(GUEST_EFER, "GUEST_EFER", ENT_CONTROLS,
- ctrl_enter_rev.clr & ENT_LOAD_EFER,
- ENT_GUEST_64);
+ ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_EFER,
+ VM_ENTRY_IA32E_MODE);
}
/*
@@ -7487,8 +7489,8 @@ static void test_pat(u32 field, const char * field_name, u32 ctrl_field,
report_prefix_pop();
} else { // GUEST_PAT
- test_guest_state("ENT_LOAD_PAT disabled", false,
- val, "GUEST_PAT");
+ test_guest_state("VM_ENTRY_LOAD_IA32_PAT disabled",
+ false, val, "GUEST_PAT");
}
}
}
@@ -7520,7 +7522,7 @@ static void test_pat(u32 field, const char * field_name, u32 ctrl_field,
} else { // GUEST_PAT
error = (i == 0x2 || i == 0x3 || i >= 0x8);
- test_guest_state("ENT_LOAD_PAT enabled", !!error,
+ test_guest_state("VM_ENTRY_LOAD_IA32_PAT enabled", !!error,
val, "GUEST_PAT");
if (!(ctrl_exit_rev.clr & VM_EXIT_LOAD_IA32_PAT))
@@ -7725,13 +7727,14 @@ static void test_load_guest_perf_global_ctrl(void)
return;
}
- if (!(ctrl_enter_rev.clr & ENT_LOAD_PERF)) {
+ if (!(ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)) {
report_skip("%s : \"Load IA32_PERF_GLOBAL_CTRL\" entry control not supported", __func__);
return;
}
test_perf_global_ctrl(GUEST_PERF_GLOBAL_CTRL, "GUEST_PERF_GLOBAL_CTRL",
- ENT_CONTROLS, "ENT_CONTROLS", ENT_LOAD_PERF);
+ ENT_CONTROLS, "ENT_CONTROLS",
+ VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL);
}
@@ -7912,7 +7915,7 @@ static void test_host_addr_size(void)
assert(vmcs_read(EXI_CONTROLS) & VM_EXIT_HOST_ADDR_SPACE_SIZE);
assert(cr4_saved & X86_CR4_PAE);
- vmcs_write(ENT_CONTROLS, entry_ctrl_saved | ENT_GUEST_64);
+ vmcs_write(ENT_CONTROLS, entry_ctrl_saved | VM_ENTRY_IA32E_MODE);
report_prefix_pushf("\"IA-32e mode guest\" enabled");
test_vmx_vmlaunch(0);
report_prefix_pop();
@@ -7935,7 +7938,7 @@ static void test_host_addr_size(void)
test_vmx_vmlaunch_must_fail(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
report_prefix_pop();
- vmcs_write(ENT_CONTROLS, entry_ctrl_saved | ENT_GUEST_64);
+ vmcs_write(ENT_CONTROLS, entry_ctrl_saved | VM_ENTRY_IA32E_MODE);
vmcs_write(HOST_RIP, rip_saved);
vmcs_write(HOST_CR4, cr4_saved);
@@ -7994,22 +7997,22 @@ static void test_guest_dr7(void)
u64 val;
int i;
- if (ctrl_enter_rev.set & ENT_LOAD_DBGCTLS) {
- vmcs_clear_bits(ENT_CONTROLS, ENT_LOAD_DBGCTLS);
+ if (ctrl_enter_rev.set & VM_ENTRY_LOAD_DEBUG_CONTROLS) {
+ vmcs_clear_bits(ENT_CONTROLS, VM_ENTRY_LOAD_DEBUG_CONTROLS);
for (i = 0; i < 64; i++) {
val = 1ull << i;
vmcs_write(GUEST_DR7, val);
- test_guest_state("ENT_LOAD_DBGCTLS disabled", false,
- val, "GUEST_DR7");
+ test_guest_state("VM_ENTRY_LOAD_DEBUG_CONTROLS disabled",
+ false, val, "GUEST_DR7");
}
}
- if (ctrl_enter_rev.clr & ENT_LOAD_DBGCTLS) {
- vmcs_set_bits(ENT_CONTROLS, ENT_LOAD_DBGCTLS);
+ if (ctrl_enter_rev.clr & VM_ENTRY_LOAD_DEBUG_CONTROLS) {
+ vmcs_set_bits(ENT_CONTROLS, VM_ENTRY_LOAD_DEBUG_CONTROLS);
for (i = 0; i < 64; i++) {
val = 1ull << i;
vmcs_write(GUEST_DR7, val);
- test_guest_state("ENT_LOAD_DBGCTLS enabled", i >= 32,
- val, "GUEST_DR7");
+ test_guest_state("VM_ENTRY_LOAD_DEBUG_CONTROLS enabled",
+ i >= 32, val, "GUEST_DR7");
}
}
vmcs_write(GUEST_DR7, dr7_saved);
@@ -8030,12 +8033,13 @@ static void test_load_guest_pat(void)
/*
* "load IA32_PAT" VM-entry control
*/
- if (!(ctrl_enter_rev.clr & ENT_LOAD_PAT)) {
+ if (!(ctrl_enter_rev.clr & VM_ENTRY_LOAD_IA32_PAT)) {
report_skip("%s : \"Load-IA32-PAT\" entry control not supported", __func__);
return;
}
- test_pat(GUEST_PAT, "GUEST_PAT", ENT_CONTROLS, ENT_LOAD_PAT);
+ test_pat(GUEST_PAT, "GUEST_PAT", ENT_CONTROLS,
+ VM_ENTRY_LOAD_IA32_PAT);
}
#define MSR_IA32_BNDCFGS_RSVD_MASK 0x00000ffc
@@ -8054,29 +8058,29 @@ static void test_load_guest_bndcfgs(void)
u64 bndcfgs_saved = vmcs_read(GUEST_BNDCFGS);
u64 bndcfgs;
- if (!(ctrl_enter_rev.clr & ENT_LOAD_BNDCFGS)) {
+ if (!(ctrl_enter_rev.clr & VM_ENTRY_LOAD_BNDCFGS)) {
report_skip("%s : \"Load-IA32-BNDCFGS\" entry control not supported", __func__);
return;
}
- vmcs_clear_bits(ENT_CONTROLS, ENT_LOAD_BNDCFGS);
+ vmcs_clear_bits(ENT_CONTROLS, VM_ENTRY_LOAD_BNDCFGS);
vmcs_write(GUEST_BNDCFGS, NONCANONICAL);
- test_guest_state("ENT_LOAD_BNDCFGS disabled", false,
+ test_guest_state("VM_ENTRY_LOAD_BNDCFGS disabled", false,
GUEST_BNDCFGS, "GUEST_BNDCFGS");
bndcfgs = bndcfgs_saved | MSR_IA32_BNDCFGS_RSVD_MASK;
vmcs_write(GUEST_BNDCFGS, bndcfgs);
- test_guest_state("ENT_LOAD_BNDCFGS disabled", false,
+ test_guest_state("VM_ENTRY_LOAD_BNDCFGS disabled", false,
GUEST_BNDCFGS, "GUEST_BNDCFGS");
- vmcs_set_bits(ENT_CONTROLS, ENT_LOAD_BNDCFGS);
+ vmcs_set_bits(ENT_CONTROLS, VM_ENTRY_LOAD_BNDCFGS);
vmcs_write(GUEST_BNDCFGS, NONCANONICAL);
- test_guest_state("ENT_LOAD_BNDCFGS enabled", true,
+ test_guest_state("VM_ENTRY_LOAD_BNDCFGS enabled", true,
GUEST_BNDCFGS, "GUEST_BNDCFGS");
bndcfgs = bndcfgs_saved | MSR_IA32_BNDCFGS_RSVD_MASK;
vmcs_write(GUEST_BNDCFGS, bndcfgs);
- test_guest_state("ENT_LOAD_BNDCFGS enabled", true,
+ test_guest_state("VM_ENTRY_LOAD_BNDCFGS enabled", true,
GUEST_BNDCFGS, "GUEST_BNDCFGS");
vmcs_write(GUEST_BNDCFGS, bndcfgs_saved);
@@ -8335,7 +8339,8 @@ asm (".code32\n"
static void setup_unrestricted_guest(void)
{
vmcs_write(GUEST_CR0, vmcs_read(GUEST_CR0) & ~(X86_CR0_PG));
- vmcs_write(ENT_CONTROLS, vmcs_read(ENT_CONTROLS) & ~ENT_GUEST_64);
+ vmcs_write(ENT_CONTROLS,
+ vmcs_read(ENT_CONTROLS) & ~VM_ENTRY_IA32E_MODE);
vmcs_write(GUEST_EFER, vmcs_read(GUEST_EFER) & ~EFER_LMA);
vmcs_write(GUEST_RIP, virt_to_phys(unrestricted_guest_main));
}
@@ -8343,7 +8348,8 @@ static void setup_unrestricted_guest(void)
static void unsetup_unrestricted_guest(void)
{
vmcs_write(GUEST_CR0, vmcs_read(GUEST_CR0) | X86_CR0_PG);
- vmcs_write(ENT_CONTROLS, vmcs_read(ENT_CONTROLS) | ENT_GUEST_64);
+ vmcs_write(ENT_CONTROLS,
+ vmcs_read(ENT_CONTROLS) | VM_ENTRY_IA32E_MODE);
vmcs_write(GUEST_EFER, vmcs_read(GUEST_EFER) | EFER_LMA);
vmcs_write(GUEST_RIP, (u64) phys_to_virt(vmcs_read(GUEST_RIP)));
vmcs_write(GUEST_RSP, (u64) phys_to_virt(vmcs_read(GUEST_RSP)));
@@ -9563,7 +9569,8 @@ static void vmx_db_test(void)
*/
if (this_cpu_has(X86_FEATURE_RTM)) {
vmcs_write(ENT_CONTROLS,
- vmcs_read(ENT_CONTROLS) | ENT_LOAD_DBGCTLS);
+ vmcs_read(ENT_CONTROLS) |
+ VM_ENTRY_LOAD_DEBUG_CONTROLS);
/*
* Set DR7.RTM[bit 11] and IA32_DEBUGCTL.RTM[bit 15]
* in the guest to enable advanced debugging of RTM
--
2.43.0
* [kvm-unit-tests PATCH 16/17] x86/vmx: switch to new vmx.h interrupt defs
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (14 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 15/17] x86/vmx: switch to new vmx.h entry controls Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-09-16 17:22 ` [kvm-unit-tests PATCH 17/17] x86/vmx: align exit reasons with Linux uapi Jon Kohler
2025-11-12 19:02 ` [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Sean Christopherson
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Migrate to the new vmx.h's interrupt definitions, which makes it easier
to follow the code from one code base to the other.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.h | 22 ----------------------
x86/vmx_tests.c | 12 ++++++------
2 files changed, 6 insertions(+), 28 deletions(-)
diff --git a/x86/vmx.h b/x86/vmx.h
index 8bb49d8e..99ba7e52 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -406,31 +406,9 @@ enum Reason {
VMX_XRSTORS = 64,
};
-enum Intr_type {
- VMX_INTR_TYPE_EXT_INTR = 0,
- VMX_INTR_TYPE_NMI_INTR = 2,
- VMX_INTR_TYPE_HARD_EXCEPTION = 3,
- VMX_INTR_TYPE_SOFT_INTR = 4,
- VMX_INTR_TYPE_SOFT_EXCEPTION = 6,
-};
-
-/*
- * Interruption-information format
- */
-#define INTR_INFO_VECTOR_MASK 0xff /* 7:0 */
-#define INTR_INFO_INTR_TYPE_MASK 0x700 /* 10:8 */
-#define INTR_INFO_DELIVER_CODE_MASK 0x800 /* 11 */
-#define INTR_INFO_UNBLOCK_NMI_MASK 0x1000 /* 12 */
-#define INTR_INFO_VALID_MASK 0x80000000 /* 31 */
#define INTR_INFO_INTR_TYPE_SHIFT 8
-/*
- * Guest interruptibility state
- */
-#define GUEST_INTR_STATE_MOVSS (1 << 1)
-#define GUEST_INTR_STATE_ENCLAVE (1 << 4)
-
#define SAVE_GPR \
"xchg %rax, regs\n\t" \
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 2f9858a3..338e39b0 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1763,7 +1763,7 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
vmcs_write(GUEST_ACTV_STATE, ACTV_HLT);
vmcs_write(ENT_INTR_INFO,
TIMER_VECTOR |
- (VMX_INTR_TYPE_EXT_INTR << INTR_INFO_INTR_TYPE_SHIFT) |
+ INTR_TYPE_EXT_INTR |
INTR_INFO_VALID_MASK);
break;
}
@@ -8803,7 +8803,7 @@ static void vmx_nmi_window_test(void)
* is one byte.)
*/
report_prefix_push("active, blocking by MOV-SS");
- vmcs_write(GUEST_INTR_STATE, GUEST_INTR_STATE_MOVSS);
+ vmcs_write(GUEST_INTR_STATE, GUEST_INTR_STATE_MOV_SS);
enter_guest();
verify_nmi_window_exit(nop_addr + 1);
report_prefix_pop();
@@ -8969,7 +8969,7 @@ static void vmx_intr_window_test(void)
* instruction. (NOP is one byte.)
*/
report_prefix_push("active, blocking by MOV-SS, RFLAGS.IF=1");
- vmcs_write(GUEST_INTR_STATE, GUEST_INTR_STATE_MOVSS);
+ vmcs_write(GUEST_INTR_STATE, GUEST_INTR_STATE_MOV_SS);
enter_guest();
verify_intr_window_exit(nop_addr + 1);
report_prefix_pop();
@@ -9479,7 +9479,7 @@ static void single_step_guest(const char *test_name, u64 starting_dr6,
vmcs_write(GUEST_RFLAGS, X86_EFLAGS_FIXED | X86_EFLAGS_TF);
if (pending_debug_exceptions) {
vmcs_write(GUEST_PENDING_DEBUG, pending_debug_exceptions);
- vmcs_write(GUEST_INTR_STATE, GUEST_INTR_STATE_MOVSS);
+ vmcs_write(GUEST_INTR_STATE, GUEST_INTR_STATE_MOV_SS);
}
enter_guest();
}
@@ -10982,9 +10982,9 @@ static void handle_exception_in_l1(u32 vector)
enter_guest();
if (vector == BP_VECTOR || vector == OF_VECTOR)
- intr_type = VMX_INTR_TYPE_SOFT_EXCEPTION;
+ intr_type = EVENT_TYPE_SWEXC;
else
- intr_type = VMX_INTR_TYPE_HARD_EXCEPTION;
+ intr_type = EVENT_TYPE_HWEXC;
intr_info = vmcs_read(EXI_INTR_INFO);
report((vmcs_read(EXI_REASON) == VMX_EXC_NMI) &&
--
2.43.0
* [kvm-unit-tests PATCH 17/17] x86/vmx: align exit reasons with Linux uapi
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (15 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 16/17] x86/vmx: switch to new vmx.h interrupt defs Jon Kohler
@ 2025-09-16 17:22 ` Jon Kohler
2025-11-12 19:02 ` [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Sean Christopherson
17 siblings, 0 replies; 21+ messages in thread
From: Jon Kohler @ 2025-09-16 17:22 UTC (permalink / raw)
To: seanjc, pbonzini, kvm; +Cc: Jon Kohler
Align x86/vmx.h's enum Reason with Linux's arch/x86/include/uapi/asm/vmx.h
EXIT_REASON_* definitions. Given how exit codes are already wired up in
KUT, it doesn't make much sense to switch to the uapi header itself;
however, aligning the definitions makes it easier to follow the code from
one code base to the other.
Note: this change picks up several previously undefined exit reasons, such
as UMWAIT, TPAUSE, BUS_LOCK, NOTIFY, and TDCALL, but does not add test
cases for them.
Note: Fixed misc indentation issues picked up by checkpatch along the way.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
---
x86/vmx.c | 134 ++++++++++++-----------
x86/vmx.h | 125 +++++++++++----------
x86/vmx_tests.c | 283 +++++++++++++++++++++++++-----------------------
3 files changed, 280 insertions(+), 262 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index 7be93a72..98b05754 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -582,67 +582,71 @@ static void __attribute__((__used__)) syscall_handler(u64 syscall_no)
current->syscall_handler(syscall_no);
}
+/* Naming scheme aligns with Linux's arch/x86/include/uapi/asm/vmx.h */
static const char * const exit_reason_descriptions[] = {
- [VMX_EXC_NMI] = "VMX_EXC_NMI",
- [VMX_EXTINT] = "VMX_EXTINT",
- [VMX_TRIPLE_FAULT] = "VMX_TRIPLE_FAULT",
- [VMX_INIT] = "VMX_INIT",
- [VMX_SIPI] = "VMX_SIPI",
- [VMX_SMI_IO] = "VMX_SMI_IO",
- [VMX_SMI_OTHER] = "VMX_SMI_OTHER",
- [VMX_INTR_WINDOW] = "VMX_INTR_WINDOW",
- [VMX_NMI_WINDOW] = "VMX_NMI_WINDOW",
- [VMX_TASK_SWITCH] = "VMX_TASK_SWITCH",
- [VMX_CPUID] = "VMX_CPUID",
- [VMX_GETSEC] = "VMX_GETSEC",
- [VMX_HLT] = "VMX_HLT",
- [VMX_INVD] = "VMX_INVD",
- [VMX_INVLPG] = "VMX_INVLPG",
- [VMX_RDPMC] = "VMX_RDPMC",
- [VMX_RDTSC] = "VMX_RDTSC",
- [VMX_RSM] = "VMX_RSM",
- [VMX_VMCALL] = "VMX_VMCALL",
- [VMX_VMCLEAR] = "VMX_VMCLEAR",
- [VMX_VMLAUNCH] = "VMX_VMLAUNCH",
- [VMX_VMPTRLD] = "VMX_VMPTRLD",
- [VMX_VMPTRST] = "VMX_VMPTRST",
- [VMX_VMREAD] = "VMX_VMREAD",
- [VMX_VMRESUME] = "VMX_VMRESUME",
- [VMX_VMWRITE] = "VMX_VMWRITE",
- [VMX_VMXOFF] = "VMX_VMXOFF",
- [VMX_VMXON] = "VMX_VMXON",
- [VMX_CR] = "VMX_CR",
- [VMX_DR] = "VMX_DR",
- [VMX_IO] = "VMX_IO",
- [VMX_RDMSR] = "VMX_RDMSR",
- [VMX_WRMSR] = "VMX_WRMSR",
- [VMX_FAIL_STATE] = "VMX_FAIL_STATE",
- [VMX_FAIL_MSR] = "VMX_FAIL_MSR",
- [VMX_MWAIT] = "VMX_MWAIT",
- [VMX_MTF] = "VMX_MTF",
- [VMX_MONITOR] = "VMX_MONITOR",
- [VMX_PAUSE] = "VMX_PAUSE",
- [VMX_FAIL_MCHECK] = "VMX_FAIL_MCHECK",
- [VMX_TPR_THRESHOLD] = "VMX_TPR_THRESHOLD",
- [VMX_APIC_ACCESS] = "VMX_APIC_ACCESS",
- [VMX_EOI_INDUCED] = "VMX_EOI_INDUCED",
- [VMX_GDTR_IDTR] = "VMX_GDTR_IDTR",
- [VMX_LDTR_TR] = "VMX_LDTR_TR",
- [VMX_EPT_VIOLATION] = "VMX_EPT_VIOLATION",
- [VMX_EPT_MISCONFIG] = "VMX_EPT_MISCONFIG",
- [VMX_INVEPT] = "VMX_INVEPT",
- [VMX_PREEMPT] = "VMX_PREEMPT",
- [VMX_INVVPID] = "VMX_INVVPID",
- [VMX_WBINVD] = "VMX_WBINVD",
- [VMX_XSETBV] = "VMX_XSETBV",
- [VMX_APIC_WRITE] = "VMX_APIC_WRITE",
- [VMX_RDRAND] = "VMX_RDRAND",
- [VMX_INVPCID] = "VMX_INVPCID",
- [VMX_VMFUNC] = "VMX_VMFUNC",
- [VMX_RDSEED] = "VMX_RDSEED",
- [VMX_PML_FULL] = "VMX_PML_FULL",
- [VMX_XSAVES] = "VMX_XSAVES",
- [VMX_XRSTORS] = "VMX_XRSTORS",
+ [EXIT_REASON_EXCEPTION_NMI] = "EXCEPTION_NMI",
+ [EXIT_REASON_EXTERNAL_INTERRUPT] = "EXTERNAL_INTERRUPT",
+ [EXIT_REASON_TRIPLE_FAULT] = "TRIPLE_FAULT",
+ [EXIT_REASON_INIT_SIGNAL] = "INIT_SIGNAL",
+ [EXIT_REASON_SIPI_SIGNAL] = "SIPI_SIGNAL",
+ [EXIT_REASON_INTERRUPT_WINDOW] = "INTERRUPT_WINDOW",
+ [EXIT_REASON_NMI_WINDOW] = "NMI_WINDOW",
+ [EXIT_REASON_TASK_SWITCH] = "TASK_SWITCH",
+ [EXIT_REASON_CPUID] = "CPUID",
+ [EXIT_REASON_HLT] = "HLT",
+ [EXIT_REASON_INVD] = "INVD",
+ [EXIT_REASON_INVLPG] = "INVLPG",
+ [EXIT_REASON_RDPMC] = "RDPMC",
+ [EXIT_REASON_RDTSC] = "RDTSC",
+ [EXIT_REASON_VMCALL] = "VMCALL",
+ [EXIT_REASON_VMCLEAR] = "VMCLEAR",
+ [EXIT_REASON_VMLAUNCH] = "VMLAUNCH",
+ [EXIT_REASON_VMPTRLD] = "VMPTRLD",
+ [EXIT_REASON_VMPTRST] = "VMPTRST",
+ [EXIT_REASON_VMREAD] = "VMREAD",
+ [EXIT_REASON_VMRESUME] = "VMRESUME",
+ [EXIT_REASON_VMWRITE] = "VMWRITE",
+ [EXIT_REASON_VMOFF] = "VMOFF",
+ [EXIT_REASON_VMON] = "VMON",
+ [EXIT_REASON_CR_ACCESS] = "CR_ACCESS",
+ [EXIT_REASON_DR_ACCESS] = "DR_ACCESS",
+ [EXIT_REASON_IO_INSTRUCTION] = "IO_INSTRUCTION",
+ [EXIT_REASON_MSR_READ] = "MSR_READ",
+ [EXIT_REASON_MSR_WRITE] = "MSR_WRITE",
+ [EXIT_REASON_INVALID_STATE] = "INVALID_STATE",
+ [EXIT_REASON_MSR_LOAD_FAIL] = "MSR_LOAD_FAIL",
+ [EXIT_REASON_MWAIT_INSTRUCTION] = "MWAIT_INSTRUCTION",
+ [EXIT_REASON_MONITOR_TRAP_FLAG] = "MONITOR_TRAP_FLAG",
+ [EXIT_REASON_MONITOR_INSTRUCTION] = "MONITOR_INSTRUCTION",
+ [EXIT_REASON_PAUSE_INSTRUCTION] = "PAUSE_INSTRUCTION",
+ [EXIT_REASON_MCE_DURING_VMENTRY] = "MCE_DURING_VMENTRY",
+ [EXIT_REASON_TPR_BELOW_THRESHOLD] = "TPR_BELOW_THRESHOLD",
+ [EXIT_REASON_APIC_ACCESS] = "APIC_ACCESS",
+ [EXIT_REASON_EOI_INDUCED] = "EOI_INDUCED",
+ [EXIT_REASON_GDTR_IDTR] = "GDTR_IDTR",
+ [EXIT_REASON_LDTR_TR] = "LDTR_TR",
+ [EXIT_REASON_EPT_VIOLATION] = "EPT_VIOLATION",
+ [EXIT_REASON_EPT_MISCONFIG] = "EPT_MISCONFIG",
+ [EXIT_REASON_INVEPT] = "INVEPT",
+ [EXIT_REASON_RDTSCP] = "RDTSCP",
+ [EXIT_REASON_PREEMPTION_TIMER] = "PREEMPTION_TIMER",
+ [EXIT_REASON_INVVPID] = "INVVPID",
+ [EXIT_REASON_WBINVD] = "WBINVD",
+ [EXIT_REASON_XSETBV] = "XSETBV",
+ [EXIT_REASON_APIC_WRITE] = "APIC_WRITE",
+ [EXIT_REASON_RDRAND] = "RDRAND",
+ [EXIT_REASON_INVPCID] = "INVPCID",
+ [EXIT_REASON_VMFUNC] = "VMFUNC",
+ [EXIT_REASON_ENCLS] = "ENCLS",
+ [EXIT_REASON_RDSEED] = "RDSEED",
+ [EXIT_REASON_PML_FULL] = "PML_FULL",
+ [EXIT_REASON_XSAVES] = "XSAVES",
+ [EXIT_REASON_XRSTORS] = "XRSTORS",
+ [EXIT_REASON_UMWAIT] = "UMWAIT",
+ [EXIT_REASON_TPAUSE] = "TPAUSE",
+ [EXIT_REASON_BUS_LOCK] = "BUS_LOCK",
+ [EXIT_REASON_NOTIFY] = "NOTIFY",
+ [EXIT_REASON_TDCALL] = "TDCALL",
};
const char *exit_reason_description(u64 reason)
@@ -698,13 +702,13 @@ void print_vmentry_failure_info(struct vmentry_result *result)
result->instr, result->exit_reason.full, qual);
switch (result->exit_reason.basic) {
- case VMX_FAIL_STATE:
+ case EXIT_REASON_INVALID_STATE:
printf("invalid guest state\n");
break;
- case VMX_FAIL_MSR:
+ case EXIT_REASON_MSR_LOAD_FAIL:
printf("MSR loading\n");
break;
- case VMX_FAIL_MCHECK:
+ case EXIT_REASON_MCE_DURING_VMENTRY:
printf("machine-check event\n");
break;
default:
@@ -1681,7 +1685,7 @@ void __attribute__((__used__)) hypercall(u32 hypercall_no)
static bool is_hypercall(union exit_reason exit_reason)
{
- return exit_reason.basic == VMX_VMCALL &&
+ return exit_reason.basic == EXIT_REASON_VMCALL &&
(hypercall_field & HYPERCALL_BIT);
}
@@ -2002,7 +2006,7 @@ void __enter_guest(u8 abort_flag, struct vmentry_result *result)
}
if (result->exit_reason.failed_vmentry) {
if ((abort_flag & ABORT_ON_INVALID_GUEST_STATE) ||
- result->exit_reason.basic != VMX_FAIL_STATE)
+ result->exit_reason.basic != EXIT_REASON_INVALID_STATE)
goto do_abort;
return;
}
diff --git a/x86/vmx.h b/x86/vmx.h
index 99ba7e52..5001886b 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -343,67 +343,72 @@ enum Encoding {
#define VMX_ENTRY_FLAGS (X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | \
X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF)
+/* Naming scheme aligns with Linux's arch/x86/include/uapi/asm/vmx.h */
enum Reason {
- VMX_EXC_NMI = 0,
- VMX_EXTINT = 1,
- VMX_TRIPLE_FAULT = 2,
- VMX_INIT = 3,
- VMX_SIPI = 4,
- VMX_SMI_IO = 5,
- VMX_SMI_OTHER = 6,
- VMX_INTR_WINDOW = 7,
- VMX_NMI_WINDOW = 8,
- VMX_TASK_SWITCH = 9,
- VMX_CPUID = 10,
- VMX_GETSEC = 11,
- VMX_HLT = 12,
- VMX_INVD = 13,
- VMX_INVLPG = 14,
- VMX_RDPMC = 15,
- VMX_RDTSC = 16,
- VMX_RSM = 17,
- VMX_VMCALL = 18,
- VMX_VMCLEAR = 19,
- VMX_VMLAUNCH = 20,
- VMX_VMPTRLD = 21,
- VMX_VMPTRST = 22,
- VMX_VMREAD = 23,
- VMX_VMRESUME = 24,
- VMX_VMWRITE = 25,
- VMX_VMXOFF = 26,
- VMX_VMXON = 27,
- VMX_CR = 28,
- VMX_DR = 29,
- VMX_IO = 30,
- VMX_RDMSR = 31,
- VMX_WRMSR = 32,
- VMX_FAIL_STATE = 33,
- VMX_FAIL_MSR = 34,
- VMX_MWAIT = 36,
- VMX_MTF = 37,
- VMX_MONITOR = 39,
- VMX_PAUSE = 40,
- VMX_FAIL_MCHECK = 41,
- VMX_TPR_THRESHOLD = 43,
- VMX_APIC_ACCESS = 44,
- VMX_EOI_INDUCED = 45,
- VMX_GDTR_IDTR = 46,
- VMX_LDTR_TR = 47,
- VMX_EPT_VIOLATION = 48,
- VMX_EPT_MISCONFIG = 49,
- VMX_INVEPT = 50,
- VMX_PREEMPT = 52,
- VMX_INVVPID = 53,
- VMX_WBINVD = 54,
- VMX_XSETBV = 55,
- VMX_APIC_WRITE = 56,
- VMX_RDRAND = 57,
- VMX_INVPCID = 58,
- VMX_VMFUNC = 59,
- VMX_RDSEED = 61,
- VMX_PML_FULL = 62,
- VMX_XSAVES = 63,
- VMX_XRSTORS = 64,
+ EXIT_REASON_EXCEPTION_NMI = 0,
+ EXIT_REASON_EXTERNAL_INTERRUPT = 1,
+ EXIT_REASON_TRIPLE_FAULT = 2,
+ EXIT_REASON_INIT_SIGNAL = 3,
+ EXIT_REASON_SIPI_SIGNAL = 4,
+ EXIT_REASON_OTHER_SMI = 6,
+ EXIT_REASON_INTERRUPT_WINDOW = 7,
+ EXIT_REASON_NMI_WINDOW = 8,
+ EXIT_REASON_TASK_SWITCH = 9,
+ EXIT_REASON_CPUID = 10,
+ EXIT_REASON_HLT = 12,
+ EXIT_REASON_INVD = 13,
+ EXIT_REASON_INVLPG = 14,
+ EXIT_REASON_RDPMC = 15,
+ EXIT_REASON_RDTSC = 16,
+ EXIT_REASON_VMCALL = 18,
+ EXIT_REASON_VMCLEAR = 19,
+ EXIT_REASON_VMLAUNCH = 20,
+ EXIT_REASON_VMPTRLD = 21,
+ EXIT_REASON_VMPTRST = 22,
+ EXIT_REASON_VMREAD = 23,
+ EXIT_REASON_VMRESUME = 24,
+ EXIT_REASON_VMWRITE = 25,
+ EXIT_REASON_VMOFF = 26,
+ EXIT_REASON_VMON = 27,
+ EXIT_REASON_CR_ACCESS = 28,
+ EXIT_REASON_DR_ACCESS = 29,
+ EXIT_REASON_IO_INSTRUCTION = 30,
+ EXIT_REASON_MSR_READ = 31,
+ EXIT_REASON_MSR_WRITE = 32,
+ EXIT_REASON_INVALID_STATE = 33,
+ EXIT_REASON_MSR_LOAD_FAIL = 34,
+ EXIT_REASON_MWAIT_INSTRUCTION = 36,
+ EXIT_REASON_MONITOR_TRAP_FLAG = 37,
+ EXIT_REASON_MONITOR_INSTRUCTION = 39,
+ EXIT_REASON_PAUSE_INSTRUCTION = 40,
+ EXIT_REASON_MCE_DURING_VMENTRY = 41,
+ EXIT_REASON_TPR_BELOW_THRESHOLD = 43,
+ EXIT_REASON_APIC_ACCESS = 44,
+ EXIT_REASON_EOI_INDUCED = 45,
+ EXIT_REASON_GDTR_IDTR = 46,
+ EXIT_REASON_LDTR_TR = 47,
+ EXIT_REASON_EPT_VIOLATION = 48,
+ EXIT_REASON_EPT_MISCONFIG = 49,
+ EXIT_REASON_INVEPT = 50,
+ EXIT_REASON_RDTSCP = 51,
+ EXIT_REASON_PREEMPTION_TIMER = 52,
+ EXIT_REASON_INVVPID = 53,
+ EXIT_REASON_WBINVD = 54,
+ EXIT_REASON_XSETBV = 55,
+ EXIT_REASON_APIC_WRITE = 56,
+ EXIT_REASON_RDRAND = 57,
+ EXIT_REASON_INVPCID = 58,
+ EXIT_REASON_VMFUNC = 59,
+ EXIT_REASON_ENCLS = 60,
+ EXIT_REASON_RDSEED = 61,
+ EXIT_REASON_PML_FULL = 62,
+ EXIT_REASON_XSAVES = 63,
+ EXIT_REASON_XRSTORS = 64,
+ EXIT_REASON_UMWAIT = 67,
+ EXIT_REASON_TPAUSE = 68,
+ EXIT_REASON_BUS_LOCK = 74,
+ EXIT_REASON_NOTIFY = 75,
+ EXIT_REASON_TDCALL = 77,
};
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 338e39b0..dd4bd43c 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -106,7 +106,7 @@ static int vmenter_exit_handler(union exit_reason exit_reason)
u64 guest_rip = vmcs_read(GUEST_RIP);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
if (regs.rax != 0xABCD) {
report_fail("test vmresume");
return VMX_TEST_VMEXIT;
@@ -178,7 +178,7 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
guest_rip = vmcs_read(GUEST_RIP);
insn_len = vmcs_read(EXI_INST_LEN);
switch (exit_reason.basic) {
- case VMX_PREEMPT:
+ case EXIT_REASON_PREEMPTION_TIMER:
switch (vmx_get_test_stage()) {
case 1:
case 2:
@@ -212,7 +212,7 @@ static int preemption_timer_exit_handler(union exit_reason exit_reason)
break;
}
break;
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
vmcs_write(GUEST_RIP, guest_rip + insn_len);
switch (vmx_get_test_stage()) {
case 0:
@@ -361,7 +361,7 @@ static int test_ctrl_pat_exit_handler(union exit_reason exit_reason)
guest_rip = vmcs_read(GUEST_RIP);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
guest_pat = vmcs_read(GUEST_PAT);
if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_IA32_PAT)) {
printf("\tEXI_SAVE_PAT is not supported\n");
@@ -429,7 +429,7 @@ static int test_ctrl_efer_exit_handler(union exit_reason exit_reason)
guest_rip = vmcs_read(GUEST_RIP);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
guest_efer = vmcs_read(GUEST_EFER);
if (!(ctrl_exit_rev.clr & VM_EXIT_SAVE_IA32_EFER)) {
printf("\tEXI_SAVE_EFER is not supported\n");
@@ -556,7 +556,7 @@ static int cr_shadowing_exit_handler(union exit_reason exit_reason)
insn_len = vmcs_read(EXI_INST_LEN);
exit_qual = vmcs_read(EXI_QUALIFICATION);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 0:
report(guest_cr0 == vmcs_read(GUEST_CR0),
@@ -599,7 +599,7 @@ static int cr_shadowing_exit_handler(union exit_reason exit_reason)
}
vmcs_write(GUEST_RIP, guest_rip + insn_len);
return VMX_TEST_RESUME;
- case VMX_CR:
+ case EXIT_REASON_CR_ACCESS:
switch (vmx_get_test_stage()) {
case 4:
report_fail("Read shadowing CR0");
@@ -719,7 +719,7 @@ static int iobmp_exit_handler(union exit_reason exit_reason)
exit_qual = vmcs_read(EXI_QUALIFICATION);
insn_len = vmcs_read(EXI_INST_LEN);
switch (exit_reason.basic) {
- case VMX_IO:
+ case EXIT_REASON_IO_INSTRUCTION:
switch (vmx_get_test_stage()) {
case 0:
case 1:
@@ -776,7 +776,7 @@ static int iobmp_exit_handler(union exit_reason exit_reason)
}
vmcs_write(GUEST_RIP, guest_rip + insn_len);
return VMX_TEST_RESUME;
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 9:
ctrl_cpu0 = vmcs_read(CPU_EXEC_CTRL0);
@@ -936,9 +936,9 @@ static struct insn_table insn_table[] = {
0, 0, 0},
/* LTR causes a #GP if done with a busy selector, so it is not tested. */
{"RDRAND", SECONDARY_EXEC_RDRAND_EXITING, insn_rdrand, INSN_CPU1,
- VMX_RDRAND, 0, 0, 0},
+ EXIT_REASON_RDRAND, 0, 0, 0},
{"RDSEED", SECONDARY_EXEC_RDSEED_EXITING, insn_rdseed, INSN_CPU1,
- VMX_RDSEED, 0, 0, 0},
+ EXIT_REASON_RDSEED, 0, 0, 0},
// Instructions always trap
{"CPUID", 0, insn_cpuid, INSN_ALWAYS_TRAP, 10, 0, 0, 0},
{"INVD", 0, insn_invd, INSN_ALWAYS_TRAP, 13, 0, 0, 0},
@@ -1024,7 +1024,7 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
insn_len = vmcs_read(EXI_INST_LEN);
insn_info = vmcs_read(EXI_INST_INFO);
- if (exit_reason.basic == VMX_VMCALL) {
+ if (exit_reason.basic == EXIT_REASON_VMCALL) {
u32 val = 0;
if (insn_table[cur_insn].type == INSN_CPU0)
@@ -1323,7 +1323,7 @@ static int pml_exit_handler(union exit_reason exit_reason)
u32 insn_len = vmcs_read(EXI_INST_LEN);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 0:
index = vmcs_read(GUEST_PML_INDEX);
@@ -1348,7 +1348,7 @@ static int pml_exit_handler(union exit_reason exit_reason)
}
vmcs_write(GUEST_RIP, guest_rip + insn_len);
return VMX_TEST_RESUME;
- case VMX_PML_FULL:
+ case EXIT_REASON_PML_FULL:
vmx_inc_test_stage();
vmcs_write(GUEST_PML_INDEX, PML_INDEX - 1);
return VMX_TEST_RESUME;
@@ -1374,7 +1374,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
exit_qual = vmcs_read(EXI_QUALIFICATION);
pteval_t *ptep;
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 0:
check_ept_ad(pml4, guest_cr3,
@@ -1452,7 +1452,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
}
vmcs_write(GUEST_RIP, guest_rip + insn_len);
return VMX_TEST_RESUME;
- case VMX_EPT_MISCONFIG:
+ case EXIT_REASON_EPT_MISCONFIG:
switch (vmx_get_test_stage()) {
case 1:
case 2:
@@ -1472,7 +1472,7 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
return VMX_TEST_VMEXIT;
}
return VMX_TEST_RESUME;
- case VMX_EPT_VIOLATION:
+ case EXIT_REASON_EPT_VIOLATION:
/*
* Exit-qualifications are masked not to account for advanced
* VM-exit information. Once KVM supports this feature, this
@@ -1731,7 +1731,7 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
u32 insn_len = vmcs_read(EXI_INST_LEN);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 0:
case 2:
@@ -1770,7 +1770,7 @@ static int interrupt_exit_handler(union exit_reason exit_reason)
vmx_inc_test_stage();
vmcs_write(GUEST_RIP, guest_rip + insn_len);
return VMX_TEST_RESUME;
- case VMX_EXTINT:
+ case EXIT_REASON_EXTERNAL_INTERRUPT:
if (vmcs_read(EXI_CONTROLS) & VM_EXIT_ACK_INTR_ON_EXIT) {
int vector = vmcs_read(EXI_INTR_INFO) & 0xff;
handle_external_interrupt(vector);
@@ -1867,11 +1867,11 @@ static int nmi_hlt_exit_handler(union exit_reason exit_reason)
switch (vmx_get_test_stage()) {
case 1:
- if (exit_reason.basic != VMX_VMCALL) {
- report_fail("VMEXIT not due to vmcall. Exit reason 0x%x",
- exit_reason.full);
- print_vmexit_info(exit_reason);
- return VMX_TEST_VMEXIT;
+ if (exit_reason.basic != EXIT_REASON_VMCALL) {
+ report_fail("VMEXIT not due to vmcall. Exit reason 0x%x",
+ exit_reason.full);
+ print_vmexit_info(exit_reason);
+ return VMX_TEST_VMEXIT;
}
vmcs_write(PIN_CONTROLS,
@@ -1882,15 +1882,15 @@ static int nmi_hlt_exit_handler(union exit_reason exit_reason)
break;
case 2:
- if (exit_reason.basic != VMX_EXC_NMI) {
- report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
- exit_reason.full);
- print_vmexit_info(exit_reason);
- return VMX_TEST_VMEXIT;
- }
- report_pass("NMI intercept while running guest");
- vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
- break;
+ if (exit_reason.basic != EXIT_REASON_EXCEPTION_NMI) {
+ report_fail("VMEXIT not due to NMI intercept. Exit reason 0x%x",
+ exit_reason.full);
+ print_vmexit_info(exit_reason);
+ return VMX_TEST_VMEXIT;
+ }
+ report_pass("NMI intercept while running guest");
+ vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
+ break;
case 3:
break;
@@ -1982,7 +1982,7 @@ static int dbgctls_exit_handler(union exit_reason exit_reason)
debugctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 0:
if (dr7 == 0x400 && debugctl == 0 &&
@@ -2061,7 +2061,8 @@ static void msr_switch_main(void)
static int msr_switch_exit_handler(union exit_reason exit_reason)
{
- if (exit_reason.basic == VMX_VMCALL && vmx_get_test_stage() == 2) {
+ if (exit_reason.basic == EXIT_REASON_VMCALL &&
+ vmx_get_test_stage() == 2) {
report(exit_msr_store[0].value == MSR_MAGIC + 1,
"VM exit MSR store");
report(rdmsr(MSR_KERNEL_GS_BASE) == MSR_MAGIC + 2,
@@ -2083,7 +2084,7 @@ static int msr_switch_entry_failure(struct vmentry_result *result)
}
if (result->exit_reason.failed_vmentry &&
- result->exit_reason.basic == VMX_FAIL_MSR &&
+ result->exit_reason.basic == EXIT_REASON_MSR_LOAD_FAIL &&
vmx_get_test_stage() == 3) {
report(vmcs_read(EXI_QUALIFICATION) == 1,
"VM entry MSR load: try to load FS_BASE");
@@ -2113,11 +2114,11 @@ static void vmmcall_main(void)
static int vmmcall_exit_handler(union exit_reason exit_reason)
{
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
printf("here\n");
report_fail("VMMCALL triggers #UD");
break;
- case VMX_EXC_NMI:
+ case EXIT_REASON_EXCEPTION_NMI:
report((vmcs_read(EXI_INTR_INFO) & 0xff) == UD_VECTOR,
"VMMCALL triggers #UD");
break;
@@ -2177,7 +2178,7 @@ static void disable_rdtscp_main(void)
static int disable_rdtscp_exit_handler(union exit_reason exit_reason)
{
switch (exit_reason.basic) {
- case VMX_VMCALL:
+ case EXIT_REASON_VMCALL:
switch (vmx_get_test_stage()) {
case 0:
report_fail("RDTSCP triggers #UD");
@@ -2230,7 +2231,7 @@ static void skip_exit_insn(void)
static void skip_exit_vmcall(void)
{
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
skip_exit_insn();
}
@@ -2410,7 +2411,7 @@ static void do_ept_violation(bool leaf, enum ept_access_op op,
/* Try the access and observe the violation. */
do_ept_access_op(op);
- assert_exit_reason(VMX_EPT_VIOLATION);
+ assert_exit_reason(EXIT_REASON_EPT_VIOLATION);
qual = vmcs_read(EXI_QUALIFICATION);
@@ -2661,7 +2662,7 @@ static void ept_misconfig_at_level_mkhuge_op(bool mkhuge, int level,
orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
do_ept_access_op(op);
- assert_exit_reason(VMX_EPT_MISCONFIG);
+ assert_exit_reason(EXIT_REASON_EPT_MISCONFIG);
/* Intel 27.2.1, "For all other VM exits, this field is cleared." */
#if 0
@@ -3522,8 +3523,8 @@ static bool vmlaunch(void)
return false;
success:
exit_reason = vmcs_read(EXI_REASON);
- TEST_ASSERT(exit_reason == (VMX_FAIL_STATE | VMX_ENTRY_FAILURE) ||
- exit_reason == (VMX_FAIL_MSR | VMX_ENTRY_FAILURE));
+ TEST_ASSERT(exit_reason == (EXIT_REASON_INVALID_STATE | VMX_ENTRY_FAILURE) ||
+ exit_reason == (EXIT_REASON_MSR_LOAD_FAIL | VMX_ENTRY_FAILURE));
return true;
}
@@ -5239,7 +5240,7 @@ static void report_mtf(const char *insn_name, unsigned long exp_rip)
{
unsigned long rip = vmcs_read(GUEST_RIP);
- assert_exit_reason(VMX_MTF);
+ assert_exit_reason(EXIT_REASON_MONITOR_TRAP_FLAG);
report(rip == exp_rip, "MTF VM-exit after %s. RIP: 0x%lx (expected 0x%lx)",
insn_name, rip, exp_rip);
}
@@ -5464,7 +5465,7 @@ static void vmx_mtf_pdpte_test(void)
enable_mtf();
enter_guest();
- assert_exit_reason(VMX_MTF);
+ assert_exit_reason(EXIT_REASON_MONITOR_TRAP_FLAG);
disable_mtf();
/*
@@ -5626,10 +5627,10 @@ static void test_guest_state(const char *test, bool xfail, u64 field,
__enter_guest(abort_flags, &result);
report(result.exit_reason.failed_vmentry == xfail &&
- ((xfail && result.exit_reason.basic == VMX_FAIL_STATE) ||
- (!xfail && result.exit_reason.basic == VMX_VMCALL)) &&
+ ((xfail && result.exit_reason.basic == EXIT_REASON_INVALID_STATE) ||
+ (!xfail && result.exit_reason.basic == EXIT_REASON_VMCALL)) &&
(!xfail || vmcs_read(EXI_QUALIFICATION) == ENTRY_FAIL_DEFAULT),
- "%s, %s = %lx", test, field_name, field);
+ "%s, %s = %lx", test, field_name, field);
if (!result.exit_reason.failed_vmentry)
skip_exit_insn();
@@ -5810,21 +5811,21 @@ static bool apic_reg_virt_exit_expectation(
config->virtualize_apic_accesses &&
config->activate_secondary_controls;
if (virtualize_apic_accesses_only) {
- expectation->rd_exit_reason = VMX_APIC_ACCESS;
- expectation->wr_exit_reason = VMX_APIC_ACCESS;
+ expectation->rd_exit_reason = EXIT_REASON_APIC_ACCESS;
+ expectation->wr_exit_reason = EXIT_REASON_APIC_ACCESS;
} else if (virtualize_apic_accesses_and_use_tpr_shadow) {
switch (reg) {
case APIC_TASKPRI:
- expectation->rd_exit_reason = VMX_VMCALL;
- expectation->wr_exit_reason = VMX_VMCALL;
+ expectation->rd_exit_reason = EXIT_REASON_VMCALL;
+ expectation->wr_exit_reason = EXIT_REASON_VMCALL;
expectation->virt_fn = apic_virt_nibble1;
break;
default:
- expectation->rd_exit_reason = VMX_APIC_ACCESS;
- expectation->wr_exit_reason = VMX_APIC_ACCESS;
+ expectation->rd_exit_reason = EXIT_REASON_APIC_ACCESS;
+ expectation->wr_exit_reason = EXIT_REASON_APIC_ACCESS;
}
} else if (apic_register_virtualization) {
- expectation->rd_exit_reason = VMX_VMCALL;
+ expectation->rd_exit_reason = EXIT_REASON_VMCALL;
switch (reg) {
case APIC_ID:
@@ -5842,25 +5843,25 @@ static bool apic_reg_virt_exit_expectation(
case APIC_LVTERR:
case APIC_TMICT:
case APIC_TDCR:
- expectation->wr_exit_reason = VMX_APIC_WRITE;
+ expectation->wr_exit_reason = EXIT_REASON_APIC_WRITE;
break;
case APIC_LVR:
case APIC_ISR ... APIC_ISR + 0x70:
case APIC_TMR ... APIC_TMR + 0x70:
case APIC_IRR ... APIC_IRR + 0x70:
- expectation->wr_exit_reason = VMX_APIC_ACCESS;
+ expectation->wr_exit_reason = EXIT_REASON_APIC_ACCESS;
break;
case APIC_TASKPRI:
- expectation->wr_exit_reason = VMX_VMCALL;
+ expectation->wr_exit_reason = EXIT_REASON_VMCALL;
expectation->virt_fn = apic_virt_nibble1;
break;
case APIC_ICR2:
- expectation->wr_exit_reason = VMX_VMCALL;
+ expectation->wr_exit_reason = EXIT_REASON_VMCALL;
expectation->virt_fn = apic_virt_byte3;
break;
default:
- expectation->rd_exit_reason = VMX_APIC_ACCESS;
- expectation->wr_exit_reason = VMX_APIC_ACCESS;
+ expectation->rd_exit_reason = EXIT_REASON_APIC_ACCESS;
+ expectation->wr_exit_reason = EXIT_REASON_APIC_ACCESS;
}
} else if (!expectation->virtualize_apic_accesses) {
/*
@@ -5869,8 +5870,8 @@ static bool apic_reg_virt_exit_expectation(
* the use TPR shadow control, but not through directly
* accessing VTPR.
*/
- expectation->rd_exit_reason = VMX_VMCALL;
- expectation->wr_exit_reason = VMX_VMCALL;
+ expectation->rd_exit_reason = EXIT_REASON_VMCALL;
+ expectation->wr_exit_reason = EXIT_REASON_VMCALL;
} else {
printf("Cannot parse APIC register virtualization config:\n"
"\tvirtualize_apic_accesses: %d\n"
@@ -6111,14 +6112,14 @@ static void test_xapic_rd(
args->apic_access_address = apic_access_address;
args->reg = reg;
args->val = val;
- args->check_rd = exit_reason_want == VMX_VMCALL;
+ args->check_rd = exit_reason_want == EXIT_REASON_VMCALL;
args->virt_fn = expectation->virt_fn;
/* Setup virtual APIC page */
if (!expectation->virtualize_apic_accesses) {
apic_access_address[apic_reg_index(reg)] = val;
virtual_apic_page[apic_reg_index(reg)] = 0;
- } else if (exit_reason_want == VMX_VMCALL) {
+ } else if (exit_reason_want == EXIT_REASON_VMCALL) {
apic_access_address[apic_reg_index(reg)] = 0;
virtual_apic_page[apic_reg_index(reg)] = val;
}
@@ -6130,7 +6131,7 @@ static void test_xapic_rd(
* Validate the behavior and
* pass a magic value back to the guest.
*/
- if (exit_reason_want == VMX_APIC_ACCESS) {
+ if (exit_reason_want == EXIT_REASON_APIC_ACCESS) {
u32 apic_page_offset = vmcs_read(EXI_QUALIFICATION) & 0xfff;
assert_exit_reason(exit_reason_want);
@@ -6141,7 +6142,7 @@ static void test_xapic_rd(
/* Reenter guest so it can consume/check rcx and exit again. */
enter_guest();
- } else if (exit_reason_want != VMX_VMCALL) {
+ } else if (exit_reason_want != EXIT_REASON_VMCALL) {
report_fail("Oops, bad exit expectation: %u.", exit_reason_want);
}
@@ -6158,8 +6159,8 @@ static void test_xapic_wr(
struct apic_reg_virt_guest_args *args = &apic_reg_virt_guest_args;
bool virtualized =
expectation->virtualize_apic_accesses &&
- (exit_reason_want == VMX_APIC_WRITE ||
- exit_reason_want == VMX_VMCALL);
+ (exit_reason_want == EXIT_REASON_APIC_WRITE ||
+ exit_reason_want == EXIT_REASON_VMCALL);
bool checked = false;
report_prefix_pushf("xapic - writing 0x%x to 0x%03x", val, reg);
@@ -6183,7 +6184,7 @@ static void test_xapic_wr(
* Validate the behavior and
* pass a magic value back to the guest.
*/
- if (exit_reason_want == VMX_APIC_ACCESS) {
+ if (exit_reason_want == EXIT_REASON_APIC_ACCESS) {
u32 apic_page_offset = vmcs_read(EXI_QUALIFICATION) & 0xfff;
assert_exit_reason(exit_reason_want);
@@ -6194,7 +6195,7 @@ static void test_xapic_wr(
/* Reenter guest so it can consume/check rcx and exit again. */
enter_guest();
- } else if (exit_reason_want == VMX_APIC_WRITE) {
+ } else if (exit_reason_want == EXIT_REASON_APIC_WRITE) {
assert_exit_reason(exit_reason_want);
report(virtual_apic_page[apic_reg_index(reg)] == val,
"got APIC write exit @ page offset 0x%03x; val is 0x%x, want 0x%x",
@@ -6204,11 +6205,11 @@ static void test_xapic_wr(
/* Reenter guest so it can consume/check rcx and exit again. */
enter_guest();
- } else if (exit_reason_want != VMX_VMCALL) {
+ } else if (exit_reason_want != EXIT_REASON_VMCALL) {
report_fail("Oops, bad exit expectation: %u.", exit_reason_want);
}
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
if (virtualized && !checked) {
u32 want = expectation->virt_fn(val);
u32 got = virtual_apic_page[apic_reg_index(reg)];
@@ -6398,7 +6399,7 @@ static void apic_reg_virt_test(void)
vmcs_write(CPU_EXEC_CTRL1, cpu_exec_ctrl1);
args->op = TERMINATE;
enter_guest();
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
}
struct virt_x2apic_mode_config {
@@ -6461,7 +6462,7 @@ static void virt_x2apic_mode_rd_expectation(
{
enum x2apic_reg_semantics semantics = get_x2apic_reg_semantics(reg);
- expectation->rd_exit_reason = VMX_VMCALL;
+ expectation->rd_exit_reason = EXIT_REASON_VMCALL;
expectation->virt_fn = virt_x2apic_mode_identity;
if (virt_x2apic_mode_on && apic_register_virtualization) {
expectation->rd_val = MAGIC_VAL_1;
@@ -6569,7 +6570,7 @@ static void virt_x2apic_mode_wr_expectation(
bool virt_int_delivery,
struct virt_x2apic_mode_expectation *expectation)
{
- expectation->wr_exit_reason = VMX_VMCALL;
+ expectation->wr_exit_reason = EXIT_REASON_VMCALL;
expectation->wr_val = MAGIC_VAL_1;
expectation->wr_only = false;
@@ -6578,14 +6579,14 @@ static void virt_x2apic_mode_wr_expectation(
virt_int_delivery)) {
expectation->wr_behavior = X2APIC_ACCESS_VIRTUALIZED;
if (reg == APIC_SELF_IPI)
- expectation->wr_exit_reason = VMX_APIC_WRITE;
+ expectation->wr_exit_reason = EXIT_REASON_APIC_WRITE;
} else if (!disable_x2apic &&
get_x2apic_wr_val(reg, &expectation->wr_val)) {
expectation->wr_behavior = X2APIC_ACCESS_PASSED_THROUGH;
if (reg == APIC_EOI || reg == APIC_SELF_IPI)
expectation->wr_only = true;
if (reg == APIC_ICR)
- expectation->wr_exit_reason = VMX_EXTINT;
+ expectation->wr_exit_reason = EXIT_REASON_EXTERNAL_INTERRUPT;
} else {
expectation->wr_behavior = X2APIC_ACCESS_TRIGGERS_GP;
/*
@@ -6928,9 +6929,8 @@ static void test_x2apic_rd(
/* Enter guest */
enter_guest();
- if (exit_reason_want != VMX_VMCALL) {
+ if (exit_reason_want != EXIT_REASON_VMCALL)
report_fail("Oops, bad exit expectation: %u.", exit_reason_want);
- }
skip_exit_vmcall();
report_prefix_pop();
@@ -6978,7 +6978,7 @@ static void test_x2apic_wr(
* Validate the behavior and
* pass a magic value back to the guest.
*/
- if (exit_reason_want == VMX_EXTINT) {
+ if (exit_reason_want == EXIT_REASON_EXTERNAL_INTERRUPT) {
assert_exit_reason(exit_reason_want);
/* Clear the external interrupt. */
@@ -6987,7 +6987,7 @@ static void test_x2apic_wr(
"Got pending interrupt after IRQ enabled.");
enter_guest();
- } else if (exit_reason_want == VMX_APIC_WRITE) {
+ } else if (exit_reason_want == EXIT_REASON_APIC_WRITE) {
assert_exit_reason(exit_reason_want);
report(virtual_apic_page[apic_reg_index(reg)] == val,
"got APIC write exit @ page offset 0x%03x; val is 0x%x, want 0x%lx",
@@ -6996,11 +6996,11 @@ static void test_x2apic_wr(
/* Reenter guest so it can consume/check rcx and exit again. */
enter_guest();
- } else if (exit_reason_want != VMX_VMCALL) {
+ } else if (exit_reason_want != EXIT_REASON_VMCALL) {
report_fail("Oops, bad exit expectation: %u.", exit_reason_want);
}
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
if (expectation->wr_behavior == X2APIC_ACCESS_VIRTUALIZED) {
u64 want = val;
u32 got = virtual_apic_page[apic_reg_index(reg)];
@@ -7180,7 +7180,7 @@ static void virt_x2apic_mode_test(void)
vmcs_write(CPU_EXEC_CTRL1, cpu_exec_ctrl1);
args->op = X2APIC_TERMINATE;
enter_guest();
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
}
static void test_ctl_reg(const char *cr_name, u64 cr, u64 fixed0, u64 fixed1)
@@ -7663,7 +7663,7 @@ static void test_perf_global_ctrl(u32 nr, const char *name, u32 ctrl_nr,
val = 1ull << i;
vmcs_write(nr, val);
report_prefix_pushf("%s = 0x%lx", name, val);
- test_pgc_vmlaunch(0, VMX_VMCALL, false, host);
+ test_pgc_vmlaunch(0, EXIT_REASON_VMCALL, false, host);
report_prefix_pop();
}
report_prefix_pop();
@@ -7678,7 +7678,7 @@ static void test_perf_global_ctrl(u32 nr, const char *name, u32 ctrl_nr,
vmcs_write(nr, val);
report_prefix_pushf("%s = 0x%lx", name, val);
if (valid_pgc(val)) {
- test_pgc_vmlaunch(0, VMX_VMCALL, false, host);
+ test_pgc_vmlaunch(0, EXIT_REASON_VMCALL, false, host);
} else {
if (host)
test_pgc_vmlaunch(
@@ -7689,7 +7689,8 @@ static void test_perf_global_ctrl(u32 nr, const char *name, u32 ctrl_nr,
else
test_pgc_vmlaunch(
0,
- VMX_ENTRY_FAILURE | VMX_FAIL_STATE,
+ VMX_ENTRY_FAILURE |
+ EXIT_REASON_INVALID_STATE,
true,
host);
}
@@ -8709,7 +8710,7 @@ static void vmx_pending_event_test_core(bool guest_hlt)
enter_guest();
- assert_exit_reason(VMX_EXTINT);
+ assert_exit_reason(EXIT_REASON_EXTERNAL_INTERRUPT);
report(!vmx_pending_event_guest_run,
"Guest did not run before host received IPI");
@@ -8756,7 +8757,7 @@ static void verify_nmi_window_exit(u64 rip)
{
u32 exit_reason = vmcs_read(EXI_REASON);
- report(exit_reason == VMX_NMI_WINDOW,
+ report(exit_reason == EXIT_REASON_NMI_WINDOW,
"Exit reason (%d) is 'NMI window'", exit_reason);
report(vmcs_read(GUEST_RIP) == rip, "RIP (%#lx) is %#lx",
vmcs_read(GUEST_RIP), rip);
@@ -8890,7 +8891,7 @@ static void verify_intr_window_exit(u64 rip)
{
u32 exit_reason = vmcs_read(EXI_REASON);
- report(exit_reason == VMX_INTR_WINDOW,
+ report(exit_reason == EXIT_REASON_INTERRUPT_WINDOW,
"Exit reason (%d) is 'interrupt window'", exit_reason);
report(vmcs_read(GUEST_RIP) == rip, "RIP (%#lx) is %#lx",
vmcs_read(GUEST_RIP), rip);
@@ -8920,7 +8921,7 @@ static void vmx_intr_window_test(void)
report_prefix_push("interrupt-window");
test_set_guest(vmx_intr_window_test_guest);
enter_guest();
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
vmcall_addr = vmcs_read(GUEST_RIP);
/*
@@ -9126,9 +9127,9 @@ static void vmx_preemption_timer_zero_expect_preempt_at_rip(u64 expected_rip)
u32 reason = (u32)vmcs_read(EXI_REASON);
u64 guest_rip = vmcs_read(GUEST_RIP);
- report(reason == VMX_PREEMPT && guest_rip == expected_rip,
+ report(reason == EXIT_REASON_PREEMPTION_TIMER && guest_rip == expected_rip,
"Exit reason is 0x%x (expected 0x%x) and guest RIP is %lx (0x%lx expected).",
- reason, VMX_PREEMPT, guest_rip, expected_rip);
+ reason, EXIT_REASON_PREEMPTION_TIMER, guest_rip, expected_rip);
}
/*
@@ -9191,8 +9192,9 @@ static void vmx_preemption_timer_zero_test(void)
vmx_set_test_stage(3);
vmx_preemption_timer_zero_set_pending_dbg(1 << DB_VECTOR);
reason = (u32)vmcs_read(EXI_REASON);
- report(reason == VMX_EXC_NMI, "Exit reason is 0x%x (expected 0x%x)",
- reason, VMX_EXC_NMI);
+ report(reason == EXIT_REASON_EXCEPTION_NMI,
+ "Exit reason is 0x%x (expected 0x%x)",
+ reason, EXIT_REASON_EXCEPTION_NMI);
vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
enter_guest();
@@ -9287,14 +9289,14 @@ static void vmx_preemption_timer_tf_test(void)
for (i = 0; i < 10000; i++) {
enter_guest();
reason = (u32)vmcs_read(EXI_REASON);
- if (reason == VMX_PREEMPT)
+ if (reason == EXIT_REASON_PREEMPTION_TIMER)
continue;
- TEST_ASSERT(reason == VMX_VMCALL);
+ TEST_ASSERT(reason == EXIT_REASON_VMCALL);
skip_exit_insn();
break;
}
- report(reason == VMX_PREEMPT, "No single-step traps skipped");
+ report(reason == EXIT_REASON_PREEMPTION_TIMER, "No single-step traps skipped");
vmx_set_test_stage(2);
vmcs_clear_bits(PIN_CONTROLS, PIN_BASED_VMX_PREEMPTION_TIMER);
@@ -9369,7 +9371,7 @@ static void vmx_preemption_timer_expiry_test(void)
enter_guest();
reason = (u32)vmcs_read(EXI_REASON);
- TEST_ASSERT(reason == VMX_PREEMPT);
+ TEST_ASSERT(reason == EXIT_REASON_PREEMPTION_TIMER);
tsc_deadline = ((vmx_preemption_timer_expiry_start >> misc.pt_bit) <<
misc.pt_bit) + (preemption_timer_value << misc.pt_bit);
@@ -9448,7 +9450,7 @@ static void check_db_exit(bool xfail_qual, bool xfail_dr6, bool xfail_pdbg,
const u32 expected_intr_info = INTR_INFO_VALID_MASK |
INTR_TYPE_HARD_EXCEPTION | DB_VECTOR;
- report(reason == VMX_EXC_NMI && intr_info == expected_intr_info,
+ report(reason == EXIT_REASON_EXCEPTION_NMI && intr_info == expected_intr_info,
"Expected #DB VM-exit");
report((u64)expected_rip == guest_rip, "Expected RIP %p (actual %lx)",
expected_rip, guest_rip);
@@ -9640,7 +9642,9 @@ static void irq_79_handler_guest(isr_regs_t *regs)
{
eoi();
- /* L1 expects vmexit on VMX_VMCALL and not VMX_EOI_INDUCED */
+	/*
+	 * L1 expects a vmexit on EXIT_REASON_VMCALL, not EXIT_REASON_EOI_INDUCED.
+	 */
vmcall();
}
@@ -9685,9 +9689,10 @@ static void vmx_eoi_bitmap_ioapic_scan_test(void)
/*
* Launch L2.
- * We expect the exit reason to be VMX_VMCALL (and not EOI INDUCED).
- * In case the reason isn't VMX_VMCALL, the assertion inside
- * skip_exit_vmcall() will fail.
+	 * We expect the exit reason to be EXIT_REASON_VMCALL
+	 * (and not EXIT_REASON_EOI_INDUCED). If the reason isn't
+	 * EXIT_REASON_VMCALL, the assertion inside skip_exit_vmcall()
+	 * will fail.
*/
enter_guest();
skip_exit_vmcall();
@@ -9928,7 +9933,8 @@ static void init_signal_test_thread(void *data)
/*
* Signal to BSP CPU that we continue as usual as INIT signal
- * should have been consumed by VMX_INIT exit from guest
+	 * should have been consumed by the EXIT_REASON_INIT_SIGNAL exit
+	 * from the guest.
*/
vmx_set_test_stage(7);
@@ -10011,10 +10017,10 @@ static void vmx_init_signal_test(void)
report_fail("Pending INIT signal didn't result in VMX exit");
return;
}
- report(init_signal_test_exit_reason == VMX_INIT,
- "INIT signal during VMX non-root mode result in exit-reason %s (%lu)",
- exit_reason_description(init_signal_test_exit_reason),
- init_signal_test_exit_reason);
+ report(init_signal_test_exit_reason == EXIT_REASON_INIT_SIGNAL,
+	       "INIT signal during VMX non-root mode results in exit-reason %s (%lu)",
+ exit_reason_description(init_signal_test_exit_reason),
+ init_signal_test_exit_reason);
/* Run guest to completion */
make_vmcs_current(test_vmcs);
@@ -10027,7 +10033,7 @@ static void vmx_init_signal_test(void)
/* Wait reasonable amount of time for other CPU to exit VMX operation */
delay(INIT_SIGNAL_TEST_DELAY);
report(vmx_get_test_stage() == 7,
- "INIT signal consumed on VMX_INIT exit");
+ "INIT signal consumed on EXIT_REASON_INIT_SIGNAL exit");
/* No point to continue if we failed at this point */
if (vmx_get_test_stage() != 7)
return;
@@ -10138,7 +10144,7 @@ static void sipi_test_ap_thread(void *data)
/* AP enter guest */
enter_guest();
- if (vmcs_read(EXI_REASON) == VMX_SIPI) {
+ if (vmcs_read(EXI_REASON) == EXIT_REASON_SIPI_SIGNAL) {
report_pass("AP: Handle SIPI VMExit");
vmcs_write(GUEST_ACTV_STATE, ACTV_ACTIVE);
vmx_set_test_stage(2);
@@ -10151,7 +10157,7 @@ static void sipi_test_ap_thread(void *data)
/* AP enter guest */
enter_guest();
- report(vmcs_read(EXI_REASON) != VMX_SIPI,
+ report(vmcs_read(EXI_REASON) != EXIT_REASON_SIPI_SIGNAL,
"AP: should no SIPI VMExit since activity is not in WAIT_SIPI state");
/* notify BSP that AP is already exit from non-root mode */
@@ -10290,7 +10296,7 @@ static void vmcs_shadow_test_access(u8 *bitmap[2], enum vmcs_access access)
vmcs_write(VMX_INST_ERROR, 0);
enter_guest();
c->reason = vmcs_read(EXI_REASON) & 0xffff;
- if (c->reason != VMX_VMCALL) {
+ if (c->reason != EXIT_REASON_VMCALL) {
skip_exit_insn();
enter_guest();
}
@@ -10332,9 +10338,9 @@ static void vmcs_shadow_test_field(u8 *bitmap[2], u64 field)
set_bit(field, bitmap[ACCESS_VMWRITE]);
}
vmcs_shadow_test_access(bitmap, ACCESS_VMWRITE);
- report(c->reason == VMX_VMWRITE, "not shadowed for VMWRITE");
+ report(c->reason == EXIT_REASON_VMWRITE, "not shadowed for VMWRITE");
vmcs_shadow_test_access(bitmap, ACCESS_VMREAD);
- report(c->reason == VMX_VMREAD, "not shadowed for VMREAD");
+ report(c->reason == EXIT_REASON_VMREAD, "not shadowed for VMREAD");
report_prefix_pop();
if (field >> VMCS_FIELD_RESERVED_SHIFT)
@@ -10347,10 +10353,11 @@ static void vmcs_shadow_test_field(u8 *bitmap[2], u64 field)
if (good_shadow)
value = vmwrite_to_shadow(field, MAGIC_VAL_1 + field);
vmcs_shadow_test_access(bitmap, ACCESS_VMWRITE);
- report(c->reason == VMX_VMWRITE, "not shadowed for VMWRITE");
+ report(c->reason == EXIT_REASON_VMWRITE, "not shadowed for VMWRITE");
vmcs_shadow_test_access(bitmap, ACCESS_VMREAD);
vmx_inst_error = vmcs_read(VMX_INST_ERROR);
- report(c->reason == VMX_VMCALL, "shadowed for VMREAD (in %ld cycles)",
+ report(c->reason == EXIT_REASON_VMCALL,
+ "shadowed for VMREAD (in %ld cycles)",
c->time);
report(c->flags == flags[ACCESS_VMREAD],
"ALU flags after VMREAD (%lx) are as expected (%lx)",
@@ -10373,7 +10380,7 @@ static void vmcs_shadow_test_field(u8 *bitmap[2], u64 field)
vmwrite_to_shadow(field, MAGIC_VAL_1 + field);
vmcs_shadow_test_access(bitmap, ACCESS_VMWRITE);
vmx_inst_error = vmcs_read(VMX_INST_ERROR);
- report(c->reason == VMX_VMCALL,
+ report(c->reason == EXIT_REASON_VMCALL,
"shadowed for VMWRITE (in %ld cycles)",
c->time);
report(c->flags == flags[ACCESS_VMREAD],
@@ -10390,7 +10397,7 @@ static void vmcs_shadow_test_field(u8 *bitmap[2], u64 field)
vmx_inst_error, VMXERR_UNSUPPORTED_VMCS_COMPONENT);
}
vmcs_shadow_test_access(bitmap, ACCESS_VMREAD);
- report(c->reason == VMX_VMREAD, "not shadowed for VMREAD");
+ report(c->reason == EXIT_REASON_VMREAD, "not shadowed for VMREAD");
report_prefix_pop();
/* Permit shadowed VMREAD and VMWRITE. */
@@ -10401,7 +10408,7 @@ static void vmcs_shadow_test_field(u8 *bitmap[2], u64 field)
vmwrite_to_shadow(field, MAGIC_VAL_1 + field);
vmcs_shadow_test_access(bitmap, ACCESS_VMWRITE);
vmx_inst_error = vmcs_read(VMX_INST_ERROR);
- report(c->reason == VMX_VMCALL,
+ report(c->reason == EXIT_REASON_VMCALL,
"shadowed for VMWRITE (in %ld cycles)",
c->time);
report(c->flags == flags[ACCESS_VMREAD],
@@ -10419,7 +10426,8 @@ static void vmcs_shadow_test_field(u8 *bitmap[2], u64 field)
}
vmcs_shadow_test_access(bitmap, ACCESS_VMREAD);
vmx_inst_error = vmcs_read(VMX_INST_ERROR);
- report(c->reason == VMX_VMCALL, "shadowed for VMREAD (in %ld cycles)",
+ report(c->reason == EXIT_REASON_VMCALL,
+ "shadowed for VMREAD (in %ld cycles)",
c->time);
report(c->flags == flags[ACCESS_VMREAD],
"ALU flags after VMREAD (%lx) are as expected (%lx)",
@@ -10645,7 +10653,8 @@ static int invalid_msr_exit_handler(union exit_reason exit_reason)
static int invalid_msr_entry_failure(struct vmentry_result *result)
{
report(result->exit_reason.failed_vmentry &&
- result->exit_reason.basic == VMX_FAIL_MSR, "Invalid MSR load");
+ result->exit_reason.basic == EXIT_REASON_MSR_LOAD_FAIL,
+ "Invalid MSR load");
return VMX_TEST_VMEXIT;
}
@@ -10735,7 +10744,7 @@ static void atomic_switch_msrs_test(int count)
if (count <= max_allowed) {
enter_guest();
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
skip_exit_vmcall();
} else {
u32 exit_qual;
@@ -10804,20 +10813,20 @@ static void __vmx_pf_exception_test(invalidate_tlb_t inv_fn, void *data,
enter_guest();
- while (vmcs_read(EXI_REASON) != VMX_VMCALL) {
+ while (vmcs_read(EXI_REASON) != EXIT_REASON_VMCALL) {
switch (vmcs_read(EXI_REASON)) {
- case VMX_RDMSR:
+ case EXIT_REASON_MSR_READ:
assert(regs.rcx == MSR_EFER);
efer = vmcs_read(GUEST_EFER);
regs.rdx = efer >> 32;
regs.rax = efer & 0xffffffff;
break;
- case VMX_WRMSR:
+ case EXIT_REASON_MSR_WRITE:
assert(regs.rcx == MSR_EFER);
efer = regs.rdx << 32 | (regs.rax & 0xffffffff);
vmcs_write(GUEST_EFER, efer);
break;
- case VMX_CPUID:
+ case EXIT_REASON_CPUID:
cpuid = (struct cpuid) {0, 0, 0, 0};
cpuid = raw_cpuid(regs.rax, regs.rcx);
regs.rax = cpuid.a;
@@ -10825,7 +10834,7 @@ static void __vmx_pf_exception_test(invalidate_tlb_t inv_fn, void *data,
regs.rcx = cpuid.c;
regs.rdx = cpuid.d;
break;
- case VMX_INVLPG:
+ case EXIT_REASON_INVLPG:
inv_fn(data);
break;
default:
@@ -10838,7 +10847,7 @@ static void __vmx_pf_exception_test(invalidate_tlb_t inv_fn, void *data,
enter_guest();
}
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
}
static void vmx_pf_exception_test(void)
@@ -10965,7 +10974,7 @@ static void handle_exception_in_l2(u8 vector)
vmx_exception_test_vector = vector;
enter_guest();
- report(vmcs_read(EXI_REASON) == VMX_VMCALL,
+ report(vmcs_read(EXI_REASON) == EXIT_REASON_VMCALL,
"%s handled by L2", exception_mnemonic(vector));
handle_exception(vector, old_handler);
@@ -10987,7 +10996,7 @@ static void handle_exception_in_l1(u32 vector)
intr_type = EVENT_TYPE_HWEXC;
intr_info = vmcs_read(EXI_INTR_INFO);
- report((vmcs_read(EXI_REASON) == VMX_EXC_NMI) &&
+ report((vmcs_read(EXI_REASON) == EXIT_REASON_EXCEPTION_NMI) &&
(intr_info & INTR_INFO_VALID_MASK) &&
(intr_info & INTR_INFO_VECTOR_MASK) == vector &&
((intr_info & INTR_INFO_INTR_TYPE_MASK) >> INTR_INFO_INTR_TYPE_SHIFT) == intr_type,
@@ -11396,10 +11405,10 @@ static void test_basic_vid(u8 nr, u8 tpr, enum Vid_op op, u32 isr_exec_cnt_want,
if (eoi_exit_induced) {
u32 exit_cnt;
- assert_exit_reason(VMX_EOI_INDUCED);
+ assert_exit_reason(EXIT_REASON_EOI_INDUCED);
for (exit_cnt = 1; exit_cnt < isr_exec_cnt_want; exit_cnt++) {
enter_guest();
- assert_exit_reason(VMX_EOI_INDUCED);
+ assert_exit_reason(EXIT_REASON_EOI_INDUCED);
}
enter_guest();
}
@@ -11468,7 +11477,7 @@ static void vmx_basic_vid_test(void)
/* Terminate the guest */
args->op = VID_OP_TERMINATE;
enter_guest();
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
}
static void test_eoi_virt(u8 nr, u8 lo_pri_nr, bool eoi_exit_induced)
@@ -11526,7 +11535,7 @@ static void vmx_eoi_virt_test(void)
/* Terminate the guest */
args->op = VID_OP_TERMINATE;
enter_guest();
- assert_exit_reason(VMX_VMCALL);
+ assert_exit_reason(EXIT_REASON_VMCALL);
}
static void vmx_posted_interrupts_test(void)
--
2.43.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions
2025-09-16 17:22 [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Jon Kohler
` (16 preceding siblings ...)
2025-09-16 17:22 ` [kvm-unit-tests PATCH 17/17] x86/vmx: align exit reasons with Linux uapi Jon Kohler
@ 2025-11-12 19:02 ` Sean Christopherson
2025-11-14 14:52 ` Jon Kohler
17 siblings, 1 reply; 21+ messages in thread
From: Sean Christopherson @ 2025-11-12 19:02 UTC (permalink / raw)
To: Jon Kohler; +Cc: pbonzini, kvm
On Tue, Sep 16, 2025, Jon Kohler wrote:
> This series modernizes VMX definitions to align with the canonical ones
> within Linux kernel source. Currently, kvm-unit-tests uses custom VMX
> constant definitions that have grown organically and have diverged from
> the kernel, increasing the overhead to grok from one code base to
> another.
>
> This alignment provides several benefits:
> - Reduces maintenance overhead by using authoritative definitions
> - Eliminates potential bugs from definition mismatches
> - Makes the test suite more consistent with kernel code
> - Simplifies future updates when new VMX features are added
>
> Given the lines touched, I've broken this up into two groups within the
> series:
>
> Group 1: Import various headers from Linux kernel 6.16 (P01-04)
Hrm. I'm definitely in favor of aligning names, and not opposed to pulling
information from the kernel, but I don't think I like the idea of doing a straight
copy+paste. The arch/x86/include/asm/vmxfeatures.h insanity in particular is pure
overhead/noise in KUT. E.g. the layer of indirection to find out the bit number is
_really_ annoying, and the shifting done for VMFUNC is downright gross, but at
least in the kernel we get pretty printing in /proc/cpuinfo.
Similarly, I don't want to pull in trapnr.h verbatim, because KVM already provides
<nr>_VECTOR in a uapi header, and I strongly prefer the <nr>_VECTOR macros
("trap" is very misleading when considering fault-like vs. trap-like exceptions).
This is also a good opportunity to align the third player: KVM selftests. Which
kinda sorta copy the kernel headers, but with stale and annoying differences.
Lastly, if we're going to pull from the kernel, ideally we would have a script to
semi-automate updating the KUT side of things.
So, I think/hope we can kill a bunch of birds at once by creating a script to
parse the kernel's vmxfeatures.h, vmx.h, trapnr.h, msr-index.h (to replace lib/x86/msr.h),
and generate the pieces we want. And if we do that for KVM selftests, then we
can commit the script to the kernel repo, i.e. we can make it the kernel's
responsibility to keep the script up-to-date, e.g. if there's a big rename or
something.
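Off the top of my head, the extraction half could be as dumb as the sketch below (completely untested; the prefix list and generated-header shape are invented purely for illustration):

```python
import re

DEFINE_RE = re.compile(r'^#define\s+(\w+)\s+(.+?)\s*(?:/\*.*)?$')

def extract_defines(text, prefixes):
    """Yield (name, value) for #define lines whose name starts with a prefix."""
    for line in text.splitlines():
        m = DEFINE_RE.match(line)
        if m and m.group(1).startswith(tuple(prefixes)):
            yield m.group(1), m.group(2)

def emit_header(defines, guard):
    """Re-emit the selected constants as a standalone generated header."""
    lines = [f'#ifndef {guard}', f'#define {guard}', '']
    lines += [f'#define {name} {value}' for name, value in defines]
    lines += ['', f'#endif /* {guard} */']
    return '\n'.join(lines)

# Tiny demo on an inline snippet; real input would be e.g. the kernel's
# arch/x86/include/uapi/asm/vmx.h, with one prefix list per target header.
sample = '''
#define EXIT_REASON_EXCEPTION_NMI 0
#define EXIT_REASON_VMCALL 18
#define SOMETHING_UNRELATED 42
'''
gen = emit_header(extract_defines(sample, ('EXIT_REASON_',)), '_GEN_VMX_H')
print(gen)
```

The fiddly part would be the things that *aren't* plain #defines (the vmxfeatures.h encoding, enums, etc.), which is where a shared script in the kernel tree earns its keep.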
> Headers were brought in with minimal adaptation outside of minor tweaks
> for includes, etc.
>
> Group 2: Mechanically replace existing constants with equivalents (P05-17)
>
> Replace custom VMX constant definitions in x86/vmx.h with Linux kernel
> equivalents from lib/linux/vmx.h. This systematic replacement covers:
>
> - Pin-based VM-execution controls (PIN_* -> PIN_BASED_*)
> - CPU-based VM-execution controls (CPU_* -> CPU_BASED_*, SECONDARY_EXEC_*)
> - VM-exit controls (EXI_* -> VM_EXIT_*)
> - VM-entry controls (ENT_* -> VM_ENTRY_*)
> - VMCS field names (custom enum -> standard Linux enum)
> - VMX exit reasons (VMX_* -> EXIT_REASON_*)
> - Interrupt/exception type definitions
>
> All functional behavior is preserved - only the constant names and
> values change to match Linux kernel definitions. All existing VMX tests
> pass with no functional changes.
>
> There is still a bit of bulk in x86/vmx.h, which can be addressed in
> future patches as needed.
>
> Jon Kohler (17):
> lib: add linux vmx.h clone from 6.16
> lib: add linux trapnr.h clone from 6.16
> lib: add vmxfeatures.h clone from 6.16
> lib: define __aligned() in compiler.h
> x86/vmx: basic integration for new vmx.h
> x86/vmx: switch to new vmx.h EPT violation defs
> x86/vmx: switch to new vmx.h EPT RWX defs
> x86/vmx: switch to new vmx.h EPT access and dirty defs
> x86/vmx: switch to new vmx.h EPT capability and memory type defs
> x86/vmx: switch to new vmx.h primary processor-based VM-execution
> controls
> x86/vmx: switch to new vmx.h secondary execution control bit
> x86/vmx: switch to new vmx.h secondary execution controls
> x86/vmx: switch to new vmx.h pin based VM-execution controls
> x86/vmx: switch to new vmx.h exit controls
> x86/vmx: switch to new vmx.h entry controls
> x86/vmx: switch to new vmx.h interrupt defs
> x86/vmx: align exit reasons with Linux uapi
>
> lib/linux/compiler.h | 1 +
> lib/linux/trapnr.h | 44 ++
> lib/linux/vmx.h | 672 ++++++++++++++++++
> lib/linux/vmxfeatures.h | 93 +++
> lib/x86/msr.h | 14 +
> x86/vmx.c | 230 +++---
> x86/vmx.h | 356 ++--------
> x86/vmx_tests.c | 1489 ++++++++++++++++++++++-----------------
> 8 files changed, 1876 insertions(+), 1023 deletions(-)
> create mode 100644 lib/linux/trapnr.h
> create mode 100644 lib/linux/vmx.h
> create mode 100644 lib/linux/vmxfeatures.h
>
> base-commit: 890498d834b68104e79b57a801fa11fc6ce82846
>
> --
> 2.43.0
>
* Re: [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions
2025-11-12 19:02 ` [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions Sean Christopherson
@ 2025-11-14 14:52 ` Jon Kohler
2025-11-17 17:41 ` Sean Christopherson
0 siblings, 1 reply; 21+ messages in thread
From: Jon Kohler @ 2025-11-14 14:52 UTC (permalink / raw)
To: Sean Christopherson; +Cc: pbonzini@redhat.com, kvm@vger.kernel.org
> On Nov 12, 2025, at 2:02 PM, Sean Christopherson <seanjc@google.com> wrote:
>
>
> On Tue, Sep 16, 2025, Jon Kohler wrote:
>> Given the lines touched, I've broken this up into two groups within the
>> series:
>>
>> Group 1: Import various headers from Linux kernel 6.16 (P01-04)
>
> Hrm. I'm definitely in favor of aligning names, and not opposed to pulling
> information from the kernel, but I don't think I like the idea of doing a straight
> copy+paste. The arch/x86/include/asm/vmxfeatures.h insanity in particular is pure
> overhead/noise in KUT. E.g. the layer of indirection to find out the bit number is
> _really_ annoying, and the shifting done for VMFUNC is downright gross, but at
> least in the kernel we get pretty printing in /proc/cpuinfo.
>
> Similarly, I don't want to pull in trapnr.h verbatim, because KVM already provides
> <nr>_VECTOR in a uapi header, and I strongly prefer the <nr>_VECTOR macros
> ("trap" is very misleading when considering fault-like vs. trap-like exceptions).
>
> This is also a good opportunity to align the third player: KVM selftests. Which
> kinda sorta copy the kernel headers, but with stale and annoying differences.
>
> Lastly, if we're going to pull from the kernel, ideally we would have a script to
> semi-automate updating the KUT side of things.
>
> So, I think/hope we can kill a bunch of birds at once by creating a script to
> parse the kernel's vmxfeatures.h, vmx.h, trapnr.h, msr-index.h (to replace lib/x86/msr.h),
> and generate the pieces we want. And if we do that for KVM selftests, then we
> can commit the script to the kernel repo, i.e. we can make it the kernel's
> responsibility to keep the script up-to-date, e.g. if there's a big rename or
> something.
Thanks, Sean - Happy to take a swing at it if you don’t already have something
cooked up to magic that into existence. Any chance any other subsystems do
something similar? Want to make sure we don’t re-invent the wheel if so.
Otherwise, happy to start from scratch, that's fine too.
Jon
* Re: [kvm-unit-tests PATCH 00/17] x86/vmx: align with Linux kernel VMX definitions
2025-11-14 14:52 ` Jon Kohler
@ 2025-11-17 17:41 ` Sean Christopherson
0 siblings, 0 replies; 21+ messages in thread
From: Sean Christopherson @ 2025-11-17 17:41 UTC (permalink / raw)
To: Jon Kohler; +Cc: pbonzini@redhat.com, kvm@vger.kernel.org
On Fri, Nov 14, 2025, Jon Kohler wrote:
> > On Nov 12, 2025, at 2:02 PM, Sean Christopherson <seanjc@google.com> wrote:
> > On Tue, Sep 16, 2025, Jon Kohler wrote:
> >> Given the lines touched, I've broken this up into two groups within the
> >> series:
> >>
> >> Group 1: Import various headers from Linux kernel 6.16 (P01-04)
> >
> > Hrm. I'm definitely in favor of aligning names, and not opposed to pulling
> > information from the kernel, but I don't think I like the idea of doing a straight
> > copy+paste. The arch/x86/include/asm/vmxfeatures.h insanity in particular is pure
> > overhead/noise in KUT. E.g. the layer of indirection to find out the bit number is
> > _really_ annoying, and the shifting done for VMFUNC is downright gross, but at
> > least in the kernel we get pretty printing in /proc/cpuinfo.
> >
> > Similarly, I don't want to pull in trapnr.h verbatim, because KVM already provides
> > <nr>_VECTOR in a uapi header, and I strongly prefer the <nr>_VECTOR macros
> > ("trap" is very misleading when considering fault-like vs. trap-like exceptions).
> >
> > This is also a good opportunity to align the third player: KVM selftests. Which
> > kinda sorta copy the kernel headers, but with stale and annoying differences.
> >
> > Lastly, if we're going to pull from the kernel, ideally we would have a script to
> > semi-automate updating the KUT side of things.
> >
> > So, I think/hope we can kill a bunch of birds at once by creating a script to
> > parse the kernel's vmxfeatures.h, vmx.h, trapnr.h, msr-index.h (to replace lib/x86/msr.h),
> > and generate the pieces we want. And if we do that for KVM selftests, then we
> > can commit the script to the kernel repo, i.e. we can make it the kernel's
> > responsibility to keep the script up-to-date, e.g. if there's a big rename or
> > something.
>
> Thanks, Sean - Happy to take a swing at it if you don’t already have something
> cooked up to magic that into existence. Any chance any other subsystems do
> something similar? Want to make sure we don’t re-invent the wheel if so.
AFAIK, there's no prior art. :-/
People do have scripts to manage headers, but they're for simple use cases of
copying kernel headers elsewhere.