* [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
This adds new tests for both GMET and MBEC.
The code for MBEC is roughly based on the previous submission at
https://lore.kernel.org/kvm/20251223054850.1611618-1-jon@nutanix.com/,
though with pretty heavy reorganization and fixing work on top.
The existing EPT tests now test execution on both supervisor and
user-mode pages, with different expected outcomes depending on
whether MBEC is enabled or not; on top of this, the last patch
adds tests for XU=1 and XS=XU=1.
For simplicity, the tests always enable MBEC when available.
A new block in unittests.cfg ensures that both the non-MBEC
and MBEC configurations are covered.
I will shortly send a new version of the patches that passes
the tests.
Paolo
Jon Kohler (1):
x86/vmx: update EPT installation to use EPT_PRESENT flag
Paolo Bonzini (8):
move PFERR_* constants to lib
add definitions for nested_ctl
svm: add basic GMET tests
x86/vmx: diagnose unexpected EPT violations
x86/vmx: add mode-based execute control test for Skylake and above
x86/vmx: add user execution operation to EPT access tests
x86/vmx: run EPT tests with MBEC enabled when available
x86/vmx: add EPT tests covering XU permission
Makefile | 2 +-
lib/util.h | 10 +-
lib/x86/asm/page.h | 7 +
lib/x86/processor.h | 1 +
x86/access.c | 7 -
x86/svm.c | 19 +-
x86/svm.h | 4 +
x86/svm_npt.c | 83 ++++++++-
x86/unittests.cfg | 21 ++-
x86/vmx.c | 3 +-
x86/vmx.h | 32 ++--
x86/vmx_tests.c | 414 +++++++++++++++++++++++++++++++++++++-------
12 files changed, 513 insertions(+), 90 deletions(-)
--
2.52.0
* [PATCH kvm-unit-tests 1/9] move PFERR_* constants to lib
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Allow using them from the SVM tests as well.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
lib/x86/asm/page.h | 7 +++++++
x86/access.c | 7 -------
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/lib/x86/asm/page.h b/lib/x86/asm/page.h
index bc0e78c7..84a315f3 100644
--- a/lib/x86/asm/page.h
+++ b/lib/x86/asm/page.h
@@ -19,6 +19,13 @@ typedef unsigned long pgd_t;
#define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
+#define PFERR_PRESENT_MASK (1ull << 0)
+#define PFERR_WRITE_MASK (1ull << 1)
+#define PFERR_USER_MASK (1ull << 2)
+#define PFERR_RESERVED_MASK (1ull << 3)
+#define PFERR_FETCH_MASK (1ull << 4)
+#define PFERR_PK_MASK (1ull << 5)
+
#ifdef __x86_64__
#define LARGE_PAGE_SIZE (512 * PAGE_SIZE)
#else
diff --git a/x86/access.c b/x86/access.c
index d94910bf..142c1d92 100644
--- a/x86/access.c
+++ b/x86/access.c
@@ -17,13 +17,6 @@ static int invalid_mask;
#define PT_BASE_ADDR_MASK ((pt_element_t)((((pt_element_t)1 << 36) - 1) & PAGE_MASK))
#define PT_PSE_BASE_ADDR_MASK (PT_BASE_ADDR_MASK & ~(1ull << 21))
-#define PFERR_PRESENT_MASK (1U << 0)
-#define PFERR_WRITE_MASK (1U << 1)
-#define PFERR_USER_MASK (1U << 2)
-#define PFERR_RESERVED_MASK (1U << 3)
-#define PFERR_FETCH_MASK (1U << 4)
-#define PFERR_PK_MASK (1U << 5)
-
#define MSR_EFER 0xc0000080
#define EFER_NX_MASK (1ull << 11)
--
2.52.0
* [PATCH kvm-unit-tests 2/9] add definitions for nested_ctl
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Right now only bit 0 is used, and it is hard-coded. Change
that to a #define.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
x86/svm.c | 2 +-
x86/svm.h | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/x86/svm.c b/x86/svm.c
index de9eb194..58cbf0a5 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -200,7 +200,7 @@ void vmcb_ident(struct vmcb *vmcb)
ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
if (npt_supported()) {
- ctrl->nested_ctl = 1;
+ ctrl->nested_ctl = SVM_NESTED_ENABLE;
ctrl->nested_cr3 = (u64)pml4e;
ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
diff --git a/x86/svm.h b/x86/svm.h
index 264583a6..947206bb 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -151,6 +151,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
+#define SVM_NESTED_ENABLE 1
+#define SVM_NESTED_GMET 8
+
#define SVM_VM_CR_VALID_MASK 0x001fULL
#define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
#define SVM_VM_CR_SVM_DIS_MASK 0x0010ULL
--
2.52.0
* [PATCH kvm-unit-tests 3/9] svm: add basic GMET tests
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
These cover three basic scenarios: running successfully,
failing due to NX=1, and failing due to U/S=1.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Makefile | 2 +-
lib/x86/processor.h | 1 +
x86/svm.c | 17 ++++++++++
x86/svm.h | 1 +
x86/svm_npt.c | 83 +++++++++++++++++++++++++++++++++++++++++++--
5 files changed, 101 insertions(+), 3 deletions(-)
diff --git a/Makefile b/Makefile
index 0ce0813b..403fd495 100644
--- a/Makefile
+++ b/Makefile
@@ -93,7 +93,7 @@ COMMON_CFLAGS += $(wunused_but_set_parameter)
CFLAGS += $(COMMON_CFLAGS)
CFLAGS += $(wmissing_parameter_type)
CFLAGS += $(wold_style_declaration)
-CFLAGS += -Woverride-init -Wmissing-prototypes -Wstrict-prototypes
+CFLAGS += -Wmissing-prototypes -Wstrict-prototypes
# Evaluate and add late cflags last -- they may depend on previous flags
LATE_CFLAGS := $(LATE_CFLAGS)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 42dd2d2a..32ce08e2 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -377,6 +377,7 @@ struct x86_cpu_feature {
#define X86_FEATURE_PAUSEFILTER X86_CPU_FEATURE(0x8000000A, 0, EDX, 10)
#define X86_FEATURE_PFTHRESHOLD X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
#define X86_FEATURE_VGIF X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
+#define X86_FEATURE_GMET X86_CPU_FEATURE(0x8000000A, 0, EDX, 17)
#define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
#define X86_FEATURE_SME X86_CPU_FEATURE(0x8000001F, 0, EAX, 0)
#define X86_FEATURE_SEV X86_CPU_FEATURE(0x8000001F, 0, EAX, 1)
diff --git a/x86/svm.c b/x86/svm.c
index 58cbf0a5..a85da905 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -43,6 +43,23 @@ u64 *npt_get_pml4e(void)
return pml4e;
}
+void npt_prepare_gmet_pte(bool user)
+{
+ extern u8 start;
+ u64 address = (u64)&start & ~(1 << 21);
+ u64 mask = user ? PT_USER_MASK : 0;
+ u64 *pte;
+ int i;
+
+
+ /* flip the U bit on the 2 MiB region where the code is loaded.
+ * the U bit is only used for execution, therefore page table accesses ignore it
+ */
+ pte = npt_get_pte(address);
+ for (i = 0; i < 512; i++)
+ pte[i] = (pte[i] & ~PT_USER_MASK) | mask;
+}
+
bool smp_supported(void)
{
return cpu_count() > 1;
diff --git a/x86/svm.h b/x86/svm.h
index 947206bb..c5695b37 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -418,6 +418,7 @@ u64 *npt_get_pte(u64 address);
u64 *npt_get_pde(u64 address);
u64 *npt_get_pdpe(u64 address);
u64 *npt_get_pml4e(void);
+void npt_prepare_gmet_pte(bool user);
bool smp_supported(void);
bool default_supported(void);
bool fep_supported(void);
diff --git a/x86/svm_npt.c b/x86/svm_npt.c
index bd5e8f35..75d9c2c9 100644
--- a/x86/svm_npt.c
+++ b/x86/svm_npt.c
@@ -87,6 +87,79 @@ static bool npt_us_check(struct svm_test *test)
&& (vmcb->control.exit_info_1 == 0x100000005ULL);
}
+static bool npt_gmet_supported(void)
+{
+ return npt_supported() && this_cpu_has(X86_FEATURE_GMET);
+}
+
+static void npt_gmet_null_prepare(struct svm_test *test)
+{
+ /* set U=0 - no failure */
+ npt_prepare_gmet_pte(false);
+ vmcb->control.nested_ctl |= SVM_NESTED_GMET;
+}
+
+static bool npt_gmet_null_check(struct svm_test *test)
+{
+ /* reset U=1 */
+ npt_prepare_gmet_pte(true);
+ vmcb->control.nested_ctl &= ~SVM_NESTED_GMET;
+ return vmcb->control.exit_code == SVM_EXIT_VMMCALL;
+}
+
+static void npt_gmet_nx_prepare(struct svm_test *test)
+{
+ u64 *pte = npt_get_pte((u64) null_test);
+
+ /* set U=0 - failure will be from NX */
+ npt_prepare_gmet_pte(false);
+ *pte |= PT64_NX_MASK;
+ vmcb->control.nested_ctl |= SVM_NESTED_GMET;
+
+ test->scratch = rdmsr(MSR_EFER);
+ wrmsr(MSR_EFER, test->scratch | EFER_NX);
+}
+
+static bool npt_gmet_nx_check(struct svm_test *test)
+{
+ u64 *pte = npt_get_pte((u64) null_test);
+
+ /* reset U=1, NX=0 */
+ npt_prepare_gmet_pte(true);
+ *pte &= ~PT64_NX_MASK;
+ vmcb->control.nested_ctl &= ~SVM_NESTED_GMET;
+
+ wrmsr(MSR_EFER, test->scratch);
+
+ /* errata 1218 - the U bit in the page fault error code may be incorrect */
+ return (vmcb->control.exit_code == SVM_EXIT_NPF)
+ && ((vmcb->control.exit_info_1 & ~PFERR_USER_MASK) == 0x100000011ULL);
+}
+
+static void npt_gmet_us_prepare(struct svm_test *test)
+{
+ u64 *pte = npt_get_pte((u64) null_test);
+
+ npt_prepare_gmet_pte(false);
+ *pte |= PT_USER_MASK;
+ vmcb->control.nested_ctl |= SVM_NESTED_GMET;
+
+ test->scratch = rdmsr(MSR_EFER);
+ wrmsr(MSR_EFER, test->scratch | EFER_NX);
+}
+
+static bool npt_gmet_us_check(struct svm_test *test)
+{
+ npt_prepare_gmet_pte(true);
+ vmcb->control.nested_ctl &= ~SVM_NESTED_GMET;
+
+ wrmsr(MSR_EFER, test->scratch);
+
+ /* errata 1218 - the U bit in the page fault error code may be incorrect */
+ return (vmcb->control.exit_code == SVM_EXIT_NPF)
+ && ((vmcb->control.exit_info_1 & ~PFERR_USER_MASK) == 0x100000011ULL);
+}
+
static void npt_rw_prepare(struct svm_test *test)
{
@@ -380,9 +453,9 @@ skip_pte_test:
vmcb->save.cr4 = sg_cr4;
}
-#define NPT_V1_TEST(name, prepare, guest_code, check) \
+#define NPT_V1_TEST(name, prepare, guest_code, check, more...) \
{ #name, npt_supported, prepare, default_prepare_gif_clear, guest_code, \
- default_finished, check }
+ default_finished, check, more }
#define NPT_V2_TEST(name) { #name, .v2 = name }
@@ -390,6 +463,12 @@ static struct svm_test npt_tests[] = {
NPT_V1_TEST(npt_nx, npt_nx_prepare, null_test, npt_nx_check),
NPT_V1_TEST(npt_np, npt_np_prepare, npt_np_test, npt_np_check),
NPT_V1_TEST(npt_us, npt_us_prepare, npt_us_test, npt_us_check),
+ NPT_V1_TEST(npt_gmet_null, npt_gmet_null_prepare, null_test, npt_gmet_null_check,
+ .supported = npt_gmet_supported),
+ NPT_V1_TEST(npt_gmet_nx, npt_gmet_nx_prepare, null_test, npt_gmet_nx_check,
+ .supported = npt_gmet_supported),
+ NPT_V1_TEST(npt_gmet_us, npt_gmet_us_prepare, null_test, npt_gmet_us_check,
+ .supported = npt_gmet_supported),
NPT_V1_TEST(npt_rw, npt_rw_prepare, npt_rw_test, npt_rw_check),
NPT_V1_TEST(npt_rw_pfwalk, npt_rw_pfwalk_prepare, null_test, npt_rw_pfwalk_check),
NPT_V1_TEST(npt_l1mmio, npt_l1mmio_prepare, npt_l1mmio_test, npt_l1mmio_check),
--
2.52.0
* [PATCH kvm-unit-tests 4/9] x86/vmx: update EPT installation to use EPT_PRESENT flag
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
From: Jon Kohler <jon@nutanix.com>
Prepare for MBEC EPT access test cases by refactoring the EPT
installation logic in vmx_tests.c and vmx.c to replace the use of
EPT_RA | EPT_WA | EPT_EA flags with the EPT_PRESENT flag; this
will allow adding the XU bit to all upper-level page tables
easily when MBEC is active.
Note that EPT_PRESENT is deliberately not used where the tests
exercise the RWX combination specifically.
No functional change intended.
Signed-off-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
x86/vmx.c | 3 +--
x86/vmx.h | 13 +++++++------
x86/vmx_tests.c | 24 ++++++++++++------------
3 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/x86/vmx.c b/x86/vmx.c
index c803eaa6..eb2965d8 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -875,8 +875,7 @@ void install_ept_entry(unsigned long *pml4,
else
pt_page = 0;
memset(new_pt, 0, PAGE_SIZE);
- pt[offset] = virt_to_phys(new_pt)
- | EPT_RA | EPT_WA | EPT_EA;
+ pt[offset] = virt_to_phys(new_pt) | EPT_PRESENT;
} else if (pt[offset] & EPT_LARGE_PAGE)
split_large_ept_entry(&pt[offset], level);
pt = phys_to_virt(pt[offset] & EPT_ADDR_MASK);
diff --git a/x86/vmx.h b/x86/vmx.h
index 33373bd1..d9f493d3 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -664,18 +664,19 @@ enum vm_entry_failure_code {
#define EPT_MEM_TYPE_WP 5ul
#define EPT_MEM_TYPE_WB 6ul
-#define EPT_RA 1ul
-#define EPT_WA 2ul
-#define EPT_EA 4ul
-#define EPT_PRESENT (EPT_RA | EPT_WA | EPT_EA)
+#define EPT_RA (1ul << 0)
+#define EPT_WA (1ul << 1)
+#define EPT_EA (1ul << 2)
+#define EPT_IGNORE_PAT (1ul << 6)
+#define EPT_LARGE_PAGE (1ul << 7)
#define EPT_ACCESS_FLAG (1ul << 8)
#define EPT_DIRTY_FLAG (1ul << 9)
-#define EPT_LARGE_PAGE (1ul << 7)
#define EPT_MEM_TYPE_SHIFT 3ul
#define EPT_MEM_TYPE_MASK 0x7ul
-#define EPT_IGNORE_PAT (1ul << 6)
#define EPT_SUPPRESS_VE (1ull << 63)
+#define EPT_PRESENT (EPT_RA | EPT_WA | EPT_EA)
+
#define EPT_CAP_EXEC_ONLY (1ull << 0)
#define EPT_CAP_PWL4 (1ull << 6)
#define EPT_CAP_PWL5 (1ull << 7)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 5ffb80a3..707e8ca4 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1100,7 +1100,7 @@ static int setup_ept(bool enable_ad)
*/
setup_ept_range(pml4, 0, end_of_memory, 0,
!enable_ad && ept_2m_supported(),
- EPT_WA | EPT_RA | EPT_EA);
+ EPT_PRESENT);
return 0;
}
@@ -1179,7 +1179,7 @@ static int ept_init_common(bool have_ad)
*((u32 *)data_page1) = MAGIC_VAL_1;
*((u32 *)data_page2) = MAGIC_VAL_2;
install_ept(pml4, (unsigned long)data_page1, (unsigned long)data_page2,
- EPT_RA | EPT_WA | EPT_EA);
+ EPT_PRESENT);
apic_version = apic_read(APIC_LVR);
@@ -1359,8 +1359,8 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
*((u32 *)data_page2) == MAGIC_VAL_2) {
vmx_inc_test_stage();
install_ept(pml4, (unsigned long)data_page2,
- (unsigned long)data_page2,
- EPT_RA | EPT_WA | EPT_EA);
+ (unsigned long)data_page2,
+ EPT_PRESENT);
} else
report_fail("EPT basic framework - write");
break;
@@ -1371,9 +1371,9 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
break;
case 2:
install_ept(pml4, (unsigned long)data_page1,
- (unsigned long)data_page1,
- EPT_RA | EPT_WA | EPT_EA |
- (2 << EPT_MEM_TYPE_SHIFT));
+ (unsigned long)data_page1,
+ EPT_PRESENT |
+ (2 << EPT_MEM_TYPE_SHIFT));
invept(INVEPT_SINGLE, eptp);
break;
case 3:
@@ -1417,8 +1417,8 @@ static int ept_exit_handler_common(union exit_reason exit_reason, bool have_ad)
case 2:
vmx_inc_test_stage();
install_ept(pml4, (unsigned long)data_page1,
- (unsigned long)data_page1,
- EPT_RA | EPT_WA | EPT_EA);
+ (unsigned long)data_page1,
+ EPT_PRESENT);
invept(INVEPT_SINGLE, eptp);
break;
// Should not reach here
@@ -3020,9 +3020,9 @@ static void ept_access_test_paddr_read_write_execute(void)
{
ept_access_test_setup();
/* RWX access to paging structure. */
- ept_access_allowed_paddr(EPT_PRESENT, 0, OP_READ);
- ept_access_allowed_paddr(EPT_PRESENT, 0, OP_WRITE);
- ept_access_allowed_paddr(EPT_PRESENT, 0, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_READ);
+ ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_WRITE);
+ ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_EXEC);
}
static void ept_access_test_paddr_read_execute_ad_disabled(void)
--
2.52.0
* [PATCH kvm-unit-tests 5/9] x86/vmx: diagnose unexpected EPT violations
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Knowing the exit qualification when a test incorrectly raises an
EPT violation greatly simplifies debugging. This requires a tweak to
__TEST_EQ, allowing any code block instead of just a function name for
the abort code.
For this particular case, include the additional VM-exit information in the diagnosis.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
lib/util.h | 10 +++-----
x86/vmx.h | 8 +++---
x86/vmx_tests.c | 65 ++++++++++++++++++++++++++++---------------------
3 files changed, 44 insertions(+), 39 deletions(-)
diff --git a/lib/util.h b/lib/util.h
index 00d0b47d..93f16410 100644
--- a/lib/util.h
+++ b/lib/util.h
@@ -41,17 +41,13 @@ do { \
(unsigned long) _b, _bin_b, (unsigned long) _b, \
fmt[0] == '\0' ? "" : "\n", ## args); \
dump_stack(); \
- if (assertion) \
- do_abort(); \
+ if (assertion) { do_abort; } \
} \
report_passed(); \
} while (0)
-/* FIXME: Extend VMX's assert/abort framework to SVM and other environs. */
-static inline void dummy_abort(void) {}
-
-#define TEST_EXPECT_EQ(a, b) __TEST_EQ(a, b, #a, #b, 0, dummy_abort, "")
+#define TEST_EXPECT_EQ(a, b) __TEST_EQ(a, b, #a, #b, 0, , "")
#define TEST_EXPECT_EQ_MSG(a, b, fmt, args...) \
- __TEST_EQ(a, b, #a, #b, 0, dummy_abort fmt, ## args)
+ __TEST_EQ(a, b, #a, #b, 0, fmt, ## args)
#endif
diff --git a/x86/vmx.h b/x86/vmx.h
index d9f493d3..0e29a57d 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -37,9 +37,9 @@ do { \
report_passed(); \
} while (0)
-#define TEST_ASSERT_EQ(a, b) __TEST_EQ(a, b, #a, #b, 1, __abort_test, "")
+#define TEST_ASSERT_EQ(a, b) __TEST_EQ(a, b, #a, #b, 1, __abort_test(), "")
#define TEST_ASSERT_EQ_MSG(a, b, fmt, args...) \
- __TEST_EQ(a, b, #a, #b, 1, __abort_test, fmt, ## args)
+ __TEST_EQ(a, b, #a, #b, 1, __abort_test(), fmt, ## args)
struct vmcs_hdr {
u32 revision_id:31;
@@ -718,9 +718,9 @@ enum vm_entry_failure_code {
#define EPT_VLT_PADDR (1ull << 8)
#define EPT_VLT_GUEST_USER (1ull << 9)
#define EPT_VLT_GUEST_RW (1ull << 10)
-#define EPT_VLT_GUEST_EX (1ull << 11)
+#define EPT_VLT_GUEST_NX (1ull << 11)
#define EPT_VLT_GUEST_MASK (EPT_VLT_GUEST_USER | EPT_VLT_GUEST_RW | \
- EPT_VLT_GUEST_EX)
+ EPT_VLT_GUEST_NX)
#define MAGIC_VAL_1 0x12345678ul
#define MAGIC_VAL_2 0x87654321ul
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 707e8ca4..0e3dca3c 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -2157,13 +2157,46 @@ static int exit_monitor_from_l2_handler(union exit_reason exit_reason)
return VMX_TEST_EXIT;
}
+static void
+diagnose_ept_violation_qual(u64 expected, u64 actual)
+{
+
+#define DIAGNOSE(flag) \
+do { \
+ if ((expected & flag) != (actual & flag)) \
+ printf(#flag " %sexpected\n", \
+ (expected & flag) ? "" : "un"); \
+} while (0)
+
+ DIAGNOSE(EPT_VLT_RD);
+ DIAGNOSE(EPT_VLT_WR);
+ DIAGNOSE(EPT_VLT_FETCH);
+ DIAGNOSE(EPT_VLT_PERM_RD);
+ DIAGNOSE(EPT_VLT_PERM_WR);
+ DIAGNOSE(EPT_VLT_PERM_EX);
+ DIAGNOSE(EPT_VLT_LADDR_VLD);
+ DIAGNOSE(EPT_VLT_PADDR);
+ DIAGNOSE(EPT_VLT_GUEST_USER);
+ DIAGNOSE(EPT_VLT_GUEST_RW);
+ DIAGNOSE(EPT_VLT_GUEST_NX);
+
+#undef DIAGNOSE
+}
+
static void assert_exit_reason(u64 expected)
{
u64 actual = vmcs_read(EXI_REASON);
- TEST_ASSERT_EQ_MSG(expected, actual, "Expected %s, got %s.",
- exit_reason_description(expected),
- exit_reason_description(actual));
+ __TEST_EQ(expected, actual, "expected", "actual", 1, {
+ printf("guest linear address %lx\n", vmcs_read(GUEST_LINEAR_ADDRESS));
+ if (actual == VMX_EPT_VIOLATION) {
+ u64 qual = vmcs_read(EXI_QUALIFICATION);
+ diagnose_ept_violation_qual(0, qual);
+ }
+ __abort_test();
+ }, "Expected %s, got %s.",
+ exit_reason_description(expected),
+ exit_reason_description(actual));
}
static void skip_exit_insn(void)
@@ -2276,29 +2309,6 @@ asm(
"ret42_end:\n"
);
-static void
-diagnose_ept_violation_qual(u64 expected, u64 actual)
-{
-
-#define DIAGNOSE(flag) \
-do { \
- if ((expected & flag) != (actual & flag)) \
- printf(#flag " %sexpected\n", \
- (expected & flag) ? "" : "un"); \
-} while (0)
-
- DIAGNOSE(EPT_VLT_RD);
- DIAGNOSE(EPT_VLT_WR);
- DIAGNOSE(EPT_VLT_FETCH);
- DIAGNOSE(EPT_VLT_PERM_RD);
- DIAGNOSE(EPT_VLT_PERM_WR);
- DIAGNOSE(EPT_VLT_PERM_EX);
- DIAGNOSE(EPT_VLT_LADDR_VLD);
- DIAGNOSE(EPT_VLT_PADDR);
-
-#undef DIAGNOSE
-}
-
static void do_ept_access_op(enum ept_access_op op)
{
ept_access_test_data.op = op;
@@ -2360,8 +2370,7 @@ static void do_ept_violation(bool leaf, enum ept_access_op op,
qual = vmcs_read(EXI_QUALIFICATION);
/* Mask undefined bits (which may later be defined in certain cases). */
- qual &= ~(EPT_VLT_GUEST_USER | EPT_VLT_GUEST_RW | EPT_VLT_GUEST_EX |
- EPT_VLT_PERM_USER_EX);
+ qual &= ~(EPT_VLT_GUEST_MASK | EPT_VLT_PERM_USER_EX);
diagnose_ept_violation_qual(expected_qual, qual);
TEST_EXPECT_EQ(expected_qual, qual);
--
2.52.0
* [PATCH kvm-unit-tests 6/9] x86/vmx: add mode-based execute control test for Skylake and above
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Introduce a new test for mode-based execute control (MBEC) in the VMX
controls, validating the dependency between MBEC and EPT VM-execution
controls. The test ensures that VM entry fails when MBEC is enabled
without EPT, and succeeds in valid combinations.
Update the unit test configuration to include a specific test case for
MBEC on Skylake-Server CPU model, as that was the first CPU series to
have MBEC.
Passing test result:
Test suite: vmx_controls_test_mbec
PASS: MBEC disabled, EPT disabled (valid combination): vmlaunch succeeds
PASS: MBEC enabled, EPT disabled (invalid combination): vmlaunch fails
PASS: MBEC enabled, EPT disabled (invalid combination): VMX inst error is 7 (actual 7)
PASS: MBEC enabled, EPT enabled (valid combination): vmlaunch succeeds
PASS: MBEC disabled, EPT enabled (valid combination): vmlaunch succeeds
Test run with "-vmx-mbec":
Test suite: vmx_controls_test_mbec
SKIP: test_mode_based_execute_control : "Secondary execution" or
"enable EPT" or "enable mode-based execute control" control not supported
Co-authored-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
x86/unittests.cfg | 9 +++++++
x86/vmx.h | 8 ++++++
x86/vmx_tests.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 81 insertions(+)
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 522318d3..b82bbc4e 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -324,6 +324,15 @@ qemu_params = -cpu max,+vmx
arch = x86_64
groups = vmx
+# VMX controls is a generic test; however, mode-based execute control
+# aka MBEC is only available on Skylake and above, be specific about
+# the CPU model and test it directly.
+[vmx_controls_test_mbec]
+file = vmx.flat
+extra_params = -cpu Skylake-Server,+vmx,+vmx-mbec -append "vmx_controls_test_mbec"
+arch = x86_64
+groups = vmx
+
[ept]
file = vmx.flat
test_args = "ept_access*"
diff --git a/x86/vmx.h b/x86/vmx.h
index 0e29a57d..b492ec74 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -510,6 +510,7 @@ enum Ctrl1 {
CPU_SHADOW_VMCS = 1ul << 14,
CPU_RDSEED = 1ul << 16,
CPU_PML = 1ul << 17,
+ CPU_MODE_BASED_EPT_EXEC = 1ul << 22,
CPU_USE_TSC_SCALING = 1ul << 25,
};
@@ -843,6 +844,13 @@ static inline bool is_invvpid_type_supported(unsigned long type)
return ept_vpid.val & (VPID_CAP_INVVPID_ADDR << (type - INVVPID_ADDR));
}
+static inline bool is_mbec_supported(void)
+{
+ return (ctrl_cpu_rev[0].clr & CPU_SECONDARY) &&
+ (ctrl_cpu_rev[1].clr & CPU_EPT) &&
+ (ctrl_cpu_rev[1].clr & CPU_MODE_BASED_EPT_EXEC);
+}
+
extern u64 *bsp_vmxon_region;
extern bool launched;
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 0e3dca3c..dbc456cb 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -4876,6 +4876,69 @@ skip_unrestricted_guest:
vmcs_write(EPTP, eptp_saved);
}
+/*
+ * Test the dependency between mode-based execute control for EPT (MBEC) and
+ * enable EPT VM-execution controls.
+ *
+ * When MBEC (bit 22 of secondary processor-based VM-execution controls) is enabled,
+ * it allows separate execute permissions for supervisor-mode and user-mode linear
+ * addresses in EPT paging structures. However, per Intel SDM requirement:
+ *
+ * "If the 'mode-based execute control for EPT' VM-execution control is 1,
+ * the 'enable EPT' VM-execution control must also be 1."
+ *
+ * This test validates that VM entry fails when MBEC is enabled without EPT,
+ * and succeeds in all other valid combinations.
+ *
+ * [Intel SDM Vol. 3C, Section 26.6.2, Table 26-7]
+ */
+static void test_mode_based_execute_control(void)
+{
+ u32 primary_saved = vmcs_read(CPU_EXEC_CTRL0);
+ u32 secondary_saved = vmcs_read(CPU_EXEC_CTRL1);
+ u32 primary = primary_saved;
+ u32 secondary = secondary_saved;
+
+ /* Skip test if required VM-execution controls are not supported */
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ /* Test case 1: MBEC disabled, EPT disabled - should be valid */
+ primary |= CPU_SECONDARY;
+ vmcs_write(CPU_EXEC_CTRL0, primary);
+ secondary &= ~(CPU_MODE_BASED_EPT_EXEC | CPU_EPT);
+ vmcs_write(CPU_EXEC_CTRL1, secondary);
+ report_prefix_pushf("MBEC disabled, EPT disabled (valid combination)");
+ test_vmx_valid_controls();
+ report_prefix_pop();
+
+ /* Test case 2: MBEC enabled, EPT disabled - should be invalid per SDM */
+ secondary |= CPU_MODE_BASED_EPT_EXEC;
+ vmcs_write(CPU_EXEC_CTRL1, secondary);
+ report_prefix_pushf("MBEC enabled, EPT disabled (invalid combination)");
+ test_vmx_invalid_controls();
+ report_prefix_pop();
+
+ /* Test case 3: MBEC enabled, EPT enabled - should be valid */
+ secondary |= CPU_EPT;
+ setup_dummy_ept();
+ report_prefix_pushf("MBEC enabled, EPT enabled (valid combination)");
+ test_vmx_valid_controls();
+ report_prefix_pop();
+
+ /* Test case 4: MBEC disabled, EPT enabled - should be valid */
+ secondary &= ~CPU_MODE_BASED_EPT_EXEC;
+ vmcs_write(CPU_EXEC_CTRL1, secondary);
+ report_prefix_pushf("MBEC disabled, EPT enabled (valid combination)");
+ test_vmx_valid_controls();
+ report_prefix_pop();
+
+ vmcs_write(CPU_EXEC_CTRL0, primary_saved);
+ vmcs_write(CPU_EXEC_CTRL1, secondary_saved);
+}
+
/*
* If the 'enable PML' VM-execution control is 1, the 'enable EPT'
* VM-execution control must also be 1. In addition, the PML address
@@ -5336,6 +5399,7 @@ static void test_vm_execution_ctls(void)
test_pml();
test_vpid();
test_ept_eptp();
+ test_mode_based_execute_control();
test_vmx_preemption_timer();
}
--
2.52.0
* [PATCH kvm-unit-tests 7/9] x86/vmx: add user execution operation to EPT access tests
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Introduce a new ept_access_op, OP_EXEC_USER, to the EPT access tests to
prepare for MBEC, which allows execution of user-level code.
Since the tests do not support MBEC yet, the expected behavior is the
same as for OP_EXEC.
Co-authored-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
x86/vmx_tests.c | 51 +++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 47 insertions(+), 4 deletions(-)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index dbc456cb..023512e6 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -2285,6 +2285,7 @@ enum ept_access_op {
OP_READ,
OP_WRITE,
OP_EXEC,
+ OP_EXEC_USER,
OP_FLUSH_TLB,
OP_EXIT,
};
@@ -2399,8 +2400,8 @@ ept_violation_at_level_mkhuge(bool mkhuge, int level, unsigned long clear,
orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
do_ept_violation(level == 1 || mkhuge, op, expected_qual,
- op == OP_EXEC ? data->gpa + sizeof(unsigned long) :
- data->gpa);
+ (op == OP_EXEC || op == OP_EXEC_USER
+ ? data->gpa + sizeof(unsigned long) : data->gpa));
/* Fix the violation and resume the op loop. */
ept_untwiddle(data->gpa, level, orig_pte);
@@ -2589,11 +2590,13 @@ static void ept_ignored_bit(int bit)
ept_allowed(0, 1ul << bit, OP_READ);
ept_allowed(0, 1ul << bit, OP_WRITE);
ept_allowed(0, 1ul << bit, OP_EXEC);
+ ept_allowed(0, 1ul << bit, OP_EXEC_USER);
/* Clear the bit. */
ept_allowed(1ul << bit, 0, OP_READ);
ept_allowed(1ul << bit, 0, OP_WRITE);
ept_allowed(1ul << bit, 0, OP_EXEC);
+ ept_allowed(1ul << bit, 0, OP_EXEC_USER);
}
static void ept_access_allowed(unsigned long access, enum ept_access_op op)
@@ -2726,10 +2729,20 @@ static void ept_access_test_teardown(void *unused)
do_ept_access_op(OP_EXIT);
}
+static u64 exec_user_on_gva(void)
+{
+ struct ept_access_test_data *data = &ept_access_test_data;
+ int (*code)(void) = (int (*)(void)) &data->gva[1];
+
+ return code();
+}
+
static void ept_access_test_guest(void)
{
struct ept_access_test_data *data = &ept_access_test_data;
int (*code)(void) = (int (*)(void)) &data->gva[1];
+ bool unused;
+ u64 ret_val;
while (true) {
switch (data->op) {
@@ -2744,6 +2757,11 @@ static void ept_access_test_guest(void)
case OP_EXEC:
TEST_ASSERT_EQ(42, code());
break;
+ case OP_EXEC_USER:
+ ret_val = run_in_user(exec_user_on_gva, DE_VECTOR, // no exceptions
+ 0, 0, 0, 0, &unused);
+ TEST_ASSERT_EQ(42, ret_val);
+ break;
case OP_FLUSH_TLB:
write_cr3(read_cr3());
break;
@@ -2803,6 +2821,7 @@ static void ept_access_test_not_present(void)
ept_access_violation(0, OP_READ, EPT_VLT_RD);
ept_access_violation(0, OP_WRITE, EPT_VLT_WR);
ept_access_violation(0, OP_EXEC, EPT_VLT_FETCH);
+ ept_access_violation(0, OP_EXEC_USER, EPT_VLT_FETCH);
}
static void ept_access_test_read_only(void)
@@ -2813,6 +2832,7 @@ static void ept_access_test_read_only(void)
ept_access_allowed(EPT_RA, OP_READ);
ept_access_violation(EPT_RA, OP_WRITE, EPT_VLT_WR | EPT_VLT_PERM_RD);
ept_access_violation(EPT_RA, OP_EXEC, EPT_VLT_FETCH | EPT_VLT_PERM_RD);
+ ept_access_violation(EPT_RA, OP_EXEC_USER, EPT_VLT_FETCH | EPT_VLT_PERM_RD);
}
static void ept_access_test_write_only(void)
@@ -2829,7 +2849,11 @@ static void ept_access_test_read_write(void)
ept_access_allowed(EPT_RA | EPT_WA, OP_READ);
ept_access_allowed(EPT_RA | EPT_WA, OP_WRITE);
ept_access_violation(EPT_RA | EPT_WA, OP_EXEC,
- EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_WR);
+ EPT_VLT_FETCH | EPT_VLT_PERM_RD |
+ EPT_VLT_PERM_WR);
+ ept_access_violation(EPT_RA | EPT_WA, OP_EXEC_USER,
+ EPT_VLT_FETCH | EPT_VLT_PERM_RD |
+ EPT_VLT_PERM_WR);
}
@@ -2843,6 +2867,7 @@ static void ept_access_test_execute_only(void)
ept_access_violation(EPT_EA, OP_WRITE,
EPT_VLT_WR | EPT_VLT_PERM_EX);
ept_access_allowed(EPT_EA, OP_EXEC);
+ ept_access_allowed(EPT_EA, OP_EXEC_USER);
} else {
ept_access_misconfig(EPT_EA);
}
@@ -2854,8 +2879,9 @@ static void ept_access_test_read_execute(void)
/* r-x */
ept_access_allowed(EPT_RA | EPT_EA, OP_READ);
ept_access_violation(EPT_RA | EPT_EA, OP_WRITE,
- EPT_VLT_WR | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
+ EPT_VLT_WR | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC);
+ ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
}
static void ept_access_test_write_execute(void)
@@ -2872,6 +2898,7 @@ static void ept_access_test_read_write_execute(void)
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_READ);
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_WRITE);
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC);
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
}
static void ept_access_test_reserved_bits(void)
@@ -2952,6 +2979,7 @@ static void ept_access_test_paddr_not_present_ad_disabled(void)
ept_access_violation_paddr(0, PT_AD_MASK, OP_READ, EPT_VLT_RD);
ept_access_violation_paddr(0, PT_AD_MASK, OP_WRITE, EPT_VLT_RD);
ept_access_violation_paddr(0, PT_AD_MASK, OP_EXEC, EPT_VLT_RD);
+ ept_access_violation_paddr(0, PT_AD_MASK, OP_EXEC_USER, EPT_VLT_RD);
}
static void ept_access_test_paddr_not_present_ad_enabled(void)
@@ -2964,6 +2992,7 @@ static void ept_access_test_paddr_not_present_ad_enabled(void)
ept_access_violation_paddr(0, PT_AD_MASK, OP_READ, qual);
ept_access_violation_paddr(0, PT_AD_MASK, OP_WRITE, qual);
ept_access_violation_paddr(0, PT_AD_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(0, PT_AD_MASK, OP_EXEC_USER, qual);
}
static void ept_access_test_paddr_read_only_ad_disabled(void)
@@ -2983,14 +3012,17 @@ static void ept_access_test_paddr_read_only_ad_disabled(void)
ept_access_violation_paddr(EPT_RA, 0, OP_READ, qual);
ept_access_violation_paddr(EPT_RA, 0, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA, 0, OP_EXEC_USER, qual);
/* AD bits disabled, so only writes try to update the D bit. */
ept_access_allowed_paddr(EPT_RA, PT_ACCESSED_MASK, OP_READ);
ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_WRITE, qual);
ept_access_allowed_paddr(EPT_RA, PT_ACCESSED_MASK, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA, PT_ACCESSED_MASK, OP_EXEC_USER);
/* Both A and D already set, so read-only is OK. */
ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_READ);
ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_WRITE);
ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA, PT_AD_MASK, OP_EXEC_USER);
}
static void ept_access_test_paddr_read_only_ad_enabled(void)
@@ -3008,12 +3040,15 @@ static void ept_access_test_paddr_read_only_ad_enabled(void)
ept_access_violation_paddr(EPT_RA, 0, OP_READ, qual);
ept_access_violation_paddr(EPT_RA, 0, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA, 0, OP_EXEC_USER, qual);
ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_READ, qual);
ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA, PT_ACCESSED_MASK, OP_EXEC_USER, qual);
ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_READ, qual);
ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA, PT_AD_MASK, OP_EXEC_USER, qual);
}
static void ept_access_test_paddr_read_write(void)
@@ -3023,6 +3058,7 @@ static void ept_access_test_paddr_read_write(void)
ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_READ);
ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_WRITE);
ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA | EPT_WA, 0, OP_EXEC_USER);
}
static void ept_access_test_paddr_read_write_execute(void)
@@ -3032,6 +3068,7 @@ static void ept_access_test_paddr_read_write_execute(void)
ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_READ);
ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_WRITE);
ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA | EPT_WA | EPT_EA, 0, OP_EXEC_USER);
}
static void ept_access_test_paddr_read_execute_ad_disabled(void)
@@ -3051,14 +3088,17 @@ static void ept_access_test_paddr_read_execute_ad_disabled(void)
ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_READ, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_EXEC_USER, qual);
/* AD bits disabled, so only writes try to update the D bit. */
ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_READ);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_WRITE, qual);
ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_EXEC_USER);
/* Both A and D already set, so read-only is OK. */
ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_READ);
ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_WRITE);
ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_EXEC);
+ ept_access_allowed_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_EXEC_USER);
}
static void ept_access_test_paddr_read_execute_ad_enabled(void)
@@ -3076,12 +3116,15 @@ static void ept_access_test_paddr_read_execute_ad_enabled(void)
ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_READ, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA | EPT_EA, 0, OP_EXEC_USER, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_READ, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA | EPT_EA, PT_ACCESSED_MASK, OP_EXEC_USER, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_READ, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_WRITE, qual);
ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_EXEC, qual);
+ ept_access_violation_paddr(EPT_RA | EPT_EA, PT_AD_MASK, OP_EXEC_USER, qual);
}
static void ept_access_test_paddr_not_present_page_fault(void)
--
2.52.0
* [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available
2026-03-26 14:50 [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini
` (6 preceding siblings ...)
2026-03-26 14:50 ` [PATCH kvm-unit-tests 7/9] x86/vmx: add user execution operation to EPT access tests Paolo Bonzini
@ 2026-03-26 14:50 ` Paolo Bonzini
2026-03-26 16:13 ` Paolo Bonzini
2026-03-27 15:57 ` Jon Kohler
2026-03-26 14:50 ` [PATCH kvm-unit-tests 9/9] x86/vmx: add EPT tests covering XU permission Paolo Bonzini
2026-05-12 11:06 ` [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini
9 siblings, 2 replies; 17+ messages in thread
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Check that the XS bit does not allow execution of user-mode pages
when MBEC is available (and enabled); this requires tweaking
the guest page tables to set U=0 for OP_EXEC. Update the unit test
configuration to include a specific test case for MBEC.
Co-authored-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
x86/unittests.cfg | 12 +++++-
x86/vmx.h | 5 ++-
x86/vmx_tests.c | 98 ++++++++++++++++++++++++++++++++++++++---------
3 files changed, 94 insertions(+), 21 deletions(-)
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index b82bbc4e..022ea52c 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -336,7 +336,17 @@ groups = vmx
[ept]
file = vmx.flat
test_args = "ept_access*"
-qemu_params = -cpu max,host-phys-bits,+vmx -m 2560
+qemu_params = -cpu max,host-phys-bits,+vmx,-vmx-mbec -m 2560
+arch = x86_64
+groups = vmx
+
+# The EPT test is generic; however, mode-based execute control (MBEC)
+# is only available on Skylake and above, so be specific about the CPU
+# model and test it directly.
+[ept-mbec]
+file = vmx.flat
+test_args = "ept_access*"
+qemu_params = -cpu Skylake-Server,host-phys-bits,+vmx,+vmx-mbec -m 2560
arch = x86_64
groups = vmx
diff --git a/x86/vmx.h b/x86/vmx.h
index b492ec74..7ad7672a 100644
--- a/x86/vmx.h
+++ b/x86/vmx.h
@@ -672,11 +672,14 @@ enum vm_entry_failure_code {
#define EPT_LARGE_PAGE (1ul << 7)
#define EPT_ACCESS_FLAG (1ul << 8)
#define EPT_DIRTY_FLAG (1ul << 9)
+#define EPT_EA_USER (1ul << 10)
#define EPT_MEM_TYPE_SHIFT 3ul
#define EPT_MEM_TYPE_MASK 0x7ul
#define EPT_SUPPRESS_VE (1ull << 63)
-#define EPT_PRESENT (EPT_RA | EPT_WA | EPT_EA)
+#define EPT_PRESENT (is_mbec_supported() ? \
+ (EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER) : \
+ (EPT_RA | EPT_WA | EPT_EA))
#define EPT_CAP_EXEC_ONLY (1ull << 0)
#define EPT_CAP_PWL4 (1ull << 6)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 023512e6..bf03451a 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -1044,6 +1044,8 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
*/
static int __setup_ept(u64 hpa, bool enable_ad)
{
+ u64 secondary;
+
if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
!(ctrl_cpu_rev[1].clr & CPU_EPT)) {
printf("\tEPT is not supported\n");
@@ -1067,9 +1069,13 @@ static int __setup_ept(u64 hpa, bool enable_ad)
if (enable_ad)
eptp |= EPTP_AD_FLAG;
+ secondary = vmcs_read(CPU_EXEC_CTRL1) | CPU_EPT;
+ if (is_mbec_supported())
+ secondary |= CPU_MODE_BASED_EPT_EXEC;
+
vmcs_write(EPTP, eptp);
vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0)| CPU_SECONDARY);
- vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1)| CPU_EPT);
+ vmcs_write(CPU_EXEC_CTRL1, secondary);
return 0;
}
@@ -2174,6 +2180,7 @@ do { \
DIAGNOSE(EPT_VLT_PERM_RD);
DIAGNOSE(EPT_VLT_PERM_WR);
DIAGNOSE(EPT_VLT_PERM_EX);
+ DIAGNOSE(EPT_VLT_PERM_USER_EX);
DIAGNOSE(EPT_VLT_LADDR_VLD);
DIAGNOSE(EPT_VLT_PADDR);
DIAGNOSE(EPT_VLT_GUEST_USER);
@@ -2326,13 +2333,36 @@ static void ept_access_test_guest_flush_tlb(void)
skip_exit_vmcall();
}
+/*
+ * Modifies the leaf guest page table entry that maps @gva, clearing the bits
+ * in @clear and then setting the bits in @set. This is needed when testing
+ * MBEC so that the processor knows whether XS or XU applies to a fetch.
+ */
+static void guest_page_table_twiddle(unsigned long *gva, unsigned long clear, unsigned long set)
+{
+ pgd_t *cr3 = current_page_table();
+ int i;
+
+ for (i = 1; i <= PAGE_LEVEL; i++) {
+ u64 *pte = get_pte_level(cr3, gva, i);
+ if (!pte)
+ continue;
+
+ TEST_ASSERT(*pte & PT_PRESENT_MASK);
+ *pte = (*pte & ~clear) | set;
+ break;
+ }
+ invlpg((void *)gva);
+}
+
/*
* Modifies the EPT entry at @level in the mapping of @gpa. First clears the
* bits in @clear then sets the bits in @set. @mkhuge transforms the entry into
* a huge page.
*/
static unsigned long ept_twiddle(unsigned long gpa, bool mkhuge, int level,
- unsigned long clear, unsigned long set)
+ unsigned long clear, unsigned long set,
+ enum ept_access_op op)
{
struct ept_access_test_data *data = &ept_access_test_data;
unsigned long orig_pte;
@@ -2347,15 +2377,27 @@ static unsigned long ept_twiddle(unsigned long gpa, bool mkhuge, int level,
pte = orig_pte;
pte = (pte & ~clear) | set;
set_ept_pte(pml4, gpa, level, pte);
- invept(INVEPT_SINGLE, eptp);
+ if (is_mbec_supported() && op == OP_EXEC)
+ guest_page_table_twiddle(data->gva, PT_USER_MASK, 0);
+
+ invept(INVEPT_SINGLE, eptp);
return orig_pte;
}
-static void ept_untwiddle(unsigned long gpa, int level, unsigned long orig_pte)
+static void ept_untwiddle(unsigned long gpa, int level, unsigned long orig_pte,
+ enum ept_access_op op)
{
set_ept_pte(pml4, gpa, level, orig_pte);
invept(INVEPT_SINGLE, eptp);
+
+ if (is_mbec_supported() && op == OP_EXEC) {
+ struct ept_access_test_data *data = &ept_access_test_data;
+ guest_page_table_twiddle(data->gva, 0, PT_USER_MASK);
+ }
}
static void do_ept_violation(bool leaf, enum ept_access_op op,
@@ -2370,8 +2412,12 @@ static void do_ept_violation(bool leaf, enum ept_access_op op,
qual = vmcs_read(EXI_QUALIFICATION);
- /* Mask undefined bits (which may later be defined in certain cases). */
- qual &= ~(EPT_VLT_GUEST_MASK | EPT_VLT_PERM_USER_EX);
+ /*
+ * The exit qualification is masked to ignore the advanced VM-exit
+ * information bits. KVM supports this feature, so the tests could
+ * be enhanced to cover it.
+ */
+ qual &= ~EPT_VLT_GUEST_MASK;
diagnose_ept_violation_qual(expected_qual, qual);
TEST_EXPECT_EQ(expected_qual, qual);
@@ -2397,14 +2443,14 @@ ept_violation_at_level_mkhuge(bool mkhuge, int level, unsigned long clear,
struct ept_access_test_data *data = &ept_access_test_data;
unsigned long orig_pte;
- orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
+ orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set, op);
do_ept_violation(level == 1 || mkhuge, op, expected_qual,
(op == OP_EXEC || op == OP_EXEC_USER
? data->gpa + sizeof(unsigned long) : data->gpa));
/* Fix the violation and resume the op loop. */
- ept_untwiddle(data->gpa, level, orig_pte);
+ ept_untwiddle(data->gpa, level, orig_pte, op);
enter_guest();
skip_exit_vmcall();
}
@@ -2502,12 +2548,12 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
*/
install_ept(pml4, gpa, gpa, EPT_PRESENT);
orig_epte = ept_twiddle(gpa, /*mkhuge=*/0, /*level=*/1,
- /*clear=*/EPT_PRESENT, /*set=*/ept_access);
+ /*clear=*/EPT_PRESENT, /*set=*/ept_access, op);
if (expect_violation) {
do_ept_violation(/*leaf=*/true, op,
expected_qual | EPT_VLT_LADDR_VLD, gpa);
- ept_untwiddle(gpa, /*level=*/1, orig_epte);
+ ept_untwiddle(gpa, /*level=*/1, orig_epte, op);
do_ept_access_op(op);
} else {
do_ept_access_op(op);
@@ -2522,7 +2568,7 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
}
}
- ept_untwiddle(gpa, /*level=*/1, orig_epte);
+ ept_untwiddle(gpa, /*level=*/1, orig_epte, op);
}
TEST_ASSERT(*ptep & PT_ACCESSED_MASK);
@@ -2558,13 +2604,13 @@ static void ept_allowed_at_level_mkhuge(bool mkhuge, int level,
struct ept_access_test_data *data = &ept_access_test_data;
unsigned long orig_pte;
- orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
+ orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set, op);
/* No violation. Should proceed to vmcall. */
do_ept_access_op(op);
skip_exit_vmcall();
- ept_untwiddle(data->gpa, level, orig_pte);
+ ept_untwiddle(data->gpa, level, orig_pte, op);
}
static void ept_allowed_at_level(int level, unsigned long clear,
@@ -2613,7 +2659,7 @@ static void ept_misconfig_at_level_mkhuge_op(bool mkhuge, int level,
struct ept_access_test_data *data = &ept_access_test_data;
unsigned long orig_pte;
- orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
+ orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set, op);
do_ept_access_op(op);
assert_exit_reason(VMX_EPT_MISCONFIG);
@@ -2637,7 +2683,7 @@ static void ept_misconfig_at_level_mkhuge_op(bool mkhuge, int level,
#endif
/* Fix the violation and resume the op loop. */
- ept_untwiddle(data->gpa, level, orig_pte);
+ ept_untwiddle(data->gpa, level, orig_pte, op);
enter_guest();
skip_exit_vmcall();
}
@@ -2867,7 +2913,12 @@ static void ept_access_test_execute_only(void)
ept_access_violation(EPT_EA, OP_WRITE,
EPT_VLT_WR | EPT_VLT_PERM_EX);
ept_access_allowed(EPT_EA, OP_EXEC);
- ept_access_allowed(EPT_EA, OP_EXEC_USER);
+ if (is_mbec_supported())
+ ept_access_violation(EPT_EA, OP_EXEC_USER,
+ EPT_VLT_FETCH |
+ EPT_VLT_PERM_EX);
+ else
+ ept_access_allowed(EPT_EA, OP_EXEC_USER);
} else {
ept_access_misconfig(EPT_EA);
}
@@ -2881,7 +2932,11 @@ static void ept_access_test_read_execute(void)
ept_access_violation(EPT_RA | EPT_EA, OP_WRITE,
EPT_VLT_WR | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC);
- ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
+ if (is_mbec_supported())
+ ept_access_violation(EPT_RA | EPT_EA, OP_EXEC_USER,
+ EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
+ else
+ ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
}
static void ept_access_test_write_execute(void)
@@ -2898,7 +2953,11 @@ static void ept_access_test_read_write_execute(void)
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_READ);
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_WRITE);
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC);
- ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
+ if (is_mbec_supported())
+ ept_access_violation(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER,
+ EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_WR | EPT_VLT_PERM_EX);
+ else
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
}
static void ept_access_test_reserved_bits(void)
@@ -2955,7 +3014,8 @@ static void ept_access_test_ignored_bits(void)
*/
ept_ignored_bit(8);
ept_ignored_bit(9);
- ept_ignored_bit(10);
+ if (!is_mbec_supported())
+ ept_ignored_bit(10);
ept_ignored_bit(11);
ept_ignored_bit(52);
ept_ignored_bit(53);
--
2.52.0
* [PATCH kvm-unit-tests 9/9] x86/vmx: add EPT tests covering XU permission
2026-03-26 14:50 [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini
` (7 preceding siblings ...)
2026-03-26 14:50 ` [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available Paolo Bonzini
@ 2026-03-26 14:50 ` Paolo Bonzini
2026-03-27 15:56 ` Jon Kohler
2026-05-12 11:06 ` [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini
9 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2026-03-26 14:50 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
Add tests to validate MBEC execute access when XU=1, with and without XS=1.
Co-authored-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
x86/vmx_tests.c | 120 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 120 insertions(+)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index bf03451a..a47e8470 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -2924,6 +2924,52 @@ static void ept_access_test_execute_only(void)
}
}
+static void ept_access_test_execute_user_only(void)
+{
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ ept_access_test_setup();
+ /* --x (exec user only) */
+ if (ept_execute_only_supported()) {
+ ept_access_violation(EPT_EA_USER, OP_READ,
+ EPT_VLT_RD |
+ EPT_VLT_PERM_USER_EX);
+ ept_access_violation(EPT_EA_USER, OP_WRITE,
+ EPT_VLT_WR |
+ EPT_VLT_PERM_USER_EX);
+ ept_access_violation(EPT_EA_USER, OP_EXEC,
+ EPT_VLT_FETCH |
+ EPT_VLT_PERM_USER_EX);
+ ept_access_allowed(EPT_EA_USER, OP_EXEC_USER);
+ } else {
+ ept_access_misconfig(EPT_EA_USER);
+ }
+}
+
+static void ept_access_test_execute_both(void)
+{
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ ept_access_test_setup();
+ /* --x (both XS and XU) */
+ if (ept_execute_only_supported()) {
+ ept_access_violation(EPT_EA | EPT_EA_USER, OP_READ,
+ EPT_VLT_RD | EPT_VLT_PERM_EX | EPT_VLT_PERM_USER_EX);
+ ept_access_violation(EPT_EA | EPT_EA_USER, OP_WRITE,
+ EPT_VLT_WR | EPT_VLT_PERM_EX | EPT_VLT_PERM_USER_EX);
+ ept_access_allowed(EPT_EA | EPT_EA_USER, OP_EXEC);
+ ept_access_allowed(EPT_EA | EPT_EA_USER, OP_EXEC_USER);
+ } else {
+ ept_access_misconfig(EPT_EA | EPT_EA_USER);
+ }
+}
+
static void ept_access_test_read_execute(void)
{
ept_access_test_setup();
@@ -2939,6 +2985,43 @@ static void ept_access_test_read_execute(void)
ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
}
+static void ept_access_test_read_execute_user_only(void)
+{
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ ept_access_test_setup();
+ /* r-x (exec user only) */
+ ept_access_allowed(EPT_RA | EPT_EA_USER, OP_READ);
+ ept_access_violation(EPT_RA | EPT_EA_USER, OP_WRITE,
+ EPT_VLT_WR | EPT_VLT_PERM_RD |
+ EPT_VLT_PERM_USER_EX);
+ ept_access_violation(EPT_RA | EPT_EA_USER, OP_EXEC,
+ EPT_VLT_FETCH | EPT_VLT_PERM_RD |
+ EPT_VLT_PERM_USER_EX);
+ ept_access_allowed(EPT_RA | EPT_EA_USER, OP_EXEC_USER);
+}
+
+static void ept_access_test_read_execute_both(void)
+{
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ ept_access_test_setup();
+ /* r-x (both XS and XU) */
+ ept_access_allowed(EPT_RA | EPT_EA | EPT_EA_USER, OP_READ);
+ ept_access_violation(EPT_RA | EPT_EA | EPT_EA_USER, OP_WRITE,
+ EPT_VLT_WR | EPT_VLT_PERM_RD |
+ EPT_VLT_PERM_EX | EPT_VLT_PERM_USER_EX);
+ ept_access_allowed(EPT_RA | EPT_EA | EPT_EA_USER, OP_EXEC);
+ ept_access_allowed(EPT_RA | EPT_EA | EPT_EA_USER, OP_EXEC_USER);
+}
+
static void ept_access_test_write_execute(void)
{
ept_access_test_setup();
@@ -2960,6 +3043,37 @@ static void ept_access_test_read_write_execute(void)
ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
}
+static void ept_access_test_read_write_execute_user_only(void)
+{
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ ept_access_test_setup();
+ /* rwx (exec user only) */
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA_USER, OP_READ);
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA_USER, OP_WRITE);
+ ept_access_violation(EPT_RA | EPT_WA | EPT_EA_USER, OP_EXEC,
+ EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_WR | EPT_VLT_PERM_USER_EX);
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA_USER, OP_EXEC_USER);
+}
+
+static void ept_access_test_read_write_execute_both(void)
+{
+ if (!is_mbec_supported()) {
+ report_skip("MBEC not supported");
+ return;
+ }
+
+ ept_access_test_setup();
+ /* rwx (both XS and XU) */
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_READ);
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_WRITE);
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_EXEC);
+ ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_EXEC_USER);
+}
+
static void ept_access_test_reserved_bits(void)
{
int i;
@@ -11722,9 +11836,15 @@ struct vmx_test vmx_tests[] = {
TEST(ept_access_test_write_only),
TEST(ept_access_test_read_write),
TEST(ept_access_test_execute_only),
+ TEST(ept_access_test_execute_user_only),
+ TEST(ept_access_test_execute_both),
TEST(ept_access_test_read_execute),
+ TEST(ept_access_test_read_execute_user_only),
+ TEST(ept_access_test_read_execute_both),
TEST(ept_access_test_write_execute),
TEST(ept_access_test_read_write_execute),
+ TEST(ept_access_test_read_write_execute_user_only),
+ TEST(ept_access_test_read_write_execute_both),
TEST(ept_access_test_reserved_bits),
TEST(ept_access_test_ignored_bits),
TEST(ept_access_test_paddr_not_present_ad_disabled),
--
2.52.0
* Re: [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available
2026-03-26 14:50 ` [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available Paolo Bonzini
@ 2026-03-26 16:13 ` Paolo Bonzini
2026-03-27 15:57 ` Jon Kohler
2026-03-27 15:57 ` Jon Kohler
1 sibling, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2026-03-26 16:13 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
On 3/26/26 15:50, Paolo Bonzini wrote:
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index 023512e6..bf03451a 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -1044,6 +1044,8 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
> */
> static int __setup_ept(u64 hpa, bool enable_ad)
> {
> + u64 secondary;
> +
> if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
> !(ctrl_cpu_rev[1].clr & CPU_EPT)) {
> printf("\tEPT is not supported\n");
> @@ -1067,9 +1069,13 @@ static int __setup_ept(u64 hpa, bool enable_ad)
> if (enable_ad)
> eptp |= EPTP_AD_FLAG;
>
> + secondary = vmcs_read(CPU_EXEC_CTRL1) | CPU_EPT;
> + if (is_mbec_supported())
> + secondary |= CPU_MODE_BASED_EPT_EXEC;
> +
> vmcs_write(EPTP, eptp);
> vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0)| CPU_SECONDARY);
> - vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1)| CPU_EPT);
> + vmcs_write(CPU_EXEC_CTRL1, secondary);
>
> return 0;
> }
This is missing a hunk:
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 023512e6..b5994719 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -4886,7 +4946,7 @@ static void test_ept_eptp(void)
report_prefix_pop();
}
- secondary &= ~(CPU_EPT | CPU_URG);
+ secondary &= ~(CPU_URG | CPU_EPT | CPU_MODE_BASED_EPT_EXEC);
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("Enable-EPT disabled, unrestricted-guest disabled");
test_vmx_valid_controls();
and possibly this as well for consistency:
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index b5994719..cf1022ba 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -5068,7 +5068,7 @@ static void test_pml(void)
primary |= CPU_SECONDARY;
vmcs_write(CPU_EXEC_CTRL0, primary);
- secondary &= ~(CPU_PML | CPU_EPT);
+ secondary &= ~(CPU_PML | CPU_EPT | CPU_MODE_BASED_EPT_EXEC);
vmcs_write(CPU_EXEC_CTRL1, secondary);
report_prefix_pushf("enable-PML disabled, enable-EPT disabled");
test_vmx_valid_controls();
Paolo
* Re: [PATCH kvm-unit-tests 9/9] x86/vmx: add EPT tests covering XU permission
2026-03-26 14:50 ` [PATCH kvm-unit-tests 9/9] x86/vmx: add EPT tests covering XU permission Paolo Bonzini
@ 2026-03-27 15:56 ` Jon Kohler
0 siblings, 0 replies; 17+ messages in thread
From: Jon Kohler @ 2026-03-27 15:56 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Nikunj A Dadhania, amit.shah@amd.com,
Sean Christopherson
> On Mar 26, 2026, at 10:50 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> Add tests to validate MBEC execute access when XU=1, with and without XS=1.
>
> Co-authored-by: Jon Kohler <jon@nutanix.com>
> Signed-off-by: Jon Kohler <jon@nutanix.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> x86/vmx_tests.c | 120 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 120 insertions(+)
>
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index bf03451a..a47e8470 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -2924,6 +2924,52 @@ static void ept_access_test_execute_only(void)
> }
> }
>
> +static void ept_access_test_execute_user_only(void)
> +{
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + ept_access_test_setup();
> + /* --x (exec user only) */
> + if (ept_execute_only_supported()) {
> + ept_access_violation(EPT_EA_USER, OP_READ,
> + EPT_VLT_RD |
> + EPT_VLT_PERM_USER_EX);
> + ept_access_violation(EPT_EA_USER, OP_WRITE,
> + EPT_VLT_WR |
> + EPT_VLT_PERM_USER_EX);
> + ept_access_violation(EPT_EA_USER, OP_EXEC,
> + EPT_VLT_FETCH |
> + EPT_VLT_PERM_USER_EX);
> + ept_access_allowed(EPT_EA_USER, OP_EXEC_USER);
> + } else {
> + ept_access_misconfig(EPT_EA_USER);
> + }
> +}
> +
> +static void ept_access_test_execute_both(void)
> +{
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + ept_access_test_setup();
> + /* --x (both XS and XU) */
> + if (ept_execute_only_supported()) {
> + ept_access_violation(EPT_EA | EPT_EA_USER, OP_READ,
> + EPT_VLT_RD | EPT_VLT_PERM_EX | EPT_VLT_PERM_USER_EX);
> + ept_access_violation(EPT_EA | EPT_EA_USER, OP_WRITE,
> + EPT_VLT_WR | EPT_VLT_PERM_EX | EPT_VLT_PERM_USER_EX);
Wrap EPT_VLT_PERM_USER_EX ? ^^
> + ept_access_allowed(EPT_EA | EPT_EA_USER, OP_EXEC);
> + ept_access_allowed(EPT_EA | EPT_EA_USER, OP_EXEC_USER);
> + } else {
> + ept_access_misconfig(EPT_EA | EPT_EA_USER);
> + }
> +}
> +
> static void ept_access_test_read_execute(void)
> {
> ept_access_test_setup();
> @@ -2939,6 +2985,43 @@ static void ept_access_test_read_execute(void)
> ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
> }
>
> +static void ept_access_test_read_execute_user_only(void)
> +{
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + ept_access_test_setup();
> + /* r-x (exec user only) */
> + ept_access_allowed(EPT_RA | EPT_EA_USER, OP_READ);
> + ept_access_violation(EPT_RA | EPT_EA_USER, OP_WRITE,
> + EPT_VLT_WR | EPT_VLT_PERM_RD |
> + EPT_VLT_PERM_USER_EX);
> + ept_access_violation(EPT_RA | EPT_EA_USER, OP_EXEC,
> + EPT_VLT_FETCH | EPT_VLT_PERM_RD |
> + EPT_VLT_PERM_USER_EX);
> + ept_access_allowed(EPT_RA | EPT_EA_USER, OP_EXEC_USER);
> +}
> +
> +static void ept_access_test_read_execute_both(void)
> +{
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + ept_access_test_setup();
> + /* r-x (both XS and XU) */
> + ept_access_allowed(EPT_RA | EPT_EA | EPT_EA_USER, OP_READ);
> + ept_access_violation(EPT_RA | EPT_EA | EPT_EA_USER, OP_WRITE,
> + EPT_VLT_WR | EPT_VLT_PERM_RD |
> + EPT_VLT_PERM_EX | EPT_VLT_PERM_USER_EX);
> + ept_access_allowed(EPT_RA | EPT_EA | EPT_EA_USER, OP_EXEC);
> + ept_access_allowed(EPT_RA | EPT_EA | EPT_EA_USER, OP_EXEC_USER);
> +}
> +
> +
> static void ept_access_test_write_execute(void)
> {
> ept_access_test_setup();
> @@ -2960,6 +3043,37 @@ static void ept_access_test_read_write_execute(void)
> ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
> }
>
> +static void ept_access_test_read_write_execute_user_only(void)
> +{
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + ept_access_test_setup();
> + /* rwx (exec user only) */
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA_USER, OP_READ);
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA_USER, OP_WRITE);
> + ept_access_violation(EPT_RA | EPT_WA | EPT_EA_USER, OP_EXEC,
> + EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_WR | EPT_VLT_PERM_USER_EX);
Wrap line ^^
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA_USER, OP_EXEC_USER);
> +}
> +
> +static void ept_access_test_read_write_execute_both(void)
> +{
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + ept_access_test_setup();
> + /* rwx (both XS and XU) */
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_READ);
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_WRITE);
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_EXEC);
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER, OP_EXEC_USER);
> +}
> +
> static void ept_access_test_reserved_bits(void)
> {
> int i;
> @@ -11722,9 +11836,15 @@ struct vmx_test vmx_tests[] = {
> TEST(ept_access_test_write_only),
> TEST(ept_access_test_read_write),
> TEST(ept_access_test_execute_only),
> + TEST(ept_access_test_execute_user_only),
> + TEST(ept_access_test_execute_both),
> TEST(ept_access_test_read_execute),
> + TEST(ept_access_test_read_execute_user_only),
> + TEST(ept_access_test_read_execute_both),
> TEST(ept_access_test_write_execute),
> TEST(ept_access_test_read_write_execute),
> + TEST(ept_access_test_read_write_execute_user_only),
> + TEST(ept_access_test_read_write_execute_both),
> TEST(ept_access_test_reserved_bits),
> TEST(ept_access_test_ignored_bits),
> TEST(ept_access_test_paddr_not_present_ad_disabled),
> --
> 2.52.0
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available
2026-03-26 16:13 ` Paolo Bonzini
@ 2026-03-27 15:57 ` Jon Kohler
0 siblings, 0 replies; 17+ messages in thread
From: Jon Kohler @ 2026-03-27 15:57 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Nikunj A Dadhania, amit.shah@amd.com,
Sean Christopherson
> On Mar 26, 2026, at 12:13 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 3/26/26 15:50, Paolo Bonzini wrote:
>> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>> index 023512e6..bf03451a 100644
>> --- a/x86/vmx_tests.c
>> +++ b/x86/vmx_tests.c
>> @@ -1044,6 +1044,8 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
>> */
>> static int __setup_ept(u64 hpa, bool enable_ad)
>> {
>> + u64 secondary;
>> +
>> if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
>> !(ctrl_cpu_rev[1].clr & CPU_EPT)) {
>> printf("\tEPT is not supported\n");
>> @@ -1067,9 +1069,13 @@ static int __setup_ept(u64 hpa, bool enable_ad)
>> if (enable_ad)
>> eptp |= EPTP_AD_FLAG;
>> + secondary = vmcs_read(CPU_EXEC_CTRL1) | CPU_EPT;
>> + if (is_mbec_supported())
>> + secondary |= CPU_MODE_BASED_EPT_EXEC;
>> +
>> vmcs_write(EPTP, eptp);
>> vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0)| CPU_SECONDARY);
>> - vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1)| CPU_EPT);
>> + vmcs_write(CPU_EXEC_CTRL1, secondary);
>> return 0;
>> }
>
> This is missing a hunk:
>
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index 023512e6..b5994719 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -4886,7 +4946,7 @@ static void test_ept_eptp(void)
> report_prefix_pop();
> }
> - secondary &= ~(CPU_EPT | CPU_URG);
> + secondary &= ~(CPU_URG | CPU_EPT | CPU_MODE_BASED_EPT_EXEC);
> vmcs_write(CPU_EXEC_CTRL1, secondary);
> report_prefix_pushf("Enable-EPT disabled, unrestricted-guest disabled");
> test_vmx_valid_controls();
Thanks, agreed. This works for me and fixes the sole failure I had
on the EPT side.
>
> and possibly this as well for consistency:
>
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index b5994719..cf1022ba 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -5068,7 +5068,7 @@ static void test_pml(void)
> primary |= CPU_SECONDARY;
> vmcs_write(CPU_EXEC_CTRL0, primary);
> - secondary &= ~(CPU_PML | CPU_EPT);
> + secondary &= ~(CPU_PML | CPU_EPT | CPU_MODE_BASED_EPT_EXEC);
> vmcs_write(CPU_EXEC_CTRL1, secondary);
> report_prefix_pushf("enable-PML disabled, enable-EPT disabled");
> test_vmx_valid_controls();
>
> Paolo
* Re: [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available
2026-03-26 14:50 ` [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available Paolo Bonzini
2026-03-26 16:13 ` Paolo Bonzini
@ 2026-03-27 15:57 ` Jon Kohler
1 sibling, 0 replies; 17+ messages in thread
From: Jon Kohler @ 2026-03-27 15:57 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Nikunj A Dadhania, amit.shah@amd.com,
Sean Christopherson
> On Mar 26, 2026, at 10:50 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> Check that the XS bit does not allow execution of user-mode pages
> when MBEC is available (and enabled); this requires tweaking
> the guest page tables to set U=0 for OP_EXEC. Update the unit test
> configuration to include a specific test case for MBEC.
>
> Co-authored-by: Jon Kohler <jon@nutanix.com>
> Signed-off-by: Jon Kohler <jon@nutanix.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> x86/unittests.cfg | 12 +++++-
> x86/vmx.h | 5 ++-
> x86/vmx_tests.c | 98 ++++++++++++++++++++++++++++++++++++++---------
> 3 files changed, 94 insertions(+), 21 deletions(-)
>
> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
> index b82bbc4e..022ea52c 100644
> --- a/x86/unittests.cfg
> +++ b/x86/unittests.cfg
> @@ -336,7 +336,17 @@ groups = vmx
> [ept]
> file = vmx.flat
> test_args = "ept_access*"
> -qemu_params = -cpu max,host-phys-bits,+vmx -m 2560
> +qemu_params = -cpu max,host-phys-bits,+vmx,-vmx-mbec -m 2560
> +arch = x86_64
> +groups = vmx
> +
> +# EPT is a generic test; however, mode-based execute control aka MBEC
> +# is only available on Skylake and above, be specific about the CPU
> +# model and test it directly.
> +[ept-mbec]
> +file = vmx.flat
> +test_args = "ept_access*"
> +qemu_params = -cpu Skylake-Server,host-phys-bits,+vmx,+vmx-mbec -m 2560
> arch = x86_64
> groups = vmx
>
> diff --git a/x86/vmx.h b/x86/vmx.h
> index b492ec74..7ad7672a 100644
> --- a/x86/vmx.h
> +++ b/x86/vmx.h
> @@ -672,11 +672,14 @@ enum vm_entry_failure_code {
> #define EPT_LARGE_PAGE (1ul << 7)
> #define EPT_ACCESS_FLAG (1ul << 8)
> #define EPT_DIRTY_FLAG (1ul << 9)
> +#define EPT_EA_USER (1ul << 10)
> #define EPT_MEM_TYPE_SHIFT 3ul
> #define EPT_MEM_TYPE_MASK 0x7ul
> #define EPT_SUPPRESS_VE (1ull << 63)
>
> -#define EPT_PRESENT (EPT_RA | EPT_WA | EPT_EA)
> +#define EPT_PRESENT (is_mbec_supported() ? \
> + (EPT_RA | EPT_WA | EPT_EA | EPT_EA_USER) : \
> + (EPT_RA | EPT_WA | EPT_EA))
>
> #define EPT_CAP_EXEC_ONLY (1ull << 0)
> #define EPT_CAP_PWL4 (1ull << 6)
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index 023512e6..bf03451a 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -1044,6 +1044,8 @@ static int insn_intercept_exit_handler(union exit_reason exit_reason)
> */
> static int __setup_ept(u64 hpa, bool enable_ad)
> {
> + u64 secondary;
> +
> if (!(ctrl_cpu_rev[0].clr & CPU_SECONDARY) ||
> !(ctrl_cpu_rev[1].clr & CPU_EPT)) {
> printf("\tEPT is not supported\n");
> @@ -1067,9 +1069,13 @@ static int __setup_ept(u64 hpa, bool enable_ad)
> if (enable_ad)
> eptp |= EPTP_AD_FLAG;
>
> + secondary = vmcs_read(CPU_EXEC_CTRL1) | CPU_EPT;
> + if (is_mbec_supported())
> + secondary |= CPU_MODE_BASED_EPT_EXEC;
> +
> vmcs_write(EPTP, eptp);
> vmcs_write(CPU_EXEC_CTRL0, vmcs_read(CPU_EXEC_CTRL0)| CPU_SECONDARY);
> - vmcs_write(CPU_EXEC_CTRL1, vmcs_read(CPU_EXEC_CTRL1)| CPU_EPT);
> + vmcs_write(CPU_EXEC_CTRL1, secondary);
>
> return 0;
> }
> @@ -2174,6 +2180,7 @@ do { \
> DIAGNOSE(EPT_VLT_PERM_RD);
> DIAGNOSE(EPT_VLT_PERM_WR);
> DIAGNOSE(EPT_VLT_PERM_EX);
> + DIAGNOSE(EPT_VLT_PERM_USER_EX);
> DIAGNOSE(EPT_VLT_LADDR_VLD);
> DIAGNOSE(EPT_VLT_PADDR);
> DIAGNOSE(EPT_VLT_GUEST_USER);
> @@ -2326,13 +2333,36 @@ static void ept_access_test_guest_flush_tlb(void)
> skip_exit_vmcall();
> }
>
> +/*
> + * Modifies the leaf guest page table entry that maps @gva, clearing the bits
> + * in @clear then setting the bits in @set. This is needed when testing
> + * MBEC so that the processor knows whether to observe XS or XU.
> + */
> +static void guest_page_table_twiddle(unsigned long *gva, unsigned long clear, unsigned long set)
> +{
> + pgd_t *cr3 = current_page_table();
> + int i;
> +
> + for (i = 1; i <= PAGE_LEVEL; i++) {
> + u64 *pte = get_pte_level(cr3, gva, i);
> + if (!pte)
> + continue;
> +
> + TEST_ASSERT(*pte & PT_PRESENT_MASK);
> + *pte = (*pte & ~clear) | set;
> + break;
> + }
> + invlpg((void *)gva);
> +}
> +
> /*
> * Modifies the EPT entry at @level in the mapping of @gpa. First clears the
> * bits in @clear then sets the bits in @set. @mkhuge transforms the entry into
> * a huge page.
> */
> static unsigned long ept_twiddle(unsigned long gpa, bool mkhuge, int level,
> - unsigned long clear, unsigned long set)
> + unsigned long clear, unsigned long set,
> + enum ept_access_op op)
> {
> struct ept_access_test_data *data = &ept_access_test_data;
> unsigned long orig_pte;
> @@ -2347,15 +2377,27 @@ static unsigned long ept_twiddle(unsigned long gpa, bool mkhuge, int level,
> pte = orig_pte;
> pte = (pte & ~clear) | set;
> set_ept_pte(pml4, gpa, level, pte);
> - invept(INVEPT_SINGLE, eptp);
>
> + if (is_mbec_supported() && op == OP_EXEC)
> + guest_page_table_twiddle(data->gva, PT_USER_MASK, 0);
> +
> + invept(INVEPT_SINGLE, eptp);
> return orig_pte;
> }
>
> -static void ept_untwiddle(unsigned long gpa, int level, unsigned long orig_pte)
> +static void ept_untwiddle(unsigned long gpa, int level, unsigned long orig_pte,
> + enum ept_access_op op)
> {
> + unsigned long pte;
> +
> + pte = get_ept_pte(pml4, gpa, level, &pte);
> set_ept_pte(pml4, gpa, level, orig_pte);
> invept(INVEPT_SINGLE, eptp);
> +
> + if (is_mbec_supported() && op == OP_EXEC) {
> + struct ept_access_test_data *data = &ept_access_test_data;
> + guest_page_table_twiddle(data->gva, 0, PT_USER_MASK);
> + }
> }
>
> static void do_ept_violation(bool leaf, enum ept_access_op op,
> @@ -2370,8 +2412,12 @@ static void do_ept_violation(bool leaf, enum ept_access_op op,
>
> qual = vmcs_read(EXI_QUALIFICATION);
>
> - /* Mask undefined bits (which may later be defined in certain cases). */
> - qual &= ~(EPT_VLT_GUEST_MASK | EPT_VLT_PERM_USER_EX);
> + /*
> + * Exit-qualifications are masked not to account for advanced
> + * VM-exit information. KVM supports this feature, so the tests
> + * could be enhanced to cover it.
> + */
> + qual &= ~EPT_VLT_GUEST_MASK;
>
> diagnose_ept_violation_qual(expected_qual, qual);
> TEST_EXPECT_EQ(expected_qual, qual);
> @@ -2397,14 +2443,14 @@ ept_violation_at_level_mkhuge(bool mkhuge, int level, unsigned long clear,
> struct ept_access_test_data *data = &ept_access_test_data;
> unsigned long orig_pte;
>
> - orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
> + orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set, op);
>
> do_ept_violation(level == 1 || mkhuge, op, expected_qual,
> (op == OP_EXEC || op == OP_EXEC_USER
> ? data->gpa + sizeof(unsigned long) : data->gpa));
>
> /* Fix the violation and resume the op loop. */
> - ept_untwiddle(data->gpa, level, orig_pte);
> + ept_untwiddle(data->gpa, level, orig_pte, op);
> enter_guest();
> skip_exit_vmcall();
> }
> @@ -2502,12 +2548,12 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
> */
> install_ept(pml4, gpa, gpa, EPT_PRESENT);
> orig_epte = ept_twiddle(gpa, /*mkhuge=*/0, /*level=*/1,
> - /*clear=*/EPT_PRESENT, /*set=*/ept_access);
> + /*clear=*/EPT_PRESENT, /*set=*/ept_access, op);
>
> if (expect_violation) {
> do_ept_violation(/*leaf=*/true, op,
> expected_qual | EPT_VLT_LADDR_VLD, gpa);
> - ept_untwiddle(gpa, /*level=*/1, orig_epte);
> + ept_untwiddle(gpa, /*level=*/1, orig_epte, op);
> do_ept_access_op(op);
> } else {
> do_ept_access_op(op);
> @@ -2522,7 +2568,7 @@ static void ept_access_paddr(unsigned long ept_access, unsigned long pte_ad,
> }
> }
>
> - ept_untwiddle(gpa, /*level=*/1, orig_epte);
> + ept_untwiddle(gpa, /*level=*/1, orig_epte, op);
> }
>
> TEST_ASSERT(*ptep & PT_ACCESSED_MASK);
> @@ -2558,13 +2604,13 @@ static void ept_allowed_at_level_mkhuge(bool mkhuge, int level,
> struct ept_access_test_data *data = &ept_access_test_data;
> unsigned long orig_pte;
>
> - orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
> + orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set, op);
>
> /* No violation. Should proceed to vmcall. */
> do_ept_access_op(op);
> skip_exit_vmcall();
>
> - ept_untwiddle(data->gpa, level, orig_pte);
> + ept_untwiddle(data->gpa, level, orig_pte, op);
> }
>
> static void ept_allowed_at_level(int level, unsigned long clear,
> @@ -2613,7 +2659,7 @@ static void ept_misconfig_at_level_mkhuge_op(bool mkhuge, int level,
> struct ept_access_test_data *data = &ept_access_test_data;
> unsigned long orig_pte;
>
> - orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set);
> + orig_pte = ept_twiddle(data->gpa, mkhuge, level, clear, set, op);
>
> do_ept_access_op(op);
> assert_exit_reason(VMX_EPT_MISCONFIG);
> @@ -2637,7 +2683,7 @@ static void ept_misconfig_at_level_mkhuge_op(bool mkhuge, int level,
> #endif
>
> /* Fix the violation and resume the op loop. */
> - ept_untwiddle(data->gpa, level, orig_pte);
> + ept_untwiddle(data->gpa, level, orig_pte, op);
> enter_guest();
> skip_exit_vmcall();
> }
> @@ -2867,7 +2913,12 @@ static void ept_access_test_execute_only(void)
> ept_access_violation(EPT_EA, OP_WRITE,
> EPT_VLT_WR | EPT_VLT_PERM_EX);
> ept_access_allowed(EPT_EA, OP_EXEC);
> - ept_access_allowed(EPT_EA, OP_EXEC_USER);
> + if (is_mbec_supported())
> + ept_access_violation(EPT_EA, OP_EXEC_USER,
> + EPT_VLT_FETCH |
> + EPT_VLT_PERM_EX);
> + else
> + ept_access_allowed(EPT_EA, OP_EXEC_USER);
> } else {
> ept_access_misconfig(EPT_EA);
> }
> @@ -2881,7 +2932,11 @@ static void ept_access_test_read_execute(void)
> ept_access_violation(EPT_RA | EPT_EA, OP_WRITE,
> EPT_VLT_WR | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
> ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC);
> - ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
> + if (is_mbec_supported())
> + ept_access_violation(EPT_RA | EPT_EA, OP_EXEC_USER,
> + EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_EX);
> + else
> + ept_access_allowed(EPT_RA | EPT_EA, OP_EXEC_USER);
> }
>
> static void ept_access_test_write_execute(void)
> @@ -2898,7 +2953,11 @@ static void ept_access_test_read_write_execute(void)
> ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_READ);
> ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_WRITE);
> ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC);
> - ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
> + if (is_mbec_supported())
> + ept_access_violation(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER,
> + EPT_VLT_FETCH | EPT_VLT_PERM_RD | EPT_VLT_PERM_WR | EPT_VLT_PERM_EX);
Wrap line a bit ^
> + else
> + ept_access_allowed(EPT_RA | EPT_WA | EPT_EA, OP_EXEC_USER);
> }
>
> static void ept_access_test_reserved_bits(void)
> @@ -2955,7 +3014,8 @@ static void ept_access_test_ignored_bits(void)
> */
> ept_ignored_bit(8);
> ept_ignored_bit(9);
> - ept_ignored_bit(10);
> + if (!is_mbec_supported())
> + ept_ignored_bit(10);
> ept_ignored_bit(11);
> ept_ignored_bit(52);
> ept_ignored_bit(53);
> --
> 2.52.0
>
>
* Re: [PATCH kvm-unit-tests 6/9] x86/vmx: add mode-based execute control test for Skylake and above
2026-03-26 14:50 ` [PATCH kvm-unit-tests 6/9] x86/vmx: add mode-based execute control test for Skylake and above Paolo Bonzini
@ 2026-03-27 15:57 ` Jon Kohler
0 siblings, 0 replies; 17+ messages in thread
From: Jon Kohler @ 2026-03-27 15:57 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Nikunj A Dadhania, amit.shah@amd.com,
Sean Christopherson
> On Mar 26, 2026, at 10:50 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> Introduce a new test for mode-based execute control (MBEC) in the VMX
> controls, validating the dependency between MBEC and EPT VM-execution
> controls. The test ensures that VM entry fails when MBEC is enabled
> without EPT, and succeeds in valid combinations.
>
> Update the unit test configuration to include a specific test case for
> MBEC on Skylake-Server CPU model, as that was the first CPU series to
> have MBEC.
>
> Passing test result
> Test suite: vmx_controls_test_mbec
The test suite name was a leftover from my original patch series and
does not exist in this patch. As such, the standalone test fails like so:
TESTNAME=vmx_controls_test_mbec TIMEOUT=90s MACHINE= ACCEL= ./x86/run x86/vmx.flat -smp 1 -cpu Skylake-Server,+vmx,+vmx-mbec -append "vmx_controls_test_mbec"
FAIL vmx_controls_test_mbec (1 tests, 1 unexpected failures)
And when run directly, we get:
TESTNAME=vmx_controls_test_mbec TIMEOUT=90s MACHINE= ACCEL= ./x86/run x86/vmx.flat -smp 1 -cpu Skylake-Server,+vmx,+vmx-mbec -append "vmx_controls_test_mbec"
timeout -k 1s --foreground 90s /usr/libexec/qemu-kvm --no-reboot -nodefaults -global kvm-pit.lost_tick_policy=discard -device pc-testdev -device isa-debug-exit,iobase=0xf4,iosize=0x4 -display none -serial stdio -device pci-testdev -machine accel=kvm -kernel x86/vmx.flat -smp 1 -cpu Skylake-Server,+vmx,+vmx-mbec -append vmx_controls_test_mbec # -initrd /tmp/tmp.E4mbYzCHB3
qemu-kvm: warning: Machine type 'pc-i440fx-rhel7.6.0' is deprecated: machines from the previous RHEL major release are subject to deletion in the next RHEL major release
qemu-kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.mpx [bit 14]
enabling apic
smp: waiting for 0 APs
paging enabled
cr0 = 80010011
cr3 = 1007000
cr4 = 20
...
FAIL: command line didn't match any tests! <<<<
SUMMARY: 1 tests, 1 unexpected failures
That said, the current patch does actually work: the new checks run as
part of the larger controls test and pass with the same log entries I had
originally punched in below.
> PASS: MBEC disabled, EPT disabled (valid combination): vmlaunch succeeds
> PASS: MBEC enabled, EPT disabled (invalid combination): vmlaunch fails
> PASS: MBEC enabled, EPT disabled (invalid combination): VMX inst error is 7 (actual 7)
> PASS: MBEC enabled, EPT enabled (valid combination): vmlaunch succeeds
> PASS: MBEC disabled, EPT enabled (valid combination): vmlaunch succeeds
>
> Test ran with "-vmx-mbec":
> Test suite: vmx_controls_test_mbec
> SKIP: test_mode_based_execute_control : "Secondary execution" or
> "enable EPT" or "enable mode-based execute control" control not supported
>
> Co-authored-by: Jon Kohler <jon@nutanix.com>
> Signed-off-by: Jon Kohler <jon@nutanix.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> x86/unittests.cfg | 9 +++++++
> x86/vmx.h | 8 ++++++
> x86/vmx_tests.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 81 insertions(+)
>
> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
> index 522318d3..b82bbc4e 100644
> --- a/x86/unittests.cfg
> +++ b/x86/unittests.cfg
> @@ -324,6 +324,15 @@ qemu_params = -cpu max,+vmx
> arch = x86_64
> groups = vmx
>
> +# VMX controls is a generic test; however, mode-based execute control
> +# aka MBEC is only available on Skylake and above, be specific about
> +# the CPU model and test it directly.
> +[vmx_controls_test_mbec]
> +file = vmx.flat
> +extra_params = -cpu Skylake-Server,+vmx,+vmx-mbec -append "vmx_controls_test_mbec"
> +arch = x86_64
> +groups = vmx
> +
> [ept]
> file = vmx.flat
> test_args = "ept_access*"
> diff --git a/x86/vmx.h b/x86/vmx.h
> index 0e29a57d..b492ec74 100644
> --- a/x86/vmx.h
> +++ b/x86/vmx.h
> @@ -510,6 +510,7 @@ enum Ctrl1 {
> CPU_SHADOW_VMCS = 1ul << 14,
> CPU_RDSEED = 1ul << 16,
> CPU_PML = 1ul << 17,
> + CPU_MODE_BASED_EPT_EXEC = 1ul << 22,
> CPU_USE_TSC_SCALING = 1ul << 25,
> };
>
> @@ -843,6 +844,13 @@ static inline bool is_invvpid_type_supported(unsigned long type)
> return ept_vpid.val & (VPID_CAP_INVVPID_ADDR << (type - INVVPID_ADDR));
> }
>
> +static inline bool is_mbec_supported(void)
> +{
> + return (ctrl_cpu_rev[0].clr & CPU_SECONDARY) &&
> + (ctrl_cpu_rev[1].clr & CPU_EPT) &&
> + (ctrl_cpu_rev[1].clr & CPU_MODE_BASED_EPT_EXEC);
> +}
> +
> extern u64 *bsp_vmxon_region;
> extern bool launched;
>
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index 0e3dca3c..dbc456cb 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -4876,6 +4876,69 @@ skip_unrestricted_guest:
> vmcs_write(EPTP, eptp_saved);
> }
>
> +/*
> + * Test the dependency between mode-based execute control for EPT (MBEC) and
> + * enable EPT VM-execution controls.
> + *
> + * When MBEC (bit 22 of secondary processor-based VM-execution controls) is enabled,
> + * it allows separate execute permissions for supervisor-mode and user-mode linear
> + * addresses in EPT paging structures. However, per Intel SDM requirement:
> + *
> + * "If the 'mode-based execute control for EPT' VM-execution control is 1,
> + * the 'enable EPT' VM-execution control must also be 1."
> + *
> + * This test validates that VM entry fails when MBEC is enabled without EPT,
> + * and succeeds in all other valid combinations.
> + *
> + * [Intel SDM Vol. 3C, Section 26.6.2, Table 26-7]
> + */
> +static void test_mode_based_execute_control(void)
> +{
> + u32 primary_saved = vmcs_read(CPU_EXEC_CTRL0);
> + u32 secondary_saved = vmcs_read(CPU_EXEC_CTRL1);
> + u32 primary = primary_saved;
> + u32 secondary = secondary_saved;
> +
> + /* Skip test if required VM-execution controls are not supported */
> + if (!is_mbec_supported()) {
> + report_skip("MBEC not supported");
> + return;
> + }
> +
> + /* Test case 1: MBEC disabled, EPT disabled - should be valid */
> + primary |= CPU_SECONDARY;
> + vmcs_write(CPU_EXEC_CTRL0, primary);
> + secondary &= ~(CPU_MODE_BASED_EPT_EXEC | CPU_EPT);
> + vmcs_write(CPU_EXEC_CTRL1, secondary);
> + report_prefix_pushf("MBEC disabled, EPT disabled (valid combination)");
> + test_vmx_valid_controls();
> + report_prefix_pop();
> +
> + /* Test case 2: MBEC enabled, EPT disabled - should be invalid per SDM */
> + secondary |= CPU_MODE_BASED_EPT_EXEC;
> + vmcs_write(CPU_EXEC_CTRL1, secondary);
> + report_prefix_pushf("MBEC enabled, EPT disabled (invalid combination)");
> + test_vmx_invalid_controls();
> + report_prefix_pop();
> +
> + /* Test case 3: MBEC enabled, EPT enabled - should be valid */
> + secondary |= CPU_EPT;
> + setup_dummy_ept();
> + report_prefix_pushf("MBEC enabled, EPT enabled (valid combination)");
> + test_vmx_valid_controls();
> + report_prefix_pop();
> +
> + /* Test case 4: MBEC disabled, EPT enabled - should be valid */
> + secondary &= ~CPU_MODE_BASED_EPT_EXEC;
> + vmcs_write(CPU_EXEC_CTRL1, secondary);
> + report_prefix_pushf("MBEC disabled, EPT enabled (valid combination)");
> + test_vmx_valid_controls();
> + report_prefix_pop();
> +
> + vmcs_write(CPU_EXEC_CTRL0, primary_saved);
> + vmcs_write(CPU_EXEC_CTRL1, secondary_saved);
> +}
> +
> /*
> * If the 'enable PML' VM-execution control is 1, the 'enable EPT'
> * VM-execution control must also be 1. In addition, the PML address
> @@ -5336,6 +5399,7 @@ static void test_vm_execution_ctls(void)
> test_pml();
> test_vpid();
> test_ept_eptp();
> + test_mode_based_execute_control();
> test_vmx_preemption_timer();
> }
If we were interested in keeping this as a separate test, what I had before was:
/*
* Check that Intel MBEC controls function properly, which is a
* Skylake and above feature, and is not supported on older CPUs.
*/
static void vmx_controls_test_mbec(void)
{
vmcs_write(GUEST_RFLAGS, 0);
test_mode_based_execute_control();
}
...
struct vmx_test vmx_tests[] = {
...
TEST(vmx_controls_test_mbec),
...
}
* Re: [PATCH kvm-unit-tests 3/9] svm: add basic GMET tests
2026-03-26 14:50 ` [PATCH kvm-unit-tests 3/9] svm: add basic GMET tests Paolo Bonzini
@ 2026-03-27 16:03 ` Jon Kohler
0 siblings, 0 replies; 17+ messages in thread
From: Jon Kohler @ 2026-03-27 16:03 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Nikunj A Dadhania, amit.shah@amd.com,
Sean Christopherson
> On Mar 26, 2026, at 10:50 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> These cover three basic scenarios of running successfully,
> failing due to NX=1 and failing due to U/S=1.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> Makefile | 2 +-
> lib/x86/processor.h | 1 +
> x86/svm.c | 17 ++++++++++
> x86/svm.h | 1 +
> x86/svm_npt.c | 83 +++++++++++++++++++++++++++++++++++++++++++--
> 5 files changed, 101 insertions(+), 3 deletions(-)
>
> diff --git a/Makefile b/Makefile
> index 0ce0813b..403fd495 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -93,7 +93,7 @@ COMMON_CFLAGS += $(wunused_but_set_parameter)
> CFLAGS += $(COMMON_CFLAGS)
> CFLAGS += $(wmissing_parameter_type)
> CFLAGS += $(wold_style_declaration)
> -CFLAGS += -Woverride-init -Wmissing-prototypes -Wstrict-prototypes
> +CFLAGS += -Wmissing-prototypes -Wstrict-prototypes
>
> # Evaluate and add late cflags last -- they may depend on previous flags
> LATE_CFLAGS := $(LATE_CFLAGS)
> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
> index 42dd2d2a..32ce08e2 100644
> --- a/lib/x86/processor.h
> +++ b/lib/x86/processor.h
> @@ -377,6 +377,7 @@ struct x86_cpu_feature {
> #define X86_FEATURE_PAUSEFILTER X86_CPU_FEATURE(0x8000000A, 0, EDX, 10)
> #define X86_FEATURE_PFTHRESHOLD X86_CPU_FEATURE(0x8000000A, 0, EDX, 12)
> #define X86_FEATURE_VGIF X86_CPU_FEATURE(0x8000000A, 0, EDX, 16)
> +#define X86_FEATURE_GMET X86_CPU_FEATURE(0x8000000A, 0, EDX, 17)
> #define X86_FEATURE_VNMI X86_CPU_FEATURE(0x8000000A, 0, EDX, 25)
> #define X86_FEATURE_SME X86_CPU_FEATURE(0x8000001F, 0, EAX, 0)
> #define X86_FEATURE_SEV X86_CPU_FEATURE(0x8000001F, 0, EAX, 1)
> diff --git a/x86/svm.c b/x86/svm.c
> index 58cbf0a5..a85da905 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -43,6 +43,23 @@ u64 *npt_get_pml4e(void)
> return pml4e;
> }
>
> +void npt_prepare_gmet_pte(bool user)
> +{
> + extern u8 start;
> + u64 address = (u64)&start & ~(1 << 21);
> + u64 mask = user ? PT_USER_MASK : 0;
> + u64 *pte;
> + int i;
> +
> +
> + /* flip the U bit on the 2 MiB region where the code is loaded.
> + * the U bit is only used for execution, therefore page table accesses ignore it
> + */
> + pte = npt_get_pte(address);
> + for (i = 0; i < 512; i++)
> + pte[i] = (pte[i] & ~PT_USER_MASK) | mask;
> +}
> +
> bool smp_supported(void)
> {
> return cpu_count() > 1;
> diff --git a/x86/svm.h b/x86/svm.h
> index 947206bb..c5695b37 100644
> --- a/x86/svm.h
> +++ b/x86/svm.h
> @@ -418,6 +418,7 @@ u64 *npt_get_pte(u64 address);
> u64 *npt_get_pde(u64 address);
> u64 *npt_get_pdpe(u64 address);
> u64 *npt_get_pml4e(void);
> +void npt_prepare_gmet_pte(bool user);
> bool smp_supported(void);
> bool default_supported(void);
> bool fep_supported(void);
> diff --git a/x86/svm_npt.c b/x86/svm_npt.c
> index bd5e8f35..75d9c2c9 100644
> --- a/x86/svm_npt.c
> +++ b/x86/svm_npt.c
> @@ -87,6 +87,79 @@ static bool npt_us_check(struct svm_test *test)
> && (vmcb->control.exit_info_1 == 0x100000005ULL);
> }
>
> +static bool npt_gmet_supported(void)
> +{
> + return npt_supported() && this_cpu_has(X86_FEATURE_GMET);
> +}
> +
> +static void npt_gmet_null_prepare(struct svm_test *test)
> +{
> + /* set U=0 - no failure */
> + npt_prepare_gmet_pte(false);
> + vmcb->control.nested_ctl |= SVM_NESTED_GMET;
> +}
> +
> +static bool npt_gmet_null_check(struct svm_test *test)
> +{
> + /* reset U=1 */
> + npt_prepare_gmet_pte(true);
> + vmcb->control.nested_ctl &= ~SVM_NESTED_GMET;
> + return vmcb->control.exit_code == SVM_EXIT_VMMCALL;
> +}
Nit: space vs tabs above ^^
> +
> +static void npt_gmet_nx_prepare(struct svm_test *test)
> +{
> + u64 *pte = npt_get_pte((u64) null_test);
> +
> + /* set U=0 - failure will be from NX */
> + npt_prepare_gmet_pte(false);
> + *pte |= PT64_NX_MASK;
> + vmcb->control.nested_ctl |= SVM_NESTED_GMET;
> +
> + test->scratch = rdmsr(MSR_EFER);
> + wrmsr(MSR_EFER, test->scratch | EFER_NX);
> +}
> +
> +static bool npt_gmet_nx_check(struct svm_test *test)
> +{
> + u64 *pte = npt_get_pte((u64) null_test);
> +
> + /* reset U=1, NX=0 */
> + npt_prepare_gmet_pte(true);
> + *pte &= ~PT64_NX_MASK;
> + vmcb->control.nested_ctl &= ~SVM_NESTED_GMET;
> +
> + wrmsr(MSR_EFER, test->scratch);
> +
> + /* errata 1218 - the U bit in the page fault error code may be incorrect */
> + return (vmcb->control.exit_code == SVM_EXIT_NPF)
> + && ((vmcb->control.exit_info_1 & ~PFERR_USER_MASK) == 0x100000011ULL);
> +}
> +
> +static void npt_gmet_us_prepare(struct svm_test *test)
> +{
> + u64 *pte = npt_get_pte((u64) null_test);
> +
> + npt_prepare_gmet_pte(false);
> + *pte |= PT_USER_MASK;
> + vmcb->control.nested_ctl |= SVM_NESTED_GMET;
> +
> + test->scratch = rdmsr(MSR_EFER);
> + wrmsr(MSR_EFER, test->scratch | EFER_NX);
> +}
> +
> +static bool npt_gmet_us_check(struct svm_test *test)
> +{
> + npt_prepare_gmet_pte(true);
> + vmcb->control.nested_ctl &= ~SVM_NESTED_GMET;
> +
> + wrmsr(MSR_EFER, test->scratch);
> +
> + /* errata 1218 - the U bit in the page fault error code may be incorrect */
> + return (vmcb->control.exit_code == SVM_EXIT_NPF)
> + && ((vmcb->control.exit_info_1 & ~PFERR_USER_MASK) == 0x100000011ULL);
> +}
> +
> static void npt_rw_prepare(struct svm_test *test)
> {
>
> @@ -380,9 +453,9 @@ skip_pte_test:
> vmcb->save.cr4 = sg_cr4;
> }
>
> -#define NPT_V1_TEST(name, prepare, guest_code, check) \
> +#define NPT_V1_TEST(name, prepare, guest_code, check, more...) \
> { #name, npt_supported, prepare, default_prepare_gif_clear, guest_code, \
> - default_finished, check }
> + default_finished, check, more }
>
> #define NPT_V2_TEST(name) { #name, .v2 = name }
>
> @@ -390,6 +463,12 @@ static struct svm_test npt_tests[] = {
> NPT_V1_TEST(npt_nx, npt_nx_prepare, null_test, npt_nx_check),
> NPT_V1_TEST(npt_np, npt_np_prepare, npt_np_test, npt_np_check),
> NPT_V1_TEST(npt_us, npt_us_prepare, npt_us_test, npt_us_check),
> + NPT_V1_TEST(npt_gmet_null, npt_gmet_null_prepare, null_test, npt_gmet_null_check,
> + .supported = npt_gmet_supported),
> + NPT_V1_TEST(npt_gmet_nx, npt_gmet_nx_prepare, null_test, npt_gmet_nx_check,
> + .supported = npt_gmet_supported),
> + NPT_V1_TEST(npt_gmet_us, npt_gmet_us_prepare, null_test, npt_gmet_us_check,
> + .supported = npt_gmet_supported),
> NPT_V1_TEST(npt_rw, npt_rw_prepare, npt_rw_test, npt_rw_check),
> NPT_V1_TEST(npt_rw_pfwalk, npt_rw_pfwalk_prepare, null_test, npt_rw_pfwalk_check),
> NPT_V1_TEST(npt_l1mmio, npt_l1mmio_prepare, npt_l1mmio_test, npt_l1mmio_check),
> --
> 2.52.0
>
>
* Re: [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests
2026-03-26 14:50 [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini
` (8 preceding siblings ...)
2026-03-26 14:50 ` [PATCH kvm-unit-tests 9/9] x86/vmx: add EPT tests covering XU permission Paolo Bonzini
@ 2026-05-12 11:06 ` Paolo Bonzini
9 siblings, 0 replies; 17+ messages in thread
From: Paolo Bonzini @ 2026-05-12 11:06 UTC (permalink / raw)
To: kvm; +Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson
On 3/26/26 15:50, Paolo Bonzini wrote:
> This adds new tests for both GMET and MBEC.
>
> The code for MBEC is roughly based on the previous submission at
> https://lore.kernel.org/kvm/20251223054850.1611618-1-jon@nutanix.com/,
> though with pretty heavy reorganization and fixing work on top.
>
> The existing EPT tests now test execution on both supervisor and
> user-mode pages, with different expected outcomes depending on
> whether MBEC is enabled or not; on top of this, the last patch
> adds tests for XU=1 and XS=XU=1.
>
> For simplicity, the tests always enable MBEC when available.
> A new block in unittests.cfg ensures that both non-MBEC and
> MBEC is covered.
Pushed to kvm-unit-tests.git.
Paolo
>
> Jon Kohler (1):
> x86/vmx: update EPT installation to use EPT_PRESENT flag
>
> Paolo Bonzini (8):
> move PFERR_* constants to lib
> add definitions for nested_ctl
> svm: add basic GMET tests
> x86/vmx: diagnose unexpected EPT violations
> x86/vmx: add mode-based execute control test for Skylake and above
> x86/vmx: add user execution operation to EPT access tests
> x86/vmx: run EPT tests with MBEC enabled when available
> x86/vmx: add EPT tests covering XU permission
>
> Makefile | 2 +-
> lib/util.h | 10 +-
> lib/x86/asm/page.h | 7 +
> lib/x86/processor.h | 1 +
> x86/access.c | 7 -
> x86/svm.c | 19 +-
> x86/svm.h | 4 +
> x86/svm_npt.c | 83 ++++++++-
> x86/unittests.cfg | 21 ++-
> x86/vmx.c | 3 +-
> x86/vmx.h | 32 ++--
> x86/vmx_tests.c | 414 +++++++++++++++++++++++++++++++++++++-------
> 12 files changed, 513 insertions(+), 90 deletions(-)
>
Thread overview: 17+ messages
2026-03-26 14:50 [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini
2026-03-26 14:50 ` [PATCH kvm-unit-tests 1/9] move PFERR_* constants to lib Paolo Bonzini
2026-03-26 14:50 ` [PATCH kvm-unit-tests 2/9] add definitions for nested_ctl Paolo Bonzini
2026-03-26 14:50 ` [PATCH kvm-unit-tests 3/9] svm: add basic GMET tests Paolo Bonzini
2026-03-27 16:03 ` Jon Kohler
2026-03-26 14:50 ` [PATCH kvm-unit-tests 4/9] x86/vmx: update EPT installation to use EPT_PRESENT flag Paolo Bonzini
2026-03-26 14:50 ` [PATCH kvm-unit-tests 5/9] x86/vmx: diagnose unexpected EPT violations Paolo Bonzini
2026-03-26 14:50 ` [PATCH kvm-unit-tests 6/9] x86/vmx: add mode-based execute control test for Skylake and above Paolo Bonzini
2026-03-27 15:57 ` Jon Kohler
2026-03-26 14:50 ` [PATCH kvm-unit-tests 7/9] x86/vmx: add user execution operation to EPT access tests Paolo Bonzini
2026-03-26 14:50 ` [PATCH kvm-unit-tests 8/9] x86/vmx: run EPT tests with MBEC enabled when available Paolo Bonzini
2026-03-26 16:13 ` Paolo Bonzini
2026-03-27 15:57 ` Jon Kohler
2026-03-27 15:57 ` Jon Kohler
2026-03-26 14:50 ` [PATCH kvm-unit-tests 9/9] x86/vmx: add EPT tests covering XU permission Paolo Bonzini
2026-03-27 15:56 ` Jon Kohler
2026-05-12 11:06 ` [PATCH kvm-unit-tests 0/9] Combined GMET and MBEC tests Paolo Bonzini