* [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support
@ 2026-03-26 18:16 Paolo Bonzini
2026-03-26 18:16 ` [PATCH 01/24] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK Paolo Bonzini
` (25 more replies)
0 siblings, 26 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:16 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
This series introduces support for two related features that Hyper-V uses
in its implementation of Virtual Secure Mode; these are Intel Mode-Based
Execute Control and AMD Guest Mode Execution Trap.
It's still RFC because it can definitely use more testing and review,
but I'm pretty confident in the overall shape and design.
Both MBEC and GMET allow more granular control over execute permissions,
with different levels of separation between supervisor and user mode.
MBEC provides separate supervisor and user-mode execute bits in the
EPT PTEs; GMET instead lacks supervisor-only execution (with NX=0,
"both" is represented by U=0 and user-mode only by U=1). GMET was
clearly inspired by SMEP, though with some differences and annoyances.
The series was developed starting from Jon Kohler's earlier version at
https://lore.kernel.org/kvm/20251223054806.1611168-1-jon@nutanix.com/.
The difference is that I am starting this implementation from two
changes to core MMU code, even before looking at nested MBEC/GMET;
these are seemingly unnecessary for that goal but they make the actual
feature almost trivial to implement:
- first, I'm cleaning up the implementation of nVMX exec-only, by
properly adding read permissions to the ACC_* constants and to the
permission bitmask machinery. Jon also had to add a fourth ACC_*
bit, but used it only in the special case of nested MBEC; here
instead ACC_READ_MASK is the norm, which simplifies testing
a lot and removes gratuitous complexity.
- second, I'm enforcing that KVM runs with MBEC/GMET enabled even in
non-nested mode, if it wants to provide the feature to nested
hypervisors. Initially I thought this would mostly simplify the
testing; but it actually has a big effect on the code as well, because
the creation of SPTEs now looks *exactly the same* for L1 and L2 guests;
the difference lies only in the input access permissions.
Later patches have to use slightly different meanings for ACC_* in Intel
and AMD, but the differences are driven by whether the underlying SPTEs
have U/NX or XS/XU bits, and propagate from there. In other words,
unlike the older ACC_USER_MASK hack these differences are backed by
concrete concepts of the page table format, and there is always a 1:1
mapping from ACC_* bits to PT_*_MASK or shadow_*_mask:
                     Intel               AMD
-------------------- ------------------- -------------------
ACC_READ_MASK        PT_PRESENT_MASK     PT_PRESENT_MASK
ACC_WRITE_MASK       PT_WRITABLE_MASK    PT_WRITABLE_MASK
ACC_EXEC_MASK        shadow_xs_mask      shadow_nx_mask
ACC_USER_MASK        ---                 shadow_user_mask
ACC_USER_EXEC_MASK   shadow_xu_mask      ---
As can be seen above, the Intel side needs a little work to split
shadow_x_mask and ACC_EXEC_MASK in two; now that there is an actual
ACC_READ_MASK to be used for exec-only pages, ACC_USER_MASK is unused
and can be reused as ACC_USER_EXEC_MASK. ACC_EXEC_MASK is used for
kernel-mode execution and is tied to shadow_xs_mask (when MBEC is disabled
shadow_xs_mask == shadow_xu_mask, and ACC_USER_EXEC_MASK is computed but
ineffective). update_permission_bitmask() precomputes all the necessary
conditions. Note that with MBEC the user/supervisor distinction
depends on the U bit of the page tables rather than the CPL; processors
provide this information to the hypervisor through the "advanced EPT
violation vmexit info" feature, which is a requirement for KVM to use
MBEC, and kvm-intel.ko passes it to the MMU in PFERR_USER_MASK.
On the AMD side, the U bit maps to ACC_USER_MASK but nNPT adjusts the
permission bitmask to ignore it for reads and writes when GMET is active.
GMET requires smaller changes than MBEC, but some work is still needed
to use it for L1 guests, because the NPT page tables have
to be created with U=0. This means that the root page has role.access !=
ACC_ALL and its permissions have to be propagated down.
In both cases, the complexity added to the core is limited compared
to the benefit of pretty seamless nested support.
The former "smep_andnot_wp" bit of cpu_role.base, now named "cr4_smep",
is repurposed for nested TDP to indicate that MBEC/GMET is on. The minor
pessimization for shadow page tables (toggling CR4.SMEP now always forces
building a separate version of the shadow page tables, even though that's
technically unnecessary if CR4.WP=1) is not really worth fretting about;
in practice, guests are not going to flip CR4.SMEP in a way that would
prevent efficient reuse of shadow page tables.
Patches 1-9 are general cleanups, mostly for MMU code.
Patches 10-17 are for Intel MBEC, with the first three covering
non-nested use.
Patches 18-24 are for AMD GMET, with 18/19/20/22 covering non-nested
use and the others covering nested virtualization.
(Patch 25 is a nice little hack that can be useful for testing).
Paolo
v1->v2:
- fix EXPORT_SYMBOL_FOR_KVM_INTERNAL goof
- drop bit 10 from FROZEN_SPTE, add static_assert to catch it [Kai]
- fix exit qualification for page table EPT violations [kvm-unit-tests]
- add XU to shadow_acc_track_mask
- propagate root_role->access also in shadow MMU direct_map()
- add requested access to kvm_mmu_spte_requested tracepoint
- include support for passing advanced EPT violation vmexit info to guest
- advanced EPT violation vmexit info is now required for nested MBEC
- fix nested MBEC to gate XS vs XU based on the U bit of paging structures
- drop SECONDARY_EXEC_MODE_BASED_EPT_EXEC if L1 does not set it
- add commit message to "KVM: nVMX: advertise MBEC to nested guests" [Jon]
- fix checkpatch.pl issues [Jon]
- drop gmet from /proc/cpuinfo [Borislav]
- fix running L1 without GMET
Jon Kohler (5):
KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK
KVM: x86/mmu: remove SPTE_PERM_MASK
KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC
KVM: nVMX: advertise MBEC to nested guests
KVM: nVMX: allow MBEC with EVMCS
Paolo Bonzini (20):
KVM: x86/mmu: shuffle high bits of SPTEs in preparation for MBEC
KVM: x86/mmu: remove SPTE_EPT_*
KVM: x86/mmu: merge make_spte_{non,}executable
KVM: x86/mmu: rename and clarify BYTE_MASK
KVM: x86/mmu: introduce ACC_READ_MASK
KVM: x86/mmu: separate more EPT/non-EPT permission_fault()
KVM: x86/mmu: split XS/XU bits for EPT
KVM: x86/mmu: move cr4_smep to base role
KVM: VMX: enable use of MBEC
KVM: nVMX: pass advanced EPT violation vmexit info to guest
KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations
KVM: x86/mmu: add support for MBEC to EPT page table walks
KVM: x86/mmu: propagate access mask from root pages down
KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D
KVM: SVM: add GMET bit definitions
KVM: x86/mmu: add support for GMET to NPT page table walks
KVM: SVM: enable GMET and set it in MMU role
KVM: SVM: work around errata 1218
KVM: nSVM: enable GMET for guests
stats hack
Documentation/virt/kvm/x86/mmu.rst | 10 +-
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 45 +++++---
arch/x86/include/asm/svm.h | 1 +
arch/x86/include/asm/vmx.h | 14 ++-
arch/x86/kvm/mmu.h | 12 ++-
arch/x86/kvm/mmu/mmu.c | 162 ++++++++++++++++++++---------
arch/x86/kvm/mmu/mmutrace.h | 19 ++--
arch/x86/kvm/mmu/paging_tmpl.h | 66 ++++++++----
arch/x86/kvm/mmu/spte.c | 77 ++++++++------
arch/x86/kvm/mmu/spte.h | 65 +++++++-----
arch/x86/kvm/mmu/tdp_mmu.c | 6 +-
arch/x86/kvm/svm/nested.c | 16 ++-
arch/x86/kvm/svm/svm.c | 32 +++++-
arch/x86/kvm/svm/svm.h | 1 +
arch/x86/kvm/vmx/capabilities.h | 11 +-
arch/x86/kvm/vmx/common.h | 20 ++--
arch/x86/kvm/vmx/hyperv_evmcs.h | 1 +
arch/x86/kvm/vmx/main.c | 9 ++
arch/x86/kvm/vmx/nested.c | 25 ++++-
arch/x86/kvm/vmx/tdx.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 29 +++++-
arch/x86/kvm/vmx/vmx.h | 1 +
arch/x86/kvm/vmx/x86_ops.h | 1 +
arch/x86/kvm/x86.c | 1 +
26 files changed, 437 insertions(+), 191 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH 01/24] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
@ 2026-03-26 18:16 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 02/24] KVM: x86/mmu: remove SPTE_PERM_MASK Paolo Bonzini
` (24 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:16 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
EPT exit qualification bit 6 is used when mode-based execute control
is enabled, and reflects user-executable addresses. Rename it to
reflect the intent and add it to EPT_VIOLATION_PROT_MASK, which allows
simplifying the return evaluation in
tdx_is_sept_violation_unexpected_pending() a pinch.
Rework handling in __vmx_handle_ept_violation to unconditionally clear
EPT_VIOLATION_PROT_USER_EXEC until MBEC is implemented, as suggested by
Sean [1].
Note: Intel SDM Table 29-7 defines bit 6 as:
If the "mode-based execute control" VM-execution control is 0, the
value of this bit is undefined. If that control is 1, this bit is the
logical-AND of bit 10 in the EPT paging-structure entries used to
translate the guest-physical address of the access causing the EPT
violation. In this case, it indicates whether the guest-physical
address was executable for user-mode linear addresses.
[1] https://lore.kernel.org/all/aCJDzU1p_SFNRIJd@google.com/
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-2-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 5 +++--
arch/x86/kvm/vmx/common.h | 9 +++++++--
arch/x86/kvm/vmx/tdx.c | 2 +-
3 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index b92ff87e3560..7fdc6b787d70 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -597,10 +597,11 @@ enum vm_entry_failure_code {
#define EPT_VIOLATION_PROT_READ BIT(3)
#define EPT_VIOLATION_PROT_WRITE BIT(4)
#define EPT_VIOLATION_PROT_EXEC BIT(5)
-#define EPT_VIOLATION_EXEC_FOR_RING3_LIN BIT(6)
+#define EPT_VIOLATION_PROT_USER_EXEC BIT(6)
#define EPT_VIOLATION_PROT_MASK (EPT_VIOLATION_PROT_READ | \
EPT_VIOLATION_PROT_WRITE | \
- EPT_VIOLATION_PROT_EXEC)
+ EPT_VIOLATION_PROT_EXEC | \
+ EPT_VIOLATION_PROT_USER_EXEC)
#define EPT_VIOLATION_GVA_IS_VALID BIT(7)
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 412d0829d7a2..adf925500b9e 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -94,8 +94,13 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
/* Is it a fetch fault? */
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
- /* ept page table entry is present? */
- error_code |= (exit_qualification & EPT_VIOLATION_PROT_MASK)
+ /*
+ * ept page table entry is present?
+ * note: unconditionally clear USER_EXEC until mode-based
+ * execute control is implemented
+ */
+ error_code |= (exit_qualification &
+ (EPT_VIOLATION_PROT_MASK & ~EPT_VIOLATION_PROT_USER_EXEC))
? PFERR_PRESENT_MASK : 0;
if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c5065f84b78b..fa740f70ee75 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1855,7 +1855,7 @@ static inline bool tdx_is_sept_violation_unexpected_pending(struct kvm_vcpu *vcp
if (eeq_type != TDX_EXT_EXIT_QUAL_TYPE_PENDING_EPT_VIOLATION)
return false;
- return !(eq & EPT_VIOLATION_PROT_MASK) && !(eq & EPT_VIOLATION_EXEC_FOR_RING3_LIN);
+ return !(eq & EPT_VIOLATION_PROT_MASK);
}
static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
--
2.53.0
* [PATCH 02/24] KVM: x86/mmu: remove SPTE_PERM_MASK
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
2026-03-26 18:16 ` [PATCH 01/24] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 03/24] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC Paolo Bonzini
` (23 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
SPTE_PERM_MASK is no longer referenced by anything in the kernel.
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-3-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 91ce29fd6f1b..28086fa86fe0 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -42,9 +42,6 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#endif
-#define SPTE_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
- | shadow_x_mask | shadow_nx_mask | shadow_me_mask)
-
#define ACC_EXEC_MASK 1
#define ACC_WRITE_MASK PT_WRITABLE_MASK
#define ACC_USER_MASK PT_USER_MASK
--
2.53.0
* [PATCH 03/24] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
2026-03-26 18:16 ` [PATCH 01/24] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK Paolo Bonzini
2026-03-26 18:17 ` [PATCH 02/24] KVM: x86/mmu: remove SPTE_PERM_MASK Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 04/24] KVM: x86/mmu: shuffle high bits of SPTEs " Paolo Bonzini
` (22 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti, Kai Huang
From: Jon Kohler <jon@nutanix.com>
Update SPTE_MMIO_ALLOWED_MASK to allow EPT user executable (bit 10) to
be treated like the EPT RWX bits 2:0, as when mode-based execute control
is enabled, bit 10 can act like a "present" bit. Likewise do not include
it in FROZEN_SPTE.
No functional changes intended, other than the reduction of the maximum
MMIO generation that is stored in page tables.
Cc: Kai Huang <kai.huang@intel.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-4-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 2 ++
arch/x86/kvm/mmu/spte.h | 20 +++++++++++---------
2 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 7fdc6b787d70..59e3b095a315 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -549,10 +549,12 @@ enum vmcs_field {
#define VMX_EPT_ACCESS_BIT (1ull << 8)
#define VMX_EPT_DIRTY_BIT (1ull << 9)
#define VMX_EPT_SUPPRESS_VE_BIT (1ull << 63)
+
#define VMX_EPT_RWX_MASK (VMX_EPT_READABLE_MASK | \
VMX_EPT_WRITABLE_MASK | \
VMX_EPT_EXECUTABLE_MASK)
#define VMX_EPT_MT_MASK (7ull << VMX_EPT_MT_EPTE_SHIFT)
+#define VMX_EPT_USER_EXECUTABLE_MASK (1ull << 10)
static inline u8 vmx_eptp_page_walk_level(u64 eptp)
{
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 28086fa86fe0..4283cea3e66c 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -96,11 +96,11 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
#undef SHADOW_ACC_TRACK_SAVED_MASK
/*
- * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
+ * Due to limited space in PTEs, the MMIO generation is an 18 bit subset of
* the memslots generation and is derived as follows:
*
- * Bits 0-7 of the MMIO generation are propagated to spte bits 3-10
- * Bits 8-18 of the MMIO generation are propagated to spte bits 52-62
+ * Bits 0-6 of the MMIO generation are propagated to spte bits 3-9
+ * Bits 7-17 of the MMIO generation are propagated to spte bits 52-62
*
* The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in
* the MMIO generation number, as doing so would require stealing a bit from
@@ -111,7 +111,7 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
*/
#define MMIO_SPTE_GEN_LOW_START 3
-#define MMIO_SPTE_GEN_LOW_END 10
+#define MMIO_SPTE_GEN_LOW_END 9
#define MMIO_SPTE_GEN_HIGH_START 52
#define MMIO_SPTE_GEN_HIGH_END 62
@@ -133,7 +133,8 @@ static_assert(!(SPTE_MMU_PRESENT_MASK &
* and so they're off-limits for generation; additional checks ensure the mask
* doesn't overlap legal PA bits), and bit 63 (carved out for future usage).
*/
-#define SPTE_MMIO_ALLOWED_MASK (BIT_ULL(63) | GENMASK_ULL(51, 12) | GENMASK_ULL(2, 0))
+#define SPTE_MMIO_ALLOWED_MASK (BIT_ULL(63) | GENMASK_ULL(51, 12) | \
+ BIT_ULL(10) | GENMASK_ULL(2, 0))
static_assert(!(SPTE_MMIO_ALLOWED_MASK &
(SPTE_MMU_PRESENT_MASK | MMIO_SPTE_GEN_LOW_MASK | MMIO_SPTE_GEN_HIGH_MASK)));
@@ -141,7 +142,7 @@ static_assert(!(SPTE_MMIO_ALLOWED_MASK &
#define MMIO_SPTE_GEN_HIGH_BITS (MMIO_SPTE_GEN_HIGH_END - MMIO_SPTE_GEN_HIGH_START + 1)
/* remember to adjust the comment above as well if you change these */
-static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
+static_assert(MMIO_SPTE_GEN_LOW_BITS == 7 && MMIO_SPTE_GEN_HIGH_BITS == 11);
#define MMIO_SPTE_GEN_LOW_SHIFT (MMIO_SPTE_GEN_LOW_START - 0)
#define MMIO_SPTE_GEN_HIGH_SHIFT (MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS)
@@ -217,10 +218,11 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
*
* Only used by the TDP MMU.
*/
-#define FROZEN_SPTE (SHADOW_NONPRESENT_VALUE | 0x5a0ULL)
+#define FROZEN_SPTE (SHADOW_NONPRESENT_VALUE | 0x1a0ULL)
-/* Frozen SPTEs must not be misconstrued as shadow present PTEs. */
-static_assert(!(FROZEN_SPTE & SPTE_MMU_PRESENT_MASK));
+/* Frozen SPTEs must not be misconstrued as shadow or MMU present PTEs. */
+static_assert(!(FROZEN_SPTE & (SPTE_MMU_PRESENT_MASK |
+ VMX_EPT_RWX_MASK | VMX_EPT_USER_EXECUTABLE_MASK)));
static inline bool is_frozen_spte(u64 spte)
{
--
2.53.0

* [PATCH 04/24] KVM: x86/mmu: shuffle high bits of SPTEs in preparation for MBEC
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (2 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 03/24] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 05/24] KVM: x86/mmu: remove SPTE_EPT_* Paolo Bonzini
` (21 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Access tracking will need to save bit 10 when MBEC is enabled.
Right now it is simply shifting the R and X bits into bits 54 and 56,
but bit 10 would not fit with the same scheme. Reorganize the
high bits so that access tracking will use bits 52, 54 and 62.
As a side effect, the free bits are compacted slightly, with
56-59 still unused.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.h | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 4283cea3e66c..317b9cd1537c 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -17,10 +17,20 @@
*/
#define SPTE_MMU_PRESENT_MASK BIT_ULL(11)
+/*
+ * The ignored high bits are allocated as follows:
+ * - bits 52, 54: saved X-R bits for access tracking when EPT does not have A/D
+ * - bits 53 (EPT only): host writable
+ * - bits 55 (EPT only): MMU-writable
+ * - bits 56-59: unused
+ * - bits 60-61: type of A/D tracking
+ * - bits 62: unused
+ */
+
/*
* TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also
* be restricted to using write-protection (for L2 when CPU dirty logging, i.e.
- * PML, is enabled). Use bits 52 and 53 to hold the type of A/D tracking that
+ * PML, is enabled). Use bits 60 and 61 to hold the type of A/D tracking that
* is must be employed for a given TDP SPTE.
*
* Note, the "enabled" mask must be '0', as bits 62:52 are _reserved_ for PAE
@@ -29,7 +39,7 @@
* TDP with CPU dirty logging (PML). If NPT ever gains PML-like support, it
* must be restricted to 64-bit KVM.
*/
-#define SPTE_TDP_AD_SHIFT 52
+#define SPTE_TDP_AD_SHIFT 60
#define SPTE_TDP_AD_MASK (3ULL << SPTE_TDP_AD_SHIFT)
#define SPTE_TDP_AD_ENABLED (0ULL << SPTE_TDP_AD_SHIFT)
#define SPTE_TDP_AD_DISABLED (1ULL << SPTE_TDP_AD_SHIFT)
@@ -65,7 +75,7 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
*/
#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (SPTE_EPT_READABLE_MASK | \
SPTE_EPT_EXECUTABLE_MASK)
-#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 54
+#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 52
#define SHADOW_ACC_TRACK_SAVED_MASK (SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
static_assert(!(SPTE_TDP_AD_MASK & SHADOW_ACC_TRACK_SAVED_MASK));
@@ -84,8 +94,8 @@ static_assert(!(SPTE_TDP_AD_MASK & SHADOW_ACC_TRACK_SAVED_MASK));
* to not overlap the A/D type mask or the saved access bits of access-tracked
* SPTEs when A/D bits are disabled.
*/
-#define EPT_SPTE_HOST_WRITABLE BIT_ULL(57)
-#define EPT_SPTE_MMU_WRITABLE BIT_ULL(58)
+#define EPT_SPTE_HOST_WRITABLE BIT_ULL(53)
+#define EPT_SPTE_MMU_WRITABLE BIT_ULL(55)
static_assert(!(EPT_SPTE_HOST_WRITABLE & SPTE_TDP_AD_MASK));
static_assert(!(EPT_SPTE_MMU_WRITABLE & SPTE_TDP_AD_MASK));
--
2.53.0
* [PATCH 05/24] KVM: x86/mmu: remove SPTE_EPT_*
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (3 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 04/24] KVM: x86/mmu: shuffle high bits of SPTEs " Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 06/24] KVM: x86/mmu: merge make_spte_{non,}executable Paolo Bonzini
` (20 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
spte.h already includes vmx.h; use the constants it defines.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.h | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 317b9cd1537c..bc02a2e89a31 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -57,10 +57,6 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define ACC_USER_MASK PT_USER_MASK
#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
-/* The mask for the R/X bits in EPT PTEs */
-#define SPTE_EPT_READABLE_MASK 0x1ull
-#define SPTE_EPT_EXECUTABLE_MASK 0x4ull
-
#define SPTE_LEVEL_BITS 9
#define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
#define SPTE_INDEX(address, level) __PT_INDEX(address, level, SPTE_LEVEL_BITS)
@@ -73,8 +69,8 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
* restored only when a write is attempted to the page. This mask obviously
* must not overlap the A/D type mask.
*/
-#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (SPTE_EPT_READABLE_MASK | \
- SPTE_EPT_EXECUTABLE_MASK)
+#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (VMX_EPT_READABLE_MASK | \
+ VMX_EPT_EXECUTABLE_MASK)
#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 52
#define SHADOW_ACC_TRACK_SAVED_MASK (SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
--
2.53.0
* [PATCH 06/24] KVM: x86/mmu: merge make_spte_{non,}executable
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (4 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 05/24] KVM: x86/mmu: remove SPTE_EPT_* Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 07/24] KVM: x86/mmu: rename and clarify BYTE_MASK Paolo Bonzini
` (19 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
As the logic will become more complicated with the introduction
of MBEC, at least write it only once.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 85a0473809b0..e9dc0ae44274 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -317,14 +317,16 @@ static u64 modify_spte_protections(u64 spte, u64 set, u64 clear)
return spte;
}
-static u64 make_spte_executable(u64 spte)
+static u64 make_spte_executable(u64 spte, u8 access)
{
- return modify_spte_protections(spte, shadow_x_mask, shadow_nx_mask);
-}
+ u64 set, clear;
-static u64 make_spte_nonexecutable(u64 spte)
-{
- return modify_spte_protections(spte, shadow_nx_mask, shadow_x_mask);
+ if (access & ACC_EXEC_MASK)
+ set = shadow_x_mask;
+ else
+ set = shadow_nx_mask;
+ clear = set ^ (shadow_nx_mask | shadow_x_mask);
+ return modify_spte_protections(spte, set, clear);
}
/*
@@ -356,8 +358,8 @@ u64 make_small_spte(struct kvm *kvm, u64 huge_spte,
* the page executable as the NX hugepage mitigation no longer
* applies.
*/
- if ((role.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled(kvm))
- child_spte = make_spte_executable(child_spte);
+ if (is_nx_huge_page_enabled(kvm))
+ child_spte = make_spte_executable(child_spte, role.access);
}
return child_spte;
@@ -379,7 +381,7 @@ u64 make_huge_spte(struct kvm *kvm, u64 small_spte, int level)
huge_spte &= KVM_HPAGE_MASK(level) | ~PAGE_MASK;
if (is_nx_huge_page_enabled(kvm))
- huge_spte = make_spte_nonexecutable(huge_spte);
+ huge_spte = make_spte_executable(huge_spte, 0);
return huge_spte;
}
--
2.53.0
* [PATCH 07/24] KVM: x86/mmu: rename and clarify BYTE_MASK
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (5 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 06/24] KVM: x86/mmu: merge make_spte_{non,}executable Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK Paolo Bonzini
` (18 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
The BYTE_MASK macro is the central point of the black magic
in update_permission_bitmask(). Rename it to something
that relates to how it is used, and add a comment explaining
how it works.
Using shifts instead of powers of two was actually suggested by
David Hildenbrand back in 2017 for clarity[1] but I evidently
forgot his suggestion when applying to kvm.git.
[1] https://lore.kernel.org/kvm/e4b5df86-31ae-2f4e-0666-393753e256df@redhat.com/
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 55 ++++++++++++++++++++++++++++++------------
1 file changed, 39 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b922a8b00057..170952a840db 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5517,29 +5517,53 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
max_huge_page_level);
}
-#define BYTE_MASK(access) \
- ((1 & (access) ? 2 : 0) | \
- (2 & (access) ? 4 : 0) | \
- (3 & (access) ? 8 : 0) | \
- (4 & (access) ? 16 : 0) | \
- (5 & (access) ? 32 : 0) | \
- (6 & (access) ? 64 : 0) | \
- (7 & (access) ? 128 : 0))
-
+/*
+ * Build a mask with all combinations of PTE access rights that
+ * include the given access bit. The mask can be queried with
+ * "mask & (1 << access)", where access is a combination of
+ * ACC_* bits.
+ *
+ * By mixing and matching multiple masks returned by ACC_BITS_MASK,
+ * update_permission_bitmask() builds what is effectively a
+ * two-dimensional array of bools. The second dimension is
+ * provided by individual bits of permissions[pfec >> 1], and
+ * logical &, | and ~ operations operate on all the 8 possible
+ * combinations of ACC_* bits.
+ */
+#define ACC_BITS_MASK(access) \
+ ((1 & (access) ? 1 << 1 : 0) | \
+ (2 & (access) ? 1 << 2 : 0) | \
+ (3 & (access) ? 1 << 3 : 0) | \
+ (4 & (access) ? 1 << 4 : 0) | \
+ (5 & (access) ? 1 << 5 : 0) | \
+ (6 & (access) ? 1 << 6 : 0) | \
+ (7 & (access) ? 1 << 7 : 0))
static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
{
unsigned byte;
- const u8 x = BYTE_MASK(ACC_EXEC_MASK);
- const u8 w = BYTE_MASK(ACC_WRITE_MASK);
- const u8 u = BYTE_MASK(ACC_USER_MASK);
+ const u8 x = ACC_BITS_MASK(ACC_EXEC_MASK);
+ const u8 w = ACC_BITS_MASK(ACC_WRITE_MASK);
+ const u8 u = ACC_BITS_MASK(ACC_USER_MASK);
bool cr4_smep = is_cr4_smep(mmu);
bool cr4_smap = is_cr4_smap(mmu);
bool cr0_wp = is_cr0_wp(mmu);
bool efer_nx = is_efer_nx(mmu);
+ /*
+ * In hardware, page fault error codes are generated (as the name
+ * suggests) on any kind of page fault. permission_fault() and
+ * paging_tmpl.h already use the same bits after a successful page
+ * table walk, to indicate the kind of access being performed.
+ *
+ * However, PFERR_PRESENT_MASK and PFERR_RSVD_MASK are never set here,
+ * exactly because the page walk is successful. PFERR_PRESENT_MASK is
+ * removed by the shift, while PFERR_RSVD_MASK is repurposed in
+ * permission_fault() to indicate accesses that are *not* subject to
+ * SMAP restrictions.
+ */
for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
unsigned pfec = byte << 1;
@@ -5586,10 +5610,9 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
* - The access is supervisor mode
* - If implicit supervisor access or X86_EFLAGS_AC is clear
*
- * Here, we cover the first four conditions.
- * The fifth is computed dynamically in permission_fault();
- * PFERR_RSVD_MASK bit will be set in PFEC if the access is
- * *not* subject to SMAP restrictions.
+ * Here, we cover the first four conditions. The fifth
+ * is computed dynamically in permission_fault() and
+ * communicated by setting PFERR_RSVD_MASK.
*/
if (cr4_smap)
smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
--
2.53.0
* [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (6 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 07/24] KVM: x86/mmu: rename and clarify BYTE_MASK Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-27 4:06 ` Jon Kohler
2026-03-30 4:12 ` kernel test robot
2026-03-26 18:17 ` [PATCH 09/24] KVM: x86/mmu: separate more EPT/non-EPT permission_fault() Paolo Bonzini
` (17 subsequent siblings)
25 siblings, 2 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
So far, read permissions were only needed for EPT, which does not use
ACC_USER_MASK; therefore, for EPT page tables, ACC_USER_MASK was repurposed
as a read permission bit.
In order to implement nested MBEC, EPT will genuinely have four kinds of
accesses, and there will be no room for such hacks; bite the bullet at
last, enlarging ACC_ALL to four bits and each permissions[] entry to 2^4
bits (u16).
The new code does not enforce that the XWR bits on non-execonly processors
have their R bit set, even when running nested: none of the shadow_*_mask
values have bit 0 set, and make_spte() genuinely relies on ACC_READ_MASK
being requested! This works because, if execonly is not supported by the
processor, shadow EPT will generate an EPT misconfig vmexit if the XWR
bits represent a non-readable page, and therefore the pte_access argument
to make_spte() will also always have ACC_READ_MASK set.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 12 +++++-----
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++++++------------
arch/x86/kvm/mmu/mmutrace.h | 3 ++-
arch/x86/kvm/mmu/paging_tmpl.h | 35 +++++++++++++++++------------
arch/x86/kvm/mmu/spte.c | 18 ++++++---------
arch/x86/kvm/mmu/spte.h | 5 +++--
arch/x86/kvm/vmx/capabilities.h | 5 -----
arch/x86/kvm/vmx/common.h | 5 +----
arch/x86/kvm/vmx/vmx.c | 3 +--
10 files changed, 67 insertions(+), 60 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6e4e3ef9b8c7..65671d3769f0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -327,11 +327,11 @@ struct kvm_kernel_irq_routing_entry;
* the number of unique SPs that can theoretically be created is 2^n, where n
* is the number of bits that are used to compute the role.
*
- * But, even though there are 20 bits in the mask below, not all combinations
+ * But, even though there are 21 bits in the mask below, not all combinations
* of modes and flags are possible:
*
* - invalid shadow pages are not accounted, mirror pages are not shadowed,
- * so the bits are effectively 18.
+ * so the bits are effectively 19.
*
* - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
* execonly and ad_disabled are only used for nested EPT which has
@@ -346,7 +346,7 @@ struct kvm_kernel_irq_routing_entry;
* cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
*
* Therefore, the maximum number of possible upper-level shadow pages for a
- * single gfn is a bit less than 2^13.
+ * single gfn is a bit less than 2^14.
*/
union kvm_mmu_page_role {
u32 word;
@@ -355,7 +355,7 @@ union kvm_mmu_page_role {
unsigned has_4_byte_gpte:1;
unsigned quadrant:2;
unsigned direct:1;
- unsigned access:3;
+ unsigned access:4;
unsigned invalid:1;
unsigned efer_nx:1;
unsigned cr0_wp:1;
@@ -365,7 +365,7 @@ union kvm_mmu_page_role {
unsigned guest_mode:1;
unsigned passthrough:1;
unsigned is_mirror:1;
- unsigned :4;
+ unsigned:3;
/*
* This is left at the top of the word so that
@@ -491,7 +491,7 @@ struct kvm_mmu {
* Byte index: page fault error code [4:1]
* Bit index: pte permissions in ACC_* format
*/
- u8 permissions[16];
+ u16 permissions[16];
u64 *pae_root;
u64 *pml4_root;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 830f46145692..23f37535c0ce 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -81,7 +81,7 @@ u8 kvm_mmu_get_max_tdp_level(void);
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
+void kvm_mmu_set_ept_masks(bool has_ad_bits);
void kvm_init_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 170952a840db..5f578435b5ad 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2033,7 +2033,7 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
*/
const union kvm_mmu_page_role sync_role_ign = {
.level = 0xf,
- .access = 0x7,
+ .access = ACC_ALL,
.quadrant = 0x3,
.passthrough = 0x1,
};
@@ -5527,7 +5527,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
* update_permission_bitmask() builds what is effectively a
* two-dimensional array of bools. The second dimension is
* provided by individual bits of permissions[pfec >> 1], and
- * logical &, | and ~ operations operate on all the 8 possible
+ * logical &, | and ~ operations operate on all the 16 possible
* combinations of ACC_* bits.
*/
#define ACC_BITS_MASK(access) \
@@ -5537,15 +5537,24 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
(4 & (access) ? 1 << 4 : 0) | \
(5 & (access) ? 1 << 5 : 0) | \
(6 & (access) ? 1 << 6 : 0) | \
- (7 & (access) ? 1 << 7 : 0))
+ (7 & (access) ? 1 << 7 : 0) | \
+ (8 & (access) ? 1 << 8 : 0) | \
+ (9 & (access) ? 1 << 9 : 0) | \
+ (10 & (access) ? 1 << 10 : 0) | \
+ (11 & (access) ? 1 << 11 : 0) | \
+ (12 & (access) ? 1 << 12 : 0) | \
+ (13 & (access) ? 1 << 13 : 0) | \
+ (14 & (access) ? 1 << 14 : 0) | \
+ (15 & (access) ? 1 << 15 : 0))
static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
{
unsigned byte;
- const u8 x = ACC_BITS_MASK(ACC_EXEC_MASK);
- const u8 w = ACC_BITS_MASK(ACC_WRITE_MASK);
- const u8 u = ACC_BITS_MASK(ACC_USER_MASK);
+ const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
+ const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
+ const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
+ const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
bool cr4_smep = is_cr4_smep(mmu);
bool cr4_smap = is_cr4_smap(mmu);
@@ -5568,24 +5577,26 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
unsigned pfec = byte << 1;
/*
- * Each "*f" variable has a 1 bit for each UWX value
+ * Each "*f" variable has a 1 bit for each ACC_* combo
* that causes a fault with the given PFEC.
*/
+ /* Faults from reads to non-readable pages */
+ u16 rf = (pfec & (PFERR_WRITE_MASK|PFERR_FETCH_MASK)) ? 0 : (u16)~r;
/* Faults from writes to non-writable pages */
- u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0;
+ u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
/* Faults from user mode accesses to supervisor pages */
- u8 uf = (pfec & PFERR_USER_MASK) ? (u8)~u : 0;
+ u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
/* Faults from fetches of non-executable pages*/
- u8 ff = (pfec & PFERR_FETCH_MASK) ? (u8)~x : 0;
+ u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
/* Faults from kernel mode fetches of user pages */
- u8 smepf = 0;
+ u16 smepf = 0;
/* Faults from kernel mode accesses of user pages */
- u8 smapf = 0;
+ u16 smapf = 0;
if (!ept) {
/* Faults from kernel mode accesses to user pages */
- u8 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
+ u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
/* Not really needed: !nx will cause pte.nx to fault */
if (!efer_nx)
@@ -5618,7 +5629,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
}
- mmu->permissions[byte] = ff | uf | wf | smepf | smapf;
+ mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
}
}
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 764e3015d021..dcfdfedfc4e9 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -25,7 +25,8 @@
#define KVM_MMU_PAGE_PRINTK() ({ \
const char *saved_ptr = trace_seq_buffer_ptr(p); \
static const char *access_str[] = { \
- "---", "--x", "w--", "w-x", "-u-", "-ux", "wu-", "wux" \
+ "----", "r---", "-w--", "rw--", "--u-", "r-u-", "-wu-", "rwu-", \
+ "---x", "r--x", "-w-x", "rw-x", "--ux", "r-ux", "-wux", "rwux" \
}; \
union kvm_mmu_page_role role; \
\
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 901cd2bd40b8..fb1b5d8b23e5 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -170,25 +170,24 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
return true;
}
-/*
- * For PTTYPE_EPT, a page table can be executable but not readable
- * on supported processors. Therefore, set_spte does not automatically
- * set bit 0 if execute only is supported. Here, we repurpose ACC_USER_MASK
- * to signify readability since it isn't used in the EPT case
- */
static inline unsigned FNAME(gpte_access)(u64 gpte)
{
unsigned access;
#if PTTYPE == PTTYPE_EPT
access = ((gpte & VMX_EPT_WRITABLE_MASK) ? ACC_WRITE_MASK : 0) |
((gpte & VMX_EPT_EXECUTABLE_MASK) ? ACC_EXEC_MASK : 0) |
- ((gpte & VMX_EPT_READABLE_MASK) ? ACC_USER_MASK : 0);
+ ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0);
#else
- BUILD_BUG_ON(ACC_EXEC_MASK != PT_PRESENT_MASK);
- BUILD_BUG_ON(ACC_EXEC_MASK != 1);
+ /*
+ * P is set here, so the page is always readable and W/U/!NX represent
+ * allowed accesses.
+ */
+ BUILD_BUG_ON(ACC_READ_MASK != PT_PRESENT_MASK);
+ BUILD_BUG_ON(ACC_WRITE_MASK != PT_WRITABLE_MASK);
+ BUILD_BUG_ON(ACC_USER_MASK != PT_USER_MASK);
+ BUILD_BUG_ON(ACC_EXEC_MASK & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK));
access = gpte & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK);
- /* Combine NX with P (which is set here) to get ACC_EXEC_MASK. */
- access ^= (gpte >> PT64_NX_SHIFT);
+ access |= gpte & PT64_NX_MASK ? 0 : ACC_EXEC_MASK;
#endif
return access;
@@ -501,10 +500,18 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
if (write_fault)
walker->fault.exit_qualification |= EPT_VIOLATION_ACC_WRITE;
- if (user_fault)
- walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
- if (fetch_fault)
+ else if (fetch_fault)
walker->fault.exit_qualification |= EPT_VIOLATION_ACC_INSTR;
+ else
+ walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
+
+ /*
+ * Accesses to guest paging structures are either "reads" or
+ * "read+write" accesses, so consider them the latter if write_fault
+ * is true.
+ */
+ if (access & PFERR_GUEST_PAGE_MASK)
+ walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
/*
* Note, pte_access holds the raw RWX bits from the EPTE, not
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index e9dc0ae44274..7b5f118ae211 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -194,12 +194,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
int is_host_mmio = -1;
bool wrprot = false;
- /*
- * For the EPT case, shadow_present_mask has no RWX bits set if
- * exec-only page table entries are supported. In that case,
- * ACC_USER_MASK and shadow_user_mask are used to represent
- * read access. See FNAME(gpte_access) in paging_tmpl.h.
- */
WARN_ON_ONCE((pte_access | shadow_present_mask) == SHADOW_NONPRESENT_VALUE);
if (sp->role.ad_disabled)
@@ -228,6 +222,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
pte_access &= ~ACC_EXEC_MASK;
}
+ if (pte_access & ACC_READ_MASK)
+ spte |= PT_PRESENT_MASK; /* or VMX_EPT_READABLE_MASK */
+
if (pte_access & ACC_EXEC_MASK)
spte |= shadow_x_mask;
else
@@ -391,6 +388,7 @@ u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
u64 spte = SPTE_MMU_PRESENT_MASK;
spte |= __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
+ PT_PRESENT_MASK /* or VMX_EPT_READABLE_MASK */ |
shadow_user_mask | shadow_x_mask | shadow_me_value;
if (ad_disabled)
@@ -491,18 +489,16 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
+void kvm_mmu_set_ept_masks(bool has_ad_bits)
{
kvm_ad_enabled = has_ad_bits;
- shadow_user_mask = VMX_EPT_READABLE_MASK;
+ shadow_user_mask = 0;
shadow_accessed_mask = VMX_EPT_ACCESS_BIT;
shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
shadow_nx_mask = 0ull;
shadow_x_mask = VMX_EPT_EXECUTABLE_MASK;
- /* VMX_EPT_SUPPRESS_VE_BIT is needed for W or X violation. */
- shadow_present_mask =
- (has_exec_only ? 0ull : VMX_EPT_READABLE_MASK) | VMX_EPT_SUPPRESS_VE_BIT;
+ shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
shadow_acc_track_mask = VMX_EPT_RWX_MASK;
shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index bc02a2e89a31..121bfb2217e8 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -52,10 +52,11 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#endif
-#define ACC_EXEC_MASK 1
+#define ACC_READ_MASK PT_PRESENT_MASK
#define ACC_WRITE_MASK PT_WRITABLE_MASK
#define ACC_USER_MASK PT_USER_MASK
-#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
+#define ACC_EXEC_MASK 8
+#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
#define SPTE_LEVEL_BITS 9
#define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 4e371c93ae16..609477f190e8 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -300,11 +300,6 @@ static inline bool cpu_has_vmx_flexpriority(void)
cpu_has_vmx_virtualize_apic_accesses();
}
-static inline bool cpu_has_vmx_ept_execute_only(void)
-{
- return vmx_capability.ept & VMX_EPT_EXECUTE_ONLY_BIT;
-}
-
static inline bool cpu_has_vmx_ept_4levels(void)
{
return vmx_capability.ept & VMX_EPT_PAGE_WALK_4_BIT;
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index adf925500b9e..1afbf272efae 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -85,11 +85,8 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
{
u64 error_code;
- /* Is it a read fault? */
- error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
- ? PFERR_USER_MASK : 0;
/* Is it a write fault? */
- error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
+ error_code = (exit_qualification & EPT_VIOLATION_ACC_WRITE)
? PFERR_WRITE_MASK : 0;
/* Is it a fetch fault? */
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8b24e682535b..e27868fa4eb7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8798,8 +8798,7 @@ __init int vmx_hardware_setup(void)
set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
if (enable_ept)
- kvm_mmu_set_ept_masks(enable_ept_ad_bits,
- cpu_has_vmx_ept_execute_only());
+ kvm_mmu_set_ept_masks(enable_ept_ad_bits);
else
vt_x86_ops.get_mt_mask = NULL;
--
2.53.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 09/24] KVM: x86/mmu: separate more EPT/non-EPT permission_fault()
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (7 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 10/24] KVM: x86/mmu: split XS/XU bits for EPT Paolo Bonzini
` (16 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Now that EPT no longer abuses ACC_USER_MASK, move its handling
entirely into the !ept branch. Merge smepf and ff into a single
variable, because EPT's "SMEP" (actually MBEC) is defined
differently.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5f578435b5ad..dd5419a1f891 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5553,7 +5553,6 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
- const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
bool cr4_smep = is_cr4_smep(mmu);
@@ -5586,21 +5585,24 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
/* Faults from writes to non-writable pages */
u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
/* Faults from user mode accesses to supervisor pages */
- u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
- /* Faults from fetches of non-executable pages*/
- u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
- /* Faults from kernel mode fetches of user pages */
- u16 smepf = 0;
+ u16 uf = 0;
+ /* Faults from fetches of non-executable pages */
+ u16 ff = 0;
/* Faults from kernel mode accesses of user pages */
u16 smapf = 0;
- if (!ept) {
+ if (ept) {
+ ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
+ } else {
+ const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
+
/* Faults from kernel mode accesses to user pages */
u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
- /* Not really needed: !nx will cause pte.nx to fault */
- if (!efer_nx)
- ff = 0;
+ uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
+
+ if (efer_nx)
+ ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
/* Allow supervisor writes if !cr0.wp */
if (!cr0_wp)
@@ -5608,7 +5610,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
/* Disallow supervisor fetches of user code if cr4.smep */
if (cr4_smep)
- smepf = (pfec & PFERR_FETCH_MASK) ? kf : 0;
+ ff |= (pfec & PFERR_FETCH_MASK) ? kf : 0;
/*
* SMAP:kernel-mode data accesses from user-mode
@@ -5629,7 +5631,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
}
- mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
+ mmu->permissions[byte] = ff | uf | wf | rf | smapf;
}
}
--
2.53.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 10/24] KVM: x86/mmu: split XS/XU bits for EPT
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (8 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 09/24] KVM: x86/mmu: separate more EPT/non-EPT permission_fault() Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 11/24] KVM: x86/mmu: move cr4_smep to base role Paolo Bonzini
` (15 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
When EPT is in use, replace ACC_USER_MASK with ACC_USER_EXEC_MASK,
so that supervisor and user-mode execution can be controlled
independently (ACC_USER_MASK would not allow a setting similar to
XU=0 XS=1 W=1 R=1).
Replace shadow_x_mask with shadow_xs_mask/shadow_xu_mask, to allow
setting XS and XU bits separately in EPT entries.
Note that ACC_USER_EXEC_MASK is already set through ACC_ALL in
the kvm_mmu_page roles, but it does not propagate to the XU bit
because shadow_xs_mask == shadow_xu_mask. On the other hand,
access tracking for eptad=0 does take it into account when
saving/restoring page permissions.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/mmutrace.h | 6 ++---
arch/x86/kvm/mmu/spte.c | 49 +++++++++++++++++++++++--------------
arch/x86/kvm/mmu/spte.h | 8 +++---
4 files changed, 40 insertions(+), 25 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dd5419a1f891..a6ee467ad838 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5472,7 +5472,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
static inline bool boot_cpu_is_amd(void)
{
WARN_ON_ONCE(!tdp_enabled);
- return shadow_x_mask == 0;
+ return shadow_xs_mask == 0;
}
/*
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index dcfdfedfc4e9..3429c1413f42 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -357,8 +357,8 @@ TRACE_EVENT(
__entry->sptep = virt_to_phys(sptep);
__entry->level = level;
__entry->r = shadow_present_mask || (__entry->spte & PT_PRESENT_MASK);
- __entry->x = is_executable_pte(__entry->spte);
- __entry->u = shadow_user_mask ? !!(__entry->spte & shadow_user_mask) : -1;
+ __entry->x = (__entry->spte & (shadow_xs_mask | shadow_nx_mask)) == shadow_xs_mask;
+ __entry->u = !!(__entry->spte & (shadow_xu_mask | shadow_user_mask));
),
TP_printk("gfn %llx spte %llx (%s%s%s%s) level %d at %llx",
@@ -366,7 +366,7 @@ TRACE_EVENT(
__entry->r ? "r" : "-",
__entry->spte & PT_WRITABLE_MASK ? "w" : "-",
__entry->x ? "x" : "-",
- __entry->u == -1 ? "" : (__entry->u ? "u" : "-"),
+ __entry->u ? "u" : "-",
__entry->level, __entry->sptep
)
);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 7b5f118ae211..fc7eb73476f6 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -29,8 +29,9 @@ bool __read_mostly kvm_ad_enabled;
u64 __read_mostly shadow_host_writable_mask;
u64 __read_mostly shadow_mmu_writable_mask;
u64 __read_mostly shadow_nx_mask;
-u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
u64 __read_mostly shadow_user_mask;
+u64 __read_mostly shadow_xs_mask; /* mutually exclusive with nx_mask and user_mask */
+u64 __read_mostly shadow_xu_mask; /* mutually exclusive with nx_mask and user_mask */
u64 __read_mostly shadow_accessed_mask;
u64 __read_mostly shadow_dirty_mask;
u64 __read_mostly shadow_mmio_value;
@@ -216,22 +217,30 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
* when CR0.PG is toggled, but leveraging that to ignore the mitigation
* would tie make_spte() further to vCPU/MMU state, and add complexity
* just to optimize a mode that is anything but performance critical.
+ *
+ * Use ACC_USER_EXEC_MASK here assuming only Intel processors (EPT)
+ * are affected by the NX huge page erratum.
*/
- if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
+ if (level > PG_LEVEL_4K &&
+ (pte_access & (ACC_EXEC_MASK | ACC_USER_EXEC_MASK)) &&
is_nx_huge_page_enabled(vcpu->kvm)) {
- pte_access &= ~ACC_EXEC_MASK;
+ pte_access &= ~(ACC_EXEC_MASK | ACC_USER_EXEC_MASK);
}
if (pte_access & ACC_READ_MASK)
spte |= PT_PRESENT_MASK; /* or VMX_EPT_READABLE_MASK */
- if (pte_access & ACC_EXEC_MASK)
- spte |= shadow_x_mask;
- else
- spte |= shadow_nx_mask;
-
- if (pte_access & ACC_USER_MASK)
- spte |= shadow_user_mask;
+ if (shadow_nx_mask) {
+ if (!(pte_access & ACC_EXEC_MASK))
+ spte |= shadow_nx_mask;
+ if (pte_access & ACC_USER_MASK)
+ spte |= shadow_user_mask;
+ } else {
+ if (pte_access & ACC_EXEC_MASK)
+ spte |= shadow_xs_mask;
+ if (pte_access & ACC_USER_EXEC_MASK)
+ spte |= shadow_xu_mask;
+ }
if (level > PG_LEVEL_4K)
spte |= PT_PAGE_SIZE_MASK;
@@ -318,11 +327,13 @@ static u64 make_spte_executable(u64 spte, u8 access)
{
u64 set, clear;
- if (access & ACC_EXEC_MASK)
- set = shadow_x_mask;
+ if (shadow_nx_mask)
+ set = (access & ACC_EXEC_MASK) ? 0 : shadow_nx_mask;
else
- set = shadow_nx_mask;
- clear = set ^ (shadow_nx_mask | shadow_x_mask);
+ set =
+ (access & ACC_EXEC_MASK ? shadow_xs_mask : 0) |
+ (access & ACC_USER_EXEC_MASK ? shadow_xu_mask : 0);
+ clear = set ^ (shadow_nx_mask | shadow_xs_mask | shadow_xu_mask);
return modify_spte_protections(spte, set, clear);
}
@@ -389,7 +400,7 @@ u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
spte |= __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
PT_PRESENT_MASK /* or VMX_EPT_READABLE_MASK */ |
- shadow_user_mask | shadow_x_mask | shadow_me_value;
+ shadow_user_mask | shadow_xs_mask | shadow_xu_mask | shadow_me_value;
if (ad_disabled)
spte |= SPTE_TDP_AD_DISABLED;
@@ -497,10 +508,11 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits)
shadow_accessed_mask = VMX_EPT_ACCESS_BIT;
shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
shadow_nx_mask = 0ull;
- shadow_x_mask = VMX_EPT_EXECUTABLE_MASK;
+ shadow_xs_mask = VMX_EPT_EXECUTABLE_MASK;
+ shadow_xu_mask = VMX_EPT_EXECUTABLE_MASK;
shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
- shadow_acc_track_mask = VMX_EPT_RWX_MASK;
+ shadow_acc_track_mask = VMX_EPT_RWX_MASK | VMX_EPT_USER_EXECUTABLE_MASK;
shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
shadow_mmu_writable_mask = EPT_SPTE_MMU_WRITABLE;
@@ -548,7 +560,8 @@ void kvm_mmu_reset_all_pte_masks(void)
shadow_accessed_mask = PT_ACCESSED_MASK;
shadow_dirty_mask = PT_DIRTY_MASK;
shadow_nx_mask = PT64_NX_MASK;
- shadow_x_mask = 0;
+ shadow_xs_mask = 0;
+ shadow_xu_mask = 0;
shadow_present_mask = PT_PRESENT_MASK;
shadow_acc_track_mask = 0;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 121bfb2217e8..204f16aaf4e5 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -54,7 +54,8 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define ACC_READ_MASK PT_PRESENT_MASK
#define ACC_WRITE_MASK PT_WRITABLE_MASK
-#define ACC_USER_MASK PT_USER_MASK
+#define ACC_USER_MASK PT_USER_MASK /* non EPT */
+#define ACC_USER_EXEC_MASK ACC_USER_MASK /* EPT only */
#define ACC_EXEC_MASK 8
#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
@@ -184,8 +185,9 @@ extern bool __read_mostly kvm_ad_enabled;
extern u64 __read_mostly shadow_host_writable_mask;
extern u64 __read_mostly shadow_mmu_writable_mask;
extern u64 __read_mostly shadow_nx_mask;
-extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
extern u64 __read_mostly shadow_user_mask;
+extern u64 __read_mostly shadow_xs_mask; /* mutually exclusive with nx_mask and user_mask */
+extern u64 __read_mostly shadow_xu_mask; /* mutually exclusive with nx_mask and user_mask */
extern u64 __read_mostly shadow_accessed_mask;
extern u64 __read_mostly shadow_dirty_mask;
extern u64 __read_mostly shadow_mmio_value;
@@ -363,7 +365,7 @@ static inline bool is_last_spte(u64 pte, int level)
static inline bool is_executable_pte(u64 spte)
{
- return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
+ return (spte & (shadow_xs_mask | shadow_xu_mask | shadow_nx_mask)) != shadow_nx_mask;
}
static inline kvm_pfn_t spte_to_pfn(u64 pte)
--
2.53.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 11/24] KVM: x86/mmu: move cr4_smep to base role
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (9 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 10/24] KVM: x86/mmu: split XS/XU bits for EPT Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 12/24] KVM: VMX: enable use of MBEC Paolo Bonzini
` (14 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Guest page tables can be reused independently of the value of CR4.SMEP
(at least if WP=1). However, this is not true of EPT MBEC pages,
because presence of EPT entries is signaled by bits 0-2 when MBEC
is off, and by bits 0-2 plus bit 10 when MBEC is on.
In preparation for enabling MBEC, move cr4_smep to the base role.
This makes the smep_andnot_wp bit redundant, so remove it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Documentation/virt/kvm/x86/mmu.rst | 10 ++++------
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 23 +++++++++++++++--------
arch/x86/kvm/mmu/mmu.c | 6 +++---
4 files changed, 23 insertions(+), 17 deletions(-)
diff --git a/Documentation/virt/kvm/x86/mmu.rst b/Documentation/virt/kvm/x86/mmu.rst
index 2b3b6d442302..666aa179601a 100644
--- a/Documentation/virt/kvm/x86/mmu.rst
+++ b/Documentation/virt/kvm/x86/mmu.rst
@@ -184,10 +184,8 @@ Shadow pages contain the following information:
Contains the value of efer.nx for which the page is valid.
role.cr0_wp:
Contains the value of cr0.wp for which the page is valid.
- role.smep_andnot_wp:
- Contains the value of cr4.smep && !cr0.wp for which the page is valid
- (pages for which this is true are different from other pages; see the
- treatment of cr0.wp=0 below).
+ role.cr4_smep:
+ Contains the value of cr4.smep for which the page is valid.
role.smap_andnot_wp:
Contains the value of cr4.smap && !cr0.wp for which the page is valid
(pages for which this is true are different from other pages; see the
@@ -435,8 +433,8 @@ from being written by the kernel after cr0.wp has changed to 1, we make
the value of cr0.wp part of the page role. This means that an spte created
with one value of cr0.wp cannot be used when cr0.wp has a different value -
it will simply be missed by the shadow page lookup code. A similar issue
-exists when an spte created with cr0.wp=0 and cr4.smep=0 is used after
-changing cr4.smep to 1. To avoid this, the value of !cr0.wp && cr4.smep
+exists when an spte created with cr0.wp=0 and cr4.smap=0 is used after
+changing cr4.smap to 1. To avoid this, the value of !cr0.wp && cr4.smap
is also made a part of the page role.
Large pages
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index de709fb5bd76..a02b486cc6fe 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -93,6 +93,7 @@ KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
+KVM_X86_OP_OPTIONAL_RET0(tdp_has_smep)
KVM_X86_OP(load_mmu_pgd)
KVM_X86_OP_OPTIONAL(link_external_spt)
KVM_X86_OP_OPTIONAL(set_external_spte)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 65671d3769f0..50a941ff61d1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -342,8 +342,8 @@ struct kvm_kernel_irq_routing_entry;
* paging has exactly one upper level, making level completely redundant
* when has_4_byte_gpte=1.
*
- * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
- * cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
+ * - on top of this, smap_andnot_wp is only set if cr0_wp=0,
+ * therefore these two bits only give rise to 3 possibilities.
*
* Therefore, the maximum number of possible upper-level shadow pages for a
* single gfn is a bit less than 2^14.
@@ -359,12 +359,19 @@ union kvm_mmu_page_role {
unsigned invalid:1;
unsigned efer_nx:1;
unsigned cr0_wp:1;
- unsigned smep_andnot_wp:1;
unsigned smap_andnot_wp:1;
unsigned ad_disabled:1;
unsigned guest_mode:1;
unsigned passthrough:1;
unsigned is_mirror:1;
+
+ /*
+ * cr4_smep is also set for EPT MBEC. Because it affects
+ * which pages are considered non-present (bit 10 additionally
+ * must be zero if MBEC is on) it has to be in the base role.
+ */
+ unsigned cr4_smep:1;
+
unsigned:3;
/*
@@ -391,10 +398,10 @@ union kvm_mmu_page_role {
* tables (because KVM doesn't support Protection Keys with shadow paging), and
* CR0.PG, CR4.PAE, and CR4.PSE are indirectly reflected in role.level.
*
- * Note, SMEP and SMAP are not redundant with sm*p_andnot_wp in the page role.
- * If CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMEP and
- * SMAP, but the MMU's permission checks for software walks need to be SMEP and
- * SMAP aware regardless of CR0.WP.
+ * Note, SMAP is not redundant with smap_andnot_wp in the page role. If
+ * CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMAP,
+ * but the MMU's permission checks for software walks need to be SMAP
+ * aware regardless of CR0.WP.
*/
union kvm_mmu_extended_role {
u32 word;
@@ -404,7 +411,6 @@ union kvm_mmu_extended_role {
unsigned int cr4_pse:1;
unsigned int cr4_pke:1;
unsigned int cr4_smap:1;
- unsigned int cr4_smep:1;
unsigned int cr4_la57:1;
unsigned int efer_lma:1;
};
@@ -1856,6 +1862,7 @@ struct kvm_x86_ops {
int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
u8 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+ bool (*tdp_has_smep)(struct kvm *kvm);
void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
int root_level);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a6ee467ad838..e768aeb05886 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -227,7 +227,7 @@ static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \
}
BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
-BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);
+BUILD_MMU_ROLE_ACCESSOR(base, cr4, smep);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
@@ -5745,7 +5745,7 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
role.base.efer_nx = ____is_efer_nx(regs);
role.base.cr0_wp = ____is_cr0_wp(regs);
- role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
+ role.base.cr4_smep = ____is_cr4_smep(regs);
role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
@@ -5757,7 +5757,6 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
else
role.base.level = PT32_ROOT_LEVEL;
- role.ext.cr4_smep = ____is_cr4_smep(regs);
role.ext.cr4_smap = ____is_cr4_smap(regs);
role.ext.cr4_pse = ____is_cr4_pse(regs);
@@ -5816,6 +5815,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
role.access = ACC_ALL;
role.cr0_wp = true;
+ role.cr4_smep = kvm_x86_call(tdp_has_smep)(vcpu->kvm);
role.efer_nx = true;
role.smm = cpu_role.base.smm;
role.guest_mode = cpu_role.base.guest_mode;
--
2.53.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 12/24] KVM: VMX: enable use of MBEC
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (10 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 11/24] KVM: x86/mmu: move cr4_smep to base role Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 13/24] KVM: nVMX: pass advanced EPT violation vmexit info to guest Paolo Bonzini
` (13 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
If available, set SECONDARY_EXEC_MODE_BASED_EPT_EXEC in the secondary
execution controls and configure XS and XU separately (even though they
are always used together).
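As a standalone model of the XS/XU split this patch makes in
kvm_mmu_set_ept_masks(): XS always uses EPT bit 2, while XU uses bit 10
only when MBEC is available, otherwise it aliases bit 2 so the two masks
stay in lockstep. This is an illustrative sketch, not kernel code; the
VMX_EPT_* values mirror the architectural bit positions.

```c
#include <assert.h>
#include <stdint.h>

#define VMX_EPT_EXECUTABLE_MASK       (1ull << 2)
#define VMX_EPT_USER_EXECUTABLE_MASK  (1ull << 10)

/* Without MBEC, XU falls back to the plain executable bit so that
 * setting XU and XS together keeps the single X bit semantics. */
static uint64_t pick_shadow_xu_mask(int has_mbec)
{
	return has_mbec ? VMX_EPT_USER_EXECUTABLE_MASK
			: VMX_EPT_EXECUTABLE_MASK;
}
```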
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/mmu.h | 7 ++++++-
arch/x86/kvm/mmu/spte.c | 4 ++--
arch/x86/kvm/mmu/spte.h | 5 +++--
arch/x86/kvm/vmx/capabilities.h | 6 ++++++
arch/x86/kvm/vmx/common.h | 10 +++++-----
arch/x86/kvm/vmx/main.c | 9 +++++++++
arch/x86/kvm/vmx/nested.c | 1 +
arch/x86/kvm/vmx/vmx.c | 16 +++++++++++++++-
arch/x86/kvm/vmx/vmx.h | 1 +
arch/x86/kvm/vmx/x86_ops.h | 1 +
11 files changed, 52 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 59e3b095a315..2b449a3948d3 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -608,9 +608,12 @@ enum vm_entry_failure_code {
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
#define EPT_VIOLATION_RWX_TO_PROT(__epte) (((__epte) & VMX_EPT_RWX_MASK) << 3)
+#define EPT_VIOLATION_USER_EXEC_TO_PROT(__epte) (((__epte) & VMX_EPT_USER_EXECUTABLE_MASK) >> 4)
static_assert(EPT_VIOLATION_RWX_TO_PROT(VMX_EPT_RWX_MASK) ==
(EPT_VIOLATION_PROT_READ | EPT_VIOLATION_PROT_WRITE | EPT_VIOLATION_PROT_EXEC));
+static_assert(EPT_VIOLATION_USER_EXEC_TO_PROT(VMX_EPT_USER_EXECUTABLE_MASK) ==
+ (EPT_VIOLATION_PROT_USER_EXEC));
/*
* Exit Qualifications for NOTIFY VM EXIT
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 23f37535c0ce..678ce021991f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -76,12 +76,17 @@ static inline gfn_t kvm_mmu_max_gfn(void)
return (1ULL << (max_gpa_bits - PAGE_SHIFT)) - 1;
}
+static inline bool mmu_has_mbec(struct kvm_mmu *mmu)
+{
+ return mmu->root_role.cr4_smep;
+}
+
u8 kvm_mmu_get_max_tdp_level(void);
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits);
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_mbec);
void kvm_init_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index fc7eb73476f6..800312e46d0a 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -500,7 +500,7 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits)
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_mbec)
{
kvm_ad_enabled = has_ad_bits;
@@ -509,7 +509,7 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits)
shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
shadow_nx_mask = 0ull;
shadow_xs_mask = VMX_EPT_EXECUTABLE_MASK;
- shadow_xu_mask = VMX_EPT_EXECUTABLE_MASK;
+ shadow_xu_mask = has_mbec ? VMX_EPT_USER_EXECUTABLE_MASK : VMX_EPT_EXECUTABLE_MASK;
shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
shadow_acc_track_mask = VMX_EPT_RWX_MASK | VMX_EPT_USER_EXECUTABLE_MASK;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 204f16aaf4e5..6c514194a513 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -24,7 +24,7 @@
* - bits 55 (EPT only): MMU-writable
* - bits 56-59: unused
* - bits 60-61: type of A/D tracking
- * - bits 62: unused
+ * - bits 62 (EPT only): saved XU bit for disabled AD
*/
/*
@@ -72,7 +72,8 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
* must not overlap the A/D type mask.
*/
#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (VMX_EPT_READABLE_MASK | \
- VMX_EPT_EXECUTABLE_MASK)
+ VMX_EPT_EXECUTABLE_MASK | \
+ VMX_EPT_USER_EXECUTABLE_MASK)
#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 52
#define SHADOW_ACC_TRACK_SAVED_MASK (SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 609477f190e8..90c0bb4b7216 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -406,4 +406,10 @@ static inline bool cpu_has_notify_vmexit(void)
SECONDARY_EXEC_NOTIFY_VM_EXITING;
}
+static inline bool cpu_has_ept_mbec(void)
+{
+ return vmcs_config.cpu_based_2nd_exec_ctrl &
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+}
+
#endif /* __KVM_X86_VMX_CAPS_H */
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 1afbf272efae..40fa72f31fc7 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -91,15 +91,15 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
/* Is it a fetch fault? */
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
- /*
- * ept page table entry is present?
- * note: unconditionally clear USER_EXEC until mode-based
- * execute control is implemented
- */
+ /* ept page table entry is present? */
error_code |= (exit_qualification &
(EPT_VIOLATION_PROT_MASK & ~EPT_VIOLATION_PROT_USER_EXEC))
? PFERR_PRESENT_MASK : 0;
+ if (mmu_has_mbec(vcpu->arch.mmu))
+ error_code |= (exit_qualification & EPT_VIOLATION_PROT_USER_EXEC)
+ ? PFERR_PRESENT_MASK : 0;
+
if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index a46ccd670785..c0dd506bed64 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -750,6 +750,14 @@ static int vt_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
return vmx_set_identity_map_addr(kvm, ident_addr);
}
+static bool vt_tdp_has_smep(struct kvm *kvm)
+{
+ if (is_td(kvm))
+ return false;
+
+ return vmx_tdp_has_smep(kvm);
+}
+
static u64 vt_get_l2_tsc_offset(struct kvm_vcpu *vcpu)
{
/* TDX doesn't support L2 guest at the moment. */
@@ -961,6 +969,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.set_tss_addr = vt_op(set_tss_addr),
.set_identity_map_addr = vt_op(set_identity_map_addr),
.get_mt_mask = vmx_get_mt_mask,
+ .tdp_has_smep = vt_op(tdp_has_smep),
.get_exit_info = vt_op(get_exit_info),
.get_entry_info = vt_op(get_entry_info),
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 937aeb474af7..adeb5a29169f 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2440,6 +2440,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
SECONDARY_EXEC_APIC_REGISTER_VIRT |
SECONDARY_EXEC_ENABLE_VMFUNC |
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC |
SECONDARY_EXEC_DESC);
if (nested_cpu_has(vmcs12,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e27868fa4eb7..0c25c6865f91 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -113,6 +113,9 @@ module_param(emulate_invalid_guest_state, bool, 0444);
static bool __read_mostly fasteoi = 1;
module_param(fasteoi, bool, 0444);
+static bool __read_mostly enable_mbec = 1;
+module_param_named(mbec, enable_mbec, bool, 0444);
+
module_param(enable_apicv, bool, 0444);
module_param(enable_ipiv, bool, 0444);
@@ -2809,6 +2812,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
return -EIO;
vmx_cap->ept = 0;
+ _cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
}
if (!(_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_VPID) &&
@@ -4844,6 +4848,9 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
*/
exec_control &= ~SECONDARY_EXEC_ENABLE_VMFUNC;
+ if (!enable_mbec)
+ exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+
/* SECONDARY_EXEC_DESC is enabled/disabled on writes to CR4.UMIP,
* in vmx_set_cr4. */
exec_control &= ~SECONDARY_EXEC_DESC;
@@ -7932,6 +7939,11 @@ u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT);
}
+bool vmx_tdp_has_smep(struct kvm *kvm)
+{
+ return enable_mbec;
+}
+
static void vmcs_set_secondary_exec_control(struct vcpu_vmx *vmx, u32 new_ctl)
{
/*
@@ -8779,6 +8791,8 @@ __init int vmx_hardware_setup(void)
ple_window_shrink = 0;
}
+ if (!cpu_has_ept_mbec())
+ enable_mbec = 0;
if (!cpu_has_vmx_apicv())
enable_apicv = 0;
if (!enable_apicv)
@@ -8798,7 +8812,7 @@ __init int vmx_hardware_setup(void)
set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
if (enable_ept)
- kvm_mmu_set_ept_masks(enable_ept_ad_bits);
+ kvm_mmu_set_ept_masks(enable_ept_ad_bits, enable_mbec);
else
vt_x86_ops.get_mt_mask = NULL;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 70bfe81dea54..594717e619d9 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -570,6 +570,7 @@ static inline u8 vmx_get_rvi(void)
SECONDARY_EXEC_ENABLE_VMFUNC | \
SECONDARY_EXEC_BUS_LOCK_DETECTION | \
SECONDARY_EXEC_NOTIFY_VM_EXITING | \
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC | \
SECONDARY_EXEC_ENCLS_EXITING | \
SECONDARY_EXEC_EPT_VIOLATION_VE)
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index d09abeac2b56..69cf276be88e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -103,6 +103,7 @@ void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr);
u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+bool vmx_tdp_has_smep(struct kvm *kvm);
void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
--
2.53.0
* [PATCH 13/24] KVM: nVMX: pass advanced EPT violation vmexit info to guest
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (11 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 12/24] KVM: VMX: enable use of MBEC Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 14/24] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations Paolo Bonzini
` (12 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
KVM will use advanced vmexit information for EPT violations to
virtualize MBEC. Pass it to the guest as well, since doing so is easy
and allows testing nested-on-nested setups.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 4 ++++
arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
arch/x86/kvm/vmx/nested.c | 13 +++++++++----
3 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 2b449a3948d3..fcd623719334 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -524,6 +524,7 @@ enum vmcs_field {
#define VMX_EPT_1GB_PAGE_BIT (1ull << 17)
#define VMX_EPT_INVEPT_BIT (1ull << 20)
#define VMX_EPT_AD_BIT (1ull << 21)
+#define VMX_EPT_ADVANCED_VMEXIT_INFO_BIT (1ull << 22)
#define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25)
#define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26)
@@ -606,6 +607,9 @@ enum vm_entry_failure_code {
EPT_VIOLATION_PROT_USER_EXEC)
#define EPT_VIOLATION_GVA_IS_VALID BIT(7)
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
+#define EPT_VIOLATION_GVA_USER BIT(9)
+#define EPT_VIOLATION_GVA_WRITABLE BIT(10)
+#define EPT_VIOLATION_GVA_NX BIT(11)
#define EPT_VIOLATION_RWX_TO_PROT(__epte) (((__epte) & VMX_EPT_RWX_MASK) << 3)
#define EPT_VIOLATION_USER_EXEC_TO_PROT(__epte) (((__epte) & VMX_EPT_USER_EXECUTABLE_MASK) >> 4)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fb1b5d8b23e5..09e2e630d4b6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -491,7 +491,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
* [2:0] - Derive from the access bits. The exit_qualification might be
* out of date if it is serving an EPT misconfiguration.
* [5:3] - Calculated by the page walk of the guest EPT page tables
- * [7:8] - Derived from [7:8] of real exit_qualification
+ * [7:11] - Derived from [7:11] of real exit_qualification
*
* The other bits are set to 0.
*/
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index adeb5a29169f..4b742a19bfde 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -443,10 +443,14 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
vm_exit_reason = EXIT_REASON_EPT_MISCONFIG;
exit_qualification = 0;
} else {
+ u64 mask = EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED;
+ if (vmx->nested.msrs.ept_caps & VMX_EPT_ADVANCED_VMEXIT_INFO_BIT)
+ mask |= EPT_VIOLATION_GVA_USER |
+ EPT_VIOLATION_GVA_WRITABLE |
+ EPT_VIOLATION_GVA_NX;
exit_qualification = fault->exit_qualification;
- exit_qualification |= vmx_get_exit_qual(vcpu) &
- (EPT_VIOLATION_GVA_IS_VALID |
- EPT_VIOLATION_GVA_TRANSLATED);
+ exit_qualification |= vmx_get_exit_qual(vcpu) & mask;
vm_exit_reason = EXIT_REASON_EPT_VIOLATION;
}
@@ -7238,7 +7242,8 @@ static void nested_vmx_setup_secondary_ctls(u32 ept_caps,
VMX_EPT_PAGE_WALK_5_BIT |
VMX_EPTP_WB_BIT |
VMX_EPT_INVEPT_BIT |
- VMX_EPT_EXECUTE_ONLY_BIT;
+ VMX_EPT_EXECUTE_ONLY_BIT |
+ VMX_EPT_ADVANCED_VMEXIT_INFO_BIT;
msrs->ept_caps &= ept_caps;
msrs->ept_caps |= VMX_EPT_EXTENT_GLOBAL_BIT |
--
2.53.0
* [PATCH 14/24] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (12 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 13/24] KVM: nVMX: pass advanced EPT violation vmexit info to guest Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 15/24] KVM: x86/mmu: add support for MBEC to EPT page table walks Paolo Bonzini
` (11 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
For EPT, PFERR_USER_MASK refers not to the CPL of the guest,
but to the AND of the U bits encountered while walking the guest
page tables; this is consistent with how MBEC differentiates
between XS and XU. This information is available through the
"advanced vmexit information for EPT violations" feature.
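The derivation this patch adds to __vmx_handle_ept_violation() can be
modeled standalone as follows: PFERR_USER_MASK is set only when the GVA
is valid and both the "translated" and "user" bits of the exit
qualification are set. A hedged sketch with the bit positions mirrored
from the series, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define EPT_VIOLATION_GVA_IS_VALID    (1ull << 7)
#define EPT_VIOLATION_GVA_TRANSLATED  (1ull << 8)
#define EPT_VIOLATION_GVA_USER        (1ull << 9)
#define PFERR_USER_MASK               (1ull << 2)

/* Only a translated, user-walk access contributes PFERR_USER_MASK. */
static uint64_t user_error_code(uint64_t exit_qual)
{
	uint64_t both = EPT_VIOLATION_GVA_TRANSLATED |
			EPT_VIOLATION_GVA_USER;

	if (!(exit_qual & EPT_VIOLATION_GVA_IS_VALID))
		return 0;
	return (exit_qual & both) == both ? PFERR_USER_MASK : 0;
}
```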
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/vmx/common.h | 6 +++++-
arch/x86/kvm/vmx/vmx.c | 10 ++++++++++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 40fa72f31fc7..48520fa1c8e8 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -100,9 +100,13 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
error_code |= (exit_qualification & EPT_VIOLATION_PROT_USER_EXEC)
? PFERR_PRESENT_MASK : 0;
- if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
+ if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID) {
error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
+ if ((exit_qualification & (EPT_VIOLATION_GVA_TRANSLATED|EPT_VIOLATION_GVA_USER))
+ == (EPT_VIOLATION_GVA_TRANSLATED|EPT_VIOLATION_GVA_USER))
+ error_code |= PFERR_USER_MASK;
+ }
if (vt_is_tdx_private_gpa(vcpu->kvm, gpa))
error_code |= PFERR_PRIVATE_ACCESS;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0c25c6865f91..65892dc6f478 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2826,6 +2826,16 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
vmx_cap->vpid = 0;
}
+ /*
+ * Virtualizing MBEC requires advanced vmexit information in order to
+ * distinguish supervisor and user accesses. For simplicity and clarity
+ * disable MBEC entirely if advanced vmexit information is not available,
+ * this way mbec=1 in the kvm_intel module parameters implies availability
+ * to nested guests as well.
+ */
+ if (!(vmx_cap->ept & VMX_EPT_ADVANCED_VMEXIT_INFO_BIT))
+ _cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+
if (!cpu_has_sgx())
_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_ENCLS_EXITING;
--
2.53.0
* [PATCH 15/24] KVM: x86/mmu: add support for MBEC to EPT page table walks
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (13 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 14/24] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 16/24] KVM: nVMX: advertise MBEC to nested guests Paolo Bonzini
` (10 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Extend the page walker to move bit 10 of the PTEs into
ACC_USER_EXEC_MASK, and into bit 6 of the exit qualification of
EPT violation VM exits.
Note that while mmu_has_mbec()/cr4_smep affect the interpretation of
ACC_USER_EXEC_MASK and add bit 10 as a "present bit" in guest EPT page
table entries, they do not affect how KVM operates on SPTEs. That's
because the MMU uses explicit ACC_USER_EXEC_MASK/shadow_xu_mask even for
the non-nested EPT; the only difference is that ACC_USER_EXEC_MASK and
ACC_EXEC_MASK will always be set in tandem outside the nested scenario.
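The fetch-fault selection that update_permission_bitmask() gains here
can be summarized as: XU is consulted only when MBEC is enabled and the
fetch came from user mode; every other fetch checks XS. A minimal
sketch of that decision, where the ACC_READ_MASK and ACC_USER_EXEC_MASK
values are stand-ins introduced by this series rather than long-standing
kernel constants:

```c
#include <assert.h>

#define ACC_EXEC_MASK		1u
#define ACC_WRITE_MASK		2u
#define ACC_USER_MASK		4u
#define ACC_READ_MASK		8u	/* added earlier in the series */
#define ACC_USER_EXEC_MASK	16u	/* added by the series */

/* Return the ACC_* bit whose absence makes an instruction fetch fault. */
static unsigned int fetch_checks_bit(int mbec, int user_fetch)
{
	if (mbec && user_fetch)
		return ACC_USER_EXEC_MASK;
	return ACC_EXEC_MASK;
}
```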
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 13 +++++++++++--
arch/x86/kvm/mmu/paging_tmpl.h | 27 +++++++++++++++++++++------
2 files changed, 32 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e768aeb05886..cd2418fe8708 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5551,7 +5551,6 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
{
unsigned byte;
- const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
@@ -5592,8 +5591,18 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
u16 smapf = 0;
if (ept) {
- ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
+ const u16 xs = ACC_BITS_MASK(ACC_EXEC_MASK);
+ const u16 xu = ACC_BITS_MASK(ACC_USER_EXEC_MASK);
+
+ if (pfec & PFERR_FETCH_MASK) {
+ /* Ignore XU unless MBEC is enabled. */
+ if (cr4_smep)
+ ff = pfec & PFERR_USER_MASK ? (u16)~xu : (u16)~xs;
+ else
+ ff = (u16)~xs;
+ }
} else {
+ const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
/* Faults from kernel mode accesses to user pages */
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 09e2e630d4b6..95aa1b4fc327 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -124,12 +124,17 @@ static inline void FNAME(protect_clean_gpte)(struct kvm_mmu *mmu, unsigned *acce
*access &= mask;
}
-static inline int FNAME(is_present_gpte)(unsigned long pte)
+static inline int FNAME(is_present_gpte)(struct kvm_mmu *mmu,
+ unsigned long pte)
{
#if PTTYPE != PTTYPE_EPT
return pte & PT_PRESENT_MASK;
#else
- return pte & 7;
+ /*
+ * For EPT, an entry is present if any of bits 2:0 are set.
+ * With mode-based execute control, bit 10 also indicates presence.
+ */
+ return pte & (7 | (mmu_has_mbec(mmu) ? VMX_EPT_USER_EXECUTABLE_MASK : 0));
#endif
}
@@ -152,7 +157,7 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *sp, u64 *spte,
u64 gpte)
{
- if (!FNAME(is_present_gpte)(gpte))
+ if (!FNAME(is_present_gpte)(vcpu->arch.mmu, gpte))
goto no_present;
/* Prefetch only accessed entries (unless A/D bits are disabled). */
@@ -173,10 +178,17 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
static inline unsigned FNAME(gpte_access)(u64 gpte)
{
unsigned access;
+ /*
+ * Set bits in ACC_*_MASK even if they might not be used in the
+ * actual checks. For example, if EFER.NX is clear permission_fault()
+ * will ignore ACC_EXEC_MASK, and if MBEC is disabled it will
+ * ignore ACC_USER_EXEC_MASK.
+ */
#if PTTYPE == PTTYPE_EPT
access = ((gpte & VMX_EPT_WRITABLE_MASK) ? ACC_WRITE_MASK : 0) |
((gpte & VMX_EPT_EXECUTABLE_MASK) ? ACC_EXEC_MASK : 0) |
- ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0);
+ ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0) |
+ ((gpte & VMX_EPT_USER_EXECUTABLE_MASK) ? ACC_USER_EXEC_MASK : 0);
#else
/*
* P is set here, so the page is always readable and W/U/!NX represent
@@ -331,7 +343,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
if (walker->level == PT32E_ROOT_LEVEL) {
pte = mmu->get_pdptr(vcpu, (addr >> 30) & 3);
trace_kvm_mmu_paging_element(pte, walker->level);
- if (!FNAME(is_present_gpte)(pte))
+ if (!FNAME(is_present_gpte)(mmu, pte))
goto error;
--walker->level;
}
@@ -413,7 +425,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
*/
pte_access = pt_access & (pte ^ walk_nx_mask);
- if (unlikely(!FNAME(is_present_gpte)(pte)))
+ if (unlikely(!FNAME(is_present_gpte)(mmu, pte)))
goto error;
if (unlikely(FNAME(is_rsvd_bits_set)(mmu, pte, walker->level))) {
@@ -518,6 +530,9 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
* ACC_*_MASK flags!
*/
walker->fault.exit_qualification |= EPT_VIOLATION_RWX_TO_PROT(pte_access);
+ if (mmu_has_mbec(mmu))
+ walker->fault.exit_qualification |=
+ EPT_VIOLATION_USER_EXEC_TO_PROT(pte_access);
}
#endif
walker->fault.address = addr;
--
2.53.0
* [PATCH 16/24] KVM: nVMX: advertise MBEC to nested guests
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (14 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 15/24] KVM: x86/mmu: add support for MBEC to EPT page table walks Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 17/24] KVM: nVMX: allow MBEC with EVMCS Paolo Bonzini
` (9 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
Advertise SECONDARY_EXEC_MODE_BASED_EPT_EXEC (MBEC) to userspace, which
allows userspace to expose and advertise the feature to the guest.
When MBEC is enabled by the guest, it is passed to the MMU via cr4_smep,
and to the processor by the merging of vmcs12->secondary_vm_exec_control
into the VMCS02's secondary VM execution controls.
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-9-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 7 ++++---
arch/x86/kvm/vmx/nested.c | 11 +++++++++++
3 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 678ce021991f..fa1942b126fb 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -93,7 +93,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
unsigned long cr4, u64 efer, gpa_t nested_cr3);
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
int huge_page_level, bool accessed_dirty,
- gpa_t new_eptp);
+ bool mbec, gpa_t new_eptp);
bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
u64 fault_address, char *insn, int insn_len);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cd2418fe8708..442cbaeaf547 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5940,7 +5940,7 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_shadow_npt_mmu);
static union kvm_cpu_role
kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
- bool execonly, u8 level)
+ bool execonly, u8 level, bool mbec)
{
union kvm_cpu_role role = {0};
@@ -5950,6 +5950,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
*/
WARN_ON_ONCE(is_smm(vcpu));
role.base.level = level;
+ role.base.cr4_smep = mbec;
role.base.has_4_byte_gpte = false;
role.base.direct = false;
role.base.ad_disabled = !accessed_dirty;
@@ -5965,13 +5966,13 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
int huge_page_level, bool accessed_dirty,
- gpa_t new_eptp)
+ bool mbec, gpa_t new_eptp)
{
struct kvm_mmu *context = &vcpu->arch.guest_mmu;
u8 level = vmx_eptp_page_walk_level(new_eptp);
union kvm_cpu_role new_mode =
kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
- execonly, level);
+ execonly, level, mbec);
if (new_mode.as_u64 != context->cpu_role.as_u64) {
/* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 4b742a19bfde..1e84ca353cec 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -469,6 +469,13 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
vmcs12->guest_physical_address = fault->address;
}
+static inline bool nested_ept_mbec_enabled(struct kvm_vcpu *vcpu)
+{
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+
+ return nested_cpu_has2(vmcs12, SECONDARY_EXEC_MODE_BASED_EPT_EXEC);
+}
+
static void nested_ept_new_eptp(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -477,6 +484,7 @@ static void nested_ept_new_eptp(struct kvm_vcpu *vcpu)
kvm_init_shadow_ept_mmu(vcpu, execonly, ept_lpage_level,
nested_ept_ad_enabled(vcpu),
+ nested_ept_mbec_enabled(vcpu),
nested_ept_get_eptp(vcpu));
}
@@ -7255,6 +7263,9 @@ static void nested_vmx_setup_secondary_ctls(u32 ept_caps,
msrs->ept_caps |= VMX_EPT_AD_BIT;
}
+ if (cpu_has_ept_mbec())
+ msrs->secondary_ctls_high |=
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
/*
* Advertise EPTP switching irrespective of hardware support,
* KVM emulates it in software so long as VMFUNC is supported.
--
2.53.0
* [PATCH 17/24] KVM: nVMX: allow MBEC with EVMCS
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (15 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 16/24] KVM: nVMX: advertise MBEC to nested guests Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 18/24] KVM: x86/mmu: propagate access mask from root pages down Paolo Bonzini
` (8 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
Extend EVMCS1_SUPPORTED_2NDEXEC to allow MBEC and EVMCS to coexist.
Presenting both EVMCS and MBEC simultaneously causes KVM to filter out
MBEC and not present it as a supported control to the guest, preventing
performance gains from MBEC when Windows HVCI is enabled.
The guest may choose not to use MBEC (e.g., if the admin does not enable
Windows HVCI / Memory Integrity), but if it uses traditional nested
virt (Hyper-V, WSL2, etc.), having EVMCS exposed is important for
improving nested guest performance. IOW, allowing MBEC and EVMCS to
coexist provides maximum optionality to Windows users without
overcomplicating VM administration.
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-8-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/vmx/hyperv_evmcs.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/vmx/hyperv_evmcs.h b/arch/x86/kvm/vmx/hyperv_evmcs.h
index fc7c4e7bd1bf..bc08fe40590e 100644
--- a/arch/x86/kvm/vmx/hyperv_evmcs.h
+++ b/arch/x86/kvm/vmx/hyperv_evmcs.h
@@ -87,6 +87,7 @@
SECONDARY_EXEC_PT_CONCEAL_VMX | \
SECONDARY_EXEC_BUS_LOCK_DETECTION | \
SECONDARY_EXEC_NOTIFY_VM_EXITING | \
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC | \
SECONDARY_EXEC_ENCLS_EXITING)
#define EVMCS1_SUPPORTED_3RDEXEC (0ULL)
--
2.53.0
* [PATCH 18/24] KVM: x86/mmu: propagate access mask from root pages down
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (16 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 17/24] KVM: nVMX: allow MBEC with EVMCS Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 19/24] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D Paolo Bonzini
` (7 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Until now, all SPTEs have had all kinds of access allowed; however,
for GMET to be enabled, all pages have to have ACC_USER_MASK
cleared. By marking them as supervisor pages, the processor
allows execution from either user or supervisor mode (unlike
normal paging, NPT ignores the U bit for reads and writes).
This means that the root page's role has ACC_USER_MASK
cleared, and that has to be propagated down through the kvm_mmu_page
tree.
Do that, and pass the required access to the
kvm_mmu_spte_requested tracepoint, since it is no longer ACC_ALL.
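The GMET execute semantics motivating the ACC_USER_MASK change can be
modeled as a small truth table, following the encoding described in the
cover letter (for a final NPT entry with NX clear): U=0 allows execution
from both supervisor and user mode, U=1 from user mode only. This is a
hedged illustration of the encoding, not kernel code:

```c
#include <assert.h>

/* Model: with NX=0, a supervisor page (U=0) executes in both modes,
 * while a user page (U=1) executes only on user-mode fetches. */
static int gmet_exec_allowed(int user_bit, int user_mode_fetch)
{
	if (user_bit)
		return user_mode_fetch;	/* user-mode only execution */
	return 1;			/* executable in both modes */
}
```

This is why clearing ACC_USER_MASK on every page keeps execution working
for both modes while GMET is active.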
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 9 +++++----
arch/x86/kvm/mmu/mmutrace.h | 10 ++++++----
arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
4 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 442cbaeaf547..834ba9c0c809 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3434,12 +3434,13 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
struct kvm_shadow_walk_iterator it;
struct kvm_mmu_page *sp;
- int ret;
+ int ret, access;
gfn_t base_gfn = fault->gfn;
kvm_mmu_hugepage_adjust(vcpu, fault);
- trace_kvm_mmu_spte_requested(fault);
+ access = vcpu->arch.mmu->root_role.access;
+ trace_kvm_mmu_spte_requested(fault, access);
for_each_shadow_entry(vcpu, fault->addr, it) {
/*
* We cannot overwrite existing page tables with an NX
@@ -3452,7 +3453,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (it.level == fault->goal_level)
break;
- sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
+ sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, access);
if (sp == ERR_PTR(-EEXIST))
continue;
@@ -3465,7 +3466,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (WARN_ON_ONCE(it.level != fault->goal_level))
return -EFAULT;
- ret = mmu_set_spte(vcpu, fault->slot, it.sptep, ACC_ALL,
+ ret = mmu_set_spte(vcpu, fault->slot, it.sptep, access,
base_gfn, fault->pfn, fault);
if (ret == RET_PF_SPURIOUS)
return ret;
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 3429c1413f42..fa01719baf8d 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -373,23 +373,25 @@ TRACE_EVENT(
TRACE_EVENT(
kvm_mmu_spte_requested,
- TP_PROTO(struct kvm_page_fault *fault),
- TP_ARGS(fault),
+ TP_PROTO(struct kvm_page_fault *fault, u8 access),
+ TP_ARGS(fault, access),
TP_STRUCT__entry(
__field(u64, gfn)
__field(u64, pfn)
__field(u8, level)
+ __field(u8, access)
),
TP_fast_assign(
__entry->gfn = fault->gfn;
__entry->pfn = fault->pfn | (fault->gfn & (KVM_PAGES_PER_HPAGE(fault->goal_level) - 1));
__entry->level = fault->goal_level;
+ __entry->access = access;
),
- TP_printk("gfn %llx pfn %llx level %d",
- __entry->gfn, __entry->pfn, __entry->level
+ TP_printk("gfn %llx pfn %llx level %d access %x",
+ __entry->gfn, __entry->pfn, __entry->level, __entry->access
)
);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 95aa1b4fc327..31331fe10723 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -731,7 +731,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
*/
kvm_mmu_hugepage_adjust(vcpu, fault);
- trace_kvm_mmu_spte_requested(fault);
+ trace_kvm_mmu_spte_requested(fault, gw->pte_access);
for (; shadow_walk_okay(&it); shadow_walk_next(&it)) {
/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 9c26038f6b77..25e557de99d6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1185,9 +1185,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
}
if (unlikely(!fault->slot))
- new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
+ new_spte = make_mmio_spte(vcpu, iter->gfn, sp->role.access);
else
- wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
+ wrprot = make_spte(vcpu, sp, fault->slot, sp->role.access, iter->gfn,
fault->pfn, iter->old_spte, fault->prefetch,
false, fault->map_writable, &new_spte);
@@ -1272,7 +1272,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
kvm_mmu_hugepage_adjust(vcpu, fault);
- trace_kvm_mmu_spte_requested(fault);
+ trace_kvm_mmu_spte_requested(fault, root->role.access);
rcu_read_lock();
--
2.53.0
* [PATCH 19/24] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (17 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 18/24] KVM: x86/mmu: propagate access mask from root pages down Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 20/24] KVM: SVM: add GMET bit definitions Paolo Bonzini
` (6 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
While GMET looks a lot like SMEP, it has several annoying differences.
The main one is that the availability of the I/D bit in the page fault
error code still depends on the host CR4.SMEP and EFER.NXE bits. If the
base.cr4_smep bit of the cpu_role is (ab)used to enable GMET, the host
CR4.SMEP value needs to live somewhere else; just merge it with EFER.NXE
into a new cpu_role bit that tells paging_tmpl.h whether to set the I/D
bit at all.
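A minimal sketch of the rule this encodes (standalone C, not the
actual KVM helpers): the new role bit is just the OR of EFER.NXE and
the host CR4.SMEP, and the walker's error path sets PFEC.I/D only
when that bit is set:

```c
/* Standalone sketch of the PFEC.I/D availability rule; not KVM code. */
#include <assert.h>
#include <stdbool.h>

#define PFERR_FETCH_MASK (1u << 4)

/* Precomputed once per MMU role, as in the patch. */
static bool has_pferr_fetch(bool efer_nx, bool host_cr4_smep)
{
	return efer_nx || host_cr4_smep;
}

/* Mirrors the error path of FNAME(walk_addr_generic). */
static unsigned walker_errcode(unsigned errcode, bool fetch_fault,
			       bool efer_nx, bool host_cr4_smep)
{
	if (fetch_fault && has_pferr_fetch(efer_nx, host_cr4_smep))
		errcode |= PFERR_FETCH_MASK;
	return errcode;
}
```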
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 7 +++++++
arch/x86/kvm/mmu/mmu.c | 8 ++++++++
arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 50a941ff61d1..df46ee605b9b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -413,6 +413,13 @@ union kvm_mmu_extended_role {
unsigned int cr4_smap:1;
unsigned int cr4_la57:1;
unsigned int efer_lma:1;
+
+ /*
+ * True if either CR4.SMEP or EFER.NXE are set. For AMD NPT
+ * this is the "real" host CR4.SMEP whereas cr4_smep is
+ * actually GMET.
+ */
+ unsigned int has_pferr_fetch:1;
};
};
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 834ba9c0c809..94d7e39a9417 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -234,6 +234,11 @@ BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma);
+static inline bool has_pferr_fetch(struct kvm_mmu *mmu)
+{
+ return mmu->cpu_role.ext.has_pferr_fetch;
+}
+
static inline bool is_cr0_pg(struct kvm_mmu *mmu)
{
return mmu->cpu_role.base.level > 0;
@@ -5774,6 +5779,8 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
role.ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
role.ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
role.ext.efer_lma = ____is_efer_lma(regs);
+
+ role.ext.has_pferr_fetch = role.base.efer_nx | role.base.cr4_smep;
return role;
}
@@ -5927,6 +5934,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
/* NPT requires CR0.PG=1. */
WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
+ cpu_role.base.cr4_smep = false;
root_role = cpu_role.base;
root_role.level = kvm_mmu_get_tdp_level(vcpu);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 31331fe10723..8ea248e1918b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -486,7 +486,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
error:
errcode |= write_fault | user_fault;
- if (fetch_fault && (is_efer_nx(mmu) || is_cr4_smep(mmu)))
+ if (fetch_fault && has_pferr_fetch(mmu))
errcode |= PFERR_FETCH_MASK;
walker->fault.vector = PF_VECTOR;
--
2.53.0
* [PATCH 20/24] KVM: SVM: add GMET bit definitions
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (18 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 19/24] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 21/24] KVM: x86/mmu: add support for GMET to NPT page table walks Paolo Bonzini
` (5 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti, Borislav Petkov (AMD)
GMET (Guest Mode Execute Trap) is an AMD virtualization feature,
essentially the nested paging version of SMEP. Hyper-V uses it;
add it in preparation for making it available to hypervisors
running under KVM.
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/svm.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dbe104df339b..9f876fbdcc3a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -379,6 +379,7 @@
#define X86_FEATURE_AVIC (15*32+13) /* "avic" Virtual Interrupt Controller */
#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* "v_vmsave_vmload" Virtual VMSAVE VMLOAD */
#define X86_FEATURE_VGIF (15*32+16) /* "vgif" Virtual GIF */
+#define X86_FEATURE_GMET (15*32+17) /* Guest Mode Execution Trap */
#define X86_FEATURE_X2AVIC (15*32+18) /* "x2avic" Virtual x2apic */
#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */
#define X86_FEATURE_VNMI (15*32+25) /* "vnmi" Virtual NMI */
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index edde36097ddc..03e9e0112b10 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -242,6 +242,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_NESTED_CTL_NP_ENABLE BIT(0)
#define SVM_NESTED_CTL_SEV_ENABLE BIT(1)
#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT(2)
+#define SVM_NESTED_CTL_GMET_ENABLE BIT(3)
#define SVM_TSC_RATIO_RSVD 0xffffff0000000000ULL
--
2.53.0
* [PATCH 21/24] KVM: x86/mmu: add support for GMET to NPT page table walks
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (19 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 20/24] KVM: SVM: add GMET bit definitions Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 22/24] KVM: SVM: enable GMET and set it in MMU role Paolo Bonzini
` (4 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
GMET allows page table entries to be created with U=0 in NPT.
However, when GMET=1, U=0 affects only execution, not reads or
writes. Therefore, ignore user faults on non-fetch accesses for
NPT with GMET.
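A sketch of the resulting fault rule (toy C, semantics paraphrased
from this patch and the cover letter, not authoritative): under NPT
with GMET, the U bit is consulted only for fetches, with U=0
executable from both modes:

```c
/* Toy model of the NPT+GMET U-bit rule; not KVM code. */
#include <assert.h>
#include <stdbool.h>

enum access_type { TOY_READ, TOY_WRITE, TOY_FETCH };

static bool npt_gmet_u_faults(enum access_type type, bool pte_user,
			      bool user_mode)
{
	if (type != TOY_FETCH)
		return false;	/* U ignored for reads and writes */
	if (!pte_user)
		return false;	/* U=0: executable from any mode */
	return !user_mode;	/* U=1: supervisor fetch traps (SMEP-like) */
}
```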
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.h | 3 ++-
arch/x86/kvm/mmu/mmu.c | 19 +++++++++++++------
arch/x86/kvm/svm/nested.c | 3 ++-
4 files changed, 19 insertions(+), 8 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index df46ee605b9b..2a26c8fe3f4b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -369,6 +369,8 @@ union kvm_mmu_page_role {
* cr4_smep is also set for EPT MBEC. Because it affects
* which pages are considered non-present (bit 10 additionally
* must be zero if MBEC is on) it has to be in the base role.
+ * It also has to be in the base role for AMD GMET because
+ * kernel-executable pages need to have U=0 with GMET enabled.
*/
unsigned cr4_smep:1;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index fa1942b126fb..ddca3e3e4eb2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -90,7 +90,8 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_mbec);
void kvm_init_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
- unsigned long cr4, u64 efer, gpa_t nested_cr3);
+ unsigned long cr4, u64 efer, gpa_t nested_cr3,
+ u64 nested_ctl);
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
int huge_page_level, bool accessed_dirty,
bool mbec, gpa_t new_eptp);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 94d7e39a9417..d9eb059d24de 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -55,6 +55,7 @@
#include <asm/io.h>
#include <asm/set_memory.h>
#include <asm/spec-ctrl.h>
+#include <asm/svm.h>
#include <asm/vmx.h>
#include "trace.h"
@@ -5553,7 +5554,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
(14 & (access) ? 1 << 14 : 0) | \
(15 & (access) ? 1 << 15 : 0))
-static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
+static void update_permission_bitmask(struct kvm_mmu *mmu, bool tdp, bool ept)
{
unsigned byte;
@@ -5614,7 +5615,12 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
/* Faults from kernel mode accesses to user pages */
u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
- uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
+ /*
+ * For NPT GMET, U=0 does not affect reads and writes. Fetches
+ * are handled below via cr4_smep.
+ */
+ if (!(tdp && cr4_smep))
+ uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
if (efer_nx)
ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
@@ -5725,7 +5731,7 @@ static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu,
return;
reset_guest_rsvds_bits_mask(vcpu, mmu);
- update_permission_bitmask(mmu, false);
+ update_permission_bitmask(mmu, mmu == &vcpu->arch.guest_mmu, false);
update_pkru_bitmask(mmu);
}
@@ -5921,7 +5927,8 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
}
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
- unsigned long cr4, u64 efer, gpa_t nested_cr3)
+ unsigned long cr4, u64 efer, gpa_t nested_cr3,
+ u64 nested_ctl)
{
struct kvm_mmu *context = &vcpu->arch.guest_mmu;
struct kvm_mmu_role_regs regs = {
@@ -5934,7 +5941,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
/* NPT requires CR0.PG=1. */
WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
- cpu_role.base.cr4_smep = false;
+ cpu_role.base.cr4_smep = (nested_ctl & SVM_NESTED_CTL_GMET_ENABLE) != 0;
root_role = cpu_role.base;
root_role.level = kvm_mmu_get_tdp_level(vcpu);
@@ -5992,7 +5999,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
context->gva_to_gpa = ept_gva_to_gpa;
context->sync_spte = ept_sync_spte;
- update_permission_bitmask(context, true);
+ update_permission_bitmask(context, true, true);
context->pkru_mask = 0;
reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level);
reset_ept_shadow_zero_bits_mask(context, execonly);
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b36c33255bed..99edcca7ee64 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -95,7 +95,8 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu)
*/
kvm_init_shadow_npt_mmu(vcpu, X86_CR0_PG, svm->vmcb01.ptr->save.cr4,
svm->vmcb01.ptr->save.efer,
- svm->nested.ctl.nested_cr3);
+ svm->nested.ctl.nested_cr3,
+ svm->nested.ctl.nested_ctl);
vcpu->arch.mmu->get_guest_pgd = nested_svm_get_tdp_cr3;
vcpu->arch.mmu->get_pdptr = nested_svm_get_tdp_pdptr;
vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit;
--
2.53.0
* [PATCH 22/24] KVM: SVM: enable GMET and set it in MMU role
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (20 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 21/24] KVM: x86/mmu: add support for GMET to NPT page table walks Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 23/24] KVM: SVM: work around errata 1218 Paolo Bonzini
` (3 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Set the GMET bit in the nested control field. This has effectively
no impact as long as NPT page tables are also changed to have U=0,
which the MMU role change ensures.
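The role computation can be sketched as follows (toy standalone C,
not the kernel code; "gmet" stands in for the new role.cr4_smep
setting and "have_user_mask" for shadow_user_mask being nonzero):

```c
/* Toy sketch of the TDP root role access computation; not KVM code. */
#include <assert.h>
#include <stdbool.h>

#define ACC_EXEC_MASK  1u
#define ACC_WRITE_MASK 2u
#define ACC_USER_MASK  4u
#define ACC_ALL        (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)

/* With GMET in use, all TDP pages become supervisor pages (U=0),
 * which GMET leaves executable from both modes. */
static unsigned tdp_root_access(bool gmet, bool have_user_mask)
{
	unsigned access = ACC_ALL;

	if (gmet && have_user_mask)
		access &= ~ACC_USER_MASK;
	return access;
}
```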
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 6 +++++-
arch/x86/kvm/svm/nested.c | 7 +++++--
arch/x86/kvm/svm/svm.c | 16 ++++++++++++++++
arch/x86/kvm/svm/svm.h | 1 +
4 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d9eb059d24de..51eb3e69f3a8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5836,7 +5836,6 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
{
union kvm_mmu_page_role role = {0};
- role.access = ACC_ALL;
role.cr0_wp = true;
role.cr4_smep = kvm_x86_call(tdp_has_smep)(vcpu->kvm);
role.efer_nx = true;
@@ -5847,6 +5846,11 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
role.direct = true;
role.has_4_byte_gpte = false;
+ /* All TDP pages are supervisor-executable */
+ role.access = ACC_ALL;
+ if (role.cr4_smep && shadow_user_mask)
+ role.access &= ~ACC_USER_MASK;
+
return role;
}
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 99edcca7ee64..4c7bc0e7f908 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -829,9 +829,12 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
/* Also overwritten later if necessary. */
vmcb02->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
- /* nested_cr3. */
- if (nested_npt_enabled(svm))
+ /* Use vmcb01 MMU and format if guest does not use nNPT */
+ if (nested_npt_enabled(svm)) {
+ vmcb02->control.nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
+
nested_svm_init_mmu_context(vcpu);
+ }
vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
vcpu->arch.l1_tsc_offset,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e6477affac9a..1705e3cafcb0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -135,6 +135,9 @@ module_param(pause_filter_count_max, ushort, 0444);
bool npt_enabled = true;
module_param_named(npt, npt_enabled, bool, 0444);
+bool gmet_enabled = true;
+module_param_named(gmet, gmet_enabled, bool, 0444);
+
/* allow nested virtualization in KVM/SVM */
static int nested = true;
module_param(nested, int, 0444);
@@ -1170,6 +1173,10 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
save->g_pat = vcpu->arch.pat;
save->cr3 = 0;
}
+
+ if (gmet_enabled)
+ control->nested_ctl |= SVM_NESTED_CTL_GMET_ENABLE;
+
svm->current_vmcb->asid_generation = 0;
svm->asid = 0;
@@ -4475,6 +4482,11 @@ svm_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
hypercall[2] = 0xd9;
}
+static bool svm_tdp_has_smep(struct kvm *kvm)
+{
+ return gmet_enabled;
+}
+
/*
* The kvm parameter can be NULL (module initialization, or invocation before
* VM creation). Be sure to check the kvm parameter before using it.
@@ -5224,6 +5236,7 @@ struct kvm_x86_ops svm_x86_ops __initdata = {
.write_tsc_multiplier = svm_write_tsc_multiplier,
.load_mmu_pgd = svm_load_mmu_pgd,
+ .tdp_has_smep = svm_tdp_has_smep,
.check_intercept = svm_check_intercept,
.handle_exit_irqoff = svm_handle_exit_irqoff,
@@ -5464,6 +5477,9 @@ static __init int svm_hardware_setup(void)
if (!boot_cpu_has(X86_FEATURE_NPT))
npt_enabled = false;
+ if (!npt_enabled || !boot_cpu_has(X86_FEATURE_GMET))
+ gmet_enabled = false;
+
/* Force VM NPT level equal to the host's paging level */
kvm_configure_mmu(npt_enabled, get_npt_level(),
get_npt_level(), PG_LEVEL_1G);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6942e6b0eda6..41042379aa48 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -44,6 +44,7 @@ static inline struct page *__sme_pa_to_page(unsigned long pa)
#define IOPM_SIZE PAGE_SIZE * 3
#define MSRPM_SIZE PAGE_SIZE * 2
+extern bool gmet_enabled;
extern bool npt_enabled;
extern int nrips;
extern int vgif;
--
2.53.0
* [PATCH 23/24] KVM: SVM: work around errata 1218
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (21 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 22/24] KVM: SVM: enable GMET and set it in MMU role Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 24/24] KVM: nSVM: enable GMET for guests Paolo Bonzini
` (2 subsequent siblings)
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
According to AMD, the hypervisor may not be able to determine whether a
fault was a GMET fault or an NX fault based on EXITINFO1, and software
"must read the relevant VMCB to determine whether a fault was a GMET
fault or an NX fault". The APM further clarifies that this refers to
the CPL field.
KVM uses the page fault error code to distinguish the causes of a
nested page fault, so recalculate the PFERR_USER_MASK bit of the
vmexit information. Only do it for fetches and only if GMET is in
use, because KVM does not differentiate based on PFERR_USER_MASK
for other nested NPT page faults.
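The workaround amounts to recomputing the user bit from the CPL on
GMET fetch faults; a standalone sketch (not the kernel code, which
operates on the VMCB's EXITINFO1 directly):

```c
/* Standalone sketch of the erratum 1218 workaround; not KVM code. */
#include <assert.h>
#include <stdbool.h>

#define PFERR_USER_MASK  (1u << 2)
#define PFERR_FETCH_MASK (1u << 4)

static unsigned fixup_gmet_error_code(unsigned error_code, bool gmet, int cpl)
{
	if (gmet && (error_code & PFERR_FETCH_MASK)) {
		/* EXITINFO1[2] may be wrong; trust the CPL instead. */
		if (cpl == 0)
			error_code &= ~PFERR_USER_MASK;
		else
			error_code |= PFERR_USER_MASK;
	}
	return error_code;
}
```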
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/svm.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1705e3cafcb0..700090c3408c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1957,6 +1957,17 @@ static int npf_interception(struct kvm_vcpu *vcpu)
}
}
+ if ((svm->vmcb->control.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE) &&
+ (error_code & PFERR_FETCH_MASK)) {
+ /*
+ * Work around errata 1218: EXITINFO1[2] May Be Incorrectly Set
+ * When GMET (Guest Mode Execute Trap extension) is Enabled
+ */
+ error_code |= PFERR_USER_MASK;
+ if (svm_get_cpl(vcpu) == 0)
+ error_code &= ~PFERR_USER_MASK;
+ }
+
if (sev_snp_guest(vcpu->kvm) && (error_code & PFERR_GUEST_ENC_MASK))
error_code |= PFERR_PRIVATE_ACCESS;
--
2.53.0
* [PATCH 24/24] KVM: nSVM: enable GMET for guests
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (22 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 23/24] KVM: SVM: work around errata 1218 Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-26 18:17 ` [PATCH 25/24] stats hack Paolo Bonzini
2026-03-30 2:27 ` [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Jon Kohler
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
All that needs to be done is moving the GMET bit from vmcb12 to
vmcb02. The only new thing is that __nested_vmcb_check_controls
now ensures that ignored-if-unavailable bits are zero in
svm->nested.ctl.
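The gating can be sketched as (toy standalone C, not the kernel
code; bit 3 is SVM_NESTED_CTL_GMET_ENABLE, and the two booleans
stand in for the module parameter and the guest CPUID capability):

```c
/* Toy sketch of nested GMET control filtering; not KVM code. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SVM_NESTED_CTL_GMET_ENABLE (1ull << 3)

/* L1's GMET request survives into vmcb02 only if the module
 * parameter is on and the guest CPUID advertises GMET. */
static uint64_t nested_gmet_ctl(uint64_t l1_nested_ctl, bool gmet_enabled,
				bool guest_has_gmet)
{
	if (!gmet_enabled || !guest_has_gmet)
		l1_nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
	return l1_nested_ctl;
}
```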
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/nested.c | 6 ++++++
arch/x86/kvm/svm/svm.c | 3 +++
2 files changed, 9 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 4c7bc0e7f908..235477bac7e7 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -343,6 +343,8 @@ static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa, u32 size)
static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
struct vmcb_ctrl_area_cached *control)
{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
if (CC(!vmcb12_is_intercept(control, INTERCEPT_VMRUN)))
return false;
@@ -364,6 +366,9 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
return false;
}
+ if (!gmet_enabled || !guest_cpu_cap_has(vcpu, X86_FEATURE_GMET))
+ svm->nested.ctl.nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
+
return true;
}
@@ -832,6 +837,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
/* Use vmcb01 MMU and format if guest does not use nNPT */
if (nested_npt_enabled(svm)) {
vmcb02->control.nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
+ vmcb02->control.nested_ctl |= (svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE);
nested_svm_init_mmu_context(vcpu);
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 700090c3408c..430e4f4ef55b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5372,6 +5372,9 @@ static __init void svm_set_cpu_caps(void)
if (boot_cpu_has(X86_FEATURE_PFTHRESHOLD))
kvm_cpu_cap_set(X86_FEATURE_PFTHRESHOLD);
+ if (boot_cpu_has(X86_FEATURE_GMET))
+ kvm_cpu_cap_set(X86_FEATURE_GMET);
+
if (vgif)
kvm_cpu_cap_set(X86_FEATURE_VGIF);
--
2.53.0
* [PATCH 25/24] stats hack
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (23 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 24/24] KVM: nSVM: enable GMET for guests Paolo Bonzini
@ 2026-03-26 18:17 ` Paolo Bonzini
2026-03-30 2:27 ` [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Jon Kohler
25 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-26 18:17 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 2 +-
arch/x86/kvm/x86.c | 1 +
4 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2a26c8fe3f4b..1bd12c03c319 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1705,6 +1705,7 @@ struct kvm_vcpu_stat {
u64 nmi_injections;
u64 req_event;
u64 nested_run;
+ u64 nested_run_gmet;
u64 directed_yield_attempted;
u64 directed_yield_successful;
u64 preemption_reported;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 430e4f4ef55b..705bfb98ebfc 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4424,7 +4424,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
/* Track VMRUNs that have made past consistency checking */
if (svm->nested.nested_run_pending &&
!svm_is_vmrun_failure(svm->vmcb->control.exit_code))
- ++vcpu->stat.nested_run;
+ ++vcpu->stat.nested_run, vcpu->stat.nested_run_gmet += !!(svm->vmcb->control.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE);
svm->nested.nested_run_pending = 0;
}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 65892dc6f478..262cf8a69bb8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7749,7 +7749,7 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
*/
if (vmx->nested.nested_run_pending &&
!vmx_get_exit_reason(vcpu).failed_vmentry)
- ++vcpu->stat.nested_run;
+ ++vcpu->stat.nested_run, vcpu->stat.nested_run_gmet += !!nested_cpu_has2(get_vmcs12(vcpu), SECONDARY_EXEC_MODE_BASED_EPT_EXEC);
vmx->nested.nested_run_pending = 0;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd1c4a36b593..09e4b53f34f8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -299,6 +299,7 @@ const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
STATS_DESC_COUNTER(VCPU, nmi_injections),
STATS_DESC_COUNTER(VCPU, req_event),
STATS_DESC_COUNTER(VCPU, nested_run),
+ STATS_DESC_COUNTER(VCPU, nested_run_gmet),
STATS_DESC_COUNTER(VCPU, directed_yield_attempted),
STATS_DESC_COUNTER(VCPU, directed_yield_successful),
STATS_DESC_COUNTER(VCPU, preemption_reported),
--
2.53.0
* Re: [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK
2026-03-26 18:17 ` [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK Paolo Bonzini
@ 2026-03-27 4:06 ` Jon Kohler
2026-03-30 4:12 ` kernel test robot
1 sibling, 0 replies; 32+ messages in thread
From: Jon Kohler @ 2026-03-27 4:06 UTC (permalink / raw)
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Nikunj A Dadhania, amit.shah@amd.com, Sean Christopherson,
Marcelo Tosatti
> On Mar 26, 2026, at 2:17 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> Read permissions so far were only needed for EPT, which does not need
> ACC_USER_MASK. Therefore, for EPT page tables ACC_USER_MASK was repurposed
> as a read permission bit.
>
> In order to implement nested MBEC, EPT will genuinely have four kinds of
> accesses, and there will be no room for such hacks; bite the bullet at
> last, enlarging ACC_ALL to four bits and permissions[] to 2^4 bits (u16).
>
> The new code does not enforce that the XWR bits on non-execonly processors
> have their R bit set, even when running nested: none of the shadow_*_mask
> values have bit 0 set, and make_spte() genuinely relies on ACC_READ_MASK
> being requested! This works becase, if execonly is not supported by the
nit: becase -> because
> processor, shadow EPT will generate an EPT misconfig vmexit if the XWR
> bits represent a non-readable page, and therefore the pte_access argument
> to make_spte() will also always have ACC_READ_MASK set.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/include/asm/kvm_host.h | 12 +++++-----
> arch/x86/kvm/mmu.h | 2 +-
> arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++++++------------
> arch/x86/kvm/mmu/mmutrace.h | 3 ++-
> arch/x86/kvm/mmu/paging_tmpl.h | 35 +++++++++++++++++------------
> arch/x86/kvm/mmu/spte.c | 18 ++++++---------
> arch/x86/kvm/mmu/spte.h | 5 +++--
> arch/x86/kvm/vmx/capabilities.h | 5 -----
> arch/x86/kvm/vmx/common.h | 5 +----
> arch/x86/kvm/vmx/vmx.c | 3 +--
> 10 files changed, 67 insertions(+), 60 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6e4e3ef9b8c7..65671d3769f0 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -327,11 +327,11 @@ struct kvm_kernel_irq_routing_entry;
> * the number of unique SPs that can theoretically be created is 2^n, where n
> * is the number of bits that are used to compute the role.
> *
> - * But, even though there are 20 bits in the mask below, not all combinations
> + * But, even though there are 21 bits in the mask below, not all combinations
> * of modes and flags are possible:
> *
> * - invalid shadow pages are not accounted, mirror pages are not shadowed,
> - * so the bits are effectively 18.
> + * so the bits are effectively 19.
> *
> * - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
> * execonly and ad_disabled are only used for nested EPT which has
> @@ -346,7 +346,7 @@ struct kvm_kernel_irq_routing_entry;
> * cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
> *
> * Therefore, the maximum number of possible upper-level shadow pages for a
> - * single gfn is a bit less than 2^13.
> + * single gfn is a bit less than 2^14.
> */
> union kvm_mmu_page_role {
> u32 word;
> @@ -355,7 +355,7 @@ union kvm_mmu_page_role {
> unsigned has_4_byte_gpte:1;
> unsigned quadrant:2;
> unsigned direct:1;
> - unsigned access:3;
> + unsigned access:4;
> unsigned invalid:1;
> unsigned efer_nx:1;
> unsigned cr0_wp:1;
> @@ -365,7 +365,7 @@ union kvm_mmu_page_role {
> unsigned guest_mode:1;
> unsigned passthrough:1;
> unsigned is_mirror:1;
> - unsigned :4;
> + unsigned:3;
>
> /*
> * This is left at the top of the word so that
> @@ -491,7 +491,7 @@ struct kvm_mmu {
> * Byte index: page fault error code [4:1]
> * Bit index: pte permissions in ACC_* format
> */
> - u8 permissions[16];
> + u16 permissions[16];
>
> u64 *pae_root;
> u64 *pml4_root;
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 830f46145692..23f37535c0ce 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -81,7 +81,7 @@ u8 kvm_mmu_get_max_tdp_level(void);
> void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
> void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
> void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
> -void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
> +void kvm_mmu_set_ept_masks(bool has_ad_bits);
>
> void kvm_init_mmu(struct kvm_vcpu *vcpu);
> void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 170952a840db..5f578435b5ad 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2033,7 +2033,7 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> */
> const union kvm_mmu_page_role sync_role_ign = {
> .level = 0xf,
> - .access = 0x7,
> + .access = ACC_ALL,
> .quadrant = 0x3,
> .passthrough = 0x1,
> };
> @@ -5527,7 +5527,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
> * update_permission_bitmask() builds what is effectively a
> * two-dimensional array of bools. The second dimension is
> * provided by individual bits of permissions[pfec >> 1], and
> - * logical &, | and ~ operations operate on all the 8 possible
> + * logical &, | and ~ operations operate on all the 16 possible
> * combinations of ACC_* bits.
> */
> #define ACC_BITS_MASK(access) \
> @@ -5537,15 +5537,24 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
> (4 & (access) ? 1 << 4 : 0) | \
> (5 & (access) ? 1 << 5 : 0) | \
> (6 & (access) ? 1 << 6 : 0) | \
> - (7 & (access) ? 1 << 7 : 0))
> + (7 & (access) ? 1 << 7 : 0) | \
> + (8 & (access) ? 1 << 8 : 0) | \
> + (9 & (access) ? 1 << 9 : 0) | \
> + (10 & (access) ? 1 << 10 : 0) | \
> + (11 & (access) ? 1 << 11 : 0) | \
> + (12 & (access) ? 1 << 12 : 0) | \
> + (13 & (access) ? 1 << 13 : 0) | \
> + (14 & (access) ? 1 << 14 : 0) | \
> + (15 & (access) ? 1 << 15 : 0))
>
> static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
> {
> unsigned byte;
>
> - const u8 x = ACC_BITS_MASK(ACC_EXEC_MASK);
> - const u8 w = ACC_BITS_MASK(ACC_WRITE_MASK);
> - const u8 u = ACC_BITS_MASK(ACC_USER_MASK);
> + const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
> + const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
> + const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
> + const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
>
> bool cr4_smep = is_cr4_smep(mmu);
> bool cr4_smap = is_cr4_smap(mmu);
> @@ -5568,24 +5577,26 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
> unsigned pfec = byte << 1;
>
> /*
> - * Each "*f" variable has a 1 bit for each UWX value
> + * Each "*f" variable has a 1 bit for each ACC_* combo
> * that causes a fault with the given PFEC.
> */
>
> + /* Faults from reads to non-readable pages */
> + u16 rf = (pfec & (PFERR_WRITE_MASK|PFERR_FETCH_MASK)) ? 0 : (u16)~r;
> /* Faults from writes to non-writable pages */
> - u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0;
> + u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
> /* Faults from user mode accesses to supervisor pages */
> - u8 uf = (pfec & PFERR_USER_MASK) ? (u8)~u : 0;
> + u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
> /* Faults from fetches of non-executable pages*/
> - u8 ff = (pfec & PFERR_FETCH_MASK) ? (u8)~x : 0;
> + u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
> /* Faults from kernel mode fetches of user pages */
> - u8 smepf = 0;
> + u16 smepf = 0;
> /* Faults from kernel mode accesses of user pages */
> - u8 smapf = 0;
> + u16 smapf = 0;
>
> if (!ept) {
> /* Faults from kernel mode accesses to user pages */
> - u8 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
> + u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
>
> /* Not really needed: !nx will cause pte.nx to fault */
> if (!efer_nx)
> @@ -5618,7 +5629,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
> smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
> }
>
> - mmu->permissions[byte] = ff | uf | wf | smepf | smapf;
> + mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
> }
> }
>
> diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
> index 764e3015d021..dcfdfedfc4e9 100644
> --- a/arch/x86/kvm/mmu/mmutrace.h
> +++ b/arch/x86/kvm/mmu/mmutrace.h
> @@ -25,7 +25,8 @@
> #define KVM_MMU_PAGE_PRINTK() ({ \
> const char *saved_ptr = trace_seq_buffer_ptr(p); \
> static const char *access_str[] = { \
> - "---", "--x", "w--", "w-x", "-u-", "-ux", "wu-", "wux" \
> + "----", "r---", "-w--", "rw--", "--u-", "r-u-", "-wu-", "rwu-", \
> + "---x", "r--x", "-w-x", "rw-x", "--ux", "r-ux", "-wux", "rwux" \
> }; \
> union kvm_mmu_page_role role; \
> \
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 901cd2bd40b8..fb1b5d8b23e5 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -170,25 +170,24 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
> return true;
> }
>
> -/*
> - * For PTTYPE_EPT, a page table can be executable but not readable
> - * on supported processors. Therefore, set_spte does not automatically
> - * set bit 0 if execute only is supported. Here, we repurpose ACC_USER_MASK
> - * to signify readability since it isn't used in the EPT case
> - */
> static inline unsigned FNAME(gpte_access)(u64 gpte)
> {
> unsigned access;
> #if PTTYPE == PTTYPE_EPT
> access = ((gpte & VMX_EPT_WRITABLE_MASK) ? ACC_WRITE_MASK : 0) |
> ((gpte & VMX_EPT_EXECUTABLE_MASK) ? ACC_EXEC_MASK : 0) |
> - ((gpte & VMX_EPT_READABLE_MASK) ? ACC_USER_MASK : 0);
> + ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0);
> #else
> - BUILD_BUG_ON(ACC_EXEC_MASK != PT_PRESENT_MASK);
> - BUILD_BUG_ON(ACC_EXEC_MASK != 1);
> + /*
> + * P is set here, so the page is always readable and W/U/!NX represent
> + * allowed accesses.
> + */
> + BUILD_BUG_ON(ACC_READ_MASK != PT_PRESENT_MASK);
> + BUILD_BUG_ON(ACC_WRITE_MASK != PT_WRITABLE_MASK);
> + BUILD_BUG_ON(ACC_USER_MASK != PT_USER_MASK);
> + BUILD_BUG_ON(ACC_EXEC_MASK & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK));
> access = gpte & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK);
> - /* Combine NX with P (which is set here) to get ACC_EXEC_MASK. */
> - access ^= (gpte >> PT64_NX_SHIFT);
> + access |= gpte & PT64_NX_MASK ? 0 : ACC_EXEC_MASK;
> #endif
>
> return access;
> @@ -501,10 +500,18 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
>
> if (write_fault)
> walker->fault.exit_qualification |= EPT_VIOLATION_ACC_WRITE;
> - if (user_fault)
> - walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
> - if (fetch_fault)
> + else if (fetch_fault)
> walker->fault.exit_qualification |= EPT_VIOLATION_ACC_INSTR;
> + else
> + walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
> +
> + /*
> + * Accesses to guest paging structures are either "reads" or
> + * "read+write" accesses, so consider them the latter if write_fault
> + * is true.
> + */
> + if (access & PFERR_GUEST_PAGE_MASK)
> + walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
>
> /*
> * Note, pte_access holds the raw RWX bits from the EPTE, not
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index e9dc0ae44274..7b5f118ae211 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -194,12 +194,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> int is_host_mmio = -1;
> bool wrprot = false;
>
> - /*
> - * For the EPT case, shadow_present_mask has no RWX bits set if
> - * exec-only page table entries are supported. In that case,
> - * ACC_USER_MASK and shadow_user_mask are used to represent
> - * read access. See FNAME(gpte_access) in paging_tmpl.h.
> - */
> WARN_ON_ONCE((pte_access | shadow_present_mask) == SHADOW_NONPRESENT_VALUE);
>
> if (sp->role.ad_disabled)
> @@ -228,6 +222,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> pte_access &= ~ACC_EXEC_MASK;
> }
>
> + if (pte_access & ACC_READ_MASK)
> + spte |= PT_PRESENT_MASK; /* or VMX_EPT_READABLE_MASK */
> +
> if (pte_access & ACC_EXEC_MASK)
> spte |= shadow_x_mask;
> else
> @@ -391,6 +388,7 @@ u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
> u64 spte = SPTE_MMU_PRESENT_MASK;
>
> spte |= __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
> + PT_PRESENT_MASK /* or VMX_EPT_READABLE_MASK */ |
> shadow_user_mask | shadow_x_mask | shadow_me_value;
>
> if (ad_disabled)
> @@ -491,18 +489,16 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
> }
> EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);
>
> -void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
> +void kvm_mmu_set_ept_masks(bool has_ad_bits)
> {
> kvm_ad_enabled = has_ad_bits;
>
> - shadow_user_mask = VMX_EPT_READABLE_MASK;
> + shadow_user_mask = 0;
> shadow_accessed_mask = VMX_EPT_ACCESS_BIT;
> shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
> shadow_nx_mask = 0ull;
> shadow_x_mask = VMX_EPT_EXECUTABLE_MASK;
> - /* VMX_EPT_SUPPRESS_VE_BIT is needed for W or X violation. */
> - shadow_present_mask =
> - (has_exec_only ? 0ull : VMX_EPT_READABLE_MASK) | VMX_EPT_SUPPRESS_VE_BIT;
> + shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
>
> shadow_acc_track_mask = VMX_EPT_RWX_MASK;
> shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index bc02a2e89a31..121bfb2217e8 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -52,10 +52,11 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
> #define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
> #endif
>
> -#define ACC_EXEC_MASK 1
> +#define ACC_READ_MASK PT_PRESENT_MASK
> #define ACC_WRITE_MASK PT_WRITABLE_MASK
> #define ACC_USER_MASK PT_USER_MASK
> -#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
> +#define ACC_EXEC_MASK 8
> +#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
>
> #define SPTE_LEVEL_BITS 9
> #define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
> diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
> index 4e371c93ae16..609477f190e8 100644
> --- a/arch/x86/kvm/vmx/capabilities.h
> +++ b/arch/x86/kvm/vmx/capabilities.h
> @@ -300,11 +300,6 @@ static inline bool cpu_has_vmx_flexpriority(void)
> cpu_has_vmx_virtualize_apic_accesses();
> }
>
> -static inline bool cpu_has_vmx_ept_execute_only(void)
> -{
> - return vmx_capability.ept & VMX_EPT_EXECUTE_ONLY_BIT;
> -}
> -
> static inline bool cpu_has_vmx_ept_4levels(void)
> {
> return vmx_capability.ept & VMX_EPT_PAGE_WALK_4_BIT;
> diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
> index adf925500b9e..1afbf272efae 100644
> --- a/arch/x86/kvm/vmx/common.h
> +++ b/arch/x86/kvm/vmx/common.h
> @@ -85,11 +85,8 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
> {
> u64 error_code;
>
> - /* Is it a read fault? */
> - error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
> - ? PFERR_USER_MASK : 0;
> /* Is it a write fault? */
> - error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
> + error_code = (exit_qualification & EPT_VIOLATION_ACC_WRITE)
> ? PFERR_WRITE_MASK : 0;
> /* Is it a fetch fault? */
> error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 8b24e682535b..e27868fa4eb7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8798,8 +8798,7 @@ __init int vmx_hardware_setup(void)
> set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
>
> if (enable_ept)
> - kvm_mmu_set_ept_masks(enable_ept_ad_bits,
> - cpu_has_vmx_ept_execute_only());
> + kvm_mmu_set_ept_masks(enable_ept_ad_bits);
> else
> vt_x86_ops.get_mt_mask = NULL;
>
> --
> 2.53.0
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (24 preceding siblings ...)
2026-03-26 18:17 ` [PATCH 25/24] stats hack Paolo Bonzini
@ 2026-03-30 2:27 ` Jon Kohler
2026-03-30 10:43 ` Paolo Bonzini
25 siblings, 1 reply; 32+ messages in thread
From: Jon Kohler @ 2026-03-30 2:27 UTC (permalink / raw)
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Nikunj A Dadhania, amit.shah@amd.com, Sean Christopherson,
Marcelo Tosatti
> On Mar 26, 2026, at 2:16 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> This series introduces support for two related features that Hyper-V uses
> in its implementation of Virtual Secure Mode; these are Intel Mode-Based
> Execute Control and AMD Guest Mode Execution Trap.
>
> It's still RFC because it can definitely use more testing and review,
> but I'm pretty confident with the overall shape and design.
I've tested this on both Intel and AMD in our 6.18.y-based tree and
everything works as expected, and we're very happy with the performance
thus far.
I've also run the kvm-unit-tests series on both Intel and AMD, and that's
also working as expected. I proposed a few more SVM tests off-list, so
we'll see if that helps increase coverage a pinch more.
Guest:
Windows 11 25H2 OS Build 26200.8037
AMD proc:
AMD EPYC 9745 (Turin/Zen5)
Intel proc:
Intel(R) Xeon(R) Gold 6426Y (SPR)
For this RFCv2 series:
Tested-By: Jon Kohler <jon@nutanix.com>
On the ecosystem enablement side, qemu has both mbec [1] and gmet [2];
however, they are not exposed via any model definitions (yet), so users
would need to manually enable them in the short term. I'll work up
a patch to expose these via model definitions and propose that to the
list this week.
[1] https://github.com/qemu/qemu/commit/bfff4b2ae5452463ab8c14b4a8a020288b5ff5d8
[2] https://github.com/qemu/qemu/commit/746a823a17f25393cc8c0cd1257f6dcef757bc09
* Re: [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK
2026-03-26 18:17 ` [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK Paolo Bonzini
2026-03-27 4:06 ` Jon Kohler
@ 2026-03-30 4:12 ` kernel test robot
1 sibling, 0 replies; 32+ messages in thread
From: kernel test robot @ 2026-03-30 4:12 UTC (permalink / raw)
To: Paolo Bonzini, linux-kernel, kvm
Cc: oe-kbuild-all, Jon Kohler, Nikunj A Dadhania, Amit Shah,
Sean Christopherson, Marcelo Tosatti
Hi Paolo,
kernel test robot noticed the following build warnings:
[auto build test WARNING on kvm/queue]
[also build test WARNING on kvm/next tip/x86/tdx linus/master v7.0-rc6 next-20260327]
[cannot apply to kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Paolo-Bonzini/KVM-TDX-VMX-rework-EPT_VIOLATION_EXEC_FOR_RING3_LIN-into-PROT_MASK/20260329-124019
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
patch link: https://lore.kernel.org/r/20260326181723.218115-9-pbonzini%40redhat.com
patch subject: [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK
config: x86_64-randconfig-123-20260329 (https://download.01.org/0day-ci/archive/20260330/202603301246.a5sPkQdh-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
sparse: v0.6.5-rc1
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260330/202603301246.a5sPkQdh-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603301246.a5sPkQdh-lkp@intel.com/
sparse warnings: (new ones prefixed by >>)
arch/x86/kvm/mmu/mmu.c: note: in included file:
arch/x86/kvm/mmu/paging_tmpl.h:106:24: sparse: sparse: cast truncates bits from constant value (ffffffffff000 becomes fffff000)
arch/x86/kvm/mmu/paging_tmpl.h:440:24: sparse: sparse: cast truncates bits from constant value (ffffffffff000 becomes fffff000)
>> arch/x86/kvm/mmu/mmu.c:5585:82: sparse: sparse: cast truncates bits from constant value (ffff5555 becomes 5555)
>> arch/x86/kvm/mmu/mmu.c:5587:59: sparse: sparse: cast truncates bits from constant value (ffff3333 becomes 3333)
>> arch/x86/kvm/mmu/mmu.c:5589:58: sparse: sparse: cast truncates bits from constant value (ffff0f0f becomes f0f)
>> arch/x86/kvm/mmu/mmu.c:5591:59: sparse: sparse: cast truncates bits from constant value (ffff00ff becomes ff)
vim +5585 arch/x86/kvm/mmu/mmu.c
5519
5520 /*
5521 * Build a mask with all combinations of PTE access rights that
5522 * include the given access bit. The mask can be queried with
5523 * "mask & (1 << access)", where access is a combination of
5524 * ACC_* bits.
5525 *
5526 * By mixing and matching multiple masks returned by ACC_BITS_MASK,
5527 * update_permission_bitmask() builds what is effectively a
5528 * two-dimensional array of bools. The second dimension is
5529 * provided by individual bits of permissions[pfec >> 1], and
5530 * logical &, | and ~ operations operate on all the 16 possible
5531 * combinations of ACC_* bits.
5532 */
5533 #define ACC_BITS_MASK(access) \
5534 ((1 & (access) ? 1 << 1 : 0) | \
5535 (2 & (access) ? 1 << 2 : 0) | \
5536 (3 & (access) ? 1 << 3 : 0) | \
5537 (4 & (access) ? 1 << 4 : 0) | \
5538 (5 & (access) ? 1 << 5 : 0) | \
5539 (6 & (access) ? 1 << 6 : 0) | \
5540 (7 & (access) ? 1 << 7 : 0) | \
5541 (8 & (access) ? 1 << 8 : 0) | \
5542 (9 & (access) ? 1 << 9 : 0) | \
5543 (10 & (access) ? 1 << 10 : 0) | \
5544 (11 & (access) ? 1 << 11 : 0) | \
5545 (12 & (access) ? 1 << 12 : 0) | \
5546 (13 & (access) ? 1 << 13 : 0) | \
5547 (14 & (access) ? 1 << 14 : 0) | \
5548 (15 & (access) ? 1 << 15 : 0))
5549
5550 static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
5551 {
5552 unsigned byte;
5553
5554 const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
5555 const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
5556 const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
5557 const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
5558
5559 bool cr4_smep = is_cr4_smep(mmu);
5560 bool cr4_smap = is_cr4_smap(mmu);
5561 bool cr0_wp = is_cr0_wp(mmu);
5562 bool efer_nx = is_efer_nx(mmu);
5563
5564 /*
5565 * In hardware, page fault error codes are generated (as the name
5566 * suggests) on any kind of page fault. permission_fault() and
5567 * paging_tmpl.h already use the same bits after a successful page
5568 * table walk, to indicate the kind of access being performed.
5569 *
5570 * However, PFERR_PRESENT_MASK and PFERR_RSVD_MASK are never set here,
5571 * exactly because the page walk is successful. PFERR_PRESENT_MASK is
5572 * removed by the shift, while PFERR_RSVD_MASK is repurposed in
5573 * permission_fault() to indicate accesses that are *not* subject to
5574 * SMAP restrictions.
5575 */
5576 for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
5577 unsigned pfec = byte << 1;
5578
5579 /*
5580 * Each "*f" variable has a 1 bit for each ACC_* combo
5581 * that causes a fault with the given PFEC.
5582 */
5583
5584 /* Faults from reads to non-readable pages */
> 5585 u16 rf = (pfec & (PFERR_WRITE_MASK|PFERR_FETCH_MASK)) ? 0 : (u16)~r;
5586 /* Faults from writes to non-writable pages */
> 5587 u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
5588 /* Faults from user mode accesses to supervisor pages */
> 5589 u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
5590 /* Faults from fetches of non-executable pages*/
> 5591 u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
5592 /* Faults from kernel mode fetches of user pages */
5593 u16 smepf = 0;
5594 /* Faults from kernel mode accesses of user pages */
5595 u16 smapf = 0;
5596
5597 if (!ept) {
5598 /* Faults from kernel mode accesses to user pages */
5599 u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
5600
5601 /* Not really needed: !nx will cause pte.nx to fault */
5602 if (!efer_nx)
5603 ff = 0;
5604
5605 /* Allow supervisor writes if !cr0.wp */
5606 if (!cr0_wp)
5607 wf = (pfec & PFERR_USER_MASK) ? wf : 0;
5608
5609 /* Disallow supervisor fetches of user code if cr4.smep */
5610 if (cr4_smep)
5611 smepf = (pfec & PFERR_FETCH_MASK) ? kf : 0;
5612
5613 /*
5614 * SMAP:kernel-mode data accesses from user-mode
5615 * mappings should fault. A fault is considered
5616 * as a SMAP violation if all of the following
5617 * conditions are true:
5618 * - X86_CR4_SMAP is set in CR4
5619 * - A user page is accessed
5620 * - The access is not a fetch
5621 * - The access is supervisor mode
5622 * - If implicit supervisor access or X86_EFLAGS_AC is clear
5623 *
5624 * Here, we cover the first four conditions. The fifth
5625 * is computed dynamically in permission_fault() and
5626 * communicated by setting PFERR_RSVD_MASK.
5627 */
5628 if (cr4_smap)
5629 smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
5630 }
5631
5632 mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
5633 }
5634 }
5635
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support
2026-03-30 2:27 ` [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Jon Kohler
@ 2026-03-30 10:43 ` Paolo Bonzini
2026-03-30 18:59 ` Jon Kohler
0 siblings, 1 reply; 32+ messages in thread
From: Paolo Bonzini @ 2026-03-30 10:43 UTC (permalink / raw)
To: Jon Kohler
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Nikunj A Dadhania, amit.shah@amd.com, Sean Christopherson,
Marcelo Tosatti
On Mon, Mar 30, 2026 at 4:28 AM Jon Kohler <jon@nutanix.com> wrote:
> For this RFCv2 series:
> Tested-By: Jon Kohler <jon@nutanix.com>
Great, thanks! FWIW I found a small hole (just by code inspection);
translate_nested_gpa is always setting PFERR_USER_MASK and therefore
always using XU (and always allowing execution for GMET). The fix is
not hard, basically translate_nested_gpa needs to become an entry in
the nested_ops and the callers need a little bit of adjustment to pass
more info down. Then the vendor code can do respectively:
/*
* MBEC differentiates based on the effective U/S bit of
* the guest page tables; not the processor CPL.
*/
access &= ~PFERR_USER_MASK;
if ((pte_access & ACC_USER_MASK)
&& (access & PFERR_GUEST_FINAL_MASK))
access |= PFERR_USER_MASK;
and
/* Non-GMET walks are always user-walks */
if (!(svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE))
access |= PFERR_USER_MASK;
I'll post this after the series gets more review altogether.
> On the ecosystem enablement side, qemu has both mbec [1] and gmet [2];
> however, they are not exposed via any model definitions (yet), so users
> would need to manually enable them in the short term. I'll work up
> a patch to expose these via model definitions and propose that to the
> list this week.
>
> [1] https://github.com/qemu/qemu/commit/bfff4b2ae5452463ab8c14b4a8a020288b5ff5d8
> [2] https://github.com/qemu/qemu/commit/746a823a17f25393cc8c0cd1257f6dcef757bc09
Sounds good!
Paolo
* Re: [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support
2026-03-30 10:43 ` Paolo Bonzini
@ 2026-03-30 18:59 ` Jon Kohler
2026-04-01 16:27 ` Paolo Bonzini
0 siblings, 1 reply; 32+ messages in thread
From: Jon Kohler @ 2026-03-30 18:59 UTC (permalink / raw)
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Nikunj A Dadhania, amit.shah@amd.com, Sean Christopherson,
Marcelo Tosatti
> On Mar 30, 2026, at 6:43 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On Mon, Mar 30, 2026 at 4:28 AM Jon Kohler <jon@nutanix.com> wrote:
>> For this RFCv2 series:
>> Tested-By: Jon Kohler <jon@nutanix.com>
>
> Great, thanks! FWIW I found a small hole (just by code inspection);
> translate_nested_gpa is always setting PFERR_USER_MASK and therefore
> always using XU (and always allowing execution for GMET). The fix is
> not hard, basically translate_nested_gpa needs to become an entry in
> the nested_ops and the callers need a little bit of adjustment to pass
> more info down. Then the vendor code can do respectively:
>
> /*
> * MBEC differentiates based on the effective U/S bit of
> * the guest page tables; not the processor CPL.
> */
> access &= ~PFERR_USER_MASK;
> if ((pte_access & ACC_USER_MASK)
> && (access & PFERR_GUEST_FINAL_MASK))
> access |= PFERR_USER_MASK;
>
> and
>
> /* Non-GMET walks are always user-walks */
> if (!(svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE))
> access |= PFERR_USER_MASK;
>
> I'll post this after the series gets more review altogether.
Ok cool, will look forward to that!
>
>> On the ecosystem enablement side, qemu has both mbec [1] and gmet [2];
>> however, they are not exposed via any model definitions (yet), so users
>> would need to manually enable them in the short term. I'll work up
>> a patch to expose these via model definitions and propose that to the
>> list this week.
>
> Sounds good!
>
> Paolo
QEMU GMET models:
https://lists.nongnu.org/archive/html/qemu-devel/2026-03/msg08090.html
https://github.com/qemu/qemu/commit/fa530e86bed12c5743f6e88c4a1e4cc02cf0e68b
QEMU MBEC models:
https://lists.nongnu.org/archive/html/qemu-devel/2026-03/msg08091.html
https://github.com/qemu/qemu/commit/2b70121653b2037e551677377f92ca4623f8d4ff
* Re: [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support
2026-03-30 18:59 ` Jon Kohler
@ 2026-04-01 16:27 ` Paolo Bonzini
0 siblings, 0 replies; 32+ messages in thread
From: Paolo Bonzini @ 2026-04-01 16:27 UTC (permalink / raw)
To: Jon Kohler
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
Nikunj A Dadhania, amit.shah@amd.com, Sean Christopherson,
Marcelo Tosatti
On Mon, Mar 30, 2026 at 8:59 PM Jon Kohler <jon@nutanix.com> wrote:
> > Great, thanks! FWIW I found a small hole (just by code inspection);
> > translate_nested_gpa is always setting PFERR_USER_MASK and therefore
> > always using XU (and always allowing execution for GMET). The fix is
> > not hard, basically translate_nested_gpa needs to become an entry in
> > the nested_ops and the callers need a little bit of adjustment to pass
> > more info down. Then the vendor code can do respectively:
>
> Ok cool, will look forward to that!
And a few more from Sashiko...
1) do not blindly clear ACC_USER_EXEC_MASK for NX huge pages in
make_spte(), because the NX huge page mitigation can actually run on AMD
machines as well.
2) add a comment about fast_page_fault happening only with XS==XU, at
least for now
3) include the XU bit in __is_bad_mt_xwr(), evaluating bad_mt_xwr for
(XS|XU, W, R)
4) only enable nested MBEC if enable_mbec==true, likewise only set
X86_FEATURE_GMET in svm_set_cpu_caps if gmet_enabled==true
5) check CPL != 3 instead of CPL == 0 to recover PFERR_USER_MASK for GMET
6) change shadow_acc_track_mask in "KVM: VMX: enable use of MBEC", not
"KVM: x86/mmu: split XS/XU bits for EPT" (for consistency with
shadow_xu_mask).
7) clear SVM_NESTED_CTL_GMET_ENABLE in
__nested_copy_vmcb_control_to_cache, not __nested_vmcb_check_controls
(i.e. sanitize the destination of the copy)
Overall a pretty solid analysis from our new overlord. Roughly 50%
false positives, but I accept that given the high quality of the other
Also, a couple of the false positives are worth adding comments about;
that may not shut up the AI (#1 was explicitly called out as "cannot
happen" by an incorrect comment that I added...) but it can be useful
for humans anyway.
That said, it uses a damn *lot* of tokens to do this kind of analysis.
Paolo
Thread overview: 32+ messages
2026-03-26 18:16 [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
2026-03-26 18:16 ` [PATCH 01/24] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK Paolo Bonzini
2026-03-26 18:17 ` [PATCH 02/24] KVM: x86/mmu: remove SPTE_PERM_MASK Paolo Bonzini
2026-03-26 18:17 ` [PATCH 03/24] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC Paolo Bonzini
2026-03-26 18:17 ` [PATCH 04/24] KVM: x86/mmu: shuffle high bits of SPTEs " Paolo Bonzini
2026-03-26 18:17 ` [PATCH 05/24] KVM: x86/mmu: remove SPTE_EPT_* Paolo Bonzini
2026-03-26 18:17 ` [PATCH 06/24] KVM: x86/mmu: merge make_spte_{non,}executable Paolo Bonzini
2026-03-26 18:17 ` [PATCH 07/24] KVM: x86/mmu: rename and clarify BYTE_MASK Paolo Bonzini
2026-03-26 18:17 ` [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK Paolo Bonzini
2026-03-27 4:06 ` Jon Kohler
2026-03-30 4:12 ` kernel test robot
2026-03-26 18:17 ` [PATCH 09/24] KVM: x86/mmu: separate more EPT/non-EPT permission_fault() Paolo Bonzini
2026-03-26 18:17 ` [PATCH 10/24] KVM: x86/mmu: split XS/XU bits for EPT Paolo Bonzini
2026-03-26 18:17 ` [PATCH 11/24] KVM: x86/mmu: move cr4_smep to base role Paolo Bonzini
2026-03-26 18:17 ` [PATCH 12/24] KVM: VMX: enable use of MBEC Paolo Bonzini
2026-03-26 18:17 ` [PATCH 13/24] KVM: nVMX: pass advanced EPT violation vmexit info to guest Paolo Bonzini
2026-03-26 18:17 ` [PATCH 14/24] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations Paolo Bonzini
2026-03-26 18:17 ` [PATCH 15/24] KVM: x86/mmu: add support for MBEC to EPT page table walks Paolo Bonzini
2026-03-26 18:17 ` [PATCH 16/24] KVM: nVMX: advertise MBEC to nested guests Paolo Bonzini
2026-03-26 18:17 ` [PATCH 17/24] KVM: nVMX: allow MBEC with EVMCS Paolo Bonzini
2026-03-26 18:17 ` [PATCH 18/24] KVM: x86/mmu: propagate access mask from root pages down Paolo Bonzini
2026-03-26 18:17 ` [PATCH 19/24] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D Paolo Bonzini
2026-03-26 18:17 ` [PATCH 20/24] KVM: SVM: add GMET bit definitions Paolo Bonzini
2026-03-26 18:17 ` [PATCH 21/24] KVM: x86/mmu: add support for GMET to NPT page table walks Paolo Bonzini
2026-03-26 18:17 ` [PATCH 22/24] KVM: SVM: enable GMET and set it in MMU role Paolo Bonzini
2026-03-26 18:17 ` [PATCH 23/24] KVM: SVM: work around errata 1218 Paolo Bonzini
2026-03-26 18:17 ` [PATCH 24/24] KVM: nSVM: enable GMET for guests Paolo Bonzini
2026-03-26 18:17 ` [PATCH 25/24] stats hack Paolo Bonzini
2026-03-30 2:27 ` [RFC PATCH v2 00/24] KVM: combined patchset for MBEC/GMET support Jon Kohler
2026-03-30 10:43 ` Paolo Bonzini
2026-03-30 18:59 ` Jon Kohler
2026-04-01 16:27 ` Paolo Bonzini