* [PATCH 01/27] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
EPT exit qualification bit 6 is only defined when mode-based execute
control is enabled, and indicates whether the guest-physical address was
executable for user-mode linear addresses. Rename the bit to reflect
that intention and add it to EPT_VIOLATION_PROT_MASK, which simplifies
the return expression in tdx_is_sept_violation_unexpected_pending() a
pinch.
Rework handling in __vmx_handle_ept_violation to unconditionally clear
EPT_VIOLATION_PROT_USER_EXEC until MBEC is implemented, as suggested by
Sean [1].
Note: Intel SDM Table 29-7 defines bit 6 as:
If the "mode-based execute control" VM-execution control is 0, the
value of this bit is undefined. If that control is 1, this bit is the
logical-AND of bit 10 in the EPT paging-structure entries used to
translate the guest-physical address of the access causing the EPT
violation. In this case, it indicates whether the guest-physical
address was executable for user-mode linear addresses.
[1] https://lore.kernel.org/all/aCJDzU1p_SFNRIJd@google.com/
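As a concrete illustration of both checks touched by this patch, here is a
minimal user-space sketch built only on the constants below; the sample
exit qualification value is made up for the example:

#include <stdint.h>
#include <stdio.h>

/* Constants as defined in arch/x86/include/asm/vmx.h by this patch. */
#define EPT_VIOLATION_PROT_READ       (1ULL << 3)
#define EPT_VIOLATION_PROT_WRITE      (1ULL << 4)
#define EPT_VIOLATION_PROT_EXEC       (1ULL << 5)
#define EPT_VIOLATION_PROT_USER_EXEC  (1ULL << 6)
#define EPT_VIOLATION_PROT_MASK       (EPT_VIOLATION_PROT_READ |  \
                                       EPT_VIOLATION_PROT_WRITE | \
                                       EPT_VIOLATION_PROT_EXEC |  \
                                       EPT_VIOLATION_PROT_USER_EXEC)

int main(void)
{
        /* Hypothetical exit qualification: only bit 6 is set. */
        uint64_t eq = EPT_VIOLATION_PROT_USER_EXEC;

        /* tdx_is_sept_violation_unexpected_pending(): one mask test
         * now covers what used to be two separate tests. */
        printf("unexpected pending violation: %d\n",
               !(eq & EPT_VIOLATION_PROT_MASK));

        /* __vmx_handle_ept_violation(): USER_EXEC is masked out until
         * MBEC is implemented, so bit 6 alone must not set PFERR_PRESENT. */
        printf("PFERR_PRESENT: %d\n",
               !!(eq & (EPT_VIOLATION_PROT_MASK &
                        ~EPT_VIOLATION_PROT_USER_EXEC)));
        return 0;
}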
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-2-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 5 +++--
arch/x86/kvm/vmx/common.h | 9 +++++++--
arch/x86/kvm/vmx/tdx.c | 2 +-
3 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index b92ff87e3560..7fdc6b787d70 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -597,10 +597,11 @@ enum vm_entry_failure_code {
#define EPT_VIOLATION_PROT_READ BIT(3)
#define EPT_VIOLATION_PROT_WRITE BIT(4)
#define EPT_VIOLATION_PROT_EXEC BIT(5)
-#define EPT_VIOLATION_EXEC_FOR_RING3_LIN BIT(6)
+#define EPT_VIOLATION_PROT_USER_EXEC BIT(6)
#define EPT_VIOLATION_PROT_MASK (EPT_VIOLATION_PROT_READ | \
EPT_VIOLATION_PROT_WRITE | \
- EPT_VIOLATION_PROT_EXEC)
+ EPT_VIOLATION_PROT_EXEC | \
+ EPT_VIOLATION_PROT_USER_EXEC)
#define EPT_VIOLATION_GVA_IS_VALID BIT(7)
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 412d0829d7a2..adf925500b9e 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -94,8 +94,13 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
/* Is it a fetch fault? */
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
- /* ept page table entry is present? */
- error_code |= (exit_qualification & EPT_VIOLATION_PROT_MASK)
+ /*
+ * ept page table entry is present?
+ * note: unconditionally clear USER_EXEC until mode-based
+ * execute control is implemented
+ */
+ error_code |= (exit_qualification &
+ (EPT_VIOLATION_PROT_MASK & ~EPT_VIOLATION_PROT_USER_EXEC))
? PFERR_PRESENT_MASK : 0;
if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c5065f84b78b..fa740f70ee75 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1855,7 +1855,7 @@ static inline bool tdx_is_sept_violation_unexpected_pending(struct kvm_vcpu *vcp
if (eeq_type != TDX_EXT_EXIT_QUAL_TYPE_PENDING_EPT_VIOLATION)
return false;
- return !(eq & EPT_VIOLATION_PROT_MASK) && !(eq & EPT_VIOLATION_EXEC_FOR_RING3_LIN);
+ return !(eq & EPT_VIOLATION_PROT_MASK);
}
static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
--
2.52.0
* [PATCH 02/27] KVM: x86/mmu: remove SPTE_PERM_MASK
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
SPTE_PERM_MASK is no longer referenced by anything in the kernel.
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-3-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.h | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 91ce29fd6f1b..28086fa86fe0 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -42,9 +42,6 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#endif
-#define SPTE_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
- | shadow_x_mask | shadow_nx_mask | shadow_me_mask)
-
#define ACC_EXEC_MASK 1
#define ACC_WRITE_MASK PT_WRITABLE_MASK
#define ACC_USER_MASK PT_USER_MASK
--
2.52.0
* [PATCH 03/27] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti, Kai Huang
From: Jon Kohler <jon@nutanix.com>
Update SPTE_MMIO_ALLOWED_MASK so that the EPT user-executable bit
(bit 10) is treated like the EPT RWX bits 2:0: when mode-based execute
control is enabled, bit 10 can act like a "present" bit. Likewise, do
not include it in FROZEN_SPTE.
No functional changes intended, other than the reduction of the maximum
MMIO generation that is stored in page tables.
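As a stand-alone illustration of the resulting generation layout (the
pack/unpack helpers are invented for this sketch, they are not the
kernel's):

#include <stdint.h>
#include <stdio.h>

/* Per the comment in spte.h after this patch: generation bits 0-6 go to
 * SPTE bits 3-9, generation bits 7-17 go to SPTE bits 52-62. */
static uint64_t mmio_gen_pack(uint64_t gen)
{
        return ((gen & 0x7f) << 3) | (((gen >> 7) & 0x7ff) << 52);
}

static uint64_t mmio_gen_unpack(uint64_t spte)
{
        return ((spte >> 3) & 0x7f) | (((spte >> 52) & 0x7ff) << 7);
}

int main(void)
{
        uint64_t max_gen = (1ULL << 18) - 1;    /* 18 bits now, was 19 */
        uint64_t spte = mmio_gen_pack(max_gen);

        printf("spte %#llx -> gen %#llx\n", (unsigned long long)spte,
               (unsigned long long)mmio_gen_unpack(spte));
        /* Bits 10 (user-exec) and 11 (MMU-present) stay clear. */
        printf("bits 10/11 clear: %d\n", !(spte & 0xc00));
        return 0;
}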
Cc: Kai Huang <kai.huang@intel.com>
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-4-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 2 ++
arch/x86/kvm/mmu/spte.h | 20 +++++++++++---------
2 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 7fdc6b787d70..59e3b095a315 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -549,10 +549,12 @@ enum vmcs_field {
#define VMX_EPT_ACCESS_BIT (1ull << 8)
#define VMX_EPT_DIRTY_BIT (1ull << 9)
#define VMX_EPT_SUPPRESS_VE_BIT (1ull << 63)
+
#define VMX_EPT_RWX_MASK (VMX_EPT_READABLE_MASK | \
VMX_EPT_WRITABLE_MASK | \
VMX_EPT_EXECUTABLE_MASK)
#define VMX_EPT_MT_MASK (7ull << VMX_EPT_MT_EPTE_SHIFT)
+#define VMX_EPT_USER_EXECUTABLE_MASK (1ull << 10)
static inline u8 vmx_eptp_page_walk_level(u64 eptp)
{
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 28086fa86fe0..4283cea3e66c 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -96,11 +96,11 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
#undef SHADOW_ACC_TRACK_SAVED_MASK
/*
- * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of
+ * Due to limited space in PTEs, the MMIO generation is an 18 bit subset of
* the memslots generation and is derived as follows:
*
- * Bits 0-7 of the MMIO generation are propagated to spte bits 3-10
- * Bits 8-18 of the MMIO generation are propagated to spte bits 52-62
+ * Bits 0-6 of the MMIO generation are propagated to spte bits 3-9
+ * Bits 7-17 of the MMIO generation are propagated to spte bits 52-62
*
* The KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS flag is intentionally not included in
* the MMIO generation number, as doing so would require stealing a bit from
@@ -111,7 +111,7 @@ static_assert(!(EPT_SPTE_MMU_WRITABLE & SHADOW_ACC_TRACK_SAVED_MASK));
*/
#define MMIO_SPTE_GEN_LOW_START 3
-#define MMIO_SPTE_GEN_LOW_END 10
+#define MMIO_SPTE_GEN_LOW_END 9
#define MMIO_SPTE_GEN_HIGH_START 52
#define MMIO_SPTE_GEN_HIGH_END 62
@@ -133,7 +133,8 @@ static_assert(!(SPTE_MMU_PRESENT_MASK &
* and so they're off-limits for generation; additional checks ensure the mask
* doesn't overlap legal PA bits), and bit 63 (carved out for future usage).
*/
-#define SPTE_MMIO_ALLOWED_MASK (BIT_ULL(63) | GENMASK_ULL(51, 12) | GENMASK_ULL(2, 0))
+#define SPTE_MMIO_ALLOWED_MASK (BIT_ULL(63) | GENMASK_ULL(51, 12) | \
+ BIT_ULL(10) | GENMASK_ULL(2, 0))
static_assert(!(SPTE_MMIO_ALLOWED_MASK &
(SPTE_MMU_PRESENT_MASK | MMIO_SPTE_GEN_LOW_MASK | MMIO_SPTE_GEN_HIGH_MASK)));
@@ -141,7 +142,7 @@ static_assert(!(SPTE_MMIO_ALLOWED_MASK &
#define MMIO_SPTE_GEN_HIGH_BITS (MMIO_SPTE_GEN_HIGH_END - MMIO_SPTE_GEN_HIGH_START + 1)
/* remember to adjust the comment above as well if you change these */
-static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
+static_assert(MMIO_SPTE_GEN_LOW_BITS == 7 && MMIO_SPTE_GEN_HIGH_BITS == 11);
#define MMIO_SPTE_GEN_LOW_SHIFT (MMIO_SPTE_GEN_LOW_START - 0)
#define MMIO_SPTE_GEN_HIGH_SHIFT (MMIO_SPTE_GEN_HIGH_START - MMIO_SPTE_GEN_LOW_BITS)
@@ -217,10 +218,11 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
*
* Only used by the TDP MMU.
*/
-#define FROZEN_SPTE (SHADOW_NONPRESENT_VALUE | 0x5a0ULL)
+#define FROZEN_SPTE (SHADOW_NONPRESENT_VALUE | 0x1a0ULL)
-/* Frozen SPTEs must not be misconstrued as shadow present PTEs. */
-static_assert(!(FROZEN_SPTE & SPTE_MMU_PRESENT_MASK));
+/* Frozen SPTEs must not be misconstrued as shadow or MMU present PTEs. */
+static_assert(!(FROZEN_SPTE & (SPTE_MMU_PRESENT_MASK |
+ VMX_EPT_RWX_MASK | VMX_EPT_USER_EXECUTABLE_MASK)));
static inline bool is_frozen_spte(u64 spte)
{
--
2.52.0
* Re: [PATCH 03/27] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC
From: Huang, Kai @ 2026-04-08 21:29 UTC (permalink / raw)
To: kvm@vger.kernel.org, pbonzini@redhat.com,
linux-kernel@vger.kernel.org
Cc: amit.shah@amd.com, Kohler, Jon, seanjc@google.com, nikunj@amd.com,
mtosatti@redhat.com
On Wed, 2026-04-08 at 11:41 -0400, Paolo Bonzini wrote:
> From: Jon Kohler <jon@nutanix.com>
>
> Update SPTE_MMIO_ALLOWED_MASK so that the EPT user-executable bit
> (bit 10) is treated like the EPT RWX bits 2:0: when mode-based execute
> control is enabled, bit 10 can act like a "present" bit. Likewise, do
> not include it in FROZEN_SPTE.
>
> No functional changes intended, other than the reduction of the maximum
> MMIO generation that is stored in page tables.
>
> Cc: Kai Huang <kai.huang@intel.com>
> Signed-off-by: Jon Kohler <jon@nutanix.com>
> Message-ID: <20251223054806.1611168-4-jon@nutanix.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
> ---
> arch/x86/include/asm/vmx.h | 2 ++
> arch/x86/kvm/mmu/spte.h | 20 +++++++++++---------
> 2 files changed, 13 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
> index 7fdc6b787d70..59e3b095a315 100644
> --- a/arch/x86/include/asm/vmx.h
> +++ b/arch/x86/include/asm/vmx.h
> @@ -549,10 +549,12 @@ enum vmcs_field {
> #define VMX_EPT_ACCESS_BIT (1ull << 8)
> #define VMX_EPT_DIRTY_BIT (1ull << 9)
> #define VMX_EPT_SUPPRESS_VE_BIT (1ull << 63)
> +
Nit: seems like an unintentional change here.
* [PATCH 04/27] KVM: x86/mmu: shuffle high bits of SPTEs in preparation for MBEC
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Access tracking will need to save bit 10 as well when MBEC is enabled.
Right now access tracking simply shifts the R and X bits into bits 54
and 56, but bit 10 would not fit with the same scheme (10 + 54 = 64
falls off the end of the SPTE). Reorganize the high bits so that access
tracking will use bits 52, 54 and 62 (10 + 52 = 62).
As a side effect, the free bits are compacted slightly, with bits 56-59
still unused.
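A stand-alone sketch of the save/restore round-trip with the new shift;
the helpers are simplified stand-ins for the kernel's access-tracking
logic, not the real code:

#include <assert.h>
#include <stdint.h>

#define SAVED_BITS_MASK    ((1ULL << 0) | (1ULL << 2))  /* EPT R and X */
#define SAVED_BITS_SHIFT   52                           /* was 54 */

static uint64_t save_rx_bits(uint64_t spte)
{
        return (spte & ~SAVED_BITS_MASK) |
               ((spte & SAVED_BITS_MASK) << SAVED_BITS_SHIFT);
}

static uint64_t restore_rx_bits(uint64_t spte)
{
        return (spte & ~(SAVED_BITS_MASK << SAVED_BITS_SHIFT)) |
               ((spte >> SAVED_BITS_SHIFT) & SAVED_BITS_MASK);
}

int main(void)
{
        uint64_t spte = 0x5;                    /* R and X set */
        uint64_t tracked = save_rx_bits(spte);

        assert(tracked == ((1ULL << 52) | (1ULL << 54)));
        assert(restore_rx_bits(tracked) == spte);
        return 0;
}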
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.h | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 4283cea3e66c..317b9cd1537c 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -17,10 +17,20 @@
*/
#define SPTE_MMU_PRESENT_MASK BIT_ULL(11)
+/*
+ * The ignored high bits are allocated as follows:
+ * - bits 52, 54: saved R and X bits for access tracking when EPT does not have A/D
+ * - bit 53 (EPT only): host writable
+ * - bit 55 (EPT only): MMU-writable
+ * - bits 56-59: unused
+ * - bits 60-61: type of A/D tracking
+ * - bit 62: unused
+ */
+
/*
* TDP SPTES (more specifically, EPT SPTEs) may not have A/D bits, and may also
* be restricted to using write-protection (for L2 when CPU dirty logging, i.e.
- * PML, is enabled). Use bits 52 and 53 to hold the type of A/D tracking that
+ * PML, is enabled). Use bits 60 and 61 to hold the type of A/D tracking that
* is must be employed for a given TDP SPTE.
*
* Note, the "enabled" mask must be '0', as bits 62:52 are _reserved_ for PAE
@@ -29,7 +39,7 @@
* TDP with CPU dirty logging (PML). If NPT ever gains PML-like support, it
* must be restricted to 64-bit KVM.
*/
-#define SPTE_TDP_AD_SHIFT 52
+#define SPTE_TDP_AD_SHIFT 60
#define SPTE_TDP_AD_MASK (3ULL << SPTE_TDP_AD_SHIFT)
#define SPTE_TDP_AD_ENABLED (0ULL << SPTE_TDP_AD_SHIFT)
#define SPTE_TDP_AD_DISABLED (1ULL << SPTE_TDP_AD_SHIFT)
@@ -65,7 +75,7 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
*/
#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (SPTE_EPT_READABLE_MASK | \
SPTE_EPT_EXECUTABLE_MASK)
-#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 54
+#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 52
#define SHADOW_ACC_TRACK_SAVED_MASK (SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
static_assert(!(SPTE_TDP_AD_MASK & SHADOW_ACC_TRACK_SAVED_MASK));
@@ -84,8 +94,8 @@ static_assert(!(SPTE_TDP_AD_MASK & SHADOW_ACC_TRACK_SAVED_MASK));
* to not overlap the A/D type mask or the saved access bits of access-tracked
* SPTEs when A/D bits are disabled.
*/
-#define EPT_SPTE_HOST_WRITABLE BIT_ULL(57)
-#define EPT_SPTE_MMU_WRITABLE BIT_ULL(58)
+#define EPT_SPTE_HOST_WRITABLE BIT_ULL(53)
+#define EPT_SPTE_MMU_WRITABLE BIT_ULL(55)
static_assert(!(EPT_SPTE_HOST_WRITABLE & SPTE_TDP_AD_MASK));
static_assert(!(EPT_SPTE_MMU_WRITABLE & SPTE_TDP_AD_MASK));
--
2.52.0
* [PATCH 05/27] KVM: x86/mmu: remove SPTE_EPT_*
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
spte.h already includes vmx.h; use the constants that it defines.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.h | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 317b9cd1537c..bc02a2e89a31 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -57,10 +57,6 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define ACC_USER_MASK PT_USER_MASK
#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
-/* The mask for the R/X bits in EPT PTEs */
-#define SPTE_EPT_READABLE_MASK 0x1ull
-#define SPTE_EPT_EXECUTABLE_MASK 0x4ull
-
#define SPTE_LEVEL_BITS 9
#define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
#define SPTE_INDEX(address, level) __PT_INDEX(address, level, SPTE_LEVEL_BITS)
@@ -73,8 +69,8 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
* restored only when a write is attempted to the page. This mask obviously
* must not overlap the A/D type mask.
*/
-#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (SPTE_EPT_READABLE_MASK | \
- SPTE_EPT_EXECUTABLE_MASK)
+#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (VMX_EPT_READABLE_MASK | \
+ VMX_EPT_EXECUTABLE_MASK)
#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 52
#define SHADOW_ACC_TRACK_SAVED_MASK (SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
--
2.52.0
* [PATCH 06/27] KVM: x86/mmu: merge make_spte_{non,}executable
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
The logic will become more complicated with the introduction of MBEC;
merge the two helpers so that it is written only once.
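The trick in the merged helper is that XORing the chosen mask against the
union of both masks yields the complementary mask to clear. A tiny
stand-alone demonstration; the mask values are placeholders, since
shadow_x_mask and shadow_nx_mask depend on the paging mode and one of the
two is usually zero:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Placeholder values (e.g. EPT has x = BIT(2), nx = 0). */
        uint64_t x_mask = 1ULL << 2, nx_mask = 1ULL << 63;

        for (int exec = 0; exec <= 1; exec++) {
                uint64_t set = exec ? x_mask : nx_mask;
                /* XOR against the union flips to the other mask. */
                uint64_t clear = set ^ (x_mask | nx_mask);

                printf("exec=%d set=%#018llx clear=%#018llx\n", exec,
                       (unsigned long long)set, (unsigned long long)clear);
        }
        return 0;
}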
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/spte.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 85a0473809b0..e9dc0ae44274 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -317,14 +317,16 @@ static u64 modify_spte_protections(u64 spte, u64 set, u64 clear)
return spte;
}
-static u64 make_spte_executable(u64 spte)
+static u64 make_spte_executable(u64 spte, u8 access)
{
- return modify_spte_protections(spte, shadow_x_mask, shadow_nx_mask);
-}
+ u64 set, clear;
-static u64 make_spte_nonexecutable(u64 spte)
-{
- return modify_spte_protections(spte, shadow_nx_mask, shadow_x_mask);
+ if (access & ACC_EXEC_MASK)
+ set = shadow_x_mask;
+ else
+ set = shadow_nx_mask;
+ clear = set ^ (shadow_nx_mask | shadow_x_mask);
+ return modify_spte_protections(spte, set, clear);
}
/*
@@ -356,8 +358,8 @@ u64 make_small_spte(struct kvm *kvm, u64 huge_spte,
* the page executable as the NX hugepage mitigation no longer
* applies.
*/
- if ((role.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled(kvm))
- child_spte = make_spte_executable(child_spte);
+ if (is_nx_huge_page_enabled(kvm))
+ child_spte = make_spte_executable(child_spte, role.access);
}
return child_spte;
@@ -379,7 +381,7 @@ u64 make_huge_spte(struct kvm *kvm, u64 small_spte, int level)
huge_spte &= KVM_HPAGE_MASK(level) | ~PAGE_MASK;
if (is_nx_huge_page_enabled(kvm))
- huge_spte = make_spte_nonexecutable(huge_spte);
+ huge_spte = make_spte_executable(huge_spte, 0);
return huge_spte;
}
--
2.52.0
* [PATCH 07/27] KVM: x86/mmu: rename and clarify BYTE_MASK
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
The BYTE_MASK macro is the central point of the black magic
in update_permission_bitmask(). Rename it to something
that relates to how it is used, and add a comment explaining
how it works.
Using shifts instead of powers of two was actually suggested by
David Hildenbrand back in 2017 for clarity [1], but I evidently
forgot his suggestion when applying the patch to kvm.git.
[1] https://lore.kernel.org/kvm/e4b5df86-31ae-2f4e-0666-393753e256df@redhat.com/
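As a worked example of what the renamed macro computes, the sketch below
rebuilds it as a loop and answers a permission query with a single AND,
the same way permission_fault() consumes mmu->permissions[] (ACC_* values
as of this patch: exec=1, write=2, user=4); illustration only:

#include <stdint.h>
#include <stdio.h>

#define ACC_EXEC_MASK   1
#define ACC_WRITE_MASK  2
#define ACC_USER_MASK   4

/* Bit i of the result is set iff access-combination i includes "access",
 * i.e. the loop form of ACC_BITS_MASK(). */
static uint8_t acc_bits_mask(unsigned int access)
{
        uint8_t mask = 0;

        for (unsigned int i = 1; i < 8; i++)
                if (i & access)
                        mask |= 1u << i;
        return mask;
}

int main(void)
{
        uint8_t w = acc_bits_mask(ACC_WRITE_MASK);
        uint8_t wf = (uint8_t)~w;       /* combos that fault on writes */

        /* A writable+user PTE does not fault, a user-only PTE does. */
        printf("W|U: %d\n", !!(wf & (1u << (ACC_WRITE_MASK | ACC_USER_MASK))));
        printf("U:   %d\n", !!(wf & (1u << ACC_USER_MASK)));
        return 0;
}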
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 55 ++++++++++++++++++++++++++++++------------
1 file changed, 39 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dd06453d5b72..d4bb693b3555 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5520,29 +5520,53 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
max_huge_page_level);
}
-#define BYTE_MASK(access) \
- ((1 & (access) ? 2 : 0) | \
- (2 & (access) ? 4 : 0) | \
- (3 & (access) ? 8 : 0) | \
- (4 & (access) ? 16 : 0) | \
- (5 & (access) ? 32 : 0) | \
- (6 & (access) ? 64 : 0) | \
- (7 & (access) ? 128 : 0))
-
+/*
+ * Build a mask with all combinations of PTE access rights that
+ * include the given access bit. The mask can be queried with
+ * "mask & (1 << access)", where access is a combination of
+ * ACC_* bits.
+ *
+ * By mixing and matching multiple masks returned by ACC_BITS_MASK,
+ * update_permission_bitmask() builds what is effectively a
+ * two-dimensional array of bools. The second dimension is
+ * provided by individual bits of permissions[pfec >> 1], and
+ * logical &, | and ~ operations operate on all the 8 possible
+ * combinations of ACC_* bits.
+ */
+#define ACC_BITS_MASK(access) \
+ ((1 & (access) ? 1 << 1 : 0) | \
+ (2 & (access) ? 1 << 2 : 0) | \
+ (3 & (access) ? 1 << 3 : 0) | \
+ (4 & (access) ? 1 << 4 : 0) | \
+ (5 & (access) ? 1 << 5 : 0) | \
+ (6 & (access) ? 1 << 6 : 0) | \
+ (7 & (access) ? 1 << 7 : 0))
static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
{
unsigned byte;
- const u8 x = BYTE_MASK(ACC_EXEC_MASK);
- const u8 w = BYTE_MASK(ACC_WRITE_MASK);
- const u8 u = BYTE_MASK(ACC_USER_MASK);
+ const u8 x = ACC_BITS_MASK(ACC_EXEC_MASK);
+ const u8 w = ACC_BITS_MASK(ACC_WRITE_MASK);
+ const u8 u = ACC_BITS_MASK(ACC_USER_MASK);
bool cr4_smep = is_cr4_smep(mmu);
bool cr4_smap = is_cr4_smap(mmu);
bool cr0_wp = is_cr0_wp(mmu);
bool efer_nx = is_efer_nx(mmu);
+ /*
+ * In hardware, page fault error codes are generated (as the name
+ * suggests) on any kind of page fault. permission_fault() and
+ * paging_tmpl.h already use the same bits after a successful page
+ * table walk, to indicate the kind of access being performed.
+ *
+ * However, PFERR_PRESENT_MASK and PFERR_RSVD_MASK are never set here,
+ * exactly because the page walk is successful. PFERR_PRESENT_MASK is
+ * removed by the shift, while PFERR_RSVD_MASK is repurposed in
+ * permission_fault() to indicate accesses that are *not* subject to
+ * SMAP restrictions.
+ */
for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
unsigned pfec = byte << 1;
@@ -5589,10 +5613,9 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
* - The access is supervisor mode
* - If implicit supervisor access or X86_EFLAGS_AC is clear
*
- * Here, we cover the first four conditions.
- * The fifth is computed dynamically in permission_fault();
- * PFERR_RSVD_MASK bit will be set in PFEC if the access is
- * *not* subject to SMAP restrictions.
+ * Here, we cover the first four conditions. The fifth
+ * is computed dynamically in permission_fault() and
+ * communicated by setting PFERR_RSVD_MASK.
*/
if (cr4_smap)
smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
--
2.52.0
* [PATCH 08/27] KVM: x86/mmu: introduce ACC_READ_MASK
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
So far, read permissions were only needed for EPT, which does not need
ACC_USER_MASK. Therefore, for EPT page tables ACC_USER_MASK was repurposed
as a read permission bit.
In order to implement nested MBEC, EPT will genuinely have four kinds of
accesses, and there will be no room for such hacks; bite the bullet at
last, enlarging the ACC_* format to four bits and each permissions[]
entry to 2^4 = 16 bits (u16).
The new code does not enforce that the XWR bits on non-execonly processors
have their R bit set, even when running nested: none of the shadow_*_mask
values have bit 0 set, and make_spte() genuinely relies on ACC_READ_MASK
being requested! This works because, if execonly is not supported by the
processor, shadow EPT will generate an EPT misconfig vmexit whenever the
XWR bits represent a non-readable page; therefore the pte_access argument
to make_spte() will always have ACC_READ_MASK set.
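Extending the patch 07 sketch to the new four-bit format shows the payoff:
an execute-only EPT mapping (pte_access == ACC_EXEC_MASK) now faults on
reads purely through the bitmap, without repurposing ACC_USER_MASK. Again
just an illustration:

#include <stdint.h>
#include <stdio.h>

/* ACC_* values after this patch. */
#define ACC_READ_MASK   1
#define ACC_WRITE_MASK  2
#define ACC_USER_MASK   4
#define ACC_EXEC_MASK   8

static uint16_t acc_bits_mask(unsigned int access)
{
        uint16_t mask = 0;

        for (unsigned int i = 1; i < 16; i++)
                if (i & access)
                        mask |= 1u << i;
        return mask;
}

int main(void)
{
        /* rf: combos that fault when the PFEC is a plain read. */
        uint16_t rf = (uint16_t)~acc_bits_mask(ACC_READ_MASK);

        printf("read of X-only PTE faults: %d\n",
               !!(rf & (1u << ACC_EXEC_MASK)));
        printf("read of R|X PTE faults:    %d\n",
               !!(rf & (1u << (ACC_READ_MASK | ACC_EXEC_MASK))));
        return 0;
}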
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 12 +++++-----
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 39 +++++++++++++++++++++------------
arch/x86/kvm/mmu/mmutrace.h | 3 ++-
arch/x86/kvm/mmu/paging_tmpl.h | 35 +++++++++++++++++------------
arch/x86/kvm/mmu/spte.c | 18 ++++++---------
arch/x86/kvm/mmu/spte.h | 5 +++--
arch/x86/kvm/vmx/capabilities.h | 5 -----
arch/x86/kvm/vmx/common.h | 5 +----
arch/x86/kvm/vmx/vmx.c | 3 +--
10 files changed, 67 insertions(+), 60 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6e4e3ef9b8c7..65671d3769f0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -327,11 +327,11 @@ struct kvm_kernel_irq_routing_entry;
* the number of unique SPs that can theoretically be created is 2^n, where n
* is the number of bits that are used to compute the role.
*
- * But, even though there are 20 bits in the mask below, not all combinations
+ * But, even though there are 21 bits in the mask below, not all combinations
* of modes and flags are possible:
*
* - invalid shadow pages are not accounted, mirror pages are not shadowed,
- * so the bits are effectively 18.
+ * so the bits are effectively 19.
*
* - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
* execonly and ad_disabled are only used for nested EPT which has
@@ -346,7 +346,7 @@ struct kvm_kernel_irq_routing_entry;
* cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
*
* Therefore, the maximum number of possible upper-level shadow pages for a
- * single gfn is a bit less than 2^13.
+ * single gfn is a bit less than 2^14.
*/
union kvm_mmu_page_role {
u32 word;
@@ -355,7 +355,7 @@ union kvm_mmu_page_role {
unsigned has_4_byte_gpte:1;
unsigned quadrant:2;
unsigned direct:1;
- unsigned access:3;
+ unsigned access:4;
unsigned invalid:1;
unsigned efer_nx:1;
unsigned cr0_wp:1;
@@ -365,7 +365,7 @@ union kvm_mmu_page_role {
unsigned guest_mode:1;
unsigned passthrough:1;
unsigned is_mirror:1;
- unsigned :4;
+ unsigned:3;
/*
* This is left at the top of the word so that
@@ -491,7 +491,7 @@ struct kvm_mmu {
* Byte index: page fault error code [4:1]
* Bit index: pte permissions in ACC_* format
*/
- u8 permissions[16];
+ u16 permissions[16];
u64 *pae_root;
u64 *pml4_root;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 830f46145692..23f37535c0ce 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -81,7 +81,7 @@ u8 kvm_mmu_get_max_tdp_level(void);
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
+void kvm_mmu_set_ept_masks(bool has_ad_bits);
void kvm_init_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d4bb693b3555..72c0e2c806c3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2033,7 +2033,7 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
*/
const union kvm_mmu_page_role sync_role_ign = {
.level = 0xf,
- .access = 0x7,
+ .access = ACC_ALL,
.quadrant = 0x3,
.passthrough = 0x1,
};
@@ -5530,7 +5530,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
* update_permission_bitmask() builds what is effectively a
* two-dimensional array of bools. The second dimension is
* provided by individual bits of permissions[pfec >> 1], and
- * logical &, | and ~ operations operate on all the 8 possible
+ * logical &, | and ~ operations operate on all the 16 possible
* combinations of ACC_* bits.
*/
#define ACC_BITS_MASK(access) \
@@ -5540,15 +5540,24 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
(4 & (access) ? 1 << 4 : 0) | \
(5 & (access) ? 1 << 5 : 0) | \
(6 & (access) ? 1 << 6 : 0) | \
- (7 & (access) ? 1 << 7 : 0))
+ (7 & (access) ? 1 << 7 : 0) | \
+ (8 & (access) ? 1 << 8 : 0) | \
+ (9 & (access) ? 1 << 9 : 0) | \
+ (10 & (access) ? 1 << 10 : 0) | \
+ (11 & (access) ? 1 << 11 : 0) | \
+ (12 & (access) ? 1 << 12 : 0) | \
+ (13 & (access) ? 1 << 13 : 0) | \
+ (14 & (access) ? 1 << 14 : 0) | \
+ (15 & (access) ? 1 << 15 : 0))
static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
{
unsigned byte;
- const u8 x = ACC_BITS_MASK(ACC_EXEC_MASK);
- const u8 w = ACC_BITS_MASK(ACC_WRITE_MASK);
- const u8 u = ACC_BITS_MASK(ACC_USER_MASK);
+ const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
+ const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
+ const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
+ const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
bool cr4_smep = is_cr4_smep(mmu);
bool cr4_smap = is_cr4_smap(mmu);
@@ -5571,24 +5580,26 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
unsigned pfec = byte << 1;
/*
- * Each "*f" variable has a 1 bit for each UWX value
+ * Each "*f" variable has a 1 bit for each ACC_* combo
* that causes a fault with the given PFEC.
*/
+ /* Faults from reads to non-readable pages */
+ u16 rf = (pfec & (PFERR_WRITE_MASK|PFERR_FETCH_MASK)) ? 0 : (u16)~r;
/* Faults from writes to non-writable pages */
- u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0;
+ u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
/* Faults from user mode accesses to supervisor pages */
- u8 uf = (pfec & PFERR_USER_MASK) ? (u8)~u : 0;
+ u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
/* Faults from fetches of non-executable pages*/
- u8 ff = (pfec & PFERR_FETCH_MASK) ? (u8)~x : 0;
+ u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
/* Faults from kernel mode fetches of user pages */
- u8 smepf = 0;
+ u16 smepf = 0;
/* Faults from kernel mode accesses of user pages */
- u8 smapf = 0;
+ u16 smapf = 0;
if (!ept) {
/* Faults from kernel mode accesses to user pages */
- u8 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
+ u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
/* Not really needed: !nx will cause pte.nx to fault */
if (!efer_nx)
@@ -5621,7 +5632,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
}
- mmu->permissions[byte] = ff | uf | wf | smepf | smapf;
+ mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
}
}
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 764e3015d021..dcfdfedfc4e9 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -25,7 +25,8 @@
#define KVM_MMU_PAGE_PRINTK() ({ \
const char *saved_ptr = trace_seq_buffer_ptr(p); \
static const char *access_str[] = { \
- "---", "--x", "w--", "w-x", "-u-", "-ux", "wu-", "wux" \
+ "----", "r---", "-w--", "rw--", "--u-", "r-u-", "-wu-", "rwu-", \
+ "---x", "r--x", "-w-x", "rw-x", "--ux", "r-ux", "-wux", "rwux" \
}; \
union kvm_mmu_page_role role; \
\
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 901cd2bd40b8..fb1b5d8b23e5 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -170,25 +170,24 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
return true;
}
-/*
- * For PTTYPE_EPT, a page table can be executable but not readable
- * on supported processors. Therefore, set_spte does not automatically
- * set bit 0 if execute only is supported. Here, we repurpose ACC_USER_MASK
- * to signify readability since it isn't used in the EPT case
- */
static inline unsigned FNAME(gpte_access)(u64 gpte)
{
unsigned access;
#if PTTYPE == PTTYPE_EPT
access = ((gpte & VMX_EPT_WRITABLE_MASK) ? ACC_WRITE_MASK : 0) |
((gpte & VMX_EPT_EXECUTABLE_MASK) ? ACC_EXEC_MASK : 0) |
- ((gpte & VMX_EPT_READABLE_MASK) ? ACC_USER_MASK : 0);
+ ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0);
#else
- BUILD_BUG_ON(ACC_EXEC_MASK != PT_PRESENT_MASK);
- BUILD_BUG_ON(ACC_EXEC_MASK != 1);
+ /*
+ * P is set here, so the page is always readable and W/U/!NX represent
+ * allowed accesses.
+ */
+ BUILD_BUG_ON(ACC_READ_MASK != PT_PRESENT_MASK);
+ BUILD_BUG_ON(ACC_WRITE_MASK != PT_WRITABLE_MASK);
+ BUILD_BUG_ON(ACC_USER_MASK != PT_USER_MASK);
+ BUILD_BUG_ON(ACC_EXEC_MASK & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK));
access = gpte & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK);
- /* Combine NX with P (which is set here) to get ACC_EXEC_MASK. */
- access ^= (gpte >> PT64_NX_SHIFT);
+ access |= gpte & PT64_NX_MASK ? 0 : ACC_EXEC_MASK;
#endif
return access;
@@ -501,10 +500,18 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
if (write_fault)
walker->fault.exit_qualification |= EPT_VIOLATION_ACC_WRITE;
- if (user_fault)
- walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
- if (fetch_fault)
+ else if (fetch_fault)
walker->fault.exit_qualification |= EPT_VIOLATION_ACC_INSTR;
+ else
+ walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
+
+ /*
+ * Accesses to guest paging structures are either "reads" or
+ * "read+write" accesses, so consider them the latter if write_fault
+ * is true.
+ */
+ if (access & PFERR_GUEST_PAGE_MASK)
+ walker->fault.exit_qualification |= EPT_VIOLATION_ACC_READ;
/*
* Note, pte_access holds the raw RWX bits from the EPTE, not
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index e9dc0ae44274..7b5f118ae211 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -194,12 +194,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
int is_host_mmio = -1;
bool wrprot = false;
- /*
- * For the EPT case, shadow_present_mask has no RWX bits set if
- * exec-only page table entries are supported. In that case,
- * ACC_USER_MASK and shadow_user_mask are used to represent
- * read access. See FNAME(gpte_access) in paging_tmpl.h.
- */
WARN_ON_ONCE((pte_access | shadow_present_mask) == SHADOW_NONPRESENT_VALUE);
if (sp->role.ad_disabled)
@@ -228,6 +222,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
pte_access &= ~ACC_EXEC_MASK;
}
+ if (pte_access & ACC_READ_MASK)
+ spte |= PT_PRESENT_MASK; /* or VMX_EPT_READABLE_MASK */
+
if (pte_access & ACC_EXEC_MASK)
spte |= shadow_x_mask;
else
@@ -391,6 +388,7 @@ u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
u64 spte = SPTE_MMU_PRESENT_MASK;
spte |= __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
+ PT_PRESENT_MASK /* or VMX_EPT_READABLE_MASK */ |
shadow_user_mask | shadow_x_mask | shadow_me_value;
if (ad_disabled)
@@ -491,18 +489,16 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only)
+void kvm_mmu_set_ept_masks(bool has_ad_bits)
{
kvm_ad_enabled = has_ad_bits;
- shadow_user_mask = VMX_EPT_READABLE_MASK;
+ shadow_user_mask = 0;
shadow_accessed_mask = VMX_EPT_ACCESS_BIT;
shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
shadow_nx_mask = 0ull;
shadow_x_mask = VMX_EPT_EXECUTABLE_MASK;
- /* VMX_EPT_SUPPRESS_VE_BIT is needed for W or X violation. */
- shadow_present_mask =
- (has_exec_only ? 0ull : VMX_EPT_READABLE_MASK) | VMX_EPT_SUPPRESS_VE_BIT;
+ shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
shadow_acc_track_mask = VMX_EPT_RWX_MASK;
shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index bc02a2e89a31..121bfb2217e8 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -52,10 +52,11 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#endif
-#define ACC_EXEC_MASK 1
+#define ACC_READ_MASK PT_PRESENT_MASK
#define ACC_WRITE_MASK PT_WRITABLE_MASK
#define ACC_USER_MASK PT_USER_MASK
-#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
+#define ACC_EXEC_MASK 8
+#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
#define SPTE_LEVEL_BITS 9
#define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 4e371c93ae16..609477f190e8 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -300,11 +300,6 @@ static inline bool cpu_has_vmx_flexpriority(void)
cpu_has_vmx_virtualize_apic_accesses();
}
-static inline bool cpu_has_vmx_ept_execute_only(void)
-{
- return vmx_capability.ept & VMX_EPT_EXECUTE_ONLY_BIT;
-}
-
static inline bool cpu_has_vmx_ept_4levels(void)
{
return vmx_capability.ept & VMX_EPT_PAGE_WALK_4_BIT;
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index adf925500b9e..1afbf272efae 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -85,11 +85,8 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
{
u64 error_code;
- /* Is it a read fault? */
- error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
- ? PFERR_USER_MASK : 0;
/* Is it a write fault? */
- error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
+ error_code = (exit_qualification & EPT_VIOLATION_ACC_WRITE)
? PFERR_WRITE_MASK : 0;
/* Is it a fetch fault? */
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 8b24e682535b..e27868fa4eb7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8798,8 +8798,7 @@ __init int vmx_hardware_setup(void)
set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
if (enable_ept)
- kvm_mmu_set_ept_masks(enable_ept_ad_bits,
- cpu_has_vmx_ept_execute_only());
+ kvm_mmu_set_ept_masks(enable_ept_ad_bits);
else
vt_x86_ops.get_mt_mask = NULL;
--
2.52.0
* [PATCH 09/27] KVM: x86/mmu: separate more EPT/non-EPT permission_fault()
From: Paolo Bonzini @ 2026-04-08 15:41 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Now that EPT no longer abuses ACC_USER_MASK, move its handling
entirely into the !ept branch. Merge smepf and ff into a single
variable, because EPT's "SMEP" (actually MBEC) is defined
differently.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 72c0e2c806c3..a26514124fe5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5556,7 +5556,6 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
- const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
bool cr4_smep = is_cr4_smep(mmu);
@@ -5589,21 +5588,24 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
/* Faults from writes to non-writable pages */
u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
/* Faults from user mode accesses to supervisor pages */
- u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
- /* Faults from fetches of non-executable pages*/
- u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
- /* Faults from kernel mode fetches of user pages */
- u16 smepf = 0;
+ u16 uf = 0;
+ /* Faults from fetches of non-executable pages */
+ u16 ff = 0;
/* Faults from kernel mode accesses of user pages */
u16 smapf = 0;
- if (!ept) {
+ if (ept) {
+ ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
+ } else {
+ const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
+
/* Faults from kernel mode accesses to user pages */
u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
- /* Not really needed: !nx will cause pte.nx to fault */
- if (!efer_nx)
- ff = 0;
+ uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
+
+ if (efer_nx)
+ ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
/* Allow supervisor writes if !cr0.wp */
if (!cr0_wp)
@@ -5611,7 +5613,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
/* Disallow supervisor fetches of user code if cr4.smep */
if (cr4_smep)
- smepf = (pfec & PFERR_FETCH_MASK) ? kf : 0;
+ ff |= (pfec & PFERR_FETCH_MASK) ? kf : 0;
/*
* SMAP:kernel-mode data accesses from user-mode
@@ -5632,7 +5634,7 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
}
- mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
+ mmu->permissions[byte] = ff | uf | wf | rf | smapf;
}
}
--
2.52.0
* [PATCH 10/27] KVM: x86/mmu: pass PFERR_GUEST_PAGE/FINAL_MASK to kvm_translate_gpa
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
The XS/XU bits for EPT are only applied to final accesses, and use the
U bit from the page walk itself. While strictly speaking not necessary
(any value of PFERR_USER_MASK would behave the same for page-table
accesses, because they are only reads and writes), it is clearer and
less hackish to apply MBEC only to PFERR_GUEST_FINAL_MASK accesses.
Allow kvm-intel.ko to distinguish the two cases.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/hyperv.c | 3 ++-
arch/x86/kvm/mmu/mmu.c | 3 ++-
arch/x86/kvm/mmu/paging_tmpl.h | 7 +++++--
arch/x86/kvm/x86.c | 3 ++-
4 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 9b140bbdc1d8..cf9dd565b894 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2041,7 +2041,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
* read with kvm_read_guest().
*/
if (!hc->fast && is_guest_mode(vcpu)) {
- hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa, 0, NULL);
+ hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa,
+ PFERR_GUEST_FINAL_MASK, NULL);
if (unlikely(hc->ingpa == INVALID_GPA))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
}
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a26514124fe5..a459177c0480 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4339,7 +4339,8 @@ static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
{
if (exception)
exception->error_code = 0;
- return kvm_translate_gpa(vcpu, mmu, vaddr, access, exception);
+ return kvm_translate_gpa(vcpu, mmu, vaddr, access | PFERR_GUEST_FINAL_MASK,
+ exception);
}
static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index fb1b5d8b23e5..567f8b77ffe0 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -376,7 +376,8 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
walker->pte_gpa[walker->level - 1] = pte_gpa;
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(table_gfn),
- nested_access, &walker->fault);
+ nested_access | PFERR_GUEST_PAGE_MASK,
+ &walker->fault);
/*
* FIXME: This can happen if emulation (for of an INS/OUTS
@@ -444,7 +445,9 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
gfn += pse36_gfn_delta(pte);
#endif
- real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access, &walker->fault);
+ real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn),
+ access | PFERR_GUEST_FINAL_MASK,
+ &walker->fault);
if (real_gpa == INVALID_GPA)
return 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd1c4a36b593..bbafd10fd72c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1070,7 +1070,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
* to an L1 GPA.
*/
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(pdpt_gfn),
- PFERR_USER_MASK | PFERR_WRITE_MASK, NULL);
+ PFERR_USER_MASK | PFERR_WRITE_MASK |
+ PFERR_GUEST_PAGE_MASK, NULL);
if (real_gpa == INVALID_GPA)
return 0;
--
2.52.0
* [PATCH 11/27] KVM: x86/mmu: pass pte_access for final nGPA->GPA walk
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
The XS/XU bits for EPT are only applied to final accesses, and use the
U bit from the page walk itself. This is available in the page walker
as pte_access & ACC_USER_MASK, but not to translate_nested_gpa(), so
pass it down.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/hyperv.c | 2 +-
arch/x86/kvm/mmu.h | 15 ++++++++++++---
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/paging_tmpl.h | 4 ++--
arch/x86/kvm/mmu/spte.h | 6 ------
arch/x86/kvm/x86.c | 5 +++--
6 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index cf9dd565b894..53688f7b76eb 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2042,7 +2042,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
*/
if (!hc->fast && is_guest_mode(vcpu)) {
hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa,
- PFERR_GUEST_FINAL_MASK, NULL);
+ PFERR_GUEST_FINAL_MASK, NULL, 0);
if (unlikely(hc->ingpa == INVALID_GPA))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
}
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 23f37535c0ce..635c2e5d8513 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -37,6 +37,12 @@ extern bool __read_mostly enable_mmio_caching;
#define PT32_ROOT_LEVEL 2
#define PT32E_ROOT_LEVEL 3
+#define ACC_READ_MASK PT_PRESENT_MASK
+#define ACC_WRITE_MASK PT_WRITABLE_MASK
+#define ACC_USER_MASK PT_USER_MASK
+#define ACC_EXEC_MASK 8
+#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
+
#define KVM_MMU_CR4_ROLE_BITS (X86_CR4_PSE | X86_CR4_PAE | X86_CR4_LA57 | \
X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE)
@@ -289,16 +295,19 @@ static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count)
}
gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
- struct x86_exception *exception);
+ struct x86_exception *exception,
+ u64 pte_access);
static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
struct kvm_mmu *mmu,
gpa_t gpa, u64 access,
- struct x86_exception *exception)
+ struct x86_exception *exception,
+ u64 pte_access)
{
if (mmu != &vcpu->arch.nested_mmu)
return gpa;
- return translate_nested_gpa(vcpu, gpa, access, exception);
+ return translate_nested_gpa(vcpu, gpa, access, exception,
+ pte_access);
}
static inline bool kvm_has_mirrored_tdp(const struct kvm *kvm)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a459177c0480..f911f7578613 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4340,7 +4340,7 @@ static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
if (exception)
exception->error_code = 0;
return kvm_translate_gpa(vcpu, mmu, vaddr, access | PFERR_GUEST_FINAL_MASK,
- exception);
+ exception, ACC_ALL);
}
static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 567f8b77ffe0..de8770d2fcfc 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -377,7 +377,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(table_gfn),
nested_access | PFERR_GUEST_PAGE_MASK,
- &walker->fault);
+ &walker->fault, 0);
/*
* FIXME: This can happen if emulation (for of an INS/OUTS
@@ -447,7 +447,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn),
access | PFERR_GUEST_FINAL_MASK,
- &walker->fault);
+ &walker->fault, pte_access);
if (real_gpa == INVALID_GPA)
return 0;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 121bfb2217e8..8a4c09c5cdbf 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -52,12 +52,6 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
#define SPTE_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
#endif
-#define ACC_READ_MASK PT_PRESENT_MASK
-#define ACC_WRITE_MASK PT_WRITABLE_MASK
-#define ACC_USER_MASK PT_USER_MASK
-#define ACC_EXEC_MASK 8
-#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
-
#define SPTE_LEVEL_BITS 9
#define SPTE_LEVEL_SHIFT(level) __PT_LEVEL_SHIFT(level, SPTE_LEVEL_BITS)
#define SPTE_INDEX(address, level) __PT_INDEX(address, level, SPTE_LEVEL_BITS)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bbafd10fd72c..15c1b2de3d93 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1071,7 +1071,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
*/
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(pdpt_gfn),
PFERR_USER_MASK | PFERR_WRITE_MASK |
- PFERR_GUEST_PAGE_MASK, NULL);
+ PFERR_GUEST_PAGE_MASK, NULL, 0);
if (real_gpa == INVALID_GPA)
return 0;
@@ -7824,7 +7824,8 @@ void kvm_get_segment(struct kvm_vcpu *vcpu,
}
gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
- struct x86_exception *exception)
+ struct x86_exception *exception,
+ u64 pte_access)
{
struct kvm_mmu *mmu = vcpu->arch.mmu;
gpa_t t_gpa;
--
2.52.0
* [PATCH 12/27] KVM: x86: make translate_nested_gpa vendor-specific
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
EPT and NPT have different rules for passing PFERR_USER_MASK to the
nested page table walk. In particular, for final addresses EPT
uses the U bit of the guest (nGVA->nGPA) walk.
While at it, remove PFERR_USER_MASK from the VMX version of the
function, since it is actually ignored by the tables that
update_permission_bitmask() generates for EPT.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 4 ++++
arch/x86/kvm/hyperv.c | 3 ++-
arch/x86/kvm/mmu.h | 9 +++------
arch/x86/kvm/svm/nested.c | 15 +++++++++++++++
arch/x86/kvm/vmx/nested.c | 12 ++++++++++++
arch/x86/kvm/x86.c | 16 ----------------
6 files changed, 36 insertions(+), 23 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 65671d3769f0..a20263a4e727 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1979,6 +1979,10 @@ struct kvm_x86_nested_ops {
struct kvm_nested_state *kvm_state);
bool (*get_nested_state_pages)(struct kvm_vcpu *vcpu);
int (*write_log_dirty)(struct kvm_vcpu *vcpu, gpa_t l2_gpa);
+ gpa_t (*translate_nested_gpa)(struct kvm_vcpu *vcpu, gpa_t gpa,
+ u64 access,
+ struct x86_exception *exception,
+ u64 pte_access);
int (*enable_evmcs)(struct kvm_vcpu *vcpu,
uint16_t *vmcs_version);
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 53688f7b76eb..f35fae3a7b3d 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2041,7 +2041,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
* read with kvm_read_guest().
*/
if (!hc->fast && is_guest_mode(vcpu)) {
- hc->ingpa = translate_nested_gpa(vcpu, hc->ingpa,
+ hc->ingpa = kvm_x86_ops.nested_ops->translate_nested_gpa(
+ vcpu, hc->ingpa,
PFERR_GUEST_FINAL_MASK, NULL, 0);
if (unlikely(hc->ingpa == INVALID_GPA))
return HV_STATUS_INVALID_HYPERCALL_INPUT;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 635c2e5d8513..63be5c5efed9 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -294,10 +294,6 @@ static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count)
atomic64_add(count, &kvm->stat.pages[level - 1]);
}
-gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
- struct x86_exception *exception,
- u64 pte_access);
-
static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
struct kvm_mmu *mmu,
gpa_t gpa, u64 access,
@@ -306,8 +302,9 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
{
if (mmu != &vcpu->arch.nested_mmu)
return gpa;
- return translate_nested_gpa(vcpu, gpa, access, exception,
- pte_access);
+ return kvm_x86_ops.nested_ops->translate_nested_gpa(vcpu, gpa, access,
+ exception,
+ pte_access);
}
static inline bool kvm_has_mirrored_tdp(const struct kvm *kvm)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b36c33255bed..3b670ee4eb26 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1968,8 +1968,23 @@ static bool svm_get_nested_state_pages(struct kvm_vcpu *vcpu)
return true;
}
+static gpa_t svm_translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa,
+ u64 access,
+ struct x86_exception *exception,
+ u64 pte_access)
+{
+ struct kvm_mmu *mmu = vcpu->arch.mmu;
+
+ BUG_ON(!mmu_is_nested(vcpu));
+
+ /* NPT walks are always user-walks */
+ access |= PFERR_USER_MASK;
+ return mmu->gva_to_gpa(vcpu, mmu, gpa, access, exception);
+}
+
struct kvm_x86_nested_ops svm_nested_ops = {
.leave_nested = svm_leave_nested,
+ .translate_nested_gpa = svm_translate_nested_gpa,
.is_exception_vmexit = nested_svm_is_exception_vmexit,
.check_events = svm_check_nested_events,
.triple_fault = nested_svm_triple_fault,
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 937aeb474af7..e4cb317807ab 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -7436,8 +7436,20 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
return 0;
}
+static gpa_t vmx_translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa,
+ u64 access,
+ struct x86_exception *exception,
+ u64 pte_access)
+{
+ struct kvm_mmu *mmu = vcpu->arch.mmu;
+
+ BUG_ON(!mmu_is_nested(vcpu));
+ return mmu->gva_to_gpa(vcpu, mmu, gpa, access, exception);
+}
+
struct kvm_x86_nested_ops vmx_nested_ops = {
.leave_nested = vmx_leave_nested,
+ .translate_nested_gpa = vmx_translate_nested_gpa,
.is_exception_vmexit = nested_vmx_is_exception_vmexit,
.check_events = vmx_check_nested_events,
.has_events = vmx_has_nested_events,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 15c1b2de3d93..0757b93e528d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7823,22 +7823,6 @@ void kvm_get_segment(struct kvm_vcpu *vcpu,
kvm_x86_call(get_segment)(vcpu, var, seg);
}
-gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
- struct x86_exception *exception,
- u64 pte_access)
-{
- struct kvm_mmu *mmu = vcpu->arch.mmu;
- gpa_t t_gpa;
-
- BUG_ON(!mmu_is_nested(vcpu));
-
- /* NPT walks are always user-walks */
- access |= PFERR_USER_MASK;
- t_gpa = mmu->gva_to_gpa(vcpu, mmu, gpa, access, exception);
-
- return t_gpa;
-}
-
gpa_t kvm_mmu_gva_to_gpa_read(struct kvm_vcpu *vcpu, gva_t gva,
struct x86_exception *exception)
{
--
2.52.0
* [PATCH 13/27] KVM: x86/mmu: split XS/XU bits for EPT
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (11 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 12/27] KVM: x86: make translate_nested_gpa vendor-specific Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 14/27] KVM: x86/mmu: move cr4_smep to base role Paolo Bonzini
` (13 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
When EPT is in use, replace ACC_USER_MASK with ACC_USER_EXEC_MASK,
so that supervisor and user-mode execution can be controlled
independently (ACC_USER_MASK would not allow a setting similar to
XU=0 XS=1 W=1 R=1).
Replace shadow_x_mask with shadow_xs_mask/shadow_xu_mask, to allow
setting XS and XU bits separately in EPT entries.
Note that ACC_USER_EXEC_MASK is already set through ACC_ALL in
the kvm_mmu_page roles, but this makes no visible difference in the
XU bit yet, because (for now) shadow_xs_mask == shadow_xu_mask.
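As a quick orientation for the diff below, the EPT branch of
make_spte() (i.e. shadow_nx_mask == 0) now boils down to:

    if (pte_access & ACC_EXEC_MASK)
        spte |= shadow_xs_mask;        /* supervisor-executable (XS) */
    if (pte_access & ACC_USER_EXEC_MASK)
        spte |= shadow_xu_mask;        /* user-executable (XU) */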
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu.h | 3 ++-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/mmutrace.h | 6 ++---
arch/x86/kvm/mmu/spte.c | 44 +++++++++++++++++++++++--------------
arch/x86/kvm/mmu/spte.h | 11 ++++++++--
5 files changed, 42 insertions(+), 24 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 63be5c5efed9..d8c13e43c2d7 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -39,7 +39,8 @@ extern bool __read_mostly enable_mmio_caching;
#define ACC_READ_MASK PT_PRESENT_MASK
#define ACC_WRITE_MASK PT_WRITABLE_MASK
-#define ACC_USER_MASK PT_USER_MASK
+#define ACC_USER_MASK PT_USER_MASK /* non EPT */
+#define ACC_USER_EXEC_MASK ACC_USER_MASK /* EPT only */
#define ACC_EXEC_MASK 8
#define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK | ACC_READ_MASK)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f911f7578613..ff87dca46cc6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5476,7 +5476,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
static inline bool boot_cpu_is_amd(void)
{
WARN_ON_ONCE(!tdp_enabled);
- return shadow_x_mask == 0;
+ return shadow_xs_mask == 0;
}
/*
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index dcfdfedfc4e9..3429c1413f42 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -357,8 +357,8 @@ TRACE_EVENT(
__entry->sptep = virt_to_phys(sptep);
__entry->level = level;
__entry->r = shadow_present_mask || (__entry->spte & PT_PRESENT_MASK);
- __entry->x = is_executable_pte(__entry->spte);
- __entry->u = shadow_user_mask ? !!(__entry->spte & shadow_user_mask) : -1;
+ __entry->x = (__entry->spte & (shadow_xs_mask | shadow_nx_mask)) == shadow_xs_mask;
+ __entry->u = !!(__entry->spte & (shadow_xu_mask | shadow_user_mask));
),
TP_printk("gfn %llx spte %llx (%s%s%s%s) level %d at %llx",
@@ -366,7 +366,7 @@ TRACE_EVENT(
__entry->r ? "r" : "-",
__entry->spte & PT_WRITABLE_MASK ? "w" : "-",
__entry->x ? "x" : "-",
- __entry->u == -1 ? "" : (__entry->u ? "u" : "-"),
+ __entry->u ? "u" : "-",
__entry->level, __entry->sptep
)
);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 7b5f118ae211..779ee44893b0 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -29,8 +29,9 @@ bool __read_mostly kvm_ad_enabled;
u64 __read_mostly shadow_host_writable_mask;
u64 __read_mostly shadow_mmu_writable_mask;
u64 __read_mostly shadow_nx_mask;
-u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
u64 __read_mostly shadow_user_mask;
+u64 __read_mostly shadow_xs_mask; /* mutually exclusive with nx_mask and user_mask */
+u64 __read_mostly shadow_xu_mask; /* mutually exclusive with nx_mask and user_mask */
u64 __read_mostly shadow_accessed_mask;
u64 __read_mostly shadow_dirty_mask;
u64 __read_mostly shadow_mmio_value;
@@ -217,21 +218,26 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
* would tie make_spte() further to vCPU/MMU state, and add complexity
* just to optimize a mode that is anything but performance critical.
*/
- if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
- is_nx_huge_page_enabled(vcpu->kvm)) {
+ if (level > PG_LEVEL_4K && is_nx_huge_page_enabled(vcpu->kvm)) {
pte_access &= ~ACC_EXEC_MASK;
+ if (shadow_xu_mask)
+ pte_access &= ~ACC_USER_EXEC_MASK;
}
if (pte_access & ACC_READ_MASK)
spte |= PT_PRESENT_MASK; /* or VMX_EPT_READABLE_MASK */
- if (pte_access & ACC_EXEC_MASK)
- spte |= shadow_x_mask;
- else
- spte |= shadow_nx_mask;
-
- if (pte_access & ACC_USER_MASK)
- spte |= shadow_user_mask;
+ if (shadow_nx_mask) {
+ if (!(pte_access & ACC_EXEC_MASK))
+ spte |= shadow_nx_mask;
+ if (pte_access & ACC_USER_MASK)
+ spte |= shadow_user_mask;
+ } else {
+ if (pte_access & ACC_EXEC_MASK)
+ spte |= shadow_xs_mask;
+ if (pte_access & ACC_USER_EXEC_MASK)
+ spte |= shadow_xu_mask;
+ }
if (level > PG_LEVEL_4K)
spte |= PT_PAGE_SIZE_MASK;
@@ -318,11 +324,13 @@ static u64 make_spte_executable(u64 spte, u8 access)
{
u64 set, clear;
- if (access & ACC_EXEC_MASK)
- set = shadow_x_mask;
+ if (shadow_nx_mask)
+ set = (access & ACC_EXEC_MASK) ? 0 : shadow_nx_mask;
else
- set = shadow_nx_mask;
- clear = set ^ (shadow_nx_mask | shadow_x_mask);
+ set =
+ (access & ACC_EXEC_MASK ? shadow_xs_mask : 0) |
+ (access & ACC_USER_EXEC_MASK ? shadow_xu_mask : 0);
+ clear = set ^ (shadow_nx_mask | shadow_xs_mask | shadow_xu_mask);
return modify_spte_protections(spte, set, clear);
}
@@ -389,7 +397,7 @@ u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
spte |= __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK |
PT_PRESENT_MASK /* or VMX_EPT_READABLE_MASK */ |
- shadow_user_mask | shadow_x_mask | shadow_me_value;
+ shadow_user_mask | shadow_xs_mask | shadow_xu_mask | shadow_me_value;
if (ad_disabled)
spte |= SPTE_TDP_AD_DISABLED;
@@ -497,7 +505,8 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits)
shadow_accessed_mask = VMX_EPT_ACCESS_BIT;
shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
shadow_nx_mask = 0ull;
- shadow_x_mask = VMX_EPT_EXECUTABLE_MASK;
+ shadow_xs_mask = VMX_EPT_EXECUTABLE_MASK;
+ shadow_xu_mask = VMX_EPT_EXECUTABLE_MASK;
shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
shadow_acc_track_mask = VMX_EPT_RWX_MASK;
@@ -548,7 +557,8 @@ void kvm_mmu_reset_all_pte_masks(void)
shadow_accessed_mask = PT_ACCESSED_MASK;
shadow_dirty_mask = PT_DIRTY_MASK;
shadow_nx_mask = PT64_NX_MASK;
- shadow_x_mask = 0;
+ shadow_xs_mask = 0;
+ shadow_xu_mask = 0;
shadow_present_mask = PT_PRESENT_MASK;
shadow_acc_track_mask = 0;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 8a4c09c5cdbf..0ed690f78e17 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -178,8 +178,9 @@ extern bool __read_mostly kvm_ad_enabled;
extern u64 __read_mostly shadow_host_writable_mask;
extern u64 __read_mostly shadow_mmu_writable_mask;
extern u64 __read_mostly shadow_nx_mask;
-extern u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
extern u64 __read_mostly shadow_user_mask;
+extern u64 __read_mostly shadow_xs_mask; /* mutually exclusive with nx_mask and user_mask */
+extern u64 __read_mostly shadow_xu_mask; /* mutually exclusive with nx_mask and user_mask */
extern u64 __read_mostly shadow_accessed_mask;
extern u64 __read_mostly shadow_dirty_mask;
extern u64 __read_mostly shadow_mmio_value;
@@ -357,7 +358,13 @@ static inline bool is_last_spte(u64 pte, int level)
static inline bool is_executable_pte(u64 spte)
{
- return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
+ /*
+	 * For now, return true if either the XS or XU bit is set.
+	 * This function is only used by fast_page_fault, which never
+	 * processes shadow EPT, and regular page tables always have
+	 * XS == XU.
+ */
+ return (spte & (shadow_xs_mask | shadow_xu_mask | shadow_nx_mask)) != shadow_nx_mask;
}
static inline kvm_pfn_t spte_to_pfn(u64 pte)
--
2.52.0
* [PATCH 14/27] KVM: x86/mmu: move cr4_smep to base role
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (12 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 13/27] KVM: x86/mmu: split XS/XU bits for EPT Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 15/27] KVM: VMX: enable use of MBEC Paolo Bonzini
` (12 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Shadow pages can be reused for the guest independently of the value of
CR4.SMEP (at least if CR0.WP=1). However, this is not true of EPT MBEC
pages, because presence of EPT entries is signaled by bits 0-2 when
MBEC is off, and bits 0-2 plus bit 10 when MBEC is on.
In preparation for enabling MBEC, move cr4_smep to the base role.
This makes the smep_andnot_wp bit redundant, so remove it.
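In other words, the "present" test for an EPT entry becomes (a sketch
anticipating the guest page-table walker changes later in this series;
"mbec" stands for the new role bit):

    bool present = pte & 7;               /* bits 2:0 (R/W/X) */
    if (mbec)
        present = pte & (7 | BIT(10));    /* bit 10 (XU) also counts */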
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Documentation/virt/kvm/x86/mmu.rst | 10 ++++------
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 23 +++++++++++++++--------
arch/x86/kvm/mmu/mmu.c | 6 +++---
4 files changed, 23 insertions(+), 17 deletions(-)
diff --git a/Documentation/virt/kvm/x86/mmu.rst b/Documentation/virt/kvm/x86/mmu.rst
index 2b3b6d442302..666aa179601a 100644
--- a/Documentation/virt/kvm/x86/mmu.rst
+++ b/Documentation/virt/kvm/x86/mmu.rst
@@ -184,10 +184,8 @@ Shadow pages contain the following information:
Contains the value of efer.nx for which the page is valid.
role.cr0_wp:
Contains the value of cr0.wp for which the page is valid.
- role.smep_andnot_wp:
- Contains the value of cr4.smep && !cr0.wp for which the page is valid
- (pages for which this is true are different from other pages; see the
- treatment of cr0.wp=0 below).
+ role.cr4_smep:
+ Contains the value of cr4.smep for which the page is valid.
role.smap_andnot_wp:
Contains the value of cr4.smap && !cr0.wp for which the page is valid
(pages for which this is true are different from other pages; see the
@@ -435,8 +433,8 @@ from being written by the kernel after cr0.wp has changed to 1, we make
the value of cr0.wp part of the page role. This means that an spte created
with one value of cr0.wp cannot be used when cr0.wp has a different value -
it will simply be missed by the shadow page lookup code. A similar issue
-exists when an spte created with cr0.wp=0 and cr4.smep=0 is used after
-changing cr4.smep to 1. To avoid this, the value of !cr0.wp && cr4.smep
+exists when an spte created with cr0.wp=0 and cr4.smap=0 is used after
+changing cr4.smap to 1. To avoid this, the value of !cr0.wp && cr4.smap
is also made a part of the page role.
Large pages
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index de709fb5bd76..a02b486cc6fe 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -93,6 +93,7 @@ KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
+KVM_X86_OP_OPTIONAL_RET0(tdp_has_smep)
KVM_X86_OP(load_mmu_pgd)
KVM_X86_OP_OPTIONAL(link_external_spt)
KVM_X86_OP_OPTIONAL(set_external_spte)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index a20263a4e727..8b4a55cc0918 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -342,8 +342,8 @@ struct kvm_kernel_irq_routing_entry;
* paging has exactly one upper level, making level completely redundant
* when has_4_byte_gpte=1.
*
- * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
- * cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
+ * - on top of this, smap_andnot_wp is only set if cr0_wp=0,
+ * therefore these two bits only give rise to 3 possibilities.
*
* Therefore, the maximum number of possible upper-level shadow pages for a
* single gfn is a bit less than 2^14.
@@ -359,12 +359,19 @@ union kvm_mmu_page_role {
unsigned invalid:1;
unsigned efer_nx:1;
unsigned cr0_wp:1;
- unsigned smep_andnot_wp:1;
unsigned smap_andnot_wp:1;
unsigned ad_disabled:1;
unsigned guest_mode:1;
unsigned passthrough:1;
unsigned is_mirror:1;
+
+ /*
+ * cr4_smep is also set for EPT MBEC. Because it affects
+ * which pages are considered non-present (bit 10 additionally
+ * must be zero if MBEC is on) it has to be in the base role.
+ */
+ unsigned cr4_smep:1;
+
unsigned:3;
/*
@@ -391,10 +398,10 @@ union kvm_mmu_page_role {
* tables (because KVM doesn't support Protection Keys with shadow paging), and
* CR0.PG, CR4.PAE, and CR4.PSE are indirectly reflected in role.level.
*
- * Note, SMEP and SMAP are not redundant with sm*p_andnot_wp in the page role.
- * If CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMEP and
- * SMAP, but the MMU's permission checks for software walks need to be SMEP and
- * SMAP aware regardless of CR0.WP.
+ * Note, SMAP is not redundant with smap_andnot_wp in the page role. If
+ * CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMAP,
+ * but the MMU's permission checks for software walks need to be SMAP
+ * aware regardless of CR0.WP.
*/
union kvm_mmu_extended_role {
u32 word;
@@ -404,7 +411,6 @@ union kvm_mmu_extended_role {
unsigned int cr4_pse:1;
unsigned int cr4_pke:1;
unsigned int cr4_smap:1;
- unsigned int cr4_smep:1;
unsigned int cr4_la57:1;
unsigned int efer_lma:1;
};
@@ -1856,6 +1862,7 @@ struct kvm_x86_ops {
int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
u8 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+ bool (*tdp_has_smep)(struct kvm *kvm);
void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
int root_level);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ff87dca46cc6..c4da7a42c77f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -227,7 +227,7 @@ static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \
}
BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
-BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);
+BUILD_MMU_ROLE_ACCESSOR(base, cr4, smep);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
@@ -5749,7 +5749,7 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
role.base.efer_nx = ____is_efer_nx(regs);
role.base.cr0_wp = ____is_cr0_wp(regs);
- role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
+ role.base.cr4_smep = ____is_cr4_smep(regs);
role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
@@ -5761,7 +5761,6 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
else
role.base.level = PT32_ROOT_LEVEL;
- role.ext.cr4_smep = ____is_cr4_smep(regs);
role.ext.cr4_smap = ____is_cr4_smap(regs);
role.ext.cr4_pse = ____is_cr4_pse(regs);
@@ -5820,6 +5819,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
role.access = ACC_ALL;
role.cr0_wp = true;
+ role.cr4_smep = kvm_x86_call(tdp_has_smep)(vcpu->kvm);
role.efer_nx = true;
role.smm = cpu_role.base.smm;
role.guest_mode = cpu_role.base.guest_mode;
--
2.52.0
* [PATCH 15/27] KVM: VMX: enable use of MBEC
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (13 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 14/27] KVM: x86/mmu: move cr4_smep to base role Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 16/27] KVM: nVMX: pass advanced EPT violation vmexit info to guest Paolo Bonzini
` (11 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
If available, set SECONDARY_EXEC_MODE_BASED_EPT_EXEC in the secondary
execution controls and configure XS and XU separately (even though,
for now, they are always set together).
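The net effect on the SPTE masks is (condensed from the
kvm_mmu_set_ept_masks() hunk below):

    shadow_xs_mask = VMX_EPT_EXECUTABLE_MASK;                  /* bit 2  */
    shadow_xu_mask = has_mbec ? VMX_EPT_USER_EXECUTABLE_MASK   /* bit 10 */
                              : VMX_EPT_EXECUTABLE_MASK;

The feature can be turned off with the new module parameter, e.g.
"modprobe kvm_intel mbec=0".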
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/mmu.h | 7 ++++++-
arch/x86/kvm/mmu/spte.c | 6 +++---
arch/x86/kvm/mmu/spte.h | 5 +++--
arch/x86/kvm/vmx/capabilities.h | 7 +++++++
arch/x86/kvm/vmx/common.h | 10 +++++-----
arch/x86/kvm/vmx/main.c | 9 +++++++++
arch/x86/kvm/vmx/nested.c | 1 +
arch/x86/kvm/vmx/vmx.c | 16 +++++++++++++++-
arch/x86/kvm/vmx/vmx.h | 1 +
arch/x86/kvm/vmx/x86_ops.h | 1 +
11 files changed, 54 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 59e3b095a315..2b449a3948d3 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -608,9 +608,12 @@ enum vm_entry_failure_code {
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
#define EPT_VIOLATION_RWX_TO_PROT(__epte) (((__epte) & VMX_EPT_RWX_MASK) << 3)
+#define EPT_VIOLATION_USER_EXEC_TO_PROT(__epte) (((__epte) & VMX_EPT_USER_EXECUTABLE_MASK) >> 4)
static_assert(EPT_VIOLATION_RWX_TO_PROT(VMX_EPT_RWX_MASK) ==
(EPT_VIOLATION_PROT_READ | EPT_VIOLATION_PROT_WRITE | EPT_VIOLATION_PROT_EXEC));
+static_assert(EPT_VIOLATION_USER_EXEC_TO_PROT(VMX_EPT_USER_EXECUTABLE_MASK) ==
+ (EPT_VIOLATION_PROT_USER_EXEC));
/*
* Exit Qualifications for NOTIFY VM EXIT
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index d8c13e43c2d7..d15f908d048f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -83,12 +83,17 @@ static inline gfn_t kvm_mmu_max_gfn(void)
return (1ULL << (max_gpa_bits - PAGE_SHIFT)) - 1;
}
+static inline bool mmu_has_mbec(struct kvm_mmu *mmu)
+{
+ return mmu->root_role.cr4_smep;
+}
+
u8 kvm_mmu_get_max_tdp_level(void);
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits);
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_mbec);
void kvm_init_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 779ee44893b0..6da5c73d1004 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -497,7 +497,7 @@ void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_mmu_set_me_spte_mask);
-void kvm_mmu_set_ept_masks(bool has_ad_bits)
+void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_mbec)
{
kvm_ad_enabled = has_ad_bits;
@@ -506,10 +506,10 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits)
shadow_dirty_mask = VMX_EPT_DIRTY_BIT;
shadow_nx_mask = 0ull;
shadow_xs_mask = VMX_EPT_EXECUTABLE_MASK;
- shadow_xu_mask = VMX_EPT_EXECUTABLE_MASK;
+ shadow_xu_mask = has_mbec ? VMX_EPT_USER_EXECUTABLE_MASK : VMX_EPT_EXECUTABLE_MASK;
shadow_present_mask = VMX_EPT_SUPPRESS_VE_BIT;
- shadow_acc_track_mask = VMX_EPT_RWX_MASK;
+ shadow_acc_track_mask = VMX_EPT_RWX_MASK | shadow_xu_mask;
shadow_host_writable_mask = EPT_SPTE_HOST_WRITABLE;
shadow_mmu_writable_mask = EPT_SPTE_MMU_WRITABLE;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 0ed690f78e17..f5261d993eac 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -24,7 +24,7 @@
* - bits 55 (EPT only): MMU-writable
* - bits 56-59: unused
* - bits 60-61: type of A/D tracking
- * - bits 62: unused
+ * - bits 62 (EPT only): saved XU bit for disabled AD
*/
/*
@@ -65,7 +65,8 @@ static_assert(SPTE_TDP_AD_ENABLED == 0);
* must not overlap the A/D type mask.
*/
#define SHADOW_ACC_TRACK_SAVED_BITS_MASK (VMX_EPT_READABLE_MASK | \
- VMX_EPT_EXECUTABLE_MASK)
+ VMX_EPT_EXECUTABLE_MASK | \
+ VMX_EPT_USER_EXECUTABLE_MASK)
#define SHADOW_ACC_TRACK_SAVED_BITS_SHIFT 52
#define SHADOW_ACC_TRACK_SAVED_MASK (SHADOW_ACC_TRACK_SAVED_BITS_MASK << \
SHADOW_ACC_TRACK_SAVED_BITS_SHIFT)
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 609477f190e8..83d68028d414 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -15,6 +15,7 @@ extern bool __read_mostly enable_ept;
extern bool __read_mostly enable_unrestricted_guest;
extern bool __read_mostly enable_ept_ad_bits;
extern bool __read_mostly enable_pml;
+extern bool __read_mostly enable_mbec;
extern int __read_mostly pt_mode;
#define PT_MODE_SYSTEM 0
@@ -406,4 +407,10 @@ static inline bool cpu_has_notify_vmexit(void)
SECONDARY_EXEC_NOTIFY_VM_EXITING;
}
+static inline bool cpu_has_ept_mbec(void)
+{
+ return vmcs_config.cpu_based_2nd_exec_ctrl &
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+}
+
#endif /* __KVM_X86_VMX_CAPS_H */
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 1afbf272efae..40fa72f31fc7 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -91,15 +91,15 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
/* Is it a fetch fault? */
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
- /*
- * ept page table entry is present?
- * note: unconditionally clear USER_EXEC until mode-based
- * execute control is implemented
- */
+ /* ept page table entry is present? */
error_code |= (exit_qualification &
(EPT_VIOLATION_PROT_MASK & ~EPT_VIOLATION_PROT_USER_EXEC))
? PFERR_PRESENT_MASK : 0;
+ if (mmu_has_mbec(vcpu->arch.mmu))
+ error_code |= (exit_qualification & EPT_VIOLATION_PROT_USER_EXEC)
+ ? PFERR_PRESENT_MASK : 0;
+
if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index a46ccd670785..c0dd506bed64 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -750,6 +750,14 @@ static int vt_set_identity_map_addr(struct kvm *kvm, u64 ident_addr)
return vmx_set_identity_map_addr(kvm, ident_addr);
}
+static bool vt_tdp_has_smep(struct kvm *kvm)
+{
+ if (is_td(kvm))
+ return false;
+
+ return vmx_tdp_has_smep(kvm);
+}
+
static u64 vt_get_l2_tsc_offset(struct kvm_vcpu *vcpu)
{
/* TDX doesn't support L2 guest at the moment. */
@@ -961,6 +969,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
.set_tss_addr = vt_op(set_tss_addr),
.set_identity_map_addr = vt_op(set_identity_map_addr),
.get_mt_mask = vmx_get_mt_mask,
+ .tdp_has_smep = vt_op(tdp_has_smep),
.get_exit_info = vt_op(get_exit_info),
.get_entry_info = vt_op(get_entry_info),
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e4cb317807ab..cdc35a2728d9 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2440,6 +2440,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
SECONDARY_EXEC_APIC_REGISTER_VIRT |
SECONDARY_EXEC_ENABLE_VMFUNC |
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC |
SECONDARY_EXEC_DESC);
if (nested_cpu_has(vmcs12,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e27868fa4eb7..3905bc85a46c 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -113,6 +113,9 @@ module_param(emulate_invalid_guest_state, bool, 0444);
static bool __read_mostly fasteoi = 1;
module_param(fasteoi, bool, 0444);
+bool __read_mostly enable_mbec = 1;
+module_param_named(mbec, enable_mbec, bool, 0444);
+
module_param(enable_apicv, bool, 0444);
module_param(enable_ipiv, bool, 0444);
@@ -2809,6 +2812,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
return -EIO;
vmx_cap->ept = 0;
+ _cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
}
if (!(_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_VPID) &&
@@ -4844,6 +4848,9 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
*/
exec_control &= ~SECONDARY_EXEC_ENABLE_VMFUNC;
+ if (!enable_mbec)
+ exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+
/* SECONDARY_EXEC_DESC is enabled/disabled on writes to CR4.UMIP,
* in vmx_set_cr4. */
exec_control &= ~SECONDARY_EXEC_DESC;
@@ -7932,6 +7939,11 @@ u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT);
}
+bool vmx_tdp_has_smep(struct kvm *kvm)
+{
+ return enable_mbec;
+}
+
static void vmcs_set_secondary_exec_control(struct vcpu_vmx *vmx, u32 new_ctl)
{
/*
@@ -8779,6 +8791,8 @@ __init int vmx_hardware_setup(void)
ple_window_shrink = 0;
}
+ if (!cpu_has_ept_mbec())
+ enable_mbec = 0;
if (!cpu_has_vmx_apicv())
enable_apicv = 0;
if (!enable_apicv)
@@ -8798,7 +8812,7 @@ __init int vmx_hardware_setup(void)
set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
if (enable_ept)
- kvm_mmu_set_ept_masks(enable_ept_ad_bits);
+ kvm_mmu_set_ept_masks(enable_ept_ad_bits, enable_mbec);
else
vt_x86_ops.get_mt_mask = NULL;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 70bfe81dea54..594717e619d9 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -570,6 +570,7 @@ static inline u8 vmx_get_rvi(void)
SECONDARY_EXEC_ENABLE_VMFUNC | \
SECONDARY_EXEC_BUS_LOCK_DETECTION | \
SECONDARY_EXEC_NOTIFY_VM_EXITING | \
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC | \
SECONDARY_EXEC_ENCLS_EXITING | \
SECONDARY_EXEC_EPT_VIOLATION_VE)
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index d09abeac2b56..69cf276be88e 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -103,6 +103,7 @@ void vmx_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr);
int vmx_set_identity_map_addr(struct kvm *kvm, u64 ident_addr);
u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+bool vmx_tdp_has_smep(struct kvm *kvm);
void vmx_get_exit_info(struct kvm_vcpu *vcpu, u32 *reason,
u64 *info1, u64 *info2, u32 *intr_info, u32 *error_code);
--
2.52.0
* [PATCH 16/27] KVM: nVMX: pass advanced EPT violation vmexit info to guest
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (14 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 15/27] KVM: VMX: enable use of MBEC Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 17/27] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations Paolo Bonzini
` (10 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
KVM will use advanced vmexit information for EPT violations to
virtualize MBEC. Pass the feature through to the guest as well, since
doing so is easy and allows testing MBEC under nested virtualization.
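For reference, the new bits are only meaningful together with bit 7
(EPT_VIOLATION_GVA_IS_VALID), and describe the guest's own linear->GPA
translation rather than the EPT entries ("exit_qual" is shorthand for
the exit qualification; the comments follow the macro names, see also
the next patch):

    exit_qual & EPT_VIOLATION_GVA_USER;      /* guest walk: user page */
    exit_qual & EPT_VIOLATION_GVA_WRITABLE;  /* guest walk: writable  */
    exit_qual & EPT_VIOLATION_GVA_NX;        /* guest walk: no-exec   */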
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/vmx.h | 4 ++++
arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
arch/x86/kvm/vmx/nested.c | 13 +++++++++----
3 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 2b449a3948d3..fcd623719334 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -524,6 +524,7 @@ enum vmcs_field {
#define VMX_EPT_1GB_PAGE_BIT (1ull << 17)
#define VMX_EPT_INVEPT_BIT (1ull << 20)
#define VMX_EPT_AD_BIT (1ull << 21)
+#define VMX_EPT_ADVANCED_VMEXIT_INFO_BIT (1ull << 22)
#define VMX_EPT_EXTENT_CONTEXT_BIT (1ull << 25)
#define VMX_EPT_EXTENT_GLOBAL_BIT (1ull << 26)
@@ -606,6 +607,9 @@ enum vm_entry_failure_code {
EPT_VIOLATION_PROT_USER_EXEC)
#define EPT_VIOLATION_GVA_IS_VALID BIT(7)
#define EPT_VIOLATION_GVA_TRANSLATED BIT(8)
+#define EPT_VIOLATION_GVA_USER BIT(9)
+#define EPT_VIOLATION_GVA_WRITABLE BIT(10)
+#define EPT_VIOLATION_GVA_NX BIT(11)
#define EPT_VIOLATION_RWX_TO_PROT(__epte) (((__epte) & VMX_EPT_RWX_MASK) << 3)
#define EPT_VIOLATION_USER_EXEC_TO_PROT(__epte) (((__epte) & VMX_EPT_USER_EXECUTABLE_MASK) >> 4)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index de8770d2fcfc..f5f9e745f21d 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -494,7 +494,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
* [2:0] - Derive from the access bits. The exit_qualification might be
* out of date if it is serving an EPT misconfiguration.
* [5:3] - Calculated by the page walk of the guest EPT page tables
- * [7:8] - Derived from [7:8] of real exit_qualification
+ * [7:11] - Derived from [7:11] of real exit_qualification
*
* The other bits are set to 0.
*/
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index cdc35a2728d9..e0ef2cb8f2cd 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -443,10 +443,14 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
vm_exit_reason = EXIT_REASON_EPT_MISCONFIG;
exit_qualification = 0;
} else {
+ u64 mask = EPT_VIOLATION_GVA_IS_VALID |
+ EPT_VIOLATION_GVA_TRANSLATED;
+ if (vmx->nested.msrs.ept_caps & VMX_EPT_ADVANCED_VMEXIT_INFO_BIT)
+ mask |= EPT_VIOLATION_GVA_USER |
+ EPT_VIOLATION_GVA_WRITABLE |
+ EPT_VIOLATION_GVA_NX;
exit_qualification = fault->exit_qualification;
- exit_qualification |= vmx_get_exit_qual(vcpu) &
- (EPT_VIOLATION_GVA_IS_VALID |
- EPT_VIOLATION_GVA_TRANSLATED);
+ exit_qualification |= vmx_get_exit_qual(vcpu) & mask;
vm_exit_reason = EXIT_REASON_EPT_VIOLATION;
}
@@ -7238,7 +7242,8 @@ static void nested_vmx_setup_secondary_ctls(u32 ept_caps,
VMX_EPT_PAGE_WALK_5_BIT |
VMX_EPTP_WB_BIT |
VMX_EPT_INVEPT_BIT |
- VMX_EPT_EXECUTE_ONLY_BIT;
+ VMX_EPT_EXECUTE_ONLY_BIT |
+ VMX_EPT_ADVANCED_VMEXIT_INFO_BIT;
msrs->ept_caps &= ept_caps;
msrs->ept_caps |= VMX_EPT_EXTENT_GLOBAL_BIT |
--
2.52.0
* [PATCH 17/27] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (15 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 16/27] KVM: nVMX: pass advanced EPT violation vmexit info to guest Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 18/27] KVM: x86/mmu: add support for MBEC to EPT page table walks Paolo Bonzini
` (9 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
For EPT, PFERR_USER_MASK refers not to the CPL of the guest,
but to the AND of the U bits encountered while walking guest
page tables; this is consistent with how MBEC differentiates
between XS and XU. This information is available through the
"advanced vmexit information for EPT violations" feature.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/vmx/common.h | 6 +++++-
arch/x86/kvm/vmx/vmx.c | 10 ++++++++++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 40fa72f31fc7..48520fa1c8e8 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -100,9 +100,13 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
error_code |= (exit_qualification & EPT_VIOLATION_PROT_USER_EXEC)
? PFERR_PRESENT_MASK : 0;
- if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
+ if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID) {
error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
+ if ((exit_qualification & (EPT_VIOLATION_GVA_TRANSLATED|EPT_VIOLATION_GVA_USER))
+ == (EPT_VIOLATION_GVA_TRANSLATED|EPT_VIOLATION_GVA_USER))
+ error_code |= PFERR_USER_MASK;
+ }
if (vt_is_tdx_private_gpa(vcpu->kvm, gpa))
error_code |= PFERR_PRIVATE_ACCESS;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3905bc85a46c..53dc78066650 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2826,6 +2826,16 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
vmx_cap->vpid = 0;
}
+ /*
+	 * Virtualizing MBEC requires advanced vmexit information in order
+	 * to distinguish supervisor and user accesses. For simplicity and
+	 * clarity, disable MBEC entirely if advanced vmexit information is
+	 * not available; this way, mbec=1 in the kvm_intel module parameters
+	 * implies availability to nested guests as well.
+ */
+ if (!(vmx_cap->ept & VMX_EPT_ADVANCED_VMEXIT_INFO_BIT))
+ _cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
+
if (!cpu_has_sgx())
_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_ENCLS_EXITING;
--
2.52.0
* [PATCH 18/27] KVM: x86/mmu: add support for MBEC to EPT page table walks
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (16 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 17/27] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 19/27] KVM: nVMX: advertise MBEC to nested guests Paolo Bonzini
` (8 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Extend the page walker to propagate bit 10 of the guest's EPT PTEs
into ACC_USER_EXEC_MASK, and from there into bit 6 of the exit
qualification of EPT violation VM exits.
Note that while mmu_has_mbec()/cr4_smep affect the interpretation of
ACC_USER_EXEC_MASK and add bit 10 as a "present bit" in guest EPT page
table entries, they do not affect how KVM operates on SPTEs. That's
because the MMU uses explicit ACC_USER_EXEC_MASK/shadow_xu_mask even for
the non-nested EPT; the only difference is that ACC_USER_EXEC_MASK and
ACC_EXEC_MASK will always be set in tandem outside the nested scenario.
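On the permission-check side, the fetch-fault mask in
update_permission_bitmask() becomes (condensed from the hunk below):

    if (pfec & PFERR_FETCH_MASK) {
        /* Ignore XU unless MBEC is enabled. */
        if (cr4_smep)
            ff = (pfec & PFERR_USER_MASK) ? (u16)~xu : (u16)~xs;
        else
            ff = (u16)~xs;
    }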
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 13 +++++++++++--
arch/x86/kvm/mmu/paging_tmpl.h | 27 +++++++++++++++++++++------
arch/x86/kvm/mmu/spte.h | 2 ++
arch/x86/kvm/vmx/nested.c | 9 +++++++++
4 files changed, 43 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c4da7a42c77f..07bdc8f9b0cb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5555,7 +5555,6 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
{
unsigned byte;
- const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
@@ -5596,8 +5595,18 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
u16 smapf = 0;
if (ept) {
- ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
+ const u16 xs = ACC_BITS_MASK(ACC_EXEC_MASK);
+ const u16 xu = ACC_BITS_MASK(ACC_USER_EXEC_MASK);
+
+ if (pfec & PFERR_FETCH_MASK) {
+ /* Ignore XU unless MBEC is enabled. */
+ if (cr4_smep)
+ ff = pfec & PFERR_USER_MASK ? (u16)~xu : (u16)~xs;
+ else
+ ff = (u16)~xs;
+ }
} else {
+ const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
/* Faults from kernel mode accesses to user pages */
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f5f9e745f21d..7d7c617885fa 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -124,12 +124,17 @@ static inline void FNAME(protect_clean_gpte)(struct kvm_mmu *mmu, unsigned *acce
*access &= mask;
}
-static inline int FNAME(is_present_gpte)(unsigned long pte)
+static inline int FNAME(is_present_gpte)(struct kvm_mmu *mmu,
+ unsigned long pte)
{
#if PTTYPE != PTTYPE_EPT
return pte & PT_PRESENT_MASK;
#else
- return pte & 7;
+ /*
+ * For EPT, an entry is present if any of bits 2:0 are set.
+ * With mode-based execute control, bit 10 also indicates presence.
+ */
+ return pte & (7 | (mmu_has_mbec(mmu) ? VMX_EPT_USER_EXECUTABLE_MASK : 0));
#endif
}
@@ -152,7 +157,7 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
struct kvm_mmu_page *sp, u64 *spte,
u64 gpte)
{
- if (!FNAME(is_present_gpte)(gpte))
+ if (!FNAME(is_present_gpte)(vcpu->arch.mmu, gpte))
goto no_present;
/* Prefetch only accessed entries (unless A/D bits are disabled). */
@@ -173,10 +178,17 @@ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
static inline unsigned FNAME(gpte_access)(u64 gpte)
{
unsigned access;
+ /*
+ * Set bits in ACC_*_MASK even if they might not be used in the
+ * actual checks. For example, if EFER.NX is clear permission_fault()
+ * will ignore ACC_EXEC_MASK, and if MBEC is disabled it will
+ * ignore ACC_USER_EXEC_MASK.
+ */
#if PTTYPE == PTTYPE_EPT
access = ((gpte & VMX_EPT_WRITABLE_MASK) ? ACC_WRITE_MASK : 0) |
((gpte & VMX_EPT_EXECUTABLE_MASK) ? ACC_EXEC_MASK : 0) |
- ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0);
+ ((gpte & VMX_EPT_READABLE_MASK) ? ACC_READ_MASK : 0) |
+ ((gpte & VMX_EPT_USER_EXECUTABLE_MASK) ? ACC_USER_EXEC_MASK : 0);
#else
/*
* P is set here, so the page is always readable and W/U/!NX represent
@@ -331,7 +343,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
if (walker->level == PT32E_ROOT_LEVEL) {
pte = mmu->get_pdptr(vcpu, (addr >> 30) & 3);
trace_kvm_mmu_paging_element(pte, walker->level);
- if (!FNAME(is_present_gpte)(pte))
+ if (!FNAME(is_present_gpte)(mmu, pte))
goto error;
--walker->level;
}
@@ -414,7 +426,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
*/
pte_access = pt_access & (pte ^ walk_nx_mask);
- if (unlikely(!FNAME(is_present_gpte)(pte)))
+ if (unlikely(!FNAME(is_present_gpte)(mmu, pte)))
goto error;
if (unlikely(FNAME(is_rsvd_bits_set)(mmu, pte, walker->level))) {
@@ -521,6 +533,9 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
* ACC_*_MASK flags!
*/
walker->fault.exit_qualification |= EPT_VIOLATION_RWX_TO_PROT(pte_access);
+ if (mmu_has_mbec(mmu))
+ walker->fault.exit_qualification |=
+ EPT_VIOLATION_USER_EXEC_TO_PROT(pte_access);
}
#endif
walker->fault.address = addr;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index f5261d993eac..fe9571837fee 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -395,6 +395,8 @@ static inline bool __is_rsvd_bits_set(struct rsvd_bits_validate *rsvd_check,
static inline bool __is_bad_mt_xwr(struct rsvd_bits_validate *rsvd_check,
u64 pte)
{
+ if (pte & VMX_EPT_USER_EXECUTABLE_MASK)
+ pte |= VMX_EPT_EXECUTABLE_MASK;
return rsvd_check->bad_mt_xwr & BIT_ULL(pte & 0x3f);
}
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index e0ef2cb8f2cd..925dfa7f54df 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -7450,6 +7450,15 @@ static gpa_t vmx_translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa,
struct kvm_mmu *mmu = vcpu->arch.mmu;
BUG_ON(!mmu_is_nested(vcpu));
+
+ /*
+ * MBEC differentiates based on the effective U/S bit of
+	 * MBEC differentiates based on the effective U/S bit of
+	 * the guest page tables, not the processor CPL.
+ access &= ~PFERR_USER_MASK;
+ if ((pte_access & ACC_USER_MASK) && (access & PFERR_GUEST_FINAL_MASK))
+ access |= PFERR_USER_MASK;
+
return mmu->gva_to_gpa(vcpu, mmu, gpa, access, exception);
}
--
2.52.0
* [PATCH 19/27] KVM: nVMX: advertise MBEC to nested guests
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (17 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 18/27] KVM: x86/mmu: add support for MBEC to EPT page table walks Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 20/27] KVM: nVMX: allow MBEC with EVMCS Paolo Bonzini
` (7 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
Advertise SECONDARY_EXEC_MODE_BASED_EPT_EXEC (MBEC) to userspace,
which can then expose the feature to the guest.
When MBEC is enabled by the guest, it is passed to the MMU via cr4_smep,
and to the processor by the merging of vmcs12->secondary_vm_exec_control
into the VMCS02's secondary VM execution controls.
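For illustration, the resulting enable path condenses to (names as in
the hunks below):

    bool mbec = nested_cpu_has2(get_vmcs12(vcpu),
                                SECONDARY_EXEC_MODE_BASED_EPT_EXEC);

    kvm_init_shadow_ept_mmu(vcpu, execonly, ept_lpage_level,
                            nested_ept_ad_enabled(vcpu), mbec,
                            nested_ept_get_eptp(vcpu));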
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-9-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 7 ++++---
arch/x86/kvm/vmx/nested.c | 11 +++++++++++
3 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index d15f908d048f..9031f7ef8e4e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -100,7 +100,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
unsigned long cr4, u64 efer, gpa_t nested_cr3);
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
int huge_page_level, bool accessed_dirty,
- gpa_t new_eptp);
+ bool mbec, gpa_t new_eptp);
bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
u64 fault_address, char *insn, int insn_len);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 07bdc8f9b0cb..3ecc571a1de0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5944,7 +5944,7 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_init_shadow_npt_mmu);
static union kvm_cpu_role
kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
- bool execonly, u8 level)
+ bool execonly, u8 level, bool mbec)
{
union kvm_cpu_role role = {0};
@@ -5954,6 +5954,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
*/
WARN_ON_ONCE(is_smm(vcpu));
role.base.level = level;
+ role.base.cr4_smep = mbec;
role.base.has_4_byte_gpte = false;
role.base.direct = false;
role.base.ad_disabled = !accessed_dirty;
@@ -5969,13 +5970,13 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
int huge_page_level, bool accessed_dirty,
- gpa_t new_eptp)
+ bool mbec, gpa_t new_eptp)
{
struct kvm_mmu *context = &vcpu->arch.guest_mmu;
u8 level = vmx_eptp_page_walk_level(new_eptp);
union kvm_cpu_role new_mode =
kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
- execonly, level);
+ execonly, level, mbec);
if (new_mode.as_u64 != context->cpu_role.as_u64) {
/* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 925dfa7f54df..564165262073 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -469,6 +469,13 @@ static void nested_ept_inject_page_fault(struct kvm_vcpu *vcpu,
vmcs12->guest_physical_address = fault->address;
}
+static inline bool nested_ept_mbec_enabled(struct kvm_vcpu *vcpu)
+{
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+
+ return nested_cpu_has2(vmcs12, SECONDARY_EXEC_MODE_BASED_EPT_EXEC);
+}
+
static void nested_ept_new_eptp(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -477,6 +484,7 @@ static void nested_ept_new_eptp(struct kvm_vcpu *vcpu)
kvm_init_shadow_ept_mmu(vcpu, execonly, ept_lpage_level,
nested_ept_ad_enabled(vcpu),
+ nested_ept_mbec_enabled(vcpu),
nested_ept_get_eptp(vcpu));
}
@@ -7255,6 +7263,9 @@ static void nested_vmx_setup_secondary_ctls(u32 ept_caps,
msrs->ept_caps |= VMX_EPT_AD_BIT;
}
+ if (enable_mbec)
+ msrs->secondary_ctls_high |=
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC;
/*
* Advertise EPTP switching irrespective of hardware support,
* KVM emulates it in software so long as VMFUNC is supported.
--
2.52.0
* [PATCH 20/27] KVM: nVMX: allow MBEC with EVMCS
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (18 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 19/27] KVM: nVMX: advertise MBEC to nested guests Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 21/27] KVM: x86/mmu: propagate access mask from root pages down Paolo Bonzini
` (6 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
From: Jon Kohler <jon@nutanix.com>
Extend EVMCS1_SUPPORTED_2NDEXEC to allow MBEC and EVMCS to coexist.
Previously, enabling eVMCS caused KVM to filter out MBEC and not
present it as a supported control to the guest, preventing performance
gains from MBEC when Windows HVCI is enabled.
The guest may choose not to use MBEC (e.g., if the admin does not enable
Windows HVCI / Memory Integrity), but if they use traditional nested
virt (Hyper-V, WSL2, etc.), having EVMCS exposed is important for
improving nested guest performance. IOW allowing MBEC and EVMCS to
coexist provides maximum optionality to Windows users without
overcomplicating VM administration.
Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-ID: <20251223054806.1611168-8-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/vmx/hyperv_evmcs.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/vmx/hyperv_evmcs.h b/arch/x86/kvm/vmx/hyperv_evmcs.h
index fc7c4e7bd1bf..bc08fe40590e 100644
--- a/arch/x86/kvm/vmx/hyperv_evmcs.h
+++ b/arch/x86/kvm/vmx/hyperv_evmcs.h
@@ -87,6 +87,7 @@
SECONDARY_EXEC_PT_CONCEAL_VMX | \
SECONDARY_EXEC_BUS_LOCK_DETECTION | \
SECONDARY_EXEC_NOTIFY_VM_EXITING | \
+ SECONDARY_EXEC_MODE_BASED_EPT_EXEC | \
SECONDARY_EXEC_ENCLS_EXITING)
#define EVMCS1_SUPPORTED_3RDEXEC (0ULL)
--
2.52.0
* [PATCH 21/27] KVM: x86/mmu: propagate access mask from root pages down
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (19 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 20/27] KVM: nVMX: allow MBEC with EVMCS Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 22/27] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D Paolo Bonzini
` (5 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Until now, all SPTEs have had all kinds of access allowed; however,
for GMET to be enabled, all pages have to have ACC_USER_MASK
cleared. Marking them as supervisor pages lets the processor
allow execution from either user or supervisor mode (and, unlike
with normal paging, NPT ignores the U bit for reads and writes).
That will mean that the root page's role has ACC_USER_MASK
cleared and that has to be propagated down through the kvm_mmu_page
tree.
Do that, and pass the required access to the
kvm_mmu_spte_requested tracepoint since it's not ACC_ALL
anymore.
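Condensed, direct_map() now derives the access mask from the root role
instead of hardcoding ACC_ALL (the shadow and TDP MMU paths are
analogous):

    access = vcpu->arch.mmu->root_role.access;  /* no longer ACC_ALL */
    trace_kvm_mmu_spte_requested(fault, access);
    ...
    sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, access);
    ...
    ret = mmu_set_spte(vcpu, fault->slot, it.sptep, access,
                       base_gfn, fault->pfn, fault);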
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 9 +++++----
arch/x86/kvm/mmu/mmutrace.h | 10 ++++++----
arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
4 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3ecc571a1de0..4374784f1154 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3437,12 +3437,13 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
struct kvm_shadow_walk_iterator it;
struct kvm_mmu_page *sp;
- int ret;
+ int ret, access;
gfn_t base_gfn = fault->gfn;
kvm_mmu_hugepage_adjust(vcpu, fault);
- trace_kvm_mmu_spte_requested(fault);
+ access = vcpu->arch.mmu->root_role.access;
+ trace_kvm_mmu_spte_requested(fault, access);
for_each_shadow_entry(vcpu, fault->addr, it) {
/*
* We cannot overwrite existing page tables with an NX
@@ -3455,7 +3456,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (it.level == fault->goal_level)
break;
- sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
+ sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, access);
if (sp == ERR_PTR(-EEXIST))
continue;
@@ -3468,7 +3469,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (WARN_ON_ONCE(it.level != fault->goal_level))
return -EFAULT;
- ret = mmu_set_spte(vcpu, fault->slot, it.sptep, ACC_ALL,
+ ret = mmu_set_spte(vcpu, fault->slot, it.sptep, access,
base_gfn, fault->pfn, fault);
if (ret == RET_PF_SPURIOUS)
return ret;
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 3429c1413f42..fa01719baf8d 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -373,23 +373,25 @@ TRACE_EVENT(
TRACE_EVENT(
kvm_mmu_spte_requested,
- TP_PROTO(struct kvm_page_fault *fault),
- TP_ARGS(fault),
+ TP_PROTO(struct kvm_page_fault *fault, u8 access),
+ TP_ARGS(fault, access),
TP_STRUCT__entry(
__field(u64, gfn)
__field(u64, pfn)
__field(u8, level)
+ __field(u8, access)
),
TP_fast_assign(
__entry->gfn = fault->gfn;
__entry->pfn = fault->pfn | (fault->gfn & (KVM_PAGES_PER_HPAGE(fault->goal_level) - 1));
__entry->level = fault->goal_level;
+ __entry->access = access;
),
- TP_printk("gfn %llx pfn %llx level %d",
- __entry->gfn, __entry->pfn, __entry->level
+ TP_printk("gfn %llx pfn %llx level %d access %x",
+ __entry->gfn, __entry->pfn, __entry->level, __entry->access
)
);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7d7c617885fa..9ad2b05a0f44 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -734,7 +734,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
*/
kvm_mmu_hugepage_adjust(vcpu, fault);
- trace_kvm_mmu_spte_requested(fault);
+ trace_kvm_mmu_spte_requested(fault, gw->pte_access);
for (; shadow_walk_okay(&it); shadow_walk_next(&it)) {
/*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 9c26038f6b77..25e557de99d6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1185,9 +1185,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
}
if (unlikely(!fault->slot))
- new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
+ new_spte = make_mmio_spte(vcpu, iter->gfn, sp->role.access);
else
- wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn,
+ wrprot = make_spte(vcpu, sp, fault->slot, sp->role.access, iter->gfn,
fault->pfn, iter->old_spte, fault->prefetch,
false, fault->map_writable, &new_spte);
@@ -1272,7 +1272,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
kvm_mmu_hugepage_adjust(vcpu, fault);
- trace_kvm_mmu_spte_requested(fault);
+ trace_kvm_mmu_spte_requested(fault, root->role.access);
rcu_read_lock();
--
2.52.0
* [PATCH 22/27] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (20 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 21/27] KVM: x86/mmu: propagate access mask from root pages down Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 23/27] KVM: SVM: add GMET bit definitions Paolo Bonzini
` (4 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
While GMET looks a lot like SMEP, it has several annoying differences.
The main one is that the availability of the I/D bit in the page fault
error code still depends on the host CR4.SMEP and EFER.NXE bits. If the
base.cr4_smep bit of the cpu_role is (ab)used to enable GMET, there needs
to be another place where the host CR4.SMEP is read from; just merge it
with EFER.NXE into a new cpu_role bit that tells paging_tmpl.h whether
to set the I/D bit at all.
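The new bit is computed once when the cpu_role is built and consumed
by the guest page walker (both lines condensed from the hunks below):

    role.ext.has_pferr_fetch = role.base.efer_nx | role.base.cr4_smep;

    if (fetch_fault && has_pferr_fetch(mmu))
        errcode |= PFERR_FETCH_MASK;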
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 7 +++++++
arch/x86/kvm/mmu/mmu.c | 8 ++++++++
arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8b4a55cc0918..e20f729f6eb2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -413,6 +413,13 @@ union kvm_mmu_extended_role {
unsigned int cr4_smap:1;
unsigned int cr4_la57:1;
unsigned int efer_lma:1;
+
+ /*
+ * True if either CR4.SMEP or EFER.NXE are set. For AMD NPT
+ * this is the "real" host CR4.SMEP whereas cr4_smep is
+ * actually GMET.
+ */
+ unsigned int has_pferr_fetch:1;
};
};
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4374784f1154..290537b0f36c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -234,6 +234,11 @@ BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma);
+static inline bool has_pferr_fetch(struct kvm_mmu *mmu)
+{
+ return mmu->cpu_role.ext.has_pferr_fetch;
+}
+
static inline bool is_cr0_pg(struct kvm_mmu *mmu)
{
return mmu->cpu_role.base.level > 0;
@@ -5778,6 +5783,8 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
role.ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
role.ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
role.ext.efer_lma = ____is_efer_lma(regs);
+
+ role.ext.has_pferr_fetch = role.base.efer_nx | role.base.cr4_smep;
return role;
}
@@ -5931,6 +5938,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
/* NPT requires CR0.PG=1. */
WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
+ cpu_role.base.cr4_smep = false;
root_role = cpu_role.base;
root_role.level = kvm_mmu_get_tdp_level(vcpu);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 9ad2b05a0f44..ad0e5f43b8ad 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -489,7 +489,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
error:
errcode |= write_fault | user_fault;
- if (fetch_fault && (is_efer_nx(mmu) || is_cr4_smep(mmu)))
+ if (fetch_fault && has_pferr_fetch(mmu))
errcode |= PFERR_FETCH_MASK;
walker->fault.vector = PF_VECTOR;
--
2.52.0
* [PATCH 23/27] KVM: SVM: add GMET bit definitions
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (21 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 22/27] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 24/27] KVM: x86/mmu: add support for GMET to NPT page table walks Paolo Bonzini
` (3 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti, Borislav Petkov (AMD)
GMET (Guest Mode Execute Trap) is an AMD virtualization feature,
essentially the nested paging version of SMEP. Hyper-V uses it;
add the bit definitions in preparation for making the feature
available to hypervisors running under KVM.
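For reference, word 15 of the kernel's cpufeatures table maps to CPUID
leaf 0x8000000A EDX, so the definition below corresponds to EDX bit 17
there; once it is in place, host support can be probed the usual way
(sketch):

	/* Sketch: detect GMET support on the host CPU */
	if (boot_cpu_has(X86_FEATURE_GMET))
		pr_info("GMET supported\n");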
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/svm.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dbe104df339b..9f876fbdcc3a 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -379,6 +379,7 @@
#define X86_FEATURE_AVIC (15*32+13) /* "avic" Virtual Interrupt Controller */
#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* "v_vmsave_vmload" Virtual VMSAVE VMLOAD */
#define X86_FEATURE_VGIF (15*32+16) /* "vgif" Virtual GIF */
+#define X86_FEATURE_GMET (15*32+17) /* Guest Mode Execute Trap */
#define X86_FEATURE_X2AVIC (15*32+18) /* "x2avic" Virtual x2apic */
#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */
#define X86_FEATURE_VNMI (15*32+25) /* "vnmi" Virtual NMI */
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index edde36097ddc..03e9e0112b10 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -242,6 +242,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_NESTED_CTL_NP_ENABLE BIT(0)
#define SVM_NESTED_CTL_SEV_ENABLE BIT(1)
#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT(2)
+#define SVM_NESTED_CTL_GMET_ENABLE BIT(3)
#define SVM_TSC_RATIO_RSVD 0xffffff0000000000ULL
--
2.52.0
^ permalink raw reply related [flat|nested] 29+ messages in thread

* [PATCH 24/27] KVM: x86/mmu: add support for GMET to NPT page table walks
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (22 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 23/27] KVM: SVM: add GMET bit definitions Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 25/27] KVM: SVM: enable GMET and set it in MMU role Paolo Bonzini
` (2 subsequent siblings)
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
GMET allows page table entries to be created with U=0 in NPT.
However, when GMET=1, U=0 only affects execution, not reads or
writes. Ignore user faults on non-fetch accesses when NPT GMET is
enabled.
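As a sketch of the resulting semantics (hypothetical helper, not in
this patch): without GMET a U=0 NPT entry faults every access, since
NPT walks are user-walks, whereas with GMET the U bit can only ever
fault an instruction fetch:

	/* Sketch: can the NPT U bit fault this access type at all? */
	static bool npt_user_bit_can_fault(bool gmet, bool fetch)
	{
		if (!gmet)
			return true;	/* U=0 faults any NPT access */
		return fetch;		/* with GMET, U gates fetches only */
	}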
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.h | 3 ++-
arch/x86/kvm/mmu/mmu.c | 19 +++++++++++++------
arch/x86/kvm/svm/nested.c | 10 +++++++---
4 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e20f729f6eb2..99f37fc132ab 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -369,6 +369,8 @@ union kvm_mmu_page_role {
* cr4_smep is also set for EPT MBEC. Because it affects
* which pages are considered non-present (bit 10 additionally
* must be zero if MBEC is on) it has to be in the base role.
+ * It also has to be in the base role for AMD GMET because
+ * kernel-executable pages need to have U=0 with GMET enabled.
*/
unsigned cr4_smep:1;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 9031f7ef8e4e..1677eb0b5588 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -97,7 +97,8 @@ void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_mbec);
void kvm_init_mmu(struct kvm_vcpu *vcpu);
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
- unsigned long cr4, u64 efer, gpa_t nested_cr3);
+ unsigned long cr4, u64 efer, gpa_t nested_cr3,
+ u64 nested_ctl);
void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
int huge_page_level, bool accessed_dirty,
bool mbec, gpa_t new_eptp);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 290537b0f36c..354f8af5d90a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -55,6 +55,7 @@
#include <asm/io.h>
#include <asm/set_memory.h>
#include <asm/spec-ctrl.h>
+#include <asm/svm.h>
#include <asm/vmx.h>
#include "trace.h"
@@ -5557,7 +5558,7 @@ reset_ept_shadow_zero_bits_mask(struct kvm_mmu *context, bool execonly)
(14 & (access) ? 1 << 14 : 0) | \
(15 & (access) ? 1 << 15 : 0))
-static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
+static void update_permission_bitmask(struct kvm_mmu *mmu, bool tdp, bool ept)
{
unsigned byte;
@@ -5618,7 +5619,12 @@ static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
/* Faults from kernel mode accesses to user pages */
u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
- uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
+ /*
+ * For NPT GMET, U=0 does not affect reads and writes. Fetches
+ * are handled below via cr4_smep.
+ */
+ if (!(tdp && cr4_smep))
+ uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
if (efer_nx)
ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
@@ -5729,7 +5735,7 @@ static void reset_guest_paging_metadata(struct kvm_vcpu *vcpu,
return;
reset_guest_rsvds_bits_mask(vcpu, mmu);
- update_permission_bitmask(mmu, false);
+ update_permission_bitmask(mmu, mmu == &vcpu->arch.guest_mmu, false);
update_pkru_bitmask(mmu);
}
@@ -5925,7 +5931,8 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
}
void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
- unsigned long cr4, u64 efer, gpa_t nested_cr3)
+ unsigned long cr4, u64 efer, gpa_t nested_cr3,
+ u64 nested_ctl)
{
struct kvm_mmu *context = &vcpu->arch.guest_mmu;
struct kvm_mmu_role_regs regs = {
@@ -5938,7 +5945,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
/* NPT requires CR0.PG=1. */
WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
- cpu_role.base.cr4_smep = false;
+ cpu_role.base.cr4_smep = (nested_ctl & SVM_NESTED_CTL_GMET_ENABLE) != 0;
root_role = cpu_role.base;
root_role.level = kvm_mmu_get_tdp_level(vcpu);
@@ -5996,7 +6003,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
context->gva_to_gpa = ept_gva_to_gpa;
context->sync_spte = ept_sync_spte;
- update_permission_bitmask(context, true);
+ update_permission_bitmask(context, true, true);
context->pkru_mask = 0;
reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level);
reset_ept_shadow_zero_bits_mask(context, execonly);
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3b670ee4eb26..a76f2b8cbef9 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -95,7 +95,8 @@ static void nested_svm_init_mmu_context(struct kvm_vcpu *vcpu)
*/
kvm_init_shadow_npt_mmu(vcpu, X86_CR0_PG, svm->vmcb01.ptr->save.cr4,
svm->vmcb01.ptr->save.efer,
- svm->nested.ctl.nested_cr3);
+ svm->nested.ctl.nested_cr3,
+ svm->nested.ctl.nested_ctl);
vcpu->arch.mmu->get_guest_pgd = nested_svm_get_tdp_cr3;
vcpu->arch.mmu->get_pdptr = nested_svm_get_tdp_pdptr;
vcpu->arch.mmu->inject_page_fault = nested_svm_inject_npf_exit;
@@ -1973,12 +1974,15 @@ static gpa_t svm_translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa,
struct x86_exception *exception,
u64 pte_access)
{
+ struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_mmu *mmu = vcpu->arch.mmu;
BUG_ON(!mmu_is_nested(vcpu));
- /* NPT walks are always user-walks */
- access |= PFERR_USER_MASK;
+ /* Non-GMET walks are always user-walks */
+ if (!(svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE))
+ access |= PFERR_USER_MASK;
+
return mmu->gva_to_gpa(vcpu, mmu, gpa, access, exception);
}
--
2.52.0
^ permalink raw reply related [flat|nested] 29+ messages in thread

* [PATCH 25/27] KVM: SVM: enable GMET and set it in MMU role
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (23 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 24/27] KVM: x86/mmu: add support for GMET to NPT page table walks Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 26/27] KVM: SVM: work around errata 1218 Paolo Bonzini
2026-04-08 15:42 ` [PATCH 27/27] KVM: nSVM: enable GMET for guests Paolo Bonzini
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
Set the GMET bit in the nested control field. This has effectively
no guest-visible impact as long as NPT page table entries are created
with U=0, which the MMU role change below ensures.
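Since gmet is exposed as a read-only module parameter of kvm_amd (see
below), the feature can be disabled at module load time if needed,
e.g.:

	modprobe kvm_amd gmet=0

with the effective value readable from
/sys/module/kvm_amd/parameters/gmet.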
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/mmu/mmu.c | 6 +++++-
arch/x86/kvm/svm/nested.c | 7 +++++--
arch/x86/kvm/svm/svm.c | 16 ++++++++++++++++
arch/x86/kvm/svm/svm.h | 1 +
4 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 354f8af5d90a..14c0c2da75bb 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5840,7 +5840,6 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
{
union kvm_mmu_page_role role = {0};
- role.access = ACC_ALL;
role.cr0_wp = true;
role.cr4_smep = kvm_x86_call(tdp_has_smep)(vcpu->kvm);
role.efer_nx = true;
@@ -5851,6 +5850,11 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
role.direct = true;
role.has_4_byte_gpte = false;
+ /* All TDP pages are supervisor-executable */
+ role.access = ACC_ALL;
+ if (role.cr4_smep && shadow_user_mask)
+ role.access &= ~ACC_USER_MASK;
+
return role;
}
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a76f2b8cbef9..863040ecec0e 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -829,9 +829,12 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
/* Also overwritten later if necessary. */
vmcb02->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
- /* nested_cr3. */
- if (nested_npt_enabled(svm))
+ /* Use vmcb01 MMU and format if guest does not use nNPT */
+ if (nested_npt_enabled(svm)) {
+ vmcb02->control.nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
+
nested_svm_init_mmu_context(vcpu);
+ }
vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
vcpu->arch.l1_tsc_offset,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e6477affac9a..1705e3cafcb0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -135,6 +135,9 @@ module_param(pause_filter_count_max, ushort, 0444);
bool npt_enabled = true;
module_param_named(npt, npt_enabled, bool, 0444);
+bool gmet_enabled = true;
+module_param_named(gmet, gmet_enabled, bool, 0444);
+
/* allow nested virtualization in KVM/SVM */
static int nested = true;
module_param(nested, int, 0444);
@@ -1170,6 +1173,10 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
save->g_pat = vcpu->arch.pat;
save->cr3 = 0;
}
+
+ if (gmet_enabled)
+ control->nested_ctl |= SVM_NESTED_CTL_GMET_ENABLE;
+
svm->current_vmcb->asid_generation = 0;
svm->asid = 0;
@@ -4475,6 +4482,11 @@ svm_patch_hypercall(struct kvm_vcpu *vcpu, unsigned char *hypercall)
hypercall[2] = 0xd9;
}
+static bool svm_tdp_has_smep(struct kvm *kvm)
+{
+ return gmet_enabled;
+}
+
/*
* The kvm parameter can be NULL (module initialization, or invocation before
* VM creation). Be sure to check the kvm parameter before using it.
@@ -5224,6 +5236,7 @@ struct kvm_x86_ops svm_x86_ops __initdata = {
.write_tsc_multiplier = svm_write_tsc_multiplier,
.load_mmu_pgd = svm_load_mmu_pgd,
+ .tdp_has_smep = svm_tdp_has_smep,
.check_intercept = svm_check_intercept,
.handle_exit_irqoff = svm_handle_exit_irqoff,
@@ -5464,6 +5477,9 @@ static __init int svm_hardware_setup(void)
if (!boot_cpu_has(X86_FEATURE_NPT))
npt_enabled = false;
+ if (!npt_enabled || !boot_cpu_has(X86_FEATURE_GMET))
+ gmet_enabled = false;
+
/* Force VM NPT level equal to the host's paging level */
kvm_configure_mmu(npt_enabled, get_npt_level(),
get_npt_level(), PG_LEVEL_1G);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 6942e6b0eda6..41042379aa48 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -44,6 +44,7 @@ static inline struct page *__sme_pa_to_page(unsigned long pa)
#define IOPM_SIZE PAGE_SIZE * 3
#define MSRPM_SIZE PAGE_SIZE * 2
+extern bool gmet_enabled;
extern bool npt_enabled;
extern int nrips;
extern int vgif;
--
2.52.0
^ permalink raw reply related [flat|nested] 29+ messages in thread

* [PATCH 26/27] KVM: SVM: work around errata 1218
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (24 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 25/27] KVM: SVM: enable GMET and set it in MMU role Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
2026-04-08 15:42 ` [PATCH 27/27] KVM: nSVM: enable GMET for guests Paolo Bonzini
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
According to AMD, the hypervisor may not be able to determine whether a
fault was a GMET fault or an NX fault based on EXITINFO1, and software
"must read the relevant VMCB to determine whether a fault was a GMET
fault or an NX fault". The APM clarifies that the relevant field is
the CPL.
KVM uses the page fault error code to distinguish the causes of a
nested page fault, so recalculate the PFERR_USER_MASK bit of the
vmexit information. Only do it for fetches and only if GMET is in
use, because KVM does not differentiate based on PFERR_USER_MASK
for other nested NPT page faults.
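The hunk below first sets the bit and then conditionally clears it; an
equivalent formulation, for clarity:

	/* Sketch: derive PFEC.U purely from the current privilege level */
	error_code &= ~PFERR_USER_MASK;
	if (svm_get_cpl(vcpu) == 3)
		error_code |= PFERR_USER_MASK;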
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/svm.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1705e3cafcb0..4957fa36890c 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1957,6 +1957,17 @@ static int npf_interception(struct kvm_vcpu *vcpu)
}
}
+ if ((svm->vmcb->control.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE) &&
+ (error_code & PFERR_FETCH_MASK)) {
+ /*
+ * Work around errata 1218: EXITINFO1[2] May Be Incorrectly Set
+ * When GMET (Guest Mode Execute Trap extension) is Enabled
+ */
+ error_code |= PFERR_USER_MASK;
+ if (svm_get_cpl(vcpu) != 3)
+ error_code &= ~PFERR_USER_MASK;
+ }
+
if (sev_snp_guest(vcpu->kvm) && (error_code & PFERR_GUEST_ENC_MASK))
error_code |= PFERR_PRIVATE_ACCESS;
--
2.52.0
^ permalink raw reply related [flat|nested] 29+ messages in thread

* [PATCH 27/27] KVM: nSVM: enable GMET for guests
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
` (25 preceding siblings ...)
2026-04-08 15:42 ` [PATCH 26/27] KVM: SVM: work around errata 1218 Paolo Bonzini
@ 2026-04-08 15:42 ` Paolo Bonzini
26 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2026-04-08 15:42 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
Marcelo Tosatti
All that needs to be done is moving the GMET bit from vmcb12 to
vmcb02. The only new thing is that __nested_copy_vmcb_control_to_cache
now ensures that ignored-if-unavailable bits are zero in
svm->nested.ctl.
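The vmcb02 update follows the usual clear-then-merge pattern for
pass-through control bits; taken together with the hunk from the
previous patch, it is equivalent to (sketch):

	vmcb02->control.nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
	vmcb02->control.nested_ctl |= svm->nested.ctl.nested_ctl &
				      SVM_NESTED_CTL_GMET_ENABLE;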
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kvm/svm/nested.c | 5 +++++
arch/x86/kvm/svm/svm.c | 3 +++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 863040ecec0e..3c45adb70946 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -470,7 +470,11 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->exit_info_2 = from->exit_info_2;
to->exit_int_info = from->exit_int_info;
to->exit_int_info_err = from->exit_int_info_err;
+
to->nested_ctl = from->nested_ctl;
+ if (!gmet_enabled || !guest_cpu_cap_has(vcpu, X86_FEATURE_GMET))
+ to->nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
+
to->event_inj = from->event_inj;
to->event_inj_err = from->event_inj_err;
to->next_rip = from->next_rip;
@@ -832,6 +836,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
/* Use vmcb01 MMU and format if guest does not use nNPT */
if (nested_npt_enabled(svm)) {
vmcb02->control.nested_ctl &= ~SVM_NESTED_CTL_GMET_ENABLE;
+ vmcb02->control.nested_ctl |= (svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_GMET_ENABLE);
nested_svm_init_mmu_context(vcpu);
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4957fa36890c..55baaf833e63 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5372,6 +5372,9 @@ static __init void svm_set_cpu_caps(void)
if (boot_cpu_has(X86_FEATURE_PFTHRESHOLD))
kvm_cpu_cap_set(X86_FEATURE_PFTHRESHOLD);
+ if (gmet_enabled)
+ kvm_cpu_cap_set(X86_FEATURE_GMET);
+
if (vgif)
kvm_cpu_cap_set(X86_FEATURE_VGIF);
--
2.52.0
^ permalink raw reply related [flat|nested] 29+ messages in thread