* [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening
@ 2026-03-03 0:33 Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 01/26] KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12 Yosry Ahmed
` (26 more replies)
0 siblings, 27 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:33 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
A group of semi-related fixes, cleanups, and hardening patches for nSVM.
The series is essentially a group of related mini-series stitched
together due to syntactic and semantic dependencies. The first 17 patches
(except the selftest in patch 4) are all optimistically CC'd to stable as
they are fixes or refactoring leading up to bug fixes, although I am not
sure how much of that will actually apply to stable trees.
Patches 1-3 here are v2 of the last 3 patches in the LBRV fixes series
[1]. The first 3 patches of [1] are already upstream.
Patches 4-12 are fixes for failure handling in the nested VMRUN and
#VMEXIT code paths.
Patches 13-17 are fixes for missing or made-up consistency checks.
Patches 18-19 are renames and cleanups.
Patches 20-25 harden reads of the VMCB12: all used fields in the save
area are cached to prevent theoretical TOCTOU bugs, used fields in the
control area are sanitized, and accesses to the VMCB12 through guest
memory are restricted.
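As a rough illustration of the TOCTOU concern the caching addresses (a
sketch with hypothetical helper and field names, not the actual KVM
code), validating a field read directly from guest-mapped vmcb12 memory
and then re-reading it at use time leaves a window for L1 to change it:

    /* time of check: first read from guest memory */
    if (!field_is_valid(vmcb12->control.some_field))    /* hypothetical */
        return -EINVAL;

    /* time of use: second read; L1 may have rewritten the field meanwhile */
    consume(vmcb12->control.some_field);                /* hypothetical */

Caching every used field into KVM's private copy once, and only checking
and consuming the cached copy, closes that window.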
Finally, patch 26 is a selftest for nested VMRUN and #VMEXIT failures
due to failing to map vmcb12.
v6 -> v7:
- Dropped unification of VMRUN failure paths and refactoring patches
leading up to it; consistency checks are now moved into the helper that
copies vmcb12 to the cache instead of enter_svm_guest_mode().
- Clear reserved bits in dbgctl in KVM_SET_NESTED_STATE.
- Dropped consistency check on hCR0 as CR0.PG is already checked.
- Dropped redundant check on CR4.PAE in new CS consistency check.
- Correctly cache clean bits from vmcb12.
- Update selftest to use a single VMRUN instruction and avoid missing the
post-VMRUN restore of L1's registers.
v6: https://lore.kernel.org/kvm/20260224223405.3270433-1-yosry@kernel.org/
[1] https://lore.kernel.org/kvm/20251108004524.1600006-1-yosry.ahmed@linux.dev/
Yosry Ahmed (26):
KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12
KVM: SVM: Switch svm_copy_lbrs() to a macro
KVM: SVM: Add missing save/restore handling of LBR MSRs
KVM: selftests: Add a test for LBR save/restore (ft. nested)
KVM: nSVM: Always inject a #GP if mapping VMCB12 fails on nested VMRUN
KVM: nSVM: Refactor checking LBRV enablement in vmcb12 into a helper
KVM: nSVM: Refactor writing vmcb12 on nested #VMEXIT as a helper
KVM: nSVM: Triple fault if mapping VMCB12 fails on nested #VMEXIT
KVM: nSVM: Triple fault if restore host CR3 fails on nested #VMEXIT
KVM: nSVM: Clear GIF on nested #VMEXIT(INVALID)
KVM: nSVM: Clear EVENTINJ fields in vmcb12 on nested #VMEXIT
KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ on nested #VMEXIT
KVM: nSVM: Drop nested_vmcb_check_{save/control}() wrappers
KVM: nSVM: Drop the non-architectural consistency check for NP_ENABLE
KVM: nSVM: Add missing consistency check for nCR3 validity
KVM: nSVM: Add missing consistency check for EFER, CR0, CR4, and CS
KVM: nSVM: Add missing consistency check for EVENTINJ
KVM: SVM: Rename vmcb->nested_ctl to vmcb->misc_ctl
KVM: SVM: Rename vmcb->virt_ext to vmcb->misc_ctl2
KVM: nSVM: Cache all used fields from VMCB12
KVM: nSVM: Restrict mapping vmcb12 on nested VMRUN
KVM: nSVM: Use PAGE_MASK to drop lower bits of bitmap GPAs from vmcb12
KVM: nSVM: Sanitize TLB_CONTROL field when copying from vmcb12
KVM: nSVM: Sanitize INT/EVENTINJ fields when copying from vmcb12
KVM: nSVM: Only copy SVM_MISC_ENABLE_NP from VMCB01's misc_ctl
KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
arch/x86/include/asm/svm.h | 20 +-
arch/x86/kvm/svm/nested.c | 459 +++++++++++-------
arch/x86/kvm/svm/sev.c | 4 +-
arch/x86/kvm/svm/svm.c | 72 +--
arch/x86/kvm/svm/svm.h | 50 +-
arch/x86/kvm/x86.c | 3 +
tools/testing/selftests/kvm/Makefile.kvm | 2 +
.../selftests/kvm/include/x86/processor.h | 5 +
tools/testing/selftests/kvm/include/x86/svm.h | 14 +-
tools/testing/selftests/kvm/lib/x86/svm.c | 2 +-
.../kvm/x86/nested_vmsave_vmload_test.c | 16 +-
.../selftests/kvm/x86/svm_lbr_nested_state.c | 145 ++++++
.../kvm/x86/svm_nested_invalid_vmcb12_gpa.c | 98 ++++
13 files changed, 644 insertions(+), 246 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
create mode 100644 tools/testing/selftests/kvm/x86/svm_nested_invalid_vmcb12_gpa.c
base-commit: 183bb0ce8c77b0fd1fb25874112bc8751a461e49
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply [flat|nested] 52+ messages in thread
* [PATCH v7 01/26] KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
@ 2026-03-03 0:33 ` Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 02/26] KVM: SVM: Switch svm_copy_lbrs() to a macro Yosry Ahmed
` (25 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:33 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
svm_copy_lbrs() always marks VMCB_LBR dirty in the destination VMCB.
However, nested_svm_vmexit() uses it to copy LBRs to vmcb12, and
clearing clean bits in vmcb12 is not architecturally defined.
Move vmcb_mark_dirty() to callers and drop it for vmcb12.
This also facilitates upcoming refactoring that does not pass the entire
VMCB to svm_copy_lbrs().
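For context, vmcb_mark_dirty() just clears the corresponding clean bit in
the destination VMCB, roughly as in the sketch below, which is fine for
KVM-owned VMCBs but not something KVM should be doing to L1's vmcb12:

    static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
    {
        /* clearing the clean bit tells HW to reload that VMCB area */
        vmcb->control.clean &= ~(1 << bit);
    }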
Fixes: d20c796ca370 ("KVM: x86: nSVM: implement nested LBR virtualization")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 7 +++++--
arch/x86/kvm/svm/svm.c | 2 --
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index de90b104a0dd5..a31f3be1e16ec 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -714,6 +714,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
} else {
svm_copy_lbrs(vmcb02, vmcb01);
}
+ vmcb_mark_dirty(vmcb02, VMCB_LBR);
svm_update_lbrv(&svm->vcpu);
}
@@ -1232,10 +1233,12 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
- (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK)))
+ (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
svm_copy_lbrs(vmcb12, vmcb02);
- else
+ } else {
svm_copy_lbrs(vmcb01, vmcb02);
+ vmcb_mark_dirty(vmcb01, VMCB_LBR);
+ }
svm_update_lbrv(vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8f8bc863e2143..a2452b8ec49db 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -848,8 +848,6 @@ void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
to_vmcb->save.br_to = from_vmcb->save.br_to;
to_vmcb->save.last_excp_from = from_vmcb->save.last_excp_from;
to_vmcb->save.last_excp_to = from_vmcb->save.last_excp_to;
-
- vmcb_mark_dirty(to_vmcb, VMCB_LBR);
}
static void __svm_enable_lbrv(struct kvm_vcpu *vcpu)
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 02/26] KVM: SVM: Switch svm_copy_lbrs() to a macro
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 01/26] KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12 Yosry Ahmed
@ 2026-03-03 0:33 ` Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs Yosry Ahmed
` (24 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:33 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
In preparation for using svm_copy_lbrs() with 'struct vmcb_save_area'
without a containing 'struct vmcb', and later even 'struct
vmcb_save_area_cached', make it a macro.
Macros are generally not preferred compared to functions, mainly due to
the lack of type safety. However, in this case it seems like having a
simple macro
copying a few fields is better than copy-pasting the same 5 lines of
code in different places.
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 8 ++++----
arch/x86/kvm/svm/svm.c | 9 ---------
arch/x86/kvm/svm/svm.h | 10 +++++++++-
3 files changed, 13 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index a31f3be1e16ec..f7d5db0af69ac 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -709,10 +709,10 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
* Reserved bits of DEBUGCTL are ignored. Be consistent with
* svm_set_msr's definition of reserved bits.
*/
- svm_copy_lbrs(vmcb02, vmcb12);
+ svm_copy_lbrs(&vmcb02->save, &vmcb12->save);
vmcb02->save.dbgctl &= ~DEBUGCTL_RESERVED_BITS;
} else {
- svm_copy_lbrs(vmcb02, vmcb01);
+ svm_copy_lbrs(&vmcb02->save, &vmcb01->save);
}
vmcb_mark_dirty(vmcb02, VMCB_LBR);
svm_update_lbrv(&svm->vcpu);
@@ -1234,9 +1234,9 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
(svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
- svm_copy_lbrs(vmcb12, vmcb02);
+ svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
} else {
- svm_copy_lbrs(vmcb01, vmcb02);
+ svm_copy_lbrs(&vmcb01->save, &vmcb02->save);
vmcb_mark_dirty(vmcb01, VMCB_LBR);
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a2452b8ec49db..f52e588317fcf 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -841,15 +841,6 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
*/
}
-void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
-{
- to_vmcb->save.dbgctl = from_vmcb->save.dbgctl;
- to_vmcb->save.br_from = from_vmcb->save.br_from;
- to_vmcb->save.br_to = from_vmcb->save.br_to;
- to_vmcb->save.last_excp_from = from_vmcb->save.last_excp_from;
- to_vmcb->save.last_excp_to = from_vmcb->save.last_excp_to;
-}
-
static void __svm_enable_lbrv(struct kvm_vcpu *vcpu)
{
to_svm(vcpu)->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ebd7b36b1ceb9..44d767cd1d25a 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -713,8 +713,16 @@ static inline void *svm_vcpu_alloc_msrpm(void)
return svm_alloc_permissions_map(MSRPM_SIZE, GFP_KERNEL_ACCOUNT);
}
+#define svm_copy_lbrs(to, from) \
+do { \
+ (to)->dbgctl = (from)->dbgctl; \
+ (to)->br_from = (from)->br_from; \
+ (to)->br_to = (from)->br_to; \
+ (to)->last_excp_from = (from)->last_excp_from; \
+ (to)->last_excp_to = (from)->last_excp_to; \
+} while (0)
+
void svm_vcpu_free_msrpm(void *msrpm);
-void svm_copy_lbrs(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
void svm_enable_lbrv(struct kvm_vcpu *vcpu);
void svm_update_lbrv(struct kvm_vcpu *vcpu);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 01/26] KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12 Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 02/26] KVM: SVM: Switch svm_copy_lbrs() to a macro Yosry Ahmed
@ 2026-03-03 0:33 ` Yosry Ahmed
2026-03-03 16:37 ` Sean Christopherson
2026-03-03 0:33 ` [PATCH v7 04/26] KVM: selftests: Add a test for LBR save/restore (ft. nested) Yosry Ahmed
` (23 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:33 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable,
Jim Mattson
MSR_IA32_DEBUGCTLMSR and LBR MSRs are currently not enumerated by
KVM_GET_MSR_INDEX_LIST, and LBR MSRs cannot be set with KVM_SET_MSRS. So
save/restore is completely broken.
Fix it by adding the MSRs to msrs_to_save_base, and allowing writes to
LBR MSRs from userspace only (as they are read-only MSRs). Additionally,
to correctly restore L1's LBRs while L2 is running, make sure the LBRs
are copied from the captured VMCB01 save area in svm_copy_vmrun_state().
For VMX, this also adds the MSRs to KVM_GET_MSR_INDEX_LIST for
save/restore handling. For unsupported MSR_IA32_LAST* MSRs,
kvm_do_msr_access() should zero these MSRs on userspace reads, and ignore
KVM_MSR_RET_UNSUPPORTED on userspace writes.
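For illustration, a minimal sketch (error handling and includes omitted;
'vcpu_fd' is a placeholder for the vCPU file descriptor) of how userspace
could now save and restore one of these MSRs via the standard ioctls:

    struct kvm_msrs *msrs = calloc(1, sizeof(*msrs) +
                                      sizeof(struct kvm_msr_entry));

    msrs->nmsrs = 1;
    msrs->entries[0].index = MSR_IA32_LASTBRANCHFROMIP;

    ioctl(vcpu_fd, KVM_GET_MSRS, msrs);     /* save */
    /* ... recreate the vCPU ... */
    ioctl(vcpu_fd, KVM_SET_MSRS, msrs);     /* restore (host-initiated write) */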
Fixes: 24e09cbf480a ("KVM: SVM: enable LBR virtualization")
Cc: stable@vger.kernel.org
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 5 +++++
arch/x86/kvm/svm/svm.c | 24 ++++++++++++++++++++++++
arch/x86/kvm/x86.c | 3 +++
3 files changed, 32 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f7d5db0af69ac..3bf758c9cb85c 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1100,6 +1100,11 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
to_save->isst_addr = from_save->isst_addr;
to_save->ssp = from_save->ssp;
}
+
+ if (lbrv) {
+ svm_copy_lbrs(to_save, from_save);
+ to_save->dbgctl &= ~DEBUGCTL_RESERVED_BITS;
+ }
}
void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f52e588317fcf..cb53174583a26 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3071,6 +3071,30 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
svm_update_lbrv(vcpu);
break;
+ case MSR_IA32_LASTBRANCHFROMIP:
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.br_from = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
+ case MSR_IA32_LASTBRANCHTOIP:
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.br_to = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
+ case MSR_IA32_LASTINTFROMIP:
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.last_excp_from = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
+ case MSR_IA32_LASTINTTOIP:
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.last_excp_to = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
case MSR_VM_HSAVE_PA:
/*
* Old kernels did not validate the value written to
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index db3f393192d94..416899b5dbe4d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -351,6 +351,9 @@ static const u32 msrs_to_save_base[] = {
MSR_IA32_U_CET, MSR_IA32_S_CET,
MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
+ MSR_IA32_DEBUGCTLMSR,
+ MSR_IA32_LASTBRANCHFROMIP, MSR_IA32_LASTBRANCHTOIP,
+ MSR_IA32_LASTINTFROMIP, MSR_IA32_LASTINTTOIP,
};
static const u32 msrs_to_save_pmu[] = {
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 04/26] KVM: selftests: Add a test for LBR save/restore (ft. nested)
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (2 preceding siblings ...)
2026-03-03 0:33 ` [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs Yosry Ahmed
@ 2026-03-03 0:33 ` Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 05/26] KVM: nSVM: Always inject a #GP if mapping VMCB12 fails on nested VMRUN Yosry Ahmed
` (22 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:33 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
Add a selftest exercising save/restore with usage of LBRs in both L1 and
L2, and making sure all LBRs remain intact.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../selftests/kvm/include/x86/processor.h | 5 +
.../selftests/kvm/x86/svm_lbr_nested_state.c | 145 ++++++++++++++++++
3 files changed, 151 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index fdec90e854671..36b48e766e499 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -112,6 +112,7 @@ TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
+TEST_GEN_PROGS_x86 += x86/svm_lbr_nested_state
TEST_GEN_PROGS_x86 += x86/tsc_scaling_sync
TEST_GEN_PROGS_x86 += x86/sync_regs_test
TEST_GEN_PROGS_x86 += x86/ucna_injection_test
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 4ebae4269e681..db0171935197d 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1360,6 +1360,11 @@ static inline bool kvm_is_ignore_msrs(void)
return get_kvm_param_bool("ignore_msrs");
}
+static inline bool kvm_is_lbrv_enabled(void)
+{
+ return !!get_kvm_amd_param_integer("lbrv");
+}
+
uint64_t *vm_get_pte(struct kvm_vm *vm, uint64_t vaddr);
uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
diff --git a/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
new file mode 100644
index 0000000000000..bf16abb1152e0
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2026, Google, Inc.
+ */
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+
+
+#define L2_GUEST_STACK_SIZE 64
+
+#define DO_BRANCH() do { asm volatile("jmp 1f\n 1: nop"); } while (0)
+
+struct lbr_branch {
+ u64 from, to;
+};
+
+volatile struct lbr_branch l2_branch;
+
+#define RECORD_AND_CHECK_BRANCH(b) \
+do { \
+ wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR); \
+ DO_BRANCH(); \
+ (b)->from = rdmsr(MSR_IA32_LASTBRANCHFROMIP); \
+ (b)->to = rdmsr(MSR_IA32_LASTBRANCHTOIP); \
+ /* Disable LBR right after to avoid overriding the IPs */ \
+ wrmsr(MSR_IA32_DEBUGCTLMSR, 0); \
+ \
+ GUEST_ASSERT_NE((b)->from, 0); \
+ GUEST_ASSERT_NE((b)->to, 0); \
+} while (0)
+
+#define CHECK_BRANCH_MSRS(b) \
+do { \
+ GUEST_ASSERT_EQ((b)->from, rdmsr(MSR_IA32_LASTBRANCHFROMIP)); \
+ GUEST_ASSERT_EQ((b)->to, rdmsr(MSR_IA32_LASTBRANCHTOIP)); \
+} while (0)
+
+#define CHECK_BRANCH_VMCB(b, vmcb) \
+do { \
+ GUEST_ASSERT_EQ((b)->from, vmcb->save.br_from); \
+ GUEST_ASSERT_EQ((b)->to, vmcb->save.br_to); \
+} while (0)
+
+static void l2_guest_code(struct svm_test_data *svm)
+{
+ /* Record a branch, trigger save/restore, and make sure LBRs are intact */
+ RECORD_AND_CHECK_BRANCH(&l2_branch);
+ GUEST_SYNC(true);
+ CHECK_BRANCH_MSRS(&l2_branch);
+ vmmcall();
+}
+
+static void l1_guest_code(struct svm_test_data *svm, bool nested_lbrv)
+{
+ unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+ struct vmcb *vmcb = svm->vmcb;
+ struct lbr_branch l1_branch;
+
+ /* Record a branch, trigger save/restore, and make sure LBRs are intact */
+ RECORD_AND_CHECK_BRANCH(&l1_branch);
+ GUEST_SYNC(true);
+ CHECK_BRANCH_MSRS(&l1_branch);
+
+ /* Run L2, which will also do the same */
+ generic_svm_setup(svm, l2_guest_code,
+ &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+ if (nested_lbrv)
+ vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+ else
+ vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
+
+ run_guest(vmcb, svm->vmcb_gpa);
+ GUEST_ASSERT(svm->vmcb->control.exit_code == SVM_EXIT_VMMCALL);
+
+ /* Trigger save/restore one more time before checking, just for kicks */
+ GUEST_SYNC(true);
+
+ /*
+ * If LBR_CTL_ENABLE is set, L1 and L2 should have separate LBR MSRs, so
+ * expect L1's LBRs to remain intact and L2 LBRs to be in the VMCB.
+ * Otherwise, the MSRs are shared between L1 & L2 so expect L2's LBRs.
+ */
+ if (nested_lbrv) {
+ CHECK_BRANCH_MSRS(&l1_branch);
+ CHECK_BRANCH_VMCB(&l2_branch, vmcb);
+ } else {
+ CHECK_BRANCH_MSRS(&l2_branch);
+ }
+ GUEST_DONE();
+}
+
+void test_lbrv_nested_state(bool nested_lbrv)
+{
+ struct kvm_x86_state *state = NULL;
+ struct kvm_vcpu *vcpu;
+ vm_vaddr_t svm_gva;
+ struct kvm_vm *vm;
+ struct ucall uc;
+
+ pr_info("Testing with nested LBRV %s\n", nested_lbrv ? "enabled" : "disabled");
+
+ vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
+ vcpu_alloc_svm(vm, &svm_gva);
+ vcpu_args_set(vcpu, 2, svm_gva, nested_lbrv);
+
+ for (;;) {
+ vcpu_run(vcpu);
+ TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+ switch (get_ucall(vcpu, &uc)) {
+ case UCALL_SYNC:
+ /* Save the vCPU state and restore it in a new VM on sync */
+ pr_info("Guest triggered save/restore.\n");
+ state = vcpu_save_state(vcpu);
+ kvm_vm_release(vm);
+ vcpu = vm_recreate_with_one_vcpu(vm);
+ vcpu_load_state(vcpu, state);
+ kvm_x86_state_cleanup(state);
+ break;
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT(uc);
+ /* NOT REACHED */
+ case UCALL_DONE:
+ goto done;
+ default:
+ TEST_FAIL("Unknown ucall %lu", uc.cmd);
+ }
+ }
+done:
+ kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+ TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
+ TEST_REQUIRE(kvm_is_lbrv_enabled());
+
+ test_lbrv_nested_state(/*nested_lbrv=*/false);
+ test_lbrv_nested_state(/*nested_lbrv=*/true);
+
+ return 0;
+}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 05/26] KVM: nSVM: Always inject a #GP if mapping VMCB12 fails on nested VMRUN
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (3 preceding siblings ...)
2026-03-03 0:33 ` [PATCH v7 04/26] KVM: selftests: Add a test for LBR save/restore (ft. nested) Yosry Ahmed
@ 2026-03-03 0:33 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 06/26] KVM: nSVM: Refactor checking LBRV enablement in vmcb12 into a helper Yosry Ahmed
` (21 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:33 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
nested_svm_vmrun() currently only injects a #GP if kvm_vcpu_map() fails
with -EINVAL. But it could also fail with -EFAULT if creating a host
mapping fails. Inject a #GP in all cases; there is no reason to treat the
failure modes differently.
Fixes: 8c5fbf1a7231 ("KVM/nSVM: Use the new mapping API for mapping guest memory")
CC: stable@vger.kernel.org
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3bf758c9cb85c..25f769a0d9a0d 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1011,12 +1011,9 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
}
vmcb12_gpa = svm->vmcb->save.rax;
- ret = kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map);
- if (ret == -EINVAL) {
+ if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map)) {
kvm_inject_gp(vcpu, 0);
return 1;
- } else if (ret) {
- return kvm_skip_emulated_instruction(vcpu);
}
ret = kvm_skip_emulated_instruction(vcpu);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 06/26] KVM: nSVM: Refactor checking LBRV enablement in vmcb12 into a helper
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (4 preceding siblings ...)
2026-03-03 0:33 ` [PATCH v7 05/26] KVM: nSVM: Always inject a #GP if mapping VMCB12 fails on nested VMRUN Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 07/26] KVM: nSVM: Refactor writing vmcb12 on nested #VMEXIT as " Yosry Ahmed
` (20 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
Refactor the vCPU cap and vmcb12 flag checks into a helper. The
unlikely() annotation is dropped; it's unlikely (huh) to make a
difference, and the CPU will probably predict it better on its own.
CC: stable@vger.kernel.org
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 25f769a0d9a0d..d84af051f65bc 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -639,6 +639,12 @@ void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
svm->nested.vmcb02.ptr->save.g_pat = svm->vmcb01.ptr->save.g_pat;
}
+static bool nested_vmcb12_has_lbrv(struct kvm_vcpu *vcpu)
+{
+ return guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
+ (to_svm(vcpu)->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK);
+}
+
static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
{
bool new_vmcb12 = false;
@@ -703,8 +709,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
vmcb_mark_dirty(vmcb02, VMCB_DR);
}
- if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
- (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+ if (nested_vmcb12_has_lbrv(vcpu)) {
/*
* Reserved bits of DEBUGCTL are ignored. Be consistent with
* svm_set_msr's definition of reserved bits.
@@ -1234,8 +1239,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
if (!nested_exit_on_intr(svm))
kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
- if (unlikely(guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
- (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
+ if (nested_vmcb12_has_lbrv(vcpu)) {
svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
} else {
svm_copy_lbrs(&vmcb01->save, &vmcb02->save);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 07/26] KVM: nSVM: Refactor writing vmcb12 on nested #VMEXIT as a helper
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (5 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 06/26] KVM: nSVM: Refactor checking LBRV enablement in vmcb12 into a helper Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 08/26] KVM: nSVM: Triple fault if mapping VMCB12 fails on nested #VMEXIT Yosry Ahmed
` (19 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
Move mapping vmcb12 and updating it out of nested_svm_vmexit() into a
helper, no functional change intended.
CC: stable@vger.kernel.org
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 77 ++++++++++++++++++++++-----------------
1 file changed, 44 insertions(+), 33 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d84af051f65bc..82a92501ee86a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1125,36 +1125,20 @@ void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
to_vmcb->save.sysenter_eip = from_vmcb->save.sysenter_eip;
}
-int nested_svm_vmexit(struct vcpu_svm *svm)
+static int nested_svm_vmexit_update_vmcb12(struct kvm_vcpu *vcpu)
{
- struct kvm_vcpu *vcpu = &svm->vcpu;
- struct vmcb *vmcb01 = svm->vmcb01.ptr;
+ struct vcpu_svm *svm = to_svm(vcpu);
struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
- struct vmcb *vmcb12;
struct kvm_host_map map;
+ struct vmcb *vmcb12;
int rc;
rc = kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.vmcb12_gpa), &map);
- if (rc) {
- if (rc == -EINVAL)
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
+ if (rc)
+ return rc;
vmcb12 = map.hva;
- /* Exit Guest-Mode */
- leave_guest_mode(vcpu);
- svm->nested.vmcb12_gpa = 0;
- WARN_ON_ONCE(svm->nested.nested_run_pending);
-
- kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
-
- /* in case we halted in L2 */
- kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
-
- /* Give the current vmcb to the guest */
-
vmcb12->save.es = vmcb02->save.es;
vmcb12->save.cs = vmcb02->save.cs;
vmcb12->save.ss = vmcb02->save.ss;
@@ -1191,10 +1175,48 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
if (guest_cpu_cap_has(vcpu, X86_FEATURE_NRIPS))
vmcb12->control.next_rip = vmcb02->control.next_rip;
+ if (nested_vmcb12_has_lbrv(vcpu))
+ svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
+
vmcb12->control.int_ctl = svm->nested.ctl.int_ctl;
vmcb12->control.event_inj = svm->nested.ctl.event_inj;
vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err;
+ trace_kvm_nested_vmexit_inject(vmcb12->control.exit_code,
+ vmcb12->control.exit_info_1,
+ vmcb12->control.exit_info_2,
+ vmcb12->control.exit_int_info,
+ vmcb12->control.exit_int_info_err,
+ KVM_ISA_SVM);
+
+ kvm_vcpu_unmap(vcpu, &map);
+ return 0;
+}
+
+int nested_svm_vmexit(struct vcpu_svm *svm)
+{
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct vmcb *vmcb01 = svm->vmcb01.ptr;
+ struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
+ int rc;
+
+ rc = nested_svm_vmexit_update_vmcb12(vcpu);
+ if (rc) {
+ if (rc == -EINVAL)
+ kvm_inject_gp(vcpu, 0);
+ return 1;
+ }
+
+ /* Exit Guest-Mode */
+ leave_guest_mode(vcpu);
+ svm->nested.vmcb12_gpa = 0;
+ WARN_ON_ONCE(svm->nested.nested_run_pending);
+
+ kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+
+ /* in case we halted in L2 */
+ kvm_set_mp_state(vcpu, KVM_MP_STATE_RUNNABLE);
+
if (!kvm_pause_in_guest(vcpu->kvm)) {
vmcb01->control.pause_filter_count = vmcb02->control.pause_filter_count;
vmcb_mark_dirty(vmcb01, VMCB_INTERCEPTS);
@@ -1239,9 +1261,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
if (!nested_exit_on_intr(svm))
kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
- if (nested_vmcb12_has_lbrv(vcpu)) {
- svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
- } else {
+ if (!nested_vmcb12_has_lbrv(vcpu)) {
svm_copy_lbrs(&vmcb01->save, &vmcb02->save);
vmcb_mark_dirty(vmcb01, VMCB_LBR);
}
@@ -1297,15 +1317,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
svm->vcpu.arch.dr7 = DR7_FIXED_1;
kvm_update_dr7(&svm->vcpu);
- trace_kvm_nested_vmexit_inject(vmcb12->control.exit_code,
- vmcb12->control.exit_info_1,
- vmcb12->control.exit_info_2,
- vmcb12->control.exit_int_info,
- vmcb12->control.exit_int_info_err,
- KVM_ISA_SVM);
-
- kvm_vcpu_unmap(vcpu, &map);
-
nested_svm_transition_tlb_flush(vcpu);
nested_svm_uninit_mmu_context(vcpu);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 08/26] KVM: nSVM: Triple fault if mapping VMCB12 fails on nested #VMEXIT
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (6 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 07/26] KVM: nSVM: Refactor writing vmcb12 on nested #VMEXIT as " Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 " Yosry Ahmed
` (18 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
KVM currently injects a #GP and hopes for the best if mapping VMCB12
fails on nested #VMEXIT, and only if the failure mode is -EINVAL.
Mapping the VMCB12 could also fail if creating host mappings fails.
After the #GP is injected, nested_svm_vmexit() bails early, without
cleaning up (e.g. KVM_REQ_GET_NESTED_STATE_PAGES is set, is_guest_mode()
is true, etc).
Instead of optionally injecting a #GP, triple fault the guest if mapping
VMCB12 fails since KVM cannot make a sane recovery. The APM states that
a #VMEXIT will triple fault if host state is illegal or an exception
occurs while loading host state, so the behavior is not entirely made
up.
Do not return early from nested_svm_vmexit(); continue cleaning up the
vCPU state (e.g. switch back to vmcb01) to handle the failure as
gracefully as possible.
Fixes: cf74a78b229d ("KVM: SVM: Add VMEXIT handler and intercepts")
CC: stable@vger.kernel.org
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 82a92501ee86a..5ad0ac3680fdd 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1200,12 +1200,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
int rc;
- rc = nested_svm_vmexit_update_vmcb12(vcpu);
- if (rc) {
- if (rc == -EINVAL)
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
+ if (nested_svm_vmexit_update_vmcb12(vcpu))
+ kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
/* Exit Guest-Mode */
leave_guest_mode(vcpu);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 fails on nested #VMEXIT
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (7 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 08/26] KVM: nSVM: Triple fault if mapping VMCB12 fails on nested #VMEXIT Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 16:49 ` Sean Christopherson
2026-03-03 0:34 ` [PATCH v7 10/26] KVM: nSVM: Clear GIF on nested #VMEXIT(INVALID) Yosry Ahmed
` (17 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
If loading L1's CR3 fails on a nested #VMEXIT, nested_svm_vmexit()
returns an error code that is ignored by most callers, and continues to
run L1 with corrupted state. A sane recovery is not possible in this
case, and HW behavior is to cause a shutdown. Inject a triple fault
instead, and do not return early from nested_svm_vmexit(). Continue
cleaning up the vCPU state (e.g. clear pending exceptions), to handle
the failure as gracefully as possible.
From the APM:
Upon #VMEXIT, the processor performs the following actions in
order to return to the host execution context:
...
if (illegal host state loaded, or exception while loading
host state)
shutdown
else
execute first host instruction following the VMRUN
Remove the return value of nested_svm_vmexit(), which is mostly
unchecked anyway.
Fixes: d82aaef9c88a ("KVM: nSVM: use nested_svm_load_cr3() on guest->host switch")
CC: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 10 +++-------
arch/x86/kvm/svm/svm.c | 11 ++---------
arch/x86/kvm/svm/svm.h | 6 +++---
3 files changed, 8 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5ad0ac3680fdd..bb2cec5fd0434 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1193,12 +1193,11 @@ static int nested_svm_vmexit_update_vmcb12(struct kvm_vcpu *vcpu)
return 0;
}
-int nested_svm_vmexit(struct vcpu_svm *svm)
+void nested_svm_vmexit(struct vcpu_svm *svm)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct vmcb *vmcb01 = svm->vmcb01.ptr;
struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
- int rc;
if (nested_svm_vmexit_update_vmcb12(vcpu))
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
@@ -1317,9 +1316,8 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
nested_svm_uninit_mmu_context(vcpu);
- rc = nested_svm_load_cr3(vcpu, vmcb01->save.cr3, false, true);
- if (rc)
- return 1;
+ if (nested_svm_load_cr3(vcpu, vmcb01->save.cr3, false, true))
+ kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
/*
* Drop what we picked up for L2 via svm_complete_interrupts() so it
@@ -1344,8 +1342,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
*/
if (kvm_apicv_activated(vcpu->kvm))
__kvm_vcpu_update_apicv(vcpu);
-
- return 0;
}
static void nested_svm_triple_fault(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cb53174583a26..1b31b033d79b0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2234,13 +2234,9 @@ static int emulate_svm_instr(struct kvm_vcpu *vcpu, int opcode)
[SVM_INSTR_VMSAVE] = vmsave_interception,
};
struct vcpu_svm *svm = to_svm(vcpu);
- int ret;
if (is_guest_mode(vcpu)) {
- /* Returns '1' or -errno on failure, '0' on success. */
- ret = nested_svm_simple_vmexit(svm, guest_mode_exit_codes[opcode]);
- if (ret)
- return ret;
+ nested_svm_simple_vmexit(svm, guest_mode_exit_codes[opcode]);
return 1;
}
return svm_instr_handlers[opcode](vcpu);
@@ -4796,7 +4792,6 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_host_map map_save;
- int ret;
if (!is_guest_mode(vcpu))
return 0;
@@ -4816,9 +4811,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
- ret = nested_svm_simple_vmexit(svm, SVM_EXIT_SW);
- if (ret)
- return ret;
+ nested_svm_simple_vmexit(svm, SVM_EXIT_SW);
/*
* KVM uses VMCB01 to store L1 host state while L2 runs but
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 44d767cd1d25a..7629cb37c9302 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -793,14 +793,14 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu);
void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
struct vmcb_save_area *from_save);
void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
-int nested_svm_vmexit(struct vcpu_svm *svm);
+void nested_svm_vmexit(struct vcpu_svm *svm);
-static inline int nested_svm_simple_vmexit(struct vcpu_svm *svm, u32 exit_code)
+static inline void nested_svm_simple_vmexit(struct vcpu_svm *svm, u32 exit_code)
{
svm->vmcb->control.exit_code = exit_code;
svm->vmcb->control.exit_info_1 = 0;
svm->vmcb->control.exit_info_2 = 0;
- return nested_svm_vmexit(svm);
+ nested_svm_vmexit(svm);
}
int nested_svm_exit_handled(struct vcpu_svm *svm);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 10/26] KVM: nSVM: Clear GIF on nested #VMEXIT(INVALID)
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (8 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 " Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 11/26] KVM: nSVM: Clear EVENTINJ fields in vmcb12 on nested #VMEXIT Yosry Ahmed
` (16 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
According to the APM, GIF is set to 0 on any #VMEXIT, including
a #VMEXIT(INVALID) due to failed consistency checks. Clear GIF on
consistency check failures.
Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index bb2cec5fd0434..04ccab887c5ec 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1036,6 +1036,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
vmcb12->control.exit_code = SVM_EXIT_ERR;
vmcb12->control.exit_info_1 = 0;
vmcb12->control.exit_info_2 = 0;
+ svm_set_gif(svm, false);
goto out;
}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 11/26] KVM: nSVM: Clear EVENTINJ fields in vmcb12 on nested #VMEXIT
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (9 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 10/26] KVM: nSVM: Clear GIF on nested #VMEXIT(INVALID) Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ " Yosry Ahmed
` (15 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
According to the APM, from the reference of the VMRUN instruction:
Upon #VMEXIT, the processor performs the following actions in
order to return to the host execution context:
...
clear EVENTINJ field in VMCB
KVM syncs EVENTINJ fields from vmcb02 to cached vmcb12 on every L2->L0
#VMEXIT. Since these fields are zeroed by the CPU on #VMEXIT, they will
mostly be zeroed in vmcb12 on nested #VMEXIT by nested_svm_vmexit().
However, this is not the case when:
1. Consistency checks fail, as nested_svm_vmexit() is not called.
2. Entering guest mode fails before L2 runs (e.g. due to failed load of
CR3).
(2) was broken by commit 2d8a42be0e2b ("KVM: nSVM: synchronize VMCB
controls updated by the processor on every vmexit"), as prior to that
nested_svm_vmexit() always zeroed EVENTINJ fields.
Explicitly clear the fields in all nested #VMEXIT code paths.
Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
Fixes: 2d8a42be0e2b ("KVM: nSVM: synchronize VMCB controls updated by the processor on every vmexit")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 04ccab887c5ec..f0ed352a3e901 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1036,6 +1036,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
vmcb12->control.exit_code = SVM_EXIT_ERR;
vmcb12->control.exit_info_1 = 0;
vmcb12->control.exit_info_2 = 0;
+ vmcb12->control.event_inj = 0;
+ vmcb12->control.event_inj_err = 0;
svm_set_gif(svm, false);
goto out;
}
@@ -1179,9 +1181,9 @@ static int nested_svm_vmexit_update_vmcb12(struct kvm_vcpu *vcpu)
if (nested_vmcb12_has_lbrv(vcpu))
svm_copy_lbrs(&vmcb12->save, &vmcb02->save);
+ vmcb12->control.event_inj = 0;
+ vmcb12->control.event_inj_err = 0;
vmcb12->control.int_ctl = svm->nested.ctl.int_ctl;
- vmcb12->control.event_inj = svm->nested.ctl.event_inj;
- vmcb12->control.event_inj_err = svm->nested.ctl.event_inj_err;
trace_kvm_nested_vmexit_inject(vmcb12->control.exit_code,
vmcb12->control.exit_info_1,
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ on nested #VMEXIT
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (10 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 11/26] KVM: nSVM: Clear EVENTINJ fields in vmcb12 on nested #VMEXIT Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 16:50 ` Sean Christopherson
2026-03-03 0:34 ` [PATCH v7 13/26] KVM: nSVM: Drop nested_vmcb_check_{save/control}() wrappers Yosry Ahmed
` (14 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
KVM clears tracking of L1->L2 injected NMIs (i.e. nmi_l1_to_l2) and soft
IRQs (i.e. soft_int_injected) on a synthesized #VMEXIT(INVALID) due to
failed VMRUN. However, they are not explicitly cleared in other
synthesized #VMEXITs.
soft_int_injected is always cleared after the first VMRUN of L2 when
completing interrupts, as any re-injection is then tracked by KVM
(instead of purely in vmcb02).
nmi_l1_to_l2 is not cleared after the first VMRUN if NMI injection
failed, as KVM still needs to keep track that the NMI originated from L1
to avoid blocking NMIs for L1. It is only cleared when the NMI injection
succeeds.
KVM could synthesize a #VMEXIT to L1 before successfully injecting the
NMI into L2 (e.g. due to a #NPF on L2's NMI handler in L1's NPTs). In
this case, nmi_l1_to_l2 will remain true, and KVM may not correctly mask
NMIs and intercept IRET when injecting an NMI into L1.
Clear both nmi_l1_to_l2 and soft_int_injected in nested_svm_vmexit() to
capture all #VMEXITs, except those that occur due to failed consistency
checks, as those happen before nmi_l1_to_l2 or soft_int_injected are
set.
Fixes: 159fc6fa3b7d ("KVM: nSVM: Transparently handle L1 -> L2 NMI re-injection")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f0ed352a3e901..b66bd9bfce9d8 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1065,8 +1065,6 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
out_exit_err:
svm->nested.nested_run_pending = 0;
- svm->nmi_l1_to_l2 = false;
- svm->soft_int_injected = false;
svm->vmcb->control.exit_code = SVM_EXIT_ERR;
svm->vmcb->control.exit_info_1 = 0;
@@ -1322,6 +1320,10 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
if (nested_svm_load_cr3(vcpu, vmcb01->save.cr3, false, true))
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
+ /* Drop tracking for L1->L2 injected NMIs and soft IRQs */
+ svm->nmi_l1_to_l2 = false;
+ svm->soft_int_injected = false;
+
/*
* Drop what we picked up for L2 via svm_complete_interrupts() so it
* doesn't end up in L1.
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 13/26] KVM: nSVM: Drop nested_vmcb_check_{save/control}() wrappers
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (11 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ " Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 14/26] KVM: nSVM: Drop the non-architectural consistency check for NP_ENABLE Yosry Ahmed
` (13 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
The wrappers provide little value and make it harder to see what KVM is
checking in the normal flow. Drop them.
Opportunistically fixup comments referring to the functions, adding '()'
to make it clear it's a reference to a function.
No functional change intended.
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 36 ++++++++++--------------------------
1 file changed, 10 insertions(+), 26 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b66bd9bfce9d8..21e1a43c91879 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -339,8 +339,8 @@ static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa, u32 size)
kvm_vcpu_is_legal_gpa(vcpu, addr + size - 1);
}
-static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
- struct vmcb_ctrl_area_cached *control)
+static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
+ struct vmcb_ctrl_area_cached *control)
{
if (CC(!vmcb12_is_intercept(control, INTERCEPT_VMRUN)))
return false;
@@ -367,8 +367,8 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
}
/* Common checks that apply to both L1 and L2 state. */
-static bool __nested_vmcb_check_save(struct kvm_vcpu *vcpu,
- struct vmcb_save_area_cached *save)
+static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu,
+ struct vmcb_save_area_cached *save)
{
if (CC(!(save->efer & EFER_SVME)))
return false;
@@ -402,22 +402,6 @@ static bool __nested_vmcb_check_save(struct kvm_vcpu *vcpu,
return true;
}
-static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
- struct vmcb_save_area_cached *save = &svm->nested.save;
-
- return __nested_vmcb_check_save(vcpu, save);
-}
-
-static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
- struct vmcb_ctrl_area_cached *ctl = &svm->nested.ctl;
-
- return __nested_vmcb_check_controls(vcpu, ctl);
-}
-
/*
* If a feature is not advertised to L1, clear the corresponding vmcb12
* intercept.
@@ -469,7 +453,7 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->pause_filter_count = from->pause_filter_count;
to->pause_filter_thresh = from->pause_filter_thresh;
- /* Copy asid here because nested_vmcb_check_controls will check it. */
+ /* Copy asid here because nested_vmcb_check_controls() will check it */
to->asid = from->asid;
to->msrpm_base_pa &= ~0x0fffULL;
to->iopm_base_pa &= ~0x0fffULL;
@@ -1031,8 +1015,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
- if (!nested_vmcb_check_save(vcpu) ||
- !nested_vmcb_check_controls(vcpu)) {
+ if (!nested_vmcb_check_save(vcpu, &svm->nested.save) ||
+ !nested_vmcb_check_controls(vcpu, &svm->nested.ctl)) {
vmcb12->control.exit_code = SVM_EXIT_ERR;
vmcb12->control.exit_info_1 = 0;
vmcb12->control.exit_info_2 = 0;
@@ -1878,12 +1862,12 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
ret = -EINVAL;
__nested_copy_vmcb_control_to_cache(vcpu, &ctl_cached, ctl);
- if (!__nested_vmcb_check_controls(vcpu, &ctl_cached))
+ if (!nested_vmcb_check_controls(vcpu, &ctl_cached))
goto out_free;
/*
* Processor state contains L2 state. Check that it is
- * valid for guest mode (see nested_vmcb_check_save).
+ * valid for guest mode (see nested_vmcb_check_save()).
*/
cr0 = kvm_read_cr0(vcpu);
if (((cr0 & X86_CR0_CD) == 0) && (cr0 & X86_CR0_NW))
@@ -1897,7 +1881,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
if (!(save->cr0 & X86_CR0_PG) ||
!(save->cr0 & X86_CR0_PE) ||
(save->rflags & X86_EFLAGS_VM) ||
- !__nested_vmcb_check_save(vcpu, &save_cached))
+ !nested_vmcb_check_save(vcpu, &save_cached))
goto out_free;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 14/26] KVM: nSVM: Drop the non-architectural consistency check for NP_ENABLE
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (12 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 13/26] KVM: nSVM: Drop nested_vmcb_check_{save/control}() wrappers Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity Yosry Ahmed
` (12 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
KVM currently fails a nested VMRUN and injects VMEXIT_INVALID (aka
SVM_EXIT_ERR) if L1 sets NP_ENABLE and the host does not support NPTs.
On first glance, it seems like the check should actually be for
guest_cpu_cap_has(X86_FEATURE_NPT) instead, as it is possible for the
host to support NPTs but the guest CPUID to not advertise it.
However, the consistency check is not architectural to begin with. The
APM does not mention VMEXIT_INVALID if NP_ENABLE is set on a processor
that does not have X86_FEATURE_NPT. Hence, NP_ENABLE should be ignored
if X86_FEATURE_NPT is not available for L1, so sanitize it when copying
from the VMCB12 to KVM's cache.
Apart from the consistency check, NP_ENABLE in VMCB12 is currently
ignored because the bit is actually copied from VMCB01 to VMCB02, not
from VMCB12.
Fixes: 4b16184c1cca ("KVM: SVM: Initialize Nested Nested MMU context on VMRUN")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 21e1a43c91879..613d5e2e7c3d1 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -348,9 +348,6 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
if (CC(control->asid == 0))
return false;
- if (CC((control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) && !npt_enabled))
- return false;
-
if (CC(!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
MSRPM_SIZE)))
return false;
@@ -431,6 +428,11 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
nested_svm_sanitize_intercept(vcpu, to, SKINIT);
nested_svm_sanitize_intercept(vcpu, to, RDPRU);
+ /* Always clear SVM_NESTED_CTL_NP_ENABLE if the guest cannot use NPTs */
+ to->nested_ctl = from->nested_ctl;
+ if (!guest_cpu_cap_has(vcpu, X86_FEATURE_NPT))
+ to->nested_ctl &= ~SVM_NESTED_CTL_NP_ENABLE;
+
to->iopm_base_pa = from->iopm_base_pa;
to->msrpm_base_pa = from->msrpm_base_pa;
to->tsc_offset = from->tsc_offset;
@@ -444,7 +446,6 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->exit_info_2 = from->exit_info_2;
to->exit_int_info = from->exit_int_info;
to->exit_int_info_err = from->exit_int_info_err;
- to->nested_ctl = from->nested_ctl;
to->event_inj = from->event_inj;
to->event_inj_err = from->event_inj_err;
to->next_rip = from->next_rip;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (13 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 14/26] KVM: nSVM: Drop the non-architectural consistency check for NP_ENABLE Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 16:56 ` Sean Christopherson
2026-03-03 0:34 ` [PATCH v7 16/26] KVM: nSVM: Add missing consistency check for EFER, CR0, CR4, and CS Yosry Ahmed
` (11 subsequent siblings)
26 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
From the APM Volume #2, 15.25.4 (24593—Rev. 3.42—March 2024):
When VMRUN is executed with nested paging enabled
(NP_ENABLE = 1), the following conditions are considered illegal
state combinations, in addition to those mentioned in
“Canonicalization and Consistency Checks”:
• Any MBZ bit of nCR3 is set.
• Any G_PAT.PA field has an unsupported type encoding or any
reserved field in G_PAT has a nonzero value.
Add the consistency check for nCR3 being a legal GPA with no MBZ bits
set. The G_PAT.PA check was proposed separately [*].
[*] https://lore.kernel.org/kvm/20260205214326.1029278-3-jmattson@google.com/
Fixes: 4b16184c1cca ("KVM: SVM: Initialize Nested Nested MMU context on VMRUN")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 613d5e2e7c3d1..3aaa4f0bb31ab 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -348,6 +348,11 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
if (CC(control->asid == 0))
return false;
+ if (control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
+ if (CC(!kvm_vcpu_is_legal_gpa(vcpu, control->nested_cr3)))
+ return false;
+ }
+
if (CC(!nested_svm_check_bitmap_pa(vcpu, control->msrpm_base_pa,
MSRPM_SIZE)))
return false;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 16/26] KVM: nSVM: Add missing consistency check for EFER, CR0, CR4, and CS
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (14 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 17/26] KVM: nSVM: Add missing consistency check for EVENTINJ Yosry Ahmed
` (10 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
According to the APM Volume #2, 15.5, Canonicalization and Consistency
Checks (24593—Rev. 3.42—March 2024), the following condition (among
others) results in a #VMEXIT with VMEXIT_INVALID (aka SVM_EXIT_ERR):
EFER.LME, CR0.PG, CR4.PAE, CS.L, and CS.D are all non-zero.
In the list of consistency checks done when EFER.LME and CR0.PG are set,
add a check that CS.L and CS.D are not both set, after the existing
check that CR4.PAE is set.
This is functionally a nop: a nested VMRUN with this state already
results in SVM_EXIT_ERR in HW, which KVM forwards to L1. However, KVM
performs all consistency checks before a VMRUN is actually attempted, so
do the same for this one.
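For context, a sketch of where the two flags live in the VMCB's compressed
segment attributes (per the existing SVM_SELECTOR_* definitions, where L is
attrib bit 9 and D/B is attrib bit 10); the predicate name is hypothetical and
simply mirrors the check added below:

        static bool nested_cs_l_and_db_set(const struct vmcb_seg *cs)
        {
                /* L = 64-bit code segment, D/B = default operand size. */
                return (cs->attrib & SVM_SELECTOR_L_MASK) &&
                       (cs->attrib & SVM_SELECTOR_DB_MASK);
        }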
Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
Cc: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 6 ++++++
arch/x86/kvm/svm/svm.h | 1 +
2 files changed, 7 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 3aaa4f0bb31ab..93b3fab9b415d 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -392,6 +392,10 @@ static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu,
CC(!(save->cr0 & X86_CR0_PE)) ||
CC(!kvm_vcpu_is_legal_cr3(vcpu, save->cr3)))
return false;
+
+ if (CC((save->cs.attrib & SVM_SELECTOR_L_MASK) &&
+ (save->cs.attrib & SVM_SELECTOR_DB_MASK)))
+ return false;
}
/* Note, SVM doesn't have any additional restrictions on CR4. */
@@ -487,6 +491,8 @@ static void __nested_copy_vmcb_save_to_cache(struct vmcb_save_area_cached *to,
* Copy only fields that are validated, as we need them
* to avoid TOC/TOU races.
*/
+ to->cs = from->cs;
+
to->efer = from->efer;
to->cr0 = from->cr0;
to->cr3 = from->cr3;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7629cb37c9302..0a5d5a4453b7e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -140,6 +140,7 @@ struct kvm_vmcb_info {
};
struct vmcb_save_area_cached {
+ struct vmcb_seg cs;
u64 efer;
u64 cr4;
u64 cr3;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 17/26] KVM: nSVM: Add missing consistency check for EVENTINJ
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (15 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 16/26] KVM: nSVM: Add missing consistency check for EFER, CR0, CR4, and CS Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 18/26] KVM: SVM: Rename vmcb->nested_ctl to vmcb->misc_ctl Yosry Ahmed
` (9 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, stable
According to the APM Volume #2, 15.20 (24593—Rev. 3.42—March 2024):
VMRUN exits with VMEXIT_INVALID error code if either:
• Reserved values of TYPE have been specified, or
• TYPE = 3 (exception) has been specified with a vector that does not
correspond to an exception (this includes vector 2, which is an NMI,
not an exception).
Add the missing consistency checks to KVM. For the second point, inject
VMEXIT_INVALID if the vector is anything but the vectors defined by the
APM for exceptions. Reserved vectors are also considered invalid, which
matches the HW behavior. Vector 9 (i.e. #CSO) is considered invalid
because it is reserved on modern CPUs, and according to LLMs no CPUs
exist that both support SVM and produce #CSO.
The set of defined exceptions can differ between virtual CPUs, as new
CPUs define new vectors. In a best effort to dynamically define the
valid vectors, treat all currently defined vectors as valid except those
obviously tied to a CPU feature: SHSTK -> #CP and SEV-ES -> #VC. As new
vectors are defined, they can similarly be tied to corresponding CPU
features.
Invalid vectors on specific (e.g. old) CPUs that are missed by KVM
should be rejected by HW anyway.
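As an illustration (a sketch, not part of the patch), an L1 guest injecting a
reserved EVENTINJ type should now observe SVM_EXIT_ERR. This assumes the
selftest helpers used elsewhere in this series; l2_guest_code is a placeholder
and SVM_EXIT_ERR is assumed to be visible to the guest code:

        static void l1_guest_code(struct svm_test_data *svm)
        {
                unsigned long l2_guest_stack[64];

                generic_svm_setup(svm, l2_guest_code,
                                  &l2_guest_stack[64]);

                /* VALID (bit 31) set with TYPE (bits 10:8) = 1, a reserved encoding. */
                svm->vmcb->control.event_inj = (1u << 31) | (1u << 8);

                run_guest(svm->vmcb, svm->vmcb_gpa);
                GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_ERR);
        }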
Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
CC: stable@vger.kernel.org
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 51 +++++++++++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 93b3fab9b415d..15f483fac28a0 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -339,6 +339,54 @@ static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa, u32 size)
kvm_vcpu_is_legal_gpa(vcpu, addr + size - 1);
}
+static bool nested_svm_event_inj_valid_exept(struct kvm_vcpu *vcpu, u8 vector)
+{
+ /*
+ * Vectors that do not correspond to a defined exception are invalid
+ * (including #NMI and reserved vectors). In a best effort to define
+ * valid exceptions based on the virtual CPU, make all exceptions always
+ * valid except those obviously tied to a CPU feature.
+ */
+ switch (vector) {
+ case DE_VECTOR: case DB_VECTOR: case BP_VECTOR: case OF_VECTOR:
+ case BR_VECTOR: case UD_VECTOR: case NM_VECTOR: case DF_VECTOR:
+ case TS_VECTOR: case NP_VECTOR: case SS_VECTOR: case GP_VECTOR:
+ case PF_VECTOR: case MF_VECTOR: case AC_VECTOR: case MC_VECTOR:
+ case XM_VECTOR: case HV_VECTOR: case SX_VECTOR:
+ return true;
+ case CP_VECTOR:
+ return guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK);
+ case VC_VECTOR:
+ return guest_cpu_cap_has(vcpu, X86_FEATURE_SEV_ES);
+ }
+ return false;
+}
+
+/*
+ * According to the APM, VMRUN exits with SVM_EXIT_ERR if SVM_EVTINJ_VALID is
+ * set and:
+ * - The type of event_inj is not one of the defined values.
+ * - The type is SVM_EVTINJ_TYPE_EXEPT, but the vector is not a valid exception.
+ */
+static bool nested_svm_check_event_inj(struct kvm_vcpu *vcpu, u32 event_inj)
+{
+ u32 type = event_inj & SVM_EVTINJ_TYPE_MASK;
+ u8 vector = event_inj & SVM_EVTINJ_VEC_MASK;
+
+ if (!(event_inj & SVM_EVTINJ_VALID))
+ return true;
+
+ if (type != SVM_EVTINJ_TYPE_INTR && type != SVM_EVTINJ_TYPE_NMI &&
+ type != SVM_EVTINJ_TYPE_EXEPT && type != SVM_EVTINJ_TYPE_SOFT)
+ return false;
+
+ if (type == SVM_EVTINJ_TYPE_EXEPT &&
+ !nested_svm_event_inj_valid_exept(vcpu, vector))
+ return false;
+
+ return true;
+}
+
static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
struct vmcb_ctrl_area_cached *control)
{
@@ -365,6 +413,9 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
return false;
}
+ if (CC(!nested_svm_check_event_inj(vcpu, control->event_inj)))
+ return false;
+
return true;
}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 18/26] KVM: SVM: Rename vmcb->nested_ctl to vmcb->misc_ctl
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (16 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 17/26] KVM: nSVM: Add missing consistency check for EVENTINJ Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 19/26] KVM: SVM: Rename vmcb->virt_ext to vmcb->misc_ctl2 Yosry Ahmed
` (8 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
The 'nested_ctl' field is misnamed. Although the first bit is for nested
paging, the other bits defined by KVM are for SEV/SEV-ES. Further bits in
the same field defined by the APM (but not by KVM) include "Guest Mode
Execution Trap", "Enable INVLPGB/TLBSYNC", and other control bits
unrelated to 'nested'.
These bits have nothing in common, so just name the field misc_ctl, and
rename the flags accordingly.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/include/asm/svm.h | 8 ++++----
arch/x86/kvm/svm/nested.c | 14 +++++++-------
arch/x86/kvm/svm/sev.c | 4 ++--
arch/x86/kvm/svm/svm.c | 4 ++--
arch/x86/kvm/svm/svm.h | 4 ++--
tools/testing/selftests/kvm/include/x86/svm.h | 6 +++---
tools/testing/selftests/kvm/lib/x86/svm.c | 2 +-
7 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index edde36097ddc3..983db6575141d 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -142,7 +142,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u64 exit_info_2;
u32 exit_int_info;
u32 exit_int_info_err;
- u64 nested_ctl;
+ u64 misc_ctl;
u64 avic_vapic_bar;
u64 ghcb_gpa;
u32 event_inj;
@@ -239,9 +239,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
-#define SVM_NESTED_CTL_NP_ENABLE BIT(0)
-#define SVM_NESTED_CTL_SEV_ENABLE BIT(1)
-#define SVM_NESTED_CTL_SEV_ES_ENABLE BIT(2)
+#define SVM_MISC_ENABLE_NP BIT(0)
+#define SVM_MISC_ENABLE_SEV BIT(1)
+#define SVM_MISC_ENABLE_SEV_ES BIT(2)
#define SVM_TSC_RATIO_RSVD 0xffffff0000000000ULL
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 15f483fac28a0..2f84fca3df85a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -396,7 +396,7 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
if (CC(control->asid == 0))
return false;
- if (control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
+ if (control->misc_ctl & SVM_MISC_ENABLE_NP) {
if (CC(!kvm_vcpu_is_legal_gpa(vcpu, control->nested_cr3)))
return false;
}
@@ -488,10 +488,10 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
nested_svm_sanitize_intercept(vcpu, to, SKINIT);
nested_svm_sanitize_intercept(vcpu, to, RDPRU);
- /* Always clear SVM_NESTED_CTL_NP_ENABLE if the guest cannot use NPTs */
- to->nested_ctl = from->nested_ctl;
+ /* Always clear SVM_MISC_ENABLE_NP if the guest cannot use NPTs */
+ to->misc_ctl = from->misc_ctl;
if (!guest_cpu_cap_has(vcpu, X86_FEATURE_NPT))
- to->nested_ctl &= ~SVM_NESTED_CTL_NP_ENABLE;
+ to->misc_ctl &= ~SVM_MISC_ENABLE_NP;
to->iopm_base_pa = from->iopm_base_pa;
to->msrpm_base_pa = from->msrpm_base_pa;
@@ -835,7 +835,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
}
/* Copied from vmcb01. msrpm_base can be overwritten later. */
- vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
+ vmcb02->control.misc_ctl = vmcb01->control.misc_ctl;
vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
vmcb02->control.msrpm_base_pa = vmcb01->control.msrpm_base_pa;
vmcb_mark_dirty(vmcb02, VMCB_PERM_MAP);
@@ -995,7 +995,7 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
vmcb12->save.rip,
vmcb12->control.int_ctl,
vmcb12->control.event_inj,
- vmcb12->control.nested_ctl,
+ vmcb12->control.misc_ctl,
vmcb12->control.nested_cr3,
vmcb12->save.cr3,
KVM_ISA_SVM);
@@ -1785,7 +1785,7 @@ static void nested_copy_vmcb_cache_to_control(struct vmcb_control_area *dst,
dst->exit_info_2 = from->exit_info_2;
dst->exit_int_info = from->exit_int_info;
dst->exit_int_info_err = from->exit_int_info_err;
- dst->nested_ctl = from->nested_ctl;
+ dst->misc_ctl = from->misc_ctl;
dst->event_inj = from->event_inj;
dst->event_inj_err = from->event_inj_err;
dst->next_rip = from->next_rip;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index ea515cf411686..0ed9cfed1cbcc 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4599,7 +4599,7 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm, bool init_event)
struct kvm_sev_info *sev = to_kvm_sev_info(svm->vcpu.kvm);
struct vmcb *vmcb = svm->vmcb01.ptr;
- svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ES_ENABLE;
+ svm->vmcb->control.misc_ctl |= SVM_MISC_ENABLE_SEV_ES;
/*
* An SEV-ES guest requires a VMSA area that is a separate from the
@@ -4670,7 +4670,7 @@ void sev_init_vmcb(struct vcpu_svm *svm, bool init_event)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
- svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;
+ svm->vmcb->control.misc_ctl |= SVM_MISC_ENABLE_SEV;
clr_exception_intercept(svm, UD_VECTOR);
/*
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1b31b033d79b0..7bc8b72fe5ad8 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1152,7 +1152,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
if (npt_enabled) {
/* Setup VMCB for Nested Paging */
- control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
+ control->misc_ctl |= SVM_MISC_ENABLE_NP;
svm_clr_intercept(svm, INTERCEPT_INVLPG);
clr_exception_intercept(svm, PF_VECTOR);
svm_clr_intercept(svm, INTERCEPT_CR3_READ);
@@ -3362,7 +3362,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
pr_err("%-20s%016llx\n", "exit_info2:", control->exit_info_2);
pr_err("%-20s%08x\n", "exit_int_info:", control->exit_int_info);
pr_err("%-20s%08x\n", "exit_int_info_err:", control->exit_int_info_err);
- pr_err("%-20s%lld\n", "nested_ctl:", control->nested_ctl);
+ pr_err("%-20s%lld\n", "misc_ctl:", control->misc_ctl);
pr_err("%-20s%016llx\n", "nested_cr3:", control->nested_cr3);
pr_err("%-20s%016llx\n", "avic_vapic_bar:", control->avic_vapic_bar);
pr_err("%-20s%016llx\n", "ghcb:", control->ghcb_gpa);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0a5d5a4453b7e..f66e5c8565aad 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -167,7 +167,7 @@ struct vmcb_ctrl_area_cached {
u64 exit_info_2;
u32 exit_int_info;
u32 exit_int_info_err;
- u64 nested_ctl;
+ u64 misc_ctl;
u32 event_inj;
u32 event_inj_err;
u64 next_rip;
@@ -579,7 +579,7 @@ static inline bool gif_set(struct vcpu_svm *svm)
static inline bool nested_npt_enabled(struct vcpu_svm *svm)
{
- return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
+ return svm->nested.ctl.misc_ctl & SVM_MISC_ENABLE_NP;
}
static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
diff --git a/tools/testing/selftests/kvm/include/x86/svm.h b/tools/testing/selftests/kvm/include/x86/svm.h
index 10b30b38bb3fd..d81d8a9f5bfb6 100644
--- a/tools/testing/selftests/kvm/include/x86/svm.h
+++ b/tools/testing/selftests/kvm/include/x86/svm.h
@@ -97,7 +97,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u64 exit_info_2;
u32 exit_int_info;
u32 exit_int_info_err;
- u64 nested_ctl;
+ u64 misc_ctl;
u64 avic_vapic_bar;
u8 reserved_4[8];
u32 event_inj;
@@ -175,8 +175,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
#define SVM_VM_CR_SVM_DIS_MASK 0x0010ULL
-#define SVM_NESTED_CTL_NP_ENABLE BIT(0)
-#define SVM_NESTED_CTL_SEV_ENABLE BIT(1)
+#define SVM_MISC_ENABLE_NP BIT(0)
+#define SVM_MISC_ENABLE_SEV BIT(1)
struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index 2e5c480c9afd4..eb20b00112c76 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -126,7 +126,7 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
guest_regs.rdi = (u64)svm;
if (svm->ncr3_gpa) {
- ctrl->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
+ ctrl->misc_ctl |= SVM_MISC_ENABLE_NP;
ctrl->nested_cr3 = svm->ncr3_gpa;
}
}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 19/26] KVM: SVM: Rename vmcb->virt_ext to vmcb->misc_ctl2
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (17 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 18/26] KVM: SVM: Rename vmcb->nested_ctl to vmcb->misc_ctl Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 20/26] KVM: nSVM: Cache all used fields from VMCB12 Yosry Ahmed
` (7 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
'virt' is a confusing name for a VMCB field: in a virtualization control
block, nearly everything is 'virt'-related, so the name is ambiguous.
The 'virt_ext' field includes bits for LBR virtualization and
VMSAVE/VMLOAD virtualization, so it is really just another miscellaneous
control field. Name it as such.
While at it, move the definitions of the bits below those for
'misc_ctl' and rename them for consistency.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/include/asm/svm.h | 7 +++----
arch/x86/kvm/svm/nested.c | 16 +++++++--------
arch/x86/kvm/svm/svm.c | 20 +++++++++----------
arch/x86/kvm/svm/svm.h | 2 +-
tools/testing/selftests/kvm/include/x86/svm.h | 8 ++++----
.../kvm/x86/nested_vmsave_vmload_test.c | 16 +++++++--------
.../selftests/kvm/x86/svm_lbr_nested_state.c | 4 ++--
7 files changed, 36 insertions(+), 37 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 983db6575141d..c169256c415fb 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -148,7 +148,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u32 event_inj;
u32 event_inj_err;
u64 nested_cr3;
- u64 virt_ext;
+ u64 misc_ctl2;
u32 clean;
u32 reserved_5;
u64 next_rip;
@@ -222,9 +222,6 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define X2APIC_MODE_SHIFT 30
#define X2APIC_MODE_MASK (1 << X2APIC_MODE_SHIFT)
-#define LBR_CTL_ENABLE_MASK BIT_ULL(0)
-#define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK BIT_ULL(1)
-
#define SVM_INTERRUPT_SHADOW_MASK BIT_ULL(0)
#define SVM_GUEST_INTERRUPT_MASK BIT_ULL(1)
@@ -243,6 +240,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_MISC_ENABLE_SEV BIT(1)
#define SVM_MISC_ENABLE_SEV_ES BIT(2)
+#define SVM_MISC2_ENABLE_V_LBR BIT_ULL(0)
+#define SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE BIT_ULL(1)
#define SVM_TSC_RATIO_RSVD 0xffffff0000000000ULL
#define SVM_TSC_RATIO_MIN 0x0000000000000001ULL
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 2f84fca3df85a..5994e309831d0 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -116,7 +116,7 @@ static bool nested_vmcb_needs_vls_intercept(struct vcpu_svm *svm)
if (!nested_npt_enabled(svm))
return true;
- if (!(svm->nested.ctl.virt_ext & VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK))
+ if (!(svm->nested.ctl.misc_ctl2 & SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE))
return true;
return false;
@@ -179,7 +179,7 @@ void recalc_intercepts(struct vcpu_svm *svm)
vmcb_set_intercept(c, INTERCEPT_VMLOAD);
vmcb_set_intercept(c, INTERCEPT_VMSAVE);
} else {
- WARN_ON(!(c->virt_ext & VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK));
+ WARN_ON(!(c->misc_ctl2 & SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE));
}
}
@@ -510,7 +510,7 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->event_inj_err = from->event_inj_err;
to->next_rip = from->next_rip;
to->nested_cr3 = from->nested_cr3;
- to->virt_ext = from->virt_ext;
+ to->misc_ctl2 = from->misc_ctl2;
to->pause_filter_count = from->pause_filter_count;
to->pause_filter_thresh = from->pause_filter_thresh;
@@ -689,7 +689,7 @@ void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
static bool nested_vmcb12_has_lbrv(struct kvm_vcpu *vcpu)
{
return guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
- (to_svm(vcpu)->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK);
+ (to_svm(vcpu)->nested.ctl.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR);
}
static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
@@ -920,10 +920,10 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
svm->soft_int_next_rip = vmcb12_rip;
}
- /* LBR_CTL_ENABLE_MASK is controlled by svm_update_lbrv() */
+ /* SVM_MISC2_ENABLE_V_LBR is controlled by svm_update_lbrv() */
if (!nested_vmcb_needs_vls_intercept(svm))
- vmcb02->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+ vmcb02->control.misc_ctl2 |= SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE;
if (guest_cpu_cap_has(vcpu, X86_FEATURE_PAUSEFILTER))
pause_count12 = svm->nested.ctl.pause_filter_count;
@@ -1789,8 +1789,8 @@ static void nested_copy_vmcb_cache_to_control(struct vmcb_control_area *dst,
dst->event_inj = from->event_inj;
dst->event_inj_err = from->event_inj_err;
dst->next_rip = from->next_rip;
- dst->nested_cr3 = from->nested_cr3;
- dst->virt_ext = from->virt_ext;
+ dst->nested_cr3 = from->nested_cr3;
+ dst->misc_ctl2 = from->misc_ctl2;
dst->pause_filter_count = from->pause_filter_count;
dst->pause_filter_thresh = from->pause_filter_thresh;
/* 'clean' and 'hv_enlightenments' are not changed by KVM */
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 7bc8b72fe5ad8..94e14badddfa2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -710,7 +710,7 @@ void *svm_alloc_permissions_map(unsigned long size, gfp_t gfp_mask)
static void svm_recalc_lbr_msr_intercepts(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- bool intercept = !(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK);
+ bool intercept = !(svm->vmcb->control.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR);
if (intercept == svm->lbr_msrs_intercepted)
return;
@@ -843,7 +843,7 @@ static void svm_recalc_msr_intercepts(struct kvm_vcpu *vcpu)
static void __svm_enable_lbrv(struct kvm_vcpu *vcpu)
{
- to_svm(vcpu)->vmcb->control.virt_ext |= LBR_CTL_ENABLE_MASK;
+ to_svm(vcpu)->vmcb->control.misc_ctl2 |= SVM_MISC2_ENABLE_V_LBR;
}
void svm_enable_lbrv(struct kvm_vcpu *vcpu)
@@ -855,16 +855,16 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu)
static void __svm_disable_lbrv(struct kvm_vcpu *vcpu)
{
KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
- to_svm(vcpu)->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
+ to_svm(vcpu)->vmcb->control.misc_ctl2 &= ~SVM_MISC2_ENABLE_V_LBR;
}
void svm_update_lbrv(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- bool current_enable_lbrv = svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK;
+ bool current_enable_lbrv = svm->vmcb->control.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR;
bool enable_lbrv = (svm->vmcb->save.dbgctl & DEBUGCTLMSR_LBR) ||
(is_guest_mode(vcpu) && guest_cpu_cap_has(vcpu, X86_FEATURE_LBRV) &&
- (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK));
+ (svm->nested.ctl.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR));
if (enable_lbrv && !current_enable_lbrv)
__svm_enable_lbrv(vcpu);
@@ -1023,7 +1023,7 @@ static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu)
}
/*
- * No need to toggle VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK here, it is
+ * No need to toggle SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE here, it is
* always set if vls is enabled. If the intercepts are set, the bit is
* meaningless anyway.
*/
@@ -1191,7 +1191,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
}
if (vls)
- svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+ svm->vmcb->control.misc_ctl2 |= SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE;
if (vcpu->kvm->arch.bus_lock_detection_enabled)
svm_set_intercept(svm, INTERCEPT_BUSLOCK);
@@ -3368,7 +3368,7 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
pr_err("%-20s%016llx\n", "ghcb:", control->ghcb_gpa);
pr_err("%-20s%08x\n", "event_inj:", control->event_inj);
pr_err("%-20s%08x\n", "event_inj_err:", control->event_inj_err);
- pr_err("%-20s%lld\n", "virt_ext:", control->virt_ext);
+ pr_err("%-20s%lld\n", "misc_ctl2:", control->misc_ctl2);
pr_err("%-20s%016llx\n", "next_rip:", control->next_rip);
pr_err("%-20s%016llx\n", "avic_backing_page:", control->avic_backing_page);
pr_err("%-20s%016llx\n", "avic_logical_id:", control->avic_logical_id);
@@ -4363,7 +4363,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
* VM-Exit), as running with the host's DEBUGCTL can negatively affect
* guest state and can even be fatal, e.g. due to Bus Lock Detect.
*/
- if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
+ if (!(svm->vmcb->control.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR) &&
vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
update_debugctlmsr(svm->vmcb->save.dbgctl);
@@ -4394,7 +4394,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
- if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
+ if (!(svm->vmcb->control.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR) &&
vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
update_debugctlmsr(vcpu->arch.host_debugctl);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index f66e5c8565aad..304328c33e960 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -172,7 +172,7 @@ struct vmcb_ctrl_area_cached {
u32 event_inj_err;
u64 next_rip;
u64 nested_cr3;
- u64 virt_ext;
+ u64 misc_ctl2;
u32 clean;
u64 bus_lock_rip;
union {
diff --git a/tools/testing/selftests/kvm/include/x86/svm.h b/tools/testing/selftests/kvm/include/x86/svm.h
index d81d8a9f5bfb6..c8539166270ea 100644
--- a/tools/testing/selftests/kvm/include/x86/svm.h
+++ b/tools/testing/selftests/kvm/include/x86/svm.h
@@ -103,7 +103,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u32 event_inj;
u32 event_inj_err;
u64 nested_cr3;
- u64 virt_ext;
+ u64 misc_ctl2;
u32 clean;
u32 reserved_5;
u64 next_rip;
@@ -155,9 +155,6 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define AVIC_ENABLE_SHIFT 31
#define AVIC_ENABLE_MASK (1 << AVIC_ENABLE_SHIFT)
-#define LBR_CTL_ENABLE_MASK BIT_ULL(0)
-#define VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK BIT_ULL(1)
-
#define SVM_INTERRUPT_SHADOW_MASK 1
#define SVM_IOIO_STR_SHIFT 2
@@ -178,6 +175,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define SVM_MISC_ENABLE_NP BIT(0)
#define SVM_MISC_ENABLE_SEV BIT(1)
+#define SVM_MISC2_ENABLE_V_LBR BIT_ULL(0)
+#define SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE BIT_ULL(1)
+
struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
u16 attrib;
diff --git a/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c b/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c
index 6764a48f9d4d9..71717118d6924 100644
--- a/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c
@@ -79,8 +79,8 @@ static void l1_guest_code(struct svm_test_data *svm)
svm->vmcb->control.intercept |= (BIT_ULL(INTERCEPT_VMSAVE) |
BIT_ULL(INTERCEPT_VMLOAD));
- /* ..VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK cleared.. */
- svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+ /* ..SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE cleared.. */
+ svm->vmcb->control.misc_ctl2 &= ~SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE;
svm->vmcb->save.rip = (u64)l2_guest_code_vmsave;
run_guest(svm->vmcb, svm->vmcb_gpa);
@@ -90,8 +90,8 @@ static void l1_guest_code(struct svm_test_data *svm)
run_guest(svm->vmcb, svm->vmcb_gpa);
GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_VMLOAD);
- /* ..and VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK set */
- svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+ /* ..and SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE set */
+ svm->vmcb->control.misc_ctl2 |= SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE;
svm->vmcb->save.rip = (u64)l2_guest_code_vmsave;
run_guest(svm->vmcb, svm->vmcb_gpa);
@@ -106,20 +106,20 @@ static void l1_guest_code(struct svm_test_data *svm)
BIT_ULL(INTERCEPT_VMLOAD));
/*
- * Without VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK, the GPA will be
+ * Without SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE, the GPA will be
* interpreted as an L1 GPA, so VMCB0 should be used.
*/
svm->vmcb->save.rip = (u64)l2_guest_code_vmcb0;
- svm->vmcb->control.virt_ext &= ~VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+ svm->vmcb->control.misc_ctl2 &= ~SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE;
run_guest(svm->vmcb, svm->vmcb_gpa);
GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_VMMCALL);
/*
- * With VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK, the GPA will be interpeted as
+ * With SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE, the GPA will be interpreted as
* an L2 GPA, and translated through the NPT to VMCB1.
*/
svm->vmcb->save.rip = (u64)l2_guest_code_vmcb1;
- svm->vmcb->control.virt_ext |= VIRTUAL_VMLOAD_VMSAVE_ENABLE_MASK;
+ svm->vmcb->control.misc_ctl2 |= SVM_MISC2_ENABLE_V_VMLOAD_VMSAVE;
run_guest(svm->vmcb, svm->vmcb_gpa);
GUEST_ASSERT_EQ(svm->vmcb->control.exit_code, SVM_EXIT_VMMCALL);
diff --git a/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
index bf16abb1152e0..ff99438824d3a 100644
--- a/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
+++ b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
@@ -69,9 +69,9 @@ static void l1_guest_code(struct svm_test_data *svm, bool nested_lbrv)
&l2_guest_stack[L2_GUEST_STACK_SIZE]);
if (nested_lbrv)
- vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+ vmcb->control.misc_ctl2 = SVM_MISC2_ENABLE_V_LBR;
else
- vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
+ vmcb->control.misc_ctl2 &= ~SVM_MISC2_ENABLE_V_LBR;
run_guest(vmcb, svm->vmcb_gpa);
GUEST_ASSERT(svm->vmcb->control.exit_code == SVM_EXIT_VMMCALL);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 20/26] KVM: nSVM: Cache all used fields from VMCB12
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (18 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 19/26] KVM: SVM: Rename vmcb->virt_ext to vmcb->misc_ctl2 Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 21/26] KVM: nSVM: Restrict mapping vmcb12 on nested VMRUN Yosry Ahmed
` (6 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
Currently, most fields used from VMCB12 are cached in
svm->nested.{ctl/save}. This is mainly to avoid TOC-TOU bugs. However,
for the save area, only the fields used in the consistency checks (i.e.
nested_vmcb_check_save()) are cached. Other fields are read
directly from guest memory in nested_vmcb02_prepare_save().
While probably benign, this still makes it possible for TOC-TOU bugs to
happen. For example, RAX, RSP, and RIP are read twice, once to store in
VMCB02, and once to store in vcpu->arch.regs. It is possible for the
guest to modify the value between both reads, potentially causing nasty
bugs.
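To make the hazard concrete, a rough sketch of the double-read pattern being
removed (illustrative only, not the actual code order; both reads target the
guest-owned vmcb12 mapping, so another L1 vCPU can change the value in
between):

        vmcb02->save.rip = vmcb12->save.rip;    /* 1st read from guest memory */
        /* ... another L1 vCPU rewrites vmcb12->save.rip here ... */
        kvm_rip_write(vcpu, vmcb12->save.rip);  /* 2nd read can see a different value */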
Harden against such bugs by caching all the needed fields in
svm->nested.save, and keep all accesses to the VMCB12 strictly in
nested_svm_vmrun() for caching and early error injection. Following
changes will further limit access to the VMCB12 in the nested VMRUN
path.
Introduce vmcb12_is_dirty() to use with the cached control fields
instead of vmcb_is_dirty(), similar to vmcb12_is_intercept().
Opportunistically order the copies in __nested_copy_vmcb_save_to_cache()
by the order in which the fields are defined in struct vmcb_save_area.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 118 ++++++++++++++++++++++----------------
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/svm/svm.h | 27 ++++++++-
3 files changed, 94 insertions(+), 53 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5994e309831d0..f89040a467d4a 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -518,11 +518,11 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->asid = from->asid;
to->msrpm_base_pa &= ~0x0fffULL;
to->iopm_base_pa &= ~0x0fffULL;
+ to->clean = from->clean;
#ifdef CONFIG_KVM_HYPERV
/* Hyper-V extensions (Enlightened VMCB) */
if (kvm_hv_hypercall_enabled(vcpu)) {
- to->clean = from->clean;
memcpy(&to->hv_enlightenments, &from->hv_enlightenments,
sizeof(to->hv_enlightenments));
}
@@ -538,19 +538,34 @@ void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
static void __nested_copy_vmcb_save_to_cache(struct vmcb_save_area_cached *to,
struct vmcb_save_area *from)
{
- /*
- * Copy only fields that are validated, as we need them
- * to avoid TOC/TOU races.
- */
+ to->es = from->es;
to->cs = from->cs;
+ to->ss = from->ss;
+ to->ds = from->ds;
+ to->gdtr = from->gdtr;
+ to->idtr = from->idtr;
+
+ to->cpl = from->cpl;
to->efer = from->efer;
- to->cr0 = from->cr0;
- to->cr3 = from->cr3;
to->cr4 = from->cr4;
-
- to->dr6 = from->dr6;
+ to->cr3 = from->cr3;
+ to->cr0 = from->cr0;
to->dr7 = from->dr7;
+ to->dr6 = from->dr6;
+
+ to->rflags = from->rflags;
+ to->rip = from->rip;
+ to->rsp = from->rsp;
+
+ to->s_cet = from->s_cet;
+ to->ssp = from->ssp;
+ to->isst_addr = from->isst_addr;
+
+ to->rax = from->rax;
+ to->cr2 = from->cr2;
+
+ svm_copy_lbrs(to, from);
}
void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
@@ -692,8 +707,10 @@ static bool nested_vmcb12_has_lbrv(struct kvm_vcpu *vcpu)
(to_svm(vcpu)->nested.ctl.misc_ctl2 & SVM_MISC2_ENABLE_V_LBR);
}
-static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
+static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
{
+ struct vmcb_ctrl_area_cached *control = &svm->nested.ctl;
+ struct vmcb_save_area_cached *save = &svm->nested.save;
bool new_vmcb12 = false;
struct vmcb *vmcb01 = svm->vmcb01.ptr;
struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
@@ -709,48 +726,48 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
svm->nested.force_msr_bitmap_recalc = true;
}
- if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_SEG))) {
- vmcb02->save.es = vmcb12->save.es;
- vmcb02->save.cs = vmcb12->save.cs;
- vmcb02->save.ss = vmcb12->save.ss;
- vmcb02->save.ds = vmcb12->save.ds;
- vmcb02->save.cpl = vmcb12->save.cpl;
+ if (unlikely(new_vmcb12 || vmcb12_is_dirty(control, VMCB_SEG))) {
+ vmcb02->save.es = save->es;
+ vmcb02->save.cs = save->cs;
+ vmcb02->save.ss = save->ss;
+ vmcb02->save.ds = save->ds;
+ vmcb02->save.cpl = save->cpl;
vmcb_mark_dirty(vmcb02, VMCB_SEG);
}
- if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_DT))) {
- vmcb02->save.gdtr = vmcb12->save.gdtr;
- vmcb02->save.idtr = vmcb12->save.idtr;
+ if (unlikely(new_vmcb12 || vmcb12_is_dirty(control, VMCB_DT))) {
+ vmcb02->save.gdtr = save->gdtr;
+ vmcb02->save.idtr = save->idtr;
vmcb_mark_dirty(vmcb02, VMCB_DT);
}
if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK) &&
- (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_CET)))) {
- vmcb02->save.s_cet = vmcb12->save.s_cet;
- vmcb02->save.isst_addr = vmcb12->save.isst_addr;
- vmcb02->save.ssp = vmcb12->save.ssp;
+ (unlikely(new_vmcb12 || vmcb12_is_dirty(control, VMCB_CET)))) {
+ vmcb02->save.s_cet = save->s_cet;
+ vmcb02->save.isst_addr = save->isst_addr;
+ vmcb02->save.ssp = save->ssp;
vmcb_mark_dirty(vmcb02, VMCB_CET);
}
- kvm_set_rflags(vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
+ kvm_set_rflags(vcpu, save->rflags | X86_EFLAGS_FIXED);
svm_set_efer(vcpu, svm->nested.save.efer);
svm_set_cr0(vcpu, svm->nested.save.cr0);
svm_set_cr4(vcpu, svm->nested.save.cr4);
- svm->vcpu.arch.cr2 = vmcb12->save.cr2;
+ svm->vcpu.arch.cr2 = save->cr2;
- kvm_rax_write(vcpu, vmcb12->save.rax);
- kvm_rsp_write(vcpu, vmcb12->save.rsp);
- kvm_rip_write(vcpu, vmcb12->save.rip);
+ kvm_rax_write(vcpu, save->rax);
+ kvm_rsp_write(vcpu, save->rsp);
+ kvm_rip_write(vcpu, save->rip);
/* In case we don't even reach vcpu_run, the fields are not updated */
- vmcb02->save.rax = vmcb12->save.rax;
- vmcb02->save.rsp = vmcb12->save.rsp;
- vmcb02->save.rip = vmcb12->save.rip;
+ vmcb02->save.rax = save->rax;
+ vmcb02->save.rsp = save->rsp;
+ vmcb02->save.rip = save->rip;
- if (unlikely(new_vmcb12 || vmcb_is_dirty(vmcb12, VMCB_DR))) {
+ if (unlikely(new_vmcb12 || vmcb12_is_dirty(control, VMCB_DR))) {
vmcb02->save.dr7 = svm->nested.save.dr7 | DR7_FIXED_1;
svm->vcpu.arch.dr6 = svm->nested.save.dr6 | DR6_ACTIVE_LOW;
vmcb_mark_dirty(vmcb02, VMCB_DR);
@@ -761,7 +778,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm, struct vmcb *vmcb12
* Reserved bits of DEBUGCTL are ignored. Be consistent with
* svm_set_msr's definition of reserved bits.
*/
- svm_copy_lbrs(&vmcb02->save, &vmcb12->save);
+ svm_copy_lbrs(&vmcb02->save, save);
vmcb02->save.dbgctl &= ~DEBUGCTL_RESERVED_BITS;
} else {
svm_copy_lbrs(&vmcb02->save, &vmcb01->save);
@@ -984,28 +1001,29 @@ static void nested_svm_copy_common_state(struct vmcb *from_vmcb, struct vmcb *to
to_vmcb->save.spec_ctrl = from_vmcb->save.spec_ctrl;
}
-int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
- struct vmcb *vmcb12, bool from_vmrun)
+int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa, bool from_vmrun)
{
struct vcpu_svm *svm = to_svm(vcpu);
+ struct vmcb_ctrl_area_cached *control = &svm->nested.ctl;
+ struct vmcb_save_area_cached *save = &svm->nested.save;
int ret;
trace_kvm_nested_vmenter(svm->vmcb->save.rip,
vmcb12_gpa,
- vmcb12->save.rip,
- vmcb12->control.int_ctl,
- vmcb12->control.event_inj,
- vmcb12->control.misc_ctl,
- vmcb12->control.nested_cr3,
- vmcb12->save.cr3,
+ save->rip,
+ control->int_ctl,
+ control->event_inj,
+ control->misc_ctl,
+ control->nested_cr3,
+ save->cr3,
KVM_ISA_SVM);
- trace_kvm_nested_intercepts(vmcb12->control.intercepts[INTERCEPT_CR] & 0xffff,
- vmcb12->control.intercepts[INTERCEPT_CR] >> 16,
- vmcb12->control.intercepts[INTERCEPT_EXCEPTION],
- vmcb12->control.intercepts[INTERCEPT_WORD3],
- vmcb12->control.intercepts[INTERCEPT_WORD4],
- vmcb12->control.intercepts[INTERCEPT_WORD5]);
+ trace_kvm_nested_intercepts(control->intercepts[INTERCEPT_CR] & 0xffff,
+ control->intercepts[INTERCEPT_CR] >> 16,
+ control->intercepts[INTERCEPT_EXCEPTION],
+ control->intercepts[INTERCEPT_WORD3],
+ control->intercepts[INTERCEPT_WORD4],
+ control->intercepts[INTERCEPT_WORD5]);
svm->nested.vmcb12_gpa = vmcb12_gpa;
@@ -1015,8 +1033,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
nested_svm_copy_common_state(svm->vmcb01.ptr, svm->nested.vmcb02.ptr);
svm_switch_vmcb(svm, &svm->nested.vmcb02);
- nested_vmcb02_prepare_control(svm, vmcb12->save.rip, vmcb12->save.cs.base);
- nested_vmcb02_prepare_save(svm, vmcb12);
+ nested_vmcb02_prepare_control(svm, save->rip, save->cs.base);
+ nested_vmcb02_prepare_save(svm);
ret = nested_svm_load_cr3(&svm->vcpu, svm->nested.save.cr3,
nested_npt_enabled(svm), from_vmrun);
@@ -1104,7 +1122,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
svm->nested.nested_run_pending = 1;
- if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
+ if (enter_svm_guest_mode(vcpu, vmcb12_gpa, true))
goto out_exit_err;
if (nested_svm_merge_msrpm(vcpu))
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 94e14badddfa2..19112ec48c0f7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4885,7 +4885,7 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
vmcb12 = map.hva;
nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
- ret = enter_svm_guest_mode(vcpu, smram64->svm_guest_vmcb_gpa, vmcb12, false);
+ ret = enter_svm_guest_mode(vcpu, smram64->svm_guest_vmcb_gpa, false);
if (ret)
goto unmap_save;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 304328c33e960..388aaa5d63d29 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -140,13 +140,32 @@ struct kvm_vmcb_info {
};
struct vmcb_save_area_cached {
+ struct vmcb_seg es;
struct vmcb_seg cs;
+ struct vmcb_seg ss;
+ struct vmcb_seg ds;
+ struct vmcb_seg gdtr;
+ struct vmcb_seg idtr;
+ u8 cpl;
u64 efer;
u64 cr4;
u64 cr3;
u64 cr0;
u64 dr7;
u64 dr6;
+ u64 rflags;
+ u64 rip;
+ u64 rsp;
+ u64 s_cet;
+ u64 ssp;
+ u64 isst_addr;
+ u64 rax;
+ u64 cr2;
+ u64 dbgctl;
+ u64 br_from;
+ u64 br_to;
+ u64 last_excp_from;
+ u64 last_excp_to;
};
struct vmcb_ctrl_area_cached {
@@ -421,6 +440,11 @@ static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
return !test_bit(bit, (unsigned long *)&vmcb->control.clean);
}
+static inline bool vmcb12_is_dirty(struct vmcb_ctrl_area_cached *control, int bit)
+{
+ return !test_bit(bit, (unsigned long *)&control->clean);
+}
+
static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
{
return container_of(vcpu, struct vcpu_svm, vcpu);
@@ -785,8 +809,7 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
int __init nested_svm_init_msrpm_merge_offsets(void);
-int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
- u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
+int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, bool from_vmrun);
void svm_leave_nested(struct kvm_vcpu *vcpu);
void svm_free_nested(struct vcpu_svm *svm);
int svm_allocate_nested(struct vcpu_svm *svm);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 21/26] KVM: nSVM: Restrict mapping vmcb12 on nested VMRUN
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (19 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 20/26] KVM: nSVM: Cache all used fields from VMCB12 Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 22/26] KVM: nSVM: Use PAGE_MASK to drop lower bits of bitmap GPAs from vmcb12 Yosry Ahmed
` (5 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
All accesses to the vmcb12 in the guest memory on nested VMRUN are
limited to nested_svm_vmrun() copying vmcb12 fields and writing them on
failed consistency checks. However, vmcb12 remains mapped throughout
nested_svm_vmrun(). Mapping and unmapping around each usage is possible,
but that makes it easy to introduce bugs where 'vmcb12' is used after
being unmapped (see the sketch below).
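The bug class in question, as a hypothetical sketch (not code that exists in
KVM today):

        struct kvm_host_map map;
        struct vmcb *vmcb12;

        if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map))
                return -EFAULT;
        vmcb12 = map.hva;
        /* ... copy and validate vmcb12 fields ... */
        kvm_vcpu_unmap(vcpu, &map);
        /* ... more VMRUN emulation ... */
        vmcb12->control.exit_code = SVM_EXIT_ERR;       /* BUG: use after unmap */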
Move reading the vmcb12, copying to cache, and consistency checks from
nested_svm_vmrun() into a new helper, nested_svm_copy_vmcb12_to_cache()
to limit the scope of the mapping.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 89 ++++++++++++++++++++++-----------------
1 file changed, 51 insertions(+), 38 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f89040a467d4a..0151354b2ef01 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1054,12 +1054,39 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa, bool from_vmrun)
return 0;
}
-int nested_svm_vmrun(struct kvm_vcpu *vcpu)
+static int nested_svm_copy_vmcb12_to_cache(struct kvm_vcpu *vcpu, u64 vmcb12_gpa)
{
struct vcpu_svm *svm = to_svm(vcpu);
- int ret;
- struct vmcb *vmcb12;
struct kvm_host_map map;
+ struct vmcb *vmcb12;
+ int r = 0;
+
+ if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map))
+ return -EFAULT;
+
+ vmcb12 = map.hva;
+ nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
+ nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
+
+ if (!nested_vmcb_check_save(vcpu, &svm->nested.save) ||
+ !nested_vmcb_check_controls(vcpu, &svm->nested.ctl)) {
+ vmcb12->control.exit_code = SVM_EXIT_ERR;
+ vmcb12->control.exit_info_1 = 0;
+ vmcb12->control.exit_info_2 = 0;
+ vmcb12->control.event_inj = 0;
+ vmcb12->control.event_inj_err = 0;
+ svm_set_gif(svm, false);
+ r = -EINVAL;
+ }
+
+ kvm_vcpu_unmap(vcpu, &map);
+ return r;
+}
+
+int nested_svm_vmrun(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ int ret, err;
u64 vmcb12_gpa;
struct vmcb *vmcb01 = svm->vmcb01.ptr;
@@ -1080,32 +1107,23 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
return ret;
}
+ if (WARN_ON_ONCE(!svm->nested.initialized))
+ return -EINVAL;
+
vmcb12_gpa = svm->vmcb->save.rax;
- if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map)) {
+ err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
+ if (err == -EFAULT) {
kvm_inject_gp(vcpu, 0);
return 1;
}
+ /*
+ * Advance RIP if #GP or #UD are not injected, but otherwise stop if
+ * copying and checking vmcb12 failed.
+ */
ret = kvm_skip_emulated_instruction(vcpu);
-
- vmcb12 = map.hva;
-
- if (WARN_ON_ONCE(!svm->nested.initialized))
- return -EINVAL;
-
- nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
- nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
-
- if (!nested_vmcb_check_save(vcpu, &svm->nested.save) ||
- !nested_vmcb_check_controls(vcpu, &svm->nested.ctl)) {
- vmcb12->control.exit_code = SVM_EXIT_ERR;
- vmcb12->control.exit_info_1 = 0;
- vmcb12->control.exit_info_2 = 0;
- vmcb12->control.event_inj = 0;
- vmcb12->control.event_inj_err = 0;
- svm_set_gif(svm, false);
- goto out;
- }
+ if (err)
+ return ret;
/*
* Since vmcb01 is not in use, we can use it to store some of the L1
@@ -1122,23 +1140,18 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
svm->nested.nested_run_pending = 1;
- if (enter_svm_guest_mode(vcpu, vmcb12_gpa, true))
- goto out_exit_err;
-
- if (nested_svm_merge_msrpm(vcpu))
- goto out;
-
-out_exit_err:
- svm->nested.nested_run_pending = 0;
-
- svm->vmcb->control.exit_code = SVM_EXIT_ERR;
- svm->vmcb->control.exit_info_1 = 0;
- svm->vmcb->control.exit_info_2 = 0;
+ if (enter_svm_guest_mode(vcpu, vmcb12_gpa, true) ||
+ !nested_svm_merge_msrpm(vcpu)) {
+ svm->nested.nested_run_pending = 0;
+ svm->nmi_l1_to_l2 = false;
+ svm->soft_int_injected = false;
- nested_svm_vmexit(svm);
+ svm->vmcb->control.exit_code = SVM_EXIT_ERR;
+ svm->vmcb->control.exit_info_1 = 0;
+ svm->vmcb->control.exit_info_2 = 0;
-out:
- kvm_vcpu_unmap(vcpu, &map);
+ nested_svm_vmexit(svm);
+ }
return ret;
}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 22/26] KVM: nSVM: Use PAGE_MASK to drop lower bits of bitmap GPAs from vmcb12
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (20 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 21/26] KVM: nSVM: Restrict mapping vmcb12 on nested VMRUN Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 23/26] KVM: nSVM: Sanitize TLB_CONTROL field when copying " Yosry Ahmed
` (4 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
Use PAGE_MASK to drop the lower bits from IOPM_BASE_PA and MSRPM_BASE_PA
while copying them instead of dropping the bits afterward with a
hardcoded mask.
No functional change intended.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 0151354b2ef01..2d0c39fad2724 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -493,8 +493,8 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
if (!guest_cpu_cap_has(vcpu, X86_FEATURE_NPT))
to->misc_ctl &= ~SVM_MISC_ENABLE_NP;
- to->iopm_base_pa = from->iopm_base_pa;
- to->msrpm_base_pa = from->msrpm_base_pa;
+ to->iopm_base_pa = from->iopm_base_pa & PAGE_MASK;
+ to->msrpm_base_pa = from->msrpm_base_pa & PAGE_MASK;
to->tsc_offset = from->tsc_offset;
to->tlb_ctl = from->tlb_ctl;
to->erap_ctl = from->erap_ctl;
@@ -516,8 +516,6 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
/* Copy asid here because nested_vmcb_check_controls() will check it */
to->asid = from->asid;
- to->msrpm_base_pa &= ~0x0fffULL;
- to->iopm_base_pa &= ~0x0fffULL;
to->clean = from->clean;
#ifdef CONFIG_KVM_HYPERV
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 23/26] KVM: nSVM: Sanitize TLB_CONTROL field when copying from vmcb12
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (21 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 22/26] KVM: nSVM: Use PAGE_MASK to drop lower bits of bitmap GPAs from vmcb12 Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 24/26] KVM: nSVM: Sanitize INT/EVENTINJ fields " Yosry Ahmed
` (3 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
The APM defines possible values for TLB_CONTROL as 0, 1, 3, and 7 -- all
of which are always allowed for KVM guests as KVM always supports
X86_FEATURE_FLUSHBYASID. Only copy bits 0 to 2 from vmcb12's
TLB_CONTROL, such that no unhandled or reserved bits end up in vmcb02.
Note that TLB_CONTROL in vmcb12 is currently ignored by KVM, as it nukes
the TLB on nested transitions anyway (see
nested_svm_transition_tlb_flush()). However, such sanitization will be
needed once the TODOs there are addressed, and it's minimal churn to add
it now.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/include/asm/svm.h | 2 ++
arch/x86/kvm/svm/nested.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index c169256c415fb..16cf4f435aebd 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -182,6 +182,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define TLB_CONTROL_FLUSH_ASID 3
#define TLB_CONTROL_FLUSH_ASID_LOCAL 7
+#define TLB_CONTROL_MASK GENMASK(2, 0)
+
#define ERAP_CONTROL_ALLOW_LARGER_RAP BIT(0)
#define ERAP_CONTROL_CLEAR_RAP BIT(1)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 2d0c39fad2724..97439d0f5c49c 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -496,7 +496,7 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->iopm_base_pa = from->iopm_base_pa & PAGE_MASK;
to->msrpm_base_pa = from->msrpm_base_pa & PAGE_MASK;
to->tsc_offset = from->tsc_offset;
- to->tlb_ctl = from->tlb_ctl;
+ to->tlb_ctl = from->tlb_ctl & TLB_CONTROL_MASK;
to->erap_ctl = from->erap_ctl;
to->int_ctl = from->int_ctl;
to->int_vector = from->int_vector;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 24/26] KVM: nSVM: Sanitize INT/EVENTINJ fields when copying from vmcb12
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (22 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 23/26] KVM: nSVM: Sanitize TLB_CONTROL field when copying " Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 25/26] KVM: nSVM: Only copy SVM_MISC_ENABLE_NP from VMCB01's misc_ctl Yosry Ahmed
` (2 subsequent siblings)
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, Jim Mattson
Make sure all fields used from vmcb12 in creating the vmcb02 are
sanitized, such that no unhandled or reserved bits end up in the vmcb02.
The following control fields are read from vmcb12 and have bits that are
either reserved or not handled/advertised by KVM: tlb_ctl, int_ctl,
int_state, int_vector, event_inj, misc_ctl, and misc_ctl2.
The following fields do not require any extra sanitizing:
- tlb_ctl: already being sanitized.
- int_ctl: bits from vmcb12 are copied bit-by-bit as needed.
- misc_ctl: only used in consistency checks (particularly NP_ENABLE).
- misc_ctl2: bits from vmcb12 are copied bit-by-bit as needed.
For the remaining fields (int_vector, int_state, and event_inj), make
sure only defined bits are copied from L1's vmcb12 into KVM's cache by
defining appropriate masks where needed.
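For reference, a sketch of what survives the new EVENTINJ mask (assuming the
existing SVM_EVTINJ_* layout; everything else is treated as reserved and
cleared before reaching KVM's cache):

        /*
         * Kept by the mask: vector (bits 7:0), type (bits 10:8),
         * deliver-error-code (bit 11), and valid (bit 31).
         * Cleared: bits 30:12, which are reserved in the 32-bit field
         * (the error code itself lives in the separate event_inj_err field).
         */
        u32 sanitized_event_inj = event_inj & ~SVM_EVTINJ_RESERVED_BITS;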
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/include/asm/svm.h | 5 +++++
arch/x86/kvm/svm/nested.c | 8 ++++----
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 16cf4f435aebd..bcfeb5e7c0edf 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -224,6 +224,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define X2APIC_MODE_SHIFT 30
#define X2APIC_MODE_MASK (1 << X2APIC_MODE_SHIFT)
+#define SVM_INT_VECTOR_MASK GENMASK(7, 0)
+
#define SVM_INTERRUPT_SHADOW_MASK BIT_ULL(0)
#define SVM_GUEST_INTERRUPT_MASK BIT_ULL(1)
@@ -637,6 +639,9 @@ static inline void __unused_size_checks(void)
#define SVM_EVTINJ_VALID (1 << 31)
#define SVM_EVTINJ_VALID_ERR (1 << 11)
+#define SVM_EVTINJ_RESERVED_BITS ~(SVM_EVTINJ_VEC_MASK | SVM_EVTINJ_TYPE_MASK | \
+ SVM_EVTINJ_VALID_ERR | SVM_EVTINJ_VALID)
+
#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 97439d0f5c49c..7ae62f04667cc 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -499,18 +499,18 @@ void __nested_copy_vmcb_control_to_cache(struct kvm_vcpu *vcpu,
to->tlb_ctl = from->tlb_ctl & TLB_CONTROL_MASK;
to->erap_ctl = from->erap_ctl;
to->int_ctl = from->int_ctl;
- to->int_vector = from->int_vector;
- to->int_state = from->int_state;
+ to->int_vector = from->int_vector & SVM_INT_VECTOR_MASK;
+ to->int_state = from->int_state & SVM_INTERRUPT_SHADOW_MASK;
to->exit_code = from->exit_code;
to->exit_info_1 = from->exit_info_1;
to->exit_info_2 = from->exit_info_2;
to->exit_int_info = from->exit_int_info;
to->exit_int_info_err = from->exit_int_info_err;
- to->event_inj = from->event_inj;
+ to->event_inj = from->event_inj & ~SVM_EVTINJ_RESERVED_BITS;
to->event_inj_err = from->event_inj_err;
to->next_rip = from->next_rip;
to->nested_cr3 = from->nested_cr3;
- to->misc_ctl2 = from->misc_ctl2;
+ to->misc_ctl2 = from->misc_ctl2;
to->pause_filter_count = from->pause_filter_count;
to->pause_filter_thresh = from->pause_filter_thresh;
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 25/26] KVM: nSVM: Only copy SVM_MISC_ENABLE_NP from VMCB01's misc_ctl
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (23 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 24/26] KVM: nSVM: Sanitize INT/EVENTINJ fields " Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12 Yosry Ahmed
2026-03-05 17:08 ` [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Sean Christopherson
26 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed, Jim Mattson
The 'misc_ctl' field in VMCB02 is taken as-is from VMCB01. However, the
only bit that needs to be copied is SVM_MISC_ENABLE_NP, as all other known
bits in misc_ctl are related to SEV guests, and KVM doesn't support
nested virtualization for SEV guests.
Only copy SVM_MISC_ENABLE_NP to harden against future bugs if/when other
bits are set for L1 but should not be set for L2.
Opportunistically add a comment explaining why SVM_MISC_ENABLE_NP is
taken from VMCB01 and not VMCB12.
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
arch/x86/kvm/svm/nested.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 7ae62f04667cc..fd1fc2d67bd33 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -849,8 +849,16 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
V_NMI_BLOCKING_MASK);
}
- /* Copied from vmcb01. msrpm_base can be overwritten later. */
- vmcb02->control.misc_ctl = vmcb01->control.misc_ctl;
+ /*
+ * Copied from vmcb01. msrpm_base can be overwritten later.
+ *
+ * SVM_MISC_ENABLE_NP in vmcb12 is only used for consistency checks. If
+ * L1 enables NPTs, KVM shadows L1's NPTs and uses those to run L2. If
+ * L1 disables NPT, KVM runs L2 with the same NPTs used to run L1. For
+ * the latter, L1 runs L2 with shadow page tables that translate L2 GVAs
+ * to L1 GPAs, so the same NPTs can be used for L1 and L2.
+ */
+ vmcb02->control.misc_ctl = vmcb01->control.misc_ctl & SVM_MISC_ENABLE_NP;
vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
vmcb02->control.msrpm_base_pa = vmcb01->control.msrpm_base_pa;
vmcb_mark_dirty(vmcb02, VMCB_PERM_MAP);
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (24 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 25/26] KVM: nSVM: Only copy SVM_MISC_ENABLE_NP from VMCB01's misc_ctl Yosry Ahmed
@ 2026-03-03 0:34 ` Yosry Ahmed
2026-03-05 22:30 ` Jim Mattson
2026-03-05 17:08 ` [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Sean Christopherson
26 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 0:34 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, Yosry Ahmed
Add a test verifying that KVM correctly injects a #GP on nested VMRUN,
and triggers a shutdown on nested #VMEXIT, when the GPA of vmcb12 cannot
be mapped.
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../kvm/x86/svm_nested_invalid_vmcb12_gpa.c | 98 +++++++++++++++++++
2 files changed, 99 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86/svm_nested_invalid_vmcb12_gpa.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 36b48e766e499..f12e7c17d379d 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -110,6 +110,7 @@ TEST_GEN_PROGS_x86 += x86/state_test
TEST_GEN_PROGS_x86 += x86/vmx_preemption_timer_test
TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
+TEST_GEN_PROGS_x86 += x86/svm_nested_invalid_vmcb12_gpa
TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
TEST_GEN_PROGS_x86 += x86/svm_lbr_nested_state
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_invalid_vmcb12_gpa.c b/tools/testing/selftests/kvm/x86/svm_nested_invalid_vmcb12_gpa.c
new file mode 100644
index 0000000000000..c6d5f712120d1
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/svm_nested_invalid_vmcb12_gpa.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2026, Google LLC.
+ */
+#include "kvm_util.h"
+#include "vmx.h"
+#include "svm_util.h"
+#include "kselftest.h"
+
+
+#define L2_GUEST_STACK_SIZE 64
+
+#define SYNC_GP 101
+#define SYNC_L2_STARTED 102
+
+u64 valid_vmcb12_gpa;
+int gp_triggered;
+
+static void guest_gp_handler(struct ex_regs *regs)
+{
+ GUEST_ASSERT(!gp_triggered);
+ GUEST_SYNC(SYNC_GP);
+ gp_triggered = 1;
+ regs->rax = valid_vmcb12_gpa;
+}
+
+static void l2_guest_code(void)
+{
+ GUEST_SYNC(SYNC_L2_STARTED);
+ vmcall();
+}
+
+static void l1_guest_code(struct svm_test_data *svm, u64 invalid_vmcb12_gpa)
+{
+ unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+
+ generic_svm_setup(svm, l2_guest_code,
+ &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+ valid_vmcb12_gpa = svm->vmcb_gpa;
+
+ run_guest(svm->vmcb, invalid_vmcb12_gpa); /* #GP */
+
+ /* GP handler should jump here */
+ GUEST_ASSERT(svm->vmcb->control.exit_code == SVM_EXIT_VMMCALL);
+ GUEST_DONE();
+}
+
+int main(int argc, char *argv[])
+{
+ struct kvm_x86_state *state;
+ vm_vaddr_t nested_gva = 0;
+ struct kvm_vcpu *vcpu;
+ uint32_t maxphyaddr;
+ u64 max_legal_gpa;
+ struct kvm_vm *vm;
+ struct ucall uc;
+
+ TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
+
+ vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
+ vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler);
+
+ /*
+ * Find the max legal GPA that is not backed by a memslot (i.e. cannot
+ * be mapped by KVM).
+ */
+ maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
+ max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
+ vcpu_alloc_svm(vm, &nested_gva);
+ vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
+
+ /* VMRUN with max_legal_gpa, KVM injects a #GP */
+ vcpu_run(vcpu);
+ TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+ TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
+ TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
+
+ /*
+ * Enter L2 (with a legit vmcb12 GPA), then overwrite vmcb12 GPA with
+ * max_legal_gpa. KVM will fail to map vmcb12 on nested VM-Exit and
+ * cause a shutdown.
+ */
+ vcpu_run(vcpu);
+ TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+ TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
+ TEST_ASSERT_EQ(uc.args[1], SYNC_L2_STARTED);
+
+ state = vcpu_save_state(vcpu);
+ state->nested.hdr.svm.vmcb_pa = max_legal_gpa;
+ vcpu_load_state(vcpu, state);
+ vcpu_run(vcpu);
+ TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_SHUTDOWN);
+
+ kvm_x86_state_cleanup(state);
+ kvm_vm_free(vm);
+ return 0;
+}
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs
2026-03-03 0:33 ` [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs Yosry Ahmed
@ 2026-03-03 16:37 ` Sean Christopherson
2026-03-03 19:14 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Sean Christopherson @ 2026-03-03 16:37 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, kvm, linux-kernel, stable, Jim Mattson
On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> MSR_IA32_DEBUGCTLMSR and LBR MSRs are currently not enumerated by
> KVM_GET_MSR_INDEX_LIST, and LBR MSRs cannot be set with KVM_SET_MSRS. So
> save/restore is completely broken.
>
> Fix it by adding the MSRs to msrs_to_save_base, and allowing writes to
> LBR MSRs from userspace only (as they are read-only MSRs). Additionally,
> to correctly restore L1's LBRs while L2 is running, make sure the LBRs
> are copied from the captured VMCB01 save area in svm_copy_vmrun_state().
>
> For VMX, this also adds save/restore handling of KVM_GET_MSR_INDEX_LIST.
> For unsupported MSR_IA32_LAST* MSRs, kvm_do_msr_access() should 0 these
> MSRs on userspace reads, and ignore KVM_MSR_RET_UNSUPPORTED on userspace
> writes.
>
> Fixes: 24e09cbf480a ("KVM: SVM: enable LBR virtualization")
> Cc: stable@vger.kernel.org
> Reported-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> ---
> arch/x86/kvm/svm/nested.c | 5 +++++
> arch/x86/kvm/svm/svm.c | 24 ++++++++++++++++++++++++
> arch/x86/kvm/x86.c | 3 +++
> 3 files changed, 32 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index f7d5db0af69ac..3bf758c9cb85c 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1100,6 +1100,11 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
> to_save->isst_addr = from_save->isst_addr;
> to_save->ssp = from_save->ssp;
> }
> +
> + if (lbrv) {
Tomato, tomato, but maybe make this
if (kvm_cpu_cap_has(X86_FEATURE_LBRV)) {
to capture that this requires nested support. I can't imagine we'll ever disable
X86_FEATURE_LBRV when nested=1 && lbrv=1, but I don't see any harm in being
paranoid in this case.
> + svm_copy_lbrs(to_save, from_save);
> + to_save->dbgctl &= ~DEBUGCTL_RESERVED_BITS;
> + }
> }
>
> void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index f52e588317fcf..cb53174583a26 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3071,6 +3071,30 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> svm_update_lbrv(vcpu);
> break;
> + case MSR_IA32_LASTBRANCHFROMIP:
Shouldn't these be gated on lbrv? If LBRV is truly unsupported, KVM would be
writing "undefined" fields and clearing "unknown" clean bits.
Specifically, if we do:
if (!lbrv)
return KVM_MSR_RET_UNSUPPORTED;
then kvm_do_msr_access() will allow writes of '0' from the host, via this code:
if (host_initiated && !*data && kvm_is_advertised_msr(msr))
return 0;
And then in the read side, do e.g.:
msr_info->data = lbrv ? svm->vmcb->save.dbgctl : 0;
to ensure KVM won't feed userspace garbage (the VMCB fields should be '0', but
there's no reason to risk that).
The changelog also needs to call out that kvm_set_msr_common() returns
KVM_MSR_RET_UNSUPPORTED for unhandled MSRs (i.e. for VMX and TDX), and that
kvm_get_msr_common() explicitly zeros the data for MSR_IA32_LASTxxx (because per
b5e2fec0ebc3 ("KVM: Ignore DEBUGCTL MSRs with no effect"), old and crusty kernels
would read the MSRs on Intel...).
So all in all (not yet tested), this? If this is the only issue in the series,
or at least in the stable@ part of the series, no need for a v8 (I've obviously
already done the fixup).
--
From: Yosry Ahmed <yosry@kernel.org>
Date: Tue, 3 Mar 2026 00:33:57 +0000
Subject: [PATCH] KVM: SVM: Add missing save/restore handling of LBR MSRs
MSR_IA32_DEBUGCTLMSR and LBR MSRs are currently not enumerated by
KVM_GET_MSR_INDEX_LIST, and LBR MSRs cannot be set with KVM_SET_MSRS. So
save/restore is completely broken.
Fix it by adding the MSRs to msrs_to_save_base, and allowing writes to
LBR MSRs from userspace only (as they are read-only MSRs) if LBR
virtualization is enabled. Additionally, to correctly restore L1's LBRs
while L2 is running, make sure the LBRs are copied from the captured
VMCB01 save area in svm_copy_vmrun_state().
Note, for VMX, this also fixes a flaw where MSR_IA32_DEBUGCTLMSR isn't
reported as an MSR to save/restore.
Note #2, over-reporting MSR_IA32_LASTxxx on Intel is ok, as KVM already
handles unsupported reads and writes thanks to commit b5e2fec0ebc3 ("KVM:
Ignore DEBUGCTL MSRs with no effect") (kvm_do_msr_access() will morph the
unsupported userspace write into a nop).
Fixes: 24e09cbf480a ("KVM: SVM: enable LBR virtualization")
Cc: stable@vger.kernel.org
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
Link: https://patch.msgid.link/20260303003421.2185681-4-yosry@kernel.org
[sean: guard with lbrv checks, massage changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/nested.c | 5 +++++
arch/x86/kvm/svm/svm.c | 44 +++++++++++++++++++++++++++++++++------
arch/x86/kvm/x86.c | 3 +++
3 files changed, 46 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d0faa3e2dc97..d142761ad517 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1098,6 +1098,11 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
to_save->isst_addr = from_save->isst_addr;
to_save->ssp = from_save->ssp;
}
+
+ if (kvm_cpu_cap_has(X86_FEATURE_LBRV)) {
+ svm_copy_lbrs(to_save, from_save);
+ to_save->dbgctl &= ~DEBUGCTL_RESERVED_BITS;
+ }
}
void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4649cef966f6..317c8c28443a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2788,19 +2788,19 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
msr_info->data = svm->tsc_aux;
break;
case MSR_IA32_DEBUGCTLMSR:
- msr_info->data = svm->vmcb->save.dbgctl;
+ msr_info->data = lbrv ? svm->vmcb->save.dbgctl : 0;
break;
case MSR_IA32_LASTBRANCHFROMIP:
- msr_info->data = svm->vmcb->save.br_from;
+ msr_info->data = lbrv ? svm->vmcb->save.br_from : 0;
break;
case MSR_IA32_LASTBRANCHTOIP:
- msr_info->data = svm->vmcb->save.br_to;
+ msr_info->data = lbrv ? svm->vmcb->save.br_to : 0;
break;
case MSR_IA32_LASTINTFROMIP:
- msr_info->data = svm->vmcb->save.last_excp_from;
- break;
+ msr_info->data = lbrv ? svm->vmcb->save.last_excp_from : 0;
+ break;
case MSR_IA32_LASTINTTOIP:
- msr_info->data = svm->vmcb->save.last_excp_to;
+ msr_info->data = lbrv ? svm->vmcb->save.last_excp_to : 0;
break;
case MSR_VM_HSAVE_PA:
msr_info->data = svm->nested.hsave_msr;
@@ -3075,6 +3075,38 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
svm_update_lbrv(vcpu);
break;
+ case MSR_IA32_LASTBRANCHFROMIP:
+ if (!lbrv)
+ return KVM_MSR_RET_UNSUPPORTED;
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.br_from = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
+ case MSR_IA32_LASTBRANCHTOIP:
+ if (!lbrv)
+ return KVM_MSR_RET_UNSUPPORTED;
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.br_to = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
+ case MSR_IA32_LASTINTFROMIP:
+ if (!lbrv)
+ return KVM_MSR_RET_UNSUPPORTED;
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.last_excp_from = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
+ case MSR_IA32_LASTINTTOIP:
+ if (!lbrv)
+ return KVM_MSR_RET_UNSUPPORTED;
+ if (!msr->host_initiated)
+ return 1;
+ svm->vmcb->save.last_excp_to = data;
+ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ break;
case MSR_VM_HSAVE_PA:
/*
* Old kernels did not validate the value written to
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6e87ec52fa06..64da02d1ee00 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -351,6 +351,9 @@ static const u32 msrs_to_save_base[] = {
MSR_IA32_U_CET, MSR_IA32_S_CET,
MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP, MSR_IA32_PL2_SSP,
MSR_IA32_PL3_SSP, MSR_IA32_INT_SSP_TAB,
+ MSR_IA32_DEBUGCTLMSR,
+ MSR_IA32_LASTBRANCHFROMIP, MSR_IA32_LASTBRANCHTOIP,
+ MSR_IA32_LASTINTFROMIP, MSR_IA32_LASTINTTOIP,
};
static const u32 msrs_to_save_pmu[] = {
base-commit: 149b996ea367eef39faf82ccba0659a5f3d389ea
--
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 fails on nested #VMEXIT
2026-03-03 0:34 ` [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 " Yosry Ahmed
@ 2026-03-03 16:49 ` Sean Christopherson
2026-03-03 19:15 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Sean Christopherson @ 2026-03-03 16:49 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, kvm, linux-kernel, stable
On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> If loading L1's CR3 fails on a nested #VMEXIT, nested_svm_vmexit()
> returns an error code that is ignored by most callers, and continues to
> run L1 with corrupted state. A sane recovery is not possible in this
> case, and HW behavior is to cause a shutdown. Inject a triple fault
> ,nstead, and do not return early from nested_svm_vmexit(). Continue
s/,/i
> cleaning up the vCPU state (e.g. clear pending exceptions), to handle
> the failure as gracefully as possible.
>
> From the APM:
> Upon #VMEXIT, the processor performs the following actions in
> order to return to the host execution context:
>
> ...
> if (illegal host state loaded, or exception while loading
> host state)
> shutdown
> else
> execute first host instruction following the VMRUN
Uber nit, use spaces instead of tabs in changelogs, as indenting eight chars is
almost always overkill and changelogs are more likely to be viewed in a reader
that has tab-stops set to something other than eight. E.g. using two spaces as
the margin and then manual indentation of four:
From the APM:
Upon #VMEXIT, the processor performs the following actions in order to
return to the host execution context:
...
if (illegal host state loaded, or exception while loading host state)
shutdown
else
execute first host instruction following the VMRUN
Remove the return value of nested_svm_vmexit(), which is mostly
unchecked anyway.
> Remove the return value of nested_svm_vmexit(), which is mostly
> unchecked anyway.
>
> Fixes: d82aaef9c88a ("KVM: nSVM: use nested_svm_load_cr3() on guest->host switch")
> CC: stable@vger.kernel.org
Heh, and super duper uber nit, "Cc:" is much more common than "CC:" (I'm actually
somewhat surprised checkpatch didn't complain since it's so particular about case
for other trailers).
$ git log -10000 | grep "CC:" | wc -l
38
$ git log -10000 | grep "Cc:" | wc -l
11238
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ on nested #VMEXIT
2026-03-03 0:34 ` [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ " Yosry Ahmed
@ 2026-03-03 16:50 ` Sean Christopherson
2026-03-03 19:15 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Sean Christopherson @ 2026-03-03 16:50 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, kvm, linux-kernel, stable
On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> KVM clears tracking of L1->L2 injected NMIs (i.e. nmi_l1_to_l2) and soft
> IRQs (i.e. soft_int_injected) on a synthesized #VMEXIT(INVALID) due to
> failed VMRUN. However, they are not explicitly cleared in other
> synthesized #VMEXITs.
>
> soft_int_injected is always cleared after the first VMRUN of L2 when
> completing interrupts, as any re-injection is then tracked by KVM
> (instead of purely in vmcb02).
>
> nmi_l1_to_l2 is not cleared after the first VMRUN if NMI injection
> failed, as KVM still needs to keep track that the NMI originated from L1
> to avoid blocking NMIs for L1. It is only cleared when the NMI injection
> succeeds.
>
> KVM could synthesize a #VMEXIT to L1 before successfully injecting the
> NMI into L2 (e.g. due to a #NPF on L2's NMI handler in L1's NPTs). In
> this case, nmi_l1_to_l2 will remain true, and KVM may not correctly mask
> NMIs and intercept IRET when injecting an NMI into L1.
>
> Clear both nmi_l1_to_l2 and soft_int_injected in nested_svm_vmexit() to
> capture all #VMEXITs, except those that occur due to failed consistency
> checks, as those happen before nmi_l1_to_l2 or soft_int_injected are
> set.
This last paragraph confused me a little bit. I read "to capture all #VMEXITs"
as some sort of "catching" that KVM was doing. I've got it reworded to this:
Clear both nmi_l1_to_l2 and soft_int_injected in nested_svm_vmexit(), i.e.
for all #VMEXITs except those that occur due to failed consistency checks,
as those happen before nmi_l1_to_l2 or soft_int_injected are set.
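(For reference, the code change itself boils down to clearing the two flags
in nested_svm_vmexit(), i.e. something along the lines of:

	/* Discard stale tracking of L1->L2 event injection on #VMEXIT. */
	svm->nmi_l1_to_l2 = false;
	svm->soft_int_injected = false;

modulo the exact placement within the function.)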
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity
2026-03-03 0:34 ` [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity Yosry Ahmed
@ 2026-03-03 16:56 ` Sean Christopherson
2026-03-03 19:17 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Sean Christopherson @ 2026-03-03 16:56 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, kvm, linux-kernel, stable
On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> From the APM Volume #2, 15.25.4 (24593—Rev. 3.42—March 2024):
>
> When VMRUN is executed with nested paging enabled
> (NP_ENABLE = 1), the following conditions are considered illegal
> state combinations, in addition to those mentioned in
> “Canonicalization and Consistency Checks”:
> • Any MBZ bit of nCR3 is set.
> • Any G_PAT.PA field has an unsupported type encoding or any
> reserved field in G_PAT has a nonzero value.
>
> Add the consistency check for nCR3 being a legal GPA with no MBZ bits
> set. The G_PAT.PA check was proposed separately [*].
>
> [*]https://lore.kernel.org/kvm/20260205214326.1029278-3-jmattson@google.com/
>
> Fixes: 4b16184c1cca ("KVM: SVM: Initialize Nested Nested MMU context on VMRUN")
> Cc: stable@vger.kernel.org
> Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> ---
> arch/x86/kvm/svm/nested.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 613d5e2e7c3d1..3aaa4f0bb31ab 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -348,6 +348,11 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
> if (CC(control->asid == 0))
> return false;
>
> + if (control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
> + if (CC(!kvm_vcpu_is_legal_gpa(vcpu, control->nested_cr3)))
> + return false;
Put the full if-statement in CC(), that way the tracepoint will capture the entire
clause, i.e. will help the reader understand than nested_cr3 was checked
specifically because NPT was enabled.
if (CC((control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
!kvm_vcpu_is_legal_gpa(vcpu, control->nested_cr3)))
return false;
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs
2026-03-03 16:37 ` Sean Christopherson
@ 2026-03-03 19:14 ` Yosry Ahmed
2026-03-04 0:44 ` Sean Christopherson
0 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 19:14 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, stable, Jim Mattson
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index f7d5db0af69ac..3bf758c9cb85c 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -1100,6 +1100,11 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
> > to_save->isst_addr = from_save->isst_addr;
> > to_save->ssp = from_save->ssp;
> > }
> > +
> > + if (lbrv) {
>
> Tomato, tomato, but maybe make this
>
> if (kvm_cpu_cap_has(X86_FEATURE_LBRV)) {
>
> to capture that this requires nested support. I can't imagine we'll ever disable
> X86_FEATURE_LBRV when nested=1 && lbrv=1, but I don't see any harm in being
> paranoid in this case.
Sounds good.
>
> > + svm_copy_lbrs(to_save, from_save);
> > + to_save->dbgctl &= ~DEBUGCTL_RESERVED_BITS;
> > + }
> > }
> >
> > void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index f52e588317fcf..cb53174583a26 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -3071,6 +3071,30 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> > vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> > svm_update_lbrv(vcpu);
> > break;
> > + case MSR_IA32_LASTBRANCHFROMIP:
>
> Shouldn't these be gated on lbrv? If LBRV is truly unsupported, KVM would be
> writing "undefined" fields and clearing "unknown" clean bits.
>
> Specifically, if we do:
>
> if (!lbrv)
> return KVM_MSR_RET_UNSUPPORTED;
>
> then kvm_do_msr_access() will allow writes of '0' from the host, via this code:
>
> if (host_initiated && !*data && kvm_is_advertised_msr(msr))
> return 0;
>
> And then in the read side, do e.g.:
>
> msr_info->data = lbrv ? svm->vmcb->save.dbgctl : 0;
>
> to ensure KVM won't feed userspace garbage (the VMCB fields should be '0', but
> there's no reason to risk that).
Good call.
>
> The changelog also needs to call out that kvm_set_msr_common() returns
> KVM_MSR_RET_UNSUPPORTED for unhandled MSRs (i.e. for VMX and TDX), and that
> kvm_get_msr_common() explicitly zeros the data for MSR_IA32_LASTxxx (because per
> b5e2fec0ebc3 ("KVM: Ignore DEBUGCTL MSRs with no effect"), old and crusty kernels
> would read the MSRs on Intel...).
That was captured (somehow):
For VMX, this also adds save/restore handling of KVM_GET_MSR_INDEX_LIST.
For unsupported MSR_IA32_LAST* MSRs, kvm_do_msr_access() should 0 these
MSRs on userspace reads, and ignore KVM_MSR_RET_UNSUPPORTED on userspace
writes.
>
> So all in all (not yet tested), this? If this is the only issue in the series,
> or at least in the stable@ part of the series, no need for a v8 (I've obviously
> already done the fixup).
Looks good with a minor nit below (could be a followup).
> @@ -3075,6 +3075,38 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> svm_update_lbrv(vcpu);
> break;
> + case MSR_IA32_LASTBRANCHFROMIP:
> + if (!lbrv)
> + return KVM_MSR_RET_UNSUPPORTED;
> + if (!msr->host_initiated)
> + return 1;
> + svm->vmcb->save.br_from = data;
> + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> + break;
> + case MSR_IA32_LASTBRANCHTOIP:
> + if (!lbrv)
> + return KVM_MSR_RET_UNSUPPORTED;
> + if (!msr->host_initiated)
> + return 1;
> + svm->vmcb->save.br_to = data;
> + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> + break;
> + case MSR_IA32_LASTINTFROMIP:
> + if (!lbrv)
> + return KVM_MSR_RET_UNSUPPORTED;
> + if (!msr->host_initiated)
> + return 1;
> + svm->vmcb->save.last_excp_from = data;
> + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> + break;
> + case MSR_IA32_LASTINTTOIP:
> + if (!lbrv)
> + return KVM_MSR_RET_UNSUPPORTED;
> + if (!msr->host_initiated)
> + return 1;
> + svm->vmcb->save.last_excp_to = data;
> + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> + break;
There's so much repeated code here. We can use gotos to share code,
but I am not sure if that's a strict improvement. We can also use a
helper, perhaps?
static int svm_set_lbr_msr(struct vcpu_svm *svm, struct msr_data *msr,
u64 data, u64 *field)
{
if (!lbrv)
return KVM_MSR_RET_UNSUPPORTED;
if (!msr->host_initiated)
return 1;
*field = data;
vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
return 0;
}
...
case MSR_IA32_LASTBRANCHFROMIP:
ret = svm_set_lbr_msr(svm, msr, data, &svm->vmcb->save.br_from);
if (ret)
return ret;
break;
...
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 fails on nested #VMEXIT
2026-03-03 16:49 ` Sean Christopherson
@ 2026-03-03 19:15 ` Yosry Ahmed
0 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 19:15 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, stable
On Tue, Mar 3, 2026 at 8:49 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> > If loading L1's CR3 fails on a nested #VMEXIT, nested_svm_vmexit()
> > returns an error code that is ignored by most callers, and continues to
> > run L1 with corrupted state. A sane recovery is not possible in this
> > case, and HW behavior is to cause a shutdown. Inject a triple fault
> > ,nstead, and do not return early from nested_svm_vmexit(). Continue
>
> s/,/i
Not sure how that happened lol.
>
> > cleaning up the vCPU state (e.g. clear pending exceptions), to handle
> > the failure as gracefully as possible.
> >
> > From the APM:
> > Upon #VMEXIT, the processor performs the following actions in
> > order to return to the host execution context:
> >
> > ...
> > if (illegal host state loaded, or exception while loading
> > host state)
> > shutdown
> > else
> > execute first host instruction following the VMRUN
>
> Uber nit, use spaces instead of tabs in changelogs, as indenting eight chars is
> almost always overkill and changelogs are more likely to be viewed in a reader
> that has tab-stops set to something other than eight. E.g. using two spaces as
> the margin and then manual indentation of four:
Yeah I started doing that recently but I didn't go back to change old ones.
[..]
> >
> > Fixes: d82aaef9c88a ("KVM: nSVM: use nested_svm_load_cr3() on guest->host switch")
> > CC: stable@vger.kernel.org
>
> Heh, and super duper uber nit, "Cc:" is much more common than "CC:" (I'm actually
> somewhat surprised checkpatch didn't complain since it's so particular about case
> for other trailers).
>
> $ git log -10000 | grep "CC:" | wc -l
> 38
> $ git log -10000 | grep "Cc:" | wc -l
> 11238
That was a mistake, I think I generally use Cc.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ on nested #VMEXIT
2026-03-03 16:50 ` Sean Christopherson
@ 2026-03-03 19:15 ` Yosry Ahmed
0 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 19:15 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, stable
On Tue, Mar 3, 2026 at 8:50 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> > KVM clears tracking of L1->L2 injected NMIs (i.e. nmi_l1_to_l2) and soft
> > IRQs (i.e. soft_int_injected) on a synthesized #VMEXIT(INVALID) due to
> > failed VMRUN. However, they are not explicitly cleared in other
> > synthesized #VMEXITs.
> >
> > soft_int_injected is always cleared after the first VMRUN of L2 when
> > completing interrupts, as any re-injection is then tracked by KVM
> > (instead of purely in vmcb02).
> >
> > nmi_l1_to_l2 is not cleared after the first VMRUN if NMI injection
> > failed, as KVM still needs to keep track that the NMI originated from L1
> > to avoid blocking NMIs for L1. It is only cleared when the NMI injection
> > succeeds.
> >
> > KVM could synthesize a #VMEXIT to L1 before successfully injecting the
> > NMI into L2 (e.g. due to a #NPF on L2's NMI handler in L1's NPTs). In
> > this case, nmi_l1_to_l2 will remain true, and KVM may not correctly mask
> > NMIs and intercept IRET when injecting an NMI into L1.
> >
> > Clear both nmi_l1_to_l2 and soft_int_injected in nested_svm_vmexit() to
> > capture all #VMEXITs, except those that occur due to failed consistency
> > checks, as those happen before nmi_l1_to_l2 or soft_int_injected are
> > set.
>
> This last paragraph confused me a little bit. I read "to capture all #VMEXITs"
> as some sort of "catching" that KVM was doing. I've got it reworded to this:
>
> Clear both nmi_l1_to_l2 and soft_int_injected in nested_svm_vmexit(), i.e.
> for all #VMEXITs except those that occur due to failed consistency checks,
> as those happen before nmi_l1_to_l2 or soft_int_injected are set.
LGTM.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity
2026-03-03 16:56 ` Sean Christopherson
@ 2026-03-03 19:17 ` Yosry Ahmed
0 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-03 19:17 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, stable
On Tue, Mar 3, 2026 at 8:56 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> > From the APM Volume #2, 15.25.4 (24593—Rev. 3.42—March 2024):
> >
> > When VMRUN is executed with nested paging enabled
> > (NP_ENABLE = 1), the following conditions are considered illegal
> > state combinations, in addition to those mentioned in
> > “Canonicalization and Consistency Checks”:
> > • Any MBZ bit of nCR3 is set.
> > • Any G_PAT.PA field has an unsupported type encoding or any
> > reserved field in G_PAT has a nonzero value.
> >
> > Add the consistency check for nCR3 being a legal GPA with no MBZ bits
> > set. The G_PAT.PA check was proposed separately [*].
> >
> > [*]https://lore.kernel.org/kvm/20260205214326.1029278-3-jmattson@google.com/
> >
> > Fixes: 4b16184c1cca ("KVM: SVM: Initialize Nested Nested MMU context on VMRUN")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > ---
> > arch/x86/kvm/svm/nested.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index 613d5e2e7c3d1..3aaa4f0bb31ab 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -348,6 +348,11 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
> > if (CC(control->asid == 0))
> > return false;
> >
> > + if (control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
> > + if (CC(!kvm_vcpu_is_legal_gpa(vcpu, control->nested_cr3)))
> > + return false;
>
> Put the full if-statement in CC(), that way the tracepoint will capture the entire
> clause, i.e. will help the reader understand than nested_cr3 was checked
> specifically because NPT was enabled.
I had it this way in v6 because there was another consistency check
dependent on NPT being enabled:
https://lore.kernel.org/kvm/20260224223405.3270433-21-yosry@kernel.org/.
I dropped the patch in v7 as I realized L1's CR0.PG was already being
checked, but it didn't occur to me to go back and update this. Good
catch.
>
> if (CC((control->nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
> !kvm_vcpu_is_legal_gpa(vcpu, control->nested_cr3)))
> return false;
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs
2026-03-03 19:14 ` Yosry Ahmed
@ 2026-03-04 0:44 ` Sean Christopherson
2026-03-04 0:48 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Sean Christopherson @ 2026-03-04 0:44 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, kvm, linux-kernel, stable, Jim Mattson
On Tue, Mar 03, 2026, Yosry Ahmed wrote:
> > > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > So all in all (not yet tested), this? If this is the only issue in the series,
> > or at least in the stable@ part of the series, no need for a v8 (I've obviously
> > already done the fixup).
>
> Looks good with a minor nit below (could be a followup).
>
> > @@ -3075,6 +3075,38 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> > vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> > svm_update_lbrv(vcpu);
> > break;
> > + case MSR_IA32_LASTBRANCHFROMIP:
> > + if (!lbrv)
> > + return KVM_MSR_RET_UNSUPPORTED;
> > + if (!msr->host_initiated)
> > + return 1;
> > + svm->vmcb->save.br_from = data;
> > + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> > + break;
> > + case MSR_IA32_LASTBRANCHTOIP:
> > + if (!lbrv)
> > + return KVM_MSR_RET_UNSUPPORTED;
> > + if (!msr->host_initiated)
> > + return 1;
> > + svm->vmcb->save.br_to = data;
> > + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> > + break;
> > + case MSR_IA32_LASTINTFROMIP:
> > + if (!lbrv)
> > + return KVM_MSR_RET_UNSUPPORTED;
> > + if (!msr->host_initiated)
> > + return 1;
> > + svm->vmcb->save.last_excp_from = data;
> > + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> > + break;
> > + case MSR_IA32_LASTINTTOIP:
> > + if (!lbrv)
> > + return KVM_MSR_RET_UNSUPPORTED;
> > + if (!msr->host_initiated)
> > + return 1;
> > + svm->vmcb->save.last_excp_to = data;
> > + vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> > + break;
>
> There's so much repeated code here.
Ya :-(
> We can use gotos to share code, but I am not sure if that's a strict
> improvement. We can also use a helper, perhaps?
Where's your sense of adventure?
case MSR_IA32_LASTBRANCHFROMIP:
case MSR_IA32_LASTBRANCHTOIP:
case MSR_IA32_LASTINTFROMIP:
case MSR_IA32_LASTINTTOIP:
if (!lbrv)
return KVM_MSR_RET_UNSUPPORTED;
if (!msr->host_initiated)
return 1;
*(&svm->vmcb->save.br_from + (ecx - MSR_IA32_LASTBRANCHFROMIP)) = data;
vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
break;
Jokes aside, maybe this, to dedup get() at the same time?
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 68b747a94294..f1811105e89f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2720,6 +2720,23 @@ static int svm_get_feature_msr(u32 msr, u64 *data)
return 0;
}
+static __always_inline u64 *svm_vmcb_lbr(struct vcpu_svm *svm, u32 msr)
+{
+ switch (msr) {
+ case MSR_IA32_LASTBRANCHFROMIP:
+ return &svm->vmcb->save.br_from;
+ case MSR_IA32_LASTBRANCHTOIP:
+ return &svm->vmcb->save.br_to;
+ case MSR_IA32_LASTINTFROMIP:
+ return &svm->vmcb->save.last_excp_from;
+ case MSR_IA32_LASTINTTOIP:
+ return &svm->vmcb->save.last_excp_to;
+ default:
+ break;
+ }
+ BUILD_BUG();
+}
+
static bool sev_es_prevent_msr_access(struct kvm_vcpu *vcpu,
struct msr_data *msr_info)
{
@@ -2838,16 +2855,10 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
msr_info->data = lbrv ? svm->vmcb->save.dbgctl : 0;
break;
case MSR_IA32_LASTBRANCHFROMIP:
- msr_info->data = lbrv ? svm->vmcb->save.br_from : 0;
- break;
case MSR_IA32_LASTBRANCHTOIP:
- msr_info->data = lbrv ? svm->vmcb->save.br_to : 0;
- break;
case MSR_IA32_LASTINTFROMIP:
- msr_info->data = lbrv ? svm->vmcb->save.last_excp_from : 0;
- break;
case MSR_IA32_LASTINTTOIP:
- msr_info->data = lbrv ? svm->vmcb->save.last_excp_to : 0;
+ msr_info->data = lbrv ? *svm_vmcb_lbr(svm, msr_info->index) : 0;
break;
case MSR_VM_HSAVE_PA:
msr_info->data = svm->nested.hsave_msr;
@@ -3122,35 +3133,14 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
svm_update_lbrv(vcpu);
break;
case MSR_IA32_LASTBRANCHFROMIP:
- if (!lbrv)
- return KVM_MSR_RET_UNSUPPORTED;
- if (!msr->host_initiated)
- return 1;
- svm->vmcb->save.br_from = data;
- vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
- break;
case MSR_IA32_LASTBRANCHTOIP:
- if (!lbrv)
- return KVM_MSR_RET_UNSUPPORTED;
- if (!msr->host_initiated)
- return 1;
- svm->vmcb->save.br_to = data;
- vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
- break;
case MSR_IA32_LASTINTFROMIP:
- if (!lbrv)
- return KVM_MSR_RET_UNSUPPORTED;
- if (!msr->host_initiated)
- return 1;
- svm->vmcb->save.last_excp_from = data;
- vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
- break;
case MSR_IA32_LASTINTTOIP:
if (!lbrv)
return KVM_MSR_RET_UNSUPPORTED;
if (!msr->host_initiated)
return 1;
- svm->vmcb->save.last_excp_to = data;
+ *svm_vmcb_lbr(svm, ecx) = data;
vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
break;
case MSR_VM_HSAVE_PA:
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs
2026-03-04 0:44 ` Sean Christopherson
@ 2026-03-04 0:48 ` Yosry Ahmed
0 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-04 0:48 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, kvm, linux-kernel, stable, Jim Mattson
> > There's so much repeated code here.
>
> Ya :-(
>
> > We can use gotos to share code, but I am not sure if that's a strict
> > improvement. We can also use a helper, perhaps?
>
>
> Where's your sense of adventure?
>
> case MSR_IA32_LASTBRANCHFROMIP:
> case MSR_IA32_LASTBRANCHTOIP:
> case MSR_IA32_LASTINTFROMIP:
> case MSR_IA32_LASTINTTOIP:
> if (!lbrv)
> return KVM_MSR_RET_UNSUPPORTED;
> if (!msr->host_initiated)
> return 1;
> *(&svm->vmcb->save.br_from + (ecx - MSR_IA32_LASTBRANCHFROMIP)) = data;
> vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
> break;
>
> Jokes aside, maybe this, to dedup get() at the same time?
Looks good to me!
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
` (25 preceding siblings ...)
2026-03-03 0:34 ` [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12 Yosry Ahmed
@ 2026-03-05 17:08 ` Sean Christopherson
26 siblings, 0 replies; 52+ messages in thread
From: Sean Christopherson @ 2026-03-05 17:08 UTC (permalink / raw)
To: Sean Christopherson, Yosry Ahmed; +Cc: Paolo Bonzini, kvm, linux-kernel
On Tue, 03 Mar 2026 00:33:54 +0000, Yosry Ahmed wrote:
> A group of semi-related fixes, cleanups, and hardening patches for nSVM.
> The series is essentially a group of related mini-series stitched
> together for syntactic and semantic dependencies. The first 17 patches
> (except patch 3) are all optimistically CC'd to stable as they are fixes
> or refactoring leading up to bug fixes. Although I am not sure how much
> of that will actually apply to stable trees.
>
> [...]
Applied to kvm-x86 nested, thanks!
[01/26] KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12
https://github.com/kvm-x86/linux/commit/b53ab5167a81
[02/26] KVM: SVM: Switch svm_copy_lbrs() to a macro
https://github.com/kvm-x86/linux/commit/361dbe8173c4
[03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs
https://github.com/kvm-x86/linux/commit/3700f0788da6
[04/26] KVM: selftests: Add a test for LBR save/restore (ft. nested)
https://github.com/kvm-x86/linux/commit/ac17892e5152
[05/26] KVM: nSVM: Always inject a #GP if mapping VMCB12 fails on nested VMRUN
https://github.com/kvm-x86/linux/commit/01ddcdc55e09
[06/26] KVM: nSVM: Refactor checking LBRV enablement in vmcb12 into a helper
https://github.com/kvm-x86/linux/commit/290c8d82023a
[07/26] KVM: nSVM: Refactor writing vmcb12 on nested #VMEXIT as a helper
https://github.com/kvm-x86/linux/commit/dcf3648ab714
[08/26] KVM: nSVM: Triple fault if mapping VMCB12 fails on nested #VMEXIT
https://github.com/kvm-x86/linux/commit/1b30e7551767
[09/26] KVM: nSVM: Triple fault if restore host CR3 fails on nested #VMEXIT
https://github.com/kvm-x86/linux/commit/5d291ef0585e
[10/26] KVM: nSVM: Clear GIF on nested #VMEXIT(INVALID)
https://github.com/kvm-x86/linux/commit/f85a6ce06e4a
[11/26] KVM: nSVM: Clear EVENTINJ fields in vmcb12 on nested #VMEXIT
https://github.com/kvm-x86/linux/commit/69b721a86d0d
[12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ on nested #VMEXIT
https://github.com/kvm-x86/linux/commit/8998e1d012f3
[13/26] KVM: nSVM: Drop nested_vmcb_check_{save/control}() wrappers
https://github.com/kvm-x86/linux/commit/b786e34cde42
[14/26] KVM: nSVM: Drop the non-architectural consistency check for NP_ENABLE
https://github.com/kvm-x86/linux/commit/e0b6f031d64c
[15/26] KVM: nSVM: Add missing consistency check for nCR3 validity
https://github.com/kvm-x86/linux/commit/b71138fcc362
[16/26] KVM: nSVM: Add missing consistency check for EFER, CR0, CR4, and CS
https://github.com/kvm-x86/linux/commit/96bd3e76a171
[17/26] KVM: nSVM: Add missing consistency check for EVENTINJ
https://github.com/kvm-x86/linux/commit/7e79f71bca5c
[18/26] KVM: SVM: Rename vmcb->nested_ctl to vmcb->misc_ctl
https://github.com/kvm-x86/linux/commit/1aea80dd42cf
[19/26] KVM: SVM: Rename vmcb->virt_ext to vmcb->misc_ctl2
https://github.com/kvm-x86/linux/commit/7e6eab9be220
[20/26] KVM: nSVM: Cache all used fields from VMCB12
https://github.com/kvm-x86/linux/commit/84dc9fd0354d
[21/26] KVM: nSVM: Restrict mapping vmcb12 on nested VMRUN
https://github.com/kvm-x86/linux/commit/b709087e9e54
[22/26] KVM: nSVM: Use PAGE_MASK to drop lower bits of bitmap GPAs from vmcb12
https://github.com/kvm-x86/linux/commit/a2b858051cf0
[23/26] KVM: nSVM: Sanitize TLB_CONTROL field when copying from vmcb12
https://github.com/kvm-x86/linux/commit/30a1d2fa8190
[24/26] KVM: nSVM: Sanitize INT/EVENTINJ fields when copying from vmcb12
https://github.com/kvm-x86/linux/commit/c8123e827256
[25/26] KVM: nSVM: Only copy SVM_MISC_ENABLE_NP from VMCB01's misc_ctl
https://github.com/kvm-x86/linux/commit/b6dc21d896a0
[26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
https://github.com/kvm-x86/linux/commit/5e4c6da0bb92
--
https://github.com/kvm-x86/linux/tree/next
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-03 0:34 ` [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12 Yosry Ahmed
@ 2026-03-05 22:30 ` Jim Mattson
2026-03-05 22:52 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Jim Mattson @ 2026-03-05 22:30 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> Add a test that verifies that KVM correctly injects a #GP for nested
> VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> mapped.
>
> Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> ...
> + /*
> + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> + * be mapped by KVM).
> + */
> + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> + vcpu_alloc_svm(vm, &nested_gva);
> + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> +
> + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> + vcpu_run(vcpu);
> + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
Why would this raise #GP? That isn't architected behavior.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-05 22:30 ` Jim Mattson
@ 2026-03-05 22:52 ` Yosry Ahmed
2026-03-06 0:05 ` Jim Mattson
0 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-05 22:52 UTC (permalink / raw)
To: Jim Mattson; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Mar 5, 2026 at 2:30 PM Jim Mattson <jmattson@google.com> wrote:
>
> On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
> >
> > Add a test that verifies that KVM correctly injects a #GP for nested
> > VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> > mapped.
> >
> > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > ...
> > + /*
> > + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> > + * be mapped by KVM).
> > + */
> > + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> > + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> > + vcpu_alloc_svm(vm, &nested_gva);
> > + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> > +
> > + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> > + vcpu_run(vcpu);
> > + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> > + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> > + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
>
> Why would this raise #GP? That isn't architected behavior.
I don't see architected behavior in the APM for what happens if VMRUN
fails to load the VMCB from memory. I guess it should be the same as
what would happen if a PTE is pointing to a physical address that
doesn't exist? Maybe #MC?
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-05 22:52 ` Yosry Ahmed
@ 2026-03-06 0:05 ` Jim Mattson
2026-03-06 0:40 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Jim Mattson @ 2026-03-06 0:05 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Mar 5, 2026 at 2:52 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Thu, Mar 5, 2026 at 2:30 PM Jim Mattson <jmattson@google.com> wrote:
> >
> > On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > >
> > > Add a test that verifies that KVM correctly injects a #GP for nested
> > > VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> > > mapped.
> > >
> > > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > > ...
> > > + /*
> > > + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> > > + * be mapped by KVM).
> > > + */
> > > + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> > > + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> > > + vcpu_alloc_svm(vm, &nested_gva);
> > > + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> > > +
> > > + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> > > + vcpu_run(vcpu);
> > > + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> > > + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> > > + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
> >
> > Why would this raise #GP? That isn't architected behavior.
>
> I don't see architected behavior in the APM for what happens if VMRUN
> fails to load the VMCB from memory. I guess it should be the same as
> what would happen if a PTE is pointing to a physical address that
> doesn't exist? Maybe #MC?
Reads from non-existent memory return all 1's, so I would expect a
#VMEXIT with exitcode VMEXIT_INVALID.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 0:05 ` Jim Mattson
@ 2026-03-06 0:40 ` Yosry Ahmed
2026-03-06 1:17 ` Jim Mattson
0 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-06 0:40 UTC (permalink / raw)
To: Jim Mattson; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Mar 5, 2026 at 4:05 PM Jim Mattson <jmattson@google.com> wrote:
>
> On Thu, Mar 5, 2026 at 2:52 PM Yosry Ahmed <yosry@kernel.org> wrote:
> >
> > On Thu, Mar 5, 2026 at 2:30 PM Jim Mattson <jmattson@google.com> wrote:
> > >
> > > On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > > >
> > > > Add a test that verifies that KVM correctly injects a #GP for nested
> > > > VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> > > > mapped.
> > > >
> > > > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > > > ...
> > > > + /*
> > > > + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> > > > + * be mapped by KVM).
> > > > + */
> > > > + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> > > > + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> > > > + vcpu_alloc_svm(vm, &nested_gva);
> > > > + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> > > > +
> > > > + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> > > > + vcpu_run(vcpu);
> > > > + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> > > > + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> > > > + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
> > >
> > > Why would this raise #GP? That isn't architected behavior.
> >
> > I don't see architected behavior in the APM for what happens if VMRUN
> > fails to load the VMCB from memory. I guess it should be the same as
> > what would happen if a PTE is pointing to a physical address that
> > doesn't exist? Maybe #MC?
>
> Reads from non-existent memory return all 1's
Today I learned :) Do all x86 CPUs do this?
> so I would expect a #VMEXIT with exitcode VMEXIT_INVALID.
This would actually simplify the logic, as it would be the same
failure mode as failed consistency checks. That being said, KVM has
been injecting a #GP when it fails to map vmcb12 since the beginning.
It also does the same thing for VMSAVE/VMLOAD, which seems to also not
be architectural. This would be more annoying to handle correctly
because we'll need to copy all 1's to the relevant fields in vmcb12 or
vmcb01.
Sean, what do you want us to do here?
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 0:40 ` Yosry Ahmed
@ 2026-03-06 1:17 ` Jim Mattson
2026-03-06 1:39 ` Sean Christopherson
0 siblings, 1 reply; 52+ messages in thread
From: Jim Mattson @ 2026-03-06 1:17 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Thu, Mar 5, 2026 at 4:40 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Thu, Mar 5, 2026 at 4:05 PM Jim Mattson <jmattson@google.com> wrote:
> >
> > On Thu, Mar 5, 2026 at 2:52 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > >
> > > On Thu, Mar 5, 2026 at 2:30 PM Jim Mattson <jmattson@google.com> wrote:
> > > >
> > > > On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > > > >
> > > > > Add a test that verifies that KVM correctly injects a #GP for nested
> > > > > VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> > > > > mapped.
> > > > >
> > > > > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > > > > ...
> > > > > + /*
> > > > > + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> > > > > + * be mapped by KVM).
> > > > > + */
> > > > > + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> > > > > + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> > > > > + vcpu_alloc_svm(vm, &nested_gva);
> > > > > + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> > > > > +
> > > > > + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> > > > > + vcpu_run(vcpu);
> > > > > + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> > > > > + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> > > > > + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
> > > >
> > > > Why would this raise #GP? That isn't architected behavior.
> > >
> > > I don't see architected behavior in the APM for what happens if VMRUN
> > > fails to load the VMCB from memory. I guess it should be the same as
> > > what would happen if a PTE is pointing to a physical address that
> > > doesn't exist? Maybe #MC?
> >
> > Reads from non-existent memory return all 1's
>
> Today I learned :) Do all x86 CPUs do this?
Yes. If no device claims the address, reads return all 1s. I think you
can thank pull-up resistors for that.
> > so I would expect a #VMEXIT with exitcode VMEXIT_INVALID.
>
> This would actually simplify the logic, as it would be the same
> failure mode as failed consistency checks. That being said, KVM has
> been injecting a #GP when it fails to map vmcb12 since the beginning.
KVM has never been known for its attention to detail.
> It also does the same thing for VMSAVE/VMLOAD, which seems to also not
> be architectural. This would be more annoying to handle correctly
> because we'll need to copy all 1's to the relevant fields in vmcb12 or
> vmcb01.
Or just exit to userspace with
KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. I think on the
VMX side, this sort of thing goes through kvm_handle_memory_failure().
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 1:17 ` Jim Mattson
@ 2026-03-06 1:39 ` Sean Christopherson
2026-03-06 1:46 ` Jim Mattson
` (2 more replies)
0 siblings, 3 replies; 52+ messages in thread
From: Sean Christopherson @ 2026-03-06 1:39 UTC (permalink / raw)
To: Jim Mattson; +Cc: Yosry Ahmed, Paolo Bonzini, kvm, linux-kernel
On Thu, Mar 05, 2026, Jim Mattson wrote:
> On Thu, Mar 5, 2026 at 4:40 PM Yosry Ahmed <yosry@kernel.org> wrote:
> >
> > On Thu, Mar 5, 2026 at 4:05 PM Jim Mattson <jmattson@google.com> wrote:
> > >
> > > On Thu, Mar 5, 2026 at 2:52 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > > >
> > > > On Thu, Mar 5, 2026 at 2:30 PM Jim Mattson <jmattson@google.com> wrote:
> > > > >
> > > > > On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > > > > >
> > > > > > Add a test that verifies that KVM correctly injects a #GP for nested
> > > > > > VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> > > > > > mapped.
> > > > > >
> > > > > > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > > > > > ...
> > > > > > + /*
> > > > > > + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> > > > > > + * be mapped by KVM).
> > > > > > + */
> > > > > > + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> > > > > > + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> > > > > > + vcpu_alloc_svm(vm, &nested_gva);
> > > > > > + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> > > > > > +
> > > > > > + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> > > > > > + vcpu_run(vcpu);
> > > > > > + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> > > > > > + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> > > > > > + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
> > > > >
> > > > > Why would this raise #GP? That isn't architected behavior.
> > > >
> > > > I don't see architected behavior in the APM for what happens if VMRUN
> > > > fails to load the VMCB from memory. I guess it should be the same as
> > > > what would happen if a PTE is pointing to a physical address that
> > > > doesn't exist? Maybe #MC?
> > >
> > > Reads from non-existent memory return all 1's
> >
> > Today I learned :) Do all x86 CPUs do this?
>
> Yes. If no device claims the address, reads return all 1s. I think you
> can thank pull-up resistors for that.
Ya, it's officially documented PCI behavior. Writes are dropped, reads return
all 1s.
> > > so I would expect a #VMEXIT with exitcode VMEXIT_INVALID.
> >
> > This would actually simplify the logic, as it would be the same
> > failure mode as failed consistency checks. That being said, KVM has
> > been injecting a #GP when it fails to map vmcb12 since the beginning.
>
> KVM has never been known for its attention to detail.
LOL, hey, we try. Sometimes we just forget things though :-)
7a35e515a705 ("KVM: VMX: Properly handle kvm_read/write_guest_virt*() result")
> > It also does the same thing for VMSAVE/VMLOAD, which seems to also not
> > be architectural. This would be more annoying to handle correctly
> > because we'll need to copy all 1's to the relevant fields in vmcb12 or
> > vmcb01.
>
> Or just exit to userspace with
> KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. I think on the
> VMX side, this sort of thing goes through kvm_handle_memory_failure().
Yep, I think this is the correct fixup:
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b191c6cab57d..78a542c6ddf1 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1105,10 +1105,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
vmcb12_gpa = svm->vmcb->save.rax;
err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
- if (err == -EFAULT) {
- kvm_inject_gp(vcpu, 0);
- return 1;
- }
+ if (err == -EFAULT)
+ return kvm_handle_memory_failure(vcpu, X86EMUL_UNHANDLEABLE, NULL);
/*
* Advance RIP if #GP or #UD are not injected, but otherwise stop if
^ permalink raw reply related [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 1:39 ` Sean Christopherson
@ 2026-03-06 1:46 ` Jim Mattson
2026-03-06 15:52 ` Yosry Ahmed
2026-03-06 16:09 ` Yosry Ahmed
2 siblings, 0 replies; 52+ messages in thread
From: Jim Mattson @ 2026-03-06 1:46 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Yosry Ahmed, Paolo Bonzini, kvm, linux-kernel
On Thu, Mar 5, 2026 at 5:39 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Mar 05, 2026, Jim Mattson wrote:
> > On Thu, Mar 5, 2026 at 4:40 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > >
> > > On Thu, Mar 5, 2026 at 4:05 PM Jim Mattson <jmattson@google.com> wrote:
> > > >
> > > > On Thu, Mar 5, 2026 at 2:52 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > > > >
> > > > > On Thu, Mar 5, 2026 at 2:30 PM Jim Mattson <jmattson@google.com> wrote:
> > > > > >
> > > > > > On Mon, Mar 2, 2026 at 4:43 PM Yosry Ahmed <yosry@kernel.org> wrote:
> > > > > > >
> > > > > > > Add a test that verifies that KVM correctly injects a #GP for nested
> > > > > > > VMRUN and a shutdown for nested #VMEXIT, if the GPA of vmcb12 cannot be
> > > > > > > mapped.
> > > > > > >
> > > > > > > Signed-off-by: Yosry Ahmed <yosry@kernel.org>
> > > > > > > ...
> > > > > > > + /*
> > > > > > > + * Find the max legal GPA that is not backed by a memslot (i.e. cannot
> > > > > > > + * be mapped by KVM).
> > > > > > > + */
> > > > > > > + maxphyaddr = kvm_cpuid_property(vcpu->cpuid, X86_PROPERTY_MAX_PHY_ADDR);
> > > > > > > + max_legal_gpa = BIT_ULL(maxphyaddr) - PAGE_SIZE;
> > > > > > > + vcpu_alloc_svm(vm, &nested_gva);
> > > > > > > + vcpu_args_set(vcpu, 2, nested_gva, max_legal_gpa);
> > > > > > > +
> > > > > > > + /* VMRUN with max_legal_gpa, KVM injects a #GP */
> > > > > > > + vcpu_run(vcpu);
> > > > > > > + TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> > > > > > > + TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
> > > > > > > + TEST_ASSERT_EQ(uc.args[1], SYNC_GP);
> > > > > >
> > > > > > Why would this raise #GP? That isn't architected behavior.
> > > > >
> > > > > I don't see architected behavior in the APM for what happens if VMRUN
> > > > > fails to load the VMCB from memory. I guess it should be the same as
> > > > > what would happen if a PTE is pointing to a physical address that
> > > > > doesn't exist? Maybe #MC?
> > > >
> > > > Reads from non-existent memory return all 1's
> > >
> > > Today I learned :) Do all x86 CPUs do this?
> >
> > Yes. If no device claims the address, reads return all 1s. I think you
> > can thank pull-up resistors for that.
>
> Ya, it's officially documented PCI behavior. Writes are dropped, reads return
> all 1s.
LOL! PCI bus?!? These semantics were cast in stone long before anyone
even dreamt of a PCI bus!
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 1:39 ` Sean Christopherson
2026-03-06 1:46 ` Jim Mattson
@ 2026-03-06 15:52 ` Yosry Ahmed
2026-03-06 17:54 ` Yosry Ahmed
2026-03-06 16:09 ` Yosry Ahmed
2 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-06 15:52 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Jim Mattson, Paolo Bonzini, kvm, linux-kernel
> > > It also does the same thing for VMSAVE/VMLOAD, which seems to also not
> > > be architectural. This would be more annoying to handle correctly
> > > because we'll need to copy all 1's to the relevant fields in vmcb12 or
> > > vmcb01.
> >
> > Or just exit to userspace with
> > KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. I think on the
> > VMX side, this sort of thing goes through kvm_handle_memory_failure().
>
> Yep, I think this is the correct fixup:
Looks good, I was going to say ignore the series @
https://lore.kernel.org/kvm/20260305203005.1021335-1-yosry@kernel.org/,
because I will incorporate the fix in it after patch 1 (the cleanup),
and patch 2 will need to be redone such that the test checks for
KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. But then I
stumbled upon the VMSAVE/VMLOAD behavior and the #GP I was observing
with vls=1 (see cover letter).
So I dug a bit deeper. Turns out with vls=1, if the GPA is supported
but not mapped, VMLOAD will generate a #NPF, and because there is no
slot KVM will install an MMIO SPTE and emulate the instruction. The
emulator will end up calling check_svme_pa() -> emulate_gp(). I didn't
catch this initially because I was tracing kvm_queue_exception_e() and
didn't get any hits, but I can see the call to
inject_emulated_exception() with #GP so probably the compiler just
inlines it.
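For reference, the emulator's check_svme_pa() (arch/x86/kvm/emulate.c) is roughly the following, paraphrased; the hardcoded mask is the piece that matters here:
static int check_svme_pa(struct x86_emulate_ctxt *ctxt)
{
        u64 rax = reg_read(ctxt, VCPU_REGS_RAX);

        /* "Valid physical address?" -- #GP if any of bits 63:48 are set. */
        if (rax & 0xffff000000000000ULL)
                return emulate_gp(ctxt, 0);

        return check_svme(ctxt);
}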
Anyway, we'll also need to update the emulator. Perhaps just return
X86EMUL_UNHANDLEABLE from check_svme_pa() instead of injecting #GP,
although I don't think this will always end up returning to userspace
with
KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. Looking at
handle_emulation_failure(), we'll only immediately exit to userspace
if KVM_CAP_EXIT_ON_EMULATION_FAILURE is set (because EMULTYPE_SKIP
won't be set). Otherwise we'll inject a #UD, and only exit to
userspace if the VMSAVE/VMLOAD came from L1.
Not sure if that's good enough or if we need to augment the emulator
somehow (e.g. a new return value that always exits to userspace? Or
allow x86_emulate_insn() -> check_perm() to change the emulation type
to add EMULTYPE_SKIP?).
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 1:39 ` Sean Christopherson
2026-03-06 1:46 ` Jim Mattson
2026-03-06 15:52 ` Yosry Ahmed
@ 2026-03-06 16:09 ` Yosry Ahmed
2026-03-06 16:35 ` Sean Christopherson
2 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-06 16:09 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Jim Mattson, Paolo Bonzini, kvm, linux-kernel
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index b191c6cab57d..78a542c6ddf1 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1105,10 +1105,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
>
> vmcb12_gpa = svm->vmcb->save.rax;
> err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
> - if (err == -EFAULT) {
> - kvm_inject_gp(vcpu, 0);
> - return 1;
> - }
> + if (err == -EFAULT)
> + return kvm_handle_memory_failure(vcpu, X86EMUL_UNHANDLEABLE, NULL);
Why not call kvm_prepare_emulation_failure_exit() directly? Is the
premise that kvm_handle_memory_failure() might evolve to do more
things for emulation failures that are specifically caused by memory
failures, other than potentially injecting an exception?
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 16:09 ` Yosry Ahmed
@ 2026-03-06 16:35 ` Sean Christopherson
2026-03-06 17:25 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Sean Christopherson @ 2026-03-06 16:35 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Jim Mattson, Paolo Bonzini, kvm, linux-kernel
On Fri, Mar 06, 2026, Yosry Ahmed wrote:
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index b191c6cab57d..78a542c6ddf1 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -1105,10 +1105,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> >
> > vmcb12_gpa = svm->vmcb->save.rax;
> > err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
> > - if (err == -EFAULT) {
> > - kvm_inject_gp(vcpu, 0);
> > - return 1;
> > - }
> > + if (err == -EFAULT)
> > + return kvm_handle_memory_failure(vcpu, X86EMUL_UNHANDLEABLE, NULL);
>
> Why not call kvm_prepare_emulation_failure_exit() directly?
Mostly because my mental coin-flip came up heads. But it's also one less line
of code, woot woot!
> Is the premise that kvm_handle_memory_failure() might evolve to do more
> things for emulation failures that are specifically caused by memory
> failures, other than potentially injecting an exception?
Yeah, more or less. I doubt kvm_handle_memory_failure() will ever actually evolve
into anything more sophisticated, but at the very least, using
kvm_handle_memory_failure() documents _why_ KVM can't handle emulation.
On second thought, I think using X86EMUL_IO_NEEDED would be more appropriate.
The memremap() is only reachable if allow_unsafe_mappings is enabled, and so for
a "default" configuration, failure can only occur on:
if (is_error_noslot_pfn(map->pfn))
return -EINVAL;
Which doesn't _guarantee_ that emulated I/O is required, but we're definitely
beyond splitting hairs at that point.
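A sketch of the fixup with that tweak (hypothetical, not a posted patch; behaviorally it still lands in kvm_prepare_emulation_failure_exit()):
        if (err == -EFAULT)
                return kvm_handle_memory_failure(vcpu, X86EMUL_IO_NEEDED, NULL);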
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 16:35 ` Sean Christopherson
@ 2026-03-06 17:25 ` Yosry Ahmed
0 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-06 17:25 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Jim Mattson, Paolo Bonzini, kvm, linux-kernel
On Fri, Mar 6, 2026 at 8:35 AM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Mar 06, 2026, Yosry Ahmed wrote:
> > > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > > index b191c6cab57d..78a542c6ddf1 100644
> > > --- a/arch/x86/kvm/svm/nested.c
> > > +++ b/arch/x86/kvm/svm/nested.c
> > > @@ -1105,10 +1105,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
> > >
> > > vmcb12_gpa = svm->vmcb->save.rax;
> > > err = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
> > > - if (err == -EFAULT) {
> > > - kvm_inject_gp(vcpu, 0);
> > > - return 1;
> > > - }
> > > + if (err == -EFAULT)
> > > + return kvm_handle_memory_failure(vcpu, X86EMUL_UNHANDLEABLE, NULL);
> >
> > Why not call kvm_prepare_emulation_failure_exit() directly?
>
> Mostly because my mental coin-flip came up heads. But it's also one less line
> of code, woot woot!
>
> > Is the premise that kvm_handle_memory_failure() might evolve to do more
> > things for emulation failures that are specifically caused by memory
> > failures, other than potentially injecting an exception?
>
> Yeah, more or less. I doubt kvm_handle_memory_failure() will ever actually evolve
> into anything more sophisticated, but at the very least, using
> kvm_handle_memory_failure() documents _why_ KVM can't handle emulation.
Yeah I agree with this too.
>
> On second thought, I think using X86EMUL_IO_NEEDED would be more appropriate.
> The memremap() is only reachable if allow_unsafe_mappings is enabled, and so for
> a "default" configuration, failure can only occur on:
>
> if (is_error_noslot_pfn(map->pfn))
> return -EINVAL;
>
> Which doesn't _guarantee_ that emulated I/O is required, but we're definitely
> beyond splitting hairs at that point.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 15:52 ` Yosry Ahmed
@ 2026-03-06 17:54 ` Yosry Ahmed
2026-03-06 22:15 ` Jim Mattson
0 siblings, 1 reply; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-06 17:54 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Jim Mattson, Paolo Bonzini, kvm, linux-kernel
On Fri, Mar 6, 2026 at 7:52 AM Yosry Ahmed <yosry@kernel.org> wrote:
>
> > > > It also does the same thing for VMSAVE/VMLOAD, which seems to also not
> > > > be architectural. This would be more annoying to handle correctly
> > > > because we'll need to copy all 1's to the relevant fields in vmcb12 or
> > > > vmcb01.
> > >
> > > Or just exit to userspace with
> > > KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. I think on the
> > > VMX side, this sort of thing goes through kvm_handle_memory_failure().
> >
> > Yep, I think this is the correct fixup:
>
> Looks good, I was going to say ignore the series @
> https://lore.kernel.org/kvm/20260305203005.1021335-1-yosry@kernel.org/,
> because I will incorporate the fix in it after patch 1 (the cleanup),
> and patch 2 will need to be redone such that the test checks for
> KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. But then I
> stumbled upon the VMSAVE/VMLOAD behavior and the #GP I was observing
> with vls=1 (see cover letter).
>
> So I dug a bit deeper. Turns out with vls=1, if the GPA is supported
> but not mapped, VMLOAD will generate a #NPF, and because there is no
> slot KVM will install an MMIO SPTE and emulate the instruction. The
> emulator will end up calling check_svme_pa() -> emulate_gp(). I didn't
> catch this initially because I was tracing kvm_queue_exception_e() and
> didn't get any hits, but I can see the call to
> inject_emulated_exception() with #GP so probably the compiler just
> inlines it.
>
> Anyway, we'll also need to update the emulator. Perhaps just return
> X86EMUL_UNHANDLEABLE from check_svme_pa() instead of injecting #GP,
Actually, not quite. check_svme_pa() should keep injecting #GP, but
based on checking rax against kvm_host.maxphyaddr instead of the
hardcoded 0xffff000000000000ULL value. The address my test is using is
0xffffffffff000, which is a legal address on Turin (52 bit phyaddr),
but check_svme_pa() thinks it isn't and injects #GP. I think if that
is fixed, check_svme_pa() will succeed, and then emulation will fail
anyway because it's not implemented. So that seems like a separate
bug.
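A minimal sketch of that idea as a diff against check_svme_pa(), hypothetical and assuming the host MAXPHYADDR (kvm_host.maxphyaddr) is reachable from the emulator, which may require extra plumbing:
-       /* Valid physical address? */
-       if (rax & 0xffff000000000000ULL)
+       /* #GP if RAX has bits set at or above the host MAXPHYADDR. */
+       if (rax >> kvm_host.maxphyaddr)
                return emulate_gp(ctxt, 0);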
But then if the address is below maxphyaddr and the EFER.SVME check
succeeds, I think we should return X86EMUL_UNHANDLEABLE? I cannot
immediately tell if this will organically happen in x86_emulate_insn()
after check_svme_pa() returns.
The rest of what I said stands.
> although I don't think this will always end up returning to userspace
> with
> KVM_EXIT_INTERNAL_ERROR/KVM_INTERNAL_ERROR_EMULATION. Looking at
> handle_emulation_failure(), we'll only immediately exit to userspace
> if KVM_CAP_EXIT_ON_EMULATION_FAILURE is set (because EMULTYPE_SKIP
> won't be set). Otherwise we'll inject a #UD, and only exit to
> userspace if the VMSAVE/VMLOAD came from L1.
>
> Not sure if that's good enough or if we need to augment the emulator
> somehow (e.g. a new return value that always exits to userspace? Or
> allow x86_emulate_insn() -> check_perm() to change the emulation type
> to add EMULTYPE_SKIP?).
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 17:54 ` Yosry Ahmed
@ 2026-03-06 22:15 ` Jim Mattson
2026-03-06 22:35 ` Yosry Ahmed
0 siblings, 1 reply; 52+ messages in thread
From: Jim Mattson @ 2026-03-06 22:15 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Fri, Mar 6, 2026 at 9:54 AM Yosry Ahmed <yosry@kernel.org> wrote:
> Actually, not quite. check_svme_pa() should keep injecting #GP, but
> based on checking rax against kvm_host.maxphyaddr instead of the
> hardcoded 0xffff000000000000ULL value.
Shouldn't it check against the guest's maxphyaddr, in case
allow_smaller_maxphyaddr is in use?
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12
2026-03-06 22:15 ` Jim Mattson
@ 2026-03-06 22:35 ` Yosry Ahmed
0 siblings, 0 replies; 52+ messages in thread
From: Yosry Ahmed @ 2026-03-06 22:35 UTC (permalink / raw)
To: Jim Mattson; +Cc: Sean Christopherson, Paolo Bonzini, kvm, linux-kernel
On Fri, Mar 6, 2026 at 2:15 PM Jim Mattson <jmattson@google.com> wrote:
>
> On Fri, Mar 6, 2026 at 9:54 AM Yosry Ahmed <yosry@kernel.org> wrote:
> > Actually, not quite. check_svme_pa() should keep injecting #GP, but
> > based on checking rax against kvm_host.maxphyaddr instead of the
> > hardcoded 0xffff000000000000ULL value.
>
> Shouldn't it check against the guest's maxphyaddr, in case
> allow_smaller_maxphyaddr is in use?
Will respond to the other thread.
^ permalink raw reply [flat|nested] 52+ messages in thread
end of thread
Thread overview: 52+ messages
2026-03-03 0:33 [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 01/26] KVM: nSVM: Avoid clearing VMCB_LBR in vmcb12 Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 02/26] KVM: SVM: Switch svm_copy_lbrs() to a macro Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 03/26] KVM: SVM: Add missing save/restore handling of LBR MSRs Yosry Ahmed
2026-03-03 16:37 ` Sean Christopherson
2026-03-03 19:14 ` Yosry Ahmed
2026-03-04 0:44 ` Sean Christopherson
2026-03-04 0:48 ` Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 04/26] KVM: selftests: Add a test for LBR save/restore (ft. nested) Yosry Ahmed
2026-03-03 0:33 ` [PATCH v7 05/26] KVM: nSVM: Always inject a #GP if mapping VMCB12 fails on nested VMRUN Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 06/26] KVM: nSVM: Refactor checking LBRV enablement in vmcb12 into a helper Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 07/26] KVM: nSVM: Refactor writing vmcb12 on nested #VMEXIT as " Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 08/26] KVM: nSVM: Triple fault if mapping VMCB12 fails on nested #VMEXIT Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 09/26] KVM: nSVM: Triple fault if restore host CR3 " Yosry Ahmed
2026-03-03 16:49 ` Sean Christopherson
2026-03-03 19:15 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 10/26] KVM: nSVM: Clear GIF on nested #VMEXIT(INVALID) Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 11/26] KVM: nSVM: Clear EVENTINJ fields in vmcb12 on nested #VMEXIT Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 12/26] KVM: nSVM: Clear tracking of L1->L2 NMI and soft IRQ " Yosry Ahmed
2026-03-03 16:50 ` Sean Christopherson
2026-03-03 19:15 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 13/26] KVM: nSVM: Drop nested_vmcb_check_{save/control}() wrappers Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 14/26] KVM: nSVM: Drop the non-architectural consistency check for NP_ENABLE Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 15/26] KVM: nSVM: Add missing consistency check for nCR3 validity Yosry Ahmed
2026-03-03 16:56 ` Sean Christopherson
2026-03-03 19:17 ` Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 16/26] KVM: nSVM: Add missing consistency check for EFER, CR0, CR4, and CS Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 17/26] KVM: nSVM: Add missing consistency check for EVENTINJ Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 18/26] KVM: SVM: Rename vmcb->nested_ctl to vmcb->misc_ctl Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 19/26] KVM: SVM: Rename vmcb->virt_ext to vmcb->misc_ctl2 Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 20/26] KVM: nSVM: Cache all used fields from VMCB12 Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 21/26] KVM: nSVM: Restrict mapping vmcb12 on nested VMRUN Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 22/26] KVM: nSVM: Use PAGE_MASK to drop lower bits of bitmap GPAs from vmcb12 Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 23/26] KVM: nSVM: Sanitize TLB_CONTROL field when copying " Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 24/26] KVM: nSVM: Sanitize INT/EVENTINJ fields " Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 25/26] KVM: nSVM: Only copy SVM_MISC_ENABLE_NP from VMCB01's misc_ctl Yosry Ahmed
2026-03-03 0:34 ` [PATCH v7 26/26] KVM: selftest: Add a selftest for VMRUN/#VMEXIT with unmappable vmcb12 Yosry Ahmed
2026-03-05 22:30 ` Jim Mattson
2026-03-05 22:52 ` Yosry Ahmed
2026-03-06 0:05 ` Jim Mattson
2026-03-06 0:40 ` Yosry Ahmed
2026-03-06 1:17 ` Jim Mattson
2026-03-06 1:39 ` Sean Christopherson
2026-03-06 1:46 ` Jim Mattson
2026-03-06 15:52 ` Yosry Ahmed
2026-03-06 17:54 ` Yosry Ahmed
2026-03-06 22:15 ` Jim Mattson
2026-03-06 22:35 ` Yosry Ahmed
2026-03-06 16:09 ` Yosry Ahmed
2026-03-06 16:35 ` Sean Christopherson
2026-03-06 17:25 ` Yosry Ahmed
2026-03-05 17:08 ` [PATCH v7 00/26] Nested SVM fixes, cleanups, and hardening Sean Christopherson