public inbox for kvm@vger.kernel.org
* [PATCH 0/2] KVM: x86: Fastpath userspace exit fix and hardening
@ 2026-04-23 16:26 Sean Christopherson
  2026-04-23 16:26 ` [PATCH 1/2] KVM: x86: Ensure vendor's exit handler runs before fastpath userspace exits Sean Christopherson
  2026-04-23 16:26 ` [PATCH 2/2] KVM: SVM: Refresh vcpu->arch.cr{0,3} prior to invoking fastpath handler Sean Christopherson
  0 siblings, 2 replies; 3+ messages in thread
From: Sean Christopherson @ 2026-04-23 16:26 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel, Nikunj A . Dadhania

Fix a found-by-inspection bug where KVM could fail to flush the PML buffer
(VMX) or fail to grab the most recent CR0/CR3 (SVM) prior to exiting to
userspace, which could prove fatal if that exit is the last exit prior to
save/restore, e.g. the last exit before userspace commits to migrating the
VM.

Patch 2 is related hardening, i.e. isn't fixing any existing bugs (AFAIK).

Sean Christopherson (2):
  KVM: x86: Ensure vendor's exit handler runs before fastpath userspace
    exits
  KVM: SVM: Refresh vcpu->arch.cr{0,3} prior to invoking fastpath
    handler

 arch/x86/kvm/svm/svm.c | 15 ++++++++-------
 arch/x86/kvm/vmx/vmx.c |  3 +++
 arch/x86/kvm/x86.c     |  3 ---
 3 files changed, 11 insertions(+), 10 deletions(-)


base-commit: 85f871f6ba46f20d7fbc0b016b4db648c33220dd
-- 
2.54.0.545.g6539524ca2-goog



* [PATCH 1/2] KVM: x86: Ensure vendor's exit handler runs before fastpath userspace exits
  2026-04-23 16:26 [PATCH 0/2] KVM: x86: Fastpath userspace exit fix and hardening Sean Christopherson
@ 2026-04-23 16:26 ` Sean Christopherson
  2026-04-23 16:26 ` [PATCH 2/2] KVM: SVM: Refresh vcpu->arch.cr{0,3} prior to invoking fastpath handler Sean Christopherson
  1 sibling, 0 replies; 3+ messages in thread
From: Sean Christopherson @ 2026-04-23 16:26 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel, Nikunj A . Dadhania

Move the handling of fastpath userspace exits into vendor code to ensure
KVM runs vendor-specific operations that must complete before userspace
gains control of the vCPU.  E.g. for VMX (and soon for SVM as well), KVM
needs to flush the PML buffer prior to exiting to userspace, otherwise any
memory written by the final KVM_RUN might never be flagged as dirty.

Note, waiting to snapshot CR0 and CR3 until svm_handle_exit() is flawed in
general, as that risks consuming stale state in a fastpath handler.  That
will be addressed in a future change.

Fixes: f7f39c50edb9 ("KVM: x86: Exit to userspace if fastpath triggers one on instruction skip")
Cc: stable@vger.kernel.org
Cc: Nikunj A. Dadhania <nikunj@amd.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm.c | 3 +++
 arch/x86/kvm/vmx/vmx.c | 3 +++
 arch/x86/kvm/x86.c     | 3 ---
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e7fdd7a9c280..eb351ca4dd82 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3652,6 +3652,9 @@ static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 			vcpu->arch.cr3 = svm->vmcb->save.cr3;
 	}
 
+	if (unlikely(exit_fastpath == EXIT_FASTPATH_EXIT_USERSPACE))
+		return 0;
+
 	if (is_guest_mode(vcpu)) {
 		int vmexit;
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a29896a9ef14..4cb355ecfe46 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6687,6 +6687,9 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	if (enable_pml && !is_guest_mode(vcpu))
 		vmx_flush_pml_buffer(vcpu);
 
+	if (unlikely(exit_fastpath == EXIT_FASTPATH_EXIT_USERSPACE))
+		return 0;
+
 	/*
 	 * KVM should never reach this point with a pending nested VM-Enter.
 	 * More specifically, short-circuiting VM-Entry to emulate L2 due to
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0a1b63c63d1a..9ad7ec3bf0f1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11602,9 +11602,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.apic_attention)
 		kvm_lapic_sync_from_vapic(vcpu);
 
-	if (unlikely(exit_fastpath == EXIT_FASTPATH_EXIT_USERSPACE))
-		return 0;
-
 	r = kvm_x86_call(handle_exit)(vcpu, exit_fastpath);
 	return r;
 
-- 
2.54.0.545.g6539524ca2-goog



* [PATCH 2/2] KVM: SVM: Refresh vcpu->arch.cr{0,3} prior to invoking fastpath handler
  2026-04-23 16:26 [PATCH 0/2] KVM: x86: Fastpath userspace exit fix and hardening Sean Christopherson
  2026-04-23 16:26 ` [PATCH 1/2] KVM: x86: Ensure vendor's exit handler runs before fastpath userspace exits Sean Christopherson
@ 2026-04-23 16:26 ` Sean Christopherson
  1 sibling, 0 replies; 3+ messages in thread
From: Sean Christopherson @ 2026-04-23 16:26 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini; +Cc: kvm, linux-kernel, Nikunj A . Dadhania

Refresh KVM's copies of CR0 and CR3 from the VMCB prior to (potentially)
invoking a fastpath handler to ensure that KVM doesn't consume stale
state.  While it's unlikely KVM will ever consume CR3 or CR0.{TS,MP} in
the fastpath, grabbing the values from the VMCB is inexpensive, i.e. the
risk of subtle bugs far outweighs the reward of deferring reads for a
small subset of VM-Exits.

Note, KVM doesn't currently consume CR3 or CR0.{TS,MP} in the fastpath,
as KVM requires next_rip to be valid (i.e. KVM doesn't read CR3 to decode
the instruction), CR0.MP is never consumed, and CR0.TS is only consumed by
the full emulator.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/svm/svm.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index eb351ca4dd82..df0bd132edf7 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3644,14 +3644,6 @@ static int svm_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_run *kvm_run = vcpu->run;
 
-	/* SEV-ES guests must use the CR write traps to track CR registers. */
-	if (!is_sev_es_guest(vcpu)) {
-		if (!svm_is_intercept(svm, INTERCEPT_CR0_WRITE))
-			vcpu->arch.cr0 = svm->vmcb->save.cr0;
-		if (npt_enabled)
-			vcpu->arch.cr3 = svm->vmcb->save.cr3;
-	}
-
 	if (unlikely(exit_fastpath == EXIT_FASTPATH_EXIT_USERSPACE))
 		return 0;
 
@@ -4505,11 +4497,17 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu, u64 run_flags)
 	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
 		x86_spec_ctrl_restore_host(svm->virt_spec_ctrl);
 
+	/* SEV-ES guests must use the CR write traps to track CR registers. */
 	if (!is_sev_es_guest(vcpu)) {
 		vcpu->arch.cr2 = svm->vmcb->save.cr2;
 		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
 		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
 		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
+
+		if (!svm_is_intercept(svm, INTERCEPT_CR0_WRITE))
+			vcpu->arch.cr0 = svm->vmcb->save.cr0;
+		if (npt_enabled)
+			vcpu->arch.cr3 = svm->vmcb->save.cr3;
 	}
 	vcpu->arch.regs_dirty = 0;
 
-- 
2.54.0.545.g6539524ca2-goog



end of thread, other threads:[~2026-04-23 16:26 UTC | newest]
