public inbox for kvm@vger.kernel.org
* [PATCH 0/3] Unify VERW mitigation for guests
@ 2025-10-29 21:26 Pawan Gupta
  2025-10-29 21:26 ` [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Pawan Gupta
                   ` (3 more replies)
  0 siblings, 4 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-29 21:26 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

This series unifies the VERW execution sites in KVM, specifically
addressing inconsistencies in how MMIO Stale Data mitigation is handled
compared to other data sampling attacks (MDS/TAA/RFDS).

Problem
=======
Currently, MMIO Stale Data mitigation is handled differently from other
VERW-based mitigations. While MDS/TAA/RFDS perform VERW clearing in
assembly code, MMIO Stale Data mitigation uses a separate code path with
x86_clear_cpu_buffers() calls. This inconsistency exists because MMIO
Stale Data mitigation only needs to be applied when guests can access host
MMIO, which was previously difficult to check in assembly. The other
inconsistency is that VERW execution for MMIO Stale Data depends on the
L1TF mitigation.

Solution
========
Remove the VERW mitigation for MMIO in C, and use the asm VERW callsite for
all VERW mitigations in KVM. Also decouple the MMIO mitigation from the
L1TF mitigation.

Roadmap:

Patch 1: Switch to VM_CLEAR_CPU_BUFFERS usage in VMX to align Intel
	 mitigations with AMD's TSA mitigation.

Patch 2: Renames cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only to
	 avoid confusion with the broader X86_FEATURE_CLEAR_CPU_BUF_VM.

Patch 3: Unifies MMIO Stale Data mitigation with other VERW-based
         mitigations.

---
Pawan Gupta (3):
      x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
      x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
      x86/mmio: Unify VERW mitigation for guests

 arch/x86/include/asm/nospec-branch.h |  2 +-
 arch/x86/kernel/cpu/bugs.c           | 17 +++++++++++------
 arch/x86/kvm/mmu/spte.c              |  2 +-
 arch/x86/kvm/vmx/run_flags.h         | 12 ++++++------
 arch/x86/kvm/vmx/vmenter.S           |  8 +++++++-
 arch/x86/kvm/vmx/vmx.c               | 26 ++++++++++----------------
 6 files changed, 36 insertions(+), 31 deletions(-)
---
base-commit: dcb6fa37fd7bc9c3d2b066329b0d27dedf8becaa
change-id: 20251028-verw-vm-6b922910a2b3

Best regards,
-- 
Pawan



^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
  2025-10-29 21:26 [PATCH 0/3] Unify VERW mitigation for guests Pawan Gupta
@ 2025-10-29 21:26 ` Pawan Gupta
  2025-10-29 22:13   ` Pawan Gupta
  2025-10-30 12:28   ` Brendan Jackman
  2025-10-29 21:26 ` [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only Pawan Gupta
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-29 21:26 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

TSA mitigation:

  d8010d4ba43e ("x86/bugs: Add a Transient Scheduler Attacks mitigation")

introduced VM_CLEAR_CPU_BUFFERS for guests on AMD CPUs. On Intel,
CLEAR_CPU_BUFFERS is currently used for guests, which has a much broader
scope (it also covers kernel->user).

Make the mitigations on Intel consistent with TSA. This will help handle
guest-only mitigations better in the future.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
 arch/x86/kernel/cpu/bugs.c | 9 +++++++--
 arch/x86/kvm/vmx/vmenter.S | 3 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d7fa03bf51b4517c12cc68e7c441f7589a4983d1..6d00a9ea7b4f28da291114a7a096b26cc129b57e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -194,7 +194,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
 /*
  * Controls CPU Fill buffer clear before VMenter. This is a subset of
- * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
+ * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
  * mitigation is required.
  */
 DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
@@ -536,6 +536,7 @@ static void __init mds_apply_mitigation(void)
 	if (mds_mitigation == MDS_MITIGATION_FULL ||
 	    mds_mitigation == MDS_MITIGATION_VMWERV) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
 		    (mds_nosmt || smt_mitigations == SMT_MITIGATIONS_ON))
 			cpu_smt_disable(false);
@@ -647,6 +648,7 @@ static void __init taa_apply_mitigation(void)
 		 * present on host, enable the mitigation for UCODE_NEEDED as well.
 		 */
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
 		if (taa_nosmt || smt_mitigations == SMT_MITIGATIONS_ON)
 			cpu_smt_disable(false);
@@ -752,6 +754,7 @@ static void __init mmio_apply_mitigation(void)
 	} else {
 		static_branch_enable(&cpu_buf_vm_clear);
 	}
+	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
 	/*
 	 * If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can
@@ -839,8 +842,10 @@ static void __init rfds_update_mitigation(void)
 
 static void __init rfds_apply_mitigation(void)
 {
-	if (rfds_mitigation == RFDS_MITIGATION_VERW)
+	if (rfds_mitigation == RFDS_MITIGATION_VERW) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
+	}
 }
 
 static __init int rfds_parse_cmdline(char *str)
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index bc255d709d8a16ae22b5bc401965d209a89a8692..0dd23beae207795484150698d1674dc4044cc520 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -161,7 +161,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
 	/* Clobbers EFLAGS.ZF */
-	CLEAR_CPU_BUFFERS
+	VM_CLEAR_CPU_BUFFERS
+.Lskip_clear_cpu_buffers:
 
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch

-- 
2.34.1




* [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
  2025-10-29 21:26 [PATCH 0/3] Unify VERW mitigation for guests Pawan Gupta
  2025-10-29 21:26 ` [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Pawan Gupta
@ 2025-10-29 21:26 ` Pawan Gupta
  2025-10-30  0:18   ` Sean Christopherson
  2025-10-30 12:29   ` Brendan Jackman
  2025-10-29 21:26 ` [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests Pawan Gupta
  2025-10-30  0:29 ` [PATCH 0/3] " Sean Christopherson
  3 siblings, 2 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-29 21:26 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

cpu_buf_vm_clear static key is only used by the MMIO Stale Data mitigation.
Rename it to avoid mixing it up with X86_FEATURE_CLEAR_CPU_BUF_VM.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
 arch/x86/include/asm/nospec-branch.h | 2 +-
 arch/x86/kernel/cpu/bugs.c           | 8 ++++----
 arch/x86/kvm/mmu/spte.c              | 2 +-
 arch/x86/kvm/vmx/vmx.c               | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 08ed5a2e46a5fd790bcb1b73feb6469518809c06..cb46f5d188de47834466474ec8030bb2a2e4fdf3 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -580,7 +580,7 @@ DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
 
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
-DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
+DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear_mmio_only);
 
 extern u16 x86_verw_sel;
 
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6d00a9ea7b4f28da291114a7a096b26cc129b57e..e7c31c23fbeeb1aba4f538934c1e8a997adff522 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -197,8 +197,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
  * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
  * mitigation is required.
  */
-DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
+DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear_mmio_only);
+EXPORT_SYMBOL_GPL(cpu_buf_vm_clear_mmio_only);
 
 #undef pr_fmt
 #define pr_fmt(fmt)	"mitigations: " fmt
@@ -750,9 +750,9 @@ static void __init mmio_apply_mitigation(void)
 	 */
 	if (verw_clear_cpu_buf_mitigation_selected) {
 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
-		static_branch_disable(&cpu_buf_vm_clear);
+		static_branch_disable(&cpu_buf_vm_clear_mmio_only);
 	} else {
-		static_branch_enable(&cpu_buf_vm_clear);
+		static_branch_enable(&cpu_buf_vm_clear_mmio_only);
 	}
 	setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 37647afde7d3acfa1301a771ac44792eab879495..380d6675027499715e49e5b35ef76e17451fd77b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -292,7 +292,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
 	    !kvm_vcpu_can_access_host_mmio(vcpu) &&
 	    kvm_is_mmio_pfn(pfn, &is_host_mmio))
 		kvm_track_host_mmio_mapping(vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f87c216d976d7d344c924aa4cc18fe1bf8f9b731..451be757b3d1b2fec6b2b79157f26dd43bc368b8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -903,7 +903,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
 	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
 		flags |= VMX_RUN_SAVE_SPEC_CTRL;
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
 	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
 		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
 

-- 
2.34.1




* [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-29 21:26 [PATCH 0/3] Unify VERW mitigation for guests Pawan Gupta
  2025-10-29 21:26 ` [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Pawan Gupta
  2025-10-29 21:26 ` [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only Pawan Gupta
@ 2025-10-29 21:26 ` Pawan Gupta
  2025-10-30  0:27   ` Sean Christopherson
                     ` (2 more replies)
  2025-10-30  0:29 ` [PATCH 0/3] " Sean Christopherson
  3 siblings, 3 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-29 21:26 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

When a system is only affected by MMIO Stale Data, the VERW mitigation is
currently handled differently than for other data sampling attacks like
MDS/TAA/RFDS, which do the VERW in asm. This is because for MMIO Stale
Data, VERW is needed only when the guest can access host MMIO, which was
tricky to check in asm.

Refactoring done by:

  83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
  MMIO into the guest")

now makes it easier to execute VERW conditionally in asm based on
VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.

Unify the MMIO Stale Data mitigation with the other VERW-based mitigations
and have a single VERW callsite in __vmx_vcpu_run(). Remove the now
unnecessary call to x86_clear_cpu_buffers() in vmx_vcpu_enter_exit().

This also untangles the L1D Flush and MMIO Stale Data mitigations.
Earlier, an L1D Flush would skip the VERW for MMIO Stale Data. Now, the
two mitigations are independent of each other. This has little practical
implication, since no CPUs are affected by L1TF and *only* by MMIO Stale
Data (i.e. not affected by MDS/TAA/RFDS), but it makes the code cleaner
and easier to maintain.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
---
 arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
 arch/x86/kvm/vmx/vmenter.S   |  5 +++++
 arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
 3 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,12 +2,12 @@
 #ifndef __KVM_X86_VMX_RUN_FLAGS_H
 #define __KVM_X86_VMX_RUN_FLAGS_H
 
-#define VMX_RUN_VMRESUME_SHIFT				0
-#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
-#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
+#define VMX_RUN_VMRESUME_SHIFT			0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
+#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
 
-#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
-#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
-#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
+#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
 
 #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load @regs to RAX. */
 	mov (%_ASM_SP), %_ASM_AX
 
+	/* jz .Lskip_clear_cpu_buffers below relies on this */
+	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
+
 	/* Check if vmlaunch or vmresume is needed */
 	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
 
@@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX.  This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
+	jz .Lskip_clear_cpu_buffers
 	/* Clobbers EFLAGS.ZF */
 	VM_CLEAR_CPU_BUFFERS
 .Lskip_clear_cpu_buffers:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
 	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
 		flags |= VMX_RUN_SAVE_SPEC_CTRL;
 
-	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
-	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
-		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
+	/*
+	 * When affected by MMIO Stale Data only (and not other data sampling
+	 * attacks) only clear for MMIO-capable guests.
+	 */
+	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
+		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
+			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
+	} else {
+		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
+	}
 
 	return flags;
 }
@@ -7320,21 +7327,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 
 	guest_state_enter_irqoff();
 
-	/*
-	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
-	 * mitigation for MDS is done late in VMentry and is still
-	 * executed in spite of L1D Flush. This is because an extra VERW
-	 * should not matter much after the big hammer L1D Flush.
-	 *
-	 * cpu_buf_vm_clear is used when system is not vulnerable to MDS/TAA,
-	 * and is affected by MMIO Stale Data. In such cases mitigation in only
-	 * needed against an MMIO capable guest.
-	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (static_branch_unlikely(&cpu_buf_vm_clear) &&
-		 (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
-		x86_clear_cpu_buffers();
 
 	vmx_disable_fb_clear(vmx);
 

-- 
2.34.1




* Re: [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
  2025-10-29 21:26 ` [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Pawan Gupta
@ 2025-10-29 22:13   ` Pawan Gupta
  2025-10-30 12:28   ` Brendan Jackman
  1 sibling, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-29 22:13 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025 at 02:26:28PM -0700, Pawan Gupta wrote:
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index bc255d709d8a16ae22b5bc401965d209a89a8692..0dd23beae207795484150698d1674dc4044cc520 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -161,7 +161,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
>  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>  
>  	/* Clobbers EFLAGS.ZF */
> -	CLEAR_CPU_BUFFERS
> +	VM_CLEAR_CPU_BUFFERS
> +.Lskip_clear_cpu_buffers:

Agh, this label belongs to patch 3/3.

>  	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
>  	jnc .Lvmlaunch


* Re: [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
  2025-10-29 21:26 ` [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only Pawan Gupta
@ 2025-10-30  0:18   ` Sean Christopherson
  2025-10-30  5:40     ` Pawan Gupta
  2025-10-30 12:29   ` Brendan Jackman
  1 sibling, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2025-10-30  0:18 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin, Paolo Bonzini,
	linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025, Pawan Gupta wrote:
> cpu_buf_vm_clear static key is only used by the MMIO Stale Data mitigation.
> Rename it to avoid mixing it up with X86_FEATURE_CLEAR_CPU_BUF_VM.
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---

...

> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index f87c216d976d7d344c924aa4cc18fe1bf8f9b731..451be757b3d1b2fec6b2b79157f26dd43bc368b8 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -903,7 +903,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
>  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
>  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
>  
> -	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
> +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
>  	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
>  		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;

The use in vmx_vcpu_enter_exit() needs to be renamed as well.  The code gets
dropped in patch 3, but intermediate builds will fail.


* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-29 21:26 ` [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests Pawan Gupta
@ 2025-10-30  0:27   ` Sean Christopherson
  2025-10-30  6:11     ` Pawan Gupta
  2025-10-30  0:33   ` Pawan Gupta
  2025-10-30 12:52   ` Brendan Jackman
  2 siblings, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2025-10-30  0:27 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin, Paolo Bonzini,
	linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025, Pawan Gupta wrote:
> When a system is only affected by MMIO Stale Data, VERW mitigation is
> currently handled differently than other data sampling attacks like
> MDS/TAA/RFDS, that do the VERW in asm. This is because for MMIO Stale Data,
> VERW is needed only when the guest can access host MMIO, this was tricky to
> check in asm.
> 
> Refactoring done by:
> 
>   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
>   MMIO into the guest")
> 
> now makes it easier to execute VERW conditionally in asm based on
> VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
> 
> Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> have single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
> 
> This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> mitigations are independent of each other. Although, this has little
> practical implication since there are no CPUs that are affected by L1TF and
> are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> But, this makes the code cleaner and easier to maintain.

Heh, and largely makes our discussion on the L1TF cleanup moot :-)

> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---

...

> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
>  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
>  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
>  
> -	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> -	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> -		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> +	/*
> +	 * When affected by MMIO Stale Data only (and not other data sampling
> +	 * attacks) only clear for MMIO-capable guests.
> +	 */
> +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> +		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> +			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> +	} else {
> +		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> +	}

This is quite confusing and subtle.  E.g. it requires the reader to know that
cpu_buf_vm_clear_mmio_only is mutually exclusive with X86_FEATURE_CLEAR_CPU_BUF,
and that VMX_RUN_CLEAR_CPU_BUFFERS is ignored if X86_FEATURE_CLEAR_CPU_BUF=n.

At least, I think that's how it works :-)

Isn't the above equivalent to this when all is said and done?

	if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) ||
	    (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
	     kvm_vcpu_can_access_host_mmio(&vmx->vcpu)))
		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;

>  
>  	return flags;
>  }
> @@ -7320,21 +7327,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>  
>  	guest_state_enter_irqoff();
>  
> -	/*
> -	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
> -	 * mitigation for MDS is done late in VMentry and is still
> -	 * executed in spite of L1D Flush. This is because an extra VERW
> -	 * should not matter much after the big hammer L1D Flush.
> -	 *
> -	 * cpu_buf_vm_clear is used when system is not vulnerable to MDS/TAA,
> -	 * and is affected by MMIO Stale Data. In such cases mitigation in only
> -	 * needed against an MMIO capable guest.
> -	 */
>  	if (static_branch_unlikely(&vmx_l1d_should_flush))
>  		vmx_l1d_flush(vcpu);
> -	else if (static_branch_unlikely(&cpu_buf_vm_clear) &&
> -		 (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
> -		x86_clear_cpu_buffers();
>  
>  	vmx_disable_fb_clear(vmx);


* Re: [PATCH 0/3] Unify VERW mitigation for guests
  2025-10-29 21:26 [PATCH 0/3] Unify VERW mitigation for guests Pawan Gupta
                   ` (2 preceding siblings ...)
  2025-10-29 21:26 ` [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests Pawan Gupta
@ 2025-10-30  0:29 ` Sean Christopherson
  2025-10-30 10:28   ` Borislav Petkov
  3 siblings, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2025-10-30  0:29 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin, Paolo Bonzini,
	linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025, Pawan Gupta wrote:
> ---
> Pawan Gupta (3):
>       x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
>       x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
>       x86/mmio: Unify VERW mitigation for guests
> 
>  arch/x86/include/asm/nospec-branch.h |  2 +-
>  arch/x86/kernel/cpu/bugs.c           | 17 +++++++++++------
>  arch/x86/kvm/mmu/spte.c              |  2 +-
>  arch/x86/kvm/vmx/run_flags.h         | 12 ++++++------
>  arch/x86/kvm/vmx/vmenter.S           |  8 +++++++-
>  arch/x86/kvm/vmx/vmx.c               | 26 ++++++++++----------------
>  6 files changed, 36 insertions(+), 31 deletions(-)
> ---

Any objection to taking these through the KVM tree when they're ready?  There
will be a conflict in vmx.c with an L1TF related cleanup, and that conflict is
actually helpful in that the two series feed off each other a little bit.


* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-29 21:26 ` [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests Pawan Gupta
  2025-10-30  0:27   ` Sean Christopherson
@ 2025-10-30  0:33   ` Pawan Gupta
  2025-10-30  5:52     ` Yao Yuan
  2025-10-30 12:52   ` Brendan Jackman
  2 siblings, 1 reply; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30  0:33 UTC (permalink / raw)
  To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025 at 02:27:00PM -0700, Pawan Gupta wrote:
> When a system is only affected by MMIO Stale Data, VERW mitigation is
> currently handled differently than other data sampling attacks like
> MDS/TAA/RFDS, that do the VERW in asm. This is because for MMIO Stale Data,
> VERW is needed only when the guest can access host MMIO, this was tricky to
> check in asm.
> 
> Refactoring done by:
> 
>   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
>   MMIO into the guest")
> 
> now makes it easier to execute VERW conditionally in asm based on
> VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
> 
> Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> have single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
> 
> This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> mitigations are independent of each other. Although, this has little
> practical implication since there are no CPUs that are affected by L1TF and
> are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> But, this makes the code cleaner and easier to maintain.
> 
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
>  arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
>  arch/x86/kvm/vmx/vmenter.S   |  5 +++++
>  arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
>  3 files changed, 21 insertions(+), 22 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ b/arch/x86/kvm/vmx/run_flags.h
> @@ -2,12 +2,12 @@
>  #ifndef __KVM_X86_VMX_RUN_FLAGS_H
>  #define __KVM_X86_VMX_RUN_FLAGS_H
>  
> -#define VMX_RUN_VMRESUME_SHIFT				0
> -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
> +#define VMX_RUN_VMRESUME_SHIFT			0
> +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
> +#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
>  
> -#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
> -#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
> +#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
> +#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> +#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
>  
>  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
>  	/* Load @regs to RAX. */
>  	mov (%_ASM_SP), %_ASM_AX
>  
> +	/* jz .Lskip_clear_cpu_buffers below relies on this */
> +	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> +
>  	/* Check if vmlaunch or vmresume is needed */
>  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
>  
> @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
>  	/* Load guest RAX.  This kills the @regs pointer! */
>  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>  
> +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> +	jz .Lskip_clear_cpu_buffers
>  	/* Clobbers EFLAGS.ZF */
>  	VM_CLEAR_CPU_BUFFERS
>  .Lskip_clear_cpu_buffers:
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
>  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
>  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
>  
> -	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> -	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> -		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> +	/*
> +	 * When affected by MMIO Stale Data only (and not other data sampling
> +	 * attacks) only clear for MMIO-capable guests.
> +	 */
> +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> +		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> +			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> +	} else {
> +		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> +	}

Setting the flag here is harmless but not necessary when the CPU is not
affected by any of the data sampling attacks; VM_CLEAR_CPU_BUFFERS would be
a NOP in that case.

However, looking at this code in a year or two, I would be confused about
why the flag is always set on unaffected CPUs. The change below to
conditionally set the flag would make it clearer.

---
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 303935882a9f..0eab59ab2698 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -910,7 +910,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
 	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
 		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
 			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
-	} else {
+	} else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_VM)) {
 		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
 	}
 

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
  2025-10-30  0:18   ` Sean Christopherson
@ 2025-10-30  5:40     ` Pawan Gupta
  0 siblings, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30  5:40 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin, Paolo Bonzini,
	linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025 at 05:18:19PM -0700, Sean Christopherson wrote:
> On Wed, Oct 29, 2025, Pawan Gupta wrote:
> > cpu_buf_vm_clear static key is only used by the MMIO Stale Data mitigation.
> > Rename it to avoid mixing it up with X86_FEATURE_CLEAR_CPU_BUF_VM.
> > 
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> 
> ...
> 
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index f87c216d976d7d344c924aa4cc18fe1bf8f9b731..451be757b3d1b2fec6b2b79157f26dd43bc368b8 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -903,7 +903,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> >  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
> >  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
> >  
> > -	if (static_branch_unlikely(&cpu_buf_vm_clear) &&
> > +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> >  	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> >  		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> 
> The use in vmx_vcpu_enter_exit() needs to be renamed as well.  The code gets
> dropped in patch 3, but intermediate builds will fail.

Ya, thanks for catching it.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30  0:33   ` Pawan Gupta
@ 2025-10-30  5:52     ` Yao Yuan
  2025-10-30  6:17       ` Pawan Gupta
  0 siblings, 1 reply; 27+ messages in thread
From: Yao Yuan @ 2025-10-30  5:52 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025 at 05:33:46PM +0800, Pawan Gupta wrote:
> On Wed, Oct 29, 2025 at 02:27:00PM -0700, Pawan Gupta wrote:
> > When a system is only affected by MMIO Stale Data, VERW mitigation is
> > currently handled differently than other data sampling attacks like
> > MDS/TAA/RFDS, which do the VERW in asm. This is because for MMIO Stale Data,
> > VERW is needed only when the guest can access host MMIO, which was tricky to
> > check in asm.
> >
> > Refactoring done by:
> >
> >   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
> >   MMIO into the guest")
> >
> > now makes it easier to execute VERW conditionally in asm based on
> > VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
> >
> > Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> > have a single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> > call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
> >
> > This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> > L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> > mitigations are independent of each other. This has little
> > practical implication since there are no CPUs that are affected by L1TF and
> > are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> > But, this makes the code cleaner and easier to maintain.
> >
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> >  arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
> >  arch/x86/kvm/vmx/vmenter.S   |  5 +++++
> >  arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
> >  3 files changed, 21 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> > index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
> > --- a/arch/x86/kvm/vmx/run_flags.h
> > +++ b/arch/x86/kvm/vmx/run_flags.h
> > @@ -2,12 +2,12 @@
> >  #ifndef __KVM_X86_VMX_RUN_FLAGS_H
> >  #define __KVM_X86_VMX_RUN_FLAGS_H
> >
> > -#define VMX_RUN_VMRESUME_SHIFT				0
> > -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
> > -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
> > +#define VMX_RUN_VMRESUME_SHIFT			0
> > +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
> > +#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
> >
> > -#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
> > -#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
> > +#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
> > +#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > +#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
> >
> >  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load @regs to RAX. */
> >  	mov (%_ASM_SP), %_ASM_AX
> >
> > +	/* jz .Lskip_clear_cpu_buffers below relies on this */
> > +	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> > +
> >  	/* Check if vmlaunch or vmresume is needed */
> >  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> >
> > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load guest RAX.  This kills the @regs pointer! */
> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >
> > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > +	jz .Lskip_clear_cpu_buffers
> >  	/* Clobbers EFLAGS.ZF */
> >  	VM_CLEAR_CPU_BUFFERS
> >  .Lskip_clear_cpu_buffers:
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> >  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
> >  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
> >
> > -	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> > -	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> > -		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> > +	/*
> > +	 * When affected only by MMIO Stale Data (and not by other data
> > +	 * sampling attacks), clear CPU buffers only for MMIO-capable guests.
> > +	 */
> > +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> > +		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> > +			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > +	} else {
> > +		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > +	}
>
> Setting the flag here is harmless but not necessary when the CPU is not
> affected by any of the data sampling attacks. VM_CLEAR_CPU_BUFFERS would be
> a NOP in that case.
>
> However, if I look at this code in a year or two, I will be confused about
> why the flag is always set on unaffected CPUs. The change below, which sets
> the flag conditionally, makes the intent clearer.
>
> ---
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 303935882a9f..0eab59ab2698 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -910,7 +910,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
>  	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
>  		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
>  			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> -	} else {
> +	} else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_VM)) {
>  		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
>  	}
>

Oh, no need to wait a year or two; I was already confused when I looked at
this part for the first time. But this change makes it clearer to me
anyway.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30  0:27   ` Sean Christopherson
@ 2025-10-30  6:11     ` Pawan Gupta
  0 siblings, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30  6:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin, Paolo Bonzini,
	linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025 at 05:27:37PM -0700, Sean Christopherson wrote:
> On Wed, Oct 29, 2025, Pawan Gupta wrote:
> > When a system is only affected by MMIO Stale Data, VERW mitigation is
> > currently handled differently than other data sampling attacks like
> > MDS/TAA/RFDS, which do the VERW in asm. This is because for MMIO Stale Data,
> > VERW is needed only when the guest can access host MMIO, which was tricky to
> > check in asm.
> > 
> > Refactoring done by:
> > 
> >   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
> >   MMIO into the guest")
> > 
> > now makes it easier to execute VERW conditionally in asm based on
> > VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
> > 
> > Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> > have a single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> > call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
> > 
> > This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> > L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> > mitigations are independent of each other. This has little
> > practical implication since there are no CPUs that are affected by L1TF and
> > are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> > But, this makes the code cleaner and easier to maintain.
> 
> Heh, and largely makes our discussion on the L1TF cleanup moot :-)

Well, this series is largely a result of that discussion :-)

> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> 
> ...
> 
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> >  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
> >  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
> >  
> > -	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> > -	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> > -		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> > +	/*
> > +	 * When affected only by MMIO Stale Data (and not by other data
> > +	 * sampling attacks), clear CPU buffers only for MMIO-capable guests.
> > +	 */
> > +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> > +		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> > +			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > +	} else {
> > +		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > +	}
> 
> This is quite confusing and subtle.

I realized that and sent the below follow-up almost at the same time:

	Setting the flag here is harmless but not necessary when the CPU is not
	affected by any of the data sampling attacks. VM_CLEAR_CPU_BUFFERS would be
	a NOP in that case.

	However, if I look at this code in a year or two, I will be confused about
	why the flag is always set on unaffected CPUs. The change below, which sets
	the flag conditionally, makes the intent clearer.

	---
	diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
	index 303935882a9f..0eab59ab2698 100644
	--- a/arch/x86/kvm/vmx/vmx.c
	+++ b/arch/x86/kvm/vmx/vmx.c
	@@ -910,7 +910,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
		if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
			if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
				flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
	-	} else {
	+	} else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_VM)) {
			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
		}


> E.g. it requires the reader to know that cpu_buf_vm_clear_mmio_only is
> mutually exclusive with X86_FEATURE_CLEAR_CPU_BUF, and that
> VMX_RUN_CLEAR_CPU_BUFFERS is ignored if X86_FEATURE_CLEAR_CPU_BUF=n.
> 
> At least, I think that's how it works :-)

That is right, the only thing is that instead of X86_FEATURE_CLEAR_CPU_BUF,
VM_CLEAR_CPU_BUFFERS depends on the KVM-specific X86_FEATURE_CLEAR_CPU_BUF_VM.

> Isn't the above equivalent to this when all is said and done?
> 
> 	if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) ||

For the above reason, it might be better to use
X86_FEATURE_CLEAR_CPU_BUF_VM (as in the diff I pasted above).

> 	    (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> 	     kvm_vcpu_can_access_host_mmio(&vmx->vcpu)))
> 		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> 
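
As a sanity check, the two formulations agree only given the mutual
exclusivity Sean mentioned above. That is easy to verify exhaustively with a
small standalone C sketch (illustrative names and helpers, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Two-branch form from the diff above (illustrative, not kernel code). */
static bool two_branch(bool mmio_only, bool can_mmio, bool feat_vm)
{
	if (mmio_only)
		return can_mmio;	/* clear only for MMIO-capable guests */
	return feat_vm;			/* clear iff the VM feature bit is set */
}

/* Sean's single-condition form, with the VM feature bit swapped in. */
static bool single_cond(bool mmio_only, bool can_mmio, bool feat_vm)
{
	return feat_vm || (mmio_only && can_mmio);
}

/*
 * The two forms agree on every input where the MMIO-only static key and
 * the feature bit are not set simultaneously (they are mutually
 * exclusive in practice, as noted above).
 */
static bool forms_agree(void)
{
	for (int m = 0; m < 2; m++)
		for (int a = 0; a < 2; a++)
			for (int f = 0; f < 2; f++) {
				if (m && f)
					continue; /* excluded by the invariant */
				if (two_branch(m, a, f) != single_cond(m, a, f))
					return false;
			}
	return true;
}
```

Without the mutual-exclusion invariant the forms diverge, e.g. for an
MMIO-incapable guest on a CPU with both knobs set.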

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30  5:52     ` Yao Yuan
@ 2025-10-30  6:17       ` Pawan Gupta
  0 siblings, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30  6:17 UTC (permalink / raw)
  To: Yao Yuan
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson, Brendan Jackman

On Thu, Oct 30, 2025 at 01:52:39PM +0800, Yao Yuan wrote:
> On Wed, Oct 29, 2025 at 05:33:46PM +0800, Pawan Gupta wrote:
> > On Wed, Oct 29, 2025 at 02:27:00PM -0700, Pawan Gupta wrote:
> > > When a system is only affected by MMIO Stale Data, VERW mitigation is
> > > currently handled differently than other data sampling attacks like
> > > MDS/TAA/RFDS, which do the VERW in asm. This is because for MMIO Stale Data,
> > > VERW is needed only when the guest can access host MMIO, which was tricky to
> > > check in asm.
> > >
> > > Refactoring done by:
> > >
> > >   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
> > >   MMIO into the guest")
> > >
> > > now makes it easier to execute VERW conditionally in asm based on
> > > VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
> > >
> > > Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> > > have a single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> > > call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
> > >
> > > This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> > > L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> > > mitigations are independent of each other. This has little
> > > practical implication since there are no CPUs that are affected by L1TF and
> > > are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> > > But, this makes the code cleaner and easier to maintain.
> > >
> > > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > > ---
> > >  arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
> > >  arch/x86/kvm/vmx/vmenter.S   |  5 +++++
> > >  arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
> > >  3 files changed, 21 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> > > index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
> > > --- a/arch/x86/kvm/vmx/run_flags.h
> > > +++ b/arch/x86/kvm/vmx/run_flags.h
> > > @@ -2,12 +2,12 @@
> > >  #ifndef __KVM_X86_VMX_RUN_FLAGS_H
> > >  #define __KVM_X86_VMX_RUN_FLAGS_H
> > >
> > > -#define VMX_RUN_VMRESUME_SHIFT				0
> > > -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
> > > -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
> > > +#define VMX_RUN_VMRESUME_SHIFT			0
> > > +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
> > > +#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
> > >
> > > -#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
> > > -#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > > -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
> > > +#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
> > > +#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > > +#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
> > >
> > >  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> > > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > > index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
> > > --- a/arch/x86/kvm/vmx/vmenter.S
> > > +++ b/arch/x86/kvm/vmx/vmenter.S
> > > @@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > >  	/* Load @regs to RAX. */
> > >  	mov (%_ASM_SP), %_ASM_AX
> > >
> > > +	/* jz .Lskip_clear_cpu_buffers below relies on this */
> > > +	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> > > +
> > >  	/* Check if vmlaunch or vmresume is needed */
> > >  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> > >
> > > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > >  	/* Load guest RAX.  This kills the @regs pointer! */
> > >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> > >
> > > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > > +	jz .Lskip_clear_cpu_buffers
> > >  	/* Clobbers EFLAGS.ZF */
> > >  	VM_CLEAR_CPU_BUFFERS
> > >  .Lskip_clear_cpu_buffers:
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 451be757b3d1b2fec6b2b79157f26dd43bc368b8..303935882a9f8d1d8f81a499cdce1fdc8dad62f0 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -903,9 +903,16 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> > >  	if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
> > >  		flags |= VMX_RUN_SAVE_SPEC_CTRL;
> > >
> > > -	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only) &&
> > > -	    kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> > > -		flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> > > +	/*
> > > +	 * When affected only by MMIO Stale Data (and not by other data
> > > +	 * sampling attacks), clear CPU buffers only for MMIO-capable guests.
> > > +	 */
> > > +	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> > > +		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> > > +			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > > +	} else {
> > > +		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > > +	}
> >
> > Setting the flag here is harmless but not necessary when the CPU is not
> > affected by any of the data sampling attacks. VM_CLEAR_CPU_BUFFERS would be
> > a NOP in that case.
> >
> > However, if I look at this code in a year or two, I will be confused about
> > why the flag is always set on unaffected CPUs. The change below, which sets
> > the flag conditionally, makes the intent clearer.
> >
> > ---
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 303935882a9f..0eab59ab2698 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -910,7 +910,7 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> >  	if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> >  		if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> >  			flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> > -	} else {
> > +	} else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_VM)) {
> >  		flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> >  	}
> >
> 
> Oh, no need to wait a year or two; I was already confused when I looked at
> this part for the first time. But this change makes it clearer to me
> anyway.

:-) I felt the same after sending the patch.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 0/3] Unify VERW mitigation for guests
  2025-10-30  0:29 ` [PATCH 0/3] " Sean Christopherson
@ 2025-10-30 10:28   ` Borislav Petkov
  0 siblings, 0 replies; 27+ messages in thread
From: Borislav Petkov @ 2025-10-30 10:28 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Pawan Gupta, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin, Paolo Bonzini,
	linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed, Oct 29, 2025 at 05:29:08PM -0700, Sean Christopherson wrote:
> Any objection to taking these through the KVM tree when they're ready?  There
> will be a conflict in vmx.c with an L1TF related cleanup, and that conflict is
> actually helpful in that the two series feed off each other a little bit.

I don't see why not.

At the moment, we don't have a whole lot in tip touching that area and if it
ends up appearing, I'd ping you for an immutable branch I could use.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
  2025-10-29 21:26 ` [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Pawan Gupta
  2025-10-29 22:13   ` Pawan Gupta
@ 2025-10-30 12:28   ` Brendan Jackman
  2025-10-30 18:43     ` Pawan Gupta
  1 sibling, 1 reply; 27+ messages in thread
From: Brendan Jackman @ 2025-10-30 12:28 UTC (permalink / raw)
  To: Pawan Gupta, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> TSA mitigation:
>
>   d8010d4ba43e ("x86/bugs: Add a Transient Scheduler Attacks mitigation")
>
> introduced VM_CLEAR_CPU_BUFFERS for guests on AMD CPUs. Currently, on Intel,
> CLEAR_CPU_BUFFERS is used for guests, which has a much broader scope
> (it also covers kernel->user).
>
> Make mitigations on Intel consistent with TSA. This will help handle
> guest-only mitigations better in the future.
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
>  arch/x86/kernel/cpu/bugs.c | 9 +++++++--
>  arch/x86/kvm/vmx/vmenter.S | 3 ++-
>  2 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index d7fa03bf51b4517c12cc68e7c441f7589a4983d1..6d00a9ea7b4f28da291114a7a096b26cc129b57e 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -194,7 +194,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
>  
>  /*
>   * Controls CPU Fill buffer clear before VMenter. This is a subset of
> - * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
> + * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
>   * mitigation is required.
>   */

So if I understand correctly, with this patch the aim is:

X86_FEATURE_CLEAR_CPU_BUF means verw before exit to usermode

X86_FEATURE_CLEAR_CPU_BUF_VM means unconditional verw before VM Enter

cpu_buf_vm_clear[_mmio_only] means verw before VM Enter for
MMIO-capable guests.
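
If I've got that right, the intended per-VM-entry decision can be sketched as
a tiny standalone C model (purely illustrative, not kernel code; the field
names just mirror the kernel's feature bits and static key):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical snapshot of the three knobs described above. */
struct verw_state {
	bool clear_cpu_buf;		/* X86_FEATURE_CLEAR_CPU_BUF: VERW on exit to user */
	bool clear_cpu_buf_vm;		/* X86_FEATURE_CLEAR_CPU_BUF_VM: VERW before VM entry */
	bool mmio_only;			/* cpu_buf_vm_clear_mmio_only static key */
	bool guest_can_access_mmio;	/* kvm_vcpu_can_access_host_mmio() result */
};

static bool verw_before_vmenter(const struct verw_state *s)
{
	/* MMIO-only case: clear buffers just for MMIO-capable guests. */
	if (s->mmio_only)
		return s->guest_can_access_mmio;

	/* Otherwise clear unconditionally when the VM feature is set. */
	return s->clear_cpu_buf_vm;
}
```

Note that clear_cpu_buf on its own only affects the exit-to-user path, not VM
entry, which is the distinction this series sharpens.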

Since this is being cleaned up, can we also:

- Update the definition of X86_FEATURE_CLEAR_CPU_BUF in cpufeatures.h to
  say what context it applies to (now it's specifically exit to user)

- Clear up how verw_clear_cpu_buf_mitigation_selected relates to these
  two flags. Thinking aloud here... it looks like this is set:

  - If MDS mitigations are on, meaning both flags are set

  - If TAA mitigations are on, meaning both flags are set

  - If MMIO mitigations are on, and the CPU has MDS or TAA. In this case
    both flags are set, but this causality is messier.

  - If RFDS mitigations are on and supported, meaning both flags are set

  So if I'm reading this correctly, whenever
  verw_clear_cpu_buf_mitigation_selected is set, we should expect both flags
  to be enabled. So I think all that's needed is to add a reference to
  X86_FEATURE_CLEAR_CPU_BUF_VM to the comment?
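
If that reading holds, the relationship is just an implication, which can be
stated as a standalone C check (hypothetical names, only for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model only: if the VERW mitigation was selected, both
 * the exit-to-user and the VM-entry feature flags should be set.
 */
struct mitig_state {
	bool verw_selected;	/* verw_clear_cpu_buf_mitigation_selected */
	bool clear_cpu_buf;	/* X86_FEATURE_CLEAR_CPU_BUF */
	bool clear_cpu_buf_vm;	/* X86_FEATURE_CLEAR_CPU_BUF_VM */
};

static bool verw_flags_consistent(const struct mitig_state *s)
{
	/* "selected" implies both flags. */
	return !s->verw_selected || (s->clear_cpu_buf && s->clear_cpu_buf_vm);
}
```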

I think we also need to update the assertion of vmx->disable_fb_clear?

Anyway, thanks, this seems like a very clear improvement to me.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
  2025-10-29 21:26 ` [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only Pawan Gupta
  2025-10-30  0:18   ` Sean Christopherson
@ 2025-10-30 12:29   ` Brendan Jackman
  2025-10-30 16:56     ` Pawan Gupta
  1 sibling, 1 reply; 27+ messages in thread
From: Brendan Jackman @ 2025-10-30 12:29 UTC (permalink / raw)
  To: Pawan Gupta, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> cpu_buf_vm_clear static key is only used by the MMIO Stale Data mitigation.
> Rename it to avoid mixing it up with X86_FEATURE_CLEAR_CPU_BUF_VM.
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

(except the build issue)

Reviewed-by: Brendan Jackman <jackmanb@google.com>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-29 21:26 ` [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests Pawan Gupta
  2025-10-30  0:27   ` Sean Christopherson
  2025-10-30  0:33   ` Pawan Gupta
@ 2025-10-30 12:52   ` Brendan Jackman
  2025-10-30 16:06     ` Sean Christopherson
  2025-10-30 17:28     ` Pawan Gupta
  2 siblings, 2 replies; 27+ messages in thread
From: Brendan Jackman @ 2025-10-30 12:52 UTC (permalink / raw)
  To: Pawan Gupta, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini
  Cc: linux-kernel, kvm, Tao Zhang, Jim Mattson, Brendan Jackman

On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> When a system is only affected by MMIO Stale Data, VERW mitigation is
> currently handled differently than other data sampling attacks like
> MDS/TAA/RFDS, which do the VERW in asm. This is because for MMIO Stale Data,
> VERW is needed only when the guest can access host MMIO, which was tricky to
> check in asm.
>
> Refactoring done by:
>
>   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
>   MMIO into the guest")
>
> now makes it easier to execute VERW conditionally in asm based on
> VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
>
> Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> have a single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
>
> This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> mitigations are independent of each other. Although, this has little
> practical implication since there are no CPUs that are affected by L1TF and
> are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> But, this makes the code cleaner and easier to maintain.
>
> Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> ---
>  arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
>  arch/x86/kvm/vmx/vmenter.S   |  5 +++++
>  arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
>  3 files changed, 21 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ b/arch/x86/kvm/vmx/run_flags.h
> @@ -2,12 +2,12 @@
>  #ifndef __KVM_X86_VMX_RUN_FLAGS_H
>  #define __KVM_X86_VMX_RUN_FLAGS_H
>  
> -#define VMX_RUN_VMRESUME_SHIFT				0
> -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
> +#define VMX_RUN_VMRESUME_SHIFT			0
> +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
> +#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
>  
> -#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
> -#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
> +#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
> +#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> +#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
>  
>  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
>  	/* Load @regs to RAX. */
>  	mov (%_ASM_SP), %_ASM_AX
>  
> +	/* jz .Lskip_clear_cpu_buffers below relies on this */
> +	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> +
>  	/* Check if vmlaunch or vmresume is needed */
>  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
>  
> @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
>  	/* Load guest RAX.  This kills the @regs pointer! */
>  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>  
> +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> +	jz .Lskip_clear_cpu_buffers

Hm, it's a bit weird that we have the "alternative" inside
VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
unconditionally. 

If we really want to super-optimise the no-mitigations-needed case,
shouldn't we want to avoid the conditional in the asm if it never
actually leads to a flush?

On the other hand, if we don't mind a couple of extra instructions,
shouldn't we be fine with just having the whole asm code based solely
on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?

I guess the issue is that in the latter case we'd be back to having
unnecessary inconsistency with AMD code while in the former case... well
that would just be really annoying asm code - am I on the right
wavelength there? So I'm not necessarily asking for changes here, just
probing in case it prompts any interesting insights on your side.

(Also, maybe this test+jz has a similar cost to the nops that the
"alternative" would inject anyway...?)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 12:52   ` Brendan Jackman
@ 2025-10-30 16:06     ` Sean Christopherson
  2025-10-30 16:26       ` Brendan Jackman
  2025-10-30 17:54       ` Pawan Gupta
  2025-10-30 17:28     ` Pawan Gupta
  1 sibling, 2 replies; 27+ messages in thread
From: Sean Christopherson @ 2025-10-30 16:06 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Pawan Gupta, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Paolo Bonzini, linux-kernel, kvm, Tao Zhang, Jim Mattson

On Thu, Oct 30, 2025, Brendan Jackman wrote:
> > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load guest RAX.  This kills the @regs pointer! */
> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >  
> > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > +	jz .Lskip_clear_cpu_buffers
> 
> Hm, it's a bit weird that we have the "alternative" inside
> VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
> unconditionally. 

Yeah, I had the same reaction, but couldn't come up with a clean-ish solution
and so ignored it :-)

> If we really want to super-optimise the no-mitigations-needed case,
> shouldn't we want to avoid the conditional in the asm if it never
> actually leads to a flush?
> 
> On the other hand, if we don't mind a couple of extra instructions,
> shouldn't we be fine with just having the whole asm code based solely
> on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
> X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?
> 
> I guess the issue is that in the latter case we'd be back to having
> unnecessary inconsistency with AMD code while in the former case... well
> that would just be really annoying asm code - am I on the right
> wavelength there? So I'm not necessarily asking for changes here, just
> probing in case it prompts any interesting insights on your side.
> 
> (Also, maybe this test+jz has a similar cost to the nops that the
> "alternative" would inject anyway...?)

It's not at all expensive.  My bigger objection is that it's hard to follow what's
happening.

Aha!  Idea.  IIUC, only the MMIO Stale Data mitigation is conditional based on
the properties of the vCPU, so we should track _that_ in a VMX_RUN flag.  And then if we add yet
another X86_FEATURE for MMIO Stale Data flushing (instead of a static branch),
this path can use ALTERNATIVE_2.  The use of ALTERNATIVE_2 isn't exactly pretty,
but IMO this is much more intuitive.

diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index 004fe1ca89f0..b9651960e069 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -4,10 +4,10 @@
 
 #define VMX_RUN_VMRESUME_SHIFT                 0
 #define VMX_RUN_SAVE_SPEC_CTRL_SHIFT           1
-#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT                2
+#define VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT     2
 
 #define VMX_RUN_VMRESUME               BIT(VMX_RUN_VMRESUME_SHIFT)
 #define VMX_RUN_SAVE_SPEC_CTRL         BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
-#define VMX_RUN_CLEAR_CPU_BUFFERS      BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
+#define VMX_RUN_CAN_ACCESS_HOST_MMIO   BIT(VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT)
 
 #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index ec91f4267eca..50a748b489b4 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -137,8 +137,10 @@ SYM_FUNC_START(__vmx_vcpu_run)
        /* Load @regs to RAX. */
        mov (%_ASM_SP), %_ASM_AX
 
-       /* jz .Lskip_clear_cpu_buffers below relies on this */
-       test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
+       /* jz .Lskip_clear_cpu_buffers below relies on ZF from this test */
+       ALTERNATIVE_2 "",
+                     "", X86_FEATURE_CLEAR_CPU_BUF,
+                     "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO
 
        /* Check if vmlaunch or vmresume is needed */
        bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
@@ -163,8 +165,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
        /* Load guest RAX.  This kills the @regs pointer! */
        mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
-       /* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
-       jz .Lskip_clear_cpu_buffers
+       ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",
+                     "", X86_FEATURE_CLEAR_CPU_BUF,
+                     "jz .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO
        /* Clobbers EFLAGS.ZF */
        VM_CLEAR_CPU_BUFFERS
 .Lskip_clear_cpu_buffers:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 303935882a9f..b9e7247e6b9a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -903,16 +903,9 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
        if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
                flags |= VMX_RUN_SAVE_SPEC_CTRL;
 
-       /*
-        * When affected by MMIO Stale Data only (and not other data sampling
-        * attacks) only clear for MMIO-capable guests.
-        */
-       if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
-               if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
-                       flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
-       } else {
-               flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
-       }
+       if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO) &&
+           kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
+               flags |= VMX_RUN_CAN_ACCESS_HOST_MMIO;
 
        return flags;
 }

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 16:06     ` Sean Christopherson
@ 2025-10-30 16:26       ` Brendan Jackman
  2025-10-30 18:06         ` Pawan Gupta
  2025-10-30 17:54       ` Pawan Gupta
  1 sibling, 1 reply; 27+ messages in thread
From: Brendan Jackman @ 2025-10-30 16:26 UTC (permalink / raw)
  To: Sean Christopherson, Brendan Jackman
  Cc: Pawan Gupta, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Paolo Bonzini, linux-kernel, kvm, Tao Zhang, Jim Mattson

On Thu Oct 30, 2025 at 4:06 PM UTC, Sean Christopherson wrote:
> On Thu, Oct 30, 2025, Brendan Jackman wrote:
>> > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
>> >  	/* Load guest RAX.  This kills the @regs pointer! */
>> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>> >  
>> > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
>> > +	jz .Lskip_clear_cpu_buffers
>> 
>> Hm, it's a bit weird that we have the "alternative" inside
>> VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
>> unconditionally. 
>
> Yeah, I had the same reaction, but couldn't come up with a clean-ish solution
> and so ignored it :-)
>
>> If we really want to super-optimise the no-mitigations-needed case,
>> shouldn't we want to avoid the conditional in the asm if it never
>> actually leads to a flush?
>> 
>> On the other hand, if we don't mind a couple of extra instructions,
>> shouldn't we be fine with just having the whole asm code based solely
>> on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
>> X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?
>> 
>> I guess the issue is that in the latter case we'd be back to having
>> unnecessary inconsistency with AMD code while in the former case... well
>> that would just be really annoying asm code - am I on the right
>> wavelength there? So I'm not necessarily asking for changes here, just
>> probing in case it prompts any interesting insights on your side.
>> 
>> (Also, maybe this test+jz has a similar cost to the nops that the
>> "alternative" would inject anyway...?)
>
> It's not at all expensive.  My bigger objection is that it's hard to follow what's
> happening.
>
> Aha!  Idea.  IIUC, only the MMIO Stale Data is conditional based on the properties
> of the vCPU, so we should track _that_ in a KVM_RUN flag.  And then if we add yet
> another X86_FEATURE for MMIO Stale Data flushing (instead of a static branch),
> this path can use ALTERNATIVE_2.  The use of ALTERNATIVE_2 isn't exactly pretty,
> but IMO this is much more intuitive.
>
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> index 004fe1ca89f0..b9651960e069 100644
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ b/arch/x86/kvm/vmx/run_flags.h
> @@ -4,10 +4,10 @@
>  
>  #define VMX_RUN_VMRESUME_SHIFT                 0
>  #define VMX_RUN_SAVE_SPEC_CTRL_SHIFT           1
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT                2
> +#define VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT     2
>  
>  #define VMX_RUN_VMRESUME               BIT(VMX_RUN_VMRESUME_SHIFT)
>  #define VMX_RUN_SAVE_SPEC_CTRL         BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> -#define VMX_RUN_CLEAR_CPU_BUFFERS      BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
> +#define VMX_RUN_CAN_ACCESS_HOST_MMIO   BIT(VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT)
>  
>  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index ec91f4267eca..50a748b489b4 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -137,8 +137,10 @@ SYM_FUNC_START(__vmx_vcpu_run)
>         /* Load @regs to RAX. */
>         mov (%_ASM_SP), %_ASM_AX
>  
> -       /* jz .Lskip_clear_cpu_buffers below relies on this */
> -       test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> +       /* Check if jz .Lskip_clear_cpu_buffers below relies on this */
> +       ALTERNATIVE_2 "",
> +                     "", X86_FEATURE_CLEAR_CPU_BUF
> +                     "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

Er, I don't understand the ALTERNATIVE_2 here, don't we just need this?

ALTERNATIVE "", "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx",
	    X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

>  
>         /* Check if vmlaunch or vmresume is needed */
>         bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> @@ -163,8 +165,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
>         /* Load guest RAX.  This kills the @regs pointer! */
>         mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>  
> -       /* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> -       jz .Lskip_clear_cpu_buffers
> +       ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",
> +                     "", X86_FEATURE_CLEAR_CPU_BUF
> +                     "jz .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

To fit with the rest of Pawan's code this would need
s/X86_FEATURE_CLEAR_CPU_BUF/X86_FEATURE_CLEAR_CPU_BUF_VM/, right?

In case it reveals that I just don't understand ALTERNATIVE_2 at all,
I'm reading this second one as saying:

if cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO)
   "jz .Lskip_clear_cpu_buffers "
else if !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_VM)
   "jmp .Lskip_clear_cpu_buffers"

I.e. I'm understanding X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO as mutually
exclusive with X86_FEATURE_CLEAR_CPU_BUF_VM, it means "you _only_ need
to verw MMIO". So basically we moved cpu_buf_vm_clear_mmio_only into a
CPU feature to make it accessible from asm?

(Also let's use BUF instead of BUFFERS in the name, for consistency)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only
  2025-10-30 12:29   ` Brendan Jackman
@ 2025-10-30 16:56     ` Pawan Gupta
  0 siblings, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30 16:56 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson

On Thu, Oct 30, 2025 at 12:29:39PM +0000, Brendan Jackman wrote:
> On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> > cpu_buf_vm_clear static key is only used by the MMIO Stale Data mitigation.
> > Rename it to avoid mixing it up with X86_FEATURE_CLEAR_CPU_BUF_VM.
> >
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> 
> (except the build issue)

Will fix that.

> Reviewed-by: Brendan Jackman <jackmanb@google.com>

Thanks.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 12:52   ` Brendan Jackman
  2025-10-30 16:06     ` Sean Christopherson
@ 2025-10-30 17:28     ` Pawan Gupta
  2025-10-30 18:21       ` Sean Christopherson
  1 sibling, 1 reply; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30 17:28 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson

On Thu, Oct 30, 2025 at 12:52:12PM +0000, Brendan Jackman wrote:
> On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> > When a system is only affected by MMIO Stale Data, VERW mitigation is
> > currently handled differently than other data sampling attacks like
> > MDS/TAA/RFDS, that do the VERW in asm. This is because for MMIO Stale Data,
> > VERW is needed only when the guest can access host MMIO, this was tricky to
> > check in asm.
> >
> > Refactoring done by:
> >
> >   83ebe7157483 ("KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps
> >   MMIO into the guest")
> >
> > now makes it easier to execute VERW conditionally in asm based on
> > VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO.
> >
> > Unify MMIO Stale Data mitigation with other VERW-based mitigations and only
> > have single VERW callsite in __vmx_vcpu_run(). Remove the now unnecessary
> > call to x86_clear_cpu_buffer() in vmx_vcpu_enter_exit().
> >
> > This also untangles L1D Flush and MMIO Stale Data mitigation. Earlier, an
> > L1D Flush would skip the VERW for MMIO Stale Data. Now, both the
> > mitigations are independent of each other. Although, this has little
> > practical implication since there are no CPUs that are affected by L1TF and
> > are *only* affected by MMIO Stale Data (i.e. not affected by MDS/TAA/RFDS).
> > But, this makes the code cleaner and easier to maintain.
> >
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> >  arch/x86/kvm/vmx/run_flags.h | 12 ++++++------
> >  arch/x86/kvm/vmx/vmenter.S   |  5 +++++
> >  arch/x86/kvm/vmx/vmx.c       | 26 ++++++++++----------------
> >  3 files changed, 21 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> > index 2f20fb170def8b10c8c0c46f7ba751f845c19e2c..004fe1ca89f05524bf3986540056de2caf0abbad 100644
> > --- a/arch/x86/kvm/vmx/run_flags.h
> > +++ b/arch/x86/kvm/vmx/run_flags.h
> > @@ -2,12 +2,12 @@
> >  #ifndef __KVM_X86_VMX_RUN_FLAGS_H
> >  #define __KVM_X86_VMX_RUN_FLAGS_H
> >  
> > -#define VMX_RUN_VMRESUME_SHIFT				0
> > -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT			1
> > -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT	2
> > +#define VMX_RUN_VMRESUME_SHIFT			0
> > +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT		1
> > +#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT		2
> >  
> > -#define VMX_RUN_VMRESUME			BIT(VMX_RUN_VMRESUME_SHIFT)
> > -#define VMX_RUN_SAVE_SPEC_CTRL			BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
> > +#define VMX_RUN_VMRESUME		BIT(VMX_RUN_VMRESUME_SHIFT)
> > +#define VMX_RUN_SAVE_SPEC_CTRL		BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > +#define VMX_RUN_CLEAR_CPU_BUFFERS	BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
> >  
> >  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index 0dd23beae207795484150698d1674dc4044cc520..ec91f4267eca319ffa8e6079887e8dfecc7f96d8 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -137,6 +137,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load @regs to RAX. */
> >  	mov (%_ASM_SP), %_ASM_AX
> >  
> > +	/* jz .Lskip_clear_cpu_buffers below relies on this */
> > +	test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> > +
> >  	/* Check if vmlaunch or vmresume is needed */
> >  	bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> >  
> > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >  	/* Load guest RAX.  This kills the @regs pointer! */
> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >  
> > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > +	jz .Lskip_clear_cpu_buffers
> 
> Hm, it's a bit weird that we have the "alternative" inside
> VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
> unconditionally. 

Exactly, but it is tricky to handle the two cases below in asm:

1. MDS -> Always do VM_CLEAR_CPU_BUFFERS

2. MMIO -> Do VM_CLEAR_CPU_BUFFERS only if guest can access host MMIO

In the MMIO case, one guest may have access to host MMIO while another may
not. Alternatives alone can't handle this case as they patch code at boot,
which is then set in stone. One way is to move the conditional inside
VM_CLEAR_CPU_BUFFERS, which would take a flag as an argument.

> If we really want to super-optimise the no-mitigations-needed case,
> shouldn't we want to avoid the conditional in the asm if it never
> actually leads to a flush?

Ya, so effectively, have VM_CLEAR_CPU_BUFFERS alternative spit out
conditional VERW when affected by MMIO_only, otherwise an unconditional
VERW.

> On the other hand, if we don't mind a couple of extra instructions,
> shouldn't we be fine with just having the whole asm code based solely
> on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
> X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?

That's also an option.

> I guess the issue is that in the latter case we'd be back to having
> unnecessary inconsistency with AMD code while in the former case... well
> that would just be really annoying asm code - am I on the right
> wavelength there? So I'm not necessarily asking for changes here, just
> probing in case it prompts any interesting insights on your side.
> 
> (Also, maybe this test+jz has a similar cost to the nops that the
> "alternative" would inject anyway...?)

Likely yes. test+jz is a necessary evil, needed because MMIO Stale Data
requires different per-guest handling.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 16:06     ` Sean Christopherson
  2025-10-30 16:26       ` Brendan Jackman
@ 2025-10-30 17:54       ` Pawan Gupta
  1 sibling, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30 17:54 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Brendan Jackman, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Paolo Bonzini, linux-kernel, kvm, Tao Zhang, Jim Mattson

On Thu, Oct 30, 2025 at 09:06:41AM -0700, Sean Christopherson wrote:
> On Thu, Oct 30, 2025, Brendan Jackman wrote:
> > > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> > >  	/* Load guest RAX.  This kills the @regs pointer! */
> > >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> > >  
> > > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > > +	jz .Lskip_clear_cpu_buffers
> > 
> > Hm, it's a bit weird that we have the "alternative" inside
> > VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
> > unconditionally. 
> 
> Yeah, I had the same reaction, but couldn't come up with a clean-ish solution
> and so ignored it :-)

Ya, it is tricky to handle per-guest mitigation for MMIO in a clean way.

> > If we really want to super-optimise the no-mitigations-needed case,
> > shouldn't we want to avoid the conditional in the asm if it never
> > actually leads to a flush?
> > 
> > On the other hand, if we don't mind a couple of extra instructions,
> > shouldn't we be fine with just having the whole asm code based solely
> > on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
> > X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?
> > 
> > I guess the issue is that in the latter case we'd be back to having
> > unnecessary inconsistency with AMD code while in the former case... well
> > that would just be really annoying asm code - am I on the right
> > wavelength there? So I'm not necessarily asking for changes here, just
> > probing in case it prompts any interesting insights on your side.
> > 
> > (Also, maybe this test+jz has a similar cost to the nops that the
> > "alternative" would inject anyway...?)
> 
> It's not at all expensive.  My bigger objection is that it's hard to follow what's
> happening.
> 
> Aha!  Idea.  IIUC, only the MMIO Stale Data is conditional based on the properties
> of the vCPU, so we should track _that_ in a KVM_RUN flag.  And then if we add yet
> another X86_FEATURE for MMIO Stale Data flushing (instead of a static branch),
> this path can use ALTERNATIVE_2.  The use of ALTERNATIVE_2 isn't exactly pretty,
> but IMO this is much more intuitive.
> 
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> index 004fe1ca89f0..b9651960e069 100644
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ b/arch/x86/kvm/vmx/run_flags.h
> @@ -4,10 +4,10 @@
>  
>  #define VMX_RUN_VMRESUME_SHIFT                 0
>  #define VMX_RUN_SAVE_SPEC_CTRL_SHIFT           1
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT                2
> +#define VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT     2
>  
>  #define VMX_RUN_VMRESUME               BIT(VMX_RUN_VMRESUME_SHIFT)
>  #define VMX_RUN_SAVE_SPEC_CTRL         BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> -#define VMX_RUN_CLEAR_CPU_BUFFERS      BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
> +#define VMX_RUN_CAN_ACCESS_HOST_MMIO   BIT(VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT)
>  
>  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index ec91f4267eca..50a748b489b4 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -137,8 +137,10 @@ SYM_FUNC_START(__vmx_vcpu_run)
>         /* Load @regs to RAX. */
>         mov (%_ASM_SP), %_ASM_AX
>  
> -       /* jz .Lskip_clear_cpu_buffers below relies on this */
> -       test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> +       /* Check if jz .Lskip_clear_cpu_buffers below relies on this */
> +       ALTERNATIVE_2 "",
> +                     "", X86_FEATURE_CLEAR_CPU_BUF
> +                     "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

This approach looks better. I think we will be fine without ALTERNATIVE_2:

       ALTERNATIVE "", "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

>         /* Check if vmlaunch or vmresume is needed */
>         bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> @@ -163,8 +165,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
>         /* Load guest RAX.  This kills the @regs pointer! */
>         mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>  
> -       /* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> -       jz .Lskip_clear_cpu_buffers
> +       ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",
> +                     "", X86_FEATURE_CLEAR_CPU_BUF
> +                     "jz .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

I am not 100% sure, but I believe the _MMIO check needs to be before
X86_FEATURE_CLEAR_CPU_BUF_VM, because MMIO mitigation also sets _VM:

       ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",
                     "jz .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO,
                     "", X86_FEATURE_CLEAR_CPU_BUF_VM

>         /* Clobbers EFLAGS.ZF */
>         VM_CLEAR_CPU_BUFFERS
>  .Lskip_clear_cpu_buffers:
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 303935882a9f..b9e7247e6b9a 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -903,16 +903,9 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
>         if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
>                 flags |= VMX_RUN_SAVE_SPEC_CTRL;
>  
> -       /*
> -        * When affected by MMIO Stale Data only (and not other data sampling
> -        * attacks) only clear for MMIO-capable guests.
> -        */
> -       if (static_branch_unlikely(&cpu_buf_vm_clear_mmio_only)) {
> -               if (kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> -                       flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> -       } else {
> -               flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
> -       }
> +       if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO) &&
> +           kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> +               flags |= VMX_RUN_CAN_ACCESS_HOST_MMIO;

Thanks Sean! This is much cleaner.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 16:26       ` Brendan Jackman
@ 2025-10-30 18:06         ` Pawan Gupta
  0 siblings, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30 18:06 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Sean Christopherson, Thomas Gleixner, Borislav Petkov,
	Peter Zijlstra, Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86,
	H. Peter Anvin, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson

On Thu, Oct 30, 2025 at 04:26:10PM +0000, Brendan Jackman wrote:
> On Thu Oct 30, 2025 at 4:06 PM UTC, Sean Christopherson wrote:
> > On Thu, Oct 30, 2025, Brendan Jackman wrote:
> >> > @@ -160,6 +163,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >> >  	/* Load guest RAX.  This kills the @regs pointer! */
> >> >  	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >> >  
> >> > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> >> > +	jz .Lskip_clear_cpu_buffers
> >> 
> >> Hm, it's a bit weird that we have the "alternative" inside
> >> VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
> >> unconditionally. 
> >
> > Yeah, I had the same reaction, but couldn't come up with a clean-ish solution
> > and so ignored it :-)
> >
> >> If we really want to super-optimise the no-mitigations-needed case,
> >> shouldn't we want to avoid the conditional in the asm if it never
> >> actually leads to a flush?
> >> 
> >> On the other hand, if we don't mind a couple of extra instructions,
> >> shouldn't we be fine with just having the whole asm code based solely
> >> on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
> >> X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?
> >> 
> >> I guess the issue is that in the latter case we'd be back to having
> >> unnecessary inconsistency with AMD code while in the former case... well
> >> that would just be really annoying asm code - am I on the right
> >> wavelength there? So I'm not necessarily asking for changes here, just
> >> probing in case it prompts any interesting insights on your side.
> >> 
> >> (Also, maybe this test+jz has a similar cost to the nops that the
> >> "alternative" would inject anyway...?)
> >
> > It's not at all expensive.  My bigger objection is that it's hard to follow what's
> > happening.
> >
> > Aha!  Idea.  IIUC, only the MMIO Stale Data is conditional based on the properties
> > of the vCPU, so we should track _that_ in a KVM_RUN flag.  And then if we add yet
> > another X86_FEATURE for MMIO Stale Data flushing (instead of a static branch),
> > this path can use ALTERNATIVE_2.  The use of ALTERNATIVE_2 isn't exactly pretty,
> > but IMO this is much more intuitive.
> >
> > diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> > index 004fe1ca89f0..b9651960e069 100644
> > --- a/arch/x86/kvm/vmx/run_flags.h
> > +++ b/arch/x86/kvm/vmx/run_flags.h
> > @@ -4,10 +4,10 @@
> >  
> >  #define VMX_RUN_VMRESUME_SHIFT                 0
> >  #define VMX_RUN_SAVE_SPEC_CTRL_SHIFT           1
> > -#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT                2
> > +#define VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT     2
> >  
> >  #define VMX_RUN_VMRESUME               BIT(VMX_RUN_VMRESUME_SHIFT)
> >  #define VMX_RUN_SAVE_SPEC_CTRL         BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> > -#define VMX_RUN_CLEAR_CPU_BUFFERS      BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
> > +#define VMX_RUN_CAN_ACCESS_HOST_MMIO   BIT(VMX_RUN_CAN_ACCESS_HOST_MMIO_SHIFT)
> >  
> >  #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> > diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> > index ec91f4267eca..50a748b489b4 100644
> > --- a/arch/x86/kvm/vmx/vmenter.S
> > +++ b/arch/x86/kvm/vmx/vmenter.S
> > @@ -137,8 +137,10 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >         /* Load @regs to RAX. */
> >         mov (%_ASM_SP), %_ASM_AX
> >  
> > -       /* jz .Lskip_clear_cpu_buffers below relies on this */
> > -       test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> > +       /* Check if jz .Lskip_clear_cpu_buffers below relies on this */
> > +       ALTERNATIVE_2 "",
> > +                     "", X86_FEATURE_CLEAR_CPU_BUF
> > +                     "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO
> 
> Er, I don't understand the ALTERNATIVE_2 here, don't we just need this?
> 
> ALTERNATIVE "" "test $VMX_RUN_CAN_ACCESS_HOST_MMIO, %ebx", 
> 	    X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO

Yeah, right.

> >         /* Check if vmlaunch or vmresume is needed */
> >         bt   $VMX_RUN_VMRESUME_SHIFT, %ebx
> > @@ -163,8 +165,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
> >         /* Load guest RAX.  This kills the @regs pointer! */
> >         mov VCPU_RAX(%_ASM_AX), %_ASM_AX
> >  
> > -       /* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > -       jz .Lskip_clear_cpu_buffers
> > +       ALTERNATIVE_2 "jmp .Lskip_clear_cpu_buffers",
> > +                     "", X86_FEATURE_CLEAR_CPU_BUF
> > +                     "jz .Lskip_clear_cpu_buffers", X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO
> 
> To fit with the rest of Pawan's code this would need
> s/X86_FEATURE_CLEAR_CPU_BUF/X86_FEATURE_CLEAR_CPU_BUF_VM/, right?

Yes.

> In case it reveals that I just don't understand ALTERNATIVE_2 at all,
> I'm reading this second one as saying:
> 
> if cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO)
>    "jz .Lskip_clear_cpu_buffers "
> else if !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF_VM)
>    "jmp .Lskip_clear_cpu_buffers"
>
> I.e. I'm understanding X86_FEATURE_CLEAR_CPU_BUFFERS_MMIO as mutually
> exclusive with X86_FEATURE_CLEAR_CPU_BUF_VM, it means "you _only_ need
> to verw MMIO".

Yes, that's also my understanding.

> So basically we moved cpu_buf_vm_clear_mmio_only into a
> CPU feature to make it accessible from asm?

Essentially, yes.

> (Also let's use BUF instead of BUFFERS in the name, for consistency)

Agree.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 17:28     ` Pawan Gupta
@ 2025-10-30 18:21       ` Sean Christopherson
  2025-10-30 19:11         ` Pawan Gupta
  0 siblings, 1 reply; 27+ messages in thread
From: Sean Christopherson @ 2025-10-30 18:21 UTC (permalink / raw)
  To: Pawan Gupta
  Cc: Brendan Jackman, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Paolo Bonzini, linux-kernel, kvm, Tao Zhang, Jim Mattson

On Thu, Oct 30, 2025, Pawan Gupta wrote:
> On Thu, Oct 30, 2025 at 12:52:12PM +0000, Brendan Jackman wrote:
> > On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> > > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > > +	jz .Lskip_clear_cpu_buffers
> > 
> > Hm, it's a bit weird that we have the "alternative" inside
> > VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
> > unconditionally. 
> 
> Exactly, but it is tricky to handle the below 2 cases in asm:
> 
> 1. MDS -> Always do VM_CLEAR_CPU_BUFFERS
> 
> 2. MMIO -> Do VM_CLEAR_CPU_BUFFERS only if guest can access host MMIO

Overloading VM_CLEAR_CPU_BUFFERS for MMIO is all kinds of confusing; e.g. my
pseudo-patch messed things up and is impossible to understand.

> In th MMIO case, one guest may have access to host MMIO while another may
> not. Alternatives alone can't handle this case as they patch code at boot
> which is then set in stone. One way is to move the conditional inside
> VM_CLEAR_CPU_BUFFERS that gets a flag as an arguement.
> 
> > If we really want to super-optimise the no-mitigations-needed case,
> > shouldn't we want to avoid the conditional in the asm if it never
> > actually leads to a flush?
> 
> Ya, so effectively, have VM_CLEAR_CPU_BUFFERS alternative spit out
> conditional VERW when affected by MMIO_only, otherwise an unconditional
> VERW.
> 
> > On the other hand, if we don't mind a couple of extra instructions,
> > shouldn't we be fine with just having the whole asm code based solely
> > on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
> > X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?
> 
> That's also an option.
> 
> > I guess the issue is that in the latter case we'd be back to having
> > unnecessary inconsistency with AMD code while in the former case... well
> > that would just be really annoying asm code - am I on the right
> > wavelength there? So I'm not necessarily asking for changes here, just
> > probing in case it prompts any interesting insights on your side.
> > 
> > (Also, maybe this test+jz has a similar cost to the nops that the
> > "alternative" would inject anyway...?)
> 
> Likely yes. test+jz is a necessary evil that is needed for MMIO Stale Data
> for different per-guest handling.

I don't like any of those options :-)

I again vote to add X86_FEATURE_CLEAR_CPU_BUF_MMIO, and then have it be mutually
exclusive with X86_FEATURE_CLEAR_CPU_BUF_VM, i.e. be an alternative, not a subset.
Because as proposed, the MMIO case _isn't_ a strict subset, it's a subset iff
the MMIO mitigation is enabled, otherwise it's something else entirely.

After half an hour of debugging godawful assembler errors because I used stringify()
instead of __stringify(), I was able to get to this, which IMO is readable and
intuitive.

	/* Clobbers EFLAGS.ZF */
	ALTERNATIVE_2 "",							\
		      __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM,	\
		      __stringify(jz .Lskip_clear_cpu_buffers;			\
				  CLEAR_CPU_BUFFERS_SEQ;			\
				  .Lskip_clear_cpu_buffers:),			\
		      X86_FEATURE_CLEAR_CPU_BUF_MMIO

Without overloading X86_FEATURE_CLEAR_CPU_BUF_VM, e.g. the conversion from a
static branch to X86_FEATURE_CLEAR_CPU_BUF_MMIO is a pure conversion and yields:

	if (verw_clear_cpu_buf_mitigation_selected) {
		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
	} else {
		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_MMIO);
	}

Give me a few hours to test, and I'll post a v2.  The patches are:

Pawan Gupta (1):
  x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well

Sean Christopherson (4):
  x86/bugs: Decouple ALTERNATIVE usage from VERW macro definition
  x86/bugs: Use an X86_FEATURE_xxx flag for the MMIO Stale Data mitigation
  KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2
  x86/bugs: KVM: Move VM_CLEAR_CPU_BUFFERS into SVM as SVM_CLEAR_CPU_BUFFERS

 arch/x86/include/asm/cpufeatures.h   |  1 +
 arch/x86/include/asm/nospec-branch.h | 24 +++++++++---------------
 arch/x86/kernel/cpu/bugs.c           | 18 +++++++-----------
 arch/x86/kvm/mmu/spte.c              |  2 +-
 arch/x86/kvm/svm/vmenter.S           |  6 ++++--
 arch/x86/kvm/vmx/vmenter.S           | 13 ++++++++++++-
 arch/x86/kvm/vmx/vmx.c               | 15 +--------------
 7 files changed, 35 insertions(+), 44 deletions(-)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
  2025-10-30 12:28   ` Brendan Jackman
@ 2025-10-30 18:43     ` Pawan Gupta
  2025-10-31 11:25       ` Brendan Jackman
  0 siblings, 1 reply; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30 18:43 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson

On Thu, Oct 30, 2025 at 12:28:06PM +0000, Brendan Jackman wrote:
> On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> > TSA mitigation:
> >
> >   d8010d4ba43e ("x86/bugs: Add a Transient Scheduler Attacks mitigation")
> >
> > introduced VM_CLEAR_CPU_BUFFERS for guests on AMD CPUs. Currently on Intel,
> > CLEAR_CPU_BUFFERS is used for guests, which has a much broader scope
> > (kernel->user as well).
> >
> > Make mitigations on Intel consistent with TSA. This will help handle
> > guest-only mitigations better in the future.
> >
> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > ---
> >  arch/x86/kernel/cpu/bugs.c | 9 +++++++--
> >  arch/x86/kvm/vmx/vmenter.S | 3 ++-
> >  2 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index d7fa03bf51b4517c12cc68e7c441f7589a4983d1..6d00a9ea7b4f28da291114a7a096b26cc129b57e 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -194,7 +194,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
> >  
> >  /*
> >   * Controls CPU Fill buffer clear before VMenter. This is a subset of
> > - * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
> > + * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
> >   * mitigation is required.
> >   */
> 
> So if I understand correctly with this patch the aim is:
> 
> X86_FEATURE_CLEAR_CPU_BUF means verw before exit to usermode
> 
> X86_FEATURE_CLEAR_CPU_BUF_VM means unconditional verw before VM Enter
> 
> cpu_buf_vm_clear[_mmio_only] means verw before VM Enter for
> MMIO-capable guests.

Yup, that's the goal.

> Since this is being cleaned up can we also:
> 
> - Update the definition of X86_FEATURE_CLEAR_CPU_BUF in cpufeatures.h to
>   say what context it applies to (now it's specifically exit to user)
> 
> - Clear up how verw_clear_cpu_buf_mitigation_selected relates to these
>   two flags. Thinking aloud here... it looks like this is set:
> 
>   - If MDS mitigations are on, meaning both flags are set
> 
>   - If TAA mitigations are on, meaning both flags are set
> 
>   - If MMIO mitigations are on, and the CPU has MDS or TAA. In this case
>     both flags are set, but this causality is messier.
> 
>   - If RFDS mitigations are on and supported, meaning both flags are set
> 
>   So if I'm reading this correctly, whenever
>   verw_clear_cpu_buf_mitigation_selected is set we should expect both flags
>   to be enabled. So I think all that's needed is to add a reference to
>   X86_FEATURE_CLEAR_CPU_BUF_VM to the comment?

Yes. I will update the comment accordingly.

> I think we also need to update the assertion of vmx->disable_fb_clear?

I am not quite sure about the update needed. Could you please clarify?

> Anyway thanks this seems like a very clear improvement to me.

Thanks for the review and suggestions!


* Re: [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests
  2025-10-30 18:21       ` Sean Christopherson
@ 2025-10-30 19:11         ` Pawan Gupta
  0 siblings, 0 replies; 27+ messages in thread
From: Pawan Gupta @ 2025-10-30 19:11 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Brendan Jackman, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
	Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Paolo Bonzini, linux-kernel, kvm, Tao Zhang, Jim Mattson

On Thu, Oct 30, 2025 at 11:21:52AM -0700, Sean Christopherson wrote:
> On Thu, Oct 30, 2025, Pawan Gupta wrote:
> > On Thu, Oct 30, 2025 at 12:52:12PM +0000, Brendan Jackman wrote:
> > > On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
> > > > +	/* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> > > > +	jz .Lskip_clear_cpu_buffers
> > > 
> > > Hm, it's a bit weird that we have the "alternative" inside
> > > VM_CLEAR_CPU_BUFFERS, but then we still keep the test+jz
> > > unconditionally. 
> > 
> > Exactly, but it is tricky to handle the below 2 cases in asm:
> > 
> > 1. MDS -> Always do VM_CLEAR_CPU_BUFFERS
> > 
> > 2. MMIO -> Do VM_CLEAR_CPU_BUFFERS only if guest can access host MMIO
> 
> Overloading VM_CLEAR_CPU_BUFFERS for MMIO is all kinds of confusing, e.g. my
> pseudo-patch messed things up; it's impossible to understand.

Agree.

> > In the MMIO case, one guest may have access to host MMIO while another may
> > not. Alternatives alone can't handle this case, as they patch code at boot
> > which is then set in stone. One way is to move the conditional inside
> > VM_CLEAR_CPU_BUFFERS, which would take a flag as an argument.
> > 
> > > If we really want to super-optimise the no-mitigations-needed case,
> > > shouldn't we want to avoid the conditional in the asm if it never
> > > actually leads to a flush?
> > 
> > Ya, so effectively, have VM_CLEAR_CPU_BUFFERS alternative spit out
> > conditional VERW when affected by MMIO_only, otherwise an unconditional
> > VERW.
> > 
> > > On the other hand, if we don't mind a couple of extra instructions,
> > > shouldn't we be fine with just having the whole asm code based solely
> > > on VMX_RUN_CLEAR_CPU_BUFFERS and leaving the
> > > X86_FEATURE_CLEAR_CPU_BUF_VM to the C code?
> > 
> > That's also an option.
> > 
> > > I guess the issue is that in the latter case we'd be back to having
> > > unnecessary inconsistency with AMD code while in the former case... well
> > > that would just be really annoying asm code - am I on the right
> > > wavelength there? So I'm not necessarily asking for changes here, just
> > > probing in case it prompts any interesting insights on your side.
> > > 
> > > (Also, maybe this test+jz has a similar cost to the nops that the
> > > "alternative" would inject anyway...?)
> > 
> > Likely yes. test+jz is a necessary evil for MMIO Stale Data, which needs
> > different per-guest handling.
> 
> I don't like any of those options :-)
> 
> I again vote to add X86_FEATURE_CLEAR_CPU_BUF_MMIO, and then have it be mutually
> exclusive with X86_FEATURE_CLEAR_CPU_BUF_VM, i.e. be an alternative, not a subset.
> Because as proposed, the MMIO case _isn't_ a strict subset; it's a subset iff
> the MMIO mitigation is enabled, otherwise it's something else entirely.

I don't see a problem with that.

> After half an hour of debugging godawful assembler errors because I used stringify()
> instead of __stringify(),

Not surprised at all :-)

> I was able to get to this, which IMO is readable and intuitive.
> 
> 	/* Clobbers EFLAGS.ZF */
> 	ALTERNATIVE_2 "",							\
> 		      __CLEAR_CPU_BUFFERS, X86_FEATURE_CLEAR_CPU_BUF_VM,	\
> 		      __stringify(jz .Lskip_clear_cpu_buffers;			\
> 				  CLEAR_CPU_BUFFERS_SEQ;			\

Curious what this is doing, I will wait for your patches.

> 				  .Lskip_clear_cpu_buffers:),			\
> 		      X86_FEATURE_CLEAR_CPU_BUF_MMIO
> 
> Without overloading X86_FEATURE_CLEAR_CPU_BUF_VM, e.g. the conversion from a
> static branch to X86_FEATURE_CLEAR_CPU_BUF_MMIO is a pure conversion and yields:
> 
> 	if (verw_clear_cpu_buf_mitigation_selected) {
> 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
> 	} else {
> 		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_MMIO);
> 	}
> 
> Give me a few hours to test, and I'll post a v2.  The patches are:
> 
> Pawan Gupta (1):
>   x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
> 
> Sean Christopherson (4):
>   x86/bugs: Decouple ALTERNATIVE usage from VERW macro definition
>   x86/bugs: Use an X86_FEATURE_xxx flag for the MMIO Stale Data mitigation
>   KVM: VMX: Handle MMIO Stale Data in VM-Enter assembly via ALTERNATIVES_2
>   x86/bugs: KVM: Move VM_CLEAR_CPU_BUFFERS into SVM as SVM_CLEAR_CPU_BUFFERS

Ok, sounds good to me.


* Re: [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well
  2025-10-30 18:43     ` Pawan Gupta
@ 2025-10-31 11:25       ` Brendan Jackman
  0 siblings, 0 replies; 27+ messages in thread
From: Brendan Jackman @ 2025-10-31 11:25 UTC (permalink / raw)
  To: Pawan Gupta, Brendan Jackman
  Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
	Ingo Molnar, Dave Hansen, x86, H. Peter Anvin,
	Sean Christopherson, Paolo Bonzini, linux-kernel, kvm, Tao Zhang,
	Jim Mattson

On Thu Oct 30, 2025 at 6:43 PM UTC, Pawan Gupta wrote:
> On Thu, Oct 30, 2025 at 12:28:06PM +0000, Brendan Jackman wrote:
>> On Wed Oct 29, 2025 at 9:26 PM UTC, Pawan Gupta wrote:
>> > TSA mitigation:
>> >
>> >   d8010d4ba43e ("x86/bugs: Add a Transient Scheduler Attacks mitigation")
>> >
>> > introduced VM_CLEAR_CPU_BUFFERS for guests on AMD CPUs. Currently on Intel,
>> > CLEAR_CPU_BUFFERS is used for guests, which has a much broader scope
>> > (kernel->user as well).
>> >
>> > Make mitigations on Intel consistent with TSA. This will help handle
>> > guest-only mitigations better in the future.
>> >
>> > Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
>> > ---
>> >  arch/x86/kernel/cpu/bugs.c | 9 +++++++--
>> >  arch/x86/kvm/vmx/vmenter.S | 3 ++-
>> >  2 files changed, 9 insertions(+), 3 deletions(-)
>> >
>> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
>> > index d7fa03bf51b4517c12cc68e7c441f7589a4983d1..6d00a9ea7b4f28da291114a7a096b26cc129b57e 100644
>> > --- a/arch/x86/kernel/cpu/bugs.c
>> > +++ b/arch/x86/kernel/cpu/bugs.c
>> > @@ -194,7 +194,7 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
>> >  
>> >  /*
>> >   * Controls CPU Fill buffer clear before VMenter. This is a subset of
>> > - * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
>> > + * X86_FEATURE_CLEAR_CPU_BUF_VM, and should only be enabled when KVM-only
>> >   * mitigation is required.
>> >   */
>> 
>> So if I understand correctly with this patch the aim is:
>> 
>> X86_FEATURE_CLEAR_CPU_BUF means verw before exit to usermode
>> 
>> X86_FEATURE_CLEAR_CPU_BUF_VM means unconditional verw before VM Enter
>> 
>> cpu_buf_vm_clear[_mmio_only] means verw before VM Enter for
>> MMIO-capable guests.
>
> Yup, that's the goal.
>
>> Since this is being cleaned up can we also:
>> 
>> - Update the definition of X86_FEATURE_CLEAR_CPU_BUF in cpufeatures.h to
>>   say what context it applies to (now it's specifically exit to user)
>> 
>> - Clear up how verw_clear_cpu_buf_mitigation_selected relates to these
>>   two flags. Thinking aloud here... it looks like this is set:
>> 
>>   - If MDS mitigations are on, meaning both flags are set
>> 
>>   - If TAA mitigations are on, meaning both flags are set
>> 
>>   - If MMIO mitigations are on, and the CPU has MDS or TAA. In this case
>>     both flags are set, but this causality is messier.
>> 
>>   - If RFDS mitigations are on and supported, meaning both flags are set
>> 
>>   So if I'm reading this correctly, whenever
>>   verw_clear_cpu_buf_mitigation_selected is set we should expect both flags
>>   to be enabled. So I think all that's needed is to add a reference to
>>   X86_FEATURE_CLEAR_CPU_BUF_VM to the comment?
>
> Yes. I will update the comment accordingly.
>
>> I think we also need to update the assertion of vmx->disable_fb_clear?
>
> I am not quite sure about the update needed. Could you please clarify?
>
>> Anyway thanks this seems like a very clear improvement to me.
>
> Thanks for the review and suggestions!

I will drop this thread and continue here:
https://lore.kernel.org/all/20251031003040.3491385-2-seanjc@google.com/


end of thread, other threads:[~2025-10-31 11:25 UTC | newest]

Thread overview: 27+ messages
2025-10-29 21:26 [PATCH 0/3] Unify VERW mitigation for guests Pawan Gupta
2025-10-29 21:26 ` [PATCH 1/3] x86/bugs: Use VM_CLEAR_CPU_BUFFERS in VMX as well Pawan Gupta
2025-10-29 22:13   ` Pawan Gupta
2025-10-30 12:28   ` Brendan Jackman
2025-10-30 18:43     ` Pawan Gupta
2025-10-31 11:25       ` Brendan Jackman
2025-10-29 21:26 ` [PATCH 2/3] x86/mmio: Rename cpu_buf_vm_clear to cpu_buf_vm_clear_mmio_only Pawan Gupta
2025-10-30  0:18   ` Sean Christopherson
2025-10-30  5:40     ` Pawan Gupta
2025-10-30 12:29   ` Brendan Jackman
2025-10-30 16:56     ` Pawan Gupta
2025-10-29 21:26 ` [PATCH 3/3] x86/mmio: Unify VERW mitigation for guests Pawan Gupta
2025-10-30  0:27   ` Sean Christopherson
2025-10-30  6:11     ` Pawan Gupta
2025-10-30  0:33   ` Pawan Gupta
2025-10-30  5:52     ` Yao Yuan
2025-10-30  6:17       ` Pawan Gupta
2025-10-30 12:52   ` Brendan Jackman
2025-10-30 16:06     ` Sean Christopherson
2025-10-30 16:26       ` Brendan Jackman
2025-10-30 18:06         ` Pawan Gupta
2025-10-30 17:54       ` Pawan Gupta
2025-10-30 17:28     ` Pawan Gupta
2025-10-30 18:21       ` Sean Christopherson
2025-10-30 19:11         ` Pawan Gupta
2025-10-30  0:29 ` [PATCH 0/3] " Sean Christopherson
2025-10-30 10:28   ` Borislav Petkov
