* [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation
@ 2025-05-23 1:17 Sean Christopherson
2025-05-23 1:17 ` [PATCH 1/5] KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL Sean Christopherson
` (6 more replies)
0 siblings, 7 replies; 16+ messages in thread
From: Sean Christopherson @ 2025-05-23 1:17 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
Fix KVM's mitigation of the MMIO Stale Data bug, as the current approach
doesn't actually detect whether or not a guest has access to MMIO. E.g.
KVM_DEV_VFIO_FILE_ADD is entirely optional, and obviously only covers VFIO
devices, and so is a terrible heuristic for "can this vCPU access MMIO?"
To fix the flaw (hopefully), track whether or not a vCPU has access to MMIO
based on the MMU it will run with. KVM already detects host MMIO when
installing PTEs in order to force host MMIO to UC (EPT bypasses MTRRs), so
feeding that information into the MMU is rather straightforward.
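For reference, the end-to-end flow looks like this once the pieces are put
together (all names are lifted verbatim from the patches in this series; this
is just an illustrative summary, not additional code):

  /* make_spte() (patches 2-3): note when a host MMIO PFN gets mapped. */
  if (static_branch_unlikely(&mmio_stale_data_clear) &&
      !kvm_vcpu_can_access_host_mmio(vcpu) &&
      kvm_is_mmio_pfn(pfn, &is_host_mmio))
          kvm_track_host_mmio_mapping(vcpu);

  /* __vmx_vcpu_run_flags() (patch 3): propagate that into a VMX_RUN flag... */
  if (static_branch_unlikely(&mmio_stale_data_clear) &&
      kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
          flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;

  /* vmx_vcpu_enter_exit() (patch 3): ...and clear CPU buffers before VM-Enter
   * (unless the L1D flush already ran, per the existing code). */
  if (static_branch_unlikely(&mmio_stale_data_clear) &&
      (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
          mds_clear_cpu_buffers();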
Note, I haven't actually verified this mitigates the MMIO Stale Data bug, but
I think it's safe to say no one has verified the existing code works either.
All that said, and despite what the subject says, my real interest in this
series is to kill off kvm_arch_{start,end}_assignment(). I.e. precisely
identifying MMIO is a means to an end. Because as evidenced by the MMIO mess
and other bugs (e.g. vDPA device not getting device posted interrupts),
keying off KVM_DEV_VFIO_FILE_ADD for anything is a bad idea.
The last two patches of this series depend on the stupidly large device
posted interrupts rework:
https://lore.kernel.org/all/20250523010004.3240643-1-seanjc@google.com
which in turn depends on a not-tiny prep series:
https://lore.kernel.org/all/20250519232808.2745331-1-seanjc@google.com
Unless you care deeply about those patches, I honestly recommend just ignoring
them. I posted them as part of this series, because posting two patches that
depend on *four* series seemed even more ridiculous :-)
Side topic: Pawan, I haven't forgotten about your mmio_stale_data_clear =>
cpu_buf_vm_clear rename, I promise I'll review it soon.
Sean Christopherson (5):
KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask
is NULL
KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a
SPTE
KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the
guest
Revert "kvm: detect assigned device via irqbypass manager"
VFIO: KVM: x86: Drop kvm_arch_{start,end}_assignment()
arch/x86/include/asm/kvm_host.h | 3 +--
arch/x86/kvm/irq.c | 9 +------
arch/x86/kvm/mmu/mmu_internal.h | 3 +++
arch/x86/kvm/mmu/spte.c | 43 ++++++++++++++++++++++++++++++---
arch/x86/kvm/mmu/spte.h | 10 ++++++++
arch/x86/kvm/vmx/run_flags.h | 10 +++++---
arch/x86/kvm/vmx/vmx.c | 8 +++++-
arch/x86/kvm/x86.c | 18 --------------
include/linux/kvm_host.h | 18 --------------
virt/kvm/vfio.c | 3 ---
10 files changed, 68 insertions(+), 57 deletions(-)
base-commit: 1f0486097459e53d292db749de70e587339267f5
--
2.49.0.1151.ga128411c76-goog
* [PATCH 1/5] KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
@ 2025-05-23 1:17 ` Sean Christopherson
2025-05-23 1:17 ` [PATCH 2/5] KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a SPTE Sean Christopherson
` (5 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Sean Christopherson @ 2025-05-23 1:17 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
Guard the call to kvm_x86_call(get_mt_mask) with an explicit check on
kvm_x86_ops.get_mt_mask so as to avoid unnecessarily calling
kvm_is_mmio_pfn(), which is moderately expensive for some backing types.
E.g. lookup_memtype() conditionally takes a system-wide spinlock if KVM
ends up calling pat_pfn_immune_to_uc_mtrr(), e.g. for DAX memory.
While the call to kvm_x86_ops.get_mt_mask() itself is elided, the compiler
still needs to compute all parameters, as it can't know at build time that
the call will be squashed.
<+243>: call 0xffffffff812ad880 <kvm_is_mmio_pfn>
<+248>: mov %r13,%rsi
<+251>: mov %rbx,%rdi
<+254>: movzbl %al,%edx
<+257>: call 0xffffffff81c26af0 <__SCT__kvm_x86_get_mt_mask>
Fixes: 3fee4837ef40 ("KVM: x86: remove shadow_memtype_mask")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/spte.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cfce03d8f123..f262c380f40e 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -209,7 +209,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
if (level > PG_LEVEL_4K)
spte |= PT_PAGE_SIZE_MASK;
- spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
+ if (kvm_x86_ops.get_mt_mask)
+ spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
+
if (host_writable)
spte |= shadow_host_writable_mask;
else
--
2.49.0.1151.ga128411c76-goog
* [PATCH 2/5] KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a SPTE
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
2025-05-23 1:17 ` [PATCH 1/5] KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL Sean Christopherson
@ 2025-05-23 1:17 ` Sean Christopherson
2025-05-23 1:17 ` [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest Sean Christopherson
` (4 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Sean Christopherson @ 2025-05-23 1:17 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
When making a SPTE, cache whether or not the target PFN is host MMIO in
order to avoid multiple rounds of the slow path of kvm_is_mmio_pfn(), e.g.
hitting pat_pfn_immune_to_uc_mtrr() in particular can be problematic. KVM
currently avoids multiple calls by virtue of the two users being mutually
exclusive (.get_mt_mask() is Intel-only, shadow_me_value is AMD-only), but
that won't hold true if/when KVM needs to detect host MMIO mappings for
other reasons, e.g. for mitigating the MMIO Stale Data vulnerability.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/spte.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index f262c380f40e..3f16c91aa042 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -104,7 +104,7 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
return spte;
}
-static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
+static bool __kvm_is_mmio_pfn(kvm_pfn_t pfn)
{
if (pfn_valid(pfn))
return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)) &&
@@ -125,6 +125,19 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
E820_TYPE_RAM);
}
+static bool kvm_is_mmio_pfn(kvm_pfn_t pfn, int *is_host_mmio)
+{
+ /*
+ * Determining if a PFN is host MMIO is relatively expensive. Cache the
+ * result locally (in the sole caller) to avoid doing the full query
+ * multiple times when creating a single SPTE.
+ */
+ if (*is_host_mmio < 0)
+ *is_host_mmio = __kvm_is_mmio_pfn(pfn);
+
+ return *is_host_mmio;
+}
+
/*
* Returns true if the SPTE needs to be updated atomically due to having bits
* that may be changed without holding mmu_lock, and for which KVM must not
@@ -162,6 +175,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
{
int level = sp->role.level;
u64 spte = SPTE_MMU_PRESENT_MASK;
+ int is_host_mmio = -1;
bool wrprot = false;
/*
@@ -210,14 +224,14 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
spte |= PT_PAGE_SIZE_MASK;
if (kvm_x86_ops.get_mt_mask)
- spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
-
+ spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn,
+ kvm_is_mmio_pfn(pfn, &is_host_mmio));
if (host_writable)
spte |= shadow_host_writable_mask;
else
pte_access &= ~ACC_WRITE_MASK;
- if (shadow_me_value && !kvm_is_mmio_pfn(pfn))
+ if (shadow_me_value && !kvm_is_mmio_pfn(pfn, &is_host_mmio))
spte |= shadow_me_value;
spte |= (u64)pfn << PAGE_SHIFT;
--
2.49.0.1151.ga128411c76-goog
* [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
2025-05-23 1:17 ` [PATCH 1/5] KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL Sean Christopherson
2025-05-23 1:17 ` [PATCH 2/5] KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a SPTE Sean Christopherson
@ 2025-05-23 1:17 ` Sean Christopherson
2025-05-29 4:27 ` Pawan Gupta
2025-05-23 1:17 ` [PATCH 4/5] Revert "kvm: detect assigned device via irqbypass manager" Sean Christopherson
` (3 subsequent siblings)
6 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2025-05-23 1:17 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
Enforce the MMIO Stale Data mitigation if KVM has ever mapped host MMIO
into the VM, not if the VM has an assigned device. VFIO is but one of
many ways to map host MMIO into a KVM guest, and even within VFIO,
formally attaching a device to a VM via KVM_DEV_VFIO_FILE_ADD is entirely
optional.
Track whether or not the guest can access host MMIO on a per-MMU basis,
i.e. based on whether or not the vCPU has a mapping to host MMIO. For
simplicity, track MMIO mappings in "special" roots (those without a
kvm_mmu_page) at the VM level, as only Intel CPUs are vulnerable, and so
only legacy 32-bit shadow paging is affected, i.e. lack of precise
tracking is a complete non-issue.
Make the per-MMU and per-VM flags sticky. Detecting when *all* MMIO
mappings have been removed would be absurdly complex. And in practice,
removing MMIO from a guest will be done by deleting the associated memslot,
which by default will force KVM to re-allocate all roots. Special roots
will forever be mitigated, but as above, the affected scenarios are not
expected to be performance sensitive.
Use a VMX_RUN flag to communicate the need for a buffers flush to
vmx_vcpu_enter_exit() so that kvm_vcpu_can_access_host_mmio() and all its
dependencies don't need to be marked __always_inline, e.g. so that KASAN
doesn't trigger a noinstr violation.
Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Fixes: 8cb861e9e3c9 ("x86/speculation/mmio: Add mitigation for Processor MMIO Stale Data")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu/mmu_internal.h | 3 +++
arch/x86/kvm/mmu/spte.c | 21 +++++++++++++++++++++
arch/x86/kvm/mmu/spte.h | 10 ++++++++++
arch/x86/kvm/vmx/run_flags.h | 10 ++++++----
arch/x86/kvm/vmx/vmx.c | 8 +++++++-
6 files changed, 48 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 01edcefbd937..043be00ec5b8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1458,6 +1458,7 @@ struct kvm_arch {
bool x2apic_format;
bool x2apic_broadcast_quirk_disabled;
+ bool has_mapped_host_mmio;
bool guest_can_read_msr_platform_info;
bool exception_payload_enabled;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index db8f33e4de62..65f3c89d7c5d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -103,6 +103,9 @@ struct kvm_mmu_page {
int root_count;
refcount_t tdp_mmu_root_count;
};
+
+ bool has_mapped_host_mmio;
+
union {
/* These two members aren't used for TDP MMU */
struct {
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 3f16c91aa042..5fb43a834d48 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -138,6 +138,22 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn, int *is_host_mmio)
return *is_host_mmio;
}
+static void kvm_track_host_mmio_mapping(struct kvm_vcpu *vcpu)
+{
+ struct kvm_mmu_page *root = root_to_sp(vcpu->arch.mmu->root.hpa);
+
+ if (root)
+ WRITE_ONCE(root->has_mapped_host_mmio, true);
+ else
+ WRITE_ONCE(vcpu->kvm->arch.has_mapped_host_mmio, true);
+
+ /*
+ * Force vCPUs to exit and flush CPU buffers if the vCPU is using the
+ * affected root(s).
+ */
+ kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_OUTSIDE_GUEST_MODE);
+}
+
/*
* Returns true if the SPTE needs to be updated atomically due to having bits
* that may be changed without holding mmu_lock, and for which KVM must not
@@ -276,6 +292,11 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
}
+ if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ !kvm_vcpu_can_access_host_mmio(vcpu) &&
+ kvm_is_mmio_pfn(pfn, &is_host_mmio))
+ kvm_track_host_mmio_mapping(vcpu);
+
*new_spte = spte;
return wrprot;
}
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 1e94f081bdaf..3133f066927e 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -280,6 +280,16 @@ static inline bool is_mirror_sptep(tdp_ptep_t sptep)
return is_mirror_sp(sptep_to_sp(rcu_dereference(sptep)));
}
+static inline bool kvm_vcpu_can_access_host_mmio(struct kvm_vcpu *vcpu)
+{
+ struct kvm_mmu_page *root = root_to_sp(vcpu->arch.mmu->root.hpa);
+
+ if (root)
+ return READ_ONCE(root->has_mapped_host_mmio);
+
+ return READ_ONCE(vcpu->kvm->arch.has_mapped_host_mmio);
+}
+
static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
{
return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index 6a9bfdfbb6e5..2f20fb170def 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,10 +2,12 @@
#ifndef __KVM_X86_VMX_RUN_FLAGS_H
#define __KVM_X86_VMX_RUN_FLAGS_H
-#define VMX_RUN_VMRESUME_SHIFT 0
-#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
+#define VMX_RUN_VMRESUME_SHIFT 0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
+#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT 2
-#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
-#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f79604bc0127..27e870d83122 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -74,6 +74,8 @@
#include "vmx_onhyperv.h"
#include "posted_intr.h"
+#include "mmu/spte.h"
+
MODULE_AUTHOR("Qumranet");
MODULE_DESCRIPTION("KVM support for VMX (Intel VT-x) extensions");
MODULE_LICENSE("GPL");
@@ -959,6 +961,10 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
flags |= VMX_RUN_SAVE_SPEC_CTRL;
+ if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
+ flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
+
return flags;
}
@@ -7282,7 +7288,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
else if (static_branch_unlikely(&mmio_stale_data_clear) &&
- kvm_arch_has_assigned_device(vcpu->kvm))
+ (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
mds_clear_cpu_buffers();
vmx_disable_fb_clear(vmx);
--
2.49.0.1151.ga128411c76-goog
* [PATCH 4/5] Revert "kvm: detect assigned device via irqbypass manager"
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
` (2 preceding siblings ...)
2025-05-23 1:17 ` [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest Sean Christopherson
@ 2025-05-23 1:17 ` Sean Christopherson
2025-05-23 1:17 ` [PATCH 5/5] VFIO: KVM: x86: Drop kvm_arch_{start,end}_assignment() Sean Christopherson
` (2 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Sean Christopherson @ 2025-05-23 1:17 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
Now that KVM explicitly tracks the number of possible bypass IRQs, and
doesn't conflate IRQ bypass with host MMIO access, stop bumping the
assigned device count when adding an IRQ bypass producer.
This reverts commit 2edd9cb79fb31b0907c6e0cdce2824780cf9b153.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/irq.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index 7586cf6f1215..b9bdec66a611 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -565,8 +565,6 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
struct kvm *kvm = irqfd->kvm;
int ret = 0;
- kvm_arch_start_assignment(irqfd->kvm);
-
spin_lock_irq(&kvm->irqfds.lock);
irqfd->producer = prod;
@@ -575,10 +573,8 @@ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
if (irqfd->irq_entry.type == KVM_IRQ_ROUTING_MSI) {
ret = kvm_pi_update_irte(irqfd, &irqfd->irq_entry);
- if (ret) {
+ if (ret)
kvm->arch.nr_possible_bypass_irqs--;
- kvm_arch_end_assignment(irqfd->kvm);
- }
}
spin_unlock_irq(&kvm->irqfds.lock);
@@ -614,9 +610,6 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
kvm->arch.nr_possible_bypass_irqs--;
spin_unlock_irq(&kvm->irqfds.lock);
-
-
- kvm_arch_end_assignment(irqfd->kvm);
}
void kvm_arch_update_irqfd_routing(struct kvm_kernel_irqfd *irqfd,
--
2.49.0.1151.ga128411c76-goog
* [PATCH 5/5] VFIO: KVM: x86: Drop kvm_arch_{start,end}_assignment()
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
` (3 preceding siblings ...)
2025-05-23 1:17 ` [PATCH 4/5] Revert "kvm: detect assigned device via irqbypass manager" Sean Christopherson
@ 2025-05-23 1:17 ` Sean Christopherson
2025-05-29 3:36 ` [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Pawan Gupta
2025-06-25 22:25 ` Sean Christopherson
6 siblings, 0 replies; 16+ messages in thread
From: Sean Christopherson @ 2025-05-23 1:17 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
Drop kvm_arch_{start,end}_assignment() and all associated code now that
KVM x86 no longer consumes assigned_device_count. Tracking whether or not
a VFIO-assigned device is formally associated with a VM is fundamentally
flawed, as such an association is optional for general usage, i.e. is prone
to false negatives. E.g. prior to commit 2edd9cb79fb3 ("kvm: detect
assigned device via irqbypass manager"), device passthrough via VFIO would
fail to enable IRQ bypass if userspace omitted the formal VFIO<=>KVM
binding.
And device drivers that *need* the VFIO<=>KVM connection, e.g. KVM-GT,
shouldn't be relying on generic x86 tracking infrastructure.
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/include/asm/kvm_host.h | 2 --
arch/x86/kvm/x86.c | 18 ------------------
include/linux/kvm_host.h | 18 ------------------
virt/kvm/vfio.c | 3 ---
4 files changed, 41 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 043be00ec5b8..3cb57f6ef730 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1380,8 +1380,6 @@ struct kvm_arch {
#define __KVM_HAVE_ARCH_NONCOHERENT_DMA
atomic_t noncoherent_dma_count;
-#define __KVM_HAVE_ARCH_ASSIGNED_DEVICE
- atomic_t assigned_device_count;
unsigned long nr_possible_bypass_irqs;
#ifdef CONFIG_KVM_IOAPIC
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3969e439a6bb..2a1563f2ee97 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13561,24 +13561,6 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
return kvm_lapic_enabled(vcpu) && apf_pageready_slot_free(vcpu);
}
-void kvm_arch_start_assignment(struct kvm *kvm)
-{
- atomic_inc(&kvm->arch.assigned_device_count);
-}
-EXPORT_SYMBOL_GPL(kvm_arch_start_assignment);
-
-void kvm_arch_end_assignment(struct kvm *kvm)
-{
- atomic_dec(&kvm->arch.assigned_device_count);
-}
-EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
-
-bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
-{
- return raw_atomic_read(&kvm->arch.assigned_device_count);
-}
-EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
-
static void kvm_noncoherent_dma_assignment_start_or_stop(struct kvm *kvm)
{
/*
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 706f2402ae8e..31f183c32f9a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1686,24 +1686,6 @@ static inline bool kvm_arch_has_noncoherent_dma(struct kvm *kvm)
return false;
}
#endif
-#ifdef __KVM_HAVE_ARCH_ASSIGNED_DEVICE
-void kvm_arch_start_assignment(struct kvm *kvm);
-void kvm_arch_end_assignment(struct kvm *kvm);
-bool kvm_arch_has_assigned_device(struct kvm *kvm);
-#else
-static inline void kvm_arch_start_assignment(struct kvm *kvm)
-{
-}
-
-static inline void kvm_arch_end_assignment(struct kvm *kvm)
-{
-}
-
-static __always_inline bool kvm_arch_has_assigned_device(struct kvm *kvm)
-{
- return false;
-}
-#endif
static inline struct rcuwait *kvm_arch_vcpu_get_wait(struct kvm_vcpu *vcpu)
{
diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
index 196a102e34fb..be50514bbd11 100644
--- a/virt/kvm/vfio.c
+++ b/virt/kvm/vfio.c
@@ -175,7 +175,6 @@ static int kvm_vfio_file_add(struct kvm_device *dev, unsigned int fd)
kvf->file = get_file(filp);
list_add_tail(&kvf->node, &kv->file_list);
- kvm_arch_start_assignment(dev->kvm);
kvm_vfio_file_set_kvm(kvf->file, dev->kvm);
kvm_vfio_update_coherency(dev);
@@ -205,7 +204,6 @@ static int kvm_vfio_file_del(struct kvm_device *dev, unsigned int fd)
continue;
list_del(&kvf->node);
- kvm_arch_end_assignment(dev->kvm);
#ifdef CONFIG_SPAPR_TCE_IOMMU
kvm_spapr_tce_release_vfio_group(dev->kvm, kvf);
#endif
@@ -336,7 +334,6 @@ static void kvm_vfio_release(struct kvm_device *dev)
fput(kvf->file);
list_del(&kvf->node);
kfree(kvf);
- kvm_arch_end_assignment(dev->kvm);
}
kvm_vfio_update_coherency(dev);
--
2.49.0.1151.ga128411c76-goog
* Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
` (4 preceding siblings ...)
2025-05-23 1:17 ` [PATCH 5/5] VFIO: KVM: x86: Drop kvm_arch_{start,end}_assignment() Sean Christopherson
@ 2025-05-29 3:36 ` Pawan Gupta
2025-06-02 23:41 ` Sean Christopherson
2025-06-25 22:25 ` Sean Christopherson
6 siblings, 1 reply; 16+ messages in thread
From: Pawan Gupta @ 2025-05-29 3:36 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Thu, May 22, 2025 at 06:17:51PM -0700, Sean Christopherson wrote:
> Fix KVM's mitigation of the MMIO Stale Data bug, as the current approach
> doesn't actually detect whether or not a guest has access to MMIO. E.g.
> KVM_DEV_VFIO_FILE_ADD is entirely optional, and obviously only covers VFIO
I believe this needs userspace co-operation?
> devices, and so is a terrible heuristic for "can this vCPU access MMIO?"
>
> To fix the flaw (hopefully), track whether or not a vCPU has access to MMIO
> based on the MMU it will run with. KVM already detects host MMIO when
> installing PTEs in order to force host MMIO to UC (EPT bypasses MTRRs), so
> feeding that information into the MMU is rather straightforward.
>
> Note, I haven't actually verified this mitigates the MMIO Stale Data bug, but
> I think it's safe to say no one has verified the existing code works either.
Mitigation was verified for VFIO devices, but of course not for the cases you
mentioned above. Typically, it is the PCI config registers on some faulty
devices (that don't respect byte-enable) that are subject to MMIO Stale Data.
But, it is impossible to test and confirm with absolute certainty that all
other cases are not affected. Your patches should rule out those cases as
well.
Regarding validating this, if VERW is executed at VMenter, mitigation was
found to be effective. This is similar to other bugs like MDS. I am not a
virtualization expert, but I will try to validate whatever I can.
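For example, a throwaway debug print keyed on the new run flag (purely
hypothetical, not part of the series) would make it easy to see whether the
buffer clearing path is reached when it should be:

  /* Hypothetical debug hook in vmx_vcpu_enter_exit(), keyed on the new flag. */
  if (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO)
          pr_info_ratelimited("%s: CPU buffer cleared for MMIO\n", __func__);
  else
          pr_info_ratelimited("%s: CPU buffer NOT cleared for MMIO\n", __func__);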
> All that said, and despite what the subject says, my real interest in this
> series is to kill off kvm_arch_{start,end}_assignment(). I.e. precisely
> identifying MMIO is a means to an end. Because as evidenced by the MMIO mess
> and other bugs (e.g. vDPA device not getting device posted interrupts),
> keying off KVM_DEV_VFIO_FILE_ADD for anything is a bad idea.
>
> The last two patches of this series depend on the stupidly large device
> posted interrupts rework:
>
> https://lore.kernel.org/all/20250523010004.3240643-1-seanjc@google.com
>
> which in turn depends on a not-tiny prep series:
>
> https://lore.kernel.org/all/20250519232808.2745331-1-seanjc@google.com
>
> Unless you care deeply about those patches, I honestly recommend just ignoring
> them. I posted them as part of this series, because posting two patches that
> depend on *four* series seemed even more ridiculous :-)
>
> Side topic: Pawan, I haven't forgotten about your mmio_stale_data_clear =>
> cpu_buf_vm_clear rename, I promise I'll review it soon.
No problem.
* Re: [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
2025-05-23 1:17 ` [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest Sean Christopherson
@ 2025-05-29 4:27 ` Pawan Gupta
2025-05-29 22:19 ` Sean Christopherson
0 siblings, 1 reply; 16+ messages in thread
From: Pawan Gupta @ 2025-05-29 4:27 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Thu, May 22, 2025 at 06:17:54PM -0700, Sean Christopherson wrote:
> Enforce the MMIO Stale Data mitigation if KVM has ever mapped host MMIO
> into the VM, not if the VM has an assigned device. VFIO is but one of
> many ways to map host MMIO into a KVM guest, and even within VFIO,
> formally attaching a device to a VM via KVM_DEV_VFIO_FILE_ADD is entirely
> optional.
>
> Track whether or not the guest can access host MMIO on a per-MMU basis,
> i.e. based on whether or not the vCPU has a mapping to host MMIO. For
> simplicity, track MMIO mappings in "special" roots (those without a
> kvm_mmu_page) at the VM level, as only Intel CPUs are vulnerable, and so
> only legacy 32-bit shadow paging is affected, i.e. lack of precise
> tracking is a complete non-issue.
>
> Make the per-MMU and per-VM flags sticky. Detecting when *all* MMIO
> mappings have been removed would be absurdly complex. And in practice,
> removing MMIO from a guest will be done by deleting the associated memslot,
> which by default will force KVM to re-allocate all roots. Special roots
> will forever be mitigated, but as above, the affected scenarios are not
> expected to be performance sensitive.
>
> Use a VMX_RUN flag to communicate the need for a buffers flush to
> vmx_vcpu_enter_exit() so that kvm_vcpu_can_access_host_mmio() and all its
> dependencies don't need to be marked __always_inline, e.g. so that KASAN
> doesn't trigger a noinstr violation.
>
> Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Fixes: 8cb861e9e3c9 ("x86/speculation/mmio: Add mitigation for Processor MMIO Stale Data")
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/mmu/mmu_internal.h | 3 +++
> arch/x86/kvm/mmu/spte.c | 21 +++++++++++++++++++++
> arch/x86/kvm/mmu/spte.h | 10 ++++++++++
> arch/x86/kvm/vmx/run_flags.h | 10 ++++++----
> arch/x86/kvm/vmx/vmx.c | 8 +++++++-
> 6 files changed, 48 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 01edcefbd937..043be00ec5b8 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1458,6 +1458,7 @@ struct kvm_arch {
> bool x2apic_format;
> bool x2apic_broadcast_quirk_disabled;
>
> + bool has_mapped_host_mmio;
> bool guest_can_read_msr_platform_info;
> bool exception_payload_enabled;
>
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index db8f33e4de62..65f3c89d7c5d 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -103,6 +103,9 @@ struct kvm_mmu_page {
> int root_count;
> refcount_t tdp_mmu_root_count;
> };
> +
> + bool has_mapped_host_mmio;
> +
> union {
> /* These two members aren't used for TDP MMU */
> struct {
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 3f16c91aa042..5fb43a834d48 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -138,6 +138,22 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn, int *is_host_mmio)
> return *is_host_mmio;
> }
>
> +static void kvm_track_host_mmio_mapping(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_mmu_page *root = root_to_sp(vcpu->arch.mmu->root.hpa);
> +
> + if (root)
> + WRITE_ONCE(root->has_mapped_host_mmio, true);
> + else
> + WRITE_ONCE(vcpu->kvm->arch.has_mapped_host_mmio, true);
> +
> + /*
> + * Force vCPUs to exit and flush CPU buffers if the vCPU is using the
> + * affected root(s).
> + */
> + kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_OUTSIDE_GUEST_MODE);
> +}
> +
> /*
> * Returns true if the SPTE needs to be updated atomically due to having bits
> * that may be changed without holding mmu_lock, and for which KVM must not
> @@ -276,6 +292,11 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
> }
>
> + if (static_branch_unlikely(&mmio_stale_data_clear) &&
> + !kvm_vcpu_can_access_host_mmio(vcpu) &&
> + kvm_is_mmio_pfn(pfn, &is_host_mmio))
> + kvm_track_host_mmio_mapping(vcpu);
> +
> *new_spte = spte;
> return wrprot;
> }
> diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
> index 1e94f081bdaf..3133f066927e 100644
> --- a/arch/x86/kvm/mmu/spte.h
> +++ b/arch/x86/kvm/mmu/spte.h
> @@ -280,6 +280,16 @@ static inline bool is_mirror_sptep(tdp_ptep_t sptep)
> return is_mirror_sp(sptep_to_sp(rcu_dereference(sptep)));
> }
>
> +static inline bool kvm_vcpu_can_access_host_mmio(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_mmu_page *root = root_to_sp(vcpu->arch.mmu->root.hpa);
> +
> + if (root)
> + return READ_ONCE(root->has_mapped_host_mmio);
> +
> + return READ_ONCE(vcpu->kvm->arch.has_mapped_host_mmio);
> +}
> +
> static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
> {
> return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> index 6a9bfdfbb6e5..2f20fb170def 100644
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ b/arch/x86/kvm/vmx/run_flags.h
> @@ -2,10 +2,12 @@
> #ifndef __KVM_X86_VMX_RUN_FLAGS_H
> #define __KVM_X86_VMX_RUN_FLAGS_H
>
> -#define VMX_RUN_VMRESUME_SHIFT 0
> -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
> +#define VMX_RUN_VMRESUME_SHIFT 0
> +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
> +#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT 2
>
> -#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
> -#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> +#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
> +#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> +#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
>
> #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index f79604bc0127..27e870d83122 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -74,6 +74,8 @@
> #include "vmx_onhyperv.h"
> #include "posted_intr.h"
>
> +#include "mmu/spte.h"
> +
> MODULE_AUTHOR("Qumranet");
> MODULE_DESCRIPTION("KVM support for VMX (Intel VT-x) extensions");
> MODULE_LICENSE("GPL");
> @@ -959,6 +961,10 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
> flags |= VMX_RUN_SAVE_SPEC_CTRL;
>
> + if (static_branch_unlikely(&mmio_stale_data_clear) &&
> + kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> + flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> +
> return flags;
> }
>
> @@ -7282,7 +7288,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> if (static_branch_unlikely(&vmx_l1d_should_flush))
> vmx_l1d_flush(vcpu);
> else if (static_branch_unlikely(&mmio_stale_data_clear) &&
> - kvm_arch_has_assigned_device(vcpu->kvm))
> + (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
> mds_clear_cpu_buffers();
I think this also paves way for buffer clear for MDS and MMIO to be done at
a single place. Please let me know if below is feasible:
diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index 2f20fb170def..004fe1ca89f0 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,12 +2,12 @@
#ifndef __KVM_X86_VMX_RUN_FLAGS_H
#define __KVM_X86_VMX_RUN_FLAGS_H
-#define VMX_RUN_VMRESUME_SHIFT 0
-#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
-#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT 2
+#define VMX_RUN_VMRESUME_SHIFT 0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
+#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT 2
-#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
-#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
-#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
+#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
+#define VMX_RUN_CLEAR_CPU_BUFFERS BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index f6986dee6f8c..ab602ce4967e 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -141,6 +141,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
/* Check if vmlaunch or vmresume is needed */
bt $VMX_RUN_VMRESUME_SHIFT, %ebx
+ test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
+
/* Load guest registers. Don't clobber flags. */
mov VCPU_RCX(%_ASM_AX), %_ASM_CX
mov VCPU_RDX(%_ASM_AX), %_ASM_DX
@@ -161,8 +163,11 @@ SYM_FUNC_START(__vmx_vcpu_run)
/* Load guest RAX. This kills the @regs pointer! */
mov VCPU_RAX(%_ASM_AX), %_ASM_AX
+ /* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
+ jz .Lskip_clear_cpu_buffers
/* Clobbers EFLAGS.ZF */
CLEAR_CPU_BUFFERS
+.Lskip_clear_cpu_buffers:
/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
jnc .Lvmlaunch
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1e4790c8993a..1415aeea35f7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -958,9 +958,10 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
flags |= VMX_RUN_SAVE_SPEC_CTRL;
- if (static_branch_unlikely(&mmio_stale_data_clear) &&
- kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
- flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
+ if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) ||
+ (static_branch_unlikely(&mmio_stale_data_clear) &&
+ kvm_vcpu_can_access_host_mmio(&vmx->vcpu)))
+ flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
return flags;
}
@@ -7296,9 +7297,6 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
*/
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (static_branch_unlikely(&mmio_stale_data_clear) &&
- (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
- mds_clear_cpu_buffers();
vmx_disable_fb_clear(vmx);
* Re: [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
2025-05-29 4:27 ` Pawan Gupta
@ 2025-05-29 22:19 ` Sean Christopherson
2025-05-29 23:40 ` Pawan Gupta
0 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2025-05-29 22:19 UTC (permalink / raw)
To: Pawan Gupta
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Wed, May 28, 2025, Pawan Gupta wrote:
> On Thu, May 22, 2025 at 06:17:54PM -0700, Sean Christopherson wrote:
> > @@ -7282,7 +7288,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> > if (static_branch_unlikely(&vmx_l1d_should_flush))
> > vmx_l1d_flush(vcpu);
> > else if (static_branch_unlikely(&mmio_stale_data_clear) &&
> > - kvm_arch_has_assigned_device(vcpu->kvm))
> > + (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
> > mds_clear_cpu_buffers();
>
> I think this also paves way for buffer clear for MDS and MMIO to be done at
> a single place. Please let me know if below is feasible:
It's definitely feasible (this thought crossed my mind as well), but because
CLEAR_CPU_BUFFERS emits VERW iff X86_FEATURE_CLEAR_CPU_BUF is enabled, the below
would do nothing for the MMIO case (either that, or I'm missing something).
We could obviously rework CLEAR_CPU_BUFFERS, I'm just not sure that's worth the
effort at this point. I'm definitely not opposed to it though.
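For context, CLEAR_CPU_BUFFERS is built around an ALTERNATIVE keyed on
X86_FEATURE_CLEAR_CPU_BUF, roughly (64-bit variant, simplified from memory):

  .macro CLEAR_CPU_BUFFERS
          /* VERW clears CPU buffers only via its memory-operand form, and the
           * instruction is patched in only when the feature flag is set. */
          ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
  .endm

i.e. without X86_FEATURE_CLEAR_CPU_BUF the macro expands to nothing, which is
why the below wouldn't help the MMIO-only case as-is.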
> diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
> index 2f20fb170def..004fe1ca89f0 100644
> --- a/arch/x86/kvm/vmx/run_flags.h
> +++ b/arch/x86/kvm/vmx/run_flags.h
> @@ -2,12 +2,12 @@
> #ifndef __KVM_X86_VMX_RUN_FLAGS_H
> #define __KVM_X86_VMX_RUN_FLAGS_H
>
> -#define VMX_RUN_VMRESUME_SHIFT 0
> -#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT 2
> +#define VMX_RUN_VMRESUME_SHIFT 0
> +#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
> +#define VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT 2
>
> -#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
> -#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> -#define VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO BIT(VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO_SHIFT)
> +#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
> +#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)
> +#define VMX_RUN_CLEAR_CPU_BUFFERS BIT(VMX_RUN_CLEAR_CPU_BUFFERS_SHIFT)
>
> #endif /* __KVM_X86_VMX_RUN_FLAGS_H */
> diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> index f6986dee6f8c..ab602ce4967e 100644
> --- a/arch/x86/kvm/vmx/vmenter.S
> +++ b/arch/x86/kvm/vmx/vmenter.S
> @@ -141,6 +141,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
> /* Check if vmlaunch or vmresume is needed */
> bt $VMX_RUN_VMRESUME_SHIFT, %ebx
>
> + test $VMX_RUN_CLEAR_CPU_BUFFERS, %ebx
> +
> /* Load guest registers. Don't clobber flags. */
> mov VCPU_RCX(%_ASM_AX), %_ASM_CX
> mov VCPU_RDX(%_ASM_AX), %_ASM_DX
> @@ -161,8 +163,11 @@ SYM_FUNC_START(__vmx_vcpu_run)
> /* Load guest RAX. This kills the @regs pointer! */
> mov VCPU_RAX(%_ASM_AX), %_ASM_AX
>
> + /* Check EFLAGS.ZF from the VMX_RUN_CLEAR_CPU_BUFFERS bit test above */
> + jz .Lskip_clear_cpu_buffers
> /* Clobbers EFLAGS.ZF */
> CLEAR_CPU_BUFFERS
> +.Lskip_clear_cpu_buffers:
>
> /* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
> jnc .Lvmlaunch
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 1e4790c8993a..1415aeea35f7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -958,9 +958,10 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx)
> if (!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))
> flags |= VMX_RUN_SAVE_SPEC_CTRL;
>
> - if (static_branch_unlikely(&mmio_stale_data_clear) &&
> - kvm_vcpu_can_access_host_mmio(&vmx->vcpu))
> - flags |= VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO;
> + if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) ||
> + (static_branch_unlikely(&mmio_stale_data_clear) &&
> + kvm_vcpu_can_access_host_mmio(&vmx->vcpu)))
> + flags |= VMX_RUN_CLEAR_CPU_BUFFERS;
>
> return flags;
> }
> @@ -7296,9 +7297,6 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> */
> if (static_branch_unlikely(&vmx_l1d_should_flush))
> vmx_l1d_flush(vcpu);
> - else if (static_branch_unlikely(&mmio_stale_data_clear) &&
> - (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
> - mds_clear_cpu_buffers();
>
> vmx_disable_fb_clear(vmx);
>
* Re: [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
2025-05-29 22:19 ` Sean Christopherson
@ 2025-05-29 23:40 ` Pawan Gupta
2025-06-02 23:45 ` Sean Christopherson
0 siblings, 1 reply; 16+ messages in thread
From: Pawan Gupta @ 2025-05-29 23:40 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Thu, May 29, 2025 at 03:19:22PM -0700, Sean Christopherson wrote:
> On Wed, May 28, 2025, Pawan Gupta wrote:
> > On Thu, May 22, 2025 at 06:17:54PM -0700, Sean Christopherson wrote:
> > > @@ -7282,7 +7288,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> > > if (static_branch_unlikely(&vmx_l1d_should_flush))
> > > vmx_l1d_flush(vcpu);
> > > else if (static_branch_unlikely(&mmio_stale_data_clear) &&
> > > - kvm_arch_has_assigned_device(vcpu->kvm))
> > > + (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
> > > mds_clear_cpu_buffers();
> >
> > I think this also paves way for buffer clear for MDS and MMIO to be done at
> > a single place. Please let me know if below is feasible:
>
> It's definitely feasible (this thought crossed my mind as well), but because
> CLEAR_CPU_BUFFERS emits VERW iff X86_FEATURE_CLEAR_CPU_BUF is enabled, the below
> would do nothing for the MMIO case (either that, or I'm missing something).
That's right, CLEAR_CPU_BUFFERS needs rework too.
> We could obviously rework CLEAR_CPU_BUFFERS, I'm just not sure that's worth the
> effort at this point. I'm definitely not opposed to it though.
My goal with this is to have 2 separate controls for user-kernel and
guest-host. Such that MDS/TAA/RFDS gets finer controls to only enable
user-kernel or guest-host mitigation. This would play well with the Attack
vector series by David:
https://lore.kernel.org/lkml/20250509162839.3057217-1-david.kaplan@amd.com/
For now this patch is fine as is. I will send update separately including
the CLEAR_CPU_BUFFERS rework.
* Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation
2025-05-29 3:36 ` [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Pawan Gupta
@ 2025-06-02 23:41 ` Sean Christopherson
2025-06-03 1:22 ` Pawan Gupta
0 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2025-06-02 23:41 UTC (permalink / raw)
To: Pawan Gupta
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Wed, May 28, 2025, Pawan Gupta wrote:
> On Thu, May 22, 2025 at 06:17:51PM -0700, Sean Christopherson wrote:
> > Fix KVM's mitigation of the MMIO Stale Data bug, as the current approach
> > doesn't actually detect whether or not a guest has access to MMIO. E.g.
> > KVM_DEV_VFIO_FILE_ADD is entirely optional, and obviously only covers VFIO
>
> I believe this needs userspace co-operation?
Yes, more or less. If the userspace VMM knows it doesn't need to trigger the
side effects of KVM_DEV_VFIO_FILE_ADD (e.g. isn't dealing with non-coherent DMA),
and doesn't need the VFIO<=>KVM binding (e.g. for KVM-GT), then AFAIK it's safe
to skip KVM_DEV_VFIO_FILE_ADD, modulo this mitigation.
> > devices, and so is a terrible heuristic for "can this vCPU access MMIO?"
> >
> > To fix the flaw (hopefully), track whether or not a vCPU has access to MMIO
> > based on the MMU it will run with. KVM already detects host MMIO when
> > installing PTEs in order to force host MMIO to UC (EPT bypasses MTRRs), so
> > feeding that information into the MMU is rather straightforward.
> >
> > Note, I haven't actually verified this mitigates the MMIO Stale Data bug, but
> > I think it's safe to say no one has verified the existing code works either.
>
> Mitigation was verified for VFIO devices, but of course not for the cases you
> mentioned above. Typically, it is the PCI config registers on some faulty
> devices (that don't respect byte-enable) that are subject to MMIO Stale Data.
>
> But, it is impossible to test and confirm with absolute certainty that all
Yeah, no argument there.
> other cases are not affected. Your patches should rule out those cases as
> well.
>
> Regarding validating this, if VERW is executed at VMenter, mitigation was
> found to be effective. This is similar to other bugs like MDS. I am not a
> virtualization expert, but I will try to validate whatever I can.
If you can re-verify the mitigation works for VFIO devices, that's more than
good enough for me. The bar at this point is to not regress the existing mitigation,
anything beyond that is gravy.
I've verified the KVM mechanics of tracing MMIO mappings fairly well (famous last
words), the only thing I haven't sanity checked is that the existing coverage for
VFIO devices is maintained.
* Re: [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
2025-05-29 23:40 ` Pawan Gupta
@ 2025-06-02 23:45 ` Sean Christopherson
2025-06-03 1:29 ` Pawan Gupta
0 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2025-06-02 23:45 UTC (permalink / raw)
To: Pawan Gupta
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Thu, May 29, 2025, Pawan Gupta wrote:
> On Thu, May 29, 2025 at 03:19:22PM -0700, Sean Christopherson wrote:
> > On Wed, May 28, 2025, Pawan Gupta wrote:
> > > On Thu, May 22, 2025 at 06:17:54PM -0700, Sean Christopherson wrote:
> > > > @@ -7282,7 +7288,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
> > > > if (static_branch_unlikely(&vmx_l1d_should_flush))
> > > > vmx_l1d_flush(vcpu);
> > > > else if (static_branch_unlikely(&mmio_stale_data_clear) &&
> > > > - kvm_arch_has_assigned_device(vcpu->kvm))
> > > > + (flags & VMX_RUN_CLEAR_CPU_BUFFERS_FOR_MMIO))
> > > > mds_clear_cpu_buffers();
> > >
> > > I think this also paves way for buffer clear for MDS and MMIO to be done at
> > > a single place. Please let me know if below is feasible:
> >
> > It's definitely feasible (this thought crossed my mind as well), but because
> > CLEAR_CPU_BUFFERS emits VERW iff X86_FEATURE_CLEAR_CPU_BUF is enabled, the below
> > would do nothing for the MMIO case (either that, or I'm missing something).
>
> Thats right, CLEAR_CPU_BUFFERS needs rework too.
>
> > We could obviously rework CLEAR_CPU_BUFFERS, I'm just not sure that's worth the
> > effort at this point. I'm definitely not opposed to it though.
>
> My goal with this is to have 2 separate controls for user-kernel and
> guest-host. Such that MDS/TAA/RFDS gets finer controls to only enable
> user-kernel or guest-host mitigation. This would play well with the Attack
> vector series by David:
>
> https://lore.kernel.org/lkml/20250509162839.3057217-1-david.kaplan@amd.com/
>
> For now this patch is fine as is. I will send update separately including
> the CLEAR_CPU_BUFFERS rework.
Sounds good.
Ah, and the s/mmio_stale_data_clear/cpu_buf_vm_clear rename already landed for
6.16-rc1, so we don't have to overthink the ordering with respect to that
change. :-)
* Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation
2025-06-02 23:41 ` Sean Christopherson
@ 2025-06-03 1:22 ` Pawan Gupta
2025-06-07 2:52 ` Pawan Gupta
0 siblings, 1 reply; 16+ messages in thread
From: Pawan Gupta @ 2025-06-03 1:22 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Mon, Jun 02, 2025 at 04:41:35PM -0700, Sean Christopherson wrote:
> > Regarding validating this, if VERW is executed at VMenter, mitigation was
> > found to be effective. This is similar to other bugs like MDS. I am not a
> > virtualization expert, but I will try to validate whatever I can.
>
> If you can re-verify the mitigation works for VFIO devices, that's more than
> good enough for me. The bar at this point is to not regress the existing mitigation,
> anything beyond that is gravy.
Ok sure. I'll verify that VERW is getting executed for VFIO devices.
> I've verified the KVM mechanics of tracing MMIO mappings fairly well (famous last
> words), the only thing I haven't sanity checked is that the existing coverage for
> VFIO devices is maintained.
* Re: [PATCH 3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
2025-06-02 23:45 ` Sean Christopherson
@ 2025-06-03 1:29 ` Pawan Gupta
0 siblings, 0 replies; 16+ messages in thread
From: Pawan Gupta @ 2025-06-03 1:29 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Mon, Jun 02, 2025 at 04:45:13PM -0700, Sean Christopherson wrote:
> Ah, and the s/mmio_stale_data_clear/cpu_buf_vm_clear rename already landed for
> 6.16-rc1, so we don't have to overthink about the ordering with respect to that
> change. :-)
Yeah, I noticed that. It went through the x86 tree as the bulk of the changes were
outside of KVM.
* Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation
2025-06-03 1:22 ` Pawan Gupta
@ 2025-06-07 2:52 ` Pawan Gupta
0 siblings, 0 replies; 16+ messages in thread
From: Pawan Gupta @ 2025-06-07 2:52 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, kvm, linux-kernel, Borislav Petkov, Jim Mattson
On Mon, Jun 02, 2025 at 06:22:08PM -0700, Pawan Gupta wrote:
> On Mon, Jun 02, 2025 at 04:41:35PM -0700, Sean Christopherson wrote:
> > > Regarding validating this, if VERW is executed at VMenter, mitigation was
> > > found to be effective. This is similar to other bugs like MDS. I am not a
> > > virtualization expert, but I will try to validate whatever I can.
> >
> > If you can re-verify the mitigation works for VFIO devices, that's more than
> > good enough for me. The bar at this point is to not regress the existing mitigation,
> > anything beyond that is gravy.
>
> Ok sure. I'll verify that VERW is getting executed for VFIO devices.
I have verified that with below patches CPU buffer clearing for MMIO Stale
Data is working as expected for VFIO device.
KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a SPTE
KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL
For the above patches:
Tested-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Below are excerpts from the logs with debug prints added:
# virsh start ubuntu24.04 <------ Guest launched
[ 5737.281649] virbr0: port 1(vnet1) entered blocking state
[ 5737.281659] virbr0: port 1(vnet1) entered disabled state
[ 5737.281686] vnet1: entered allmulticast mode
[ 5737.281775] vnet1: entered promiscuous mode
[ 5737.282026] virbr0: port 1(vnet1) entered blocking state
[ 5737.282032] virbr0: port 1(vnet1) entered listening state
[ 5737.775162] vmx_vcpu_enter_exit: 13085 callbacks suppressed
[ 5737.775169] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO <----- Buffers not cleared
[ 5737.775192] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5737.775203] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
...
Domain 'ubuntu24.04' started
[ 5739.323529] virbr0: port 1(vnet1) entered learning state
[ 5741.372527] virbr0: port 1(vnet1) entered forwarding state
[ 5741.372540] virbr0: topology change detected, propagating
[ 5742.906218] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5742.906232] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5742.906234] kvm_intel: vmx_vcpu_enter_exit: CPU buffer NOT cleared for MMIO
[ 5747.906515] vmx_vcpu_enter_exit: 267825 callbacks suppressed
...
# virsh attach-device ubuntu24.04 vfio.xml --live <----- Device attached
[ 5749.913996] ioatdma 0000:00:01.1: Removing dma and dca services
[ 5750.786112] vfio-pci 0000:00:01.1: resetting
[ 5750.891646] vfio-pci 0000:00:01.1: reset done
[ 5750.900521] vfio-pci 0000:00:01.1: resetting
[ 5751.003645] vfio-pci 0000:00:01.1: reset done
Device attached successfully
[ 5751.074292] kvm_intel: vmx_vcpu_enter_exit: CPU buffer cleared for MMIO <----- Buffers getting cleared
[ 5751.074293] kvm_intel: vmx_vcpu_enter_exit: CPU buffer cleared for MMIO
[ 5751.074294] kvm_intel: vmx_vcpu_enter_exit: CPU buffer cleared for MMIO
[ 5756.076427] vmx_vcpu_enter_exit: 68991 callbacks suppressed
* Re: [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation
2025-05-23 1:17 [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Sean Christopherson
` (5 preceding siblings ...)
2025-05-29 3:36 ` [PATCH 0/5] KVM: VMX: Fix MMIO Stale Data Mitigation Pawan Gupta
@ 2025-06-25 22:25 ` Sean Christopherson
6 siblings, 0 replies; 16+ messages in thread
From: Sean Christopherson @ 2025-06-25 22:25 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, linux-kernel, Pawan Gupta, Borislav Petkov, Jim Mattson
On Thu, 22 May 2025 18:17:51 -0700, Sean Christopherson wrote:
> Fix KVM's mitigation of the MMIO Stale Data bug, as the current approach
> doesn't actually detect whether or not a guest has access to MMIO. E.g.
> KVM_DEV_VFIO_FILE_ADD is entirely optional, and obviously only covers VFIO
> devices, and so is a terrible heuristic for "can this vCPU access MMIO?"
>
> To fix the flaw (hopefully), track whether or not a vCPU has access to MMIO
> based on the MMU it will run with. KVM already detects host MMIO when
> installing PTEs in order to force host MMIO to UC (EPT bypasses MTRRs), so
> feeding that information into the MMU is rather straightforward.
>
> [...]
Applied 1-3 to kvm-x86 mmio, and 4-5 to 'kvm-x86 no_assignment' (which is based
on 'irqs' and includes 'mmio' via a merge, to avoid having the mmio changes
depend on the IRQ overhaul).
[1/5] KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL
https://github.com/kvm-x86/linux/commit/c126b46e6fa8
[2/5] KVM: x86/mmu: Locally cache whether a PFN is host MMIO when making a SPTE
https://github.com/kvm-x86/linux/commit/ffe9d7966d01
[3/5] KVM: VMX: Apply MMIO Stale Data mitigation if KVM maps MMIO into the guest
https://github.com/kvm-x86/linux/commit/83ebe7157483
[4/5] Revert "kvm: detect assigned device via irqbypass manager"
https://github.com/kvm-x86/linux/commit/ff845e6a84c8
[5/5] VFIO: KVM: x86: Drop kvm_arch_{start,end}_assignment()
https://github.com/kvm-x86/linux/commit/bbc13ae593e0
--
https://github.com/kvm-x86/kvm-unit-tests/tree/next