* [PATCH v2 0/3] KVM: Fix and clean up kvm_vcpu_map[_readonly]() usages
@ 2026-04-08 0:11 Peter Fang
2026-04-08 0:11 ` [PATCH v2 1/3] KVM: Fix kvm_vcpu_map[_readonly]() function prototypes Peter Fang
` (2 more replies)
0 siblings, 3 replies; 18+ messages in thread
From: Peter Fang @ 2026-04-08 0:11 UTC (permalink / raw)
To: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan,
Nicholas Piggin
Cc: Yosry Ahmed, Ritesh Harjani, Michael Ellerman,
Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm,
linuxppc-dev, linux-kernel, Peter Fang
kvm_vcpu_map() and kvm_vcpu_map_readonly() are declared to take a gpa_t
in kvm_host.h when they're supposed to take a gfn_t. First fix the
function prototypes, and then refactor them to correctly take a gpa_t,
reducing boilerplate gpa->gfn conversions at all call sites.
No actual harm has been done yet as all of the call sites are correctly
passing in a gfn.
No functional change intended. All changes are compile-tested on x86 and
ppc, which are the current users of these APIs.
---
v1 -> v2:
- Rebased on top of latest kvm.git#master
- As suggested by Yosry, refactor the APIs to reduce boilerplate code
at call sites
v1: https://lore.kernel.org/kvm/20260325092001.613025-1-peter.fang@intel.com/
Peter Fang (3):
KVM: Fix kvm_vcpu_map[_readonly]() function prototypes
KVM: Move page mapping/unmapping APIs in kvm_host.h
KVM: Take gpa_t in kvm_vcpu_map[_readonly]()
arch/powerpc/kvm/book3s_pr.c | 2 +-
arch/x86/kvm/svm/nested.c | 4 ++--
arch/x86/kvm/svm/sev.c | 2 +-
arch/x86/kvm/svm/svm.c | 8 +++----
arch/x86/kvm/vmx/nested.c | 11 ++++-----
include/linux/kvm_host.h | 46 ++++++++++++++++++------------------
6 files changed, 36 insertions(+), 37 deletions(-)
base-commit: df83746075778958954aa0460cca55f4b3fc9c02
--
2.53.0
^ permalink raw reply [flat|nested] 18+ messages in thread* [PATCH v2 1/3] KVM: Fix kvm_vcpu_map[_readonly]() function prototypes 2026-04-08 0:11 [PATCH v2 0/3] KVM: Fix and clean up kvm_vcpu_map[_readonly]() usages Peter Fang @ 2026-04-08 0:11 ` Peter Fang 2026-04-21 23:05 ` Yosry Ahmed 2026-04-08 0:11 ` [PATCH v2 2/3] KVM: Move page mapping/unmapping APIs in kvm_host.h Peter Fang 2026-04-08 0:11 ` [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() Peter Fang 2 siblings, 1 reply; 18+ messages in thread From: Peter Fang @ 2026-04-08 0:11 UTC (permalink / raw) To: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan, Nicholas Piggin Cc: Yosry Ahmed, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel, Peter Fang, KarimAllah Ahmed, Konrad Rzeszutek Wilk kvm_vcpu_map() and kvm_vcpu_map_readonly() should take a gfn instead of a gpa. This appears to be a result of the original kvm_vcpu_map() being declared with the wrong function prototype in kvm_host.h, even though it was correct in the actual implementation in kvm_main.c. No actual harm has been done yet as all of the call sites are correctly passing in a gfn. Plus, both gfn_t and gpa_t are typedef'd to u64 so this change shouldn't have any functional impact. Compile-tested on x86 and ppc, which are the current users of these interfaces. 
Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Peter Fang <peter.fang@intel.com>
---
 include/linux/kvm_host.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6b76e7a6f4c2..4e3bea92a06b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1382,20 +1382,20 @@ void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *mems
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 
-int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map,
+int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 		   bool writable);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
 
-static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa,
+static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn,
 			       struct kvm_host_map *map)
 {
-	return __kvm_vcpu_map(vcpu, gpa, map, true);
+	return __kvm_vcpu_map(vcpu, gfn, map, true);
 }
 
-static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gpa_t gpa,
+static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gfn_t gfn,
 					struct kvm_host_map *map)
 {
-	return __kvm_vcpu_map(vcpu, gpa, map, false);
+	return __kvm_vcpu_map(vcpu, gfn, map, false);
 }
 
 static inline void kvm_vcpu_map_mark_dirty(struct kvm_vcpu *vcpu,
-- 
2.53.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/3] KVM: Fix kvm_vcpu_map[_readonly]() function prototypes 2026-04-08 0:11 ` [PATCH v2 1/3] KVM: Fix kvm_vcpu_map[_readonly]() function prototypes Peter Fang @ 2026-04-21 23:05 ` Yosry Ahmed 0 siblings, 0 replies; 18+ messages in thread From: Yosry Ahmed @ 2026-04-21 23:05 UTC (permalink / raw) To: Peter Fang Cc: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel, KarimAllah Ahmed, Konrad Rzeszutek Wilk On Tue, Apr 07, 2026 at 05:11:28PM -0700, Peter Fang wrote: > kvm_vcpu_map() and kvm_vcpu_map_readonly() should take a gfn instead of > a gpa. This appears to be a result of the original kvm_vcpu_map() being > declared with the wrong function prototype in kvm_host.h, even though > it was correct in the actual implementation in kvm_main.c. > > No actual harm has been done yet as all of the call sites are correctly > passing in a gfn. Plus, both gfn_t and gpa_t are typedef'd to u64 so > this change shouldn't have any functional impact. > > Compile-tested on x86 and ppc, which are the current users of these > interfaces. > > Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API") > Cc: KarimAllah Ahmed <karahmed@amazon.de> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> > Signed-off-by: Peter Fang <peter.fang@intel.com> > --- Reviewed-by: Yosry Ahmed <yosry@kernel.org> ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 2/3] KVM: Move page mapping/unmapping APIs in kvm_host.h
  2026-04-08  0:11 [PATCH v2 0/3] KVM: Fix and clean up kvm_vcpu_map[_readonly]() usages Peter Fang
  2026-04-08  0:11 ` [PATCH v2 1/3] KVM: Fix kvm_vcpu_map[_readonly]() function prototypes Peter Fang
@ 2026-04-08  0:11 ` Peter Fang
  2026-04-21 23:06   ` Yosry Ahmed
  2026-04-08  0:11 ` [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() Peter Fang
  2 siblings, 1 reply; 18+ messages in thread
From: Peter Fang @ 2026-04-08  0:11 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan,
	Nicholas Piggin
  Cc: Yosry Ahmed, Ritesh Harjani, Michael Ellerman,
	Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm,
	linuxppc-dev, linux-kernel, Peter Fang

Move kvm_vcpu_map*() and kvm_vcpu_unmap() so that a subsequent refactor
can use gpa_to_gfn() without a forward declaration.

No functional change intended.

Signed-off-by: Peter Fang <peter.fang@intel.com>
---
 include/linux/kvm_host.h | 46 ++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4e3bea92a06b..484378cfdcc0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1382,29 +1382,6 @@ void mark_page_dirty_in_slot(struct kvm *kvm, const struct kvm_memory_slot *mems
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
 
-int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
-		   bool writable);
-void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
-
-static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn,
-			       struct kvm_host_map *map)
-{
-	return __kvm_vcpu_map(vcpu, gfn, map, true);
-}
-
-static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gfn_t gfn,
-					struct kvm_host_map *map)
-{
-	return __kvm_vcpu_map(vcpu, gfn, map, false);
-}
-
-static inline void kvm_vcpu_map_mark_dirty(struct kvm_vcpu *vcpu,
-					   struct kvm_host_map *map)
-{
-	if (kvm_vcpu_mapped(map))
-		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
-}
-
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
@@ -1916,6 +1893,29 @@ static inline hpa_t pfn_to_hpa(kvm_pfn_t pfn)
 	return (hpa_t)pfn << PAGE_SHIFT;
 }
 
+int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
+		   bool writable);
+void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
+
+static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn,
+			       struct kvm_host_map *map)
+{
+	return __kvm_vcpu_map(vcpu, gfn, map, true);
+}
+
+static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gfn_t gfn,
+					struct kvm_host_map *map)
+{
+	return __kvm_vcpu_map(vcpu, gfn, map, false);
+}
+
+static inline void kvm_vcpu_map_mark_dirty(struct kvm_vcpu *vcpu,
+					   struct kvm_host_map *map)
+{
+	if (kvm_vcpu_mapped(map))
+		kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
+}
+
 static inline bool kvm_is_gpa_in_memslot(struct kvm *kvm, gpa_t gpa)
 {
 	unsigned long hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
-- 
2.53.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread
* Re: [PATCH v2 2/3] KVM: Move page mapping/unmapping APIs in kvm_host.h 2026-04-08 0:11 ` [PATCH v2 2/3] KVM: Move page mapping/unmapping APIs in kvm_host.h Peter Fang @ 2026-04-21 23:06 ` Yosry Ahmed 0 siblings, 0 replies; 18+ messages in thread From: Yosry Ahmed @ 2026-04-21 23:06 UTC (permalink / raw) To: Peter Fang Cc: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 07, 2026 at 05:11:29PM -0700, Peter Fang wrote: > Move kvm_vcpu_map*() and kvm_vcpu_unmap() so that a subsequent refactor > can use gpa_to_gfn() without a forward declaration. > > No functional change intended. > > Signed-off-by: Peter Fang <peter.fang@intel.com> > --- Reviewed-by: Yosry Ahmed <yosry@kernel.org> ^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-08 0:11 [PATCH v2 0/3] KVM: Fix and clean up kvm_vcpu_map[_readonly]() usages Peter Fang 2026-04-08 0:11 ` [PATCH v2 1/3] KVM: Fix kvm_vcpu_map[_readonly]() function prototypes Peter Fang 2026-04-08 0:11 ` [PATCH v2 2/3] KVM: Move page mapping/unmapping APIs in kvm_host.h Peter Fang @ 2026-04-08 0:11 ` Peter Fang 2026-04-21 23:08 ` Yosry Ahmed 2 siblings, 1 reply; 18+ messages in thread From: Peter Fang @ 2026-04-08 0:11 UTC (permalink / raw) To: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan, Nicholas Piggin Cc: Yosry Ahmed, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel, Peter Fang Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing boilerplate at call sites. __kvm_vcpu_map() still takes a gfn_t because guest page mapping is fundamentally GFN-based. No functional change intended. Compile-tested on x86 and ppc, which are the current users of these interfaces. 
Suggested-by: Yosry Ahmed <yosry@kernel.org>
Signed-off-by: Peter Fang <peter.fang@intel.com>
---
 arch/powerpc/kvm/book3s_pr.c |  2 +-
 arch/x86/kvm/svm/nested.c    |  4 ++--
 arch/x86/kvm/svm/sev.c       |  2 +-
 arch/x86/kvm/svm/svm.c       |  8 ++++----
 arch/x86/kvm/vmx/nested.c    | 11 +++++------
 include/linux/kvm_host.h     |  8 ++++----
 6 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 2ba2dd26a7ea..45dea4064618 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -644,7 +644,7 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 	u32 *page;
 	int i, r;
 
-	r = kvm_vcpu_map(vcpu, pte->raddr >> PAGE_SHIFT, &map);
+	r = kvm_vcpu_map(vcpu, pte->raddr, &map);
 	if (r)
 		return;
 
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b36c33255bed..f168b54828bb 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1019,7 +1019,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	}
 
 	vmcb12_gpa = svm->vmcb->save.rax;
-	ret = kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map);
+	ret = kvm_vcpu_map(vcpu, vmcb12_gpa, &map);
 	if (ret == -EINVAL) {
 		kvm_inject_gp(vcpu, 0);
 		return 1;
@@ -1134,7 +1134,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
 	struct kvm_host_map map;
 	int rc;
 
-	rc = kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.vmcb12_gpa), &map);
+	rc = kvm_vcpu_map(vcpu, svm->nested.vmcb12_gpa, &map);
 	if (rc) {
 		if (rc == -EINVAL)
 			kvm_inject_gp(vcpu, 0);
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 3f9c1aa39a0a..524607bb8cc2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -4405,7 +4405,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
 		return 1;
 	}
 
-	if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) {
+	if (kvm_vcpu_map(vcpu, ghcb_gpa, &svm->sev_es.ghcb_map)) {
 		/* Unable to map GHCB from guest */
 		vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n",
 			    ghcb_gpa);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e6477affac9a..823c6a6f3594 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2159,7 +2159,7 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)
 	if (nested_svm_check_permissions(vcpu))
 		return 1;
 
-	ret = kvm_vcpu_map(vcpu, gpa_to_gfn(svm->vmcb->save.rax), &map);
+	ret = kvm_vcpu_map(vcpu, svm->vmcb->save.rax, &map);
 	if (ret) {
 		if (ret == -EINVAL)
 			kvm_inject_gp(vcpu, 0);
@@ -4820,7 +4820,7 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
	 * that, see svm_prepare_switch_to_guest()) which must be
	 * preserved.
	 */
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save))
+	if (kvm_vcpu_map(vcpu, svm->nested.hsave_msr, &map_save))
 		return 1;
 
 	BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
@@ -4854,11 +4854,11 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 	if (!(smram64->efer & EFER_SVME))
 		return 1;
 
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(smram64->svm_guest_vmcb_gpa), &map))
+	if (kvm_vcpu_map(vcpu, smram64->svm_guest_vmcb_gpa, &map))
 		return 1;
 
 	ret = 1;
-	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save))
+	if (kvm_vcpu_map(vcpu, svm->nested.hsave_msr, &map_save))
 		goto unmap_map;
 
 	if (svm_allocate_nested(svm))
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 937aeb474af7..ee3ff76a8678 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -696,7 +696,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	if (kvm_vcpu_map_readonly(vcpu, gpa_to_gfn(vmcs12->msr_bitmap), &map))
+	if (kvm_vcpu_map_readonly(vcpu, vmcs12->msr_bitmap, &map))
 		return false;
 
 	msr_bitmap_l1 = (unsigned long *)map.hva;
@@ -2138,8 +2138,7 @@ static enum nested_evmptrld_status nested_vmx_handle_enlightened_vmptrld(
 
 		nested_release_evmcs(vcpu);
 
-		if (kvm_vcpu_map(vcpu, gpa_to_gfn(evmcs_gpa),
-				 &vmx->nested.hv_evmcs_map))
+		if (kvm_vcpu_map(vcpu, evmcs_gpa, &vmx->nested.hv_evmcs_map))
 			return EVMPTRLD_ERROR;
 
 		vmx->nested.hv_evmcs = vmx->nested.hv_evmcs_map.hva;
@@ -3437,7 +3436,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) {
 		map = &vmx->nested.apic_access_page_map;
 
-		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->apic_access_addr), map)) {
+		if (!kvm_vcpu_map(vcpu, vmcs12->apic_access_addr, map)) {
 			vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(map->pfn));
 		} else {
 			pr_debug_ratelimited("%s: no backing for APIC-access address in vmcs12\n",
@@ -3453,7 +3452,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	if (nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW)) {
 		map = &vmx->nested.virtual_apic_map;
 
-		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->virtual_apic_page_addr), map)) {
+		if (!kvm_vcpu_map(vcpu, vmcs12->virtual_apic_page_addr, map)) {
 			vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn));
 		} else if (nested_cpu_has(vmcs12, CPU_BASED_CR8_LOAD_EXITING) &&
 			   nested_cpu_has(vmcs12, CPU_BASED_CR8_STORE_EXITING) &&
@@ -3479,7 +3478,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 	if (nested_cpu_has_posted_intr(vmcs12)) {
 		map = &vmx->nested.pi_desc_map;
 
-		if (!kvm_vcpu_map(vcpu, gpa_to_gfn(vmcs12->posted_intr_desc_addr), map)) {
+		if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) {
 			vmx->nested.pi_desc =
 				(struct pi_desc *)(((void *)map->hva) +
 				offset_in_page(vmcs12->posted_intr_desc_addr));
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 484378cfdcc0..893a8c76a665 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1897,16 +1897,16 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 		   bool writable);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map);
 
-static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn,
+static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa,
 			       struct kvm_host_map *map)
 {
-	return __kvm_vcpu_map(vcpu, gfn, map, true);
+	return __kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map, true);
 }
 
-static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gfn_t gfn,
+static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gpa_t gpa,
 					struct kvm_host_map *map)
 {
-	return __kvm_vcpu_map(vcpu, gfn, map, false);
+	return __kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map, false);
 }
 
 static inline void kvm_vcpu_map_mark_dirty(struct kvm_vcpu *vcpu,
-- 
2.53.0

^ permalink raw reply related	[flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-08 0:11 ` [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() Peter Fang @ 2026-04-21 23:08 ` Yosry Ahmed 2026-04-21 23:19 ` Sean Christopherson 0 siblings, 1 reply; 18+ messages in thread From: Yosry Ahmed @ 2026-04-21 23:08 UTC (permalink / raw) To: Peter Fang Cc: Paolo Bonzini, Sean Christopherson, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 07, 2026 at 05:11:30PM -0700, Peter Fang wrote: > Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and > kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing > boilerplate at call sites. > > __kvm_vcpu_map() still takes a gfn_t because guest page mapping is > fundamentally GFN-based. > > No functional change intended. > > Compile-tested on x86 and ppc, which are the current users of these > interfaces. > > Suggested-by: Yosry Ahmed <yosry@kernel.org> > Signed-off-by: Peter Fang <peter.fang@intel.com> > --- I was going to suggest a WARN in kvm_vcpu_map() and kvm_vcpu_map_readonly() if the passed GPA is not page-aligned, but Sean usually hates my paranoid WARN suggestions. Anyway: Reviewed-by: Yosry Ahmed <yosry@kernel.org> ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-21 23:08 ` Yosry Ahmed @ 2026-04-21 23:19 ` Sean Christopherson 2026-04-21 23:25 ` Yosry Ahmed 2026-04-21 23:29 ` Sean Christopherson 0 siblings, 2 replies; 18+ messages in thread From: Sean Christopherson @ 2026-04-21 23:19 UTC (permalink / raw) To: Yosry Ahmed Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 21, 2026, Yosry Ahmed wrote: > On Tue, Apr 07, 2026 at 05:11:30PM -0700, Peter Fang wrote: > > Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and > > kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing > > boilerplate at call sites. > > > > __kvm_vcpu_map() still takes a gfn_t because guest page mapping is > > fundamentally GFN-based. > > > > No functional change intended. > > > > Compile-tested on x86 and ppc, which are the current users of these > > interfaces. > > > > Suggested-by: Yosry Ahmed <yosry@kernel.org> > > Signed-off-by: Peter Fang <peter.fang@intel.com> > > --- > > I was going to suggest a WARN in kvm_vcpu_map() and > kvm_vcpu_map_readonly() if the passed GPA is not page-aligned, but Sean > usually hates my paranoid WARN suggestions. Heh, for good reason. Adding such a WARN would be triggered by this code: if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) { vmx->nested.pi_desc = (struct pi_desc *)(((void *)map->hva) + offset_in_page(vmcs12->posted_intr_desc_addr)); The PI descriptor only needs to be 64-bit aligned, not page-aligned. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-21 23:19 ` Sean Christopherson @ 2026-04-21 23:25 ` Yosry Ahmed 2026-04-21 23:29 ` Sean Christopherson 1 sibling, 0 replies; 18+ messages in thread From: Yosry Ahmed @ 2026-04-21 23:25 UTC (permalink / raw) To: Sean Christopherson Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 21, 2026 at 04:19:03PM -0700, Sean Christopherson wrote: > On Tue, Apr 21, 2026, Yosry Ahmed wrote: > > On Tue, Apr 07, 2026 at 05:11:30PM -0700, Peter Fang wrote: > > > Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and > > > kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing > > > boilerplate at call sites. > > > > > > __kvm_vcpu_map() still takes a gfn_t because guest page mapping is > > > fundamentally GFN-based. > > > > > > No functional change intended. > > > > > > Compile-tested on x86 and ppc, which are the current users of these > > > interfaces. > > > > > > Suggested-by: Yosry Ahmed <yosry@kernel.org> > > > Signed-off-by: Peter Fang <peter.fang@intel.com> > > > --- > > > > I was going to suggest a WARN in kvm_vcpu_map() and > > kvm_vcpu_map_readonly() if the passed GPA is not page-aligned, but Sean > > usually hates my paranoid WARN suggestions. > > Heh, for good reason. Adding such a WARN would be triggered by this code: > > if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) { > vmx->nested.pi_desc = > (struct pi_desc *)(((void *)map->hva) + > offset_in_page(vmcs12->posted_intr_desc_addr)); > > The PI descriptor only needs to be 64-bit aligned, not page-aligned. I didn't know that, thanks for pointing out. You meant 64-byte aligned though, right? ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-21 23:19 ` Sean Christopherson 2026-04-21 23:25 ` Yosry Ahmed @ 2026-04-21 23:29 ` Sean Christopherson 2026-04-21 23:41 ` Yosry Ahmed 1 sibling, 1 reply; 18+ messages in thread From: Sean Christopherson @ 2026-04-21 23:29 UTC (permalink / raw) To: Yosry Ahmed Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 21, 2026, Sean Christopherson wrote: > On Tue, Apr 21, 2026, Yosry Ahmed wrote: > > On Tue, Apr 07, 2026 at 05:11:30PM -0700, Peter Fang wrote: > > > Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and > > > kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing > > > boilerplate at call sites. > > > > > > __kvm_vcpu_map() still takes a gfn_t because guest page mapping is > > > fundamentally GFN-based. > > > > > > No functional change intended. > > > > > > Compile-tested on x86 and ppc, which are the current users of these > > > interfaces. > > > > > > Suggested-by: Yosry Ahmed <yosry@kernel.org> > > > Signed-off-by: Peter Fang <peter.fang@intel.com> > > > --- > > > > I was going to suggest a WARN in kvm_vcpu_map() and > > kvm_vcpu_map_readonly() if the passed GPA is not page-aligned, but Sean > > usually hates my paranoid WARN suggestions. > > Heh, for good reason. Adding such a WARN would be triggered by this code: > > if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) { > vmx->nested.pi_desc = > (struct pi_desc *)(((void *)map->hva) + > offset_in_page(vmcs12->posted_intr_desc_addr)); > > The PI descriptor only needs to be 64-bit aligned, not page-aligned. 
To elaborate a bit, I'm all for adding WARNs in flows where something bad is all
but guaranteed to happen if an assumption is violated, or in APIs where there's
a history of goofs and/or subtlety in how the API behaves.

What I'm against is adding WARNs because someone could write bad code in the
future, or because KVM doesn't do XYZ at this time. Such WARNs usually just add
noise, and can even be actively harmful. E.g. in this case, ignoring the PID
usage, a reader might look at the WARN and think it's _wrong_ to map a page in
order to access a subset of the page, which is just not true.

^ permalink raw reply	[flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-21 23:29 ` Sean Christopherson @ 2026-04-21 23:41 ` Yosry Ahmed 2026-04-22 0:27 ` Sean Christopherson 0 siblings, 1 reply; 18+ messages in thread From: Yosry Ahmed @ 2026-04-21 23:41 UTC (permalink / raw) To: Sean Christopherson Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 21, 2026 at 4:29 PM Sean Christopherson <seanjc@google.com> wrote: > > On Tue, Apr 21, 2026, Sean Christopherson wrote: > > On Tue, Apr 21, 2026, Yosry Ahmed wrote: > > > On Tue, Apr 07, 2026 at 05:11:30PM -0700, Peter Fang wrote: > > > > Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and > > > > kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing > > > > boilerplate at call sites. > > > > > > > > __kvm_vcpu_map() still takes a gfn_t because guest page mapping is > > > > fundamentally GFN-based. > > > > > > > > No functional change intended. > > > > > > > > Compile-tested on x86 and ppc, which are the current users of these > > > > interfaces. > > > > > > > > Suggested-by: Yosry Ahmed <yosry@kernel.org> > > > > Signed-off-by: Peter Fang <peter.fang@intel.com> > > > > --- > > > > > > I was going to suggest a WARN in kvm_vcpu_map() and > > > kvm_vcpu_map_readonly() if the passed GPA is not page-aligned, but Sean > > > usually hates my paranoid WARN suggestions. > > > > Heh, for good reason. Adding such a WARN would be triggered by this code: > > > > if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) { > > vmx->nested.pi_desc = > > (struct pi_desc *)(((void *)map->hva) + > > offset_in_page(vmcs12->posted_intr_desc_addr)); > > > > The PI descriptor only needs to be 64-bit aligned, not page-aligned. 
> > To elaborate a bit, I'm all for adding WARNs in flows where something bad is all > but guaranteed to happen if an assumption is violated, or in APIs where there's > a history of goofs and/or subtlety in how the API behaves. > > What I'm against is adding WARNs because someone could write bad code in the > future, or because KVM doesn't do XYZ at this time. Such WARNs usualy just add > noise, and can even be actively harmful. E.g. in this case, ignoring the PID > usage, a reader might look at the WARN and think it's _wrong_ to map a page in > order to access a subset of the page, which is just not true. Yeah I agree with most/all of your objections to my suggestions, it's usually that I don't have enough context to understand how the WARN could be harmful (like here), or am just being too paranoid or defending against bad code as you mentioned. I was mentioning your objections semi-sarcastically and intentionally bringing up the WARN in case it's actually useful. Taking a step back, what I really want to clarify and/or detect misuse of, is that kvm_vcpu_map() will map exactly one page, the one that the GPA lies in. For example, there's nothing protecting against the PID address being the last byte of the page, in which case accessing all of it would be wrong as it spans the mapped page boundary. This is difficult to hit if you are passing in a GFN, as it's more obvious that KVM is mapping one physical page. Perhaps we just need to rename the functions (e.g. kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds checking. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-21 23:41 ` Yosry Ahmed @ 2026-04-22 0:27 ` Sean Christopherson 2026-04-22 20:19 ` Yosry Ahmed 0 siblings, 1 reply; 18+ messages in thread From: Sean Christopherson @ 2026-04-22 0:27 UTC (permalink / raw) To: Yosry Ahmed Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Tue, Apr 21, 2026, Yosry Ahmed wrote: > On Tue, Apr 21, 2026 at 4:29 PM Sean Christopherson <seanjc@google.com> wrote: > > > > On Tue, Apr 21, 2026, Sean Christopherson wrote: > > > On Tue, Apr 21, 2026, Yosry Ahmed wrote: > > > > On Tue, Apr 07, 2026 at 05:11:30PM -0700, Peter Fang wrote: > > > > > Move the conversion from a gpa_t to a gfn_t into kvm_vcpu_map() and > > > > > kvm_vcpu_map_readonly() so that they take a gpa_t directly, reducing > > > > > boilerplate at call sites. > > > > > > > > > > __kvm_vcpu_map() still takes a gfn_t because guest page mapping is > > > > > fundamentally GFN-based. > > > > > > > > > > No functional change intended. > > > > > > > > > > Compile-tested on x86 and ppc, which are the current users of these > > > > > interfaces. > > > > > > > > > > Suggested-by: Yosry Ahmed <yosry@kernel.org> > > > > > Signed-off-by: Peter Fang <peter.fang@intel.com> > > > > > --- > > > > > > > > I was going to suggest a WARN in kvm_vcpu_map() and > > > > kvm_vcpu_map_readonly() if the passed GPA is not page-aligned, but Sean > > > > usually hates my paranoid WARN suggestions. > > > > > > Heh, for good reason. 
Adding such a WARN would be triggered by this code: > > > > > > if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) { > > > vmx->nested.pi_desc = > > > (struct pi_desc *)(((void *)map->hva) + > > > offset_in_page(vmcs12->posted_intr_desc_addr)); > > > > > > The PI descriptor only needs to be 64-bit aligned, not page-aligned. To answer your other question: yes, 64-byte, not 64-bit. > > To elaborate a bit, I'm all for adding WARNs in flows where something bad is all > > but guaranteed to happen if an assumption is violated, or in APIs where there's > > a history of goofs and/or subtlety in how the API behaves. > > > > What I'm against is adding WARNs because someone could write bad code in the > > future, or because KVM doesn't do XYZ at this time. Such WARNs usualy just add > > noise, and can even be actively harmful. E.g. in this case, ignoring the PID > > usage, a reader might look at the WARN and think it's _wrong_ to map a page in > > order to access a subset of the page, which is just not true. > > Yeah I agree with most/all of your objections to my suggestions, it's > usually that I don't have enough context to understand how the WARN > could be harmful (like here), or am just being too paranoid or > defending against bad code as you mentioned. I was mentioning your > objections semi-sarcastically and intentionally bringing up the WARN > in case it's actually useful. > > Taking a step back, what I really want to clarify and/or detect misuse > of, is that kvm_vcpu_map() will map exactly one page, the one that the > GPA lies in. For example, there's nothing protecting against the PID > address being the last byte of the page, Well, technically there is: CC(!kvm_vcpu_is_legal_aligned_gpa(vcpu, vmcs12->posted_intr_desc_addr, 64)))) return -EINVAL; But I'm pretty sure what you're saying is that "nothing in the common helper code protects against a stupid caller". > in which case accessing all of it would be wrong as it spans the mapped page > boundary. 
This is difficult to hit if you are passing in a GFN, as it's more > obvious that KVM is mapping one physical page. Eh, that just leads to a different class of bugs (and possibly even *worse* bugs). E.g. caller passes in a GFN, accesses the kernel mapping beyond the page, and reads/writes arbitrary kernel memory (pfn_valid() case using the direct map), or hits a !PRESENT #PF (remap case). > Perhaps we just need to rename the functions (e.g. > kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds > checking. Definitely the latter. Or both I guess, but probably just the latter. Commit 025dde582bbf ("KVM: Harden guest memory APIs against out-of-bounds accesses") added that type of hardening for the "slow" APIs, exactly because of the type of OOB bug you're describing: commit f559b2e9c5c5 ("KVM: nSVM: Ignore nCR3[4:0] when loading PDPTEs from memory"). Actually, that's useful feedback for patch 3. __kvm_vcpu_map() should do the GPA=>GFN conversion, not its caller. Anyways, back to the hardening. We can do it with minimal additional churn. After patch 3 (passing a @gpa to __kvm_vcpu_map(), not a @gfn), do the below over a few patches (completely untested). This way the common case of mapping and accessing an entire page Just Works, and flows like the PI descriptor handling don't have to manually provide the length (which also can be error prone). 
--- arch/x86/kvm/vmx/nested.c | 12 ++++-------- include/linux/kvm_host.h | 22 ++++++++++++++++------ virt/kvm/kvm_main.c | 28 +++++++++++++++++++--------- 3 files changed, 39 insertions(+), 23 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index ee3ff76a8678..eb75d97c7453 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3453,7 +3453,7 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) map = &vmx->nested.virtual_apic_map; if (!kvm_vcpu_map(vcpu, vmcs12->virtual_apic_page_addr, map)) { - vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, pfn_to_hpa(map->pfn)); + vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, map->hpa)); } else if (nested_cpu_has(vmcs12, CPU_BASED_CR8_LOAD_EXITING) && nested_cpu_has(vmcs12, CPU_BASED_CR8_STORE_EXITING) && !nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) { @@ -3478,12 +3478,9 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) if (nested_cpu_has_posted_intr(vmcs12)) { map = &vmx->nested.pi_desc_map; - if (!kvm_vcpu_map(vcpu, vmcs12->posted_intr_desc_addr, map)) { - vmx->nested.pi_desc = - (struct pi_desc *)(((void *)map->hva) + - offset_in_page(vmcs12->posted_intr_desc_addr)); - vmcs_write64(POSTED_INTR_DESC_ADDR, - pfn_to_hpa(map->pfn) + offset_in_page(vmcs12->posted_intr_desc_addr)); + if (!kvm_vcpu_map_ptr(vcpu, vmcs12->posted_intr_desc_addr, + vmx->nested.pi_desc, map)) { + vmcs_write64(POSTED_INTR_DESC_ADDR, map->hpa); } else { /* * Defer the KVM_INTERNAL_EXIT until KVM tries to @@ -3491,7 +3488,6 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) * descriptor. (Note that KVM may do this when it * should not, per the architectural specification.) 
*/ - vmx->nested.pi_desc = NULL; pin_controls_clearbit(vmx, PIN_BASED_POSTED_INTR); } } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 893a8c76a665..da6f08aa0ac4 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -291,10 +291,11 @@ struct kvm_host_map { */ struct page *pinned_page; struct page *page; - void *hva; - kvm_pfn_t pfn; kvm_pfn_t gfn; bool writable; + + hpa_t hpa; + void *hva; }; /* @@ -1893,22 +1894,31 @@ static inline hpa_t pfn_to_hpa(kvm_pfn_t pfn) return (hpa_t)pfn << PAGE_SHIFT; } -int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, - bool writable); +int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, gpa_t len, + struct kvm_host_map *map, bool writable); void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map); static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map) { - return __kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map, true); + return __kvm_vcpu_map(vcpu, gpa, PAGE_SIZE, map, true); } static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map) { - return __kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map, false); + return __kvm_vcpu_map(vcpu, gpa, PAGE_SIZE, map, false); } +#define kvm_vcpu_map_ptr(__vcpu, __gpa, __ptr, __map) \ +({ \ + int r; \ + \ + r = __kvm_vcpu_map(__vcpu, __gpa, sizeof(*(__ptr)), __map, true); \ + __ptr = !r ? 
(__map)->hva : NULL; \ + r; \ +}) + static inline void kvm_vcpu_map_mark_dirty(struct kvm_vcpu *vcpu, struct kvm_host_map *map) { diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 9093251beb39..e8d2e98b0068 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3114,9 +3114,10 @@ struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write) } EXPORT_SYMBOL_FOR_KVM_INTERNAL(__gfn_to_page); -int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, - bool writable) +int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, gpa_t len, + struct kvm_host_map *map, bool writable) { + gfn_t gfn = gpa_to_gfn(gpa); struct kvm_follow_pfn kfp = { .slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn), .gfn = gfn, @@ -3124,6 +3125,10 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, .refcounted_page = &map->pinned_page, .pin = true, }; + kvm_pfn_t pfn; + + if (WARN_ON_ONCE(offset_in_page(gpa) + len > PAGE_SIZE)) + return -EINVAL; map->pinned_page = NULL; map->page = NULL; @@ -3131,20 +3136,25 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, map->gfn = gfn; map->writable = writable; - map->pfn = kvm_follow_pfn(&kfp); - if (is_error_noslot_pfn(map->pfn)) + pfn = kvm_follow_pfn(&kfp); + if (is_error_noslot_pfn(pfn)) return -EINVAL; - if (pfn_valid(map->pfn)) { - map->page = pfn_to_page(map->pfn); + map->hpa = pfn_to_hpa(pfn); + if (pfn_valid(pfn)) { + map->page = pfn_to_page(pfn); map->hva = kmap(map->page); #ifdef CONFIG_HAS_IOMEM } else { - map->hva = memremap(pfn_to_hpa(map->pfn), PAGE_SIZE, MEMREMAP_WB); + map->hva = memremap(map->hpa, PAGE_SIZE, MEMREMAP_WB); + if (!map->hva) + return -EFAULT; #endif } - return map->hva ? 
0 : -EFAULT; + map->hpa += offset_in_page(gpa); + map->hva += offset_in_page(gpa); + return 0; } EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_map); @@ -3157,7 +3167,7 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map) kunmap(map->page); #ifdef CONFIG_HAS_IOMEM else - memunmap(map->hva); + memunmap(PTR_ALIGN_DOWN(map->hva, PAGE_SIZE)); #endif if (map->writable) base-commit: d9d61b2f6793deb6b72aa792ae70c09fa26fda37 -- ^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-22 0:27 ` Sean Christopherson @ 2026-04-22 20:19 ` Yosry Ahmed 2026-04-22 20:34 ` Sean Christopherson 2026-04-23 7:49 ` Peter Fang 0 siblings, 2 replies; 18+ messages in thread From: Yosry Ahmed @ 2026-04-22 20:19 UTC (permalink / raw) To: Sean Christopherson Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel > > > To elaborate a bit, I'm all for adding WARNs in flows where something bad is all > > > but guaranteed to happen if an assumption is violated, or in APIs where there's > > > a history of goofs and/or subtlety in how the API behaves. > > > > > > What I'm against is adding WARNs because someone could write bad code in the > > > future, or because KVM doesn't do XYZ at this time. Such WARNs usualy just add > > > noise, and can even be actively harmful. E.g. in this case, ignoring the PID > > > usage, a reader might look at the WARN and think it's _wrong_ to map a page in > > > order to access a subset of the page, which is just not true. > > > > Yeah I agree with most/all of your objections to my suggestions, it's > > usually that I don't have enough context to understand how the WARN > > could be harmful (like here), or am just being too paranoid or > > defending against bad code as you mentioned. I was mentioning your > > objections semi-sarcastically and intentionally bringing up the WARN > > in case it's actually useful. > > > > Taking a step back, what I really want to clarify and/or detect misuse > > of, is that kvm_vcpu_map() will map exactly one page, the one that the > > GPA lies in. 
For example, there's nothing protecting against the PID > > address being the last byte of the page, > > Well, technically there is: > > CC(!kvm_vcpu_is_legal_aligned_gpa(vcpu, vmcs12->posted_intr_desc_addr, 64)))) > return -EINVAL; > > But I'm pretty sure what you're saying is that "nothing in the common helper code > protects against a stupid caller". Yes. Now I am actually glad I brought up the WARN and you elaborated your thoughts, because it made me think and spell out my actual concern (that my brain translated into just WARN initially): we need bounds checking. > > > in which case accessing all of it would be wrong as it spans the mapped page > > boundary. This is difficult to hit if you are passing in a GFN, as it's more > > obvious that KVM is mapping one physical page. > > Eh, that just leads to a different class of bugs (and possible even *worse* bugs). > E.g. caller passes in a GFN, accesses the kernel mapping beyond the page, and > reads/write arbitrary kernel memory (pfn_valid() case using the direct map), or > hits a !PRESENT #PF (remap case). > > > Perhaps we just need to rename the functions (e.g. > > kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds > > checking. > > Definitely the latter. Or both I guess, but probably just the latter. I think both. I think renaming to kvm_vcpu_map_page() (and similar for others) would further clarify things, especially with the introduction of kvm_vcpu_map_ptr() below. 
After > patch 3 (passing a @gpa to __kvm_vcpu_map(), not a @gfn), do the below over a few > patches (completely untested). This way the common case of mapping and accessing > an entire page Just Works, and flows like the PI descriptor handling don't have to > many provide the length (which also can be error prone). Yeah probably this (maybe not in the same order): - Convert map->pfn to map->hpa. - Pass size to __kvm_vcpu_map() and do bounds checking. - Rename kvm_vcpu_map() and __kvm_vpcu_map() to kvm_vcpu_map_page() and __kvm_vcpu_map_page(). - Introduce kvm_vcpu_map_ptr() wrapper and simplify the nested PID call site. Generally looks good with a small nit/question below. Peter, would you be interested in extending the series to do this? If not, I can send a follow up on top of your series when it's hashed out. [..] > @@ -1893,22 +1894,31 @@ static inline hpa_t pfn_to_hpa(kvm_pfn_t pfn) > return (hpa_t)pfn << PAGE_SHIFT; > } > > -int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, > - bool writable); > +int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, gpa_t len, > + struct kvm_host_map *map, bool writable); > void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map); > > static inline int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, > struct kvm_host_map *map) > { > - return __kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map, true); > + return __kvm_vcpu_map(vcpu, gpa, PAGE_SIZE, map, true); > } > > static inline int kvm_vcpu_map_readonly(struct kvm_vcpu *vcpu, gpa_t gpa, > struct kvm_host_map *map) > { > - return __kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map, false); > + return __kvm_vcpu_map(vcpu, gpa, PAGE_SIZE, map, false); > } > > +#define kvm_vcpu_map_ptr(__vcpu, __gpa, __ptr, __map) \ > +({ \ > + int r; \ > + \ > + r = __kvm_vcpu_map(__vcpu, __gpa, sizeof(*(__ptr)), __map, true); \ > + __ptr = !r ? 
(__map)->hva : NULL; \ > + r; \ > +}) > + > static inline void kvm_vcpu_map_mark_dirty(struct kvm_vcpu *vcpu, > struct kvm_host_map *map) > { > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index 9093251beb39..e8d2e98b0068 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -3114,9 +3114,10 @@ struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write) > } > EXPORT_SYMBOL_FOR_KVM_INTERNAL(__gfn_to_page); > > -int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, > - bool writable) > +int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, gpa_t len, > + struct kvm_host_map *map, bool writable) > { > + gfn_t gfn = gpa_to_gfn(gpa); > struct kvm_follow_pfn kfp = { > .slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn), > .gfn = gfn, > @@ -3124,6 +3125,10 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, > .refcounted_page = &map->pinned_page, > .pin = true, > }; > + kvm_pfn_t pfn; > + > + if (WARN_ON_ONCE(offset_in_page(gpa) + len > PAGE_SIZE)) > + return -EINVAL; Maybe do the bounds checking after initializing 'map', then kvm_vcpu_map_ptr() wouldn't need to explicitly set the pointer to NULL on failure? There is already possibility of failure after initialization anyway. 
> > map->pinned_page = NULL; > map->page = NULL; > @@ -3131,20 +3136,25 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, > map->gfn = gfn; > map->writable = writable; > > - map->pfn = kvm_follow_pfn(&kfp); > - if (is_error_noslot_pfn(map->pfn)) > + pfn = kvm_follow_pfn(&kfp); > + if (is_error_noslot_pfn(pfn)) > return -EINVAL; > > - if (pfn_valid(map->pfn)) { > - map->page = pfn_to_page(map->pfn); > + map->hpa = pfn_to_hpa(pfn); > + if (pfn_valid(pfn)) { > + map->page = pfn_to_page(pfn); > map->hva = kmap(map->page); > #ifdef CONFIG_HAS_IOMEM > } else { > - map->hva = memremap(pfn_to_hpa(map->pfn), PAGE_SIZE, MEMREMAP_WB); > + map->hva = memremap(map->hpa, PAGE_SIZE, MEMREMAP_WB); > + if (!map->hva) > + return -EFAULT; > #endif > } > > - return map->hva ? 0 : -EFAULT; > + map->hpa += offset_in_page(gpa); > + map->hva += offset_in_page(gpa); > + return 0; > } > EXPORT_SYMBOL_FOR_KVM_INTERNAL(__kvm_vcpu_map); [..] ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-22 20:19 ` Yosry Ahmed @ 2026-04-22 20:34 ` Sean Christopherson 2026-04-22 21:44 ` Yosry Ahmed 2026-04-23 7:49 ` Peter Fang 1 sibling, 1 reply; 18+ messages in thread From: Sean Christopherson @ 2026-04-22 20:34 UTC (permalink / raw) To: Yosry Ahmed Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Wed, Apr 22, 2026, Yosry Ahmed wrote: > > > Perhaps we just need to rename the functions (e.g. > > > kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds > > > checking. > > > > Definitely the latter. Or both I guess, but probably just the latter. > > I think both. I think renaming to kvm_vcpu_map_page() (and similar for > others) would further clarify things, especially with the introduction > of kvm_vcpu_map_ptr() below. I don't like "page"; it's too easy to incorrectly assume "page" means "struct page". There are KVM APIs that do use "page" generically, e.g. kvm_read_guest_page(), but for this particular case I'd like to stay away from "page"; there's a _lot_ of ugly history around mapping "struct page" vs. "other" memory in KVM. 
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > > index 9093251beb39..e8d2e98b0068 100644 > > --- a/virt/kvm/kvm_main.c > > +++ b/virt/kvm/kvm_main.c > > @@ -3114,9 +3114,10 @@ struct page *__gfn_to_page(struct kvm *kvm, gfn_t gfn, bool write) > > } > > EXPORT_SYMBOL_FOR_KVM_INTERNAL(__gfn_to_page); > > > > -int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, > > - bool writable) > > +int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, gpa_t len, > > + struct kvm_host_map *map, bool writable) > > { > > + gfn_t gfn = gpa_to_gfn(gpa); > > struct kvm_follow_pfn kfp = { > > .slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn), > > .gfn = gfn, > > @@ -3124,6 +3125,10 @@ int __kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map, > > .refcounted_page = &map->pinned_page, > > .pin = true, > > }; > > + kvm_pfn_t pfn; > > + > > + if (WARN_ON_ONCE(offset_in_page(gpa) + len > PAGE_SIZE)) > > + return -EINVAL; > > Maybe do the bounds checking after initializing 'map', then > kvm_vcpu_map_ptr() wouldn't need to explicitly set the pointer to NULL > on failure? Hmm, no. I don't want to encourage the caller to rely on the state of @map if the call fails. > There is already possibility of failure after initialization anyway. Sure, but the caller shouldn't rely on that. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-22 20:34 ` Sean Christopherson @ 2026-04-22 21:44 ` Yosry Ahmed 2026-04-22 22:17 ` Sean Christopherson 0 siblings, 1 reply; 18+ messages in thread From: Yosry Ahmed @ 2026-04-22 21:44 UTC (permalink / raw) To: Sean Christopherson Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Wed, Apr 22, 2026 at 1:34 PM Sean Christopherson <seanjc@google.com> wrote: > > On Wed, Apr 22, 2026, Yosry Ahmed wrote: > > > > Perhaps we just need to rename the functions (e.g. > > > > kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds > > > > checking. > > > > > > Definitely the latter. Or both I guess, but probably just the latter. > > > > I think both. I think renaming to kvm_vcpu_map_page() (and similar for > > others) would further clarify things, especially with the introduction > > of kvm_vcpu_map_ptr() below. > > I don't like "page" it's too easy to incorrectly assume "page" means "struct page". > There are KVM APIs that do use "page" generically, e.g. kvm_read_guest_page(), > but for this particular case I'd like to stay away from "page; there's a _lot_ > of ugly history around mapping "struct page" vs. "other" memory in KVM. Maybe kvm_vcpu_map_guest_page()? or if you reaaaally wanna be clear about it kvm_vcpu_map_page_sized_chunk_of_guest_memory() :P I don't feel strongly but I prefer we put "page" in there somewhere, up to you :) ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-22 21:44 ` Yosry Ahmed @ 2026-04-22 22:17 ` Sean Christopherson 2026-04-22 22:19 ` Yosry Ahmed 0 siblings, 1 reply; 18+ messages in thread From: Sean Christopherson @ 2026-04-22 22:17 UTC (permalink / raw) To: Yosry Ahmed Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Wed, Apr 22, 2026, Yosry Ahmed wrote: > On Wed, Apr 22, 2026 at 1:34 PM Sean Christopherson <seanjc@google.com> wrote: > > > > On Wed, Apr 22, 2026, Yosry Ahmed wrote: > > > > > Perhaps we just need to rename the functions (e.g. > > > > > kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds > > > > > checking. > > > > > > > > Definitely the latter. Or both I guess, but probably just the latter. > > > > > > I think both. I think renaming to kvm_vcpu_map_page() (and similar for > > > others) would further clarify things, especially with the introduction > > > of kvm_vcpu_map_ptr() below. > > > > I don't like "page" it's too easy to incorrectly assume "page" means "struct page". > > There are KVM APIs that do use "page" generically, e.g. kvm_read_guest_page(), > > but for this particular case I'd like to stay away from "page; there's a _lot_ > > of ugly history around mapping "struct page" vs. "other" memory in KVM. > > Maybe kvm_vcpu_map_guest_page()? or if you reaaaally wanna be clear > about it kvm_vcpu_map_page_sized_chunk_of_guest_memory() :P And rename all the extensions to .java while we're at it. I can live with kvm_vcpu_map_guest_page(). kvm_vcpu_map_ptr() becomes a bit odd, but kvm_vcpu_map_guest_ptr() is even worse. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-22 22:17 ` Sean Christopherson @ 2026-04-22 22:19 ` Yosry Ahmed 0 siblings, 0 replies; 18+ messages in thread From: Yosry Ahmed @ 2026-04-22 22:19 UTC (permalink / raw) To: Sean Christopherson Cc: Peter Fang, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Wed, Apr 22, 2026 at 3:17 PM Sean Christopherson <seanjc@google.com> wrote: > > On Wed, Apr 22, 2026, Yosry Ahmed wrote: > > On Wed, Apr 22, 2026 at 1:34 PM Sean Christopherson <seanjc@google.com> wrote: > > > > > > On Wed, Apr 22, 2026, Yosry Ahmed wrote: > > > > > > Perhaps we just need to rename the functions (e.g. > > > > > > kvm_vcpu_map_page()), or more intrusively pass in a size and do bounds > > > > > > checking. > > > > > > > > > > Definitely the latter. Or both I guess, but probably just the latter. > > > > > > > > I think both. I think renaming to kvm_vcpu_map_page() (and similar for > > > > others) would further clarify things, especially with the introduction > > > > of kvm_vcpu_map_ptr() below. > > > > > > I don't like "page" it's too easy to incorrectly assume "page" means "struct page". > > > There are KVM APIs that do use "page" generically, e.g. kvm_read_guest_page(), > > > but for this particular case I'd like to stay away from "page; there's a _lot_ > > > of ugly history around mapping "struct page" vs. "other" memory in KVM. > > > > Maybe kvm_vcpu_map_guest_page()? or if you reaaaally wanna be clear > > about it kvm_vcpu_map_page_sized_chunk_of_guest_memory() :P > > And rename all the extensions to .java while we're at it. Lovely. > I can live with kvm_vcpu_map_guest_page(). kvm_vcpu_map_ptr() becomes a bit > odd, but kvm_vcpu_map_guest_ptr() is even worse. FWIW I still think kvm_vcpu_map_page() and kvm_vcpu_map_ptr() are the best options. 
kvm_vcpu_map_guest_page() and kvm_vcpu_map_ptr() are also fine imo. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/3] KVM: Take gpa_t in kvm_vcpu_map[_readonly]() 2026-04-22 20:19 ` Yosry Ahmed 2026-04-22 20:34 ` Sean Christopherson @ 2026-04-23 7:49 ` Peter Fang 1 sibling, 0 replies; 18+ messages in thread From: Peter Fang @ 2026-04-23 7:49 UTC (permalink / raw) To: Yosry Ahmed Cc: Sean Christopherson, Paolo Bonzini, Madhavan Srinivasan, Nicholas Piggin, Ritesh Harjani, Michael Ellerman, Christophe Leroy (CS GROUP), Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, kvm, linuxppc-dev, linux-kernel On Wed, Apr 22, 2026 at 08:19:45PM +0000, Yosry Ahmed wrote: > > > > Anyways, back to the hardening. We can do it with minimal additional churn. After > > patch 3 (passing a @gpa to __kvm_vcpu_map(), not a @gfn), do the below over a few > > patches (completely untested). This way the common case of mapping and accessing > > an entire page Just Works, and flows like the PI descriptor handling don't have to > > many provide the length (which also can be error prone). > > Yeah probably this (maybe not in the same order): > - Convert map->pfn to map->hpa. > - Pass size to __kvm_vcpu_map() and do bounds checking. > - Rename kvm_vcpu_map() and __kvm_vpcu_map() to kvm_vcpu_map_page() and > __kvm_vcpu_map_page(). > - Introduce kvm_vcpu_map_ptr() wrapper and simplify the nested PID call > site. > > Generally looks good with a small nit/question below. Peter, would you > be interested in extending the series to do this? If not, I can send a > follow up on top of your series when it's hashed out. Yep, I can extend the series into v3. Adding kvm_vcpu_map_ptr() and renaming the original APIs make sense to me, and I want to check all the call sites again to see if anything else can be improved. Thanks for the discussion. The out-of-bounds issue was not something I had considered. > > [..] ^ permalink raw reply [flat|nested] 18+ messages in thread