* [PATCH] KVM: x86/tdp_mmu: Fix base gfn check when zapping private huge SPTE
@ 2026-03-09 8:38 pcjer
2026-03-09 14:23 ` Sean Christopherson
0 siblings, 1 reply; 3+ messages in thread
From: pcjer @ 2026-03-09 8:38 UTC (permalink / raw)
To: kvm; +Cc: seanjc, pbonzini, linux-kernel
Signed-off-by: pcjer <pcj3195161583@163.com>
---
arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 1266d5452..8482a85d6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1025,8 +1025,8 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
slot = gfn_to_memslot(kvm, gfn);
if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
- (gfn & mask) < start ||
- end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+ (gfn & ~mask) < start ||
+ end < (gfn & ~mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
WARN_ON_ONCE(!can_yield);
if (split_sp) {
sp = split_sp;
--
2.43.0
* Re: [PATCH] KVM: x86/tdp_mmu: Fix base gfn check when zapping private huge SPTE
2026-03-09 8:38 [PATCH] KVM: x86/tdp_mmu: Fix base gfn check when zapping private huge SPTE pcjer
@ 2026-03-09 14:23 ` Sean Christopherson
2026-03-10 1:29 ` Xiaoyao Li
0 siblings, 1 reply; 3+ messages in thread
From: Sean Christopherson @ 2026-03-09 14:23 UTC (permalink / raw)
To: pcjer; +Cc: kvm, pbonzini, linux-kernel
On Mon, Mar 09, 2026, pcjer wrote:
> Signed-off-by: pcjer <pcj3195161583@163.com>
> ---
> arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 1266d5452..8482a85d6 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1025,8 +1025,8 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>
> slot = gfn_to_memslot(kvm, gfn);
> if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
> - (gfn & mask) < start ||
> - end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
> + (gfn & ~mask) < start ||
> + end < (gfn & ~mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
Somewhat to my surprise, this does indeed look like a legitimate fix, ignoring
that the code in question was never merged and was last posted 2+ years ago[*]
(and has long since been replaced).
The bug likely went unnoticed during development because "(gfn & mask) < start"
would almost always be true (mask == 511 for a 2MiB page). Though mask should
really just be inverted from the get-go in this code:
+ if (is_private && kvm_gfn_shared_mask(kvm) &&
+ is_large_pte(iter.old_spte)) {
+ gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+ gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+
+ struct kvm_memory_slot *slot;
+ struct kvm_mmu_page *sp;
+
[*] https://lore.kernel.org/all/c656573ccc68e212416d323d35f884bff25e6e2d.1708933624.git.isaku.yamahata@intel.com
* Re: [PATCH] KVM: x86/tdp_mmu: Fix base gfn check when zapping private huge SPTE
2026-03-09 14:23 ` Sean Christopherson
@ 2026-03-10 1:29 ` Xiaoyao Li
0 siblings, 0 replies; 3+ messages in thread
From: Xiaoyao Li @ 2026-03-10 1:29 UTC (permalink / raw)
To: Sean Christopherson, pcjer; +Cc: kvm, pbonzini, linux-kernel
On 3/9/2026 10:23 PM, Sean Christopherson wrote:
> On Mon, Mar 09, 2026, pcjer wrote:
>> Signed-off-by: pcjer <pcj3195161583@163.com>
>> ---
>> arch/x86/kvm/mmu/tdp_mmu.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
>> index 1266d5452..8482a85d6 100644
>> --- a/arch/x86/kvm/mmu/tdp_mmu.c
>> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
>> @@ -1025,8 +1025,8 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
>>
>> slot = gfn_to_memslot(kvm, gfn);
>> if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
>> - (gfn & mask) < start ||
>> - end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
>> + (gfn & ~mask) < start ||
>> + end < (gfn & ~mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
>
> Somewhat to my surprise, this does indeed look like a legitimate fix, ignoring
> that the code in question was never merged and was last posted 2+ years ago[*]
> (and has long since been replaced).
>
> The bug likely went unnoticed during development because "(gfn & mask) < start"
> would almost always be true (mask == 511 for a 2MiB page). Though mask should
> really just be inverted from the get-go in this code:
>
> + if (is_private && kvm_gfn_shared_mask(kvm) &&
> + is_large_pte(iter.old_spte)) {
> + gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
> + gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
> +
> + struct kvm_memory_slot *slot;
> + struct kvm_mmu_page *sp;
> +
>
> [*] https://lore.kernel.org/all/c656573ccc68e212416d323d35f884bff25e6e2d.1708933624.git.isaku.yamahata@intel.com
>
/facepalm, the buggy code was written by me.