* [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort()
@ 2026-03-04 16:22 Fuad Tabba
2026-03-04 16:22 ` [PATCH v1 1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault Fuad Tabba
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Fuad Tabba @ 2026-03-04 16:22 UTC (permalink / raw)
To: kvm, kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, yangyicong, wangzhou1, tabba
While digging into arch/arm64/kvm/mmu.c with the intention of finally
refactoring user_mem_abort(), I ran into a couple of latent bugs that
we should probably fix right now before attempting any major plumbing.
You might experience some déjà vu looking at the first patch. A while
back (in 5f9466b50c1b), I fixed a struct page reference leak on an
early error return in this exact same block. It turns out that another
early exit was introduced later on (for exclusive/atomic faults), and it
fell into the exact same trap of leaking the page.
The fact that this keeps happening really highlights how dangerous this
"danger zone" between faulting in the PFN and taking the MMU lock has
become. To stop playing whack-a-mole with inline `kvm_release_page_unused()`
calls, I've routed all the early exits here to a unified `out_put_page`
label so they are handled safely together.
The second patch addresses a staleness bug with `vma_shift` when handling
nested stage-2 faults. We currently truncate the mapping size for the
nested guest, but forget to update the shift, which results in us sending
the wrong boundaries to userspace if we subsequently trip over a hardware
poisoned page.
Finding these issues just reinforces how fragile this 300-line function
has become. We really need to refactor it to make the state flow easier
to reason about. I'm currently putting together a series to do just that
(introducing a proper fault state object), so stay tuned for an RFC on
that front.
Based on Linux 7.0-rc2.
Cheers,
/fuad
Fuad Tabba (2):
KVM: arm64: Fix page leak in user_mem_abort() on atomic fault
KVM: arm64: Fix vma_shift staleness on nested hwpoison path
arch/arm64/kvm/mmu.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
base-commit: 11439c4635edd669ae435eec308f4ab8a0804808
--
2.53.0.473.g4a7958ca14-goog
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v1 1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault
2026-03-04 16:22 [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Fuad Tabba
@ 2026-03-04 16:22 ` Fuad Tabba
2026-03-05 1:57 ` Yao Yuan
2026-03-04 16:22 ` [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path Fuad Tabba
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Fuad Tabba @ 2026-03-04 16:22 UTC (permalink / raw)
To: kvm, kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, yangyicong, wangzhou1, tabba
When a guest performs an atomic/exclusive operation on memory lacking
the required attributes, user_mem_abort() injects a data abort and
returns early. However, it fails to release the reference to the
host page acquired via __kvm_faultin_pfn().
A malicious guest could repeatedly trigger this fault, leaking host
page references and eventually causing host memory exhaustion (OOM).
Fix this by consolidating the early error returns to a new out_put_page
label that correctly calls kvm_release_page_unused().
Fixes: 2937aeec9dc5 ("KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory")
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/mmu.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ec2eee857208..e1d6a4f591a9 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1837,10 +1837,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault && s2_force_noncacheable)
ret = -ENOEXEC;
- if (ret) {
- kvm_release_page_unused(page);
- return ret;
- }
+ if (ret)
+ goto out_put_page;
/*
* Guest performs atomic/exclusive operations on memory with unsupported
@@ -1850,7 +1848,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
*/
if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
- return 1;
+ ret = 1;
+ goto out_put_page;
}
if (nested)
@@ -1936,6 +1935,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
mark_page_dirty_in_slot(kvm, memslot, gfn);
return ret != -EAGAIN ? ret : 0;
+
+out_put_page:
+ kvm_release_page_unused(page);
+ return ret;
}
/* Resolve the access fault by making the page young again. */
--
2.53.0.473.g4a7958ca14-goog
* [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path
2026-03-04 16:22 [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Fuad Tabba
2026-03-04 16:22 ` [PATCH v1 1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault Fuad Tabba
@ 2026-03-04 16:22 ` Fuad Tabba
2026-03-05 16:07 ` Marc Zyngier
2026-03-05 16:51 ` [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Marc Zyngier
2026-03-06 10:48 ` Marc Zyngier
3 siblings, 1 reply; 10+ messages in thread
From: Fuad Tabba @ 2026-03-04 16:22 UTC (permalink / raw)
To: kvm, kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, joey.gouly, suzuki.poulose, yuzenghui,
catalin.marinas, will, yangyicong, wangzhou1, tabba
When user_mem_abort() handles a nested stage-2 fault, it truncates
vma_pagesize to respect the guest's mapping size. However, the local
variable vma_shift is never updated to match this new size.
If the underlying host page turns out to be hardware poisoned,
kvm_send_hwpoison_signal() is called with the original, larger
vma_shift instead of the actual mapping size. This signals incorrect
poison boundaries to userspace and breaks hugepage memory poison
containment for nested VMs.
Update vma_shift to match the truncated vma_pagesize when operating
on behalf of a nested hypervisor.
Fixes: fd276e71d1e7 ("KVM: arm64: nv: Handle shadow stage 2 page faults")
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/mmu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e1d6a4f591a9..b08240e0cab1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1751,6 +1751,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
force_pte = (max_map_size == PAGE_SIZE);
vma_pagesize = min_t(long, vma_pagesize, max_map_size);
+ vma_shift = force_pte ? PAGE_SHIFT : __ffs(vma_pagesize);
}
/*
--
2.53.0.473.g4a7958ca14-goog
* Re: [PATCH v1 1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault
2026-03-04 16:22 ` [PATCH v1 1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault Fuad Tabba
@ 2026-03-05 1:57 ` Yao Yuan
0 siblings, 0 replies; 10+ messages in thread
From: Yao Yuan @ 2026-03-05 1:57 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, kvmarm, linux-arm-kernel, maz, oliver.upton, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, will, yangyicong,
wangzhou1
On Wed, Mar 04, 2026 at 04:22:21PM +0800, Fuad Tabba wrote:
> When a guest performs an atomic/exclusive operation on memory lacking
> the required attributes, user_mem_abort() injects a data abort and
> returns early. However, it fails to release the reference to the
> host page acquired via __kvm_faultin_pfn().
>
> A malicious guest could repeatedly trigger this fault, leaking host
> page references and eventually causing host memory exhaustion (OOM).
>
> Fix this by consolidating the early error returns to a new out_put_page
> label that correctly calls kvm_release_page_unused().
>
> Fixes: 2937aeec9dc5 ("KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory")
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/kvm/mmu.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index ec2eee857208..e1d6a4f591a9 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1837,10 +1837,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (exec_fault && s2_force_noncacheable)
> ret = -ENOEXEC;
>
> - if (ret) {
> - kvm_release_page_unused(page);
> - return ret;
> - }
> + if (ret)
> + goto out_put_page;
>
> /*
> * Guest performs atomic/exclusive operations on memory with unsupported
> @@ -1850,7 +1848,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> */
Hi Fuad,
> if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
> kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
> - return 1;
> + ret = 1;
> + goto out_put_page;
I thought about a way to do this without introducing a new label,
but it doesn't work with just small changes, due to the lock
assertion inside kvm_release_faultin_page()...
Reviewed-by: Yuan Yao <yaoyuan@linux.alibaba.com>
> }
>
> if (nested)
> @@ -1936,6 +1935,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> mark_page_dirty_in_slot(kvm, memslot, gfn);
>
> return ret != -EAGAIN ? ret : 0;
> +
> +out_put_page:
> + kvm_release_page_unused(page);
> + return ret;
> }
>
> /* Resolve the access fault by making the page young again. */
> --
> 2.53.0.473.g4a7958ca14-goog
>
* Re: [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path
2026-03-04 16:22 ` [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path Fuad Tabba
@ 2026-03-05 16:07 ` Marc Zyngier
2026-03-05 16:13 ` Fuad Tabba
0 siblings, 1 reply; 10+ messages in thread
From: Marc Zyngier @ 2026-03-05 16:07 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, kvmarm, linux-arm-kernel, oliver.upton, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, will, yangyicong,
wangzhou1
Hi Fuad,
On Wed, 04 Mar 2026 16:22:22 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> When user_mem_abort() handles a nested stage-2 fault, it truncates
> vma_pagesize to respect the guest's mapping size. However, the local
> variable vma_shift is never updated to match this new size.
>
> If the underlying host page turns out to be hardware poisoned,
> kvm_send_hwpoison_signal() is called with the original, larger
> vma_shift instead of the actual mapping size. This signals incorrect
> poison boundaries to userspace and breaks hugepage memory poison
> containment for nested VMs.
>
> Update vma_shift to match the truncated vma_pagesize when operating
> on behalf of a nested hypervisor.
>
> Fixes: fd276e71d1e7 ("KVM: arm64: nv: Handle shadow stage 2 page faults")
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/kvm/mmu.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index e1d6a4f591a9..b08240e0cab1 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1751,6 +1751,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>
> force_pte = (max_map_size == PAGE_SIZE);
> vma_pagesize = min_t(long, vma_pagesize, max_map_size);
> + vma_shift = force_pte ? PAGE_SHIFT : __ffs(vma_pagesize);
If force_pte is set, then we know that max_map_size == PAGE_SIZE. From
there, vma_pagesize == PAGE_SIZE, since nothing can be smaller.
Is there anything preventing us from having:
vma_shift = __ffs(vma_pagesize);
and be done with it?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path
2026-03-05 16:07 ` Marc Zyngier
@ 2026-03-05 16:13 ` Fuad Tabba
2026-03-05 16:22 ` Marc Zyngier
0 siblings, 1 reply; 10+ messages in thread
From: Fuad Tabba @ 2026-03-05 16:13 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvm, kvmarm, linux-arm-kernel, oliver.upton, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, will, yangyicong,
wangzhou1
On Thu, 5 Mar 2026 at 16:08, Marc Zyngier <maz@kernel.org> wrote:
>
> Hi Fuad,
>
> On Wed, 04 Mar 2026 16:22:22 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > When user_mem_abort() handles a nested stage-2 fault, it truncates
> > vma_pagesize to respect the guest's mapping size. However, the local
> > variable vma_shift is never updated to match this new size.
> >
> > If the underlying host page turns out to be hardware poisoned,
> > kvm_send_hwpoison_signal() is called with the original, larger
> > vma_shift instead of the actual mapping size. This signals incorrect
> > poison boundaries to userspace and breaks hugepage memory poison
> > containment for nested VMs.
> >
> > Update vma_shift to match the truncated vma_pagesize when operating
> > on behalf of a nested hypervisor.
> >
> > Fixes: fd276e71d1e7 ("KVM: arm64: nv: Handle shadow stage 2 page faults")
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/arm64/kvm/mmu.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index e1d6a4f591a9..b08240e0cab1 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1751,6 +1751,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >
> > force_pte = (max_map_size == PAGE_SIZE);
> > vma_pagesize = min_t(long, vma_pagesize, max_map_size);
> > + vma_shift = force_pte ? PAGE_SHIFT : __ffs(vma_pagesize);
>
> If force_pte is set, then we know that max_map_size == PAGE_SIZE. From
> there, vma_pagesize == PAGE_SIZE, since nothing can be smaller.
>
> Is there anything preventing us from having:
>
> vma_shift = __ffs(vma_pagesize);
>
> and be done with it?
Nope, nothing prevents that. Even simpler and better.
Would you like me to respin it?
Cheers,
/fuad
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path
2026-03-05 16:13 ` Fuad Tabba
@ 2026-03-05 16:22 ` Marc Zyngier
0 siblings, 0 replies; 10+ messages in thread
From: Marc Zyngier @ 2026-03-05 16:22 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, kvmarm, linux-arm-kernel, oliver.upton, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, will, yangyicong,
wangzhou1
On Thu, 05 Mar 2026 16:13:43 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> On Thu, 5 Mar 2026 at 16:08, Marc Zyngier <maz@kernel.org> wrote:
> >
> > Hi Fuad,
> >
> > On Wed, 04 Mar 2026 16:22:22 +0000,
> > Fuad Tabba <tabba@google.com> wrote:
> > >
> > > When user_mem_abort() handles a nested stage-2 fault, it truncates
> > > vma_pagesize to respect the guest's mapping size. However, the local
> > > variable vma_shift is never updated to match this new size.
> > >
> > > If the underlying host page turns out to be hardware poisoned,
> > > kvm_send_hwpoison_signal() is called with the original, larger
> > > vma_shift instead of the actual mapping size. This signals incorrect
> > > poison boundaries to userspace and breaks hugepage memory poison
> > > containment for nested VMs.
> > >
> > > Update vma_shift to match the truncated vma_pagesize when operating
> > > on behalf of a nested hypervisor.
> > >
> > > Fixes: fd276e71d1e7 ("KVM: arm64: nv: Handle shadow stage 2 page faults")
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > ---
> > > arch/arm64/kvm/mmu.c | 1 +
> > > 1 file changed, 1 insertion(+)
> > >
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index e1d6a4f591a9..b08240e0cab1 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -1751,6 +1751,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > >
> > > force_pte = (max_map_size == PAGE_SIZE);
> > > vma_pagesize = min_t(long, vma_pagesize, max_map_size);
> > > + vma_shift = force_pte ? PAGE_SHIFT : __ffs(vma_pagesize);
> >
> > If force_pte is set, then we know that max_map_size == PAGE_SIZE. From
> > there, vma_pagesize == PAGE_SIZE, since nothing can be smaller.
> >
> > Is there anything preventing us from having:
> >
> > vma_shift = __ffs(vma_pagesize);
> >
> > and be done with it?
>
> Nope, nothing prevents that. Even simpler and better.
>
> Would you like me to respin it?
Nah, I'll fix that locally. Thanks for having given it a look.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort()
2026-03-04 16:22 [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Fuad Tabba
2026-03-04 16:22 ` [PATCH v1 1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault Fuad Tabba
2026-03-04 16:22 ` [PATCH v1 2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path Fuad Tabba
@ 2026-03-05 16:51 ` Marc Zyngier
2026-03-05 16:55 ` Fuad Tabba
2026-03-06 10:48 ` Marc Zyngier
3 siblings, 1 reply; 10+ messages in thread
From: Marc Zyngier @ 2026-03-05 16:51 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, kvmarm, linux-arm-kernel, oliver.upton, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, will, yangyicong,
wangzhou1
On Wed, 04 Mar 2026 16:22:20 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> Finding these issues just reinforces how fragile this 300-line function
> has become. We really need to refactor it to make the state flow easier
> to reason about. I'm currently putting together a series to do just that
> (introducing a proper fault state object), so stay tuned for an RFC on
> that front.
If you have such patches, please post them sooner rather than later,
even if the rework is incomplete. I'd be happy to take small patches
that start adding infrastructure early and work out the full
refactoring over time.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort()
2026-03-05 16:51 ` [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Marc Zyngier
@ 2026-03-05 16:55 ` Fuad Tabba
0 siblings, 0 replies; 10+ messages in thread
From: Fuad Tabba @ 2026-03-05 16:55 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvm, kvmarm, linux-arm-kernel, oliver.upton, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, will, yangyicong,
wangzhou1
On Thu, 5 Mar 2026 at 16:51, Marc Zyngier <maz@kernel.org> wrote:
>
> On Wed, 04 Mar 2026 16:22:20 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Finding these issues just reinforces how fragile this 300-line function
> > has become. We really need to refactor it to make the state flow easier
> > to reason about. I'm currently putting together a series to do just that
> > (introducing a proper fault state object), so stay tuned for an RFC on
> > that front.
>
> If you have such patches, please post them sooner rather than later,
> even if the rework is incomplete. I'd be happy take small patches that
> start add infrastructure early and work out the full refactoring over
> time.
I'll try to have something for you by tomorrow in that case.
Cheers,
/fuad
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort()
2026-03-04 16:22 [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Fuad Tabba
` (2 preceding siblings ...)
2026-03-05 16:51 ` [PATCH v1 0/2] KVM: arm64: Fix a couple of latent bugs in user_mem_abort() Marc Zyngier
@ 2026-03-06 10:48 ` Marc Zyngier
3 siblings, 0 replies; 10+ messages in thread
From: Marc Zyngier @ 2026-03-06 10:48 UTC (permalink / raw)
To: kvm, kvmarm, linux-arm-kernel, Fuad Tabba
Cc: joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, will,
yangyicong, wangzhou1, Oliver Upton
On Wed, 04 Mar 2026 16:22:20 +0000, Fuad Tabba wrote:
> While digging into arch/arm64/kvm/mmu.c with the intention of finally
> refactoring user_mem_abort(), I ran into a couple of latent bugs that
> we should probably fix right now before attempting any major plumbing.
>
> You might experience some deja-vu looking at the first patch. A while
> back (in 5f9466b50c1b), I fixed a struct page reference leak on an
> early error return in this exact same block. It turns out that another
> early exit was introduced later on (for exclusive/atomic faults), and it
> fell into the exact same trap of leaking the page.
>
> [...]
Applied to fixes, thanks!
[1/2] KVM: arm64: Fix page leak in user_mem_abort() on atomic fault
commit: e07fc9e2da91f6d9eeafa2961be9dc09d65ed633
[2/2] KVM: arm64: Fix vma_shift staleness on nested hwpoison path
commit: 244acf1976b889b80b234982a70e9550c6f0bab7
Cheers,
M.
--
Without deviation from the norm, progress is not possible.