public inbox for stable@vger.kernel.org
From: Marc Zyngier <maz@kernel.org>
To: Fuad Tabba <tabba@google.com>
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	Quentin Perret <qperret@google.com>,
	Will Deacon <will@kernel.org>,
	Vincent Donnefort <vdonnefort@google.com>,
	Joey Gouly <joey.gouly@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Oliver Upton <oupton@kernel.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	stable@vger.kernel.org
Subject: Re: [PATCH] KVM: arm64: Fix protected mode handling of pages larger than 4kB
Date: Sun, 22 Feb 2026 18:54:58 +0000	[thread overview]
Message-ID: <878qckehh9.wl-maz@kernel.org> (raw)
In-Reply-To: <CA+EHjTy4p-Mbfr86NR9n1LgHC0EWrkdVjYb8O3z7k=Lv1entQg@mail.gmail.com>

Hi Fuad,

On Sun, 22 Feb 2026 17:58:00 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Sun, 22 Feb 2026 at 14:10, Marc Zyngier <maz@kernel.org> wrote:
> >
> > Since 3669ddd8fa8b5 ("KVM: arm64: Add a range to pkvm_mappings"),
> > pKVM tracks the memory that has been mapped into a guest in a
> > side data structure. Crucially, it uses it to find out whether
> > a page has already been mapped, and therefore refuses to map it
> > twice. So far, so good.
> >
> > However, this very patch completely breaks non-4kB page support,
> > with guests being unable to boot. The most obvious symptom is that
> > we take the same fault repeatedly, without making forward progress.
> > A quick investigation shows that this is because of the above
> > rejection code.
> >
> > As it turns out, there are multiple issues at play:
> >
> > - while the HPFAR_EL2 register gives you the faulting IPA minus
> >   the bottom 12 bits, it will still give you the extra bits that
> >   are part of the page offset for anything larger than 4kB,
> >   even for a level-3 mapping
> 
> Matches the ARM ARM.
> 
> > - pkvm_kvm_pgtable_stage2_map() assumes that the address passed
> >   as a parameter is aligned to the size of the intended mapping
> 
> nit: pkvm_kvm_pgtable_stage2_map() -> kvm_pgtable_stage2_map()

Actually, that's pkvm_pgtable_stage2_map(). kvm_pgtable_stage2_map()
itself isn't affected.

> 
> > - the faulting address is only aligned for a non-page mapping
> >
> > When the planets are suitably aligned (pun intended), the guest
> > faults a page by accessing it past the bottom 4kB, and extra bits
> > get set in the HPFAR_EL2 register. If this results in a page mapping
> > (which is likely with large granule sizes), nothing aligns it further
> > down, and pkvm_mapping_iter_first() finds an intersection that
> > doesn't really exist. We assume this is a spurious fault and return
> > -EAGAIN. And again.
> >
> > This doesn't hit outside of the protected code, as the page table
> > code always aligns the IPA down to a page boundary, hiding the issue
> > for everyone else.
> >
> > Fix it by always forcing the alignment down to vma_pagesize,
> > irrespective of its value.
> >
> > Fixes: 3669ddd8fa8b5 ("KVM: arm64: Add a range to pkvm_mappings")
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Cc: stable@vger.kernel.org
> > ---
> >  arch/arm64/kvm/mmu.c | 12 +++++-------
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 8c5d259810b2f..aa587f2e28264 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1753,14 +1753,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >         }
> >
> >         /*
> > -        * Both the canonical IPA and fault IPA must be hugepage-aligned to
> > -        * ensure we find the right PFN and lay down the mapping in the right
> > -        * place.
> > +        * Both the canonical IPA and fault IPA must be aligned to the
> > +        * mapping size to ensure we find the right PFN and lay down the
> > +        * mapping in the right place.
> >          */
> > -       if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {
> > -               fault_ipa &= ~(vma_pagesize - 1);
> > -               ipa &= ~(vma_pagesize - 1);
> > -       }
> > +       fault_ipa &= ~(vma_pagesize - 1);
> > +       ipa &= ~(vma_pagesize - 1);
> 
> nit: Since we're changing this code anyway, should we use the ALIGN
> macros instead?

That'd be ALIGN_DOWN() then, as ALIGN() really is ALIGN_UP(), and
that'd be counter-productive.  Something like:

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index aa587f2e28264..3952415c4f83b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1757,8 +1757,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * mapping size to ensure we find the right PFN and lay down the
 	 * mapping in the right place.
 	 */
-	fault_ipa &= ~(vma_pagesize - 1);
-	ipa &= ~(vma_pagesize - 1);
+	fault_ipa = ALIGN_DOWN(fault_ipa, vma_pagesize);
+	ipa = ALIGN_DOWN(ipa, vma_pagesize);
 
 	gfn = ipa >> PAGE_SHIFT;
 	mte_allowed = kvm_vma_mte_allowed(vma);

> Reviewed-by: Fuad Tabba <tabba@google.com>
> 
> and using 4, 16, and 64KB pages:
> 
> Tested-by: Fuad Tabba <tabba@google.com>

Ah, great! I couldn't be bothered with 64kB, and only used 16kB in NV
to debug quickly and then bare-metal to verify the fix.

Thanks!

	M.

-- 
Jazz isn't dead. It just smells funny.

Thread overview: 5+ messages
2026-02-22 14:10 [PATCH] KVM: arm64: Fix protected mode handling of pages larger than 4kB Marc Zyngier
2026-02-22 17:58 ` Fuad Tabba
2026-02-22 18:54   ` Marc Zyngier [this message]
2026-02-22 20:28     ` Fuad Tabba
2026-02-23 16:31 ` Marc Zyngier
