public inbox for qemu-arm@nongnu.org
From: Zenghui Yu <zenghui.yu@linux.dev>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-arm@nongnu.org, qemu-devel@nongnu.org, agraf@csgraf.de
Subject: Re: [PATCH rfc] hvf: arm: Inject SEA when executing insn in invalid memory range
Date: Sun, 22 Mar 2026 01:09:53 +0800	[thread overview]
Message-ID: <7c4ce7f4-7a50-40e3-b80f-e776eaac4984@linux.dev> (raw)
In-Reply-To: <CAFEAcA_kj1eRpjMYDF4b6hD1_P0+PazzdHFfTJFj=Hn231DBGA@mail.gmail.com>

Hi Peter,

On 3/20/26 6:52 PM, Peter Maydell wrote:
> On Sun, 15 Mar 2026 at 16:39, Zenghui Yu <zenghui.yu@linux.dev> wrote:
> >
> > It seems that hvf doesn't deal with the abort generated when the guest
> > tries to execute an instruction outside of the valid physical memory
> > range, for an unknown reason. The abort is forwarded to userspace and
> > QEMU doesn't handle it either, so we end up faulting on the same
> > instruction forever.
> >
> > This was noticed by the kvm-unit-tests/selftest-vectors-kernel failure:
> >
> >   timeout -k 1s --foreground 90s /opt/homebrew/bin/qemu-system-aarch64 \
> >     -nodefaults -machine virt -accel hvf -cpu host \
> >     -device virtio-serial-device -device virtconsole,chardev=ctd \
> >     -chardev testdev,id=ctd -device pci-testdev -display none \
> >     -serial stdio -kernel arm/selftest.flat -smp 1 -append vectors-kernel
> >
> >   PASS: selftest: vectors-kernel: und
> >   PASS: selftest: vectors-kernel: svc
> >   qemu-system-aarch64: 0xffffc000: unhandled exception ec=0x20
> >   qemu-system-aarch64: 0xffffc000: unhandled exception ec=0x20
> >   qemu-system-aarch64: 0xffffc000: unhandled exception ec=0x20
> >   [...]
> >
> > The guest is clearly wedged, and it's unclear what prevents hvf from
> > injecting an abort directly in that case. Try to deal with the broken
> > guest in QEMU by injecting an SEA back into it in the EC_INSNABORT
> > emulation path.
> 
> Shouldn't that be an AddressSize fault, not an external abort?

I should have described this problem more clearly; see below.

> My guess would be that hvf is handing us the EC_INSNABORT
> cases for the same reason it hands us EC_DATABORT cases --
> we might have some ability to emulate the access. We probably
> also get this for cases like "guest tries to execute out of
> an MMIO device".
> 
> What happens for a data access to this kind of
> out-of-the-physical-memory-range address? Does hvf
> pass it back to us, or handle it internally?
> 
> Is the problem here a bogus virtual address from the guest's
> point of view, or a valid virtual address that the guest's
> page tables have translated to an invalid (intermediate)
> physical address ?

After adding `--trace "hvf_vm_map" --trace "hvf_vm_unmap"` to the
testing command line, I got:

hvf_vm_map paddr:0x0000000000000000 size:0x04000000 vaddr:0x112a34000 flags:0x05/R-X
hvf_vm_map paddr:0x0000000004000000 size:0x04000000 vaddr:0x116a38000 flags:0x05/R-X
hvf_vm_map paddr:0x0000000040000000 size:0x08000000 vaddr:0x10aa30000 flags:0x07/RWX

The guest then maps VA 0xffffc000 to IPA 0x48000000 (an IPA that hasn't
been "registered" with hvf via hv_vm_map(); this is what I imprecisely
called "an insn in invalid memory range") and sets PC to 0xffffc000,
expecting to receive an insn abort with IFSC equal to 0x10 (i.e., an
SEA). So the problem here is "a valid virtual address that the guest's
page tables have translated to an invalid (intermediate) physical
address".

This is what check_vectors()/check_pabt_init()/check_pabt() test, if
you'd like to have a look at kvm-unit-tests. ;-)

As for the AddressSize fault, I checked that on M1 we expose
ID_AA64MMFR0_EL1.PARange as 0b0001 to the guest, so the advertised PA
size is 36 bits (i.e., 64GB).

After hacking KUT to make the guest map a VA to IPA 0x1000000000 (the
first IPA past 64GB) and execute an insn at that VA, the guest receives
an insn abort with IFSC equal to 0x03 (hello, AddressSize fault!). From
that we can _infer_ that the AddressSize fault is injected internally by
hvf.

I haven't tried the "data access" side, sorry. Without some docs
describing which syndromes can be forwarded to userspace, and more
importantly, given my limited understanding of hvf, I think I'd better
stop making incomplete hacks (like this patch) to hvf. :-)

> > Signed-off-by: Zenghui Yu <zenghui.yu@linux.dev>
> > ---
> >  target/arm/hvf/hvf.c | 23 +++++++++++++++++++++++
> >  1 file changed, 23 insertions(+)
> >
> > diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
> > index aabc7d32c1..54d6ea469c 100644
> > --- a/target/arm/hvf/hvf.c
> > +++ b/target/arm/hvf/hvf.c
> > @@ -2332,9 +2332,32 @@ static int hvf_handle_exception(CPUState *cpu, hv_vcpu_exit_exception_t *excp)
> >          bool ea = (syndrome >> 9) & 1;
> >          bool s1ptw = (syndrome >> 7) & 1;
> >          uint32_t ifsc = (syndrome >> 0) & 0x3f;
> > +        uint64_t ipa = excp->physical_address;
> > +        AddressSpace *as = cpu_get_address_space(cpu, ARMASIdx_NS);
> > +        hwaddr xlat;
> > +        MemoryRegion *mr;
> > +
> > +        cpu_synchronize_state(cpu);
> >
> >          trace_hvf_insn_abort(env->pc, set, fnv, ea, s1ptw, ifsc);
> >
> > +        /*
> > +         * TODO: If s1ptw, this is an error in the guest os page tables.
> > +         * Inject the exception into the guest.
> > +         */
> > +        assert(!s1ptw);
> > +
> > +        mr = address_space_translate(as, ipa, &xlat, NULL, false,
> > +                                     MEMTXATTRS_UNSPECIFIED);
> > +        if (unlikely(!memory_region_is_ram(mr))) {
> 
> This doesn't look like the right kind of check, given the
> stated problem. Addresses can be in range but not have RAM.
> 
> > +            uint32_t syn;
> > +
> > +            /* inject an SEA back into the guest */
> > +            syn = syn_insn_abort(arm_current_el(env) == 1, ea, false, 0x10);
> > +            hvf_raise_exception(cpu, EXCP_PREFETCH_ABORT, syn, 1);
> > +            break;
> > +        }
> > +
> >          /* fall through */
> 
> This "fall through" remains not correct, I think, and it's kind
> of a big part of the problem here -- if we get an EC_INSNABORT
> handed to us by hvf, then we could:
>  * stop execution, exiting QEMU (as a "situation we can't
>    handle and don't know what to do with")
>  * advance the PC over the insn (questionable...)
>  * feed some kind of exception into the guest
> 
> but "continue execution of the guest without changing PC at all"
> is definitely wrong. A fix for this problem ought to involve
> changing the EC_INSNABORT case so that it no longer does that
> "fall through to default" at all.

I completely agree with this. Thanks for your suggestion, Peter!

Zenghui



Thread overview: 8+ messages
2026-03-15 16:38 [PATCH rfc] hvf: arm: Inject SEA when executing insn in invalid memory range Zenghui Yu
2026-03-16  9:40 ` Alex Bennée
2026-03-16 10:05   ` Mohamed Mediouni
2026-03-16 10:54   ` Zenghui Yu
2026-03-20 10:52 ` Peter Maydell
2026-03-21 17:09   ` Zenghui Yu [this message]
2026-03-21 17:26     ` Mohamed Mediouni
2026-03-21 17:39       ` Zenghui Yu
