From: Andrew Jones <ajones@ventanamicro.com>
To: fangyu.yu@linux.alibaba.com
Cc: anup@brainfault.org, atish.patra@linux.dev, pjw@kernel.org,
palmer@dabbelt.com, aou@eecs.berkeley.edu, alex@ghiti.fr,
guoren@kernel.org, kvm@vger.kernel.org,
kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] RISC-V: KVM: Fix guest page fault within HLV* instructions
Date: Wed, 12 Nov 2025 13:41:51 -0600 [thread overview]
Message-ID: <20251112-ae882e7fd8d1fcbb73d87c6c@orel> (raw)
In-Reply-To: <20251111135506.8526-1-fangyu.yu@linux.alibaba.com>
On Tue, Nov 11, 2025 at 09:55:06PM +0800, fangyu.yu@linux.alibaba.com wrote:
> From: Fangyu Yu <fangyu.yu@linux.alibaba.com>
>
> When executing HLV* instructions in HS-mode, a guest page fault
> may occur if a g-stage page table migration happens between the
> virtual instruction exception being triggered and the HLV*
> instruction being executed.
>
> This is a corner case, and a simple way to handle it is to
> re-execute the instruction that triggered the virtual instruction
> exception; the guest page fault will then be handled automatically.
>
> Fixes: b91f0e4cb8a3 ("RISC-V: KVM: Factor-out instruction emulation into separate sources")
> Signed-off-by: Fangyu Yu <fangyu.yu@linux.alibaba.com>
>
> ---
> Changes in v2:
> - Remove unnecessary modifications and add comments (suggested by Anup)
> - Update Fixes tag
> - Link to v1: https://lore.kernel.org/linux-riscv/20250912134332.22053-1-fangyu.yu@linux.alibaba.com/
> ---
> arch/riscv/kvm/vcpu_insn.c | 39 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
> index de1f96ea6225..a8d796ef2822 100644
> --- a/arch/riscv/kvm/vcpu_insn.c
> +++ b/arch/riscv/kvm/vcpu_insn.c
> @@ -323,6 +323,19 @@ int kvm_riscv_vcpu_virtual_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
> ct->sepc,
> &utrap);
> if (utrap.scause) {
> + /*
> +  * If a g-stage page fault occurs, the direct approach is to
> +  * let the g-stage page fault handler handle it; however,
> +  * calling the g-stage page fault handler here seems rather
> +  * strange. Since this is a corner case, we can simply return
> +  * to the guest and re-execute the same PC; this will trigger
> +  * the g-stage page fault again, and the regular g-stage page
> +  * fault handler will then populate the g-stage page table.
> +  */
> + if (utrap.scause == EXC_LOAD_GUEST_PAGE_FAULT)
> + return 1;
> utrap.sepc = ct->sepc;
> kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
> return 1;
> @@ -378,6 +391,19 @@ int kvm_riscv_vcpu_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run,
> insn = kvm_riscv_vcpu_unpriv_read(vcpu, true, ct->sepc,
> &utrap);
> if (utrap.scause) {
> + /*
> +  * If a g-stage page fault occurs, the direct approach is to
> +  * let the g-stage page fault handler handle it; however,
> +  * calling the g-stage page fault handler here seems rather
> +  * strange. Since this is a corner case, we can simply return
> +  * to the guest and re-execute the same PC; this will trigger
> +  * the g-stage page fault again, and the regular g-stage page
> +  * fault handler will then populate the g-stage page table.
> +  */
> + if (utrap.scause == EXC_LOAD_GUEST_PAGE_FAULT)
> + return 1;
> /* Redirect trap if we failed to read instruction */
> utrap.sepc = ct->sepc;
> kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
> @@ -504,6 +530,19 @@ int kvm_riscv_vcpu_mmio_store(struct kvm_vcpu *vcpu, struct kvm_run *run,
> insn = kvm_riscv_vcpu_unpriv_read(vcpu, true, ct->sepc,
> &utrap);
> if (utrap.scause) {
> + /*
> +  * If a g-stage page fault occurs, the direct approach is to
> +  * let the g-stage page fault handler handle it; however,
> +  * calling the g-stage page fault handler here seems rather
> +  * strange. Since this is a corner case, we can simply return
> +  * to the guest and re-execute the same PC; this will trigger
> +  * the g-stage page fault again, and the regular g-stage page
> +  * fault handler will then populate the g-stage page table.
> +  */
> + if (utrap.scause == EXC_LOAD_GUEST_PAGE_FAULT)
> + return 1;
> /* Redirect trap if we failed to read instruction */
> utrap.sepc = ct->sepc;
> kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
> --
> 2.50.1
>
To avoid repeating the same comment block three times, I would create
a helper function, kvm_riscv_check_load_guest_page_fault(), with that
comment placed in the helper along with the utrap.scause exception
type check.
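
For reference, a rough sketch of what I have in mind (the bool return
and taking only the trap pointer are just my assumptions, and this is
not compile-tested):

  /*
   * If a g-stage page fault occurs while reading the guest
   * instruction, calling the g-stage page fault handler from here
   * would be rather strange. Since this is a corner case, simply
   * return to the guest and re-execute the same PC; that triggers
   * the g-stage page fault again and the regular handler populates
   * the g-stage page table.
   */
  static bool kvm_riscv_check_load_guest_page_fault(struct kvm_cpu_trap *utrap)
  {
  	return utrap->scause == EXC_LOAD_GUEST_PAGE_FAULT;
  }

Then each of the three call sites collapses to:

  	if (utrap.scause) {
  		if (kvm_riscv_check_load_guest_page_fault(&utrap))
  			return 1; /* re-enter the guest at the same PC */
  		/* Redirect trap if we failed to read instruction */
  		utrap.sepc = ct->sepc;
  		kvm_riscv_vcpu_trap_redirect(vcpu, &utrap);
  		return 1;
  	}
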
Thanks,
drew