From: Fabiano Rosas <farosas@linux.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>,
groug@kaod.org, philmd@redhat.com, qemu-devel@nongnu.org,
clg@kaod.org
Cc: lvivier@redhat.com, qemu-ppc@nongnu.org, paulus@samba.org,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [PATCH v3 04/12] target/ppc: Introduce ppc_hash64_use_vrma() helper
Date: Wed, 19 Feb 2020 11:06:20 -0300
Message-ID: <87blpud63n.fsf@linux.ibm.com>
In-Reply-To: <20200219005414.15635-5-david@gibson.dropbear.id.au>
David Gibson <david@gibson.dropbear.id.au> writes:
> When running guests under a hypervisor, the hypervisor obviously needs to
> be protected from guest accesses even if those are in what the guest
> considers real mode (translation off). The POWER hardware provides two
> ways of doing that: The old way has guest real mode accesses simply offset
> and bounds checked into host addresses. It works, but requires that a
> significant chunk of the guest's memory - the RMA - be physically
> contiguous in the host, which is pretty inconvenient. The new way, known
> as VRMA, has guest real mode accesses translated in roughly the normal way
> but with some special parameters.
>
> In POWER7 and POWER8 the LPCR[VPM0] bit selected between the two modes, but
> in POWER9 only VRMA mode is supported
... when translation is off, right? I ask because LPCR[VPM1] is still
there in the v3.0 ISA.
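Just to check my understanding, this is how I read the POWER9 behaviour
after this patch (a sketch only, following the ppc_hash64_set_isi hunk
below; the msr_ir branch is unchanged by the patch):

    if (msr_ir) {
        /* translation on: VPM1 still decides if the interrupt goes to the HV */
        vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
    } else {
        /* translation off: v3.0 always uses VRMA, VPM0/RMOR no longer exist */
        vpm = true;
    }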
> and LPCR[VPM0] no longer exists. We
> handle that difference in behaviour in ppc_hash64_set_isi().. but not in
> other places that we blindly check LPCR[VPM0].
>
> Correct those instances with a new helper to tell if we should be in VRMA
> mode.
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> ---
> target/ppc/mmu-hash64.c | 41 +++++++++++++++++++----------------------
> 1 file changed, 19 insertions(+), 22 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 5fabd93c92..d878180df5 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -668,6 +668,19 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
> return 0;
> }
>
> +static bool ppc_hash64_use_vrma(CPUPPCState *env)
> +{
> + switch (env->mmu_model) {
> + case POWERPC_MMU_3_00:
> + /* ISAv3.0 (POWER9) always uses VRMA, the VPM0 field and RMOR
> + * register no longer exist */
> + return true;
> +
> + default:
> + return !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> + }
> +}
> +
> static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
> {
> CPUPPCState *env = &POWERPC_CPU(cs)->env;
> @@ -676,15 +689,7 @@ static void ppc_hash64_set_isi(CPUState *cs, uint64_t error_code)
> if (msr_ir) {
> vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
> } else {
> - switch (env->mmu_model) {
> - case POWERPC_MMU_3_00:
> - /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
> - vpm = true;
> - break;
> - default:
> - vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> - break;
> - }
> + vpm = ppc_hash64_use_vrma(env);
> }
> if (vpm && !msr_hv) {
> cs->exception_index = POWERPC_EXCP_HISI;
> @@ -702,15 +707,7 @@ static void ppc_hash64_set_dsi(CPUState *cs, uint64_t dar, uint64_t dsisr)
> if (msr_dr) {
> vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM1);
> } else {
> - switch (env->mmu_model) {
> - case POWERPC_MMU_3_00:
> - /* Field deprecated in ISAv3.00 - interrupts always go to hyperv */
> - vpm = true;
> - break;
> - default:
> - vpm = !!(env->spr[SPR_LPCR] & LPCR_VPM0);
> - break;
> - }
> + vpm = ppc_hash64_use_vrma(env);
> }
> if (vpm && !msr_hv) {
> cs->exception_index = POWERPC_EXCP_HDSI;
> @@ -799,7 +796,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
> if (!(eaddr >> 63)) {
> raddr |= env->spr[SPR_HRMOR];
> }
> - } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> + } else if (ppc_hash64_use_vrma(env)) {
> /* Emulated VRMA mode */
> slb = &env->vrma_slb;
> if (!slb->sps) {
> @@ -967,7 +964,7 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
> } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
> /* In HV mode, add HRMOR if top EA bit is clear */
> return raddr | env->spr[SPR_HRMOR];
> - } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> + } else if (ppc_hash64_use_vrma(env)) {
> /* Emulated VRMA mode */
> slb = &env->vrma_slb;
> if (!slb->sps) {
> @@ -1056,8 +1053,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> slb->sps = NULL;
>
> /* Is VRMA enabled ? */
> - lpcr = env->spr[SPR_LPCR];
> - if (!(lpcr & LPCR_VPM0)) {
> + if (ppc_hash64_use_vrma(env)) {
Shouldn't this be !ppc_hash64_use_vrma(env)?
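i.e. I would have expected something like:

    /* Is VRMA enabled ? */
    if (!ppc_hash64_use_vrma(env)) {
        return;
    }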
And a comment about the original code: all other places that check
LPCR_VPM0 do it after verifying that translation is off, except here
(ppc_hash64_update_vrma). Could that be an issue?
> return;
> }
>
> @@ -1065,6 +1061,7 @@ static void ppc_hash64_update_vrma(PowerPCCPU *cpu)
> * Make one up. Mostly ignore the ESID which will not be needed
> * for translation
> */
> + lpcr = env->spr[SPR_LPCR];
> vsid = SLB_VSID_VRMA;
> vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);