Date: Wed, 29 Jun 2016 12:41:25 +1000
From: David Gibson
To: Cédric Le Goater
Cc: Benjamin Herrenschmidt, qemu-devel@nongnu.org, qemu-ppc@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 1/2] ppc: Add proper real mode translation support
Message-ID: <20160629024125.GF8885@voom.fritz.box>
In-Reply-To: <1467096514-18905-2-git-send-email-clg@kaod.org>
References: <1467096514-18905-1-git-send-email-clg@kaod.org> <1467096514-18905-2-git-send-email-clg@kaod.org>

On Tue, Jun 28, 2016 at 08:48:33AM +0200, Cédric Le Goater wrote:
> From: Benjamin Herrenschmidt
>
> This adds proper support for translating real mode addresses based
> on the combination of HV and LPCR bits. This handles HRMOR offset
> for hypervisor real mode, and both RMA and VRMA modes for guest
> real mode. PAPR mode adjusts the offsets appropriately to match the
> RMA used in TCG, but we need to limit to the max supported by the
> implementation (16G).
>
> Signed-off-by: Benjamin Herrenschmidt
> [clg: fixed checkpatch.pl errors]
> Signed-off-by: Cédric Le Goater

This looks correct and I've applied it. There are a couple of possible
cleanups which might be worth following up on, though.
> ---
>  hw/ppc/spapr.c              |   7 +++
>  target-ppc/mmu-hash64.c     | 146 ++++++++++++++++++++++++++++++++++++++------
>  target-ppc/mmu-hash64.h     |   1 +
>  target-ppc/translate_init.c |  10 ++-
>  4 files changed, 144 insertions(+), 20 deletions(-)
>
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index d26b4c26ed10..53ab1f84fb11 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1770,6 +1770,13 @@ static void ppc_spapr_init(MachineState *machine)
>              spapr->vrma_adjust = 1;
>              spapr->rma_size = MIN(spapr->rma_size, 0x10000000);
>          }
> +
> +        /* Actually we don't support unbounded RMA anymore since we
> +         * added proper emulation of HV mode. The max we can get is
> +         * 16G which also happens to be what we configure for PAPR
> +         * mode so make sure we don't do anything bigger than that
> +         */
> +        spapr->rma_size = MIN(spapr->rma_size, 0x400000000ull);

#1 - Instead of the various KVM / non-KVM cases here, it might be
simpler to just always clamp the RMA to 256MiB.

>      }
>
>      if (spapr->rma_size > node0_size) {
> diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> index 6d6f26c92957..ed353b2d1539 100644
> --- a/target-ppc/mmu-hash64.c
> +++ b/target-ppc/mmu-hash64.c
> @@ -653,13 +653,41 @@ static void ppc_hash64_set_dsi(CPUState *cs, CPUPPCState *env, uint64_t dar,
>      env->error_code = 0;
>  }
>
> +static int64_t ppc_hash64_get_rmls(CPUPPCState *env)
> +{
> +    uint64_t lpcr = env->spr[SPR_LPCR];
> +
> +    /*
> +     * This is the full 4 bits encoding of POWER8.
> +     * Previous
> +     * CPUs only support a subset of these but the filtering
> +     * is done when writing LPCR
> +     */
> +    switch ((lpcr & LPCR_RMLS) >> LPCR_RMLS_SHIFT) {
> +    case 0x8: /* 32MB */
> +        return 0x2000000ull;
> +    case 0x3: /* 64MB */
> +        return 0x4000000ull;
> +    case 0x7: /* 128MB */
> +        return 0x8000000ull;
> +    case 0x4: /* 256MB */
> +        return 0x10000000ull;
> +    case 0x2: /* 1GB */
> +        return 0x40000000ull;
> +    case 0x1: /* 16GB */
> +        return 0x400000000ull;
> +    default:
> +        /* What to do here ??? */
> +        return 0;
> +    }
> +}
>
>  int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                                  int rwx, int mmu_idx)
>  {
>      CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> -    ppc_slb_t *slb;
> +    ppc_slb_t *slb_ptr;
> +    ppc_slb_t slb;
>      unsigned apshift;
>      hwaddr pte_offset;
>      ppc_hash_pte64_t pte;
> @@ -670,11 +698,53 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>
> +    /* Note on LPCR usage: 970 uses HID4, but our special variant
> +     * of store_spr copies relevant fields into env->spr[SPR_LPCR].
> +     * Similarly we filter unimplemented bits when storing into
> +     * LPCR depending on the MMU version. This code can thus just
> +     * use the LPCR "as-is".
> +     */
> +
>      /* 1.
>       * Handle real mode accesses */
>      if (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0))) {
> -        /* Translation is off */
> -        /* In real mode the top 4 effective address bits are ignored */
> +        /* Translation is supposedly "off" */
> +        /* In real mode the top 4 effective address bits are (mostly) ignored */
>          raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
> +
> +        /* In HV mode, add HRMOR if top EA bit is clear */
> +        if (msr_hv) {
> +            if (!(eaddr >> 63)) {
> +                raddr |= env->spr[SPR_HRMOR];
> +            }
> +        } else {
> +            /* Otherwise, check VPM for RMA vs VRMA */
> +            if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> +                uint32_t vrmasd;
> +                /* VRMA, we make up an SLB entry */
> +                slb.vsid = SLB_VSID_VRMA;
> +                vrmasd = (env->spr[SPR_LPCR] & LPCR_VRMASD) >>
> +                         LPCR_VRMASD_SHIFT;
> +                slb.vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> +                slb.esid = SLB_ESID_V;
> +                goto skip_slb;
> +            }
> +            /* RMA. Check bounds in RMLS */
> +            if (raddr < ppc_hash64_get_rmls(env)) {
> +                raddr |= env->spr[SPR_RMOR];
> +            } else {
> +                /* The access failed, generate the appropriate interrupt */
> +                if (rwx == 2) {
> +                    ppc_hash64_set_isi(cs, env, 0x08000000);
> +                } else {
> +                    dsisr = 0x08000000;
> +                    if (rwx == 1) {
> +                        dsisr |= 0x02000000;
> +                    }
> +                    ppc_hash64_set_dsi(cs, env, eaddr, dsisr);
> +                }
> +                return 1;
> +            }
> +        }
>          tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
>                       PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
>                       TARGET_PAGE_SIZE);
> @@ -682,9 +752,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>      }
>
>      /* 2.
>       * Translation is on, so look up the SLB */
> -    slb = slb_lookup(cpu, eaddr);
> -
> -    if (!slb) {
> +    slb_ptr = slb_lookup(cpu, eaddr);
> +    if (!slb_ptr) {
>          if (rwx == 2) {
>              cs->exception_index = POWERPC_EXCP_ISEG;
>              env->error_code = 0;
> @@ -696,14 +765,29 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>          return 1;
>      }
>
> +    /* We grab a local copy because we can modify it (or get a
> +     * pre-cooked one from the VRMA code
> +     */
> +    slb = *slb_ptr;
> +
> +    /* 2.5 Clamp L||LP in ISL mode */
> +    if (env->spr[SPR_LPCR] & LPCR_ISL) {
> +        slb.vsid &= ~SLB_VSID_LLP_MASK;
> +    }
> +
>      /* 3. Check for segment level no-execute violation */
> -    if ((rwx == 2) && (slb->vsid & SLB_VSID_N)) {
> +    if ((rwx == 2) && (slb.vsid & SLB_VSID_N)) {
>          ppc_hash64_set_isi(cs, env, 0x10000000);
>          return 1;
>      }
>
> +    /* We go straight here for VRMA translations as none of the
> +     * above applies in that case
> +     */
> + skip_slb:
> +
>      /* 4. Locate the PTE in the hash table */
> -    pte_offset = ppc_hash64_htab_lookup(cpu, slb, eaddr, &pte);
> +    pte_offset = ppc_hash64_htab_lookup(cpu, &slb, eaddr, &pte);
>      if (pte_offset == -1) {
>          dsisr = 0x40000000;
>          if (rwx == 2) {
> @@ -720,7 +804,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>                  "found PTE at offset %08" HWADDR_PRIx "\n", pte_offset);
>
>      /* Validate page size encoding */
> -    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    apshift = hpte_page_shift(slb.sps, pte.pte0, pte.pte1);
>      if (!apshift) {
>          error_report("Bad page size encoding in HPTE 0x%"PRIx64" - 0x%"PRIx64
>                       " @ 0x%"HWADDR_PRIx, pte.pte0, pte.pte1, pte_offset);
> @@ -733,7 +817,7 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>
>      /* 5.
>       * Check access permissions */
>
> -    pp_prot = ppc_hash64_pte_prot(cpu, slb, pte);
> +    pp_prot = ppc_hash64_pte_prot(cpu, &slb, pte);
>      amr_prot = ppc_hash64_amr_prot(cpu, pte);
>      prot = pp_prot & amr_prot;
>
> @@ -789,27 +873,51 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
>  hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
>  {
>      CPUPPCState *env = &cpu->env;
> -    ppc_slb_t *slb;
> -    hwaddr pte_offset;
> +    ppc_slb_t slb;
> +    ppc_slb_t *slb_ptr;
> +    hwaddr pte_offset, raddr;
>      ppc_hash_pte64_t pte;
>      unsigned apshift;
>
> +    /* Handle real mode */
>      if (msr_dr == 0) {
> -        /* In real mode the top 4 effective address bits are ignored */
> -        return addr & 0x0FFFFFFFFFFFFFFFULL;
> -    }
> +        raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
>
> -    slb = slb_lookup(cpu, addr);
> -    if (!slb) {
> +        /* In HV mode, add HRMOR if top EA bit is clear */
> +        if (msr_hv & !(addr >> 63)) {
> +            return raddr | env->spr[SPR_HRMOR];
> +        }
> +
> +        /* Otherwise, check VPM for RMA vs VRMA */
> +        if (env->spr[SPR_LPCR] & LPCR_VPM0) {
> +            uint32_t vrmasd;
> +
> +            /* VRMA, we make up an SLB entry */
> +            slb.vsid = SLB_VSID_VRMA;
> +            vrmasd = (env->spr[SPR_LPCR] & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> +            slb.vsid |= (vrmasd << 4) & (SLB_VSID_L | SLB_VSID_LP);
> +            slb.esid = SLB_ESID_V;
> +            goto skip_slb;
> +        }
> +        /* RMA. Check bounds in RMLS */
> +        if (raddr < ppc_hash64_get_rmls(env)) {
> +            return raddr | env->spr[SPR_RMOR];
> +        }

Now that the real-mode case is non-trivial, it would be nice if we
could factor out some of this logic from the fault and page_debug
cases into a common helper function.
>          return -1;
>      }
>
> -    pte_offset = ppc_hash64_htab_lookup(cpu, slb, addr, &pte);
> +    slb_ptr = slb_lookup(cpu, addr);
> +    if (!slb_ptr) {
> +        return -1;
> +    }
> +    slb = *slb_ptr;
> + skip_slb:
> +    pte_offset = ppc_hash64_htab_lookup(cpu, &slb, addr, &pte);
>      if (pte_offset == -1) {
>          return -1;
>      }
>
> -    apshift = hpte_page_shift(slb->sps, pte.pte0, pte.pte1);
> +    apshift = hpte_page_shift(slb.sps, pte.pte0, pte.pte1);
>      if (!apshift) {
>          return -1;
>      }
> diff --git a/target-ppc/mmu-hash64.h b/target-ppc/mmu-hash64.h
> index 6423b9f791e7..13ad060cfefb 100644
> --- a/target-ppc/mmu-hash64.h
> +++ b/target-ppc/mmu-hash64.h
> @@ -37,6 +37,7 @@ unsigned ppc_hash64_hpte_page_shift_noslb(PowerPCCPU *cpu,
>  #define SLB_VSID_B_256M         0x0000000000000000ULL
>  #define SLB_VSID_B_1T           0x4000000000000000ULL
>  #define SLB_VSID_VSID           0x3FFFFFFFFFFFF000ULL
> +#define SLB_VSID_VRMA           (0x0001FFFFFF000000ULL | SLB_VSID_B_1T)
>  #define SLB_VSID_PTEM           (SLB_VSID_B | SLB_VSID_VSID)
>  #define SLB_VSID_KS             0x0000000000000800ULL
>  #define SLB_VSID_KP             0x0000000000000400ULL
> diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
> index 55d1bfac97c4..4820c0bc99fb 100644
> --- a/target-ppc/translate_init.c
> +++ b/target-ppc/translate_init.c
> @@ -8791,11 +8791,19 @@ void cpu_ppc_set_papr(PowerPCCPU *cpu)
>      /* Set emulated LPCR to not send interrupts to hypervisor. Note that
>       * under KVM, the actual HW LPCR will be set differently by KVM itself,
>       * the settings below ensure proper operations with TCG in absence of
> -     * a real hypervisor
> +     * a real hypervisor.
> +     *
> +     * Clearing VPM0 will also cause us to use RMOR in mmu-hash64.c for
> +     * real mode accesses, which thankfully defaults to 0 and isn't
> +     * accessible in guest mode.
> */ > lpcr->default_value &=3D ~(LPCR_VPM0 | LPCR_VPM1 | LPCR_ISL | LPCR_K= BV); > lpcr->default_value |=3D LPCR_LPES0 | LPCR_LPES1; > =20 > + /* Set RMLS to the max (ie, 16G) */ > + lpcr->default_value &=3D ~LPCR_RMLS; > + lpcr->default_value |=3D 1ull << LPCR_RMLS_SHIFT; > + > /* P7 and P8 has slightly different PECE bits, mostly because P8 adds > * bit 47 and 48 which are reserved on P7. Here we set them all, whi= ch > * will work as expected for both implementations --=20 David Gibson | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_ | _way_ _around_! http://www.ozlabs.org/~dgibson --IvGM3kKqwtniy32b Content-Type: application/pgp-signature; name="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXczVVAAoJEGw4ysog2bOSfhQQAKkQwEJyO2wIArb1OUz14//U w4DIjRlTgse72iES8aZxfRP0clmUsoV1dTPfYqBhF8J2jmwngFtsE8YlJ4HYirp2 VoAwx6Gv0aa2Bdb7TocN3hj94OwMTPcpdD1m2EAVp0qx0KeJ4NoIriwGw8kvjthL wzCsg5s1/Sb3xAyCHBjicBS4Lt0KjYTVjnRp+7863SHmQKY6AA+AYfAhVybMOfR5 bvwUhCxahAk1X4LH20YLG6rdLITtlzkVInN7hel6jx3J2llQmF8NQSTyqBhu+aoF KeB/VSkh+eUEPVjqMXQEN5Y16mLywRVJjaf2qxgpAsRc+RSZ/zK8t052tAgZpoeK O7MXIZhEynnxcwvJ+1yF0+9LDzVqvFlkohN2xydhGXV4SC6ZOmcZIwu6DFXNKzU8 6HLITNOE+wRITk0qRi3dYaXmUJ6CPaxyO2dIkxNl+eUiertJ3pI3BdsLR6aiYXNV NtSB4+gLE/fbSbDxIaJT4EG1+4lCxGxj+NvSXCGqfrKfqxI1OAT7OA/cqshhs9D4 5KTnWRj0DUZB9RHEsbhDGIGWuom6am4FhWaMo42w7Imu9hC4KDxpe/PkivYVxeW9 l+qT63pqYlZdqQhz5Wf+bsVL2V5otUsDtADRdN+AW5S2p1KADdX/NTk4CD+A/YmP K6ThQklKrVxuJOiKGrvf =2nKL -----END PGP SIGNATURE----- --IvGM3kKqwtniy32b--