Date: Wed, 27 Jan 2016 11:59:18 +1100
From: David Gibson
Message-ID: <20160127005918.GI16692@voom.fritz.box>
References: <1453698952-32092-1-git-send-email-david@gibson.dropbear.id.au>
 <1453698952-32092-5-git-send-email-david@gibson.dropbear.id.au>
 <56A679C9.4090704@suse.de>
In-Reply-To: <56A679C9.4090704@suse.de>
Subject: Re: [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup
To: Alexander Graf
Cc: lvivier@redhat.com, thuth@redhat.com, aik@ozlabs.ru, qemu-devel@nongnu.org,
 qemu-ppc@nongnu.org

On Mon, Jan 25, 2016 at 08:38:49PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 06:15 AM, David Gibson wrote:
> >Currently, the ppc_hash64_page_shift() function looks up a page size based
> >on information in an SLB entry.  It open codes the bit translation for
> >existing CPUs, but different CPU models can have different SLB
> >encodings.  We already store those in the 'sps' table in CPUPPCState, but
> >we don't currently enforce that it actually matches the logic in
> >ppc_hash64_page_shift().
> >
> >This patch reworks lookup of page size from SLB in several ways:
> >  * ppc_store_slb() will now fail (triggering an illegal instruction
> >    exception) if given a bad SLB page size encoding
> >  * On success ppc_store_slb() stores a pointer to the relevant entry in
> >    the page size table in the SLB entry.  This is looked up directly from
> >    the published table of page size encodings, so it can't get out of sync.
> >  * ppc_hash64_htab_lookup() and others now use this precached page size
> >    information rather than decoding the SLB values
> >  * Adjust ppc_hash64_pte_raddr() to take a page shift directly
> >    instead of an SLB entry.  Both callers now have easy access to this,
> >    and we'll need the flexibility shortly.
> >
> >Signed-off-by: David Gibson
> >---
> > target-ppc/cpu.h        |  1 +
> > target-ppc/machine.c    | 20 ++++++++++++++
> > target-ppc/mmu-hash64.c | 71 ++++++++++++++++++++++++++-----------------------
> > 3 files changed, 59 insertions(+), 33 deletions(-)
> >
> >diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
> >index 2bc96b4..0820390 100644
> >--- a/target-ppc/cpu.h
> >+++ b/target-ppc/cpu.h
> >@@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
> > struct ppc_slb_t {
> >     uint64_t esid;
> >     uint64_t vsid;
> >+    const struct ppc_one_seg_page_size *sps;
> > };
> >
> > #define MAX_SLB_ENTRIES 64
> >diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> >index b61c060..ca62d3e 100644
> >--- a/target-ppc/machine.c
> >+++ b/target-ppc/machine.c
> >@@ -2,6 +2,7 @@
> > #include "hw/boards.h"
> > #include "sysemu/kvm.h"
> > #include "helper_regs.h"
> >+#include "mmu-hash64.h"
> >
> > static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
> > {
> >@@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
> >     return (cpu->env.mmu_model & POWERPC_MMU_64);
> > }
> >
> >+static int slb_post_load(void *opaque, int version_id)
> >+{
> >+    PowerPCCPU *cpu = opaque;
> >+    CPUPPCState *env = &cpu->env;
> >+    int i;
> >+
> >+    /* We've pulled in the raw esid and vsid values from the migration
> >+     * stream, but we need to recompute the page size pointers */
> >+    for (i = 0; i < env->slb_nr; i++) {
> >+        if (ppc_store_slb(cpu, i, env->slb[i].esid, env->slb[i].vsid) < 0) {
> >+            /* Migration source had bad values in its SLB */
> >+            return -1;
> >+        }
> >+    }
> >+
> >+    return 0;
> >+}
> >+
> > static const VMStateDescription vmstate_slb = {
> >     .name = "cpu/slb",
> >     .version_id = 1,
> >     .minimum_version_id = 1,
> >     .needed = slb_needed,
> >+    .post_load = slb_post_load,
> >     .fields = (VMStateField[]) {
> >         VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
> >         VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
> >diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> >index 5a6d33b..28ad361 100644
> >--- a/target-ppc/mmu-hash64.c
> >+++ b/target-ppc/mmu-hash64.c
> >@@ -19,6 +19,7 @@
> >  */
> > #include "cpu.h"
> > #include "exec/helper-proto.h"
> >+#include "qemu/error-report.h"
> > #include "sysemu/kvm.h"
> > #include "kvm_ppc.h"
> > #include "mmu-hash64.h"
> >@@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> > {
> >     CPUPPCState *env = &cpu->env;
> >     ppc_slb_t *slb = &env->slb[slot];
> >+    const struct ppc_one_seg_page_size *sps = NULL;
> >+    int i;
> >
> >     if (slot >= env->slb_nr) {
> >         return -1; /* Bad slot number */
> >@@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >         return -1; /* 1T segment on MMU that doesn't support it */
> >     }
> >
> >+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> >+        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
> >+
> >+        if (!sps1->page_shift) {
> >+            break;
> >+        }
> >+
> >+        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> >+            sps = sps1;
> >+            break;
> >+        }
> >+    }
> >+
> >+    if (!sps) {
> >+        error_report("Bad page size encoding in SLB store: slot "TARGET_FMT_lu
> >+                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
> >+                     slot, esid, vsid);
> >+        return -1;
> >+    }
> >+
> >     slb->esid = esid;
> >     slb->vsid = vsid;
> >+    slb->sps = sps;
> >
> >     LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> >             " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> >@@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
> >     return -1;
> > }
> >
> >-static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
> >-{
> >-    uint64_t epnshift;
> >-
> >-    /* Page size according to the SLB, which we use to generate the
> >-     * EPN for hash table lookup..  When we implement more recent MMU
> >-     * extensions this might be different from the actual page size
> >-     * encoded in the PTE */
> >-    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
> >-        epnshift = TARGET_PAGE_BITS;
> >-    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
> >-        epnshift = TARGET_PAGE_BITS_64K;
> >-    } else {
> >-        epnshift = TARGET_PAGE_BITS_16M;
> >-    }
> >-    return epnshift;
> >-}
> >-
> > static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >                                      ppc_slb_t *slb, target_ulong eaddr,
> >                                      ppc_hash_pte64_t *pte)
> >@@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >     CPUPPCState *env = &cpu->env;
> >     hwaddr pte_offset;
> >     hwaddr hash;
> >-    uint64_t vsid, epnshift, epnmask, epn, ptem;
> >+    uint64_t vsid, epnmask, epn, ptem;
> >+
> >+    /* The SLB store path should prevent any bad page size encodings
> >+     * getting in there, so: */
> >+    assert(slb->sps);
> >
> >-    epnshift = ppc_hash64_page_shift(slb);
> >-    epnmask = ~((1ULL << epnshift) - 1);
> >+    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
> >
> >     if (slb->vsid & SLB_VSID_B) {
> >         /* 1TB segment */
> >         vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
> >         epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
> >-        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
> >+        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
> >     } else {
> >         /* 256M segment */
> >         vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
> >         epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
> >-        hash = vsid ^ (epn >> epnshift);
> >+        hash = vsid ^ (epn >> slb->sps->page_shift);
> >     }
> >
> >     ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) & HPTE64_V_AVPN);
> >@@ -465,17 +474,12 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >     return pte_offset;
> > }
> >
> >-static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t pte,
> >+static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
> >                                    target_ulong eaddr)
> > {
> >-    hwaddr mask;
> >-    int target_page_bits;
> >+    hwaddr mask = (1ULL << page_shift) - 1;
> >     hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
> >-    /*
> >-     * We support 4K, 64K and 16M now
> >-     */
> >-    target_page_bits = ppc_hash64_page_shift(slb);
> >-    mask = (1ULL << target_page_bits) - 1;
> >+
> >     return (rpn & ~mask) | (eaddr & mask);
> > }
> 
> Isn't this basically deposit64 by now?

True.  I'll remove ppc_hash64_pte_raddr() entirely and use deposit64
instead.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson