qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [RFC PATCH v2] tcg/ppc: Improve unaligned load/store handling on 64-bit backend
@ 2015-07-20  2:50 Benjamin Herrenschmidt
  2015-07-20  5:16 ` Aurelien Jarno
From: Benjamin Herrenschmidt @ 2015-07-20  2:50 UTC (permalink / raw)
  To: qemu-devel
  Cc: Paolo Bonzini, qemu-ppc, Alexander Graf, Aurelien Jarno,
	Richard Henderson

Currently, we take the slow path for any unaligned access in the
backend, because we effectively preserve the bottom address bits
below the alignment requirement when comparing with the TLB entry,
so any non-zero bit there will cause the compare to fail.

For the same number of instructions, we can instead add the access
size - 1 to the address and stick to clearing all the bottom bits.

That means that normal unaligned accesses will no longer fall back
(the HW will handle them fine). Only when crossing a page boundary
will we end up with a mismatch, because we'll then be pointing to the
next page, which cannot possibly be in that same TLB entry.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

v2. This is the correct version of the patch, the one that
actually works :-)

Note: I have verified things still work by booting an x86_64 Ubuntu
installer on ppc64. I haven't noticed a large performance difference:
booting to the full xubuntu installer took 5:45 instead of 5:51 on the
test machine I used, but I felt this is still worthwhile in case one
hits a worst-case scenario with a lot of unaligned accesses.

Note2: It would be nice to be able to pass larger load/stores to the
backend... it means we would need to use a higher bit in the TLB entry
for "invalid" and a bunch more macros in the front-end, but it could
be quite helpful for speeding up things like memcpy, which on ppc64
uses vector load/stores, or the new ppc lq/stq instructions.

Anybody already working on that?

Note3: Hacking TCG is very new to me, so I apologize in advance for
any stupid oversight. I also assume other backends can probably use
the same trick if they aren't already...

diff --git a/tcg/ppc/tcg-target.c b/tcg/ppc/tcg-target.c
index 2b6eafa..5ed8b58 100644
--- a/tcg/ppc/tcg-target.c
+++ b/tcg/ppc/tcg-target.c
@@ -1426,13 +1426,19 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp s_bits,
     if (TCG_TARGET_REG_BITS == 32 || TARGET_LONG_BITS == 32) {
         tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0,
                     (32 - s_bits) & 31, 31 - TARGET_PAGE_BITS);
-    } else if (!s_bits) {
-        tcg_out_rld(s, RLDICR, TCG_REG_R0, addrlo,
+    } else if (s_bits) {
+        /* Alignment check trick: We add the access_size-1 to the address
+         * before masking the low bits. That will make the address overflow
+         * to the next page if we cross a page boundary which will then
+         * force a mismatch of the TLB compare since the next page cannot
+         * possibly be in the same TLB index.
+         */
+        tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, (1 << s_bits) - 1));
+        tcg_out_rld(s, RLDICR, TCG_REG_R0, TCG_REG_R0,
                     0, 63 - TARGET_PAGE_BITS);
     } else {
-        tcg_out_rld(s, RLDICL, TCG_REG_R0, addrlo,
-                    64 - TARGET_PAGE_BITS, TARGET_PAGE_BITS - s_bits);
-        tcg_out_rld(s, RLDICL, TCG_REG_R0, TCG_REG_R0, TARGET_PAGE_BITS, 0);
+        tcg_out_rld(s, RLDICR, TCG_REG_R0, addrlo,
+                    0, 63 - TARGET_PAGE_BITS);
     }
 
     if (TCG_TARGET_REG_BITS < TARGET_LONG_BITS) {
