From: Richard Henderson
To: Benjamin Herrenschmidt, qemu-devel@nongnu.org
Cc: Paolo Bonzini, qemu-ppc@nongnu.org, Alexander Graf, Aurelien Jarno
Subject: Re: [Qemu-devel] [PATCH v3] tcg/ppc: Improve unaligned load/store handling on 64-bit backend
Date: Tue, 21 Jul 2015 07:27:22 +0100
Message-ID: <55ADE64A.7050702@twiddle.net>
In-Reply-To: <1437455978.5809.2.camel@kernel.crashing.org>
References: <1437455978.5809.2.camel@kernel.crashing.org>

On 07/21/2015 06:19 AM, Benjamin Herrenschmidt wrote:
> +    /* Clear the non-page, non-alignment bits from the address */
>      if (TCG_TARGET_REG_BITS == 32 || TARGET_LONG_BITS == 32) {
> +        /* We don't support unaligned accesses on 32-bits, preserve
> +         * the bottom bits and thus trigger a comparison failure on
> +         * unaligned accesses
> +         */
>          tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0,
>                      (32 - s_bits) & 31, 31 - TARGET_PAGE_BITS);

Why don't you support unaligned accesses with 32-bit guests?

> -    } else if (!s_bits) {
> -        tcg_out_rld(s, RLDICR, TCG_REG_R0, addrlo,
> -                    0, 63 - TARGET_PAGE_BITS);
> +    } else if (s_bits) {
> +        /* > byte access, we need to handle alignment */
> +        if ((opc & MO_AMASK) == MO_ALIGN) {
> +            /* Alignment required by the front-end, same as 32-bits */
> +            tcg_out_rld(s, RLDICL, TCG_REG_R0, addrlo,
> +                        64 - TARGET_PAGE_BITS, TARGET_PAGE_BITS - s_bits);
> +            tcg_out_rld(s, RLDICL, TCG_REG_R0, TCG_REG_R0, TARGET_PAGE_BITS, 0);
> +        } else {
> +            /* We support unaligned accesses, we need to make sure we fail
> +             * if we cross a page boundary. The trick is to add the
> +             * access_size-1 to the address before masking the low bits.
> +             * That will make the address overflow to the next page if we
> +             * cross a page boundary which will then force a mismatch of
> +             * the TLB compare since the next page cannot possibly be in
> +             * the same TLB index.
> +             */
> +            tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, (1 << s_bits) - 1));
> +            tcg_out_rld(s, RLDICR, TCG_REG_R0, TCG_REG_R0,
> +                        0, 63 - TARGET_PAGE_BITS);
> +        }
>      } else {
> -        tcg_out_rld(s, RLDICL, TCG_REG_R0, addrlo,
> -                    64 - TARGET_PAGE_BITS, TARGET_PAGE_BITS - s_bits);
> -        tcg_out_rld(s, RLDICL, TCG_REG_R0, TCG_REG_R0, TARGET_PAGE_BITS, 0);
> +        /* Byte access, just chop off the bits below the page index */
> +        tcg_out_rld(s, RLDICR, TCG_REG_R0, addrlo, 0, 63 - TARGET_PAGE_BITS);
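
For anyone following along, here is a minimal standalone C sketch of the two
masking strategies the quoted comments describe. It is illustrative only: the
12-bit PAGE_BITS value and the function names are assumptions made for the
example, not QEMU definitions.

    /* Sketch of the two TLB-compare masking strategies discussed above.
     * PAGE_BITS = 12 and the function names are illustrative assumptions,
     * not QEMU code. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12   /* assume 4 KiB target pages */
    #define PAGE_MASK (~(((uint64_t)1 << PAGE_BITS) - 1))

    /* MO_ALIGN case: clear the page-offset bits but preserve the low
     * s_bits alignment bits, so an unaligned address leaves nonzero
     * low bits and the TLB compare fails. */
    static uint64_t compare_aligned(uint64_t addr, int s_bits)
    {
        return addr & ~(((uint64_t)1 << PAGE_BITS) - ((uint64_t)1 << s_bits));
    }

    /* Unaligned-allowed case: add access_size-1 first, so an access that
     * crosses a page boundary overflows into the next page and the TLB
     * compare fails (the next page cannot share this TLB index). */
    static uint64_t compare_unaligned(uint64_t addr, int s_bits)
    {
        return (addr + ((uint64_t)1 << s_bits) - 1) & PAGE_MASK;
    }

    int main(void)
    {
        /* 4-byte accesses (s_bits = 2): */
        printf("%#llx\n", (unsigned long long)compare_aligned(0x1ffd, 2));   /* 0x1001: unaligned, mismatch */
        printf("%#llx\n", (unsigned long long)compare_unaligned(0x1ff0, 2)); /* 0x1000: within page, match */
        printf("%#llx\n", (unsigned long long)compare_unaligned(0x1ffd, 2)); /* 0x2000: crosses page, mismatch */
        return 0;
    }

Running it prints 0x1001, 0x1000 and 0x2000: the preserved low bits in the
aligned case and the overflow into the next page in the unaligned case are
exactly what make the TLB compare mismatch.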