From mboxrd@z Thu Jan 1 00:00:00 1970
From: Richard Henderson <rth@twiddle.net>
Date: Wed, 2 Sep 2015 14:41:29 -0700
Message-Id: <1441230089-2587-3-git-send-email-rth@twiddle.net>
In-Reply-To: <1441230089-2587-1-git-send-email-rth@twiddle.net>
References: <1441230089-2587-1-git-send-email-rth@twiddle.net>
Subject: [Qemu-devel] [PULL 2/2] tcg/i386: omit a few REXW prefixes in softmmu code
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Aurelien Jarno <aurelien@aurel32.net>

From: Aurelien Jarno <aurelien@aurel32.net>

When computing the TLB address we are likely to mask out the high
32 bits by using shr + and. We can use 32-bit instructions in that
case. This saves 2 bytes per TLB access.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1437306632-20655-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
 tcg/i386/tcg-target.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/tcg/i386/tcg-target.c b/tcg/i386/tcg-target.c
index d2adbc4..9187d34 100644
--- a/tcg/i386/tcg-target.c
+++ b/tcg/i386/tcg-target.c
@@ -1178,8 +1178,8 @@ static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
     const TCGReg r0 = TCG_REG_L0;
     const TCGReg r1 = TCG_REG_L1;
     TCGType ttype = TCG_TYPE_I32;
-    TCGType htype = TCG_TYPE_I32;
-    int trexw = 0, hrexw = 0;
+    TCGType tlbtype = TCG_TYPE_I32;
+    int trexw = 0, hrexw = 0, tlbrexw = 0;
     int s_mask = (1 << (opc & MO_SIZE)) - 1;
     bool aligned = (opc & MO_AMASK) == MO_ALIGN || s_mask == 0;
 
@@ -1189,12 +1189,15 @@ static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
             trexw = P_REXW;
         }
         if (TCG_TYPE_PTR == TCG_TYPE_I64) {
-            htype = TCG_TYPE_I64;
             hrexw = P_REXW;
+            if (TARGET_PAGE_BITS + CPU_TLB_BITS > 32) {
+                tlbtype = TCG_TYPE_I64;
+                tlbrexw = P_REXW;
+            }
         }
     }
 
-    tcg_out_mov(s, htype, r0, addrlo);
+    tcg_out_mov(s, tlbtype, r0, addrlo);
     if (aligned) {
         tcg_out_mov(s, ttype, r1, addrlo);
     } else {
@@ -1203,12 +1206,12 @@ static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
         tcg_out_modrm_offset(s, OPC_LEA + trexw, r1, addrlo, s_mask);
     }
 
-    tcg_out_shifti(s, SHIFT_SHR + hrexw, r0,
+    tcg_out_shifti(s, SHIFT_SHR + tlbrexw, r0,
                    TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
 
     tgen_arithi(s, ARITH_AND + trexw, r1,
                 TARGET_PAGE_MASK | (aligned ? s_mask : 0), 0);
-    tgen_arithi(s, ARITH_AND + hrexw, r0,
+    tgen_arithi(s, ARITH_AND + tlbrexw, r0,
                 (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS, 0);
 
     tcg_out_modrm_sib_offset(s, OPC_LEA + hrexw, r0, TCG_AREG0, r0, 0,
-- 
2.4.3
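
Why dropping REX.W is safe here: on x86-64, any instruction that writes a
32-bit register implicitly zero-extends the result into the full 64-bit
register, and the REX.W prefix costs one byte on each of the shr and the
and (hence the 2 bytes saved per TLB access). The TLB index uses only
guest-address bits [TARGET_PAGE_BITS, TARGET_PAGE_BITS + CPU_TLB_BITS),
so the 32-bit forms compute the same index whenever that bit range lies
below bit 32, which is exactly the condition the patch tests. Below is a
minimal standalone sketch of that equivalence, not part of the patch; the
constant values are hypothetical stand-ins for QEMU's configuration
macros.

/*
 * Sketch (assumed constants): the 32-bit shr+and computes the same TLB
 * index as the 64-bit form when TARGET_PAGE_BITS + CPU_TLB_BITS <= 32.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

#define TARGET_PAGE_BITS   12           /* assumed: 4 KiB guest pages */
#define CPU_TLB_BITS        8           /* assumed: 256-entry TLB */
#define CPU_TLB_ENTRY_BITS  5           /* assumed: 32-byte TLB entries */
#define CPU_TLB_SIZE       (1 << CPU_TLB_BITS)

int main(void)
{
    uint64_t addr = UINT64_C(0xdeadbeefcafef00d);
    int shift = TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS;
    uint32_t mask = (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS;

    /* 64-bit form: shr/and with REX.W, as emitted before the patch. */
    uint64_t idx64 = (addr >> shift) & mask;

    /* 32-bit form: operates on the low 32 bits only; the address bits
     * the index needs all lie below bit 32, so nothing is lost, and the
     * hardware zero-extends the 32-bit result. */
    uint64_t idx32 = ((uint32_t)addr >> shift) & mask;

    assert(idx64 == idx32);             /* holds because 12 + 8 <= 32 */
    printf("TLB index offset: 0x%" PRIx64 "\n", idx64);
    return 0;
}

The assertion holds for every address under these constants; with
TARGET_PAGE_BITS + CPU_TLB_BITS > 32 it would not, which is the case the
patch keeps on the REX.W (tlbrexw = P_REXW) path.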