From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:57920)
	by lists.gnu.org with esmtp (Exim 4.71) id 1ZFKTN-0000WX-Q3
	for qemu-devel@nongnu.org; Wed, 15 Jul 2015 07:03:34 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	id 1ZFKTH-0001Jl-NA for qemu-devel@nongnu.org;
	Wed, 15 Jul 2015 07:03:33 -0400
Received: from hall.aurel32.net ([2001:bc8:30d7:100::1]:47951)
	by eggs.gnu.org with esmtp (Exim 4.71) id 1ZFKTH-0001JF-G9
	for qemu-devel@nongnu.org; Wed, 15 Jul 2015 07:03:27 -0400
From: Aurelien Jarno
Date: Wed, 15 Jul 2015 13:03:18 +0200
Message-Id: <1436958199-5181-9-git-send-email-aurelien@aurel32.net>
In-Reply-To: <1436958199-5181-1-git-send-email-aurelien@aurel32.net>
References: <1436958199-5181-1-git-send-email-aurelien@aurel32.net>
Subject: [Qemu-devel] [PATCH RFC 8/9] tcg/optimize: do not simplify size changing moves
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Aurelien Jarno, Richard Henderson

Now that we have real size changing ops, we no longer need to mark the
high bits of the destination as garbage. The goal of the optimizer is to
predict the value of the temps (and not of the registers) and to do
simplifications when possible. The problem is therefore not that those
bits are not counted as garbage, but that a size changing op is replaced
by a move.

This patch is basically a revert of 24666baf, including the changes that
have been made since then.
Cc: Paolo Bonzini
Cc: Richard Henderson
Signed-off-by: Aurelien Jarno
---
 tcg/optimize.c | 28 ++++++----------------------
 1 file changed, 6 insertions(+), 22 deletions(-)

diff --git a/tcg/optimize.c b/tcg/optimize.c
index 18b7bc3..d1a0b6d 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -197,19 +197,13 @@ static void tcg_opt_gen_movi(TCGContext *s, TCGOp *op, TCGArg *args,
                              TCGArg dst, TCGArg val)
 {
     TCGOpcode new_op = op_to_movi(op->opc);
-    tcg_target_ulong mask;
 
     op->opc = new_op;
 
     reset_temp(dst);
     temps[dst].state = TCG_TEMP_CONST;
     temps[dst].val = val;
-    mask = val;
-    if (TCG_TARGET_REG_BITS > 32 && new_op == INDEX_op_mov_i32) {
-        /* High bits of the destination are now garbage.  */
-        mask |= ~0xffffffffull;
-    }
-    temps[dst].mask = mask;
+    temps[dst].mask = val;
 
     args[0] = dst;
     args[1] = val;
@@ -229,17 +223,11 @@ static void tcg_opt_gen_mov(TCGContext *s, TCGOp *op, TCGArg *args,
     }
 
     TCGOpcode new_op = op_to_mov(op->opc);
-    tcg_target_ulong mask;
 
     op->opc = new_op;
 
     reset_temp(dst);
-    mask = temps[src].mask;
-    if (TCG_TARGET_REG_BITS > 32 && new_op == INDEX_op_mov_i32) {
-        /* High bits of the destination are now garbage.  */
-        mask |= ~0xffffffffull;
-    }
-    temps[dst].mask = mask;
+    temps[dst].mask = temps[src].mask;
 
     assert(temps[src].state != TCG_TEMP_CONST);
 
@@ -590,7 +578,7 @@ void tcg_optimize(TCGContext *s)
     reset_all_temps(nb_temps);
 
     for (oi = s->gen_first_op_idx; oi >= 0; oi = oi_next) {
-        tcg_target_ulong mask, partmask, affected;
+        tcg_target_ulong mask, affected;
         int nb_oargs, nb_iargs, i;
         TCGArg tmp;
 
@@ -945,17 +933,13 @@ void tcg_optimize(TCGContext *s)
             break;
         }
 
-        /* 32-bit ops generate 32-bit results.  For the result is zero test
-           below, we can ignore high bits, but for further optimizations we
-           need to record that the high bits contain garbage.  */
-        partmask = mask;
+        /* 32-bit ops generate 32-bit results.  */
         if (!(def->flags & TCG_OPF_64BIT)) {
-            mask |= ~(tcg_target_ulong)0xffffffffu;
-            partmask &= 0xffffffffu;
+            mask &= 0xffffffffu;
             affected &= 0xffffffffu;
         }
 
-        if (partmask == 0) {
+        if (mask == 0) {
             assert(nb_oargs == 1);
             tcg_opt_gen_movi(s, op, args, args[0], 0);
             continue;
-- 
2.1.4