* [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup
From: Richard Henderson @ 2011-08-17 21:11 UTC
To: qemu-devel
As discussed elsewhere, one way to tidy up tcg/optimize.c
is to always provide the enum names, even if the host does
not support the operation.
As a sanity check, I wanted to include a test to make sure
that we never tried to output an opcode that the target
does not handle. I did this via a bit in the TCGOpDef flags.
In order to get that set, I changed all of the TCG_TARGET_HAS_*
macros to be true/false rather than def/undef.
That allowed a further cleanup: converting the ifdefs into plain C ifs.
Unfortunately, it wasn't really possible to split this into
smaller pieces.  Using the C ifs requires that the enums be
present, even if unused.
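For illustration only, here is a sketch of the kind of sanity check
meant above -- not the patch itself.  It assumes a hypothetical
TCG_OPF_NOT_SUPPORTED bit in TCGOpDef.flags; TCGOpDef, tcg_op_defs
and tcg_abort() are the existing definitions from tcg/tcg.h.

static void assert_op_supported(TCGOpcode op)
{
    const TCGOpDef *def = &tcg_op_defs[op];

    /* The hypothetical flag would be set by the DEF() expansion for
       any opcode the host backend does not implement.  */
    if (def->flags & TCG_OPF_NOT_SUPPORTED) {
        fprintf(stderr, "TCG opcode %s is not supported by this host\n",
                def->name);
        tcg_abort();
    }
}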
I cross-compiled --target-list=i386-softmmu,i386-linux-user
for each of the tcg hosts. In the process I discovered a
number of pure compilation errors.
r~
Richard Henderson (6):
tcg: Add and use TCG_OPF_64BIT.
tcg: Always define all of the TCGOpcode enum members.
tcg: Constant fold neg, andc, orc, eqv, nand, nor.
tcg-hppa: Fix CPU_TEMP_BUF_NLONGS oversight.
tcg-ia64: Fix typos in AREG0 setup in prologue.
tcg-arm: Make tcg_out_addi inline
tcg/arm/tcg-target.c | 2 +-
tcg/arm/tcg-target.h | 30 +-
tcg/hppa/tcg-target.c | 2 +-
tcg/hppa/tcg-target.h | 29 +-
tcg/i386/tcg-target.h | 68 ++--
tcg/ia64/tcg-target.c | 4 +-
tcg/ia64/tcg-target.h | 66 ++--
tcg/mips/tcg-target.h | 31 +-
tcg/optimize.c | 260 +++-----------
tcg/ppc/tcg-target.h | 31 +-
tcg/ppc64/tcg-target.h | 68 ++--
tcg/s390/tcg-target.h | 68 ++--
tcg/sparc/tcg-target.h | 68 ++--
tcg/tcg-op.h | 946 +++++++++++++++++++++++-------------------------
tcg/tcg-opc.h | 242 +++++--------
tcg/tcg.c | 6 +-
tcg/tcg.h | 59 +++-
17 files changed, 886 insertions(+), 1094 deletions(-)
--
1.7.4.4
* [Qemu-devel] [PATCH 1/6] tcg: Add and use TCG_OPF_64BIT.
From: Richard Henderson @ 2011-08-17 21:11 UTC
To: qemu-devel
This allows the simplification of the op_bits function from
tcg/optimize.c.
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/optimize.c | 77 ++------------------------------------------
tcg/tcg-opc.h | 98 ++++++++++++++++++++++++++++----------------------------
tcg/tcg.c | 2 +-
tcg/tcg.h | 21 ++++++++----
4 files changed, 67 insertions(+), 131 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 7eb5eb1..98c7e3f 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -92,81 +92,10 @@ static void reset_temp(TCGArg temp, int nb_temps, int nb_globals)
}
}
-static int op_bits(int op)
+static int op_bits(enum TCGOpcode op)
{
- switch (op) {
- case INDEX_op_mov_i32:
- case INDEX_op_add_i32:
- case INDEX_op_sub_i32:
- case INDEX_op_mul_i32:
- case INDEX_op_and_i32:
- case INDEX_op_or_i32:
- case INDEX_op_xor_i32:
- case INDEX_op_shl_i32:
- case INDEX_op_shr_i32:
- case INDEX_op_sar_i32:
-#ifdef TCG_TARGET_HAS_rot_i32
- case INDEX_op_rotl_i32:
- case INDEX_op_rotr_i32:
-#endif
-#ifdef TCG_TARGET_HAS_not_i32
- case INDEX_op_not_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext8s_i32
- case INDEX_op_ext8s_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i32
- case INDEX_op_ext16s_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i32
- case INDEX_op_ext8u_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i32
- case INDEX_op_ext16u_i32:
-#endif
- return 32;
-#if TCG_TARGET_REG_BITS == 64
- case INDEX_op_mov_i64:
- case INDEX_op_add_i64:
- case INDEX_op_sub_i64:
- case INDEX_op_mul_i64:
- case INDEX_op_and_i64:
- case INDEX_op_or_i64:
- case INDEX_op_xor_i64:
- case INDEX_op_shl_i64:
- case INDEX_op_shr_i64:
- case INDEX_op_sar_i64:
-#ifdef TCG_TARGET_HAS_rot_i64
- case INDEX_op_rotl_i64:
- case INDEX_op_rotr_i64:
-#endif
-#ifdef TCG_TARGET_HAS_not_i64
- case INDEX_op_not_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext8s_i64
- case INDEX_op_ext8s_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i64
- case INDEX_op_ext16s_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext32s_i64
- case INDEX_op_ext32s_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i64
- case INDEX_op_ext8u_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i64
- case INDEX_op_ext16u_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext32u_i64
- case INDEX_op_ext32u_i64:
-#endif
- return 64;
-#endif
- default:
- fprintf(stderr, "Unrecognized operation %d in op_bits.\n", op);
- tcg_abort();
- }
+ const TCGOpDef *def = &tcg_op_defs[op];
+ return def->flags & TCG_OPF_64BIT ? 64 : 32;
}
static int op_to_movi(int op)
diff --git a/tcg/tcg-opc.h b/tcg/tcg-opc.h
index 2c7ca1a..b48669b 100644
--- a/tcg/tcg-opc.h
+++ b/tcg/tcg-opc.h
@@ -131,98 +131,98 @@ DEF(nor_i32, 1, 2, 0, 0)
#endif
#if TCG_TARGET_REG_BITS == 64
-DEF(mov_i64, 1, 1, 0, 0)
-DEF(movi_i64, 1, 0, 1, 0)
-DEF(setcond_i64, 1, 2, 1, 0)
+DEF(mov_i64, 1, 1, 0, TCG_OPF_64BIT)
+DEF(movi_i64, 1, 0, 1, TCG_OPF_64BIT)
+DEF(setcond_i64, 1, 2, 1, TCG_OPF_64BIT)
/* load/store */
-DEF(ld8u_i64, 1, 1, 1, 0)
-DEF(ld8s_i64, 1, 1, 1, 0)
-DEF(ld16u_i64, 1, 1, 1, 0)
-DEF(ld16s_i64, 1, 1, 1, 0)
-DEF(ld32u_i64, 1, 1, 1, 0)
-DEF(ld32s_i64, 1, 1, 1, 0)
-DEF(ld_i64, 1, 1, 1, 0)
-DEF(st8_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS)
-DEF(st16_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS)
-DEF(st32_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS)
-DEF(st_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS)
+DEF(ld8u_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(ld8s_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(ld16u_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(ld16s_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(ld32u_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(ld32s_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(ld_i64, 1, 1, 1, TCG_OPF_64BIT)
+DEF(st8_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
+DEF(st16_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
+DEF(st32_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
+DEF(st_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
/* arith */
-DEF(add_i64, 1, 2, 0, 0)
-DEF(sub_i64, 1, 2, 0, 0)
-DEF(mul_i64, 1, 2, 0, 0)
+DEF(add_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(sub_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(mul_i64, 1, 2, 0, TCG_OPF_64BIT)
#ifdef TCG_TARGET_HAS_div_i64
-DEF(div_i64, 1, 2, 0, 0)
-DEF(divu_i64, 1, 2, 0, 0)
-DEF(rem_i64, 1, 2, 0, 0)
-DEF(remu_i64, 1, 2, 0, 0)
+DEF(div_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(divu_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(rem_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(remu_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_div2_i64
-DEF(div2_i64, 2, 3, 0, 0)
-DEF(divu2_i64, 2, 3, 0, 0)
+DEF(div2_i64, 2, 3, 0, TCG_OPF_64BIT)
+DEF(divu2_i64, 2, 3, 0, TCG_OPF_64BIT)
#endif
-DEF(and_i64, 1, 2, 0, 0)
-DEF(or_i64, 1, 2, 0, 0)
-DEF(xor_i64, 1, 2, 0, 0)
+DEF(and_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(or_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(xor_i64, 1, 2, 0, TCG_OPF_64BIT)
/* shifts/rotates */
-DEF(shl_i64, 1, 2, 0, 0)
-DEF(shr_i64, 1, 2, 0, 0)
-DEF(sar_i64, 1, 2, 0, 0)
+DEF(shl_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(shr_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(sar_i64, 1, 2, 0, TCG_OPF_64BIT)
#ifdef TCG_TARGET_HAS_rot_i64
-DEF(rotl_i64, 1, 2, 0, 0)
-DEF(rotr_i64, 1, 2, 0, 0)
+DEF(rotl_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(rotr_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_deposit_i64
-DEF(deposit_i64, 1, 2, 2, 0)
+DEF(deposit_i64, 1, 2, 2, TCG_OPF_64BIT)
#endif
-DEF(brcond_i64, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS)
+DEF(brcond_i64, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
#ifdef TCG_TARGET_HAS_ext8s_i64
-DEF(ext8s_i64, 1, 1, 0, 0)
+DEF(ext8s_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_ext16s_i64
-DEF(ext16s_i64, 1, 1, 0, 0)
+DEF(ext16s_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_ext32s_i64
-DEF(ext32s_i64, 1, 1, 0, 0)
+DEF(ext32s_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_ext8u_i64
-DEF(ext8u_i64, 1, 1, 0, 0)
+DEF(ext8u_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_ext16u_i64
-DEF(ext16u_i64, 1, 1, 0, 0)
+DEF(ext16u_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_ext32u_i64
-DEF(ext32u_i64, 1, 1, 0, 0)
+DEF(ext32u_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_bswap16_i64
-DEF(bswap16_i64, 1, 1, 0, 0)
+DEF(bswap16_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_bswap32_i64
-DEF(bswap32_i64, 1, 1, 0, 0)
+DEF(bswap32_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_bswap64_i64
-DEF(bswap64_i64, 1, 1, 0, 0)
+DEF(bswap64_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_not_i64
-DEF(not_i64, 1, 1, 0, 0)
+DEF(not_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_neg_i64
-DEF(neg_i64, 1, 1, 0, 0)
+DEF(neg_i64, 1, 1, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_andc_i64
-DEF(andc_i64, 1, 2, 0, 0)
+DEF(andc_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_orc_i64
-DEF(orc_i64, 1, 2, 0, 0)
+DEF(orc_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_eqv_i64
-DEF(eqv_i64, 1, 2, 0, 0)
+DEF(eqv_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_nand_i64
-DEF(nand_i64, 1, 2, 0, 0)
+DEF(nand_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#ifdef TCG_TARGET_HAS_nor_i64
-DEF(nor_i64, 1, 2, 0, 0)
+DEF(nor_i64, 1, 2, 0, TCG_OPF_64BIT)
#endif
#endif
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 92f1989..7179bd4 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -68,7 +68,7 @@ static void tcg_target_qemu_prologue(TCGContext *s);
static void patch_reloc(uint8_t *code_ptr, int type,
tcg_target_long value, tcg_target_long addend);
-static TCGOpDef tcg_op_defs[] = {
+TCGOpDef tcg_op_defs[] = {
#define DEF(s, oargs, iargs, cargs, flags) { #s, oargs, iargs, cargs, iargs + oargs + cargs, flags },
#include "tcg-opc.h"
#undef DEF
diff --git a/tcg/tcg.h b/tcg/tcg.h
index e2a7095..6a4f6e4 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -445,13 +445,18 @@ typedef struct TCGArgConstraint {
#define TCG_MAX_OP_ARGS 16
-#define TCG_OPF_BB_END 0x01 /* instruction defines the end of a basic
- block */
-#define TCG_OPF_CALL_CLOBBER 0x02 /* instruction clobbers call registers
- and potentially update globals. */
-#define TCG_OPF_SIDE_EFFECTS 0x04 /* instruction has side effects : it
- cannot be removed if its output
- are not used */
+/* Bits for TCGOpDef->flags, 8 bits available. */
+enum {
+ /* Instruction defines the end of a basic block. */
+ TCG_OPF_BB_END = 0x01,
+ /* Instruction clobbers call registers and potentially update globals. */
+ TCG_OPF_CALL_CLOBBER = 0x02,
+ /* Instruction has side effects: it cannot be removed
+ if its outputs are not used. */
+ TCG_OPF_SIDE_EFFECTS = 0x04,
+ /* Instruction operands are 64-bits (otherwise 32-bits). */
+ TCG_OPF_64BIT = 0x08,
+};
typedef struct TCGOpDef {
const char *name;
@@ -463,6 +468,8 @@ typedef struct TCGOpDef {
int used;
#endif
} TCGOpDef;
+
+extern TCGOpDef tcg_op_defs[];
typedef struct TCGTargetOpDef {
TCGOpcode op;
--
1.7.4.4
* [Qemu-devel] [PATCH 2/6] tcg: Always define all of the TCGOpcode enum members.
From: Richard Henderson @ 2011-08-17 21:11 UTC
To: qemu-devel
By always defining these symbols, we can eliminate a lot of ifdefs.
To allow this to be checked reliably, the semantics of the
TCG_TARGET_HAS_* macros must be changed from def/undef to true/false.
This allows even more ifdefs to be removed, converting them into
C if statements.
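A standalone illustration of the macro-semantics point (not QEMU code):
with a 0/1 macro the feature test can be an ordinary C if, so both
branches are parsed and type-checked -- which is why the enum members
must always exist -- while the constant condition lets the compiler
drop the dead branch.

#include <stdio.h>

/* Stand-in for a TCG_TARGET_HAS_* macro: always defined, to 0 or 1. */
#define HAS_FAST_PATH 0

static int fast_path(int x) { return x * 2; }
static int slow_path(int x) { return x + x; }

int main(void)
{
    int r;

    if (HAS_FAST_PATH) {
        /* Must still compile even when HAS_FAST_PATH is 0; the
           compiler eliminates this branch as dead code. */
        r = fast_path(21);
    } else {
        r = slow_path(21);
    }
    printf("%d\n", r);
    return 0;
}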
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/arm/tcg-target.h | 30 +-
tcg/hppa/tcg-target.h | 29 +-
tcg/i386/tcg-target.h | 68 ++--
tcg/ia64/tcg-target.h | 66 ++--
tcg/mips/tcg-target.h | 31 +-
tcg/optimize.c | 156 ++-------
tcg/ppc/tcg-target.h | 31 +-
tcg/ppc64/tcg-target.h | 68 ++--
tcg/s390/tcg-target.h | 68 ++--
tcg/sparc/tcg-target.h | 68 ++--
tcg/tcg-op.h | 946 +++++++++++++++++++++++-------------------------
tcg/tcg-opc.h | 242 +++++--------
tcg/tcg.c | 4 +
tcg/tcg.h | 38 ++
14 files changed, 837 insertions(+), 1008 deletions(-)
diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h
index d8d7d94..0e0f69a 100644
--- a/tcg/arm/tcg-target.h
+++ b/tcg/arm/tcg-target.h
@@ -58,20 +58,22 @@ enum {
#define TCG_TARGET_CALL_STACK_OFFSET 0
/* optional instructions */
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#undef TCG_TARGET_HAS_ext8u_i32 /* and r0, r1, #0xff */
-#define TCG_TARGET_HAS_ext16u_i32
-#define TCG_TARGET_HAS_bswap16_i32
-#define TCG_TARGET_HAS_bswap32_i32
-#define TCG_TARGET_HAS_not_i32
-#define TCG_TARGET_HAS_neg_i32
-#define TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_andc_i32
-// #define TCG_TARGET_HAS_orc_i32
-// #define TCG_TARGET_HAS_eqv_i32
-// #define TCG_TARGET_HAS_nand_i32
-// #define TCG_TARGET_HAS_nor_i32
+#define TCG_TARGET_HAS_div_i32 0
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_ext8u_i32 0 /* and r0, r1, #0xff */
+#define TCG_TARGET_HAS_ext16u_i32 1
+#define TCG_TARGET_HAS_bswap16_i32 1
+#define TCG_TARGET_HAS_bswap32_i32 1
+#define TCG_TARGET_HAS_not_i32 1
+#define TCG_TARGET_HAS_neg_i32 1
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_andc_i32 1
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
#define TCG_TARGET_HAS_GUEST_BASE
diff --git a/tcg/hppa/tcg-target.h b/tcg/hppa/tcg-target.h
index f7919ce..ed90efc 100644
--- a/tcg/hppa/tcg-target.h
+++ b/tcg/hppa/tcg-target.h
@@ -85,21 +85,24 @@ enum {
#define TCG_TARGET_STACK_GROWSUP
/* optional instructions */
-// #define TCG_TARGET_HAS_div_i32
-#define TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#define TCG_TARGET_HAS_bswap16_i32
-#define TCG_TARGET_HAS_bswap32_i32
-#define TCG_TARGET_HAS_not_i32
-#define TCG_TARGET_HAS_andc_i32
-// #define TCG_TARGET_HAS_orc_i32
-#define TCG_TARGET_HAS_deposit_i32
+#define TCG_TARGET_HAS_div_i32 0
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_bswap16_i32 1
+#define TCG_TARGET_HAS_bswap32_i32 1
+#define TCG_TARGET_HAS_not_i32 1
+#define TCG_TARGET_HAS_andc_i32 1
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 1
/* optional instructions automatically implemented */
-#undef TCG_TARGET_HAS_neg_i32 /* sub rd, 0, rs */
-#undef TCG_TARGET_HAS_ext8u_i32 /* and rd, rs, 0xff */
-#undef TCG_TARGET_HAS_ext16u_i32 /* and rd, rs, 0xffff */
+#define TCG_TARGET_HAS_neg_i32 0 /* sub rd, 0, rs */
+#define TCG_TARGET_HAS_ext8u_i32 0 /* and rd, rs, 0xff */
+#define TCG_TARGET_HAS_ext16u_i32 0 /* and rd, rs, 0xffff */
#define TCG_TARGET_HAS_GUEST_BASE
diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
index bfafbfc..5088e47 100644
--- a/tcg/i386/tcg-target.h
+++ b/tcg/i386/tcg-target.h
@@ -75,41 +75,43 @@ enum {
#define TCG_TARGET_CALL_STACK_OFFSET 0
/* optional instructions */
-#define TCG_TARGET_HAS_div2_i32
-#define TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#define TCG_TARGET_HAS_ext8u_i32
-#define TCG_TARGET_HAS_ext16u_i32
-#define TCG_TARGET_HAS_bswap16_i32
-#define TCG_TARGET_HAS_bswap32_i32
-#define TCG_TARGET_HAS_neg_i32
-#define TCG_TARGET_HAS_not_i32
-// #define TCG_TARGET_HAS_andc_i32
-// #define TCG_TARGET_HAS_orc_i32
-// #define TCG_TARGET_HAS_eqv_i32
-// #define TCG_TARGET_HAS_nand_i32
-// #define TCG_TARGET_HAS_nor_i32
+#define TCG_TARGET_HAS_div2_i32 1
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_ext8u_i32 1
+#define TCG_TARGET_HAS_ext16u_i32 1
+#define TCG_TARGET_HAS_bswap16_i32 1
+#define TCG_TARGET_HAS_bswap32_i32 1
+#define TCG_TARGET_HAS_neg_i32 1
+#define TCG_TARGET_HAS_not_i32 1
+#define TCG_TARGET_HAS_andc_i32 0
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
#if TCG_TARGET_REG_BITS == 64
-#define TCG_TARGET_HAS_div2_i64
-#define TCG_TARGET_HAS_rot_i64
-#define TCG_TARGET_HAS_ext8s_i64
-#define TCG_TARGET_HAS_ext16s_i64
-#define TCG_TARGET_HAS_ext32s_i64
-#define TCG_TARGET_HAS_ext8u_i64
-#define TCG_TARGET_HAS_ext16u_i64
-#define TCG_TARGET_HAS_ext32u_i64
-#define TCG_TARGET_HAS_bswap16_i64
-#define TCG_TARGET_HAS_bswap32_i64
-#define TCG_TARGET_HAS_bswap64_i64
-#define TCG_TARGET_HAS_neg_i64
-#define TCG_TARGET_HAS_not_i64
-// #define TCG_TARGET_HAS_andc_i64
-// #define TCG_TARGET_HAS_orc_i64
-// #define TCG_TARGET_HAS_eqv_i64
-// #define TCG_TARGET_HAS_nand_i64
-// #define TCG_TARGET_HAS_nor_i64
+#define TCG_TARGET_HAS_div2_i64 1
+#define TCG_TARGET_HAS_rot_i64 1
+#define TCG_TARGET_HAS_ext8s_i64 1
+#define TCG_TARGET_HAS_ext16s_i64 1
+#define TCG_TARGET_HAS_ext32s_i64 1
+#define TCG_TARGET_HAS_ext8u_i64 1
+#define TCG_TARGET_HAS_ext16u_i64 1
+#define TCG_TARGET_HAS_ext32u_i64 1
+#define TCG_TARGET_HAS_bswap16_i64 1
+#define TCG_TARGET_HAS_bswap32_i64 1
+#define TCG_TARGET_HAS_bswap64_i64 1
+#define TCG_TARGET_HAS_neg_i64 1
+#define TCG_TARGET_HAS_not_i64 1
+#define TCG_TARGET_HAS_andc_i64 0
+#define TCG_TARGET_HAS_orc_i64 0
+#define TCG_TARGET_HAS_eqv_i64 0
+#define TCG_TARGET_HAS_nand_i64 0
+#define TCG_TARGET_HAS_nor_i64 0
+#define TCG_TARGET_HAS_deposit_i64 0
#endif
#define TCG_TARGET_HAS_GUEST_BASE
diff --git a/tcg/ia64/tcg-target.h b/tcg/ia64/tcg-target.h
index e56e88f..ddc93c1 100644
--- a/tcg/ia64/tcg-target.h
+++ b/tcg/ia64/tcg-target.h
@@ -104,39 +104,43 @@ enum {
#define TCG_TARGET_CALL_STACK_OFFSET 16
/* optional instructions */
-#define TCG_TARGET_HAS_andc_i32
-#define TCG_TARGET_HAS_andc_i64
-#define TCG_TARGET_HAS_bswap16_i32
-#define TCG_TARGET_HAS_bswap16_i64
-#define TCG_TARGET_HAS_bswap32_i32
-#define TCG_TARGET_HAS_bswap32_i64
-#define TCG_TARGET_HAS_bswap64_i64
-#define TCG_TARGET_HAS_eqv_i32
-#define TCG_TARGET_HAS_eqv_i64
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#define TCG_TARGET_HAS_ext8s_i64
-#define TCG_TARGET_HAS_ext16s_i64
-#define TCG_TARGET_HAS_ext32s_i64
-#define TCG_TARGET_HAS_ext8u_i32
-#define TCG_TARGET_HAS_ext16u_i32
-#define TCG_TARGET_HAS_ext8u_i64
-#define TCG_TARGET_HAS_ext16u_i64
-#define TCG_TARGET_HAS_ext32u_i64
-#define TCG_TARGET_HAS_nand_i32
-#define TCG_TARGET_HAS_nand_i64
-#define TCG_TARGET_HAS_nor_i32
-#define TCG_TARGET_HAS_nor_i64
-#define TCG_TARGET_HAS_orc_i32
-#define TCG_TARGET_HAS_orc_i64
-#define TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_rot_i64
+#define TCG_TARGET_HAS_div_i32 0
+#define TCG_TARGET_HAS_div_i64 0
+#define TCG_TARGET_HAS_andc_i32 1
+#define TCG_TARGET_HAS_andc_i64 1
+#define TCG_TARGET_HAS_bswap16_i32 1
+#define TCG_TARGET_HAS_bswap16_i64 1
+#define TCG_TARGET_HAS_bswap32_i32 1
+#define TCG_TARGET_HAS_bswap32_i64 1
+#define TCG_TARGET_HAS_bswap64_i64 1
+#define TCG_TARGET_HAS_eqv_i32 1
+#define TCG_TARGET_HAS_eqv_i64 1
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_ext8s_i64 1
+#define TCG_TARGET_HAS_ext16s_i64 1
+#define TCG_TARGET_HAS_ext32s_i64 1
+#define TCG_TARGET_HAS_ext8u_i32 1
+#define TCG_TARGET_HAS_ext16u_i32 1
+#define TCG_TARGET_HAS_ext8u_i64 1
+#define TCG_TARGET_HAS_ext16u_i64 1
+#define TCG_TARGET_HAS_ext32u_i64 1
+#define TCG_TARGET_HAS_nand_i32 1
+#define TCG_TARGET_HAS_nand_i64 1
+#define TCG_TARGET_HAS_nor_i32 1
+#define TCG_TARGET_HAS_nor_i64 1
+#define TCG_TARGET_HAS_orc_i32 1
+#define TCG_TARGET_HAS_orc_i64 1
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_rot_i64 1
+#define TCG_TARGET_HAS_deposit_i32 0
+#define TCG_TARGET_HAS_deposit_i64 0
/* optional instructions automatically implemented */
-#undef TCG_TARGET_HAS_neg_i32 /* sub r1, r0, r3 */
-#undef TCG_TARGET_HAS_neg_i64 /* sub r1, r0, r3 */
-#undef TCG_TARGET_HAS_not_i32 /* xor r1, -1, r3 */
-#undef TCG_TARGET_HAS_not_i64 /* xor r1, -1, r3 */
+#define TCG_TARGET_HAS_neg_i32 0 /* sub r1, r0, r3 */
+#define TCG_TARGET_HAS_neg_i64 0 /* sub r1, r0, r3 */
+#define TCG_TARGET_HAS_not_i32 0 /* xor r1, -1, r3 */
+#define TCG_TARGET_HAS_not_i64 0 /* xor r1, -1, r3 */
/* Note: must be synced with dyngen-exec.h */
#define TCG_AREG0 TCG_REG_R7
diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h
index 8cb7d88..43c5501 100644
--- a/tcg/mips/tcg-target.h
+++ b/tcg/mips/tcg-target.h
@@ -78,23 +78,24 @@ enum {
#define TCG_TARGET_CALL_ALIGN_ARGS 1
/* optional instructions */
-#define TCG_TARGET_HAS_div_i32
-#define TCG_TARGET_HAS_not_i32
-#define TCG_TARGET_HAS_nor_i32
-#undef TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#undef TCG_TARGET_HAS_bswap32_i32
-#undef TCG_TARGET_HAS_bswap16_i32
-#undef TCG_TARGET_HAS_andc_i32
-#undef TCG_TARGET_HAS_orc_i32
-#undef TCG_TARGET_HAS_eqv_i32
-#undef TCG_TARGET_HAS_nand_i32
+#define TCG_TARGET_HAS_div_i32 1
+#define TCG_TARGET_HAS_not_i32 1
+#define TCG_TARGET_HAS_nor_i32 1
+#define TCG_TARGET_HAS_rot_i32 0
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_bswap32_i32 0
+#define TCG_TARGET_HAS_bswap16_i32 0
+#define TCG_TARGET_HAS_andc_i32 0
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
/* optional instructions automatically implemented */
-#undef TCG_TARGET_HAS_neg_i32 /* sub rd, zero, rt */
-#undef TCG_TARGET_HAS_ext8u_i32 /* andi rt, rs, 0xff */
-#undef TCG_TARGET_HAS_ext16u_i32 /* andi rt, rs, 0xffff */
+#define TCG_TARGET_HAS_neg_i32 0 /* sub rd, zero, rt */
+#define TCG_TARGET_HAS_ext8u_i32 0 /* andi rt, rs, 0xff */
+#define TCG_TARGET_HAS_ext16u_i32 0 /* andi rt, rs, 0xffff */
/* Note: must be synced with dyngen-exec.h */
#define TCG_AREG0 TCG_REG_S0
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 98c7e3f..32f928f 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -31,14 +31,9 @@
#include "qemu-common.h"
#include "tcg-op.h"
-#if TCG_TARGET_REG_BITS == 64
#define CASE_OP_32_64(x) \
glue(glue(case INDEX_op_, x), _i32): \
glue(glue(case INDEX_op_, x), _i64)
-#else
-#define CASE_OP_32_64(x) \
- glue(glue(case INDEX_op_, x), _i32)
-#endif
typedef enum {
TCG_TEMP_UNDEF = 0,
@@ -103,10 +98,8 @@ static int op_to_movi(int op)
switch (op_bits(op)) {
case 32:
return INDEX_op_movi_i32;
-#if TCG_TARGET_REG_BITS == 64
case 64:
return INDEX_op_movi_i64;
-#endif
default:
fprintf(stderr, "op_to_movi: unexpected return value of "
"function op_bits.\n");
@@ -155,10 +148,8 @@ static int op_to_mov(int op)
switch (op_bits(op)) {
case 32:
return INDEX_op_mov_i32;
-#if TCG_TARGET_REG_BITS == 64
case 64:
return INDEX_op_mov_i64;
-#endif
default:
fprintf(stderr, "op_to_mov: unexpected return value of "
"function op_bits.\n");
@@ -190,124 +181,57 @@ static TCGArg do_constant_folding_2(int op, TCGArg x, TCGArg y)
case INDEX_op_shl_i32:
return (uint32_t)x << (uint32_t)y;
-#if TCG_TARGET_REG_BITS == 64
case INDEX_op_shl_i64:
return (uint64_t)x << (uint64_t)y;
-#endif
case INDEX_op_shr_i32:
return (uint32_t)x >> (uint32_t)y;
-#if TCG_TARGET_REG_BITS == 64
case INDEX_op_shr_i64:
return (uint64_t)x >> (uint64_t)y;
-#endif
case INDEX_op_sar_i32:
return (int32_t)x >> (int32_t)y;
-#if TCG_TARGET_REG_BITS == 64
case INDEX_op_sar_i64:
return (int64_t)x >> (int64_t)y;
-#endif
-#ifdef TCG_TARGET_HAS_rot_i32
case INDEX_op_rotr_i32:
-#if TCG_TARGET_REG_BITS == 64
- x &= 0xffffffff;
- y &= 0xffffffff;
-#endif
- x = (x << (32 - y)) | (x >> y);
+ x = ((uint32_t)x << (32 - y)) | ((uint32_t)x >> y);
return x;
-#endif
-#ifdef TCG_TARGET_HAS_rot_i64
-#if TCG_TARGET_REG_BITS == 64
case INDEX_op_rotr_i64:
- x = (x << (64 - y)) | (x >> y);
+ x = ((uint64_t)x << (64 - y)) | ((uint64_t)x >> y);
return x;
-#endif
-#endif
-#ifdef TCG_TARGET_HAS_rot_i32
case INDEX_op_rotl_i32:
-#if TCG_TARGET_REG_BITS == 64
- x &= 0xffffffff;
- y &= 0xffffffff;
-#endif
- x = (x << y) | (x >> (32 - y));
+ x = ((uint32_t)x << y) | ((uint32_t)x >> (32 - y));
return x;
-#endif
-#ifdef TCG_TARGET_HAS_rot_i64
-#if TCG_TARGET_REG_BITS == 64
case INDEX_op_rotl_i64:
- x = (x << y) | (x >> (64 - y));
+ x = ((uint64_t)x << y) | ((uint64_t)x >> (64 - y));
return x;
-#endif
-#endif
-
-#if defined(TCG_TARGET_HAS_not_i32) || defined(TCG_TARGET_HAS_not_i64)
-#ifdef TCG_TARGET_HAS_not_i32
- case INDEX_op_not_i32:
-#endif
-#ifdef TCG_TARGET_HAS_not_i64
- case INDEX_op_not_i64:
-#endif
+
+ CASE_OP_32_64(not):
return ~x;
-#endif
-
-#if defined(TCG_TARGET_HAS_ext8s_i32) || defined(TCG_TARGET_HAS_ext8s_i64)
-#ifdef TCG_TARGET_HAS_ext8s_i32
- case INDEX_op_ext8s_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext8s_i64
- case INDEX_op_ext8s_i64:
-#endif
+
+ CASE_OP_32_64(ext8s):
return (int8_t)x;
-#endif
-
-#if defined(TCG_TARGET_HAS_ext16s_i32) || defined(TCG_TARGET_HAS_ext16s_i64)
-#ifdef TCG_TARGET_HAS_ext16s_i32
- case INDEX_op_ext16s_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i64
- case INDEX_op_ext16s_i64:
-#endif
+
+ CASE_OP_32_64(ext16s):
return (int16_t)x;
-#endif
-
-#if defined(TCG_TARGET_HAS_ext8u_i32) || defined(TCG_TARGET_HAS_ext8u_i64)
-#ifdef TCG_TARGET_HAS_ext8u_i32
- case INDEX_op_ext8u_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i64
- case INDEX_op_ext8u_i64:
-#endif
+
+ CASE_OP_32_64(ext8u):
return (uint8_t)x;
-#endif
-
-#if defined(TCG_TARGET_HAS_ext16u_i32) || defined(TCG_TARGET_HAS_ext16u_i64)
-#ifdef TCG_TARGET_HAS_ext16u_i32
- case INDEX_op_ext16u_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i64
- case INDEX_op_ext16u_i64:
-#endif
+
+ CASE_OP_32_64(ext16u):
return (uint16_t)x;
-#endif
-#if TCG_TARGET_REG_BITS == 64
-#ifdef TCG_TARGET_HAS_ext32s_i64
case INDEX_op_ext32s_i64:
return (int32_t)x;
-#endif
-#ifdef TCG_TARGET_HAS_ext32u_i64
case INDEX_op_ext32u_i64:
return (uint32_t)x;
-#endif
-#endif
default:
fprintf(stderr,
@@ -319,11 +243,9 @@ static TCGArg do_constant_folding_2(int op, TCGArg x, TCGArg y)
static TCGArg do_constant_folding(int op, TCGArg x, TCGArg y)
{
TCGArg res = do_constant_folding_2(op, x, y);
-#if TCG_TARGET_REG_BITS == 64
if (op_bits(op) == 32) {
res &= 0xffffffff;
}
-#endif
return res;
}
@@ -385,14 +307,8 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr,
CASE_OP_32_64(shl):
CASE_OP_32_64(shr):
CASE_OP_32_64(sar):
-#ifdef TCG_TARGET_HAS_rot_i32
- case INDEX_op_rotl_i32:
- case INDEX_op_rotr_i32:
-#endif
-#ifdef TCG_TARGET_HAS_rot_i64
- case INDEX_op_rotl_i64:
- case INDEX_op_rotr_i64:
-#endif
+ CASE_OP_32_64(rotl):
+ CASE_OP_32_64(rotr):
if (temps[args[1]].state == TCG_TEMP_CONST) {
/* Proceed with possible constant folding. */
break;
@@ -473,34 +389,12 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr,
args += 2;
break;
CASE_OP_32_64(not):
-#ifdef TCG_TARGET_HAS_ext8s_i32
- case INDEX_op_ext8s_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext8s_i64
- case INDEX_op_ext8s_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i32
- case INDEX_op_ext16s_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i64
- case INDEX_op_ext16s_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i32
- case INDEX_op_ext8u_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i64
- case INDEX_op_ext8u_i64:
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i32
- case INDEX_op_ext16u_i32:
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i64
- case INDEX_op_ext16u_i64:
-#endif
-#if TCG_TARGET_REG_BITS == 64
+ CASE_OP_32_64(ext8s):
+ CASE_OP_32_64(ext8u):
+ CASE_OP_32_64(ext16s):
+ CASE_OP_32_64(ext16u):
case INDEX_op_ext32s_i64:
case INDEX_op_ext32u_i64:
-#endif
if (temps[args[1]].state == TCG_TEMP_CONST) {
gen_opc_buf[op_index] = op_to_movi(op);
tmp = do_constant_folding(op, temps[args[1]].val, 0);
@@ -525,14 +419,8 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr,
CASE_OP_32_64(shl):
CASE_OP_32_64(shr):
CASE_OP_32_64(sar):
-#ifdef TCG_TARGET_HAS_rot_i32
- case INDEX_op_rotl_i32:
- case INDEX_op_rotr_i32:
-#endif
-#ifdef TCG_TARGET_HAS_rot_i64
- case INDEX_op_rotl_i64:
- case INDEX_op_rotr_i64:
-#endif
+ CASE_OP_32_64(rotl):
+ CASE_OP_32_64(rotr):
if (temps[args[1]].state == TCG_TEMP_CONST
&& temps[args[2]].state == TCG_TEMP_CONST) {
gen_opc_buf[op_index] = op_to_movi(op);
diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h
index a1f8599..8c35c4e 100644
--- a/tcg/ppc/tcg-target.h
+++ b/tcg/ppc/tcg-target.h
@@ -77,21 +77,22 @@ enum {
#endif
/* optional instructions */
-#define TCG_TARGET_HAS_div_i32
-#define TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#define TCG_TARGET_HAS_ext8u_i32
-#define TCG_TARGET_HAS_ext16u_i32
-#define TCG_TARGET_HAS_bswap16_i32
-#define TCG_TARGET_HAS_bswap32_i32
-#define TCG_TARGET_HAS_not_i32
-#define TCG_TARGET_HAS_neg_i32
-#define TCG_TARGET_HAS_andc_i32
-#define TCG_TARGET_HAS_orc_i32
-#define TCG_TARGET_HAS_eqv_i32
-#define TCG_TARGET_HAS_nand_i32
-#define TCG_TARGET_HAS_nor_i32
+#define TCG_TARGET_HAS_div_i32 1
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_ext8u_i32 1
+#define TCG_TARGET_HAS_ext16u_i32 1
+#define TCG_TARGET_HAS_bswap16_i32 1
+#define TCG_TARGET_HAS_bswap32_i32 1
+#define TCG_TARGET_HAS_not_i32 1
+#define TCG_TARGET_HAS_neg_i32 1
+#define TCG_TARGET_HAS_andc_i32 1
+#define TCG_TARGET_HAS_orc_i32 1
+#define TCG_TARGET_HAS_eqv_i32 1
+#define TCG_TARGET_HAS_nand_i32 1
+#define TCG_TARGET_HAS_nor_i32 1
+#define TCG_TARGET_HAS_deposit_i32 0
#define TCG_AREG0 TCG_REG_R27
diff --git a/tcg/ppc64/tcg-target.h b/tcg/ppc64/tcg-target.h
index 8a6db11..041fe9d 100644
--- a/tcg/ppc64/tcg-target.h
+++ b/tcg/ppc64/tcg-target.h
@@ -68,40 +68,42 @@ enum {
#define TCG_TARGET_CALL_STACK_OFFSET 48
/* optional instructions */
-#define TCG_TARGET_HAS_div_i32
-/* #define TCG_TARGET_HAS_rot_i32 */
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-/* #define TCG_TARGET_HAS_ext8u_i32 */
-/* #define TCG_TARGET_HAS_ext16u_i32 */
-/* #define TCG_TARGET_HAS_bswap16_i32 */
-/* #define TCG_TARGET_HAS_bswap32_i32 */
-/* #define TCG_TARGET_HAS_not_i32 */
-#define TCG_TARGET_HAS_neg_i32
-/* #define TCG_TARGET_HAS_andc_i32 */
-/* #define TCG_TARGET_HAS_orc_i32 */
-/* #define TCG_TARGET_HAS_eqv_i32 */
-/* #define TCG_TARGET_HAS_nand_i32 */
-/* #define TCG_TARGET_HAS_nor_i32 */
+#define TCG_TARGET_HAS_div_i32 1
+#define TCG_TARGET_HAS_rot_i32 0
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_ext8u_i32 0
+#define TCG_TARGET_HAS_ext16u_i32 0
+#define TCG_TARGET_HAS_bswap16_i32 0
+#define TCG_TARGET_HAS_bswap32_i32 0
+#define TCG_TARGET_HAS_not_i32 0
+#define TCG_TARGET_HAS_neg_i32 1
+#define TCG_TARGET_HAS_andc_i32 0
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
-#define TCG_TARGET_HAS_div_i64
-/* #define TCG_TARGET_HAS_rot_i64 */
-#define TCG_TARGET_HAS_ext8s_i64
-#define TCG_TARGET_HAS_ext16s_i64
-#define TCG_TARGET_HAS_ext32s_i64
-/* #define TCG_TARGET_HAS_ext8u_i64 */
-/* #define TCG_TARGET_HAS_ext16u_i64 */
-/* #define TCG_TARGET_HAS_ext32u_i64 */
-/* #define TCG_TARGET_HAS_bswap16_i64 */
-/* #define TCG_TARGET_HAS_bswap32_i64 */
-/* #define TCG_TARGET_HAS_bswap64_i64 */
-/* #define TCG_TARGET_HAS_not_i64 */
-#define TCG_TARGET_HAS_neg_i64
-/* #define TCG_TARGET_HAS_andc_i64 */
-/* #define TCG_TARGET_HAS_orc_i64 */
-/* #define TCG_TARGET_HAS_eqv_i64 */
-/* #define TCG_TARGET_HAS_nand_i64 */
-/* #define TCG_TARGET_HAS_nor_i64 */
+#define TCG_TARGET_HAS_div_i64 1
+#define TCG_TARGET_HAS_rot_i64 0
+#define TCG_TARGET_HAS_ext8s_i64 1
+#define TCG_TARGET_HAS_ext16s_i64 1
+#define TCG_TARGET_HAS_ext32s_i64 1
+#define TCG_TARGET_HAS_ext8u_i64 0
+#define TCG_TARGET_HAS_ext16u_i64 0
+#define TCG_TARGET_HAS_ext32u_i64 0
+#define TCG_TARGET_HAS_bswap16_i64 0
+#define TCG_TARGET_HAS_bswap32_i64 0
+#define TCG_TARGET_HAS_bswap64_i64 0
+#define TCG_TARGET_HAS_not_i64 0
+#define TCG_TARGET_HAS_neg_i64 1
+#define TCG_TARGET_HAS_andc_i64 0
+#define TCG_TARGET_HAS_orc_i64 0
+#define TCG_TARGET_HAS_eqv_i64 0
+#define TCG_TARGET_HAS_nand_i64 0
+#define TCG_TARGET_HAS_nor_i64 0
+#define TCG_TARGET_HAS_deposit_i64 0
#define TCG_AREG0 TCG_REG_R27
diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h
index 4e45cf3..35ebac3 100644
--- a/tcg/s390/tcg-target.h
+++ b/tcg/s390/tcg-target.h
@@ -53,41 +53,43 @@ typedef enum TCGReg {
#define TCG_TARGET_NB_REGS 16
/* optional instructions */
-#define TCG_TARGET_HAS_div2_i32
-#define TCG_TARGET_HAS_rot_i32
-#define TCG_TARGET_HAS_ext8s_i32
-#define TCG_TARGET_HAS_ext16s_i32
-#define TCG_TARGET_HAS_ext8u_i32
-#define TCG_TARGET_HAS_ext16u_i32
-#define TCG_TARGET_HAS_bswap16_i32
-#define TCG_TARGET_HAS_bswap32_i32
-// #define TCG_TARGET_HAS_not_i32
-#define TCG_TARGET_HAS_neg_i32
-// #define TCG_TARGET_HAS_andc_i32
-// #define TCG_TARGET_HAS_orc_i32
-// #define TCG_TARGET_HAS_eqv_i32
-// #define TCG_TARGET_HAS_nand_i32
-// #define TCG_TARGET_HAS_nor_i32
+#define TCG_TARGET_HAS_div2_i32 1
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_ext8s_i32 1
+#define TCG_TARGET_HAS_ext16s_i32 1
+#define TCG_TARGET_HAS_ext8u_i32 1
+#define TCG_TARGET_HAS_ext16u_i32 1
+#define TCG_TARGET_HAS_bswap16_i32 1
+#define TCG_TARGET_HAS_bswap32_i32 1
+#define TCG_TARGET_HAS_not_i32 0
+#define TCG_TARGET_HAS_neg_i32 1
+#define TCG_TARGET_HAS_andc_i32 0
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
#if TCG_TARGET_REG_BITS == 64
-#define TCG_TARGET_HAS_div2_i64
-#define TCG_TARGET_HAS_rot_i64
-#define TCG_TARGET_HAS_ext8s_i64
-#define TCG_TARGET_HAS_ext16s_i64
-#define TCG_TARGET_HAS_ext32s_i64
-#define TCG_TARGET_HAS_ext8u_i64
-#define TCG_TARGET_HAS_ext16u_i64
-#define TCG_TARGET_HAS_ext32u_i64
-#define TCG_TARGET_HAS_bswap16_i64
-#define TCG_TARGET_HAS_bswap32_i64
-#define TCG_TARGET_HAS_bswap64_i64
-// #define TCG_TARGET_HAS_not_i64
-#define TCG_TARGET_HAS_neg_i64
-// #define TCG_TARGET_HAS_andc_i64
-// #define TCG_TARGET_HAS_orc_i64
-// #define TCG_TARGET_HAS_eqv_i64
-// #define TCG_TARGET_HAS_nand_i64
-// #define TCG_TARGET_HAS_nor_i64
+#define TCG_TARGET_HAS_div2_i64 1
+#define TCG_TARGET_HAS_rot_i64 1
+#define TCG_TARGET_HAS_ext8s_i64 1
+#define TCG_TARGET_HAS_ext16s_i64 1
+#define TCG_TARGET_HAS_ext32s_i64 1
+#define TCG_TARGET_HAS_ext8u_i64 1
+#define TCG_TARGET_HAS_ext16u_i64 1
+#define TCG_TARGET_HAS_ext32u_i64 1
+#define TCG_TARGET_HAS_bswap16_i64 1
+#define TCG_TARGET_HAS_bswap32_i64 1
+#define TCG_TARGET_HAS_bswap64_i64 1
+#define TCG_TARGET_HAS_not_i64 0
+#define TCG_TARGET_HAS_neg_i64 1
+#define TCG_TARGET_HAS_andc_i64 0
+#define TCG_TARGET_HAS_orc_i64 0
+#define TCG_TARGET_HAS_eqv_i64 0
+#define TCG_TARGET_HAS_nand_i64 0
+#define TCG_TARGET_HAS_nor_i64 0
+#define TCG_TARGET_HAS_deposit_i64 0
#endif
#define TCG_TARGET_HAS_GUEST_BASE
diff --git a/tcg/sparc/tcg-target.h b/tcg/sparc/tcg-target.h
index df0785e..7b4e7f9 100644
--- a/tcg/sparc/tcg-target.h
+++ b/tcg/sparc/tcg-target.h
@@ -92,41 +92,43 @@ enum {
#endif
/* optional instructions */
-#define TCG_TARGET_HAS_div_i32
-// #define TCG_TARGET_HAS_rot_i32
-// #define TCG_TARGET_HAS_ext8s_i32
-// #define TCG_TARGET_HAS_ext16s_i32
-// #define TCG_TARGET_HAS_ext8u_i32
-// #define TCG_TARGET_HAS_ext16u_i32
-// #define TCG_TARGET_HAS_bswap16_i32
-// #define TCG_TARGET_HAS_bswap32_i32
-#define TCG_TARGET_HAS_neg_i32
-#define TCG_TARGET_HAS_not_i32
-#define TCG_TARGET_HAS_andc_i32
-#define TCG_TARGET_HAS_orc_i32
-// #define TCG_TARGET_HAS_eqv_i32
-// #define TCG_TARGET_HAS_nand_i32
-// #define TCG_TARGET_HAS_nor_i32
+#define TCG_TARGET_HAS_div_i32 1
+#define TCG_TARGET_HAS_rot_i32 0
+#define TCG_TARGET_HAS_ext8s_i32 0
+#define TCG_TARGET_HAS_ext16s_i32 0
+#define TCG_TARGET_HAS_ext8u_i32 0
+#define TCG_TARGET_HAS_ext16u_i32 0
+#define TCG_TARGET_HAS_bswap16_i32 0
+#define TCG_TARGET_HAS_bswap32_i32 0
+#define TCG_TARGET_HAS_neg_i32 1
+#define TCG_TARGET_HAS_not_i32 1
+#define TCG_TARGET_HAS_andc_i32 1
+#define TCG_TARGET_HAS_orc_i32 1
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
#if TCG_TARGET_REG_BITS == 64
-#define TCG_TARGET_HAS_div_i64
-// #define TCG_TARGET_HAS_rot_i64
-// #define TCG_TARGET_HAS_ext8s_i64
-// #define TCG_TARGET_HAS_ext16s_i64
-#define TCG_TARGET_HAS_ext32s_i64
-// #define TCG_TARGET_HAS_ext8u_i64
-// #define TCG_TARGET_HAS_ext16u_i64
-#define TCG_TARGET_HAS_ext32u_i64
-// #define TCG_TARGET_HAS_bswap16_i64
-// #define TCG_TARGET_HAS_bswap32_i64
-// #define TCG_TARGET_HAS_bswap64_i64
-#define TCG_TARGET_HAS_neg_i64
-#define TCG_TARGET_HAS_not_i64
-#define TCG_TARGET_HAS_andc_i64
-#define TCG_TARGET_HAS_orc_i64
-// #define TCG_TARGET_HAS_eqv_i64
-// #define TCG_TARGET_HAS_nand_i64
-// #define TCG_TARGET_HAS_nor_i64
+#define TCG_TARGET_HAS_div_i64 1
+#define TCG_TARGET_HAS_rot_i64 0
+#define TCG_TARGET_HAS_ext8s_i64 0
+#define TCG_TARGET_HAS_ext16s_i64 0
+#define TCG_TARGET_HAS_ext32s_i64 1
+#define TCG_TARGET_HAS_ext8u_i64 0
+#define TCG_TARGET_HAS_ext16u_i64 0
+#define TCG_TARGET_HAS_ext32u_i64 1
+#define TCG_TARGET_HAS_bswap16_i64 0
+#define TCG_TARGET_HAS_bswap32_i64 0
+#define TCG_TARGET_HAS_bswap64_i64 0
+#define TCG_TARGET_HAS_neg_i64 1
+#define TCG_TARGET_HAS_not_i64 1
+#define TCG_TARGET_HAS_andc_i64 1
+#define TCG_TARGET_HAS_orc_i64 1
+#define TCG_TARGET_HAS_eqv_i64 0
+#define TCG_TARGET_HAS_nand_i64 0
+#define TCG_TARGET_HAS_nor_i64 0
+#define TCG_TARGET_HAS_deposit_i64 0
#endif
/* Note: must be synced with dyngen-exec.h */
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index ebf5e13..404b637 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -664,107 +664,81 @@ static inline void tcg_gen_muli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
tcg_temp_free_i32(t0);
}
-#ifdef TCG_TARGET_HAS_div_i32
static inline void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
- tcg_gen_op3_i32(INDEX_op_div_i32, ret, arg1, arg2);
-}
-
-static inline void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- tcg_gen_op3_i32(INDEX_op_rem_i32, ret, arg1, arg2);
-}
-
-static inline void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- tcg_gen_op3_i32(INDEX_op_divu_i32, ret, arg1, arg2);
-}
-
-static inline void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- tcg_gen_op3_i32(INDEX_op_remu_i32, ret, arg1, arg2);
-}
-#elif defined(TCG_TARGET_HAS_div2_i32)
-static inline void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- TCGv_i32 t0;
- t0 = tcg_temp_new_i32();
- tcg_gen_sari_i32(t0, arg1, 31);
- tcg_gen_op5_i32(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2);
- tcg_temp_free_i32(t0);
-}
-
-static inline void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- TCGv_i32 t0;
- t0 = tcg_temp_new_i32();
- tcg_gen_sari_i32(t0, arg1, 31);
- tcg_gen_op5_i32(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2);
- tcg_temp_free_i32(t0);
-}
-
-static inline void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- TCGv_i32 t0;
- t0 = tcg_temp_new_i32();
- tcg_gen_movi_i32(t0, 0);
- tcg_gen_op5_i32(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2);
- tcg_temp_free_i32(t0);
-}
-
-static inline void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- TCGv_i32 t0;
- t0 = tcg_temp_new_i32();
- tcg_gen_movi_i32(t0, 0);
- tcg_gen_op5_i32(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2);
- tcg_temp_free_i32(t0);
-}
-#else
-static inline void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
-{
- int sizemask = 0;
- /* Return value and both arguments are 32-bit and signed. */
- sizemask |= tcg_gen_sizemask(0, 0, 1);
- sizemask |= tcg_gen_sizemask(1, 0, 1);
- sizemask |= tcg_gen_sizemask(2, 0, 1);
-
- tcg_gen_helper32(tcg_helper_div_i32, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i32) {
+ tcg_gen_op3_i32(INDEX_op_div_i32, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i32) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ tcg_gen_sari_i32(t0, arg1, 31);
+ tcg_gen_op5_i32(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2);
+ tcg_temp_free_i32(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 32-bit and signed. */
+ sizemask |= tcg_gen_sizemask(0, 0, 1);
+ sizemask |= tcg_gen_sizemask(1, 0, 1);
+ sizemask |= tcg_gen_sizemask(2, 0, 1);
+ tcg_gen_helper32(tcg_helper_div_i32, sizemask, ret, arg1, arg2);
+ }
}
static inline void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 32-bit and signed. */
- sizemask |= tcg_gen_sizemask(0, 0, 1);
- sizemask |= tcg_gen_sizemask(1, 0, 1);
- sizemask |= tcg_gen_sizemask(2, 0, 1);
-
- tcg_gen_helper32(tcg_helper_rem_i32, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i32) {
+ tcg_gen_op3_i32(INDEX_op_rem_i32, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i32) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ tcg_gen_sari_i32(t0, arg1, 31);
+ tcg_gen_op5_i32(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2);
+ tcg_temp_free_i32(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 32-bit and signed. */
+ sizemask |= tcg_gen_sizemask(0, 0, 1);
+ sizemask |= tcg_gen_sizemask(1, 0, 1);
+ sizemask |= tcg_gen_sizemask(2, 0, 1);
+ tcg_gen_helper32(tcg_helper_rem_i32, sizemask, ret, arg1, arg2);
+ }
}
static inline void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 32-bit and unsigned. */
- sizemask |= tcg_gen_sizemask(0, 0, 0);
- sizemask |= tcg_gen_sizemask(1, 0, 0);
- sizemask |= tcg_gen_sizemask(2, 0, 0);
-
- tcg_gen_helper32(tcg_helper_divu_i32, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i32) {
+ tcg_gen_op3_i32(INDEX_op_divu_i32, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i32) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ tcg_gen_movi_i32(t0, 0);
+ tcg_gen_op5_i32(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2);
+ tcg_temp_free_i32(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 32-bit and unsigned. */
+ sizemask |= tcg_gen_sizemask(0, 0, 0);
+ sizemask |= tcg_gen_sizemask(1, 0, 0);
+ sizemask |= tcg_gen_sizemask(2, 0, 0);
+ tcg_gen_helper32(tcg_helper_divu_i32, sizemask, ret, arg1, arg2);
+ }
}
static inline void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 32-bit and unsigned. */
- sizemask |= tcg_gen_sizemask(0, 0, 0);
- sizemask |= tcg_gen_sizemask(1, 0, 0);
- sizemask |= tcg_gen_sizemask(2, 0, 0);
-
- tcg_gen_helper32(tcg_helper_remu_i32, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i32) {
+ tcg_gen_op3_i32(INDEX_op_remu_i32, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i32) {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ tcg_gen_movi_i32(t0, 0);
+ tcg_gen_op5_i32(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2);
+ tcg_temp_free_i32(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 32-bit and unsigned. */
+ sizemask |= tcg_gen_sizemask(0, 0, 0);
+ sizemask |= tcg_gen_sizemask(1, 0, 0);
+ sizemask |= tcg_gen_sizemask(2, 0, 0);
+ tcg_gen_helper32(tcg_helper_remu_i32, sizemask, ret, arg1, arg2);
+ }
}
-#endif
#if TCG_TARGET_REG_BITS == 32
@@ -1250,109 +1224,82 @@ static inline void tcg_gen_mul_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
tcg_gen_op3_i64(INDEX_op_mul_i64, ret, arg1, arg2);
}
-#ifdef TCG_TARGET_HAS_div_i64
-static inline void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- tcg_gen_op3_i64(INDEX_op_div_i64, ret, arg1, arg2);
-}
-
-static inline void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- tcg_gen_op3_i64(INDEX_op_rem_i64, ret, arg1, arg2);
-}
-
-static inline void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- tcg_gen_op3_i64(INDEX_op_divu_i64, ret, arg1, arg2);
-}
-
-static inline void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- tcg_gen_op3_i64(INDEX_op_remu_i64, ret, arg1, arg2);
-}
-#elif defined(TCG_TARGET_HAS_div2_i64)
-static inline void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- TCGv_i64 t0;
- t0 = tcg_temp_new_i64();
- tcg_gen_sari_i64(t0, arg1, 63);
- tcg_gen_op5_i64(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2);
- tcg_temp_free_i64(t0);
-}
-
-static inline void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- TCGv_i64 t0;
- t0 = tcg_temp_new_i64();
- tcg_gen_sari_i64(t0, arg1, 63);
- tcg_gen_op5_i64(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2);
- tcg_temp_free_i64(t0);
-}
-
-static inline void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- TCGv_i64 t0;
- t0 = tcg_temp_new_i64();
- tcg_gen_movi_i64(t0, 0);
- tcg_gen_op5_i64(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2);
- tcg_temp_free_i64(t0);
-}
-
-static inline void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
-{
- TCGv_i64 t0;
- t0 = tcg_temp_new_i64();
- tcg_gen_movi_i64(t0, 0);
- tcg_gen_op5_i64(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2);
- tcg_temp_free_i64(t0);
-}
-#else
static inline void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 64-bit and signed. */
- sizemask |= tcg_gen_sizemask(0, 1, 1);
- sizemask |= tcg_gen_sizemask(1, 1, 1);
- sizemask |= tcg_gen_sizemask(2, 1, 1);
-
- tcg_gen_helper64(tcg_helper_div_i64, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i64) {
+ tcg_gen_op3_i64(INDEX_op_div_i64, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i64) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ tcg_gen_sari_i64(t0, arg1, 63);
+ tcg_gen_op5_i64(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2);
+ tcg_temp_free_i64(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 64-bit and signed. */
+ sizemask |= tcg_gen_sizemask(0, 1, 1);
+ sizemask |= tcg_gen_sizemask(1, 1, 1);
+ sizemask |= tcg_gen_sizemask(2, 1, 1);
+ tcg_gen_helper64(tcg_helper_div_i64, sizemask, ret, arg1, arg2);
+ }
}
static inline void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 64-bit and signed. */
- sizemask |= tcg_gen_sizemask(0, 1, 1);
- sizemask |= tcg_gen_sizemask(1, 1, 1);
- sizemask |= tcg_gen_sizemask(2, 1, 1);
-
- tcg_gen_helper64(tcg_helper_rem_i64, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i64) {
+ tcg_gen_op3_i64(INDEX_op_rem_i64, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i64) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ tcg_gen_sari_i64(t0, arg1, 63);
+ tcg_gen_op5_i64(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2);
+ tcg_temp_free_i64(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 64-bit and signed. */
+ sizemask |= tcg_gen_sizemask(0, 1, 1);
+ sizemask |= tcg_gen_sizemask(1, 1, 1);
+ sizemask |= tcg_gen_sizemask(2, 1, 1);
+ tcg_gen_helper64(tcg_helper_rem_i64, sizemask, ret, arg1, arg2);
+ }
}
static inline void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 64-bit and unsigned. */
- sizemask |= tcg_gen_sizemask(0, 1, 0);
- sizemask |= tcg_gen_sizemask(1, 1, 0);
- sizemask |= tcg_gen_sizemask(2, 1, 0);
-
- tcg_gen_helper64(tcg_helper_divu_i64, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i64) {
+ tcg_gen_op3_i64(INDEX_op_divu_i64, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i64) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ tcg_gen_movi_i64(t0, 0);
+ tcg_gen_op5_i64(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2);
+ tcg_temp_free_i64(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 64-bit and unsigned. */
+ sizemask |= tcg_gen_sizemask(0, 1, 0);
+ sizemask |= tcg_gen_sizemask(1, 1, 0);
+ sizemask |= tcg_gen_sizemask(2, 1, 0);
+ tcg_gen_helper64(tcg_helper_divu_i64, sizemask, ret, arg1, arg2);
+ }
}
static inline void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
- int sizemask = 0;
- /* Return value and both arguments are 64-bit and unsigned. */
- sizemask |= tcg_gen_sizemask(0, 1, 0);
- sizemask |= tcg_gen_sizemask(1, 1, 0);
- sizemask |= tcg_gen_sizemask(2, 1, 0);
-
- tcg_gen_helper64(tcg_helper_remu_i64, sizemask, ret, arg1, arg2);
+ if (TCG_TARGET_HAS_div_i64) {
+ tcg_gen_op3_i64(INDEX_op_remu_i64, ret, arg1, arg2);
+ } else if (TCG_TARGET_HAS_div2_i64) {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ tcg_gen_movi_i64(t0, 0);
+ tcg_gen_op5_i64(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2);
+ tcg_temp_free_i64(t0);
+ } else {
+ int sizemask = 0;
+ /* Return value and both arguments are 64-bit and unsigned. */
+ sizemask |= tcg_gen_sizemask(0, 1, 0);
+ sizemask |= tcg_gen_sizemask(1, 1, 0);
+ sizemask |= tcg_gen_sizemask(2, 1, 0);
+ tcg_gen_helper64(tcg_helper_remu_i64, sizemask, ret, arg1, arg2);
+ }
}
-#endif
-
-#endif
+#endif /* TCG_TARGET_REG_BITS == 32 */
static inline void tcg_gen_addi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
{
@@ -1413,82 +1360,82 @@ static inline void tcg_gen_muli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
static inline void tcg_gen_ext8s_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_ext8s_i32
- tcg_gen_op2_i32(INDEX_op_ext8s_i32, ret, arg);
-#else
- tcg_gen_shli_i32(ret, arg, 24);
- tcg_gen_sari_i32(ret, ret, 24);
-#endif
+ if (TCG_TARGET_HAS_ext8s_i32) {
+ tcg_gen_op2_i32(INDEX_op_ext8s_i32, ret, arg);
+ } else {
+ tcg_gen_shli_i32(ret, arg, 24);
+ tcg_gen_sari_i32(ret, ret, 24);
+ }
}
static inline void tcg_gen_ext16s_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_ext16s_i32
- tcg_gen_op2_i32(INDEX_op_ext16s_i32, ret, arg);
-#else
- tcg_gen_shli_i32(ret, arg, 16);
- tcg_gen_sari_i32(ret, ret, 16);
-#endif
+ if (TCG_TARGET_HAS_ext16s_i32) {
+ tcg_gen_op2_i32(INDEX_op_ext16s_i32, ret, arg);
+ } else {
+ tcg_gen_shli_i32(ret, arg, 16);
+ tcg_gen_sari_i32(ret, ret, 16);
+ }
}
static inline void tcg_gen_ext8u_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_ext8u_i32
- tcg_gen_op2_i32(INDEX_op_ext8u_i32, ret, arg);
-#else
- tcg_gen_andi_i32(ret, arg, 0xffu);
-#endif
+ if (TCG_TARGET_HAS_ext8u_i32) {
+ tcg_gen_op2_i32(INDEX_op_ext8u_i32, ret, arg);
+ } else {
+ tcg_gen_andi_i32(ret, arg, 0xffu);
+ }
}
static inline void tcg_gen_ext16u_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_ext16u_i32
- tcg_gen_op2_i32(INDEX_op_ext16u_i32, ret, arg);
-#else
- tcg_gen_andi_i32(ret, arg, 0xffffu);
-#endif
+ if (TCG_TARGET_HAS_ext16u_i32) {
+ tcg_gen_op2_i32(INDEX_op_ext16u_i32, ret, arg);
+ } else {
+ tcg_gen_andi_i32(ret, arg, 0xffffu);
+ }
}
/* Note: we assume the two high bytes are set to zero */
static inline void tcg_gen_bswap16_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_bswap16_i32
- tcg_gen_op2_i32(INDEX_op_bswap16_i32, ret, arg);
-#else
- TCGv_i32 t0 = tcg_temp_new_i32();
+ if (TCG_TARGET_HAS_bswap16_i32) {
+ tcg_gen_op2_i32(INDEX_op_bswap16_i32, ret, arg);
+ } else {
+ TCGv_i32 t0 = tcg_temp_new_i32();
- tcg_gen_ext8u_i32(t0, arg);
- tcg_gen_shli_i32(t0, t0, 8);
- tcg_gen_shri_i32(ret, arg, 8);
- tcg_gen_or_i32(ret, ret, t0);
- tcg_temp_free_i32(t0);
-#endif
+ tcg_gen_ext8u_i32(t0, arg);
+ tcg_gen_shli_i32(t0, t0, 8);
+ tcg_gen_shri_i32(ret, arg, 8);
+ tcg_gen_or_i32(ret, ret, t0);
+ tcg_temp_free_i32(t0);
+ }
}
static inline void tcg_gen_bswap32_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_bswap32_i32
- tcg_gen_op2_i32(INDEX_op_bswap32_i32, ret, arg);
-#else
- TCGv_i32 t0, t1;
- t0 = tcg_temp_new_i32();
- t1 = tcg_temp_new_i32();
+ if (TCG_TARGET_HAS_bswap32_i32) {
+ tcg_gen_op2_i32(INDEX_op_bswap32_i32, ret, arg);
+ } else {
+ TCGv_i32 t0, t1;
+ t0 = tcg_temp_new_i32();
+ t1 = tcg_temp_new_i32();
- tcg_gen_shli_i32(t0, arg, 24);
+ tcg_gen_shli_i32(t0, arg, 24);
- tcg_gen_andi_i32(t1, arg, 0x0000ff00);
- tcg_gen_shli_i32(t1, t1, 8);
- tcg_gen_or_i32(t0, t0, t1);
+ tcg_gen_andi_i32(t1, arg, 0x0000ff00);
+ tcg_gen_shli_i32(t1, t1, 8);
+ tcg_gen_or_i32(t0, t0, t1);
- tcg_gen_shri_i32(t1, arg, 8);
- tcg_gen_andi_i32(t1, t1, 0x0000ff00);
- tcg_gen_or_i32(t0, t0, t1);
+ tcg_gen_shri_i32(t1, arg, 8);
+ tcg_gen_andi_i32(t1, t1, 0x0000ff00);
+ tcg_gen_or_i32(t0, t0, t1);
- tcg_gen_shri_i32(t1, arg, 24);
- tcg_gen_or_i32(ret, t0, t1);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
-#endif
+ tcg_gen_shri_i32(t1, arg, 24);
+ tcg_gen_or_i32(ret, t0, t1);
+ tcg_temp_free_i32(t0);
+ tcg_temp_free_i32(t1);
+ }
}
#if TCG_TARGET_REG_BITS == 32
@@ -1576,59 +1523,59 @@ static inline void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
static inline void tcg_gen_ext8s_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_ext8s_i64
- tcg_gen_op2_i64(INDEX_op_ext8s_i64, ret, arg);
-#else
- tcg_gen_shli_i64(ret, arg, 56);
- tcg_gen_sari_i64(ret, ret, 56);
-#endif
+ if (TCG_TARGET_HAS_ext8s_i64) {
+ tcg_gen_op2_i64(INDEX_op_ext8s_i64, ret, arg);
+ } else {
+ tcg_gen_shli_i64(ret, arg, 56);
+ tcg_gen_sari_i64(ret, ret, 56);
+ }
}
static inline void tcg_gen_ext16s_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_ext16s_i64
- tcg_gen_op2_i64(INDEX_op_ext16s_i64, ret, arg);
-#else
- tcg_gen_shli_i64(ret, arg, 48);
- tcg_gen_sari_i64(ret, ret, 48);
-#endif
+ if (TCG_TARGET_HAS_ext16s_i64) {
+ tcg_gen_op2_i64(INDEX_op_ext16s_i64, ret, arg);
+ } else {
+ tcg_gen_shli_i64(ret, arg, 48);
+ tcg_gen_sari_i64(ret, ret, 48);
+ }
}
static inline void tcg_gen_ext32s_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_ext32s_i64
- tcg_gen_op2_i64(INDEX_op_ext32s_i64, ret, arg);
-#else
- tcg_gen_shli_i64(ret, arg, 32);
- tcg_gen_sari_i64(ret, ret, 32);
-#endif
+ if (TCG_TARGET_HAS_ext32s_i64) {
+ tcg_gen_op2_i64(INDEX_op_ext32s_i64, ret, arg);
+ } else {
+ tcg_gen_shli_i64(ret, arg, 32);
+ tcg_gen_sari_i64(ret, ret, 32);
+ }
}
static inline void tcg_gen_ext8u_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_ext8u_i64
- tcg_gen_op2_i64(INDEX_op_ext8u_i64, ret, arg);
-#else
- tcg_gen_andi_i64(ret, arg, 0xffu);
-#endif
+ if (TCG_TARGET_HAS_ext8u_i64) {
+ tcg_gen_op2_i64(INDEX_op_ext8u_i64, ret, arg);
+ } else {
+ tcg_gen_andi_i64(ret, arg, 0xffu);
+ }
}
static inline void tcg_gen_ext16u_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_ext16u_i64
- tcg_gen_op2_i64(INDEX_op_ext16u_i64, ret, arg);
-#else
- tcg_gen_andi_i64(ret, arg, 0xffffu);
-#endif
+ if (TCG_TARGET_HAS_ext16u_i64) {
+ tcg_gen_op2_i64(INDEX_op_ext16u_i64, ret, arg);
+ } else {
+ tcg_gen_andi_i64(ret, arg, 0xffffu);
+ }
}
static inline void tcg_gen_ext32u_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_ext32u_i64
- tcg_gen_op2_i64(INDEX_op_ext32u_i64, ret, arg);
-#else
- tcg_gen_andi_i64(ret, arg, 0xffffffffu);
-#endif
+ if (TCG_TARGET_HAS_ext32u_i64) {
+ tcg_gen_op2_i64(INDEX_op_ext32u_i64, ret, arg);
+ } else {
+ tcg_gen_andi_i64(ret, arg, 0xffffffffu);
+ }
}
/* Note: we assume the target supports move between 32 and 64 bit
@@ -1655,130 +1602,132 @@ static inline void tcg_gen_ext_i32_i64(TCGv_i64 ret, TCGv_i32 arg)
/* Note: we assume the six high bytes are set to zero */
static inline void tcg_gen_bswap16_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_bswap16_i64
- tcg_gen_op2_i64(INDEX_op_bswap16_i64, ret, arg);
-#else
- TCGv_i64 t0 = tcg_temp_new_i64();
+ if (TCG_TARGET_HAS_bswap16_i64) {
+ tcg_gen_op2_i64(INDEX_op_bswap16_i64, ret, arg);
+ } else {
+ TCGv_i64 t0 = tcg_temp_new_i64();
- tcg_gen_ext8u_i64(t0, arg);
- tcg_gen_shli_i64(t0, t0, 8);
- tcg_gen_shri_i64(ret, arg, 8);
- tcg_gen_or_i64(ret, ret, t0);
- tcg_temp_free_i64(t0);
-#endif
+ tcg_gen_ext8u_i64(t0, arg);
+ tcg_gen_shli_i64(t0, t0, 8);
+ tcg_gen_shri_i64(ret, arg, 8);
+ tcg_gen_or_i64(ret, ret, t0);
+ tcg_temp_free_i64(t0);
+ }
}
/* Note: we assume the four high bytes are set to zero */
static inline void tcg_gen_bswap32_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_bswap32_i64
- tcg_gen_op2_i64(INDEX_op_bswap32_i64, ret, arg);
-#else
- TCGv_i64 t0, t1;
- t0 = tcg_temp_new_i64();
- t1 = tcg_temp_new_i64();
+ if (TCG_TARGET_HAS_bswap32_i64) {
+ tcg_gen_op2_i64(INDEX_op_bswap32_i64, ret, arg);
+ } else {
+ TCGv_i64 t0, t1;
+ t0 = tcg_temp_new_i64();
+ t1 = tcg_temp_new_i64();
- tcg_gen_shli_i64(t0, arg, 24);
- tcg_gen_ext32u_i64(t0, t0);
+ tcg_gen_shli_i64(t0, arg, 24);
+ tcg_gen_ext32u_i64(t0, t0);
- tcg_gen_andi_i64(t1, arg, 0x0000ff00);
- tcg_gen_shli_i64(t1, t1, 8);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_andi_i64(t1, arg, 0x0000ff00);
+ tcg_gen_shli_i64(t1, t1, 8);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_shri_i64(t1, arg, 8);
- tcg_gen_andi_i64(t1, t1, 0x0000ff00);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_shri_i64(t1, arg, 8);
+ tcg_gen_andi_i64(t1, t1, 0x0000ff00);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_shri_i64(t1, arg, 24);
- tcg_gen_or_i64(ret, t0, t1);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
-#endif
+ tcg_gen_shri_i64(t1, arg, 24);
+ tcg_gen_or_i64(ret, t0, t1);
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
+ }
}
static inline void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_bswap64_i64
- tcg_gen_op2_i64(INDEX_op_bswap64_i64, ret, arg);
-#else
- TCGv_i64 t0 = tcg_temp_new_i64();
- TCGv_i64 t1 = tcg_temp_new_i64();
+ if (TCG_TARGET_HAS_bswap64_i64) {
+ tcg_gen_op2_i64(INDEX_op_bswap64_i64, ret, arg);
+ } else {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ TCGv_i64 t1 = tcg_temp_new_i64();
- tcg_gen_shli_i64(t0, arg, 56);
+ tcg_gen_shli_i64(t0, arg, 56);
- tcg_gen_andi_i64(t1, arg, 0x0000ff00);
- tcg_gen_shli_i64(t1, t1, 40);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_andi_i64(t1, arg, 0x0000ff00);
+ tcg_gen_shli_i64(t1, t1, 40);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_andi_i64(t1, arg, 0x00ff0000);
- tcg_gen_shli_i64(t1, t1, 24);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_andi_i64(t1, arg, 0x00ff0000);
+ tcg_gen_shli_i64(t1, t1, 24);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_andi_i64(t1, arg, 0xff000000);
- tcg_gen_shli_i64(t1, t1, 8);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_andi_i64(t1, arg, 0xff000000);
+ tcg_gen_shli_i64(t1, t1, 8);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_shri_i64(t1, arg, 8);
- tcg_gen_andi_i64(t1, t1, 0xff000000);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_shri_i64(t1, arg, 8);
+ tcg_gen_andi_i64(t1, t1, 0xff000000);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_shri_i64(t1, arg, 24);
- tcg_gen_andi_i64(t1, t1, 0x00ff0000);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_shri_i64(t1, arg, 24);
+ tcg_gen_andi_i64(t1, t1, 0x00ff0000);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_shri_i64(t1, arg, 40);
- tcg_gen_andi_i64(t1, t1, 0x0000ff00);
- tcg_gen_or_i64(t0, t0, t1);
+ tcg_gen_shri_i64(t1, arg, 40);
+ tcg_gen_andi_i64(t1, t1, 0x0000ff00);
+ tcg_gen_or_i64(t0, t0, t1);
- tcg_gen_shri_i64(t1, arg, 56);
- tcg_gen_or_i64(ret, t0, t1);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
-#endif
+ tcg_gen_shri_i64(t1, arg, 56);
+ tcg_gen_or_i64(ret, t0, t1);
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
+ }
}
#endif
static inline void tcg_gen_neg_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_neg_i32
- tcg_gen_op2_i32(INDEX_op_neg_i32, ret, arg);
-#else
- TCGv_i32 t0 = tcg_const_i32(0);
- tcg_gen_sub_i32(ret, t0, arg);
- tcg_temp_free_i32(t0);
-#endif
+ if (TCG_TARGET_HAS_neg_i32) {
+ tcg_gen_op2_i32(INDEX_op_neg_i32, ret, arg);
+ } else {
+ TCGv_i32 t0 = tcg_const_i32(0);
+ tcg_gen_sub_i32(ret, t0, arg);
+ tcg_temp_free_i32(t0);
+ }
}
static inline void tcg_gen_neg_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_neg_i64
- tcg_gen_op2_i64(INDEX_op_neg_i64, ret, arg);
-#else
- TCGv_i64 t0 = tcg_const_i64(0);
- tcg_gen_sub_i64(ret, t0, arg);
- tcg_temp_free_i64(t0);
-#endif
+ if (TCG_TARGET_HAS_neg_i64) {
+ tcg_gen_op2_i64(INDEX_op_neg_i64, ret, arg);
+ } else {
+ TCGv_i64 t0 = tcg_const_i64(0);
+ tcg_gen_sub_i64(ret, t0, arg);
+ tcg_temp_free_i64(t0);
+ }
}
static inline void tcg_gen_not_i32(TCGv_i32 ret, TCGv_i32 arg)
{
-#ifdef TCG_TARGET_HAS_not_i32
- tcg_gen_op2_i32(INDEX_op_not_i32, ret, arg);
-#else
- tcg_gen_xori_i32(ret, arg, -1);
-#endif
+ if (TCG_TARGET_HAS_not_i32) {
+ tcg_gen_op2_i32(INDEX_op_not_i32, ret, arg);
+ } else {
+ tcg_gen_xori_i32(ret, arg, -1);
+ }
}
static inline void tcg_gen_not_i64(TCGv_i64 ret, TCGv_i64 arg)
{
-#ifdef TCG_TARGET_HAS_not_i64
- tcg_gen_op2_i64(INDEX_op_not_i64, ret, arg);
-#elif defined(TCG_TARGET_HAS_not_i32) && TCG_TARGET_REG_BITS == 32
+#if TCG_TARGET_REG_BITS == 64
+ if (TCG_TARGET_HAS_not_i64) {
+ tcg_gen_op2_i64(INDEX_op_not_i64, ret, arg);
+ } else {
+ tcg_gen_xori_i64(ret, arg, -1);
+ }
+#else
tcg_gen_not_i32(TCGV_LOW(ret), TCGV_LOW(arg));
tcg_gen_not_i32(TCGV_HIGH(ret), TCGV_HIGH(arg));
-#else
- tcg_gen_xori_i64(ret, arg, -1);
#endif
}
@@ -1787,18 +1736,15 @@ static inline void tcg_gen_discard_i32(TCGv_i32 arg)
tcg_gen_op1_i32(INDEX_op_discard, arg);
}
-#if TCG_TARGET_REG_BITS == 32
static inline void tcg_gen_discard_i64(TCGv_i64 arg)
{
+#if TCG_TARGET_REG_BITS == 32
tcg_gen_discard_i32(TCGV_LOW(arg));
tcg_gen_discard_i32(TCGV_HIGH(arg));
-}
#else
-static inline void tcg_gen_discard_i64(TCGv_i64 arg)
-{
tcg_gen_op1_i64(INDEX_op_discard, arg);
-}
#endif
+}
static inline void tcg_gen_concat_i32_i64(TCGv_i64 dest, TCGv_i32 low, TCGv_i32 high)
{
@@ -1832,165 +1778,170 @@ static inline void tcg_gen_concat32_i64(TCGv_i64 dest, TCGv_i64 low, TCGv_i64 hi
static inline void tcg_gen_andc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_andc_i32
- tcg_gen_op3_i32(INDEX_op_andc_i32, ret, arg1, arg2);
-#else
- TCGv_i32 t0;
- t0 = tcg_temp_new_i32();
- tcg_gen_not_i32(t0, arg2);
- tcg_gen_and_i32(ret, arg1, t0);
- tcg_temp_free_i32(t0);
-#endif
+ if (TCG_TARGET_HAS_andc_i32) {
+ tcg_gen_op3_i32(INDEX_op_andc_i32, ret, arg1, arg2);
+ } else {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ tcg_gen_not_i32(t0, arg2);
+ tcg_gen_and_i32(ret, arg1, t0);
+ tcg_temp_free_i32(t0);
+ }
}
static inline void tcg_gen_andc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_andc_i64
- tcg_gen_op3_i64(INDEX_op_andc_i64, ret, arg1, arg2);
-#elif defined(TCG_TARGET_HAS_andc_i32) && TCG_TARGET_REG_BITS == 32
+#if TCG_TARGET_REG_BITS == 64
+ if (TCG_TARGET_HAS_andc_i64) {
+ tcg_gen_op3_i64(INDEX_op_andc_i64, ret, arg1, arg2);
+ } else {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ tcg_gen_not_i64(t0, arg2);
+ tcg_gen_and_i64(ret, arg1, t0);
+ tcg_temp_free_i64(t0);
+ }
+#else
tcg_gen_andc_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
tcg_gen_andc_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
-#else
- TCGv_i64 t0;
- t0 = tcg_temp_new_i64();
- tcg_gen_not_i64(t0, arg2);
- tcg_gen_and_i64(ret, arg1, t0);
- tcg_temp_free_i64(t0);
#endif
}
static inline void tcg_gen_eqv_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_eqv_i32
- tcg_gen_op3_i32(INDEX_op_eqv_i32, ret, arg1, arg2);
-#else
- tcg_gen_xor_i32(ret, arg1, arg2);
- tcg_gen_not_i32(ret, ret);
-#endif
+ if (TCG_TARGET_HAS_eqv_i32) {
+ tcg_gen_op3_i32(INDEX_op_eqv_i32, ret, arg1, arg2);
+ } else {
+ tcg_gen_xor_i32(ret, arg1, arg2);
+ tcg_gen_not_i32(ret, ret);
+ }
}
static inline void tcg_gen_eqv_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_eqv_i64
- tcg_gen_op3_i64(INDEX_op_eqv_i64, ret, arg1, arg2);
-#elif defined(TCG_TARGET_HAS_eqv_i32) && TCG_TARGET_REG_BITS == 32
+#if TCG_TARGET_REG_BITS == 64
+ if (TCG_TARGET_HAS_eqv_i64) {
+ tcg_gen_op3_i64(INDEX_op_eqv_i64, ret, arg1, arg2);
+ } else {
+ tcg_gen_xor_i64(ret, arg1, arg2);
+ tcg_gen_not_i64(ret, ret);
+ }
+#else
tcg_gen_eqv_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
tcg_gen_eqv_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
-#else
- tcg_gen_xor_i64(ret, arg1, arg2);
- tcg_gen_not_i64(ret, ret);
#endif
}
static inline void tcg_gen_nand_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_nand_i32
- tcg_gen_op3_i32(INDEX_op_nand_i32, ret, arg1, arg2);
-#else
- tcg_gen_and_i32(ret, arg1, arg2);
- tcg_gen_not_i32(ret, ret);
-#endif
+ if (TCG_TARGET_HAS_nand_i32) {
+ tcg_gen_op3_i32(INDEX_op_nand_i32, ret, arg1, arg2);
+ } else {
+ tcg_gen_and_i32(ret, arg1, arg2);
+ tcg_gen_not_i32(ret, ret);
+ }
}
static inline void tcg_gen_nand_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_nand_i64
- tcg_gen_op3_i64(INDEX_op_nand_i64, ret, arg1, arg2);
-#elif defined(TCG_TARGET_HAS_nand_i32) && TCG_TARGET_REG_BITS == 32
+#if TCG_TARGET_REG_BITS == 64
+ if (TCG_TARGET_HAS_nand_i64) {
+ tcg_gen_op3_i64(INDEX_op_nand_i64, ret, arg1, arg2);
+ } else {
+ tcg_gen_and_i64(ret, arg1, arg2);
+ tcg_gen_not_i64(ret, ret);
+ }
+#else
tcg_gen_nand_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
tcg_gen_nand_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
-#else
- tcg_gen_and_i64(ret, arg1, arg2);
- tcg_gen_not_i64(ret, ret);
#endif
}
static inline void tcg_gen_nor_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_nor_i32
- tcg_gen_op3_i32(INDEX_op_nor_i32, ret, arg1, arg2);
-#else
- tcg_gen_or_i32(ret, arg1, arg2);
- tcg_gen_not_i32(ret, ret);
-#endif
+ if (TCG_TARGET_HAS_nor_i32) {
+ tcg_gen_op3_i32(INDEX_op_nor_i32, ret, arg1, arg2);
+ } else {
+ tcg_gen_or_i32(ret, arg1, arg2);
+ tcg_gen_not_i32(ret, ret);
+ }
}
static inline void tcg_gen_nor_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_nor_i64
- tcg_gen_op3_i64(INDEX_op_nor_i64, ret, arg1, arg2);
-#elif defined(TCG_TARGET_HAS_nor_i32) && TCG_TARGET_REG_BITS == 32
+#if TCG_TARGET_REG_BITS == 64
+ if (TCG_TARGET_HAS_nor_i64) {
+ tcg_gen_op3_i64(INDEX_op_nor_i64, ret, arg1, arg2);
+ } else {
+ tcg_gen_or_i64(ret, arg1, arg2);
+ tcg_gen_not_i64(ret, ret);
+ }
+#else
tcg_gen_nor_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
tcg_gen_nor_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
-#else
- tcg_gen_or_i64(ret, arg1, arg2);
- tcg_gen_not_i64(ret, ret);
#endif
}
static inline void tcg_gen_orc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_orc_i32
- tcg_gen_op3_i32(INDEX_op_orc_i32, ret, arg1, arg2);
-#else
- TCGv_i32 t0;
- t0 = tcg_temp_new_i32();
- tcg_gen_not_i32(t0, arg2);
- tcg_gen_or_i32(ret, arg1, t0);
- tcg_temp_free_i32(t0);
-#endif
+ if (TCG_TARGET_HAS_orc_i32) {
+ tcg_gen_op3_i32(INDEX_op_orc_i32, ret, arg1, arg2);
+ } else {
+ TCGv_i32 t0 = tcg_temp_new_i32();
+ tcg_gen_not_i32(t0, arg2);
+ tcg_gen_or_i32(ret, arg1, t0);
+ tcg_temp_free_i32(t0);
+ }
}
static inline void tcg_gen_orc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_orc_i64
- tcg_gen_op3_i64(INDEX_op_orc_i64, ret, arg1, arg2);
-#elif defined(TCG_TARGET_HAS_orc_i32) && TCG_TARGET_REG_BITS == 32
+#if TCG_TARGET_REG_BITS == 64
+ if (TCG_TARGET_HAS_orc_i64) {
+ tcg_gen_op3_i64(INDEX_op_orc_i64, ret, arg1, arg2);
+ } else {
+ TCGv_i64 t0 = tcg_temp_new_i64();
+ tcg_gen_not_i64(t0, arg2);
+ tcg_gen_or_i64(ret, arg1, t0);
+ tcg_temp_free_i64(t0);
+ }
+#else
tcg_gen_orc_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2));
tcg_gen_orc_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2));
-#else
- TCGv_i64 t0;
- t0 = tcg_temp_new_i64();
- tcg_gen_not_i64(t0, arg2);
- tcg_gen_or_i64(ret, arg1, t0);
- tcg_temp_free_i64(t0);
#endif
}
static inline void tcg_gen_rotl_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_rot_i32
- tcg_gen_op3_i32(INDEX_op_rotl_i32, ret, arg1, arg2);
-#else
- TCGv_i32 t0, t1;
+ if (TCG_TARGET_HAS_rot_i32) {
+ tcg_gen_op3_i32(INDEX_op_rotl_i32, ret, arg1, arg2);
+ } else {
+ TCGv_i32 t0, t1;
- t0 = tcg_temp_new_i32();
- t1 = tcg_temp_new_i32();
- tcg_gen_shl_i32(t0, arg1, arg2);
- tcg_gen_subfi_i32(t1, 32, arg2);
- tcg_gen_shr_i32(t1, arg1, t1);
- tcg_gen_or_i32(ret, t0, t1);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
-#endif
+ t0 = tcg_temp_new_i32();
+ t1 = tcg_temp_new_i32();
+ tcg_gen_shl_i32(t0, arg1, arg2);
+ tcg_gen_subfi_i32(t1, 32, arg2);
+ tcg_gen_shr_i32(t1, arg1, t1);
+ tcg_gen_or_i32(ret, t0, t1);
+ tcg_temp_free_i32(t0);
+ tcg_temp_free_i32(t1);
+ }
}
static inline void tcg_gen_rotl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_rot_i64
- tcg_gen_op3_i64(INDEX_op_rotl_i64, ret, arg1, arg2);
-#else
- TCGv_i64 t0, t1;
-
- t0 = tcg_temp_new_i64();
- t1 = tcg_temp_new_i64();
- tcg_gen_shl_i64(t0, arg1, arg2);
- tcg_gen_subfi_i64(t1, 64, arg2);
- tcg_gen_shr_i64(t1, arg1, t1);
- tcg_gen_or_i64(ret, t0, t1);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
-#endif
+ if (TCG_TARGET_HAS_rot_i64) {
+ tcg_gen_op3_i64(INDEX_op_rotl_i64, ret, arg1, arg2);
+ } else {
+ TCGv_i64 t0, t1;
+ t0 = tcg_temp_new_i64();
+ t1 = tcg_temp_new_i64();
+ tcg_gen_shl_i64(t0, arg1, arg2);
+ tcg_gen_subfi_i64(t1, 64, arg2);
+ tcg_gen_shr_i64(t1, arg1, t1);
+ tcg_gen_or_i64(ret, t0, t1);
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
+ }
}
static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
@@ -1998,12 +1949,11 @@ static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
/* some cases can be optimized here */
if (arg2 == 0) {
tcg_gen_mov_i32(ret, arg1);
- } else {
-#ifdef TCG_TARGET_HAS_rot_i32
+ } else if (TCG_TARGET_HAS_rot_i32) {
TCGv_i32 t0 = tcg_const_i32(arg2);
tcg_gen_rotl_i32(ret, arg1, t0);
tcg_temp_free_i32(t0);
-#else
+ } else {
TCGv_i32 t0, t1;
t0 = tcg_temp_new_i32();
t1 = tcg_temp_new_i32();
@@ -2012,7 +1962,6 @@ static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
tcg_gen_or_i32(ret, t0, t1);
tcg_temp_free_i32(t0);
tcg_temp_free_i32(t1);
-#endif
}
}
@@ -2021,12 +1970,11 @@ static inline void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
/* some cases can be optimized here */
if (arg2 == 0) {
tcg_gen_mov_i64(ret, arg1);
- } else {
-#ifdef TCG_TARGET_HAS_rot_i64
+ } else if (TCG_TARGET_HAS_rot_i64) {
TCGv_i64 t0 = tcg_const_i64(arg2);
tcg_gen_rotl_i64(ret, arg1, t0);
tcg_temp_free_i64(t0);
-#else
+ } else {
TCGv_i64 t0, t1;
t0 = tcg_temp_new_i64();
t1 = tcg_temp_new_i64();
@@ -2035,44 +1983,42 @@ static inline void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
tcg_gen_or_i64(ret, t0, t1);
tcg_temp_free_i64(t0);
tcg_temp_free_i64(t1);
-#endif
}
}
static inline void tcg_gen_rotr_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2)
{
-#ifdef TCG_TARGET_HAS_rot_i32
- tcg_gen_op3_i32(INDEX_op_rotr_i32, ret, arg1, arg2);
-#else
- TCGv_i32 t0, t1;
+ if (TCG_TARGET_HAS_rot_i32) {
+ tcg_gen_op3_i32(INDEX_op_rotr_i32, ret, arg1, arg2);
+ } else {
+ TCGv_i32 t0, t1;
- t0 = tcg_temp_new_i32();
- t1 = tcg_temp_new_i32();
- tcg_gen_shr_i32(t0, arg1, arg2);
- tcg_gen_subfi_i32(t1, 32, arg2);
- tcg_gen_shl_i32(t1, arg1, t1);
- tcg_gen_or_i32(ret, t0, t1);
- tcg_temp_free_i32(t0);
- tcg_temp_free_i32(t1);
-#endif
+ t0 = tcg_temp_new_i32();
+ t1 = tcg_temp_new_i32();
+ tcg_gen_shr_i32(t0, arg1, arg2);
+ tcg_gen_subfi_i32(t1, 32, arg2);
+ tcg_gen_shl_i32(t1, arg1, t1);
+ tcg_gen_or_i32(ret, t0, t1);
+ tcg_temp_free_i32(t0);
+ tcg_temp_free_i32(t1);
+ }
}
static inline void tcg_gen_rotr_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2)
{
-#ifdef TCG_TARGET_HAS_rot_i64
- tcg_gen_op3_i64(INDEX_op_rotr_i64, ret, arg1, arg2);
-#else
- TCGv_i64 t0, t1;
-
- t0 = tcg_temp_new_i64();
- t1 = tcg_temp_new_i64();
- tcg_gen_shr_i64(t0, arg1, arg2);
- tcg_gen_subfi_i64(t1, 64, arg2);
- tcg_gen_shl_i64(t1, arg1, t1);
- tcg_gen_or_i64(ret, t0, t1);
- tcg_temp_free_i64(t0);
- tcg_temp_free_i64(t1);
-#endif
+ if (TCG_TARGET_HAS_rot_i64) {
+ tcg_gen_op3_i64(INDEX_op_rotr_i64, ret, arg1, arg2);
+ } else {
+ TCGv_i64 t0, t1;
+ t0 = tcg_temp_new_i64();
+ t1 = tcg_temp_new_i64();
+ tcg_gen_shr_i64(t0, arg1, arg2);
+ tcg_gen_subfi_i64(t1, 64, arg2);
+ tcg_gen_shl_i64(t1, arg1, t1);
+ tcg_gen_or_i64(ret, t0, t1);
+ tcg_temp_free_i64(t0);
+ tcg_temp_free_i64(t1);
+ }
}
static inline void tcg_gen_rotri_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
@@ -2099,38 +2045,38 @@ static inline void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1,
TCGv_i32 arg2, unsigned int ofs,
unsigned int len)
{
-#ifdef TCG_TARGET_HAS_deposit_i32
- tcg_gen_op5ii_i32(INDEX_op_deposit_i32, ret, arg1, arg2, ofs, len);
-#else
- uint32_t mask = (1u << len) - 1;
- TCGv_i32 t1 = tcg_temp_new_i32 ();
+ if (TCG_TARGET_HAS_deposit_i32) {
+ tcg_gen_op5ii_i32(INDEX_op_deposit_i32, ret, arg1, arg2, ofs, len);
+ } else {
+ uint32_t mask = (1u << len) - 1;
+ TCGv_i32 t1 = tcg_temp_new_i32 ();
- tcg_gen_andi_i32(t1, arg2, mask);
- tcg_gen_shli_i32(t1, t1, ofs);
- tcg_gen_andi_i32(ret, arg1, ~(mask << ofs));
- tcg_gen_or_i32(ret, ret, t1);
+ tcg_gen_andi_i32(t1, arg2, mask);
+ tcg_gen_shli_i32(t1, t1, ofs);
+ tcg_gen_andi_i32(ret, arg1, ~(mask << ofs));
+ tcg_gen_or_i32(ret, ret, t1);
- tcg_temp_free_i32(t1);
-#endif
+ tcg_temp_free_i32(t1);
+ }
}
static inline void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1,
TCGv_i64 arg2, unsigned int ofs,
unsigned int len)
{
-#ifdef TCG_TARGET_HAS_deposit_i64
- tcg_gen_op5ii_i64(INDEX_op_deposit_i64, ret, arg1, arg2, ofs, len);
-#else
- uint64_t mask = (1ull << len) - 1;
- TCGv_i64 t1 = tcg_temp_new_i64 ();
+ if (TCG_TARGET_HAS_deposit_i64) {
+ tcg_gen_op5ii_i64(INDEX_op_deposit_i64, ret, arg1, arg2, ofs, len);
+ } else {
+ uint64_t mask = (1ull << len) - 1;
+ TCGv_i64 t1 = tcg_temp_new_i64 ();
- tcg_gen_andi_i64(t1, arg2, mask);
- tcg_gen_shli_i64(t1, t1, ofs);
- tcg_gen_andi_i64(ret, arg1, ~(mask << ofs));
- tcg_gen_or_i64(ret, ret, t1);
+ tcg_gen_andi_i64(t1, arg2, mask);
+ tcg_gen_shli_i64(t1, t1, ofs);
+ tcg_gen_andi_i64(ret, arg1, ~(mask << ofs));
+ tcg_gen_or_i64(ret, ret, t1);
- tcg_temp_free_i64(t1);
-#endif
+ tcg_temp_free_i64(t1);
+ }
}
/***************************************/
diff --git a/tcg/tcg-opc.h b/tcg/tcg-opc.h
index b48669b..8e06d03 100644
--- a/tcg/tcg-opc.h
+++ b/tcg/tcg-opc.h
@@ -41,6 +41,13 @@ DEF(call, 0, 1, 2, TCG_OPF_SIDE_EFFECTS) /* variable number of parameters */
DEF(jmp, 0, 1, 0, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS)
DEF(br, 0, 0, 1, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS)
+#define IMPL(X) (X ? 0 : TCG_OPF_NOT_PRESENT)
+#if TCG_TARGET_REG_BITS == 32
+# define IMPL64 TCG_OPF_64BIT | TCG_OPF_NOT_PRESENT
+#else
+# define IMPL64 TCG_OPF_64BIT
+#endif
+
DEF(mov_i32, 1, 1, 0, 0)
DEF(movi_i32, 1, 0, 1, 0)
DEF(setcond_i32, 1, 2, 1, 0)
@@ -57,16 +64,12 @@ DEF(st_i32, 0, 2, 1, TCG_OPF_SIDE_EFFECTS)
DEF(add_i32, 1, 2, 0, 0)
DEF(sub_i32, 1, 2, 0, 0)
DEF(mul_i32, 1, 2, 0, 0)
-#ifdef TCG_TARGET_HAS_div_i32
-DEF(div_i32, 1, 2, 0, 0)
-DEF(divu_i32, 1, 2, 0, 0)
-DEF(rem_i32, 1, 2, 0, 0)
-DEF(remu_i32, 1, 2, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_div2_i32
-DEF(div2_i32, 2, 3, 0, 0)
-DEF(divu2_i32, 2, 3, 0, 0)
-#endif
+DEF(div_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32))
+DEF(divu_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32))
+DEF(rem_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32))
+DEF(remu_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32))
+DEF(div2_i32, 2, 3, 0, IMPL(TCG_TARGET_HAS_div2_i32))
+DEF(divu2_i32, 2, 3, 0, IMPL(TCG_TARGET_HAS_div2_i32))
DEF(and_i32, 1, 2, 0, 0)
DEF(or_i32, 1, 2, 0, 0)
DEF(xor_i32, 1, 2, 0, 0)
@@ -74,157 +77,86 @@ DEF(xor_i32, 1, 2, 0, 0)
DEF(shl_i32, 1, 2, 0, 0)
DEF(shr_i32, 1, 2, 0, 0)
DEF(sar_i32, 1, 2, 0, 0)
-#ifdef TCG_TARGET_HAS_rot_i32
-DEF(rotl_i32, 1, 2, 0, 0)
-DEF(rotr_i32, 1, 2, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_deposit_i32
-DEF(deposit_i32, 1, 2, 2, 0)
-#endif
+DEF(rotl_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_rot_i32))
+DEF(rotr_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_rot_i32))
+DEF(deposit_i32, 1, 2, 2, IMPL(TCG_TARGET_HAS_deposit_i32))
DEF(brcond_i32, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS)
-#if TCG_TARGET_REG_BITS == 32
-DEF(add2_i32, 2, 4, 0, 0)
-DEF(sub2_i32, 2, 4, 0, 0)
-DEF(brcond2_i32, 0, 4, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS)
-DEF(mulu2_i32, 2, 2, 0, 0)
-DEF(setcond2_i32, 1, 4, 1, 0)
-#endif
-#ifdef TCG_TARGET_HAS_ext8s_i32
-DEF(ext8s_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i32
-DEF(ext16s_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i32
-DEF(ext8u_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i32
-DEF(ext16u_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_bswap16_i32
-DEF(bswap16_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_bswap32_i32
-DEF(bswap32_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_not_i32
-DEF(not_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_neg_i32
-DEF(neg_i32, 1, 1, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_andc_i32
-DEF(andc_i32, 1, 2, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_orc_i32
-DEF(orc_i32, 1, 2, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_eqv_i32
-DEF(eqv_i32, 1, 2, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_nand_i32
-DEF(nand_i32, 1, 2, 0, 0)
-#endif
-#ifdef TCG_TARGET_HAS_nor_i32
-DEF(nor_i32, 1, 2, 0, 0)
-#endif
-#if TCG_TARGET_REG_BITS == 64
-DEF(mov_i64, 1, 1, 0, TCG_OPF_64BIT)
-DEF(movi_i64, 1, 0, 1, TCG_OPF_64BIT)
-DEF(setcond_i64, 1, 2, 1, TCG_OPF_64BIT)
+DEF(add2_i32, 2, 4, 0, IMPL(TCG_TARGET_REG_BITS == 32))
+DEF(sub2_i32, 2, 4, 0, IMPL(TCG_TARGET_REG_BITS == 32))
+DEF(brcond2_i32, 0, 4, 2,
+ TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | IMPL(TCG_TARGET_REG_BITS == 32))
+DEF(mulu2_i32, 2, 2, 0, IMPL(TCG_TARGET_REG_BITS == 32))
+DEF(setcond2_i32, 1, 4, 1, IMPL(TCG_TARGET_REG_BITS == 32))
+
+DEF(ext8s_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext8s_i32))
+DEF(ext16s_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext16s_i32))
+DEF(ext8u_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext8u_i32))
+DEF(ext16u_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext16u_i32))
+DEF(bswap16_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_bswap16_i32))
+DEF(bswap32_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_bswap32_i32))
+DEF(not_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_not_i32))
+DEF(neg_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_neg_i32))
+DEF(andc_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_andc_i32))
+DEF(orc_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_orc_i32))
+DEF(eqv_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_eqv_i32))
+DEF(nand_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_nand_i32))
+DEF(nor_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_nor_i32))
+
+DEF(mov_i64, 1, 1, 0, IMPL64)
+DEF(movi_i64, 1, 0, 1, IMPL64)
+DEF(setcond_i64, 1, 2, 1, IMPL64)
/* load/store */
-DEF(ld8u_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(ld8s_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(ld16u_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(ld16s_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(ld32u_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(ld32s_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(ld_i64, 1, 1, 1, TCG_OPF_64BIT)
-DEF(st8_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
-DEF(st16_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
-DEF(st32_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
-DEF(st_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
+DEF(ld8u_i64, 1, 1, 1, IMPL64)
+DEF(ld8s_i64, 1, 1, 1, IMPL64)
+DEF(ld16u_i64, 1, 1, 1, IMPL64)
+DEF(ld16s_i64, 1, 1, 1, IMPL64)
+DEF(ld32u_i64, 1, 1, 1, IMPL64)
+DEF(ld32s_i64, 1, 1, 1, IMPL64)
+DEF(ld_i64, 1, 1, 1, IMPL64)
+DEF(st8_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64)
+DEF(st16_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64)
+DEF(st32_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64)
+DEF(st_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64)
/* arith */
-DEF(add_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(sub_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(mul_i64, 1, 2, 0, TCG_OPF_64BIT)
-#ifdef TCG_TARGET_HAS_div_i64
-DEF(div_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(divu_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(rem_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(remu_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_div2_i64
-DEF(div2_i64, 2, 3, 0, TCG_OPF_64BIT)
-DEF(divu2_i64, 2, 3, 0, TCG_OPF_64BIT)
-#endif
-DEF(and_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(or_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(xor_i64, 1, 2, 0, TCG_OPF_64BIT)
+DEF(add_i64, 1, 2, 0, IMPL64)
+DEF(sub_i64, 1, 2, 0, IMPL64)
+DEF(mul_i64, 1, 2, 0, IMPL64)
+DEF(div_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64))
+DEF(divu_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64))
+DEF(rem_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64))
+DEF(remu_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64))
+DEF(div2_i64, 2, 3, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div2_i64))
+DEF(divu2_i64, 2, 3, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div2_i64))
+DEF(and_i64, 1, 2, 0, IMPL64)
+DEF(or_i64, 1, 2, 0, IMPL64)
+DEF(xor_i64, 1, 2, 0, IMPL64)
/* shifts/rotates */
-DEF(shl_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(shr_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(sar_i64, 1, 2, 0, TCG_OPF_64BIT)
-#ifdef TCG_TARGET_HAS_rot_i64
-DEF(rotl_i64, 1, 2, 0, TCG_OPF_64BIT)
-DEF(rotr_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_deposit_i64
-DEF(deposit_i64, 1, 2, 2, TCG_OPF_64BIT)
-#endif
+DEF(shl_i64, 1, 2, 0, IMPL64)
+DEF(shr_i64, 1, 2, 0, IMPL64)
+DEF(sar_i64, 1, 2, 0, IMPL64)
+DEF(rotl_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_rot_i64))
+DEF(rotr_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_rot_i64))
+DEF(deposit_i64, 1, 2, 2, IMPL64 | IMPL(TCG_TARGET_HAS_deposit_i64))
-DEF(brcond_i64, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT)
-#ifdef TCG_TARGET_HAS_ext8s_i64
-DEF(ext8s_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_ext16s_i64
-DEF(ext16s_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_ext32s_i64
-DEF(ext32s_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_ext8u_i64
-DEF(ext8u_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_ext16u_i64
-DEF(ext16u_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_ext32u_i64
-DEF(ext32u_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_bswap16_i64
-DEF(bswap16_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_bswap32_i64
-DEF(bswap32_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_bswap64_i64
-DEF(bswap64_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_not_i64
-DEF(not_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_neg_i64
-DEF(neg_i64, 1, 1, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_andc_i64
-DEF(andc_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_orc_i64
-DEF(orc_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_eqv_i64
-DEF(eqv_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_nand_i64
-DEF(nand_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#ifdef TCG_TARGET_HAS_nor_i64
-DEF(nor_i64, 1, 2, 0, TCG_OPF_64BIT)
-#endif
-#endif
+DEF(brcond_i64, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | IMPL64)
+DEF(ext8s_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext8s_i64))
+DEF(ext16s_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext16s_i64))
+DEF(ext32s_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext32s_i64))
+DEF(ext8u_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext8u_i64))
+DEF(ext16u_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext16u_i64))
+DEF(ext32u_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext32u_i64))
+DEF(bswap16_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_bswap16_i64))
+DEF(bswap32_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_bswap32_i64))
+DEF(bswap64_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_bswap64_i64))
+DEF(not_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_not_i64))
+DEF(neg_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_neg_i64))
+DEF(andc_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_andc_i64))
+DEF(orc_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_orc_i64))
+DEF(eqv_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_eqv_i64))
+DEF(nand_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_nand_i64))
+DEF(nor_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_nor_i64))
/* QEMU specific */
#if TARGET_LONG_BITS > TCG_TARGET_REG_BITS
@@ -307,4 +239,6 @@ DEF(qemu_st64, 0, 2, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
#endif /* TCG_TARGET_REG_BITS != 32 */
+#undef IMPL
+#undef IMPL64
#undef DEF
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 7179bd4..3e1e972 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2124,6 +2124,10 @@ static inline int tcg_gen_code_common(TCGContext *s, uint8_t *gen_code_buf,
case INDEX_op_end:
goto the_end;
default:
+ /* Sanity check that we've not introduced any unhandled opcodes. */
+ if (def->flags & TCG_OPF_NOT_PRESENT) {
+ tcg_abort();
+ }
/* Note: in order to speed up the code, it would be much
faster to have specialized register allocator functions for
some common argument patterns */
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 6a4f6e4..dc5e9c9 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -47,6 +47,42 @@ typedef uint64_t TCGRegSet;
#error unsupported
#endif
+/* Turn some undef macros into false macros. */
+#if TCG_TARGET_REG_BITS == 32
+#define TCG_TARGET_HAS_div_i64 0
+#define TCG_TARGET_HAS_div2_i64 0
+#define TCG_TARGET_HAS_rot_i64 0
+#define TCG_TARGET_HAS_ext8s_i64 0
+#define TCG_TARGET_HAS_ext16s_i64 0
+#define TCG_TARGET_HAS_ext32s_i64 0
+#define TCG_TARGET_HAS_ext8u_i64 0
+#define TCG_TARGET_HAS_ext16u_i64 0
+#define TCG_TARGET_HAS_ext32u_i64 0
+#define TCG_TARGET_HAS_bswap16_i64 0
+#define TCG_TARGET_HAS_bswap32_i64 0
+#define TCG_TARGET_HAS_bswap64_i64 0
+#define TCG_TARGET_HAS_neg_i64 0
+#define TCG_TARGET_HAS_not_i64 0
+#define TCG_TARGET_HAS_andc_i64 0
+#define TCG_TARGET_HAS_orc_i64 0
+#define TCG_TARGET_HAS_eqv_i64 0
+#define TCG_TARGET_HAS_nand_i64 0
+#define TCG_TARGET_HAS_nor_i64 0
+#define TCG_TARGET_HAS_deposit_i64 0
+#endif
+
+/* Only one of DIV or DIV2 should be defined. */
+#if defined(TCG_TARGET_HAS_div_i32)
+#define TCG_TARGET_HAS_div2_i32 0
+#elif defined(TCG_TARGET_HAS_div2_i32)
+#define TCG_TARGET_HAS_div_i32 0
+#endif
+#if defined(TCG_TARGET_HAS_div_i64)
+#define TCG_TARGET_HAS_div2_i64 0
+#elif defined(TCG_TARGET_HAS_div2_i64)
+#define TCG_TARGET_HAS_div_i64 0
+#endif
+
typedef enum TCGOpcode {
#define DEF(name, oargs, iargs, cargs, flags) INDEX_op_ ## name,
#include "tcg-opc.h"
@@ -456,6 +492,8 @@ enum {
TCG_OPF_SIDE_EFFECTS = 0x04,
/* Instruction operands are 64-bits (otherwise 32-bits). */
TCG_OPF_64BIT = 0x08,
+ /* Instruction is optional and not implemented by the host. */
+ TCG_OPF_NOT_PRESENT = 0x10,
};
typedef struct TCGOpDef {
--
1.7.4.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH 3/6] tcg: Constant fold neg, andc, orc, eqv, nand, nor.
2011-08-17 21:11 [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Richard Henderson
2011-08-17 21:11 ` [Qemu-devel] [PATCH 1/6] tcg: Add and use TCG_OPF_64BIT Richard Henderson
2011-08-17 21:11 ` [Qemu-devel] [PATCH 2/6] tcg: Always define all of the TCGOpcode enum members Richard Henderson
@ 2011-08-17 21:11 ` Richard Henderson
2011-08-17 21:11 ` [Qemu-devel] [PATCH 4/6] tcg-hppa: Fix CPU_TEMP_BUF_NLONGS oversight Richard Henderson
` (3 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Richard Henderson @ 2011-08-17 21:11 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/optimize.c | 27 +++++++++++++++++++++++++++
1 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 32f928f..7e7f2b2 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -215,6 +215,24 @@ static TCGArg do_constant_folding_2(int op, TCGArg x, TCGArg y)
CASE_OP_32_64(not):
return ~x;
+ CASE_OP_32_64(neg):
+ return -x;
+
+ CASE_OP_32_64(andc):
+ return x & ~y;
+
+ CASE_OP_32_64(orc):
+ return x | ~y;
+
+ CASE_OP_32_64(eqv):
+ return ~(x ^ y);
+
+ CASE_OP_32_64(nand):
+ return ~(x & y);
+
+ CASE_OP_32_64(nor):
+ return ~(x | y);
+
CASE_OP_32_64(ext8s):
return (int8_t)x;
@@ -290,6 +308,9 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr,
CASE_OP_32_64(and):
CASE_OP_32_64(or):
CASE_OP_32_64(xor):
+ CASE_OP_32_64(eqv):
+ CASE_OP_32_64(nand):
+ CASE_OP_32_64(nor):
if (temps[args[1]].state == TCG_TEMP_CONST) {
tmp = args[1];
args[1] = args[2];
@@ -389,6 +410,7 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr,
args += 2;
break;
CASE_OP_32_64(not):
+ CASE_OP_32_64(neg):
CASE_OP_32_64(ext8s):
CASE_OP_32_64(ext8u):
CASE_OP_32_64(ext16s):
@@ -421,6 +443,11 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr,
CASE_OP_32_64(sar):
CASE_OP_32_64(rotl):
CASE_OP_32_64(rotr):
+ CASE_OP_32_64(andc):
+ CASE_OP_32_64(orc):
+ CASE_OP_32_64(eqv):
+ CASE_OP_32_64(nand):
+ CASE_OP_32_64(nor):
if (temps[args[1]].state == TCG_TEMP_CONST
&& temps[args[2]].state == TCG_TEMP_CONST) {
gen_opc_buf[op_index] = op_to_movi(op);
--
1.7.4.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH 4/6] tcg-hppa: Fix CPU_TEMP_BUF_NLONGS oversight.
2011-08-17 21:11 [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Richard Henderson
` (2 preceding siblings ...)
2011-08-17 21:11 ` [Qemu-devel] [PATCH 3/6] tcg: Constant fold neg, andc, orc, eqv, nand, nor Richard Henderson
@ 2011-08-17 21:11 ` Richard Henderson
2011-08-17 21:11 ` [Qemu-devel] [PATCH 5/6] tcg-ia64: Fix typos in AREG0 setup in prologue Richard Henderson
` (2 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Richard Henderson @ 2011-08-17 21:11 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/hppa/tcg-target.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tcg/hppa/tcg-target.c b/tcg/hppa/tcg-target.c
index 222f33e..71d9677 100644
--- a/tcg/hppa/tcg-target.c
+++ b/tcg/hppa/tcg-target.c
@@ -1650,7 +1650,7 @@ static void tcg_target_qemu_prologue(TCGContext *s)
/* Record the location of the TCG temps. */
tcg_set_frame(s, TCG_REG_CALL_STACK, -frame_size + i * 4,
- TCG_TEMP_BUF_NLONGS * sizeof(long));
+ CPU_TEMP_BUF_NLONGS * sizeof(long));
#ifdef CONFIG_USE_GUEST_BASE
if (GUEST_BASE != 0) {
--
1.7.4.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH 5/6] tcg-ia64: Fix typos in AREG0 setup in prologue.
2011-08-17 21:11 [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Richard Henderson
` (3 preceding siblings ...)
2011-08-17 21:11 ` [Qemu-devel] [PATCH 4/6] tcg-hppa: Fix CPU_TEMP_BUF_NLONGS oversight Richard Henderson
@ 2011-08-17 21:11 ` Richard Henderson
2011-08-17 21:11 ` [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline Richard Henderson
2011-08-21 19:15 ` [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Blue Swirl
6 siblings, 0 replies; 16+ messages in thread
From: Richard Henderson @ 2011-08-17 21:11 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/ia64/tcg-target.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tcg/ia64/tcg-target.c b/tcg/ia64/tcg-target.c
index 6386a5b..9db205d 100644
--- a/tcg/ia64/tcg-target.c
+++ b/tcg/ia64/tcg-target.c
@@ -2308,8 +2308,8 @@ static void tcg_target_qemu_prologue(TCGContext *s)
}
tcg_out_bundle(s, miB,
- tcg_opc_m48(TCG_REG_P0, OPC_MOV_I21,
- TCG_REG_AREG0, TCG_REG_R32, 0),
+ tcg_opc_a4 (TCG_REG_P0, OPC_ADDS_A4,
+ TCG_AREG0, 0, TCG_REG_R32),
tcg_opc_a4 (TCG_REG_P0, OPC_ADDS_A4,
TCG_REG_R12, -frame_size, TCG_REG_R12),
tcg_opc_b4 (TCG_REG_P0, OPC_BR_SPTK_MANY_B4, TCG_REG_B6));
--
1.7.4.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline
2011-08-17 21:11 [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Richard Henderson
` (4 preceding siblings ...)
2011-08-17 21:11 ` [Qemu-devel] [PATCH 5/6] tcg-ia64: Fix typos in AREG0 setup in prologue Richard Henderson
@ 2011-08-17 21:11 ` Richard Henderson
2011-08-19 23:51 ` Peter Maydell
2011-08-21 19:15 ` [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Blue Swirl
6 siblings, 1 reply; 16+ messages in thread
From: Richard Henderson @ 2011-08-17 21:11 UTC (permalink / raw)
To: qemu-devel
As it's not used, this fixes a compilation error.
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/arm/tcg-target.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tcg/arm/tcg-target.c b/tcg/arm/tcg-target.c
index 93eb0f1..c94a354 100644
--- a/tcg/arm/tcg-target.c
+++ b/tcg/arm/tcg-target.c
@@ -1820,7 +1820,7 @@ static inline void tcg_out_st(TCGContext *s, TCGType type, int arg,
tcg_out_st32(s, COND_AL, arg, arg1, arg2);
}
-static void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val)
+static inline void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val)
{
if (val > 0)
if (val < 0x100)
--
1.7.4.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline
2011-08-17 21:11 ` [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline Richard Henderson
@ 2011-08-19 23:51 ` Peter Maydell
2011-08-20 0:10 ` andrzej zaborowski
0 siblings, 1 reply; 16+ messages in thread
From: Peter Maydell @ 2011-08-19 23:51 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On 17 August 2011 22:11, Richard Henderson <rth@twiddle.net> wrote:
> As it's not used, fixes a compilation error.
>
> Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
(We just hit this with the Ubuntu package builds of qemu-linaro
for ARM host...)
-- PMM
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline
2011-08-19 23:51 ` Peter Maydell
@ 2011-08-20 0:10 ` andrzej zaborowski
2011-08-20 4:19 ` Peter Maydell
0 siblings, 1 reply; 16+ messages in thread
From: andrzej zaborowski @ 2011-08-20 0:10 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, Richard Henderson
On 20 August 2011 01:51, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 17 August 2011 22:11, Richard Henderson <rth@twiddle.net> wrote:
>> As it's not used, fixes a compilation error.
Stefan Weil submitted an identical patch a couple of weeks ago, but I
can't see the rationale for inlining. It seems like working around
the warning is the only reason for this patch (?), so let's either
remove the function or #if 0 it out, or fix the compilation options so
they don't error out on something that's not an error. (I'd prefer the
latter, but others may object.)
Cheers
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline
2011-08-20 0:10 ` andrzej zaborowski
@ 2011-08-20 4:19 ` Peter Maydell
0 siblings, 0 replies; 16+ messages in thread
From: Peter Maydell @ 2011-08-20 4:19 UTC (permalink / raw)
To: andrzej zaborowski; +Cc: qemu-devel, Richard Henderson
On 20 August 2011 01:10, andrzej zaborowski <balrogg@gmail.com> wrote:
> On 20 August 2011 01:51, Peter Maydell <peter.maydell@linaro.org> wrote:
>> On 17 August 2011 22:11, Richard Henderson <rth@twiddle.net> wrote:
>>> As it's not used, fixes a compilation error.
>
> Stefan Weil submitted an identical patch a couple of weeks ago, but I
> can't see the rationale for inlining. It seems like working around
> the warning is the only reason for this patch (?), so let's either
> remove/#if0 out the function or fix the compilation options to not
> error out on something that's not an error. (I'd prefer the latter but
> others may object)
Yeah, we could remove the function (I'd prefer that to ifdeffery).
I see malc did this for ppc/ppc64 in commits 1a2eb162414 and
c24a9c6ef94.
If we do this, for consistency we should also remove the unused
implementations in tcg/ia64 and tcg/s390.
-- PMM
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup
2011-08-17 21:11 [Qemu-devel] [PATCH 0/6] TCG compile fixes and optimize cleanup Richard Henderson
` (5 preceding siblings ...)
2011-08-17 21:11 ` [Qemu-devel] [PATCH 6/6] tcg-arm: Make tcg_out_addi inline Richard Henderson
@ 2011-08-21 19:15 ` Blue Swirl
6 siblings, 0 replies; 16+ messages in thread
From: Blue Swirl @ 2011-08-21 19:15 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
Thanks, applied 1 to 5.
On Wed, Aug 17, 2011 at 9:11 PM, Richard Henderson <rth@twiddle.net> wrote:
> As discussed elsewhere, one way to tidy up tcg/optimize.c
> is to always provide the enum names, even if the host does
> not support the operation.
>
> As a sanity check, I wanted to include a test to make sure
> that we never tried to output an opcode that the target
> does not handle. I did this via a bit in the TCGOpDef flags.
> In order to get that set, I changed all of the TCG_TARGET_HAS*
> macros to be true/false rather than def/undef.
>
> That allowed a further cleanup to change ifdefs into C IFs.
>
> Unfortunately, it wasn't really possible to split this into
> smaller pieces. Using the C IFs requires the enums be
> present, even if unused.
>
> I cross-compiled --target-list=i386-softmmu,i386-linux-user
> for each of the tcg hosts. In the process I discovered a
> number of pure compilation errors.
>
>
> r~
>
>
> Richard Henderson (6):
> tcg: Add and use TCG_OPF_64BIT.
> tcg: Always define all of the TCGOpcode enum members.
> tcg: Constant fold neg, andc, orc, eqv, nand, nor.
> tcg-hppa: Fix CPU_TEMP_BUF_NLONGS oversight.
> tcg-ia64: Fix typos in AREG0 setup in prologue.
> tcg-arm: Make tcg_out_addi inline
>
> tcg/arm/tcg-target.c | 2 +-
> tcg/arm/tcg-target.h | 30 +-
> tcg/hppa/tcg-target.c | 2 +-
> tcg/hppa/tcg-target.h | 29 +-
> tcg/i386/tcg-target.h | 68 ++--
> tcg/ia64/tcg-target.c | 4 +-
> tcg/ia64/tcg-target.h | 66 ++--
> tcg/mips/tcg-target.h | 31 +-
> tcg/optimize.c | 260 +++-----------
> tcg/ppc/tcg-target.h | 31 +-
> tcg/ppc64/tcg-target.h | 68 ++--
> tcg/s390/tcg-target.h | 68 ++--
> tcg/sparc/tcg-target.h | 68 ++--
> tcg/tcg-op.h | 946 +++++++++++++++++++++++-------------------------
> tcg/tcg-opc.h | 242 +++++--------
> tcg/tcg.c | 6 +-
> tcg/tcg.h | 59 +++-
> 17 files changed, 886 insertions(+), 1094 deletions(-)
>
> --
> 1.7.4.4
>
>
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH 2/6] tcg: Always define all of the TCGOpcode enum members.
2011-08-17 21:11 ` [Qemu-devel] [PATCH 2/6] tcg: Always define all of the TCGOpcode enum members Richard Henderson
@ 2011-08-23 17:11 ` Peter Maydell
2011-08-23 17:21 ` Richard Henderson
2011-08-23 17:43 ` [Qemu-devel] [PATCH] tcg: Update --enable-debug for TCG_OPF_NOT_PRESENT Richard Henderson
0 siblings, 2 replies; 16+ messages in thread
From: Peter Maydell @ 2011-08-23 17:11 UTC (permalink / raw)
To: Richard Henderson; +Cc: qemu-devel
On 17 August 2011 22:11, Richard Henderson <rth@twiddle.net> wrote:
> By always defining these symbols, we can eliminate a lot of ifdefs.
>
> To allow this to be checked reliably, the semantics of the
> TCG_TARGET_HAS_* macros must be changed from def/undef to true/false.
> This allows even more ifdefs to be removed, converting them into
> C if statements.
This breaks x86-64 hosts if built with --enable-debug:
petmay01@LinaroE102767:~/git/qemu$ ./arm-softmmu/qemu-system-arm
Missing op definition for div_i32
Missing op definition for divu_i32
Missing op definition for rem_i32
Missing op definition for remu_i32
Missing op definition for deposit_i32
Missing op definition for add2_i32
Missing op definition for sub2_i32
Missing op definition for brcond2_i32
Missing op definition for mulu2_i32
Missing op definition for setcond2_i32
Missing op definition for andc_i32
Missing op definition for orc_i32
Missing op definition for eqv_i32
Missing op definition for nand_i32
Missing op definition for nor_i32
Missing op definition for div_i64
Missing op definition for divu_i64
Missing op definition for rem_i64
Missing op definition for remu_i64
Missing op definition for deposit_i64
Missing op definition for andc_i64
Missing op definition for orc_i64
Missing op definition for eqv_i64
Missing op definition for nand_i64
Missing op definition for nor_i64
/home/petmay01/git/qemu/tcg/tcg.c:1148: tcg fatal error
Aborted
A compile-time check that the tcg target has #defined all the
TCG_TARGET_HAS_foo to 0/1 and not left any undefined might be
useful?
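For illustration, one rough sketch of what such a check could look
like (the macro picked here is only an example; a real version would
repeat this for every optional TCG_TARGET_HAS_* symbol):

/* sketch only: one block like this per optional opcode */
#ifndef TCG_TARGET_HAS_rot_i32
# error "tcg-target.h must define TCG_TARGET_HAS_rot_i32 to 0 or 1"
#elif TCG_TARGET_HAS_rot_i32 != 0 && TCG_TARGET_HAS_rot_i32 != 1
# error "TCG_TARGET_HAS_rot_i32 must be 0 or 1"
#endif

so a backend that forgets a definition fails at build time instead of
aborting at runtime.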
[thanks to mmu_man on irc for the report.]
-- PMM
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH 2/6] tcg: Always define all of the TCGOpcode enum members.
2011-08-23 17:11 ` Peter Maydell
@ 2011-08-23 17:21 ` Richard Henderson
2011-08-23 17:43 ` [Qemu-devel] [PATCH] tcg: Update --enable-debug for TCG_OPF_NOT_PRESENT Richard Henderson
1 sibling, 0 replies; 16+ messages in thread
From: Richard Henderson @ 2011-08-23 17:21 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel
On 08/23/2011 10:11 AM, Peter Maydell wrote:
> A compile-time check that the tcg target has #defined all the
> TCG_TARGET_HAS_foo to 0/1 and not left any undefined might be
> useful?
That compile-time check already exists in the form of the uses in
tcg-op.h: if the macros aren't defined, you'll get undefined symbol
errors in that file.
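For example, taking one of the helpers converted by the series:

    /* from the converted tcg_gen_not_i32() */
    if (TCG_TARGET_HAS_not_i32) {
        tcg_gen_op2_i32(INDEX_op_not_i32, ret, arg);
    } else {
        tcg_gen_xori_i32(ret, arg, -1);
    }

If a tcg-target.h leaves TCG_TARGET_HAS_not_i32 undefined, that C if()
references an undeclared identifier and the build fails as soon as
tcg-op.h is included, rather than silently taking the fallback path as
the old #ifdef form would.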
I'll look into the problem...
r~
^ permalink raw reply [flat|nested] 16+ messages in thread
* [Qemu-devel] [PATCH] tcg: Update --enable-debug for TCG_OPF_NOT_PRESENT.
2011-08-23 17:11 ` Peter Maydell
2011-08-23 17:21 ` Richard Henderson
@ 2011-08-23 17:43 ` Richard Henderson
2011-08-23 18:41 ` Peter Maydell
1 sibling, 1 reply; 16+ messages in thread
From: Richard Henderson @ 2011-08-23 17:43 UTC (permalink / raw)
To: qemu-devel; +Cc: blauwirbel, peter.maydell
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
tcg/tcg.c | 15 ++++++++-------
1 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 06ce214..411f971 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -1128,18 +1128,19 @@ void tcg_add_target_add_op_defs(const TCGTargetOpDef *tdefs)
#if defined(CONFIG_DEBUG_TCG)
i = 0;
for (op = 0; op < ARRAY_SIZE(tcg_op_defs); op++) {
- if (op < INDEX_op_call || op == INDEX_op_debug_insn_start) {
+ const TCGOpDef *def = &tcg_op_defs[op];
+ if (op < INDEX_op_call
+ || op == INDEX_op_debug_insn_start
+ || (def->flags & TCG_OPF_NOT_PRESENT)) {
/* Wrong entry in op definitions? */
- if (tcg_op_defs[op].used) {
- fprintf(stderr, "Invalid op definition for %s\n",
- tcg_op_defs[op].name);
+ if (def->used) {
+ fprintf(stderr, "Invalid op definition for %s\n", def->name);
i = 1;
}
} else {
/* Missing entry in op definitions? */
- if (!tcg_op_defs[op].used) {
- fprintf(stderr, "Missing op definition for %s\n",
- tcg_op_defs[op].name);
+ if (!def->used) {
+ fprintf(stderr, "Missing op definition for %s\n", def->name);
i = 1;
}
}
--
1.7.4.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH] tcg: Update --enable-debug for TCG_OPF_NOT_PRESENT.
2011-08-23 17:43 ` [Qemu-devel] [PATCH] tcg: Update --enable-debug for TCG_OPF_NOT_PRESENT Richard Henderson
@ 2011-08-23 18:41 ` Peter Maydell
2011-08-23 19:31 ` Edgar E. Iglesias
0 siblings, 1 reply; 16+ messages in thread
From: Peter Maydell @ 2011-08-23 18:41 UTC (permalink / raw)
To: Richard Henderson; +Cc: blauwirbel, qemu-devel
On 23 August 2011 18:43, Richard Henderson <rth@twiddle.net> wrote:
> Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Confirmed that this fixes the assertion on x86-64 host.
-- PMM
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PATCH] tcg: Update --enable-debug for TCG_OPF_NOT_PRESENT.
2011-08-23 18:41 ` Peter Maydell
@ 2011-08-23 19:31 ` Edgar E. Iglesias
0 siblings, 0 replies; 16+ messages in thread
From: Edgar E. Iglesias @ 2011-08-23 19:31 UTC (permalink / raw)
To: Peter Maydell; +Cc: blauwirbel, qemu-devel, Richard Henderson
On Tue, Aug 23, 2011 at 07:41:35PM +0100, Peter Maydell wrote:
> On 23 August 2011 18:43, Richard Henderson <rth@twiddle.net> wrote:
> > Signed-off-by: Richard Henderson <rth@twiddle.net>
>
> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
>
> Confirmed that this fixes the assertion on x86-64 host.
I've applied this. Thanks both of you for the quick response.
Cheers
^ permalink raw reply [flat|nested] 16+ messages in thread