* [Qemu-devel] [PATCH 0/4] aarch64 TCG tlb fast lookup prerequisites
From: Claudio Fontana @ 2013-06-03 13:23 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
	Richard Henderson


This series is split out of:
"[PATCH 0/4] ARM aarch64 TCG tlb fast lookup"
http://lists.nongnu.org/archive/html/qemu-devel/2013-05/msg04803.html

It implements the low-level operations needed for the tlb fast
lookup itself, which will follow as a separate series.

It requires the reviewed but not yet committed series
"[PATCH v4 0/3] ARM aarch64 TCG target" at:

http://lists.nongnu.org/archive/html/qemu-devel/2013-05/msg04200.html

Tested on an x86-64 physical machine running the Foundation v8 model,
with a minimal Linux 3.8.0-rc6+ host system based on the Linaro v8
image 201301271620 for user space.

Tested guests: ARM v5, i386 FreeDOS, i386 Linux and SPARC images,
all from the QEMU testing page, plus an x86-64/Linux image built
with buildroot.

Changes from the original series:

* added ADDS and ANDS to the shifted regs ops, and reordered the
  arith opcode enum by encoding value
* split the shifted regs ops and the test/and immediate patterns
  into 2 separate patches
* for byte swapping, removed REV32: we can just use REV
* fixed a broken comment in tcg_out_uxt

Claudio Fontana (4):
  tcg/aarch64: improve arith shifted regs operations
  tcg/aarch64: implement AND/TEST immediate pattern
  tcg/aarch64: implement byte swap operations
  tcg/aarch64: implement sign/zero extend operations

 tcg/aarch64/tcg-target.c | 170 +++++++++++++++++++++++++++++++++++++++++------
 tcg/aarch64/tcg-target.h |  30 ++++-----
 2 files changed, 166 insertions(+), 34 deletions(-)

-- 
1.8.1


* [Qemu-devel] [PATCH 1/4] tcg/aarch64: improve arith shifted regs operations
From: Claudio Fontana @ 2013-06-03 13:26 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
	Richard Henderson


For the arith operations, add SUBS, ANDS and ADDS, and add a shift
parameter so that all arith instructions can make use of shifted
registers.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
 tcg/aarch64/tcg-target.c | 46 +++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
index ff626eb..b944655 100644
--- a/tcg/aarch64/tcg-target.c
+++ b/tcg/aarch64/tcg-target.c
@@ -186,11 +186,14 @@ enum aarch64_ldst_op_type { /* type of operation */
 };
 
 enum aarch64_arith_opc {
-    ARITH_ADD = 0x0b,
-    ARITH_SUB = 0x4b,
     ARITH_AND = 0x0a,
+    ARITH_ADD = 0x0b,
     ARITH_OR = 0x2a,
-    ARITH_XOR = 0x4a
+    ARITH_ADDS = 0x2b,
+    ARITH_XOR = 0x4a,
+    ARITH_SUB = 0x4b,
+    ARITH_ANDS = 0x6a,
+    ARITH_SUBS = 0x6b,
 };
 
 enum aarch64_srr_opc {
@@ -394,12 +397,20 @@ static inline void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
 }
 
 static inline void tcg_out_arith(TCGContext *s, enum aarch64_arith_opc opc,
-                                 int ext, TCGReg rd, TCGReg rn, TCGReg rm)
+                                 int ext, TCGReg rd, TCGReg rn, TCGReg rm,
+                                 int shift_imm)
 {
     /* Using shifted register arithmetic operations */
     /* if extended register operation (64bit) just OR with 0x80 << 24 */
-    unsigned int base = ext ? (0x80 | opc) << 24 : opc << 24;
-    tcg_out32(s, base | rm << 16 | rn << 5 | rd);
+    unsigned int shift, base = ext ? (0x80 | opc) << 24 : opc << 24;
+    if (shift_imm == 0) {
+        shift = 0;
+    } else if (shift_imm > 0) {
+        shift = shift_imm << 10 | 1 << 22;
+    } else /* (shift_imm < 0) */ {
+        shift = (-shift_imm) << 10;
+    }
+    tcg_out32(s, base | rm << 16 | shift | rn << 5 | rd);
 }
 
 static inline void tcg_out_mul(TCGContext *s, int ext,
@@ -482,11 +493,11 @@ static inline void tcg_out_rotl(TCGContext *s, int ext,
     tcg_out_extr(s, ext, rd, rn, rn, bits - (m & max));
 }
 
-static inline void tcg_out_cmp(TCGContext *s, int ext, TCGReg rn, TCGReg rm)
+static inline void tcg_out_cmp(TCGContext *s, int ext, TCGReg rn, TCGReg rm,
+                               int shift_imm)
 {
     /* Using CMP alias SUBS wzr, Wn, Wm */
-    unsigned int base = ext ? 0xeb00001f : 0x6b00001f;
-    tcg_out32(s, base | rm << 16 | rn << 5);
+    tcg_out_arith(s, ARITH_SUBS, ext, TCG_REG_XZR, rn, rm, shift_imm);
 }
 
 static inline void tcg_out_cset(TCGContext *s, int ext, TCGReg rd, TCGCond c)
@@ -830,31 +841,31 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_add_i64:
         ext = 1; /* fall through */
     case INDEX_op_add_i32:
-        tcg_out_arith(s, ARITH_ADD, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_ADD, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_sub_i64:
         ext = 1; /* fall through */
     case INDEX_op_sub_i32:
-        tcg_out_arith(s, ARITH_SUB, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_SUB, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_and_i64:
         ext = 1; /* fall through */
     case INDEX_op_and_i32:
-        tcg_out_arith(s, ARITH_AND, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_AND, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_or_i64:
         ext = 1; /* fall through */
     case INDEX_op_or_i32:
-        tcg_out_arith(s, ARITH_OR, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_OR, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_xor_i64:
         ext = 1; /* fall through */
     case INDEX_op_xor_i32:
-        tcg_out_arith(s, ARITH_XOR, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_XOR, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_mul_i64:
@@ -909,7 +920,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         if (const_args[2]) {    /* ROR / EXTR Wd, Wm, Wm, 32 - m */
             tcg_out_rotl(s, ext, args[0], args[1], args[2]);
         } else {
-            tcg_out_arith(s, ARITH_SUB, 0, TCG_REG_TMP, TCG_REG_XZR, args[2]);
+            tcg_out_arith(s, ARITH_SUB, 0,
+                          TCG_REG_TMP, TCG_REG_XZR, args[2], 0);
             tcg_out_shiftrot_reg(s, SRR_ROR, ext,
                                  args[0], args[1], TCG_REG_TMP);
         }
@@ -918,14 +930,14 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_brcond_i64:
         ext = 1; /* fall through */
     case INDEX_op_brcond_i32: /* CMP 0, 1, cond(2), label 3 */
-        tcg_out_cmp(s, ext, args[0], args[1]);
+        tcg_out_cmp(s, ext, args[0], args[1], 0);
         tcg_out_goto_label_cond(s, args[2], args[3]);
         break;
 
     case INDEX_op_setcond_i64:
         ext = 1; /* fall through */
     case INDEX_op_setcond_i32:
-        tcg_out_cmp(s, ext, args[1], args[2]);
+        tcg_out_cmp(s, ext, args[1], args[2], 0);
         tcg_out_cset(s, 0, args[0], args[3]);
         break;
 
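For reference, the shift_imm convention used by tcg_out_arith above
maps directly onto the AArch64 shifted-register operand: bits 15:10
hold the shift amount and bits 23:22 the shift type (00 = LSL,
01 = LSR), so a positive shift_imm requests LSR and a negative one
LSL. A minimal standalone sketch of the mapping (illustrative only,
not part of the patch; the encode_shift helper and the test values
are made up for this example):

    #include <assert.h>
    #include <stdint.h>

    /* Mirror the shift_imm convention of tcg_out_arith:
       > 0 means LSR #shift_imm, < 0 means LSL #-shift_imm. */
    static uint32_t encode_shift(int shift_imm)
    {
        if (shift_imm == 0) {
            return 0;                               /* LSL #0 */
        } else if (shift_imm > 0) {
            return 1u << 22 | shift_imm << 10;      /* LSR #shift_imm */
        } else {
            return (unsigned)-shift_imm << 10;      /* LSL #-shift_imm */
        }
    }

    int main(void)
    {
        /* ADD W0, W1, W2, LSL #24: ARITH_ADD base, rm = 2, rn = 1, rd = 0 */
        uint32_t insn = 0x0b << 24 | 2 << 16 | encode_shift(-24) | 1 << 5;
        assert(insn == 0x0b026020);
        return 0;
    }
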
-- 
1.8.1


* [Qemu-devel] [PATCH 2/4] tcg/aarch64: implement AND/TEST immediate pattern
From: Claudio Fontana @ 2013-06-03 13:28 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
	Richard Henderson


Add functions to AND/TEST registers against immediate bit patterns.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
 tcg/aarch64/tcg-target.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
index b944655..3528aa1 100644
--- a/tcg/aarch64/tcg-target.c
+++ b/tcg/aarch64/tcg-target.c
@@ -580,6 +580,40 @@ static inline void tcg_out_call(TCGContext *s, tcg_target_long target)
     }
 }
 
+/* encode a logical immediate, mapping the user parameter
+   M (length of the run of set bits) to the S field as S = M - 1 */
+static inline unsigned int
+aarch64_limm(unsigned int m, unsigned int r)
+{
+    assert(m > 0);
+    return r << 16 | (m - 1) << 10;
+}
+
+/* test a register against an immediate bit pattern made of
+   M set bits rotated right by R.
+   Examples:
+   to test a 32/64 reg against 0x00000007, pass M = 3,  R = 0.
+   to test a 32/64 reg against 0x000000ff, pass M = 8,  R = 0.
+   to test a 32bit reg against 0xff000000, pass M = 8,  R = 8.
+   to test a 32bit reg against 0xff0000ff, pass M = 16, R = 8.
+ */
+static inline void tcg_out_tst(TCGContext *s, int ext, TCGReg rn,
+                               unsigned int m, unsigned int r)
+{
+    /* using TST alias of ANDS XZR, Xn,#bimm64 0x7200001f */
+    unsigned int base = ext ? 0xf240001f : 0x7200001f;
+    tcg_out32(s, base | aarch64_limm(m, r) | rn << 5);
+}
+
+/* AND a register with a bit pattern, as with TST above, but without changing flags */
+static inline void tcg_out_andi(TCGContext *s, int ext, TCGReg rd, TCGReg rn,
+                                unsigned int m, unsigned int r)
+{
+    /* using AND 0x12000000 */
+    unsigned int base = ext ? 0x92400000 : 0x12000000;
+    tcg_out32(s, base | aarch64_limm(m, r) | rn << 5 | rd);
+}
+
 static inline void tcg_out_ret(TCGContext *s)
 {
     /* emit RET { LR } */
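
To make the M/R convention above concrete, here is a standalone
sketch (illustrative only, not part of the patch) expanding one of
the examples from the tcg_out_tst comment, testing a 32-bit register
against 0xff000000, i.e. M = 8, R = 8:

    #include <assert.h>
    #include <stdint.h>

    /* Mirror of aarch64_limm: immr = r (bits 21:16), imms = m - 1
       (bits 15:10). The CPU expands imms = 7 to a run of 8 set bits
       and rotates it right by immr = 8, giving 0xff000000. */
    static uint32_t limm(unsigned int m, unsigned int r)
    {
        return r << 16 | (m - 1) << 10;
    }

    int main(void)
    {
        /* TST W0, #0xff000000, i.e. ANDS WZR, W0, #imm (base 0x7200001f) */
        uint32_t insn = 0x7200001f | limm(8, 8) | 0 << 5 /* rn = 0 */;
        assert(insn == 0x72081c1f);
        return 0;
    }
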
-- 
1.8.1


* [Qemu-devel] [PATCH 3/4] tcg/aarch64: implement byte swap operations
From: Claudio Fontana @ 2013-06-03 13:29 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
	Richard Henderson


Implement the optional byte swap operations with the dedicated
AArch64 instructions.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
 tcg/aarch64/tcg-target.c | 32 ++++++++++++++++++++++++++++++++
 tcg/aarch64/tcg-target.h | 10 +++++-----
 2 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
index 3528aa1..aa19a17 100644
--- a/tcg/aarch64/tcg-target.c
+++ b/tcg/aarch64/tcg-target.c
@@ -660,6 +660,20 @@ static inline void tcg_out_goto_label_cond(TCGContext *s,
     }
 }
 
+static inline void tcg_out_rev(TCGContext *s, int ext, TCGReg rd, TCGReg rm)
+{
+    /* using REV 0x5ac00800 */
+    unsigned int base = ext ? 0xdac00c00 : 0x5ac00800;
+    tcg_out32(s, base | rm << 5 | rd);
+}
+
+static inline void tcg_out_rev16(TCGContext *s, int ext, TCGReg rd, TCGReg rm)
+{
+    /* using REV16 0x5ac00400 */
+    unsigned int base = ext ? 0xdac00400 : 0x5ac00400;
+    tcg_out32(s, base | rm << 5 | rd);
+}
+
 #ifdef CONFIG_SOFTMMU
 #include "exec/softmmu_defs.h"
 
@@ -1012,6 +1026,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         tcg_out_qemu_st(s, args, 3);
         break;
 
+    case INDEX_op_bswap64_i64:
+        ext = 1; /* fall through */
+    case INDEX_op_bswap32_i64:
+    case INDEX_op_bswap32_i32:
+        tcg_out_rev(s, ext, args[0], args[1]);
+        break;
+    case INDEX_op_bswap16_i64:
+    case INDEX_op_bswap16_i32:
+        tcg_out_rev16(s, 0, args[0], args[1]);
+        break;
+
     default:
         tcg_abort(); /* opcode not implemented */
     }
@@ -1093,6 +1118,13 @@ static const TCGTargetOpDef aarch64_op_defs[] = {
     { INDEX_op_qemu_st16, { "l", "l" } },
     { INDEX_op_qemu_st32, { "l", "l" } },
     { INDEX_op_qemu_st64, { "l", "l" } },
+
+    { INDEX_op_bswap16_i32, { "r", "r" } },
+    { INDEX_op_bswap32_i32, { "r", "r" } },
+    { INDEX_op_bswap16_i64, { "r", "r" } },
+    { INDEX_op_bswap32_i64, { "r", "r" } },
+    { INDEX_op_bswap64_i64, { "r", "r" } },
+
     { -1 },
 };
 
diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
index 075ab2a..247ef43 100644
--- a/tcg/aarch64/tcg-target.h
+++ b/tcg/aarch64/tcg-target.h
@@ -44,8 +44,8 @@ typedef enum {
 #define TCG_TARGET_HAS_ext16s_i32       0
 #define TCG_TARGET_HAS_ext8u_i32        0
 #define TCG_TARGET_HAS_ext16u_i32       0
-#define TCG_TARGET_HAS_bswap16_i32      0
-#define TCG_TARGET_HAS_bswap32_i32      0
+#define TCG_TARGET_HAS_bswap16_i32      1
+#define TCG_TARGET_HAS_bswap32_i32      1
 #define TCG_TARGET_HAS_not_i32          0
 #define TCG_TARGET_HAS_neg_i32          0
 #define TCG_TARGET_HAS_rot_i32          1
@@ -68,9 +68,9 @@ typedef enum {
 #define TCG_TARGET_HAS_ext8u_i64        0
 #define TCG_TARGET_HAS_ext16u_i64       0
 #define TCG_TARGET_HAS_ext32u_i64       0
-#define TCG_TARGET_HAS_bswap16_i64      0
-#define TCG_TARGET_HAS_bswap32_i64      0
-#define TCG_TARGET_HAS_bswap64_i64      0
+#define TCG_TARGET_HAS_bswap16_i64      1
+#define TCG_TARGET_HAS_bswap32_i64      1
+#define TCG_TARGET_HAS_bswap64_i64      1
 #define TCG_TARGET_HAS_not_i64          0
 #define TCG_TARGET_HAS_neg_i64          0
 #define TCG_TARGET_HAS_rot_i64          1
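
A note on the operand sizes above: bswap64_i64 sets ext and so emits
the 64-bit REV, while the bswap32 and bswap16 ops use the 32-bit
forms; REV16 with ext = 0 is enough for bswap16 because TCG feeds it
a value already zero-extended from 16 bits, and REV16 swaps bytes
within each halfword, leaving the zero upper halfword untouched. A
minimal C model of the two instructions (illustrative only, not part
of the patch):

    #include <assert.h>
    #include <stdint.h>

    static uint32_t rev_w(uint32_t x)   /* REV Wd, Wn: byte-reverse word */
    {
        return x << 24 | (x & 0xff00) << 8 | (x >> 8 & 0xff00) | x >> 24;
    }

    static uint32_t rev16_w(uint32_t x) /* REV16 Wd, Wn: swap bytes per halfword */
    {
        return (x & 0x00ff00ff) << 8 | (x >> 8 & 0x00ff00ff);
    }

    int main(void)
    {
        assert(rev_w(0x11223344) == 0x44332211);
        assert(rev16_w(0x00001234) == 0x00003412); /* upper half stays zero */
        return 0;
    }
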
-- 
1.8.1


* [Qemu-devel] [PATCH 4/4] tcg/aarch64: implement sign/zero extend operations
From: Claudio Fontana @ 2013-06-03 13:30 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
	Richard Henderson


Implement the optional sign/zero extend operations with the
dedicated AArch64 instructions.

Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
 tcg/aarch64/tcg-target.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++--
 tcg/aarch64/tcg-target.h | 20 ++++++++---------
 2 files changed, 66 insertions(+), 12 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
index aa19a17..5d0f300 100644
--- a/tcg/aarch64/tcg-target.c
+++ b/tcg/aarch64/tcg-target.c
@@ -674,6 +674,24 @@ static inline void tcg_out_rev16(TCGContext *s, int ext, TCGReg rd, TCGReg rm)
     tcg_out32(s, base | rm << 5 | rd);
 }
 
+static inline void tcg_out_sxt(TCGContext *s, int ext, int s_bits,
+                               TCGReg rd, TCGReg rn)
+{
+    /* using ALIASes SXTB 0x13001c00, SXTH 0x13003c00, SXTW 0x93407c00
+       of SBFM Xd, Xn, #0, #7|15|31 */
+    int bits = 8 * (1 << s_bits) - 1;
+    tcg_out_sbfm(s, ext, rd, rn, 0, bits);
+}
+
+static inline void tcg_out_uxt(TCGContext *s, int s_bits,
+                               TCGReg rd, TCGReg rn)
+{
+    /* using ALIASes UXTB 0x53001c00, UXTH 0x53003c00
+       of UBFM Wd, Wn, #0, #7|15 */
+    int bits = 8 * (1 << s_bits) - 1;
+    tcg_out_ubfm(s, 0, rd, rn, 0, bits);
+}
+
 #ifdef CONFIG_SOFTMMU
 #include "exec/softmmu_defs.h"
 
@@ -721,8 +739,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
     tcg_out_callr(s, TCG_REG_TMP);
 
     if (opc & 0x04) { /* sign extend */
-        unsigned int bits = 8 * (1 << s_bits) - 1;
-        tcg_out_sbfm(s, 1, data_reg, TCG_REG_X0, 0, bits); /* 7|15|31 */
+        tcg_out_sxt(s, 1, s_bits, data_reg, TCG_REG_X0);
     } else {
         tcg_out_movr(s, 1, data_reg, TCG_REG_X0);
     }
@@ -1037,6 +1054,31 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         tcg_out_rev16(s, 0, args[0], args[1]);
         break;
 
+    case INDEX_op_ext8s_i64:
+        ext = 1; /* fall through */
+    case INDEX_op_ext8s_i32:
+        tcg_out_sxt(s, ext, 0, args[0], args[1]);
+        break;
+    case INDEX_op_ext16s_i64:
+        ext = 1; /* fall through */
+    case INDEX_op_ext16s_i32:
+        tcg_out_sxt(s, ext, 1, args[0], args[1]);
+        break;
+    case INDEX_op_ext32s_i64:
+        tcg_out_sxt(s, 1, 2, args[0], args[1]);
+        break;
+    case INDEX_op_ext8u_i64:
+    case INDEX_op_ext8u_i32:
+        tcg_out_uxt(s, 0, args[0], args[1]);
+        break;
+    case INDEX_op_ext16u_i64:
+    case INDEX_op_ext16u_i32:
+        tcg_out_uxt(s, 1, args[0], args[1]);
+        break;
+    case INDEX_op_ext32u_i64:
+        tcg_out_movr(s, 0, args[0], args[1]);
+        break;
+
     default:
         tcg_abort(); /* opcode not implemented */
     }
@@ -1125,6 +1167,18 @@ static const TCGTargetOpDef aarch64_op_defs[] = {
     { INDEX_op_bswap32_i64, { "r", "r" } },
     { INDEX_op_bswap64_i64, { "r", "r" } },
 
+    { INDEX_op_ext8s_i32, { "r", "r" } },
+    { INDEX_op_ext16s_i32, { "r", "r" } },
+    { INDEX_op_ext8u_i32, { "r", "r" } },
+    { INDEX_op_ext16u_i32, { "r", "r" } },
+
+    { INDEX_op_ext8s_i64, { "r", "r" } },
+    { INDEX_op_ext16s_i64, { "r", "r" } },
+    { INDEX_op_ext32s_i64, { "r", "r" } },
+    { INDEX_op_ext8u_i64, { "r", "r" } },
+    { INDEX_op_ext16u_i64, { "r", "r" } },
+    { INDEX_op_ext32u_i64, { "r", "r" } },
+
     { -1 },
 };
 
diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
index 247ef43..97e4a5b 100644
--- a/tcg/aarch64/tcg-target.h
+++ b/tcg/aarch64/tcg-target.h
@@ -40,10 +40,10 @@ typedef enum {
 
 /* optional instructions */
 #define TCG_TARGET_HAS_div_i32          0
-#define TCG_TARGET_HAS_ext8s_i32        0
-#define TCG_TARGET_HAS_ext16s_i32       0
-#define TCG_TARGET_HAS_ext8u_i32        0
-#define TCG_TARGET_HAS_ext16u_i32       0
+#define TCG_TARGET_HAS_ext8s_i32        1
+#define TCG_TARGET_HAS_ext16s_i32       1
+#define TCG_TARGET_HAS_ext8u_i32        1
+#define TCG_TARGET_HAS_ext16u_i32       1
 #define TCG_TARGET_HAS_bswap16_i32      1
 #define TCG_TARGET_HAS_bswap32_i32      1
 #define TCG_TARGET_HAS_not_i32          0
@@ -62,12 +62,12 @@ typedef enum {
 #define TCG_TARGET_HAS_muls2_i32        0
 
 #define TCG_TARGET_HAS_div_i64          0
-#define TCG_TARGET_HAS_ext8s_i64        0
-#define TCG_TARGET_HAS_ext16s_i64       0
-#define TCG_TARGET_HAS_ext32s_i64       0
-#define TCG_TARGET_HAS_ext8u_i64        0
-#define TCG_TARGET_HAS_ext16u_i64       0
-#define TCG_TARGET_HAS_ext32u_i64       0
+#define TCG_TARGET_HAS_ext8s_i64        1
+#define TCG_TARGET_HAS_ext16s_i64       1
+#define TCG_TARGET_HAS_ext32s_i64       1
+#define TCG_TARGET_HAS_ext8u_i64        1
+#define TCG_TARGET_HAS_ext16u_i64       1
+#define TCG_TARGET_HAS_ext32u_i64       1
 #define TCG_TARGET_HAS_bswap16_i64      1
 #define TCG_TARGET_HAS_bswap32_i64      1
 #define TCG_TARGET_HAS_bswap64_i64      1
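
For reference, s_bits follows the usual TCG memory-size convention
(0 = byte, 1 = halfword, 2 = word), so bits = 8 * (1 << s_bits) - 1
yields the 7|15|31 field-top index passed to SBFM/UBFM; ext32u needs
no bitfield op at all, since the 32-bit register move already zeroes
the upper 32 bits when writing a W register. A minimal C model of
the resulting extensions (illustrative only, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    static int64_t sxt(uint64_t x, int s_bits)  /* SBFM Xd, Xn, #0, #bits */
    {
        int shift = 64 - 8 * (1 << s_bits);
        /* relies on arithmetic right shift of signed values */
        return (int64_t)(x << shift) >> shift;
    }

    static uint64_t uxt(uint64_t x, int s_bits) /* UBFM Wd, Wn, #0, #bits */
    {
        return x & ((1ull << (8 * (1 << s_bits))) - 1);
    }

    int main(void)
    {
        assert(sxt(0x80, 0) == -128);           /* SXTB */
        assert(sxt(0x8000, 1) == -32768);       /* SXTH */
        assert(uxt(0x1234cdef, 1) == 0xcdef);   /* UXTH */
        return 0;
    }
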
-- 
1.8.1


* Re: [Qemu-devel] [PATCH 0/4] aarch64 TCG tlb fast lookup prerequisites
From: Richard Henderson @ 2013-06-03 14:19 UTC (permalink / raw)
  To: Claudio Fontana
  Cc: Laurent Desnogues, Peter Maydell, Jani Kokkonen,
	qemu-devel@nongnu.org

On 06/03/2013 06:23 AM, Claudio Fontana wrote:
> Changes from the original series:
> 
> * added ADDS and ANDS to the shifted regs ops, and reordered the
>   arith opcode enum by encoding value
> * split the shifted regs ops and the test/and immediate patterns
>   into 2 separate patches
> * for byte swapping, removed REV32: we can just use REV
> * fixed a broken comment in tcg_out_uxt
> 
> Claudio Fontana (4):
>   tcg/aarch64: improve arith shifted regs operations
>   tcg/aarch64: implement AND/TEST immediate pattern
>   tcg/aarch64: implement byte swap operations
>   tcg/aarch64: implement sign/zero extend operations
> 
>  tcg/aarch64/tcg-target.c | 170 +++++++++++++++++++++++++++++++++++++++++------
>  tcg/aarch64/tcg-target.h |  30 ++++-----
>  2 files changed, 166 insertions(+), 34 deletions(-)

Reviewed-by: Richard Henderson <rth@twiddle.net>


r~

