* [Qemu-devel] [PATCH v3 0/3] ARM aarch64 TCG target
@ 2013-05-28 15:23 Claudio Fontana
2013-05-28 15:26 ` [Qemu-devel] [PATCH v3 1/3] include/elf.h: add aarch64 ELF machine and relocs Claudio Fontana
From: Claudio Fontana @ 2013-05-28 15:23 UTC (permalink / raw)
To: Peter Maydell
Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
Richard Henderson
This series implements preliminary support for the ARM aarch64 TCG target.
Limitations of this initial implementation (TODOs) include:
* missing TLB lookup in qemu_ld/st [C helpers always called].
An incremental patch implementing this, which requires this series,
is coming from colleague Jani Kokkonen.
* most optional opcodes are not implemented yet (only rotation done).
* CONFIG_SOFTMMU only
Tested on an x86-64 physical machine running the Foundation v8 model,
with a minimal Linux 3.8.0-rc6+ host system based on the Linaro v8
image 201301271620 for user space.
Tested guests: arm v5 test image, i386 FreeDOS test image,
i386 linux test image, all from the qemu-devel testing page.
Also tested on x86-64/linux and on arm v7/linux, both built with buildroot.
Changes in v2:
* for icache flushing, removed placeholder for old gcc
* aligned defines values in the elf aarch64 relocations
* added comment in the elf aarch64 relocations
* use X16 and X17 as well; they should be safe to use
* defined TCG_REG_TMP to TCG_REG_X8
* fix relocs and gotos to be more robust during retranslation
* removed declarations and assignments on same line
* added braces in 'if's even when unnecessary
* added comment about COND_NV behaving like COND_AL in aarch64
* added comment about no-extend field
* remove trampoline for the conditional branches, add CONDBR19
* set MAX_CODE_GEN_BUFFER_SIZE for aarch64, matching JUMP26
* improved left rotations, by using one less instruction
* for setcond_i32/i64 use CSET instead of CSEL
* implement andi and subi for working with the stack
* do not rely on temp_buf for tcg_set_frame: use stack
* remove unused constrained ARM constant
* redefine enums that share a value in terms of one another
* fix setting of available regs (set all 32 bits)
* moved configure patch to after the tcg target in the series
* added low level operations useful in preparation of tlb lookup
Changes in v3:
* removed low level operations introduced in v2, will be in separate series
* honor 'addend' in patch_reloc, although it's always 0
* replace use of 'int' with 'TCGReg' when registers are expected
* merge movi32 and movi64 into movi_aux
* use 32bit version of the instructions when possible, to save energy/cycles
* do not clobber a passed register for INDEX_op_rotl_i32/i64
* removed hard coded SP and FP in stack functions, make them params
* zero-extend addr_reg for 32bit guests in qemu_ld/st
* make use of deposit32 (bitops) in reloc_pc26 and reloc_pc19
* never use multiple cases per line in switches even when empty
* less pessimistic range checks for instructions
* other formatting fixes that fell through the cracks in v2
Claudio Fontana (3):
include/elf.h: add aarch64 ELF machine and relocs
tcg/aarch64: implement new TCG target for aarch64
configure: permit compilation on arm aarch64
configure | 8 +
include/elf.h | 129 ++++++
include/exec/exec-all.h | 5 +-
tcg/aarch64/tcg-target.c | 1159 ++++++++++++++++++++++++++++++++++++++++++++++
tcg/aarch64/tcg-target.h | 99 ++++
translate-all.c | 2 +
6 files changed, 1401 insertions(+), 1 deletion(-)
create mode 100644 tcg/aarch64/tcg-target.c
create mode 100644 tcg/aarch64/tcg-target.h
--
1.8.1
* [Qemu-devel] [PATCH v3 1/3] include/elf.h: add aarch64 ELF machine and relocs
2013-05-28 15:23 [Qemu-devel] [PATCH v3 0/3] ARM aarch64 TCG target Claudio Fontana
@ 2013-05-28 15:26 ` Claudio Fontana
2013-05-28 15:28 ` [Qemu-devel] [PATCH v3 2/3] tcg/aarch64: implement new TCG target for aarch64 Claudio Fontana
2013-05-28 15:30 ` [Qemu-devel] [PATCH v3 3/3] configure: permit compilation on arm aarch64 Claudio Fontana
From: Claudio Fontana @ 2013-05-28 15:26 UTC (permalink / raw)
To: Peter Maydell
Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
Richard Henderson
We will use the 26-bit relative relocs in the aarch64 TCG target.
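For context (illustration only, not part of this patch): R_AARCH64_JUMP26
and R_AARCH64_CALL26 cover the B and BL instructions, which hold a signed
26-bit offset counted in 32-bit words, i.e. a +/-128MB branch range.
Patch 2/3 applies them in reloc_pc26() using deposit32(); a rough
standalone sketch of that encoding, with an invented helper name:

#include <stdint.h>

/* patch a B/BL instruction so it branches from 'pc' to 'target';
   the word offset lives in the low 26 bits of the instruction */
static void apply_jump26(uint32_t *insn, intptr_t pc, intptr_t target)
{
    int64_t offset = ((int64_t)target - (int64_t)pc) >> 2;
    *insn = (*insn & ~0x03ffffffu) | ((uint32_t)offset & 0x03ffffffu);
}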
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
include/elf.h | 129 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 129 insertions(+)
diff --git a/include/elf.h b/include/elf.h
index a21ea53..cf0d3e2 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -129,6 +129,8 @@ typedef int64_t Elf64_Sxword;
#define EM_XTENSA 94 /* Tensilica Xtensa */
+#define EM_AARCH64 183
+
/* This is the info that is needed to parse the dynamic section of the file */
#define DT_NULL 0
#define DT_NEEDED 1
@@ -616,6 +618,133 @@ typedef struct {
/* Keep this the last entry. */
#define R_ARM_NUM 256
+/* ARM Aarch64 relocation types */
+#define R_AARCH64_NONE 256 /* also accepts R_ARM_NONE (0) */
+/* static data relocations */
+#define R_AARCH64_ABS64 257
+#define R_AARCH64_ABS32 258
+#define R_AARCH64_ABS16 259
+#define R_AARCH64_PREL64 260
+#define R_AARCH64_PREL32 261
+#define R_AARCH64_PREL16 262
+/* static aarch64 group relocations */
+/* group relocs to create unsigned data value or address inline */
+#define R_AARCH64_MOVW_UABS_G0 263
+#define R_AARCH64_MOVW_UABS_G0_NC 264
+#define R_AARCH64_MOVW_UABS_G1 265
+#define R_AARCH64_MOVW_UABS_G1_NC 266
+#define R_AARCH64_MOVW_UABS_G2 267
+#define R_AARCH64_MOVW_UABS_G2_NC 268
+#define R_AARCH64_MOVW_UABS_G3 269
+/* group relocs to create signed data or offset value inline */
+#define R_AARCH64_MOVW_SABS_G0 270
+#define R_AARCH64_MOVW_SABS_G1 271
+#define R_AARCH64_MOVW_SABS_G2 272
+/* relocs to generate 19, 21, and 33 bit PC-relative addresses */
+#define R_AARCH64_LD_PREL_LO19 273
+#define R_AARCH64_ADR_PREL_LO21 274
+#define R_AARCH64_ADR_PREL_PG_HI21 275
+#define R_AARCH64_ADR_PREL_PG_HI21_NC 276
+#define R_AARCH64_ADD_ABS_LO12_NC 277
+#define R_AARCH64_LDST8_ABS_LO12_NC 278
+#define R_AARCH64_LDST16_ABS_LO12_NC 284
+#define R_AARCH64_LDST32_ABS_LO12_NC 285
+#define R_AARCH64_LDST64_ABS_LO12_NC 286
+#define R_AARCH64_LDST128_ABS_LO12_NC 299
+/* relocs for control-flow - all offsets as multiple of 4 */
+#define R_AARCH64_TSTBR14 279
+#define R_AARCH64_CONDBR19 280
+#define R_AARCH64_JUMP26 282
+#define R_AARCH64_CALL26 283
+/* group relocs to create pc-relative offset inline */
+#define R_AARCH64_MOVW_PREL_G0 287
+#define R_AARCH64_MOVW_PREL_G0_NC 288
+#define R_AARCH64_MOVW_PREL_G1 289
+#define R_AARCH64_MOVW_PREL_G1_NC 290
+#define R_AARCH64_MOVW_PREL_G2 291
+#define R_AARCH64_MOVW_PREL_G2_NC 292
+#define R_AARCH64_MOVW_PREL_G3 293
+/* group relocs to create a GOT-relative offset inline */
+#define R_AARCH64_MOVW_GOTOFF_G0 300
+#define R_AARCH64_MOVW_GOTOFF_G0_NC 301
+#define R_AARCH64_MOVW_GOTOFF_G1 302
+#define R_AARCH64_MOVW_GOTOFF_G1_NC 303
+#define R_AARCH64_MOVW_GOTOFF_G2 304
+#define R_AARCH64_MOVW_GOTOFF_G2_NC 305
+#define R_AARCH64_MOVW_GOTOFF_G3 306
+/* GOT-relative data relocs */
+#define R_AARCH64_GOTREL64 307
+#define R_AARCH64_GOTREL32 308
+/* GOT-relative instr relocs */
+#define R_AARCH64_GOT_LD_PREL19 309
+#define R_AARCH64_LD64_GOTOFF_LO15 310
+#define R_AARCH64_ADR_GOT_PAGE 311
+#define R_AARCH64_LD64_GOT_LO12_NC 312
+#define R_AARCH64_LD64_GOTPAGE_LO15 313
+/* General Dynamic TLS relocations */
+#define R_AARCH64_TLSGD_ADR_PREL21 512
+#define R_AARCH64_TLSGD_ADR_PAGE21 513
+#define R_AARCH64_TLSGD_ADD_LO12_NC 514
+#define R_AARCH64_TLSGD_MOVW_G1 515
+#define R_AARCH64_TLSGD_MOVW_G0_NC 516
+/* Local Dynamic TLS relocations */
+#define R_AARCH64_TLSLD_ADR_PREL21 517
+#define R_AARCH64_TLSLD_ADR_PAGE21 518
+#define R_AARCH64_TLSLD_ADD_LO12_NC 519
+#define R_AARCH64_TLSLD_MOVW_G1 520
+#define R_AARCH64_TLSLD_MOVW_G0_NC 521
+#define R_AARCH64_TLSLD_LD_PREL19 522
+#define R_AARCH64_TLSLD_MOVW_DTPREL_G2 523
+#define R_AARCH64_TLSLD_MOVW_DTPREL_G1 524
+#define R_AARCH64_TLSLD_MOVW_DTPREL_G1_NC 525
+#define R_AARCH64_TLSLD_MOVW_DTPREL_G0 526
+#define R_AARCH64_TLSLD_MOVW_DTPREL_G0_NC 527
+#define R_AARCH64_TLSLD_ADD_DTPREL_HI12 528
+#define R_AARCH64_TLSLD_ADD_DTPREL_LO12 529
+#define R_AARCH64_TLSLD_ADD_DTPREL_LO12_NC 530
+#define R_AARCH64_TLSLD_LDST8_DTPREL_LO12 531
+#define R_AARCH64_TLSLD_LDST8_DTPREL_LO12_NC 532
+#define R_AARCH64_TLSLD_LDST16_DTPREL_LO12 533
+#define R_AARCH64_TLSLD_LDST16_DTPREL_LO12_NC 534
+#define R_AARCH64_TLSLD_LDST32_DTPREL_LO12 535
+#define R_AARCH64_TLSLD_LDST32_DTPREL_LO12_NC 536
+#define R_AARCH64_TLSLD_LDST64_DTPREL_LO12 537
+#define R_AARCH64_TLSLD_LDST64_DTPREL_LO12_NC 538
+/* initial exec TLS relocations */
+#define R_AARCH64_TLSIE_MOVW_GOTTPREL_G1 539
+#define R_AARCH64_TLSIE_MOVW_GOTTPREL_G0_NC 540
+#define R_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21 541
+#define R_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC 542
+#define R_AARCH64_TLSIE_LD_GOTTPREL_PREL19 543
+/* local exec TLS relocations */
+#define R_AARCH64_TLSLE_MOVW_TPREL_G2 544
+#define R_AARCH64_TLSLE_MOVW_TPREL_G1 545
+#define R_AARCH64_TLSLE_MOVW_TPREL_G1_NC 546
+#define R_AARCH64_TLSLE_MOVW_TPREL_G0 547
+#define R_AARCH64_TLSLE_MOVW_TPREL_G0_NC 548
+#define R_AARCH64_TLSLE_ADD_TPREL_HI12 549
+#define R_AARCH64_TLSLE_ADD_TPREL_LO12 550
+#define R_AARCH64_TLSLE_ADD_TPREL_LO12_NC 551
+#define R_AARCH64_TLSLE_LDST8_TPREL_LO12 552
+#define R_AARCH64_TLSLE_LDST8_TPREL_LO12_NC 553
+#define R_AARCH64_TLSLE_LDST16_TPREL_LO12 554
+#define R_AARCH64_TLSLE_LDST16_TPREL_LO12_NC 555
+#define R_AARCH64_TLSLE_LDST32_TPREL_LO12 556
+#define R_AARCH64_TLSLE_LDST32_TPREL_LO12_NC 557
+#define R_AARCH64_TLSLE_LDST64_TPREL_LO12 558
+#define R_AARCH64_TLSLE_LDST64_TPREL_LO12_NC 559
+/* Dynamic Relocations */
+#define R_AARCH64_COPY 1024
+#define R_AARCH64_GLOB_DAT 1025
+#define R_AARCH64_JUMP_SLOT 1026
+#define R_AARCH64_RELATIVE 1027
+#define R_AARCH64_TLS_DTPREL64 1028
+#define R_AARCH64_TLS_DTPMOD64 1029
+#define R_AARCH64_TLS_TPREL64 1030
+#define R_AARCH64_TLS_DTPREL32 1031
+#define R_AARCH64_TLS_DTPMOD32 1032
+#define R_AARCH64_TLS_TPREL32 1033
+
/* s390 relocations defined by the ABIs */
#define R_390_NONE 0 /* No reloc. */
#define R_390_8 1 /* Direct 8 bit. */
--
1.8.1
* [Qemu-devel] [PATCH v3 2/3] tcg/aarch64: implement new TCG target for aarch64
2013-05-28 15:23 [Qemu-devel] [PATCH v3 0/3] ARM aarch64 TCG target Claudio Fontana
2013-05-28 15:26 ` [Qemu-devel] [PATCH v3 1/3] include/elf.h: add aarch64 ELF machine and relocs Claudio Fontana
@ 2013-05-28 15:28 ` Claudio Fontana
2013-05-28 16:18 ` Richard Henderson
2013-05-28 15:30 ` [Qemu-devel] [PATCH v3 3/3] configure: permit compilation on arm aarch64 Claudio Fontana
From: Claudio Fontana @ 2013-05-28 15:28 UTC (permalink / raw)
To: Peter Maydell
Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
Richard Henderson
Add preliminary support for the aarch64 TCG target.
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
include/exec/exec-all.h | 5 +-
tcg/aarch64/tcg-target.c | 1159 ++++++++++++++++++++++++++++++++++++++++++++++
tcg/aarch64/tcg-target.h | 99 ++++
translate-all.c | 2 +
4 files changed, 1264 insertions(+), 1 deletion(-)
create mode 100644 tcg/aarch64/tcg-target.c
create mode 100644 tcg/aarch64/tcg-target.h
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 6362074..5c31863 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -128,7 +128,7 @@ static inline void tlb_flush(CPUArchState *env, int flush_global)
#if defined(__arm__) || defined(_ARCH_PPC) \
|| defined(__x86_64__) || defined(__i386__) \
- || defined(__sparc__) \
+ || defined(__sparc__) || defined(__aarch64__) \
|| defined(CONFIG_TCG_INTERPRETER)
#define USE_DIRECT_JUMP
#endif
@@ -230,6 +230,9 @@ static inline void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
*(uint32_t *)jmp_addr = addr - (jmp_addr + 4);
/* no need to flush icache explicitly */
}
+#elif defined(__aarch64__)
+void aarch64_tb_set_jmp_target(uintptr_t jmp_addr, uintptr_t addr);
+#define tb_set_jmp_target1 aarch64_tb_set_jmp_target
#elif defined(__arm__)
static inline void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr)
{
diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
new file mode 100644
index 0000000..8051419
--- /dev/null
+++ b/tcg/aarch64/tcg-target.c
@@ -0,0 +1,1159 @@
+/*
+ * Initial TCG Implementation for aarch64
+ *
+ * Copyright (c) 2013 Huawei Technologies Duesseldorf GmbH
+ * Written by Claudio Fontana
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * (at your option) any later version.
+ *
+ * See the COPYING file in the top-level directory for details.
+ */
+
+#include "qemu/bitops.h"
+
+#ifndef NDEBUG
+static const char * const tcg_target_reg_names[TCG_TARGET_NB_REGS] = {
+ "%x0", "%x1", "%x2", "%x3", "%x4", "%x5", "%x6", "%x7",
+ "%x8", "%x9", "%x10", "%x11", "%x12", "%x13", "%x14", "%x15",
+ "%x16", "%x17", "%x18", "%x19", "%x20", "%x21", "%x22", "%x23",
+ "%x24", "%x25", "%x26", "%x27", "%x28",
+ "%fp", /* frame pointer */
+ "%lr", /* link register */
+ "%sp", /* stack pointer */
+};
+#endif /* NDEBUG */
+
+static const int tcg_target_reg_alloc_order[] = {
+ TCG_REG_X20, TCG_REG_X21, TCG_REG_X22, TCG_REG_X23,
+ TCG_REG_X24, TCG_REG_X25, TCG_REG_X26, TCG_REG_X27,
+ TCG_REG_X28,
+
+ TCG_REG_X9, TCG_REG_X10, TCG_REG_X11, TCG_REG_X12,
+ TCG_REG_X13, TCG_REG_X14, TCG_REG_X15,
+ TCG_REG_X16, TCG_REG_X17,
+
+ TCG_REG_X18, TCG_REG_X19, /* will not use these, see tcg_target_init */
+
+ TCG_REG_X0, TCG_REG_X1, TCG_REG_X2, TCG_REG_X3,
+ TCG_REG_X4, TCG_REG_X5, TCG_REG_X6, TCG_REG_X7,
+
+ TCG_REG_X8, /* will not use, see tcg_target_init */
+};
+
+static const int tcg_target_call_iarg_regs[8] = {
+ TCG_REG_X0, TCG_REG_X1, TCG_REG_X2, TCG_REG_X3,
+ TCG_REG_X4, TCG_REG_X5, TCG_REG_X6, TCG_REG_X7
+};
+static const int tcg_target_call_oarg_regs[1] = {
+ TCG_REG_X0
+};
+
+#define TCG_REG_TMP TCG_REG_X8
+
+static inline void reloc_pc26(void *code_ptr, tcg_target_long target)
+{
+ tcg_target_long offset; uint32_t insn;
+ offset = (target - (tcg_target_long)code_ptr) / 4;
+ /* read instruction, mask away previous PC_REL26 parameter contents,
+ set the proper offset, then write back the instruction. */
+ insn = *(uint32_t *)code_ptr;
+ insn = deposit32(insn, 0, 26, offset);
+ *(uint32_t *)code_ptr = insn;
+}
+
+static inline void reloc_pc19(void *code_ptr, tcg_target_long target)
+{
+ tcg_target_long offset; uint32_t insn;
+ offset = (target - (tcg_target_long)code_ptr) / 4;
+ /* read instruction, mask away previous PC_REL19 parameter contents,
+ set the proper offset, then write back the instruction. */
+ insn = *(uint32_t *)code_ptr;
+ insn = deposit32(insn, 5, 19, offset);
+ *(uint32_t *)code_ptr = insn;
+}
+
+static inline void patch_reloc(uint8_t *code_ptr, int type,
+ tcg_target_long value, tcg_target_long addend)
+{
+ value += addend;
+
+ switch (type) {
+ case R_AARCH64_JUMP26:
+ case R_AARCH64_CALL26:
+ reloc_pc26(code_ptr, value);
+ break;
+ case R_AARCH64_CONDBR19:
+ reloc_pc19(code_ptr, value);
+ break;
+
+ default:
+ tcg_abort();
+ }
+}
+
+/* parse target specific constraints */
+static int target_parse_constraint(TCGArgConstraint *ct,
+ const char **pct_str)
+{
+ const char *ct_str = *pct_str;
+
+ switch (ct_str[0]) {
+ case 'r':
+ ct->ct |= TCG_CT_REG;
+ tcg_regset_set32(ct->u.regs, 0, (1ULL << TCG_TARGET_NB_REGS) - 1);
+ break;
+ case 'l': /* qemu_ld / qemu_st address, data_reg */
+ ct->ct |= TCG_CT_REG;
+ tcg_regset_set32(ct->u.regs, 0, (1ULL << TCG_TARGET_NB_REGS) - 1);
+#ifdef CONFIG_SOFTMMU
+ /* x0 and x1 will be overwritten when reading the tlb entry,
+ x2 and x3 are used for the helper arguments; better to avoid using them. */
+ tcg_regset_reset_reg(ct->u.regs, TCG_REG_X0);
+ tcg_regset_reset_reg(ct->u.regs, TCG_REG_X1);
+ tcg_regset_reset_reg(ct->u.regs, TCG_REG_X2);
+ tcg_regset_reset_reg(ct->u.regs, TCG_REG_X3);
+#endif
+ break;
+ default:
+ return -1;
+ }
+
+ ct_str++;
+ *pct_str = ct_str;
+ return 0;
+}
+
+static inline int tcg_target_const_match(tcg_target_long val,
+ const TCGArgConstraint *arg_ct)
+{
+ int ct = arg_ct->ct;
+
+ if (ct & TCG_CT_CONST) {
+ return 1;
+ }
+
+ return 0;
+}
+
+enum aarch64_cond_code {
+ COND_EQ = 0x0,
+ COND_NE = 0x1,
+ COND_CS = 0x2, /* Unsigned greater or equal */
+ COND_HS = COND_CS, /* ALIAS greater or equal */
+ COND_CC = 0x3, /* Unsigned less than */
+ COND_LO = COND_CC, /* ALIAS Lower */
+ COND_MI = 0x4, /* Negative */
+ COND_PL = 0x5, /* Zero or greater */
+ COND_VS = 0x6, /* Overflow */
+ COND_VC = 0x7, /* No overflow */
+ COND_HI = 0x8, /* Unsigned greater than */
+ COND_LS = 0x9, /* Unsigned less or equal */
+ COND_GE = 0xa,
+ COND_LT = 0xb,
+ COND_GT = 0xc,
+ COND_LE = 0xd,
+ COND_AL = 0xe,
+ COND_NV = 0xf, /* behaves like COND_AL here */
+};
+
+static const enum aarch64_cond_code tcg_cond_to_aarch64[] = {
+ [TCG_COND_EQ] = COND_EQ,
+ [TCG_COND_NE] = COND_NE,
+ [TCG_COND_LT] = COND_LT,
+ [TCG_COND_GE] = COND_GE,
+ [TCG_COND_LE] = COND_LE,
+ [TCG_COND_GT] = COND_GT,
+ /* unsigned */
+ [TCG_COND_LTU] = COND_LO,
+ [TCG_COND_GTU] = COND_HI,
+ [TCG_COND_GEU] = COND_HS,
+ [TCG_COND_LEU] = COND_LS,
+};
+
+/* opcodes for LDR / STR instructions with base + simm9 addressing */
+enum aarch64_ldst_op_data { /* size of the data moved */
+ LDST_8 = 0x38,
+ LDST_16 = 0x78,
+ LDST_32 = 0xb8,
+ LDST_64 = 0xf8,
+};
+enum aarch64_ldst_op_type { /* type of operation */
+ LDST_ST = 0x0, /* store */
+ LDST_LD = 0x4, /* load */
+ LDST_LD_S_X = 0x8, /* load and sign-extend into Xt */
+ LDST_LD_S_W = 0xc, /* load and sign-extend into Wt */
+};
+
+enum aarch64_arith_opc {
+ ARITH_ADD = 0x0b,
+ ARITH_SUB = 0x4b,
+ ARITH_AND = 0x0a,
+ ARITH_OR = 0x2a,
+ ARITH_XOR = 0x4a
+};
+
+enum aarch64_srr_opc {
+ SRR_SHL = 0x0,
+ SRR_SHR = 0x4,
+ SRR_SAR = 0x8,
+ SRR_ROR = 0xc
+};
+
+static inline enum aarch64_ldst_op_data
+aarch64_ldst_get_data(TCGOpcode tcg_op)
+{
+ switch (tcg_op) {
+ case INDEX_op_ld8u_i32:
+ case INDEX_op_ld8s_i32:
+ case INDEX_op_ld8u_i64:
+ case INDEX_op_ld8s_i64:
+ case INDEX_op_st8_i32:
+ case INDEX_op_st8_i64:
+ return LDST_8;
+
+ case INDEX_op_ld16u_i32:
+ case INDEX_op_ld16s_i32:
+ case INDEX_op_ld16u_i64:
+ case INDEX_op_ld16s_i64:
+ case INDEX_op_st16_i32:
+ case INDEX_op_st16_i64:
+ return LDST_16;
+
+ case INDEX_op_ld_i32:
+ case INDEX_op_st_i32:
+ case INDEX_op_ld32u_i64:
+ case INDEX_op_ld32s_i64:
+ case INDEX_op_st32_i64:
+ return LDST_32;
+
+ case INDEX_op_ld_i64:
+ case INDEX_op_st_i64:
+ return LDST_64;
+
+ default:
+ tcg_abort();
+ }
+}
+
+static inline enum aarch64_ldst_op_type
+aarch64_ldst_get_type(TCGOpcode tcg_op)
+{
+ switch (tcg_op) {
+ case INDEX_op_st8_i32:
+ case INDEX_op_st16_i32:
+ case INDEX_op_st8_i64:
+ case INDEX_op_st16_i64:
+ case INDEX_op_st_i32:
+ case INDEX_op_st32_i64:
+ case INDEX_op_st_i64:
+ return LDST_ST;
+
+ case INDEX_op_ld8u_i32:
+ case INDEX_op_ld16u_i32:
+ case INDEX_op_ld8u_i64:
+ case INDEX_op_ld16u_i64:
+ case INDEX_op_ld_i32:
+ case INDEX_op_ld32u_i64:
+ case INDEX_op_ld_i64:
+ return LDST_LD;
+
+ case INDEX_op_ld8s_i32:
+ case INDEX_op_ld16s_i32:
+ return LDST_LD_S_W;
+
+ case INDEX_op_ld8s_i64:
+ case INDEX_op_ld16s_i64:
+ case INDEX_op_ld32s_i64:
+ return LDST_LD_S_X;
+
+ default:
+ tcg_abort();
+ }
+}
+
+static inline uint32_t tcg_in32(TCGContext *s)
+{
+ uint32_t v = *(uint32_t *)s->code_ptr;
+ return v;
+}
+
+static inline void tcg_out_ldst_9(TCGContext *s,
+ enum aarch64_ldst_op_data op_data,
+ enum aarch64_ldst_op_type op_type,
+ TCGReg rd, TCGReg rn, tcg_target_long offset)
+{
+ /* use LDUR with BASE register with 9bit signed unscaled offset */
+ unsigned int mod, off;
+
+ if (offset < 0) {
+ off = (256 + offset);
+ mod = 0x1;
+ } else {
+ off = offset;
+ mod = 0x0;
+ }
+
+ mod |= op_type;
+ tcg_out32(s, op_data << 24 | mod << 20 | off << 12 | rn << 5 | rd);
+}
+
+static inline void tcg_out_movr(TCGContext *s, int ext, TCGReg rd, TCGReg src)
+{
+ /* register to register move using MOV (shifted register with no shift) */
+ /* using MOV 0x2a0003e0 | (shift).. */
+ unsigned int base = ext ? 0xaa0003e0 : 0x2a0003e0;
+ tcg_out32(s, base | src << 16 | rd);
+}
+
+static inline void tcg_out_movi_aux(TCGContext *s,
+ TCGReg rd, uint64_t value)
+{
+ uint32_t half, base, movk = 0, shift = 0;
+
+ /* construct halfwords of the immediate with MOVZ/MOVK with LSL */
+ /* using MOVZ 0x52800000 | extended reg.. */
+ base = (value > 0xffffffff) ? 0xd2800000 : 0x52800000;
+
+ do {
+ int skip_zeros = ctz64(value) & (63 & -16);
+ value >>= skip_zeros;
+ shift += skip_zeros << 17;
+ half = value & 0xffff;
+ tcg_out32(s, base | movk | shift | half << 5 | rd);
+ movk = 0x20000000; /* morph next MOVZs into MOVKs */
+ value >>= 16;
+ shift += 16 << 17;
+ } while (value);
+}
+
+static inline void tcg_out_movi(TCGContext *s, TCGType type,
+ TCGReg rd, tcg_target_long value)
+{
+ if (type == TCG_TYPE_I64) {
+ tcg_out_movi_aux(s, rd, value);
+ } else {
+ tcg_out_movi_aux(s, rd, value & 0xffffffff);
+ }
+}
+
+static inline void tcg_out_ldst_r(TCGContext *s,
+ enum aarch64_ldst_op_data op_data,
+ enum aarch64_ldst_op_type op_type,
+ TCGReg rd, TCGReg base, TCGReg regoff)
+{
+ /* load from memory to register using base + 64bit register offset */
+ /* using e.g. STR Wt, [Xn, Xm] 0xb8600800|(regoff << 16)|(base << 5)|rd */
+ /* the 0x6000 is for the "no extend field" */
+ tcg_out32(s, 0x00206800
+ | op_data << 24 | op_type << 20 | regoff << 16 | base << 5 | rd);
+}
+
+/* handle loads/stores with arbitrary offsets: use the 9-bit signed form
+ when the offset fits, otherwise go through TCG_REG_TMP */
+static inline void tcg_out_ldst(TCGContext *s, enum aarch64_ldst_op_data data,
+ enum aarch64_ldst_op_type type,
+ TCGReg rd, TCGReg rn, tcg_target_long offset)
+{
+ if (offset >= -256 && offset < 256) {
+ tcg_out_ldst_9(s, data, type, rd, rn, offset);
+ } else {
+ tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, offset);
+ tcg_out_ldst_r(s, data, type, rd, rn, TCG_REG_TMP);
+ }
+}
+
+/* mov alias implemented with add immediate, useful to move to/from SP */
+static inline void tcg_out_movr_sp(TCGContext *s, int ext, TCGReg rd, TCGReg rn)
+{
+ /* using ADD 0x11000000 | (ext) | rn << 5 | rd */
+ unsigned int base = ext ? 0x91000000 : 0x11000000;
+ tcg_out32(s, base | rn << 5 | rd);
+}
+
+static inline void tcg_out_mov(TCGContext *s,
+ TCGType type, TCGReg ret, TCGReg arg)
+{
+ if (ret != arg) {
+ tcg_out_movr(s, type == TCG_TYPE_I64, ret, arg);
+ }
+}
+
+static inline void tcg_out_ld(TCGContext *s, TCGType type, TCGReg arg,
+ TCGReg arg1, tcg_target_long arg2)
+{
+ tcg_out_ldst(s, (type == TCG_TYPE_I64) ? LDST_64 : LDST_32, LDST_LD,
+ arg, arg1, arg2);
+}
+
+static inline void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
+ TCGReg arg1, tcg_target_long arg2)
+{
+ tcg_out_ldst(s, (type == TCG_TYPE_I64) ? LDST_64 : LDST_32, LDST_ST,
+ arg, arg1, arg2);
+}
+
+static inline void tcg_out_arith(TCGContext *s, enum aarch64_arith_opc opc,
+ int ext, TCGReg rd, TCGReg rn, TCGReg rm)
+{
+ /* Using shifted register arithmetic operations */
+ /* if extended register operation (64bit) just OR with 0x80 << 24 */
+ unsigned int base = ext ? (0x80 | opc) << 24 : opc << 24;
+ tcg_out32(s, base | rm << 16 | rn << 5 | rd);
+}
+
+static inline void tcg_out_mul(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, TCGReg rm)
+{
+ /* Using MADD 0x1b000000 with Ra = wzr alias MUL 0x1b007c00 */
+ unsigned int base = ext ? 0x9b007c00 : 0x1b007c00;
+ tcg_out32(s, base | rm << 16 | rn << 5 | rd);
+}
+
+static inline void tcg_out_shiftrot_reg(TCGContext *s,
+ enum aarch64_srr_opc opc, int ext,
+ TCGReg rd, TCGReg rn, TCGReg rm)
+{
+ /* using 2-source data processing instructions 0x1ac02000 */
+ unsigned int base = ext ? 0x9ac02000 : 0x1ac02000;
+ tcg_out32(s, base | rm << 16 | opc << 8 | rn << 5 | rd);
+}
+
+static inline void tcg_out_ubfm(TCGContext *s, int ext, TCGReg rd, TCGReg rn,
+ unsigned int a, unsigned int b)
+{
+ /* Using UBFM 0x53000000 Wd, Wn, a, b */
+ unsigned int base = ext ? 0xd3400000 : 0x53000000;
+ tcg_out32(s, base | a << 16 | b << 10 | rn << 5 | rd);
+}
+
+static inline void tcg_out_sbfm(TCGContext *s, int ext, TCGReg rd, TCGReg rn,
+ unsigned int a, unsigned int b)
+{
+ /* Using SBFM 0x13000000 Wd, Wn, a, b */
+ unsigned int base = ext ? 0x93400000 : 0x13000000;
+ tcg_out32(s, base | a << 16 | b << 10 | rn << 5 | rd);
+}
+
+static inline void tcg_out_extr(TCGContext *s, int ext, TCGReg rd,
+ TCGReg rn, TCGReg rm, unsigned int a)
+{
+ /* Using EXTR 0x13800000 Wd, Wn, Wm, a */
+ unsigned int base = ext ? 0x93c00000 : 0x13800000;
+ tcg_out32(s, base | rm << 16 | a << 10 | rn << 5 | rd);
+}
+
+static inline void tcg_out_shl(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int m)
+{
+ int bits, max;
+ bits = ext ? 64 : 32;
+ max = bits - 1;
+ tcg_out_ubfm(s, ext, rd, rn, bits - (m & max), max - (m & max));
+}
+
+static inline void tcg_out_shr(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int m)
+{
+ int max = ext ? 63 : 31;
+ tcg_out_ubfm(s, ext, rd, rn, m & max, max);
+}
+
+static inline void tcg_out_sar(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int m)
+{
+ int max = ext ? 63 : 31;
+ tcg_out_sbfm(s, ext, rd, rn, m & max, max);
+}
+
+static inline void tcg_out_rotr(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int m)
+{
+ int max = ext ? 63 : 31;
+ tcg_out_extr(s, ext, rd, rn, rn, m & max);
+}
+
+static inline void tcg_out_rotl(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int m)
+{
+ int bits, max;
+ bits = ext ? 64 : 32;
+ max = bits - 1;
+ tcg_out_extr(s, ext, rd, rn, rn, bits - (m & max));
+}
+
+static inline void tcg_out_cmp(TCGContext *s, int ext, TCGReg rn, TCGReg rm)
+{
+ /* Using CMP alias SUBS wzr, Wn, Wm */
+ unsigned int base = ext ? 0xeb00001f : 0x6b00001f;
+ tcg_out32(s, base | rm << 16 | rn << 5);
+}
+
+static inline void tcg_out_cset(TCGContext *s, int ext, TCGReg rd, TCGCond c)
+{
+ /* Using CSET alias of CSINC 0x1a800400 Xd, XZR, XZR, invert(cond) */
+ unsigned int base = ext ? 0x9a9f07e0 : 0x1a9f07e0;
+ tcg_out32(s, base | tcg_cond_to_aarch64[tcg_invert_cond(c)] << 12 | rd);
+}
+
+static inline void tcg_out_goto(TCGContext *s, tcg_target_long target)
+{
+ tcg_target_long offset;
+ offset = (target - (tcg_target_long)s->code_ptr) / 4;
+
+ if (offset < -0x02000000 || offset >= 0x02000000) {
+ /* out of 26bit range */
+ tcg_abort();
+ }
+
+ tcg_out32(s, 0x14000000 | (offset & 0x03ffffff));
+}
+
+static inline void tcg_out_goto_noaddr(TCGContext *s)
+{
+ /* We pay attention here to not modify the branch target by
+ reading from the buffer. This ensures that caches and memory are
+ kept coherent during retranslation.
+ Mask away possible garbage in the high bits for the first translation,
+ while keeping the offset bits for retranslation. */
+ uint32_t insn;
+ insn = (tcg_in32(s) & 0x03ffffff) | 0x14000000;
+ tcg_out32(s, insn);
+}
+
+static inline void tcg_out_goto_cond_noaddr(TCGContext *s, TCGCond c)
+{
+ /* see comments in tcg_out_goto_noaddr */
+ uint32_t insn;
+ insn = tcg_in32(s) & (0x07ffff << 5);
+ insn |= 0x54000000 | tcg_cond_to_aarch64[c];
+ tcg_out32(s, insn);
+}
+
+static inline void tcg_out_goto_cond(TCGContext *s, TCGCond c,
+ tcg_target_long target)
+{
+ tcg_target_long offset;
+ offset = (target - (tcg_target_long)s->code_ptr) / 4;
+
+ if (offset < -0x40000 || offset >= 0x40000) {
+ /* out of 19bit range */
+ tcg_abort();
+ }
+
+ offset &= 0x7ffff;
+ tcg_out32(s, 0x54000000 | tcg_cond_to_aarch64[c] | offset << 5);
+}
+
+static inline void tcg_out_callr(TCGContext *s, TCGReg reg)
+{
+ tcg_out32(s, 0xd63f0000 | reg << 5);
+}
+
+static inline void tcg_out_gotor(TCGContext *s, TCGReg reg)
+{
+ tcg_out32(s, 0xd61f0000 | reg << 5);
+}
+
+static inline void tcg_out_call(TCGContext *s, tcg_target_long target)
+{
+ tcg_target_long offset;
+
+ offset = (target - (tcg_target_long)s->code_ptr) / 4;
+
+ if (offset < -0x02000000 || offset >= 0x02000000) { /* out of 26bit rng */
+ tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, target);
+ tcg_out_callr(s, TCG_REG_TMP);
+ } else {
+ tcg_out32(s, 0x94000000 | (offset & 0x03ffffff));
+ }
+}
+
+static inline void tcg_out_ret(TCGContext *s)
+{
+ /* emit RET { LR } */
+ tcg_out32(s, 0xd65f03c0);
+}
+
+void aarch64_tb_set_jmp_target(uintptr_t jmp_addr, uintptr_t addr)
+{
+ tcg_target_long target, offset;
+ target = (tcg_target_long)addr;
+ offset = (target - (tcg_target_long)jmp_addr) / 4;
+
+ if (offset < -0x02000000 || offset >= 0x02000000) {
+ /* out of 26bit range */
+ tcg_abort();
+ }
+
+ patch_reloc((uint8_t *)jmp_addr, R_AARCH64_JUMP26, target, 0);
+ flush_icache_range(jmp_addr, jmp_addr + 4);
+}
+
+static inline void tcg_out_goto_label(TCGContext *s, int label_index)
+{
+ TCGLabel *l = &s->labels[label_index];
+
+ if (!l->has_value) {
+ tcg_out_reloc(s, s->code_ptr, R_AARCH64_JUMP26, label_index, 0);
+ tcg_out_goto_noaddr(s);
+ } else {
+ tcg_out_goto(s, l->u.value);
+ }
+}
+
+static inline void tcg_out_goto_label_cond(TCGContext *s,
+ TCGCond c, int label_index)
+{
+ TCGLabel *l = &s->labels[label_index];
+
+ if (!l->has_value) {
+ tcg_out_reloc(s, s->code_ptr, R_AARCH64_CONDBR19, label_index, 0);
+ tcg_out_goto_cond_noaddr(s, c);
+ } else {
+ tcg_out_goto_cond(s, c, l->u.value);
+ }
+}
+
+#ifdef CONFIG_SOFTMMU
+#include "exec/softmmu_defs.h"
+
+/* helper signature: helper_ld_mmu(CPUState *env, target_ulong addr,
+ int mmu_idx) */
+static const void * const qemu_ld_helpers[4] = {
+ helper_ldb_mmu,
+ helper_ldw_mmu,
+ helper_ldl_mmu,
+ helper_ldq_mmu,
+};
+
+/* helper signature: helper_st_mmu(CPUState *env, target_ulong addr,
+ uintxx_t val, int mmu_idx) */
+static const void * const qemu_st_helpers[4] = {
+ helper_stb_mmu,
+ helper_stw_mmu,
+ helper_stl_mmu,
+ helper_stq_mmu,
+};
+
+#endif /* CONFIG_SOFTMMU */
+
+static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
+{
+ TCGReg addr_reg, data_reg;
+#ifdef CONFIG_SOFTMMU
+ int mem_index, s_bits;
+#endif
+ data_reg = args[0];
+ addr_reg = args[1];
+
+#ifdef CONFIG_SOFTMMU
+ mem_index = args[2];
+ s_bits = opc & 3;
+
+ /* TODO: insert TLB lookup here */
+
+ /* all arguments passed via registers */
+ tcg_out_movr(s, 1, TCG_REG_X0, TCG_AREG0);
+ tcg_out_movr(s, (TARGET_LONG_BITS == 64), TCG_REG_X1, addr_reg);
+ tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_X2, mem_index);
+ tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP,
+ (tcg_target_long)qemu_ld_helpers[s_bits]);
+ tcg_out_callr(s, TCG_REG_TMP);
+
+ if (opc & 0x04) { /* sign extend */
+ unsigned int bits = 8 * (1 << s_bits) - 1;
+ tcg_out_sbfm(s, 1, data_reg, TCG_REG_X0, 0, bits); /* 7|15|31 */
+ } else {
+ tcg_out_movr(s, 1, data_reg, TCG_REG_X0);
+ }
+
+#else /* !CONFIG_SOFTMMU */
+ tcg_abort(); /* TODO */
+#endif
+}
+
+static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
+{
+ TCGReg addr_reg, data_reg;
+#ifdef CONFIG_SOFTMMU
+ int mem_index, s_bits;
+#endif
+ data_reg = args[0];
+ addr_reg = args[1];
+
+#ifdef CONFIG_SOFTMMU
+ mem_index = args[2];
+ s_bits = opc & 3;
+
+ /* TODO: insert TLB lookup here */
+
+ /* all arguments passed via registers */
+ tcg_out_movr(s, 1, TCG_REG_X0, TCG_AREG0);
+ tcg_out_movr(s, (TARGET_LONG_BITS == 64), TCG_REG_X1, addr_reg);
+ tcg_out_movr(s, 1, TCG_REG_X2, data_reg);
+ tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_X3, mem_index);
+ tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP,
+ (tcg_target_long)qemu_st_helpers[s_bits]);
+ tcg_out_callr(s, TCG_REG_TMP);
+
+#else /* !CONFIG_SOFTMMU */
+ tcg_abort(); /* TODO */
+#endif
+}
+
+static uint8_t *tb_ret_addr;
+
+/* callee stack use example:
+ stp x29, x30, [sp,#-32]!
+ mov x29, sp
+ stp x1, x2, [sp,#16]
+ ...
+ ldp x1, x2, [sp,#16]
+ ldp x29, x30, [sp],#32
+ ret
+*/
+
+/* push r1 and r2, and alloc stack space for a total of
+ alloc_n elements (1 element = 16 bytes); alloc_n must be between 1 and 31. */
+static inline void tcg_out_push_pair(TCGContext *s, TCGReg addr,
+ TCGReg r1, TCGReg r2, int alloc_n)
+{
+ /* using indexed scaled simm7 STP 0x28800000 | (ext) | 0x01000000 (pre-idx)
+ | alloc_n * (-1) << 16 | r2 << 10 | addr << 5 | r1 */
+ assert(alloc_n > 0 && alloc_n < 0x20);
+ alloc_n = (-alloc_n) & 0x3f;
+ tcg_out32(s, 0xa9800000 | alloc_n << 16 | r2 << 10 | addr << 5 | r1);
+}
+
+/* dealloc stack space for a total of alloc_n elements and pop r1, r2. */
+static inline void tcg_out_pop_pair(TCGContext *s, TCGReg addr,
+ TCGReg r1, TCGReg r2, int alloc_n)
+{
+ /* using indexed scaled simm7 LDP 0x28c00000 | (ext) | nothing (post-idx)
+ | alloc_n << 16 | r2 << 10 | addr << 5 | r1 */
+ assert(alloc_n > 0 && alloc_n < 0x20);
+ tcg_out32(s, 0xa8c00000 | alloc_n << 16 | r2 << 10 | addr << 5 | r1);
+}
+
+static inline void tcg_out_store_pair(TCGContext *s, TCGReg addr,
+ TCGReg r1, TCGReg r2, int idx)
+{
+ /* using register pair offset simm7 STP 0x29000000 | (ext)
+ | idx << 16 | r2 << 10 | addr << 5 | r1 */
+ assert(idx > 0 && idx < 0x20);
+ tcg_out32(s, 0xa9000000 | idx << 16 | r2 << 10 | addr << 5 | r1);
+}
+
+static inline void tcg_out_load_pair(TCGContext *s, TCGReg addr,
+ TCGReg r1, TCGReg r2, int idx)
+{
+ /* using register pair offset simm7 LDP 0x29400000 | (ext)
+ | idx << 16 | r2 << 10 | addr << 5 | r1 */
+ assert(idx > 0 && idx < 0x20);
+ tcg_out32(s, 0xa9400000 | idx << 16 | r2 << 10 | addr << 5 | r1);
+}
+
+static void tcg_out_op(TCGContext *s, TCGOpcode opc,
+ const TCGArg *args, const int *const_args)
+{
+ /* ext will be set in the switch below, which will fall through to the
+ common code. It triggers the use of extended regs where appropriate. */
+ int ext = 0;
+
+ switch (opc) {
+ case INDEX_op_exit_tb:
+ tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_X0, args[0]);
+ tcg_out_goto(s, (tcg_target_long)tb_ret_addr);
+ break;
+
+ case INDEX_op_goto_tb:
+#ifndef USE_DIRECT_JUMP
+#error "USE_DIRECT_JUMP required for aarch64"
+#endif
+ assert(s->tb_jmp_offset != NULL); /* consistency for USE_DIRECT_JUMP */
+ s->tb_jmp_offset[args[0]] = s->code_ptr - s->code_buf;
+ /* actual branch destination will be patched by
+ aarch64_tb_set_jmp_target later, beware retranslation. */
+ tcg_out_goto_noaddr(s);
+ s->tb_next_offset[args[0]] = s->code_ptr - s->code_buf;
+ break;
+
+ case INDEX_op_call:
+ if (const_args[0]) {
+ tcg_out_call(s, args[0]);
+ } else {
+ tcg_out_callr(s, args[0]);
+ }
+ break;
+
+ case INDEX_op_br:
+ tcg_out_goto_label(s, args[0]);
+ break;
+
+ case INDEX_op_ld_i32:
+ case INDEX_op_ld_i64:
+ case INDEX_op_st_i32:
+ case INDEX_op_st_i64:
+ case INDEX_op_ld8u_i32:
+ case INDEX_op_ld8s_i32:
+ case INDEX_op_ld16u_i32:
+ case INDEX_op_ld16s_i32:
+ case INDEX_op_ld8u_i64:
+ case INDEX_op_ld8s_i64:
+ case INDEX_op_ld16u_i64:
+ case INDEX_op_ld16s_i64:
+ case INDEX_op_ld32u_i64:
+ case INDEX_op_ld32s_i64:
+ case INDEX_op_st8_i32:
+ case INDEX_op_st8_i64:
+ case INDEX_op_st16_i32:
+ case INDEX_op_st16_i64:
+ case INDEX_op_st32_i64:
+ tcg_out_ldst(s, aarch64_ldst_get_data(opc), aarch64_ldst_get_type(opc),
+ args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_mov_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_mov_i32:
+ tcg_out_movr(s, ext, args[0], args[1]);
+ break;
+
+ case INDEX_op_movi_i64:
+ tcg_out_movi(s, TCG_TYPE_I64, args[0], args[1]);
+ break;
+ case INDEX_op_movi_i32:
+ tcg_out_movi(s, TCG_TYPE_I32, args[0], args[1]);
+ break;
+
+ case INDEX_op_add_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_add_i32:
+ tcg_out_arith(s, ARITH_ADD, ext, args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_sub_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_sub_i32:
+ tcg_out_arith(s, ARITH_SUB, ext, args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_and_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_and_i32:
+ tcg_out_arith(s, ARITH_AND, ext, args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_or_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_or_i32:
+ tcg_out_arith(s, ARITH_OR, ext, args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_xor_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_xor_i32:
+ tcg_out_arith(s, ARITH_XOR, ext, args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_mul_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_mul_i32:
+ tcg_out_mul(s, ext, args[0], args[1], args[2]);
+ break;
+
+ case INDEX_op_shl_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_shl_i32:
+ if (const_args[2]) { /* LSL / UBFM Wd, Wn, (32 - m) */
+ tcg_out_shl(s, ext, args[0], args[1], args[2]);
+ } else { /* LSL / LSLV */
+ tcg_out_shiftrot_reg(s, SRR_SHL, ext, args[0], args[1], args[2]);
+ }
+ break;
+
+ case INDEX_op_shr_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_shr_i32:
+ if (const_args[2]) { /* LSR / UBFM Wd, Wn, m, 31 */
+ tcg_out_shr(s, ext, args[0], args[1], args[2]);
+ } else { /* LSR / LSRV */
+ tcg_out_shiftrot_reg(s, SRR_SHR, ext, args[0], args[1], args[2]);
+ }
+ break;
+
+ case INDEX_op_sar_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_sar_i32:
+ if (const_args[2]) { /* ASR / SBFM Wd, Wn, m, 31 */
+ tcg_out_sar(s, ext, args[0], args[1], args[2]);
+ } else { /* ASR / ASRV */
+ tcg_out_shiftrot_reg(s, SRR_SAR, ext, args[0], args[1], args[2]);
+ }
+ break;
+
+ case INDEX_op_rotr_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_rotr_i32:
+ if (const_args[2]) { /* ROR / EXTR Wd, Wm, Wm, m */
+ tcg_out_rotr(s, ext, args[0], args[1], args[2]);
+ } else { /* ROR / RORV */
+ tcg_out_shiftrot_reg(s, SRR_ROR, ext, args[0], args[1], args[2]);
+ }
+ break;
+
+ case INDEX_op_rotl_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_rotl_i32: /* same as rotate right by (32 - m) */
+ if (const_args[2]) { /* ROR / EXTR Wd, Wm, Wm, 32 - m */
+ tcg_out_rotl(s, ext, args[0], args[1], args[2]);
+ } else {
+ tcg_out_arith(s, ARITH_SUB, 0, TCG_REG_TMP, TCG_REG_XZR, args[2]);
+ tcg_out_shiftrot_reg(s, SRR_ROR, ext,
+ args[0], args[1], TCG_REG_TMP);
+ }
+ break;
+
+ case INDEX_op_brcond_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_brcond_i32: /* CMP 0, 1, cond(2), label 3 */
+ tcg_out_cmp(s, ext, args[0], args[1]);
+ tcg_out_goto_label_cond(s, args[2], args[3]);
+ break;
+
+ case INDEX_op_setcond_i64:
+ ext = 1; /* fall through */
+ case INDEX_op_setcond_i32:
+ tcg_out_cmp(s, ext, args[1], args[2]);
+ tcg_out_cset(s, 0, args[0], args[3]);
+ break;
+
+ case INDEX_op_qemu_ld8u:
+ tcg_out_qemu_ld(s, args, 0 | 0);
+ break;
+ case INDEX_op_qemu_ld8s:
+ tcg_out_qemu_ld(s, args, 4 | 0);
+ break;
+ case INDEX_op_qemu_ld16u:
+ tcg_out_qemu_ld(s, args, 0 | 1);
+ break;
+ case INDEX_op_qemu_ld16s:
+ tcg_out_qemu_ld(s, args, 4 | 1);
+ break;
+ case INDEX_op_qemu_ld32u:
+ tcg_out_qemu_ld(s, args, 0 | 2);
+ break;
+ case INDEX_op_qemu_ld32s:
+ tcg_out_qemu_ld(s, args, 4 | 2);
+ break;
+ case INDEX_op_qemu_ld32:
+ tcg_out_qemu_ld(s, args, 0 | 2);
+ break;
+ case INDEX_op_qemu_ld64:
+ tcg_out_qemu_ld(s, args, 0 | 3);
+ break;
+ case INDEX_op_qemu_st8:
+ tcg_out_qemu_st(s, args, 0);
+ break;
+ case INDEX_op_qemu_st16:
+ tcg_out_qemu_st(s, args, 1);
+ break;
+ case INDEX_op_qemu_st32:
+ tcg_out_qemu_st(s, args, 2);
+ break;
+ case INDEX_op_qemu_st64:
+ tcg_out_qemu_st(s, args, 3);
+ break;
+
+ default:
+ tcg_abort(); /* opcode not implemented */
+ }
+}
+
+static const TCGTargetOpDef aarch64_op_defs[] = {
+ { INDEX_op_exit_tb, { } },
+ { INDEX_op_goto_tb, { } },
+ { INDEX_op_call, { "ri" } },
+ { INDEX_op_br, { } },
+
+ { INDEX_op_mov_i32, { "r", "r" } },
+ { INDEX_op_mov_i64, { "r", "r" } },
+
+ { INDEX_op_movi_i32, { "r" } },
+ { INDEX_op_movi_i64, { "r" } },
+
+ { INDEX_op_ld8u_i32, { "r", "r" } },
+ { INDEX_op_ld8s_i32, { "r", "r" } },
+ { INDEX_op_ld16u_i32, { "r", "r" } },
+ { INDEX_op_ld16s_i32, { "r", "r" } },
+ { INDEX_op_ld_i32, { "r", "r" } },
+ { INDEX_op_ld8u_i64, { "r", "r" } },
+ { INDEX_op_ld8s_i64, { "r", "r" } },
+ { INDEX_op_ld16u_i64, { "r", "r" } },
+ { INDEX_op_ld16s_i64, { "r", "r" } },
+ { INDEX_op_ld32u_i64, { "r", "r" } },
+ { INDEX_op_ld32s_i64, { "r", "r" } },
+ { INDEX_op_ld_i64, { "r", "r" } },
+
+ { INDEX_op_st8_i32, { "r", "r" } },
+ { INDEX_op_st16_i32, { "r", "r" } },
+ { INDEX_op_st_i32, { "r", "r" } },
+ { INDEX_op_st8_i64, { "r", "r" } },
+ { INDEX_op_st16_i64, { "r", "r" } },
+ { INDEX_op_st32_i64, { "r", "r" } },
+ { INDEX_op_st_i64, { "r", "r" } },
+
+ { INDEX_op_add_i32, { "r", "r", "r" } },
+ { INDEX_op_add_i64, { "r", "r", "r" } },
+ { INDEX_op_sub_i32, { "r", "r", "r" } },
+ { INDEX_op_sub_i64, { "r", "r", "r" } },
+ { INDEX_op_mul_i32, { "r", "r", "r" } },
+ { INDEX_op_mul_i64, { "r", "r", "r" } },
+ { INDEX_op_and_i32, { "r", "r", "r" } },
+ { INDEX_op_and_i64, { "r", "r", "r" } },
+ { INDEX_op_or_i32, { "r", "r", "r" } },
+ { INDEX_op_or_i64, { "r", "r", "r" } },
+ { INDEX_op_xor_i32, { "r", "r", "r" } },
+ { INDEX_op_xor_i64, { "r", "r", "r" } },
+
+ { INDEX_op_shl_i32, { "r", "r", "ri" } },
+ { INDEX_op_shr_i32, { "r", "r", "ri" } },
+ { INDEX_op_sar_i32, { "r", "r", "ri" } },
+ { INDEX_op_rotl_i32, { "r", "r", "ri" } },
+ { INDEX_op_rotr_i32, { "r", "r", "ri" } },
+ { INDEX_op_shl_i64, { "r", "r", "ri" } },
+ { INDEX_op_shr_i64, { "r", "r", "ri" } },
+ { INDEX_op_sar_i64, { "r", "r", "ri" } },
+ { INDEX_op_rotl_i64, { "r", "r", "ri" } },
+ { INDEX_op_rotr_i64, { "r", "r", "ri" } },
+
+ { INDEX_op_brcond_i32, { "r", "r" } },
+ { INDEX_op_setcond_i32, { "r", "r", "r" } },
+ { INDEX_op_brcond_i64, { "r", "r" } },
+ { INDEX_op_setcond_i64, { "r", "r", "r" } },
+
+ { INDEX_op_qemu_ld8u, { "r", "l" } },
+ { INDEX_op_qemu_ld8s, { "r", "l" } },
+ { INDEX_op_qemu_ld16u, { "r", "l" } },
+ { INDEX_op_qemu_ld16s, { "r", "l" } },
+ { INDEX_op_qemu_ld32u, { "r", "l" } },
+ { INDEX_op_qemu_ld32s, { "r", "l" } },
+
+ { INDEX_op_qemu_ld32, { "r", "l" } },
+ { INDEX_op_qemu_ld64, { "r", "l" } },
+
+ { INDEX_op_qemu_st8, { "l", "l" } },
+ { INDEX_op_qemu_st16, { "l", "l" } },
+ { INDEX_op_qemu_st32, { "l", "l" } },
+ { INDEX_op_qemu_st64, { "l", "l" } },
+ { -1 },
+};
+
+static void tcg_target_init(TCGContext *s)
+{
+#if !defined(CONFIG_USER_ONLY)
+ /* fail safe */
+ if ((1ULL << CPU_TLB_ENTRY_BITS) != sizeof(CPUTLBEntry)) {
+ tcg_abort();
+ }
+#endif
+ tcg_regset_set32(tcg_target_available_regs[TCG_TYPE_I32], 0, 0xffffffff);
+ tcg_regset_set32(tcg_target_available_regs[TCG_TYPE_I64], 0, 0xffffffff);
+
+ tcg_regset_set32(tcg_target_call_clobber_regs, 0,
+ (1 << TCG_REG_X0) | (1 << TCG_REG_X1) |
+ (1 << TCG_REG_X2) | (1 << TCG_REG_X3) |
+ (1 << TCG_REG_X4) | (1 << TCG_REG_X5) |
+ (1 << TCG_REG_X6) | (1 << TCG_REG_X7) |
+ (1 << TCG_REG_X8) | (1 << TCG_REG_X9) |
+ (1 << TCG_REG_X10) | (1 << TCG_REG_X11) |
+ (1 << TCG_REG_X12) | (1 << TCG_REG_X13) |
+ (1 << TCG_REG_X14) | (1 << TCG_REG_X15) |
+ (1 << TCG_REG_X16) | (1 << TCG_REG_X17) |
+ (1 << TCG_REG_X18));
+
+ tcg_regset_clear(s->reserved_regs);
+ tcg_regset_set_reg(s->reserved_regs, TCG_REG_SP);
+ tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP);
+ tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */
+
+ tcg_add_target_add_op_defs(aarch64_op_defs);
+}
+
+static inline void tcg_out_addi(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int aimm)
+{
+ /* add immediate aimm unsigned 12bit value (we use LSL 0 - no shift) */
+ /* using ADD 0x11000000 | (ext) | (aimm << 10) | (rn << 5) | rd */
+ unsigned int base = ext ? 0x91000000 : 0x11000000;
+ assert(aimm <= 0xfff);
+ tcg_out32(s, base | (aimm << 10) | (rn << 5) | rd);
+}
+
+static inline void tcg_out_subi(TCGContext *s, int ext,
+ TCGReg rd, TCGReg rn, unsigned int aimm)
+{
+ /* sub immediate aimm unsigned 12bit value (we use LSL 0 - no shift) */
+ /* using SUB 0x51000000 | (ext) | (aimm << 10) | (rn << 5) | rd */
+ unsigned int base = ext ? 0xd1000000 : 0x51000000;
+ assert(aimm <= 0xfff);
+ tcg_out32(s, base | (aimm << 10) | (rn << 5) | rd);
+}
+
+static void tcg_target_qemu_prologue(TCGContext *s)
+{
+ /* NB: frame sizes are in 16 byte stack units! */
+ int frame_size_callee_saved, frame_size_tcg_locals;
+ TCGReg r;
+
+ /* save pairs (FP, LR) and (X19, X20) .. (X27, X28) */
+ frame_size_callee_saved = (1) + (TCG_REG_X28 - TCG_REG_X19) / 2 + 1;
+
+ /* frame size requirement for TCG local variables */
+ frame_size_tcg_locals = TCG_STATIC_CALL_ARGS_SIZE
+ + CPU_TEMP_BUF_NLONGS * sizeof(long)
+ + (TCG_TARGET_STACK_ALIGN - 1);
+ frame_size_tcg_locals &= ~(TCG_TARGET_STACK_ALIGN - 1);
+ frame_size_tcg_locals /= TCG_TARGET_STACK_ALIGN;
+
+ /* push (FP, LR) and update sp */
+ tcg_out_push_pair(s, TCG_REG_SP,
+ TCG_REG_FP, TCG_REG_LR, frame_size_callee_saved);
+
+ /* FP -> callee_saved */
+ tcg_out_movr_sp(s, 1, TCG_REG_FP, TCG_REG_SP);
+
+ /* store callee-preserved regs x19..x28 using FP -> callee_saved */
+ for (r = TCG_REG_X19; r <= TCG_REG_X27; r += 2) {
+ int idx = (r - TCG_REG_X19) / 2 + 1;
+ tcg_out_store_pair(s, TCG_REG_FP, r, r + 1, idx);
+ }
+
+ /* make stack space for TCG locals */
+ tcg_out_subi(s, 1, TCG_REG_SP, TCG_REG_SP,
+ frame_size_tcg_locals * TCG_TARGET_STACK_ALIGN);
+ /* inform TCG about how to find TCG locals with register, offset, size */
+ tcg_set_frame(s, TCG_REG_SP, TCG_STATIC_CALL_ARGS_SIZE,
+ CPU_TEMP_BUF_NLONGS * sizeof(long));
+
+ tcg_out_mov(s, TCG_TYPE_PTR, TCG_AREG0, tcg_target_call_iarg_regs[0]);
+ tcg_out_gotor(s, tcg_target_call_iarg_regs[1]);
+
+ tb_ret_addr = s->code_ptr;
+
+ /* remove TCG locals stack space */
+ tcg_out_addi(s, 1, TCG_REG_SP, TCG_REG_SP,
+ frame_size_tcg_locals * TCG_TARGET_STACK_ALIGN);
+
+ /* restore registers x19..x28.
+ FP must be preserved, so it still points to callee_saved area */
+ for (r = TCG_REG_X19; r <= TCG_REG_X27; r += 2) {
+ int idx = (r - TCG_REG_X19) / 2 + 1;
+ tcg_out_load_pair(s, TCG_REG_FP, r, r + 1, idx);
+ }
+
+ /* pop (FP, LR), restore SP to previous frame, return */
+ tcg_out_pop_pair(s, TCG_REG_SP,
+ TCG_REG_FP, TCG_REG_LR, frame_size_callee_saved);
+ tcg_out_ret(s);
+}
diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
new file mode 100644
index 0000000..075ab2a
--- /dev/null
+++ b/tcg/aarch64/tcg-target.h
@@ -0,0 +1,99 @@
+/*
+ * Initial TCG Implementation for aarch64
+ *
+ * Copyright (c) 2013 Huawei Technologies Duesseldorf GmbH
+ * Written by Claudio Fontana
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * (at your option) any later version.
+ *
+ * See the COPYING file in the top-level directory for details.
+ */
+
+#ifndef TCG_TARGET_AARCH64
+#define TCG_TARGET_AARCH64 1
+
+#undef TCG_TARGET_WORDS_BIGENDIAN
+#undef TCG_TARGET_STACK_GROWSUP
+
+typedef enum {
+ TCG_REG_X0, TCG_REG_X1, TCG_REG_X2, TCG_REG_X3, TCG_REG_X4,
+ TCG_REG_X5, TCG_REG_X6, TCG_REG_X7, TCG_REG_X8, TCG_REG_X9,
+ TCG_REG_X10, TCG_REG_X11, TCG_REG_X12, TCG_REG_X13, TCG_REG_X14,
+ TCG_REG_X15, TCG_REG_X16, TCG_REG_X17, TCG_REG_X18, TCG_REG_X19,
+ TCG_REG_X20, TCG_REG_X21, TCG_REG_X22, TCG_REG_X23, TCG_REG_X24,
+ TCG_REG_X25, TCG_REG_X26, TCG_REG_X27, TCG_REG_X28,
+ TCG_REG_FP, /* frame pointer */
+ TCG_REG_LR, /* link register */
+ TCG_REG_SP, /* stack pointer or zero register */
+ TCG_REG_XZR = TCG_REG_SP /* same register number */
+ /* program counter is not directly accessible! */
+} TCGReg;
+
+#define TCG_TARGET_NB_REGS 32
+
+/* used for function call generation */
+#define TCG_REG_CALL_STACK TCG_REG_SP
+#define TCG_TARGET_STACK_ALIGN 16
+#define TCG_TARGET_CALL_ALIGN_ARGS 1
+#define TCG_TARGET_CALL_STACK_OFFSET 0
+
+/* optional instructions */
+#define TCG_TARGET_HAS_div_i32 0
+#define TCG_TARGET_HAS_ext8s_i32 0
+#define TCG_TARGET_HAS_ext16s_i32 0
+#define TCG_TARGET_HAS_ext8u_i32 0
+#define TCG_TARGET_HAS_ext16u_i32 0
+#define TCG_TARGET_HAS_bswap16_i32 0
+#define TCG_TARGET_HAS_bswap32_i32 0
+#define TCG_TARGET_HAS_not_i32 0
+#define TCG_TARGET_HAS_neg_i32 0
+#define TCG_TARGET_HAS_rot_i32 1
+#define TCG_TARGET_HAS_andc_i32 0
+#define TCG_TARGET_HAS_orc_i32 0
+#define TCG_TARGET_HAS_eqv_i32 0
+#define TCG_TARGET_HAS_nand_i32 0
+#define TCG_TARGET_HAS_nor_i32 0
+#define TCG_TARGET_HAS_deposit_i32 0
+#define TCG_TARGET_HAS_movcond_i32 0
+#define TCG_TARGET_HAS_add2_i32 0
+#define TCG_TARGET_HAS_sub2_i32 0
+#define TCG_TARGET_HAS_mulu2_i32 0
+#define TCG_TARGET_HAS_muls2_i32 0
+
+#define TCG_TARGET_HAS_div_i64 0
+#define TCG_TARGET_HAS_ext8s_i64 0
+#define TCG_TARGET_HAS_ext16s_i64 0
+#define TCG_TARGET_HAS_ext32s_i64 0
+#define TCG_TARGET_HAS_ext8u_i64 0
+#define TCG_TARGET_HAS_ext16u_i64 0
+#define TCG_TARGET_HAS_ext32u_i64 0
+#define TCG_TARGET_HAS_bswap16_i64 0
+#define TCG_TARGET_HAS_bswap32_i64 0
+#define TCG_TARGET_HAS_bswap64_i64 0
+#define TCG_TARGET_HAS_not_i64 0
+#define TCG_TARGET_HAS_neg_i64 0
+#define TCG_TARGET_HAS_rot_i64 1
+#define TCG_TARGET_HAS_andc_i64 0
+#define TCG_TARGET_HAS_orc_i64 0
+#define TCG_TARGET_HAS_eqv_i64 0
+#define TCG_TARGET_HAS_nand_i64 0
+#define TCG_TARGET_HAS_nor_i64 0
+#define TCG_TARGET_HAS_deposit_i64 0
+#define TCG_TARGET_HAS_movcond_i64 0
+#define TCG_TARGET_HAS_add2_i64 0
+#define TCG_TARGET_HAS_sub2_i64 0
+#define TCG_TARGET_HAS_mulu2_i64 0
+#define TCG_TARGET_HAS_muls2_i64 0
+
+enum {
+ TCG_AREG0 = TCG_REG_X19,
+};
+
+static inline void flush_icache_range(tcg_target_ulong start,
+ tcg_target_ulong stop)
+{
+ __builtin___clear_cache((char *)start, (char *)stop);
+}
+
+#endif /* TCG_TARGET_AARCH64 */
diff --git a/translate-all.c b/translate-all.c
index 211be31..da6edae 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -460,6 +460,8 @@ static inline PageDesc *page_find(tb_page_addr_t index)
# define MAX_CODE_GEN_BUFFER_SIZE (2ul * 1024 * 1024 * 1024)
#elif defined(__sparc__)
# define MAX_CODE_GEN_BUFFER_SIZE (2ul * 1024 * 1024 * 1024)
+#elif defined(__aarch64__)
+# define MAX_CODE_GEN_BUFFER_SIZE (128ul * 1024 * 1024)
#elif defined(__arm__)
# define MAX_CODE_GEN_BUFFER_SIZE (16u * 1024 * 1024)
#elif defined(__s390x__)
--
1.8.1
* [Qemu-devel] [PATCH v3 3/3] configure: permit compilation on arm aarch64
2013-05-28 15:23 [Qemu-devel] [PATCH v3 0/3] ARM aarch64 TCG target Claudio Fontana
2013-05-28 15:26 ` [Qemu-devel] [PATCH v3 1/3] include/elf.h: add aarch64 ELF machine and relocs Claudio Fontana
2013-05-28 15:28 ` [Qemu-devel] [PATCH v3 2/3] tcg/aarch64: implement new TCG target for aarch64 Claudio Fontana
@ 2013-05-28 15:30 ` Claudio Fontana
From: Claudio Fontana @ 2013-05-28 15:30 UTC (permalink / raw)
To: Peter Maydell
Cc: Laurent Desnogues, Jani Kokkonen, qemu-devel@nongnu.org,
Richard Henderson
Support compiling on aarch64 hosts.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Claudio Fontana <claudio.fontana@huawei.com>
---
configure | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/configure b/configure
index eb74510..f021bdd 100755
--- a/configure
+++ b/configure
@@ -385,6 +385,8 @@ elif check_define __s390__ ; then
fi
elif check_define __arm__ ; then
cpu="arm"
+elif check_define __aarch64__ ; then
+ cpu="aarch64"
elif check_define __hppa__ ; then
cpu="hppa"
else
@@ -407,6 +409,9 @@ case "$cpu" in
armv*b|armv*l|arm)
cpu="arm"
;;
+ aarch64)
+ cpu="aarch64"
+ ;;
hppa|parisc|parisc64)
cpu="hppa"
;;
@@ -4127,6 +4132,9 @@ if test "$linux" = "yes" ; then
s390x)
linux_arch=s390
;;
+ aarch64)
+ linux_arch=arm64
+ ;;
*)
# For most CPUs the kernel architecture name and QEMU CPU name match.
linux_arch="$cpu"
--
1.8.1
* Re: [Qemu-devel] [PATCH v3 2/3] tcg/aarch64: implement new TCG target for aarch64
2013-05-28 15:28 ` [Qemu-devel] [PATCH v3 2/3] tcg/aarch64: implement new TCG target for aarch64 Claudio Fontana
@ 2013-05-28 16:18 ` Richard Henderson
2013-05-29 7:44 ` Claudio Fontana
From: Richard Henderson @ 2013-05-28 16:18 UTC (permalink / raw)
To: Claudio Fontana
Cc: Laurent Desnogues, Peter Maydell, Jani Kokkonen,
qemu-devel@nongnu.org
On 05/28/2013 08:28 AM, Claudio Fontana wrote:
> +static inline void tcg_out_movi_aux(TCGContext *s,
> + TCGReg rd, uint64_t value)
> +{
> + uint32_t half, base, movk = 0, shift = 0;
> +
> + /* construct halfwords of the immediate with MOVZ/MOVK with LSL */
> + /* using MOVZ 0x52800000 | extended reg.. */
> + base = (value > 0xffffffff) ? 0xd2800000 : 0x52800000;
> +
> + do {
> + int skip_zeros = ctz64(value) & (63 & -16);
> + value >>= skip_zeros;
> + shift += skip_zeros << 17;
> + half = value & 0xffff;
> + tcg_out32(s, base | movk | shift | half << 5 | rd);
> + movk = 0x20000000; /* morph next MOVZs into MOVKs */
> + value >>= 16;
> + shift += 16 << 17;
This is way more confusing than it needs to be. I don't think you
should modify VALUE by shifting at all. If you don't do that then
you don't need to make SHIFT loop carried, since we compute its
exact correct value every time with the ctz.
Was the only bug in the code that I pasted the lack of the shift-by-17
when encoding SHIFT into the tcg_out32?
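For reference, a minimal sketch of the loop shape being suggested here
(illustration only; the helper name is invented, and it relies on the
ctz64 and tcg_out32 helpers already used in the patch, not the code that
was eventually merged):

static void movi_sketch(TCGContext *s, TCGReg rd, uint64_t value)
{
    /* MOVZ for the first halfword, MOVK for the rest; 0xd28.. is the
       64-bit form, 0x528.. the 32-bit form */
    uint32_t base = (value > 0xffffffffull) ? 0xd2800000 : 0x52800000;
    uint32_t movk = 0;

    do {
        /* position of the lowest non-zero halfword: 0, 16, 32 or 48 */
        unsigned shift = ctz64(value) & (63 & -16);
        uint32_t half = (value >> shift) & 0xffff;

        tcg_out32(s, base | movk | shift << 17 | half << 5 | rd);
        value &= ~(0xffffull << shift);   /* drop the halfword just emitted */
        movk = 0x20000000;                /* later insns become MOVK */
    } while (value);
}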
> +static inline void tcg_out_movi(TCGContext *s, TCGType type,
> + TCGReg rd, tcg_target_long value)
> +{
> + if (type == TCG_TYPE_I64) {
> + tcg_out_movi_aux(s, rd, value);
> + } else {
> + tcg_out_movi_aux(s, rd, value & 0xffffffff);
> + }
> +}
Any reason you're splitting out tcg_out_movi_aux to a separate function?
> + tcg_regset_clear(s->reserved_regs);
> + tcg_regset_set_reg(s->reserved_regs, TCG_REG_SP);
> + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP);
> + tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */
Reserve the frame pointer.
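(In this file that would be one more call with the API used just above,
e.g.: tcg_regset_set_reg(s->reserved_regs, TCG_REG_FP);)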
r~
* Re: [Qemu-devel] [PATCH v3 2/3] tcg/aarch64: implement new TCG target for aarch64
2013-05-28 16:18 ` Richard Henderson
@ 2013-05-29 7:44 ` Claudio Fontana
From: Claudio Fontana @ 2013-05-29 7:44 UTC (permalink / raw)
To: Richard Henderson
Cc: Laurent Desnogues, Peter Maydell, Jani Kokkonen,
qemu-devel@nongnu.org
On 28.05.2013 18:18, Richard Henderson wrote:
> On 05/28/2013 08:28 AM, Claudio Fontana wrote:
>> +static inline void tcg_out_movi_aux(TCGContext *s,
>> + TCGReg rd, uint64_t value)
>> +{
>> + uint32_t half, base, movk = 0, shift = 0;
>> +
>> + /* construct halfwords of the immediate with MOVZ/MOVK with LSL */
>> + /* using MOVZ 0x52800000 | extended reg.. */
>> + base = (value > 0xffffffff) ? 0xd2800000 : 0x52800000;
>> +
>> + do {
>> + int skip_zeros = ctz64(value) & (63 & -16);
>> + value >>= skip_zeros;
>> + shift += skip_zeros << 17;
>> + half = value & 0xffff;
>> + tcg_out32(s, base | movk | shift | half << 5 | rd);
>> + movk = 0x20000000; /* morph next MOVZs into MOVKs */
>> + value >>= 16;
>> + shift += 16 << 17;
>
> This is way more confusing than it needs to be. I don't think you
> should modify VALUE by shifting at all. If you don't do that then
> you don't need to make SHIFT loop carried, since we compute its
> exact correct value every time with the ctz.
>
> Was the only bug in the code that I pasted the lack of the shift-by-17
> when encoding SHIFT into the tcg_out32?
Yes, you only forgot to encode the shift in the tcg_out32; the variation
above was an attempt to make it easier to understand.
I agree that the approach that avoids changing value in the right shift
is more concise; I'll go back to that, adding a comment about how the
function works.
>> +static inline void tcg_out_movi(TCGContext *s, TCGType type,
>> + TCGReg rd, tcg_target_long value)
>> +{
>> + if (type == TCG_TYPE_I64) {
>> + tcg_out_movi_aux(s, rd, value);
>> + } else {
>> + tcg_out_movi_aux(s, rd, value & 0xffffffff);
>> + }
>> +}
>
> Any reason you're splitting out tcg_out_movi_aux to a separate function?
tcg_out_movi is an interface with tcg, and as such its prototype is fixed.
I'd rather work with a value that is unsigned, because of the right shift.
Having a separate _aux function does that without adding another local
variable and another operation to understand in the function that contains
the actual algorithm.
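To spell out the right-shift point: in C, right-shifting a negative signed
value is implementation-defined and normally an arithmetic shift, so a
termination loop like the one above, written on a signed tcg_target_long,
would never reach zero for negative constants, while the uint64_t version
does. A small standalone illustration, not part of the patch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t  s = -1;            /* what a negative signed constant would do */
    uint64_t u = UINT64_MAX;

    /* typically prints ffffffffffffffff: the sign bit is replicated,
       so repeated >>= 16 would never reach zero */
    printf("%llx\n", (unsigned long long)(s >> 16));
    /* prints ffffffffffff: zeros are shifted in, so the loop terminates */
    printf("%llx\n", (unsigned long long)(u >> 16));
    return 0;
}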
>
>> + tcg_regset_clear(s->reserved_regs);
>> + tcg_regset_set_reg(s->reserved_regs, TCG_REG_SP);
>> + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP);
>> + tcg_regset_set_reg(s->reserved_regs, TCG_REG_X18); /* platform register */
>
> Reserve the frame pointer.
Ok.
> r~
Claudio