* [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32
@ 2025-02-05 4:03 Richard Henderson
2025-02-05 4:03 ` [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst Richard Henderson
` (11 more replies)
0 siblings, 12 replies; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC
To: qemu-devel
Assorted cleanups that become possible now that 64-bit guests on 32-bit hosts are disallowed. This is not complete by any means, but it's a start.
r~
Based-on: 20250204215359.1238808-1-richard.henderson@linaro.org
("[PATCH v3 00/12] meson: Deprecate 32-bit host support")
Richard Henderson (11):
tcg: Drop support for two address registers in gen_ldst
tcg: Merge INDEX_op_qemu_*_{a32,a64}_*
tcg/arm: Drop addrhi from prepare_host_addr
tcg/i386: Drop addrhi from prepare_host_addr
tcg/mips: Drop addrhi from prepare_host_addr
tcg/ppc: Drop addrhi from prepare_host_addr
tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst
plugins: Fix qemu_plugin_read_memory_vaddr parameters
accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
include/exec: Change vaddr to uintptr_t
include/exec: Use uintptr_t in CPUTLBEntry
include/exec/tlb-common.h | 10 +-
include/exec/vaddr.h | 16 +--
include/tcg/tcg-opc.h | 28 ++----
accel/tcg/cputlb.c | 25 ++---
plugins/api.c | 2 +-
tcg/optimize.c | 21 ++--
tcg/tcg-op-ldst.c | 104 +++++---------------
tcg/tcg.c | 60 +++++------
tcg/tci.c | 119 ++++------------------
tcg/aarch64/tcg-target.c.inc | 40 +++-----
tcg/arm/tcg-target.c.inc | 104 ++++++--------------
tcg/i386/tcg-target.c.inc | 125 +++++++----------------
tcg/loongarch64/tcg-target.c.inc | 40 +++-----
tcg/mips/tcg-target.c.inc | 122 ++++++-----------------
tcg/ppc/tcg-target.c.inc | 164 ++++++++-----------------------
tcg/riscv/tcg-target.c.inc | 29 ++----
tcg/s390x/tcg-target.c.inc | 40 +++-----
tcg/sparc64/tcg-target.c.inc | 28 ++----
tcg/tci/tcg-target.c.inc | 60 +++--------
19 files changed, 314 insertions(+), 823 deletions(-)
--
2.43.0
* [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-13 14:37 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_* Richard Henderson
` (10 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC
To: qemu-devel
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
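Not for commit: a minimal, self-contained model of the invariant the new
assert encodes, using stand-in names rather than the real TCG definitions.
TCG_TYPE_REG is I32 on 32-bit hosts and I64 on 64-bit hosts, so with
64-on-32 gone the address type can never exceed it and a guest address is
always exactly one temp; the TCGV_LOW/HIGH pair handling removed below is
therefore dead code.
#include <assert.h>
#include <stdio.h>
enum { TYPE_I32, TYPE_I64 };                    /* stand-ins for TCG_TYPE_* */
#define TYPE_REG (sizeof(void *) == 8 ? TYPE_I64 : TYPE_I32)
static int temps_per_address(int addr_type)
{
    assert(addr_type <= TYPE_REG);  /* mirrors the new assert in gen_ldst() */
    return 1;                       /* was 2 for I64 addresses on 32-bit hosts */
}
int main(void)
{
    printf("temps per address: %d\n", temps_per_address(TYPE_I32));
    return 0;
}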
tcg/tcg-op-ldst.c | 22 ++++------------------
1 file changed, 4 insertions(+), 18 deletions(-)
diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index 77271e0193..c3e9bf992a 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -91,25 +91,11 @@ static MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
static void gen_ldst(TCGOpcode opc, TCGType type, TCGTemp *vl, TCGTemp *vh,
TCGTemp *addr, MemOpIdx oi)
{
- if (TCG_TARGET_REG_BITS == 64 || tcg_ctx->addr_type == TCG_TYPE_I32) {
- if (vh) {
- tcg_gen_op4(opc, type, temp_arg(vl), temp_arg(vh),
- temp_arg(addr), oi);
- } else {
- tcg_gen_op3(opc, type, temp_arg(vl), temp_arg(addr), oi);
- }
+ assert(tcg_ctx->addr_type <= TCG_TYPE_REG);
+ if (vh) {
+ tcg_gen_op4(opc, type, temp_arg(vl), temp_arg(vh), temp_arg(addr), oi);
} else {
- /* See TCGV_LOW/HIGH. */
- TCGTemp *al = addr + HOST_BIG_ENDIAN;
- TCGTemp *ah = addr + !HOST_BIG_ENDIAN;
-
- if (vh) {
- tcg_gen_op5(opc, type, temp_arg(vl), temp_arg(vh),
- temp_arg(al), temp_arg(ah), oi);
- } else {
- tcg_gen_op4(opc, type, temp_arg(vl),
- temp_arg(al), temp_arg(ah), oi);
- }
+ tcg_gen_op3(opc, type, temp_arg(vl), temp_arg(addr), oi);
}
}
--
2.43.0
* [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_*
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
2025-02-05 4:03 ` [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-17 14:22 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr Richard Henderson
` (9 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC
To: qemu-devel
Since 64-on-32 is now unsupported, guest addresses always
fit in one host register. Drop the replication of opcodes.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
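Not for commit: a runnable sketch of the front-end simplification, with
stand-in names rather than the real INDEX_op_* enums. Before this patch
every memory opcode came in an {a32,a64} pair selected at translation time
from tcg_ctx->addr_type; since the address now always fits one host
register, a single opcode per data width suffices and the selection branch
disappears.
#include <stdio.h>
enum { ADDR_I32, ADDR_I64 };                        /* stand-in addr types */
enum { OP_LD_A32_I32, OP_LD_A64_I32, OP_LD_I32 };   /* stand-in opcodes */
static int select_before(int addr_type)
{
    /* One such branch per emitted load/store, times every data width. */
    return addr_type == ADDR_I32 ? OP_LD_A32_I32 : OP_LD_A64_I32;
}
static int select_after(void)
{
    return OP_LD_I32;   /* no branch; the opcode table roughly halves */
}
int main(void)
{
    printf("before: %d/%d  after: %d\n",
           select_before(ADDR_I32), select_before(ADDR_I64), select_after());
    return 0;
}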
include/tcg/tcg-opc.h | 28 ++------
tcg/optimize.c | 21 ++----
tcg/tcg-op-ldst.c | 82 +++++----------------
tcg/tcg.c | 42 ++++-------
tcg/tci.c | 119 ++++++-------------------------
tcg/aarch64/tcg-target.c.inc | 36 ++++------
tcg/arm/tcg-target.c.inc | 40 +++--------
tcg/i386/tcg-target.c.inc | 69 ++++--------------
tcg/loongarch64/tcg-target.c.inc | 36 ++++------
tcg/mips/tcg-target.c.inc | 51 +++----------
tcg/ppc/tcg-target.c.inc | 68 ++++--------------
tcg/riscv/tcg-target.c.inc | 24 +++----
tcg/s390x/tcg-target.c.inc | 36 ++++------
tcg/sparc64/tcg-target.c.inc | 24 +++----
tcg/tci/tcg-target.c.inc | 60 ++++------------
15 files changed, 177 insertions(+), 559 deletions(-)
diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
index 9383e295f4..5bf78b0764 100644
--- a/include/tcg/tcg-opc.h
+++ b/include/tcg/tcg-opc.h
@@ -188,36 +188,22 @@ DEF(goto_ptr, 0, 1, 0, TCG_OPF_BB_EXIT | TCG_OPF_BB_END)
DEF(plugin_cb, 0, 0, 1, TCG_OPF_NOT_PRESENT)
DEF(plugin_mem_cb, 0, 1, 1, TCG_OPF_NOT_PRESENT)
-/* Replicate ld/st ops for 32 and 64-bit guest addresses. */
-DEF(qemu_ld_a32_i32, 1, 1, 1,
+DEF(qemu_ld_i32, 1, 1, 1,
TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st_a32_i32, 0, 1 + 1, 1,
+DEF(qemu_st_i32, 0, 1 + 1, 1,
TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_ld_a32_i64, DATA64_ARGS, 1, 1,
+DEF(qemu_ld_i64, DATA64_ARGS, 1, 1,
TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st_a32_i64, 0, DATA64_ARGS + 1, 1,
- TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-
-DEF(qemu_ld_a64_i32, 1, DATA64_ARGS, 1,
- TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st_a64_i32, 0, 1 + DATA64_ARGS, 1,
- TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_ld_a64_i64, DATA64_ARGS, DATA64_ARGS, 1,
- TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st_a64_i64, 0, DATA64_ARGS + DATA64_ARGS, 1,
+DEF(qemu_st_i64, 0, DATA64_ARGS + 1, 1,
TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
/* Only used by i386 to cope with stupid register constraints. */
-DEF(qemu_st8_a32_i32, 0, 1 + 1, 1,
- TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st8_a64_i32, 0, 1 + DATA64_ARGS, 1,
+DEF(qemu_st8_i32, 0, 1 + 1, 1,
TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
/* Only for 64-bit hosts at the moment. */
-DEF(qemu_ld_a32_i128, 2, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_ld_a64_i128, 2, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st_a32_i128, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
-DEF(qemu_st_a64_i128, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
+DEF(qemu_ld_i128, 2, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
+DEF(qemu_st_i128, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS)
/* Host vector support. */
diff --git a/tcg/optimize.c b/tcg/optimize.c
index 8c6303e3af..996448c8bc 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -3008,29 +3008,22 @@ void tcg_optimize(TCGContext *s)
CASE_OP_32_64_VEC(orc):
done = fold_orc(&ctx, op);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
+ case INDEX_op_qemu_ld_i32:
done = fold_qemu_ld_1reg(&ctx, op);
break;
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
done = fold_qemu_ld_1reg(&ctx, op);
break;
}
QEMU_FALLTHROUGH;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
done = fold_qemu_ld_2reg(&ctx, op);
break;
- case INDEX_op_qemu_st8_a32_i32:
- case INDEX_op_qemu_st8_a64_i32:
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st8_i32:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st_i64:
+ case INDEX_op_qemu_st_i128:
done = fold_qemu_st(&ctx, op);
break;
CASE_OP_32_64(rem):
diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index c3e9bf992a..96553f3c3d 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -218,7 +218,6 @@ static void tcg_gen_qemu_ld_i32_int(TCGv_i32 val, TCGTemp *addr,
MemOp orig_memop;
MemOpIdx orig_oi, oi;
TCGv_i64 copy_addr;
- TCGOpcode opc;
tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
orig_memop = memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -234,12 +233,8 @@ static void tcg_gen_qemu_ld_i32_int(TCGv_i32 val, TCGTemp *addr,
}
copy_addr = plugin_maybe_preserve_addr(addr);
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_ld_a32_i32;
- } else {
- opc = INDEX_op_qemu_ld_a64_i32;
- }
- gen_ldst(opc, TCG_TYPE_I32, tcgv_i32_temp(val), NULL, addr, oi);
+ gen_ldst(INDEX_op_qemu_ld_i32, TCG_TYPE_I32,
+ tcgv_i32_temp(val), NULL, addr, oi);
plugin_gen_mem_callbacks_i32(val, copy_addr, addr, orig_oi,
QEMU_PLUGIN_MEM_R);
@@ -296,17 +291,9 @@ static void tcg_gen_qemu_st_i32_int(TCGv_i32 val, TCGTemp *addr,
}
if (TCG_TARGET_HAS_qemu_st8_i32 && (memop & MO_SIZE) == MO_8) {
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_st8_a32_i32;
- } else {
- opc = INDEX_op_qemu_st8_a64_i32;
- }
+ opc = INDEX_op_qemu_st8_i32;
} else {
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_st_a32_i32;
- } else {
- opc = INDEX_op_qemu_st_a64_i32;
- }
+ opc = INDEX_op_qemu_st_i32;
}
gen_ldst(opc, TCG_TYPE_I32, tcgv_i32_temp(val), NULL, addr, oi);
plugin_gen_mem_callbacks_i32(val, NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W);
@@ -330,7 +317,6 @@ static void tcg_gen_qemu_ld_i64_int(TCGv_i64 val, TCGTemp *addr,
MemOp orig_memop;
MemOpIdx orig_oi, oi;
TCGv_i64 copy_addr;
- TCGOpcode opc;
if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
tcg_gen_qemu_ld_i32_int(TCGV_LOW(val), addr, idx, memop);
@@ -356,12 +342,7 @@ static void tcg_gen_qemu_ld_i64_int(TCGv_i64 val, TCGTemp *addr,
}
copy_addr = plugin_maybe_preserve_addr(addr);
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_ld_a32_i64;
- } else {
- opc = INDEX_op_qemu_ld_a64_i64;
- }
- gen_ldst_i64(opc, val, addr, oi);
+ gen_ldst_i64(INDEX_op_qemu_ld_i64, val, addr, oi);
plugin_gen_mem_callbacks_i64(val, copy_addr, addr, orig_oi,
QEMU_PLUGIN_MEM_R);
@@ -398,7 +379,6 @@ static void tcg_gen_qemu_st_i64_int(TCGv_i64 val, TCGTemp *addr,
{
TCGv_i64 swap = NULL;
MemOpIdx orig_oi, oi;
- TCGOpcode opc;
if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
tcg_gen_qemu_st_i32_int(TCGV_LOW(val), addr, idx, memop);
@@ -429,12 +409,7 @@ static void tcg_gen_qemu_st_i64_int(TCGv_i64 val, TCGTemp *addr,
oi = make_memop_idx(memop, idx);
}
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_st_a32_i64;
- } else {
- opc = INDEX_op_qemu_st_a64_i64;
- }
- gen_ldst_i64(opc, val, addr, oi);
+ gen_ldst_i64(INDEX_op_qemu_st_i64, val, addr, oi);
plugin_gen_mem_callbacks_i64(val, NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W);
if (swap) {
@@ -546,7 +521,6 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
{
MemOpIdx orig_oi;
TCGv_i64 ext_addr = NULL;
- TCGOpcode opc;
check_max_alignment(memop_alignment_bits(memop));
tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
@@ -574,12 +548,7 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
hi = TCGV128_HIGH(val);
}
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_ld_a32_i128;
- } else {
- opc = INDEX_op_qemu_ld_a64_i128;
- }
- gen_ldst(opc, TCG_TYPE_I128, tcgv_i64_temp(lo),
+ gen_ldst(INDEX_op_qemu_ld_i128, TCG_TYPE_I128, tcgv_i64_temp(lo),
tcgv_i64_temp(hi), addr, oi);
if (need_bswap) {
@@ -595,12 +564,6 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
canonicalize_memop_i128_as_i64(mop, memop);
need_bswap = (mop[0] ^ memop) & MO_BSWAP;
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_ld_a32_i64;
- } else {
- opc = INDEX_op_qemu_ld_a64_i64;
- }
-
/*
* Since there are no global TCGv_i128, there is no visible state
* changed if the second load faults. Load directly into the two
@@ -614,7 +577,8 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
y = TCGV128_LOW(val);
}
- gen_ldst_i64(opc, x, addr, make_memop_idx(mop[0], idx));
+ gen_ldst_i64(INDEX_op_qemu_ld_i64, x, addr,
+ make_memop_idx(mop[0], idx));
if (need_bswap) {
tcg_gen_bswap64_i64(x, x);
@@ -630,7 +594,8 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
addr_p8 = tcgv_i64_temp(t);
}
- gen_ldst_i64(opc, y, addr_p8, make_memop_idx(mop[1], idx));
+ gen_ldst_i64(INDEX_op_qemu_ld_i64, y, addr_p8,
+ make_memop_idx(mop[1], idx));
tcg_temp_free_internal(addr_p8);
if (need_bswap) {
@@ -664,7 +629,6 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
{
MemOpIdx orig_oi;
TCGv_i64 ext_addr = NULL;
- TCGOpcode opc;
check_max_alignment(memop_alignment_bits(memop));
tcg_gen_req_mo(TCG_MO_ST_LD | TCG_MO_ST_ST);
@@ -695,13 +659,8 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
hi = TCGV128_HIGH(val);
}
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_st_a32_i128;
- } else {
- opc = INDEX_op_qemu_st_a64_i128;
- }
- gen_ldst(opc, TCG_TYPE_I128, tcgv_i64_temp(lo),
- tcgv_i64_temp(hi), addr, oi);
+ gen_ldst(INDEX_op_qemu_st_i128, TCG_TYPE_I128,
+ tcgv_i64_temp(lo), tcgv_i64_temp(hi), addr, oi);
if (need_bswap) {
tcg_temp_free_i64(lo);
@@ -714,12 +673,6 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
canonicalize_memop_i128_as_i64(mop, memop);
- if (tcg_ctx->addr_type == TCG_TYPE_I32) {
- opc = INDEX_op_qemu_st_a32_i64;
- } else {
- opc = INDEX_op_qemu_st_a64_i64;
- }
-
if ((memop & MO_BSWAP) == MO_LE) {
x = TCGV128_LOW(val);
y = TCGV128_HIGH(val);
@@ -734,7 +687,8 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
x = b;
}
- gen_ldst_i64(opc, x, addr, make_memop_idx(mop[0], idx));
+ gen_ldst_i64(INDEX_op_qemu_st_i64, x, addr,
+ make_memop_idx(mop[0], idx));
if (tcg_ctx->addr_type == TCG_TYPE_I32) {
TCGv_i32 t = tcg_temp_ebb_new_i32();
@@ -748,10 +702,12 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
if (b) {
tcg_gen_bswap64_i64(b, y);
- gen_ldst_i64(opc, b, addr_p8, make_memop_idx(mop[1], idx));
+ gen_ldst_i64(INDEX_op_qemu_st_i64, b, addr_p8,
+ make_memop_idx(mop[1], idx));
tcg_temp_free_i64(b);
} else {
- gen_ldst_i64(opc, y, addr_p8, make_memop_idx(mop[1], idx));
+ gen_ldst_i64(INDEX_op_qemu_st_i64, y, addr_p8,
+ make_memop_idx(mop[1], idx));
}
tcg_temp_free_internal(addr_p8);
} else {
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 43b6712286..295004b74f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2153,24 +2153,17 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_exit_tb:
case INDEX_op_goto_tb:
case INDEX_op_goto_ptr:
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_ld_i64:
+ case INDEX_op_qemu_st_i64:
return true;
- case INDEX_op_qemu_st8_a32_i32:
- case INDEX_op_qemu_st8_a64_i32:
+ case INDEX_op_qemu_st8_i32:
return TCG_TARGET_HAS_qemu_st8_i32;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_ld_i128:
+ case INDEX_op_qemu_st_i128:
return TCG_TARGET_HAS_qemu_ldst_i128;
case INDEX_op_mov_i32:
@@ -2868,20 +2861,13 @@ void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs)
}
i = 1;
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st8_a32_i32:
- case INDEX_op_qemu_st8_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st8_i32:
+ case INDEX_op_qemu_ld_i64:
+ case INDEX_op_qemu_st_i64:
+ case INDEX_op_qemu_ld_i128:
+ case INDEX_op_qemu_st_i128:
{
const char *s_al, *s_op, *s_at;
MemOpIdx oi = op->args[k++];
diff --git a/tcg/tci.c b/tcg/tci.c
index 8c1c53424d..d223258efe 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -154,16 +154,6 @@ static void tci_args_rrrbb(uint32_t insn, TCGReg *r0, TCGReg *r1,
*i4 = extract32(insn, 26, 6);
}
-static void tci_args_rrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
- TCGReg *r2, TCGReg *r3, TCGReg *r4)
-{
- *r0 = extract32(insn, 8, 4);
- *r1 = extract32(insn, 12, 4);
- *r2 = extract32(insn, 16, 4);
- *r3 = extract32(insn, 20, 4);
- *r4 = extract32(insn, 24, 4);
-}
-
static void tci_args_rrrr(uint32_t insn,
TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGReg *r3)
{
@@ -912,43 +902,21 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
tb_ptr = ptr;
break;
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
tci_args_rrm(insn, &r0, &r1, &oi);
- taddr = (uint32_t)regs[r1];
- goto do_ld_i32;
- case INDEX_op_qemu_ld_a64_i32:
- if (TCG_TARGET_REG_BITS == 64) {
- tci_args_rrm(insn, &r0, &r1, &oi);
- taddr = regs[r1];
- } else {
- tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
- taddr = tci_uint64(regs[r2], regs[r1]);
- oi = regs[r3];
- }
- do_ld_i32:
+ taddr = regs[r1];
regs[r0] = tci_qemu_ld(env, taddr, oi, tb_ptr);
break;
- case INDEX_op_qemu_ld_a32_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tci_args_rrm(insn, &r0, &r1, &oi);
- taddr = (uint32_t)regs[r1];
- } else {
- tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
- taddr = (uint32_t)regs[r2];
- oi = regs[r3];
- }
- goto do_ld_i64;
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
tci_args_rrm(insn, &r0, &r1, &oi);
taddr = regs[r1];
} else {
- tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
- taddr = tci_uint64(regs[r3], regs[r2]);
- oi = regs[r4];
+ tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
+ taddr = regs[r2];
+ oi = regs[r3];
}
- do_ld_i64:
tmp64 = tci_qemu_ld(env, taddr, oi, tb_ptr);
if (TCG_TARGET_REG_BITS == 32) {
tci_write_reg64(regs, r1, r0, tmp64);
@@ -957,47 +925,23 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
}
break;
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
tci_args_rrm(insn, &r0, &r1, &oi);
- taddr = (uint32_t)regs[r1];
- goto do_st_i32;
- case INDEX_op_qemu_st_a64_i32:
- if (TCG_TARGET_REG_BITS == 64) {
- tci_args_rrm(insn, &r0, &r1, &oi);
- taddr = regs[r1];
- } else {
- tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
- taddr = tci_uint64(regs[r2], regs[r1]);
- oi = regs[r3];
- }
- do_st_i32:
+ taddr = regs[r1];
tci_qemu_st(env, taddr, regs[r0], oi, tb_ptr);
break;
- case INDEX_op_qemu_st_a32_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tci_args_rrm(insn, &r0, &r1, &oi);
- tmp64 = regs[r0];
- taddr = (uint32_t)regs[r1];
- } else {
- tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
- tmp64 = tci_uint64(regs[r1], regs[r0]);
- taddr = (uint32_t)regs[r2];
- oi = regs[r3];
- }
- goto do_st_i64;
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
tci_args_rrm(insn, &r0, &r1, &oi);
tmp64 = regs[r0];
taddr = regs[r1];
} else {
- tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
+ tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
tmp64 = tci_uint64(regs[r1], regs[r0]);
- taddr = tci_uint64(regs[r3], regs[r2]);
- oi = regs[r4];
+ taddr = regs[r2];
+ oi = regs[r3];
}
- do_st_i64:
tci_qemu_st(env, taddr, tmp64, oi, tb_ptr);
break;
@@ -1269,42 +1213,21 @@ int print_insn_tci(bfd_vma addr, disassemble_info *info)
str_r(r3), str_r(r4), str_r(r5));
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_st_a32_i32:
- len = 1 + 1;
- goto do_qemu_ldst;
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_st_a64_i32:
- len = 1 + DIV_ROUND_UP(64, TCG_TARGET_REG_BITS);
- goto do_qemu_ldst;
- case INDEX_op_qemu_ld_a64_i64:
- case INDEX_op_qemu_st_a64_i64:
- len = 2 * DIV_ROUND_UP(64, TCG_TARGET_REG_BITS);
- goto do_qemu_ldst;
- do_qemu_ldst:
- switch (len) {
- case 2:
- tci_args_rrm(insn, &r0, &r1, &oi);
- info->fprintf_func(info->stream, "%-12s %s, %s, %x",
- op_name, str_r(r0), str_r(r1), oi);
- break;
- case 3:
+ case INDEX_op_qemu_ld_i64:
+ case INDEX_op_qemu_st_i64:
+ if (TCG_TARGET_REG_BITS == 32) {
tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s",
op_name, str_r(r0), str_r(r1),
str_r(r2), str_r(r3));
break;
- case 4:
- tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
- info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s",
- op_name, str_r(r0), str_r(r1),
- str_r(r2), str_r(r3), str_r(r4));
- break;
- default:
- g_assert_not_reached();
}
+ /* fall through */
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_st_i32:
+ tci_args_rrm(insn, &r0, &r1, &oi);
+ info->fprintf_func(info->stream, "%-12s %s, %s, %x",
+ op_name, str_r(r0), str_r(r1), oi);
break;
case 0:
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 66eb4b73b5..45dc2c649b 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -2398,24 +2398,18 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType ext,
tcg_out_insn(s, 3506, CSEL, ext, a0, REG0(3), REG0(4), args[5]);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
tcg_out_qemu_ld(s, a0, a1, a2, ext);
break;
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st_i64:
tcg_out_qemu_st(s, REG0(0), a1, a2, ext);
break;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
tcg_out_qemu_ldst_i128(s, a0, a1, a2, args[3], true);
break;
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
tcg_out_qemu_ldst_i128(s, REG0(0), REG0(1), a2, args[3], false);
break;
@@ -3084,21 +3078,15 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_movcond_i64:
return C_O1_I4(r, r, rC, rZ, rZ);
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
return C_O1_I1(r, r);
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
return C_O2_I1(r, r, r);
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st_i64:
return C_O0_I2(rZ, r);
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
return C_O0_I3(rZ, rZ, r);
case INDEX_op_deposit_i32:
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 12dad7307f..05bb367a39 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -2071,37 +2071,21 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
ARITH_MOV, args[0], 0, 0);
break;
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a64_i32:
- tcg_out_qemu_ld(s, args[0], -1, args[1], args[2],
- args[3], TCG_TYPE_I32);
- break;
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
tcg_out_qemu_ld(s, args[0], args[1], args[2], -1,
args[3], TCG_TYPE_I64);
break;
- case INDEX_op_qemu_ld_a64_i64:
- tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3],
- args[4], TCG_TYPE_I64);
- break;
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a64_i32:
- tcg_out_qemu_st(s, args[0], -1, args[1], args[2],
- args[3], TCG_TYPE_I32);
- break;
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
tcg_out_qemu_st(s, args[0], args[1], args[2], -1,
args[3], TCG_TYPE_I64);
break;
- case INDEX_op_qemu_st_a64_i64:
- tcg_out_qemu_st(s, args[0], args[1], args[2], args[3],
- args[4], TCG_TYPE_I64);
- break;
case INDEX_op_bswap16_i32:
tcg_out_bswap16(s, COND_AL, args[0], args[1], args[2]);
@@ -2243,22 +2227,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_setcond2_i32:
return C_O1_I4(r, r, r, rI, rI);
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
return C_O1_I1(r, q);
- case INDEX_op_qemu_ld_a64_i32:
- return C_O1_I2(r, q, q);
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
return C_O2_I1(e, p, q);
- case INDEX_op_qemu_ld_a64_i64:
- return C_O2_I2(e, p, q, q);
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
return C_O0_I2(q, q);
- case INDEX_op_qemu_st_a64_i32:
- return C_O0_I3(q, q, q);
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
return C_O0_I3(Q, p, q);
- case INDEX_op_qemu_st_a64_i64:
- return C_O0_I4(Q, p, q, q);
case INDEX_op_st_vec:
return C_O0_I2(w, r);
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index 2cac151331..ca6e8abc57 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2879,62 +2879,33 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
tcg_out_modrm(s, OPC_GRP3_Ev + rexw, EXT3_NOT, a0);
break;
- case INDEX_op_qemu_ld_a64_i32:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_qemu_ld(s, a0, -1, a1, a2, args[3], TCG_TYPE_I32);
- break;
- }
- /* fall through */
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I64);
} else {
tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64);
}
break;
- case INDEX_op_qemu_ld_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I64);
- } else {
- tcg_out_qemu_ld(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64);
- }
- break;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128);
break;
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st8_a64_i32:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_qemu_st(s, a0, -1, a1, a2, args[3], TCG_TYPE_I32);
- break;
- }
- /* fall through */
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st8_a32_i32:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st8_i32:
tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I64);
} else {
tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64);
}
break;
- case INDEX_op_qemu_st_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I64);
- } else {
- tcg_out_qemu_st(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64);
- }
- break;
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128);
break;
@@ -3824,36 +3795,24 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_clz_i64:
return have_lzcnt ? C_N1_I2(r, r, rW) : C_N1_I2(r, r, r);
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
return C_O1_I1(r, L);
- case INDEX_op_qemu_ld_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, L) : C_O1_I2(r, L, L);
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
return C_O0_I2(L, L);
- case INDEX_op_qemu_st_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(L, L) : C_O0_I3(L, L, L);
- case INDEX_op_qemu_st8_a32_i32:
+ case INDEX_op_qemu_st8_i32:
return C_O0_I2(s, L);
- case INDEX_op_qemu_st8_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(s, L) : C_O0_I3(s, L, L);
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, L) : C_O2_I1(r, r, L);
- case INDEX_op_qemu_ld_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, L) : C_O2_I2(r, r, L, L);
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(L, L) : C_O0_I3(L, L, L);
- case INDEX_op_qemu_st_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(L, L) : C_O0_I4(L, L, L, L);
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
return C_O2_I1(r, r, L);
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
return C_O0_I3(L, L, L);
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index cebe8dd354..4f32bf3e97 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -1675,28 +1675,22 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
tcg_out_ldst(s, OPC_ST_D, a0, a1, a2);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i64:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64);
break;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
tcg_out_qemu_ldst_i128(s, a0, a1, a2, a3, true);
break;
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i64:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64);
break;
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
tcg_out_qemu_ldst_i128(s, a0, a1, a2, a3, false);
break;
@@ -2233,18 +2227,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_st32_i64:
case INDEX_op_st_i32:
case INDEX_op_st_i64:
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st_i64:
return C_O0_I2(rZ, r);
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
return C_N2_I1(r, r, r);
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
return C_O0_I3(r, r, r);
case INDEX_op_brcond_i32:
@@ -2290,10 +2280,8 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_ld32u_i64:
case INDEX_op_ld_i32:
case INDEX_op_ld_i64:
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
return C_O1_I1(r, r);
case INDEX_op_andc_i32:
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index 99f6ef6c76..b1d512ca2a 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -2095,53 +2095,27 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
tcg_out_setcond2(s, args[5], a0, a1, a2, args[3], args[4]);
break;
- case INDEX_op_qemu_ld_a64_i32:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_qemu_ld(s, a0, 0, a1, a2, args[3], TCG_TYPE_I32);
- break;
- }
- /* fall through */
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I64);
} else {
tcg_out_qemu_ld(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64);
}
break;
- case INDEX_op_qemu_ld_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I64);
- } else {
- tcg_out_qemu_ld(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64);
- }
- break;
- case INDEX_op_qemu_st_a64_i32:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_qemu_st(s, a0, 0, a1, a2, args[3], TCG_TYPE_I32);
- break;
- }
- /* fall through */
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I64);
} else {
tcg_out_qemu_st(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64);
}
break;
- case INDEX_op_qemu_st_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I64);
- } else {
- tcg_out_qemu_st(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64);
- }
- break;
case INDEX_op_add2_i32:
tcg_out_addsub2(s, a0, a1, a2, args[3], args[4], args[5],
@@ -2301,23 +2275,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_brcond2_i32:
return C_O0_I4(rZ, rZ, rZ, rZ);
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
return C_O1_I1(r, r);
- case INDEX_op_qemu_ld_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O1_I2(r, r, r);
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
return C_O0_I2(rZ, r);
- case INDEX_op_qemu_st_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(rZ, r) : C_O0_I3(rZ, r, r);
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I1(r, r, r);
- case INDEX_op_qemu_ld_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I2(r, r, r, r);
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(rZ, r) : C_O0_I3(rZ, rZ, r);
- case INDEX_op_qemu_st_a64_i64:
- return (TCG_TARGET_REG_BITS == 64 ? C_O0_I2(rZ, r)
- : C_O0_I4(rZ, rZ, r, r));
default:
return C_NotImplemented;
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 6e711cd53f..801cb6f3cb 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -3308,17 +3308,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
tcg_out32(s, MODUD | TAB(args[0], args[1], args[2]));
break;
- case INDEX_op_qemu_ld_a64_i32:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_qemu_ld(s, args[0], -1, args[1], args[2],
- args[3], TCG_TYPE_I32);
- break;
- }
- /* fall through */
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_qemu_ld(s, args[0], -1, args[1], -1,
args[2], TCG_TYPE_I64);
@@ -3327,32 +3320,15 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
args[3], TCG_TYPE_I64);
}
break;
- case INDEX_op_qemu_ld_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_ld(s, args[0], -1, args[1], -1,
- args[2], TCG_TYPE_I64);
- } else {
- tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3],
- args[4], TCG_TYPE_I64);
- }
- break;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true);
break;
- case INDEX_op_qemu_st_a64_i32:
- if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_qemu_st(s, args[0], -1, args[1], args[2],
- args[3], TCG_TYPE_I32);
- break;
- }
- /* fall through */
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
+ case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
tcg_out_qemu_st(s, args[0], -1, args[1], -1,
args[2], TCG_TYPE_I64);
@@ -3361,17 +3337,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
args[3], TCG_TYPE_I64);
}
break;
- case INDEX_op_qemu_st_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_st(s, args[0], -1, args[1], -1,
- args[2], TCG_TYPE_I64);
- } else {
- tcg_out_qemu_st(s, args[0], args[1], args[2], args[3],
- args[4], TCG_TYPE_I64);
- }
- break;
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], false);
break;
@@ -4306,29 +4272,19 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_sub2_i32:
return C_O2_I4(r, r, rI, rZM, r, r);
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
return C_O1_I1(r, r);
- case INDEX_op_qemu_ld_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O1_I2(r, r, r);
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I1(r, r, r);
- case INDEX_op_qemu_ld_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I2(r, r, r, r);
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
return C_O0_I2(r, r);
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r);
- case INDEX_op_qemu_st_a32_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r);
- case INDEX_op_qemu_st_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I4(r, r, r, r);
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
return C_N1O1_I1(o, m, r);
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
return C_O0_I3(o, m, r);
case INDEX_op_add_vec:
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index 61dc310c1a..55a3398712 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -2309,20 +2309,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
args[3], const_args[3], args[4], const_args[4]);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i64:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64);
break;
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i64:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64);
break;
@@ -2761,15 +2757,11 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_sub2_i64:
return C_O2_I4(r, r, rZ, rZ, rM, rM);
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
return C_O1_I1(r, r);
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st_i64:
return C_O0_I2(rZ, r);
case INDEX_op_st_vec:
diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc
index dc7722dc31..6786e7b316 100644
--- a/tcg/s390x/tcg-target.c.inc
+++ b/tcg/s390x/tcg-target.c.inc
@@ -2455,28 +2455,22 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
args[2], const_args[2], args[3], const_args[3], args[4]);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, args[0], args[1], args[2], TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i64:
tcg_out_qemu_ld(s, args[0], args[1], args[2], TCG_TYPE_I64);
break;
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, args[0], args[1], args[2], TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i64:
tcg_out_qemu_st(s, args[0], args[1], args[2], TCG_TYPE_I64);
break;
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true);
break;
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], false);
break;
@@ -3366,21 +3360,15 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_ctpop_i64:
return C_O1_I1(r, r);
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
return C_O1_I1(r, r);
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i64:
+ case INDEX_op_qemu_st_i32:
return C_O0_I2(r, r);
- case INDEX_op_qemu_ld_a32_i128:
- case INDEX_op_qemu_ld_a64_i128:
+ case INDEX_op_qemu_ld_i128:
return C_O2_I1(o, m, r);
- case INDEX_op_qemu_st_a32_i128:
- case INDEX_op_qemu_st_a64_i128:
+ case INDEX_op_qemu_st_i128:
return C_O0_I3(o, m, r);
case INDEX_op_deposit_i32:
diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc
index 733cb51651..ea0a3b8692 100644
--- a/tcg/sparc64/tcg-target.c.inc
+++ b/tcg/sparc64/tcg-target.c.inc
@@ -1426,20 +1426,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
tcg_out_arithi(s, a1, a0, 32, SHIFT_SRLX);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
+ case INDEX_op_qemu_ld_i32:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i64:
tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64);
break;
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i32:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32);
break;
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i64:
tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64);
break;
@@ -1570,10 +1566,8 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_extu_i32_i64:
case INDEX_op_extract_i64:
case INDEX_op_sextract_i64:
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_ld_a64_i64:
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_ld_i64:
return C_O1_I1(r, r);
case INDEX_op_st8_i32:
@@ -1583,10 +1577,8 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_st_i32:
case INDEX_op_st32_i64:
case INDEX_op_st_i64:
- case INDEX_op_qemu_st_a32_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_st_a32_i64:
- case INDEX_op_qemu_st_a64_i64:
+ case INDEX_op_qemu_st_i32:
+ case INDEX_op_qemu_st_i64:
return C_O0_I2(rZ, r);
case INDEX_op_add_i32:
diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
index d6c77325a3..36e018dd19 100644
--- a/tcg/tci/tcg-target.c.inc
+++ b/tcg/tci/tcg-target.c.inc
@@ -169,22 +169,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags)
case INDEX_op_setcond2_i32:
return C_O1_I4(r, r, r, r, r);
- case INDEX_op_qemu_ld_a32_i32:
+ case INDEX_op_qemu_ld_i32:
return C_O1_I1(r, r);
- case INDEX_op_qemu_ld_a64_i32:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O1_I2(r, r, r);
- case INDEX_op_qemu_ld_a32_i64:
+ case INDEX_op_qemu_ld_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I1(r, r, r);
- case INDEX_op_qemu_ld_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I2(r, r, r, r);
- case INDEX_op_qemu_st_a32_i32:
+ case INDEX_op_qemu_st_i32:
return C_O0_I2(r, r);
- case INDEX_op_qemu_st_a64_i32:
+ case INDEX_op_qemu_st_i64:
return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r);
- case INDEX_op_qemu_st_a32_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r);
- case INDEX_op_qemu_st_a64_i64:
- return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I4(r, r, r, r);
default:
return C_NotImplemented;
@@ -422,20 +414,6 @@ static void tcg_out_op_rrrbb(TCGContext *s, TCGOpcode op, TCGReg r0,
tcg_out32(s, insn);
}
-static void tcg_out_op_rrrrr(TCGContext *s, TCGOpcode op, TCGReg r0,
- TCGReg r1, TCGReg r2, TCGReg r3, TCGReg r4)
-{
- tcg_insn_unit insn = 0;
-
- insn = deposit32(insn, 0, 8, op);
- insn = deposit32(insn, 8, 4, r0);
- insn = deposit32(insn, 12, 4, r1);
- insn = deposit32(insn, 16, 4, r2);
- insn = deposit32(insn, 20, 4, r3);
- insn = deposit32(insn, 24, 4, r4);
- tcg_out32(s, insn);
-}
-
static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op,
TCGReg r0, TCGReg r1, TCGReg r2, TCGReg r3)
{
@@ -833,29 +811,21 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], args[3]);
break;
- case INDEX_op_qemu_ld_a32_i32:
- case INDEX_op_qemu_st_a32_i32:
- tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
- break;
- case INDEX_op_qemu_ld_a64_i32:
- case INDEX_op_qemu_st_a64_i32:
- case INDEX_op_qemu_ld_a32_i64:
- case INDEX_op_qemu_st_a32_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
- } else {
+ case INDEX_op_qemu_ld_i64:
+ case INDEX_op_qemu_st_i64:
+ if (TCG_TARGET_REG_BITS == 32) {
tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[3]);
tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], TCG_REG_TMP);
+ break;
}
- break;
- case INDEX_op_qemu_ld_a64_i64:
- case INDEX_op_qemu_st_a64_i64:
- if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
+ /* fall through */
+ case INDEX_op_qemu_ld_i32:
+ case INDEX_op_qemu_st_i32:
+ if (TCG_TARGET_REG_BITS == 64 && s->addr_type == TCG_TYPE_I32) {
+ tcg_out_ext32u(s, TCG_REG_TMP, args[1]);
+ tcg_out_op_rrm(s, opc, args[0], TCG_REG_TMP, args[2]);
} else {
- tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[4]);
- tcg_out_op_rrrrr(s, opc, args[0], args[1],
- args[2], args[3], TCG_REG_TMP);
+ tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
}
break;
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
2025-02-05 4:03 ` [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst Richard Henderson
2025-02-05 4:03 ` [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_* Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-17 12:12 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 04/11] tcg/i386: " Richard Henderson
` (8 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC
To: qemu-devel
The guest address will now always be TCG_TYPE_I32.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
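Not for commit: a plain-C stand-in for why the addrhi plumbing is dead on
arm hosts. With 64-on-32 removed, a 32-bit host implies a 32-bit guest
address, so the TLB comparator is a single 32-bit load and the high-half
compare can never fire; the helper below is false for the only address
width that remains.
#include <stdbool.h>
#include <stdio.h>
#define HOST_BITS 32    /* arm is always a 32-bit host */
static bool need_high_half_compare(int guest_addr_bits)
{
    return guest_addr_bits > HOST_BITS;  /* previously true for 64-bit guests */
}
int main(void)
{
    printf("%d\n", need_high_half_compare(32));   /* always 0 now */
    return 0;
}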
tcg/arm/tcg-target.c.inc | 63 ++++++++++++++--------------------------
1 file changed, 21 insertions(+), 42 deletions(-)
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 05bb367a39..252d9aa7e5 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1455,8 +1455,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
#define MIN_TLB_MASK_TABLE_OFS -256
static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, bool is_ld)
+ TCGReg addr, MemOpIdx oi, bool is_ld)
{
TCGLabelQemuLdst *ldst = NULL;
MemOp opc = get_memop(oi);
@@ -1465,14 +1464,14 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
if (tcg_use_softmmu) {
*h = (HostAddress){
.cond = COND_AL,
- .base = addrlo,
+ .base = addr,
.index = TCG_REG_R1,
.index_scratch = true,
};
} else {
*h = (HostAddress){
.cond = COND_AL,
- .base = addrlo,
+ .base = addr,
.index = guest_base ? TCG_REG_GUEST_BASE : -1,
.index_scratch = false,
};
@@ -1492,8 +1491,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* Load cpu->neg.tlb.f[mmu_idx].{mask,table} into {r0,r1}. */
QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0);
@@ -1501,30 +1499,20 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
tcg_out_ldrd_8(s, COND_AL, TCG_REG_R0, TCG_AREG0, fast_off);
/* Extract the tlb index from the address into R0. */
- tcg_out_dat_reg(s, COND_AL, ARITH_AND, TCG_REG_R0, TCG_REG_R0, addrlo,
+ tcg_out_dat_reg(s, COND_AL, ARITH_AND, TCG_REG_R0, TCG_REG_R0, addr,
SHIFT_IMM_LSR(s->page_bits - CPU_TLB_ENTRY_BITS));
/*
* Add the tlb_table pointer, creating the CPUTLBEntry address in R1.
- * Load the tlb comparator into R2/R3 and the fast path addend into R1.
+ * Load the tlb comparator into R2 and the fast path addend into R1.
*/
QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN);
if (cmp_off == 0) {
- if (s->addr_type == TCG_TYPE_I32) {
- tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2,
- TCG_REG_R1, TCG_REG_R0);
- } else {
- tcg_out_ldrd_rwb(s, COND_AL, TCG_REG_R2,
- TCG_REG_R1, TCG_REG_R0);
- }
+ tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, TCG_REG_R1, TCG_REG_R0);
} else {
tcg_out_dat_reg(s, COND_AL, ARITH_ADD,
TCG_REG_R1, TCG_REG_R1, TCG_REG_R0, 0);
- if (s->addr_type == TCG_TYPE_I32) {
- tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
- } else {
- tcg_out_ldrd_8(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
- }
+ tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
}
/* Load the tlb addend. */
@@ -1543,11 +1531,11 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
* This leaves the least significant alignment bits unchanged, and of
* course must be zero.
*/
- t_addr = addrlo;
+ t_addr = addr;
if (a_mask < s_mask) {
t_addr = TCG_REG_R0;
tcg_out_dat_imm(s, COND_AL, ARITH_ADD, t_addr,
- addrlo, s_mask - a_mask);
+ addr, s_mask - a_mask);
}
if (use_armv7_instructions && s->page_bits <= 16) {
tcg_out_movi32(s, COND_AL, TCG_REG_TMP, ~(s->page_mask | a_mask));
@@ -1558,7 +1546,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
} else {
if (a_mask) {
tcg_debug_assert(a_mask <= 0xff);
- tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addrlo, a_mask);
+ tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask);
}
tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0, t_addr,
SHIFT_IMM_LSR(s->page_bits));
@@ -1566,21 +1554,16 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
0, TCG_REG_R2, TCG_REG_TMP,
SHIFT_IMM_LSL(s->page_bits));
}
-
- if (s->addr_type != TCG_TYPE_I32) {
- tcg_out_dat_reg(s, COND_EQ, ARITH_CMP, 0, TCG_REG_R3, addrhi, 0);
- }
} else if (a_mask) {
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* We are expecting alignment to max out at 7 */
tcg_debug_assert(a_mask <= 0xff);
/* tst addr, #mask */
- tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addrlo, a_mask);
+ tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask);
}
return ldst;
@@ -1678,14 +1661,13 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg datalo,
}
static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
MemOp opc = get_memop(oi);
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true);
+ ldst = prepare_host_addr(s, &h, addr, oi, true);
if (ldst) {
ldst->type = data_type;
ldst->datalo_reg = datalo;
@@ -1764,14 +1746,13 @@ static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg datalo,
}
static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
MemOp opc = get_memop(oi);
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false);
+ ldst = prepare_host_addr(s, &h, addr, oi, false);
if (ldst) {
ldst->type = data_type;
ldst->datalo_reg = datalo;
@@ -2072,19 +2053,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
break;
case INDEX_op_qemu_ld_i32:
- tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
+ tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], TCG_TYPE_I32);
break;
case INDEX_op_qemu_ld_i64:
- tcg_out_qemu_ld(s, args[0], args[1], args[2], -1,
- args[3], TCG_TYPE_I64);
+ tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3], TCG_TYPE_I64);
break;
case INDEX_op_qemu_st_i32:
- tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
+ tcg_out_qemu_st(s, args[0], -1, args[1], args[2], TCG_TYPE_I32);
break;
case INDEX_op_qemu_st_i64:
- tcg_out_qemu_st(s, args[0], args[1], args[2], -1,
- args[3], TCG_TYPE_I64);
+ tcg_out_qemu_st(s, args[0], args[1], args[2], args[3], TCG_TYPE_I64);
break;
case INDEX_op_bswap16_i32:
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH 04/11] tcg/i386: Drop addrhi from prepare_host_addr
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (2 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-13 14:35 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 05/11] tcg/mips: " Richard Henderson
` (7 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC
To: qemu-devel
The guest address will now always fit in one register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
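Not for commit: an illustrative model of the TLB compare this patch
shrinks, in plain C rather than the emitted x86. On a 32-bit host the
comparator used to be two 32-bit words and needed a second cmp+jne for the
high half; with the guest address capped at one register, a single compare
decides hit or miss.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
static bool tlb_hit_old(uint32_t cmp_lo, uint32_t cmp_hi,
                        uint32_t addr_lo, uint32_t addr_hi)
{
    return cmp_lo == addr_lo && cmp_hi == addr_hi;  /* two compares, two jumps */
}
static bool tlb_hit_new(uintptr_t cmp, uintptr_t addr)
{
    return cmp == addr;                             /* one compare, one jump */
}
int main(void)
{
    printf("%d %d\n", tlb_hit_old(1, 0, 1, 0), tlb_hit_new(1, 1));
    return 0;
}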
tcg/i386/tcg-target.c.inc | 56 ++++++++++++++-------------------------
1 file changed, 20 insertions(+), 36 deletions(-)
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index ca6e8abc57..b33fe7fe23 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2169,8 +2169,7 @@ static inline int setup_guest_base_seg(void)
* is required and fill in @h with the host address for the fast path.
*/
static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, bool is_ld)
+ TCGReg addr, MemOpIdx oi, bool is_ld)
{
TCGLabelQemuLdst *ldst = NULL;
MemOp opc = get_memop(oi);
@@ -2184,7 +2183,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
} else {
*h = x86_guest_base;
}
- h->base = addrlo;
+ h->base = addr;
h->aa = atom_and_align_for_opc(s, opc, MO_ATOM_IFALIGN, s_bits == MO_128);
a_mask = (1 << h->aa.align) - 1;
@@ -2202,8 +2201,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
if (TCG_TARGET_REG_BITS == 64) {
ttype = s->addr_type;
@@ -2217,7 +2215,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
}
}
- tcg_out_mov(s, tlbtype, TCG_REG_L0, addrlo);
+ tcg_out_mov(s, tlbtype, TCG_REG_L0, addr);
tcg_out_shifti(s, SHIFT_SHR + tlbrexw, TCG_REG_L0,
s->page_bits - CPU_TLB_ENTRY_BITS);
@@ -2233,10 +2231,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
* check that we don't cross pages for the complete access.
*/
if (a_mask >= s_mask) {
- tcg_out_mov(s, ttype, TCG_REG_L1, addrlo);
+ tcg_out_mov(s, ttype, TCG_REG_L1, addr);
} else {
tcg_out_modrm_offset(s, OPC_LEA + trexw, TCG_REG_L1,
- addrlo, s_mask - a_mask);
+ addr, s_mask - a_mask);
}
tlb_mask = s->page_mask | a_mask;
tgen_arithi(s, ARITH_AND + trexw, TCG_REG_L1, tlb_mask, 0);
@@ -2250,17 +2248,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst->label_ptr[0] = s->code_ptr;
s->code_ptr += 4;
- if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I64) {
- /* cmp 4(TCG_REG_L0), addrhi */
- tcg_out_modrm_offset(s, OPC_CMP_GvEv, addrhi,
- TCG_REG_L0, cmp_ofs + 4);
-
- /* jne slow_path */
- tcg_out_opc(s, OPC_JCC_long + JCC_JNE, 0, 0, 0);
- ldst->label_ptr[1] = s->code_ptr;
- s->code_ptr += 4;
- }
-
/* TLB Hit. */
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_L0, TCG_REG_L0,
offsetof(CPUTLBEntry, addend));
@@ -2270,11 +2257,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* jne slow_path */
- jcc = tcg_out_cmp(s, TCG_COND_TSTNE, addrlo, a_mask, true, false);
+ jcc = tcg_out_cmp(s, TCG_COND_TSTNE, addr, a_mask, true, false);
tcg_out_opc(s, OPC_JCC_long + jcc, 0, 0, 0);
ldst->label_ptr[0] = s->code_ptr;
s->code_ptr += 4;
@@ -2446,13 +2432,12 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
}
static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true);
+ ldst = prepare_host_addr(s, &h, addr, oi, true);
tcg_out_qemu_ld_direct(s, datalo, datahi, h, data_type, get_memop(oi));
if (ldst) {
@@ -2574,13 +2559,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
}
static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false);
+ ldst = prepare_host_addr(s, &h, addr, oi, false);
tcg_out_qemu_st_direct(s, datalo, datahi, h, get_memop(oi));
if (ldst) {
@@ -2880,34 +2864,34 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
break;
case INDEX_op_qemu_ld_i32:
- tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I32);
+ tcg_out_qemu_ld(s, a0, -1, a1, a2, TCG_TYPE_I32);
break;
case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I64);
+ tcg_out_qemu_ld(s, a0, -1, a1, a2, TCG_TYPE_I64);
} else {
- tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64);
+ tcg_out_qemu_ld(s, a0, a1, a2, args[3], TCG_TYPE_I64);
}
break;
case INDEX_op_qemu_ld_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
- tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128);
+ tcg_out_qemu_ld(s, a0, a1, a2, args[3], TCG_TYPE_I128);
break;
case INDEX_op_qemu_st_i32:
case INDEX_op_qemu_st8_i32:
- tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I32);
+ tcg_out_qemu_st(s, a0, -1, a1, a2, TCG_TYPE_I32);
break;
case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I64);
+ tcg_out_qemu_st(s, a0, -1, a1, a2, TCG_TYPE_I64);
} else {
- tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64);
+ tcg_out_qemu_st(s, a0, a1, a2, args[3], TCG_TYPE_I64);
}
break;
case INDEX_op_qemu_st_i128:
tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
- tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128);
+ tcg_out_qemu_st(s, a0, a1, a2, args[3], TCG_TYPE_I128);
break;
OP_32_64(mulu2):
--
2.43.0
* [PATCH 05/11] tcg/mips: Drop addrhi from prepare_host_addr
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (3 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 04/11] tcg/i386: " Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-13 14:33 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 06/11] tcg/ppc: " Richard Henderson
` (6 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
The guest address will now always fit in one register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/mips/tcg-target.c.inc | 62 ++++++++++++++-------------------------
1 file changed, 22 insertions(+), 40 deletions(-)
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index b1d512ca2a..153ce1f3c3 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1217,8 +1217,7 @@ bool tcg_target_has_memory_bswap(MemOp memop)
* is required and fill in @h with the host address for the fast path.
*/
static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, bool is_ld)
+ TCGReg addr, MemOpIdx oi, bool is_ld)
{
TCGType addr_type = s->addr_type;
TCGLabelQemuLdst *ldst = NULL;
@@ -1245,8 +1244,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */
tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_AREG0, mask_off);
@@ -1254,11 +1252,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
/* Extract the TLB index from the address into TMP3. */
if (TCG_TARGET_REG_BITS == 32 || addr_type == TCG_TYPE_I32) {
- tcg_out_opc_sa(s, OPC_SRL, TCG_TMP3, addrlo,
+ tcg_out_opc_sa(s, OPC_SRL, TCG_TMP3, addr,
s->page_bits - CPU_TLB_ENTRY_BITS);
} else {
- tcg_out_dsrl(s, TCG_TMP3, addrlo,
- s->page_bits - CPU_TLB_ENTRY_BITS);
+ tcg_out_dsrl(s, TCG_TMP3, addr, s->page_bits - CPU_TLB_ENTRY_BITS);
}
tcg_out_opc_reg(s, OPC_AND, TCG_TMP3, TCG_TMP3, TCG_TMP0);
@@ -1288,48 +1285,35 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
tcg_out_opc_imm(s, (TCG_TARGET_REG_BITS == 32
|| addr_type == TCG_TYPE_I32
? OPC_ADDIU : OPC_DADDIU),
- TCG_TMP2, addrlo, s_mask - a_mask);
+ TCG_TMP2, addr, s_mask - a_mask);
tcg_out_opc_reg(s, OPC_AND, TCG_TMP1, TCG_TMP1, TCG_TMP2);
} else {
- tcg_out_opc_reg(s, OPC_AND, TCG_TMP1, TCG_TMP1, addrlo);
+ tcg_out_opc_reg(s, OPC_AND, TCG_TMP1, TCG_TMP1, addr);
}
/* Zero extend a 32-bit guest address for a 64-bit host. */
if (TCG_TARGET_REG_BITS == 64 && addr_type == TCG_TYPE_I32) {
- tcg_out_ext32u(s, TCG_TMP2, addrlo);
- addrlo = TCG_TMP2;
+ tcg_out_ext32u(s, TCG_TMP2, addr);
+ addr = TCG_TMP2;
}
ldst->label_ptr[0] = s->code_ptr;
tcg_out_opc_br(s, OPC_BNE, TCG_TMP1, TCG_TMP0);
- /* Load and test the high half tlb comparator. */
- if (TCG_TARGET_REG_BITS == 32 && addr_type != TCG_TYPE_I32) {
- /* delay slot */
- tcg_out_ldst(s, OPC_LW, TCG_TMP0, TCG_TMP3, cmp_off + HI_OFF);
-
- /* Load the tlb addend for the fast path. */
- tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP3, TCG_TMP3, add_off);
-
- ldst->label_ptr[1] = s->code_ptr;
- tcg_out_opc_br(s, OPC_BNE, addrhi, TCG_TMP0);
- }
-
/* delay slot */
base = TCG_TMP3;
- tcg_out_opc_reg(s, ALIAS_PADD, base, TCG_TMP3, addrlo);
+ tcg_out_opc_reg(s, ALIAS_PADD, base, TCG_TMP3, addr);
} else {
if (a_mask && (use_mips32r6_instructions || a_bits != s_bits)) {
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* We are expecting a_bits to max out at 7, much lower than ANDI. */
tcg_debug_assert(a_bits < 16);
- tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP0, addrlo, a_mask);
+ tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP0, addr, a_mask);
ldst->label_ptr[0] = s->code_ptr;
if (use_mips32r6_instructions) {
@@ -1340,7 +1324,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
}
}
- base = addrlo;
+ base = addr;
if (TCG_TARGET_REG_BITS == 64 && addr_type == TCG_TYPE_I32) {
tcg_out_ext32u(s, TCG_REG_A0, base);
base = TCG_REG_A0;
@@ -1460,14 +1444,13 @@ static void tcg_out_qemu_ld_unalign(TCGContext *s, TCGReg lo, TCGReg hi,
}
static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
MemOp opc = get_memop(oi);
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true);
+ ldst = prepare_host_addr(s, &h, addr, oi, true);
if (use_mips32r6_instructions || h.aa.align >= (opc & MO_SIZE)) {
tcg_out_qemu_ld_direct(s, datalo, datahi, h.base, opc, data_type);
@@ -1547,14 +1530,13 @@ static void tcg_out_qemu_st_unalign(TCGContext *s, TCGReg lo, TCGReg hi,
}
static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
MemOp opc = get_memop(oi);
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false);
+ ldst = prepare_host_addr(s, &h, addr, oi, false);
if (use_mips32r6_instructions || h.aa.align >= (opc & MO_SIZE)) {
tcg_out_qemu_st_direct(s, datalo, datahi, h.base, opc);
@@ -2096,24 +2078,24 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
break;
case INDEX_op_qemu_ld_i32:
- tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I32);
+ tcg_out_qemu_ld(s, a0, 0, a1, a2, TCG_TYPE_I32);
break;
case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I64);
+ tcg_out_qemu_ld(s, a0, 0, a1, a2, TCG_TYPE_I64);
} else {
- tcg_out_qemu_ld(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64);
+ tcg_out_qemu_ld(s, a0, a1, a2, args[3], TCG_TYPE_I64);
}
break;
case INDEX_op_qemu_st_i32:
- tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I32);
+ tcg_out_qemu_st(s, a0, 0, a1, a2, TCG_TYPE_I32);
break;
case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I64);
+ tcg_out_qemu_st(s, a0, 0, a1, a2, TCG_TYPE_I64);
} else {
- tcg_out_qemu_st(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64);
+ tcg_out_qemu_st(s, a0, a1, a2, args[3], TCG_TYPE_I64);
}
break;
--
2.43.0
* [PATCH 06/11] tcg/ppc: Drop addrhi from prepare_host_addr
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (4 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 05/11] tcg/mips: " Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-13 14:34 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 07/11] tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst Richard Henderson
` (5 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
The guest address will now always fit in one register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/ppc/tcg-target.c.inc | 75 ++++++++++++----------------------------
1 file changed, 23 insertions(+), 52 deletions(-)
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 801cb6f3cb..74b93f4b57 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -2438,8 +2438,7 @@ bool tcg_target_has_memory_bswap(MemOp memop)
* is required and fill in @h with the host address for the fast path.
*/
static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, bool is_ld)
+ TCGReg addr, MemOpIdx oi, bool is_ld)
{
TCGType addr_type = s->addr_type;
TCGLabelQemuLdst *ldst = NULL;
@@ -2474,8 +2473,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, mask_off);
@@ -2483,10 +2481,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
/* Extract the page index, shifted into place for tlb index. */
if (TCG_TARGET_REG_BITS == 32) {
- tcg_out_shri32(s, TCG_REG_R0, addrlo,
+ tcg_out_shri32(s, TCG_REG_R0, addr,
s->page_bits - CPU_TLB_ENTRY_BITS);
} else {
- tcg_out_shri64(s, TCG_REG_R0, addrlo,
+ tcg_out_shri64(s, TCG_REG_R0, addr,
s->page_bits - CPU_TLB_ENTRY_BITS);
}
tcg_out32(s, AND | SAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_R0));
@@ -2534,10 +2532,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
if (a_bits < s_bits) {
a_bits = s_bits;
}
- tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0,
+ tcg_out_rlw(s, RLWINM, TCG_REG_R0, addr, 0,
(32 - a_bits) & 31, 31 - s->page_bits);
} else {
- TCGReg t = addrlo;
+ TCGReg t = addr;
/*
* If the access is unaligned, we need to make sure we fail if we
@@ -2566,30 +2564,8 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
}
}
- if (TCG_TARGET_REG_BITS == 32 && addr_type != TCG_TYPE_I32) {
- /* Low part comparison into cr7. */
- tcg_out_cmp(s, TCG_COND_EQ, TCG_REG_R0, TCG_REG_TMP2,
- 0, 7, TCG_TYPE_I32);
-
- /* Load the high part TLB comparator into TMP2. */
- tcg_out_ld(s, TCG_TYPE_I32, TCG_REG_TMP2, TCG_REG_TMP1,
- cmp_off + 4 * !HOST_BIG_ENDIAN);
-
- /* Load addend, deferred for this case. */
- tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_REG_TMP1,
- offsetof(CPUTLBEntry, addend));
-
- /* High part comparison into cr6. */
- tcg_out_cmp(s, TCG_COND_EQ, addrhi, TCG_REG_TMP2,
- 0, 6, TCG_TYPE_I32);
-
- /* Combine comparisons into cr0. */
- tcg_out32(s, CRAND | BT(0, CR_EQ) | BA(6, CR_EQ) | BB(7, CR_EQ));
- } else {
- /* Full comparison into cr0. */
- tcg_out_cmp(s, TCG_COND_EQ, TCG_REG_R0, TCG_REG_TMP2,
- 0, 0, addr_type);
- }
+ /* Full comparison into cr0. */
+ tcg_out_cmp(s, TCG_COND_EQ, TCG_REG_R0, TCG_REG_TMP2, 0, 0, addr_type);
/* Load a pointer into the current opcode w/conditional branch-link. */
ldst->label_ptr[0] = s->code_ptr;
@@ -2601,12 +2577,11 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addrlo;
- ldst->addrhi_reg = addrhi;
+ ldst->addrlo_reg = addr;
/* We are expecting a_bits to max out at 7, much lower than ANDI. */
tcg_debug_assert(a_bits < 16);
- tcg_out32(s, ANDI | SAI(addrlo, TCG_REG_R0, (1 << a_bits) - 1));
+ tcg_out32(s, ANDI | SAI(addr, TCG_REG_R0, (1 << a_bits) - 1));
ldst->label_ptr[0] = s->code_ptr;
tcg_out32(s, BC | BI(0, CR_EQ) | BO_COND_FALSE | LK);
@@ -2617,24 +2592,23 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
if (TCG_TARGET_REG_BITS == 64 && addr_type == TCG_TYPE_I32) {
/* Zero-extend the guest address for use in the host address. */
- tcg_out_ext32u(s, TCG_REG_TMP2, addrlo);
+ tcg_out_ext32u(s, TCG_REG_TMP2, addr);
h->index = TCG_REG_TMP2;
} else {
- h->index = addrlo;
+ h->index = addr;
}
return ldst;
}
static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
MemOp opc = get_memop(oi);
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true);
+ ldst = prepare_host_addr(s, &h, addr, oi, true);
if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) {
if (opc & MO_BSWAP) {
@@ -2678,14 +2652,13 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi,
}
static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi,
- TCGReg addrlo, TCGReg addrhi,
- MemOpIdx oi, TCGType data_type)
+ TCGReg addr, MemOpIdx oi, TCGType data_type)
{
MemOp opc = get_memop(oi);
TCGLabelQemuLdst *ldst;
HostAddress h;
- ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false);
+ ldst = prepare_host_addr(s, &h, addr, oi, false);
if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) {
if (opc & MO_BSWAP) {
@@ -2729,7 +2702,7 @@ static void tcg_out_qemu_ldst_i128(TCGContext *s, TCGReg datalo, TCGReg datahi,
uint32_t insn;
TCGReg index;
- ldst = prepare_host_addr(s, &h, addr_reg, -1, oi, is_ld);
+ ldst = prepare_host_addr(s, &h, addr_reg, oi, is_ld);
/* Compose the final address, as LQ/STQ have no indexing. */
index = h.index;
@@ -3309,14 +3282,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
break;
case INDEX_op_qemu_ld_i32:
- tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
+ tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], TCG_TYPE_I32);
break;
case INDEX_op_qemu_ld_i64:
if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_ld(s, args[0], -1, args[1], -1,
- args[2], TCG_TYPE_I64);
+ tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], TCG_TYPE_I64);
} else {
- tcg_out_qemu_ld(s, args[0], args[1], args[2], -1,
+ tcg_out_qemu_ld(s, args[0], args[1], args[2],
args[3], TCG_TYPE_I64);
}
break;
@@ -3326,14 +3298,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
break;
case INDEX_op_qemu_st_i32:
- tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
+ tcg_out_qemu_st(s, args[0], -1, args[1], args[2], TCG_TYPE_I32);
break;
case INDEX_op_qemu_st_i64:
if (TCG_TARGET_REG_BITS == 64) {
- tcg_out_qemu_st(s, args[0], -1, args[1], -1,
- args[2], TCG_TYPE_I64);
+ tcg_out_qemu_st(s, args[0], -1, args[1], args[2], TCG_TYPE_I64);
} else {
- tcg_out_qemu_st(s, args[0], args[1], args[2], -1,
+ tcg_out_qemu_st(s, args[0], args[1], args[2],
args[3], TCG_TYPE_I64);
}
break;
--
2.43.0
* [PATCH 07/11] tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (5 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 06/11] tcg/ppc: " Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-12 7:14 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters Richard Henderson
` (4 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
There is now always only one guest address register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/tcg.c | 18 +++++++++---------
tcg/aarch64/tcg-target.c.inc | 4 ++--
tcg/arm/tcg-target.c.inc | 4 ++--
tcg/i386/tcg-target.c.inc | 4 ++--
tcg/loongarch64/tcg-target.c.inc | 4 ++--
tcg/mips/tcg-target.c.inc | 4 ++--
tcg/ppc/tcg-target.c.inc | 4 ++--
tcg/riscv/tcg-target.c.inc | 4 ++--
tcg/s390x/tcg-target.c.inc | 4 ++--
tcg/sparc64/tcg-target.c.inc | 4 ++--
10 files changed, 27 insertions(+), 27 deletions(-)
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 295004b74f..57f72b78d4 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -100,8 +100,7 @@ struct TCGLabelQemuLdst {
bool is_ld; /* qemu_ld: true, qemu_st: false */
MemOpIdx oi;
TCGType type; /* result type of a load */
- TCGReg addrlo_reg; /* reg index for low word of guest virtual addr */
- TCGReg addrhi_reg; /* reg index for high word of guest virtual addr */
+ TCGReg addr_reg; /* reg index for guest virtual addr */
TCGReg datalo_reg; /* reg index for low word to be loaded or stored */
TCGReg datahi_reg; /* reg index for high word to be loaded or stored */
const tcg_insn_unit *raddr; /* addr of the next IR of qemu_ld/st IR */
@@ -6067,7 +6066,7 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
*/
tcg_out_helper_add_mov(mov, loc + HOST_BIG_ENDIAN,
TCG_TYPE_I32, TCG_TYPE_I32,
- ldst->addrlo_reg, -1);
+ ldst->addr_reg, -1);
tcg_out_helper_load_slots(s, 1, mov, parm);
tcg_out_helper_load_imm(s, loc[!HOST_BIG_ENDIAN].arg_slot,
@@ -6075,7 +6074,7 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
next_arg += 2;
} else {
nmov = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type,
- ldst->addrlo_reg, ldst->addrhi_reg);
+ ldst->addr_reg, -1);
tcg_out_helper_load_slots(s, nmov, mov, parm);
next_arg += nmov;
}
@@ -6232,21 +6231,22 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
/* Handle addr argument. */
loc = &info->in[next_arg];
- if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I32) {
+ tcg_debug_assert(s->addr_type <= TCG_TYPE_REG);
+ if (TCG_TARGET_REG_BITS == 32) {
/*
- * 32-bit host with 32-bit guest: zero-extend the guest address
+ * 32-bit host (and thus 32-bit guest): zero-extend the guest address
* to 64-bits for the helper by storing the low part. Later,
* after we have processed the register inputs, we will load a
* zero for the high part.
*/
tcg_out_helper_add_mov(mov, loc + HOST_BIG_ENDIAN,
TCG_TYPE_I32, TCG_TYPE_I32,
- ldst->addrlo_reg, -1);
+ ldst->addr_reg, -1);
next_arg += 2;
nmov += 1;
} else {
n = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type,
- ldst->addrlo_reg, ldst->addrhi_reg);
+ ldst->addr_reg, -1);
next_arg += n;
nmov += n;
}
@@ -6294,7 +6294,7 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst,
g_assert_not_reached();
}
- if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I32) {
+ if (TCG_TARGET_REG_BITS == 32) {
/* Zero extend the address by loading a zero for the high part. */
loc = &info->in[1 + !HOST_BIG_ENDIAN];
tcg_out_helper_load_imm(s, loc->arg_slot, TCG_TYPE_I32, 0, parm);
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 45dc2c649b..6f383c1592 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -1775,7 +1775,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
mask_type = (s->page_bits + s->tlb_dyn_max_bits > 32
? TCG_TYPE_I64 : TCG_TYPE_I32);
@@ -1837,7 +1837,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
/* tst addr, #mask */
tcg_out_logicali(s, I3404_ANDSI, 0, TCG_REG_XZR, addr_reg, a_mask);
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 252d9aa7e5..865aab0ccd 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1491,7 +1491,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* Load cpu->neg.tlb.f[mmu_idx].{mask,table} into {r0,r1}. */
QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0);
@@ -1558,7 +1558,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* We are expecting alignment to max out at 7 */
tcg_debug_assert(a_mask <= 0xff);
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index b33fe7fe23..cfea4c496d 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2201,7 +2201,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
if (TCG_TARGET_REG_BITS == 64) {
ttype = s->addr_type;
@@ -2257,7 +2257,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* jne slow_path */
jcc = tcg_out_cmp(s, TCG_COND_TSTNE, addr, a_mask, true, false);
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index 4f32bf3e97..dd67e8f6bc 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -1010,7 +1010,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, TCG_AREG0, mask_ofs);
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs);
@@ -1055,7 +1055,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
/*
* Without micro-architecture details, we don't know which of
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index 153ce1f3c3..d744b853cd 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1244,7 +1244,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */
tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_AREG0, mask_off);
@@ -1309,7 +1309,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* We are expecting a_bits to max out at 7, much lower than ANDI. */
tcg_debug_assert(a_bits < 16);
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 74b93f4b57..2d16807ec7 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -2473,7 +2473,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, mask_off);
@@ -2577,7 +2577,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr;
+ ldst->addr_reg = addr;
/* We are expecting a_bits to max out at 7, much lower than ANDI. */
tcg_debug_assert(a_bits < 16);
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index 55a3398712..689fbea0df 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -1727,7 +1727,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
init_setting_vtype(s);
@@ -1790,7 +1790,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
init_setting_vtype(s);
diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc
index 6786e7b316..b2e1cd60ff 100644
--- a/tcg/s390x/tcg-target.c.inc
+++ b/tcg/s390x/tcg-target.c.inc
@@ -1920,7 +1920,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
tcg_out_sh64(s, RSY_SRLG, TCG_TMP0, addr_reg, TCG_REG_NONE,
s->page_bits - CPU_TLB_ENTRY_BITS);
@@ -1974,7 +1974,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
tcg_debug_assert(a_mask <= 0xffff);
tcg_out_insn(s, RI, TMLL, addr_reg, a_mask);
diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc
index ea0a3b8692..527af5665d 100644
--- a/tcg/sparc64/tcg-target.c.inc
+++ b/tcg/sparc64/tcg-target.c.inc
@@ -1127,7 +1127,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
ldst->label_ptr[0] = s->code_ptr;
/* bne,pn %[xi]cc, label0 */
@@ -1147,7 +1147,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
ldst->oi = oi;
- ldst->addrlo_reg = addr_reg;
+ ldst->addr_reg = addr_reg;
ldst->label_ptr[0] = s->code_ptr;
/* bne,pn %icc, label0 */
--
2.43.0
* [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (6 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 07/11] tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-12 7:16 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page Richard Henderson
` (3 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
The declaration uses uint64_t for addr.
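For illustration, the mismatch being fixed (prototypes assumed from the
public plugin header and plugins/api.c):
    /* public header */
    bool qemu_plugin_read_memory_vaddr(uint64_t addr,
                                       GByteArray *data, size_t len);
    /* definition before this patch */
    bool qemu_plugin_read_memory_vaddr(vaddr addr,
                                       GByteArray *data, size_t len);
The two only agree while vaddr is uint64_t; once vaddr becomes uintptr_t
later in this series, they diverge on 32-bit hosts, so the definition
must match the public declaration.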
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
plugins/api.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/plugins/api.c b/plugins/api.c
index 4110cfaa23..cf8cdf076a 100644
--- a/plugins/api.c
+++ b/plugins/api.c
@@ -561,7 +561,7 @@ GArray *qemu_plugin_get_registers(void)
return create_register_handles(regs);
}
-bool qemu_plugin_read_memory_vaddr(vaddr addr, GByteArray *data, size_t len)
+bool qemu_plugin_read_memory_vaddr(uint64_t addr, GByteArray *data, size_t len)
{
g_assert(current_cpu);
--
2.43.0
* [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (7 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-12 7:22 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 10/11] include/exec: Change vaddr to uintptr_t Richard Henderson
` (2 subsequent siblings)
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
The declarations use vaddr for size.
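For illustration, the assumed declaration from the header:
    void tlb_set_page(CPUState *cpu, vaddr addr, hwaddr paddr,
                      int prot, int mmu_idx, vaddr size);
The definitions used uint64_t for size instead, which is harmless while
vaddr is uint64_t but becomes a type conflict once vaddr narrows to
uintptr_t later in this series.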
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 17e2251695..75d075d044 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1193,7 +1193,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
hwaddr paddr, MemTxAttrs attrs, int prot,
- int mmu_idx, uint64_t size)
+ int mmu_idx, vaddr size)
{
CPUTLBEntryFull full = {
.phys_addr = paddr,
@@ -1208,7 +1208,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
void tlb_set_page(CPUState *cpu, vaddr addr,
hwaddr paddr, int prot,
- int mmu_idx, uint64_t size)
+ int mmu_idx, vaddr size)
{
tlb_set_page_with_attrs(cpu, addr, paddr, MEMTXATTRS_UNSPECIFIED,
prot, mmu_idx, size);
--
2.43.0
* [PATCH 10/11] include/exec: Change vaddr to uintptr_t
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (8 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-12 7:24 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry Richard Henderson
2025-02-15 20:06 ` [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
Since we no longer support 64-bit guests on 32-bit hosts,
we can use a 32-bit type on a 32-bit host.
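Call sites are unaffected as long as they pair vaddr with its PRI
macros; a hypothetical example:
    vaddr addr = 0x1000;    /* example value */
    qemu_log("guest fault at 0x%" VADDR_PRIx "\n", addr);
which is why the format macros must switch to the PRI*PTR family
together with the typedef.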
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/vaddr.h | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/exec/vaddr.h b/include/exec/vaddr.h
index b9844afc77..28bec632fb 100644
--- a/include/exec/vaddr.h
+++ b/include/exec/vaddr.h
@@ -6,13 +6,15 @@
/**
* vaddr:
* Type wide enough to contain any #target_ulong virtual address.
+ * We do not support 64-bit guests on 32-bit hosts (checked at configure
+ * time), so a guest virtual address always fits in a host pointer.
*/
-typedef uint64_t vaddr;
-#define VADDR_PRId PRId64
-#define VADDR_PRIu PRIu64
-#define VADDR_PRIo PRIo64
-#define VADDR_PRIx PRIx64
-#define VADDR_PRIX PRIX64
-#define VADDR_MAX UINT64_MAX
+typedef uintptr_t vaddr;
+#define VADDR_PRId PRIdPTR
+#define VADDR_PRIu PRIuPTR
+#define VADDR_PRIo PRIoPTR
+#define VADDR_PRIx PRIxPTR
+#define VADDR_PRIX PRIXPTR
+#define VADDR_MAX UINTPTR_MAX
#endif
--
2.43.0
* [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (9 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 10/11] include/exec: Change vaddr to uintptr_t Richard Henderson
@ 2025-02-05 4:03 ` Richard Henderson
2025-02-13 14:29 ` Philippe Mathieu-Daudé
2025-02-15 20:06 ` [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
11 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-05 4:03 UTC (permalink / raw)
To: qemu-devel
Since we no longer support 64-bit guests on 32-bit hosts,
we can use a 32-bit type on a 32-bit host. This shrinks
the size of the structure to 16 bytes on a 32-bit host.
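The arithmetic: the entry is three guest-address words plus one addend,
so a 32-bit host needs 4 * sizeof(uintptr_t) = 4 * 4 = 16 bytes
(CPU_TLB_ENTRY_BITS == 4), while a 64-bit host keeps 4 * 8 = 32 bytes
(CPU_TLB_ENTRY_BITS == 5). The existing QEMU_BUILD_BUG_ON continues to
enforce that sizeof(CPUTLBEntry) matches 1 << CPU_TLB_ENTRY_BITS.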
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/exec/tlb-common.h | 10 +++++-----
accel/tcg/cputlb.c | 21 ++++-----------------
tcg/arm/tcg-target.c.inc | 1 -
tcg/mips/tcg-target.c.inc | 9 ++-------
tcg/ppc/tcg-target.c.inc | 21 +++++----------------
tcg/riscv/tcg-target.c.inc | 1 -
6 files changed, 16 insertions(+), 47 deletions(-)
diff --git a/include/exec/tlb-common.h b/include/exec/tlb-common.h
index dc5a5faa0b..03b5a8ffc7 100644
--- a/include/exec/tlb-common.h
+++ b/include/exec/tlb-common.h
@@ -19,14 +19,14 @@
#ifndef EXEC_TLB_COMMON_H
#define EXEC_TLB_COMMON_H 1
-#define CPU_TLB_ENTRY_BITS 5
+#define CPU_TLB_ENTRY_BITS (HOST_LONG_BITS == 32 ? 4 : 5)
/* Minimalized TLB entry for use by TCG fast path. */
typedef union CPUTLBEntry {
struct {
- uint64_t addr_read;
- uint64_t addr_write;
- uint64_t addr_code;
+ uintptr_t addr_read;
+ uintptr_t addr_write;
+ uintptr_t addr_code;
/*
* Addend to virtual address to get host address. IO accesses
* use the corresponding iotlb value.
@@ -37,7 +37,7 @@ typedef union CPUTLBEntry {
* Padding to get a power of two size, as well as index
* access to addr_{read,write,code}.
*/
- uint64_t addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(uint64_t)];
+ uintptr_t addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(uintptr_t)];
} CPUTLBEntry;
QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 75d075d044..ad158050a1 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -104,22 +104,15 @@ static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry,
{
/* Do not rearrange the CPUTLBEntry structure members. */
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) !=
- MMU_DATA_LOAD * sizeof(uint64_t));
+ MMU_DATA_LOAD * sizeof(uintptr_t));
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) !=
- MMU_DATA_STORE * sizeof(uint64_t));
+ MMU_DATA_STORE * sizeof(uintptr_t));
QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) !=
- MMU_INST_FETCH * sizeof(uint64_t));
+ MMU_INST_FETCH * sizeof(uintptr_t));
-#if TARGET_LONG_BITS == 32
- /* Use qatomic_read, in case of addr_write; only care about low bits. */
- const uint32_t *ptr = (uint32_t *)&entry->addr_idx[access_type];
- ptr += HOST_BIG_ENDIAN;
- return qatomic_read(ptr);
-#else
- const uint64_t *ptr = &entry->addr_idx[access_type];
+ const uintptr_t *ptr = &entry->addr_idx[access_type];
/* ofs might correspond to .addr_write, so use qatomic_read */
return qatomic_read(ptr);
-#endif
}
static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
@@ -899,14 +892,8 @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry,
addr &= TARGET_PAGE_MASK;
addr += tlb_entry->addend;
if ((addr - start) < length) {
-#if TARGET_LONG_BITS == 32
- uint32_t *ptr_write = (uint32_t *)&tlb_entry->addr_write;
- ptr_write += HOST_BIG_ENDIAN;
- qatomic_set(ptr_write, *ptr_write | TLB_NOTDIRTY);
-#else
qatomic_set(&tlb_entry->addr_write,
tlb_entry->addr_write | TLB_NOTDIRTY);
-#endif
}
}
}
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 865aab0ccd..f03bb76396 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1506,7 +1506,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
* Add the tlb_table pointer, creating the CPUTLBEntry address in R1.
* Load the tlb comparator into R2 and the fast path addend into R1.
*/
- QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN);
if (cmp_off == 0) {
tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, TCG_REG_R1, TCG_REG_R0);
} else {
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index d744b853cd..6fe7a77813 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1262,13 +1262,8 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
/* Add the tlb_table pointer, creating the CPUTLBEntry address. */
tcg_out_opc_reg(s, ALIAS_PADD, TCG_TMP3, TCG_TMP3, TCG_TMP1);
- if (TCG_TARGET_REG_BITS == 32 || addr_type == TCG_TYPE_I32) {
- /* Load the (low half) tlb comparator. */
- tcg_out_ld(s, TCG_TYPE_I32, TCG_TMP0, TCG_TMP3,
- cmp_off + HOST_BIG_ENDIAN * 4);
- } else {
- tcg_out_ld(s, TCG_TYPE_I64, TCG_TMP0, TCG_TMP3, cmp_off);
- }
+ /* Load the tlb comparator. */
+ tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_TMP3, cmp_off);
if (TCG_TARGET_REG_BITS == 64 || addr_type == TCG_TYPE_I32) {
/* Load the tlb addend for the fast path. */
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 2d16807ec7..822925a19b 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -2490,27 +2490,16 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
tcg_out32(s, AND | SAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_R0));
/*
- * Load the (low part) TLB comparator into TMP2.
+ * Load the TLB comparator into TMP2.
* For 64-bit host, always load the entire 64-bit slot for simplicity.
* We will ignore the high bits with tcg_out_cmp(..., addr_type).
*/
- if (TCG_TARGET_REG_BITS == 64) {
- if (cmp_off == 0) {
- tcg_out32(s, LDUX | TAB(TCG_REG_TMP2,
- TCG_REG_TMP1, TCG_REG_TMP2));
- } else {
- tcg_out32(s, ADD | TAB(TCG_REG_TMP1,
- TCG_REG_TMP1, TCG_REG_TMP2));
- tcg_out_ld(s, TCG_TYPE_I64, TCG_REG_TMP2,
- TCG_REG_TMP1, cmp_off);
- }
- } else if (cmp_off == 0 && !HOST_BIG_ENDIAN) {
- tcg_out32(s, LWZUX | TAB(TCG_REG_TMP2,
- TCG_REG_TMP1, TCG_REG_TMP2));
+ if (cmp_off == 0) {
+ tcg_out32(s, (TCG_TARGET_REG_BITS == 64 ? LDUX : LWZUX)
+ | TAB(TCG_REG_TMP2, TCG_REG_TMP1, TCG_REG_TMP2));
} else {
tcg_out32(s, ADD | TAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_TMP2));
- tcg_out_ld(s, TCG_TYPE_I32, TCG_REG_TMP2, TCG_REG_TMP1,
- cmp_off + 4 * HOST_BIG_ENDIAN);
+ tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP1, cmp_off);
}
/*
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index 689fbea0df..dae892437e 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -1760,7 +1760,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase,
}
/* Load the tlb comparator and the addend. */
- QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN);
tcg_out_ld(s, addr_type, TCG_REG_TMP0, TCG_REG_TMP2,
is_ld ? offsetof(CPUTLBEntry, addr_read)
: offsetof(CPUTLBEntry, addr_write));
--
2.43.0
* Re: [PATCH 07/11] tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst
2025-02-05 4:03 ` [PATCH 07/11] tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst Richard Henderson
@ 2025-02-12 7:14 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-12 7:14 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> There is now always only one guest address register.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/tcg.c | 18 +++++++++---------
> tcg/aarch64/tcg-target.c.inc | 4 ++--
> tcg/arm/tcg-target.c.inc | 4 ++--
> tcg/i386/tcg-target.c.inc | 4 ++--
> tcg/loongarch64/tcg-target.c.inc | 4 ++--
> tcg/mips/tcg-target.c.inc | 4 ++--
> tcg/ppc/tcg-target.c.inc | 4 ++--
> tcg/riscv/tcg-target.c.inc | 4 ++--
> tcg/s390x/tcg-target.c.inc | 4 ++--
> tcg/sparc64/tcg-target.c.inc | 4 ++--
> 10 files changed, 27 insertions(+), 27 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters
2025-02-05 4:03 ` [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters Richard Henderson
@ 2025-02-12 7:16 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-12 7:16 UTC (permalink / raw)
To: Richard Henderson, qemu-devel, Rowan Hart
On 5/2/25 05:03, Richard Henderson wrote:
> The declaration uses uint64_t for addr.
>
Fixes: 595cd9ce2ec ("plugins: add plugin API to read guest memory")
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> plugins/api.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/plugins/api.c b/plugins/api.c
> index 4110cfaa23..cf8cdf076a 100644
> --- a/plugins/api.c
> +++ b/plugins/api.c
> @@ -561,7 +561,7 @@ GArray *qemu_plugin_get_registers(void)
> return create_register_handles(regs);
> }
>
> -bool qemu_plugin_read_memory_vaddr(vaddr addr, GByteArray *data, size_t len)
> +bool qemu_plugin_read_memory_vaddr(uint64_t addr, GByteArray *data, size_t len)
> {
> g_assert(current_cpu);
>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
2025-02-05 4:03 ` [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page Richard Henderson
@ 2025-02-12 7:22 ` Philippe Mathieu-Daudé
2025-02-12 18:21 ` Richard Henderson
0 siblings, 1 reply; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-12 7:22 UTC (permalink / raw)
To: Richard Henderson, qemu-devel, Anton Johansson
On 5/2/25 05:03, Richard Henderson wrote:
> The declarations use vaddr for size.
Which seems dubious, since TARGET_PAGE_SIZE is int IIUC.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/cputlb.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 17e2251695..75d075d044 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1193,7 +1193,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
>
> void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
> hwaddr paddr, MemTxAttrs attrs, int prot,
> - int mmu_idx, uint64_t size)
> + int mmu_idx, vaddr size)
> {
> CPUTLBEntryFull full = {
> .phys_addr = paddr,
> @@ -1208,7 +1208,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr,
>
> void tlb_set_page(CPUState *cpu, vaddr addr,
> hwaddr paddr, int prot,
> - int mmu_idx, uint64_t size)
> + int mmu_idx, vaddr size)
> {
> tlb_set_page_with_attrs(cpu, addr, paddr, MEMTXATTRS_UNSPECIFIED,
> prot, mmu_idx, size);
* Re: [PATCH 10/11] include/exec: Change vaddr to uintptr_t
2025-02-05 4:03 ` [PATCH 10/11] include/exec: Change vaddr to uintptr_t Richard Henderson
@ 2025-02-12 7:24 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-12 7:24 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> Since we no longer support 64-bit guests on 32-bit hosts,
> we can use a 32-bit type on a 32-bit host.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> include/exec/vaddr.h | 16 +++++++++-------
> 1 file changed, 9 insertions(+), 7 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
2025-02-12 7:22 ` Philippe Mathieu-Daudé
@ 2025-02-12 18:21 ` Richard Henderson
2025-02-17 7:32 ` Philippe Mathieu-Daudé
0 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2025-02-12 18:21 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, qemu-devel, Anton Johansson
On 2/11/25 23:22, Philippe Mathieu-Daudé wrote:
> On 5/2/25 05:03, Richard Henderson wrote:
>> The declarations use vaddr for size.
>
> Which seems dubious, since TARGET_PAGE_SIZE is int IIUC.
This parameter must handle guest huge pages. Most often this is 2MiB or 1GiB, which do
fit in "int", but logically could be any size at all. So vaddr seems the correct type.
r~
* Re: [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry
2025-02-05 4:03 ` [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry Richard Henderson
@ 2025-02-13 14:29 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-13 14:29 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> Since we no longer support 64-bit guests on 32-bit hosts,
> we can use a 32-bit type on a 32-bit host. This shrinks
> the size of the structure to 16 bytes on a 32-bit host.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> include/exec/tlb-common.h | 10 +++++-----
> accel/tcg/cputlb.c | 21 ++++-----------------
> tcg/arm/tcg-target.c.inc | 1 -
> tcg/mips/tcg-target.c.inc | 9 ++-------
> tcg/ppc/tcg-target.c.inc | 21 +++++----------------
> tcg/riscv/tcg-target.c.inc | 1 -
> 6 files changed, 16 insertions(+), 47 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 05/11] tcg/mips: Drop addrhi from prepare_host_addr
2025-02-05 4:03 ` [PATCH 05/11] tcg/mips: " Richard Henderson
@ 2025-02-13 14:33 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-13 14:33 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> The guest address will now always fit in one register.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/mips/tcg-target.c.inc | 62 ++++++++++++++-------------------------
> 1 file changed, 22 insertions(+), 40 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 06/11] tcg/ppc: Drop addrhi from prepare_host_addr
2025-02-05 4:03 ` [PATCH 06/11] tcg/ppc: " Richard Henderson
@ 2025-02-13 14:34 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-13 14:34 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> The guest address will now always fit in one register.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/ppc/tcg-target.c.inc | 75 ++++++++++++----------------------------
> 1 file changed, 23 insertions(+), 52 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 04/11] tcg/i386: Drop addrhi from prepare_host_addr
2025-02-05 4:03 ` [PATCH 04/11] tcg/i386: " Richard Henderson
@ 2025-02-13 14:35 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-13 14:35 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> The guest address will now always fit in one register.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/i386/tcg-target.c.inc | 56 ++++++++++++++-------------------------
> 1 file changed, 20 insertions(+), 36 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst
2025-02-05 4:03 ` [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst Richard Henderson
@ 2025-02-13 14:37 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-13 14:37 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/tcg-op-ldst.c | 22 ++++------------------
> 1 file changed, 4 insertions(+), 18 deletions(-)
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
` (10 preceding siblings ...)
2025-02-05 4:03 ` [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry Richard Henderson
@ 2025-02-15 20:06 ` Richard Henderson
11 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2025-02-15 20:06 UTC (permalink / raw)
To: qemu-devel
On 2/4/25 20:03, Richard Henderson wrote:
> This is not complete by any means, but it's a start.
>
>
> r~
>
>
> Based-on: 20250204215359.1238808-1-richard.henderson@linaro.org
> ("[PATCH v3 00/12] meson: Deprecate 32-bit host support")
>
>
> Richard Henderson (11):
> tcg: Drop support for two address registers in gen_ldst
> tcg: Merge INDEX_op_qemu_*_{a32,a64}_*
> tcg/arm: Drop addrhi from prepare_host_addr
> tcg/i386: Drop addrhi from prepare_host_addr
> tcg/mips: Drop addrhi from prepare_host_addr
> tcg/ppc: Drop addrhi from prepare_host_addr
> tcg: Replace addr{lo,hi}_reg with addr_reg in TCGLabelQemuLdst
> plugins: Fix qemu_plugin_read_memory_vaddr parameters
> accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
> include/exec: Change vaddr to uintptr_t
> include/exec: Use uintptr_t in CPUTLBEntry
Queued.
r~
* Re: [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page
2025-02-12 18:21 ` Richard Henderson
@ 2025-02-17 7:32 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-17 7:32 UTC (permalink / raw)
To: Richard Henderson, qemu-devel, Anton Johansson
On 12/2/25 19:21, Richard Henderson wrote:
> On 2/11/25 23:22, Philippe Mathieu-Daudé wrote:
>> On 5/2/25 05:03, Richard Henderson wrote:
>>> The declarations use vaddr for size.
>>
>> Which seems dubious, since TARGET_PAGE_SIZE is int IIUC.
>
> This parameter must handle guest huge pages. Most often this is 2MiB or
> 1GiB, which do fit in "int", but logically could be any size at all. So
> vaddr seems the correct type.
OK, got it.
* Re: [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr
2025-02-05 4:03 ` [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr Richard Henderson
@ 2025-02-17 12:12 ` Philippe Mathieu-Daudé
2025-02-17 16:44 ` Richard Henderson
0 siblings, 1 reply; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-17 12:12 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 5/2/25 05:03, Richard Henderson wrote:
> The guest address will now always be TCG_TYPE_I32.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tcg/arm/tcg-target.c.inc | 63 ++++++++++++++--------------------------
> 1 file changed, 21 insertions(+), 42 deletions(-)
> /*
> * Add the tlb_table pointer, creating the CPUTLBEntry address in R1.
> - * Load the tlb comparator into R2/R3 and the fast path addend into R1.
> + * Load the tlb comparator into R2 and the fast path addend into R1.
> */
> QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN);
> if (cmp_off == 0) {
> - if (s->addr_type == TCG_TYPE_I32) {
> - tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2,
> - TCG_REG_R1, TCG_REG_R0);
> - } else {
> - tcg_out_ldrd_rwb(s, COND_AL, TCG_REG_R2,
> - TCG_REG_R1, TCG_REG_R0);
> - }
> + tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, TCG_REG_R1, TCG_REG_R0);
> } else {
> tcg_out_dat_reg(s, COND_AL, ARITH_ADD,
> TCG_REG_R1, TCG_REG_R1, TCG_REG_R0, 0);
> - if (s->addr_type == TCG_TYPE_I32) {
> - tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
> - } else {
> - tcg_out_ldrd_8(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
> - }
> + tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off);
> }
With this change:
-- >8 --
@@ -678,8 +678,2 @@ static void tcg_out_ldrd_r(TCGContext *s, ARMCond cond, TCGReg rt,
-static void __attribute__((unused))
-tcg_out_ldrd_rwb(TCGContext *s, ARMCond cond, TCGReg rt, TCGReg rn, TCGReg rm)
-{
- tcg_out_memop_r(s, cond, INSN_LDRD_REG, rt, rn, rm, 1, 1, 1);
-}
-
static void __attribute__((unused))
---
squashed:
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* Re: [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_*
2025-02-05 4:03 ` [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_* Richard Henderson
@ 2025-02-17 14:22 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-02-17 14:22 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: qemu-ppc, Stefan Weil
On 5/2/25 05:03, Richard Henderson wrote:
> Since 64-on-32 is now unsupported, guest addresses always
> fit in one host register. Drop the replication of opcodes.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> include/tcg/tcg-opc.h | 28 ++------
> tcg/optimize.c | 21 ++----
> tcg/tcg-op-ldst.c | 82 +++++----------------
> tcg/tcg.c | 42 ++++-------
> tcg/tci.c | 119 ++++++-------------------------
> tcg/aarch64/tcg-target.c.inc | 36 ++++------
> tcg/arm/tcg-target.c.inc | 40 +++--------
> tcg/i386/tcg-target.c.inc | 69 ++++--------------
> tcg/loongarch64/tcg-target.c.inc | 36 ++++------
> tcg/mips/tcg-target.c.inc | 51 +++----------
> tcg/ppc/tcg-target.c.inc | 68 ++++--------------
> tcg/riscv/tcg-target.c.inc | 24 +++----
> tcg/s390x/tcg-target.c.inc | 36 ++++------
> tcg/sparc64/tcg-target.c.inc | 24 +++----
> tcg/tci/tcg-target.c.inc | 60 ++++------------
> 15 files changed, 177 insertions(+), 559 deletions(-)
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index 43b6712286..295004b74f 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> - case INDEX_op_qemu_ld_a32_i32:
> - case INDEX_op_qemu_ld_a64_i32:
> - case INDEX_op_qemu_st_a32_i32:
> - case INDEX_op_qemu_st_a64_i32:
> - case INDEX_op_qemu_st8_a32_i32:
> - case INDEX_op_qemu_st8_a64_i32:
> - case INDEX_op_qemu_ld_a32_i64:
> - case INDEX_op_qemu_ld_a64_i64:
> - case INDEX_op_qemu_st_a32_i64:
> - case INDEX_op_qemu_st_a64_i64:
> - case INDEX_op_qemu_ld_a32_i128:
> - case INDEX_op_qemu_ld_a64_i128:
> - case INDEX_op_qemu_st_a32_i128:
> - case INDEX_op_qemu_st_a64_i128:
> + case INDEX_op_qemu_ld_i32:
> + case INDEX_op_qemu_st_i32:
> + case INDEX_op_qemu_st8_i32:
> + case INDEX_op_qemu_ld_i64:
> + case INDEX_op_qemu_st_i64:
> + case INDEX_op_qemu_ld_i128:
> + case INDEX_op_qemu_st_i128:
Nice :)
> diff --git a/tcg/tci.c b/tcg/tci.c
> index 8c1c53424d..d223258efe 100644
> --- a/tcg/tci.c
> +++ b/tcg/tci.c
> @@ -154,16 +154,6 @@ static void tci_args_rrrbb(uint32_t insn, TCGReg *r0, TCGReg *r1,
> *i4 = extract32(insn, 26, 6);
> }
>
> -static void tci_args_rrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1,
> - TCGReg *r2, TCGReg *r3, TCGReg *r4)
> -{
> - *r0 = extract32(insn, 8, 4);
> - *r1 = extract32(insn, 12, 4);
> - *r2 = extract32(insn, 16, 4);
> - *r3 = extract32(insn, 20, 4);
> - *r4 = extract32(insn, 24, 4);
> -}
> -
> static void tci_args_rrrr(uint32_t insn,
> TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGReg *r3)
> {
> @@ -912,43 +902,21 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
> tb_ptr = ptr;
> break;
>
> - case INDEX_op_qemu_ld_a32_i32:
> + case INDEX_op_qemu_ld_i32:
> tci_args_rrm(insn, &r0, &r1, &oi);
> - taddr = (uint32_t)regs[r1];
> - goto do_ld_i32;
> - case INDEX_op_qemu_ld_a64_i32:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tci_args_rrm(insn, &r0, &r1, &oi);
> - taddr = regs[r1];
> - } else {
> - tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
> - taddr = tci_uint64(regs[r2], regs[r1]);
> - oi = regs[r3];
> - }
> - do_ld_i32:
> + taddr = regs[r1];
> regs[r0] = tci_qemu_ld(env, taddr, oi, tb_ptr);
> break;
>
> - case INDEX_op_qemu_ld_a32_i64:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tci_args_rrm(insn, &r0, &r1, &oi);
> - taddr = (uint32_t)regs[r1];
> - } else {
> - tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
> - taddr = (uint32_t)regs[r2];
> - oi = regs[r3];
> - }
> - goto do_ld_i64;
> - case INDEX_op_qemu_ld_a64_i64:
> + case INDEX_op_qemu_ld_i64:
> if (TCG_TARGET_REG_BITS == 64) {
> tci_args_rrm(insn, &r0, &r1, &oi);
> taddr = regs[r1];
> } else {
> - tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
> - taddr = tci_uint64(regs[r3], regs[r2]);
> - oi = regs[r4];
> + tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
> + taddr = regs[r2];
> + oi = regs[r3];
> }
> - do_ld_i64:
> tmp64 = tci_qemu_ld(env, taddr, oi, tb_ptr);
> if (TCG_TARGET_REG_BITS == 32) {
> tci_write_reg64(regs, r1, r0, tmp64);
> @@ -957,47 +925,23 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env,
> }
> break;
>
> - case INDEX_op_qemu_st_a32_i32:
> + case INDEX_op_qemu_st_i32:
> tci_args_rrm(insn, &r0, &r1, &oi);
> - taddr = (uint32_t)regs[r1];
> - goto do_st_i32;
> - case INDEX_op_qemu_st_a64_i32:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tci_args_rrm(insn, &r0, &r1, &oi);
> - taddr = regs[r1];
> - } else {
> - tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
> - taddr = tci_uint64(regs[r2], regs[r1]);
> - oi = regs[r3];
> - }
> - do_st_i32:
> + taddr = regs[r1];
> tci_qemu_st(env, taddr, regs[r0], oi, tb_ptr);
> break;
>
> - case INDEX_op_qemu_st_a32_i64:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tci_args_rrm(insn, &r0, &r1, &oi);
> - tmp64 = regs[r0];
> - taddr = (uint32_t)regs[r1];
> - } else {
> - tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
> - tmp64 = tci_uint64(regs[r1], regs[r0]);
> - taddr = (uint32_t)regs[r2];
> - oi = regs[r3];
> - }
> - goto do_st_i64;
> - case INDEX_op_qemu_st_a64_i64:
> + case INDEX_op_qemu_st_i64:
> if (TCG_TARGET_REG_BITS == 64) {
> tci_args_rrm(insn, &r0, &r1, &oi);
> tmp64 = regs[r0];
> taddr = regs[r1];
> } else {
> - tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4);
> + tci_args_rrrr(insn, &r0, &r1, &r2, &r3);
> tmp64 = tci_uint64(regs[r1], regs[r0]);
> - taddr = tci_uint64(regs[r3], regs[r2]);
> - oi = regs[r4];
> + taddr = regs[r2];
> + oi = regs[r3];
> }
My TCI is rusty, but this LGTM.
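
The shape of the change is easier to see outside the interpreter loop:
on a 32-bit host the 64-bit value still occupies a register pair, but
the address now always fits in one register, so the five-register
(rrrrr) encoding disappears and rrrr is enough. A self-contained model
of the new decode path; demo_args_rrrr and demo_uint64 mirror
tci_args_rrrr and tci_uint64 above but are illustrative stand-ins, not
the QEMU functions:

#include <stdint.h>
#include <stdio.h>

static uint64_t demo_uint64(uint32_t hi, uint32_t lo)
{
    return ((uint64_t)hi << 32) | lo;
}

/* Decode four register fields, 4 bits each, starting at bit 8. */
static void demo_args_rrrr(uint32_t insn, unsigned *r0, unsigned *r1,
                           unsigned *r2, unsigned *r3)
{
    *r0 = (insn >> 8) & 0xf;
    *r1 = (insn >> 12) & 0xf;
    *r2 = (insn >> 16) & 0xf;
    *r3 = (insn >> 20) & 0xf;
}

int main(void)
{
    uint32_t regs[16] = { 0 };
    unsigned r0, r1, r2, r3;

    regs[2] = 0x1000;   /* the single address register */
    regs[3] = 7;        /* MemOpIdx, now passed in a register */

    demo_args_rrrr(0x00321000u, &r0, &r1, &r2, &r3);  /* r0..r3 = 0..3 */

    uint32_t taddr = regs[r2];              /* one register, not a pair */
    uint32_t oi = regs[r3];
    uint64_t val = 0xdeadbeefcafef00dull;   /* stands in for the load */

    regs[r1] = (uint32_t)(val >> 32);       /* value pair: r1 high... */
    regs[r0] = (uint32_t)val;               /* ...r0 low */

    printf("addr=%#x oi=%u value=%#llx\n", (unsigned)taddr, (unsigned)oi,
           (unsigned long long)demo_uint64(regs[r1], regs[r0]));
    return 0;
}

This prints "addr=0x1000 oi=7 value=0xdeadbeefcafef00d": one address
register in, a value pair out.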
> diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
> index 6e711cd53f..801cb6f3cb 100644
> --- a/tcg/ppc/tcg-target.c.inc
> +++ b/tcg/ppc/tcg-target.c.inc
> @@ -3308,17 +3308,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
> tcg_out32(s, MODUD | TAB(args[0], args[1], args[2]));
> break;
>
> - case INDEX_op_qemu_ld_a64_i32:
> - if (TCG_TARGET_REG_BITS == 32) {
> - tcg_out_qemu_ld(s, args[0], -1, args[1], args[2],
> - args[3], TCG_TYPE_I32);
> - break;
> - }
> - /* fall through */
> - case INDEX_op_qemu_ld_a32_i32:
> + case INDEX_op_qemu_ld_i32:
> tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
> break;
> - case INDEX_op_qemu_ld_a32_i64:
> + case INDEX_op_qemu_ld_i64:
> if (TCG_TARGET_REG_BITS == 64) {
> tcg_out_qemu_ld(s, args[0], -1, args[1], -1,
> args[2], TCG_TYPE_I64);
> @@ -3327,32 +3320,15 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
> args[3], TCG_TYPE_I64);
> }
> break;
> - case INDEX_op_qemu_ld_a64_i64:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tcg_out_qemu_ld(s, args[0], -1, args[1], -1,
> - args[2], TCG_TYPE_I64);
> - } else {
> - tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3],
> - args[4], TCG_TYPE_I64);
> - }
> - break;
> - case INDEX_op_qemu_ld_a32_i128:
> - case INDEX_op_qemu_ld_a64_i128:
> + case INDEX_op_qemu_ld_i128:
> tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
> tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true);
> break;
>
> - case INDEX_op_qemu_st_a64_i32:
> - if (TCG_TARGET_REG_BITS == 32) {
> - tcg_out_qemu_st(s, args[0], -1, args[1], args[2],
> - args[3], TCG_TYPE_I32);
> - break;
> - }
> - /* fall through */
> - case INDEX_op_qemu_st_a32_i32:
> + case INDEX_op_qemu_st_i32:
> tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32);
> break;
> - case INDEX_op_qemu_st_a32_i64:
> + case INDEX_op_qemu_st_i64:
> if (TCG_TARGET_REG_BITS == 64) {
> tcg_out_qemu_st(s, args[0], -1, args[1], -1,
> args[2], TCG_TYPE_I64);
> @@ -3361,17 +3337,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
> args[3], TCG_TYPE_I64);
> }
> break;
> - case INDEX_op_qemu_st_a64_i64:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tcg_out_qemu_st(s, args[0], -1, args[1], -1,
> - args[2], TCG_TYPE_I64);
> - } else {
> - tcg_out_qemu_st(s, args[0], args[1], args[2], args[3],
> - args[4], TCG_TYPE_I64);
> - }
> - break;
Diff context isn't sufficient to review, but after applying and
looking at the result, PPC LGTM.
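
One thing worth spelling out for PPC: the -1 arguments in these calls
are "no high half" sentinels for the data registers. A hedged sketch of
the call shape (demo_qemu_ld is hypothetical and much simpler than the
real tcg_out_qemu_ld):

#include <stdio.h>

#define NO_REG (-1)

/* demo_qemu_ld only prints which registers the real tcg_out_qemu_ld
   would consume; it emits no code. */
static void demo_qemu_ld(int datalo, int datahi, int addr, int oi,
                         const char *type)
{
    if (datahi == NO_REG) {
        printf("ld %s: data=r%d addr=r%d oi=%d\n", type, datalo, addr, oi);
    } else {
        printf("ld %s: data=r%d:r%d addr=r%d oi=%d\n",
               type, datahi, datalo, addr, oi);
    }
}

int main(void)
{
    /* 64-bit host: single data register, single address register */
    demo_qemu_ld(0, NO_REG, 1, 2, "i64");
    /* 32-bit host: data register pair, still one address register */
    demo_qemu_ld(0, 1, 2, 3, "i64");
    return 0;
}

On a 32-bit host the 64-bit data still needs two registers, but the
address column is a single register in both cases, which is exactly
what the merged opcodes rely on.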
> diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc
> @@ -833,29 +811,21 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
> tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], args[3]);
> break;
>
> - case INDEX_op_qemu_ld_a32_i32:
> - case INDEX_op_qemu_st_a32_i32:
> - tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
> - break;
> - case INDEX_op_qemu_ld_a64_i32:
> - case INDEX_op_qemu_st_a64_i32:
> - case INDEX_op_qemu_ld_a32_i64:
> - case INDEX_op_qemu_st_a32_i64:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
> - } else {
> + case INDEX_op_qemu_ld_i64:
> + case INDEX_op_qemu_st_i64:
> + if (TCG_TARGET_REG_BITS == 32) {
> tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[3]);
> tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], TCG_REG_TMP);
> + break;
> }
> - break;
> - case INDEX_op_qemu_ld_a64_i64:
> - case INDEX_op_qemu_st_a64_i64:
> - if (TCG_TARGET_REG_BITS == 64) {
> - tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
> + /* fall through */
> + case INDEX_op_qemu_ld_i32:
> + case INDEX_op_qemu_st_i32:
> + if (TCG_TARGET_REG_BITS == 64 && s->addr_type == TCG_TYPE_I32) {
> + tcg_out_ext32u(s, TCG_REG_TMP, args[1]);
> + tcg_out_op_rrm(s, opc, args[0], TCG_REG_TMP, args[2]);
> } else {
> - tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[4]);
> - tcg_out_op_rrrrr(s, opc, args[0], args[1],
> - args[2], args[3], TCG_REG_TMP);
> + tcg_out_op_rrm(s, opc, args[0], args[1], args[2]);
> }
> break;
This also LGTM after looking at the applied changes, so I dare to:
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
(thankfully 64-bit hosts were trivial)
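
The tcg_out_ext32u in the i32 path above is the subtle part: with a
32-bit guest address type on a 64-bit host, the register holding the
address may carry stale high bits, so the backend zero-extends it
before use. A minimal model, assuming only that tcg_out_ext32u keeps
the low 32 bits (which matches its name; host_addr_for_guest is an
illustrative helper, not a QEMU function):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t host_addr_for_guest(uint64_t reg_val, bool addr_is_32bit)
{
    /* Equivalent of tcg_out_ext32u: keep only the low 32 bits. */
    return addr_is_32bit ? (uint32_t)reg_val : reg_val;
}

int main(void)
{
    uint64_t dirty = 0xdeadbeef00004000ull;  /* high bits are garbage */

    printf("i32 addr: %#llx\n",
           (unsigned long long)host_addr_for_guest(dirty, true));
    printf("i64 addr: %#llx\n",
           (unsigned long long)host_addr_for_guest(dirty, false));
    return 0;
}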
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr
2025-02-17 12:12 ` Philippe Mathieu-Daudé
@ 2025-02-17 16:44 ` Richard Henderson
0 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2025-02-17 16:44 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, qemu-devel
On 2/17/25 04:12, Philippe Mathieu-Daudé wrote:
> With this change:
>
> -- >8 --
> @@ -678,8 +678,2 @@ static void tcg_out_ldrd_r(TCGContext *s, ARMCond cond, TCGReg rt,
>
> -static void __attribute__((unused))
> -tcg_out_ldrd_rwb(TCGContext *s, ARMCond cond, TCGReg rt, TCGReg rn, TCGReg rm)
> -{
> - tcg_out_memop_r(s, cond, INSN_LDRD_REG, rt, rn, rm, 1, 1, 1);
> -}
> -
> static void __attribute__((unused))
> ---
>
> squashed:
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Ah, thanks.
r~
>
^ permalink raw reply [flat|nested] 27+ messages in thread
Thread overview: 27+ messages
2025-02-05 4:03 [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson
2025-02-05 4:03 ` [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst Richard Henderson
2025-02-13 14:37 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_* Richard Henderson
2025-02-17 14:22 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr Richard Henderson
2025-02-17 12:12 ` Philippe Mathieu-Daudé
2025-02-17 16:44 ` Richard Henderson
2025-02-05 4:03 ` [PATCH 04/11] tcg/i386: " Richard Henderson
2025-02-13 14:35 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 05/11] tcg/mips: " Richard Henderson
2025-02-13 14:33 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 06/11] tcg/ppc: " Richard Henderson
2025-02-13 14:34 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 07/11] tcg: Replace addr{lo, hi}_reg with addr_reg in TCGLabelQemuLdst Richard Henderson
2025-02-12 7:14 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters Richard Henderson
2025-02-12 7:16 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page Richard Henderson
2025-02-12 7:22 ` Philippe Mathieu-Daudé
2025-02-12 18:21 ` Richard Henderson
2025-02-17 7:32 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 10/11] include/exec: Change vaddr to uintptr_t Richard Henderson
2025-02-12 7:24 ` Philippe Mathieu-Daudé
2025-02-05 4:03 ` [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry Richard Henderson
2025-02-13 14:29 ` Philippe Mathieu-Daudé
2025-02-15 20:06 ` [PATCH 00/11] tcg: Cleanups after disallowing 64-on-32 Richard Henderson