* [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY
@ 2023-05-02 16:08 Richard Henderson
  2023-05-02 16:08 ` [PATCH 01/16] target/alpha: Use MO_ALIGN for system UNALIGN() Richard Henderson
                   ` (16 more replies)
  0 siblings, 17 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Based-on: 20230502135741.1158035-1-richard.henderson@linaro.org
("[PATCH 0/9] tcg: Remove compatability helpers for qemu ld/st")

Add MO_ALIGN where required, so that we may remove TARGET_ALIGNED_ONLY.
This is a prerequisite for building tcg once, because a single build
cannot carry per-target definitions of MO_ALIGN and MO_UNALN.
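
For reference, the per-target conditional in include/exec/memop.h is what
blocks a single tcg binary: value 0 of the alignment field means "check
alignment" on aligned-only targets and "no check" everywhere else.  A rough
sketch of the pre-series definitions inside the MemOp enum (approximate;
the final patch drops the conditional and keeps only the second branch):

    MO_ASHIFT = 5,
    MO_AMASK  = 0x7 << MO_ASHIFT,
#ifdef TARGET_ALIGNED_ONLY
    MO_ALIGN = 0,           /* alignment checking is the default */
    MO_UNALN = MO_AMASK,    /* MO_UNALN opts out of it */
#else
    MO_ALIGN = MO_AMASK,    /* alignment checking must be requested */
    MO_UNALN = 0,           /* unaligned access is the default */
#endif

Once every aligned-only target requests MO_ALIGN explicitly, the conditional
(and the TARGET_ALIGNED_ONLY entry in include/exec/poison.h) can go away.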


r~


Richard Henderson (16):
  target/alpha: Use MO_ALIGN for system UNALIGN()
  target/alpha: Use MO_ALIGN where required
  target/alpha: Remove TARGET_ALIGNED_ONLY
  target/hppa: Use MO_ALIGN for system UNALIGN()
  target/hppa: Remove TARGET_ALIGNED_ONLY
  target/mips: Add MO_ALIGN to gen_llwp, gen_scwp
  target/mips: Add missing default_tcg_memop_mask
  target/mips: Use MO_ALIGN instead of 0
  target/mips: Remove TARGET_ALIGNED_ONLY
  target/nios2: Remove TARGET_ALIGNED_ONLY
  target/sh4: Use MO_ALIGN where required
  target/sh4: Remove TARGET_ALIGNED_ONLY
  target/sparc: Use MO_ALIGN where required
  target/sparc: Use cpu_ld*_code_mmu
  target/sparc: Remove TARGET_ALIGNED_ONLY
  tcg: Remove TARGET_ALIGNED_ONLY

 configs/targets/alpha-linux-user.mak       |   1 -
 configs/targets/alpha-softmmu.mak          |   1 -
 configs/targets/hppa-linux-user.mak        |   1 -
 configs/targets/hppa-softmmu.mak           |   1 -
 configs/targets/mips-linux-user.mak        |   1 -
 configs/targets/mips-softmmu.mak           |   1 -
 configs/targets/mips64-linux-user.mak      |   1 -
 configs/targets/mips64-softmmu.mak         |   1 -
 configs/targets/mips64el-linux-user.mak    |   1 -
 configs/targets/mips64el-softmmu.mak       |   1 -
 configs/targets/mipsel-linux-user.mak      |   1 -
 configs/targets/mipsel-softmmu.mak         |   1 -
 configs/targets/mipsn32-linux-user.mak     |   1 -
 configs/targets/mipsn32el-linux-user.mak   |   1 -
 configs/targets/nios2-softmmu.mak          |   1 -
 configs/targets/sh4-linux-user.mak         |   1 -
 configs/targets/sh4-softmmu.mak            |   1 -
 configs/targets/sh4eb-linux-user.mak       |   1 -
 configs/targets/sh4eb-softmmu.mak          |   1 -
 configs/targets/sparc-linux-user.mak       |   1 -
 configs/targets/sparc-softmmu.mak          |   1 -
 configs/targets/sparc32plus-linux-user.mak |   1 -
 configs/targets/sparc64-linux-user.mak     |   1 -
 configs/targets/sparc64-softmmu.mak        |   1 -
 include/exec/memop.h                       |  13 +--
 include/exec/poison.h                      |   1 -
 target/alpha/translate.c                   |  38 ++++----
 target/hppa/translate.c                    |   2 +-
 target/mips/tcg/mxu_translate.c            |   3 +-
 target/nios2/translate.c                   |  10 ++
 target/sh4/translate.c                     | 102 +++++++++++++--------
 target/sparc/ldst_helper.c                 |  10 +-
 target/sparc/translate.c                   |  66 ++++++-------
 tcg/tcg.c                                  |   5 -
 target/mips/tcg/micromips_translate.c.inc  |  24 +++--
 target/mips/tcg/mips16e_translate.c.inc    |  18 ++--
 target/mips/tcg/nanomips_translate.c.inc   |  32 +++----
 37 files changed, 186 insertions(+), 162 deletions(-)

-- 
2.34.1




* [PATCH 01/16] target/alpha: Use MO_ALIGN for system UNALIGN()
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 02/16] target/alpha: Use MO_ALIGN where required Richard Henderson
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/alpha/translate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 9d25e21164..ffbac1c114 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -72,7 +72,7 @@ struct DisasContext {
 #ifdef CONFIG_USER_ONLY
 #define UNALIGN(C)  (C)->unalign
 #else
-#define UNALIGN(C)  0
+#define UNALIGN(C)  MO_ALIGN
 #endif
 
 /* Target-specific return values from translate_one, indicating the
-- 
2.34.1




* [PATCH 02/16] target/alpha: Use MO_ALIGN where required
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
  2023-05-02 16:08 ` [PATCH 01/16] target/alpha: Use MO_ALIGN for system UNALIGN() Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 03/16] target/alpha: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Mark with MO_ALIGN all memory operations that do not already use UNALIGN().

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/alpha/translate.c | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index ffbac1c114..be8adb2526 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -2399,21 +2399,21 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
             switch ((insn >> 12) & 0xF) {
             case 0x0:
                 /* Longword physical access (hw_ldl/p) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LESL);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LESL | MO_ALIGN);
                 break;
             case 0x1:
                 /* Quadword physical access (hw_ldq/p) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEUQ);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEUQ | MO_ALIGN);
                 break;
             case 0x2:
                 /* Longword physical access with lock (hw_ldl_l/p) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LESL);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LESL | MO_ALIGN);
                 tcg_gen_mov_i64(cpu_lock_addr, addr);
                 tcg_gen_mov_i64(cpu_lock_value, va);
                 break;
             case 0x3:
                 /* Quadword physical access with lock (hw_ldq_l/p) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEUQ);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_PHYS_IDX, MO_LEUQ | MO_ALIGN);
                 tcg_gen_mov_i64(cpu_lock_addr, addr);
                 tcg_gen_mov_i64(cpu_lock_value, va);
                 break;
@@ -2438,11 +2438,13 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
                 goto invalid_opc;
             case 0xA:
                 /* Longword virtual access with protection check (hw_ldl/w) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_KERNEL_IDX, MO_LESL);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_KERNEL_IDX,
+                                    MO_LESL | MO_ALIGN);
                 break;
             case 0xB:
                 /* Quadword virtual access with protection check (hw_ldq/w) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_KERNEL_IDX, MO_LEUQ);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_KERNEL_IDX,
+                                    MO_LEUQ | MO_ALIGN);
                 break;
             case 0xC:
                 /* Longword virtual access with alt access mode (hw_ldl/a)*/
@@ -2453,12 +2455,14 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
             case 0xE:
                 /* Longword virtual access with alternate access mode and
                    protection checks (hw_ldl/wa) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_USER_IDX, MO_LESL);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_USER_IDX,
+                                    MO_LESL | MO_ALIGN);
                 break;
             case 0xF:
                 /* Quadword virtual access with alternate access mode and
                    protection checks (hw_ldq/wa) */
-                tcg_gen_qemu_ld_i64(va, addr, MMU_USER_IDX, MO_LEUQ);
+                tcg_gen_qemu_ld_i64(va, addr, MMU_USER_IDX,
+                                    MO_LEUQ | MO_ALIGN);
                 break;
             }
             break;
@@ -2659,7 +2663,7 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
                 vb = load_gpr(ctx, rb);
                 tmp = tcg_temp_new();
                 tcg_gen_addi_i64(tmp, vb, disp12);
-                tcg_gen_qemu_st_i64(va, tmp, MMU_PHYS_IDX, MO_LESL);
+                tcg_gen_qemu_st_i64(va, tmp, MMU_PHYS_IDX, MO_LESL | MO_ALIGN);
                 break;
             case 0x1:
                 /* Quadword physical access */
@@ -2667,17 +2671,17 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
                 vb = load_gpr(ctx, rb);
                 tmp = tcg_temp_new();
                 tcg_gen_addi_i64(tmp, vb, disp12);
-                tcg_gen_qemu_st_i64(va, tmp, MMU_PHYS_IDX, MO_LEUQ);
+                tcg_gen_qemu_st_i64(va, tmp, MMU_PHYS_IDX, MO_LEUQ | MO_ALIGN);
                 break;
             case 0x2:
                 /* Longword physical access with lock */
                 ret = gen_store_conditional(ctx, ra, rb, disp12,
-                                            MMU_PHYS_IDX, MO_LESL);
+                                            MMU_PHYS_IDX, MO_LESL | MO_ALIGN);
                 break;
             case 0x3:
                 /* Quadword physical access with lock */
                 ret = gen_store_conditional(ctx, ra, rb, disp12,
-                                            MMU_PHYS_IDX, MO_LEUQ);
+                                            MMU_PHYS_IDX, MO_LEUQ | MO_ALIGN);
                 break;
             case 0x4:
                 /* Longword virtual access */
@@ -2771,11 +2775,11 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
         break;
     case 0x2A:
         /* LDL_L */
-        gen_load_int(ctx, ra, rb, disp16, MO_LESL, 0, 1);
+        gen_load_int(ctx, ra, rb, disp16, MO_LESL | MO_ALIGN, 0, 1);
         break;
     case 0x2B:
         /* LDQ_L */
-        gen_load_int(ctx, ra, rb, disp16, MO_LEUQ, 0, 1);
+        gen_load_int(ctx, ra, rb, disp16, MO_LEUQ | MO_ALIGN, 0, 1);
         break;
     case 0x2C:
         /* STL */
@@ -2788,12 +2792,12 @@ static DisasJumpType translate_one(DisasContext *ctx, uint32_t insn)
     case 0x2E:
         /* STL_C */
         ret = gen_store_conditional(ctx, ra, rb, disp16,
-                                    ctx->mem_idx, MO_LESL);
+                                    ctx->mem_idx, MO_LESL | MO_ALIGN);
         break;
     case 0x2F:
         /* STQ_C */
         ret = gen_store_conditional(ctx, ra, rb, disp16,
-                                    ctx->mem_idx, MO_LEUQ);
+                                    ctx->mem_idx, MO_LEUQ | MO_ALIGN);
         break;
     case 0x30:
         /* BR */
-- 
2.34.1




* [PATCH 03/16] target/alpha: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
  2023-05-02 16:08 ` [PATCH 01/16] target/alpha: Use MO_ALIGN for system UNALIGN() Richard Henderson
  2023-05-02 16:08 ` [PATCH 02/16] target/alpha: Use MO_ALIGN where required Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 04/16] target/hppa: Use MO_ALIGN for system UNALIGN() Richard Henderson
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/alpha-linux-user.mak | 1 -
 configs/targets/alpha-softmmu.mak    | 1 -
 2 files changed, 2 deletions(-)

diff --git a/configs/targets/alpha-linux-user.mak b/configs/targets/alpha-linux-user.mak
index 7e62fd796a..f7d3fb4afa 100644
--- a/configs/targets/alpha-linux-user.mak
+++ b/configs/targets/alpha-linux-user.mak
@@ -1,4 +1,3 @@
 TARGET_ARCH=alpha
 TARGET_SYSTBL_ABI=common
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
diff --git a/configs/targets/alpha-softmmu.mak b/configs/targets/alpha-softmmu.mak
index e4b874a19e..9dbe160740 100644
--- a/configs/targets/alpha-softmmu.mak
+++ b/configs/targets/alpha-softmmu.mak
@@ -1,3 +1,2 @@
 TARGET_ARCH=alpha
-TARGET_ALIGNED_ONLY=y
 TARGET_SUPPORTS_MTTCG=y
-- 
2.34.1




* [PATCH 04/16] target/hppa: Use MO_ALIGN for system UNALIGN()
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (2 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 03/16] target/alpha: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 05/16] target/hppa: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/hppa/translate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 6a3154ebc6..59e4688bfa 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -271,7 +271,7 @@ typedef struct DisasContext {
 #ifdef CONFIG_USER_ONLY
 #define UNALIGN(C)  (C)->unalign
 #else
-#define UNALIGN(C)  0
+#define UNALIGN(C)  MO_ALIGN
 #endif
 
 /* Note that ssm/rsm instructions number PSW_W and PSW_E differently.  */
-- 
2.34.1




* [PATCH 05/16] target/hppa: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (3 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 04/16] target/hppa: Use MO_ALIGN for system UNALIGN() Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 06/16] target/mips: Add MO_ALIGN to gen_llwp, gen_scwp Richard Henderson
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/hppa-linux-user.mak | 1 -
 configs/targets/hppa-softmmu.mak    | 1 -
 2 files changed, 2 deletions(-)

diff --git a/configs/targets/hppa-linux-user.mak b/configs/targets/hppa-linux-user.mak
index db873a8796..361ea39d71 100644
--- a/configs/targets/hppa-linux-user.mak
+++ b/configs/targets/hppa-linux-user.mak
@@ -1,5 +1,4 @@
 TARGET_ARCH=hppa
 TARGET_SYSTBL_ABI=common,32
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/hppa-softmmu.mak b/configs/targets/hppa-softmmu.mak
index 44f07b0332..a41662aa99 100644
--- a/configs/targets/hppa-softmmu.mak
+++ b/configs/targets/hppa-softmmu.mak
@@ -1,4 +1,3 @@
 TARGET_ARCH=hppa
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
 TARGET_SUPPORTS_MTTCG=y
-- 
2.34.1




* [PATCH 06/16] target/mips: Add MO_ALIGN to gen_llwp, gen_scwp
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (4 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 05/16] target/hppa: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 07/16] target/mips: Add missing default_tcg_memop_mask Richard Henderson
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

These are atomic operations, so mark them as requiring alignment.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/mips/tcg/nanomips_translate.c.inc | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc
index 97b9572caa..e08343414c 100644
--- a/target/mips/tcg/nanomips_translate.c.inc
+++ b/target/mips/tcg/nanomips_translate.c.inc
@@ -998,7 +998,7 @@ static void gen_llwp(DisasContext *ctx, uint32_t base, int16_t offset,
     TCGv tmp2 = tcg_temp_new();
 
     gen_base_offset_addr(ctx, taddr, base, offset);
-    tcg_gen_qemu_ld_i64(tval, taddr, ctx->mem_idx, MO_TEUQ);
+    tcg_gen_qemu_ld_i64(tval, taddr, ctx->mem_idx, MO_TEUQ | MO_ALIGN);
     if (cpu_is_bigendian(ctx)) {
         tcg_gen_extr_i64_tl(tmp2, tmp1, tval);
     } else {
@@ -1039,7 +1039,8 @@ static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset,
 
     tcg_gen_ld_i64(llval, cpu_env, offsetof(CPUMIPSState, llval_wp));
     tcg_gen_atomic_cmpxchg_i64(val, taddr, llval, tval,
-                               eva ? MIPS_HFLAG_UM : ctx->mem_idx, MO_64);
+                               eva ? MIPS_HFLAG_UM : ctx->mem_idx,
+                               MO_64 | MO_ALIGN);
     if (reg1 != 0) {
         tcg_gen_movi_tl(cpu_gpr[reg1], 1);
     }
-- 
2.34.1




* [PATCH 07/16] target/mips: Add missing default_tcg_memop_mask
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (5 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 06/16] target/mips: Add MO_ALIGN to gen_llwp, gen_scwp Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 08/16] target/mips: Use MO_ALIGN instead of 0 Richard Henderson
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Memory operations that are not already marked as aligned, or otherwise
annotated, require the addition of ctx->default_tcg_memop_mask.
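
For context, ctx->default_tcg_memop_mask is initialized once per translation
in target/mips/tcg/translate.c, roughly as sketched below (the exact set of
ISA flags checked is approximate), so ORing it into a memop yields MO_ALIGN
on cores that trap unaligned accesses and MO_UNALN on those that allow them:

    ctx->default_tcg_memop_mask = (ctx->insn_flags &
                                   (ISA_MIPS_R6 | INSN_LOONGSON3A))
                                  ? MO_UNALN : MO_ALIGN;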

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/mips/tcg/mxu_translate.c           |  3 ++-
 target/mips/tcg/micromips_translate.c.inc | 24 ++++++++++++++--------
 target/mips/tcg/mips16e_translate.c.inc   | 18 ++++++++++------
 target/mips/tcg/nanomips_translate.c.inc  | 25 +++++++++++------------
 4 files changed, 42 insertions(+), 28 deletions(-)

diff --git a/target/mips/tcg/mxu_translate.c b/target/mips/tcg/mxu_translate.c
index bdd20709c0..be038b5f07 100644
--- a/target/mips/tcg/mxu_translate.c
+++ b/target/mips/tcg/mxu_translate.c
@@ -831,7 +831,8 @@ static void gen_mxu_s32ldd_s32lddr(DisasContext *ctx)
         tcg_gen_ori_tl(t1, t1, 0xFFFFF000);
     }
     tcg_gen_add_tl(t1, t0, t1);
-    tcg_gen_qemu_ld_tl(t1, t1, ctx->mem_idx, MO_TESL ^ (sel * MO_BSWAP));
+    tcg_gen_qemu_ld_tl(t1, t1, ctx->mem_idx, (MO_TESL ^ (sel * MO_BSWAP)) |
+                       ctx->default_tcg_memop_mask);
 
     gen_store_mxu_gpr(t1, XRa);
 }
diff --git a/target/mips/tcg/micromips_translate.c.inc b/target/mips/tcg/micromips_translate.c.inc
index e8b193aeda..211d102cf6 100644
--- a/target/mips/tcg/micromips_translate.c.inc
+++ b/target/mips/tcg/micromips_translate.c.inc
@@ -977,20 +977,24 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd,
             gen_reserved_instruction(ctx);
             return;
         }
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TESL);
+        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TESL |
+                           ctx->default_tcg_memop_mask);
         gen_store_gpr(t1, rd);
         tcg_gen_movi_tl(t1, 4);
         gen_op_addr_add(ctx, t0, t0, t1);
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TESL);
+        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TESL |
+                           ctx->default_tcg_memop_mask);
         gen_store_gpr(t1, rd + 1);
         break;
     case SWP:
         gen_load_gpr(t1, rd);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |
+                           ctx->default_tcg_memop_mask);
         tcg_gen_movi_tl(t1, 4);
         gen_op_addr_add(ctx, t0, t0, t1);
         gen_load_gpr(t1, rd + 1);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |
+                           ctx->default_tcg_memop_mask);
         break;
 #ifdef TARGET_MIPS64
     case LDP:
@@ -998,20 +1002,24 @@ static void gen_ldst_pair(DisasContext *ctx, uint32_t opc, int rd,
             gen_reserved_instruction(ctx);
             return;
         }
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
+        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
+                           ctx->default_tcg_memop_mask);
         gen_store_gpr(t1, rd);
         tcg_gen_movi_tl(t1, 8);
         gen_op_addr_add(ctx, t0, t0, t1);
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
+        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
+                           ctx->default_tcg_memop_mask);
         gen_store_gpr(t1, rd + 1);
         break;
     case SDP:
         gen_load_gpr(t1, rd);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
+                           ctx->default_tcg_memop_mask);
         tcg_gen_movi_tl(t1, 8);
         gen_op_addr_add(ctx, t0, t0, t1);
         gen_load_gpr(t1, rd + 1);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUQ |
+                           ctx->default_tcg_memop_mask);
         break;
 #endif
     }
diff --git a/target/mips/tcg/mips16e_translate.c.inc b/target/mips/tcg/mips16e_translate.c.inc
index 602f5f0c02..5cffe0e412 100644
--- a/target/mips/tcg/mips16e_translate.c.inc
+++ b/target/mips/tcg/mips16e_translate.c.inc
@@ -172,22 +172,26 @@ static void gen_mips16_save(DisasContext *ctx,
     case 4:
         gen_base_offset_addr(ctx, t0, 29, 12);
         gen_load_gpr(t1, 7);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |
+                           ctx->default_tcg_memop_mask);
         /* Fall through */
     case 3:
         gen_base_offset_addr(ctx, t0, 29, 8);
         gen_load_gpr(t1, 6);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |
+                           ctx->default_tcg_memop_mask);
         /* Fall through */
     case 2:
         gen_base_offset_addr(ctx, t0, 29, 4);
         gen_load_gpr(t1, 5);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |
+                           ctx->default_tcg_memop_mask);
         /* Fall through */
     case 1:
         gen_base_offset_addr(ctx, t0, 29, 0);
         gen_load_gpr(t1, 4);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |
+                           ctx->default_tcg_memop_mask);
     }
 
     gen_load_gpr(t0, 29);
@@ -196,7 +200,8 @@ static void gen_mips16_save(DisasContext *ctx,
         tcg_gen_movi_tl(t2, -4);                                 \
         gen_op_addr_add(ctx, t0, t0, t2);                        \
         gen_load_gpr(t1, reg);                                   \
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL); \
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL |       \
+                           ctx->default_tcg_memop_mask);         \
     } while (0)
 
     if (do_ra) {
@@ -298,7 +303,8 @@ static void gen_mips16_restore(DisasContext *ctx,
 #define DECR_AND_LOAD(reg) do {                            \
         tcg_gen_movi_tl(t2, -4);                           \
         gen_op_addr_add(ctx, t0, t0, t2);                  \
-        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TESL); \
+        tcg_gen_qemu_ld_tl(t1, t0, ctx->mem_idx, MO_TESL | \
+                           ctx->default_tcg_memop_mask);   \
         gen_store_gpr(t1, reg);                            \
     } while (0)
 
diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc
index e08343414c..b96dcd2ae9 100644
--- a/target/mips/tcg/nanomips_translate.c.inc
+++ b/target/mips/tcg/nanomips_translate.c.inc
@@ -2641,52 +2641,49 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt)
 
     switch (extract32(ctx->opcode, 7, 4)) {
     case NM_LBX:
-        tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx,
-                           MO_SB);
+        tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_SB);
         gen_store_gpr(t0, rd);
         break;
     case NM_LHX:
     /*case NM_LHXS:*/
         tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx,
-                           MO_TESW);
+                           MO_TESW | ctx->default_tcg_memop_mask);
         gen_store_gpr(t0, rd);
         break;
     case NM_LWX:
     /*case NM_LWXS:*/
         tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx,
-                           MO_TESL);
+                           MO_TESL | ctx->default_tcg_memop_mask);
         gen_store_gpr(t0, rd);
         break;
     case NM_LBUX:
-        tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx,
-                           MO_UB);
+        tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx, MO_UB);
         gen_store_gpr(t0, rd);
         break;
     case NM_LHUX:
     /*case NM_LHUXS:*/
         tcg_gen_qemu_ld_tl(t0, t0, ctx->mem_idx,
-                           MO_TEUW);
+                           MO_TEUW | ctx->default_tcg_memop_mask);
         gen_store_gpr(t0, rd);
         break;
     case NM_SBX:
         check_nms(ctx);
         gen_load_gpr(t1, rd);
-        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
-                           MO_8);
+        tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_8);
         break;
     case NM_SHX:
     /*case NM_SHXS:*/
         check_nms(ctx);
         gen_load_gpr(t1, rd);
         tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
-                           MO_TEUW);
+                           MO_TEUW | ctx->default_tcg_memop_mask);
         break;
     case NM_SWX:
     /*case NM_SWXS:*/
         check_nms(ctx);
         gen_load_gpr(t1, rd);
         tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
-                           MO_TEUL);
+                           MO_TEUL | ctx->default_tcg_memop_mask);
         break;
     case NM_LWC1X:
     /*case NM_LWC1XS:*/
@@ -3739,7 +3736,8 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                                                 addr_off);
 
                     tcg_gen_movi_tl(t0, addr);
-                    tcg_gen_qemu_ld_tl(cpu_gpr[rt], t0, ctx->mem_idx, MO_TESL);
+                    tcg_gen_qemu_ld_tl(cpu_gpr[rt], t0, ctx->mem_idx,
+                                       MO_TESL | ctx->default_tcg_memop_mask);
                 }
                 break;
             case NM_SWPC48:
@@ -3755,7 +3753,8 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                     tcg_gen_movi_tl(t0, addr);
                     gen_load_gpr(t1, rt);
 
-                    tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx, MO_TEUL);
+                    tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
+                                       MO_TEUL | ctx->default_tcg_memop_mask);
                 }
                 break;
             default:
-- 
2.34.1




* [PATCH 08/16] target/mips: Use MO_ALIGN instead of 0
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (6 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 07/16] target/mips: Add missing default_tcg_memop_mask Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-10 13:37   ` Philippe Mathieu-Daudé
  2023-05-02 16:08 ` [PATCH 09/16] target/mips: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

The opposite of MO_UNALN is MO_ALIGN.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/mips/tcg/nanomips_translate.c.inc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/mips/tcg/nanomips_translate.c.inc b/target/mips/tcg/nanomips_translate.c.inc
index b96dcd2ae9..a98dde0d2e 100644
--- a/target/mips/tcg/nanomips_translate.c.inc
+++ b/target/mips/tcg/nanomips_translate.c.inc
@@ -4305,7 +4305,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
                     MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
-                                      NM_P_LS_UAWM ? MO_UNALN : 0;
+                                      NM_P_LS_UAWM ? MO_UNALN : MO_ALIGN;
 
                     count = (count == 0) ? 8 : count;
                     while (counter != count) {
-- 
2.34.1




* [PATCH 09/16] target/mips: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (7 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 08/16] target/mips: Use MO_ALIGN instead of 0 Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-02 16:08 ` [PATCH 10/16] target/nios2: " Richard Henderson
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/mips-linux-user.mak      | 1 -
 configs/targets/mips-softmmu.mak         | 1 -
 configs/targets/mips64-linux-user.mak    | 1 -
 configs/targets/mips64-softmmu.mak       | 1 -
 configs/targets/mips64el-linux-user.mak  | 1 -
 configs/targets/mips64el-softmmu.mak     | 1 -
 configs/targets/mipsel-linux-user.mak    | 1 -
 configs/targets/mipsel-softmmu.mak       | 1 -
 configs/targets/mipsn32-linux-user.mak   | 1 -
 configs/targets/mipsn32el-linux-user.mak | 1 -
 10 files changed, 10 deletions(-)

diff --git a/configs/targets/mips-linux-user.mak b/configs/targets/mips-linux-user.mak
index 71fa77d464..b4569a9893 100644
--- a/configs/targets/mips-linux-user.mak
+++ b/configs/targets/mips-linux-user.mak
@@ -2,5 +2,4 @@ TARGET_ARCH=mips
 TARGET_ABI_MIPSO32=y
 TARGET_SYSTBL_ABI=o32
 TARGET_SYSTBL=syscall_o32.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/mips-softmmu.mak b/configs/targets/mips-softmmu.mak
index 7787a4d94c..d34b4083fc 100644
--- a/configs/targets/mips-softmmu.mak
+++ b/configs/targets/mips-softmmu.mak
@@ -1,4 +1,3 @@
 TARGET_ARCH=mips
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
 TARGET_SUPPORTS_MTTCG=y
diff --git a/configs/targets/mips64-linux-user.mak b/configs/targets/mips64-linux-user.mak
index 5a4771f22d..d2ff509a11 100644
--- a/configs/targets/mips64-linux-user.mak
+++ b/configs/targets/mips64-linux-user.mak
@@ -3,5 +3,4 @@ TARGET_ABI_MIPSN64=y
 TARGET_BASE_ARCH=mips
 TARGET_SYSTBL_ABI=n64
 TARGET_SYSTBL=syscall_n64.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/mips64-softmmu.mak b/configs/targets/mips64-softmmu.mak
index 568d66650c..12d9483bf0 100644
--- a/configs/targets/mips64-softmmu.mak
+++ b/configs/targets/mips64-softmmu.mak
@@ -1,4 +1,3 @@
 TARGET_ARCH=mips64
 TARGET_BASE_ARCH=mips
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/mips64el-linux-user.mak b/configs/targets/mips64el-linux-user.mak
index f348f35997..f9efeec8ea 100644
--- a/configs/targets/mips64el-linux-user.mak
+++ b/configs/targets/mips64el-linux-user.mak
@@ -3,4 +3,3 @@ TARGET_ABI_MIPSN64=y
 TARGET_BASE_ARCH=mips
 TARGET_SYSTBL_ABI=n64
 TARGET_SYSTBL=syscall_n64.tbl
-TARGET_ALIGNED_ONLY=y
diff --git a/configs/targets/mips64el-softmmu.mak b/configs/targets/mips64el-softmmu.mak
index 5a52aa4b64..8d9ab3ddc4 100644
--- a/configs/targets/mips64el-softmmu.mak
+++ b/configs/targets/mips64el-softmmu.mak
@@ -1,4 +1,3 @@
 TARGET_ARCH=mips64
 TARGET_BASE_ARCH=mips
-TARGET_ALIGNED_ONLY=y
 TARGET_NEED_FDT=y
diff --git a/configs/targets/mipsel-linux-user.mak b/configs/targets/mipsel-linux-user.mak
index e23793070c..e8d7241d31 100644
--- a/configs/targets/mipsel-linux-user.mak
+++ b/configs/targets/mipsel-linux-user.mak
@@ -2,4 +2,3 @@ TARGET_ARCH=mips
 TARGET_ABI_MIPSO32=y
 TARGET_SYSTBL_ABI=o32
 TARGET_SYSTBL=syscall_o32.tbl
-TARGET_ALIGNED_ONLY=y
diff --git a/configs/targets/mipsel-softmmu.mak b/configs/targets/mipsel-softmmu.mak
index c7c41f4fb7..0829659fc2 100644
--- a/configs/targets/mipsel-softmmu.mak
+++ b/configs/targets/mipsel-softmmu.mak
@@ -1,3 +1,2 @@
 TARGET_ARCH=mips
-TARGET_ALIGNED_ONLY=y
 TARGET_SUPPORTS_MTTCG=y
diff --git a/configs/targets/mipsn32-linux-user.mak b/configs/targets/mipsn32-linux-user.mak
index 1e80b302fc..206095da64 100644
--- a/configs/targets/mipsn32-linux-user.mak
+++ b/configs/targets/mipsn32-linux-user.mak
@@ -4,5 +4,4 @@ TARGET_ABI32=y
 TARGET_BASE_ARCH=mips
 TARGET_SYSTBL_ABI=n32
 TARGET_SYSTBL=syscall_n32.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/mipsn32el-linux-user.mak b/configs/targets/mipsn32el-linux-user.mak
index f31a9c394b..ca2a3ed753 100644
--- a/configs/targets/mipsn32el-linux-user.mak
+++ b/configs/targets/mipsn32el-linux-user.mak
@@ -4,4 +4,3 @@ TARGET_ABI32=y
 TARGET_BASE_ARCH=mips
 TARGET_SYSTBL_ABI=n32
 TARGET_SYSTBL=syscall_n32.tbl
-TARGET_ALIGNED_ONLY=y
-- 
2.34.1




* [PATCH 10/16] target/nios2: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (8 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 09/16] target/mips: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-10 14:45   ` Philippe Mathieu-Daudé
  2023-05-02 16:08 ` [PATCH 11/16] target/sh4: Use MO_ALIGN where required Richard Henderson
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

In gen_ldx/gen_stx, the only two places where memory operations are
generated, mark the operation as either aligned (softmmu) or unaligned
(user-only, as if the unaligned access were emulated by the kernel).

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/nios2-softmmu.mak |  1 -
 target/nios2/translate.c          | 10 ++++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/configs/targets/nios2-softmmu.mak b/configs/targets/nios2-softmmu.mak
index 5823fc02c8..c99ae3777e 100644
--- a/configs/targets/nios2-softmmu.mak
+++ b/configs/targets/nios2-softmmu.mak
@@ -1,3 +1,2 @@
 TARGET_ARCH=nios2
-TARGET_ALIGNED_ONLY=y
 TARGET_NEED_FDT=y
diff --git a/target/nios2/translate.c b/target/nios2/translate.c
index 6610e22236..a548e16ed5 100644
--- a/target/nios2/translate.c
+++ b/target/nios2/translate.c
@@ -298,6 +298,11 @@ static void gen_ldx(DisasContext *dc, uint32_t code, uint32_t flags)
     TCGv data = dest_gpr(dc, instr.b);
 
     tcg_gen_addi_tl(addr, load_gpr(dc, instr.a), instr.imm16.s);
+#ifdef CONFIG_USER_ONLY
+    flags |= MO_UNALN;
+#else
+    flags |= MO_ALIGN;
+#endif
     tcg_gen_qemu_ld_tl(data, addr, dc->mem_idx, flags);
 }
 
@@ -309,6 +314,11 @@ static void gen_stx(DisasContext *dc, uint32_t code, uint32_t flags)
 
     TCGv addr = tcg_temp_new();
     tcg_gen_addi_tl(addr, load_gpr(dc, instr.a), instr.imm16.s);
+#ifdef CONFIG_USER_ONLY
+    flags |= MO_UNALN;
+#else
+    flags |= MO_ALIGN;
+#endif
     tcg_gen_qemu_st_tl(val, addr, dc->mem_idx, flags);
 }
 
-- 
2.34.1




* [PATCH 11/16] target/sh4: Use MO_ALIGN where required
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (9 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 10/16] target/nios2: " Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-10 13:41   ` Philippe Mathieu-Daudé
  2023-05-02 16:08 ` [PATCH 12/16] target/sh4: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Mark with MO_ALIGN all memory operations that are not already explicitly
marked as unaligned.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sh4/translate.c | 102 ++++++++++++++++++++++++++---------------
 1 file changed, 66 insertions(+), 36 deletions(-)

diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index 6e40d5dd6a..0dedbb8210 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -527,13 +527,15 @@ static void _decode_opc(DisasContext * ctx)
     case 0x9000:		/* mov.w @(disp,PC),Rn */
 	{
             TCGv addr = tcg_constant_i32(ctx->base.pc_next + 4 + B7_0 * 2);
-            tcg_gen_qemu_ld_i32(REG(B11_8), addr, ctx->memidx, MO_TESW);
+            tcg_gen_qemu_ld_i32(REG(B11_8), addr, ctx->memidx,
+                                MO_TESW | MO_ALIGN);
 	}
 	return;
     case 0xd000:		/* mov.l @(disp,PC),Rn */
 	{
             TCGv addr = tcg_constant_i32((ctx->base.pc_next + 4 + B7_0 * 4) & ~3);
-            tcg_gen_qemu_ld_i32(REG(B11_8), addr, ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(REG(B11_8), addr, ctx->memidx,
+                                MO_TESL | MO_ALIGN);
 	}
 	return;
     case 0x7000:		/* add #imm,Rn */
@@ -801,9 +803,11 @@ static void _decode_opc(DisasContext * ctx)
 	{
 	    TCGv arg0, arg1;
 	    arg0 = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(arg0, REG(B7_4), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(arg0, REG(B7_4), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
 	    arg1 = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(arg1, REG(B11_8), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(arg1, REG(B11_8), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
             gen_helper_macl(cpu_env, arg0, arg1);
 	    tcg_gen_addi_i32(REG(B7_4), REG(B7_4), 4);
 	    tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 4);
@@ -813,9 +817,11 @@ static void _decode_opc(DisasContext * ctx)
 	{
 	    TCGv arg0, arg1;
 	    arg0 = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(arg0, REG(B7_4), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(arg0, REG(B7_4), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
 	    arg1 = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(arg1, REG(B11_8), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(arg1, REG(B11_8), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
             gen_helper_macw(cpu_env, arg0, arg1);
 	    tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 2);
 	    tcg_gen_addi_i32(REG(B7_4), REG(B7_4), 2);
@@ -961,30 +967,36 @@ static void _decode_opc(DisasContext * ctx)
         if (ctx->tbflags & FPSCR_SZ) {
             TCGv_i64 fp = tcg_temp_new_i64();
             gen_load_fpr64(ctx, fp, XHACK(B7_4));
-            tcg_gen_qemu_st_i64(fp, REG(B11_8), ctx->memidx, MO_TEUQ);
+            tcg_gen_qemu_st_i64(fp, REG(B11_8), ctx->memidx,
+                                MO_TEUQ | MO_ALIGN);
 	} else {
-            tcg_gen_qemu_st_i32(FREG(B7_4), REG(B11_8), ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_st_i32(FREG(B7_4), REG(B11_8), ctx->memidx,
+                                MO_TEUL | MO_ALIGN);
 	}
 	return;
     case 0xf008: /* fmov @Rm,{F,D,X}Rn - FPSCR: Nothing */
 	CHECK_FPU_ENABLED
         if (ctx->tbflags & FPSCR_SZ) {
             TCGv_i64 fp = tcg_temp_new_i64();
-            tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx, MO_TEUQ);
+            tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx,
+                                MO_TEUQ | MO_ALIGN);
             gen_store_fpr64(ctx, fp, XHACK(B11_8));
 	} else {
-            tcg_gen_qemu_ld_i32(FREG(B11_8), REG(B7_4), ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_ld_i32(FREG(B11_8), REG(B7_4), ctx->memidx,
+                                MO_TEUL | MO_ALIGN);
 	}
 	return;
     case 0xf009: /* fmov @Rm+,{F,D,X}Rn - FPSCR: Nothing */
 	CHECK_FPU_ENABLED
         if (ctx->tbflags & FPSCR_SZ) {
             TCGv_i64 fp = tcg_temp_new_i64();
-            tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx, MO_TEUQ);
+            tcg_gen_qemu_ld_i64(fp, REG(B7_4), ctx->memidx,
+                                MO_TEUQ | MO_ALIGN);
             gen_store_fpr64(ctx, fp, XHACK(B11_8));
             tcg_gen_addi_i32(REG(B7_4), REG(B7_4), 8);
 	} else {
-            tcg_gen_qemu_ld_i32(FREG(B11_8), REG(B7_4), ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_ld_i32(FREG(B11_8), REG(B7_4), ctx->memidx,
+                                MO_TEUL | MO_ALIGN);
 	    tcg_gen_addi_i32(REG(B7_4), REG(B7_4), 4);
 	}
 	return;
@@ -996,10 +1008,12 @@ static void _decode_opc(DisasContext * ctx)
                 TCGv_i64 fp = tcg_temp_new_i64();
                 gen_load_fpr64(ctx, fp, XHACK(B7_4));
                 tcg_gen_subi_i32(addr, REG(B11_8), 8);
-                tcg_gen_qemu_st_i64(fp, addr, ctx->memidx, MO_TEUQ);
+                tcg_gen_qemu_st_i64(fp, addr, ctx->memidx,
+                                    MO_TEUQ | MO_ALIGN);
             } else {
                 tcg_gen_subi_i32(addr, REG(B11_8), 4);
-                tcg_gen_qemu_st_i32(FREG(B7_4), addr, ctx->memidx, MO_TEUL);
+                tcg_gen_qemu_st_i32(FREG(B7_4), addr, ctx->memidx,
+                                    MO_TEUL | MO_ALIGN);
             }
             tcg_gen_mov_i32(REG(B11_8), addr);
         }
@@ -1011,10 +1025,12 @@ static void _decode_opc(DisasContext * ctx)
 	    tcg_gen_add_i32(addr, REG(B7_4), REG(0));
             if (ctx->tbflags & FPSCR_SZ) {
                 TCGv_i64 fp = tcg_temp_new_i64();
-                tcg_gen_qemu_ld_i64(fp, addr, ctx->memidx, MO_TEUQ);
+                tcg_gen_qemu_ld_i64(fp, addr, ctx->memidx,
+                                    MO_TEUQ | MO_ALIGN);
                 gen_store_fpr64(ctx, fp, XHACK(B11_8));
 	    } else {
-                tcg_gen_qemu_ld_i32(FREG(B11_8), addr, ctx->memidx, MO_TEUL);
+                tcg_gen_qemu_ld_i32(FREG(B11_8), addr, ctx->memidx,
+                                    MO_TEUL | MO_ALIGN);
 	    }
 	}
 	return;
@@ -1026,9 +1042,11 @@ static void _decode_opc(DisasContext * ctx)
             if (ctx->tbflags & FPSCR_SZ) {
                 TCGv_i64 fp = tcg_temp_new_i64();
                 gen_load_fpr64(ctx, fp, XHACK(B7_4));
-                tcg_gen_qemu_st_i64(fp, addr, ctx->memidx, MO_TEUQ);
+                tcg_gen_qemu_st_i64(fp, addr, ctx->memidx,
+                                    MO_TEUQ | MO_ALIGN);
 	    } else {
-                tcg_gen_qemu_st_i32(FREG(B7_4), addr, ctx->memidx, MO_TEUL);
+                tcg_gen_qemu_st_i32(FREG(B7_4), addr, ctx->memidx,
+                                    MO_TEUL | MO_ALIGN);
 	    }
 	}
 	return;
@@ -1158,14 +1176,14 @@ static void _decode_opc(DisasContext * ctx)
 	{
 	    TCGv addr = tcg_temp_new();
 	    tcg_gen_addi_i32(addr, cpu_gbr, B7_0 * 2);
-            tcg_gen_qemu_ld_i32(REG(0), addr, ctx->memidx, MO_TESW);
+            tcg_gen_qemu_ld_i32(REG(0), addr, ctx->memidx, MO_TESW | MO_ALIGN);
 	}
 	return;
     case 0xc600:		/* mov.l @(disp,GBR),R0 */
 	{
 	    TCGv addr = tcg_temp_new();
 	    tcg_gen_addi_i32(addr, cpu_gbr, B7_0 * 4);
-            tcg_gen_qemu_ld_i32(REG(0), addr, ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(REG(0), addr, ctx->memidx, MO_TESL | MO_ALIGN);
 	}
 	return;
     case 0xc000:		/* mov.b R0,@(disp,GBR) */
@@ -1179,14 +1197,14 @@ static void _decode_opc(DisasContext * ctx)
 	{
 	    TCGv addr = tcg_temp_new();
 	    tcg_gen_addi_i32(addr, cpu_gbr, B7_0 * 2);
-            tcg_gen_qemu_st_i32(REG(0), addr, ctx->memidx, MO_TEUW);
+            tcg_gen_qemu_st_i32(REG(0), addr, ctx->memidx, MO_TEUW | MO_ALIGN);
 	}
 	return;
     case 0xc200:		/* mov.l R0,@(disp,GBR) */
 	{
 	    TCGv addr = tcg_temp_new();
 	    tcg_gen_addi_i32(addr, cpu_gbr, B7_0 * 4);
-            tcg_gen_qemu_st_i32(REG(0), addr, ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_st_i32(REG(0), addr, ctx->memidx, MO_TEUL | MO_ALIGN);
 	}
 	return;
     case 0x8000:		/* mov.b R0,@(disp,Rn) */
@@ -1286,7 +1304,8 @@ static void _decode_opc(DisasContext * ctx)
 	return;
     case 0x4087:		/* ldc.l @Rm+,Rn_BANK */
 	CHECK_PRIVILEGED
-        tcg_gen_qemu_ld_i32(ALTREG(B6_4), REG(B11_8), ctx->memidx, MO_TESL);
+        tcg_gen_qemu_ld_i32(ALTREG(B6_4), REG(B11_8), ctx->memidx,
+                            MO_TESL | MO_ALIGN);
 	tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 4);
 	return;
     case 0x0082:		/* stc Rm_BANK,Rn */
@@ -1298,7 +1317,8 @@ static void _decode_opc(DisasContext * ctx)
 	{
 	    TCGv addr = tcg_temp_new();
 	    tcg_gen_subi_i32(addr, REG(B11_8), 4);
-            tcg_gen_qemu_st_i32(ALTREG(B6_4), addr, ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_st_i32(ALTREG(B6_4), addr, ctx->memidx,
+                                MO_TEUL | MO_ALIGN);
 	    tcg_gen_mov_i32(REG(B11_8), addr);
 	}
 	return;
@@ -1354,7 +1374,8 @@ static void _decode_opc(DisasContext * ctx)
 	CHECK_PRIVILEGED
 	{
 	    TCGv val = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(val, REG(B11_8), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(val, REG(B11_8), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
             tcg_gen_andi_i32(val, val, 0x700083f3);
             gen_write_sr(val);
 	    tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 4);
@@ -1372,7 +1393,7 @@ static void _decode_opc(DisasContext * ctx)
             TCGv val = tcg_temp_new();
 	    tcg_gen_subi_i32(addr, REG(B11_8), 4);
             gen_read_sr(val);
-            tcg_gen_qemu_st_i32(val, addr, ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_st_i32(val, addr, ctx->memidx, MO_TEUL | MO_ALIGN);
 	    tcg_gen_mov_i32(REG(B11_8), addr);
 	}
 	return;
@@ -1383,7 +1404,8 @@ static void _decode_opc(DisasContext * ctx)
     return;							\
   case ldpnum:							\
     prechk    							\
-    tcg_gen_qemu_ld_i32(cpu_##reg, REG(B11_8), ctx->memidx, MO_TESL); \
+    tcg_gen_qemu_ld_i32(cpu_##reg, REG(B11_8), ctx->memidx,     \
+                        MO_TESL | MO_ALIGN);                    \
     tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 4);		\
     return;
 #define ST(reg,stnum,stpnum,prechk)		\
@@ -1396,7 +1418,8 @@ static void _decode_opc(DisasContext * ctx)
     {								\
 	TCGv addr = tcg_temp_new();				\
 	tcg_gen_subi_i32(addr, REG(B11_8), 4);			\
-        tcg_gen_qemu_st_i32(cpu_##reg, addr, ctx->memidx, MO_TEUL); \
+        tcg_gen_qemu_st_i32(cpu_##reg, addr, ctx->memidx,       \
+                            MO_TEUL | MO_ALIGN);                \
 	tcg_gen_mov_i32(REG(B11_8), addr);			\
     }								\
     return;
@@ -1423,7 +1446,8 @@ static void _decode_opc(DisasContext * ctx)
 	CHECK_FPU_ENABLED
 	{
 	    TCGv addr = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(addr, REG(B11_8), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(addr, REG(B11_8), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
 	    tcg_gen_addi_i32(REG(B11_8), REG(B11_8), 4);
             gen_helper_ld_fpscr(cpu_env, addr);
             ctx->base.is_jmp = DISAS_STOP;
@@ -1441,16 +1465,18 @@ static void _decode_opc(DisasContext * ctx)
 	    tcg_gen_andi_i32(val, cpu_fpscr, 0x003fffff);
 	    addr = tcg_temp_new();
 	    tcg_gen_subi_i32(addr, REG(B11_8), 4);
-            tcg_gen_qemu_st_i32(val, addr, ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_st_i32(val, addr, ctx->memidx, MO_TEUL | MO_ALIGN);
 	    tcg_gen_mov_i32(REG(B11_8), addr);
 	}
 	return;
     case 0x00c3:		/* movca.l R0,@Rm */
         {
             TCGv val = tcg_temp_new();
-            tcg_gen_qemu_ld_i32(val, REG(B11_8), ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_ld_i32(val, REG(B11_8), ctx->memidx,
+                                MO_TEUL | MO_ALIGN);
             gen_helper_movcal(cpu_env, REG(B11_8), val);
-            tcg_gen_qemu_st_i32(REG(0), REG(B11_8), ctx->memidx, MO_TEUL);
+            tcg_gen_qemu_st_i32(REG(0), REG(B11_8), ctx->memidx,
+                                MO_TEUL | MO_ALIGN);
         }
         ctx->has_movcal = 1;
 	return;
@@ -1492,11 +1518,13 @@ static void _decode_opc(DisasContext * ctx)
                                    cpu_lock_addr, fail);
                 tmp = tcg_temp_new();
                 tcg_gen_atomic_cmpxchg_i32(tmp, REG(B11_8), cpu_lock_value,
-                                           REG(0), ctx->memidx, MO_TEUL);
+                                           REG(0), ctx->memidx,
+                                           MO_TEUL | MO_ALIGN);
                 tcg_gen_setcond_i32(TCG_COND_EQ, cpu_sr_t, tmp, cpu_lock_value);
             } else {
                 tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_lock_addr, -1, fail);
-                tcg_gen_qemu_st_i32(REG(0), REG(B11_8), ctx->memidx, MO_TEUL);
+                tcg_gen_qemu_st_i32(REG(0), REG(B11_8), ctx->memidx,
+                                    MO_TEUL | MO_ALIGN);
                 tcg_gen_movi_i32(cpu_sr_t, 1);
             }
             tcg_gen_br(done);
@@ -1521,11 +1549,13 @@ static void _decode_opc(DisasContext * ctx)
         if ((tb_cflags(ctx->base.tb) & CF_PARALLEL)) {
             TCGv tmp = tcg_temp_new();
             tcg_gen_mov_i32(tmp, REG(B11_8));
-            tcg_gen_qemu_ld_i32(REG(0), REG(B11_8), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(REG(0), REG(B11_8), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
             tcg_gen_mov_i32(cpu_lock_value, REG(0));
             tcg_gen_mov_i32(cpu_lock_addr, tmp);
         } else {
-            tcg_gen_qemu_ld_i32(REG(0), REG(B11_8), ctx->memidx, MO_TESL);
+            tcg_gen_qemu_ld_i32(REG(0), REG(B11_8), ctx->memidx,
+                                MO_TESL | MO_ALIGN);
             tcg_gen_movi_i32(cpu_lock_addr, 0);
         }
         return;
-- 
2.34.1




* [PATCH 12/16] target/sh4: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (10 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 11/16] target/sh4: Use MO_ALIGN where required Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-10 14:44   ` Philippe Mathieu-Daudé
  2023-05-02 16:08 ` [PATCH 13/16] target/sparc: Use MO_ALIGN where required Richard Henderson
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/sh4-linux-user.mak   | 1 -
 configs/targets/sh4-softmmu.mak      | 1 -
 configs/targets/sh4eb-linux-user.mak | 1 -
 configs/targets/sh4eb-softmmu.mak    | 1 -
 4 files changed, 4 deletions(-)

diff --git a/configs/targets/sh4-linux-user.mak b/configs/targets/sh4-linux-user.mak
index 0152d6621e..9908887566 100644
--- a/configs/targets/sh4-linux-user.mak
+++ b/configs/targets/sh4-linux-user.mak
@@ -1,5 +1,4 @@
 TARGET_ARCH=sh4
 TARGET_SYSTBL_ABI=common
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_HAS_BFLT=y
diff --git a/configs/targets/sh4-softmmu.mak b/configs/targets/sh4-softmmu.mak
index 95896376c4..f9d62d91e4 100644
--- a/configs/targets/sh4-softmmu.mak
+++ b/configs/targets/sh4-softmmu.mak
@@ -1,2 +1 @@
 TARGET_ARCH=sh4
-TARGET_ALIGNED_ONLY=y
diff --git a/configs/targets/sh4eb-linux-user.mak b/configs/targets/sh4eb-linux-user.mak
index 6724165efe..9db6b3609c 100644
--- a/configs/targets/sh4eb-linux-user.mak
+++ b/configs/targets/sh4eb-linux-user.mak
@@ -1,6 +1,5 @@
 TARGET_ARCH=sh4
 TARGET_SYSTBL_ABI=common
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
 TARGET_HAS_BFLT=y
diff --git a/configs/targets/sh4eb-softmmu.mak b/configs/targets/sh4eb-softmmu.mak
index dc8b30bf7a..226b1fc698 100644
--- a/configs/targets/sh4eb-softmmu.mak
+++ b/configs/targets/sh4eb-softmmu.mak
@@ -1,3 +1,2 @@
 TARGET_ARCH=sh4
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
-- 
2.34.1




* [PATCH 13/16] target/sparc: Use MO_ALIGN where required
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (11 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 12/16] target/sh4: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-03 20:26   ` Mark Cave-Ayland
  2023-05-02 16:08 ` [PATCH 14/16] target/sparc: Use cpu_ld*_code_mmu Richard Henderson
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sparc/translate.c | 66 +++++++++++++++++++++-------------------
 1 file changed, 34 insertions(+), 32 deletions(-)

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index bc71e44e66..414e014b11 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -1899,7 +1899,7 @@ static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
                      TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
-    tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
+    tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop | MO_ALIGN);
 }
 
 static void gen_ldstub(DisasContext *dc, TCGv dst, TCGv addr, int mmu_idx)
@@ -2155,12 +2155,12 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
         break;
     case GET_ASI_DIRECT:
         gen_address_mask(dc, addr);
-        tcg_gen_qemu_ld_tl(dst, addr, da.mem_idx, da.memop);
+        tcg_gen_qemu_ld_tl(dst, addr, da.mem_idx, da.memop | MO_ALIGN);
         break;
     default:
         {
             TCGv_i32 r_asi = tcg_constant_i32(da.asi);
-            TCGv_i32 r_mop = tcg_constant_i32(memop);
+            TCGv_i32 r_mop = tcg_constant_i32(memop | MO_ALIGN);
 
             save_state(dc);
 #ifdef TARGET_SPARC64
@@ -2201,7 +2201,7 @@ static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
         /* fall through */
     case GET_ASI_DIRECT:
         gen_address_mask(dc, addr);
-        tcg_gen_qemu_st_tl(src, addr, da.mem_idx, da.memop);
+        tcg_gen_qemu_st_tl(src, addr, da.mem_idx, da.memop | MO_ALIGN);
         break;
 #if !defined(TARGET_SPARC64) && !defined(CONFIG_USER_ONLY)
     case GET_ASI_BCOPY:
@@ -2233,7 +2233,7 @@ static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
     default:
         {
             TCGv_i32 r_asi = tcg_constant_i32(da.asi);
-            TCGv_i32 r_mop = tcg_constant_i32(memop & MO_SIZE);
+            TCGv_i32 r_mop = tcg_constant_i32(memop | MO_ALIGN);
 
             save_state(dc);
 #ifdef TARGET_SPARC64
@@ -2283,7 +2283,7 @@ static void gen_cas_asi(DisasContext *dc, TCGv addr, TCGv cmpv,
     case GET_ASI_DIRECT:
         oldv = tcg_temp_new();
         tcg_gen_atomic_cmpxchg_tl(oldv, addr, cmpv, gen_load_gpr(dc, rd),
-                                  da.mem_idx, da.memop);
+                                  da.mem_idx, da.memop | MO_ALIGN);
         gen_store_gpr(dc, rd, oldv);
         break;
     default:
@@ -2347,7 +2347,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
         switch (size) {
         case 4:
             d32 = gen_dest_fpr_F(dc);
-            tcg_gen_qemu_ld_i32(d32, addr, da.mem_idx, da.memop);
+            tcg_gen_qemu_ld_i32(d32, addr, da.mem_idx, da.memop | MO_ALIGN);
             gen_store_fpr_F(dc, rd, d32);
             break;
         case 8:
@@ -2397,7 +2397,8 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
         /* Valid for lddfa only.  */
         if (size == 8) {
             gen_address_mask(dc, addr);
-            tcg_gen_qemu_ld_i64(cpu_fpr[rd / 2], addr, da.mem_idx, da.memop);
+            tcg_gen_qemu_ld_i64(cpu_fpr[rd / 2], addr, da.mem_idx,
+                                da.memop | MO_ALIGN);
         } else {
             gen_exception(dc, TT_ILL_INSN);
         }
@@ -2406,7 +2407,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     default:
         {
             TCGv_i32 r_asi = tcg_constant_i32(da.asi);
-            TCGv_i32 r_mop = tcg_constant_i32(da.memop);
+            TCGv_i32 r_mop = tcg_constant_i32(da.memop | MO_ALIGN);
 
             save_state(dc);
             /* According to the table in the UA2011 manual, the only
@@ -2454,7 +2455,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
         switch (size) {
         case 4:
             d32 = gen_load_fpr_F(dc, rd);
-            tcg_gen_qemu_st_i32(d32, addr, da.mem_idx, da.memop);
+            tcg_gen_qemu_st_i32(d32, addr, da.mem_idx, da.memop | MO_ALIGN);
             break;
         case 8:
             tcg_gen_qemu_st_i64(cpu_fpr[rd / 2], addr, da.mem_idx,
@@ -2506,7 +2507,8 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
         /* Valid for stdfa only.  */
         if (size == 8) {
             gen_address_mask(dc, addr);
-            tcg_gen_qemu_st_i64(cpu_fpr[rd / 2], addr, da.mem_idx, da.memop);
+            tcg_gen_qemu_st_i64(cpu_fpr[rd / 2], addr, da.mem_idx,
+                                da.memop | MO_ALIGN);
         } else {
             gen_exception(dc, TT_ILL_INSN);
         }
@@ -2543,7 +2545,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
             TCGv_i64 tmp = tcg_temp_new_i64();
 
             gen_address_mask(dc, addr);
-            tcg_gen_qemu_ld_i64(tmp, addr, da.mem_idx, da.memop);
+            tcg_gen_qemu_ld_i64(tmp, addr, da.mem_idx, da.memop | MO_ALIGN);
 
             /* Note that LE ldda acts as if each 32-bit register
                result is byte swapped.  Having just performed one
@@ -2613,7 +2615,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
                 tcg_gen_concat32_i64(t64, hi, lo);
             }
             gen_address_mask(dc, addr);
-            tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop);
+            tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop | MO_ALIGN);
         }
         break;
 
@@ -2651,7 +2653,7 @@ static void gen_casx_asi(DisasContext *dc, TCGv addr, TCGv cmpv,
     case GET_ASI_DIRECT:
         oldv = tcg_temp_new();
         tcg_gen_atomic_cmpxchg_tl(oldv, addr, cmpv, gen_load_gpr(dc, rd),
-                                  da.mem_idx, da.memop);
+                                  da.mem_idx, da.memop | MO_ALIGN);
         gen_store_gpr(dc, rd, oldv);
         break;
     default:
@@ -2678,7 +2680,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
         return;
     case GET_ASI_DIRECT:
         gen_address_mask(dc, addr);
-        tcg_gen_qemu_ld_i64(t64, addr, da.mem_idx, da.memop);
+        tcg_gen_qemu_ld_i64(t64, addr, da.mem_idx, da.memop | MO_ALIGN);
         break;
     default:
         {
@@ -2710,7 +2712,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
         break;
     case GET_ASI_DIRECT:
         gen_address_mask(dc, addr);
-        tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop);
+        tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop | MO_ALIGN);
         break;
     case GET_ASI_BFILL:
         /* Store 32 bytes of T64 to ADDR.  */
@@ -5180,7 +5182,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0x0:       /* ld, V9 lduw, load unsigned word */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TEUL);
+                                       dc->mem_idx, MO_TEUL | MO_ALIGN);
                     break;
                 case 0x1:       /* ldub, load unsigned byte */
                     gen_address_mask(dc, cpu_addr);
@@ -5190,7 +5192,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0x2:       /* lduh, load unsigned halfword */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TEUW);
+                                       dc->mem_idx, MO_TEUW | MO_ALIGN);
                     break;
                 case 0x3:       /* ldd, load double word */
                     if (rd & 1)
@@ -5201,7 +5203,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                         gen_address_mask(dc, cpu_addr);
                         t64 = tcg_temp_new_i64();
                         tcg_gen_qemu_ld_i64(t64, cpu_addr,
-                                            dc->mem_idx, MO_TEUQ);
+                                            dc->mem_idx, MO_TEUQ | MO_ALIGN);
                         tcg_gen_trunc_i64_tl(cpu_val, t64);
                         tcg_gen_ext32u_tl(cpu_val, cpu_val);
                         gen_store_gpr(dc, rd + 1, cpu_val);
@@ -5217,7 +5219,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0xa:       /* ldsh, load signed halfword */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TESW);
+                                       dc->mem_idx, MO_TESW | MO_ALIGN);
                     break;
                 case 0xd:       /* ldstub */
                     gen_ldstub(dc, cpu_val, cpu_addr, dc->mem_idx);
@@ -5272,12 +5274,12 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0x08: /* V9 ldsw */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TESL);
+                                       dc->mem_idx, MO_TESL | MO_ALIGN);
                     break;
                 case 0x0b: /* V9 ldx */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TEUQ);
+                                       dc->mem_idx, MO_TEUQ | MO_ALIGN);
                     break;
                 case 0x18: /* V9 ldswa */
                     gen_ld_asi(dc, cpu_val, cpu_addr, insn, MO_TESL);
@@ -5328,7 +5330,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                     gen_address_mask(dc, cpu_addr);
                     cpu_dst_32 = gen_dest_fpr_F(dc);
                     tcg_gen_qemu_ld_i32(cpu_dst_32, cpu_addr,
-                                        dc->mem_idx, MO_TEUL);
+                                        dc->mem_idx, MO_TEUL | MO_ALIGN);
                     gen_store_fpr_F(dc, rd, cpu_dst_32);
                     break;
                 case 0x21:      /* ldfsr, V9 ldxfsr */
@@ -5337,14 +5339,14 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                     if (rd == 1) {
                         TCGv_i64 t64 = tcg_temp_new_i64();
                         tcg_gen_qemu_ld_i64(t64, cpu_addr,
-                                            dc->mem_idx, MO_TEUQ);
+                                            dc->mem_idx, MO_TEUQ | MO_ALIGN);
                         gen_helper_ldxfsr(cpu_fsr, cpu_env, cpu_fsr, t64);
                         break;
                     }
 #endif
                     cpu_dst_32 = tcg_temp_new_i32();
                     tcg_gen_qemu_ld_i32(cpu_dst_32, cpu_addr,
-                                        dc->mem_idx, MO_TEUL);
+                                        dc->mem_idx, MO_TEUL | MO_ALIGN);
                     gen_helper_ldfsr(cpu_fsr, cpu_env, cpu_fsr, cpu_dst_32);
                     break;
                 case 0x22:      /* ldqf, load quad fpreg */
@@ -5377,7 +5379,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0x4: /* st, store word */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_st_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TEUL);
+                                       dc->mem_idx, MO_TEUL | MO_ALIGN);
                     break;
                 case 0x5: /* stb, store byte */
                     gen_address_mask(dc, cpu_addr);
@@ -5386,7 +5388,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0x6: /* sth, store halfword */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_st_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TEUW);
+                                       dc->mem_idx, MO_TEUW | MO_ALIGN);
                     break;
                 case 0x7: /* std, store double word */
                     if (rd & 1)
@@ -5400,7 +5402,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                         t64 = tcg_temp_new_i64();
                         tcg_gen_concat_tl_i64(t64, lo, cpu_val);
                         tcg_gen_qemu_st_i64(t64, cpu_addr,
-                                            dc->mem_idx, MO_TEUQ);
+                                            dc->mem_idx, MO_TEUQ | MO_ALIGN);
                     }
                     break;
 #if !defined(CONFIG_USER_ONLY) || defined(TARGET_SPARC64)
@@ -5424,7 +5426,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                 case 0x0e: /* V9 stx */
                     gen_address_mask(dc, cpu_addr);
                     tcg_gen_qemu_st_tl(cpu_val, cpu_addr,
-                                       dc->mem_idx, MO_TEUQ);
+                                       dc->mem_idx, MO_TEUQ | MO_ALIGN);
                     break;
                 case 0x1e: /* V9 stxa */
                     gen_st_asi(dc, cpu_val, cpu_addr, insn, MO_TEUQ);
@@ -5442,7 +5444,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                     gen_address_mask(dc, cpu_addr);
                     cpu_src1_32 = gen_load_fpr_F(dc, rd);
                     tcg_gen_qemu_st_i32(cpu_src1_32, cpu_addr,
-                                        dc->mem_idx, MO_TEUL);
+                                        dc->mem_idx, MO_TEUL | MO_ALIGN);
                     break;
                 case 0x25: /* stfsr, V9 stxfsr */
                     {
@@ -5450,12 +5452,12 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
                         gen_address_mask(dc, cpu_addr);
                         if (rd == 1) {
                             tcg_gen_qemu_st_tl(cpu_fsr, cpu_addr,
-                                               dc->mem_idx, MO_TEUQ);
+                                               dc->mem_idx, MO_TEUQ | MO_ALIGN);
                             break;
                         }
 #endif
                         tcg_gen_qemu_st_tl(cpu_fsr, cpu_addr,
-                                           dc->mem_idx, MO_TEUL);
+                                           dc->mem_idx, MO_TEUL | MO_ALIGN);
                     }
                     break;
                 case 0x26:
-- 
2.34.1




* [PATCH 14/16] target/sparc: Use cpu_ld*_code_mmu
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (12 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 13/16] target/sparc: Use MO_ALIGN where required Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-03 20:27   ` Mark Cave-Ayland
  2023-05-02 16:08 ` [PATCH 15/16] target/sparc: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

This passes the memop given as an argument to helper_ld_asi on to the
ultimate load primitive.
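
For illustration only (the wrapper name below is invented, and the real
code dispatches on the access size), the new call sequence has this
shape:

    /*
     * Pack the MemOp and the code-fetch mmu index into one MemOpIdx,
     * then hand it to the generic code-load primitive.  The caller
     * passes GETPC() from the outermost helper as 'ra'.
     */
    static uint32_t kerneltxt_ldl(CPUSPARCState *env, target_ulong addr,
                                  MemOp memop, uintptr_t ra)
    {
        MemOpIdx oi = make_memop_idx(memop, cpu_mmu_index(env, true));
        return cpu_ldl_code_mmu(env, addr, oi, ra);
    }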

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sparc/ldst_helper.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/target/sparc/ldst_helper.c b/target/sparc/ldst_helper.c
index a53580d9e4..7972d56a72 100644
--- a/target/sparc/ldst_helper.c
+++ b/target/sparc/ldst_helper.c
@@ -593,6 +593,7 @@ uint64_t helper_ld_asi(CPUSPARCState *env, target_ulong addr,
 #if defined(DEBUG_MXCC) || defined(DEBUG_ASI)
     uint32_t last_addr = addr;
 #endif
+    MemOpIdx oi;
 
     do_check_align(env, addr, size - 1, GETPC());
     switch (asi) {
@@ -692,19 +693,20 @@ uint64_t helper_ld_asi(CPUSPARCState *env, target_ulong addr,
     case ASI_M_IODIAG:  /* Turbosparc IOTLB Diagnostic */
         break;
     case ASI_KERNELTXT: /* Supervisor code access */
+        oi = make_memop_idx(memop, cpu_mmu_index(env, true));
         switch (size) {
         case 1:
-            ret = cpu_ldub_code(env, addr);
+            ret = cpu_ldb_code_mmu(env, addr, oi, GETPC());
             break;
         case 2:
-            ret = cpu_lduw_code(env, addr);
+            ret = cpu_ldw_code_mmu(env, addr, oi, GETPC());
             break;
         default:
         case 4:
-            ret = cpu_ldl_code(env, addr);
+            ret = cpu_ldl_code_mmu(env, addr, oi, GETPC());
             break;
         case 8:
-            ret = cpu_ldq_code(env, addr);
+            ret = cpu_ldq_code_mmu(env, addr, oi, GETPC());
             break;
         }
         break;
-- 
2.34.1




* [PATCH 15/16] target/sparc: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (13 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 14/16] target/sparc: Use cpu_ld*_code_mmu Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-03 20:28   ` Mark Cave-Ayland
  2023-05-02 16:08 ` [PATCH 16/16] tcg: " Richard Henderson
  2023-05-10 11:16 ` [PATCH 00/16] " Richard Henderson
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/sparc-linux-user.mak       | 1 -
 configs/targets/sparc-softmmu.mak          | 1 -
 configs/targets/sparc32plus-linux-user.mak | 1 -
 configs/targets/sparc64-linux-user.mak     | 1 -
 configs/targets/sparc64-softmmu.mak        | 1 -
 5 files changed, 5 deletions(-)

diff --git a/configs/targets/sparc-linux-user.mak b/configs/targets/sparc-linux-user.mak
index 00e7bc1f07..abcfb8fc62 100644
--- a/configs/targets/sparc-linux-user.mak
+++ b/configs/targets/sparc-linux-user.mak
@@ -1,5 +1,4 @@
 TARGET_ARCH=sparc
 TARGET_SYSTBL_ABI=common,32
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/sparc-softmmu.mak b/configs/targets/sparc-softmmu.mak
index a849190f01..454eb35499 100644
--- a/configs/targets/sparc-softmmu.mak
+++ b/configs/targets/sparc-softmmu.mak
@@ -1,3 +1,2 @@
 TARGET_ARCH=sparc
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/sparc32plus-linux-user.mak b/configs/targets/sparc32plus-linux-user.mak
index a65c0951a1..6cc8fa516b 100644
--- a/configs/targets/sparc32plus-linux-user.mak
+++ b/configs/targets/sparc32plus-linux-user.mak
@@ -4,5 +4,4 @@ TARGET_BASE_ARCH=sparc
 TARGET_ABI_DIR=sparc
 TARGET_SYSTBL_ABI=common,32
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/sparc64-linux-user.mak b/configs/targets/sparc64-linux-user.mak
index 20fcb93fa4..52f05ec000 100644
--- a/configs/targets/sparc64-linux-user.mak
+++ b/configs/targets/sparc64-linux-user.mak
@@ -3,5 +3,4 @@ TARGET_BASE_ARCH=sparc
 TARGET_ABI_DIR=sparc
 TARGET_SYSTBL_ABI=common,64
 TARGET_SYSTBL=syscall.tbl
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
diff --git a/configs/targets/sparc64-softmmu.mak b/configs/targets/sparc64-softmmu.mak
index c626ac3eae..d3f8a3b710 100644
--- a/configs/targets/sparc64-softmmu.mak
+++ b/configs/targets/sparc64-softmmu.mak
@@ -1,4 +1,3 @@
 TARGET_ARCH=sparc64
 TARGET_BASE_ARCH=sparc
-TARGET_ALIGNED_ONLY=y
 TARGET_BIG_ENDIAN=y
-- 
2.34.1




* [PATCH 16/16] tcg: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (14 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 15/16] target/sparc: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-02 16:08 ` Richard Henderson
  2023-05-10 14:43   ` Philippe Mathieu-Daudé
  2023-05-10 11:16 ` [PATCH 00/16] " Richard Henderson
  16 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-05-02 16:08 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

All uses have now been expunged.
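
MO_UNALN and MO_ALIGN now have fixed, target-independent values, so a
front end that needs an aligned access requests it explicitly, along
the lines of (illustrative operands only):

    /* Aligned target-endian 64-bit load, as in the sparc patch (V9 ldx). */
    tcg_gen_qemu_ld_tl(dst, addr, mem_idx, MO_TEUQ | MO_ALIGN);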

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/memop.h  | 13 ++-----------
 include/exec/poison.h |  1 -
 tcg/tcg.c             |  5 -----
 3 files changed, 2 insertions(+), 17 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 25d027434a..07f5f88188 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -47,8 +47,6 @@ typedef enum MemOp {
      * MO_UNALN accesses are never checked for alignment.
      * MO_ALIGN accesses will result in a call to the CPU's
      * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines
-     * TARGET_ALIGNED_ONLY.
      *
      * Some architectures (e.g. ARMv8) need the address which is aligned
      * to a size more than the size of the memory access.
@@ -65,21 +63,14 @@ typedef enum MemOp {
      */
     MO_ASHIFT = 5,
     MO_AMASK = 0x7 << MO_ASHIFT,
-#ifdef NEED_CPU_H
-#ifdef TARGET_ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-#endif
+    MO_UNALN    = 0,
     MO_ALIGN_2  = 1 << MO_ASHIFT,
     MO_ALIGN_4  = 2 << MO_ASHIFT,
     MO_ALIGN_8  = 3 << MO_ASHIFT,
     MO_ALIGN_16 = 4 << MO_ASHIFT,
     MO_ALIGN_32 = 5 << MO_ASHIFT,
     MO_ALIGN_64 = 6 << MO_ASHIFT,
+    MO_ALIGN    = MO_AMASK,
 
     /* Combinations of the above, for ease of use.  */
     MO_UB    = MO_8,
diff --git a/include/exec/poison.h b/include/exec/poison.h
index 140daa4a85..256736e11a 100644
--- a/include/exec/poison.h
+++ b/include/exec/poison.h
@@ -35,7 +35,6 @@
 #pragma GCC poison TARGET_TRICORE
 #pragma GCC poison TARGET_XTENSA
 
-#pragma GCC poison TARGET_ALIGNED_ONLY
 #pragma GCC poison TARGET_HAS_BFLT
 #pragma GCC poison TARGET_NAME
 #pragma GCC poison TARGET_SUPPORTS_MTTCG
diff --git a/tcg/tcg.c b/tcg/tcg.c
index cfd3262a4a..2ce9dba25c 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2071,13 +2071,8 @@ static const char * const ldst_name[] =
 };
 
 static const char * const alignment_name[(MO_AMASK >> MO_ASHIFT) + 1] = {
-#ifdef TARGET_ALIGNED_ONLY
-    [MO_UNALN >> MO_ASHIFT]    = "un+",
-    [MO_ALIGN >> MO_ASHIFT]    = "",
-#else
     [MO_UNALN >> MO_ASHIFT]    = "",
     [MO_ALIGN >> MO_ASHIFT]    = "al+",
-#endif
     [MO_ALIGN_2 >> MO_ASHIFT]  = "al2+",
     [MO_ALIGN_4 >> MO_ASHIFT]  = "al4+",
     [MO_ALIGN_8 >> MO_ASHIFT]  = "al8+",
-- 
2.34.1




* Re: [PATCH 13/16] target/sparc: Use MO_ALIGN where required
  2023-05-02 16:08 ` [PATCH 13/16] target/sparc: Use MO_ALIGN where required Richard Henderson
@ 2023-05-03 20:26   ` Mark Cave-Ayland
  0 siblings, 0 replies; 27+ messages in thread
From: Mark Cave-Ayland @ 2023-05-03 20:26 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato

On 02/05/2023 17:08, Richard Henderson wrote:

> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   target/sparc/translate.c | 66 +++++++++++++++++++++-------------------
>   1 file changed, 34 insertions(+), 32 deletions(-)
> 
> diff --git a/target/sparc/translate.c b/target/sparc/translate.c
> index bc71e44e66..414e014b11 100644
> --- a/target/sparc/translate.c
> +++ b/target/sparc/translate.c
> @@ -1899,7 +1899,7 @@ static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
>                        TCGv addr, int mmu_idx, MemOp memop)
>   {
>       gen_address_mask(dc, addr);
> -    tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
> +    tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop | MO_ALIGN);
>   }
>   
>   static void gen_ldstub(DisasContext *dc, TCGv dst, TCGv addr, int mmu_idx)
> @@ -2155,12 +2155,12 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
>           break;
>       case GET_ASI_DIRECT:
>           gen_address_mask(dc, addr);
> -        tcg_gen_qemu_ld_tl(dst, addr, da.mem_idx, da.memop);
> +        tcg_gen_qemu_ld_tl(dst, addr, da.mem_idx, da.memop | MO_ALIGN);
>           break;
>       default:
>           {
>               TCGv_i32 r_asi = tcg_constant_i32(da.asi);
> -            TCGv_i32 r_mop = tcg_constant_i32(memop);
> +            TCGv_i32 r_mop = tcg_constant_i32(memop | MO_ALIGN);
>   
>               save_state(dc);
>   #ifdef TARGET_SPARC64
> @@ -2201,7 +2201,7 @@ static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
>           /* fall through */
>       case GET_ASI_DIRECT:
>           gen_address_mask(dc, addr);
> -        tcg_gen_qemu_st_tl(src, addr, da.mem_idx, da.memop);
> +        tcg_gen_qemu_st_tl(src, addr, da.mem_idx, da.memop | MO_ALIGN);
>           break;
>   #if !defined(TARGET_SPARC64) && !defined(CONFIG_USER_ONLY)
>       case GET_ASI_BCOPY:
> @@ -2233,7 +2233,7 @@ static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
>       default:
>           {
>               TCGv_i32 r_asi = tcg_constant_i32(da.asi);
> -            TCGv_i32 r_mop = tcg_constant_i32(memop & MO_SIZE);
> +            TCGv_i32 r_mop = tcg_constant_i32(memop | MO_ALIGN);
>   
>               save_state(dc);
>   #ifdef TARGET_SPARC64
> @@ -2283,7 +2283,7 @@ static void gen_cas_asi(DisasContext *dc, TCGv addr, TCGv cmpv,
>       case GET_ASI_DIRECT:
>           oldv = tcg_temp_new();
>           tcg_gen_atomic_cmpxchg_tl(oldv, addr, cmpv, gen_load_gpr(dc, rd),
> -                                  da.mem_idx, da.memop);
> +                                  da.mem_idx, da.memop | MO_ALIGN);
>           gen_store_gpr(dc, rd, oldv);
>           break;
>       default:
> @@ -2347,7 +2347,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
>           switch (size) {
>           case 4:
>               d32 = gen_dest_fpr_F(dc);
> -            tcg_gen_qemu_ld_i32(d32, addr, da.mem_idx, da.memop);
> +            tcg_gen_qemu_ld_i32(d32, addr, da.mem_idx, da.memop | MO_ALIGN);
>               gen_store_fpr_F(dc, rd, d32);
>               break;
>           case 8:
> @@ -2397,7 +2397,8 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
>           /* Valid for lddfa only.  */
>           if (size == 8) {
>               gen_address_mask(dc, addr);
> -            tcg_gen_qemu_ld_i64(cpu_fpr[rd / 2], addr, da.mem_idx, da.memop);
> +            tcg_gen_qemu_ld_i64(cpu_fpr[rd / 2], addr, da.mem_idx,
> +                                da.memop | MO_ALIGN);
>           } else {
>               gen_exception(dc, TT_ILL_INSN);
>           }
> @@ -2406,7 +2407,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
>       default:
>           {
>               TCGv_i32 r_asi = tcg_constant_i32(da.asi);
> -            TCGv_i32 r_mop = tcg_constant_i32(da.memop);
> +            TCGv_i32 r_mop = tcg_constant_i32(da.memop | MO_ALIGN);
>   
>               save_state(dc);
>               /* According to the table in the UA2011 manual, the only
> @@ -2454,7 +2455,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
>           switch (size) {
>           case 4:
>               d32 = gen_load_fpr_F(dc, rd);
> -            tcg_gen_qemu_st_i32(d32, addr, da.mem_idx, da.memop);
> +            tcg_gen_qemu_st_i32(d32, addr, da.mem_idx, da.memop | MO_ALIGN);
>               break;
>           case 8:
>               tcg_gen_qemu_st_i64(cpu_fpr[rd / 2], addr, da.mem_idx,
> @@ -2506,7 +2507,8 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
>           /* Valid for stdfa only.  */
>           if (size == 8) {
>               gen_address_mask(dc, addr);
> -            tcg_gen_qemu_st_i64(cpu_fpr[rd / 2], addr, da.mem_idx, da.memop);
> +            tcg_gen_qemu_st_i64(cpu_fpr[rd / 2], addr, da.mem_idx,
> +                                da.memop | MO_ALIGN);
>           } else {
>               gen_exception(dc, TT_ILL_INSN);
>           }
> @@ -2543,7 +2545,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
>               TCGv_i64 tmp = tcg_temp_new_i64();
>   
>               gen_address_mask(dc, addr);
> -            tcg_gen_qemu_ld_i64(tmp, addr, da.mem_idx, da.memop);
> +            tcg_gen_qemu_ld_i64(tmp, addr, da.mem_idx, da.memop | MO_ALIGN);
>   
>               /* Note that LE ldda acts as if each 32-bit register
>                  result is byte swapped.  Having just performed one
> @@ -2613,7 +2615,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
>                   tcg_gen_concat32_i64(t64, hi, lo);
>               }
>               gen_address_mask(dc, addr);
> -            tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop);
> +            tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop | MO_ALIGN);
>           }
>           break;
>   
> @@ -2651,7 +2653,7 @@ static void gen_casx_asi(DisasContext *dc, TCGv addr, TCGv cmpv,
>       case GET_ASI_DIRECT:
>           oldv = tcg_temp_new();
>           tcg_gen_atomic_cmpxchg_tl(oldv, addr, cmpv, gen_load_gpr(dc, rd),
> -                                  da.mem_idx, da.memop);
> +                                  da.mem_idx, da.memop | MO_ALIGN);
>           gen_store_gpr(dc, rd, oldv);
>           break;
>       default:
> @@ -2678,7 +2680,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
>           return;
>       case GET_ASI_DIRECT:
>           gen_address_mask(dc, addr);
> -        tcg_gen_qemu_ld_i64(t64, addr, da.mem_idx, da.memop);
> +        tcg_gen_qemu_ld_i64(t64, addr, da.mem_idx, da.memop | MO_ALIGN);
>           break;
>       default:
>           {
> @@ -2710,7 +2712,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
>           break;
>       case GET_ASI_DIRECT:
>           gen_address_mask(dc, addr);
> -        tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop);
> +        tcg_gen_qemu_st_i64(t64, addr, da.mem_idx, da.memop | MO_ALIGN);
>           break;
>       case GET_ASI_BFILL:
>           /* Store 32 bytes of T64 to ADDR.  */
> @@ -5180,7 +5182,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0x0:       /* ld, V9 lduw, load unsigned word */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TEUL);
> +                                       dc->mem_idx, MO_TEUL | MO_ALIGN);
>                       break;
>                   case 0x1:       /* ldub, load unsigned byte */
>                       gen_address_mask(dc, cpu_addr);
> @@ -5190,7 +5192,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0x2:       /* lduh, load unsigned halfword */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TEUW);
> +                                       dc->mem_idx, MO_TEUW | MO_ALIGN);
>                       break;
>                   case 0x3:       /* ldd, load double word */
>                       if (rd & 1)
> @@ -5201,7 +5203,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                           gen_address_mask(dc, cpu_addr);
>                           t64 = tcg_temp_new_i64();
>                           tcg_gen_qemu_ld_i64(t64, cpu_addr,
> -                                            dc->mem_idx, MO_TEUQ);
> +                                            dc->mem_idx, MO_TEUQ | MO_ALIGN);
>                           tcg_gen_trunc_i64_tl(cpu_val, t64);
>                           tcg_gen_ext32u_tl(cpu_val, cpu_val);
>                           gen_store_gpr(dc, rd + 1, cpu_val);
> @@ -5217,7 +5219,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0xa:       /* ldsh, load signed halfword */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TESW);
> +                                       dc->mem_idx, MO_TESW | MO_ALIGN);
>                       break;
>                   case 0xd:       /* ldstub */
>                       gen_ldstub(dc, cpu_val, cpu_addr, dc->mem_idx);
> @@ -5272,12 +5274,12 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0x08: /* V9 ldsw */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TESL);
> +                                       dc->mem_idx, MO_TESL | MO_ALIGN);
>                       break;
>                   case 0x0b: /* V9 ldx */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_ld_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TEUQ);
> +                                       dc->mem_idx, MO_TEUQ | MO_ALIGN);
>                       break;
>                   case 0x18: /* V9 ldswa */
>                       gen_ld_asi(dc, cpu_val, cpu_addr, insn, MO_TESL);
> @@ -5328,7 +5330,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                       gen_address_mask(dc, cpu_addr);
>                       cpu_dst_32 = gen_dest_fpr_F(dc);
>                       tcg_gen_qemu_ld_i32(cpu_dst_32, cpu_addr,
> -                                        dc->mem_idx, MO_TEUL);
> +                                        dc->mem_idx, MO_TEUL | MO_ALIGN);
>                       gen_store_fpr_F(dc, rd, cpu_dst_32);
>                       break;
>                   case 0x21:      /* ldfsr, V9 ldxfsr */
> @@ -5337,14 +5339,14 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                       if (rd == 1) {
>                           TCGv_i64 t64 = tcg_temp_new_i64();
>                           tcg_gen_qemu_ld_i64(t64, cpu_addr,
> -                                            dc->mem_idx, MO_TEUQ);
> +                                            dc->mem_idx, MO_TEUQ | MO_ALIGN);
>                           gen_helper_ldxfsr(cpu_fsr, cpu_env, cpu_fsr, t64);
>                           break;
>                       }
>   #endif
>                       cpu_dst_32 = tcg_temp_new_i32();
>                       tcg_gen_qemu_ld_i32(cpu_dst_32, cpu_addr,
> -                                        dc->mem_idx, MO_TEUL);
> +                                        dc->mem_idx, MO_TEUL | MO_ALIGN);
>                       gen_helper_ldfsr(cpu_fsr, cpu_env, cpu_fsr, cpu_dst_32);
>                       break;
>                   case 0x22:      /* ldqf, load quad fpreg */
> @@ -5377,7 +5379,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0x4: /* st, store word */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_st_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TEUL);
> +                                       dc->mem_idx, MO_TEUL | MO_ALIGN);
>                       break;
>                   case 0x5: /* stb, store byte */
>                       gen_address_mask(dc, cpu_addr);
> @@ -5386,7 +5388,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0x6: /* sth, store halfword */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_st_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TEUW);
> +                                       dc->mem_idx, MO_TEUW | MO_ALIGN);
>                       break;
>                   case 0x7: /* std, store double word */
>                       if (rd & 1)
> @@ -5400,7 +5402,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                           t64 = tcg_temp_new_i64();
>                           tcg_gen_concat_tl_i64(t64, lo, cpu_val);
>                           tcg_gen_qemu_st_i64(t64, cpu_addr,
> -                                            dc->mem_idx, MO_TEUQ);
> +                                            dc->mem_idx, MO_TEUQ | MO_ALIGN);
>                       }
>                       break;
>   #if !defined(CONFIG_USER_ONLY) || defined(TARGET_SPARC64)
> @@ -5424,7 +5426,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                   case 0x0e: /* V9 stx */
>                       gen_address_mask(dc, cpu_addr);
>                       tcg_gen_qemu_st_tl(cpu_val, cpu_addr,
> -                                       dc->mem_idx, MO_TEUQ);
> +                                       dc->mem_idx, MO_TEUQ | MO_ALIGN);
>                       break;
>                   case 0x1e: /* V9 stxa */
>                       gen_st_asi(dc, cpu_val, cpu_addr, insn, MO_TEUQ);
> @@ -5442,7 +5444,7 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                       gen_address_mask(dc, cpu_addr);
>                       cpu_src1_32 = gen_load_fpr_F(dc, rd);
>                       tcg_gen_qemu_st_i32(cpu_src1_32, cpu_addr,
> -                                        dc->mem_idx, MO_TEUL);
> +                                        dc->mem_idx, MO_TEUL | MO_ALIGN);
>                       break;
>                   case 0x25: /* stfsr, V9 stxfsr */
>                       {
> @@ -5450,12 +5452,12 @@ static void disas_sparc_insn(DisasContext * dc, unsigned int insn)
>                           gen_address_mask(dc, cpu_addr);
>                           if (rd == 1) {
>                               tcg_gen_qemu_st_tl(cpu_fsr, cpu_addr,
> -                                               dc->mem_idx, MO_TEUQ);
> +                                               dc->mem_idx, MO_TEUQ | MO_ALIGN);
>                               break;
>                           }
>   #endif
>                           tcg_gen_qemu_st_tl(cpu_fsr, cpu_addr,
> -                                           dc->mem_idx, MO_TEUL);
> +                                           dc->mem_idx, MO_TEUL | MO_ALIGN);
>                       }
>                       break;
>                   case 0x26:

Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>


ATB,

Mark.




* Re: [PATCH 14/16] target/sparc: Use cpu_ld*_code_mmu
  2023-05-02 16:08 ` [PATCH 14/16] target/sparc: Use cpu_ld*_code_mmu Richard Henderson
@ 2023-05-03 20:27   ` Mark Cave-Ayland
  0 siblings, 0 replies; 27+ messages in thread
From: Mark Cave-Ayland @ 2023-05-03 20:27 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato

On 02/05/2023 17:08, Richard Henderson wrote:

> This passes the memop given as an argument to helper_ld_asi on to the
> ultimate load primitive.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   target/sparc/ldst_helper.c | 10 ++++++----
>   1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/target/sparc/ldst_helper.c b/target/sparc/ldst_helper.c
> index a53580d9e4..7972d56a72 100644
> --- a/target/sparc/ldst_helper.c
> +++ b/target/sparc/ldst_helper.c
> @@ -593,6 +593,7 @@ uint64_t helper_ld_asi(CPUSPARCState *env, target_ulong addr,
>   #if defined(DEBUG_MXCC) || defined(DEBUG_ASI)
>       uint32_t last_addr = addr;
>   #endif
> +    MemOpIdx oi;
>   
>       do_check_align(env, addr, size - 1, GETPC());
>       switch (asi) {
> @@ -692,19 +693,20 @@ uint64_t helper_ld_asi(CPUSPARCState *env, target_ulong addr,
>       case ASI_M_IODIAG:  /* Turbosparc IOTLB Diagnostic */
>           break;
>       case ASI_KERNELTXT: /* Supervisor code access */
> +        oi = make_memop_idx(memop, cpu_mmu_index(env, true));
>           switch (size) {
>           case 1:
> -            ret = cpu_ldub_code(env, addr);
> +            ret = cpu_ldb_code_mmu(env, addr, oi, GETPC());
>               break;
>           case 2:
> -            ret = cpu_lduw_code(env, addr);
> +            ret = cpu_ldw_code_mmu(env, addr, oi, GETPC());
>               break;
>           default:
>           case 4:
> -            ret = cpu_ldl_code(env, addr);
> +            ret = cpu_ldl_code_mmu(env, addr, oi, GETPC());
>               break;
>           case 8:
> -            ret = cpu_ldq_code(env, addr);
> +            ret = cpu_ldq_code_mmu(env, addr, oi, GETPC());
>               break;
>           }
>           break;

Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>


ATB,

Mark.



* Re: [PATCH 15/16] target/sparc: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 ` [PATCH 15/16] target/sparc: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-03 20:28   ` Mark Cave-Ayland
  0 siblings, 0 replies; 27+ messages in thread
From: Mark Cave-Ayland @ 2023-05-03 20:28 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato

On 02/05/2023 17:08, Richard Henderson wrote:

> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   configs/targets/sparc-linux-user.mak       | 1 -
>   configs/targets/sparc-softmmu.mak          | 1 -
>   configs/targets/sparc32plus-linux-user.mak | 1 -
>   configs/targets/sparc64-linux-user.mak     | 1 -
>   configs/targets/sparc64-softmmu.mak        | 1 -
>   5 files changed, 5 deletions(-)
> 
> diff --git a/configs/targets/sparc-linux-user.mak b/configs/targets/sparc-linux-user.mak
> index 00e7bc1f07..abcfb8fc62 100644
> --- a/configs/targets/sparc-linux-user.mak
> +++ b/configs/targets/sparc-linux-user.mak
> @@ -1,5 +1,4 @@
>   TARGET_ARCH=sparc
>   TARGET_SYSTBL_ABI=common,32
>   TARGET_SYSTBL=syscall.tbl
> -TARGET_ALIGNED_ONLY=y
>   TARGET_BIG_ENDIAN=y
> diff --git a/configs/targets/sparc-softmmu.mak b/configs/targets/sparc-softmmu.mak
> index a849190f01..454eb35499 100644
> --- a/configs/targets/sparc-softmmu.mak
> +++ b/configs/targets/sparc-softmmu.mak
> @@ -1,3 +1,2 @@
>   TARGET_ARCH=sparc
> -TARGET_ALIGNED_ONLY=y
>   TARGET_BIG_ENDIAN=y
> diff --git a/configs/targets/sparc32plus-linux-user.mak b/configs/targets/sparc32plus-linux-user.mak
> index a65c0951a1..6cc8fa516b 100644
> --- a/configs/targets/sparc32plus-linux-user.mak
> +++ b/configs/targets/sparc32plus-linux-user.mak
> @@ -4,5 +4,4 @@ TARGET_BASE_ARCH=sparc
>   TARGET_ABI_DIR=sparc
>   TARGET_SYSTBL_ABI=common,32
>   TARGET_SYSTBL=syscall.tbl
> -TARGET_ALIGNED_ONLY=y
>   TARGET_BIG_ENDIAN=y
> diff --git a/configs/targets/sparc64-linux-user.mak b/configs/targets/sparc64-linux-user.mak
> index 20fcb93fa4..52f05ec000 100644
> --- a/configs/targets/sparc64-linux-user.mak
> +++ b/configs/targets/sparc64-linux-user.mak
> @@ -3,5 +3,4 @@ TARGET_BASE_ARCH=sparc
>   TARGET_ABI_DIR=sparc
>   TARGET_SYSTBL_ABI=common,64
>   TARGET_SYSTBL=syscall.tbl
> -TARGET_ALIGNED_ONLY=y
>   TARGET_BIG_ENDIAN=y
> diff --git a/configs/targets/sparc64-softmmu.mak b/configs/targets/sparc64-softmmu.mak
> index c626ac3eae..d3f8a3b710 100644
> --- a/configs/targets/sparc64-softmmu.mak
> +++ b/configs/targets/sparc64-softmmu.mak
> @@ -1,4 +1,3 @@
>   TARGET_ARCH=sparc64
>   TARGET_BASE_ARCH=sparc
> -TARGET_ALIGNED_ONLY=y
>   TARGET_BIG_ENDIAN=y

Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>


ATB,

Mark.




* Re: [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
                   ` (15 preceding siblings ...)
  2023-05-02 16:08 ` [PATCH 16/16] tcg: " Richard Henderson
@ 2023-05-10 11:16 ` Richard Henderson
  16 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-10 11:16 UTC (permalink / raw)
  To: qemu-devel; +Cc: philmd, jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

Ping for mips, nios2, sh4.
The portions for alpha, hppa and sparc have been merged.

r~

On 5/2/23 17:08, Richard Henderson wrote:
> Based-on: 20230502135741.1158035-1-richard.henderson@linaro.org
> ("[PATCH 0/9] tcg: Remove compatability helpers for qemu ld/st")
> 
> Add MO_ALIGN where required, so that we may remove TARGET_ALIGNED_ONLY.
> This is required for building tcg once, because we cannot have multiple
> definitions of MO_ALIGN and MO_UNALN.
> 
> 
> r~
> 
> 
> Richard Henderson (16):
>    target/alpha: Use MO_ALIGN for system UNALIGN()
>    target/alpha: Use MO_ALIGN where required
>    target/alpha: Remove TARGET_ALIGNED_ONLY
>    target/hppa: Use MO_ALIGN for system UNALIGN()
>    target/hppa: Remove TARGET_ALIGNED_ONLY
>    target/mips: Add MO_ALIGN to gen_llwp, gen_scwp
>    target/mips: Add missing default_tcg_memop_mask
>    target/mips: Use MO_ALIGN instead of 0
>    target/mips: Remove TARGET_ALIGNED_ONLY
>    target/nios2: Remove TARGET_ALIGNED_ONLY
>    target/sh4: Use MO_ALIGN where required
>    target/sh4: Remove TARGET_ALIGNED_ONLY
>    target/sparc: Use MO_ALIGN where required
>    target/sparc: Use cpu_ld*_code_mmu
>    target/sparc: Remove TARGET_ALIGNED_ONLY
>    tcg: Remove TARGET_ALIGNED_ONLY
> 
>   configs/targets/alpha-linux-user.mak       |   1 -
>   configs/targets/alpha-softmmu.mak          |   1 -
>   configs/targets/hppa-linux-user.mak        |   1 -
>   configs/targets/hppa-softmmu.mak           |   1 -
>   configs/targets/mips-linux-user.mak        |   1 -
>   configs/targets/mips-softmmu.mak           |   1 -
>   configs/targets/mips64-linux-user.mak      |   1 -
>   configs/targets/mips64-softmmu.mak         |   1 -
>   configs/targets/mips64el-linux-user.mak    |   1 -
>   configs/targets/mips64el-softmmu.mak       |   1 -
>   configs/targets/mipsel-linux-user.mak      |   1 -
>   configs/targets/mipsel-softmmu.mak         |   1 -
>   configs/targets/mipsn32-linux-user.mak     |   1 -
>   configs/targets/mipsn32el-linux-user.mak   |   1 -
>   configs/targets/nios2-softmmu.mak          |   1 -
>   configs/targets/sh4-linux-user.mak         |   1 -
>   configs/targets/sh4-softmmu.mak            |   1 -
>   configs/targets/sh4eb-linux-user.mak       |   1 -
>   configs/targets/sh4eb-softmmu.mak          |   1 -
>   configs/targets/sparc-linux-user.mak       |   1 -
>   configs/targets/sparc-softmmu.mak          |   1 -
>   configs/targets/sparc32plus-linux-user.mak |   1 -
>   configs/targets/sparc64-linux-user.mak     |   1 -
>   configs/targets/sparc64-softmmu.mak        |   1 -
>   include/exec/memop.h                       |  13 +--
>   include/exec/poison.h                      |   1 -
>   target/alpha/translate.c                   |  38 ++++----
>   target/hppa/translate.c                    |   2 +-
>   target/mips/tcg/mxu_translate.c            |   3 +-
>   target/nios2/translate.c                   |  10 ++
>   target/sh4/translate.c                     | 102 +++++++++++++--------
>   target/sparc/ldst_helper.c                 |  10 +-
>   target/sparc/translate.c                   |  66 ++++++-------
>   tcg/tcg.c                                  |   5 -
>   target/mips/tcg/micromips_translate.c.inc  |  24 +++--
>   target/mips/tcg/mips16e_translate.c.inc    |  18 ++--
>   target/mips/tcg/nanomips_translate.c.inc   |  32 +++----
>   37 files changed, 186 insertions(+), 162 deletions(-)
> 




* Re: [PATCH 08/16] target/mips: Use MO_ALIGN instead of 0
  2023-05-02 16:08 ` [PATCH 08/16] target/mips: Use MO_ALIGN instead of 0 Richard Henderson
@ 2023-05-10 13:37   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-05-10 13:37 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

On 2/5/23 18:08, Richard Henderson wrote:
> The opposite of MO_UNALN is MO_ALIGN.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   target/mips/tcg/nanomips_translate.c.inc | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 11/16] target/sh4: Use MO_ALIGN where required
  2023-05-02 16:08 ` [PATCH 11/16] target/sh4: Use MO_ALIGN where required Richard Henderson
@ 2023-05-10 13:41   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-05-10 13:41 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

On 2/5/23 18:08, Richard Henderson wrote:
> Mark all memory operations that are not already marked with UNALIGN.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   target/sh4/translate.c | 102 ++++++++++++++++++++++++++---------------
>   1 file changed, 66 insertions(+), 36 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 16/16] tcg: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 ` [PATCH 16/16] tcg: " Richard Henderson
@ 2023-05-10 14:43   ` Philippe Mathieu-Daudé
  2023-05-10 15:11     ` Richard Henderson
  0 siblings, 1 reply; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-05-10 14:43 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

On 2/5/23 18:08, Richard Henderson wrote:
> All uses have now been expunged.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   include/exec/memop.h  | 13 ++-----------
>   include/exec/poison.h |  1 -
>   tcg/tcg.c             |  5 -----
>   3 files changed, 2 insertions(+), 17 deletions(-)


> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index cfd3262a4a..2ce9dba25c 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -2071,13 +2071,8 @@ static const char * const ldst_name[] =
>   };
>   
>   static const char * const alignment_name[(MO_AMASK >> MO_ASHIFT) + 1] = {
> -#ifdef TARGET_ALIGNED_ONLY
> -    [MO_UNALN >> MO_ASHIFT]    = "un+",

Maybe we want to keep the "un+" prefix.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>

> -    [MO_ALIGN >> MO_ASHIFT]    = "",
> -#else
>       [MO_UNALN >> MO_ASHIFT]    = "",
>       [MO_ALIGN >> MO_ASHIFT]    = "al+",
> -#endif
>       [MO_ALIGN_2 >> MO_ASHIFT]  = "al2+",
>       [MO_ALIGN_4 >> MO_ASHIFT]  = "al4+",
>       [MO_ALIGN_8 >> MO_ASHIFT]  = "al8+",




* Re: [PATCH 12/16] target/sh4: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 ` [PATCH 12/16] target/sh4: Remove TARGET_ALIGNED_ONLY Richard Henderson
@ 2023-05-10 14:44   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-05-10 14:44 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

On 2/5/23 18:08, Richard Henderson wrote:
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   configs/targets/sh4-linux-user.mak   | 1 -
>   configs/targets/sh4-softmmu.mak      | 1 -
>   configs/targets/sh4eb-linux-user.mak | 1 -
>   configs/targets/sh4eb-softmmu.mak    | 1 -
>   4 files changed, 4 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 10/16] target/nios2: Remove TARGET_ALIGNED_ONLY
  2023-05-02 16:08 ` [PATCH 10/16] target/nios2: " Richard Henderson
@ 2023-05-10 14:45   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 27+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-05-10 14:45 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

On 2/5/23 18:08, Richard Henderson wrote:
> In gen_ldx/gen_stx, the only two locations for memory operations,
> mark the operation as either aligned (softmmu) or unaligned
> (user-only, as if emulated by the kernel).
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   configs/targets/nios2-softmmu.mak |  1 -
>   target/nios2/translate.c          | 10 ++++++++++
>   2 files changed, 10 insertions(+), 1 deletion(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH 16/16] tcg: Remove TARGET_ALIGNED_ONLY
  2023-05-10 14:43   ` Philippe Mathieu-Daudé
@ 2023-05-10 15:11     ` Richard Henderson
  0 siblings, 0 replies; 27+ messages in thread
From: Richard Henderson @ 2023-05-10 15:11 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel
  Cc: jiaxun.yang, crwulff, marex, ysato, mark.cave-ayland

On 5/10/23 15:43, Philippe Mathieu-Daudé wrote:
> On 2/5/23 18:08, Richard Henderson wrote:
>> All uses have now been expunged.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   include/exec/memop.h  | 13 ++-----------
>>   include/exec/poison.h |  1 -
>>   tcg/tcg.c             |  5 -----
>>   3 files changed, 2 insertions(+), 17 deletions(-)
> 
> 
>> diff --git a/tcg/tcg.c b/tcg/tcg.c
>> index cfd3262a4a..2ce9dba25c 100644
>> --- a/tcg/tcg.c
>> +++ b/tcg/tcg.c
>> @@ -2071,13 +2071,8 @@ static const char * const ldst_name[] =
>>   };
>>   static const char * const alignment_name[(MO_AMASK >> MO_ASHIFT) + 1] = {
>> -#ifdef TARGET_ALIGNED_ONLY
>> -    [MO_UNALN >> MO_ASHIFT]    = "un+",
> 
> Maybe we want to keep the "un+" prefix.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> 
>> -    [MO_ALIGN >> MO_ASHIFT]    = "",
>> -#else
>>       [MO_UNALN >> MO_ASHIFT]    = "",
>>       [MO_ALIGN >> MO_ASHIFT]    = "al+",

Done.


r~



Thread overview: 27+ messages
2023-05-02 16:08 [PATCH 00/16] tcg: Remove TARGET_ALIGNED_ONLY Richard Henderson
2023-05-02 16:08 ` [PATCH 01/16] target/alpha: Use MO_ALIGN for system UNALIGN() Richard Henderson
2023-05-02 16:08 ` [PATCH 02/16] target/alpha: Use MO_ALIGN where required Richard Henderson
2023-05-02 16:08 ` [PATCH 03/16] target/alpha: Remove TARGET_ALIGNED_ONLY Richard Henderson
2023-05-02 16:08 ` [PATCH 04/16] target/hppa: Use MO_ALIGN for system UNALIGN() Richard Henderson
2023-05-02 16:08 ` [PATCH 05/16] target/hppa: Remove TARGET_ALIGNED_ONLY Richard Henderson
2023-05-02 16:08 ` [PATCH 06/16] target/mips: Add MO_ALIGN to gen_llwp, gen_scwp Richard Henderson
2023-05-02 16:08 ` [PATCH 07/16] target/mips: Add missing default_tcg_memop_mask Richard Henderson
2023-05-02 16:08 ` [PATCH 08/16] target/mips: Use MO_ALIGN instead of 0 Richard Henderson
2023-05-10 13:37   ` Philippe Mathieu-Daudé
2023-05-02 16:08 ` [PATCH 09/16] target/mips: Remove TARGET_ALIGNED_ONLY Richard Henderson
2023-05-02 16:08 ` [PATCH 10/16] target/nios2: " Richard Henderson
2023-05-10 14:45   ` Philippe Mathieu-Daudé
2023-05-02 16:08 ` [PATCH 11/16] target/sh4: Use MO_ALIGN where required Richard Henderson
2023-05-10 13:41   ` Philippe Mathieu-Daudé
2023-05-02 16:08 ` [PATCH 12/16] target/sh4: Remove TARGET_ALIGNED_ONLY Richard Henderson
2023-05-10 14:44   ` Philippe Mathieu-Daudé
2023-05-02 16:08 ` [PATCH 13/16] target/sparc: Use MO_ALIGN where required Richard Henderson
2023-05-03 20:26   ` Mark Cave-Ayland
2023-05-02 16:08 ` [PATCH 14/16] target/sparc: Use cpu_ld*_code_mmu Richard Henderson
2023-05-03 20:27   ` Mark Cave-Ayland
2023-05-02 16:08 ` [PATCH 15/16] target/sparc: Remove TARGET_ALIGNED_ONLY Richard Henderson
2023-05-03 20:28   ` Mark Cave-Ayland
2023-05-02 16:08 ` [PATCH 16/16] tcg: " Richard Henderson
2023-05-10 14:43   ` Philippe Mathieu-Daudé
2023-05-10 15:11     ` Richard Henderson
2023-05-10 11:16 ` [PATCH 00/16] " Richard Henderson
