* [PULL 00/28] tcg patch queue
@ 2025-05-28  8:13 Richard Henderson
  2025-05-28  8:13 ` [PULL 01/28] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
                   ` (28 more replies)
  0 siblings, 29 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel

The following changes since commit 80db93b2b88f9b3ed8927ae7ac74ca30e643a83e:

  Merge tag 'pull-aspeed-20250526' of https://github.com/legoater/qemu into staging (2025-05-26 10:16:59 -0400)

are available in the Git repository at:

  https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20250528

for you to fetch changes up to 5c2891601ccdaa41427187ef95bc25c828b355e4:

  accel/tcg: Assert TCGCPUOps.pointer_wrap is set (2025-05-28 08:08:48 +0100)

----------------------------------------------------------------
accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
linux-user: implement pgid field of /proc/self/stat
target/sh4: Use MO_ALIGN for system UNALIGN()
target/microblaze: Use TARGET_LONG_BITS == 32 for system mode
accel/tcg: Add TCGCPUOps.pointer_wrap
target/*: Populate TCGCPUOps.pointer_wrap

----------------------------------------------------------------
Andreas Schwab (1):
      linux-user: implement pgid field of /proc/self/stat

Pierrick Bouvier (1):
      system/main: comment lock rationale

Richard Henderson (26):
      accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
      target/microblaze: Split out mb_unaligned_access_internal
      target/microblaze: Introduce helper_unaligned_access
      target/microblaze: Split out mb_transaction_failed_internal
      target/microblaze: Implement extended address load/store out of line
      target/microblaze: Use uint64_t for CPUMBState.ear
      target/microblaze: Use TCGv_i64 for compute_ldst_addr_ea
      target/microblaze: Fix printf format in mmu_translate
      target/microblaze: Use TARGET_LONG_BITS == 32 for system mode
      target/microblaze: Drop DisasContext.r0
      target/microblaze: Simplify compute_ldst_addr_type{a,b}
      tcg: Drop TCGContext.tlb_dyn_max_bits
      tcg: Drop TCGContext.page_{mask,bits}
      target/sh4: Use MO_ALIGN for system UNALIGN()
      accel/tcg: Add TCGCPUOps.pointer_wrap
      target: Use cpu_pointer_wrap_notreached for strict align targets
      target: Use cpu_pointer_wrap_uint32 for 32-bit targets
      target/arm: Fill in TCGCPUOps.pointer_wrap
      target/i386: Fill in TCGCPUOps.pointer_wrap
      target/loongarch: Fill in TCGCPUOps.pointer_wrap
      target/mips: Fill in TCGCPUOps.pointer_wrap
      target/ppc: Fill in TCGCPUOps.pointer_wrap
      target/riscv: Fill in TCGCPUOps.pointer_wrap
      target/s390x: Fill in TCGCPUOps.pointer_wrap
      target/sparc: Fill in TCGCPUOps.pointer_wrap
      accel/tcg: Assert TCGCPUOps.pointer_wrap is set

 include/accel/tcg/cpu-ops.h              |  13 ++++
 include/tcg/tcg.h                        |   4 -
 target/microblaze/cpu.h                  |   2 +-
 target/microblaze/helper.h               |  22 ++++--
 accel/tcg/cpu-exec.c                     |   1 +
 accel/tcg/cputlb.c                       |  37 +++++++--
 accel/tcg/translate-all.c                |   6 --
 linux-user/syscall.c                     |   3 +
 system/main.c                            |  13 ++++
 target/alpha/cpu.c                       |   1 +
 target/arm/cpu.c                         |  24 ++++++
 target/arm/tcg/cpu-v7m.c                 |   1 +
 target/avr/cpu.c                         |   6 ++
 target/hppa/cpu.c                        |   1 +
 target/i386/tcg/tcg-cpu.c                |   7 ++
 target/loongarch/cpu.c                   |   7 ++
 target/m68k/cpu.c                        |   1 +
 target/microblaze/cpu.c                  |   1 +
 target/microblaze/helper.c               |  71 ++++++++++-------
 target/microblaze/mmu.c                  |   3 +-
 target/microblaze/op_helper.c            | 110 +++++++++++++++++++-------
 target/microblaze/translate.c            | 128 ++++++++++++++++---------------
 target/mips/cpu.c                        |   9 +++
 target/openrisc/cpu.c                    |   1 +
 target/ppc/cpu_init.c                    |   7 ++
 target/riscv/tcg/tcg-cpu.c               |  26 +++++++
 target/rx/cpu.c                          |   1 +
 target/s390x/cpu.c                       |   9 +++
 target/sh4/cpu.c                         |   1 +
 target/sh4/translate.c                   |   2 +-
 target/sparc/cpu.c                       |  13 ++++
 target/tricore/cpu.c                     |   1 +
 target/xtensa/cpu.c                      |   1 +
 tcg/perf.c                               |   2 +-
 tcg/tcg-op-ldst.c                        |   3 +-
 tcg/tcg.c                                |   1 +
 configs/targets/microblaze-softmmu.mak   |   4 +-
 configs/targets/microblazeel-softmmu.mak |   4 +-
 tcg/aarch64/tcg-target.c.inc             |  10 +--
 tcg/arm/tcg-target.c.inc                 |  10 +--
 tcg/i386/tcg-target.c.inc                |  10 +--
 tcg/loongarch64/tcg-target.c.inc         |   4 +-
 tcg/mips/tcg-target.c.inc                |   6 +-
 tcg/ppc/tcg-target.c.inc                 |  14 ++--
 tcg/riscv/tcg-target.c.inc               |   4 +-
 tcg/s390x/tcg-target.c.inc               |   4 +-
 tcg/sparc64/tcg-target.c.inc             |   4 +-
 47 files changed, 427 insertions(+), 186 deletions(-)
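
For context, a minimal self-contained sketch of what the new pointer_wrap
hook is about.  The hook and helper names are taken from the patch subjects
above; the exact signature and the real definitions live in the pointer_wrap
patches themselves (not all quoted below), so the types and wrapper names in
this sketch are illustrative stand-ins only:

    /* Sketch, not QEMU code: models the new TCGCPUOps.pointer_wrap hook,
     * which lets each target say how a guest address that overflowed during
     * pointer arithmetic should wrap before the softmmu TLB lookup. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t vaddr;                 /* stand-in for QEMU's vaddr */
    typedef struct CPUState CPUState;       /* opaque for this sketch */

    typedef vaddr (*pointer_wrap_fn)(CPUState *cpu, int mmu_idx,
                                     vaddr result, vaddr base);

    /* What cpu_pointer_wrap_uint32 plausibly does for 32-bit targets:
     * guest addresses wrap modulo 2^32. */
    static vaddr wrap_uint32(CPUState *cpu, int mmu_idx,
                             vaddr result, vaddr base)
    {
        return (uint32_t)result;
    }

    int main(void)
    {
        pointer_wrap_fn wrap = wrap_uint32;
        vaddr base = 0xfffffff8;
        vaddr result = base + 0x10;         /* overflows a 32-bit pointer */
        printf("0x%" PRIx64 " wraps to 0x%" PRIx64 "\n",
               result, wrap(NULL, 0, result, base));
        return 0;
    }

Per the later subjects, strict-alignment targets instead install
cpu_pointer_wrap_notreached (presumably an assertion, since a wrapped pointer
should never reach the lookup there), and arm, i386, loongarch, mips, ppc,
riscv, s390x and sparc fill in their own mode-dependent callbacks.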



* [PULL 01/28] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 02/28] system/main: comment lock rationale Richard Henderson
                   ` (27 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel
  Cc: Jonathan Cameron, Philippe Mathieu-Daudé, Pierrick Bouvier

When we moved TLB_MMIO and TLB_DISCARD_WRITE to TLB_SLOW_FLAGS_MASK,
we failed to update atomic_mmu_lookup to properly reconstruct flags.

Fixes: 24b5e0fdb543 ("include/exec: Move TLB_MMIO, TLB_DISCARD_WRITE to slow flags")
Reported-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5f6d7c601c..86d0deb08c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1871,8 +1871,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         goto stop_the_world;
     }
 
-    /* Collect tlb flags for read. */
+    /* Finish collecting tlb flags for both read and write. */
+    full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
     tlb_addr |= tlbe->addr_read;
+    tlb_addr &= TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
+    tlb_addr |= full->slow_flags[MMU_DATA_STORE];
+    tlb_addr |= full->slow_flags[MMU_DATA_LOAD];
 
     /* Notice an IO access or a needs-MMU-lookup access */
     if (unlikely(tlb_addr & (TLB_MMIO | TLB_DISCARD_WRITE))) {
@@ -1882,13 +1886,12 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     }
 
     hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
-    full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
 
     if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
         notdirty_write(cpu, addr, size, full, retaddr);
     }
 
-    if (unlikely(tlb_addr & TLB_FORCE_SLOW)) {
+    if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
         int wp_flags = 0;
 
         if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
@@ -1897,10 +1900,8 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
             wp_flags |= BP_MEM_READ;
         }
-        if (wp_flags) {
-            cpu_check_watchpoint(cpu, addr, size,
-                                 full->attrs, wp_flags, retaddr);
-        }
+        cpu_check_watchpoint(cpu, addr, size,
+                             full->attrs, wp_flags, retaddr);
     }
 
     return hostaddr;
-- 
2.43.0




* [PULL 02/28] system/main: comment lock rationale
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
  2025-05-28  8:13 ` [PULL 01/28] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 03/28] linux-user: implement pgid field of /proc/self/stat Richard Henderson
                   ` (26 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Pierrick Bouvier

From: Pierrick Bouvier <pierrick.bouvier@linaro.org>

Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20250515174641.4000309-1-pierrick.bouvier@linaro.org>
---
 system/main.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/system/main.c b/system/main.c
index 1c02206734..b8f7157cc3 100644
--- a/system/main.c
+++ b/system/main.c
@@ -69,8 +69,21 @@ int (*qemu_main)(void) = os_darwin_cfrunloop_main;
 int main(int argc, char **argv)
 {
     qemu_init(argc, argv);
+
+    /*
+     * qemu_init acquires the BQL and replay mutex lock. BQL is acquired when
+     * initializing cpus, to block associated threads until initialization is
+     * complete. Replay_mutex lock is acquired on initialization, because it
+     * must be held when configuring icount_mode.
+     *
+     * On MacOS, qemu main event loop runs in a background thread, as main
+     * thread must be reserved for UI. Thus, we need to transfer lock ownership,
+     * and the simplest way to do that is to release them, and reacquire them
+     * from qemu_default_main.
+     */
     bql_unlock();
     replay_mutex_unlock();
+
     if (qemu_main) {
         QemuThread main_loop_thread;
         qemu_thread_create(&main_loop_thread, "qemu_main",
-- 
2.43.0




* [PULL 03/28] linux-user: implement pgid field of /proc/self/stat
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
  2025-05-28  8:13 ` [PULL 01/28] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
  2025-05-28  8:13 ` [PULL 02/28] system/main: comment lock rationale Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 04/28] target/microblaze: Split out mb_unaligned_access_internal Richard Henderson
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Andreas Schwab

From: Andreas Schwab <schwab@suse.de>

Signed-off-by: Andreas Schwab <schwab@suse.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <mvmfrgzcr4m.fsf@suse.de>
---
 linux-user/syscall.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 23b901b713..fc37028597 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -8235,6 +8235,9 @@ static int open_self_stat(CPUArchState *cpu_env, int fd)
         } else if (i == 3) {
             /* ppid */
             g_string_printf(buf, FMT_pid " ", getppid());
+        } else if (i == 4) {
+            /* pgid */
+            g_string_printf(buf, FMT_pid " ", getpgrp());
         } else if (i == 19) {
             /* num_threads */
             int cpus = 0;
-- 
2.43.0




* [PULL 04/28] target/microblaze: Split out mb_unaligned_access_internal
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (2 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 03/28] linux-user: implement pgid field of /proc/self/stat Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 05/28] target/microblaze: Introduce helper_unaligned_access Richard Henderson
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Use an explicit 64-bit type for the address to store in EAR.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/helper.c | 64 +++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 28 deletions(-)

diff --git a/target/microblaze/helper.c b/target/microblaze/helper.c
index 9203192483..5fe81e4b16 100644
--- a/target/microblaze/helper.c
+++ b/target/microblaze/helper.c
@@ -27,6 +27,42 @@
 #include "qemu/host-utils.h"
 #include "exec/log.h"
 
+
+G_NORETURN
+static void mb_unaligned_access_internal(CPUState *cs, uint64_t addr,
+                                         uintptr_t retaddr)
+{
+    CPUMBState *env = cpu_env(cs);
+    uint32_t esr, iflags;
+
+    /* Recover the pc and iflags from the corresponding insn_start.  */
+    cpu_restore_state(cs, retaddr);
+    iflags = env->iflags;
+
+    qemu_log_mask(CPU_LOG_INT,
+                  "Unaligned access addr=0x%" PRIx64 " pc=%x iflags=%x\n",
+                  addr, env->pc, iflags);
+
+    esr = ESR_EC_UNALIGNED_DATA;
+    if (likely(iflags & ESR_ESS_FLAG)) {
+        esr |= iflags & ESR_ESS_MASK;
+    } else {
+        qemu_log_mask(LOG_UNIMP, "Unaligned access without ESR_ESS_FLAG\n");
+    }
+
+    env->ear = addr;
+    env->esr = esr;
+    cs->exception_index = EXCP_HW_EXCP;
+    cpu_loop_exit(cs);
+}
+
+void mb_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
+                                MMUAccessType access_type,
+                                int mmu_idx, uintptr_t retaddr)
+{
+    mb_unaligned_access_internal(cs, addr, retaddr);
+}
+
 #ifndef CONFIG_USER_ONLY
 static bool mb_cpu_access_is_secure(MicroBlazeCPU *cpu,
                                     MMUAccessType access_type)
@@ -269,31 +305,3 @@ bool mb_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 }
 
 #endif /* !CONFIG_USER_ONLY */
-
-void mb_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
-                                MMUAccessType access_type,
-                                int mmu_idx, uintptr_t retaddr)
-{
-    MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);
-    uint32_t esr, iflags;
-
-    /* Recover the pc and iflags from the corresponding insn_start.  */
-    cpu_restore_state(cs, retaddr);
-    iflags = cpu->env.iflags;
-
-    qemu_log_mask(CPU_LOG_INT,
-                  "Unaligned access addr=" TARGET_FMT_lx " pc=%x iflags=%x\n",
-                  (target_ulong)addr, cpu->env.pc, iflags);
-
-    esr = ESR_EC_UNALIGNED_DATA;
-    if (likely(iflags & ESR_ESS_FLAG)) {
-        esr |= iflags & ESR_ESS_MASK;
-    } else {
-        qemu_log_mask(LOG_UNIMP, "Unaligned access without ESR_ESS_FLAG\n");
-    }
-
-    cpu->env.ear = addr;
-    cpu->env.esr = esr;
-    cs->exception_index = EXCP_HW_EXCP;
-    cpu_loop_exit(cs);
-}
-- 
2.43.0




* [PULL 05/28] target/microblaze: Introduce helper_unaligned_access
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (3 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 04/28] target/microblaze: Split out mb_unaligned_access_internal Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 06/28] target/microblaze: Split out mb_transaction_failed_internal Richard Henderson
                   ` (23 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/helper.h | 12 ++++++------
 target/microblaze/helper.c |  7 +++++++
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/target/microblaze/helper.h b/target/microblaze/helper.h
index f740835fcb..41f56a5601 100644
--- a/target/microblaze/helper.h
+++ b/target/microblaze/helper.h
@@ -20,12 +20,12 @@ DEF_HELPER_FLAGS_3(fcmp_ne, TCG_CALL_NO_WG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_3(fcmp_ge, TCG_CALL_NO_WG, i32, env, i32, i32)
 
 DEF_HELPER_FLAGS_2(pcmpbf, TCG_CALL_NO_RWG_SE, i32, i32, i32)
-#if !defined(CONFIG_USER_ONLY)
-DEF_HELPER_FLAGS_3(mmu_read, TCG_CALL_NO_RWG, i32, env, i32, i32)
-DEF_HELPER_FLAGS_4(mmu_write, TCG_CALL_NO_RWG, void, env, i32, i32, i32)
-#endif
-
 DEF_HELPER_FLAGS_2(stackprot, TCG_CALL_NO_WG, void, env, tl)
-
 DEF_HELPER_FLAGS_2(get, TCG_CALL_NO_RWG, i32, i32, i32)
 DEF_HELPER_FLAGS_3(put, TCG_CALL_NO_RWG, void, i32, i32, i32)
+
+#ifndef CONFIG_USER_ONLY
+DEF_HELPER_FLAGS_3(mmu_read, TCG_CALL_NO_RWG, i32, env, i32, i32)
+DEF_HELPER_FLAGS_4(mmu_write, TCG_CALL_NO_RWG, void, env, i32, i32, i32)
+DEF_HELPER_FLAGS_2(unaligned_access, TCG_CALL_NO_WG, noreturn, env, i64)
+#endif
diff --git a/target/microblaze/helper.c b/target/microblaze/helper.c
index 5fe81e4b16..ef0e2f973f 100644
--- a/target/microblaze/helper.c
+++ b/target/microblaze/helper.c
@@ -26,6 +26,7 @@
 #include "exec/target_page.h"
 #include "qemu/host-utils.h"
 #include "exec/log.h"
+#include "exec/helper-proto.h"
 
 
 G_NORETURN
@@ -64,6 +65,12 @@ void mb_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
 }
 
 #ifndef CONFIG_USER_ONLY
+
+void HELPER(unaligned_access)(CPUMBState *env, uint64_t addr)
+{
+    mb_unaligned_access_internal(env_cpu(env), addr, GETPC());
+}
+
 static bool mb_cpu_access_is_secure(MicroBlazeCPU *cpu,
                                     MMUAccessType access_type)
 {
-- 
2.43.0




* [PULL 06/28] target/microblaze: Split out mb_transaction_failed_internal
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (4 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 05/28] target/microblaze: Introduce helper_unaligned_access Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 07/28] target/microblaze: Implement extended address load/store out of line Richard Henderson
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Use an explicit 64-bit type for the address to store in EAR.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/op_helper.c | 70 +++++++++++++++++++++--------------
 1 file changed, 42 insertions(+), 28 deletions(-)

diff --git a/target/microblaze/op_helper.c b/target/microblaze/op_helper.c
index 9e838dfa15..4c39207a55 100644
--- a/target/microblaze/op_helper.c
+++ b/target/microblaze/op_helper.c
@@ -393,38 +393,52 @@ void helper_mmu_write(CPUMBState *env, uint32_t ext, uint32_t rn, uint32_t v)
     mmu_write(env, ext, rn, v);
 }
 
+static void mb_transaction_failed_internal(CPUState *cs, hwaddr physaddr,
+                                           uint64_t addr, unsigned size,
+                                           MMUAccessType access_type,
+                                           uintptr_t retaddr)
+{
+    CPUMBState *env = cpu_env(cs);
+    MicroBlazeCPU *cpu = env_archcpu(env);
+    const char *access_name = "INVALID";
+    bool take = env->msr & MSR_EE;
+    uint32_t esr = ESR_EC_DATA_BUS;
+
+    switch (access_type) {
+    case MMU_INST_FETCH:
+        access_name = "INST_FETCH";
+        esr = ESR_EC_INSN_BUS;
+        take &= cpu->cfg.iopb_bus_exception;
+        break;
+    case MMU_DATA_LOAD:
+        access_name = "DATA_LOAD";
+        take &= cpu->cfg.dopb_bus_exception;
+        break;
+    case MMU_DATA_STORE:
+        access_name = "DATA_STORE";
+        take &= cpu->cfg.dopb_bus_exception;
+        break;
+    }
+
+    qemu_log_mask(CPU_LOG_INT, "Transaction failed: addr 0x%" PRIx64
+                  "physaddr 0x" HWADDR_FMT_plx " size %d access-type %s (%s)\n",
+                  addr, physaddr, size, access_name,
+                  take ? "TAKEN" : "DROPPED");
+
+    if (take) {
+        env->esr = esr;
+        env->ear = addr;
+        cs->exception_index = EXCP_HW_EXCP;
+        cpu_loop_exit_restore(cs, retaddr);
+    }
+}
+
 void mb_cpu_transaction_failed(CPUState *cs, hwaddr physaddr, vaddr addr,
                                unsigned size, MMUAccessType access_type,
                                int mmu_idx, MemTxAttrs attrs,
                                MemTxResult response, uintptr_t retaddr)
 {
-    MicroBlazeCPU *cpu = MICROBLAZE_CPU(cs);
-    CPUMBState *env = &cpu->env;
-
-    qemu_log_mask(CPU_LOG_INT, "Transaction failed: vaddr 0x%" VADDR_PRIx
-                  " physaddr 0x" HWADDR_FMT_plx " size %d access type %s\n",
-                  addr, physaddr, size,
-                  access_type == MMU_INST_FETCH ? "INST_FETCH" :
-                  (access_type == MMU_DATA_LOAD ? "DATA_LOAD" : "DATA_STORE"));
-
-    if (!(env->msr & MSR_EE)) {
-        return;
-    }
-
-    if (access_type == MMU_INST_FETCH) {
-        if (!cpu->cfg.iopb_bus_exception) {
-            return;
-        }
-        env->esr = ESR_EC_INSN_BUS;
-    } else {
-        if (!cpu->cfg.dopb_bus_exception) {
-            return;
-        }
-        env->esr = ESR_EC_DATA_BUS;
-    }
-
-    env->ear = addr;
-    cs->exception_index = EXCP_HW_EXCP;
-    cpu_loop_exit_restore(cs, retaddr);
+    mb_transaction_failed_internal(cs, physaddr, addr, size,
+                                   access_type, retaddr);
 }
 #endif
-- 
2.43.0




* [PULL 07/28] target/microblaze: Implement extended address load/store out of line
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (5 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 06/28] target/microblaze: Split out mb_transaction_failed_internal Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 08/28] target/microblaze: Use uint64_t for CPUMBState.ear Richard Henderson
                   ` (21 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Use helpers and address_space_ld/st instead of inline
loads and stores.  This allows us to perform operations
on physical addresses wider than virtual addresses.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/helper.h    | 10 +++++++
 target/microblaze/op_helper.c | 40 +++++++++++++++++++++++++++
 target/microblaze/translate.c | 52 +++++++++++++++++++++++++++--------
 3 files changed, 90 insertions(+), 12 deletions(-)

diff --git a/target/microblaze/helper.h b/target/microblaze/helper.h
index 41f56a5601..ef4fad9b91 100644
--- a/target/microblaze/helper.h
+++ b/target/microblaze/helper.h
@@ -28,4 +28,14 @@ DEF_HELPER_FLAGS_3(put, TCG_CALL_NO_RWG, void, i32, i32, i32)
 DEF_HELPER_FLAGS_3(mmu_read, TCG_CALL_NO_RWG, i32, env, i32, i32)
 DEF_HELPER_FLAGS_4(mmu_write, TCG_CALL_NO_RWG, void, env, i32, i32, i32)
 DEF_HELPER_FLAGS_2(unaligned_access, TCG_CALL_NO_WG, noreturn, env, i64)
+DEF_HELPER_FLAGS_2(lbuea, TCG_CALL_NO_WG, i32, env, i64)
+DEF_HELPER_FLAGS_2(lhuea_be, TCG_CALL_NO_WG, i32, env, i64)
+DEF_HELPER_FLAGS_2(lhuea_le, TCG_CALL_NO_WG, i32, env, i64)
+DEF_HELPER_FLAGS_2(lwea_be, TCG_CALL_NO_WG, i32, env, i64)
+DEF_HELPER_FLAGS_2(lwea_le, TCG_CALL_NO_WG, i32, env, i64)
+DEF_HELPER_FLAGS_3(sbea, TCG_CALL_NO_WG, void, env, i32, i64)
+DEF_HELPER_FLAGS_3(shea_be, TCG_CALL_NO_WG, void, env, i32, i64)
+DEF_HELPER_FLAGS_3(shea_le, TCG_CALL_NO_WG, void, env, i32, i64)
+DEF_HELPER_FLAGS_3(swea_be, TCG_CALL_NO_WG, void, env, i32, i64)
+DEF_HELPER_FLAGS_3(swea_le, TCG_CALL_NO_WG, void, env, i32, i64)
 #endif
diff --git a/target/microblaze/op_helper.c b/target/microblaze/op_helper.c
index 4c39207a55..b8365b3b1d 100644
--- a/target/microblaze/op_helper.c
+++ b/target/microblaze/op_helper.c
@@ -382,6 +382,8 @@ void helper_stackprot(CPUMBState *env, target_ulong addr)
 }
 
 #if !defined(CONFIG_USER_ONLY)
+#include "system/memory.h"
+
 /* Writes/reads to the MMU's special regs end up here.  */
 uint32_t helper_mmu_read(CPUMBState *env, uint32_t ext, uint32_t rn)
 {
@@ -441,4 +443,42 @@ void mb_cpu_transaction_failed(CPUState *cs, hwaddr physaddr, vaddr addr,
     mb_transaction_failed_internal(cs, physaddr, addr, size,
                                    access_type, retaddr);
 }
+
+#define LD_EA(NAME, TYPE, FUNC) \
+uint32_t HELPER(NAME)(CPUMBState *env, uint64_t ea)                     \
+{                                                                       \
+    CPUState *cs = env_cpu(env);                                        \
+    MemTxResult txres;                                                  \
+    TYPE ret = FUNC(cs->as, ea, MEMTXATTRS_UNSPECIFIED, &txres);        \
+    if (unlikely(txres != MEMTX_OK)) {                                  \
+        mb_transaction_failed_internal(cs, ea, ea, sizeof(TYPE),        \
+                                       MMU_DATA_LOAD, GETPC());         \
+    }                                                                   \
+    return ret;                                                         \
+}
+
+LD_EA(lbuea, uint8_t, address_space_ldub)
+LD_EA(lhuea_be, uint16_t, address_space_lduw_be)
+LD_EA(lhuea_le, uint16_t, address_space_lduw_le)
+LD_EA(lwea_be, uint32_t, address_space_ldl_be)
+LD_EA(lwea_le, uint32_t, address_space_ldl_le)
+
+#define ST_EA(NAME, TYPE, FUNC) \
+void HELPER(NAME)(CPUMBState *env, uint32_t data, uint64_t ea)          \
+{                                                                       \
+    CPUState *cs = env_cpu(env);                                        \
+    MemTxResult txres;                                                  \
+    FUNC(cs->as, ea, data, MEMTXATTRS_UNSPECIFIED, &txres);             \
+    if (unlikely(txres != MEMTX_OK)) {                                  \
+        mb_transaction_failed_internal(cs, ea, ea, sizeof(TYPE),        \
+                                       MMU_DATA_STORE, GETPC());        \
+    }                                                                   \
+}
+
+ST_EA(sbea, uint8_t, address_space_stb)
+ST_EA(shea_be, uint16_t, address_space_stw_be)
+ST_EA(shea_le, uint16_t, address_space_stw_le)
+ST_EA(swea_be, uint32_t, address_space_stl_be)
+ST_EA(swea_le, uint32_t, address_space_stl_le)
+
 #endif
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 671b1ae4db..3d9756391e 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -700,6 +700,20 @@ static void record_unaligned_ess(DisasContext *dc, int rd,
 
     tcg_set_insn_start_param(dc->base.insn_start, 1, iflags);
 }
+
+static void gen_alignment_check_ea(DisasContext *dc, TCGv_i64 ea, int rb,
+                                   int rd, MemOp size, bool store)
+{
+    if (rb && (dc->tb_flags & MSR_EE) && dc->cfg->unaligned_exceptions) {
+        TCGLabel *over = gen_new_label();
+
+        record_unaligned_ess(dc, rd, size, store);
+
+        tcg_gen_brcondi_i64(TCG_COND_TSTEQ, ea, (1 << size) - 1, over);
+        gen_helper_unaligned_access(tcg_env, ea);
+        gen_set_label(over);
+    }
+}
 #endif
 
 static inline MemOp mo_endian(DisasContext *dc)
@@ -765,10 +779,11 @@ static bool trans_lbuea(DisasContext *dc, arg_typea *arg)
         return true;
     }
 #ifdef CONFIG_USER_ONLY
-    return true;
+    g_assert_not_reached();
 #else
     TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
-    return do_load(dc, arg->rd, addr, MO_UB, MMU_NOMMU_IDX, false);
+    gen_helper_lbuea(reg_for_write(dc, arg->rd), tcg_env, addr);
+    return true;
 #endif
 }
 
@@ -796,10 +811,13 @@ static bool trans_lhuea(DisasContext *dc, arg_typea *arg)
         return true;
     }
 #ifdef CONFIG_USER_ONLY
-    return true;
+    g_assert_not_reached();
 #else
     TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
-    return do_load(dc, arg->rd, addr, MO_UW, MMU_NOMMU_IDX, false);
+    gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_16, false);
+    (mo_endian(dc) == MO_BE ? gen_helper_lhuea_be : gen_helper_lhuea_le)
+        (reg_for_write(dc, arg->rd), tcg_env, addr);
+    return true;
 #endif
 }
 
@@ -827,10 +845,13 @@ static bool trans_lwea(DisasContext *dc, arg_typea *arg)
         return true;
     }
 #ifdef CONFIG_USER_ONLY
-    return true;
+    g_assert_not_reached();
 #else
     TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
-    return do_load(dc, arg->rd, addr, MO_UL, MMU_NOMMU_IDX, false);
+    gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_32, false);
+    (mo_endian(dc) == MO_BE ? gen_helper_lwea_be : gen_helper_lwea_le)
+        (reg_for_write(dc, arg->rd), tcg_env, addr);
+    return true;
 #endif
 }
 
@@ -918,10 +939,11 @@ static bool trans_sbea(DisasContext *dc, arg_typea *arg)
         return true;
     }
 #ifdef CONFIG_USER_ONLY
-    return true;
+    g_assert_not_reached();
 #else
     TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
-    return do_store(dc, arg->rd, addr, MO_UB, MMU_NOMMU_IDX, false);
+    gen_helper_sbea(tcg_env, reg_for_read(dc, arg->rd), addr);
+    return true;
 #endif
 }
 
@@ -949,10 +971,13 @@ static bool trans_shea(DisasContext *dc, arg_typea *arg)
         return true;
     }
 #ifdef CONFIG_USER_ONLY
-    return true;
+    g_assert_not_reached();
 #else
     TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
-    return do_store(dc, arg->rd, addr, MO_UW, MMU_NOMMU_IDX, false);
+    gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_16, true);
+    (mo_endian(dc) == MO_BE ? gen_helper_shea_be : gen_helper_shea_le)
+        (tcg_env, reg_for_read(dc, arg->rd), addr);
+    return true;
 #endif
 }
 
@@ -980,10 +1005,13 @@ static bool trans_swea(DisasContext *dc, arg_typea *arg)
         return true;
     }
 #ifdef CONFIG_USER_ONLY
-    return true;
+    g_assert_not_reached();
 #else
     TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
-    return do_store(dc, arg->rd, addr, MO_UL, MMU_NOMMU_IDX, false);
+    gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_32, true);
+    (mo_endian(dc) == MO_BE ? gen_helper_swea_be : gen_helper_swea_le)
+        (tcg_env, reg_for_read(dc, arg->rd), addr);
+    return true;
 #endif
 }
 
-- 
2.43.0




* [PULL 08/28] target/microblaze: Use uint64_t for CPUMBState.ear
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (6 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 07/28] target/microblaze: Implement extended address load/store out of line Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 09/28] target/microblaze: Use TCGv_i64 for compute_ldst_addr_ea Richard Henderson
                   ` (20 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Use an explicit 64-bit type for EAR.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/cpu.h       | 2 +-
 target/microblaze/translate.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/microblaze/cpu.h b/target/microblaze/cpu.h
index 6ad8643f2e..3ce28b302f 100644
--- a/target/microblaze/cpu.h
+++ b/target/microblaze/cpu.h
@@ -248,7 +248,7 @@ struct CPUArchState {
     uint32_t pc;
     uint32_t msr;    /* All bits of MSR except MSR[C] and MSR[CC] */
     uint32_t msr_c;  /* MSR[C], in low bit; other bits must be 0 */
-    target_ulong ear;
+    uint64_t ear;
     uint32_t esr;
     uint32_t fsr;
     uint32_t btr;
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 3d9756391e..b1fc9e5624 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -1857,7 +1857,7 @@ void mb_cpu_dump_state(CPUState *cs, FILE *f, int flags)
     }
 
     qemu_fprintf(f, "\nesr=0x%04x fsr=0x%02x btr=0x%08x edr=0x%x\n"
-                 "ear=0x" TARGET_FMT_lx " slr=0x%x shr=0x%x\n",
+                 "ear=0x%" PRIx64 " slr=0x%x shr=0x%x\n",
                  env->esr, env->fsr, env->btr, env->edr,
                  env->ear, env->slr, env->shr);
 
-- 
2.43.0




* [PULL 09/28] target/microblaze: Use TCGv_i64 for compute_ldst_addr_ea
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (7 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 08/28] target/microblaze: Use uint64_t for CPUMBState.ear Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 10/28] target/microblaze: Fix printf format in mmu_translate Richard Henderson
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Use an explicit 64-bit type for extended addresses.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/translate.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index b1fc9e5624..dc597b36e6 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -660,23 +660,23 @@ static TCGv compute_ldst_addr_typeb(DisasContext *dc, int ra, int imm)
 }
 
 #ifndef CONFIG_USER_ONLY
-static TCGv compute_ldst_addr_ea(DisasContext *dc, int ra, int rb)
+static TCGv_i64 compute_ldst_addr_ea(DisasContext *dc, int ra, int rb)
 {
     int addr_size = dc->cfg->addr_size;
-    TCGv ret = tcg_temp_new();
+    TCGv_i64 ret = tcg_temp_new_i64();
 
     if (addr_size == 32 || ra == 0) {
         if (rb) {
-            tcg_gen_extu_i32_tl(ret, cpu_R[rb]);
+            tcg_gen_extu_i32_i64(ret, cpu_R[rb]);
         } else {
-            tcg_gen_movi_tl(ret, 0);
+            return tcg_constant_i64(0);
         }
     } else {
         if (rb) {
             tcg_gen_concat_i32_i64(ret, cpu_R[rb], cpu_R[ra]);
         } else {
-            tcg_gen_extu_i32_tl(ret, cpu_R[ra]);
-            tcg_gen_shli_tl(ret, ret, 32);
+            tcg_gen_extu_i32_i64(ret, cpu_R[ra]);
+            tcg_gen_shli_i64(ret, ret, 32);
         }
         if (addr_size < 64) {
             /* Mask off out of range bits.  */
@@ -781,7 +781,7 @@ static bool trans_lbuea(DisasContext *dc, arg_typea *arg)
 #ifdef CONFIG_USER_ONLY
     g_assert_not_reached();
 #else
-    TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
+    TCGv_i64 addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
     gen_helper_lbuea(reg_for_write(dc, arg->rd), tcg_env, addr);
     return true;
 #endif
@@ -813,7 +813,7 @@ static bool trans_lhuea(DisasContext *dc, arg_typea *arg)
 #ifdef CONFIG_USER_ONLY
     g_assert_not_reached();
 #else
-    TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
+    TCGv_i64 addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
     gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_16, false);
     (mo_endian(dc) == MO_BE ? gen_helper_lhuea_be : gen_helper_lhuea_le)
         (reg_for_write(dc, arg->rd), tcg_env, addr);
@@ -847,7 +847,7 @@ static bool trans_lwea(DisasContext *dc, arg_typea *arg)
 #ifdef CONFIG_USER_ONLY
     g_assert_not_reached();
 #else
-    TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
+    TCGv_i64 addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
     gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_32, false);
     (mo_endian(dc) == MO_BE ? gen_helper_lwea_be : gen_helper_lwea_le)
         (reg_for_write(dc, arg->rd), tcg_env, addr);
@@ -941,7 +941,7 @@ static bool trans_sbea(DisasContext *dc, arg_typea *arg)
 #ifdef CONFIG_USER_ONLY
     g_assert_not_reached();
 #else
-    TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
+    TCGv_i64 addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
     gen_helper_sbea(tcg_env, reg_for_read(dc, arg->rd), addr);
     return true;
 #endif
@@ -973,7 +973,7 @@ static bool trans_shea(DisasContext *dc, arg_typea *arg)
 #ifdef CONFIG_USER_ONLY
     g_assert_not_reached();
 #else
-    TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
+    TCGv_i64 addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
     gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_16, true);
     (mo_endian(dc) == MO_BE ? gen_helper_shea_be : gen_helper_shea_le)
         (tcg_env, reg_for_read(dc, arg->rd), addr);
@@ -1007,7 +1007,7 @@ static bool trans_swea(DisasContext *dc, arg_typea *arg)
 #ifdef CONFIG_USER_ONLY
     g_assert_not_reached();
 #else
-    TCGv addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
+    TCGv_i64 addr = compute_ldst_addr_ea(dc, arg->ra, arg->rb);
     gen_alignment_check_ea(dc, addr, arg->rb, arg->rd, MO_32, true);
     (mo_endian(dc) == MO_BE ? gen_helper_swea_be : gen_helper_swea_le)
         (tcg_env, reg_for_read(dc, arg->rd), addr);
-- 
2.43.0




* [PULL 10/28] target/microblaze: Fix printf format in mmu_translate
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (8 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 09/28] target/microblaze: Use TCGv_i64 for compute_ldst_addr_ea Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 11/28] target/microblaze: Use TARGET_LONG_BITS == 32 for system mode Richard Henderson
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Use TARGET_FMT_lx to match the target_ulong type of vaddr.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/target/microblaze/mmu.c b/target/microblaze/mmu.c
index 95a12e16f8..8703ff5c65 100644
--- a/target/microblaze/mmu.c
+++ b/target/microblaze/mmu.c
@@ -172,7 +172,8 @@ unsigned int mmu_translate(MicroBlazeCPU *cpu, MicroBlazeMMULookup *lu,
     }
 done:
     qemu_log_mask(CPU_LOG_MMU,
-                  "MMU vaddr=%" PRIx64 " rw=%d tlb_wr=%d tlb_ex=%d hit=%d\n",
+                  "MMU vaddr=0x" TARGET_FMT_lx
+                  " rw=%d tlb_wr=%d tlb_ex=%d hit=%d\n",
                   vaddr, rw, tlb_wr, tlb_ex, hit);
     return hit;
 }
-- 
2.43.0




* [PULL 11/28] target/microblaze: Use TARGET_LONG_BITS == 32 for system mode
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (9 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 10/28] target/microblaze: Fix printf format in mmu_translate Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 12/28] target/microblaze: Drop DisasContext.r0 Richard Henderson
                   ` (17 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Now that the extended address instructions are handled separately
from virtual addresses, we can narrow the emulation to 32-bit.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 configs/targets/microblaze-softmmu.mak   | 4 +---
 configs/targets/microblazeel-softmmu.mak | 4 +---
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/configs/targets/microblaze-softmmu.mak b/configs/targets/microblaze-softmmu.mak
index 23457d0ae6..bab7b498c2 100644
--- a/configs/targets/microblaze-softmmu.mak
+++ b/configs/targets/microblaze-softmmu.mak
@@ -3,6 +3,4 @@ TARGET_BIG_ENDIAN=y
 # needed by boot.c
 TARGET_NEED_FDT=y
 TARGET_XML_FILES=gdb-xml/microblaze-core.xml gdb-xml/microblaze-stack-protect.xml
-# System mode can address up to 64 bits via lea/sea instructions.
-# TODO: These bypass the mmu, so we could emulate these differently.
-TARGET_LONG_BITS=64
+TARGET_LONG_BITS=32
diff --git a/configs/targets/microblazeel-softmmu.mak b/configs/targets/microblazeel-softmmu.mak
index c82c509623..8aee7ebc5c 100644
--- a/configs/targets/microblazeel-softmmu.mak
+++ b/configs/targets/microblazeel-softmmu.mak
@@ -2,6 +2,4 @@ TARGET_ARCH=microblaze
 # needed by boot.c
 TARGET_NEED_FDT=y
 TARGET_XML_FILES=gdb-xml/microblaze-core.xml gdb-xml/microblaze-stack-protect.xml
-# System mode can address up to 64 bits via lea/sea instructions.
-# TODO: These bypass the mmu, so we could emulate these differently.
-TARGET_LONG_BITS=64
+TARGET_LONG_BITS=32
-- 
2.43.0




* [PULL 12/28] target/microblaze: Drop DisasContext.r0
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (10 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 11/28] target/microblaze: Use TARGET_LONG_BITS == 32 for system mode Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 13/28] target/microblaze: Simplify compute_ldst_addr_type{a,b} Richard Henderson
                   ` (16 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Return a constant 0 from reg_for_read, and a new
temporary from reg_for_write.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/translate.c | 24 ++----------------------
 1 file changed, 2 insertions(+), 22 deletions(-)

diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index dc597b36e6..047d97e2c5 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -63,9 +63,6 @@ typedef struct DisasContext {
     DisasContextBase base;
     const MicroBlazeCPUConfig *cfg;
 
-    TCGv_i32 r0;
-    bool r0_set;
-
     /* Decoder.  */
     uint32_t ext_imm;
     unsigned int tb_flags;
@@ -179,14 +176,7 @@ static TCGv_i32 reg_for_read(DisasContext *dc, int reg)
     if (likely(reg != 0)) {
         return cpu_R[reg];
     }
-    if (!dc->r0_set) {
-        if (dc->r0 == NULL) {
-            dc->r0 = tcg_temp_new_i32();
-        }
-        tcg_gen_movi_i32(dc->r0, 0);
-        dc->r0_set = true;
-    }
-    return dc->r0;
+    return tcg_constant_i32(0);
 }
 
 static TCGv_i32 reg_for_write(DisasContext *dc, int reg)
@@ -194,10 +184,7 @@ static TCGv_i32 reg_for_write(DisasContext *dc, int reg)
     if (likely(reg != 0)) {
         return cpu_R[reg];
     }
-    if (dc->r0 == NULL) {
-        dc->r0 = tcg_temp_new_i32();
-    }
-    return dc->r0;
+    return tcg_temp_new_i32();
 }
 
 static bool do_typea(DisasContext *dc, arg_typea *arg, bool side_effects,
@@ -1635,8 +1622,6 @@ static void mb_tr_init_disas_context(DisasContextBase *dcb, CPUState *cs)
     dc->cfg = &cpu->cfg;
     dc->tb_flags = dc->base.tb->flags;
     dc->ext_imm = dc->base.tb->cs_base;
-    dc->r0 = NULL;
-    dc->r0_set = false;
     dc->mem_index = cpu_mmu_index(cs, false);
     dc->jmp_cond = dc->tb_flags & D_FLAG ? TCG_COND_ALWAYS : TCG_COND_NEVER;
     dc->jmp_dest = -1;
@@ -1675,11 +1660,6 @@ static void mb_tr_translate_insn(DisasContextBase *dcb, CPUState *cs)
         trap_illegal(dc, true);
     }
 
-    if (dc->r0) {
-        dc->r0 = NULL;
-        dc->r0_set = false;
-    }
-
     /* Discard the imm global when its contents cannot be used. */
     if ((dc->tb_flags & ~dc->tb_flags_to_set) & IMM_FLAG) {
         tcg_gen_discard_i32(cpu_imm);
-- 
2.43.0




* [PULL 13/28] target/microblaze: Simplify compute_ldst_addr_type{a,b}
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (11 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 12/28] target/microblaze: Drop DisasContext.r0 Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 14/28] tcg: Drop TCGContext.tlb_dyn_max_bits Richard Henderson
                   ` (15 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Edgar E . Iglesias

Require that TCGv_i32 and TCGv be identical, so drop
the extensions.  Return constants when possible
instead of a mov into a temporary.  Return register
inputs unchanged when possible.

Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/microblaze/translate.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 047d97e2c5..5098a1db4d 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -606,19 +606,18 @@ DO_TYPEBI(xori, false, tcg_gen_xori_i32)
 
 static TCGv compute_ldst_addr_typea(DisasContext *dc, int ra, int rb)
 {
-    TCGv ret = tcg_temp_new();
+    TCGv ret;
 
     /* If any of the regs is r0, set t to the value of the other reg.  */
     if (ra && rb) {
-        TCGv_i32 tmp = tcg_temp_new_i32();
-        tcg_gen_add_i32(tmp, cpu_R[ra], cpu_R[rb]);
-        tcg_gen_extu_i32_tl(ret, tmp);
+        ret = tcg_temp_new_i32();
+        tcg_gen_add_i32(ret, cpu_R[ra], cpu_R[rb]);
     } else if (ra) {
-        tcg_gen_extu_i32_tl(ret, cpu_R[ra]);
+        ret = cpu_R[ra];
     } else if (rb) {
-        tcg_gen_extu_i32_tl(ret, cpu_R[rb]);
+        ret = cpu_R[rb];
     } else {
-        tcg_gen_movi_tl(ret, 0);
+        ret = tcg_constant_i32(0);
     }
 
     if ((ra == 1 || rb == 1) && dc->cfg->stackprot) {
@@ -629,15 +628,16 @@ static TCGv compute_ldst_addr_typea(DisasContext *dc, int ra, int rb)
 
 static TCGv compute_ldst_addr_typeb(DisasContext *dc, int ra, int imm)
 {
-    TCGv ret = tcg_temp_new();
+    TCGv ret;
 
     /* If any of the regs is r0, set t to the value of the other reg.  */
-    if (ra) {
-        TCGv_i32 tmp = tcg_temp_new_i32();
-        tcg_gen_addi_i32(tmp, cpu_R[ra], imm);
-        tcg_gen_extu_i32_tl(ret, tmp);
+    if (ra && imm) {
+        ret = tcg_temp_new_i32();
+        tcg_gen_addi_i32(ret, cpu_R[ra], imm);
+    } else if (ra) {
+        ret = cpu_R[ra];
     } else {
-        tcg_gen_movi_tl(ret, (uint32_t)imm);
+        ret = tcg_constant_i32(imm);
     }
 
     if (ra == 1 && dc->cfg->stackprot) {
-- 
2.43.0




* [PULL 14/28] tcg: Drop TCGContext.tlb_dyn_max_bits
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (12 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 13/28] target/microblaze: Simplify compute_ldst_addr_type{a,b} Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 15/28] tcg: Drop TCGContext.page_{mask,bits} Richard Henderson
                   ` (14 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel

This was an extremely minor optimization for aarch64
and x86_64, to use a 32-bit AND instruction when the
guest softmmu tlb maximum was sufficiently small.
Both hosts can simply use a 64-bit AND insn instead.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg.h            | 1 -
 accel/tcg/translate-all.c    | 2 --
 tcg/aarch64/tcg-target.c.inc | 6 +-----
 tcg/i386/tcg-target.c.inc    | 6 ++----
 4 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index 3fa5a7aed2..e440c889c8 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -368,7 +368,6 @@ struct TCGContext {
 
     int page_mask;
     uint8_t page_bits;
-    uint8_t tlb_dyn_max_bits;
     TCGBar guest_mo;
 
     TCGRegSet reserved_regs;
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 451b383aa8..6735a40ade 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -24,7 +24,6 @@
 #include "tcg/tcg.h"
 #include "exec/mmap-lock.h"
 #include "tb-internal.h"
-#include "tlb-bounds.h"
 #include "exec/tb-flush.h"
 #include "qemu/cacheinfo.h"
 #include "qemu/target-info.h"
@@ -316,7 +315,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu, TCGTBCPUState s)
 #ifdef CONFIG_SOFTMMU
     tcg_ctx->page_bits = TARGET_PAGE_BITS;
     tcg_ctx->page_mask = TARGET_PAGE_MASK;
-    tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
 #endif
     tcg_ctx->guest_mo = cpu->cc->tcg_ops->guest_default_memory_order;
 
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 4cb647cb34..6356a81c2a 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -1661,7 +1661,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         unsigned s_mask = (1u << s_bits) - 1;
         unsigned mem_index = get_mmuidx(oi);
         TCGReg addr_adj;
-        TCGType mask_type;
         uint64_t compare_mask;
 
         ldst = new_ldst_label(s);
@@ -1669,9 +1668,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         ldst->oi = oi;
         ldst->addr_reg = addr_reg;
 
-        mask_type = (s->page_bits + s->tlb_dyn_max_bits > 32
-                     ? TCG_TYPE_I64 : TCG_TYPE_I32);
-
         /* Load cpu->neg.tlb.f[mmu_idx].{mask,table} into {tmp0,tmp1}. */
         QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0);
         QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
@@ -1679,7 +1675,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
                      tlb_mask_table_ofs(s, mem_index), 1, 0);
 
         /* Extract the TLB index from the address into X0.  */
-        tcg_out_insn(s, 3502S, AND_LSR, mask_type == TCG_TYPE_I64,
+        tcg_out_insn(s, 3502S, AND_LSR, TCG_TYPE_I64,
                      TCG_REG_TMP0, TCG_REG_TMP0, addr_reg,
                      s->page_bits - CPU_TLB_ENTRY_BITS);
 
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index 09fce27b06..2990912080 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2199,10 +2199,8 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
             trexw = (ttype == TCG_TYPE_I32 ? 0 : P_REXW);
             if (TCG_TYPE_PTR == TCG_TYPE_I64) {
                 hrexw = P_REXW;
-                if (s->page_bits + s->tlb_dyn_max_bits > 32) {
-                    tlbtype = TCG_TYPE_I64;
-                    tlbrexw = P_REXW;
-                }
+                tlbtype = TCG_TYPE_I64;
+                tlbrexw = P_REXW;
             }
         }
 
-- 
2.43.0




* [PULL 15/28] tcg: Drop TCGContext.page_{mask,bits}
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (13 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 14/28] tcg: Drop TCGContext.tlb_dyn_max_bits Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 16/28] target/sh4: Use MO_ALIGN for system UNALIGN() Richard Henderson
                   ` (13 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel

Use exec/target_page.h instead of independent variables.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg.h                |  3 ---
 accel/tcg/translate-all.c        |  4 ----
 tcg/perf.c                       |  2 +-
 tcg/tcg-op-ldst.c                |  3 ++-
 tcg/tcg.c                        |  1 +
 tcg/aarch64/tcg-target.c.inc     |  4 ++--
 tcg/arm/tcg-target.c.inc         | 10 +++++-----
 tcg/i386/tcg-target.c.inc        |  4 ++--
 tcg/loongarch64/tcg-target.c.inc |  4 ++--
 tcg/mips/tcg-target.c.inc        |  6 +++---
 tcg/ppc/tcg-target.c.inc         | 14 +++++++-------
 tcg/riscv/tcg-target.c.inc       |  4 ++--
 tcg/s390x/tcg-target.c.inc       |  4 ++--
 tcg/sparc64/tcg-target.c.inc     |  4 ++--
 14 files changed, 31 insertions(+), 36 deletions(-)

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index e440c889c8..125323f153 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -365,9 +365,6 @@ struct TCGContext {
     int nb_indirects;
     int nb_ops;
     TCGType addr_type;            /* TCG_TYPE_I32 or TCG_TYPE_I64 */
-
-    int page_mask;
-    uint8_t page_bits;
     TCGBar guest_mo;
 
     TCGRegSet reserved_regs;
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 6735a40ade..d468667b0d 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -312,10 +312,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu, TCGTBCPUState s)
 
     tcg_ctx->gen_tb = tb;
     tcg_ctx->addr_type = target_long_bits() == 32 ? TCG_TYPE_I32 : TCG_TYPE_I64;
-#ifdef CONFIG_SOFTMMU
-    tcg_ctx->page_bits = TARGET_PAGE_BITS;
-    tcg_ctx->page_mask = TARGET_PAGE_MASK;
-#endif
     tcg_ctx->guest_mo = cpu->cc->tcg_ops->guest_default_memory_order;
 
  restart_translate:
diff --git a/tcg/perf.c b/tcg/perf.c
index 4e8d2c1bee..8fa5fa9991 100644
--- a/tcg/perf.c
+++ b/tcg/perf.c
@@ -334,7 +334,7 @@ void perf_report_code(uint64_t guest_pc, TranslationBlock *tb,
         /* FIXME: This replicates the restore_state_to_opc() logic. */
         q[insn].address = gen_insn_data[insn * INSN_START_WORDS + 0];
         if (tb_cflags(tb) & CF_PCREL) {
-            q[insn].address |= (guest_pc & qemu_target_page_mask());
+            q[insn].address |= guest_pc & TARGET_PAGE_MASK;
         }
         q[insn].flags = DEBUGINFO_SYMBOL | (jitdump ? DEBUGINFO_LINE : 0);
     }
diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index fa9e52277b..548496002d 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -27,6 +27,7 @@
 #include "tcg/tcg-temp-internal.h"
 #include "tcg/tcg-op-common.h"
 #include "tcg/tcg-mo.h"
+#include "exec/target_page.h"
 #include "exec/translation-block.h"
 #include "exec/plugin-gen.h"
 #include "tcg-internal.h"
@@ -40,7 +41,7 @@ static void check_max_alignment(unsigned a_bits)
      * FIXME: Must keep the count up-to-date with "exec/tlb-flags.h".
      */
     if (tcg_use_softmmu) {
-        tcg_debug_assert(a_bits + 5 <= tcg_ctx->page_bits);
+        tcg_debug_assert(a_bits + 5 <= TARGET_PAGE_BITS);
     }
 }
 
diff --git a/tcg/tcg.c b/tcg/tcg.c
index ae27a2607d..d714ae2889 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -34,6 +34,7 @@
 #include "qemu/cacheflush.h"
 #include "qemu/cacheinfo.h"
 #include "qemu/timer.h"
+#include "exec/target_page.h"
 #include "exec/translation-block.h"
 #include "exec/tlb-common.h"
 #include "tcg/startup.h"
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 6356a81c2a..3b088b7bd9 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -1677,7 +1677,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         /* Extract the TLB index from the address into X0.  */
         tcg_out_insn(s, 3502S, AND_LSR, TCG_TYPE_I64,
                      TCG_REG_TMP0, TCG_REG_TMP0, addr_reg,
-                     s->page_bits - CPU_TLB_ENTRY_BITS);
+                     TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
 
         /* Add the tlb_table pointer, forming the CPUTLBEntry address. */
         tcg_out_insn(s, 3502, ADD, 1, TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_TMP0);
@@ -1703,7 +1703,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
             tcg_out_insn(s, 3401, ADDI, addr_type,
                          addr_adj, addr_reg, s_mask - a_mask);
         }
-        compare_mask = (uint64_t)s->page_mask | a_mask;
+        compare_mask = (uint64_t)TARGET_PAGE_MASK | a_mask;
 
         /* Store the page mask part of the address into TMP2.  */
         tcg_out_logicali(s, I3404_ANDI, addr_type, TCG_REG_TMP2,
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index 447e43583e..836894b16a 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1427,7 +1427,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
 
         /* Extract the tlb index from the address into R0.  */
         tcg_out_dat_reg(s, COND_AL, ARITH_AND, TCG_REG_R0, TCG_REG_R0, addr,
-                        SHIFT_IMM_LSR(s->page_bits - CPU_TLB_ENTRY_BITS));
+                        SHIFT_IMM_LSR(TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS));
 
         /*
          * Add the tlb_table pointer, creating the CPUTLBEntry address in R1.
@@ -1463,8 +1463,8 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
             tcg_out_dat_imm(s, COND_AL, ARITH_ADD, t_addr,
                             addr, s_mask - a_mask);
         }
-        if (use_armv7_instructions && s->page_bits <= 16) {
-            tcg_out_movi32(s, COND_AL, TCG_REG_TMP, ~(s->page_mask | a_mask));
+        if (use_armv7_instructions && TARGET_PAGE_BITS <= 16) {
+            tcg_out_movi32(s, COND_AL, TCG_REG_TMP, ~(TARGET_PAGE_MASK | a_mask));
             tcg_out_dat_reg(s, COND_AL, ARITH_BIC, TCG_REG_TMP,
                             t_addr, TCG_REG_TMP, 0);
             tcg_out_dat_reg(s, COND_AL, ARITH_CMP, 0,
@@ -1475,10 +1475,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
                 tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask);
             }
             tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0, t_addr,
-                            SHIFT_IMM_LSR(s->page_bits));
+                            SHIFT_IMM_LSR(TARGET_PAGE_BITS));
             tcg_out_dat_reg(s, (a_mask ? COND_EQ : COND_AL), ARITH_CMP,
                             0, TCG_REG_R2, TCG_REG_TMP,
-                            SHIFT_IMM_LSL(s->page_bits));
+                            SHIFT_IMM_LSL(TARGET_PAGE_BITS));
         }
     } else if (a_mask) {
         ldst = new_ldst_label(s);
diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc
index 2990912080..088c6c9264 100644
--- a/tcg/i386/tcg-target.c.inc
+++ b/tcg/i386/tcg-target.c.inc
@@ -2206,7 +2206,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
 
         tcg_out_mov(s, tlbtype, TCG_REG_L0, addr);
         tcg_out_shifti(s, SHIFT_SHR + tlbrexw, TCG_REG_L0,
-                       s->page_bits - CPU_TLB_ENTRY_BITS);
+                       TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
 
         tcg_out_modrm_offset(s, OPC_AND_GvEv + trexw, TCG_REG_L0, TCG_AREG0,
                              fast_ofs + offsetof(CPUTLBDescFast, mask));
@@ -2225,7 +2225,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
             tcg_out_modrm_offset(s, OPC_LEA + trexw, TCG_REG_L1,
                                  addr, s_mask - a_mask);
         }
-        tlb_mask = s->page_mask | a_mask;
+        tlb_mask = TARGET_PAGE_MASK | a_mask;
         tgen_arithi(s, ARITH_AND + trexw, TCG_REG_L1, tlb_mask, 0);
 
         /* cmp 0(TCG_REG_L0), TCG_REG_L1 */
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index e5580d69a8..10c69211ac 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -1065,7 +1065,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs);
 
         tcg_out_opc_srli_d(s, TCG_REG_TMP2, addr_reg,
-                           s->page_bits - CPU_TLB_ENTRY_BITS);
+                           TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
         tcg_out_opc_and(s, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP0);
         tcg_out_opc_add_d(s, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP1);
 
@@ -1091,7 +1091,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
             tcg_out_mov(s, addr_type, TCG_REG_TMP1, addr_reg);
         }
         tcg_out_opc_bstrins_d(s, TCG_REG_TMP1, TCG_REG_ZERO,
-                              a_bits, s->page_bits - 1);
+                              a_bits, TARGET_PAGE_BITS - 1);
 
         /* Compare masked address with the TLB entry.  */
         ldst->label_ptr[0] = s->code_ptr;
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index 2c0457e588..400eafbab4 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1199,9 +1199,9 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         /* Extract the TLB index from the address into TMP3.  */
         if (TCG_TARGET_REG_BITS == 32 || addr_type == TCG_TYPE_I32) {
             tcg_out_opc_sa(s, OPC_SRL, TCG_TMP3, addr,
-                           s->page_bits - CPU_TLB_ENTRY_BITS);
+                           TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
         } else {
-            tcg_out_dsrl(s, TCG_TMP3, addr, s->page_bits - CPU_TLB_ENTRY_BITS);
+            tcg_out_dsrl(s, TCG_TMP3, addr, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
         }
         tcg_out_opc_reg(s, OPC_AND, TCG_TMP3, TCG_TMP3, TCG_TMP0);
 
@@ -1224,7 +1224,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
          * For unaligned accesses, compare against the end of the access to
          * verify that it does not cross a page boundary.
          */
-        tcg_out_movi(s, addr_type, TCG_TMP1, s->page_mask | a_mask);
+        tcg_out_movi(s, addr_type, TCG_TMP1, TARGET_PAGE_MASK | a_mask);
         if (a_mask < s_mask) {
             tcg_out_opc_imm(s, (TCG_TARGET_REG_BITS == 32
                                 || addr_type == TCG_TYPE_I32
diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc
index 2e94778104..b8b23d44d5 100644
--- a/tcg/ppc/tcg-target.c.inc
+++ b/tcg/ppc/tcg-target.c.inc
@@ -2440,10 +2440,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         /* Extract the page index, shifted into place for tlb index.  */
         if (TCG_TARGET_REG_BITS == 32) {
             tcg_out_shri32(s, TCG_REG_R0, addr,
-                           s->page_bits - CPU_TLB_ENTRY_BITS);
+                           TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
         } else {
             tcg_out_shri64(s, TCG_REG_R0, addr,
-                           s->page_bits - CPU_TLB_ENTRY_BITS);
+                           TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
         }
         tcg_out32(s, AND | SAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_R0));
 
@@ -2480,7 +2480,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
                 a_bits = s_bits;
             }
             tcg_out_rlw(s, RLWINM, TCG_REG_R0, addr, 0,
-                        (32 - a_bits) & 31, 31 - s->page_bits);
+                        (32 - a_bits) & 31, 31 - TARGET_PAGE_BITS);
         } else {
             TCGReg t = addr;
 
@@ -2501,13 +2501,13 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
             /* Mask the address for the requested alignment.  */
             if (addr_type == TCG_TYPE_I32) {
                 tcg_out_rlw(s, RLWINM, TCG_REG_R0, t, 0,
-                            (32 - a_bits) & 31, 31 - s->page_bits);
+                            (32 - a_bits) & 31, 31 - TARGET_PAGE_BITS);
             } else if (a_bits == 0) {
-                tcg_out_rld(s, RLDICR, TCG_REG_R0, t, 0, 63 - s->page_bits);
+                tcg_out_rld(s, RLDICR, TCG_REG_R0, t, 0, 63 - TARGET_PAGE_BITS);
             } else {
                 tcg_out_rld(s, RLDICL, TCG_REG_R0, t,
-                            64 - s->page_bits, s->page_bits - a_bits);
-                tcg_out_rld(s, RLDICL, TCG_REG_R0, TCG_REG_R0, s->page_bits, 0);
+                            64 - TARGET_PAGE_BITS, TARGET_PAGE_BITS - a_bits);
+                tcg_out_rld(s, RLDICL, TCG_REG_R0, TCG_REG_R0, TARGET_PAGE_BITS, 0);
             }
         }
 
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index f9417d15f7..1800fd5077 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -1706,7 +1706,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase,
         tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs);
 
         tcg_out_opc_imm(s, OPC_SRLI, TCG_REG_TMP2, addr_reg,
-                        s->page_bits - CPU_TLB_ENTRY_BITS);
+                        TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
         tcg_out_opc_reg(s, OPC_AND, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP0);
         tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP2, TCG_REG_TMP2, TCG_REG_TMP1);
 
@@ -1722,7 +1722,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase,
             tcg_out_opc_imm(s, addr_type == TCG_TYPE_I32 ? OPC_ADDIW : OPC_ADDI,
                             addr_adj, addr_reg, s_mask - a_mask);
         }
-        compare_mask = s->page_mask | a_mask;
+        compare_mask = TARGET_PAGE_MASK | a_mask;
         if (compare_mask == sextreg(compare_mask, 0, 12)) {
             tcg_out_opc_imm(s, OPC_ANDI, TCG_REG_TMP1, addr_adj, compare_mask);
         } else {
diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc
index 7ca0071f24..84a9e73a46 100644
--- a/tcg/s390x/tcg-target.c.inc
+++ b/tcg/s390x/tcg-target.c.inc
@@ -2004,7 +2004,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
         ldst->addr_reg = addr_reg;
 
         tcg_out_sh64(s, RSY_SRLG, TCG_TMP0, addr_reg, TCG_REG_NONE,
-                     s->page_bits - CPU_TLB_ENTRY_BITS);
+                     TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS);
 
         tcg_out_insn(s, RXY, NG, TCG_TMP0, TCG_AREG0, TCG_REG_NONE, mask_off);
         tcg_out_insn(s, RXY, AG, TCG_TMP0, TCG_AREG0, TCG_REG_NONE, table_off);
@@ -2016,7 +2016,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
          * byte of the access.
          */
         a_off = (a_mask >= s_mask ? 0 : s_mask - a_mask);
-        tlb_mask = (uint64_t)s->page_mask | a_mask;
+        tlb_mask = (uint64_t)TARGET_PAGE_MASK | a_mask;
         if (a_off == 0) {
             tgen_andi_risbg(s, TCG_REG_R0, addr_reg, tlb_mask);
         } else {
diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc
index 9e004fb511..5e5c3f1cda 100644
--- a/tcg/sparc64/tcg-target.c.inc
+++ b/tcg/sparc64/tcg-target.c.inc
@@ -1120,7 +1120,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
 
     /* Extract the page index, shifted into place for tlb index.  */
     tcg_out_arithi(s, TCG_REG_T1, addr_reg,
-                   s->page_bits - CPU_TLB_ENTRY_BITS, SHIFT_SRL);
+                   TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS, SHIFT_SRL);
     tcg_out_arith(s, TCG_REG_T1, TCG_REG_T1, TCG_REG_T2, ARITH_AND);
 
     /* Add the tlb_table pointer, creating the CPUTLBEntry address into R2.  */
@@ -1136,7 +1136,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
     h->base = TCG_REG_T1;
 
     /* Mask out the page offset, except for the required alignment. */
-    compare_mask = s->page_mask | a_mask;
+    compare_mask = TARGET_PAGE_MASK | a_mask;
     if (check_fit_tl(compare_mask, 13)) {
         tcg_out_arithi(s, TCG_REG_T3, addr_reg, compare_mask, ARITH_AND);
     } else {
-- 
2.43.0




* [PULL 16/28] target/sh4: Use MO_ALIGN for system UNALIGN()
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (14 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 15/28] tcg: Drop TCGContext.page_{mask,bits} Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:13 ` [PULL 17/28] accel/tcg: Add TCGCPUOps.pointer_wrap Richard Henderson
                   ` (12 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yoshinori Sato, Philippe Mathieu-Daudé

This should have been done before removing TARGET_ALIGNED_ONLY,
as we did for hppa and alpha.

Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Fixes: 8244189419f9 ("target/sh4: Remove TARGET_ALIGNED_ONLY")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sh4/translate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/sh4/translate.c b/target/sh4/translate.c
index bf8828fce8..70fd13aa3f 100644
--- a/target/sh4/translate.c
+++ b/target/sh4/translate.c
@@ -54,7 +54,7 @@ typedef struct DisasContext {
 #define UNALIGN(C)   (ctx->tbflags & TB_FLAG_UNALIGN ? MO_UNALN : MO_ALIGN)
 #else
 #define IS_USER(ctx) (!(ctx->tbflags & (1u << SR_MD)))
-#define UNALIGN(C)   0
+#define UNALIGN(C)   MO_ALIGN
 #endif
 
 /* Target-specific values for ctx->base.is_jmp.  */
-- 
2.43.0




* [PULL 17/28] accel/tcg: Add TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (15 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 16/28] target/sh4: Use MO_ALIGN for system UNALIGN() Richard Henderson
@ 2025-05-28  8:13 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets Richard Henderson
                   ` (11 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:13 UTC (permalink / raw)
  To: qemu-devel; +Cc: Philippe Mathieu-Daudé

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/accel/tcg/cpu-ops.h | 7 +++++++
 accel/tcg/cputlb.c          | 6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/include/accel/tcg/cpu-ops.h b/include/accel/tcg/cpu-ops.h
index cd22e5d5b9..83b2c2c864 100644
--- a/include/accel/tcg/cpu-ops.h
+++ b/include/accel/tcg/cpu-ops.h
@@ -222,6 +222,13 @@ struct TCGCPUOps {
     bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
                      MMUAccessType access_type, int mmu_idx,
                      bool probe, uintptr_t retaddr);
+    /**
+     * @pointer_wrap:
+     *
+     * We have incremented @base to @result, resulting in a page change.
+     * For the current cpu state, adjust @result for possible overflow.
+     */
+    vaddr (*pointer_wrap)(CPUState *cpu, int mmu_idx, vaddr result, vaddr base);
     /**
      * @do_transaction_failed: Callback for handling failed memory transactions
      * (ie bus faults or external aborts; not MMU faults)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 86d0deb08c..81ff725cbc 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1773,6 +1773,12 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         l->page[1].size = l->page[0].size - size0;
         l->page[0].size = size0;
 
+        if (cpu->cc->tcg_ops->pointer_wrap) {
+            l->page[1].addr = cpu->cc->tcg_ops->pointer_wrap(cpu, l->mmu_idx,
+                                                             l->page[1].addr,
+                                                             addr);
+        }
+
         /*
          * Lookup both pages, recognizing exceptions from either.  If the
          * second lookup potentially resized, refresh first CPUTLBEntryFull.
-- 
2.43.0
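
To make the new hook's contract concrete (an editorial illustration, not part of the patch): when a 32-bit guest issues an unaligned access that straddles the last page of its address space, the unwrapped second-page address computed in mmu_lookup would land at 2^32, outside the guest address space, and the hook folds it back to 0.  A minimal standalone sketch in the spirit of a uint32 wrap, modelling vaddr as uint64_t and assuming a 4 KiB page size (both are assumptions for the example, not taken from QEMU headers):

#include <inttypes.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                 /* assumed page size for the example */

/* Mirrors the effect of a uint32 pointer wrap for a 32-bit guest. */
static uint64_t wrap_uint32(uint64_t result, uint64_t base)
{
    (void)base;                         /* unused by this simple policy */
    return (uint32_t)result;
}

int main(void)
{
    uint64_t addr  = 0xfffffffeULL;     /* 4-byte access, 2 bytes below 2^32 */
    uint64_t size0 = PAGE_SIZE - (addr & (PAGE_SIZE - 1));  /* bytes on page 0 */
    uint64_t page1 = addr + size0;      /* 0x100000000: off the end of a 32-bit space */

    printf("unwrapped second page: 0x%" PRIx64 "\n", page1);
    printf("wrapped second page:   0x%" PRIx64 "\n", wrap_uint32(page1, addr));
    return 0;
}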




* [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (16 preceding siblings ...)
  2025-05-28  8:13 ` [PULL 17/28] accel/tcg: Add TCGCPUOps.pointer_wrap Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-08-29  6:55   ` Michael Tokarev
  2025-05-28  8:14 ` [PULL 19/28] target: Use cpu_pointer_wrap_uint32 for 32-bit targets Richard Henderson
                   ` (10 subsequent siblings)
  28 siblings, 1 reply; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Helge Deller, Yoshinori Sato, Philippe Mathieu-Daudé

Alpha, HPPA, and SH4 always use aligned addresses,
and therefore never produce accesses that cross pages.

Cc: Helge Deller <deller@gmx.de>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/accel/tcg/cpu-ops.h |  5 +++++
 accel/tcg/cputlb.c          | 13 +++++++++++++
 target/alpha/cpu.c          |  1 +
 target/hppa/cpu.c           |  1 +
 target/sh4/cpu.c            |  1 +
 5 files changed, 21 insertions(+)

diff --git a/include/accel/tcg/cpu-ops.h b/include/accel/tcg/cpu-ops.h
index 83b2c2c864..4f3b4fd3bc 100644
--- a/include/accel/tcg/cpu-ops.h
+++ b/include/accel/tcg/cpu-ops.h
@@ -322,6 +322,11 @@ void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
  */
 int cpu_watchpoint_address_matches(CPUState *cpu, vaddr addr, vaddr len);
 
+/*
+ * Common pointer_wrap implementations.
+ */
+vaddr cpu_pointer_wrap_notreached(CPUState *, int, vaddr, vaddr);
+
 #endif
 
 #endif /* TCG_CPU_OPS_H */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 81ff725cbc..49ec3ee5dc 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2933,3 +2933,16 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, vaddr addr,
 {
     return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_INST_FETCH);
 }
+
+/*
+ * Common pointer_wrap implementations.
+ */
+
+/*
+ * To be used for strict alignment targets.
+ * Because no accesses are unaligned, no accesses wrap either.
+ */
+vaddr cpu_pointer_wrap_notreached(CPUState *cs, int idx, vaddr res, vaddr base)
+{
+    g_assert_not_reached();
+}
diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index 890b84c032..2082db45ea 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -261,6 +261,7 @@ static const TCGCPUOps alpha_tcg_ops = {
     .record_sigbus = alpha_cpu_record_sigbus,
 #else
     .tlb_fill = alpha_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_notreached,
     .cpu_exec_interrupt = alpha_cpu_exec_interrupt,
     .cpu_exec_halt = alpha_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 6465181543..24777727e6 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -269,6 +269,7 @@ static const TCGCPUOps hppa_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill_align = hppa_cpu_tlb_fill_align,
+    .pointer_wrap = cpu_pointer_wrap_notreached,
     .cpu_exec_interrupt = hppa_cpu_exec_interrupt,
     .cpu_exec_halt = hppa_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index b35f18e250..4f561e8c91 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -296,6 +296,7 @@ static const TCGCPUOps superh_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = superh_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_notreached,
     .cpu_exec_interrupt = superh_cpu_exec_interrupt,
     .cpu_exec_halt = superh_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 19/28] target: Use cpu_pointer_wrap_uint32 for 32-bit targets
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (17 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 20/28] target/arm: Fill in TCGCPUOps.pointer_wrap Richard Henderson
                   ` (9 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael Rolnik, Laurent Vivier, Stafford Horne, Yoshinori Sato,
	Max Filippov, Bastian Koppelmann, Philippe Mathieu-Daudé,
	Edgar E . Iglesias

M68K, MicroBlaze, OpenRISC, RX, TriCore and Xtensa are
all 32-bit targets.  AVR is more complicated, but using
a 32-bit wrap preserves current behaviour.

Cc: Michael Rolnik <mrolnik@gmail.com>
Cc: Laurent Vivier <laurent@vivier.eu>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> (tricore)
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/accel/tcg/cpu-ops.h | 1 +
 accel/tcg/cputlb.c          | 6 ++++++
 target/avr/cpu.c            | 6 ++++++
 target/m68k/cpu.c           | 1 +
 target/microblaze/cpu.c     | 1 +
 target/openrisc/cpu.c       | 1 +
 target/rx/cpu.c             | 1 +
 target/tricore/cpu.c        | 1 +
 target/xtensa/cpu.c         | 1 +
 9 files changed, 19 insertions(+)

diff --git a/include/accel/tcg/cpu-ops.h b/include/accel/tcg/cpu-ops.h
index 4f3b4fd3bc..dd8ea30016 100644
--- a/include/accel/tcg/cpu-ops.h
+++ b/include/accel/tcg/cpu-ops.h
@@ -326,6 +326,7 @@ int cpu_watchpoint_address_matches(CPUState *cpu, vaddr addr, vaddr len);
  * Common pointer_wrap implementations.
  */
 vaddr cpu_pointer_wrap_notreached(CPUState *, int, vaddr, vaddr);
+vaddr cpu_pointer_wrap_uint32(CPUState *, int, vaddr, vaddr);
 
 #endif
 
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 49ec3ee5dc..a734859396 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2946,3 +2946,9 @@ vaddr cpu_pointer_wrap_notreached(CPUState *cs, int idx, vaddr res, vaddr base)
 {
     g_assert_not_reached();
 }
+
+/* To be used for strict 32-bit targets. */
+vaddr cpu_pointer_wrap_uint32(CPUState *cs, int idx, vaddr res, vaddr base)
+{
+    return (uint32_t)res;
+}
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
index 250241541b..6995de6a12 100644
--- a/target/avr/cpu.c
+++ b/target/avr/cpu.c
@@ -250,6 +250,12 @@ static const TCGCPUOps avr_tcg_ops = {
     .cpu_exec_reset = cpu_reset,
     .tlb_fill = avr_cpu_tlb_fill,
     .do_interrupt = avr_cpu_do_interrupt,
+    /*
+     * TODO: code and data wrapping are different, but for the most part
+     * AVR only references bytes or aligned code fetches.  But we use
+     * non-aligned MO_16 accesses for stack push/pop.
+     */
+    .pointer_wrap = cpu_pointer_wrap_uint32,
 };
 
 static void avr_cpu_class_init(ObjectClass *oc, const void *data)
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index c5196a612e..6a09db3a6f 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -619,6 +619,7 @@ static const TCGCPUOps m68k_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = m68k_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
     .cpu_exec_interrupt = m68k_cpu_exec_interrupt,
     .cpu_exec_halt = m68k_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index 615a959200..ee0a869a94 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -447,6 +447,7 @@ static const TCGCPUOps mb_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = mb_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
     .cpu_exec_interrupt = mb_cpu_exec_interrupt,
     .cpu_exec_halt = mb_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index 054ad33360..dfbb2df643 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -265,6 +265,7 @@ static const TCGCPUOps openrisc_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = openrisc_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
     .cpu_exec_interrupt = openrisc_cpu_exec_interrupt,
     .cpu_exec_halt = openrisc_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index 36eba75545..c6dd5d6f83 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -225,6 +225,7 @@ static const TCGCPUOps rx_tcg_ops = {
     .restore_state_to_opc = rx_restore_state_to_opc,
     .mmu_index = rx_cpu_mmu_index,
     .tlb_fill = rx_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
 
     .cpu_exec_interrupt = rx_cpu_exec_interrupt,
     .cpu_exec_halt = rx_cpu_has_work,
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
index e56f90fde9..4f035b6f76 100644
--- a/target/tricore/cpu.c
+++ b/target/tricore/cpu.c
@@ -190,6 +190,7 @@ static const TCGCPUOps tricore_tcg_ops = {
     .restore_state_to_opc = tricore_restore_state_to_opc,
     .mmu_index = tricore_cpu_mmu_index,
     .tlb_fill = tricore_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
     .cpu_exec_interrupt = tricore_cpu_exec_interrupt,
     .cpu_exec_halt = tricore_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index 91b71b6caa..ea9b6df3aa 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -318,6 +318,7 @@ static const TCGCPUOps xtensa_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = xtensa_cpu_tlb_fill,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
     .cpu_exec_interrupt = xtensa_cpu_exec_interrupt,
     .cpu_exec_halt = xtensa_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 20/28] target/arm: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (18 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 19/28] target: Use cpu_pointer_wrap_uint32 for 32-bit targets Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 21/28] target/i386: " Richard Henderson
                   ` (8 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-arm

For a-profile, check A32 vs A64 state.
For m-profile, use cpu_pointer_wrap_uint32.

Cc: qemu-arm@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.c         | 24 ++++++++++++++++++++++++
 target/arm/tcg/cpu-v7m.c |  1 +
 2 files changed, 25 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index ca5ed7892e..e025e241ed 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -2703,6 +2703,29 @@ static const struct SysemuCPUOps arm_sysemu_ops = {
 #endif
 
 #ifdef CONFIG_TCG
+#ifndef CONFIG_USER_ONLY
+static vaddr aprofile_pointer_wrap(CPUState *cs, int mmu_idx,
+                                   vaddr result, vaddr base)
+{
+    /*
+     * The Stage2 and Phys indexes are only used for ptw on arm32,
+     * and all pte's are aligned, so we never produce a wrap for these.
+     * Double check that we're not truncating a 40-bit physical address.
+     */
+    assert((unsigned)mmu_idx < (ARMMMUIdx_Stage2_S & ARM_MMU_IDX_COREIDX_MASK));
+
+    if (!is_a64(cpu_env(cs))) {
+        return (uint32_t)result;
+    }
+
+    /*
+     * TODO: For FEAT_CPA2, decide how we want to resolve
+     * Unpredictable_CPACHECK in AddressIncrement.
+     */
+    return result;
+}
+#endif /* !CONFIG_USER_ONLY */
+
 static const TCGCPUOps arm_tcg_ops = {
     .mttcg_supported = true,
     /* ARM processors have a weak memory model */
@@ -2722,6 +2745,7 @@ static const TCGCPUOps arm_tcg_ops = {
     .untagged_addr = aarch64_untagged_addr,
 #else
     .tlb_fill_align = arm_cpu_tlb_fill_align,
+    .pointer_wrap = aprofile_pointer_wrap,
     .cpu_exec_interrupt = arm_cpu_exec_interrupt,
     .cpu_exec_halt = arm_cpu_exec_halt,
     .cpu_exec_reset = cpu_reset,
diff --git a/target/arm/tcg/cpu-v7m.c b/target/arm/tcg/cpu-v7m.c
index 95b23d9b55..8e1a083b91 100644
--- a/target/arm/tcg/cpu-v7m.c
+++ b/target/arm/tcg/cpu-v7m.c
@@ -249,6 +249,7 @@ static const TCGCPUOps arm_v7m_tcg_ops = {
     .record_sigbus = arm_cpu_record_sigbus,
 #else
     .tlb_fill_align = arm_cpu_tlb_fill_align,
+    .pointer_wrap = cpu_pointer_wrap_uint32,
     .cpu_exec_interrupt = arm_v7m_cpu_exec_interrupt,
     .cpu_exec_halt = arm_cpu_exec_halt,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 21/28] target/i386: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (19 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 20/28] target/arm: Fill in TCGCPUOps.pointer_wrap Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 22/28] target/loongarch: " Richard Henderson
                   ` (7 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Paolo Bonzini, Philippe Mathieu-Daudé

Check 32 vs 64-bit state.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/tcg/tcg-cpu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index 179dfdf064..6f5dc06b3b 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -149,6 +149,12 @@ static void x86_cpu_exec_reset(CPUState *cs)
     do_cpu_init(env_archcpu(env));
     cs->exception_index = EXCP_HALTED;
 }
+
+static vaddr x86_pointer_wrap(CPUState *cs, int mmu_idx,
+                              vaddr result, vaddr base)
+{
+    return cpu_env(cs)->hflags & HF_CS64_MASK ? result : (uint32_t)result;
+}
 #endif
 
 const TCGCPUOps x86_tcg_ops = {
@@ -172,6 +178,7 @@ const TCGCPUOps x86_tcg_ops = {
     .record_sigbus = x86_cpu_record_sigbus,
 #else
     .tlb_fill = x86_cpu_tlb_fill,
+    .pointer_wrap = x86_pointer_wrap,
     .do_interrupt = x86_cpu_do_interrupt,
     .cpu_exec_halt = x86_cpu_exec_halt,
     .cpu_exec_interrupt = x86_cpu_exec_interrupt,
-- 
2.43.0




* [PULL 22/28] target/loongarch: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (20 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 21/28] target/i386: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 23/28] target/mips: " Richard Henderson
                   ` (6 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Song Gao, Bibo Mao, Philippe Mathieu-Daudé

Check va32 state.

Reviewed-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/cpu.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
index f7535d1be7..abad84c054 100644
--- a/target/loongarch/cpu.c
+++ b/target/loongarch/cpu.c
@@ -334,6 +334,12 @@ static bool loongarch_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
     }
     return false;
 }
+
+static vaddr loongarch_pointer_wrap(CPUState *cs, int mmu_idx,
+                                    vaddr result, vaddr base)
+{
+    return is_va32(cpu_env(cs)) ? (uint32_t)result : result;
+}
 #endif
 
 static TCGTBCPUState loongarch_get_tb_cpu_state(CPUState *cs)
@@ -889,6 +895,7 @@ static const TCGCPUOps loongarch_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = loongarch_cpu_tlb_fill,
+    .pointer_wrap = loongarch_pointer_wrap,
     .cpu_exec_interrupt = loongarch_cpu_exec_interrupt,
     .cpu_exec_halt = loongarch_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 23/28] target/mips: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (21 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 22/28] target/loongarch: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 24/28] target/ppc: " Richard Henderson
                   ` (5 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Philippe Mathieu-Daudé

Check 32 vs 64-bit addressing state.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/mips/cpu.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index 4cbfb9435a..1f6c41fd34 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -560,6 +560,14 @@ static TCGTBCPUState mips_get_tb_cpu_state(CPUState *cs)
     };
 }
 
+#ifndef CONFIG_USER_ONLY
+static vaddr mips_pointer_wrap(CPUState *cs, int mmu_idx,
+                               vaddr result, vaddr base)
+{
+    return cpu_env(cs)->hflags & MIPS_HFLAG_AWRAP ? (int32_t)result : result;
+}
+#endif
+
 static const TCGCPUOps mips_tcg_ops = {
     .mttcg_supported = TARGET_LONG_BITS == 32,
     .guest_default_memory_order = 0,
@@ -573,6 +581,7 @@ static const TCGCPUOps mips_tcg_ops = {
 
 #if !defined(CONFIG_USER_ONLY)
     .tlb_fill = mips_cpu_tlb_fill,
+    .pointer_wrap = mips_pointer_wrap,
     .cpu_exec_interrupt = mips_cpu_exec_interrupt,
     .cpu_exec_halt = mips_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 24/28] target/ppc: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (22 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 23/28] target/mips: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 25/28] target/riscv: " Richard Henderson
                   ` (4 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-ppc, Philippe Mathieu-Daudé

Check 32 vs 64-bit state.

Cc: qemu-ppc@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/ppc/cpu_init.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index 9642812a71..a0e77f2673 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -7386,6 +7386,12 @@ static void ppc_cpu_exec_exit(CPUState *cs)
         cpu->vhyp_class->cpu_exec_exit(cpu->vhyp, cpu);
     }
 }
+
+static vaddr ppc_pointer_wrap(CPUState *cs, int mmu_idx,
+                              vaddr result, vaddr base)
+{
+    return (cpu_env(cs)->hflags >> HFLAGS_64) & 1 ? result : (uint32_t)result;
+}
 #endif /* CONFIG_TCG */
 
 #endif /* !CONFIG_USER_ONLY */
@@ -7490,6 +7496,7 @@ static const TCGCPUOps ppc_tcg_ops = {
   .record_sigsegv = ppc_cpu_record_sigsegv,
 #else
   .tlb_fill = ppc_cpu_tlb_fill,
+  .pointer_wrap = ppc_pointer_wrap,
   .cpu_exec_interrupt = ppc_cpu_exec_interrupt,
   .cpu_exec_halt = ppc_cpu_has_work,
   .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 25/28] target/riscv: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (23 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 24/28] target/ppc: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 26/28] target/s390x: " Richard Henderson
                   ` (3 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-riscv, Philippe Mathieu-Daudé, Alistair Francis

Check 32 vs 64-bit and pointer masking state.

Cc: qemu-riscv@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/riscv/tcg/tcg-cpu.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index 305912b8dd..55fd9e5584 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -237,6 +237,31 @@ static void riscv_restore_state_to_opc(CPUState *cs,
     env->excp_uw2 = data[2];
 }
 
+#ifndef CONFIG_USER_ONLY
+static vaddr riscv_pointer_wrap(CPUState *cs, int mmu_idx,
+                                vaddr result, vaddr base)
+{
+    CPURISCVState *env = cpu_env(cs);
+    uint32_t pm_len;
+    bool pm_signext;
+
+    if (cpu_address_xl(env) == MXL_RV32) {
+        return (uint32_t)result;
+    }
+
+    pm_len = riscv_pm_get_pmlen(riscv_pm_get_pmm(env));
+    if (pm_len == 0) {
+        return result;
+    }
+
+    pm_signext = riscv_cpu_virt_mem_enabled(env);
+    if (pm_signext) {
+        return sextract64(result, 0, 64 - pm_len);
+    }
+    return extract64(result, 0, 64 - pm_len);
+}
+#endif
+
 const TCGCPUOps riscv_tcg_ops = {
     .mttcg_supported = true,
     .guest_default_memory_order = 0,
@@ -250,6 +275,7 @@ const TCGCPUOps riscv_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = riscv_cpu_tlb_fill,
+    .pointer_wrap = riscv_pointer_wrap,
     .cpu_exec_interrupt = riscv_cpu_exec_interrupt,
     .cpu_exec_halt = riscv_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0
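
As an editorial aside on the masking arithmetic above (illustration only, not part of the patch): when pointer masking is active, the top pm_len bits of the carried-out address are dropped and the remaining 64 - pm_len bits are sign- or zero-extended, which is what the sextract64/extract64 calls express.  A standalone sketch, assuming pm_len = 16 purely as an example value:

#include <inttypes.h>
#include <stdio.h>

/* Roughly extract64(v, 0, bits): keep the low bits, zero-extend. */
static uint64_t zext_low(uint64_t v, unsigned bits)
{
    return v & ((UINT64_C(1) << bits) - 1);          /* assumes 0 < bits < 64 */
}

/* Roughly sextract64(v, 0, bits): keep the low bits, sign-extend.
   Relies on arithmetic right shift of int64_t, as gcc/clang provide. */
static uint64_t sext_low(uint64_t v, unsigned bits)
{
    return (uint64_t)((int64_t)(v << (64 - bits)) >> (64 - bits));
}

int main(void)
{
    unsigned pm_len = 16;                            /* assumed pointer-masking width */
    uint64_t result = 0xffff800000001000ULL;         /* wrapped address with tag bits set */

    printf("zero-extended: 0x%016" PRIx64 "\n", zext_low(result, 64 - pm_len));
    printf("sign-extended: 0x%016" PRIx64 "\n", sext_low(result, 64 - pm_len));
    return 0;
}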




* [PULL 26/28] target/s390x: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (24 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 25/28] target/riscv: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 27/28] target/sparc: " Richard Henderson
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-s390x, Philippe Mathieu-Daudé

Use the existing wrap_address function.

Cc: qemu-s390x@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/s390x/cpu.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 9c1158ebcc..f05ce317da 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -347,6 +347,14 @@ static TCGTBCPUState s390x_get_tb_cpu_state(CPUState *cs)
     };
 }
 
+#ifndef CONFIG_USER_ONLY
+static vaddr s390_pointer_wrap(CPUState *cs, int mmu_idx,
+                               vaddr result, vaddr base)
+{
+    return wrap_address(cpu_env(cs), result);
+}
+#endif
+
 static const TCGCPUOps s390_tcg_ops = {
     .mttcg_supported = true,
     .precise_smc = true,
@@ -367,6 +375,7 @@ static const TCGCPUOps s390_tcg_ops = {
     .record_sigbus = s390_cpu_record_sigbus,
 #else
     .tlb_fill = s390_cpu_tlb_fill,
+    .pointer_wrap = s390_pointer_wrap,
     .cpu_exec_interrupt = s390_cpu_exec_interrupt,
     .cpu_exec_halt = s390_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 27/28] target/sparc: Fill in TCGCPUOps.pointer_wrap
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (25 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 26/28] target/s390x: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-28  8:14 ` [PULL 28/28] accel/tcg: Assert TCGCPUOps.pointer_wrap is set Richard Henderson
  2025-05-29 12:35 ` [PULL 00/28] tcg patch queue Stefan Hajnoczi
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Mark Cave-Ayland, Philippe Mathieu-Daudé

Check address masking state for sparc64.

Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/sparc/cpu.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 2a3e408923..ed7701b02f 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -1002,6 +1002,18 @@ static const struct SysemuCPUOps sparc_sysemu_ops = {
 #ifdef CONFIG_TCG
 #include "accel/tcg/cpu-ops.h"
 
+#ifndef CONFIG_USER_ONLY
+static vaddr sparc_pointer_wrap(CPUState *cs, int mmu_idx,
+                                vaddr result, vaddr base)
+{
+#ifdef TARGET_SPARC64
+    return cpu_env(cs)->pstate & PS_AM ? (uint32_t)result : result;
+#else
+    return (uint32_t)result;
+#endif
+}
+#endif
+
 static const TCGCPUOps sparc_tcg_ops = {
     /*
      * From Oracle SPARC Architecture 2015:
@@ -1036,6 +1048,7 @@ static const TCGCPUOps sparc_tcg_ops = {
 
 #ifndef CONFIG_USER_ONLY
     .tlb_fill = sparc_cpu_tlb_fill,
+    .pointer_wrap = sparc_pointer_wrap,
     .cpu_exec_interrupt = sparc_cpu_exec_interrupt,
     .cpu_exec_halt = sparc_cpu_has_work,
     .cpu_exec_reset = cpu_reset,
-- 
2.43.0




* [PULL 28/28] accel/tcg: Assert TCGCPUOps.pointer_wrap is set
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (26 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 27/28] target/sparc: " Richard Henderson
@ 2025-05-28  8:14 ` Richard Henderson
  2025-05-29 12:35 ` [PULL 00/28] tcg patch queue Stefan Hajnoczi
  28 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-05-28  8:14 UTC (permalink / raw)
  To: qemu-devel; +Cc: Philippe Mathieu-Daudé

All targets now provide the function, so we can
make the call unconditional.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cpu-exec.c | 1 +
 accel/tcg/cputlb.c   | 7 ++-----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index cc5f362305..713bdb2056 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -1039,6 +1039,7 @@ bool tcg_exec_realizefn(CPUState *cpu, Error **errp)
         assert(tcg_ops->cpu_exec_halt);
         assert(tcg_ops->cpu_exec_interrupt);
         assert(tcg_ops->cpu_exec_reset);
+        assert(tcg_ops->pointer_wrap);
 #endif /* !CONFIG_USER_ONLY */
         assert(tcg_ops->translate_code);
         assert(tcg_ops->get_tb_cpu_state);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a734859396..87e14bde4f 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1773,11 +1773,8 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         l->page[1].size = l->page[0].size - size0;
         l->page[0].size = size0;
 
-        if (cpu->cc->tcg_ops->pointer_wrap) {
-            l->page[1].addr = cpu->cc->tcg_ops->pointer_wrap(cpu, l->mmu_idx,
-                                                             l->page[1].addr,
-                                                             addr);
-        }
+        l->page[1].addr = cpu->cc->tcg_ops->pointer_wrap(cpu, l->mmu_idx,
+                                                         l->page[1].addr, addr);
 
         /*
          * Lookup both pages, recognizing exceptions from either.  If the
-- 
2.43.0




* Re: [PULL 00/28] tcg patch queue
  2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
                   ` (27 preceding siblings ...)
  2025-05-28  8:14 ` [PULL 28/28] accel/tcg: Assert TCGCPUOps.pointer_wrap is set Richard Henderson
@ 2025-05-29 12:35 ` Stefan Hajnoczi
  28 siblings, 0 replies; 32+ messages in thread
From: Stefan Hajnoczi @ 2025-05-29 12:35 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/10.1 for any user-visible changes.



* Re: [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets
  2025-05-28  8:14 ` [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets Richard Henderson
@ 2025-08-29  6:55   ` Michael Tokarev
  2025-08-30  3:11     ` Richard Henderson
  0 siblings, 1 reply; 32+ messages in thread
From: Michael Tokarev @ 2025-08-29  6:55 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: Helge Deller, Yoshinori Sato, Philippe Mathieu-Daudé

On 28.05.2025 11:14, Richard Henderson wrote:
> Alpha, HPPA, and SH4 always use aligned addresses,
> and therefore never produce accesses that cross pages.
> 
> Cc: Helge Deller <deller@gmx.de>
> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

This seems to have broken booting Debian on alpha -- see
https://bugs.debian.org/1112285.  I wasn't able to reproduce it
myself, though; I've asked the OP to get a backtrace.

Thanks,

/mjt



* Re: [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets
  2025-08-29  6:55   ` Michael Tokarev
@ 2025-08-30  3:11     ` Richard Henderson
  0 siblings, 0 replies; 32+ messages in thread
From: Richard Henderson @ 2025-08-30  3:11 UTC (permalink / raw)
  To: Michael Tokarev, qemu-devel
  Cc: Helge Deller, Yoshinori Sato, Philippe Mathieu-Daudé

On 8/29/25 16:55, Michael Tokarev wrote:
> On 28.05.2025 11:14, Richard Henderson wrote:
>> Alpha, HPPA, and SH4 always use aligned addresses,
>> and therefore never produce accesses that cross pages.
>>
>> Cc: Helge Deller <deller@gmx.de>
>> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> 
> This seems to broke booting debian on alpha, -- see
> https://bugs.debian.org/1112285 .  I weren't able to repro it
> though, - asked the OP to get a backtrace.

Ok.  I haven't reproduced this either.
Let me know if you get more information.


r~



Thread overview: 32+ messages
2025-05-28  8:13 [PULL 00/28] tcg patch queue Richard Henderson
2025-05-28  8:13 ` [PULL 01/28] accel/tcg: Fix atomic_mmu_lookup vs TLB_FORCE_SLOW Richard Henderson
2025-05-28  8:13 ` [PULL 02/28] system/main: comment lock rationale Richard Henderson
2025-05-28  8:13 ` [PULL 03/28] linux-user: implement pgid field of /proc/self/stat Richard Henderson
2025-05-28  8:13 ` [PULL 04/28] target/microblaze: Split out mb_unaligned_access_internal Richard Henderson
2025-05-28  8:13 ` [PULL 05/28] target/microblaze: Introduce helper_unaligned_access Richard Henderson
2025-05-28  8:13 ` [PULL 06/28] target/microblaze: Split out mb_transaction_failed_internal Richard Henderson
2025-05-28  8:13 ` [PULL 07/28] target/microblaze: Implement extended address load/store out of line Richard Henderson
2025-05-28  8:13 ` [PULL 08/28] target/microblaze: Use uint64_t for CPUMBState.ear Richard Henderson
2025-05-28  8:13 ` [PULL 09/28] target/microblaze: Use TCGv_i64 for compute_ldst_addr_ea Richard Henderson
2025-05-28  8:13 ` [PULL 10/28] target/microblaze: Fix printf format in mmu_translate Richard Henderson
2025-05-28  8:13 ` [PULL 11/28] target/microblaze: Use TARGET_LONG_BITS == 32 for system mode Richard Henderson
2025-05-28  8:13 ` [PULL 12/28] target/microblaze: Drop DisasContext.r0 Richard Henderson
2025-05-28  8:13 ` [PULL 13/28] target/microblaze: Simplify compute_ldst_addr_type{a,b} Richard Henderson
2025-05-28  8:13 ` [PULL 14/28] tcg: Drop TCGContext.tlb_dyn_max_bits Richard Henderson
2025-05-28  8:13 ` [PULL 15/28] tcg: Drop TCGContext.page_{mask,bits} Richard Henderson
2025-05-28  8:13 ` [PULL 16/28] target/sh4: Use MO_ALIGN for system UNALIGN() Richard Henderson
2025-05-28  8:13 ` [PULL 17/28] accel/tcg: Add TCGCPUOps.pointer_wrap Richard Henderson
2025-05-28  8:14 ` [PULL 18/28] target: Use cpu_pointer_wrap_notreached for strict align targets Richard Henderson
2025-08-29  6:55   ` Michael Tokarev
2025-08-30  3:11     ` Richard Henderson
2025-05-28  8:14 ` [PULL 19/28] target: Use cpu_pointer_wrap_uint32 for 32-bit targets Richard Henderson
2025-05-28  8:14 ` [PULL 20/28] target/arm: Fill in TCGCPUOps.pointer_wrap Richard Henderson
2025-05-28  8:14 ` [PULL 21/28] target/i386: " Richard Henderson
2025-05-28  8:14 ` [PULL 22/28] target/loongarch: " Richard Henderson
2025-05-28  8:14 ` [PULL 23/28] target/mips: " Richard Henderson
2025-05-28  8:14 ` [PULL 24/28] target/ppc: " Richard Henderson
2025-05-28  8:14 ` [PULL 25/28] target/riscv: " Richard Henderson
2025-05-28  8:14 ` [PULL 26/28] target/s390x: " Richard Henderson
2025-05-28  8:14 ` [PULL 27/28] target/sparc: " Richard Henderson
2025-05-28  8:14 ` [PULL 28/28] accel/tcg: Assert TCGCPUOps.pointer_wrap is set Richard Henderson
2025-05-29 12:35 ` [PULL 00/28] tcg patch queue Stefan Hajnoczi
