* [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order
@ 2025-03-21 18:15 Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 1/7] tcg: Always define TCG_GUEST_DEFAULT_MO Philippe Mathieu-Daudé
` (7 more replies)
0 siblings, 8 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
Since v1:
- Do not use tcg_ctx in tcg_req_mo (rth)
Hi,
In this series we replace the TCG_GUEST_DEFAULT_MO definition
from "cpu-param.h" by a 'guest_default_memory_order' field in
TCGCPUOps.
Since tcg_req_mo() now takes the guest memory order from CPUState,
this impacts the cpu_req_mo() calls in accel/tcg/{cputlb,user-exec}.c.
The long term goal is to be able to use targets with distinct
guest memory order restrictions.
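To summarize the end state, here is a minimal sketch assembled from
patches 3/7 and 6/7 below (not a literal excerpt): each target describes
its ordering constraints in the new TCGCPUOps field, and the barrier
check reads it at run time through CPUState:

  /* include/accel/tcg/cpu-ops.h */
  struct TCGCPUOps {
      /* Default barrier required for the guest memory ordering. */
      TCGBar guest_default_memory_order;
      /* ... other hooks ... */
  };

  /* e.g. target/sparc/cpu.c -- SPARC always uses TSO in QEMU */
  static const TCGCPUOps sparc_tcg_ops = {
      /* ... */
      .guest_default_memory_order = TCG_MO_LD_LD | TCG_MO_LD_ST | TCG_MO_ST_ST,
      /* ... */
  };

  /* accel/tcg/internal-target.h -- the compile-time macro becomes a
   * runtime lookup via CPUState */
  #define tcg_req_mo(guest_mo, type) \
      ((type) & guest_mo & ~TCG_TARGET_DEFAULT_MO)

  #define cpu_req_mo(cpu, type) \
      do { \
          if (tcg_req_mo(cpu->cc->tcg_ops->guest_default_memory_order, type)) { \
              smp_mb(); \
          } \
      } while (0)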
Philippe Mathieu-Daudé (7):
tcg: Always define TCG_GUEST_DEFAULT_MO
tcg: Simplify tcg_req_mo() macro
tcg: Define guest_default_memory_order in TCGCPUOps
tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code()
tcg: Propagate CPUState argument to cpu_req_mo()
tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order
tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally
docs/devel/multi-thread-tcg.rst | 4 ++--
accel/tcg/internal-target.h | 19 ++++++-------------
include/accel/tcg/cpu-ops.h | 8 ++++++++
target/alpha/cpu-param.h | 3 ---
target/arm/cpu-param.h | 3 ---
target/avr/cpu-param.h | 2 --
target/hppa/cpu-param.h | 8 --------
target/i386/cpu-param.h | 3 ---
target/loongarch/cpu-param.h | 2 --
target/microblaze/cpu-param.h | 3 ---
target/mips/cpu-param.h | 2 --
target/openrisc/cpu-param.h | 2 --
target/ppc/cpu-param.h | 2 --
target/riscv/cpu-param.h | 2 --
target/s390x/cpu-param.h | 6 ------
target/sparc/cpu-param.h | 23 -----------------------
target/xtensa/cpu-param.h | 3 ---
accel/tcg/cputlb.c | 20 ++++++++++----------
accel/tcg/tcg-all.c | 3 ---
accel/tcg/translate-all.c | 6 +-----
accel/tcg/user-exec.c | 20 ++++++++++----------
target/alpha/cpu.c | 3 +++
target/arm/cpu.c | 3 +++
target/arm/tcg/cpu-v7m.c | 3 +++
target/avr/cpu.c | 1 +
target/hexagon/cpu.c | 2 ++
target/hppa/cpu.c | 8 ++++++++
target/i386/tcg/tcg-cpu.c | 5 +++++
target/loongarch/cpu.c | 2 ++
target/m68k/cpu.c | 3 +++
target/microblaze/cpu.c | 3 +++
target/mips/cpu.c | 2 ++
target/openrisc/cpu.c | 2 ++
target/ppc/cpu_init.c | 2 ++
target/riscv/tcg/tcg-cpu.c | 2 ++
target/rx/cpu.c | 3 +++
target/s390x/cpu.c | 6 ++++++
target/sh4/cpu.c | 3 +++
target/sparc/cpu.c | 23 +++++++++++++++++++++++
target/tricore/cpu.c | 2 ++
target/xtensa/cpu.c | 3 +++
41 files changed, 118 insertions(+), 107 deletions(-)
--
2.47.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH-for-10.1 v2 1/7] tcg: Always define TCG_GUEST_DEFAULT_MO
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 2/7] tcg: Simplify tcg_req_mo() macro Philippe Mathieu-Daudé
` (6 subsequent siblings)
7 siblings, 0 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
We only require TCG_GUEST_DEFAULT_MO for MTTCG-enabled frontends;
otherwise we use a default value of TCG_MO_ALL.
To simplify, require the definition for all targets, adding it
for hexagon, m68k, rx, sh4 and tricore.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
target/hexagon/cpu-param.h | 3 +++
target/m68k/cpu-param.h | 3 +++
target/rx/cpu-param.h | 3 +++
target/sh4/cpu-param.h | 3 +++
target/tricore/cpu-param.h | 3 +++
accel/tcg/translate-all.c | 4 ----
6 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/target/hexagon/cpu-param.h b/target/hexagon/cpu-param.h
index 45ee7b46409..2d57ea6caf9 100644
--- a/target/hexagon/cpu-param.h
+++ b/target/hexagon/cpu-param.h
@@ -23,4 +23,7 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 36
#define TARGET_VIRT_ADDR_SPACE_BITS 32
+/* MTTCG not yet supported: require strict ordering */
+#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
+
#endif
diff --git a/target/m68k/cpu-param.h b/target/m68k/cpu-param.h
index 7afbf6d302d..1a909eaa13e 100644
--- a/target/m68k/cpu-param.h
+++ b/target/m68k/cpu-param.h
@@ -17,4 +17,7 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
+/* MTTCG not yet supported: require strict ordering */
+#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
+
#endif
diff --git a/target/rx/cpu-param.h b/target/rx/cpu-param.h
index ef1970a09e9..2ce199164d7 100644
--- a/target/rx/cpu-param.h
+++ b/target/rx/cpu-param.h
@@ -24,4 +24,7 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
+/* MTTCG not yet supported: require strict ordering */
+#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
+
#endif
diff --git a/target/sh4/cpu-param.h b/target/sh4/cpu-param.h
index 2b6e11dd0ac..1bc90d4695e 100644
--- a/target/sh4/cpu-param.h
+++ b/target/sh4/cpu-param.h
@@ -16,4 +16,7 @@
# define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif
+/* MTTCG not yet supported: require strict ordering */
+#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
+
#endif
diff --git a/target/tricore/cpu-param.h b/target/tricore/cpu-param.h
index 790242ef3d2..923459370cc 100644
--- a/target/tricore/cpu-param.h
+++ b/target/tricore/cpu-param.h
@@ -12,4 +12,7 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
+/* MTTCG not yet supported: require strict ordering */
+#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
+
#endif
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 82bc16bd535..fb9f83dbba3 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -349,11 +349,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
#endif
tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
-#ifdef TCG_GUEST_DEFAULT_MO
tcg_ctx->guest_mo = TCG_GUEST_DEFAULT_MO;
-#else
- tcg_ctx->guest_mo = TCG_MO_ALL;
-#endif
restart_translate:
trace_translate_block(tb, pc, tb->tc.ptr);
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH-for-10.1 v2 2/7] tcg: Simplify tcg_req_mo() macro
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 1/7] tcg: Always define TCG_GUEST_DEFAULT_MO Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 3/7] tcg: Define guest_default_memory_order in TCGCPUOps Philippe Mathieu-Daudé
` (5 subsequent siblings)
7 siblings, 0 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
Now that TCG_GUEST_DEFAULT_MO is always defined,
simplify the tcg_req_mo() macro.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
accel/tcg/internal-target.h | 9 +--------
accel/tcg/tcg-all.c | 3 ---
2 files changed, 1 insertion(+), 11 deletions(-)
diff --git a/accel/tcg/internal-target.h b/accel/tcg/internal-target.h
index 2cdf11c905e..1cb35dba99e 100644
--- a/accel/tcg/internal-target.h
+++ b/accel/tcg/internal-target.h
@@ -50,17 +50,10 @@ G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
* memory ordering vs the host memory ordering. A non-zero
* result indicates that some barrier is required.
*
- * If TCG_GUEST_DEFAULT_MO is not defined, assume that the
- * guest requires strict ordering.
- *
* This is a macro so that it's constant even without optimization.
*/
-#ifdef TCG_GUEST_DEFAULT_MO
-# define tcg_req_mo(type) \
+#define tcg_req_mo(type) \
((type) & TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO)
-#else
-# define tcg_req_mo(type) ((type) & ~TCG_TARGET_DEFAULT_MO)
-#endif
/**
* cpu_req_mo:
diff --git a/accel/tcg/tcg-all.c b/accel/tcg/tcg-all.c
index c1a30b01219..cb632cc8cc7 100644
--- a/accel/tcg/tcg-all.c
+++ b/accel/tcg/tcg-all.c
@@ -77,9 +77,6 @@ static bool default_mttcg_enabled(void)
return false;
}
#ifdef TARGET_SUPPORTS_MTTCG
-# ifndef TCG_GUEST_DEFAULT_MO
-# error "TARGET_SUPPORTS_MTTCG without TCG_GUEST_DEFAULT_MO"
-# endif
return true;
#else
return false;
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH-for-10.1 v2 3/7] tcg: Define guest_default_memory_order in TCGCPUOps
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 1/7] tcg: Always define TCG_GUEST_DEFAULT_MO Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 2/7] tcg: Simplify tcg_req_mo() macro Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 4/7] tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code() Philippe Mathieu-Daudé
` (4 subsequent siblings)
7 siblings, 0 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
Add the TCGCPUOps::guest_default_memory_order field and have
each target initialize it.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
include/accel/tcg/cpu-ops.h | 8 ++++++++
target/alpha/cpu.c | 2 ++
target/arm/cpu.c | 2 ++
target/arm/tcg/cpu-v7m.c | 2 ++
target/avr/cpu.c | 1 +
target/hexagon/cpu.c | 1 +
target/hppa/cpu.c | 2 ++
target/i386/tcg/tcg-cpu.c | 2 ++
target/loongarch/cpu.c | 2 ++
target/m68k/cpu.c | 2 ++
target/microblaze/cpu.c | 2 ++
target/mips/cpu.c | 2 ++
target/openrisc/cpu.c | 2 ++
target/ppc/cpu_init.c | 2 ++
target/riscv/tcg/tcg-cpu.c | 2 ++
target/rx/cpu.c | 2 ++
target/s390x/cpu.c | 2 ++
target/sh4/cpu.c | 2 ++
target/sparc/cpu.c | 2 ++
target/tricore/cpu.c | 1 +
target/xtensa/cpu.c | 2 ++
21 files changed, 45 insertions(+)
diff --git a/include/accel/tcg/cpu-ops.h b/include/accel/tcg/cpu-ops.h
index f60e5303f21..5fd299cefb6 100644
--- a/include/accel/tcg/cpu-ops.h
+++ b/include/accel/tcg/cpu-ops.h
@@ -16,8 +16,16 @@
#include "exec/memop.h"
#include "exec/mmu-access-type.h"
#include "exec/vaddr.h"
+#include "tcg/tcg-mo.h"
struct TCGCPUOps {
+
+ /**
+ * @guest_default_memory_order: default barrier that is required
+ * for the guest memory ordering.
+ */
+ TCGBar guest_default_memory_order;
+
/**
* @initialize: Initialize TCG state
*
diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index 584c2aa76bd..00905d48621 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -239,6 +239,8 @@ static const TCGCPUOps alpha_tcg_ops = {
.synchronize_from_tb = alpha_cpu_synchronize_from_tb,
.restore_state_to_opc = alpha_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifdef CONFIG_USER_ONLY
.record_sigsegv = alpha_cpu_record_sigsegv,
.record_sigbus = alpha_cpu_record_sigbus,
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 01786ac7879..9e858ae8c77 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -2675,6 +2675,8 @@ static const TCGCPUOps arm_tcg_ops = {
.debug_excp_handler = arm_debug_excp_handler,
.restore_state_to_opc = arm_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifdef CONFIG_USER_ONLY
.record_sigsegv = arm_cpu_record_sigsegv,
.record_sigbus = arm_cpu_record_sigbus,
diff --git a/target/arm/tcg/cpu-v7m.c b/target/arm/tcg/cpu-v7m.c
index c4dd3092726..6f714324ffd 100644
--- a/target/arm/tcg/cpu-v7m.c
+++ b/target/arm/tcg/cpu-v7m.c
@@ -238,6 +238,8 @@ static const TCGCPUOps arm_v7m_tcg_ops = {
.debug_excp_handler = arm_debug_excp_handler,
.restore_state_to_opc = arm_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifdef CONFIG_USER_ONLY
.record_sigsegv = arm_cpu_record_sigsegv,
.record_sigbus = arm_cpu_record_sigbus,
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
index 834c7082aa7..330e50f74e7 100644
--- a/target/avr/cpu.c
+++ b/target/avr/cpu.c
@@ -216,6 +216,7 @@ static const TCGCPUOps avr_tcg_ops = {
.cpu_exec_halt = avr_cpu_has_work,
.tlb_fill = avr_cpu_tlb_fill,
.do_interrupt = avr_cpu_do_interrupt,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
};
static void avr_cpu_class_init(ObjectClass *oc, void *data)
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index 766b6786511..669f7440f52 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -324,6 +324,7 @@ static const TCGCPUOps hexagon_tcg_ops = {
.translate_code = hexagon_translate_code,
.synchronize_from_tb = hexagon_cpu_synchronize_from_tb,
.restore_state_to_opc = hexagon_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
};
static void hexagon_cpu_class_init(ObjectClass *c, void *data)
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 2a85495d02f..15cbcd2d957 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -257,6 +257,8 @@ static const TCGCPUOps hppa_tcg_ops = {
.synchronize_from_tb = hppa_cpu_synchronize_from_tb,
.restore_state_to_opc = hppa_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill_align = hppa_cpu_tlb_fill_align,
.cpu_exec_interrupt = hppa_cpu_exec_interrupt,
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index b8aff825eec..de2fe8e04f4 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -128,6 +128,8 @@ static const TCGCPUOps x86_tcg_ops = {
.debug_check_breakpoint = x86_debug_check_breakpoint,
.need_replay_interrupt = x86_need_replay_interrupt,
#endif /* !CONFIG_USER_ONLY */
+
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
};
static void x86_tcg_cpu_init_ops(AccelCPUClass *accel_cpu, CPUClass *cc)
diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
index ea1665e2705..5b9dd5048d4 100644
--- a/target/loongarch/cpu.c
+++ b/target/loongarch/cpu.c
@@ -869,6 +869,8 @@ static const TCGCPUOps loongarch_tcg_ops = {
.synchronize_from_tb = loongarch_cpu_synchronize_from_tb,
.restore_state_to_opc = loongarch_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = loongarch_cpu_tlb_fill,
.cpu_exec_interrupt = loongarch_cpu_exec_interrupt,
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index 0065e1c1ca5..dc742ddc2cb 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -593,6 +593,8 @@ static const TCGCPUOps m68k_tcg_ops = {
.translate_code = m68k_translate_code,
.restore_state_to_opc = m68k_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = m68k_cpu_tlb_fill,
.cpu_exec_interrupt = m68k_cpu_exec_interrupt,
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index f3bebea856e..32f9e32502c 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -432,6 +432,8 @@ static const TCGCPUOps mb_tcg_ops = {
.synchronize_from_tb = mb_cpu_synchronize_from_tb,
.restore_state_to_opc = mb_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = mb_cpu_tlb_fill,
.cpu_exec_interrupt = mb_cpu_exec_interrupt,
diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index b207106dd79..207b7d3c8db 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -554,6 +554,8 @@ static const TCGCPUOps mips_tcg_ops = {
.synchronize_from_tb = mips_cpu_synchronize_from_tb,
.restore_state_to_opc = mips_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#if !defined(CONFIG_USER_ONLY)
.tlb_fill = mips_cpu_tlb_fill,
.cpu_exec_interrupt = mips_cpu_exec_interrupt,
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index e8abf1f8b5c..c6a1d603afb 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -248,6 +248,8 @@ static const TCGCPUOps openrisc_tcg_ops = {
.synchronize_from_tb = openrisc_cpu_synchronize_from_tb,
.restore_state_to_opc = openrisc_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = openrisc_cpu_tlb_fill,
.cpu_exec_interrupt = openrisc_cpu_exec_interrupt,
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index 8b590e7f17c..28f6f6bc2ba 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -7490,6 +7490,8 @@ static const TCGCPUOps ppc_tcg_ops = {
.translate_code = ppc_translate_code,
.restore_state_to_opc = ppc_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifdef CONFIG_USER_ONLY
.record_sigsegv = ppc_cpu_record_sigsegv,
#else
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index 5aef9eef366..0e5fa10784d 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -139,6 +139,8 @@ static const TCGCPUOps riscv_tcg_ops = {
.synchronize_from_tb = riscv_cpu_synchronize_from_tb,
.restore_state_to_opc = riscv_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = riscv_cpu_tlb_fill,
.cpu_exec_interrupt = riscv_cpu_exec_interrupt,
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index 0ba0d55ab5b..ae78c661079 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -212,6 +212,8 @@ static const TCGCPUOps rx_tcg_ops = {
.cpu_exec_interrupt = rx_cpu_exec_interrupt,
.cpu_exec_halt = rx_cpu_has_work,
.do_interrupt = rx_cpu_do_interrupt,
+
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
};
static void rx_cpu_class_init(ObjectClass *klass, void *data)
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index d73142600bf..975b8353026 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -360,6 +360,8 @@ static const TCGCPUOps s390_tcg_ops = {
.debug_excp_handler = s390x_cpu_debug_excp_handler,
.do_unaligned_access = s390x_cpu_do_unaligned_access,
#endif /* !CONFIG_USER_ONLY */
+
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
};
#endif /* CONFIG_TCG */
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index ce84bdf539a..6d319dd01c7 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -267,6 +267,8 @@ static const TCGCPUOps superh_tcg_ops = {
.synchronize_from_tb = superh_cpu_synchronize_from_tb,
.restore_state_to_opc = superh_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = superh_cpu_tlb_fill,
.cpu_exec_interrupt = superh_cpu_exec_interrupt,
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 57161201173..961c7f92a84 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -1005,6 +1005,8 @@ static const TCGCPUOps sparc_tcg_ops = {
.synchronize_from_tb = sparc_cpu_synchronize_from_tb,
.restore_state_to_opc = sparc_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = sparc_cpu_tlb_fill,
.cpu_exec_interrupt = sparc_cpu_exec_interrupt,
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
index 16acc4ecb92..960e7093f1c 100644
--- a/target/tricore/cpu.c
+++ b/target/tricore/cpu.c
@@ -179,6 +179,7 @@ static const TCGCPUOps tricore_tcg_ops = {
.tlb_fill = tricore_cpu_tlb_fill,
.cpu_exec_interrupt = tricore_cpu_exec_interrupt,
.cpu_exec_halt = tricore_cpu_has_work,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
};
static void tricore_cpu_class_init(ObjectClass *c, void *data)
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index 7663b62d01e..0a4068ad7bf 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -237,6 +237,8 @@ static const TCGCPUOps xtensa_tcg_ops = {
.debug_excp_handler = xtensa_breakpoint_handler,
.restore_state_to_opc = xtensa_restore_state_to_opc,
+ .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+
#ifndef CONFIG_USER_ONLY
.tlb_fill = xtensa_cpu_tlb_fill,
.cpu_exec_interrupt = xtensa_cpu_exec_interrupt,
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH-for-10.1 v2 4/7] tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code()
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
` (2 preceding siblings ...)
2025-03-21 18:15 ` [PATCH-for-10.1 v2 3/7] tcg: Define guest_default_memory_order in TCGCPUOps Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-03-23 18:01 ` Richard Henderson
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 5/7] tcg: Propagate CPUState argument to cpu_req_mo() Philippe Mathieu-Daudé
` (3 subsequent siblings)
7 siblings, 1 reply; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
Use TCGCPUOps::guest_default_memory_order to set TCGContext::guest_mo.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
accel/tcg/translate-all.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index fb9f83dbba3..26442e83776 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -349,7 +349,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
#endif
tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
- tcg_ctx->guest_mo = TCG_GUEST_DEFAULT_MO;
+ tcg_ctx->guest_mo = cpu->cc->tcg_ops->guest_default_memory_order;
restart_translate:
trace_translate_block(tb, pc, tb->tc.ptr);
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC PATCH-for-10.1 v2 5/7] tcg: Propagate CPUState argument to cpu_req_mo()
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
` (3 preceding siblings ...)
2025-03-21 18:15 ` [PATCH-for-10.1 v2 4/7] tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code() Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-03-23 18:05 ` Richard Henderson
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 6/7] tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
` (2 subsequent siblings)
7 siblings, 1 reply; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
In preparation for tcg_req_mo() accessing CPUState in the next
commit, pass CPUState to cpu_req_mo(), its single caller.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
accel/tcg/internal-target.h | 3 ++-
accel/tcg/cputlb.c | 20 ++++++++++----------
accel/tcg/user-exec.c | 20 ++++++++++----------
3 files changed, 22 insertions(+), 21 deletions(-)
diff --git a/accel/tcg/internal-target.h b/accel/tcg/internal-target.h
index 1cb35dba99e..992362be7e6 100644
--- a/accel/tcg/internal-target.h
+++ b/accel/tcg/internal-target.h
@@ -57,12 +57,13 @@ G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
/**
* cpu_req_mo:
+ * @cpu: CPUState
* @type: TCGBar
*
* If tcg_req_mo indicates a barrier for @type is required
* for the guest memory model, issue a host memory barrier.
*/
-#define cpu_req_mo(type) \
+#define cpu_req_mo(cpu, type) \
do { \
if (tcg_req_mo(type)) { \
smp_mb(); \
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index fb22048876e..b6713efdb81 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2321,7 +2321,7 @@ static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
MMULookupLocals l;
bool crosspage;
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l);
tcg_debug_assert(!crosspage);
@@ -2336,7 +2336,7 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uint16_t ret;
uint8_t a, b;
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l);
if (likely(!crosspage)) {
return do_ld_2(cpu, &l.page[0], l.mmu_idx, access_type, l.memop, ra);
@@ -2360,7 +2360,7 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
bool crosspage;
uint32_t ret;
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l);
if (likely(!crosspage)) {
return do_ld_4(cpu, &l.page[0], l.mmu_idx, access_type, l.memop, ra);
@@ -2381,7 +2381,7 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
bool crosspage;
uint64_t ret;
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l);
if (likely(!crosspage)) {
return do_ld_8(cpu, &l.page[0], l.mmu_idx, access_type, l.memop, ra);
@@ -2404,7 +2404,7 @@ static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr,
Int128 ret;
int first;
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_LOAD, &l);
if (likely(!crosspage)) {
if (unlikely(l.page[0].flags & TLB_MMIO)) {
@@ -2732,7 +2732,7 @@ static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
MMULookupLocals l;
bool crosspage;
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
tcg_debug_assert(!crosspage);
@@ -2746,7 +2746,7 @@ static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
bool crosspage;
uint8_t a, b;
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
if (likely(!crosspage)) {
do_st_2(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra);
@@ -2768,7 +2768,7 @@ static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
MMULookupLocals l;
bool crosspage;
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
if (likely(!crosspage)) {
do_st_4(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra);
@@ -2789,7 +2789,7 @@ static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
MMULookupLocals l;
bool crosspage;
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
if (likely(!crosspage)) {
do_st_8(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra);
@@ -2812,7 +2812,7 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
uint64_t a, b;
int first;
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
if (likely(!crosspage)) {
if (unlikely(l.page[0].flags & TLB_MMIO)) {
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 2322181b151..5bda8fb5514 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1059,7 +1059,7 @@ static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
void *haddr;
uint8_t ret;
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type);
ret = ldub_p(haddr);
clear_helper_retaddr();
@@ -1073,7 +1073,7 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uint16_t ret;
MemOp mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_2(cpu, ra, haddr, mop);
clear_helper_retaddr();
@@ -1091,7 +1091,7 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uint32_t ret;
MemOp mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_4(cpu, ra, haddr, mop);
clear_helper_retaddr();
@@ -1109,7 +1109,7 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
uint64_t ret;
MemOp mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
ret = load_atom_8(cpu, ra, haddr, mop);
clear_helper_retaddr();
@@ -1128,7 +1128,7 @@ static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
MemOp mop = get_memop(oi);
tcg_debug_assert((mop & MO_SIZE) == MO_128);
- cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
+ cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD);
ret = load_atom_16(cpu, ra, haddr, mop);
clear_helper_retaddr();
@@ -1144,7 +1144,7 @@ static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
{
void *haddr;
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE);
stb_p(haddr, val);
clear_helper_retaddr();
@@ -1156,7 +1156,7 @@ static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
void *haddr;
MemOp mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
@@ -1172,7 +1172,7 @@ static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
void *haddr;
MemOp mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
@@ -1188,7 +1188,7 @@ static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
void *haddr;
MemOp mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
@@ -1204,7 +1204,7 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
void *haddr;
MemOpIdx mop = get_memop(oi);
- cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
+ cpu_req_mo(cpu, TCG_MO_LD_ST | TCG_MO_ST_ST);
haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
if (mop & MO_BSWAP) {
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [RFC PATCH-for-10.1 v2 6/7] tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
` (4 preceding siblings ...)
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 5/7] tcg: Propagate CPUState argument to cpu_req_mo() Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-03-23 18:06 ` Richard Henderson
2025-03-21 18:15 ` [PATCH-for-10.1 v2 7/7] tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally Philippe Mathieu-Daudé
2025-04-02 20:00 ` [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Richard Henderson
7 siblings, 1 reply; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
In order to use TCG with multiple targets, replace the
compile time use of TCG_GUEST_DEFAULT_MO by a runtime access
to TCGCPUOps::guest_default_memory_order via CPUState.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
accel/tcg/internal-target.h | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/accel/tcg/internal-target.h b/accel/tcg/internal-target.h
index 992362be7e6..d5b8c4b730b 100644
--- a/accel/tcg/internal-target.h
+++ b/accel/tcg/internal-target.h
@@ -44,16 +44,15 @@ G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
/**
* tcg_req_mo:
+ * @guest_mo: Guest default memory order
* @type: TCGBar
*
* Filter @type to the barrier that is required for the guest
* memory ordering vs the host memory ordering. A non-zero
* result indicates that some barrier is required.
- *
- * This is a macro so that it's constant even without optimization.
*/
-#define tcg_req_mo(type) \
- ((type) & TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO)
+#define tcg_req_mo(guest_mo, type) \
+ ((type) & guest_mo & ~TCG_TARGET_DEFAULT_MO)
/**
* cpu_req_mo:
@@ -65,7 +64,7 @@ G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr);
*/
#define cpu_req_mo(cpu, type) \
do { \
- if (tcg_req_mo(type)) { \
+ if (tcg_req_mo(cpu->cc->tcg_ops->guest_default_memory_order, type)) { \
smp_mb(); \
} \
} while (0)
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH-for-10.1 v2 7/7] tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
` (5 preceding siblings ...)
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 6/7] tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
@ 2025-03-21 18:15 ` Philippe Mathieu-Daudé
2025-04-02 20:00 ` [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Richard Henderson
7 siblings, 0 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-03-21 18:15 UTC (permalink / raw)
To: qemu-devel, Richard Henderson
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson, Philippe Mathieu-Daudé
By directly using TCGCPUOps::guest_default_memory_order,
we don't need the TCG_GUEST_DEFAULT_MO definition anymore.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
docs/devel/multi-thread-tcg.rst | 4 ++--
target/alpha/cpu-param.h | 3 ---
target/arm/cpu-param.h | 3 ---
target/avr/cpu-param.h | 2 --
target/hexagon/cpu-param.h | 3 ---
target/hppa/cpu-param.h | 8 --------
target/i386/cpu-param.h | 3 ---
target/loongarch/cpu-param.h | 2 --
target/m68k/cpu-param.h | 3 ---
target/microblaze/cpu-param.h | 3 ---
target/mips/cpu-param.h | 2 --
target/openrisc/cpu-param.h | 2 --
target/ppc/cpu-param.h | 2 --
target/riscv/cpu-param.h | 2 --
target/rx/cpu-param.h | 3 ---
target/s390x/cpu-param.h | 6 ------
target/sh4/cpu-param.h | 3 ---
target/sparc/cpu-param.h | 23 -----------------------
target/tricore/cpu-param.h | 3 ---
target/xtensa/cpu-param.h | 3 ---
target/alpha/cpu.c | 3 ++-
target/arm/cpu.c | 3 ++-
target/arm/tcg/cpu-v7m.c | 3 ++-
target/avr/cpu.c | 2 +-
target/hexagon/cpu.c | 3 ++-
target/hppa/cpu.c | 8 +++++++-
target/i386/tcg/tcg-cpu.c | 5 ++++-
target/loongarch/cpu.c | 2 +-
target/m68k/cpu.c | 3 ++-
target/microblaze/cpu.c | 3 ++-
target/mips/cpu.c | 2 +-
target/openrisc/cpu.c | 2 +-
target/ppc/cpu_init.c | 2 +-
target/riscv/tcg/tcg-cpu.c | 2 +-
target/rx/cpu.c | 3 ++-
target/s390x/cpu.c | 6 +++++-
target/sh4/cpu.c | 3 ++-
target/sparc/cpu.c | 23 ++++++++++++++++++++++-
target/tricore/cpu.c | 3 ++-
target/xtensa/cpu.c | 3 ++-
40 files changed, 66 insertions(+), 101 deletions(-)
diff --git a/docs/devel/multi-thread-tcg.rst b/docs/devel/multi-thread-tcg.rst
index b0f473961dd..14a2a9dc7b5 100644
--- a/docs/devel/multi-thread-tcg.rst
+++ b/docs/devel/multi-thread-tcg.rst
@@ -28,8 +28,8 @@ vCPU Scheduling
We introduce a new running mode where each vCPU will run on its own
user-space thread. This is enabled by default for all FE/BE
combinations where the host memory model is able to accommodate the
-guest (TCG_GUEST_DEFAULT_MO & ~TCG_TARGET_DEFAULT_MO is zero) and the
-guest has had the required work done to support this safely
+guest (TCGCPUOps::guest_default_memory_order & ~TCG_TARGET_DEFAULT_MO is zero)
+and the guest has had the required work done to support this safely
(TARGET_SUPPORTS_MTTCG).
System emulation will fall back to the original round robin approach
diff --git a/target/alpha/cpu-param.h b/target/alpha/cpu-param.h
index ff06e41497a..c74556d2667 100644
--- a/target/alpha/cpu-param.h
+++ b/target/alpha/cpu-param.h
@@ -25,7 +25,4 @@
# define TARGET_VIRT_ADDR_SPACE_BITS (30 + TARGET_PAGE_BITS)
#endif
-/* Alpha processors have a weak memory model */
-#define TCG_GUEST_DEFAULT_MO (0)
-
#endif
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index 896b35bd6d5..55de4d54544 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -38,7 +38,4 @@
# define TARGET_PAGE_BITS_MIN 10
#endif /* !CONFIG_USER_ONLY */
-/* ARM processors have a weak memory model */
-#define TCG_GUEST_DEFAULT_MO (0)
-
#endif
diff --git a/target/avr/cpu-param.h b/target/avr/cpu-param.h
index 81f3f49ee1f..11e827109c0 100644
--- a/target/avr/cpu-param.h
+++ b/target/avr/cpu-param.h
@@ -31,6 +31,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 24
#define TARGET_VIRT_ADDR_SPACE_BITS 24
-#define TCG_GUEST_DEFAULT_MO 0
-
#endif
diff --git a/target/hexagon/cpu-param.h b/target/hexagon/cpu-param.h
index 2d57ea6caf9..45ee7b46409 100644
--- a/target/hexagon/cpu-param.h
+++ b/target/hexagon/cpu-param.h
@@ -23,7 +23,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 36
#define TARGET_VIRT_ADDR_SPACE_BITS 32
-/* MTTCG not yet supported: require strict ordering */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/hppa/cpu-param.h b/target/hppa/cpu-param.h
index 7ed6b5741e7..e0b2c7c9157 100644
--- a/target/hppa/cpu-param.h
+++ b/target/hppa/cpu-param.h
@@ -19,12 +19,4 @@
#define TARGET_PAGE_BITS 12
-/* PA-RISC 1.x processors have a strong memory model. */
-/*
- * ??? While we do not yet implement PA-RISC 2.0, those processors have
- * a weak memory model, but with TLB bits that force ordering on a per-page
- * basis. It's probably easier to fall back to a strong memory model.
- */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/i386/cpu-param.h b/target/i386/cpu-param.h
index b0e884c5d70..909bc027923 100644
--- a/target/i386/cpu-param.h
+++ b/target/i386/cpu-param.h
@@ -22,7 +22,4 @@
#endif
#define TARGET_PAGE_BITS 12
-/* The x86 has a strong memory model with some store-after-load re-ordering */
-#define TCG_GUEST_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD)
-
#endif
diff --git a/target/loongarch/cpu-param.h b/target/loongarch/cpu-param.h
index 52437946e56..071567712b3 100644
--- a/target/loongarch/cpu-param.h
+++ b/target/loongarch/cpu-param.h
@@ -13,6 +13,4 @@
#define TARGET_PAGE_BITS 12
-#define TCG_GUEST_DEFAULT_MO (0)
-
#endif
diff --git a/target/m68k/cpu-param.h b/target/m68k/cpu-param.h
index 1a909eaa13e..7afbf6d302d 100644
--- a/target/m68k/cpu-param.h
+++ b/target/m68k/cpu-param.h
@@ -17,7 +17,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
-/* MTTCG not yet supported: require strict ordering */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/microblaze/cpu-param.h b/target/microblaze/cpu-param.h
index c866ec6c149..6a0714bb3d7 100644
--- a/target/microblaze/cpu-param.h
+++ b/target/microblaze/cpu-param.h
@@ -27,7 +27,4 @@
/* FIXME: MB uses variable pages down to 1K but linux only uses 4k. */
#define TARGET_PAGE_BITS 12
-/* MicroBlaze is always in-order. */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/mips/cpu-param.h b/target/mips/cpu-param.h
index 11b3ac0ac63..35fb6ea7243 100644
--- a/target/mips/cpu-param.h
+++ b/target/mips/cpu-param.h
@@ -25,6 +25,4 @@
#define TARGET_PAGE_BITS_MIN 12
#endif
-#define TCG_GUEST_DEFAULT_MO (0)
-
#endif
diff --git a/target/openrisc/cpu-param.h b/target/openrisc/cpu-param.h
index 37627f2c394..3011bf5fcca 100644
--- a/target/openrisc/cpu-param.h
+++ b/target/openrisc/cpu-param.h
@@ -12,6 +12,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
-#define TCG_GUEST_DEFAULT_MO (0)
-
#endif
diff --git a/target/ppc/cpu-param.h b/target/ppc/cpu-param.h
index 6c4525fdf3c..2cee113ddd3 100644
--- a/target/ppc/cpu-param.h
+++ b/target/ppc/cpu-param.h
@@ -38,6 +38,4 @@
# define TARGET_PAGE_BITS 12
#endif
-#define TCG_GUEST_DEFAULT_MO 0
-
#endif
diff --git a/target/riscv/cpu-param.h b/target/riscv/cpu-param.h
index fba30e966a8..a80310ef2c5 100644
--- a/target/riscv/cpu-param.h
+++ b/target/riscv/cpu-param.h
@@ -26,6 +26,4 @@
* - M mode HLV/HLVX/HSV 0b111
*/
-#define TCG_GUEST_DEFAULT_MO 0
-
#endif
diff --git a/target/rx/cpu-param.h b/target/rx/cpu-param.h
index 2ce199164d7..ef1970a09e9 100644
--- a/target/rx/cpu-param.h
+++ b/target/rx/cpu-param.h
@@ -24,7 +24,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
-/* MTTCG not yet supported: require strict ordering */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/s390x/cpu-param.h b/target/s390x/cpu-param.h
index 5c331ec424c..a5f798eeae7 100644
--- a/target/s390x/cpu-param.h
+++ b/target/s390x/cpu-param.h
@@ -12,10 +12,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 64
#define TARGET_VIRT_ADDR_SPACE_BITS 64
-/*
- * The z/Architecture has a strong memory model with some
- * store-after-load re-ordering.
- */
-#define TCG_GUEST_DEFAULT_MO (TCG_MO_ALL & ~TCG_MO_ST_LD)
-
#endif
diff --git a/target/sh4/cpu-param.h b/target/sh4/cpu-param.h
index 1bc90d4695e..2b6e11dd0ac 100644
--- a/target/sh4/cpu-param.h
+++ b/target/sh4/cpu-param.h
@@ -16,7 +16,4 @@
# define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif
-/* MTTCG not yet supported: require strict ordering */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/sparc/cpu-param.h b/target/sparc/cpu-param.h
index 6952ee2b826..6e8e2a51469 100644
--- a/target/sparc/cpu-param.h
+++ b/target/sparc/cpu-param.h
@@ -21,27 +21,4 @@
# define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif
-/*
- * From Oracle SPARC Architecture 2015:
- *
- * Compatibility notes: The PSO memory model described in SPARC V8 and
- * SPARC V9 compatibility architecture specifications was never implemented
- * in a SPARC V9 implementation and is not included in the Oracle SPARC
- * Architecture specification.
- *
- * The RMO memory model described in the SPARC V9 specification was
- * implemented in some non-Sun SPARC V9 implementations, but is not
- * directly supported in Oracle SPARC Architecture 2015 implementations.
- *
- * Therefore always use TSO in QEMU.
- *
- * D.5 Specification of Partial Store Order (PSO)
- * ... [loads] are followed by an implied MEMBAR #LoadLoad | #LoadStore.
- *
- * D.6 Specification of Total Store Order (TSO)
- * ... PSO with the additional requirement that all [stores] are followed
- * by an implied MEMBAR #StoreStore.
- */
-#define TCG_GUEST_DEFAULT_MO (TCG_MO_LD_LD | TCG_MO_LD_ST | TCG_MO_ST_ST)
-
#endif
diff --git a/target/tricore/cpu-param.h b/target/tricore/cpu-param.h
index 923459370cc..790242ef3d2 100644
--- a/target/tricore/cpu-param.h
+++ b/target/tricore/cpu-param.h
@@ -12,7 +12,4 @@
#define TARGET_PHYS_ADDR_SPACE_BITS 32
#define TARGET_VIRT_ADDR_SPACE_BITS 32
-/* MTTCG not yet supported: require strict ordering */
-#define TCG_GUEST_DEFAULT_MO TCG_MO_ALL
-
#endif
diff --git a/target/xtensa/cpu-param.h b/target/xtensa/cpu-param.h
index 5e4848ad059..06d85218b84 100644
--- a/target/xtensa/cpu-param.h
+++ b/target/xtensa/cpu-param.h
@@ -16,7 +16,4 @@
#define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif
-/* Xtensa processors have a weak memory model */
-#define TCG_GUEST_DEFAULT_MO (0)
-
#endif
diff --git a/target/alpha/cpu.c b/target/alpha/cpu.c
index 00905d48621..e5e14976f51 100644
--- a/target/alpha/cpu.c
+++ b/target/alpha/cpu.c
@@ -239,7 +239,8 @@ static const TCGCPUOps alpha_tcg_ops = {
.synchronize_from_tb = alpha_cpu_synchronize_from_tb,
.restore_state_to_opc = alpha_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* Alpha processors have a weak memory model */
+ .guest_default_memory_order = 0,
#ifdef CONFIG_USER_ONLY
.record_sigsegv = alpha_cpu_record_sigsegv,
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 9e858ae8c77..8b9f2acf82b 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -2675,7 +2675,8 @@ static const TCGCPUOps arm_tcg_ops = {
.debug_excp_handler = arm_debug_excp_handler,
.restore_state_to_opc = arm_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* ARM processors have a weak memory model */
+ .guest_default_memory_order = 0,
#ifdef CONFIG_USER_ONLY
.record_sigsegv = arm_cpu_record_sigsegv,
diff --git a/target/arm/tcg/cpu-v7m.c b/target/arm/tcg/cpu-v7m.c
index 6f714324ffd..df6b7198944 100644
--- a/target/arm/tcg/cpu-v7m.c
+++ b/target/arm/tcg/cpu-v7m.c
@@ -238,7 +238,8 @@ static const TCGCPUOps arm_v7m_tcg_ops = {
.debug_excp_handler = arm_debug_excp_handler,
.restore_state_to_opc = arm_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* ARM processors have a weak memory model */
+ .guest_default_memory_order = 0,
#ifdef CONFIG_USER_ONLY
.record_sigsegv = arm_cpu_record_sigsegv,
diff --git a/target/avr/cpu.c b/target/avr/cpu.c
index 330e50f74e7..24e52e28f44 100644
--- a/target/avr/cpu.c
+++ b/target/avr/cpu.c
@@ -216,7 +216,7 @@ static const TCGCPUOps avr_tcg_ops = {
.cpu_exec_halt = avr_cpu_has_work,
.tlb_fill = avr_cpu_tlb_fill,
.do_interrupt = avr_cpu_do_interrupt,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ .guest_default_memory_order = 0,
};
static void avr_cpu_class_init(ObjectClass *oc, void *data)
diff --git a/target/hexagon/cpu.c b/target/hexagon/cpu.c
index 669f7440f52..34734b0edb0 100644
--- a/target/hexagon/cpu.c
+++ b/target/hexagon/cpu.c
@@ -324,7 +324,8 @@ static const TCGCPUOps hexagon_tcg_ops = {
.translate_code = hexagon_translate_code,
.synchronize_from_tb = hexagon_cpu_synchronize_from_tb,
.restore_state_to_opc = hexagon_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* MTTCG not yet supported: require strict ordering */
+ .guest_default_memory_order = TCG_MO_ALL,
};
static void hexagon_cpu_class_init(ObjectClass *c, void *data)
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 15cbcd2d957..997bd69db19 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -257,7 +257,13 @@ static const TCGCPUOps hppa_tcg_ops = {
.synchronize_from_tb = hppa_cpu_synchronize_from_tb,
.restore_state_to_opc = hppa_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* PA-RISC 1.x processors have a strong memory model. */
+ /*
+ * ??? While we do not yet implement PA-RISC 2.0, those processors have
+ * a weak memory model, but with TLB bits that force ordering on a per-page
+ * basis. It's probably easier to fall back to a strong memory model.
+ */
+ .guest_default_memory_order = TCG_MO_ALL,
#ifndef CONFIG_USER_ONLY
.tlb_fill_align = hppa_cpu_tlb_fill_align,
diff --git a/target/i386/tcg/tcg-cpu.c b/target/i386/tcg/tcg-cpu.c
index de2fe8e04f4..4a76c475971 100644
--- a/target/i386/tcg/tcg-cpu.c
+++ b/target/i386/tcg/tcg-cpu.c
@@ -129,7 +129,10 @@ static const TCGCPUOps x86_tcg_ops = {
.need_replay_interrupt = x86_need_replay_interrupt,
#endif /* !CONFIG_USER_ONLY */
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /*
+ * The x86 has a strong memory model with some store-after-load re-ordering
+ */
+ .guest_default_memory_order = TCG_MO_ALL & ~TCG_MO_ST_LD,
};
static void x86_tcg_cpu_init_ops(AccelCPUClass *accel_cpu, CPUClass *cc)
diff --git a/target/loongarch/cpu.c b/target/loongarch/cpu.c
index 5b9dd5048d4..c39ff056157 100644
--- a/target/loongarch/cpu.c
+++ b/target/loongarch/cpu.c
@@ -869,7 +869,7 @@ static const TCGCPUOps loongarch_tcg_ops = {
.synchronize_from_tb = loongarch_cpu_synchronize_from_tb,
.restore_state_to_opc = loongarch_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ .guest_default_memory_order = 0,
#ifndef CONFIG_USER_ONLY
.tlb_fill = loongarch_cpu_tlb_fill,
diff --git a/target/m68k/cpu.c b/target/m68k/cpu.c
index dc742ddc2cb..e96b379e266 100644
--- a/target/m68k/cpu.c
+++ b/target/m68k/cpu.c
@@ -593,7 +593,8 @@ static const TCGCPUOps m68k_tcg_ops = {
.translate_code = m68k_translate_code,
.restore_state_to_opc = m68k_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* MTTCG not yet supported: require strict ordering */
+ .guest_default_memory_order = TCG_MO_ALL,
#ifndef CONFIG_USER_ONLY
.tlb_fill = m68k_cpu_tlb_fill,
diff --git a/target/microblaze/cpu.c b/target/microblaze/cpu.c
index 32f9e32502c..4b9ef6e52c4 100644
--- a/target/microblaze/cpu.c
+++ b/target/microblaze/cpu.c
@@ -432,7 +432,8 @@ static const TCGCPUOps mb_tcg_ops = {
.synchronize_from_tb = mb_cpu_synchronize_from_tb,
.restore_state_to_opc = mb_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* MicroBlaze is always in-order. */
+ .guest_default_memory_order = TCG_MO_ALL,
#ifndef CONFIG_USER_ONLY
.tlb_fill = mb_cpu_tlb_fill,
diff --git a/target/mips/cpu.c b/target/mips/cpu.c
index 207b7d3c8db..5ddc9bbb829 100644
--- a/target/mips/cpu.c
+++ b/target/mips/cpu.c
@@ -554,7 +554,7 @@ static const TCGCPUOps mips_tcg_ops = {
.synchronize_from_tb = mips_cpu_synchronize_from_tb,
.restore_state_to_opc = mips_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ .guest_default_memory_order = 0,
#if !defined(CONFIG_USER_ONLY)
.tlb_fill = mips_cpu_tlb_fill,
diff --git a/target/openrisc/cpu.c b/target/openrisc/cpu.c
index c6a1d603afb..6a878aaadd8 100644
--- a/target/openrisc/cpu.c
+++ b/target/openrisc/cpu.c
@@ -248,7 +248,7 @@ static const TCGCPUOps openrisc_tcg_ops = {
.synchronize_from_tb = openrisc_cpu_synchronize_from_tb,
.restore_state_to_opc = openrisc_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ .guest_default_memory_order = 0,
#ifndef CONFIG_USER_ONLY
.tlb_fill = openrisc_cpu_tlb_fill,
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index 28f6f6bc2ba..28fbbb8d3c1 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -7490,7 +7490,7 @@ static const TCGCPUOps ppc_tcg_ops = {
.translate_code = ppc_translate_code,
.restore_state_to_opc = ppc_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ .guest_default_memory_order = 0,
#ifdef CONFIG_USER_ONLY
.record_sigsegv = ppc_cpu_record_sigsegv,
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index 0e5fa10784d..fb903992faa 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -139,7 +139,7 @@ static const TCGCPUOps riscv_tcg_ops = {
.synchronize_from_tb = riscv_cpu_synchronize_from_tb,
.restore_state_to_opc = riscv_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ .guest_default_memory_order = 0,
#ifndef CONFIG_USER_ONLY
.tlb_fill = riscv_cpu_tlb_fill,
diff --git a/target/rx/cpu.c b/target/rx/cpu.c
index ae78c661079..6a24e7e9136 100644
--- a/target/rx/cpu.c
+++ b/target/rx/cpu.c
@@ -213,7 +213,8 @@ static const TCGCPUOps rx_tcg_ops = {
.cpu_exec_halt = rx_cpu_has_work,
.do_interrupt = rx_cpu_do_interrupt,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* MTTCG not yet supported: require strict ordering */
+ .guest_default_memory_order = TCG_MO_ALL,
};
static void rx_cpu_class_init(ObjectClass *klass, void *data)
diff --git a/target/s390x/cpu.c b/target/s390x/cpu.c
index 975b8353026..12fd853c00a 100644
--- a/target/s390x/cpu.c
+++ b/target/s390x/cpu.c
@@ -361,7 +361,11 @@ static const TCGCPUOps s390_tcg_ops = {
.do_unaligned_access = s390x_cpu_do_unaligned_access,
#endif /* !CONFIG_USER_ONLY */
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /*
+ * The z/Architecture has a strong memory model with some
+ * store-after-load re-ordering.
+ */
+ .guest_default_memory_order = TCG_MO_ALL & ~TCG_MO_ST_LD,
};
#endif /* CONFIG_TCG */
diff --git a/target/sh4/cpu.c b/target/sh4/cpu.c
index 6d319dd01c7..ce9ed75107a 100644
--- a/target/sh4/cpu.c
+++ b/target/sh4/cpu.c
@@ -267,7 +267,8 @@ static const TCGCPUOps superh_tcg_ops = {
.synchronize_from_tb = superh_cpu_synchronize_from_tb,
.restore_state_to_opc = superh_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* MTTCG not yet supported: require strict ordering */
+ .guest_default_memory_order = TCG_MO_ALL,
#ifndef CONFIG_USER_ONLY
.tlb_fill = superh_cpu_tlb_fill,
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 961c7f92a84..39bd0c42855 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -1005,7 +1005,28 @@ static const TCGCPUOps sparc_tcg_ops = {
.synchronize_from_tb = sparc_cpu_synchronize_from_tb,
.restore_state_to_opc = sparc_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /*
+ * From Oracle SPARC Architecture 2015:
+ *
+ * Compatibility notes: The PSO memory model described in SPARC V8 and
+ * SPARC V9 compatibility architecture specifications was never
+ * implemented in a SPARC V9 implementation and is not included in the
+ * Oracle SPARC Architecture specification.
+ *
+ * The RMO memory model described in the SPARC V9 specification was
+ * implemented in some non-Sun SPARC V9 implementations, but is not
+ * directly supported in Oracle SPARC Architecture 2015 implementations.
+ *
+ * Therefore always use TSO in QEMU.
+ *
+ * D.5 Specification of Partial Store Order (PSO)
+ * ... [loads] are followed by an implied MEMBAR #LoadLoad | #LoadStore.
+ *
+ * D.6 Specification of Total Store Order (TSO)
+ * ... PSO with the additional requirement that all [stores] are followed
+ * by an implied MEMBAR #StoreStore.
+ */
+ .guest_default_memory_order = TCG_MO_LD_LD | TCG_MO_LD_ST | TCG_MO_ST_ST,
#ifndef CONFIG_USER_ONLY
.tlb_fill = sparc_cpu_tlb_fill,
diff --git a/target/tricore/cpu.c b/target/tricore/cpu.c
index 960e7093f1c..e0a48065948 100644
--- a/target/tricore/cpu.c
+++ b/target/tricore/cpu.c
@@ -179,7 +179,8 @@ static const TCGCPUOps tricore_tcg_ops = {
.tlb_fill = tricore_cpu_tlb_fill,
.cpu_exec_interrupt = tricore_cpu_exec_interrupt,
.cpu_exec_halt = tricore_cpu_has_work,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* MTTCG not yet supported: require strict ordering */
+ .guest_default_memory_order = TCG_MO_ALL,
};
static void tricore_cpu_class_init(ObjectClass *c, void *data)
diff --git a/target/xtensa/cpu.c b/target/xtensa/cpu.c
index 0a4068ad7bf..dd9061ba469 100644
--- a/target/xtensa/cpu.c
+++ b/target/xtensa/cpu.c
@@ -237,7 +237,8 @@ static const TCGCPUOps xtensa_tcg_ops = {
.debug_excp_handler = xtensa_breakpoint_handler,
.restore_state_to_opc = xtensa_restore_state_to_opc,
- .guest_default_memory_order = TCG_GUEST_DEFAULT_MO,
+ /* Xtensa processors have a weak memory model */
+ .guest_default_memory_order = 0,
#ifndef CONFIG_USER_ONLY
.tlb_fill = xtensa_cpu_tlb_fill,
--
2.47.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH-for-10.1 v2 4/7] tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code()
2025-03-21 18:15 ` [PATCH-for-10.1 v2 4/7] tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code() Philippe Mathieu-Daudé
@ 2025-03-23 18:01 ` Richard Henderson
0 siblings, 0 replies; 14+ messages in thread
From: Richard Henderson @ 2025-03-23 18:01 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, qemu-devel
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson
On 3/21/25 11:15, Philippe Mathieu-Daudé wrote:
> Use TCGCPUOps::guest_default_memory_order to set TCGContext::guest_mo.
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
> accel/tcg/translate-all.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
> index fb9f83dbba3..26442e83776 100644
> --- a/accel/tcg/translate-all.c
> +++ b/accel/tcg/translate-all.c
> @@ -349,7 +349,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
> tcg_ctx->tlb_dyn_max_bits = CPU_TLB_DYN_MAX_BITS;
> #endif
> tcg_ctx->insn_start_words = TARGET_INSN_START_WORDS;
> - tcg_ctx->guest_mo = TCG_GUEST_DEFAULT_MO;
> + tcg_ctx->guest_mo = cpu->cc->tcg_ops->guest_default_memory_order;
>
> restart_translate:
> trace_translate_block(tb, pc, tb->tc.ptr);
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC PATCH-for-10.1 v2 5/7] tcg: Propagate CPUState argument to cpu_req_mo()
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 5/7] tcg: Propagate CPUState argument to cpu_req_mo() Philippe Mathieu-Daudé
@ 2025-03-23 18:05 ` Richard Henderson
0 siblings, 0 replies; 14+ messages in thread
From: Richard Henderson @ 2025-03-23 18:05 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, qemu-devel
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson
On 3/21/25 11:15, Philippe Mathieu-Daudé wrote:
> In preparation of having tcg_req_mo() access CPUState in
> the next commit, pass it to cpu_req_mo(), its single caller.
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
> accel/tcg/internal-target.h | 3 ++-
> accel/tcg/cputlb.c | 20 ++++++++++----------
> accel/tcg/user-exec.c | 20 ++++++++++----------
> 3 files changed, 22 insertions(+), 21 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
^ permalink raw reply [flat|nested] 14+ messages in thread
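The hunks themselves are not quoted in this reply; at the call sites in
cputlb.c and user-exec.c the change is roughly of the following shape
(illustrative mask, not a literal hunk), threading through the CPUState
that the slow-path helpers already have so that the next patch can read
the per-CPU ordering mask:

    -    cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
    +    cpu_req_mo(cpu, TCG_MO_LD_LD | TCG_MO_ST_LD);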
* Re: [RFC PATCH-for-10.1 v2 6/7] tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 6/7] tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
@ 2025-03-23 18:06 ` Richard Henderson
0 siblings, 0 replies; 14+ messages in thread
From: Richard Henderson @ 2025-03-23 18:06 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, qemu-devel
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson
On 3/21/25 11:15, Philippe Mathieu-Daudé wrote:
> In order to use TCG with multiple targets, replace the
> compile time use of TCG_GUEST_DEFAULT_MO by a runtime access
> to TCGCPUOps::guest_default_memory_order via CPUState.
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
> accel/tcg/internal-target.h | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
` (6 preceding siblings ...)
2025-03-21 18:15 ` [PATCH-for-10.1 v2 7/7] tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally Philippe Mathieu-Daudé
@ 2025-04-02 20:00 ` Richard Henderson
2025-04-02 20:27 ` Philippe Mathieu-Daudé
7 siblings, 1 reply; 14+ messages in thread
From: Richard Henderson @ 2025-04-02 20:00 UTC (permalink / raw)
To: Philippe Mathieu-Daudé, qemu-devel
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson
On 3/21/25 11:15, Philippe Mathieu-Daudé wrote:
> Since v1:
> - Do not use tcg_ctx in tcg_req_mo (rth)
>
> Hi,
>
> In this series we replace the TCG_GUEST_DEFAULT_MO definition
> from "cpu-param.h" by a 'guest_default_memory_order' field in
> TCGCPUOps.
>
> Since tcg_req_mo() now accesses tcg_ctx, this impact the
> cpu_req_mo() calls in accel/tcg/{cputlb,user-exec}.c.
>
> The long term goal is to be able to use targets with distinct
> guest memory order restrictions.
>
> Philippe Mathieu-Daudé (7):
> tcg: Always define TCG_GUEST_DEFAULT_MO
> tcg: Simplify tcg_req_mo() macro
> tcg: Define guest_default_memory_order in TCGCPUOps
> tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code()
> tcg: Propagate CPUState argument to cpu_req_mo()
> tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order
> tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally
Queued to tcg-next, thanks.
r~
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order
2025-04-02 20:00 ` [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Richard Henderson
@ 2025-04-02 20:27 ` Philippe Mathieu-Daudé
2025-04-02 20:29 ` Philippe Mathieu-Daudé
0 siblings, 1 reply; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-04-02 20:27 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson
On 2/4/25 22:00, Richard Henderson wrote:
> On 3/21/25 11:15, Philippe Mathieu-Daudé wrote:
>> Since v1:
>> - Do not use tcg_ctx in tcg_req_mo (rth)
>>
>> Hi,
>>
>> In this series we replace the TCG_GUEST_DEFAULT_MO definition
>> from "cpu-param.h" by a 'guest_default_memory_order' field in
>> TCGCPUOps.
>>
>> Since tcg_req_mo() now accesses tcg_ctx, this impact the
>> cpu_req_mo() calls in accel/tcg/{cputlb,user-exec}.c.
>>
>> The long term goal is to be able to use targets with distinct
>> guest memory order restrictions.
>>
>> Philippe Mathieu-Daudé (7):
>> tcg: Always define TCG_GUEST_DEFAULT_MO
>> tcg: Simplify tcg_req_mo() macro
>> tcg: Define guest_default_memory_order in TCGCPUOps
>> tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code()
>> tcg: Propagate CPUState argument to cpu_req_mo()
>> tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order
>> tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally
>
> Queued to tcg-next, thanks.
Thanks, but I neglected to test on linux-user and found a pair of issues,
so I'll respin with them addressed.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order
2025-04-02 20:27 ` Philippe Mathieu-Daudé
@ 2025-04-02 20:29 ` Philippe Mathieu-Daudé
0 siblings, 0 replies; 14+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-04-02 20:29 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
Cc: Paolo Bonzini, Pierrick Bouvier, Alex Bennée,
Anton Johansson
On 2/4/25 22:27, Philippe Mathieu-Daudé wrote:
> On 2/4/25 22:00, Richard Henderson wrote:
>> On 3/21/25 11:15, Philippe Mathieu-Daudé wrote:
>>> Since v1:
>>> - Do not use tcg_ctx in tcg_req_mo (rth)
>>>
>>> Hi,
>>>
>>> In this series we replace the TCG_GUEST_DEFAULT_MO definition
>>> from "cpu-param.h" by a 'guest_default_memory_order' field in
>>> TCGCPUOps.
>>>
>>> Since tcg_req_mo() now accesses tcg_ctx, this impact the
>>> cpu_req_mo() calls in accel/tcg/{cputlb,user-exec}.c.
>>>
>>> The long term goal is to be able to use targets with distinct
>>> guest memory order restrictions.
>>>
>>> Philippe Mathieu-Daudé (7):
>>> tcg: Always define TCG_GUEST_DEFAULT_MO
>>> tcg: Simplify tcg_req_mo() macro
>>> tcg: Define guest_default_memory_order in TCGCPUOps
>>> tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code()
>>> tcg: Propagate CPUState argument to cpu_req_mo()
>>> tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order
>>> tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally
>>
>> Queued to tcg-next, thanks.
>
> Thanks but I neglected to test on linux-user and found a pair of issues,
> so I'll respin with them addressed.
Oops, wrong series (I meant the one about TCGCPUOps::mttcg_supported).
This one is OK.
^ permalink raw reply [flat|nested] 14+ messages in thread
End of thread. Thread overview (14+ messages):
2025-03-21 18:15 [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 1/7] tcg: Always define TCG_GUEST_DEFAULT_MO Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 2/7] tcg: Simplify tcg_req_mo() macro Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 3/7] tcg: Define guest_default_memory_order in TCGCPUOps Philippe Mathieu-Daudé
2025-03-21 18:15 ` [PATCH-for-10.1 v2 4/7] tcg: Remove use of TCG_GUEST_DEFAULT_MO in tb_gen_code() Philippe Mathieu-Daudé
2025-03-23 18:01 ` Richard Henderson
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 5/7] tcg: Propagate CPUState argument to cpu_req_mo() Philippe Mathieu-Daudé
2025-03-23 18:05 ` Richard Henderson
2025-03-21 18:15 ` [RFC PATCH-for-10.1 v2 6/7] tcg: Have tcg_req_mo() use TCGCPUOps::guest_default_memory_order Philippe Mathieu-Daudé
2025-03-23 18:06 ` Richard Henderson
2025-03-21 18:15 ` [PATCH-for-10.1 v2 7/7] tcg: Remove the TCG_GUEST_DEFAULT_MO definition globally Philippe Mathieu-Daudé
2025-04-02 20:00 ` [RFC PATCH-for-10.1 v2 0/7] tcg: Move TCG_GUEST_DEFAULT_MO -> TCGCPUOps::guest_default_memory_order Richard Henderson
2025-04-02 20:27 ` Philippe Mathieu-Daudé
2025-04-02 20:29 ` Philippe Mathieu-Daudé