* [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel
cpu->exit_request is used to kick vCPU threads into qemu_wait_io_event().
The code that handles the signaling of cpu->exit_request is messy, mostly
because no one has ever looked at it as a whole. In fact, as early as commit
4b8523ee896 ("kvm: First step to push iothread lock out of inner run loop",
2015-07-01), the read of cpu->exit_request was placed outside the BQL
critical section without much attention to ordering, and it only got
worse from that point, in no small part due to a young and naive me.
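For readers new to this area, the mechanism is roughly the following (a
simplified sketch with made-up helper names, not the exact QEMU sources):

    /* Any thread: ask a vCPU to leave the accelerator's inner loop. */
    static void request_exit(CPUState *cpu)
    {
        qatomic_set(&cpu->exit_request, true);
        qemu_cpu_kick(cpu);              /* interrupt KVM_RUN, TCG execution, ... */
    }

    /* vCPU thread: polled at the top of its execution loop. */
    static void vcpu_loop_iteration(CPUState *cpu)
    {
        if (qatomic_read(&cpu->exit_request)) {
            qatomic_set(&cpu->exit_request, false);
            qemu_wait_io_event(cpu);     /* sleep until there is work again */
        }
    }

request_exit() and vcpu_loop_iteration() do not exist under those names; the
real entry points are cpu_exit() and the per-accelerator run loops that this
series cleans up, and the plain qatomic_set/qatomic_read accesses above are
exactly what patches 9 and 10 replace with properly ordered ones.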
This series is complementary to the cpu->interrupt_request cleanups
that Igor did in "memory: reintroduce BQL-free fine-grained PIO/MMIO"
(https://lore.kernel.org/qemu-devel/20250808120137.2208800-1-imammedo@redhat.com/T/),
and in fact includes some extra changes to interrupt_request at the
beginning.
Paolo
Paolo Bonzini (19):
target/ppc: limit cpu_interrupt_exittb to system emulation
target/sparc: limit cpu_check_irqs to system emulation
target/i386: limit a20 to system emulation
target-arm: remove uses of cpu_interrupt() for user-mode emulation
user-exec: remove cpu_interrupt() stub
treewide: clear bits of cs->interrupt_request with cpu_reset_interrupt()
cpu-common: use atomic access for interrupt_request
cpus: document that qemu_cpu_kick() can be used for BQL-less operation
accel: use store_release/load_acquire for cross-thread exit_request
accel: use atomic accesses for exit_request
accel/tcg: create a thread-kick function for TCG
accel/tcg: inline cpu_exit()
cpus: remove TCG-ism from cpu_exit()
cpus: properly kick CPUs out of inner execution loop
treewide: rename qemu_wait_io_event/qemu_wait_io_event_common
bsd-user, linux-user: introduce qemu_process_cpu_events
cpus: clear exit_request in qemu_process_cpu_events
accel: make all calls to qemu_process_cpu_events look the same
tcg/user: do not set exit_request gratuitously
v2->v3:
- [accel: use store_release/load_acquire for cross-thread exit_request]
improve comments for memory barriers, match rr_kick_next_cpu() more
accurately
- [cpus: properly kick CPUs out of inner execution loop]
fix comment
- [treewide: rename qemu_wait_io_event/qemu_wait_io_event_common]
new patch; new names used throughout in patches 16-19
docs/devel/tcg-icount.rst | 2 +-
accel/tcg/tcg-accel-ops-mttcg.h | 3 -
accel/tcg/tcg-accel-ops.h | 1 +
bsd-user/aarch64/target_arch_cpu.h | 2 +-
bsd-user/arm/target_arch_cpu.h | 2 +-
bsd-user/i386/target_arch_cpu.h | 2 +-
bsd-user/riscv/target_arch_cpu.h | 2 +-
bsd-user/x86_64/target_arch_cpu.h | 2 +-
include/hw/core/cpu.h | 22 +-
include/system/cpus.h | 3 +-
target/arm/internals.h | 5 +
accel/dummy-cpus.c | 2 +-
accel/hvf/hvf-accel-ops.c | 2 +-
accel/kvm/kvm-accel-ops.c | 3 +-
accel/kvm/kvm-all.c | 23 +-
accel/tcg/cpu-exec.c | 34 ++-
accel/tcg/tcg-accel-ops-mttcg.c | 12 +-
accel/tcg/tcg-accel-ops-rr.c | 55 ++--
accel/tcg/tcg-accel-ops.c | 4 +-
accel/tcg/user-exec.c | 11 +-
bsd-user/main.c | 5 -
cpu-common.c | 3 +-
hw/core/cpu-common.c | 19 +-
hw/core/cpu-system.c | 2 +-
hw/ppc/ppc.c | 2 +
hw/ppc/spapr_hcall.c | 7 +-
hw/ppc/spapr_rtas.c | 2 +-
linux-user/aarch64/cpu_loop.c | 2 +-
linux-user/alpha/cpu_loop.c | 2 +-
linux-user/arm/cpu_loop.c | 2 +-
linux-user/hexagon/cpu_loop.c | 2 +-
linux-user/hppa/cpu_loop.c | 2 +-
linux-user/i386/cpu_loop.c | 2 +-
linux-user/loongarch64/cpu_loop.c | 2 +-
linux-user/m68k/cpu_loop.c | 2 +-
linux-user/main.c | 5 -
linux-user/microblaze/cpu_loop.c | 2 +-
linux-user/mips/cpu_loop.c | 2 +-
linux-user/openrisc/cpu_loop.c | 2 +-
linux-user/ppc/cpu_loop.c | 2 +-
linux-user/riscv/cpu_loop.c | 2 +-
linux-user/s390x/cpu_loop.c | 2 +-
linux-user/sh4/cpu_loop.c | 2 +-
linux-user/sparc/cpu_loop.c | 2 +-
linux-user/xtensa/cpu_loop.c | 2 +-
replay/replay-events.c | 3 +-
system/cpu-timers.c | 6 +-
system/cpus.c | 15 +-
target/arm/cpu-irq.c | 381 ++++++++++++++++++++++++++++
target/arm/cpu.c | 370 ---------------------------
target/arm/el2-stubs.c | 37 +++
target/arm/helper.c | 4 +
target/arm/tcg/mte_helper.c | 2 +-
target/avr/helper.c | 4 +-
target/i386/helper.c | 2 +
target/i386/hvf/x86hvf.c | 8 +-
target/i386/kvm/hyperv.c | 1 -
target/i386/kvm/kvm.c | 20 +-
target/i386/nvmm/nvmm-accel-ops.c | 8 +-
target/i386/nvmm/nvmm-all.c | 19 +-
target/i386/tcg/system/seg_helper.c | 13 +-
target/i386/tcg/system/svm_helper.c | 2 +-
target/i386/whpx/whpx-accel-ops.c | 6 +-
target/i386/whpx/whpx-all.c | 23 +-
target/openrisc/sys_helper.c | 2 +-
target/ppc/helper_regs.c | 2 +
target/rx/helper.c | 4 +-
target/s390x/tcg/excp_helper.c | 2 +-
target/sparc/int32_helper.c | 2 +
target/sparc/int64_helper.c | 2 +
target/arm/meson.build | 2 +
71 files changed, 645 insertions(+), 566 deletions(-)
create mode 100644 target/arm/cpu-irq.c
create mode 100644 target/arm/el2-stubs.c
--
2.51.0
* [PATCH 01/19] target/ppc: limit cpu_interrupt_exittb to system emulation
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Igor Mammedov, Richard Henderson
It is not used by user-mode emulation and is the only caller of
cpu_interrupt() in qemu-ppc* binaries.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target/ppc/helper_regs.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/target/ppc/helper_regs.c b/target/ppc/helper_regs.c
index 7e5726871e5..5f217397490 100644
--- a/target/ppc/helper_regs.c
+++ b/target/ppc/helper_regs.c
@@ -274,6 +274,7 @@ TCGTBCPUState ppc_get_tb_cpu_state(CPUState *cs)
return (TCGTBCPUState){ .pc = env->nip, .flags = hflags_current };
}
+#ifndef CONFIG_USER_ONLY
void cpu_interrupt_exittb(CPUState *cs)
{
/*
@@ -285,6 +286,7 @@ void cpu_interrupt_exittb(CPUState *cs)
cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
}
}
+#endif
int hreg_store_msr(CPUPPCState *env, target_ulong value, int alter_hv)
{
--
2.51.0
* [PATCH 02/19] target/sparc: limit cpu_check_irqs to system emulation
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Richard Henderson
It is not used by user-mode emulation and is the only caller of
cpu_interrupt() in qemu-sparc* binaries.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target/sparc/int32_helper.c | 2 ++
target/sparc/int64_helper.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/target/sparc/int32_helper.c b/target/sparc/int32_helper.c
index 39db4ffa70a..fdcaa0a578b 100644
--- a/target/sparc/int32_helper.c
+++ b/target/sparc/int32_helper.c
@@ -65,6 +65,7 @@ static const char *excp_name_str(int32_t exception_index)
return excp_names[exception_index];
}
+#if !defined(CONFIG_USER_ONLY)
void cpu_check_irqs(CPUSPARCState *env)
{
CPUState *cs;
@@ -96,6 +97,7 @@ void cpu_check_irqs(CPUSPARCState *env)
cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
}
}
+#endif
void sparc_cpu_do_interrupt(CPUState *cs)
{
diff --git a/target/sparc/int64_helper.c b/target/sparc/int64_helper.c
index 49e4e51c6dc..23adda4cad7 100644
--- a/target/sparc/int64_helper.c
+++ b/target/sparc/int64_helper.c
@@ -62,6 +62,7 @@ static const char * const excp_names[0x80] = {
};
#endif
+#if !defined(CONFIG_USER_ONLY)
void cpu_check_irqs(CPUSPARCState *env)
{
CPUState *cs;
@@ -127,6 +128,7 @@ void cpu_check_irqs(CPUSPARCState *env)
cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
}
}
+#endif
void sparc_cpu_do_interrupt(CPUState *cs)
{
--
2.51.0
* [PATCH 03/19] target/i386: limit a20 to system emulation
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Igor Mammedov, Philippe Mathieu-Daudé, Richard Henderson
It is not used by user-mode emulation and is the only caller of
cpu_interrupt() in qemu-i386 and qemu-x86_64.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target/i386/helper.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/target/i386/helper.c b/target/i386/helper.c
index e0aaed3c4c4..651041ccfa6 100644
--- a/target/i386/helper.c
+++ b/target/i386/helper.c
@@ -110,6 +110,7 @@ int cpu_x86_support_mca_broadcast(CPUX86State *env)
/* x86 mmu */
/* XXX: add PGE support */
+#ifndef CONFIG_USER_ONLY
void x86_cpu_set_a20(X86CPU *cpu, int a20_state)
{
CPUX86State *env = &cpu->env;
@@ -129,6 +130,7 @@ void x86_cpu_set_a20(X86CPU *cpu, int a20_state)
env->a20_mask = ~(1 << 20) | (a20_state << 20);
}
}
+#endif
void cpu_x86_update_cr0(CPUX86State *env, uint32_t new_cr0)
{
--
2.51.0
* [PATCH 04/19] target-arm: remove uses of cpu_interrupt() for user-mode emulation
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Richard Henderson
Arm leaves around some functions that use cpu_interrupt() even in
user-mode emulation, where that code is unreachable. Pull the
system-mode implementation out into a separate file, and add stubs for
CONFIG_USER_ONLY.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target/arm/internals.h | 5 +
target/arm/cpu-irq.c | 381 +++++++++++++++++++++++++++++++++++++++++
target/arm/cpu.c | 370 ---------------------------------------
target/arm/el2-stubs.c | 37 ++++
target/arm/helper.c | 4 +
target/arm/meson.build | 2 +
6 files changed, 429 insertions(+), 370 deletions(-)
create mode 100644 target/arm/cpu-irq.c
create mode 100644 target/arm/el2-stubs.c
diff --git a/target/arm/internals.h b/target/arm/internals.h
index f5a1e75db37..f0aa26c5117 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1292,6 +1292,11 @@ static inline const char *aarch32_mode_name(uint32_t psr)
return cpu_mode_names[psr & 0xf];
}
+/**
+ * arm_cpu_exec_interrupt(): Implementation of the cpu_exec_interrupt hook.
+ */
+bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request);
+
/**
* arm_cpu_update_virq: Update CPU_INTERRUPT_VIRQ bit in cs->interrupt_request
*
diff --git a/target/arm/cpu-irq.c b/target/arm/cpu-irq.c
new file mode 100644
index 00000000000..fe514cc93af
--- /dev/null
+++ b/target/arm/cpu-irq.c
@@ -0,0 +1,381 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/*
+ * QEMU ARM CPU - interrupt_request handling
+ *
+ * Copyright (c) 2003-2025 QEMU contributors
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "accel/tcg/cpu-ops.h"
+#include "internals.h"
+
+#ifdef CONFIG_TCG
+static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
+ unsigned int target_el,
+ unsigned int cur_el, bool secure,
+ uint64_t hcr_el2)
+{
+ CPUARMState *env = cpu_env(cs);
+ bool pstate_unmasked;
+ bool unmasked = false;
+ bool allIntMask = false;
+
+ /*
+ * Don't take exceptions if they target a lower EL.
+ * This check should catch any exceptions that would not be taken
+ * but left pending.
+ */
+ if (cur_el > target_el) {
+ return false;
+ }
+
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
+ env->cp15.sctlr_el[target_el] & SCTLR_NMI && cur_el == target_el) {
+ allIntMask = env->pstate & PSTATE_ALLINT ||
+ ((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
+ (env->pstate & PSTATE_SP));
+ }
+
+ switch (excp_idx) {
+ case EXCP_NMI:
+ pstate_unmasked = !allIntMask;
+ break;
+
+ case EXCP_VINMI:
+ if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
+ /* VINMIs are only taken when hypervized. */
+ return false;
+ }
+ return !allIntMask;
+ case EXCP_VFNMI:
+ if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
+ /* VFNMIs are only taken when hypervized. */
+ return false;
+ }
+ return !allIntMask;
+ case EXCP_FIQ:
+ pstate_unmasked = (!(env->daif & PSTATE_F)) && (!allIntMask);
+ break;
+
+ case EXCP_IRQ:
+ pstate_unmasked = (!(env->daif & PSTATE_I)) && (!allIntMask);
+ break;
+
+ case EXCP_VFIQ:
+ if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
+ /* VFIQs are only taken when hypervized. */
+ return false;
+ }
+ return !(env->daif & PSTATE_F) && (!allIntMask);
+ case EXCP_VIRQ:
+ if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
+ /* VIRQs are only taken when hypervized. */
+ return false;
+ }
+ return !(env->daif & PSTATE_I) && (!allIntMask);
+ case EXCP_VSERR:
+ if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+ /* VIRQs are only taken when hypervized. */
+ return false;
+ }
+ return !(env->daif & PSTATE_A);
+ default:
+ g_assert_not_reached();
+ }
+
+ /*
+ * Use the target EL, current execution state and SCR/HCR settings to
+ * determine whether the corresponding CPSR bit is used to mask the
+ * interrupt.
+ */
+ if ((target_el > cur_el) && (target_el != 1)) {
+ /* Exceptions targeting a higher EL may not be maskable */
+ if (arm_feature(env, ARM_FEATURE_AARCH64)) {
+ switch (target_el) {
+ case 2:
+ /*
+ * According to ARM DDI 0487H.a, an interrupt can be masked
+ * when HCR_E2H and HCR_TGE are both set regardless of the
+ * current Security state. Note that we need to revisit this
+ * part again once we need to support NMI.
+ */
+ if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+ unmasked = true;
+ }
+ break;
+ case 3:
+ /* Interrupt cannot be masked when the target EL is 3 */
+ unmasked = true;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ } else {
+ /*
+ * The old 32-bit-only environment has a more complicated
+ * masking setup. HCR and SCR bits not only affect interrupt
+ * routing but also change the behaviour of masking.
+ */
+ bool hcr, scr;
+
+ switch (excp_idx) {
+ case EXCP_FIQ:
+ /*
+ * If FIQs are routed to EL3 or EL2 then there are cases where
+ * we override the CPSR.F in determining if the exception is
+ * masked or not. If neither of these are set then we fall back
+ * to the CPSR.F setting otherwise we further assess the state
+ * below.
+ */
+ hcr = hcr_el2 & HCR_FMO;
+ scr = (env->cp15.scr_el3 & SCR_FIQ);
+
+ /*
+ * When EL3 is 32-bit, the SCR.FW bit controls whether the
+ * CPSR.F bit masks FIQ interrupts when taken in non-secure
+ * state. If SCR.FW is set then FIQs can be masked by CPSR.F
+ * when non-secure but only when FIQs are only routed to EL3.
+ */
+ scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr);
+ break;
+ case EXCP_IRQ:
+ /*
+ * When EL3 execution state is 32-bit, if HCR.IMO is set then
+ * we may override the CPSR.I masking when in non-secure state.
+ * The SCR.IRQ setting has already been taken into consideration
+ * when setting the target EL, so it does not have a further
+ * affect here.
+ */
+ hcr = hcr_el2 & HCR_IMO;
+ scr = false;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ if ((scr || hcr) && !secure) {
+ unmasked = true;
+ }
+ }
+ }
+
+ /*
+ * The PSTATE bits only mask the interrupt if we have not overridden the
+ * ability above.
+ */
+ return unmasked || pstate_unmasked;
+}
+
+bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
+{
+ CPUARMState *env = cpu_env(cs);
+ uint32_t cur_el = arm_current_el(env);
+ bool secure = arm_is_secure(env);
+ uint64_t hcr_el2 = arm_hcr_el2_eff(env);
+ uint32_t target_el;
+ uint32_t excp_idx;
+
+ /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
+
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
+ (arm_sctlr(env, cur_el) & SCTLR_NMI)) {
+ if (interrupt_request & CPU_INTERRUPT_NMI) {
+ excp_idx = EXCP_NMI;
+ target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ if (interrupt_request & CPU_INTERRUPT_VINMI) {
+ excp_idx = EXCP_VINMI;
+ target_el = 1;
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ if (interrupt_request & CPU_INTERRUPT_VFNMI) {
+ excp_idx = EXCP_VFNMI;
+ target_el = 1;
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ } else {
+ /*
+ * NMI disabled: interrupts with superpriority are handled
+ * as if they didn't have it
+ */
+ if (interrupt_request & CPU_INTERRUPT_NMI) {
+ interrupt_request |= CPU_INTERRUPT_HARD;
+ }
+ if (interrupt_request & CPU_INTERRUPT_VINMI) {
+ interrupt_request |= CPU_INTERRUPT_VIRQ;
+ }
+ if (interrupt_request & CPU_INTERRUPT_VFNMI) {
+ interrupt_request |= CPU_INTERRUPT_VFIQ;
+ }
+ }
+
+ if (interrupt_request & CPU_INTERRUPT_FIQ) {
+ excp_idx = EXCP_FIQ;
+ target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ if (interrupt_request & CPU_INTERRUPT_HARD) {
+ excp_idx = EXCP_IRQ;
+ target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ if (interrupt_request & CPU_INTERRUPT_VIRQ) {
+ excp_idx = EXCP_VIRQ;
+ target_el = 1;
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ if (interrupt_request & CPU_INTERRUPT_VFIQ) {
+ excp_idx = EXCP_VFIQ;
+ target_el = 1;
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ if (interrupt_request & CPU_INTERRUPT_VSERR) {
+ excp_idx = EXCP_VSERR;
+ target_el = 1;
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ /* Taking a virtual abort clears HCR_EL2.VSE */
+ env->cp15.hcr_el2 &= ~HCR_VSE;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+ goto found;
+ }
+ }
+ return false;
+
+ found:
+ cs->exception_index = excp_idx;
+ env->exception.target_el = target_el;
+ cs->cc->tcg_ops->do_interrupt(cs);
+ return true;
+}
+#endif /* CONFIG_TCG */
+
+void arm_cpu_update_virq(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VIRQ, which is the logical OR of
+ * the HCR_EL2.VI bit and the input line level from the GIC.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
+ !(arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
+ (env->irq_line_state & CPU_INTERRUPT_VIRQ);
+
+ if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VIRQ)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
+ }
+ }
+}
+
+void arm_cpu_update_vfiq(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VFIQ, which is the logical OR of
+ * the HCR_EL2.VF bit and the input line level from the GIC.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = ((arm_hcr_el2_eff(env) & HCR_VF) &&
+ !(arm_hcrx_el2_eff(env) & HCRX_VFNMI)) ||
+ (env->irq_line_state & CPU_INTERRUPT_VFIQ);
+
+ if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VFIQ)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
+ }
+ }
+}
+
+void arm_cpu_update_vinmi(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VINMI, which is the logical OR of
+ * the HCRX_EL2.VINMI bit and the input line level from the GIC.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
+ (arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
+ (env->irq_line_state & CPU_INTERRUPT_VINMI);
+
+ if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VINMI)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VINMI);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VINMI);
+ }
+ }
+}
+
+void arm_cpu_update_vfnmi(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VFNMI, which is the HCRX_EL2.VFNMI bit.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = (arm_hcr_el2_eff(env) & HCR_VF) &&
+ (arm_hcrx_el2_eff(env) & HCRX_VFNMI);
+
+ if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VFNMI)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VFNMI);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VFNMI);
+ }
+ }
+}
+
+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+ /*
+ * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
+ */
+ CPUARMState *env = &cpu->env;
+ CPUState *cs = CPU(cpu);
+
+ bool new_state = env->cp15.hcr_el2 & HCR_VSE;
+
+ if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VSERR)) {
+ if (new_state) {
+ cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
+ } else {
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+ }
+ }
+}
+
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d0f6fcdfcea..633ec55a57c 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -686,376 +686,6 @@ void arm_emulate_firmware_reset(CPUState *cpustate, int target_el)
}
-#if defined(CONFIG_TCG) && !defined(CONFIG_USER_ONLY)
-
-static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
- unsigned int target_el,
- unsigned int cur_el, bool secure,
- uint64_t hcr_el2)
-{
- CPUARMState *env = cpu_env(cs);
- bool pstate_unmasked;
- bool unmasked = false;
- bool allIntMask = false;
-
- /*
- * Don't take exceptions if they target a lower EL.
- * This check should catch any exceptions that would not be taken
- * but left pending.
- */
- if (cur_el > target_el) {
- return false;
- }
-
- if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
- env->cp15.sctlr_el[target_el] & SCTLR_NMI && cur_el == target_el) {
- allIntMask = env->pstate & PSTATE_ALLINT ||
- ((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
- (env->pstate & PSTATE_SP));
- }
-
- switch (excp_idx) {
- case EXCP_NMI:
- pstate_unmasked = !allIntMask;
- break;
-
- case EXCP_VINMI:
- if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
- /* VINMIs are only taken when hypervized. */
- return false;
- }
- return !allIntMask;
- case EXCP_VFNMI:
- if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
- /* VFNMIs are only taken when hypervized. */
- return false;
- }
- return !allIntMask;
- case EXCP_FIQ:
- pstate_unmasked = (!(env->daif & PSTATE_F)) && (!allIntMask);
- break;
-
- case EXCP_IRQ:
- pstate_unmasked = (!(env->daif & PSTATE_I)) && (!allIntMask);
- break;
-
- case EXCP_VFIQ:
- if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
- /* VFIQs are only taken when hypervized. */
- return false;
- }
- return !(env->daif & PSTATE_F) && (!allIntMask);
- case EXCP_VIRQ:
- if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
- /* VIRQs are only taken when hypervized. */
- return false;
- }
- return !(env->daif & PSTATE_I) && (!allIntMask);
- case EXCP_VSERR:
- if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
- /* VIRQs are only taken when hypervized. */
- return false;
- }
- return !(env->daif & PSTATE_A);
- default:
- g_assert_not_reached();
- }
-
- /*
- * Use the target EL, current execution state and SCR/HCR settings to
- * determine whether the corresponding CPSR bit is used to mask the
- * interrupt.
- */
- if ((target_el > cur_el) && (target_el != 1)) {
- /* Exceptions targeting a higher EL may not be maskable */
- if (arm_feature(env, ARM_FEATURE_AARCH64)) {
- switch (target_el) {
- case 2:
- /*
- * According to ARM DDI 0487H.a, an interrupt can be masked
- * when HCR_E2H and HCR_TGE are both set regardless of the
- * current Security state. Note that we need to revisit this
- * part again once we need to support NMI.
- */
- if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
- unmasked = true;
- }
- break;
- case 3:
- /* Interrupt cannot be masked when the target EL is 3 */
- unmasked = true;
- break;
- default:
- g_assert_not_reached();
- }
- } else {
- /*
- * The old 32-bit-only environment has a more complicated
- * masking setup. HCR and SCR bits not only affect interrupt
- * routing but also change the behaviour of masking.
- */
- bool hcr, scr;
-
- switch (excp_idx) {
- case EXCP_FIQ:
- /*
- * If FIQs are routed to EL3 or EL2 then there are cases where
- * we override the CPSR.F in determining if the exception is
- * masked or not. If neither of these are set then we fall back
- * to the CPSR.F setting otherwise we further assess the state
- * below.
- */
- hcr = hcr_el2 & HCR_FMO;
- scr = (env->cp15.scr_el3 & SCR_FIQ);
-
- /*
- * When EL3 is 32-bit, the SCR.FW bit controls whether the
- * CPSR.F bit masks FIQ interrupts when taken in non-secure
- * state. If SCR.FW is set then FIQs can be masked by CPSR.F
- * when non-secure but only when FIQs are only routed to EL3.
- */
- scr = scr && !((env->cp15.scr_el3 & SCR_FW) && !hcr);
- break;
- case EXCP_IRQ:
- /*
- * When EL3 execution state is 32-bit, if HCR.IMO is set then
- * we may override the CPSR.I masking when in non-secure state.
- * The SCR.IRQ setting has already been taken into consideration
- * when setting the target EL, so it does not have a further
- * affect here.
- */
- hcr = hcr_el2 & HCR_IMO;
- scr = false;
- break;
- default:
- g_assert_not_reached();
- }
-
- if ((scr || hcr) && !secure) {
- unmasked = true;
- }
- }
- }
-
- /*
- * The PSTATE bits only mask the interrupt if we have not overridden the
- * ability above.
- */
- return unmasked || pstate_unmasked;
-}
-
-static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
-{
- CPUARMState *env = cpu_env(cs);
- uint32_t cur_el = arm_current_el(env);
- bool secure = arm_is_secure(env);
- uint64_t hcr_el2 = arm_hcr_el2_eff(env);
- uint32_t target_el;
- uint32_t excp_idx;
-
- /* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
-
- if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
- (arm_sctlr(env, cur_el) & SCTLR_NMI)) {
- if (interrupt_request & CPU_INTERRUPT_NMI) {
- excp_idx = EXCP_NMI;
- target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- if (interrupt_request & CPU_INTERRUPT_VINMI) {
- excp_idx = EXCP_VINMI;
- target_el = 1;
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- if (interrupt_request & CPU_INTERRUPT_VFNMI) {
- excp_idx = EXCP_VFNMI;
- target_el = 1;
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- } else {
- /*
- * NMI disabled: interrupts with superpriority are handled
- * as if they didn't have it
- */
- if (interrupt_request & CPU_INTERRUPT_NMI) {
- interrupt_request |= CPU_INTERRUPT_HARD;
- }
- if (interrupt_request & CPU_INTERRUPT_VINMI) {
- interrupt_request |= CPU_INTERRUPT_VIRQ;
- }
- if (interrupt_request & CPU_INTERRUPT_VFNMI) {
- interrupt_request |= CPU_INTERRUPT_VFIQ;
- }
- }
-
- if (interrupt_request & CPU_INTERRUPT_FIQ) {
- excp_idx = EXCP_FIQ;
- target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- if (interrupt_request & CPU_INTERRUPT_HARD) {
- excp_idx = EXCP_IRQ;
- target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- if (interrupt_request & CPU_INTERRUPT_VIRQ) {
- excp_idx = EXCP_VIRQ;
- target_el = 1;
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- if (interrupt_request & CPU_INTERRUPT_VFIQ) {
- excp_idx = EXCP_VFIQ;
- target_el = 1;
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- goto found;
- }
- }
- if (interrupt_request & CPU_INTERRUPT_VSERR) {
- excp_idx = EXCP_VSERR;
- target_el = 1;
- if (arm_excp_unmasked(cs, excp_idx, target_el,
- cur_el, secure, hcr_el2)) {
- /* Taking a virtual abort clears HCR_EL2.VSE */
- env->cp15.hcr_el2 &= ~HCR_VSE;
- cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
- goto found;
- }
- }
- return false;
-
- found:
- cs->exception_index = excp_idx;
- env->exception.target_el = target_el;
- cs->cc->tcg_ops->do_interrupt(cs);
- return true;
-}
-
-#endif /* CONFIG_TCG && !CONFIG_USER_ONLY */
-
-void arm_cpu_update_virq(ARMCPU *cpu)
-{
- /*
- * Update the interrupt level for VIRQ, which is the logical OR of
- * the HCR_EL2.VI bit and the input line level from the GIC.
- */
- CPUARMState *env = &cpu->env;
- CPUState *cs = CPU(cpu);
-
- bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
- !(arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
- (env->irq_line_state & CPU_INTERRUPT_VIRQ);
-
- if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VIRQ)) {
- if (new_state) {
- cpu_interrupt(cs, CPU_INTERRUPT_VIRQ);
- } else {
- cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
- }
- }
-}
-
-void arm_cpu_update_vfiq(ARMCPU *cpu)
-{
- /*
- * Update the interrupt level for VFIQ, which is the logical OR of
- * the HCR_EL2.VF bit and the input line level from the GIC.
- */
- CPUARMState *env = &cpu->env;
- CPUState *cs = CPU(cpu);
-
- bool new_state = ((arm_hcr_el2_eff(env) & HCR_VF) &&
- !(arm_hcrx_el2_eff(env) & HCRX_VFNMI)) ||
- (env->irq_line_state & CPU_INTERRUPT_VFIQ);
-
- if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VFIQ)) {
- if (new_state) {
- cpu_interrupt(cs, CPU_INTERRUPT_VFIQ);
- } else {
- cpu_reset_interrupt(cs, CPU_INTERRUPT_VFIQ);
- }
- }
-}
-
-void arm_cpu_update_vinmi(ARMCPU *cpu)
-{
- /*
- * Update the interrupt level for VINMI, which is the logical OR of
- * the HCRX_EL2.VINMI bit and the input line level from the GIC.
- */
- CPUARMState *env = &cpu->env;
- CPUState *cs = CPU(cpu);
-
- bool new_state = ((arm_hcr_el2_eff(env) & HCR_VI) &&
- (arm_hcrx_el2_eff(env) & HCRX_VINMI)) ||
- (env->irq_line_state & CPU_INTERRUPT_VINMI);
-
- if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VINMI)) {
- if (new_state) {
- cpu_interrupt(cs, CPU_INTERRUPT_VINMI);
- } else {
- cpu_reset_interrupt(cs, CPU_INTERRUPT_VINMI);
- }
- }
-}
-
-void arm_cpu_update_vfnmi(ARMCPU *cpu)
-{
- /*
- * Update the interrupt level for VFNMI, which is the HCRX_EL2.VFNMI bit.
- */
- CPUARMState *env = &cpu->env;
- CPUState *cs = CPU(cpu);
-
- bool new_state = (arm_hcr_el2_eff(env) & HCR_VF) &&
- (arm_hcrx_el2_eff(env) & HCRX_VFNMI);
-
- if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VFNMI)) {
- if (new_state) {
- cpu_interrupt(cs, CPU_INTERRUPT_VFNMI);
- } else {
- cpu_reset_interrupt(cs, CPU_INTERRUPT_VFNMI);
- }
- }
-}
-
-void arm_cpu_update_vserr(ARMCPU *cpu)
-{
- /*
- * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
- */
- CPUARMState *env = &cpu->env;
- CPUState *cs = CPU(cpu);
-
- bool new_state = env->cp15.hcr_el2 & HCR_VSE;
-
- if (new_state != cpu_test_interrupt(cs, CPU_INTERRUPT_VSERR)) {
- if (new_state) {
- cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
- } else {
- cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
- }
- }
-}
-
#ifndef CONFIG_USER_ONLY
static void arm_cpu_set_irq(void *opaque, int irq, int level)
{
diff --git a/target/arm/el2-stubs.c b/target/arm/el2-stubs.c
new file mode 100644
index 00000000000..972023c337f
--- /dev/null
+++ b/target/arm/el2-stubs.c
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/* QEMU ARM CPU - user-mode emulation stubs for EL2 interrupts
+ *
+ * These should not really be needed, but CP registers for EL2
+ * are not elided by user-mode emulation and they call these
+ * functions. Leave them as stubs until it's cleaned up.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+
+void arm_cpu_update_virq(ARMCPU *cpu)
+{
+ g_assert_not_reached();
+}
+
+void arm_cpu_update_vfiq(ARMCPU *cpu)
+{
+ g_assert_not_reached();
+}
+
+void arm_cpu_update_vinmi(ARMCPU *cpu)
+{
+ g_assert_not_reached();
+}
+
+void arm_cpu_update_vfnmi(ARMCPU *cpu)
+{
+ g_assert_not_reached();
+}
+
+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+ g_assert_not_reached();
+}
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 19637e7301b..fb62742d983 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2868,8 +2868,12 @@ static void omap_threadid_write(CPUARMState *env, const ARMCPRegInfo *ri,
static void omap_wfi_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
{
+#ifdef CONFIG_USER_ONLY
+ g_assert_not_reached();
+#else
/* Wait-for-interrupt (deprecated) */
cpu_interrupt(env_cpu(env), CPU_INTERRUPT_HALT);
+#endif
}
static void omap_cachemaint_write(CPUARMState *env, const ARMCPRegInfo *ri,
diff --git a/target/arm/meson.build b/target/arm/meson.build
index 07d9271aa4d..914f1498fc5 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -26,6 +26,7 @@ arm_user_ss.add(files(
'debug_helper.c',
'helper.c',
'vfp_fpscr.c',
+ 'el2-stubs.c',
))
arm_common_system_ss.add(files('cpu.c'))
@@ -38,6 +39,7 @@ arm_common_system_ss.add(files(
'arm-powerctl.c',
'cortex-regs.c',
'cpregs-pmu.c',
+ 'cpu-irq.c',
'debug_helper.c',
'helper.c',
'machine.c',
--
2.51.0
* [PATCH 05/19] user-exec: remove cpu_interrupt() stub
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/tcg/user-exec.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 748bfab04a7..66c25fba7dd 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -46,11 +46,6 @@ __thread uintptr_t helper_retaddr;
//#define DEBUG_SIGNAL
-void cpu_interrupt(CPUState *cpu, int mask)
-{
- g_assert_not_reached();
-}
-
/*
* Adjust the pc to pass to cpu_restore_state; return the memop type.
*/
--
2.51.0
* [PATCH 06/19] treewide: clear bits of cs->interrupt_request with cpu_reset_interrupt()
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Igor Mammedov, Philippe Mathieu-Daudé, Richard Henderson
Open-coding cpu_reset_interrupt() can cause bugs if the BQL is not
held; for example, on i386 the call chain kvm_cpu_exec() ->
kvm_arch_put_registers() -> kvm_put_vcpu_events() clears bits of
interrupt_request without it.
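The underlying problem is the classic lost update on a non-atomic
read-modify-write; a minimal illustration of the race (hypothetical
interleaving, not code from the tree):

    /*
     * vCPU thread, BQL not held:          other thread, raising an IRQ:
     *
     *   tmp = cs->interrupt_request;
     *                                       cpu_interrupt(cs, CPU_INTERRUPT_HARD);
     *   tmp &= ~CPU_INTERRUPT_NMI;
     *   cs->interrupt_request = tmp;        <- the HARD bit just set is lost
     */

Going through cpu_reset_interrupt() avoids this: at this point in the series
it serializes the update under the BQL, and the next patch turns it into a
single atomic qatomic_and().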
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/tcg/cpu-exec.c | 6 +++---
hw/core/cpu-system.c | 2 +-
target/avr/helper.c | 4 ++--
target/i386/hvf/x86hvf.c | 8 ++++----
target/i386/kvm/kvm.c | 14 +++++++-------
target/i386/nvmm/nvmm-all.c | 10 +++++-----
target/i386/tcg/system/seg_helper.c | 13 ++++++-------
target/i386/tcg/system/svm_helper.c | 2 +-
target/i386/whpx/whpx-all.c | 12 ++++++------
target/openrisc/sys_helper.c | 2 +-
target/rx/helper.c | 4 ++--
target/s390x/tcg/excp_helper.c | 2 +-
12 files changed, 39 insertions(+), 40 deletions(-)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 8491e5badd1..508d2d2d9e2 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -784,7 +784,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
if (unlikely(cpu_test_interrupt(cpu, ~0))) {
bql_lock();
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_DEBUG)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
cpu->exception_index = EXCP_DEBUG;
bql_unlock();
return true;
@@ -793,7 +793,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
/* Do nothing */
} else if (cpu_test_interrupt(cpu, CPU_INTERRUPT_HALT)) {
replay_interrupt();
- cpu->interrupt_request &= ~CPU_INTERRUPT_HALT;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_HALT);
cpu->halted = 1;
cpu->exception_index = EXCP_HLT;
bql_unlock();
@@ -840,7 +840,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_EXITTB)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_EXITTB;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_EXITTB);
/* ensure that no TB jump will be modified as
the program flow was changed */
*last_tb = NULL;
diff --git a/hw/core/cpu-system.c b/hw/core/cpu-system.c
index a975405d3a0..09c928c1f92 100644
--- a/hw/core/cpu-system.c
+++ b/hw/core/cpu-system.c
@@ -204,7 +204,7 @@ static int cpu_common_post_load(void *opaque, int version_id)
* 0x01 was CPU_INTERRUPT_EXIT. This line can be removed when the
* version_id is increased.
*/
- cpu->interrupt_request &= ~0x01;
+ cpu_reset_interrupt(cpu, 0x01);
tlb_flush(cpu);
diff --git a/target/avr/helper.c b/target/avr/helper.c
index b9cd6d5ef27..4b29ab35263 100644
--- a/target/avr/helper.c
+++ b/target/avr/helper.c
@@ -47,7 +47,7 @@ bool avr_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
cs->exception_index = EXCP_RESET;
avr_cpu_do_interrupt(cs);
- cs->interrupt_request &= ~CPU_INTERRUPT_RESET;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_RESET);
return true;
}
}
@@ -59,7 +59,7 @@ bool avr_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
env->intsrc &= env->intsrc - 1; /* clear the interrupt */
if (!env->intsrc) {
- cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
}
return true;
}
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 9e05e0e5765..a502437c303 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -397,7 +397,7 @@ bool hvf_inject_interrupts(CPUState *cs)
if (cpu_test_interrupt(cs, CPU_INTERRUPT_NMI)) {
if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) {
- cs->interrupt_request &= ~CPU_INTERRUPT_NMI;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_NMI);
info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI;
wvmcs(cs->accel->fd, VMCS_ENTRY_INTR_INFO, info);
} else {
@@ -409,7 +409,7 @@ bool hvf_inject_interrupts(CPUState *cs)
cpu_test_interrupt(cs, CPU_INTERRUPT_HARD) &&
(env->eflags & IF_MASK) && !(info & VMCS_INTR_VALID)) {
int line = cpu_get_pic_interrupt(env);
- cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
if (line >= 0) {
wvmcs(cs->accel->fd, VMCS_ENTRY_INTR_INFO, line |
VMCS_INTR_VALID | VMCS_INTR_T_HWINTR);
@@ -437,7 +437,7 @@ int hvf_process_events(CPUState *cs)
}
if (cpu_test_interrupt(cs, CPU_INTERRUPT_POLL)) {
- cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
apic_poll_irq(cpu->apic_state);
}
if ((cpu_test_interrupt(cs, CPU_INTERRUPT_HARD) &&
@@ -450,7 +450,7 @@ int hvf_process_events(CPUState *cs)
do_cpu_sipi(cpu);
}
if (cpu_test_interrupt(cs, CPU_INTERRUPT_TPR)) {
- cs->interrupt_request &= ~CPU_INTERRUPT_TPR;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
cpu_synchronize_state(cs);
apic_handle_tpr_access_report(cpu->apic_state, env->eip,
env->tpr_access_type);
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 306430a0521..8420c4090ef 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -5066,7 +5066,7 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
*/
events.smi.pending = cs->interrupt_request & CPU_INTERRUPT_SMI;
events.smi.latched_init = cs->interrupt_request & CPU_INTERRUPT_INIT;
- cs->interrupt_request &= ~(CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_INIT | CPU_INTERRUPT_SMI);
} else {
/* Keep these in cs->interrupt_request. */
events.smi.pending = 0;
@@ -5456,7 +5456,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI)) {
bql_lock();
- cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
bql_unlock();
DPRINTF("injected NMI\n");
ret = kvm_vcpu_ioctl(cpu, KVM_NMI);
@@ -5467,7 +5467,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_SMI)) {
bql_lock();
- cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
bql_unlock();
DPRINTF("injected SMI\n");
ret = kvm_vcpu_ioctl(cpu, KVM_SMI);
@@ -5502,7 +5502,7 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
bql_lock();
- cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
irq = cpu_get_pic_interrupt(env);
if (irq >= 0) {
struct kvm_interrupt intr;
@@ -5597,7 +5597,7 @@ int kvm_arch_process_async_events(CPUState *cs)
/* We must not raise CPU_INTERRUPT_MCE if it's not supported. */
assert(env->mcg_cap);
- cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
kvm_cpu_synchronize_state(cs);
@@ -5627,7 +5627,7 @@ int kvm_arch_process_async_events(CPUState *cs)
}
if (cpu_test_interrupt(cs, CPU_INTERRUPT_POLL)) {
- cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
apic_poll_irq(cpu->apic_state);
}
if ((cpu_test_interrupt(cs, CPU_INTERRUPT_HARD) &&
@@ -5640,7 +5640,7 @@ int kvm_arch_process_async_events(CPUState *cs)
do_cpu_sipi(cpu);
}
if (cpu_test_interrupt(cs, CPU_INTERRUPT_TPR)) {
- cs->interrupt_request &= ~CPU_INTERRUPT_TPR;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_TPR);
kvm_cpu_synchronize_state(cs);
apic_handle_tpr_access_report(cpu->apic_state, env->eip,
env->tpr_access_type);
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index c1ac74c4f04..e1151b04c6e 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -419,7 +419,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
if (!has_event && cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI)) {
if (nvmm_can_take_nmi(cpu)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
event->type = NVMM_VCPU_EVENT_INTR;
event->vector = 2;
has_event = true;
@@ -428,7 +428,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
if (!has_event && cpu_test_interrupt(cpu, CPU_INTERRUPT_HARD)) {
if (nvmm_can_take_int(cpu)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
event->type = NVMM_VCPU_EVENT_INTR;
event->vector = cpu_get_pic_interrupt(env);
has_event = true;
@@ -437,7 +437,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
/* Don't want SMIs. */
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_SMI)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
}
if (sync_tpr) {
@@ -697,7 +697,7 @@ nvmm_vcpu_loop(CPUState *cpu)
/* set int/nmi windows back to the reset state */
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_POLL)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
apic_poll_irq(x86_cpu->apic_state);
}
if ((cpu_test_interrupt(cpu, CPU_INTERRUPT_HARD) &&
@@ -710,7 +710,7 @@ nvmm_vcpu_loop(CPUState *cpu)
do_cpu_sipi(x86_cpu);
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_TPR)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
nvmm_cpu_synchronize_state(cpu);
apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip,
env->tpr_access_type);
diff --git a/target/i386/tcg/system/seg_helper.c b/target/i386/tcg/system/seg_helper.c
index 794a23ddfc4..38072e51d72 100644
--- a/target/i386/tcg/system/seg_helper.c
+++ b/target/i386/tcg/system/seg_helper.c
@@ -178,7 +178,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
*/
switch (interrupt_request) {
case CPU_INTERRUPT_POLL:
- cs->interrupt_request &= ~CPU_INTERRUPT_POLL;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_POLL);
apic_poll_irq(cpu->apic_state);
break;
case CPU_INTERRUPT_SIPI:
@@ -186,23 +186,22 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
break;
case CPU_INTERRUPT_SMI:
cpu_svm_check_intercept_param(env, SVM_EXIT_SMI, 0, 0);
- cs->interrupt_request &= ~CPU_INTERRUPT_SMI;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_SMI);
do_smm_enter(cpu);
break;
case CPU_INTERRUPT_NMI:
cpu_svm_check_intercept_param(env, SVM_EXIT_NMI, 0, 0);
- cs->interrupt_request &= ~CPU_INTERRUPT_NMI;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_NMI);
env->hflags2 |= HF2_NMI_MASK;
do_interrupt_x86_hardirq(env, EXCP02_NMI, 1);
break;
case CPU_INTERRUPT_MCE:
- cs->interrupt_request &= ~CPU_INTERRUPT_MCE;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_MCE);
do_interrupt_x86_hardirq(env, EXCP12_MCHK, 0);
break;
case CPU_INTERRUPT_HARD:
cpu_svm_check_intercept_param(env, SVM_EXIT_INTR, 0, 0);
- cs->interrupt_request &= ~(CPU_INTERRUPT_HARD |
- CPU_INTERRUPT_VIRQ);
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD | CPU_INTERRUPT_VIRQ);
intno = cpu_get_pic_interrupt(env);
qemu_log_mask(CPU_LOG_INT,
"Servicing hardware INT=0x%02x\n", intno);
@@ -215,7 +214,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
qemu_log_mask(CPU_LOG_INT,
"Servicing virtual hardware INT=0x%02x\n", intno);
do_interrupt_x86_hardirq(env, intno, 1);
- cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
env->int_ctl &= ~V_IRQ_MASK;
break;
}
diff --git a/target/i386/tcg/system/svm_helper.c b/target/i386/tcg/system/svm_helper.c
index 3569196bdda..505788b0e26 100644
--- a/target/i386/tcg/system/svm_helper.c
+++ b/target/i386/tcg/system/svm_helper.c
@@ -824,7 +824,7 @@ void do_vmexit(CPUX86State *env)
env->intercept_exceptions = 0;
/* Clears the V_IRQ and V_INTR_MASKING bits inside the processor. */
- cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_VIRQ);
env->int_ctl = 0;
/* Clears the TSC_OFFSET inside the processor. */
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 878cdd1668c..c09a0a64f22 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -1471,14 +1471,14 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
if (!vcpu->interruption_pending &&
cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_NMI;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_NMI);
vcpu->interruptable = false;
new_int.InterruptionType = WHvX64PendingNmi;
new_int.InterruptionPending = 1;
new_int.InterruptionVector = 2;
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_SMI)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_SMI;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_SMI);
}
}
@@ -1502,7 +1502,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
vcpu->interruptable && (env->eflags & IF_MASK)) {
assert(!new_int.InterruptionPending);
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_HARD)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
irq = cpu_get_pic_interrupt(env);
if (irq >= 0) {
new_int.InterruptionType = WHvX64PendingInterrupt;
@@ -1520,7 +1520,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
}
} else if (vcpu->ready_for_pic_interrupt &&
cpu_test_interrupt(cpu, CPU_INTERRUPT_HARD)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_HARD);
irq = cpu_get_pic_interrupt(env);
if (irq >= 0) {
reg_names[reg_count] = WHvRegisterPendingEvent;
@@ -1607,7 +1607,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_POLL)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_POLL;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_POLL);
apic_poll_irq(x86_cpu->apic_state);
}
@@ -1623,7 +1623,7 @@ static void whpx_vcpu_process_async_events(CPUState *cpu)
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_TPR)) {
- cpu->interrupt_request &= ~CPU_INTERRUPT_TPR;
+ cpu_reset_interrupt(cpu, CPU_INTERRUPT_TPR);
whpx_cpu_synchronize_state(cpu);
apic_handle_tpr_access_report(x86_cpu->apic_state, env->eip,
env->tpr_access_type);
diff --git a/target/openrisc/sys_helper.c b/target/openrisc/sys_helper.c
index d96b41a01c2..b091a9c6685 100644
--- a/target/openrisc/sys_helper.c
+++ b/target/openrisc/sys_helper.c
@@ -196,7 +196,7 @@ void HELPER(mtspr)(CPUOpenRISCState *env, target_ulong spr, target_ulong rb)
env->ttmr = (rb & ~TTMR_IP) | ip;
} else { /* Clear IP bit. */
env->ttmr = rb & ~TTMR_IP;
- cs->interrupt_request &= ~CPU_INTERRUPT_TIMER;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_TIMER);
}
cpu_openrisc_timer_update(cpu);
bql_unlock();
diff --git a/target/rx/helper.c b/target/rx/helper.c
index ce003af4219..41c9606fd1d 100644
--- a/target/rx/helper.c
+++ b/target/rx/helper.c
@@ -63,7 +63,7 @@ void rx_cpu_do_interrupt(CPUState *cs)
env->bpsw = save_psw;
env->pc = env->fintv;
env->psw_ipl = 15;
- cs->interrupt_request &= ~CPU_INTERRUPT_FIR;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_FIR);
qemu_set_irq(env->ack, env->ack_irq);
qemu_log_mask(CPU_LOG_INT, "fast interrupt raised\n");
} else if (do_irq & CPU_INTERRUPT_HARD) {
@@ -73,7 +73,7 @@ void rx_cpu_do_interrupt(CPUState *cs)
cpu_stl_data(env, env->isp, env->pc);
env->pc = cpu_ldl_data(env, env->intb + env->ack_irq * 4);
env->psw_ipl = env->ack_ipl;
- cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
qemu_set_irq(env->ack, env->ack_irq);
qemu_log_mask(CPU_LOG_INT,
"interrupt 0x%02x raised\n", env->ack_irq);
diff --git a/target/s390x/tcg/excp_helper.c b/target/s390x/tcg/excp_helper.c
index e4c75d0ce01..4c7faeee82b 100644
--- a/target/s390x/tcg/excp_helper.c
+++ b/target/s390x/tcg/excp_helper.c
@@ -559,7 +559,7 @@ try_deliver:
/* we might still have pending interrupts, but not deliverable */
if (!env->pending_int && !qemu_s390_flic_has_any(flic)) {
- cs->interrupt_request &= ~CPU_INTERRUPT_HARD;
+ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
}
/* WAIT PSW during interrupt injection or STOP interrupt */
--
2.51.0
* [PATCH 07/19] cpu-common: use atomic access for interrupt_request
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Igor Mammedov, Philippe Mathieu-Daudé, Richard Henderson
Writes to interrupt_request use non-atomic read-modify-write accesses,
and in a few cases they are not protected by the BQL. Now that there
is a full set of helpers, it is easier to guarantee that
interrupt_request accesses are fully atomic, so drop the BQL
requirement instead of fixing the offenders.
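After this patch, both helpers reduce to single atomic read-modify-write
operations, so a concurrent set and clear of different bits can no longer
lose an update. The resulting pair, for reference (this is what the hunks
below converge on):

    void cpu_set_interrupt(CPUState *cpu, int mask)
    {
        /* Pairs with cpu_test_interrupt(). */
        qatomic_or(&cpu->interrupt_request, mask);
    }

    void cpu_reset_interrupt(CPUState *cpu, int mask)
    {
        qatomic_and(&cpu->interrupt_request, ~mask);
    }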
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/hw/core/cpu.h | 1 -
hw/core/cpu-common.c | 12 +-----------
system/cpus.c | 3 +--
3 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index b01a0cffd64..23bd02277f4 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -495,7 +495,6 @@ struct CPUState {
bool exit_request;
int exclusive_context_count;
uint32_t cflags_next_tb;
- /* updates protected by BQL */
uint32_t interrupt_request;
int singlestep_enabled;
int64_t icount_budget;
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
index 259cf2a3c36..152abc9024b 100644
--- a/hw/core/cpu-common.c
+++ b/hw/core/cpu-common.c
@@ -67,19 +67,9 @@ CPUState *cpu_create(const char *typename)
return cpu;
}
-/* Resetting the IRQ comes from across the code base so we take the
- * BQL here if we need to. cpu_interrupt assumes it is held.*/
void cpu_reset_interrupt(CPUState *cpu, int mask)
{
- bool need_lock = !bql_locked();
-
- if (need_lock) {
- bql_lock();
- }
- cpu->interrupt_request &= ~mask;
- if (need_lock) {
- bql_unlock();
- }
+ qatomic_and(&cpu->interrupt_request, ~mask);
}
void cpu_exit(CPUState *cpu)
diff --git a/system/cpus.c b/system/cpus.c
index 437848b5eb4..9bfbe2b0607 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -257,8 +257,7 @@ int64_t cpus_get_elapsed_ticks(void)
void cpu_set_interrupt(CPUState *cpu, int mask)
{
/* Pairs with cpu_test_interrupt(). */
- qatomic_store_release(&cpu->interrupt_request,
- cpu->interrupt_request | mask);
+ qatomic_or(&cpu->interrupt_request, mask);
}
void generic_handle_interrupt(CPUState *cpu, int mask)
--
2.51.0
* [PATCH 08/19] cpus: document that qemu_cpu_kick() can be used for BQL-less operation
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Richard Henderson
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/hw/core/cpu.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 23bd02277f4..8b57bcd92c9 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -829,7 +829,8 @@ bool qemu_cpu_is_self(CPUState *cpu);
* qemu_cpu_kick:
* @cpu: The vCPU to kick.
*
- * Kicks @cpu's thread.
+ * Kicks @cpu's thread to exit the accelerator. For accelerators that
+ * can do that, the target vCPU thread will try not to take the BQL.
*/
void qemu_cpu_kick(CPUState *cpu);
--
2.51.0
* [PATCH 09/19] accel: use store_release/load_acquire for cross-thread exit_request
From: Paolo Bonzini @ 2025-09-08 10:12 UTC
To: qemu-devel; +Cc: Richard Henderson, Peter Xu, Igor Mammedov
Reads and writes of cpu->exit_request do not use a load-acquire/store-release
pair right now, which means that cpu_exit() is not guaranteed to write
cpu->exit_request after any other flags that the vCPU thread will read.
Probably everything is protected one way or the other by the BQL, because
cpu->exit_request leads to the slow path, where the CPU thread often takes
the BQL (for example, to go to sleep by waiting on the BQL-protected
cpu->halt_cond); but it's not clear, so use load-acquire/store-release
consistently.
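To make the pairing concrete, here is a minimal C11 sketch (not QEMU
code): atomic_store_explicit/atomic_load_explicit stand in for
qatomic_store_release()/qatomic_load_acquire(), and "stop" is a
hypothetical placeholder for whatever state is published before the
exit request.

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic bool exit_request;
    static bool stop;   /* hypothetical state published before the request */

    /* requesting thread, as in cpu_exit() */
    static void request_exit(void)
    {
        stop = true;                                  /* publish the reason */
        atomic_store_explicit(&exit_request, true,
                              memory_order_release);  /* then the request */
    }

    /* vCPU thread */
    static bool check_exit(void)
    {
        if (atomic_load_explicit(&exit_request, memory_order_acquire)) {
            /* the acquire load saw the release store, so "stop" is visible */
            return stop;
        }
        return false;
    }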
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/kvm/kvm-all.c | 19 +++++++++----------
accel/tcg/cpu-exec.c | 7 +++++--
accel/tcg/tcg-accel-ops-rr.c | 11 +++++++++--
hw/core/cpu-common.c | 3 ++-
target/i386/nvmm/nvmm-all.c | 5 ++---
target/i386/whpx/whpx-all.c | 3 ++-
6 files changed, 29 insertions(+), 19 deletions(-)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index f36dfe33492..bd9e5e3886d 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -3029,10 +3029,6 @@ static void kvm_eat_signals(CPUState *cpu)
if (kvm_immediate_exit) {
qatomic_set(&cpu->kvm_run->immediate_exit, 0);
- /* Write kvm_run->immediate_exit before the cpu->exit_request
- * write in kvm_cpu_exec.
- */
- smp_wmb();
return;
}
@@ -3187,7 +3183,8 @@ int kvm_cpu_exec(CPUState *cpu)
}
kvm_arch_pre_run(cpu, run);
- if (qatomic_read(&cpu->exit_request)) {
+ /* Corresponding store-release is in cpu_exit. */
+ if (qatomic_load_acquire(&cpu->exit_request)) {
trace_kvm_interrupt_exit_request();
/*
* KVM requires us to reenter the kernel after IO exits to complete
@@ -3197,13 +3194,15 @@ int kvm_cpu_exec(CPUState *cpu)
kvm_cpu_kick_self();
}
- /* Read cpu->exit_request before KVM_RUN reads run->immediate_exit.
- * Matching barrier in kvm_eat_signals.
- */
- smp_rmb();
-
run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);
+ /*
+ * After writing cpu->exit_request, cpu_exit() sends a signal that writes
+ * kvm_run->immediate_exit. Because the signal always happens after the
+ * write to cpu->exit_request, if KVM reads kvm_run->immediate_exit as
+ * true, cpu->exit_request will also read as true.
+ */
+
attrs = kvm_arch_post_run(cpu, run);
#ifdef KVM_HAVE_MCE_INJECTION
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 508d2d2d9e2..f838535d111 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -851,8 +851,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
}
#endif /* !CONFIG_USER_ONLY */
- /* Finally, check if we need to exit to the main loop. */
- if (unlikely(qatomic_read(&cpu->exit_request)) || icount_exit_request(cpu)) {
+ /*
+ * Finally, check if we need to exit to the main loop.
+ * The corresponding store-release is in cpu_exit.
+ */
+ if (unlikely(qatomic_load_acquire(&cpu->exit_request)) || icount_exit_request(cpu)) {
qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index 6eec5c9eee9..e8b0e370a8d 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -242,10 +242,17 @@ static void *rr_cpu_thread_fn(void *arg)
cpu = first_cpu;
}
- while (cpu && cpu_work_list_empty(cpu) && !cpu->exit_request) {
- /* Store rr_current_cpu before evaluating cpu_can_run(). */
+ while (cpu && cpu_work_list_empty(cpu)) {
+ /*
+ * Store rr_current_cpu before evaluating cpu->exit_request.
+ * Pairs with rr_kick_next_cpu().
+ */
qatomic_set_mb(&rr_current_cpu, cpu);
+ /* Pairs with store-release in cpu_exit. */
+ if (qatomic_load_acquire(&cpu->exit_request)) {
+ break;
+ }
current_cpu = cpu;
qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
index 152abc9024b..42463e6258d 100644
--- a/hw/core/cpu-common.c
+++ b/hw/core/cpu-common.c
@@ -74,7 +74,8 @@ void cpu_reset_interrupt(CPUState *cpu, int mask)
void cpu_exit(CPUState *cpu)
{
- qatomic_set(&cpu->exit_request, 1);
+ /* Ensure cpu_exec will see the reason why the exit request was set. */
+ qatomic_store_release(&cpu->exit_request, true);
/* Ensure cpu_exec will see the exit request after TCG has exited. */
smp_wmb();
qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index e1151b04c6e..10bd51d9b59 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -743,7 +743,8 @@ nvmm_vcpu_loop(CPUState *cpu)
nvmm_vcpu_pre_run(cpu);
- if (qatomic_read(&cpu->exit_request)) {
+ /* Corresponding store-release is in cpu_exit. */
+ if (qatomic_load_acquire(&cpu->exit_request)) {
#if NVMM_USER_VERSION >= 2
nvmm_vcpu_stop(vcpu);
#else
@@ -751,8 +752,6 @@ nvmm_vcpu_loop(CPUState *cpu)
#endif
}
- /* Read exit_request before the kernel reads the immediate exit flag */
- smp_rmb();
ret = nvmm_vcpu_run(mach, vcpu);
if (ret == -1) {
error_report("NVMM: Failed to exec a virtual processor,"
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index c09a0a64f22..2106c29c3a0 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -1714,7 +1714,8 @@ static int whpx_vcpu_run(CPUState *cpu)
if (exclusive_step_mode == WHPX_STEP_NONE) {
whpx_vcpu_pre_run(cpu);
- if (qatomic_read(&cpu->exit_request)) {
+ /* Corresponding store-release is in cpu_exit. */
+ if (qatomic_load_acquire(&cpu->exit_request)) {
whpx_vcpu_kick(cpu);
}
}
--
2.51.0
* [PATCH 10/19] accel: use atomic accesses for exit_request
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (8 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 09/19] accel: use store_release/load_acquire for cross-thread exit_request Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 11/19] accel/tcg: create a thread-kick function for TCG Paolo Bonzini
` (8 subsequent siblings)
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel
Cc: Richard Henderson, Philippe Mathieu-Daudé, Peter Xu,
Igor Mammedov
CPU threads write exit_request as a "note to self" that they need to
go out to the slow path. This write happens outside the BQL and can race
with another thread's cpu_exit(); use atomic accesses consistently.
While at it, change the stored value from int ("1") to bool ("true").
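As a minimal C11 sketch of the same-thread case (illustrative only,
with atomic_store_explicit standing in for QEMU's qatomic_set()):

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic bool exit_request;

    /*
     * vCPU thread: a "note to self" needs no ordering, but the store must
     * still be atomic so that it does not constitute a data race with a
     * concurrent store-release from another thread's cpu_exit().
     */
    static void note_to_self(void)
    {
        atomic_store_explicit(&exit_request, true, memory_order_relaxed);
    }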
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/hw/core/cpu.h | 9 +++++++++
accel/kvm/kvm-all.c | 2 +-
accel/tcg/tcg-accel-ops-mttcg.c | 2 +-
accel/tcg/tcg-accel-ops-rr.c | 4 ++--
hw/ppc/spapr_hcall.c | 6 +++---
target/i386/kvm/kvm.c | 6 +++---
target/i386/nvmm/nvmm-accel-ops.c | 2 +-
target/i386/nvmm/nvmm-all.c | 2 +-
target/i386/whpx/whpx-all.c | 6 +++---
9 files changed, 24 insertions(+), 15 deletions(-)
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 8b57bcd92c9..338757e5254 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -422,6 +422,15 @@ struct qemu_work_item;
* valid under cpu_list_lock.
* @created: Indicates whether the CPU thread has been successfully created.
* @halt_cond: condition variable sleeping threads can wait on.
+ * @exit_request: Another thread requests the CPU to call qemu_wait_io_event().
+ * Should be read only by CPU thread with load-acquire, to synchronize with
+ * other threads' store-release operation.
+ *
+ * In some cases, accelerator-specific code will write exit_request from
+ * within the same thread, to "bump" the effect of qemu_cpu_kick() to
+ * the one provided by cpu_exit(), especially when processing interrupt
+ * flags. In this case, the write and read happen in the same thread
+ * and the write therefore can use qatomic_set().
* @interrupt_request: Indicates a pending interrupt request.
* Only used by system emulation.
* @halted: Nonzero if the CPU is in suspended state.
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index bd9e5e3886d..e4167d94b4f 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -3730,7 +3730,7 @@ int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
have_sigbus_pending = true;
pending_sigbus_addr = addr;
pending_sigbus_code = code;
- qatomic_set(&cpu->exit_request, 1);
+ qatomic_set(&cpu->exit_request, true);
return 0;
#else
return 1;
diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index 337b993d3da..b12b7a36b5d 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -85,7 +85,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
/* process any pending work */
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
do {
if (cpu_can_run(cpu)) {
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index e8b0e370a8d..d13e0d8b44d 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -212,7 +212,7 @@ static void *rr_cpu_thread_fn(void *arg)
cpu = first_cpu;
/* process any pending work */
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
while (1) {
/* Only used for icount_enabled() */
@@ -293,7 +293,7 @@ static void *rr_cpu_thread_fn(void *arg)
/* Does not need a memory barrier because a spurious wakeup is okay. */
qatomic_set(&rr_current_cpu, NULL);
- if (cpu && cpu->exit_request) {
+ if (cpu && qatomic_read(&cpu->exit_request)) {
qatomic_set_mb(&cpu->exit_request, 0);
}
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 1e936f35e44..51875e32a09 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -509,7 +509,7 @@ static target_ulong h_cede(PowerPCCPU *cpu, SpaprMachineState *spapr,
if (!cpu_has_work(cs)) {
cs->halted = 1;
cs->exception_index = EXCP_HLT;
- cs->exit_request = 1;
+ qatomic_set(&cs->exit_request, true);
ppc_maybe_interrupt(env);
}
@@ -531,7 +531,7 @@ static target_ulong h_confer_self(PowerPCCPU *cpu)
}
cs->halted = 1;
cs->exception_index = EXCP_HALTED;
- cs->exit_request = 1;
+ qatomic_set(&cs->exit_request, true);
ppc_maybe_interrupt(&cpu->env);
return H_SUCCESS;
@@ -624,7 +624,7 @@ static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
}
cs->exception_index = EXCP_YIELD;
- cs->exit_request = 1;
+ qatomic_set(&cs->exit_request, true);
cpu_loop_exit(cs);
return H_SUCCESS;
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 8420c4090ef..34e74f24470 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -5486,10 +5486,10 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_INIT) &&
!(env->hflags & HF_SMM_MASK)) {
- qatomic_set(&cpu->exit_request, 1);
+ qatomic_set(&cpu->exit_request, true);
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_TPR)) {
- qatomic_set(&cpu->exit_request, 1);
+ qatomic_set(&cpu->exit_request, true);
}
}
@@ -5604,7 +5604,7 @@ int kvm_arch_process_async_events(CPUState *cs)
if (env->exception_nr == EXCP08_DBLE) {
/* this means triple fault */
qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
- cs->exit_request = 1;
+ qatomic_set(&cs->exit_request, true);
return 0;
}
kvm_queue_exception(env, EXCP12_MCHK, 0, 0);
diff --git a/target/i386/nvmm/nvmm-accel-ops.c b/target/i386/nvmm/nvmm-accel-ops.c
index 3799260bbde..86869f133e9 100644
--- a/target/i386/nvmm/nvmm-accel-ops.c
+++ b/target/i386/nvmm/nvmm-accel-ops.c
@@ -77,7 +77,7 @@ static void nvmm_start_vcpu_thread(CPUState *cpu)
*/
static void nvmm_kick_vcpu_thread(CPUState *cpu)
{
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
cpus_kick_thread(cpu);
}
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index 10bd51d9b59..7e36c42fbb4 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -414,7 +414,7 @@ nvmm_vcpu_pre_run(CPUState *cpu)
* or commit pending TPR access.
*/
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
}
if (!has_event && cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI)) {
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 2106c29c3a0..00fb7e23100 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -1489,10 +1489,10 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_INIT) &&
!(env->hflags & HF_SMM_MASK)) {
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
}
if (cpu_test_interrupt(cpu, CPU_INTERRUPT_TPR)) {
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
}
}
@@ -1539,7 +1539,7 @@ static void whpx_vcpu_pre_run(CPUState *cpu)
if (tpr != vcpu->tpr) {
vcpu->tpr = tpr;
reg_values[reg_count].Reg64 = tpr;
- cpu->exit_request = 1;
+ qatomic_set(&cpu->exit_request, true);
reg_names[reg_count] = WHvX64RegisterCr8;
reg_count += 1;
}
--
2.51.0
* [PATCH 11/19] accel/tcg: create a thread-kick function for TCG
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (9 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 10/19] accel: use atomic accesses for exit_request Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 12/19] accel/tcg: inline cpu_exit() Paolo Bonzini
` (7 subsequent siblings)
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Igor Mammedov, Philippe Mathieu-Daudé, Richard Henderson
Round-robin TCG calls cpu_exit() directly. In preparation
for making cpu_exit() usable from all accelerators, define a generic
thread-kick function for TCG which is used directly in the multi-threaded
case, and through CPU_FOREACH in the round-robin case.
Use it also for user-mode emulation, and take the occasion to move
the implementation to accel/tcg/user-exec.c.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
docs/devel/tcg-icount.rst | 2 +-
accel/tcg/tcg-accel-ops-mttcg.h | 3 ---
accel/tcg/tcg-accel-ops.h | 1 +
accel/tcg/cpu-exec.c | 6 ++++++
accel/tcg/tcg-accel-ops-mttcg.c | 5 -----
accel/tcg/tcg-accel-ops-rr.c | 2 +-
accel/tcg/tcg-accel-ops.c | 2 +-
accel/tcg/user-exec.c | 6 ++++++
bsd-user/main.c | 5 -----
linux-user/main.c | 5 -----
10 files changed, 16 insertions(+), 21 deletions(-)
diff --git a/docs/devel/tcg-icount.rst b/docs/devel/tcg-icount.rst
index 7df883446a7..a1dcd79e0fd 100644
--- a/docs/devel/tcg-icount.rst
+++ b/docs/devel/tcg-icount.rst
@@ -37,7 +37,7 @@ translator starts by allocating a budget of instructions to be
executed. The budget of instructions is limited by how long it will be
until the next timer will expire. We store this budget as part of a
vCPU icount_decr field which shared with the machinery for handling
-cpu_exit(). The whole field is checked at the start of every
+qemu_cpu_kick(). The whole field is checked at the start of every
translated block and will cause a return to the outer loop to deal
with whatever caused the exit.
diff --git a/accel/tcg/tcg-accel-ops-mttcg.h b/accel/tcg/tcg-accel-ops-mttcg.h
index 8ffa7a9a9fe..5c145cc8595 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.h
+++ b/accel/tcg/tcg-accel-ops-mttcg.h
@@ -10,9 +10,6 @@
#ifndef TCG_ACCEL_OPS_MTTCG_H
#define TCG_ACCEL_OPS_MTTCG_H
-/* kick MTTCG vCPU thread */
-void mttcg_kick_vcpu_thread(CPUState *cpu);
-
/* start an mttcg vCPU thread */
void mttcg_start_vcpu_thread(CPUState *cpu);
diff --git a/accel/tcg/tcg-accel-ops.h b/accel/tcg/tcg-accel-ops.h
index 6feeb3f3e9b..aecce605d7b 100644
--- a/accel/tcg/tcg-accel-ops.h
+++ b/accel/tcg/tcg-accel-ops.h
@@ -18,5 +18,6 @@ void tcg_cpu_destroy(CPUState *cpu);
int tcg_cpu_exec(CPUState *cpu);
void tcg_handle_interrupt(CPUState *cpu, int mask);
void tcg_cpu_init_cflags(CPUState *cpu, bool parallel);
+void tcg_kick_vcpu_thread(CPUState *cpu);
#endif /* TCG_ACCEL_OPS_H */
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index f838535d111..9241bcadb5f 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -40,6 +40,7 @@
#include "exec/replay-core.h"
#include "system/tcg.h"
#include "exec/helper-proto-common.h"
+#include "tcg-accel-ops.h"
#include "tb-jmp-cache.h"
#include "tb-hash.h"
#include "tb-context.h"
@@ -748,6 +749,11 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
return false;
}
+void tcg_kick_vcpu_thread(CPUState *cpu)
+{
+ cpu_exit(cpu);
+}
+
static inline bool icount_exit_request(CPUState *cpu)
{
if (!icount_enabled()) {
diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index b12b7a36b5d..1148ebcaae5 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -123,11 +123,6 @@ static void *mttcg_cpu_thread_fn(void *arg)
return NULL;
}
-void mttcg_kick_vcpu_thread(CPUState *cpu)
-{
- cpu_exit(cpu);
-}
-
void mttcg_start_vcpu_thread(CPUState *cpu)
{
char thread_name[VCPU_THREAD_NAME_SIZE];
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index d13e0d8b44d..a1d75fd3419 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -43,7 +43,7 @@ void rr_kick_vcpu_thread(CPUState *unused)
CPUState *cpu;
CPU_FOREACH(cpu) {
- cpu_exit(cpu);
+ tcg_kick_vcpu_thread(cpu);
};
}
diff --git a/accel/tcg/tcg-accel-ops.c b/accel/tcg/tcg-accel-ops.c
index 9c37266c1e0..1f662a9c745 100644
--- a/accel/tcg/tcg-accel-ops.c
+++ b/accel/tcg/tcg-accel-ops.c
@@ -206,7 +206,7 @@ static void tcg_accel_ops_init(AccelClass *ac)
if (qemu_tcg_mttcg_enabled()) {
ops->create_vcpu_thread = mttcg_start_vcpu_thread;
- ops->kick_vcpu_thread = mttcg_kick_vcpu_thread;
+ ops->kick_vcpu_thread = tcg_kick_vcpu_thread;
ops->handle_interrupt = tcg_handle_interrupt;
} else {
ops->create_vcpu_thread = rr_start_vcpu_thread;
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 66c25fba7dd..3c072fd868f 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -38,6 +38,7 @@
#include "qemu/int128.h"
#include "trace.h"
#include "tcg/tcg-ldst.h"
+#include "tcg-accel-ops.h"
#include "backend-ldst.h"
#include "internal-common.h"
#include "tb-internal.h"
@@ -46,6 +47,11 @@ __thread uintptr_t helper_retaddr;
//#define DEBUG_SIGNAL
+void qemu_cpu_kick(CPUState *cpu)
+{
+ tcg_kick_vcpu_thread(cpu);
+}
+
/*
* Adjust the pc to pass to cpu_restore_state; return the memop type.
*/
diff --git a/bsd-user/main.c b/bsd-user/main.c
index 9ba69642f50..73aae8c3274 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -214,11 +214,6 @@ bool qemu_cpu_is_self(CPUState *cpu)
return thread_cpu == cpu;
}
-void qemu_cpu_kick(CPUState *cpu)
-{
- cpu_exit(cpu);
-}
-
/* Assumes contents are already zeroed. */
static void init_task_state(TaskState *ts)
{
diff --git a/linux-user/main.c b/linux-user/main.c
index 7b0ccb6fd60..4ddfc9a619c 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -189,11 +189,6 @@ bool qemu_cpu_is_self(CPUState *cpu)
return thread_cpu == cpu;
}
-void qemu_cpu_kick(CPUState *cpu)
-{
- cpu_exit(cpu);
-}
-
void task_settid(TaskState *ts)
{
if (ts->ts_tid == 0) {
--
2.51.0
* [PATCH 12/19] accel/tcg: inline cpu_exit()
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (10 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 11/19] accel/tcg: create a thread-kick function for TCG Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 13/19] cpus: remove TCG-ism from cpu_exit() Paolo Bonzini
` (6 subsequent siblings)
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Igor Mammedov, Richard Henderson
Right now, cpu_exit() is not usable from all accelerators because it
includes a TCG-specific thread kick. In fact, cpu_exit() doubles as
the TCG thread-kick via tcg_kick_vcpu_thread().
In preparation for changing that, inline cpu_exit() into
tcg_kick_vcpu_thread(). The direction of the calls can then be
reversed, with an accelerator-independent cpu_exit() calling into
qemu_cpu_kick() rather than the opposite.
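For reference, a rough C11 sketch of the protocol that the inlined code
implements (names are simplified stand-ins, not QEMU's; icount_decr_high
models cpu->neg.icount_decr.u16.high):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        _Atomic bool exit_request;
        _Atomic uint16_t icount_decr_high;
    } VCPUSketch;

    /* kicking thread, as in tcg_kick_vcpu_thread() after this patch */
    static void kick(VCPUSketch *cpu)
    {
        /* publish the reason for the exit first... */
        atomic_store_explicit(&cpu->exit_request, true, memory_order_release);
        /* ...then force translated code to stop at the next TB boundary */
        atomic_store_explicit(&cpu->icount_decr_high, (uint16_t)-1,
                              memory_order_release);
    }

    /* vCPU thread, polled at the start of every translation block */
    static bool must_exit(VCPUSketch *cpu)
    {
        return atomic_load_explicit(&cpu->icount_decr_high,
                                    memory_order_relaxed) != 0;
    }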
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/tcg/cpu-exec.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 9241bcadb5f..3ae545e888f 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -751,7 +751,16 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
void tcg_kick_vcpu_thread(CPUState *cpu)
{
- cpu_exit(cpu);
+ /*
+ * Ensure cpu_exec will see the reason why the exit request was set.
+ * FIXME: this is not always needed. Other accelerators instead
+ * read interrupt_request and set exit_request on demand from the
+ * CPU thread; see kvm_arch_pre_run() for example.
+ */
+ qatomic_store_release(&cpu->exit_request, true);
+
+ /* Ensure cpu_exec will see the exit request after TCG has exited. */
+ qatomic_store_release(&cpu->neg.icount_decr.u16.high, -1);
}
static inline bool icount_exit_request(CPUState *cpu)
@@ -780,7 +789,8 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
/* Clear the interrupt flag now since we're processing
* cpu->interrupt_request and cpu->exit_request.
* Ensure zeroing happens before reading cpu->exit_request or
- * cpu->interrupt_request (see also smp_wmb in cpu_exit())
+ * cpu->interrupt_request (see also store-release in
+ * tcg_kick_vcpu_thread())
*/
qatomic_set_mb(&cpu->neg.icount_decr.u16.high, 0);
--
2.51.0
* [PATCH 13/19] cpus: remove TCG-ism from cpu_exit()
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (11 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 12/19] accel/tcg: inline cpu_exit() Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 14/19] cpus: properly kick CPUs out of inner execution loop Paolo Bonzini
` (5 subsequent siblings)
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Richard Henderson, Igor Mammedov
Now that TCG has its own kick function, make cpu_exit() do the right kick
for all accelerators.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/core/cpu-common.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
index 42463e6258d..41a339903ca 100644
--- a/hw/core/cpu-common.c
+++ b/hw/core/cpu-common.c
@@ -76,9 +76,7 @@ void cpu_exit(CPUState *cpu)
{
/* Ensure cpu_exec will see the reason why the exit request was set. */
qatomic_store_release(&cpu->exit_request, true);
- /* Ensure cpu_exec will see the exit request after TCG has exited. */
- smp_wmb();
- qatomic_set(&cpu->neg.icount_decr.u16.high, -1);
+ qemu_cpu_kick(cpu);
}
static int cpu_common_gdb_read_register(CPUState *cpu, GByteArray *buf, int reg)
--
2.51.0
* [PATCH 14/19] cpus: properly kick CPUs out of inner execution loop
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (12 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 13/19] cpus: remove TCG-ism from cpu_exit() Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-10 7:13 ` Richard Henderson
2025-09-08 10:12 ` [PATCH 15/19] treewide: rename qemu_wait_io_event/qemu_wait_io_event_common Paolo Bonzini
` (4 subsequent siblings)
18 siblings, 1 reply; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel
Now that cpu_exit() actually kicks all accelerators, use it whenever
the request sent to another thread is processed in qemu_wait_io_event().
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpu-common.c | 3 ++-
hw/ppc/ppc.c | 2 ++
hw/ppc/spapr_hcall.c | 7 +++----
hw/ppc/spapr_rtas.c | 2 +-
replay/replay-events.c | 3 ++-
system/cpu-timers.c | 6 +++---
system/cpus.c | 5 +++--
target/arm/tcg/mte_helper.c | 2 +-
target/i386/kvm/hyperv.c | 1 -
9 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/cpu-common.c b/cpu-common.c
index ef5757d23bf..152661df8e9 100644
--- a/cpu-common.c
+++ b/cpu-common.c
@@ -137,7 +137,8 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
wi->done = false;
qemu_mutex_unlock(&cpu->work_mutex);
- qemu_cpu_kick(cpu);
+ /* exit the inner loop and reach qemu_wait_io_event_common(). */
+ cpu_exit(cpu);
}
void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index 43d0d0e7553..3e436c70413 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -190,6 +190,7 @@ static void ppc970_set_irq(void *opaque, int pin, int level)
if (level) {
trace_ppc_irq_cpu("stop");
cs->halted = 1;
+ cpu_exit(cs);
} else {
trace_ppc_irq_cpu("restart");
cs->halted = 0;
@@ -386,6 +387,7 @@ static void ppc40x_set_irq(void *opaque, int pin, int level)
if (level) {
trace_ppc_irq_cpu("stop");
cs->halted = 1;
+ cpu_exit(cs);
} else {
trace_ppc_irq_cpu("restart");
cs->halted = 0;
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 51875e32a09..c594d4b916e 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -509,8 +509,8 @@ static target_ulong h_cede(PowerPCCPU *cpu, SpaprMachineState *spapr,
if (!cpu_has_work(cs)) {
cs->halted = 1;
cs->exception_index = EXCP_HLT;
- qatomic_set(&cs->exit_request, true);
ppc_maybe_interrupt(env);
+ cpu_exit(cs);
}
return H_SUCCESS;
@@ -531,8 +531,8 @@ static target_ulong h_confer_self(PowerPCCPU *cpu)
}
cs->halted = 1;
cs->exception_index = EXCP_HALTED;
- qatomic_set(&cs->exit_request, true);
ppc_maybe_interrupt(&cpu->env);
+ cpu_exit(cs);
return H_SUCCESS;
}
@@ -624,8 +624,7 @@ static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
}
cs->exception_index = EXCP_YIELD;
- qatomic_set(&cs->exit_request, true);
- cpu_loop_exit(cs);
+ cpu_exit(cs);
return H_SUCCESS;
}
diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
index 78309dbb09d..143bc8c3794 100644
--- a/hw/ppc/spapr_rtas.c
+++ b/hw/ppc/spapr_rtas.c
@@ -221,7 +221,7 @@ static void rtas_stop_self(PowerPCCPU *cpu, SpaprMachineState *spapr,
cs->halted = 1;
ppc_store_lpcr(cpu, env->spr[SPR_LPCR] & ~pcc->lpcr_pm);
kvmppc_set_reg_ppc_online(cpu, 0);
- qemu_cpu_kick(cs);
+ cpu_exit(cs);
}
static void rtas_ibm_suspend_me(PowerPCCPU *cpu, SpaprMachineState *spapr,
diff --git a/replay/replay-events.c b/replay/replay-events.c
index 8959da9f1fa..a96e47e7740 100644
--- a/replay/replay-events.c
+++ b/replay/replay-events.c
@@ -118,7 +118,8 @@ void replay_add_event(ReplayAsyncEventKind event_kind,
g_assert(replay_mutex_locked());
QTAILQ_INSERT_TAIL(&events_list, event, events);
- qemu_cpu_kick(first_cpu);
+ /* Kick the TCG thread out of tcg_cpu_exec(). */
+ cpu_exit(first_cpu);
}
void replay_bh_schedule_event(QEMUBH *bh)
diff --git a/system/cpu-timers.c b/system/cpu-timers.c
index cb35fa62b8a..9919b46230f 100644
--- a/system/cpu-timers.c
+++ b/system/cpu-timers.c
@@ -246,14 +246,14 @@ void qemu_timer_notify_cb(void *opaque, QEMUClockType type)
if (qemu_in_vcpu_thread()) {
/*
- * A CPU is currently running; kick it back out to the
+ * A CPU is currently running; send it out of the
* tcg_cpu_exec() loop so it will recalculate its
* icount deadline immediately.
*/
- qemu_cpu_kick(current_cpu);
+ cpu_exit(current_cpu);
} else if (first_cpu) {
/*
- * qemu_cpu_kick is not enough to kick a halted CPU out of
+ * cpu_exit() is not enough to kick a halted CPU out of
* qemu_tcg_wait_io_event. async_run_on_cpu, instead,
* causes cpu_thread_is_idle to return false. This way,
* handle_icount_deadline can run.
diff --git a/system/cpus.c b/system/cpus.c
index 9bfbe2b0607..bb13942cbb7 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -604,7 +604,7 @@ void cpu_pause(CPUState *cpu)
qemu_cpu_stop(cpu, true);
} else {
cpu->stop = true;
- qemu_cpu_kick(cpu);
+ cpu_exit(cpu);
}
}
@@ -644,6 +644,7 @@ void pause_all_vcpus(void)
while (!all_vcpus_paused()) {
qemu_cond_wait(&qemu_pause_cond, &bql);
+ /* FIXME: is this needed? */
CPU_FOREACH(cpu) {
qemu_cpu_kick(cpu);
}
@@ -672,7 +673,7 @@ void cpu_remove_sync(CPUState *cpu)
{
cpu->stop = true;
cpu->unplug = true;
- qemu_cpu_kick(cpu);
+ cpu_exit(cpu);
bql_unlock();
qemu_thread_join(cpu->thread);
bql_lock();
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 0efc18a181e..302e899287c 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -591,7 +591,7 @@ static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr,
* which is rather sooner than "normal". But the alternative
* is waiting until the next syscall.
*/
- qemu_cpu_kick(env_cpu(env));
+ cpu_exit(env_cpu(env));
#endif
}
diff --git a/target/i386/kvm/hyperv.c b/target/i386/kvm/hyperv.c
index 9865120cc43..f7a81bd2700 100644
--- a/target/i386/kvm/hyperv.c
+++ b/target/i386/kvm/hyperv.c
@@ -81,7 +81,6 @@ int kvm_hv_handle_exit(X86CPU *cpu, struct kvm_hyperv_exit *exit)
* necessary because memory hierarchy is being changed
*/
async_safe_run_on_cpu(CPU(cpu), async_synic_update, RUN_ON_CPU_NULL);
- cpu_exit(CPU(cpu));
return EXCP_INTERRUPT;
case KVM_EXIT_HYPERV_HCALL: {
--
2.51.0
* [PATCH 15/19] treewide: rename qemu_wait_io_event/qemu_wait_io_event_common
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (13 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 14/19] cpus: properly kick CPUs out of inner execution loop Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-10 7:14 ` Richard Henderson
2025-09-08 10:12 ` [PATCH 16/19] bsd-user, linux-user: introduce qemu_process_cpu_events Paolo Bonzini
` (3 subsequent siblings)
18 siblings, 1 reply; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel
Do so before extending it to the user-mode emulators, where there is no
such thing as an "I/O thread".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/hw/core/cpu.h | 2 +-
include/system/cpus.h | 4 ++--
accel/dummy-cpus.c | 2 +-
accel/hvf/hvf-accel-ops.c | 2 +-
accel/kvm/kvm-accel-ops.c | 2 +-
accel/tcg/tcg-accel-ops-mttcg.c | 2 +-
accel/tcg/tcg-accel-ops-rr.c | 4 ++--
cpu-common.c | 2 +-
system/cpus.c | 6 +++---
target/i386/nvmm/nvmm-accel-ops.c | 2 +-
target/i386/whpx/whpx-accel-ops.c | 2 +-
11 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 338757e5254..6075be0b59f 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -422,7 +422,7 @@ struct qemu_work_item;
* valid under cpu_list_lock.
* @created: Indicates whether the CPU thread has been successfully created.
* @halt_cond: condition variable sleeping threads can wait on.
- * @exit_request: Another thread requests the CPU to call qemu_wait_io_event().
+ * @exit_request: Another thread requests the CPU to call qemu_process_cpu_events().
* Should be read only by CPU thread with load-acquire, to synchronize with
* other threads' store-release operation.
*
diff --git a/include/system/cpus.h b/include/system/cpus.h
index 69be6a77a75..4aebec48705 100644
--- a/include/system/cpus.h
+++ b/include/system/cpus.h
@@ -17,8 +17,8 @@ bool cpu_work_list_empty(CPUState *cpu);
bool cpu_thread_is_idle(CPUState *cpu);
bool all_cpu_threads_idle(void);
bool cpu_can_run(CPUState *cpu);
-void qemu_wait_io_event_common(CPUState *cpu);
-void qemu_wait_io_event(CPUState *cpu);
+void qemu_process_cpu_events_common(CPUState *cpu);
+void qemu_process_cpu_events(CPUState *cpu);
void cpu_thread_signal_created(CPUState *cpu);
void cpu_thread_signal_destroyed(CPUState *cpu);
void cpu_handle_guest_debug(CPUState *cpu);
diff --git a/accel/dummy-cpus.c b/accel/dummy-cpus.c
index 03cfc0fa01e..225a47c31fd 100644
--- a/accel/dummy-cpus.c
+++ b/accel/dummy-cpus.c
@@ -57,7 +57,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_sem_wait(&cpu->sem);
#endif
bql_lock();
- qemu_wait_io_event(cpu);
+ qemu_process_cpu_events(cpu);
} while (!cpu->unplug);
bql_unlock();
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index d488d6afbac..7a27bdadb4f 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -198,7 +198,7 @@ static void *hvf_cpu_thread_fn(void *arg)
cpu_handle_guest_debug(cpu);
}
}
- qemu_wait_io_event(cpu);
+ qemu_process_cpu_events(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
hvf_vcpu_destroy(cpu);
diff --git a/accel/kvm/kvm-accel-ops.c b/accel/kvm/kvm-accel-ops.c
index b709187c7d7..65a7f76a69a 100644
--- a/accel/kvm/kvm-accel-ops.c
+++ b/accel/kvm/kvm-accel-ops.c
@@ -53,7 +53,7 @@ static void *kvm_vcpu_thread_fn(void *arg)
cpu_handle_guest_debug(cpu);
}
}
- qemu_wait_io_event(cpu);
+ qemu_process_cpu_events(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
kvm_destroy_vcpu(cpu);
diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index 1148ebcaae5..342917c5f6e 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -113,7 +113,7 @@ static void *mttcg_cpu_thread_fn(void *arg)
}
}
- qemu_wait_io_event(cpu);
+ qemu_process_cpu_events(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu);
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index a1d75fd3419..813b313859a 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -117,7 +117,7 @@ static void rr_wait_io_event(void)
rr_start_kick_timer();
CPU_FOREACH(cpu) {
- qemu_wait_io_event_common(cpu);
+ qemu_process_cpu_events_common(cpu);
}
}
@@ -203,7 +203,7 @@ static void *rr_cpu_thread_fn(void *arg)
/* process any pending work */
CPU_FOREACH(cpu) {
current_cpu = cpu;
- qemu_wait_io_event_common(cpu);
+ qemu_process_cpu_events_common(cpu);
}
}
diff --git a/cpu-common.c b/cpu-common.c
index 152661df8e9..0eb5c7b8f24 100644
--- a/cpu-common.c
+++ b/cpu-common.c
@@ -137,7 +137,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
wi->done = false;
qemu_mutex_unlock(&cpu->work_mutex);
- /* exit the inner loop and reach qemu_wait_io_event_common(). */
+ /* exit the inner loop and reach qemu_process_cpu_events_common(). */
cpu_exit(cpu);
}
diff --git a/system/cpus.c b/system/cpus.c
index bb13942cbb7..fd804e07327 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -450,7 +450,7 @@ static void qemu_cpu_stop(CPUState *cpu, bool exit)
qemu_cond_broadcast(&qemu_pause_cond);
}
-void qemu_wait_io_event_common(CPUState *cpu)
+void qemu_process_cpu_events_common(CPUState *cpu)
{
qatomic_set_mb(&cpu->thread_kicked, false);
if (cpu->stop) {
@@ -459,7 +459,7 @@ void qemu_wait_io_event_common(CPUState *cpu)
process_queued_cpu_work(cpu);
}
-void qemu_wait_io_event(CPUState *cpu)
+void qemu_process_cpu_events(CPUState *cpu)
{
bool slept = false;
@@ -474,7 +474,7 @@ void qemu_wait_io_event(CPUState *cpu)
qemu_plugin_vcpu_resume_cb(cpu);
}
- qemu_wait_io_event_common(cpu);
+ qemu_process_cpu_events_common(cpu);
}
void cpus_kick_thread(CPUState *cpu)
diff --git a/target/i386/nvmm/nvmm-accel-ops.c b/target/i386/nvmm/nvmm-accel-ops.c
index 86869f133e9..d066364b989 100644
--- a/target/i386/nvmm/nvmm-accel-ops.c
+++ b/target/i386/nvmm/nvmm-accel-ops.c
@@ -51,7 +51,7 @@ static void *qemu_nvmm_cpu_thread_fn(void *arg)
while (cpu_thread_is_idle(cpu)) {
qemu_cond_wait_bql(cpu->halt_cond);
}
- qemu_wait_io_event_common(cpu);
+ qemu_process_cpu_events_common(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
nvmm_destroy_vcpu(cpu);
diff --git a/target/i386/whpx/whpx-accel-ops.c b/target/i386/whpx/whpx-accel-ops.c
index da58805b1a6..2ca4ee02636 100644
--- a/target/i386/whpx/whpx-accel-ops.c
+++ b/target/i386/whpx/whpx-accel-ops.c
@@ -51,7 +51,7 @@ static void *whpx_cpu_thread_fn(void *arg)
while (cpu_thread_is_idle(cpu)) {
qemu_cond_wait_bql(cpu->halt_cond);
}
- qemu_wait_io_event_common(cpu);
+ qemu_process_cpu_events_common(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
whpx_destroy_vcpu(cpu);
--
2.51.0
* [PATCH 16/19] bsd-user, linux-user: introduce qemu_process_cpu_events
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (14 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 15/19] treewide: rename qemu_wait_io_event/qemu_wait_io_event_common Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 17/19] cpus: clear exit_request in qemu_process_cpu_events Paolo Bonzini
` (2 subsequent siblings)
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Richard Henderson
Add a user-mode emulation version of the function. More will be
added later; for now it just calls process_queued_cpu_work().
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
bsd-user/aarch64/target_arch_cpu.h | 2 +-
bsd-user/arm/target_arch_cpu.h | 2 +-
bsd-user/i386/target_arch_cpu.h | 2 +-
bsd-user/riscv/target_arch_cpu.h | 2 +-
bsd-user/x86_64/target_arch_cpu.h | 2 +-
include/hw/core/cpu.h | 9 +++++++++
include/system/cpus.h | 1 -
accel/tcg/user-exec.c | 5 +++++
linux-user/aarch64/cpu_loop.c | 2 +-
linux-user/alpha/cpu_loop.c | 2 +-
linux-user/arm/cpu_loop.c | 2 +-
linux-user/hexagon/cpu_loop.c | 2 +-
linux-user/hppa/cpu_loop.c | 2 +-
linux-user/i386/cpu_loop.c | 2 +-
linux-user/loongarch64/cpu_loop.c | 2 +-
linux-user/m68k/cpu_loop.c | 2 +-
linux-user/microblaze/cpu_loop.c | 2 +-
linux-user/mips/cpu_loop.c | 2 +-
linux-user/openrisc/cpu_loop.c | 2 +-
linux-user/ppc/cpu_loop.c | 2 +-
linux-user/riscv/cpu_loop.c | 2 +-
linux-user/s390x/cpu_loop.c | 2 +-
linux-user/sh4/cpu_loop.c | 2 +-
linux-user/sparc/cpu_loop.c | 2 +-
linux-user/xtensa/cpu_loop.c | 2 +-
25 files changed, 36 insertions(+), 23 deletions(-)
diff --git a/bsd-user/aarch64/target_arch_cpu.h b/bsd-user/aarch64/target_arch_cpu.h
index 87fbf6d6775..15df84fda21 100644
--- a/bsd-user/aarch64/target_arch_cpu.h
+++ b/bsd-user/aarch64/target_arch_cpu.h
@@ -54,7 +54,7 @@ static inline G_NORETURN void target_cpu_loop(CPUARMState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_SWI:
diff --git a/bsd-user/arm/target_arch_cpu.h b/bsd-user/arm/target_arch_cpu.h
index bc2eaa0bf4e..9a952ef0ff7 100644
--- a/bsd-user/arm/target_arch_cpu.h
+++ b/bsd-user/arm/target_arch_cpu.h
@@ -46,7 +46,7 @@ static inline G_NORETURN void target_cpu_loop(CPUARMState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_UDEF:
case EXCP_NOCP:
diff --git a/bsd-user/i386/target_arch_cpu.h b/bsd-user/i386/target_arch_cpu.h
index 5d4c931decd..f147d5b6f85 100644
--- a/bsd-user/i386/target_arch_cpu.h
+++ b/bsd-user/i386/target_arch_cpu.h
@@ -113,7 +113,7 @@ static inline G_NORETURN void target_cpu_loop(CPUX86State *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case 0x80: {
diff --git a/bsd-user/riscv/target_arch_cpu.h b/bsd-user/riscv/target_arch_cpu.h
index ef92f004803..ad428d0263d 100644
--- a/bsd-user/riscv/target_arch_cpu.h
+++ b/bsd-user/riscv/target_arch_cpu.h
@@ -49,7 +49,7 @@ static inline G_NORETURN void target_cpu_loop(CPURISCVState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
signo = 0;
diff --git a/bsd-user/x86_64/target_arch_cpu.h b/bsd-user/x86_64/target_arch_cpu.h
index f82042e30af..1fa71d87f12 100644
--- a/bsd-user/x86_64/target_arch_cpu.h
+++ b/bsd-user/x86_64/target_arch_cpu.h
@@ -121,7 +121,7 @@ static inline G_NORETURN void target_cpu_loop(CPUX86State *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_SYSCALL:
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 6075be0b59f..fb788ca1107 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -1145,6 +1145,15 @@ AddressSpace *cpu_get_address_space(CPUState *cpu, int asidx);
G_NORETURN void cpu_abort(CPUState *cpu, const char *fmt, ...)
G_GNUC_PRINTF(2, 3);
+/**
+ * qemu_process_cpu_events:
+ * @cpu: CPU that left the execution loop
+ *
+ * Perform accelerator-independent work after the CPU has left
+ * the inner execution loop.
+ */
+void qemu_process_cpu_events(CPUState *cpu);
+
/* $(top_srcdir)/cpu.c */
void cpu_class_init_props(DeviceClass *dc);
void cpu_exec_class_post_init(CPUClass *cc);
diff --git a/include/system/cpus.h b/include/system/cpus.h
index 4aebec48705..508444ccf1c 100644
--- a/include/system/cpus.h
+++ b/include/system/cpus.h
@@ -18,7 +18,6 @@ bool cpu_thread_is_idle(CPUState *cpu);
bool all_cpu_threads_idle(void);
bool cpu_can_run(CPUState *cpu);
void qemu_process_cpu_events_common(CPUState *cpu);
-void qemu_process_cpu_events(CPUState *cpu);
void cpu_thread_signal_created(CPUState *cpu);
void cpu_thread_signal_destroyed(CPUState *cpu);
void cpu_handle_guest_debug(CPUState *cpu);
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 3c072fd868f..65f5da6c506 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -52,6 +52,11 @@ void qemu_cpu_kick(CPUState *cpu)
tcg_kick_vcpu_thread(cpu);
}
+void qemu_process_cpu_events(CPUState *cpu)
+{
+ process_queued_cpu_work(cpu);
+}
+
/*
* Adjust the pc to pass to cpu_restore_state; return the memop type.
*/
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index 4c4921152e8..f6b498c6c43 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -38,7 +38,7 @@ void cpu_loop(CPUARMState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_SWI:
diff --git a/linux-user/alpha/cpu_loop.c b/linux-user/alpha/cpu_loop.c
index 728b64906d9..bb8346b5094 100644
--- a/linux-user/alpha/cpu_loop.c
+++ b/linux-user/alpha/cpu_loop.c
@@ -35,7 +35,7 @@ void cpu_loop(CPUAlphaState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_RESET:
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
index 9aeb9b0087f..cd89b7d6f5e 100644
--- a/linux-user/arm/cpu_loop.c
+++ b/linux-user/arm/cpu_loop.c
@@ -295,7 +295,7 @@ void cpu_loop(CPUARMState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch(trapnr) {
case EXCP_UDEF:
diff --git a/linux-user/hexagon/cpu_loop.c b/linux-user/hexagon/cpu_loop.c
index 25c97edcaef..1941f4c9c16 100644
--- a/linux-user/hexagon/cpu_loop.c
+++ b/linux-user/hexagon/cpu_loop.c
@@ -36,7 +36,7 @@ void cpu_loop(CPUHexagonState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_INTERRUPT:
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
index 3af50653bb7..356cb48acc3 100644
--- a/linux-user/hppa/cpu_loop.c
+++ b/linux-user/hppa/cpu_loop.c
@@ -119,7 +119,7 @@ void cpu_loop(CPUHPPAState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_SYSCALL:
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
index 7b2d8b03d84..f3f58576af5 100644
--- a/linux-user/i386/cpu_loop.c
+++ b/linux-user/i386/cpu_loop.c
@@ -214,7 +214,7 @@ void cpu_loop(CPUX86State *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch(trapnr) {
case 0x80:
diff --git a/linux-user/loongarch64/cpu_loop.c b/linux-user/loongarch64/cpu_loop.c
index a0a4cbb7cc3..26a5ce3a936 100644
--- a/linux-user/loongarch64/cpu_loop.c
+++ b/linux-user/loongarch64/cpu_loop.c
@@ -27,7 +27,7 @@ void cpu_loop(CPULoongArchState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_INTERRUPT:
diff --git a/linux-user/m68k/cpu_loop.c b/linux-user/m68k/cpu_loop.c
index aca0bf23dc6..2c9f628241f 100644
--- a/linux-user/m68k/cpu_loop.c
+++ b/linux-user/m68k/cpu_loop.c
@@ -33,7 +33,7 @@ void cpu_loop(CPUM68KState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch(trapnr) {
case EXCP_ILLEGAL:
diff --git a/linux-user/microblaze/cpu_loop.c b/linux-user/microblaze/cpu_loop.c
index d8277961c73..78506ab23d9 100644
--- a/linux-user/microblaze/cpu_loop.c
+++ b/linux-user/microblaze/cpu_loop.c
@@ -32,7 +32,7 @@ void cpu_loop(CPUMBState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_INTERRUPT:
diff --git a/linux-user/mips/cpu_loop.c b/linux-user/mips/cpu_loop.c
index e67b8a2e463..2365de1de1a 100644
--- a/linux-user/mips/cpu_loop.c
+++ b/linux-user/mips/cpu_loop.c
@@ -74,7 +74,7 @@ void cpu_loop(CPUMIPSState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch(trapnr) {
case EXCP_SYSCALL:
diff --git a/linux-user/openrisc/cpu_loop.c b/linux-user/openrisc/cpu_loop.c
index 8c72347a99a..2167d880d55 100644
--- a/linux-user/openrisc/cpu_loop.c
+++ b/linux-user/openrisc/cpu_loop.c
@@ -33,7 +33,7 @@ void cpu_loop(CPUOpenRISCState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_SYSCALL:
diff --git a/linux-user/ppc/cpu_loop.c b/linux-user/ppc/cpu_loop.c
index 22885ffd906..b0b0cb14b41 100644
--- a/linux-user/ppc/cpu_loop.c
+++ b/linux-user/ppc/cpu_loop.c
@@ -77,7 +77,7 @@ void cpu_loop(CPUPPCState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
arch_interrupt = true;
switch (trapnr) {
diff --git a/linux-user/riscv/cpu_loop.c b/linux-user/riscv/cpu_loop.c
index b3162815320..ce542540c28 100644
--- a/linux-user/riscv/cpu_loop.c
+++ b/linux-user/riscv/cpu_loop.c
@@ -36,7 +36,7 @@ void cpu_loop(CPURISCVState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_INTERRUPT:
diff --git a/linux-user/s390x/cpu_loop.c b/linux-user/s390x/cpu_loop.c
index 49e44548f85..4929b32e1fc 100644
--- a/linux-user/s390x/cpu_loop.c
+++ b/linux-user/s390x/cpu_loop.c
@@ -64,7 +64,7 @@ void cpu_loop(CPUS390XState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case EXCP_INTERRUPT:
diff --git a/linux-user/sh4/cpu_loop.c b/linux-user/sh4/cpu_loop.c
index 259ea1cc8bb..0c9d7e9c46b 100644
--- a/linux-user/sh4/cpu_loop.c
+++ b/linux-user/sh4/cpu_loop.c
@@ -34,7 +34,7 @@ void cpu_loop(CPUSH4State *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case 0x160:
diff --git a/linux-user/sparc/cpu_loop.c b/linux-user/sparc/cpu_loop.c
index 7d30cd1ff22..7391e2add8d 100644
--- a/linux-user/sparc/cpu_loop.c
+++ b/linux-user/sparc/cpu_loop.c
@@ -220,7 +220,7 @@ void cpu_loop (CPUSPARCState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
switch (trapnr) {
case TARGET_TT_SYSCALL:
diff --git a/linux-user/xtensa/cpu_loop.c b/linux-user/xtensa/cpu_loop.c
index 43a194fc4a4..a0ff10eff82 100644
--- a/linux-user/xtensa/cpu_loop.c
+++ b/linux-user/xtensa/cpu_loop.c
@@ -133,7 +133,7 @@ void cpu_loop(CPUXtensaState *env)
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
- process_queued_cpu_work(cs);
+ qemu_process_cpu_events(cs);
env->sregs[PS] &= ~PS_EXCM;
switch (trapnr) {
--
2.51.0
* [PATCH 17/19] cpus: clear exit_request in qemu_process_cpu_events
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (15 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 16/19] bsd-user, linux-user: introduce qemu_process_cpu_events Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 18/19] accel: make all calls to qemu_process_cpu_events look the same Paolo Bonzini
2025-09-08 10:12 ` [PATCH 19/19] tcg/user: do not set exit_request gratuitously Paolo Bonzini
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Igor Mammedov, Richard Henderson
Make the code common to all accelerators: after seeing cpu->exit_request
set to true, accelerator code needs to reach qemu_process_cpu_events_common().
So for the common cases where they use qemu_process_cpu_events(), go ahead and
clear it in there. Note that the cheap qatomic_set() is enough because
at this point the thread has taken the BQL; qatomic_set_mb() is not needed.
In particular, this is because the ordering of the communication between
the I/O and vCPU threads is always the same.
In the I/O thread:
(a) store other memory locations that will be checked if cpu->exit_request
or cpu->interrupt_request is 1 (for example cpu->stop or cpu->work_list
for cpu->exit_request)
(b) cpu_exit(): store-release cpu->exit_request, or
(b) cpu_interrupt(): store-release cpu->interrupt_request
>>> at this point, cpu->halt_cond is broadcast and the BQL released
(c) do the accelerator-specific kick (e.g. write icount_decr for TCG,
pthread_kill for KVM, etc.)
In the vCPU thread, instead, the opposite order is respected:
(c) the accelerator's execution loop exits thanks to the kick
(b) then the inner execution loop checks cpu->interrupt_request
and cpu->exit_request. If needed cpu->interrupt_request is
converted into cpu->exit_request when work is needed outside
the execution loop.
(a) then the other memory locations are checked. Some may need to
be read under the BQL, but the vCPU thread may also take other
locks (e.g. for queued work items) or none at all.
qatomic_set_mb() would only be needed if the halt sleep was done
outside the BQL (though in that case, cpu->exit_request probably
would be replaced by a QemuEvent or something like that).
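A minimal C11 sketch of this ordering (illustrative names, not QEMU's:
work_pending models the step (a) state such as cpu->stop, and kick()
the accelerator-specific step (c)):

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic bool exit_request;
    static bool work_pending;            /* step (a) state */

    static void kick(void)
    {
        /* step (c): accelerator-specific, e.g. pthread_kill() for KVM */
    }

    /* I/O thread: (a), then (b), then (c) */
    static void io_thread_request(void)
    {
        work_pending = true;                               /* (a) */
        atomic_store_explicit(&exit_request, true,
                              memory_order_release);       /* (b) */
        kick();                                            /* (c) */
    }

    /* vCPU thread, after the kick (c) made the execution loop exit */
    static bool process_events(void)
    {
        if (atomic_load_explicit(&exit_request,
                                 memory_order_acquire)) {  /* (b) */
            /* a plain relaxed clear is enough under the BQL */
            atomic_store_explicit(&exit_request, false,
                                  memory_order_relaxed);
            return work_pending;                           /* (a) */
        }
        return false;
    }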
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/kvm/kvm-all.c | 2 --
accel/tcg/cpu-exec.c | 1 -
accel/tcg/tcg-accel-ops-rr.c | 9 +++++++--
accel/tcg/tcg-accel-ops.c | 2 --
accel/tcg/user-exec.c | 1 +
system/cpus.c | 1 +
target/i386/nvmm/nvmm-all.c | 2 --
target/i386/whpx/whpx-all.c | 2 --
8 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index e4167d94b4f..d13156bee87 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -3155,7 +3155,6 @@ int kvm_cpu_exec(CPUState *cpu)
trace_kvm_cpu_exec();
if (kvm_arch_process_async_events(cpu)) {
- qatomic_set(&cpu->exit_request, 0);
return EXCP_HLT;
}
@@ -3345,7 +3344,6 @@ int kvm_cpu_exec(CPUState *cpu)
vm_stop(RUN_STATE_INTERNAL_ERROR);
}
- qatomic_set(&cpu->exit_request, 0);
return ret;
}
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 3ae545e888f..ad94f96b252 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -872,7 +872,6 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
* The corresponding store-release is in cpu_exit.
*/
if (unlikely(qatomic_load_acquire(&cpu->exit_request)) || icount_exit_request(cpu)) {
- qatomic_set(&cpu->exit_request, 0);
if (cpu->exception_index == -1) {
cpu->exception_index = EXCP_INTERRUPT;
}
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index 813b313859a..7dbdba7b514 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -293,8 +293,13 @@ static void *rr_cpu_thread_fn(void *arg)
/* Does not need a memory barrier because a spurious wakeup is okay. */
qatomic_set(&rr_current_cpu, NULL);
- if (cpu && qatomic_read(&cpu->exit_request)) {
- qatomic_set_mb(&cpu->exit_request, 0);
+ if (cpu) {
+ /*
+ * This could even reset exit_request for all CPUs, but in practice
+ * races between CPU exits and changes to "cpu" are so rare that
+ * there's no advantage in doing so.
+ */
+ qatomic_set(&cpu->exit_request, false);
}
if (icount_enabled() && all_cpu_threads_idle()) {
diff --git a/accel/tcg/tcg-accel-ops.c b/accel/tcg/tcg-accel-ops.c
index 1f662a9c745..3bd98005042 100644
--- a/accel/tcg/tcg-accel-ops.c
+++ b/accel/tcg/tcg-accel-ops.c
@@ -82,8 +82,6 @@ int tcg_cpu_exec(CPUState *cpu)
ret = cpu_exec(cpu);
cpu_exec_end(cpu);
- qatomic_set_mb(&cpu->exit_request, 0);
-
return ret;
}
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 65f5da6c506..916f18754f6 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -54,6 +54,7 @@ void qemu_cpu_kick(CPUState *cpu)
void qemu_process_cpu_events(CPUState *cpu)
{
+ qatomic_set(&cpu->exit_request, false);
process_queued_cpu_work(cpu);
}
diff --git a/system/cpus.c b/system/cpus.c
index fd804e07327..aa7bfcf56e5 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -463,6 +463,7 @@ void qemu_process_cpu_events(CPUState *cpu)
{
bool slept = false;
+ qatomic_set(&cpu->exit_request, false);
while (cpu_thread_is_idle(cpu)) {
if (!slept) {
slept = true;
diff --git a/target/i386/nvmm/nvmm-all.c b/target/i386/nvmm/nvmm-all.c
index 7e36c42fbb4..ed424251673 100644
--- a/target/i386/nvmm/nvmm-all.c
+++ b/target/i386/nvmm/nvmm-all.c
@@ -817,8 +817,6 @@ nvmm_vcpu_loop(CPUState *cpu)
cpu_exec_end(cpu);
bql_lock();
- qatomic_set(&cpu->exit_request, false);
-
return ret < 0;
}
diff --git a/target/i386/whpx/whpx-all.c b/target/i386/whpx/whpx-all.c
index 00fb7e23100..2a85168ed51 100644
--- a/target/i386/whpx/whpx-all.c
+++ b/target/i386/whpx/whpx-all.c
@@ -2050,8 +2050,6 @@ static int whpx_vcpu_run(CPUState *cpu)
whpx_last_vcpu_stopping(cpu);
}
- qatomic_set(&cpu->exit_request, false);
-
return ret < 0;
}
--
2.51.0
* [PATCH 18/19] accel: make all calls to qemu_process_cpu_events look the same
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (16 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 17/19] cpus: clear exit_request in qemu_process_cpu_events Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
2025-09-08 10:12 ` [PATCH 19/19] tcg/user: do not set exit_request gratuitously Paolo Bonzini
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Philippe Mathieu-Daudé, Richard Henderson
There is no reason for some accelerators to use qemu_process_cpu_events_common
(which is separated from qemu_process_cpu_events() specifically for round
robin TCG). They can also check for events directly on the first pass through
the loop, instead of setting cpu->exit_request to true.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/dummy-cpus.c | 2 +-
accel/hvf/hvf-accel-ops.c | 2 +-
accel/kvm/kvm-accel-ops.c | 3 ++-
accel/tcg/tcg-accel-ops-mttcg.c | 7 ++---
accel/tcg/tcg-accel-ops-rr.c | 43 ++++++++++++++-----------------
target/i386/nvmm/nvmm-accel-ops.c | 6 ++---
target/i386/whpx/whpx-accel-ops.c | 6 ++---
7 files changed, 30 insertions(+), 39 deletions(-)
diff --git a/accel/dummy-cpus.c b/accel/dummy-cpus.c
index 225a47c31fd..5752f6302c8 100644
--- a/accel/dummy-cpus.c
+++ b/accel/dummy-cpus.c
@@ -43,6 +43,7 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
+ qemu_process_cpu_events(cpu);
bql_unlock();
#ifndef _WIN32
do {
@@ -57,7 +58,6 @@ static void *dummy_cpu_thread_fn(void *arg)
qemu_sem_wait(&cpu->sem);
#endif
bql_lock();
- qemu_process_cpu_events(cpu);
} while (!cpu->unplug);
bql_unlock();
diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 7a27bdadb4f..8b794c2d418 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -192,13 +192,13 @@ static void *hvf_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
+ qemu_process_cpu_events(cpu);
if (cpu_can_run(cpu)) {
r = hvf_vcpu_exec(cpu);
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
}
}
- qemu_process_cpu_events(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
hvf_vcpu_destroy(cpu);
diff --git a/accel/kvm/kvm-accel-ops.c b/accel/kvm/kvm-accel-ops.c
index 65a7f76a69a..8ed6945c2f7 100644
--- a/accel/kvm/kvm-accel-ops.c
+++ b/accel/kvm/kvm-accel-ops.c
@@ -47,13 +47,14 @@ static void *kvm_vcpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
+ qemu_process_cpu_events(cpu);
+
if (cpu_can_run(cpu)) {
r = kvm_cpu_exec(cpu);
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
}
}
- qemu_process_cpu_events(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
kvm_destroy_vcpu(cpu);
diff --git a/accel/tcg/tcg-accel-ops-mttcg.c b/accel/tcg/tcg-accel-ops-mttcg.c
index 342917c5f6e..cf1ee7ac258 100644
--- a/accel/tcg/tcg-accel-ops-mttcg.c
+++ b/accel/tcg/tcg-accel-ops-mttcg.c
@@ -84,10 +84,9 @@ static void *mttcg_cpu_thread_fn(void *arg)
cpu_thread_signal_created(cpu);
qemu_guest_random_seed_thread_part2(cpu->random_seed);
- /* process any pending work */
- qatomic_set(&cpu->exit_request, true);
-
do {
+ qemu_process_cpu_events(cpu);
+
if (cpu_can_run(cpu)) {
int r;
bql_unlock();
@@ -112,8 +111,6 @@ static void *mttcg_cpu_thread_fn(void *arg)
break;
}
}
-
- qemu_process_cpu_events(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
tcg_cpu_destroy(cpu);
diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
index 7dbdba7b514..2fb46439971 100644
--- a/accel/tcg/tcg-accel-ops-rr.c
+++ b/accel/tcg/tcg-accel-ops-rr.c
@@ -211,13 +211,30 @@ static void *rr_cpu_thread_fn(void *arg)
cpu = first_cpu;
- /* process any pending work */
- qatomic_set(&cpu->exit_request, true);
-
while (1) {
/* Only used for icount_enabled() */
int64_t cpu_budget = 0;
+ if (cpu) {
+ /*
+ * This could even reset exit_request for all CPUs, but in practice
+ * races between CPU exits and changes to "cpu" are so rare that
+ * there's no advantage in doing so.
+ */
+ qatomic_set(&cpu->exit_request, false);
+ }
+
+ if (icount_enabled() && all_cpu_threads_idle()) {
+ /*
+ * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
+ * in the main_loop, wake it up in order to start the warp timer.
+ */
+ qemu_notify_event();
+ }
+
+ rr_wait_io_event();
+ rr_deal_with_unplugged_cpus();
+
bql_unlock();
replay_mutex_lock();
bql_lock();
@@ -292,26 +309,6 @@ static void *rr_cpu_thread_fn(void *arg)
/* Does not need a memory barrier because a spurious wakeup is okay. */
qatomic_set(&rr_current_cpu, NULL);
-
- if (cpu) {
- /*
- * This could even reset exit_request for all CPUs, but in practice
- * races between CPU exits and changes to "cpu" are so rare that
- * there's no advantage in doing so.
- */
- qatomic_set(&cpu->exit_request, false);
- }
-
- if (icount_enabled() && all_cpu_threads_idle()) {
- /*
- * When all cpus are sleeping (e.g in WFI), to avoid a deadlock
- * in the main_loop, wake it up in order to start the warp timer.
- */
- qemu_notify_event();
- }
-
- rr_wait_io_event();
- rr_deal_with_unplugged_cpus();
}
g_assert_not_reached();
diff --git a/target/i386/nvmm/nvmm-accel-ops.c b/target/i386/nvmm/nvmm-accel-ops.c
index d066364b989..dd5d5428b1c 100644
--- a/target/i386/nvmm/nvmm-accel-ops.c
+++ b/target/i386/nvmm/nvmm-accel-ops.c
@@ -42,16 +42,14 @@ static void *qemu_nvmm_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
+ qemu_process_cpu_events(cpu);
+
if (cpu_can_run(cpu)) {
r = nvmm_vcpu_exec(cpu);
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
}
}
- while (cpu_thread_is_idle(cpu)) {
- qemu_cond_wait_bql(cpu->halt_cond);
- }
- qemu_process_cpu_events_common(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
nvmm_destroy_vcpu(cpu);
diff --git a/target/i386/whpx/whpx-accel-ops.c b/target/i386/whpx/whpx-accel-ops.c
index 2ca4ee02636..f75886128d0 100644
--- a/target/i386/whpx/whpx-accel-ops.c
+++ b/target/i386/whpx/whpx-accel-ops.c
@@ -42,16 +42,14 @@ static void *whpx_cpu_thread_fn(void *arg)
qemu_guest_random_seed_thread_part2(cpu->random_seed);
do {
+ qemu_process_cpu_events(cpu);
+
if (cpu_can_run(cpu)) {
r = whpx_vcpu_exec(cpu);
if (r == EXCP_DEBUG) {
cpu_handle_guest_debug(cpu);
}
}
- while (cpu_thread_is_idle(cpu)) {
- qemu_cond_wait_bql(cpu->halt_cond);
- }
- qemu_process_cpu_events_common(cpu);
} while (!cpu->unplug || cpu_can_run(cpu));
whpx_destroy_vcpu(cpu);
--
2.51.0
* [PATCH 19/19] tcg/user: do not set exit_request gratuitously
2025-09-08 10:12 [PATCH v3 00/19] accel, cpus: clean up cpu->exit_request Paolo Bonzini
` (17 preceding siblings ...)
2025-09-08 10:12 ` [PATCH 18/19] accel: make all calls to qemu_process_cpu_events look the same Paolo Bonzini
@ 2025-09-08 10:12 ` Paolo Bonzini
18 siblings, 0 replies; 22+ messages in thread
From: Paolo Bonzini @ 2025-09-08 10:12 UTC (permalink / raw)
To: qemu-devel; +Cc: Igor Mammedov
Whenever user-mode emulation needs to go all the way out of the cpu
exec loop, it uses cpu_exit(), which already sets cpu->exit_request.
Therefore, there is no need for tcg_kick_vcpu_thread() to set
cpu->exit_request again outside system emulation.
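As a hedged sketch of the user-mode path this relies on (the real
implementation is spread over earlier patches in this series, so the body
below is an approximation rather than the actual code):

    void cpu_exit(CPUState *cpu)
    {
        /* The exit request is already recorded here, before the kick... */
        qatomic_set(&cpu->exit_request, true);
        /* ...so the TCG kick only needs to interrupt the inner exec loop. */
        qemu_cpu_kick(cpu);   /* reaches tcg_kick_vcpu_thread() under TCG */
    }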
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
accel/tcg/cpu-exec.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index ad94f96b252..7c20d9db122 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -751,6 +751,7 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
void tcg_kick_vcpu_thread(CPUState *cpu)
{
+#ifndef CONFIG_USER_ONLY
/*
* Ensure cpu_exec will see the reason why the exit request was set.
* FIXME: this is not always needed. Other accelerators instead
@@ -758,6 +759,7 @@ void tcg_kick_vcpu_thread(CPUState *cpu)
* CPU thread; see kvm_arch_pre_run() for example.
*/
qatomic_store_release(&cpu->exit_request, true);
+#endif
/* Ensure cpu_exec will see the exit request after TCG has exited. */
qatomic_store_release(&cpu->neg.icount_decr.u16.high, -1);
--
2.51.0
* Re: [PATCH 14/19] cpus: properly kick CPUs out of inner execution loop
2025-09-08 10:12 ` [PATCH 14/19] cpus: properly kick CPUs out of inner execution loop Paolo Bonzini
@ 2025-09-10 7:13 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2025-09-10 7:13 UTC (permalink / raw)
To: qemu-devel
On 9/8/25 10:12, Paolo Bonzini wrote:
> Now that cpu_exit() actually kicks all accelerators, use it whenever
> the message to another thread is processed in qemu_wait_io_event().
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> cpu-common.c | 3 ++-
> hw/ppc/ppc.c | 2 ++
> hw/ppc/spapr_hcall.c | 7 +++----
> hw/ppc/spapr_rtas.c | 2 +-
> replay/replay-events.c | 3 ++-
> system/cpu-timers.c | 6 +++---
> system/cpus.c | 5 +++--
> target/arm/tcg/mte_helper.c | 2 +-
> target/i386/kvm/hyperv.c | 1 -
> 9 files changed, 17 insertions(+), 14 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
* Re: [PATCH 15/19] treewide: rename qemu_wait_io_event/qemu_wait_io_event_common
2025-09-08 10:12 ` [PATCH 15/19] treewide: rename qemu_wait_io_event/qemu_wait_io_event_common Paolo Bonzini
@ 2025-09-10 7:14 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2025-09-10 7:14 UTC (permalink / raw)
To: qemu-devel
On 9/8/25 10:12, Paolo Bonzini wrote:
> Do so before extending it to the user-mode emulators, where there is no
> such thing as an "I/O thread".
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> include/hw/core/cpu.h | 2 +-
> include/system/cpus.h | 4 ++--
> accel/dummy-cpus.c | 2 +-
> accel/hvf/hvf-accel-ops.c | 2 +-
> accel/kvm/kvm-accel-ops.c | 2 +-
> accel/tcg/tcg-accel-ops-mttcg.c | 2 +-
> accel/tcg/tcg-accel-ops-rr.c | 4 ++--
> cpu-common.c | 2 +-
> system/cpus.c | 6 +++---
> target/i386/nvmm/nvmm-accel-ops.c | 2 +-
> target/i386/whpx/whpx-accel-ops.c | 2 +-
> 11 files changed, 15 insertions(+), 15 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~