qemu-devel.nongnu.org archive mirror
* [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
@ 2024-09-23 16:12 Ilya Leoshkevich
  2024-09-23 16:12 ` [PATCH 01/18] gdbstub: Make gdb_get_char() static Ilya Leoshkevich
                   ` (20 more replies)
  0 siblings, 21 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:12 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Hi,

When reporting a breakpoint in all-stop mode (i.e. whenever non-stop mode is
not in effect), a GDB remote stub must stop all threads. Currently qemu-user
doesn't do that, which breaks the debugging session in at least two ways:
concurrent access to the GDB socket, and an assertion failure within GDB [1].

This series fixes that by importing pause_all_vcpus() from qemu-system, which
in turn requires introducing the BQL and a few stubs into qemu-user.
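
For orientation, the stop path this series aims for looks roughly like the
sketch below. The function names are taken from the patches in this series,
but the exact wiring (done in patch 17) may differ:

    /* Rough sketch: report a TRAP stop only once every vCPU is quiesced. */
    if (gdb_try_stop()) {
        pause_all_vcpus();      /* imported from qemu-system by this series */
        g_string_printf(gdbserver_state.str_buf, "T%02xthread:",
                        GDB_SIGNAL_TRAP);
        gdb_append_thread_id(cpu, gdbserver_state.str_buf);
        g_string_append_c(gdbserver_state.str_buf, ';');
        gdb_put_strbuf();       /* only one thread talks to the GDB socket */
    }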

Best regards,
Ilya

[1] https://gitlab.com/qemu-project/qemu/-/issues/2465

Ilya Leoshkevich (18):
  gdbstub: Make gdb_get_char() static
  gdbstub: Move phy_memory_mode to GDBSystemState
  gdbstub: Move gdb_syscall_mode to GDBSyscallState
  gdbstub: Factor out gdb_try_stop()
  accel/tcg: Factor out cpu_exec_user()
  qemu-thread: Introduce QEMU_MUTEX_INITIALIZER
  qemu-thread: Introduce QEMU_COND_INITIALIZER
  replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
  qemu-timer: Provide qemu_clock_enable() stub for qemu-user
  cpu: Use BQL in qemu-user
  accel/tcg: Unify user implementations of qemu_cpu_kick()
  cpu: Track CPUs executing syscalls
  cpu: Implement cpu_thread_is_idle() for qemu-user
  cpu: Introduce cpu_is_paused()
  cpu: Set current_cpu early in qemu-user
  cpu: Allow pausing and resuming CPUs in qemu-user
  gdbstub: Pause all CPUs before sending stop replies
  tests/tcg: Stress test thread breakpoints

 accel/tcg/user-exec-stub.c                    |   4 -
 accel/tcg/user-exec.c                         |  55 ++++++
 bsd-user/aarch64/target_arch_cpu.h            |   6 +-
 bsd-user/arm/target_arch_cpu.h                |   5 +-
 bsd-user/freebsd/os-syscall.c                 |  10 +
 bsd-user/i386/target_arch_cpu.h               |   5 +-
 bsd-user/main.c                               |   8 +-
 bsd-user/x86_64/target_arch_cpu.h             |   5 +-
 cpu-common.c                                  | 179 ++++++++++++++++++
 gdbstub/gdbstub.c                             |  17 +-
 gdbstub/internals.h                           |   4 +-
 gdbstub/syscalls.c                            |  20 +-
 gdbstub/system.c                              |  18 +-
 gdbstub/user.c                                |  28 ++-
 include/exec/cpu-common.h                     |  15 ++
 include/exec/replay-core.h                    |  13 ++
 include/hw/core/cpu.h                         |   1 +
 include/qemu/thread-posix.h                   |   8 +
 include/qemu/thread-win32.h                   |   8 +
 include/sysemu/cpus.h                         |   6 -
 include/sysemu/replay.h                       |  13 --
 linux-user/aarch64/cpu_loop.c                 |   5 +-
 linux-user/alpha/cpu_loop.c                   |   5 +-
 linux-user/arm/cpu_loop.c                     |   5 +-
 linux-user/hexagon/cpu_loop.c                 |   5 +-
 linux-user/hppa/cpu_loop.c                    |   5 +-
 linux-user/i386/cpu_loop.c                    |   5 +-
 linux-user/loongarch64/cpu_loop.c             |   5 +-
 linux-user/m68k/cpu_loop.c                    |   5 +-
 linux-user/main.c                             |   9 +-
 linux-user/microblaze/cpu_loop.c              |   5 +-
 linux-user/mips/cpu_loop.c                    |   5 +-
 linux-user/openrisc/cpu_loop.c                |   5 +-
 linux-user/ppc/cpu_loop.c                     |   5 +-
 linux-user/riscv/cpu_loop.c                   |   5 +-
 linux-user/s390x/cpu_loop.c                   |   5 +-
 linux-user/sh4/cpu_loop.c                     |   5 +-
 linux-user/sparc/cpu_loop.c                   |   5 +-
 linux-user/syscall.c                          |  12 ++
 linux-user/xtensa/cpu_loop.c                  |   5 +-
 replay/stubs-system.c                         |   8 +
 stubs/meson.build                             |   8 +
 stubs/qemu-timer.c                            |   6 +
 stubs/replay-mutex.c                          |  10 +
 stubs/replay-tools.c                          |   8 -
 system/cpus.c                                 | 172 +----------------
 tests/tcg/multiarch/Makefile.target           |  13 +-
 .../gdbstub/test-thread-breakpoint-stress.py  |  28 +++
 .../tcg/multiarch/thread-breakpoint-stress.c  |  92 +++++++++
 49 files changed, 552 insertions(+), 327 deletions(-)
 create mode 100644 stubs/qemu-timer.c
 create mode 100644 stubs/replay-mutex.c
 create mode 100644 tests/tcg/multiarch/gdbstub/test-thread-breakpoint-stress.py
 create mode 100644 tests/tcg/multiarch/thread-breakpoint-stress.c

-- 
2.46.0




* [PATCH 01/18] gdbstub: Make gdb_get_char() static
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
@ 2024-09-23 16:12 ` Ilya Leoshkevich
  2024-10-05 19:20   ` Richard Henderson
  2024-09-23 16:12 ` [PATCH 02/18] gdbstub: Move phy_memory_mode to GDBSystemState Ilya Leoshkevich
                   ` (19 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:12 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

It's user-only since commit a7e0f9bd2ace ("gdbstub: abstract target
specific details from gdb_put_packet_binary").

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 gdbstub/internals.h | 2 --
 gdbstub/user.c      | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/gdbstub/internals.h b/gdbstub/internals.h
index bf5a5c63029..5acc36846f3 100644
--- a/gdbstub/internals.h
+++ b/gdbstub/internals.h
@@ -142,8 +142,6 @@ void gdb_create_default_process(GDBState *s);
 int gdb_signal_to_target(int sig);
 int gdb_target_signal_to_gdb(int sig);
 
-int gdb_get_char(void); /* user only */
-
 /**
  * gdb_continue() - handle continue in mode specific way.
  */
diff --git a/gdbstub/user.c b/gdbstub/user.c
index b36033bc7a2..6a493c5ba3a 100644
--- a/gdbstub/user.c
+++ b/gdbstub/user.c
@@ -103,7 +103,7 @@ typedef struct {
 
 static GDBUserState gdbserver_user_state;
 
-int gdb_get_char(void)
+static int gdb_get_char(void)
 {
     uint8_t ch;
     int ret;
-- 
2.46.0




* [PATCH 02/18] gdbstub: Move phy_memory_mode to GDBSystemState
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
  2024-09-23 16:12 ` [PATCH 01/18] gdbstub: Make gdb_get_char() static Ilya Leoshkevich
@ 2024-09-23 16:12 ` Ilya Leoshkevich
  2024-10-05 19:21   ` Richard Henderson
  2024-09-23 16:12 ` [PATCH 03/18] gdbstub: Move gdb_syscall_mode to GDBSyscallState Ilya Leoshkevich
                   ` (18 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:12 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Follow the convention that all the pieces of the global stub state must
be inside a single struct.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 gdbstub/system.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/gdbstub/system.c b/gdbstub/system.c
index 1ad87fe7fdf..5ce357c6c2b 100644
--- a/gdbstub/system.c
+++ b/gdbstub/system.c
@@ -35,6 +35,7 @@
 typedef struct {
     CharBackend chr;
     Chardev *mon_chr;
+    int phy_memory_mode;
 } GDBSystemState;
 
 GDBSystemState gdbserver_system_state;
@@ -445,14 +446,12 @@ void gdb_qemu_exit(int code)
 /*
  * Memory access
  */
-static int phy_memory_mode;
-
 int gdb_target_memory_rw_debug(CPUState *cpu, hwaddr addr,
                                uint8_t *buf, int len, bool is_write)
 {
     CPUClass *cc;
 
-    if (phy_memory_mode) {
+    if (gdbserver_system_state.phy_memory_mode) {
         if (is_write) {
             cpu_physical_memory_write(addr, buf, len);
         } else {
@@ -491,7 +490,8 @@ bool gdb_can_reverse(void)
 void gdb_handle_query_qemu_phy_mem_mode(GArray *params,
                                         void *ctx)
 {
-    g_string_printf(gdbserver_state.str_buf, "%d", phy_memory_mode);
+    g_string_printf(gdbserver_state.str_buf, "%d",
+                    gdbserver_system_state.phy_memory_mode);
     gdb_put_strbuf();
 }
 
@@ -503,9 +503,9 @@ void gdb_handle_set_qemu_phy_mem_mode(GArray *params, void *ctx)
     }
 
     if (!gdb_get_cmd_param(params, 0)->val_ul) {
-        phy_memory_mode = 0;
+        gdbserver_system_state.phy_memory_mode = 0;
     } else {
-        phy_memory_mode = 1;
+        gdbserver_system_state.phy_memory_mode = 1;
     }
     gdb_put_packet("OK");
 }
-- 
2.46.0




* [PATCH 03/18] gdbstub: Move gdb_syscall_mode to GDBSyscallState
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
  2024-09-23 16:12 ` [PATCH 01/18] gdbstub: Make gdb_get_char() static Ilya Leoshkevich
  2024-09-23 16:12 ` [PATCH 02/18] gdbstub: Move phy_memory_mode to GDBSystemState Ilya Leoshkevich
@ 2024-09-23 16:12 ` Ilya Leoshkevich
  2024-10-05 19:22   ` Richard Henderson
  2024-09-23 16:12 ` [PATCH 04/18] gdbstub: Factor out gdb_try_stop() Ilya Leoshkevich
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:12 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Follow the convention that all the pieces of the global stub state must
be inside a single struct.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 gdbstub/syscalls.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/gdbstub/syscalls.c b/gdbstub/syscalls.c
index 4e1295b782d..42307f0abb1 100644
--- a/gdbstub/syscalls.c
+++ b/gdbstub/syscalls.c
@@ -24,6 +24,11 @@
 typedef struct {
     char syscall_buf[256];
     gdb_syscall_complete_cb current_syscall_cb;
+    enum {
+        GDB_SYS_UNKNOWN,
+        GDB_SYS_ENABLED,
+        GDB_SYS_DISABLED,
+    } mode;
 } GDBSyscallState;
 
 static GDBSyscallState gdbserver_syscall_state;
@@ -37,12 +42,6 @@ static bool gdb_attached(void)
     return gdbserver_state.init && gdbserver_state.c_cpu;
 }
 
-static enum {
-    GDB_SYS_UNKNOWN,
-    GDB_SYS_ENABLED,
-    GDB_SYS_DISABLED,
-} gdb_syscall_mode;
-
 /* Decide if either remote gdb syscalls or native file IO should be used. */
 int use_gdb_syscalls(void)
 {
@@ -57,16 +56,17 @@ int use_gdb_syscalls(void)
 
     /* -semihosting-config target=auto */
     /* On the first call check if gdb is connected and remember. */
-    if (gdb_syscall_mode == GDB_SYS_UNKNOWN) {
-        gdb_syscall_mode = gdb_attached() ? GDB_SYS_ENABLED : GDB_SYS_DISABLED;
+    if (gdbserver_syscall_state.mode == GDB_SYS_UNKNOWN) {
+        gdbserver_syscall_state.mode = gdb_attached() ? GDB_SYS_ENABLED :
+                                                        GDB_SYS_DISABLED;
     }
-    return gdb_syscall_mode == GDB_SYS_ENABLED;
+    return gdbserver_syscall_state.mode == GDB_SYS_ENABLED;
 }
 
 /* called when the stub detaches */
 void gdb_disable_syscalls(void)
 {
-    gdb_syscall_mode = GDB_SYS_DISABLED;
+    gdbserver_syscall_state.mode = GDB_SYS_DISABLED;
 }
 
 void gdb_syscall_reset(void)
-- 
2.46.0




* [PATCH 04/18] gdbstub: Factor out gdb_try_stop()
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (2 preceding siblings ...)
  2024-09-23 16:12 ` [PATCH 03/18] gdbstub: Move gdb_syscall_mode to GDBSyscallState Ilya Leoshkevich
@ 2024-09-23 16:12 ` Ilya Leoshkevich
  2024-10-05 19:26   ` Richard Henderson
  2024-09-23 16:13 ` [PATCH 05/18] accel/tcg: Factor out cpu_exec_user() Ilya Leoshkevich
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:12 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Move the checking and clearing of allow_stop_reply into a helper function.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 gdbstub/gdbstub.c   | 15 +++++++++++----
 gdbstub/internals.h |  2 ++
 gdbstub/system.c    |  6 ++----
 gdbstub/user.c      | 11 ++++-------
 4 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/gdbstub/gdbstub.c b/gdbstub/gdbstub.c
index d08568cea0e..a096104b07a 100644
--- a/gdbstub/gdbstub.c
+++ b/gdbstub/gdbstub.c
@@ -1422,11 +1422,10 @@ static void handle_v_attach(GArray *params, void *user_ctx)
     gdbserver_state.g_cpu = cpu;
     gdbserver_state.c_cpu = cpu;
 
-    if (gdbserver_state.allow_stop_reply) {
+    if (gdb_try_stop()) {
         g_string_printf(gdbserver_state.str_buf, "T%02xthread:", GDB_SIGNAL_TRAP);
         gdb_append_thread_id(cpu, gdbserver_state.str_buf);
         g_string_append_c(gdbserver_state.str_buf, ';');
-        gdbserver_state.allow_stop_reply = false;
 cleanup:
         gdb_put_strbuf();
     }
@@ -2016,12 +2015,11 @@ static void handle_gen_set(GArray *params, void *user_ctx)
 
 static void handle_target_halt(GArray *params, void *user_ctx)
 {
-    if (gdbserver_state.allow_stop_reply) {
+    if (gdb_try_stop()) {
         g_string_printf(gdbserver_state.str_buf, "T%02xthread:", GDB_SIGNAL_TRAP);
         gdb_append_thread_id(gdbserver_state.c_cpu, gdbserver_state.str_buf);
         g_string_append_c(gdbserver_state.str_buf, ';');
         gdb_put_strbuf();
-        gdbserver_state.allow_stop_reply = false;
     }
     /*
      * Remove all the breakpoints when this query is issued,
@@ -2493,3 +2491,12 @@ void gdb_create_default_process(GDBState *s)
     process->target_xml = NULL;
 }
 
+bool gdb_try_stop(void)
+{
+    if (!gdbserver_state.allow_stop_reply) {
+        return false;
+    }
+
+    gdbserver_state.allow_stop_reply = false;
+    return true;
+}
diff --git a/gdbstub/internals.h b/gdbstub/internals.h
index 5acc36846f3..310861e581b 100644
--- a/gdbstub/internals.h
+++ b/gdbstub/internals.h
@@ -215,4 +215,6 @@ void gdb_breakpoint_remove_all(CPUState *cs);
 int gdb_target_memory_rw_debug(CPUState *cs, hwaddr addr,
                                uint8_t *buf, int len, bool is_write);
 
+bool gdb_try_stop(void);
+
 #endif /* GDBSTUB_INTERNALS_H */
diff --git a/gdbstub/system.c b/gdbstub/system.c
index 5ce357c6c2b..fbe9528569c 100644
--- a/gdbstub/system.c
+++ b/gdbstub/system.c
@@ -141,7 +141,7 @@ static void gdb_vm_state_change(void *opaque, bool running, RunState state)
         return;
     }
 
-    if (!gdbserver_state.allow_stop_reply) {
+    if (!gdb_try_stop()) {
         return;
     }
 
@@ -211,7 +211,6 @@ static void gdb_vm_state_change(void *opaque, bool running, RunState state)
 
 send_packet:
     gdb_put_packet(buf->str);
-    gdbserver_state.allow_stop_reply = false;
 
     /* disable single step if it was enabled */
     cpu_single_step(cpu, 0);
@@ -428,10 +427,9 @@ void gdb_exit(int code)
 
     trace_gdbstub_op_exiting((uint8_t)code);
 
-    if (gdbserver_state.allow_stop_reply) {
+    if (gdb_try_stop()) {
         snprintf(buf, sizeof(buf), "W%02x", (uint8_t)code);
         gdb_put_packet(buf);
-        gdbserver_state.allow_stop_reply = false;
     }
 
     qemu_chr_fe_deinit(&gdbserver_system_state.chr, true);
diff --git a/gdbstub/user.c b/gdbstub/user.c
index 6a493c5ba3a..77ba227fc3b 100644
--- a/gdbstub/user.c
+++ b/gdbstub/user.c
@@ -181,10 +181,9 @@ void gdb_exit(int code)
 
     trace_gdbstub_op_exiting((uint8_t)code);
 
-    if (gdbserver_state.allow_stop_reply) {
+    if (gdb_try_stop()) {
         snprintf(buf, sizeof(buf), "W%02x", (uint8_t)code);
         gdb_put_packet(buf);
-        gdbserver_state.allow_stop_reply = false;
     }
 
 }
@@ -222,7 +221,7 @@ int gdb_handlesig(CPUState *cpu, int sig, const char *reason, void *siginfo,
 
     if (sig != 0) {
         gdb_set_stop_cpu(cpu);
-        if (gdbserver_state.allow_stop_reply) {
+        if (gdb_try_stop()) {
             g_string_printf(gdbserver_state.str_buf,
                             "T%02xthread:", gdb_target_signal_to_gdb(sig));
             gdb_append_thread_id(cpu, gdbserver_state.str_buf);
@@ -231,7 +230,6 @@ int gdb_handlesig(CPUState *cpu, int sig, const char *reason, void *siginfo,
                 g_string_append(gdbserver_state.str_buf, reason);
             }
             gdb_put_strbuf();
-            gdbserver_state.allow_stop_reply = false;
         }
     }
     /*
@@ -276,13 +274,12 @@ void gdb_signalled(CPUArchState *env, int sig)
     char buf[4];
 
     if (!gdbserver_state.init || gdbserver_user_state.fd < 0 ||
-        !gdbserver_state.allow_stop_reply) {
+        !gdb_try_stop()) {
         return;
     }
 
     snprintf(buf, sizeof(buf), "X%02x", gdb_target_signal_to_gdb(sig));
     gdb_put_packet(buf);
-    gdbserver_state.allow_stop_reply = false;
 }
 
 static void gdb_accept_init(int fd)
@@ -502,7 +499,7 @@ void gdbserver_fork_end(CPUState *cpu, pid_t pid)
         gdbserver_user_state.fork_peer_pid = pid;
         gdbserver_user_state.fork_peer_tid = pid;
 
-        if (!gdbserver_state.allow_stop_reply) {
+        if (!gdb_try_stop()) {
             goto fail;
         }
         g_string_printf(gdbserver_state.str_buf,
-- 
2.46.0




* [PATCH 05/18] accel/tcg: Factor out cpu_exec_user()
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (3 preceding siblings ...)
  2024-09-23 16:12 ` [PATCH 04/18] gdbstub: Factor out gdb_try_stop() Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-10-05 19:29   ` Richard Henderson
  2024-09-23 16:13 ` [PATCH 06/18] qemu-thread: Introduce QEMU_MUTEX_INITIALIZER Ilya Leoshkevich
                   ` (15 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

All linux-user and bsd-user cpu_loop() implementations contain the same
sequence of function calls. Factor it out so that it can be changed in one
place.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec.c              | 12 ++++++++++++
 bsd-user/aarch64/target_arch_cpu.h |  6 +-----
 bsd-user/arm/target_arch_cpu.h     |  5 +----
 bsd-user/i386/target_arch_cpu.h    |  5 +----
 bsd-user/x86_64/target_arch_cpu.h  |  5 +----
 include/exec/cpu-common.h          |  2 ++
 linux-user/aarch64/cpu_loop.c      |  5 +----
 linux-user/alpha/cpu_loop.c        |  5 +----
 linux-user/arm/cpu_loop.c          |  5 +----
 linux-user/hexagon/cpu_loop.c      |  5 +----
 linux-user/hppa/cpu_loop.c         |  5 +----
 linux-user/i386/cpu_loop.c         |  5 +----
 linux-user/loongarch64/cpu_loop.c  |  5 +----
 linux-user/m68k/cpu_loop.c         |  5 +----
 linux-user/microblaze/cpu_loop.c   |  5 +----
 linux-user/mips/cpu_loop.c         |  5 +----
 linux-user/openrisc/cpu_loop.c     |  5 +----
 linux-user/ppc/cpu_loop.c          |  5 +----
 linux-user/riscv/cpu_loop.c        |  5 +----
 linux-user/s390x/cpu_loop.c        |  5 +----
 linux-user/sh4/cpu_loop.c          |  5 +----
 linux-user/sparc/cpu_loop.c        |  5 +----
 linux-user/xtensa/cpu_loop.c       |  5 +----
 23 files changed, 35 insertions(+), 85 deletions(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 7ddc47b0ba4..ca3e8e988ee 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1288,3 +1288,15 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
 #define DATA_SIZE 16
 #include "atomic_template.h"
 #endif
+
+int cpu_exec_user(CPUState *cs)
+{
+    int trapnr;
+
+    cpu_exec_start(cs);
+    trapnr = cpu_exec(cs);
+    cpu_exec_end(cs);
+    process_queued_cpu_work(cs);
+
+    return trapnr;
+}
diff --git a/bsd-user/aarch64/target_arch_cpu.h b/bsd-user/aarch64/target_arch_cpu.h
index b288e0d069b..6ab6c07e973 100644
--- a/bsd-user/aarch64/target_arch_cpu.h
+++ b/bsd-user/aarch64/target_arch_cpu.h
@@ -51,11 +51,7 @@ static inline void target_cpu_loop(CPUARMState *env)
     abi_long ret;
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
-
+        trapnr = cpu_exec_user(cs);
         switch (trapnr) {
         case EXCP_SWI:
             /* See arm64/arm64/trap.c cpu_fetch_syscall_args() */
diff --git a/bsd-user/arm/target_arch_cpu.h b/bsd-user/arm/target_arch_cpu.h
index 517d0087644..2fa97c168c0 100644
--- a/bsd-user/arm/target_arch_cpu.h
+++ b/bsd-user/arm/target_arch_cpu.h
@@ -43,10 +43,7 @@ static inline void target_cpu_loop(CPUARMState *env)
     CPUState *cs = env_cpu(env);
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
         switch (trapnr) {
         case EXCP_UDEF:
         case EXCP_NOCP:
diff --git a/bsd-user/i386/target_arch_cpu.h b/bsd-user/i386/target_arch_cpu.h
index 9bf2c4244b7..cbc4d77daba 100644
--- a/bsd-user/i386/target_arch_cpu.h
+++ b/bsd-user/i386/target_arch_cpu.h
@@ -110,10 +110,7 @@ static inline void target_cpu_loop(CPUX86State *env)
     /* target_siginfo_t info; */
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case 0x80: {
diff --git a/bsd-user/x86_64/target_arch_cpu.h b/bsd-user/x86_64/target_arch_cpu.h
index 4094d61da1a..5442056d009 100644
--- a/bsd-user/x86_64/target_arch_cpu.h
+++ b/bsd-user/x86_64/target_arch_cpu.h
@@ -118,10 +118,7 @@ static inline void target_cpu_loop(CPUX86State *env)
     /* target_siginfo_t info; */
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_SYSCALL:
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 2e1b499cb71..4a7b43f9aa3 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -226,6 +226,8 @@ G_NORETURN void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc);
 
 /* accel/tcg/cpu-exec.c */
 int cpu_exec(CPUState *cpu);
+/* accel/tcg/user-exec.c */
+int cpu_exec_user(CPUState *cs);
 
 /**
  * env_archcpu(env)
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index 71cdc8be50c..9104996fd46 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -83,10 +83,7 @@ void cpu_loop(CPUARMState *env)
     abi_long ret;
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_SWI:
diff --git a/linux-user/alpha/cpu_loop.c b/linux-user/alpha/cpu_loop.c
index 2ea039aa71f..dda42aa9ee7 100644
--- a/linux-user/alpha/cpu_loop.c
+++ b/linux-user/alpha/cpu_loop.c
@@ -32,10 +32,7 @@ void cpu_loop(CPUAlphaState *env)
     while (1) {
         bool arch_interrupt = true;
 
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_RESET:
diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
index ec665862d93..81fb01c4f95 100644
--- a/linux-user/arm/cpu_loop.c
+++ b/linux-user/arm/cpu_loop.c
@@ -325,10 +325,7 @@ void cpu_loop(CPUARMState *env)
     abi_ulong ret;
 
     for(;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch(trapnr) {
         case EXCP_UDEF:
diff --git a/linux-user/hexagon/cpu_loop.c b/linux-user/hexagon/cpu_loop.c
index d41159e52ad..678d8a42abb 100644
--- a/linux-user/hexagon/cpu_loop.c
+++ b/linux-user/hexagon/cpu_loop.c
@@ -33,10 +33,7 @@ void cpu_loop(CPUHexagonState *env)
     target_ulong ret;
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_INTERRUPT:
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
index bc093b8fe8b..121763a56cb 100644
--- a/linux-user/hppa/cpu_loop.c
+++ b/linux-user/hppa/cpu_loop.c
@@ -114,10 +114,7 @@ void cpu_loop(CPUHPPAState *env)
     int trapnr;
 
     while (1) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_SYSCALL:
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
index 92beb6830cc..8707a3eaf25 100644
--- a/linux-user/i386/cpu_loop.c
+++ b/linux-user/i386/cpu_loop.c
@@ -210,10 +210,7 @@ void cpu_loop(CPUX86State *env)
     abi_ulong ret;
 
     for(;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch(trapnr) {
         case 0x80:
diff --git a/linux-user/loongarch64/cpu_loop.c b/linux-user/loongarch64/cpu_loop.c
index 73d7b6796a4..1735c1c4ff1 100644
--- a/linux-user/loongarch64/cpu_loop.c
+++ b/linux-user/loongarch64/cpu_loop.c
@@ -18,10 +18,7 @@ void cpu_loop(CPULoongArchState *env)
     abi_long ret;
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_INTERRUPT:
diff --git a/linux-user/m68k/cpu_loop.c b/linux-user/m68k/cpu_loop.c
index f79b8e4ab05..8e2f9161c64 100644
--- a/linux-user/m68k/cpu_loop.c
+++ b/linux-user/m68k/cpu_loop.c
@@ -30,10 +30,7 @@ void cpu_loop(CPUM68KState *env)
     unsigned int n;
 
     for(;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch(trapnr) {
         case EXCP_ILLEGAL:
diff --git a/linux-user/microblaze/cpu_loop.c b/linux-user/microblaze/cpu_loop.c
index 212e62d0a62..3867a62c07b 100644
--- a/linux-user/microblaze/cpu_loop.c
+++ b/linux-user/microblaze/cpu_loop.c
@@ -29,10 +29,7 @@ void cpu_loop(CPUMBState *env)
     CPUState *cs = env_cpu(env);
 
     while (1) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_INTERRUPT:
diff --git a/linux-user/mips/cpu_loop.c b/linux-user/mips/cpu_loop.c
index 462387a0737..d14af50cbbe 100644
--- a/linux-user/mips/cpu_loop.c
+++ b/linux-user/mips/cpu_loop.c
@@ -71,10 +71,7 @@ void cpu_loop(CPUMIPSState *env)
 # endif
 
     for(;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch(trapnr) {
         case EXCP_SYSCALL:
diff --git a/linux-user/openrisc/cpu_loop.c b/linux-user/openrisc/cpu_loop.c
index a7aa586c8f9..f1eb799bd27 100644
--- a/linux-user/openrisc/cpu_loop.c
+++ b/linux-user/openrisc/cpu_loop.c
@@ -30,10 +30,7 @@ void cpu_loop(CPUOpenRISCState *env)
     abi_long ret;
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_SYSCALL:
diff --git a/linux-user/ppc/cpu_loop.c b/linux-user/ppc/cpu_loop.c
index 02204ad8beb..53a6ad996f3 100644
--- a/linux-user/ppc/cpu_loop.c
+++ b/linux-user/ppc/cpu_loop.c
@@ -74,10 +74,7 @@ void cpu_loop(CPUPPCState *env)
     for(;;) {
         bool arch_interrupt;
 
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         arch_interrupt = true;
         switch (trapnr) {
diff --git a/linux-user/riscv/cpu_loop.c b/linux-user/riscv/cpu_loop.c
index 52c49c2e426..e9013b312d2 100644
--- a/linux-user/riscv/cpu_loop.c
+++ b/linux-user/riscv/cpu_loop.c
@@ -33,10 +33,7 @@ void cpu_loop(CPURISCVState *env)
     target_ulong ret;
 
     for (;;) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_INTERRUPT:
diff --git a/linux-user/s390x/cpu_loop.c b/linux-user/s390x/cpu_loop.c
index 8b7ac2879ef..d8231403b65 100644
--- a/linux-user/s390x/cpu_loop.c
+++ b/linux-user/s390x/cpu_loop.c
@@ -61,10 +61,7 @@ void cpu_loop(CPUS390XState *env)
     abi_long ret;
 
     while (1) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case EXCP_INTERRUPT:
diff --git a/linux-user/sh4/cpu_loop.c b/linux-user/sh4/cpu_loop.c
index c805f9db110..282d955ee92 100644
--- a/linux-user/sh4/cpu_loop.c
+++ b/linux-user/sh4/cpu_loop.c
@@ -31,10 +31,7 @@ void cpu_loop(CPUSH4State *env)
     while (1) {
         bool arch_interrupt = true;
 
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case 0x160:
diff --git a/linux-user/sparc/cpu_loop.c b/linux-user/sparc/cpu_loop.c
index 50424a54df5..8923a44d607 100644
--- a/linux-user/sparc/cpu_loop.c
+++ b/linux-user/sparc/cpu_loop.c
@@ -217,10 +217,7 @@ void cpu_loop (CPUSPARCState *env)
     abi_long ret;
 
     while (1) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         switch (trapnr) {
         case TARGET_TT_SYSCALL:
diff --git a/linux-user/xtensa/cpu_loop.c b/linux-user/xtensa/cpu_loop.c
index d51ce053926..7eb21415213 100644
--- a/linux-user/xtensa/cpu_loop.c
+++ b/linux-user/xtensa/cpu_loop.c
@@ -130,10 +130,7 @@ void cpu_loop(CPUXtensaState *env)
     int trapnr;
 
     while (1) {
-        cpu_exec_start(cs);
-        trapnr = cpu_exec(cs);
-        cpu_exec_end(cs);
-        process_queued_cpu_work(cs);
+        trapnr = cpu_exec_user(cs);
 
         env->sregs[PS] &= ~PS_EXCM;
         switch (trapnr) {
-- 
2.46.0




* [PATCH 06/18] qemu-thread: Introduce QEMU_MUTEX_INITIALIZER
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (4 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 05/18] accel/tcg: Factor out cpu_exec_user() Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-10-05 19:30   ` Richard Henderson
  2024-09-23 16:13 ` [PATCH 07/18] qemu-thread: Introduce QEMU_COND_INITIALIZER Ilya Leoshkevich
                   ` (14 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Allow static initialization of mutexes.
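
For illustration, a lock can now be defined at file scope without calling
qemu_mutex_init() at run time. A minimal sketch with a made-up name (patch 10
uses this for the BQL itself):

    static QemuMutex example_lock = QEMU_MUTEX_INITIALIZER;

    static void example_update(void)
    {
        qemu_mutex_lock(&example_lock);
        /* ... access data protected by example_lock ... */
        qemu_mutex_unlock(&example_lock);
    }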

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/qemu/thread-posix.h | 6 ++++++
 include/qemu/thread-win32.h | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/include/qemu/thread-posix.h b/include/qemu/thread-posix.h
index 5f2f3d1386b..fc0846bfa7c 100644
--- a/include/qemu/thread-posix.h
+++ b/include/qemu/thread-posix.h
@@ -13,6 +13,12 @@ struct QemuMutex {
     bool initialized;
 };
 
+#ifdef CONFIG_DEBUG_MUTEX
+#define QEMU_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL, 0, true}
+#else
+#define QEMU_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, true}
+#endif
+
 /*
  * QemuRecMutex cannot be a typedef of QemuMutex lest we have two
  * compatible cases in _Generic.  See qemu/lockable.h.
diff --git a/include/qemu/thread-win32.h b/include/qemu/thread-win32.h
index d95af4498fc..ed1f2d0e733 100644
--- a/include/qemu/thread-win32.h
+++ b/include/qemu/thread-win32.h
@@ -12,6 +12,12 @@ struct QemuMutex {
     bool initialized;
 };
 
+#ifdef CONFIG_DEBUG_MUTEX
+#define QEMU_MUTEX_INITIALIZER {SRWLOCK_INIT, NULL, 0, true}
+#else
+#define QEMU_MUTEX_INITIALIZER {SRWLOCK_INIT, true}
+#endif
+
 typedef struct QemuRecMutex QemuRecMutex;
 struct QemuRecMutex {
     CRITICAL_SECTION lock;
-- 
2.46.0




* [PATCH 07/18] qemu-thread: Introduce QEMU_COND_INITIALIZER
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (5 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 06/18] qemu-thread: Introduce QEMU_MUTEX_INITIALIZER Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-10-05 19:30   ` Richard Henderson
  2024-09-23 16:13 ` [PATCH 08/18] replay: Add replay_mutex_{lock, unlock}() stubs for qemu-user Ilya Leoshkevich
                   ` (13 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Allow static initialization of condition variables.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/qemu/thread-posix.h | 2 ++
 include/qemu/thread-win32.h | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/qemu/thread-posix.h b/include/qemu/thread-posix.h
index fc0846bfa7c..ed08181a9c6 100644
--- a/include/qemu/thread-posix.h
+++ b/include/qemu/thread-posix.h
@@ -32,6 +32,8 @@ struct QemuCond {
     bool initialized;
 };
 
+#define QEMU_COND_INITIALIZER {PTHREAD_COND_INITIALIZER, true}
+
 struct QemuSemaphore {
     QemuMutex mutex;
     QemuCond cond;
diff --git a/include/qemu/thread-win32.h b/include/qemu/thread-win32.h
index ed1f2d0e733..e1b014fcac4 100644
--- a/include/qemu/thread-win32.h
+++ b/include/qemu/thread-win32.h
@@ -29,6 +29,8 @@ struct QemuCond {
     bool initialized;
 };
 
+#define QEMU_COND_INITIALIZER {CONDITION_VARIABLE_INIT, true}
+
 struct QemuSemaphore {
     HANDLE sema;
     bool initialized;
-- 
2.46.0




* [PATCH 08/18] replay: Add replay_mutex_{lock, unlock}() stubs for qemu-user
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (6 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 07/18] qemu-thread: Introduce QEMU_COND_INITIALIZER Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 09/18] qemu-timer: Provide qemu_clock_enable() stub " Ilya Leoshkevich
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Sharing pause_all_vcpus() with qemu-user requires providing no-op
definitions of replay mutex functions. Make these functions available
via replay-core.h and move the existing stubs to a separate file.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/exec/replay-core.h | 13 +++++++++++++
 include/sysemu/replay.h    | 13 -------------
 replay/stubs-system.c      |  8 ++++++++
 stubs/meson.build          |  4 ++++
 stubs/replay-mutex.c       | 10 ++++++++++
 stubs/replay-tools.c       |  8 --------
 6 files changed, 35 insertions(+), 21 deletions(-)
 create mode 100644 stubs/replay-mutex.c

diff --git a/include/exec/replay-core.h b/include/exec/replay-core.h
index 244c77acce5..74766bae90d 100644
--- a/include/exec/replay-core.h
+++ b/include/exec/replay-core.h
@@ -77,4 +77,17 @@ void replay_save_random(int ret, void *buf, size_t len);
 /* Loads the saved values for the random number generator */
 int replay_read_random(void *buf, size_t len);
 
+/* Replay locking
+ *
+ * The locks are needed to protect the shared structures and log file
+ * when doing record/replay. They also are the main sync-point between
+ * the main-loop thread and the vCPU thread. This was a role
+ * previously filled by the BQL which has been busy trying to reduce
+ * its impact across the code. This ensures blocks of events stay
+ * sequential and reproducible.
+ */
+
+void replay_mutex_lock(void);
+void replay_mutex_unlock(void);
+
 #endif
diff --git a/include/sysemu/replay.h b/include/sysemu/replay.h
index 8102fa54f01..d6e0342b27c 100644
--- a/include/sysemu/replay.h
+++ b/include/sysemu/replay.h
@@ -51,19 +51,6 @@ typedef struct ReplayNetState ReplayNetState;
 /* Name of the initial VM snapshot */
 extern char *replay_snapshot;
 
-/* Replay locking
- *
- * The locks are needed to protect the shared structures and log file
- * when doing record/replay. They also are the main sync-point between
- * the main-loop thread and the vCPU thread. This was a role
- * previously filled by the BQL which has been busy trying to reduce
- * its impact across the code. This ensures blocks of events stay
- * sequential and reproducible.
- */
-
-void replay_mutex_lock(void);
-void replay_mutex_unlock(void);
-
 /* Processing the instructions */
 
 /*! Returns number of executed instructions. */
diff --git a/replay/stubs-system.c b/replay/stubs-system.c
index 50cefdb2d69..45984e3b098 100644
--- a/replay/stubs-system.c
+++ b/replay/stubs-system.c
@@ -94,3 +94,11 @@ void qmp_replay_seek(int64_t icount, Error **errp)
     error_set(errp, ERROR_CLASS_COMMAND_NOT_FOUND,
               "replay support not available");
 }
+
+void replay_mutex_lock(void)
+{
+}
+
+void replay_mutex_unlock(void)
+{
+}
diff --git a/stubs/meson.build b/stubs/meson.build
index 772a3e817df..ab4b98a0e18 100644
--- a/stubs/meson.build
+++ b/stubs/meson.build
@@ -46,6 +46,10 @@ if have_block or have_ga
   stub_ss.add(files('qmp-quit.c'))
 endif
 
+if have_block or have_ga or have_user
+  stub_ss.add(files('replay-mutex.c'))
+endif
+
 if have_block or have_user
   stub_ss.add(files('qtest.c'))
   stub_ss.add(files('vm-stop.c'))
diff --git a/stubs/replay-mutex.c b/stubs/replay-mutex.c
new file mode 100644
index 00000000000..08c69139bcb
--- /dev/null
+++ b/stubs/replay-mutex.c
@@ -0,0 +1,10 @@
+#include "qemu/osdep.h"
+#include "exec/replay-core.h"
+
+void replay_mutex_lock(void)
+{
+}
+
+void replay_mutex_unlock(void)
+{
+}
diff --git a/stubs/replay-tools.c b/stubs/replay-tools.c
index 3e8ca3212d9..2bca8ab6657 100644
--- a/stubs/replay-tools.c
+++ b/stubs/replay-tools.c
@@ -41,14 +41,6 @@ bool replay_checkpoint(ReplayCheckpoint checkpoint)
     return true;
 }
 
-void replay_mutex_lock(void)
-{
-}
-
-void replay_mutex_unlock(void)
-{
-}
-
 void replay_register_char_driver(struct Chardev *chr)
 {
 }
-- 
2.46.0




* [PATCH 09/18] qemu-timer: Provide qemu_clock_enable() stub for qemu-user
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (7 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 08/18] replay: Add replay_mutex_{lock, unlock}() stubs for qemu-user Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 10/18] cpu: Use BQL in qemu-user Ilya Leoshkevich
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Sharing pause_all_vcpus() with qemu-user requires a no-op
implementation of qemu_clock_enable().

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 stubs/meson.build  | 4 ++++
 stubs/qemu-timer.c | 6 ++++++
 2 files changed, 10 insertions(+)
 create mode 100644 stubs/qemu-timer.c

diff --git a/stubs/meson.build b/stubs/meson.build
index ab4b98a0e18..9e78a0b9745 100644
--- a/stubs/meson.build
+++ b/stubs/meson.build
@@ -46,6 +46,10 @@ if have_block or have_ga
   stub_ss.add(files('qmp-quit.c'))
 endif
 
+if have_user
+  stub_ss.add(files('qemu-timer.c'))
+endif
+
 if have_block or have_ga or have_user
   stub_ss.add(files('replay-mutex.c'))
 endif
diff --git a/stubs/qemu-timer.c b/stubs/qemu-timer.c
new file mode 100644
index 00000000000..27b67f7b313
--- /dev/null
+++ b/stubs/qemu-timer.c
@@ -0,0 +1,6 @@
+#include "qemu/osdep.h"
+#include "qemu/timer.h"
+
+void qemu_clock_enable(QEMUClockType type, bool enabled)
+{
+}
-- 
2.46.0




* [PATCH 10/18] cpu: Use BQL in qemu-user
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (8 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 09/18] qemu-timer: Provide qemu_clock_enable() stub " Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 11/18] accel/tcg: Unify user implementations of qemu_cpu_kick() Ilya Leoshkevich
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Currently the BQL is stubbed out in qemu-user. However, pausing and resuming
CPUs requires the BQL, so introduce it there.
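
The resulting locking discipline on a vCPU thread, condensed from the hunks
below (a sketch, not literal code): the BQL is held for cpu_loop() work and
dropped while guest code or a guest syscall runs:

    bql_lock();                   /* taken once per thread (main()/clone_func()) */
    for (;;) {
        bql_unlock();
        trapnr = cpu_exec(cs);    /* guest code runs without the BQL */
        bql_lock();
        process_queued_cpu_work(cs);
        /* ... do_syscall() likewise drops the BQL around the host syscall ... */
    }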

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec.c         |  2 ++
 bsd-user/freebsd/os-syscall.c |  6 ++++
 bsd-user/main.c               |  2 ++
 cpu-common.c                  | 45 ++++++++++++++++++++++++++++++
 gdbstub/user.c                |  5 ++++
 linux-user/main.c             |  3 ++
 linux-user/syscall.c          |  6 ++++
 system/cpus.c                 | 52 ++---------------------------------
 8 files changed, 72 insertions(+), 49 deletions(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index ca3e8e988ee..d56882c87f3 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1293,9 +1293,11 @@ int cpu_exec_user(CPUState *cs)
 {
     int trapnr;
 
+    bql_unlock();
     cpu_exec_start(cs);
     trapnr = cpu_exec(cs);
     cpu_exec_end(cs);
+    bql_lock();
     process_queued_cpu_work(cs);
 
     return trapnr;
diff --git a/bsd-user/freebsd/os-syscall.c b/bsd-user/freebsd/os-syscall.c
index ca2f6fdb66e..c2849d43223 100644
--- a/bsd-user/freebsd/os-syscall.c
+++ b/bsd-user/freebsd/os-syscall.c
@@ -32,6 +32,7 @@
 #include "qemu.h"
 #include "signal-common.h"
 #include "user/syscall-trace.h"
+#include "qemu/main-loop.h"
 
 /* BSD independent syscall shims */
 #include "bsd-file.h"
@@ -935,16 +936,21 @@ abi_long do_freebsd_syscall(void *cpu_env, int num, abi_long arg1,
 {
     abi_long ret;
 
+    bql_unlock();
+
     if (do_strace) {
         print_freebsd_syscall(num, arg1, arg2, arg3, arg4, arg5, arg6);
     }
 
     ret = freebsd_syscall(cpu_env, num, arg1, arg2, arg3, arg4, arg5, arg6,
                           arg7, arg8);
+
     if (do_strace) {
         print_freebsd_syscall_ret(num, ret);
     }
 
+    bql_lock();
+
     return ret;
 }
 
diff --git a/bsd-user/main.c b/bsd-user/main.c
index cc980e6f401..ba5b54c228d 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -48,6 +48,7 @@
 #include "qemu/guest-random.h"
 #include "gdbstub/user.h"
 #include "exec/page-vary.h"
+#include "qemu/main-loop.h"
 
 #include "host-os.h"
 #include "target_arch_cpu.h"
@@ -616,6 +617,7 @@ int main(int argc, char **argv)
 
     target_cpu_init(env, regs);
 
+    bql_lock();
     if (gdbstub) {
         gdbserver_start(gdbstub);
         gdb_handlesig(cpu, 0, NULL, NULL, 0);
diff --git a/cpu-common.c b/cpu-common.c
index 6b262233a3b..cb7c10a3915 100644
--- a/cpu-common.c
+++ b/cpu-common.c
@@ -452,3 +452,48 @@ void cpu_breakpoint_remove_all(CPUState *cpu, int mask)
         }
     }
 }
+
+/* The Big QEMU Lock (BQL) */
+static QemuMutex bql = QEMU_MUTEX_INITIALIZER;
+
+QEMU_DEFINE_STATIC_CO_TLS(bool, bql_locked)
+
+bool bql_locked(void)
+{
+    return get_bql_locked();
+}
+
+/*
+ * The BQL is taken from so many places that it is worth profiling the
+ * callers directly, instead of funneling them all through a single function.
+ */
+void bql_lock_impl(const char *file, int line)
+{
+    QemuMutexLockFunc bql_lock_fn = qatomic_read(&bql_mutex_lock_func);
+
+    g_assert(!bql_locked());
+    bql_lock_fn(&bql, file, line);
+    set_bql_locked(true);
+}
+
+void bql_unlock(void)
+{
+    g_assert(bql_locked());
+    set_bql_locked(false);
+    qemu_mutex_unlock(&bql);
+}
+
+void qemu_cond_wait_bql(QemuCond *cond)
+{
+    qemu_cond_wait(cond, &bql);
+}
+
+void qemu_cond_timedwait_bql(QemuCond *cond, int ms)
+{
+    qemu_cond_timedwait(cond, &bql, ms);
+}
+
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
+{
+    do_run_on_cpu(cpu, func, data, &bql);
+}
diff --git a/gdbstub/user.c b/gdbstub/user.c
index 77ba227fc3b..82007b09db6 100644
--- a/gdbstub/user.c
+++ b/gdbstub/user.c
@@ -12,6 +12,7 @@
 #include "qemu/osdep.h"
 #include "qemu/bitops.h"
 #include "qemu/cutils.h"
+#include "qemu/main-loop.h"
 #include "qemu/sockets.h"
 #include "exec/hwaddr.h"
 #include "exec/tb-flush.h"
@@ -169,6 +170,8 @@ void gdb_exit(int code)
 {
     char buf[4];
 
+    BQL_LOCK_GUARD();
+
     if (!gdbserver_state.init) {
         return;
     }
@@ -464,6 +467,8 @@ void gdbserver_fork_end(CPUState *cpu, pid_t pid)
     char b;
     int fd;
 
+    BQL_LOCK_GUARD();
+
     if (!gdbserver_state.init || gdbserver_user_state.fd < 0) {
         return;
     }
diff --git a/linux-user/main.c b/linux-user/main.c
index 8143a0d4b02..016f60bf3dc 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -57,6 +57,7 @@
 #include "user-mmap.h"
 #include "tcg/perf.h"
 #include "exec/page-vary.h"
+#include "qemu/main-loop.h"
 
 #ifdef CONFIG_SEMIHOSTING
 #include "semihosting/semihost.h"
@@ -1011,6 +1012,8 @@ int main(int argc, char **argv, char **envp)
 
     target_cpu_copy_regs(env, regs);
 
+    bql_lock();
+
     if (gdbstub) {
         if (gdbserver_start(gdbstub) < 0) {
             fprintf(stderr, "qemu: could not open gdbserver on %s\n",
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index b693aeff5bb..ff34ae11340 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -144,6 +144,7 @@
 #include "qapi/error.h"
 #include "fd-trans.h"
 #include "cpu_loop-common.h"
+#include "qemu/main-loop.h"
 
 #ifndef CLONE_IO
 #define CLONE_IO                0x80000000      /* Clone io context */
@@ -6529,6 +6530,7 @@ static void *clone_func(void *arg)
     /* Wait until the parent has finished initializing the tls state.  */
     pthread_mutex_lock(&clone_lock);
     pthread_mutex_unlock(&clone_lock);
+    bql_lock();
     cpu_loop(env);
     /* never exits */
     return NULL;
@@ -13772,6 +13774,8 @@ abi_long do_syscall(CPUArchState *cpu_env, int num, abi_long arg1,
     record_syscall_start(cpu, num, arg1,
                          arg2, arg3, arg4, arg5, arg6, arg7, arg8);
 
+    bql_unlock();
+
     if (unlikely(qemu_loglevel_mask(LOG_STRACE))) {
         print_syscall(cpu_env, num, arg1, arg2, arg3, arg4, arg5, arg6);
     }
@@ -13784,6 +13788,8 @@ abi_long do_syscall(CPUArchState *cpu_env, int num, abi_long arg1,
                           arg3, arg4, arg5, arg6);
     }
 
+    bql_lock();
+
     record_syscall_return(cpu, num, ret);
     return ret;
 }
diff --git a/system/cpus.c b/system/cpus.c
index 1c818ff6828..fe84b822798 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -65,9 +65,6 @@
 
 #endif /* CONFIG_LINUX */
 
-/* The Big QEMU Lock (BQL) */
-static QemuMutex bql;
-
 /*
  * The chosen accelerator is supposed to register this.
  */
@@ -420,16 +417,10 @@ void qemu_init_cpu_loop(void)
     qemu_init_sigbus();
     qemu_cond_init(&qemu_cpu_cond);
     qemu_cond_init(&qemu_pause_cond);
-    qemu_mutex_init(&bql);
 
     qemu_thread_get_self(&io_thread);
 }
 
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
-{
-    do_run_on_cpu(cpu, func, data, &bql);
-}
-
 static void qemu_cpu_stop(CPUState *cpu, bool exit)
 {
     g_assert(qemu_cpu_is_self(cpu));
@@ -459,7 +450,7 @@ void qemu_wait_io_event(CPUState *cpu)
             slept = true;
             qemu_plugin_vcpu_idle_cb(cpu);
         }
-        qemu_cond_wait(cpu->halt_cond, &bql);
+        qemu_cond_wait_bql(cpu->halt_cond);
     }
     if (slept) {
         qemu_plugin_vcpu_resume_cb(cpu);
@@ -512,48 +503,11 @@ bool qemu_in_vcpu_thread(void)
     return current_cpu && qemu_cpu_is_self(current_cpu);
 }
 
-QEMU_DEFINE_STATIC_CO_TLS(bool, bql_locked)
-
-bool bql_locked(void)
-{
-    return get_bql_locked();
-}
-
 bool qemu_in_main_thread(void)
 {
     return bql_locked();
 }
 
-/*
- * The BQL is taken from so many places that it is worth profiling the
- * callers directly, instead of funneling them all through a single function.
- */
-void bql_lock_impl(const char *file, int line)
-{
-    QemuMutexLockFunc bql_lock_fn = qatomic_read(&bql_mutex_lock_func);
-
-    g_assert(!bql_locked());
-    bql_lock_fn(&bql, file, line);
-    set_bql_locked(true);
-}
-
-void bql_unlock(void)
-{
-    g_assert(bql_locked());
-    set_bql_locked(false);
-    qemu_mutex_unlock(&bql);
-}
-
-void qemu_cond_wait_bql(QemuCond *cond)
-{
-    qemu_cond_wait(cond, &bql);
-}
-
-void qemu_cond_timedwait_bql(QemuCond *cond, int ms)
-{
-    qemu_cond_timedwait(cond, &bql, ms);
-}
-
 /* signal CPU creation */
 void cpu_thread_signal_created(CPUState *cpu)
 {
@@ -613,7 +567,7 @@ void pause_all_vcpus(void)
     replay_mutex_unlock();
 
     while (!all_vcpus_paused()) {
-        qemu_cond_wait(&qemu_pause_cond, &bql);
+        qemu_cond_wait_bql(&qemu_pause_cond);
         CPU_FOREACH(cpu) {
             qemu_cpu_kick(cpu);
         }
@@ -684,7 +638,7 @@ void qemu_init_vcpu(CPUState *cpu)
     cpus_accel->create_vcpu_thread(cpu);
 
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &bql);
+        qemu_cond_wait_bql(&qemu_cpu_cond);
     }
 }
 
-- 
2.46.0




* [PATCH 11/18] accel/tcg: Unify user implementations of qemu_cpu_kick()
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (9 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 10/18] cpu: Use BQL in qemu-user Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-10-05 19:31   ` Richard Henderson
  2024-09-23 16:13 ` [PATCH 12/18] cpu: Track CPUs executing syscalls Ilya Leoshkevich
                   ` (9 subsequent siblings)
  20 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

linux-user and bsd-user have the same implementation.
Move it to user-exec.c.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec.c | 5 +++++
 bsd-user/main.c       | 5 -----
 linux-user/main.c     | 5 -----
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index d56882c87f3..7bd6e94b8e8 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1302,3 +1302,8 @@ int cpu_exec_user(CPUState *cs)
 
     return trapnr;
 }
+
+void qemu_cpu_kick(CPUState *cpu)
+{
+    cpu_exit(cpu);
+}
diff --git a/bsd-user/main.c b/bsd-user/main.c
index ba5b54c228d..b424a21f643 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -208,11 +208,6 @@ bool qemu_cpu_is_self(CPUState *cpu)
     return thread_cpu == cpu;
 }
 
-void qemu_cpu_kick(CPUState *cpu)
-{
-    cpu_exit(cpu);
-}
-
 /* Assumes contents are already zeroed.  */
 static void init_task_state(TaskState *ts)
 {
diff --git a/linux-user/main.c b/linux-user/main.c
index 016f60bf3dc..60091cf3053 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -186,11 +186,6 @@ bool qemu_cpu_is_self(CPUState *cpu)
     return thread_cpu == cpu;
 }
 
-void qemu_cpu_kick(CPUState *cpu)
-{
-    cpu_exit(cpu);
-}
-
 void task_settid(TaskState *ts)
 {
     if (ts->ts_tid == 0) {
-- 
2.46.0




* [PATCH 12/18] cpu: Track CPUs executing syscalls
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (10 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 11/18] accel/tcg: Unify user implementations of qemu_cpu_kick() Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 13/18] cpu: Implement cpu_thread_is_idle() for qemu-user Ilya Leoshkevich
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

CPUs that execute syscalls should be considered paused by
all_vcpus_paused(). Lay the groundwork by introducing a bool field in
CPUState to track this. The field is not used by sysemu, but it's only
one byte, so it should not be a problem.
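
For context, the expectation is that a later patch in this series makes the
pause logic treat a CPU that is blocked in a host syscall as paused; a purely
hypothetical illustration of that intent (not code from this series):

    /* Hypothetical: a vCPU sitting in a host syscall counts as paused. */
    static bool cpu_counts_as_paused(CPUState *cpu)
    {
        return cpu->stopped || cpu->in_syscall;
    }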

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec.c         | 10 ++++++++++
 bsd-user/freebsd/os-syscall.c |  4 ++++
 include/exec/cpu-common.h     |  2 ++
 include/hw/core/cpu.h         |  1 +
 linux-user/syscall.c          |  5 +++++
 5 files changed, 22 insertions(+)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 7bd6e94b8e8..3ebace1e833 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1307,3 +1307,13 @@ void qemu_cpu_kick(CPUState *cpu)
 {
     cpu_exit(cpu);
 }
+
+void cpu_enter_syscall(CPUState *cs)
+{
+    cs->in_syscall = true;
+}
+
+void cpu_exit_syscall(CPUState *cs)
+{
+    cs->in_syscall = false;
+}
diff --git a/bsd-user/freebsd/os-syscall.c b/bsd-user/freebsd/os-syscall.c
index c2849d43223..9f54345e11b 100644
--- a/bsd-user/freebsd/os-syscall.c
+++ b/bsd-user/freebsd/os-syscall.c
@@ -936,6 +936,8 @@ abi_long do_freebsd_syscall(void *cpu_env, int num, abi_long arg1,
 {
     abi_long ret;
 
+    cpu_enter_syscall(env_cpu(cpu_env));
+
     bql_unlock();
 
     if (do_strace) {
@@ -951,6 +953,8 @@ abi_long do_freebsd_syscall(void *cpu_env, int num, abi_long arg1,
 
     bql_lock();
 
+    cpu_exit_syscall(env_cpu(cpu_env));
+
     return ret;
 }
 
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 4a7b43f9aa3..32bd3cad83f 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -228,6 +228,8 @@ G_NORETURN void cpu_loop_exit_restore(CPUState *cpu, uintptr_t pc);
 int cpu_exec(CPUState *cpu);
 /* accel/tcg/user-exec.c */
 int cpu_exec_user(CPUState *cs);
+void cpu_enter_syscall(CPUState *cs);
+void cpu_exit_syscall(CPUState *cs);
 
 /**
  * env_archcpu(env)
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 1c9c775df65..d073a79731b 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -475,6 +475,7 @@ struct CPUState {
     bool created;
     bool stop;
     bool stopped;
+    bool in_syscall;
 
     /* Should CPU start in powered-off state? */
     bool start_powered_off;
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index ff34ae11340..344c2e65234 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -13771,6 +13771,8 @@ abi_long do_syscall(CPUArchState *cpu_env, int num, abi_long arg1,
     }
 #endif
 
+    cpu_enter_syscall(cpu);
+
     record_syscall_start(cpu, num, arg1,
                          arg2, arg3, arg4, arg5, arg6, arg7, arg8);
 
@@ -13791,5 +13793,8 @@ abi_long do_syscall(CPUArchState *cpu_env, int num, abi_long arg1,
     bql_lock();
 
     record_syscall_return(cpu, num, ret);
+
+    cpu_exit_syscall(cpu);
+
     return ret;
 }
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 13/18] cpu: Implement cpu_thread_is_idle() for qemu-user
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (11 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 12/18] cpu: Track CPUs executing syscalls Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 14/18] cpu: Introduce cpu_is_paused() Ilya Leoshkevich
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Stopped CPUs are parked for as long as cpu_thread_is_idle() returns
true, so implement it for qemu-user, sharing part of the qemu-system
implementation.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec.c     | 12 ++++++++++++
 cpu-common.c              | 19 +++++++++++++++++++
 include/exec/cpu-common.h |  3 +++
 include/sysemu/cpus.h     |  1 -
 system/cpus.c             | 17 ++++-------------
 5 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 3ebace1e833..57a13c81fc4 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1317,3 +1317,15 @@ void cpu_exit_syscall(CPUState *cs)
 {
     cs->in_syscall = false;
 }
+
+bool cpu_is_stopped(CPUState *cpu)
+{
+    return cpu->stopped;
+}
+
+bool cpu_thread_is_idle(CPUState *cpu)
+{
+    int ret = cpu_thread_is_idle_common(cpu);
+
+    return ret == -1 ? true : ret;
+}
diff --git a/cpu-common.c b/cpu-common.c
index cb7c10a3915..2822ee9373d 100644
--- a/cpu-common.c
+++ b/cpu-common.c
@@ -497,3 +497,22 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
 {
     do_run_on_cpu(cpu, func, data, &bql);
 }
+
+bool cpu_work_list_empty(CPUState *cpu)
+{
+    return QSIMPLEQ_EMPTY_ATOMIC(&cpu->work_list);
+}
+
+int cpu_thread_is_idle_common(CPUState *cpu)
+{
+    if (cpu->stop || !cpu_work_list_empty(cpu)) {
+        return 0;
+    }
+    if (cpu_is_stopped(cpu)) {
+        return 1;
+    }
+    if (!cpu->halted || cpu_has_work(cpu)) {
+        return 0;
+    }
+    return -1;
+}
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 32bd3cad83f..d7fc24bc13d 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -231,6 +231,9 @@ int cpu_exec_user(CPUState *cs);
 void cpu_enter_syscall(CPUState *cs);
 void cpu_exit_syscall(CPUState *cs);
 
+int cpu_thread_is_idle_common(CPUState *cpu);
+bool cpu_thread_is_idle(CPUState *cpu);
+
 /**
  * env_archcpu(env)
  * @env: The architecture environment
diff --git a/include/sysemu/cpus.h b/include/sysemu/cpus.h
index b4a566cfe75..bfa3fd45650 100644
--- a/include/sysemu/cpus.h
+++ b/include/sysemu/cpus.h
@@ -21,7 +21,6 @@ void dummy_start_vcpu_thread(CPUState *);
 
 void cpus_kick_thread(CPUState *cpu);
 bool cpu_work_list_empty(CPUState *cpu);
-bool cpu_thread_is_idle(CPUState *cpu);
 bool all_cpu_threads_idle(void);
 bool cpu_can_run(CPUState *cpu);
 void qemu_wait_io_event_common(CPUState *cpu);
diff --git a/system/cpus.c b/system/cpus.c
index fe84b822798..13072be26fa 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -75,21 +75,12 @@ bool cpu_is_stopped(CPUState *cpu)
     return cpu->stopped || !runstate_is_running();
 }
 
-bool cpu_work_list_empty(CPUState *cpu)
-{
-    return QSIMPLEQ_EMPTY_ATOMIC(&cpu->work_list);
-}
-
 bool cpu_thread_is_idle(CPUState *cpu)
 {
-    if (cpu->stop || !cpu_work_list_empty(cpu)) {
-        return false;
-    }
-    if (cpu_is_stopped(cpu)) {
-        return true;
-    }
-    if (!cpu->halted || cpu_has_work(cpu)) {
-        return false;
+    int ret = cpu_thread_is_idle_common(cpu);
+
+    if (ret != -1) {
+        return ret;
     }
     if (cpus_accel->cpu_thread_is_idle) {
         return cpus_accel->cpu_thread_is_idle(cpu);
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 14/18] cpu: Introduce cpu_is_paused()
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (12 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 13/18] cpu: Implement cpu_thread_is_idle() for qemu-user Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 15/18] cpu: Set current_cpu early in qemu-user Ilya Leoshkevich
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

A qemu-system CPU is considered paused as a result of an external
request. A qemu-user CPU should, in addition, be considered paused
while it is executing a syscall.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec.c     | 5 +++++
 include/exec/cpu-common.h | 1 +
 system/cpus.c             | 7 ++++++-
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 57a13c81fc4..de4753cded7 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1329,3 +1329,8 @@ bool cpu_thread_is_idle(CPUState *cpu)
 
     return ret == -1 ? true : ret;
 }
+
+bool cpu_is_paused(CPUState *cpu)
+{
+    return cpu->stopped || cpu->in_syscall;
+}
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index d7fc24bc13d..e8b530ed889 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -233,6 +233,7 @@ void cpu_exit_syscall(CPUState *cs);
 
 int cpu_thread_is_idle_common(CPUState *cpu);
 bool cpu_thread_is_idle(CPUState *cpu);
+bool cpu_is_paused(CPUState *cpu);
 
 /**
  * env_archcpu(env)
diff --git a/system/cpus.c b/system/cpus.c
index 13072be26fa..407140c41f6 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -530,12 +530,17 @@ void cpu_resume(CPUState *cpu)
     qemu_cpu_kick(cpu);
 }
 
+bool cpu_is_paused(CPUState *cpu)
+{
+    return cpu->stopped;
+}
+
 static bool all_vcpus_paused(void)
 {
     CPUState *cpu;
 
     CPU_FOREACH(cpu) {
-        if (!cpu->stopped) {
+        if (!cpu_is_paused(cpu)) {
             return false;
         }
     }
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 15/18] cpu: Set current_cpu early in qemu-user
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (13 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 14/18] cpu: Introduce cpu_is_paused() Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 16/18] cpu: Allow pausing and resuming CPUs " Ilya Leoshkevich
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

qemu_plugin_get_registers() may be called before cpu_exec(), and it
requires current_cpu.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 bsd-user/main.c      | 1 +
 linux-user/main.c    | 1 +
 linux-user/syscall.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/bsd-user/main.c b/bsd-user/main.c
index b424a21f643..fb70aadbcee 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -617,6 +617,7 @@ int main(int argc, char **argv)
         gdbserver_start(gdbstub);
         gdb_handlesig(cpu, 0, NULL, NULL, 0);
     }
+    current_cpu = cpu;
     cpu_loop(env);
     /* never exits */
     return 0;
diff --git a/linux-user/main.c b/linux-user/main.c
index 60091cf3053..4a794445d72 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -1022,6 +1022,7 @@ int main(int argc, char **argv, char **envp)
     qemu_semihosting_guestfd_init();
 #endif
 
+    current_cpu = cpu;
     cpu_loop(env);
     /* never exits */
     return 0;
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 344c2e65234..46a8ba7098c 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -6531,6 +6531,7 @@ static void *clone_func(void *arg)
     pthread_mutex_lock(&clone_lock);
     pthread_mutex_unlock(&clone_lock);
     bql_lock();
+    current_cpu = cpu;
     cpu_loop(env);
     /* never exits */
     return NULL;
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 16/18] cpu: Allow pausing and resuming CPUs in qemu-user
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (14 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 15/18] cpu: Set current_cpu early in qemu-user Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 17/18] gdbstub: Pause all CPUs before sending stop replies Ilya Leoshkevich
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Move the CPU pause/resume functions from sysemu to cpu-common so that
qemu-user can use them as well.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 accel/tcg/user-exec-stub.c |   4 --
 accel/tcg/user-exec.c      |  11 +++-
 cpu-common.c               | 115 +++++++++++++++++++++++++++++++++++++
 include/exec/cpu-common.h  |   7 +++
 include/sysemu/cpus.h      |   5 --
 system/cpus.c              | 108 ----------------------------------
 6 files changed, 132 insertions(+), 118 deletions(-)

diff --git a/accel/tcg/user-exec-stub.c b/accel/tcg/user-exec-stub.c
index 4fbe2dbdc88..e79f2e88498 100644
--- a/accel/tcg/user-exec-stub.c
+++ b/accel/tcg/user-exec-stub.c
@@ -2,10 +2,6 @@
 #include "hw/core/cpu.h"
 #include "exec/replay-core.h"
 
-void cpu_resume(CPUState *cpu)
-{
-}
-
 void cpu_remove_sync(CPUState *cpu)
 {
 }
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index de4753cded7..3399b074485 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -1293,12 +1293,19 @@ int cpu_exec_user(CPUState *cs)
 {
     int trapnr;
 
+    do {
+        qemu_wait_io_event(cs);
+    } while (!cpu_can_run(cs));
+
     bql_unlock();
     cpu_exec_start(cs);
     trapnr = cpu_exec(cs);
     cpu_exec_end(cs);
     bql_lock();
-    process_queued_cpu_work(cs);
+
+    do {
+        qemu_wait_io_event(cs);
+    } while (!cpu_can_run(cs));
 
     return trapnr;
 }
@@ -1306,11 +1313,13 @@ int cpu_exec_user(CPUState *cs)
 void qemu_cpu_kick(CPUState *cpu)
 {
     cpu_exit(cpu);
+    qemu_cond_broadcast(cpu->halt_cond);
 }
 
 void cpu_enter_syscall(CPUState *cs)
 {
     cs->in_syscall = true;
+    qemu_pause_cond_broadcast();
 }
 
 void cpu_exit_syscall(CPUState *cs)
diff --git a/cpu-common.c b/cpu-common.c
index 2822ee9373d..979e3fe8806 100644
--- a/cpu-common.c
+++ b/cpu-common.c
@@ -24,6 +24,8 @@
 #include "sysemu/cpus.h"
 #include "qemu/lockable.h"
 #include "trace/trace-root.h"
+#include "exec/replay-core.h"
+#include "qemu/plugin.h"
 
 QemuMutex qemu_cpu_list_lock;
 static QemuCond exclusive_cond;
@@ -456,6 +458,9 @@ void cpu_breakpoint_remove_all(CPUState *cpu, int mask)
 /* The Big QEMU Lock (BQL) */
 static QemuMutex bql = QEMU_MUTEX_INITIALIZER;
 
+/* system init */
+static QemuCond qemu_pause_cond = QEMU_COND_INITIALIZER;
+
 QEMU_DEFINE_STATIC_CO_TLS(bool, bql_locked)
 
 bool bql_locked(void)
@@ -498,6 +503,105 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
     do_run_on_cpu(cpu, func, data, &bql);
 }
 
+static bool all_vcpus_paused(void)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        if (!cpu_is_paused(cpu)) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+void pause_all_vcpus(void)
+{
+    CPUState *cpu;
+
+    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
+    CPU_FOREACH(cpu) {
+        cpu_pause(cpu);
+    }
+
+    /* We need to drop the replay_lock so any vCPU threads woken up
+     * can finish their replay tasks
+     */
+    replay_mutex_unlock();
+
+    while (!all_vcpus_paused()) {
+        qemu_cond_wait_bql(&qemu_pause_cond);
+        CPU_FOREACH(cpu) {
+            qemu_cpu_kick(cpu);
+        }
+    }
+
+    bql_unlock();
+    replay_mutex_lock();
+    bql_lock();
+}
+
+void qemu_pause_cond_broadcast(void)
+{
+    qemu_cond_broadcast(&qemu_pause_cond);
+}
+
+static void qemu_cpu_stop(CPUState *cpu, bool exit)
+{
+    g_assert(qemu_cpu_is_self(cpu));
+    cpu->stop = false;
+    cpu->stopped = true;
+    if (exit) {
+        cpu_exit(cpu);
+    }
+    qemu_pause_cond_broadcast();
+}
+
+void qemu_wait_io_event_common(CPUState *cpu)
+{
+    qatomic_set_mb(&cpu->thread_kicked, false);
+    if (cpu->stop) {
+        qemu_cpu_stop(cpu, false);
+    }
+    process_queued_cpu_work(cpu);
+}
+
+void qemu_wait_io_event(CPUState *cpu)
+{
+    bool slept = false;
+
+    while (cpu_thread_is_idle(cpu)) {
+        if (!slept) {
+            slept = true;
+            qemu_plugin_vcpu_idle_cb(cpu);
+        }
+        qemu_cond_wait_bql(cpu->halt_cond);
+    }
+    if (slept) {
+        qemu_plugin_vcpu_resume_cb(cpu);
+    }
+
+    qemu_wait_io_event_common(cpu);
+}
+
+void cpu_pause(CPUState *cpu)
+{
+    if (qemu_cpu_is_self(cpu)) {
+        qemu_cpu_stop(cpu, true);
+    } else {
+        cpu->stop = true;
+        qemu_cpu_kick(cpu);
+    }
+}
+
+void cpu_resume(CPUState *cpu)
+{
+    cpu->stop = false;
+    cpu->stopped = false;
+    qemu_cpu_kick(cpu);
+}
+
 bool cpu_work_list_empty(CPUState *cpu)
 {
     return QSIMPLEQ_EMPTY_ATOMIC(&cpu->work_list);
@@ -516,3 +620,14 @@ int cpu_thread_is_idle_common(CPUState *cpu)
     }
     return -1;
 }
+
+bool cpu_can_run(CPUState *cpu)
+{
+    if (cpu->stop) {
+        return false;
+    }
+    if (cpu_is_stopped(cpu)) {
+        return false;
+    }
+    return true;
+}
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index e8b530ed889..a54368c5b69 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -234,6 +234,13 @@ void cpu_exit_syscall(CPUState *cs);
 int cpu_thread_is_idle_common(CPUState *cpu);
 bool cpu_thread_is_idle(CPUState *cpu);
 bool cpu_is_paused(CPUState *cpu);
+bool cpu_can_run(CPUState *cpu);
+void qemu_wait_io_event_common(CPUState *cpu);
+void qemu_wait_io_event(CPUState *cpu);
+void pause_all_vcpus(void);
+void qemu_pause_cond_broadcast(void);
+bool cpu_work_list_empty(CPUState *cpu);
+
 
 /**
  * env_archcpu(env)
diff --git a/include/sysemu/cpus.h b/include/sysemu/cpus.h
index bfa3fd45650..ebfd0b77bcd 100644
--- a/include/sysemu/cpus.h
+++ b/include/sysemu/cpus.h
@@ -20,11 +20,7 @@ void dummy_start_vcpu_thread(CPUState *);
 #define VCPU_THREAD_NAME_SIZE 16
 
 void cpus_kick_thread(CPUState *cpu);
-bool cpu_work_list_empty(CPUState *cpu);
 bool all_cpu_threads_idle(void);
-bool cpu_can_run(CPUState *cpu);
-void qemu_wait_io_event_common(CPUState *cpu);
-void qemu_wait_io_event(CPUState *cpu);
 void cpu_thread_signal_created(CPUState *cpu);
 void cpu_thread_signal_destroyed(CPUState *cpu);
 void cpu_handle_guest_debug(CPUState *cpu);
@@ -34,7 +30,6 @@ void cpu_handle_guest_debug(CPUState *cpu);
 bool qemu_in_vcpu_thread(void);
 void qemu_init_cpu_loop(void);
 void resume_all_vcpus(void);
-void pause_all_vcpus(void);
 void cpu_stop_current(void);
 
 extern int icount_align_option;
diff --git a/system/cpus.c b/system/cpus.c
index 407140c41f6..9ad7bae056e 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -307,17 +307,6 @@ int vm_shutdown(void)
     return do_vm_stop(RUN_STATE_SHUTDOWN, false);
 }
 
-bool cpu_can_run(CPUState *cpu)
-{
-    if (cpu->stop) {
-        return false;
-    }
-    if (cpu_is_stopped(cpu)) {
-        return false;
-    }
-    return true;
-}
-
 void cpu_handle_guest_debug(CPUState *cpu)
 {
     if (replay_running_debug()) {
@@ -400,56 +389,15 @@ static QemuThread io_thread;
 
 /* cpu creation */
 static QemuCond qemu_cpu_cond;
-/* system init */
-static QemuCond qemu_pause_cond;
 
 void qemu_init_cpu_loop(void)
 {
     qemu_init_sigbus();
     qemu_cond_init(&qemu_cpu_cond);
-    qemu_cond_init(&qemu_pause_cond);
 
     qemu_thread_get_self(&io_thread);
 }
 
-static void qemu_cpu_stop(CPUState *cpu, bool exit)
-{
-    g_assert(qemu_cpu_is_self(cpu));
-    cpu->stop = false;
-    cpu->stopped = true;
-    if (exit) {
-        cpu_exit(cpu);
-    }
-    qemu_cond_broadcast(&qemu_pause_cond);
-}
-
-void qemu_wait_io_event_common(CPUState *cpu)
-{
-    qatomic_set_mb(&cpu->thread_kicked, false);
-    if (cpu->stop) {
-        qemu_cpu_stop(cpu, false);
-    }
-    process_queued_cpu_work(cpu);
-}
-
-void qemu_wait_io_event(CPUState *cpu)
-{
-    bool slept = false;
-
-    while (cpu_thread_is_idle(cpu)) {
-        if (!slept) {
-            slept = true;
-            qemu_plugin_vcpu_idle_cb(cpu);
-        }
-        qemu_cond_wait_bql(cpu->halt_cond);
-    }
-    if (slept) {
-        qemu_plugin_vcpu_resume_cb(cpu);
-    }
-
-    qemu_wait_io_event_common(cpu);
-}
-
 void cpus_kick_thread(CPUState *cpu)
 {
     if (cpu->thread_kicked) {
@@ -513,67 +461,11 @@ void cpu_thread_signal_destroyed(CPUState *cpu)
     qemu_cond_signal(&qemu_cpu_cond);
 }
 
-void cpu_pause(CPUState *cpu)
-{
-    if (qemu_cpu_is_self(cpu)) {
-        qemu_cpu_stop(cpu, true);
-    } else {
-        cpu->stop = true;
-        qemu_cpu_kick(cpu);
-    }
-}
-
-void cpu_resume(CPUState *cpu)
-{
-    cpu->stop = false;
-    cpu->stopped = false;
-    qemu_cpu_kick(cpu);
-}
-
 bool cpu_is_paused(CPUState *cpu)
 {
     return cpu->stopped;
 }
 
-static bool all_vcpus_paused(void)
-{
-    CPUState *cpu;
-
-    CPU_FOREACH(cpu) {
-        if (!cpu_is_paused(cpu)) {
-            return false;
-        }
-    }
-
-    return true;
-}
-
-void pause_all_vcpus(void)
-{
-    CPUState *cpu;
-
-    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
-    CPU_FOREACH(cpu) {
-        cpu_pause(cpu);
-    }
-
-    /* We need to drop the replay_lock so any vCPU threads woken up
-     * can finish their replay tasks
-     */
-    replay_mutex_unlock();
-
-    while (!all_vcpus_paused()) {
-        qemu_cond_wait_bql(&qemu_pause_cond);
-        CPU_FOREACH(cpu) {
-            qemu_cpu_kick(cpu);
-        }
-    }
-
-    bql_unlock();
-    replay_mutex_lock();
-    bql_lock();
-}
-
 void resume_all_vcpus(void)
 {
     CPUState *cpu;
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 17/18] gdbstub: Pause all CPUs before sending stop replies
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (15 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 16/18] cpu: Allow pausing and resuming CPUs " Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:13 ` [PATCH 18/18] tests/tcg: Stress test thread breakpoints Ilya Leoshkevich
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

This is required by the GDB remote protocol.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 gdbstub/gdbstub.c |  2 ++
 gdbstub/user.c    | 10 +++++++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/gdbstub/gdbstub.c b/gdbstub/gdbstub.c
index a096104b07a..be632f8b214 100644
--- a/gdbstub/gdbstub.c
+++ b/gdbstub/gdbstub.c
@@ -2497,6 +2497,8 @@ bool gdb_try_stop(void)
         return false;
     }
 
+    pause_all_vcpus();
+
     gdbserver_state.allow_stop_reply = false;
     return true;
 }
diff --git a/gdbstub/user.c b/gdbstub/user.c
index 82007b09db6..3095c846a99 100644
--- a/gdbstub/user.c
+++ b/gdbstub/user.c
@@ -458,6 +458,7 @@ static void disable_gdbstub(CPUState *thread_cpu)
         cpu_breakpoint_remove_all(cpu, BP_GDB);
         /* no cpu_watchpoint_remove_all for user-mode */
         cpu_single_step(cpu, 0);
+        cpu_resume(cpu);
     }
     tb_flush(thread_cpu);
 }
@@ -650,9 +651,16 @@ int gdb_continue_partial(char *newstates)
      * previous situation, where only one CPU would be single-stepped.
      */
     CPU_FOREACH(cpu) {
-        if (newstates[cpu->cpu_index] == 's') {
+        switch (newstates[cpu->cpu_index]) {
+        case 's':
             trace_gdbstub_op_stepping(cpu->cpu_index);
             cpu_single_step(cpu, gdbserver_state.sstep_flags);
+            QEMU_FALLTHROUGH;
+        case 'c':
+        case 'C':
+        case 'S':
+            cpu_resume(cpu);
+            break;
         }
     }
     gdbserver_user_state.running_state = 1;
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH 18/18] tests/tcg: Stress test thread breakpoints
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (16 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 17/18] gdbstub: Pause all CPUs before sending stop replies Ilya Leoshkevich
@ 2024-09-23 16:13 ` Ilya Leoshkevich
  2024-09-23 16:37 ` [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:13 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel, Ilya Leoshkevich

Add a test to prevent regressions.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 tests/tcg/multiarch/Makefile.target           | 13 ++-
 .../gdbstub/test-thread-breakpoint-stress.py  | 28 ++++++
 .../tcg/multiarch/thread-breakpoint-stress.c  | 92 +++++++++++++++++++
 3 files changed, 132 insertions(+), 1 deletion(-)
 create mode 100644 tests/tcg/multiarch/gdbstub/test-thread-breakpoint-stress.py
 create mode 100644 tests/tcg/multiarch/thread-breakpoint-stress.c

diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
index 5e3391ec9d2..0a9cd037094 100644
--- a/tests/tcg/multiarch/Makefile.target
+++ b/tests/tcg/multiarch/Makefile.target
@@ -42,6 +42,9 @@ munmap-pthread: LDFLAGS+=-pthread
 vma-pthread: CFLAGS+=-pthread
 vma-pthread: LDFLAGS+=-pthread
 
+thread-breakpoint-stress: CFLAGS+=-pthread
+thread-breakpoint-stress: LDFLAGS+=-pthread
+
 # The vma-pthread seems very sensitive on gitlab and we currently
 # don't know if its exposing a real bug or the test is flaky.
 ifneq ($(GITLAB_CI),)
@@ -127,6 +130,13 @@ run-gdbstub-follow-fork-mode-parent: follow-fork-mode
 		--bin $< --test $(MULTIARCH_SRC)/gdbstub/follow-fork-mode-parent.py, \
 	following parents on fork)
 
+run-gdbstub-thread-breakpoint-stress: thread-breakpoint-stress
+	$(call run-test, $@, $(GDB_SCRIPT) \
+		--gdb $(GDB) \
+		--qemu $(QEMU) --qargs "$(QEMU_OPTS)" \
+		--bin $< --test $(MULTIARCH_SRC)/gdbstub/test-thread-breakpoint-stress.py, \
+	hitting many breakpoints on different threads)
+
 else
 run-gdbstub-%:
 	$(call skip-test, "gdbstub test $*", "need working gdb with $(patsubst -%,,$(TARGET_NAME)) support")
@@ -136,7 +146,8 @@ EXTRA_RUNS += run-gdbstub-sha1 run-gdbstub-qxfer-auxv-read \
 	      run-gdbstub-registers run-gdbstub-prot-none \
 	      run-gdbstub-catch-syscalls run-gdbstub-follow-fork-mode-child \
 	      run-gdbstub-follow-fork-mode-parent \
-	      run-gdbstub-qxfer-siginfo-read
+	      run-gdbstub-qxfer-siginfo-read \
+	      run-gdbstub-thread-breakpoint-stress
 
 # ARM Compatible Semi Hosting Tests
 #
diff --git a/tests/tcg/multiarch/gdbstub/test-thread-breakpoint-stress.py b/tests/tcg/multiarch/gdbstub/test-thread-breakpoint-stress.py
new file mode 100644
index 00000000000..489d238f02d
--- /dev/null
+++ b/tests/tcg/multiarch/gdbstub/test-thread-breakpoint-stress.py
@@ -0,0 +1,28 @@
+"""Test multiple threads hitting breakpoints.
+
+SPDX-License-Identifier: GPL-2.0-or-later
+"""
+from test_gdbstub import main, report
+
+
+N_BREAK_THREADS = 2
+N_BREAKS = 100
+
+
+def run_test():
+    """Run through the tests one by one"""
+    if gdb.selected_inferior().architecture().name() == "MicroBlaze":
+        print("SKIP: Atomics are broken on MicroBlaze")
+        exit(0)
+    gdb.execute("break break_here")
+    gdb.execute("continue")
+    for _ in range(N_BREAK_THREADS * N_BREAKS):
+        counter1 = int(gdb.parse_and_eval("s->counter"))
+        counter2 = int(gdb.parse_and_eval("s->counter"))
+        report(counter1 == counter2, "{} == {}".format(counter1, counter2))
+        gdb.execute("continue")
+    exitcode = int(gdb.parse_and_eval("$_exitcode"))
+    report(exitcode == 0, "{} == 0".format(exitcode))
+
+
+main(run_test)
diff --git a/tests/tcg/multiarch/thread-breakpoint-stress.c b/tests/tcg/multiarch/thread-breakpoint-stress.c
new file mode 100644
index 00000000000..1feed8577aa
--- /dev/null
+++ b/tests/tcg/multiarch/thread-breakpoint-stress.c
@@ -0,0 +1,92 @@
+/*
+ * Test multiple threads hitting breakpoints.
+ *
+ * The main thread performs a lengthy syscall. The test verifies that this
+ * does not interfere with the ability to stop threads.
+ *
+ * The counter thread constantly increments a value by 1. The test verifies
+ * that it is stopped when another thread hits a breakpoint.
+ *
+ * The break threads constantly and simultaneously hit the same breakpoint.
+ * The test verifies that GDB and gdbstub do not lose any hits and do not
+ * deadlock.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include <assert.h>
+#include <pthread.h>
+#include <stdlib.h>
+
+struct state {
+    int counter;
+    int done;
+    pthread_barrier_t barrier;
+    int break_counter;
+};
+
+static void *counter_loop(void *arg)
+{
+    struct state *s = arg;
+
+    while (!__atomic_load_n(&s->done, __ATOMIC_SEQ_CST)) {
+        __atomic_add_fetch(&s->counter, 1, __ATOMIC_SEQ_CST);
+    }
+
+    return NULL;
+}
+
+#define N_BREAK_THREADS 2
+#define N_BREAKS 100
+
+/* Non-static to avoid inlining. */
+void break_here(struct state *s)
+{
+    __atomic_add_fetch(&s->break_counter, 1, __ATOMIC_SEQ_CST);
+}
+
+static void *break_loop(void *arg)
+{
+    struct state *s = arg;
+    int i;
+
+    pthread_barrier_wait(&s->barrier);
+    for (i = 0; i < N_BREAKS; i++) {
+        break_here(s);
+    }
+
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t break_threads[N_BREAK_THREADS], counter_thread;
+    struct state s = {};
+    int i, ret;
+
+#ifdef __MICROBLAZE__
+    /*
+     * Microblaze has broken atomics.
+     * See https://github.com/Xilinx/meta-xilinx/blob/xlnx-rel-v2024.1/meta-microblaze/recipes-devtools/gcc/gcc-12/0009-Patch-microblaze-Fix-atomic-boolean-return-value.patch
+     */
+    return EXIT_SUCCESS;
+#endif
+
+    ret = pthread_barrier_init(&s.barrier, NULL, N_BREAK_THREADS);
+    assert(ret == 0);
+    ret = pthread_create(&counter_thread, NULL, counter_loop, &s);
+    assert(ret == 0);
+    for (i = 0; i < N_BREAK_THREADS; i++) {
+        ret = pthread_create(&break_threads[i], NULL, break_loop, &s);
+        assert(ret == 0);
+    }
+    for (i = 0; i < N_BREAK_THREADS; i++) {
+        ret = pthread_join(break_threads[i], NULL);
+        assert(ret == 0);
+    }
+    __atomic_store_n(&s.done, 1, __ATOMIC_SEQ_CST);
+    ret = pthread_join(counter_thread, NULL);
+    assert(ret == 0);
+    assert(s.break_counter == N_BREAK_THREADS * N_BREAKS);
+
+    return EXIT_SUCCESS;
+}
-- 
2.46.0



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (17 preceding siblings ...)
  2024-09-23 16:13 ` [PATCH 18/18] tests/tcg: Stress test thread breakpoints Ilya Leoshkevich
@ 2024-09-23 16:37 ` Ilya Leoshkevich
  2024-09-24 11:46 ` Richard Henderson
  2025-01-08 15:56 ` Alex Bennée
  20 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-23 16:37 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On Mon, 2024-09-23 at 18:12 +0200, Ilya Leoshkevich wrote:
> Hi,
> 
> On reporting a breakpoint in a non-non-stop mode, GDB remotes must
> stop
> all threads. Currently qemu-user doesn't do that, breaking the
> debugging session for at least two reasons: concurrent access to the
> GDB socket, and an assertion within GDB [1].
> 
> This series fixes this by importing pause_all_vcpus() from qemu-
> system.
> This in turn requires introducing BQL and a few stubs to qemu-user.
> 
> Best regards,
> Ilya
> 
> [1] https://gitlab.com/qemu-project/qemu/-/issues/2465
> 
> Ilya Leoshkevich (18):
>   gdbstub: Make gdb_get_char() static
>   gdbstub: Move phy_memory_mode to GDBSystemState
>   gdbstub: Move gdb_syscall_mode to GDBSyscallState
>   gdbstub: Factor out gdb_try_stop()
>   accel/tcg: Factor out cpu_exec_user()
>   qemu-thread: Introduce QEMU_MUTEX_INITIALIZER
>   qemu-thread: Introduce QEMU_COND_INITIALIZER
>   replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
>   qemu-timer: Provide qemu_clock_enable() stub for qemu-user
>   cpu: Use BQL in qemu-user
>   accel/tcg: Unify user implementations of qemu_cpu_kick()
>   cpu: Track CPUs executing syscalls
>   cpu: Implement cpu_thread_is_idle() for qemu-user
>   cpu: Introduce cpu_is_paused()
>   cpu: Set current_cpu early in qemu-user
>   cpu: Allow pausing and resuming CPUs in qemu-user
>   gdbstub: Pause all CPUs before sending stop replies
>   tests/tcg: Stress test thread breakpoints
> 
>  accel/tcg/user-exec-stub.c                    |   4 -
>  accel/tcg/user-exec.c                         |  55 ++++++
>  bsd-user/aarch64/target_arch_cpu.h            |   6 +-
>  bsd-user/arm/target_arch_cpu.h                |   5 +-
>  bsd-user/freebsd/os-syscall.c                 |  10 +
>  bsd-user/i386/target_arch_cpu.h               |   5 +-
>  bsd-user/main.c                               |   8 +-
>  bsd-user/x86_64/target_arch_cpu.h             |   5 +-
>  cpu-common.c                                  | 179
> ++++++++++++++++++
>  gdbstub/gdbstub.c                             |  17 +-
>  gdbstub/internals.h                           |   4 +-
>  gdbstub/syscalls.c                            |  20 +-
>  gdbstub/system.c                              |  18 +-
>  gdbstub/user.c                                |  28 ++-
>  include/exec/cpu-common.h                     |  15 ++
>  include/exec/replay-core.h                    |  13 ++
>  include/hw/core/cpu.h                         |   1 +
>  include/qemu/thread-posix.h                   |   8 +
>  include/qemu/thread-win32.h                   |   8 +
>  include/sysemu/cpus.h                         |   6 -
>  include/sysemu/replay.h                       |  13 --
>  linux-user/aarch64/cpu_loop.c                 |   5 +-
>  linux-user/alpha/cpu_loop.c                   |   5 +-
>  linux-user/arm/cpu_loop.c                     |   5 +-
>  linux-user/hexagon/cpu_loop.c                 |   5 +-
>  linux-user/hppa/cpu_loop.c                    |   5 +-
>  linux-user/i386/cpu_loop.c                    |   5 +-
>  linux-user/loongarch64/cpu_loop.c             |   5 +-
>  linux-user/m68k/cpu_loop.c                    |   5 +-
>  linux-user/main.c                             |   9 +-
>  linux-user/microblaze/cpu_loop.c              |   5 +-
>  linux-user/mips/cpu_loop.c                    |   5 +-
>  linux-user/openrisc/cpu_loop.c                |   5 +-
>  linux-user/ppc/cpu_loop.c                     |   5 +-
>  linux-user/riscv/cpu_loop.c                   |   5 +-
>  linux-user/s390x/cpu_loop.c                   |   5 +-
>  linux-user/sh4/cpu_loop.c                     |   5 +-
>  linux-user/sparc/cpu_loop.c                   |   5 +-
>  linux-user/syscall.c                          |  12 ++
>  linux-user/xtensa/cpu_loop.c                  |   5 +-
>  replay/stubs-system.c                         |   8 +
>  stubs/meson.build                             |   8 +
>  stubs/qemu-timer.c                            |   6 +
>  stubs/replay-mutex.c                          |  10 +
>  stubs/replay-tools.c                          |   8 -
>  system/cpus.c                                 | 172 +---------------
> -
>  tests/tcg/multiarch/Makefile.target           |  13 +-
>  .../gdbstub/test-thread-breakpoint-stress.py  |  28 +++
>  .../tcg/multiarch/thread-breakpoint-stress.c  |  92 +++++++++
>  49 files changed, 552 insertions(+), 327 deletions(-)
>  create mode 100644 stubs/qemu-timer.c
>  create mode 100644 stubs/replay-mutex.c
>  create mode 100644 tests/tcg/multiarch/gdbstub/test-thread-
> breakpoint-stress.py
>  create mode 100644 tests/tcg/multiarch/thread-breakpoint-stress.c

Correction: the subject should have "qemu-user" instead of "qemu-cpu".

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (18 preceding siblings ...)
  2024-09-23 16:37 ` [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
@ 2024-09-24 11:46 ` Richard Henderson
  2024-09-25  7:43   ` Ilya Leoshkevich
  2025-01-08 15:56 ` Alex Bennée
  20 siblings, 1 reply; 37+ messages in thread
From: Richard Henderson @ 2024-09-24 11:46 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 18:12, Ilya Leoshkevich wrote:
> Hi,
> 
> On reporting a breakpoint in a non-non-stop mode, GDB remotes must stop
> all threads. Currently qemu-user doesn't do that, breaking the
> debugging session for at least two reasons: concurrent access to the
> GDB socket, and an assertion within GDB [1].
> 
> This series fixes this by importing pause_all_vcpus() from qemu-system.
> This in turn requires introducing BQL and a few stubs to qemu-user.

I would have expected you to reuse (some portion of) start_exclusive, which is already 
part of qemu-user.  Is there a reason you chose a solution which requires...

>    replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
>    qemu-timer: Provide qemu_clock_enable() stub for qemu-user
>    cpu: Use BQL in qemu-user

all sorts of other infrastructure?


r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-09-24 11:46 ` Richard Henderson
@ 2024-09-25  7:43   ` Ilya Leoshkevich
  2024-10-05 19:51     ` Richard Henderson
  0 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-09-25  7:43 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On Tue, 2024-09-24 at 13:46 +0200, Richard Henderson wrote:
> On 9/23/24 18:12, Ilya Leoshkevich wrote:
> > Hi,
> > 
> > On reporting a breakpoint in a non-non-stop mode, GDB remotes must
> > stop
> > all threads. Currently qemu-user doesn't do that, breaking the
> > debugging session for at least two reasons: concurrent access to
> > the
> > GDB socket, and an assertion within GDB [1].
> > 
> > This series fixes this by importing pause_all_vcpus() from qemu-
> > system.
> > This in turn requires introducing BQL and a few stubs to qemu-user.
> 
> I would have expected you to reuse (some portion of) start_exclusive,
> which is already 
> part of qemu-user.  Is there a reason you chose a solution which
> requires...
> 
> >    replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
> >    qemu-timer: Provide qemu_clock_enable() stub for qemu-user
> >    cpu: Use BQL in qemu-user
> 
> all sorts of other infrastructure?
> 
> 
> r~

I don't think start_exclusive() would protect the gdb socket from
concurrent accesses (e.g., if two threads are simultaneously stopped).

I have a patch [1] that introduces a big gdbstub lock for that, but it
looks more complex than just extending BQL to qemu-user. Also, the
BQL-based pause/resume code already works for the system mode and is
well tested.

[1]
https://gitlab.com/iii-i/qemu/-/commit/0944716218820f8bdfdcf80acc6c39a48b91670c


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 01/18] gdbstub: Make gdb_get_char() static
  2024-09-23 16:12 ` [PATCH 01/18] gdbstub: Make gdb_get_char() static Ilya Leoshkevich
@ 2024-10-05 19:20   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:20 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:12, Ilya Leoshkevich wrote:
> It's user-only since commit a7e0f9bd2ace ("gdbstub: abstract target
> specific details from gdb_put_packet_binary").
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   gdbstub/internals.h | 2 --
>   gdbstub/user.c      | 2 +-
>   2 files changed, 1 insertion(+), 3 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 02/18] gdbstub: Move phy_memory_mode to GDBSystemState
  2024-09-23 16:12 ` [PATCH 02/18] gdbstub: Move phy_memory_mode to GDBSystemState Ilya Leoshkevich
@ 2024-10-05 19:21   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:21 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:12, Ilya Leoshkevich wrote:
> Follow the convention that all the pieces of the global stub state must
> be inside a single struct.
> 
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
>   gdbstub/system.c | 12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/gdbstub/system.c b/gdbstub/system.c
> index 1ad87fe7fdf..5ce357c6c2b 100644
> --- a/gdbstub/system.c
> +++ b/gdbstub/system.c
> @@ -35,6 +35,7 @@
>   typedef struct {
>       CharBackend chr;
>       Chardev *mon_chr;
> +    int phy_memory_mode;
>   } GDBSystemState;

While you're at it, this should be a bool.

Anyway,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~

>   
>   GDBSystemState gdbserver_system_state;
> @@ -445,14 +446,12 @@ void gdb_qemu_exit(int code)
>   /*
>    * Memory access
>    */
> -static int phy_memory_mode;
> -
>   int gdb_target_memory_rw_debug(CPUState *cpu, hwaddr addr,
>                                  uint8_t *buf, int len, bool is_write)
>   {
>       CPUClass *cc;
>   
> -    if (phy_memory_mode) {
> +    if (gdbserver_system_state.phy_memory_mode) {
>           if (is_write) {
>               cpu_physical_memory_write(addr, buf, len);
>           } else {
> @@ -491,7 +490,8 @@ bool gdb_can_reverse(void)
>   void gdb_handle_query_qemu_phy_mem_mode(GArray *params,
>                                           void *ctx)
>   {
> -    g_string_printf(gdbserver_state.str_buf, "%d", phy_memory_mode);
> +    g_string_printf(gdbserver_state.str_buf, "%d",
> +                    gdbserver_system_state.phy_memory_mode);
>       gdb_put_strbuf();
>   }
>   
> @@ -503,9 +503,9 @@ void gdb_handle_set_qemu_phy_mem_mode(GArray *params, void *ctx)
>       }
>   
>       if (!gdb_get_cmd_param(params, 0)->val_ul) {
> -        phy_memory_mode = 0;
> +        gdbserver_system_state.phy_memory_mode = 0;
>       } else {
> -        phy_memory_mode = 1;
> +        gdbserver_system_state.phy_memory_mode = 1;
>       }
>       gdb_put_packet("OK");
>   }



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 03/18] gdbstub: Move gdb_syscall_mode to GDBSyscallState
  2024-09-23 16:12 ` [PATCH 03/18] gdbstub: Move gdb_syscall_mode to GDBSyscallState Ilya Leoshkevich
@ 2024-10-05 19:22   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:22 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:12, Ilya Leoshkevich wrote:
> Follow the convention that all the pieces of the global stub state must
> be inside a single struct.
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   gdbstub/syscalls.c | 20 ++++++++++----------
>   1 file changed, 10 insertions(+), 10 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 04/18] gdbstub: Factor out gdb_try_stop()
  2024-09-23 16:12 ` [PATCH 04/18] gdbstub: Factor out gdb_try_stop() Ilya Leoshkevich
@ 2024-10-05 19:26   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:26 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:12, Ilya Leoshkevich wrote:
> Move checking and setting allow_stop_reply into a function.
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   gdbstub/gdbstub.c   | 15 +++++++++++----
>   gdbstub/internals.h |  2 ++
>   gdbstub/system.c    |  6 ++----
>   gdbstub/user.c      | 11 ++++-------
>   4 files changed, 19 insertions(+), 15 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 05/18] accel/tcg: Factor out cpu_exec_user()
  2024-09-23 16:13 ` [PATCH 05/18] accel/tcg: Factor out cpu_exec_user() Ilya Leoshkevich
@ 2024-10-05 19:29   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:29 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:13, Ilya Leoshkevich wrote:
> All linux-user cpu_loop() implementations contain the same sequence
> of function calls. Factor them out so that they can be changed in one
> place.
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   accel/tcg/user-exec.c              | 12 ++++++++++++
>   bsd-user/aarch64/target_arch_cpu.h |  6 +-----
>   bsd-user/arm/target_arch_cpu.h     |  5 +----
>   bsd-user/i386/target_arch_cpu.h    |  5 +----
>   bsd-user/x86_64/target_arch_cpu.h  |  5 +----
>   include/exec/cpu-common.h          |  2 ++
>   linux-user/aarch64/cpu_loop.c      |  5 +----
>   linux-user/alpha/cpu_loop.c        |  5 +----
>   linux-user/arm/cpu_loop.c          |  5 +----
>   linux-user/hexagon/cpu_loop.c      |  5 +----
>   linux-user/hppa/cpu_loop.c         |  5 +----
>   linux-user/i386/cpu_loop.c         |  5 +----
>   linux-user/loongarch64/cpu_loop.c  |  5 +----
>   linux-user/m68k/cpu_loop.c         |  5 +----
>   linux-user/microblaze/cpu_loop.c   |  5 +----
>   linux-user/mips/cpu_loop.c         |  5 +----
>   linux-user/openrisc/cpu_loop.c     |  5 +----
>   linux-user/ppc/cpu_loop.c          |  5 +----
>   linux-user/riscv/cpu_loop.c        |  5 +----
>   linux-user/s390x/cpu_loop.c        |  5 +----
>   linux-user/sh4/cpu_loop.c          |  5 +----
>   linux-user/sparc/cpu_loop.c        |  5 +----
>   linux-user/xtensa/cpu_loop.c       |  5 +----
>   23 files changed, 35 insertions(+), 85 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 06/18] qemu-thread: Introduce QEMU_MUTEX_INITIALIZER
  2024-09-23 16:13 ` [PATCH 06/18] qemu-thread: Introduce QEMU_MUTEX_INITIALIZER Ilya Leoshkevich
@ 2024-10-05 19:30   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:30 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:13, Ilya Leoshkevich wrote:
> Allow static initialization of mutexes.
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   include/qemu/thread-posix.h | 6 ++++++
>   include/qemu/thread-win32.h | 6 ++++++
>   2 files changed, 12 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 07/18] qemu-thread: Introduce QEMU_COND_INITIALIZER
  2024-09-23 16:13 ` [PATCH 07/18] qemu-thread: Introduce QEMU_COND_INITIALIZER Ilya Leoshkevich
@ 2024-10-05 19:30   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:30 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:13, Ilya Leoshkevich wrote:
> Allow static initialization of condition variables.
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   include/qemu/thread-posix.h | 2 ++
>   include/qemu/thread-win32.h | 2 ++
>   2 files changed, 4 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 11/18] accel/tcg: Unify user implementations of qemu_cpu_kick()
  2024-09-23 16:13 ` [PATCH 11/18] accel/tcg: Unify user implementations of qemu_cpu_kick() Ilya Leoshkevich
@ 2024-10-05 19:31   ` Richard Henderson
  0 siblings, 0 replies; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:31 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/23/24 09:13, Ilya Leoshkevich wrote:
> linux-user and bsd-user have the same implementation.
> Move it to user-exec.c.
> 
> Signed-off-by: Ilya Leoshkevich<iii@linux.ibm.com>
> ---
>   accel/tcg/user-exec.c | 5 +++++
>   bsd-user/main.c       | 5 -----
>   linux-user/main.c     | 5 -----
>   3 files changed, 5 insertions(+), 10 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-09-25  7:43   ` Ilya Leoshkevich
@ 2024-10-05 19:51     ` Richard Henderson
  2024-10-05 20:26       ` Ilya Leoshkevich
  0 siblings, 1 reply; 37+ messages in thread
From: Richard Henderson @ 2024-10-05 19:51 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 9/25/24 00:43, Ilya Leoshkevich wrote:
> On Tue, 2024-09-24 at 13:46 +0200, Richard Henderson wrote:
>> On 9/23/24 18:12, Ilya Leoshkevich wrote:
>>> Hi,
>>>
>>> On reporting a breakpoint in a non-non-stop mode, GDB remotes must
>>> stop
>>> all threads. Currently qemu-user doesn't do that, breaking the
>>> debugging session for at least two reasons: concurrent access to
>>> the
>>> GDB socket, and an assertion within GDB [1].
>>>
>>> This series fixes this by importing pause_all_vcpus() from qemu-
>>> system.
>>> This in turn requires introducing BQL and a few stubs to qemu-user.
>>
>> I would have expected you to reuse (some portion of) start_exclusive,
>> which is already
>> part of qemu-user.  Is there a reason you chose a solution which
>> requires...
>>
>>>     replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
>>>     qemu-timer: Provide qemu_clock_enable() stub for qemu-user
>>>     cpu: Use BQL in qemu-user
>>
>> all sorts of other infrastructure?
>>
>>
>> r~
> 
> I don't think start_exclusive() would protect the gdb socket from
> concurrent accesses (e.g., if two threads are simultaneously stopped).

Of course it would, otherwise "exclusive" has no meaning.
All other cpus are blocked in exclusive_idle().

Importantly, no cpus are blocked in syscalls, where the kernel can modify memory behind 
gdbstub's back (e.g. read).  I think considering "in_syscall" to be "paused" is a mistake.


r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-10-05 19:51     ` Richard Henderson
@ 2024-10-05 20:26       ` Ilya Leoshkevich
  2024-10-05 20:35         ` Ilya Leoshkevich
  0 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-10-05 20:26 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On Sat, 2024-10-05 at 12:51 -0700, Richard Henderson wrote:
> On 9/25/24 00:43, Ilya Leoshkevich wrote:
> > On Tue, 2024-09-24 at 13:46 +0200, Richard Henderson wrote:
> > > On 9/23/24 18:12, Ilya Leoshkevich wrote:
> > > > Hi,
> > > > 
> > > > On reporting a breakpoint in a non-non-stop mode, GDB remotes
> > > > must
> > > > stop
> > > > all threads. Currently qemu-user doesn't do that, breaking the
> > > > debugging session for at least two reasons: concurrent access
> > > > to
> > > > the
> > > > GDB socket, and an assertion within GDB [1].
> > > > 
> > > > This series fixes this by importing pause_all_vcpus() from
> > > > qemu-
> > > > system.
> > > > This in turn requires introducing BQL and a few stubs to qemu-
> > > > user.
> > > 
> > > I would have expected you to reuse (some portion of)
> > > start_exclusive,
> > > which is already
> > > part of qemu-user.  Is there a reason you chose a solution which
> > > requires...
> > > 
> > > >     replay: Add replay_mutex_{lock,unlock}() stubs for qemu-
> > > > user
> > > >     qemu-timer: Provide qemu_clock_enable() stub for qemu-user
> > > >     cpu: Use BQL in qemu-user
> > > 
> > > all sorts of other infrastructure?
> > > 
> > > 
> > > r~
> > 
> > I don't think start_exclusive() would protect the gdb socket from
> > concurrent accesses (e.g., if two threads are simultaneously
> > stopped).
> 
> Of course it would, otherwise "exclusive" has no meaning.
> All other cpus are blocked in exclusive_idle().
> 
> Importantly, no cpus are blocked in syscalls, where the kernel can
> modify memory behind 
> gdbstub's back (e.g. read).  I think considering "in_syscall" to be
> "paused" a mistake.
> 
> 
> r~

How can we handle the long-running syscalls?
Just waiting sounds unsatisfying.
Sending a reserved host signal may alter the guest's behaviour if a
syscall like pause() is interrupted.
What do you think about SIGSTOP-ping the "in_syscall" threads?
A quick experiment shows that it should be completely invisible to the
guest - the following program continues to run after SIGSTOP/SIGCONT:

#include <sys/syscall.h>
#include <unistd.h>
int main(void) { syscall(__NR_pause); };
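
(To reproduce the experiment: run the program, send the process SIGSTOP
and then SIGCONT with kill(1), and check that pause() keeps blocking
instead of returning.)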


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-10-05 20:26       ` Ilya Leoshkevich
@ 2024-10-05 20:35         ` Ilya Leoshkevich
  2024-10-08 18:17           ` Richard Henderson
  0 siblings, 1 reply; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-10-05 20:35 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On Sat, 2024-10-05 at 22:26 +0200, Ilya Leoshkevich wrote:
> On Sat, 2024-10-05 at 12:51 -0700, Richard Henderson wrote:
> > On 9/25/24 00:43, Ilya Leoshkevich wrote:
> > > On Tue, 2024-09-24 at 13:46 +0200, Richard Henderson wrote:
> > > > On 9/23/24 18:12, Ilya Leoshkevich wrote:
> > > > > Hi,
> > > > > 
> > > > > On reporting a breakpoint in a non-non-stop mode, GDB remotes
> > > > > must
> > > > > stop
> > > > > all threads. Currently qemu-user doesn't do that, breaking
> > > > > the
> > > > > debugging session for at least two reasons: concurrent access
> > > > > to
> > > > > the
> > > > > GDB socket, and an assertion within GDB [1].
> > > > > 
> > > > > This series fixes this by importing pause_all_vcpus() from
> > > > > qemu-
> > > > > system.
> > > > > This in turn requires introducing BQL and a few stubs to
> > > > > qemu-
> > > > > user.
> > > > 
> > > > I would have expected you to reuse (some portion of)
> > > > start_exclusive,
> > > > which is already
> > > > part of qemu-user.  Is there a reason you chose a solution
> > > > which
> > > > requires...
> > > > 
> > > > >     replay: Add replay_mutex_{lock,unlock}() stubs for qemu-
> > > > > user
> > > > >     qemu-timer: Provide qemu_clock_enable() stub for qemu-
> > > > > user
> > > > >     cpu: Use BQL in qemu-user
> > > > 
> > > > all sorts of other infrastructure?
> > > > 
> > > > 
> > > > r~
> > > 
> > > I don't think start_exclusive() would protect the gdb socket from
> > > concurrent accesses (e.g., if two threads are simultaneously
> > > stopped).
> > 
> > Of course it would, otherwise "exclusive" has no meaning.
> > All other cpus are blocked in exclusive_idle().
> > 
> > Importantly, no cpus are blocked in syscalls, where the kernel can
> > modify memory behind 
> > gdbstub's back (e.g. read).  I think considering "in_syscall" to be
> > "paused" a mistake.
> > 
> > 
> > r~
> 
> How can we handle the long-running syscalls?
> Just waiting sounds unsatisfying.
> Sending a reserved host signal may alter the guest's behaviour if a
> syscall like pause() is interrupted.
> What do you think about SIGSTOP-ping the "in_syscall" threads?
> A quick experiment shows that it should be completely invisible to
> the
> guest - the following program continues to run after SIGSTOP/SIGCONT:
> 
> #include <sys/syscall.h>
> #include <unistd.h>
> int main(void) { syscall(__NR_pause); };

Hmm, no, that won't work: SIGSTOP would stop all threads.

So I wonder if reserving a host signal for interrupting "in_syscall"
threads would be an acceptable tradeoff?


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-10-05 20:35         ` Ilya Leoshkevich
@ 2024-10-08 18:17           ` Richard Henderson
  2024-10-09 22:01             ` Ilya Leoshkevich
  0 siblings, 1 reply; 37+ messages in thread
From: Richard Henderson @ 2024-10-08 18:17 UTC (permalink / raw)
  To: Ilya Leoshkevich, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On 10/5/24 13:35, Ilya Leoshkevich wrote:
>> How can we handle the long-running syscalls?
>> Just waiting sounds unsatisfying.
>> Sending a reserved host signal may alter the guest's behaviour if a
>> syscall like pause() is interrupted.
>> What do you think about SIGSTOP-ping the "in_syscall" threads?
>> A quick experiment shows that it should be completely invisible to
>> the
>> guest - the following program continues to run after SIGSTOP/SIGCONT:
>>
>> #include <sys/syscall.h>
>> #include <unistd.h>
>> int main(void) { syscall(__NR_pause); };
> 
> Hmm, no, that won't work: SIGSTOP would stop all threads.
> 
> So I wonder if reserving a host signal for interrupting "in_syscall"
> threads would be an acceptable tradeoff?

Could work, yes.  We already steal SIGRTMIN for guest abort (to distinguish from host 
abort), and remap guest __SIGRTMIN to host SIGRTMIN+1.  Grabbing SIGRTMIN+1 should work 
ok, modulo the existing problem of presenting the guest with an incomplete set of signals.

I've wondered from time to time about multiplexing signals in this space, but I think that 
runs afoul of having a consistent mapping for interprocess signaling.


r~


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-10-08 18:17           ` Richard Henderson
@ 2024-10-09 22:01             ` Ilya Leoshkevich
  0 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2024-10-09 22:01 UTC (permalink / raw)
  To: Richard Henderson, Paolo Bonzini, Alex Bennée,
	Philippe Mathieu-Daudé
  Cc: qemu-devel

On Tue, 2024-10-08 at 11:17 -0700, Richard Henderson wrote:
> On 10/5/24 13:35, Ilya Leoshkevich wrote:
> > > How can we handle the long-running syscalls?
> > > Just waiting sounds unsatisfying.
> > > Sending a reserved host signal may alter the guest's behaviour if
> > > a
> > > syscall like pause() is interrupted.
> > > What do you think about SIGSTOP-ping the "in_syscall" threads?
> > > A quick experiment shows that it should be completely invisible
> > > to
> > > the
> > > guest - the following program continues to run after
> > > SIGSTOP/SIGCONT:
> > > 
> > > #include <sys/syscall.h>
> > > #include <unistd.h>
> > > int main(void) { syscall(__NR_pause); };
> > 
> > Hmm, no, that won't work: SIGSTOP would stop all threads.
> > 
> > So I wonder if reserving a host signal for interrupting
> > "in_syscall"
> > threads would be an acceptable tradeoff?
> 
> Could work, yes.  We already steal SIGRTMIN for guest abort (to
> distinguish from host 
> abort), and remap guest __SIGRTMIN to host SIGRTMIN+1.  Grabbing
> SIGRTMIN+1 should work 
> ok, modulo the existing problem of presenting the guest with an
> incomplete set of signals.
> 
> I've wondered from time to time about multiplexing signals in this
> space, but I think that 
> runs afoul of having a consistent mapping for interprocess signaling.
> 
> 
> r~

I tried to think through how this would work in conjunction with
start_exclusive(), and there is one problem I don't see a good solution
for. Maybe you will have an idea.

The way I'm thinking of implementing this is as follows:

- Reserve the host's SIGRTMIN+1 and tweak host_signal_handler() to do
  nothing for this signal.

- In gdb_try_stop(), call start_exclusive(). After it returns, some
  threads will already be parked in exclusive_idle(). Other threads
  will still be on their way to getting parked, and they need to
  actually get there before gdb_try_stop() can proceed: for example,
  the ones that are executing handle_pending_signal() may still change
  memory and CPU state. IIUC start_exclusive() will not wait for them,
  because they are not "running". I think a global counter protected by
  qemu_cpu_list_lock and paired with a new condition variable should be
  enough for this (see the first sketch after this list).

- Threads executing long-running syscalls will need to be interrupted
  by SIGRTMIN+1. These syscalls will return -EINTR and will need to be
  manually restarted so as not to disturb poorly written guests. The
  restart should happen only if there are no pending guest signals
  (see the second sketch after this list).

- Here is a minor problem: how to identify threads which need to be
  signalled? in_syscall may not be enough. But maybe signalling all
  threads won't hurt too much. The parked ones won't notice anyway.

- But here is the major problem: what if we signal a thread just before
  it starts executing a long-running syscall? Such a thread will be
  stuck and we'll need to signal it again. But how do we determine that
  this needs to be done?

  An obvious solution is to signal all threads in a loop with a 0.1s
  delay until the counter reaches n_threads. But it's quite ugly.

  Ideally SIGRTMIN+1 should be blocked most of the time. Then we should
  identify all places where long-running syscalls may be invoked and
  unblock SIGRTMIN+1 atomically with executing them. But I'm not aware
  of such a mechanism (I have an extremely vague recollection that
  someone managed to abuse rseq for this, but we shouldn't be relying
  on rseq being available anyway).
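
To make the mechanical bits above a little more concrete, here are two
rough sketches. They use plain POSIX/pthread primitives and made-up
helper names rather than the real QEMU wrappers, so please treat them
as illustrations of the idea, not as proposed code.

First, the "wait until everybody has actually parked" counter:

#include <pthread.h>

static pthread_mutex_t cpu_list_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t all_parked_cond = PTHREAD_COND_INITIALIZER;
static int parked_threads;

void note_thread_parked(void)            /* called by each stopping thread */
{
    pthread_mutex_lock(&cpu_list_lock);
    parked_threads++;
    pthread_cond_broadcast(&all_parked_cond);
    pthread_mutex_unlock(&cpu_list_lock);
}

void wait_for_all_parked(int n_threads)  /* called from gdb_try_stop() */
{
    pthread_mutex_lock(&cpu_list_lock);
    while (parked_threads < n_threads) {
        pthread_cond_wait(&all_parked_cond, &cpu_list_lock);
    }
    pthread_mutex_unlock(&cpu_list_lock);
}

Second, transparently restarting a long-running syscall that was
interrupted by the reserved host signal:

#include <errno.h>
#include <stdbool.h>

/* Both helpers are assumed to exist elsewhere; the names are made up. */
bool guest_signal_pending(void);
long do_host_syscall(long nr, long a1, long a2, long a3);

long run_long_syscall(long nr, long a1, long a2, long a3)
{
    for (;;) {
        long ret = do_host_syscall(nr, a1, a2, a3);

        if (ret != -EINTR || guest_signal_pending()) {
            /* Completed (or failed) normally, or the guest has a signal
             * to handle: return the result as-is. */
            return ret;
        }
        /* Interrupted only by the reserved host signal: this is where
         * the thread would park itself for the debugger and, once
         * resumed, silently restart the syscall so the guest never
         * observes a spurious -EINTR. */
    }
}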


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
                   ` (19 preceding siblings ...)
  2024-09-24 11:46 ` Richard Henderson
@ 2025-01-08 15:56 ` Alex Bennée
  2025-01-08 16:20   ` Ilya Leoshkevich
  20 siblings, 1 reply; 37+ messages in thread
From: Alex Bennée @ 2025-01-08 15:56 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Richard Henderson, Paolo Bonzini, Philippe Mathieu-Daudé,
	qemu-devel

Ilya Leoshkevich <iii@linux.ibm.com> writes:

> Hi,
>
> On reporting a breakpoint in a non-non-stop mode, GDB remotes must stop
> all threads. Currently qemu-user doesn't do that, breaking the
> debugging session for at least two reasons: concurrent access to the
> GDB socket, and an assertion within GDB [1].
>
> This series fixes this by importing pause_all_vcpus() from qemu-system.
> This in turn requires introducing BQL and a few stubs to qemu-user.

Is there a conclusion to this design choice? I'd like to avoid bringing
in a bunch of system-mode infrastructure if the existing exclusive code
would work. For that I'll defer to the linux-user maintainer or Richard
who knows the code better than I do.

I could certainly harvest the early clean-up patches to keep the delta
low while the details are worked out. Is there going to be a v2?

>
> Best regards,
> Ilya
>
> [1] https://gitlab.com/qemu-project/qemu/-/issues/2465
>
> Ilya Leoshkevich (18):
>   gdbstub: Make gdb_get_char() static
>   gdbstub: Move phy_memory_mode to GDBSystemState
>   gdbstub: Move gdb_syscall_mode to GDBSyscallState
>   gdbstub: Factor out gdb_try_stop()
>   accel/tcg: Factor out cpu_exec_user()
>   qemu-thread: Introduce QEMU_MUTEX_INITIALIZER
>   qemu-thread: Introduce QEMU_COND_INITIALIZER
>   replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
>   qemu-timer: Provide qemu_clock_enable() stub for qemu-user
>   cpu: Use BQL in qemu-user
>   accel/tcg: Unify user implementations of qemu_cpu_kick()
>   cpu: Track CPUs executing syscalls
>   cpu: Implement cpu_thread_is_idle() for qemu-user
>   cpu: Introduce cpu_is_paused()
>   cpu: Set current_cpu early in qemu-user
>   cpu: Allow pausing and resuming CPUs in qemu-user
>   gdbstub: Pause all CPUs before sending stop replies
>   tests/tcg: Stress test thread breakpoints
>
>  accel/tcg/user-exec-stub.c                    |   4 -
>  accel/tcg/user-exec.c                         |  55 ++++++
>  bsd-user/aarch64/target_arch_cpu.h            |   6 +-
>  bsd-user/arm/target_arch_cpu.h                |   5 +-
>  bsd-user/freebsd/os-syscall.c                 |  10 +
>  bsd-user/i386/target_arch_cpu.h               |   5 +-
>  bsd-user/main.c                               |   8 +-
>  bsd-user/x86_64/target_arch_cpu.h             |   5 +-
>  cpu-common.c                                  | 179 ++++++++++++++++++
>  gdbstub/gdbstub.c                             |  17 +-
>  gdbstub/internals.h                           |   4 +-
>  gdbstub/syscalls.c                            |  20 +-
>  gdbstub/system.c                              |  18 +-
>  gdbstub/user.c                                |  28 ++-
>  include/exec/cpu-common.h                     |  15 ++
>  include/exec/replay-core.h                    |  13 ++
>  include/hw/core/cpu.h                         |   1 +
>  include/qemu/thread-posix.h                   |   8 +
>  include/qemu/thread-win32.h                   |   8 +
>  include/sysemu/cpus.h                         |   6 -
>  include/sysemu/replay.h                       |  13 --
>  linux-user/aarch64/cpu_loop.c                 |   5 +-
>  linux-user/alpha/cpu_loop.c                   |   5 +-
>  linux-user/arm/cpu_loop.c                     |   5 +-
>  linux-user/hexagon/cpu_loop.c                 |   5 +-
>  linux-user/hppa/cpu_loop.c                    |   5 +-
>  linux-user/i386/cpu_loop.c                    |   5 +-
>  linux-user/loongarch64/cpu_loop.c             |   5 +-
>  linux-user/m68k/cpu_loop.c                    |   5 +-
>  linux-user/main.c                             |   9 +-
>  linux-user/microblaze/cpu_loop.c              |   5 +-
>  linux-user/mips/cpu_loop.c                    |   5 +-
>  linux-user/openrisc/cpu_loop.c                |   5 +-
>  linux-user/ppc/cpu_loop.c                     |   5 +-
>  linux-user/riscv/cpu_loop.c                   |   5 +-
>  linux-user/s390x/cpu_loop.c                   |   5 +-
>  linux-user/sh4/cpu_loop.c                     |   5 +-
>  linux-user/sparc/cpu_loop.c                   |   5 +-
>  linux-user/syscall.c                          |  12 ++
>  linux-user/xtensa/cpu_loop.c                  |   5 +-
>  replay/stubs-system.c                         |   8 +
>  stubs/meson.build                             |   8 +
>  stubs/qemu-timer.c                            |   6 +
>  stubs/replay-mutex.c                          |  10 +
>  stubs/replay-tools.c                          |   8 -
>  system/cpus.c                                 | 172 +----------------
>  tests/tcg/multiarch/Makefile.target           |  13 +-
>  .../gdbstub/test-thread-breakpoint-stress.py  |  28 +++
>  .../tcg/multiarch/thread-breakpoint-stress.c  |  92 +++++++++
>  49 files changed, 552 insertions(+), 327 deletions(-)
>  create mode 100644 stubs/qemu-timer.c
>  create mode 100644 stubs/replay-mutex.c
>  create mode 100644 tests/tcg/multiarch/gdbstub/test-thread-breakpoint-stress.py
>  create mode 100644 tests/tcg/multiarch/thread-breakpoint-stress.c

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint
  2025-01-08 15:56 ` Alex Bennée
@ 2025-01-08 16:20   ` Ilya Leoshkevich
  0 siblings, 0 replies; 37+ messages in thread
From: Ilya Leoshkevich @ 2025-01-08 16:20 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Richard Henderson, Paolo Bonzini, Philippe Mathieu-Daudé,
	qemu-devel

On Wed, 2025-01-08 at 15:56 +0000, Alex Bennée wrote:
> Ilya Leoshkevich <iii@linux.ibm.com> writes:
> 
> > Hi,
> > 
> > On reporting a breakpoint in a non-non-stop mode, GDB remotes must
> > stop
> > all threads. Currently qemu-user doesn't do that, breaking the
> > debugging session for at least two reasons: concurrent access to
> > the
> > GDB socket, and an assertion within GDB [1].
> > 
> > This series fixes this by importing pause_all_vcpus() from qemu-
> > system.
> > This in turn requires introducing BQL and a few stubs to qemu-user.
> 
> Is there a conclusion to this design choice? I'd like to avoid
> bringing
> in a bunch of system-mode infrastructure if the existing exclusive
> code
> would work. For that I'll defer to the linux-user maintainer or
> Richard
> who knows the code better than I do.

I wanted to re-implement parking CPUs using a reserved host signal.
I've submitted the foundations for this in [1] and am currently
waiting for review.

[1]
https://lore.kernel.org/qemu-devel/20241216123412.77450-1-iii@linux.ibm.com/

> 
> I could certainly harvest the early clean-up patches to keep the
> delta
> low while the details are worked out. Is there going to be a v2?

It would be great if clean-up patches could be taken separately, since
IMHO they make sense on their own. I plan to send a v2 after [1] is
integrated.

> > Best regards,
> > Ilya
> > 
> > [1] https://gitlab.com/qemu-project/qemu/-/issues/2465
> > 
> > Ilya Leoshkevich (18):
> >   gdbstub: Make gdb_get_char() static
> >   gdbstub: Move phy_memory_mode to GDBSystemState
> >   gdbstub: Move gdb_syscall_mode to GDBSyscallState
> >   gdbstub: Factor out gdb_try_stop()
> >   accel/tcg: Factor out cpu_exec_user()
> >   qemu-thread: Introduce QEMU_MUTEX_INITIALIZER
> >   qemu-thread: Introduce QEMU_COND_INITIALIZER
> >   replay: Add replay_mutex_{lock,unlock}() stubs for qemu-user
> >   qemu-timer: Provide qemu_clock_enable() stub for qemu-user
> >   cpu: Use BQL in qemu-user
> >   accel/tcg: Unify user implementations of qemu_cpu_kick()
> >   cpu: Track CPUs executing syscalls
> >   cpu: Implement cpu_thread_is_idle() for qemu-user
> >   cpu: Introduce cpu_is_paused()
> >   cpu: Set current_cpu early in qemu-user
> >   cpu: Allow pausing and resuming CPUs in qemu-user
> >   gdbstub: Pause all CPUs before sending stop replies
> >   tests/tcg: Stress test thread breakpoints
> > 
> >  accel/tcg/user-exec-stub.c                    |   4 -
> >  accel/tcg/user-exec.c                         |  55 ++++++
> >  bsd-user/aarch64/target_arch_cpu.h            |   6 +-
> >  bsd-user/arm/target_arch_cpu.h                |   5 +-
> >  bsd-user/freebsd/os-syscall.c                 |  10 +
> >  bsd-user/i386/target_arch_cpu.h               |   5 +-
> >  bsd-user/main.c                               |   8 +-
> >  bsd-user/x86_64/target_arch_cpu.h             |   5 +-
> >  cpu-common.c                                  | 179
> > ++++++++++++++++++
> >  gdbstub/gdbstub.c                             |  17 +-
> >  gdbstub/internals.h                           |   4 +-
> >  gdbstub/syscalls.c                            |  20 +-
> >  gdbstub/system.c                              |  18 +-
> >  gdbstub/user.c                                |  28 ++-
> >  include/exec/cpu-common.h                     |  15 ++
> >  include/exec/replay-core.h                    |  13 ++
> >  include/hw/core/cpu.h                         |   1 +
> >  include/qemu/thread-posix.h                   |   8 +
> >  include/qemu/thread-win32.h                   |   8 +
> >  include/sysemu/cpus.h                         |   6 -
> >  include/sysemu/replay.h                       |  13 --
> >  linux-user/aarch64/cpu_loop.c                 |   5 +-
> >  linux-user/alpha/cpu_loop.c                   |   5 +-
> >  linux-user/arm/cpu_loop.c                     |   5 +-
> >  linux-user/hexagon/cpu_loop.c                 |   5 +-
> >  linux-user/hppa/cpu_loop.c                    |   5 +-
> >  linux-user/i386/cpu_loop.c                    |   5 +-
> >  linux-user/loongarch64/cpu_loop.c             |   5 +-
> >  linux-user/m68k/cpu_loop.c                    |   5 +-
> >  linux-user/main.c                             |   9 +-
> >  linux-user/microblaze/cpu_loop.c              |   5 +-
> >  linux-user/mips/cpu_loop.c                    |   5 +-
> >  linux-user/openrisc/cpu_loop.c                |   5 +-
> >  linux-user/ppc/cpu_loop.c                     |   5 +-
> >  linux-user/riscv/cpu_loop.c                   |   5 +-
> >  linux-user/s390x/cpu_loop.c                   |   5 +-
> >  linux-user/sh4/cpu_loop.c                     |   5 +-
> >  linux-user/sparc/cpu_loop.c                   |   5 +-
> >  linux-user/syscall.c                          |  12 ++
> >  linux-user/xtensa/cpu_loop.c                  |   5 +-
> >  replay/stubs-system.c                         |   8 +
> >  stubs/meson.build                             |   8 +
> >  stubs/qemu-timer.c                            |   6 +
> >  stubs/replay-mutex.c                          |  10 +
> >  stubs/replay-tools.c                          |   8 -
> >  system/cpus.c                                 | 172 +-------------
> > ---
> >  tests/tcg/multiarch/Makefile.target           |  13 +-
> >  .../gdbstub/test-thread-breakpoint-stress.py  |  28 +++
> >  .../tcg/multiarch/thread-breakpoint-stress.c  |  92 +++++++++
> >  49 files changed, 552 insertions(+), 327 deletions(-)
> >  create mode 100644 stubs/qemu-timer.c
> >  create mode 100644 stubs/replay-mutex.c
> >  create mode 100644 tests/tcg/multiarch/gdbstub/test-thread-
> > breakpoint-stress.py
> >  create mode 100644 tests/tcg/multiarch/thread-breakpoint-stress.c
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2025-01-08 16:20 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2024-09-23 16:12 [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
2024-09-23 16:12 ` [PATCH 01/18] gdbstub: Make gdb_get_char() static Ilya Leoshkevich
2024-10-05 19:20   ` Richard Henderson
2024-09-23 16:12 ` [PATCH 02/18] gdbstub: Move phy_memory_mode to GDBSystemState Ilya Leoshkevich
2024-10-05 19:21   ` Richard Henderson
2024-09-23 16:12 ` [PATCH 03/18] gdbstub: Move gdb_syscall_mode to GDBSyscallState Ilya Leoshkevich
2024-10-05 19:22   ` Richard Henderson
2024-09-23 16:12 ` [PATCH 04/18] gdbstub: Factor out gdb_try_stop() Ilya Leoshkevich
2024-10-05 19:26   ` Richard Henderson
2024-09-23 16:13 ` [PATCH 05/18] accel/tcg: Factor out cpu_exec_user() Ilya Leoshkevich
2024-10-05 19:29   ` Richard Henderson
2024-09-23 16:13 ` [PATCH 06/18] qemu-thread: Introduce QEMU_MUTEX_INITIALIZER Ilya Leoshkevich
2024-10-05 19:30   ` Richard Henderson
2024-09-23 16:13 ` [PATCH 07/18] qemu-thread: Introduce QEMU_COND_INITIALIZER Ilya Leoshkevich
2024-10-05 19:30   ` Richard Henderson
2024-09-23 16:13 ` [PATCH 08/18] replay: Add replay_mutex_{lock, unlock}() stubs for qemu-user Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 09/18] qemu-timer: Provide qemu_clock_enable() stub " Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 10/18] cpu: Use BQL in qemu-user Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 11/18] accel/tcg: Unify user implementations of qemu_cpu_kick() Ilya Leoshkevich
2024-10-05 19:31   ` Richard Henderson
2024-09-23 16:13 ` [PATCH 12/18] cpu: Track CPUs executing syscalls Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 13/18] cpu: Implement cpu_thread_is_idle() for qemu-user Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 14/18] cpu: Introduce cpu_is_paused() Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 15/18] cpu: Set current_cpu early in qemu-user Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 16/18] cpu: Allow pausing and resuming CPUs " Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 17/18] gdbstub: Pause all CPUs before sending stop replies Ilya Leoshkevich
2024-09-23 16:13 ` [PATCH 18/18] tests/tcg: Stress test thread breakpoints Ilya Leoshkevich
2024-09-23 16:37 ` [PATCH 00/18] Stop all qemu-cpu threads on a breakpoint Ilya Leoshkevich
2024-09-24 11:46 ` Richard Henderson
2024-09-25  7:43   ` Ilya Leoshkevich
2024-10-05 19:51     ` Richard Henderson
2024-10-05 20:26       ` Ilya Leoshkevich
2024-10-05 20:35         ` Ilya Leoshkevich
2024-10-08 18:17           ` Richard Henderson
2024-10-09 22:01             ` Ilya Leoshkevich
2025-01-08 15:56 ` Alex Bennée
2025-01-08 16:20   ` Ilya Leoshkevich

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).