* [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Thomas Huth @ 2025-11-28 13:39 UTC (permalink / raw)
  To: qemu-devel, Ilya Leoshkevich; +Cc: qemu-s390x

From: Thomas Huth <thuth@redhat.com>

We just have to make sure that we can set GDB's endianness to big
endian; then we can also run this test on s390x.

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 Marked as RFC since it depends on the fix for this bug (so it cannot
 be merged yet):
 https://lore.kernel.org/qemu-devel/a0accce9-6042-4a7b-a7c7-218212818891@redhat.com/

 tests/functional/reverse_debugging.py        |  4 +++-
 tests/functional/s390x/meson.build           |  1 +
 tests/functional/s390x/test_reverse_debug.py | 21 ++++++++++++++++++++
 3 files changed, 25 insertions(+), 1 deletion(-)
 create mode 100755 tests/functional/s390x/test_reverse_debug.py

diff --git a/tests/functional/reverse_debugging.py b/tests/functional/reverse_debugging.py
index 86fca8d81f1..8b9507674a0 100644
--- a/tests/functional/reverse_debugging.py
+++ b/tests/functional/reverse_debugging.py
@@ -64,7 +64,7 @@ def vm_get_icount(vm):
 
     @skipIfMissingImports("pygdbmi") # Required by GDB class
     @skipIfMissingEnv("QEMU_TEST_GDB")
-    def reverse_debugging(self, gdb_arch, shift=7, args=None):
+    def reverse_debugging(self, gdb_arch, shift=7, args=None, big_endian=False):
         from qemu_test import GDB
 
         # create qcow2 for snapshots
@@ -99,6 +99,8 @@ def reverse_debugging(self, gdb_arch, shift=7, args=None):
             gdb_cmd = os.getenv('QEMU_TEST_GDB')
             gdb = GDB(gdb_cmd)
             try:
+                if big_endian:
+                    gdb.cli("set endian big")
                 self.reverse_debugging_run(gdb, vm, port, gdb_arch, last_icount)
             finally:
                 self.log.info('exiting gdb and qemu')
diff --git a/tests/functional/s390x/meson.build b/tests/functional/s390x/meson.build
index 70cd36e2913..0f03e1c9db8 100644
--- a/tests/functional/s390x/meson.build
+++ b/tests/functional/s390x/meson.build
@@ -6,6 +6,7 @@ test_s390x_timeouts = {
 
 tests_s390x_system_quick = [
   'vmstate',
+  'reverse_debug',
 ]
 
 tests_s390x_system_thorough = [
diff --git a/tests/functional/s390x/test_reverse_debug.py b/tests/functional/s390x/test_reverse_debug.py
new file mode 100755
index 00000000000..0767dc91678
--- /dev/null
+++ b/tests/functional/s390x/test_reverse_debug.py
@@ -0,0 +1,21 @@
+#!/usr/bin/env python3
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+'''
+Reverse debugging test for s390x
+'''
+
+from reverse_debugging import ReverseDebugging
+
+
+class ReverseDebuggingS390x(ReverseDebugging):
+
+    def test_revdbg(self):
+        self.set_machine('s390-ccw-virtio')
+        self.reverse_debugging(gdb_arch='s390:64-bit', shift=6,
+                               big_endian=True, args=('-no-shutdown',))
+
+
+if __name__ == '__main__':
+    ReverseDebugging.main()
-- 
2.51.1




* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-11-28 17:25 UTC (permalink / raw)
  To: Thomas Huth, qemu-devel; +Cc: qemu-s390x

On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
> From: Thomas Huth <thuth@redhat.com>
> 
> We just have to make sure that we can set GDB's endianness to big
> endian; then we can also run this test on s390x.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  Marked as RFC since it depends on the fix for this bug (so it cannot
>  be merged yet):
>  
> https://lore.kernel.org/qemu-devel/a0accce9-6042-4a7b-a7c7-218212818891@redhat.com/
> 
>  tests/functional/reverse_debugging.py        |  4 +++-
>  tests/functional/s390x/meson.build           |  1 +
>  tests/functional/s390x/test_reverse_debug.py | 21 ++++++++++++++++++++
>  3 files changed, 25 insertions(+), 1 deletion(-)
>  create mode 100755 tests/functional/s390x/test_reverse_debug.py

Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>


I have a simple fix which helps with your original report, but not
with this test. I'm still investigating.

--- a/target/s390x/machine.c
+++ b/target/s390x/machine.c
@@ -52,6 +52,14 @@ static int cpu_pre_save(void *opaque)
         kvm_s390_vcpu_interrupt_pre_save(cpu);
     }
 
+    if (tcg_enabled()) {
+        /*
+         * Ensure symmetry with cpu_post_load() with respect to
+         * CHECKPOINT_CLOCK_VIRTUAL.
+         */
+        tcg_s390_tod_updated(CPU(cpu), RUN_ON_CPU_NULL);
+    }
+
     return 0;
 }



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-11-29 21:33 UTC (permalink / raw)
  To: Thomas Huth, qemu-devel; +Cc: qemu-s390x

On Fri, 2025-11-28 at 18:25 +0100, Ilya Leoshkevich wrote:
> On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
> > [...]
> 
> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
> 
> 
> I have a simple fix which helps with your original report, but not
> with this test. I'm still investigating.
> 
> --- a/target/s390x/machine.c
> +++ b/target/s390x/machine.c
> @@ -52,6 +52,14 @@ static int cpu_pre_save(void *opaque)
>          kvm_s390_vcpu_interrupt_pre_save(cpu);
>      }
>  
> +    if (tcg_enabled()) {
> +        /*
> +         * Ensure symmetry with cpu_post_load() with respect to
> +         * CHECKPOINT_CLOCK_VIRTUAL.
> +         */
> +        tcg_s390_tod_updated(CPU(cpu), RUN_ON_CPU_NULL);
> +    }
> +
>      return 0;
>  }

Interestingly enough, with this patch the test still fails, but only
under load, e.g., if I run make check -j"$(nproc)", or if I run your
test in isolation but with a stress-ng cpu stressor in the background.
The culprit appears to be:

s390_tod_load()
  qemu_s390_tod_set()
    async_run_on_cpu(tcg_s390_tod_updated)

Depending on the system load, this additional tcg_s390_tod_updated()
may or may not end up being called during handle_backward(). If it
does, we get an infinite loop again, because now we need two
checkpoints.

I have a feeling that this code may be violating some record-replay
requirement, but I can't quite put my finger on it. For example,
async_run_on_cpu() does not sound like something deterministic, but
then again it just queues work for rr_cpu_thread_fn(), which is
supposed to be deterministic.
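
For reference, the qemu_s390_tod_set() step in the chain above is what
queues that work. A rough sketch of its shape (reconstructed from the
call chain in this message rather than copied from hw/s390x; the field
name is an assumption):

  /* Sketch only: setting the TOD under TCG does not update the CKC
   * timer directly, it queues tcg_s390_tod_updated() to run later on
   * each vCPU thread. */
  static void qemu_s390_tod_set_sketch(S390TODState *td, const S390TOD *tod,
                                       Error **errp)
  {
      CPUState *cpu;

      td->base = *tod;    /* assumed name for the TOD base field */

      CPU_FOREACH(cpu) {
          async_run_on_cpu(cpu, tcg_s390_tod_updated, RUN_ON_CPU_NULL);
      }
  }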



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Alex Bennée @ 2025-11-30 16:47 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: Thomas Huth, qemu-devel, qemu-s390x

Ilya Leoshkevich <iii@linux.ibm.com> writes:

> On Fri, 2025-11-28 at 18:25 +0100, Ilya Leoshkevich wrote:
>> On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
>> > [...]
>> 
>> Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
>> 
>> 
>> I have a simple fix which helps with your original report, but not
>> with this test. I'm still investigating.
>> 
>> --- a/target/s390x/machine.c
>> +++ b/target/s390x/machine.c
>> @@ -52,6 +52,14 @@ static int cpu_pre_save(void *opaque)
>>          kvm_s390_vcpu_interrupt_pre_save(cpu);
>>      }
>>  
>> +    if (tcg_enabled()) {
>> +        /*
>> +         * Ensure symmetry with cpu_post_load() with respect to
>> +         * CHECKPOINT_CLOCK_VIRTUAL.
>> +         */
>> +        tcg_s390_tod_updated(CPU(cpu), RUN_ON_CPU_NULL);
>> +    }
>> +
>>      return 0;
>>  }
>
> Interestingly enough, with this patch the test still fails, but only
> under load, e.g., if I run make check -j"$(nproc)", or if I run your
> test in isolation but with a stress-ng cpu stressor in the background.
> The culprit appears to be:
>
> s390_tod_load()
>   qemu_s390_tod_set()
>     async_run_on_cpu(tcg_s390_tod_updated)
>
> Depending on the system load, this additional tcg_s390_tod_updated()
> may or may not end up being called during handle_backward(). If it
> does, we get an infinite loop again, because now we need two
> checkpoints.
>
> I have a feeling that this code may be violating some record-replay
> requirement, but I can't quite put my finger on it. For example,
> async_run_on_cpu() does not sound like something deterministic, but
> then again it just queues work for rr_cpu_thread_fn(), which is
> supposed to be deterministic.

If the async_run_on_cpu is called from the vcpu thread in response to a
deterministic event at a known point in time, it should be fine. If it
came from another thread that is not synchronised via replay_lock, then
things will go wrong.

But this is a VM load save helper?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-11-30 18:32 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Thomas Huth, qemu-devel, qemu-s390x

On Sun, 2025-11-30 at 16:47 +0000, Alex Bennée wrote:
> Ilya Leoshkevich <iii@linux.ibm.com> writes:
> 
> > On Fri, 2025-11-28 at 18:25 +0100, Ilya Leoshkevich wrote:
> > > On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
> > > > [...]
> > > 
> > > Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > 
> > > 
> > > I have a simple fix which helps with your original report, but
> > > not
> > > with this test. I'm still investigating.
> > > 
> > > [...]
> > 
> > [...]
> > 
> > I have a feeling that this code may be violating some record-replay
> > requirement, but I can't quite put my finger on it. For example,
> > async_run_on_cpu() does not sound like something deterministic, but
> > then again it just queues work for rr_cpu_thread_fn(), which is
> > supposed to be deterministic.
> 
> If the async_run_on_cpu is called from the vcpu thread in response to
> a deterministic event at a known point in time, it should be fine. If
> it came from another thread that is not synchronised via replay_lock,
> then things will go wrong.
> 
> But this is a VM load save helper?

Yes, and it's called from the main thread, either during
initialization or as a reaction to GDB packets.

Here is the call stack:

  qemu_loadvm_state()
    qemu_loadvm_state_main()
      qemu_loadvm_section_start_full()
        vmstate_load()
          vmstate_load_state()
            cpu_post_load()
              tcg_s390_tod_updated()
                update_ckc_timer()
                  timer_mod()
          s390_tod_load()
            qemu_s390_tod_set()  # via tdc->set()
              async_run_on_cpu(tcg_s390_tod_updated)

So you think we may have to take the replay lock around
load_snapshot()? So that all async_run_on_cpu() calls it makes end up
being handled by the vCPU thread deterministically.
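
As a minimal sketch of that idea (assuming the caller does not already
hold the lock, and using load_snapshot()'s signature from
migration/snapshot.h):

  /* Sketch: hold the replay mutex across the snapshot load so that the
   * async_run_on_cpu() work it queues is consumed by the vCPU thread
   * at a deterministic point. */
  static bool load_snapshot_with_replay_lock(const char *name, Error **errp)
  {
      bool ok;

      replay_mutex_lock();
      ok = load_snapshot(name, NULL, false, NULL, errp);
      replay_mutex_unlock();

      return ok;
  }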



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-11-30 19:03 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Thomas Huth, qemu-devel, qemu-s390x

On Sun, 2025-11-30 at 19:32 +0100, Ilya Leoshkevich wrote:
> On Sun, 2025-11-30 at 16:47 +0000, Alex Bennée wrote:
> > Ilya Leoshkevich <iii@linux.ibm.com> writes:
> > 
> > > On Fri, 2025-11-28 at 18:25 +0100, Ilya Leoshkevich wrote:
> > > > On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
> > > > > [...]
> > > > 
> > > > Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > 
> > > > 
> > > > I have a simple fix which helps with your original report, but
> > > > not
> > > > with this test. I'm still investigating.
> > > > 
> > > > [...]
> > > 
> > > [...]
> > 
> > If the async_run_on_cpu is called from the vcpu thread in response
> > to a deterministic event at a known point in time, it should be
> > fine. If it came from another thread that is not synchronised via
> > replay_lock, then things will go wrong.
> > 
> > But this is a VM load save helper?
> 
> Yes, and it's called from the main thread, either during
> initialization or as a reaction to GDB packets.
> 
> Here is the call stack:
> 
>   qemu_loadvm_state()
>     qemu_loadvm_state_main()
>       qemu_loadvm_section_start_full()
>         vmstate_load()
>           vmstate_load_state()
>             cpu_post_load()
>               tcg_s390_tod_updated()
>                 update_ckc_timer()
>                   timer_mod()
>           s390_tod_load()
>             qemu_s390_tod_set()  # via tdc->set()
>               async_run_on_cpu(tcg_s390_tod_updated)
> 
> So you think we may have to take the replay lock around
> load_snapshot()? So that all async_run_on_cpu() calls it makes end up
> being handled by the vCPU thread deterministically.

To answer my own question: apparently this is already the case; at
least, the following does not cause any fallout:

diff --git a/include/system/replay.h b/include/system/replay.h
index 6859df09580..e1cd9b2f900 100644
--- a/include/system/replay.h
+++ b/include/system/replay.h
@@ -60,6 +60,7 @@ extern char *replay_snapshot;
 
 void replay_mutex_lock(void);
 void replay_mutex_unlock(void);
+bool replay_mutex_locked(void);
 
 static inline void replay_unlock_guard(void *unused)
 {
diff --git a/migration/savevm.c b/migration/savevm.c
index 62cc2ce25cb..ba945d3a1ea 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -3199,6 +3199,8 @@ bool save_snapshot(const char *name, bool overwrite, const char *vmstate,
     uint64_t vm_state_size;
     g_autoptr(GDateTime) now = g_date_time_new_now_local();
 
+    g_assert(replay_mutex_locked());
+
     GLOBAL_STATE_CODE();
 
     if (!migrate_can_snapshot(errp)) {
@@ -3390,6 +3392,8 @@ bool load_snapshot(const char *name, const char *vmstate,
     int ret;
     MigrationIncomingState *mis = migration_incoming_get_current();
 
+    g_assert(replay_mutex_locked());
+
     if (!migrate_can_snapshot(errp)) {
         return false;
     }
diff --git a/replay/replay-internal.h b/replay/replay-internal.h
index 75249b76936..30825a0753e 100644
--- a/replay/replay-internal.h
+++ b/replay/replay-internal.h
@@ -124,7 +124,6 @@ void replay_get_array_alloc(uint8_t **buf, size_t *size);
  * synchronisation between vCPU and main-loop threads. */
 
 void replay_mutex_init(void);
-bool replay_mutex_locked(void);
 
 /*! Checks error status of the file. */
 void replay_check_error(void);




* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-11-30 22:59 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Thomas Huth, qemu-devel, qemu-s390x

On Sun, 2025-11-30 at 20:03 +0100, Ilya Leoshkevich wrote:
> On Sun, 2025-11-30 at 19:32 +0100, Ilya Leoshkevich wrote:
> > On Sun, 2025-11-30 at 16:47 +0000, Alex Bennée wrote:
> > > Ilya Leoshkevich <iii@linux.ibm.com> writes:
> > > 
> > > > On Fri, 2025-11-28 at 18:25 +0100, Ilya Leoshkevich wrote:
> > > > > On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
> > > > > > [...]
> > > > > 
> > > > > Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
> > > > > 
> > > > > 
> > > > > I have a simple fix which helps with your original report,
> > > > > but
> > > > > not
> > > > > with this test. I'm still investigating.
> > > > > 
> > > > > [...]
> > > > 
> > > > [...]
> > > 
> > > If the async_run_on_cpu is called from the vcpu thread in
> > > response to a deterministic event at a known point in time, it
> > > should be fine. If it came from another thread that is not
> > > synchronised via replay_lock, then things will go wrong.
> > > 
> > > But this is a VM load save helper?
> > 
> > Yes, and it's called from the main thread, either during
> > initialization or as a reaction to GDB packets.
> > 
> > Here is the call stack:
> > 
> >   qemu_loadvm_state()
> >     qemu_loadvm_state_main()
> >       qemu_loadvm_section_start_full()
> >         vmstate_load()
> >           vmstate_load_state()
> >             cpu_post_load()
> >               tcg_s390_tod_updated()
> >                 update_ckc_timer()
> >                   timer_mod()
> >           s390_tod_load()
> >             qemu_s390_tod_set()  # via tdc->set()
> >               async_run_on_cpu(tcg_s390_tod_updated)
> > 
> > So you think we may have to take the replay lock around
> > load_snapshot()? So that all async_run_on_cpu() calls it makes end
> > up being handled by the vCPU thread deterministically.
> 
> To answer my own question: apparently this is already the case; at
> least, the following does not cause any fallout:
> 
> [...]

I believe now I at least understand what the race is about:

- cpu_post_load() fires the TOD timer immediately.

- s390_tod_load() schedules work for firing the TOD timer.

- If rr loop sees work and then timer, we get one timer callback.

- If rr loop sees timer and then work, we get two timer callbacks.

- Record and replay may diverge due to this race.

- In this particular case divergence makes rr loop spin: it sees that
  TOD timer has expired, but cannot invoke its callback, because there
  is no recorded CHECKPOINT_CLOCK_VIRTUAL.

- The order in which rr loop sees work and timer depends on whether
  and when rr loop wakes up during load_snapshot().

- rr loop may wake up after the main thread kicks the CPU and drops
  the BQL, which may happen if it calls, e.g., qemu_cond_wait_bql().
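
To make the checkpoint accounting in the bullets above concrete, here
is a standalone toy model (plain C, not QEMU code; the callback stands
in for the CKC timer callback, and every invocation during replay must
consume one recorded CHECKPOINT_CLOCK_VIRTUAL):

  #include <stdio.h>

  static int recorded_checkpoints;    /* what the record phase logged */

  static void timer_callback(const char *why)
  {
      if (recorded_checkpoints == 0) {
          printf("  %s: no checkpoint left in the log -> rr loop spins\n",
                 why);
          return;
      }
      recorded_checkpoints--;
      printf("  %s: consumed a checkpoint\n", why);
  }

  int main(void)
  {
      puts("work seen first (one timer callback):");
      recorded_checkpoints = 1;
      timer_callback("queued tcg_s390_tod_updated() work");

      puts("timer seen first (two timer callbacks):");
      recorded_checkpoints = 1;
      timer_callback("expired CKC timer");
      timer_callback("queued tcg_s390_tod_updated() work");
      return 0;
  }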



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Alex Bennée @ 2025-12-01 10:36 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: Thomas Huth, qemu-devel, qemu-s390x

Ilya Leoshkevich <iii@linux.ibm.com> writes:

> On Sun, 2025-11-30 at 20:03 +0100, Ilya Leoshkevich wrote:
>> [...]
>
> I believe now I at least understand what the race is about:
>
> - cpu_post_load() fires the TOD timer immediately.
>
> - s390_tod_load() schedules work for firing the TOD timer.

Is this a duplicate of work then? Could we just rely on one or the
other? If you drop the cpu_post_load() tweak then the vmstate load
helper should still ensure everything works right?

> - If rr loop sees work and then timer, we get one timer callback.
>
> - If rr loop sees timer and then work, we get two timer callbacks.

If the timer is armed, we should expect to execute at least a few
instructions before triggering the timer, unless it was armed already
expired.
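
To spell out "armed expired" against the timer API, a minimal sketch
("ckc_timer" is a stand-in name, not the actual field):

  static void arming_examples(QEMUTimer *ckc_timer)
  {
      int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);

      /* armed expired: the deadline is 0 right away, so the callback
       * can fire before the guest executes a single instruction */
      timer_mod(ckc_timer, now);

      /* armed in the future: some guest execution happens first */
      timer_mod(ckc_timer, now + 1000);
  }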

> - Record and replay may diverge due to this race.
>
> - In this particular case divergence makes rr loop spin: it sees that
>   TOD timer has expired, but cannot invoke its callback, because there
>   is no recorded CHECKPOINT_CLOCK_VIRTUAL.
>
> - The order in which rr loop sees work and timer depends on whether
>   and when rr loop wakes up during load_snapshot().
>
> - rr loop may wake up after the main thread kicks the CPU and drops
>   the BQL, which may happen if it calls, e.g., qemu_cond_wait_bql().

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-12-01 11:17 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Thomas Huth, qemu-devel, qemu-s390x

On Mon, 2025-12-01 at 10:36 +0000, Alex Bennée wrote:
> Ilya Leoshkevich <iii@linux.ibm.com> writes:
> 
> > [...]
> > 
> > I believe now I at least understand what the race is about:
> > 
> > - cpu_post_load() fires the TOD timer immediately.
> > 
> > - s390_tod_load() schedules work for firing the TOD timer.
> 
> Is this a duplicate of work then? Could we just rely on one or the
> other? If you drop the cpu_post_load() tweak then the vmstate load
> helper should still ensure everything works right?

Getting rid of it fixes the problem and makes sense anyway.

> > - If rr loop sees work and then timer, we get one timer callback.
> > 
> > - If rr loop sees timer and then work, we get two timer callbacks.
> 
> If the timer is armed, we should expect to execute at least a few
> instructions before triggering the timer, unless it was armed already
> expired.

Yes, it is armed expired.


Isn't it a deficiency in record-replay that work and timers are not
ordered relative to each other? Can't it bite us somewhere else?

> > - Record and replay may diverge due to this race.
> > 
> > - In this particular case divergence makes rr loop spin: it sees
> >   that TOD timer has expired, but cannot invoke its callback,
> >   because there is no recorded CHECKPOINT_CLOCK_VIRTUAL.
> > 
> > - The order in which rr loop sees work and timer depends on whether
> >   and when rr loop wakes up during load_snapshot().
> > 
> > - rr loop may wake up after the main thread kicks the CPU and drops
> >   the BQL, which may happen if it calls, e.g.,
> >   qemu_cond_wait_bql().



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Ilya Leoshkevich @ 2025-12-01 11:57 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Thomas Huth, qemu-devel, qemu-s390x

On Mon, 2025-12-01 at 12:17 +0100, Ilya Leoshkevich wrote:
> On Mon, 2025-12-01 at 10:36 +0000, Alex Bennée wrote:
> > Ilya Leoshkevich <iii@linux.ibm.com> writes:
> > 

[...]

> > > I believe now I at least understand what the race is about:
> > > 
> > > - cpu_post_load() fires the TOD timer immediately.
> > > 
> > > - s390_tod_load() schedules work for firing the TOD timer.
> > 
> > Is this a duplicate of work then? Could we just rely on one or the
> > other? If you drop the cpu_post_load() tweak then the vmstate load
> > helper should still ensure everything works right?
> 
> Getting rid of it fixes the problem and makes sense anyway.

Hmm, on the other hand, this appears to have been done this way
deliberately:


commit 7c12f710bad60dc7e509da4e80c77e952ef0490c
Author: David Hildenbrand <david@kernel.org>
Date:   Wed Jun 27 15:44:09 2018 +0200

    s390x/tcg: rearm the CKC timer during migration
    
    If the CPU data is migrated after the TOD clock, the CKC timer of a
    CPU is not rearmed. Let's rearm it when loading the CPU state.
    
    Introduce tcg-stub.c just like kvm-stub.c for tcg specific stubs.


I guess introducing a dependency on migration order is indeed not great
for maintainability.

> > > - If rr loop sees work and then timer, we get one timer callback.
> > > 
> > > - If rr loop sees timer and then work, we get two timer callbacks.
> > 
> > If the timer is armed, we should expect to execute at least a few
> > instructions before triggering the timer, unless it was armed already
> > expired.
> 
> Yes, it is armed expired.
> 
> 
> Isn't it a deficiency in record-replay that work and timers are not
> ordered relative to each other? Can't it bite us somewhere else?

[...]
> 



* Re: [RFC PATCH] tests/functional/s390x: Add reverse debugging test for s390x
From: Alex Bennée @ 2025-12-01 12:43 UTC (permalink / raw)
  To: Ilya Leoshkevich; +Cc: Thomas Huth, qemu-devel, qemu-s390x

Ilya Leoshkevich <iii@linux.ibm.com> writes:

> On Mon, 2025-12-01 at 10:36 +0000, Alex Bennée wrote:
>> Ilya Leoshkevich <iii@linux.ibm.com> writes:
>> 
>> > On Sun, 2025-11-30 at 20:03 +0100, Ilya Leoshkevich wrote:
>> > > On Sun, 2025-11-30 at 19:32 +0100, Ilya Leoshkevich wrote:
>> > > > On Sun, 2025-11-30 at 16:47 +0000, Alex Bennée wrote:
>> > > > > Ilya Leoshkevich <iii@linux.ibm.com> writes:
>> > > > > 
>> > > > > > On Fri, 2025-11-28 at 18:25 +0100, Ilya Leoshkevich wrote:
>> > > > > > > On Fri, 2025-11-28 at 14:39 +0100, Thomas Huth wrote:
>> > > > > > > > From: Thomas Huth <thuth@redhat.com>
<snip>
>> > > > > If the async_run_on_cpu is called from the vcpu thread in
>> > > > > response to a deterministic event at a known point in time,
>> > > > > it should be fine. If it came from another thread that is
>> > > > > not synchronised via replay_lock, then things will go wrong.
>> > > > > 
>> > > > > But this is a VM load save helper?
>> > > > 
>> > > > Yes, and it's called from the main thread, either during
>> > > > initialization or as a reaction to GDB packets.
>> > > > 
>> > > > Here is the call stack:
>> > > > 
>> > > >   qemu_loadvm_state()
>> > > >     qemu_loadvm_state_main()
>> > > >       qemu_loadvm_section_start_full()
>> > > >         vmstate_load()
>> > > >           vmstate_load_state()
>> > > >             cpu_post_load()
>> > > >               tcg_s390_tod_updated()
>> > > >                 update_ckc_timer()
>> > > >                   timer_mod()
>> > > >           s390_tod_load()
>> > > >             qemu_s390_tod_set()  # via tdc->set()
>> > > >               async_run_on_cpu(tcg_s390_tod_updated)
>> > > > 
>> > > > So you think we may have to take the replay lock around
>> > > > load_snapshot()? So that all async_run_on_cpu() calls it makes
>> > > > end
>> > > > up
>> > > > being handled by the vCPU thread deterministically.
<snip>
>> > 
>> > I believe now I at least understand what the race is about:
>> > 
>> > - cpu_post_load() fires the TOD timer immediately.
>> > 
>> > - s390_tod_load() schedules work for firing the TOD timer.
>> 
>> Is this a duplicate of work then? Could we just rely on one or the
>> other? If you drop the cpu_post_load() tweak then the vmstate load
>> helper should still ensure everything works right?
>
> Getting rid of it fixes the problem and makes sense anyway.
>
>> > - If rr loop sees work and then timer, we get one timer callback.
>> > 
>> > - If rr loop sees timer and then work, we get two timer callbacks.
>> 
>> If the timer is armed, we should expect to execute at least a few
>> instructions before triggering the timer, unless it was armed already
>> expired.
>
> Yes, it is armed expired.
>
> Isn't it a deficiency in record-replay that work and timers are not
> ordered relative to each other? Can't it bite us somewhere else?

They normally should be, although I notice:

  void icount_handle_deadline(void)
  {
      assert(qemu_in_vcpu_thread());
      int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
                                                    QEMU_TIMER_ATTR_ALL);

      /*
       * Instructions, interrupts, and exceptions are processed in cpu-exec.
       * Don't interrupt cpu thread, when these events are waiting
       * (i.e., there is no checkpoint)
       */
      if (deadline == 0) {
          icount_notify_aio_contexts();
      }
  }

should run the pre-expired timers before we exec the current TB. But the
comment suggests it is not expecting any checkpoint-related activity. I
wonder if we can assert that is the case to catch future issues.
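
Something along these lines, perhaps (only a sketch; it assumes a
helper like the existing replay_has_checkpoint() is usable at this
point, which would need checking):

  void icount_handle_deadline(void)
  {
      assert(qemu_in_vcpu_thread());
      int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
                                                    QEMU_TIMER_ATTR_ALL);

      if (deadline == 0) {
          /* during replay, a zero deadline should only be reachable
           * when the log has a matching checkpoint to consume */
          g_assert(replay_mode != REPLAY_MODE_PLAY ||
                   replay_has_checkpoint());
          icount_notify_aio_contexts();
      }
  }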

>> > - Record and replay may diverge due to this race.
>> > 
>> > - In this particular case divergence makes rr loop spin: it sees
>> > that
>> >   TOD timer has expired, but cannot invoke its callback, because
>> > there
>> >   is no recorded CHECKPOINT_CLOCK_VIRTUAL.
>> > 
>> > - The order in which rr loop sees work and timer depends on whether
>> >   and when rr loop wakes up during load_snapshot().
>> > 
>> > - rr loop may wake up after the main thread kicks the CPU and drops
>> >   the BQL, which may happen if it calls, e.g.,
>> > qemu_cond_wait_bql().

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro

