qemu-devel.nongnu.org archive mirror
* [PATCH 0/5] Implement icount=auto using TCG Plugins
@ 2024-05-16 22:20 Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
                   ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-16 22:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Pierrick Bouvier,
	Paolo Bonzini

The goal here is to scale the execution of qemu-user/system temporally,
based on a given number of instructions per second.

We define a virtual clock that can run behind or ahead of real time. When it
runs ahead, we slow execution (by sleeping) until real time catches up.

Finally, we should be able to clean up icount=auto mode completely, and keep
icount usage for deterministic purposes only.

It is built upon the new TCG plugin inline ops (store + conditional callbacks),
now merged on master.

Example in user-mode:

- Retrieve the number of instructions needed to execute /bin/true
$ ./build/qemu-x86_64 -plugin ./build/tests/plugin/libinsn.so -d plugin /bin/true
cpu 0 insns: 120546
total insns: 120546
- Slow execution to match 5 seconds
$ time ./build/qemu-x86_64 -plugin ./build/contrib/plugins/libips.so,ips=$((120546/5)) /bin/true
real    0m4.985s

Alex Bennée (4):
  sysemu: add set_virtual_time to accel ops
  qtest: use cpu interface in qtest_clock_warp
  sysemu: generalise qtest_warp_clock as qemu_clock_advance_virtual_time
  plugins: add time control API

Pierrick Bouvier (1):
  contrib/plugins: add ips plugin example for cost modeling

 include/qemu/qemu-plugin.h                    |  23 ++
 include/qemu/timer.h                          |  15 ++
 include/sysemu/accel-ops.h                    |  18 +-
 include/sysemu/cpu-timers.h                   |   3 +-
 include/sysemu/qtest.h                        |   1 +
 accel/qtest/qtest.c                           |   1 +
 contrib/plugins/ips.c                         | 239 ++++++++++++++++++
 plugins/api.c                                 |  31 +++
 ...t-virtual-clock.c => cpus-virtual-clock.c} |   5 +
 system/cpus.c                                 |  11 +
 system/qtest.c                                |  27 +-
 util/qemu-timer.c                             |  26 ++
 contrib/plugins/Makefile                      |   1 +
 plugins/qemu-plugins.symbols                  |   2 +
 stubs/meson.build                             |   6 +-
 15 files changed, 383 insertions(+), 26 deletions(-)
 create mode 100644 contrib/plugins/ips.c
 rename stubs/{cpus-get-virtual-clock.c => cpus-virtual-clock.c} (68%)

-- 
2.39.2



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-16 22:20 [PATCH 0/5] Implement icount=auto using TCG Plugins Pierrick Bouvier
@ 2024-05-16 22:20 ` Pierrick Bouvier
  2024-05-28 17:11   ` Alex Bennée
                     ` (2 more replies)
  2024-05-16 22:20 ` [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp Pierrick Bouvier
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-16 22:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Pierrick Bouvier,
	Paolo Bonzini, Philippe Mathieu-Daudé

From: Alex Bennée <alex.bennee@linaro.org>

We are about to remove direct calls to individual accelerators for
this information and will need a central point for plugins to hook
into time changes.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 include/sysemu/accel-ops.h                     | 18 +++++++++++++++++-
 include/sysemu/cpu-timers.h                    |  3 ++-
 ...et-virtual-clock.c => cpus-virtual-clock.c} |  5 +++++
 system/cpus.c                                  | 11 +++++++++++
 stubs/meson.build                              |  6 +++++-
 5 files changed, 40 insertions(+), 3 deletions(-)
 rename stubs/{cpus-get-virtual-clock.c => cpus-virtual-clock.c} (68%)

diff --git a/include/sysemu/accel-ops.h b/include/sysemu/accel-ops.h
index ef91fc28bbd..a0886722305 100644
--- a/include/sysemu/accel-ops.h
+++ b/include/sysemu/accel-ops.h
@@ -20,7 +20,12 @@
 typedef struct AccelOpsClass AccelOpsClass;
 DECLARE_CLASS_CHECKERS(AccelOpsClass, ACCEL_OPS, TYPE_ACCEL_OPS)
 
-/* cpus.c operations interface */
+/**
+ * struct AccelOpsClass - accelerator interfaces
+ *
+ * This structure is used to abstract accelerator differences from the
+ * core CPU code. Not all have to be implemented.
+ */
 struct AccelOpsClass {
     /*< private >*/
     ObjectClass parent_class;
@@ -44,7 +49,18 @@ struct AccelOpsClass {
 
     void (*handle_interrupt)(CPUState *cpu, int mask);
 
+    /**
+     * @get_virtual_clock: fetch virtual clock
+     * @set_virtual_clock: set virtual clock
+     *
+     * These allow the timer subsystem to defer to the accelerator to
+     * fetch time. The set function is needed if the accelerator wants
+     * to track the changes to time as the timer is warped through
+     * various timer events.
+     */
     int64_t (*get_virtual_clock)(void);
+    void (*set_virtual_clock)(int64_t time);
+
     int64_t (*get_elapsed_ticks)(void);
 
     /* gdbstub hooks */
diff --git a/include/sysemu/cpu-timers.h b/include/sysemu/cpu-timers.h
index d86738a378d..7bfa960fbd6 100644
--- a/include/sysemu/cpu-timers.h
+++ b/include/sysemu/cpu-timers.h
@@ -96,8 +96,9 @@ int64_t cpu_get_clock(void);
 
 void qemu_timer_notify_cb(void *opaque, QEMUClockType type);
 
-/* get the VIRTUAL clock and VM elapsed ticks via the cpus accel interface */
+/* get/set VIRTUAL clock and VM elapsed ticks via the cpus accel interface */
 int64_t cpus_get_virtual_clock(void);
+void cpus_set_virtual_clock(int64_t new_time);
 int64_t cpus_get_elapsed_ticks(void);
 
 #endif /* SYSEMU_CPU_TIMERS_H */
diff --git a/stubs/cpus-get-virtual-clock.c b/stubs/cpus-virtual-clock.c
similarity index 68%
rename from stubs/cpus-get-virtual-clock.c
rename to stubs/cpus-virtual-clock.c
index fd447d53f3c..af7c1a1d403 100644
--- a/stubs/cpus-get-virtual-clock.c
+++ b/stubs/cpus-virtual-clock.c
@@ -6,3 +6,8 @@ int64_t cpus_get_virtual_clock(void)
 {
     return cpu_get_clock();
 }
+
+void cpus_set_virtual_clock(int64_t new_time)
+{
+    /* do nothing */
+}
diff --git a/system/cpus.c b/system/cpus.c
index 68d161d96b7..03ba026667c 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -229,6 +229,17 @@ int64_t cpus_get_virtual_clock(void)
     return cpu_get_clock();
 }
 
+/*
+ * Signal the new virtual time to the accelerator. This is only needed
+ * by accelerators that need to track the changes as we warp time.
+ */
+void cpus_set_virtual_clock(int64_t new_time)
+{
+    if (cpus_accel && cpus_accel->set_virtual_clock) {
+        cpus_accel->set_virtual_clock(new_time);
+    }
+}
+
 /*
  * return the time elapsed in VM between vm_start and vm_stop.  Unless
  * icount is active, cpus_get_elapsed_ticks() uses units of the host CPU cycle
diff --git a/stubs/meson.build b/stubs/meson.build
index 3b9d42023cb..672213b7482 100644
--- a/stubs/meson.build
+++ b/stubs/meson.build
@@ -3,6 +3,11 @@
 # below, so that it is clear who needs the stubbed functionality.
 
 stub_ss.add(files('cpu-get-clock.c'))
+stub_ss.add(files('cpus-virtual-clock.c'))
+stub_ss.add(files('qemu-timer-notify-cb.c'))
+stub_ss.add(files('icount.c'))
+stub_ss.add(files('dump.c'))
+stub_ss.add(files('error-printf.c'))
 stub_ss.add(files('fdset.c'))
 stub_ss.add(files('iothread-lock.c'))
 stub_ss.add(files('is-daemonized.c'))
@@ -28,7 +33,6 @@ endif
 if have_block or have_ga
   stub_ss.add(files('replay-tools.c'))
   # stubs for hooks in util/main-loop.c, util/async.c etc.
-  stub_ss.add(files('cpus-get-virtual-clock.c'))
   stub_ss.add(files('icount.c'))
   stub_ss.add(files('graph-lock.c'))
   if linux_io_uring.found()
-- 
2.39.2




* [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp
  2024-05-16 22:20 [PATCH 0/5] Implement icount=auto using TCG Plugins Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
@ 2024-05-16 22:20 ` Pierrick Bouvier
  2024-05-30  6:32   ` Paolo Bonzini
  2024-05-16 22:20 ` [PATCH 3/5] sysemu: generalise qtest_warp_clock as qemu_clock_advance_virtual_time Pierrick Bouvier
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-16 22:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Pierrick Bouvier,
	Paolo Bonzini

From: Alex Bennée <alex.bennee@linaro.org>

This generalises the qtest_clock_warp code to use the AccelOps
handlers for updating its own sense of time. This makes the next
patch, which moves the warp code, closer to pure code motion.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Thomas Huth <thuth@redhat.com>
---
 include/sysemu/qtest.h | 1 +
 accel/qtest/qtest.c    | 1 +
 system/qtest.c         | 6 +++---
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h
index b5d5fd34637..45f3b7e1df5 100644
--- a/include/sysemu/qtest.h
+++ b/include/sysemu/qtest.h
@@ -36,6 +36,7 @@ void qtest_server_set_send_handler(void (*send)(void *, const char *),
 void qtest_server_inproc_recv(void *opaque, const char *buf);
 
 int64_t qtest_get_virtual_clock(void);
+void qtest_set_virtual_clock(int64_t count);
 #endif
 
 #endif
diff --git a/accel/qtest/qtest.c b/accel/qtest/qtest.c
index f6056ac8361..53182e6c2ae 100644
--- a/accel/qtest/qtest.c
+++ b/accel/qtest/qtest.c
@@ -52,6 +52,7 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
 
     ops->create_vcpu_thread = dummy_start_vcpu_thread;
     ops->get_virtual_clock = qtest_get_virtual_clock;
+    ops->set_virtual_clock = qtest_set_virtual_clock;
 };
 
 static const TypeInfo qtest_accel_ops_type = {
diff --git a/system/qtest.c b/system/qtest.c
index 6da58b3874e..ee8b139e982 100644
--- a/system/qtest.c
+++ b/system/qtest.c
@@ -332,14 +332,14 @@ int64_t qtest_get_virtual_clock(void)
     return qatomic_read_i64(&qtest_clock_counter);
 }
 
-static void qtest_set_virtual_clock(int64_t count)
+void qtest_set_virtual_clock(int64_t count)
 {
     qatomic_set_i64(&qtest_clock_counter, count);
 }
 
 static void qtest_clock_warp(int64_t dest)
 {
-    int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    int64_t clock = cpus_get_virtual_clock();
     AioContext *aio_context;
     assert(qtest_enabled());
     aio_context = qemu_get_aio_context();
@@ -348,7 +348,7 @@ static void qtest_clock_warp(int64_t dest)
                                                       QEMU_TIMER_ATTR_ALL);
         int64_t warp = qemu_soonest_timeout(dest - clock, deadline);
 
-        qtest_set_virtual_clock(qtest_get_virtual_clock() + warp);
+        cpus_set_virtual_clock(cpus_get_virtual_clock() + warp);
 
         qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
         timerlist_run_timers(aio_context->tlg.tl[QEMU_CLOCK_VIRTUAL]);
-- 
2.39.2




* [PATCH 3/5] sysemu: generalise qtest_warp_clock as qemu_clock_advance_virtual_time
  2024-05-16 22:20 [PATCH 0/5] Implement icount=auto using TCG Plugins Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp Pierrick Bouvier
@ 2024-05-16 22:20 ` Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 4/5] plugins: add time control API Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling Pierrick Bouvier
  4 siblings, 0 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-16 22:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Pierrick Bouvier,
	Paolo Bonzini

From: Alex Bennée <alex.bennee@linaro.org>

Move the key functionality of advancing time into the clock
sub-system itself. This will allow us to plumb time control into
plugins.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/qemu/timer.h | 15 +++++++++++++++
 system/qtest.c       | 25 +++----------------------
 util/qemu-timer.c    | 26 ++++++++++++++++++++++++++
 3 files changed, 44 insertions(+), 22 deletions(-)

diff --git a/include/qemu/timer.h b/include/qemu/timer.h
index 9a366e551fb..910587d8293 100644
--- a/include/qemu/timer.h
+++ b/include/qemu/timer.h
@@ -245,6 +245,21 @@ bool qemu_clock_run_timers(QEMUClockType type);
  */
 bool qemu_clock_run_all_timers(void);
 
+/**
+ * qemu_clock_advance_virtual_time(): advance the virtual time tick
+ * @dest: target time in nanoseconds
+ *
+ * This function is used where the control of the flow of time has
+ * been delegated to outside the clock subsystem (be it qtest, icount
+ * or some other external source). Timers that expire while time is
+ * being advanced are run along the way.
+ *
+ * Time can only move forward; attempts to move it backwards are
+ * ignored.
+ *
+ * Returns: new virtual time.
+ */
+int64_t qemu_clock_advance_virtual_time(int64_t dest);
 
 /*
  * QEMUTimerList
diff --git a/system/qtest.c b/system/qtest.c
index ee8b139e982..e6f6b4e62d5 100644
--- a/system/qtest.c
+++ b/system/qtest.c
@@ -337,26 +337,6 @@ void qtest_set_virtual_clock(int64_t count)
     qatomic_set_i64(&qtest_clock_counter, count);
 }
 
-static void qtest_clock_warp(int64_t dest)
-{
-    int64_t clock = cpus_get_virtual_clock();
-    AioContext *aio_context;
-    assert(qtest_enabled());
-    aio_context = qemu_get_aio_context();
-    while (clock < dest) {
-        int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
-                                                      QEMU_TIMER_ATTR_ALL);
-        int64_t warp = qemu_soonest_timeout(dest - clock, deadline);
-
-        cpus_set_virtual_clock(cpus_get_virtual_clock() + warp);
-
-        qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
-        timerlist_run_timers(aio_context->tlg.tl[QEMU_CLOCK_VIRTUAL]);
-        clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
-    }
-    qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
-}
-
 static bool (*process_command_cb)(CharBackend *chr, gchar **words);
 
 void qtest_set_command_cb(bool (*pc_cb)(CharBackend *chr, gchar **words))
@@ -755,7 +735,8 @@ static void qtest_process_command(CharBackend *chr, gchar **words)
             ns = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
                                             QEMU_TIMER_ATTR_ALL);
         }
-        qtest_clock_warp(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + ns);
+        qemu_clock_advance_virtual_time(
+            qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + ns);
         qtest_send_prefix(chr);
         qtest_sendf(chr, "OK %"PRIi64"\n",
                     (int64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL));
@@ -781,7 +762,7 @@ static void qtest_process_command(CharBackend *chr, gchar **words)
         g_assert(words[1]);
         ret = qemu_strtoi64(words[1], NULL, 0, &ns);
         g_assert(ret == 0);
-        qtest_clock_warp(ns);
+        qemu_clock_advance_virtual_time(ns);
         qtest_send_prefix(chr);
         qtest_sendf(chr, "OK %"PRIi64"\n",
                     (int64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL));
diff --git a/util/qemu-timer.c b/util/qemu-timer.c
index 6a0de33dd2b..213114be68c 100644
--- a/util/qemu-timer.c
+++ b/util/qemu-timer.c
@@ -645,6 +645,11 @@ int64_t qemu_clock_get_ns(QEMUClockType type)
     }
 }
 
+static void qemu_virtual_clock_set_ns(int64_t time)
+{
+    return cpus_set_virtual_clock(time);
+}
+
 void init_clocks(QEMUTimerListNotifyCB *notify_cb)
 {
     QEMUClockType type;
@@ -675,3 +680,24 @@ bool qemu_clock_run_all_timers(void)
 
     return progress;
 }
+
+int64_t qemu_clock_advance_virtual_time(int64_t dest)
+{
+    int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    AioContext *aio_context;
+    aio_context = qemu_get_aio_context();
+    while (clock < dest) {
+        int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
+                                                      QEMU_TIMER_ATTR_ALL);
+        int64_t warp = qemu_soonest_timeout(dest - clock, deadline);
+
+        qemu_virtual_clock_set_ns(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + warp);
+
+        qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
+        timerlist_run_timers(aio_context->tlg.tl[QEMU_CLOCK_VIRTUAL]);
+        clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    }
+    qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
+
+    return clock;
+}
-- 
2.39.2




* [PATCH 4/5] plugins: add time control API
  2024-05-16 22:20 [PATCH 0/5] Implement icount=auto using TCG Plugins Pierrick Bouvier
                   ` (2 preceding siblings ...)
  2024-05-16 22:20 ` [PATCH 3/5] sysemu: generalise qtest_warp_clock as qemu_clock_advance_virtual_time Pierrick Bouvier
@ 2024-05-16 22:20 ` Pierrick Bouvier
  2024-05-16 22:20 ` [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling Pierrick Bouvier
  4 siblings, 0 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-16 22:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Pierrick Bouvier,
	Paolo Bonzini

From: Alex Bennée <alex.bennee@linaro.org>

Expose the ability to control time through the plugin API. Only one
plugin can control time, so it has to request control when loaded.
There are probably more corner cases to catch here.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/qemu/qemu-plugin.h   | 23 +++++++++++++++++++++++
 plugins/api.c                | 31 +++++++++++++++++++++++++++++++
 plugins/qemu-plugins.symbols |  2 ++
 3 files changed, 56 insertions(+)

diff --git a/include/qemu/qemu-plugin.h b/include/qemu/qemu-plugin.h
index 95703d8fec1..80b1637cede 100644
--- a/include/qemu/qemu-plugin.h
+++ b/include/qemu/qemu-plugin.h
@@ -661,6 +661,29 @@ void qemu_plugin_register_vcpu_mem_inline_per_vcpu(
     qemu_plugin_u64 entry,
     uint64_t imm);
 
+/**
+ * qemu_plugin_request_time_control() - request the ability to control time
+ *
+ * This grants the plugin the ability to control system time. Only one
+ * plugin can control time so if multiple plugins request the ability
+ * all but the first will fail.
+ *
+ * Returns: an opaque handle, or NULL on failure.
+ */
+const void *qemu_plugin_request_time_control(void);
+
+/**
+ * qemu_plugin_update_ns() - update system emulation time
+ * @handle: opaque handle returned by qemu_plugin_request_time_control()
+ * @time: time in nanoseconds
+ *
+ * This allows an appropriately authorised plugin (i.e. holding the
+ * time control handle) to move system time forward to @time.
+ *
+ * Start time is 0.
+ */
+void qemu_plugin_update_ns(const void *handle, int64_t time);
+
 typedef void
 (*qemu_plugin_vcpu_syscall_cb_t)(qemu_plugin_id_t id, unsigned int vcpu_index,
                                  int64_t num, uint64_t a1, uint64_t a2,
diff --git a/plugins/api.c b/plugins/api.c
index 5a0a7f8c712..26822b69ea2 100644
--- a/plugins/api.c
+++ b/plugins/api.c
@@ -39,6 +39,7 @@
 #include "qemu/main-loop.h"
 #include "qemu/plugin.h"
 #include "qemu/log.h"
+#include "qemu/timer.h"
 #include "tcg/tcg.h"
 #include "exec/exec-all.h"
 #include "exec/gdbstub.h"
@@ -583,3 +584,33 @@ uint64_t qemu_plugin_u64_sum(qemu_plugin_u64 entry)
     }
     return total;
 }
+
+/*
+ * Time control
+ */
+static bool has_control;
+
+const void *qemu_plugin_request_time_control(void)
+{
+    if (!has_control) {
+        has_control = true;
+        return &has_control;
+    }
+    return NULL;
+}
+
+static void advance_virtual_time__async(CPUState *cpu, run_on_cpu_data data)
+{
+    int64_t new_time = data.host_ulong;
+    qemu_clock_advance_virtual_time(new_time);
+}
+
+void qemu_plugin_update_ns(const void *handle, int64_t new_time)
+{
+    if (handle == &has_control) {
+        /* Need to execute out of cpu_exec, so bql can be locked. */
+        async_run_on_cpu(current_cpu,
+                         advance_virtual_time__async,
+                         RUN_ON_CPU_HOST_ULONG(new_time));
+    }
+}
diff --git a/plugins/qemu-plugins.symbols b/plugins/qemu-plugins.symbols
index aa0a77a319f..ca773d8d9fe 100644
--- a/plugins/qemu-plugins.symbols
+++ b/plugins/qemu-plugins.symbols
@@ -38,6 +38,7 @@
   qemu_plugin_register_vcpu_tb_exec_cond_cb;
   qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu;
   qemu_plugin_register_vcpu_tb_trans_cb;
+  qemu_plugin_request_time_control;
   qemu_plugin_reset;
   qemu_plugin_scoreboard_free;
   qemu_plugin_scoreboard_find;
@@ -51,5 +52,6 @@
   qemu_plugin_u64_set;
   qemu_plugin_u64_sum;
   qemu_plugin_uninstall;
+  qemu_plugin_update_ns;
   qemu_plugin_vcpu_for_each;
 };
-- 
2.39.2




* [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-16 22:20 [PATCH 0/5] Implement icount=auto using TCG Plugins Pierrick Bouvier
                   ` (3 preceding siblings ...)
  2024-05-16 22:20 ` [PATCH 4/5] plugins: add time control API Pierrick Bouvier
@ 2024-05-16 22:20 ` Pierrick Bouvier
  2024-05-28 19:14   ` Alex Bennée
  4 siblings, 1 reply; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-16 22:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Pierrick Bouvier,
	Paolo Bonzini

This plugin uses the new time control interface to make decisions
about the state of time during the emulation. The algorithm is
currently very simple. The user specifies an ips rate which applies
per core. If the core runs ahead of its allocated execution time, the
plugin sleeps for a bit to let real time catch up. Either way, time
is updated for the emulation as a function of total executed
instructions, with some adjustments for cores that idle.

Examples
--------

Slow down execution of /bin/true:
$ num_insn=$(./build/qemu-x86_64 -plugin ./build/tests/plugin/libinsn.so -d plugin /bin/true |& grep total | sed -e 's/.*: //')
$ time ./build/qemu-x86_64 -plugin ./build/contrib/plugins/libips.so,ips=$(($num_insn/4)) /bin/true
real 4.000s

Boot a Linux kernel simulating a 250 MHz CPU:
$ ./build/qemu-system-x86_64 -kernel /boot/vmlinuz-6.1.0-21-amd64 -append "console=ttyS0" -plugin ./build/contrib/plugins/libips.so,ips=$((250*1000*1000)) -smp 1 -m 512
Check the time until the kernel panics on the serial console.

Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
 contrib/plugins/ips.c    | 239 +++++++++++++++++++++++++++++++++++++++
 contrib/plugins/Makefile |   1 +
 2 files changed, 240 insertions(+)
 create mode 100644 contrib/plugins/ips.c

diff --git a/contrib/plugins/ips.c b/contrib/plugins/ips.c
new file mode 100644
index 00000000000..cf3159df391
--- /dev/null
+++ b/contrib/plugins/ips.c
@@ -0,0 +1,239 @@
+/*
+ * ips rate limiting plugin.
+ *
+ * This plugin can be used to restrict the execution of a system to a
+ * particular number of Instructions Per Second (ips). This controls
+ * time as seen by the guest so while wall-clock time may be longer
+ * from the guests point of view time will pass at the normal rate.
+ *
+ * This uses the new plugin API which allows the plugin to control
+ * system time.
+ *
+ * Copyright (c) 2023 Linaro Ltd
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include <errno.h>
+#include <stdio.h>
+#include <glib.h>
+#include <qemu-plugin.h>
+
+QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;
+
+/* how many times do we update time per sec */
+#define NUM_TIME_UPDATE_PER_SEC 10
+#define NSEC_IN_ONE_SEC (1000 * 1000 * 1000)
+
+static GMutex global_state_lock;
+
+static uint64_t insn_per_second = 1000 * 1000; /* ips per core, per second */
+static uint64_t insn_quantum; /* trap every N instructions */
+static bool precise_execution; /* count every instruction */
+static int64_t start_time_ns; /* time (ns since epoch) first vCPU started */
+static int64_t virtual_time_ns; /* last set virtual time */
+
+static const void *time_handle;
+
+typedef enum {
+    UNKNOWN = 0,
+    EXECUTING,
+    IDLE,
+    FINISHED
+} vCPUState;
+
+typedef struct {
+    uint64_t counter;
+    uint64_t track_insn;
+    vCPUState state;
+    /* timestamp when vCPU entered state */
+    int64_t last_state_time;
+} vCPUTime;
+
+struct qemu_plugin_scoreboard *vcpus;
+
+/* return epoch time in ns */
+static int64_t now_ns(void)
+{
+    return g_get_real_time() * 1000;
+}
+
+static uint64_t num_insn_during(int64_t elapsed_ns)
+{
+    double num_secs = elapsed_ns / (double) NSEC_IN_ONE_SEC;
+    return num_secs * (double) insn_per_second;
+}
+
+static int64_t time_for_insn(uint64_t num_insn)
+{
+    double num_secs = (double) num_insn / (double) insn_per_second;
+    return num_secs * (double) NSEC_IN_ONE_SEC;
+}
+
+static int64_t uptime_ns(void)
+{
+    int64_t now = now_ns();
+    g_assert(now >= start_time_ns);
+    return now - start_time_ns;
+}
+
+static void vcpu_set_state(vCPUTime *vcpu, vCPUState new_state)
+{
+    vcpu->last_state_time = now_ns();
+    vcpu->state = new_state;
+}
+
+static void update_system_time(vCPUTime *vcpu)
+{
+    /* flush remaining instructions */
+    vcpu->counter += vcpu->track_insn;
+    vcpu->track_insn = 0;
+
+    int64_t uptime = uptime_ns();
+    uint64_t expected_insn = num_insn_during(uptime);
+
+    if (vcpu->counter >= expected_insn) {
+        /* this vcpu ran faster than expected, so it has to sleep */
+        uint64_t insn_advance = vcpu->counter - expected_insn;
+        uint64_t time_advance_ns = time_for_insn(insn_advance);
+        int64_t sleep_us = time_advance_ns / 1000;
+        g_usleep(sleep_us);
+    }
+
+    /* based on number of instructions, what should be the new time? */
+    int64_t new_virtual_time = time_for_insn(vcpu->counter);
+
+    g_mutex_lock(&global_state_lock);
+
+    /* Time only moves forward. Another vcpu might have updated it already. */
+    if (new_virtual_time > virtual_time_ns) {
+        qemu_plugin_update_ns(time_handle, new_virtual_time);
+        virtual_time_ns = new_virtual_time;
+    }
+
+    g_mutex_unlock(&global_state_lock);
+}
+
+static void set_start_time(void)
+{
+    g_mutex_lock(&global_state_lock);
+    if (!start_time_ns) {
+        start_time_ns = now_ns();
+    }
+    g_mutex_unlock(&global_state_lock);
+}
+
+static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
+{
+    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
+    /* ensure start time is set first */
+    set_start_time();
+    /* start counter from absolute time reference */
+    vcpu->counter = num_insn_during(uptime_ns());
+    vcpu_set_state(vcpu, EXECUTING);
+}
+
+static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
+{
+    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
+    vcpu_set_state(vcpu, IDLE);
+}
+
+static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
+{
+    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
+    g_assert(vcpu->state == IDLE);
+    int64_t idle_time = now_ns() - vcpu->last_state_time;
+    /* accumulate expected number of instructions */
+    vcpu->counter += num_insn_during(idle_time);
+    vcpu_set_state(vcpu, EXECUTING);
+}
+
+static void vcpu_exit(qemu_plugin_id_t id, unsigned int cpu_index)
+{
+    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
+    vcpu_set_state(vcpu, FINISHED);
+    update_system_time(vcpu);
+    vcpu->counter = 0;
+}
+
+static void every_insn_quantum(unsigned int cpu_index, void *udata)
+{
+    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
+    g_assert(vcpu->track_insn >= insn_quantum);
+    update_system_time(vcpu);
+}
+
+static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
+{
+    size_t n_insns = qemu_plugin_tb_n_insns(tb);
+    qemu_plugin_u64 track_insn =
+        qemu_plugin_scoreboard_u64_in_struct(vcpus, vCPUTime, track_insn);
+    if (precise_execution) {
+        /* count (and eventually trap) on every instruction */
+        for (int idx = 0; idx < qemu_plugin_tb_n_insns(tb); ++idx) {
+            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, idx);
+            qemu_plugin_register_vcpu_insn_exec_inline_per_vcpu(
+                insn, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, 1);
+            qemu_plugin_register_vcpu_insn_exec_cond_cb(
+                insn, every_insn_quantum,
+                QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
+                track_insn, insn_quantum, NULL);
+        }
+    } else {
+        /* count (and eventually trap) once per tb */
+        qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
+            tb, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, n_insns);
+        qemu_plugin_register_vcpu_tb_exec_cond_cb(
+            tb, every_insn_quantum,
+            QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
+            track_insn, insn_quantum, NULL);
+    }
+}
+
+static void plugin_exit(qemu_plugin_id_t id, void *udata)
+{
+    qemu_plugin_scoreboard_free(vcpus);
+}
+
+QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
+                                           const qemu_info_t *info, int argc,
+                                           char **argv)
+{
+    for (int i = 0; i < argc; i++) {
+        char *opt = argv[i];
+        g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
+        if (g_strcmp0(tokens[0], "ips") == 0) {
+            insn_per_second = g_ascii_strtoull(tokens[1], NULL, 10);
+            if (!insn_per_second && errno) {
+                fprintf(stderr, "%s: couldn't parse %s (%s)\n",
+                        __func__, tokens[1], g_strerror(errno));
+                return -1;
+            }
+
+        } else if (g_strcmp0(tokens[0], "precise") == 0) {
+            if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
+                                        &precise_execution)) {
+                fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
+                return -1;
+            }
+        } else {
+            fprintf(stderr, "option parsing failed: %s\n", opt);
+            return -1;
+        }
+    }
+
+    vcpus = qemu_plugin_scoreboard_new(sizeof(vCPUTime));
+    insn_quantum = insn_per_second / NUM_TIME_UPDATE_PER_SEC;
+
+    time_handle = qemu_plugin_request_time_control();
+    g_assert(time_handle);
+
+    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
+    qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
+    qemu_plugin_register_vcpu_idle_cb(id, vcpu_idle);
+    qemu_plugin_register_vcpu_resume_cb(id, vcpu_resume);
+    qemu_plugin_register_vcpu_exit_cb(id, vcpu_exit);
+    qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
+
+    return 0;
+}
diff --git a/contrib/plugins/Makefile b/contrib/plugins/Makefile
index 0b64d2c1e3a..449ead11305 100644
--- a/contrib/plugins/Makefile
+++ b/contrib/plugins/Makefile
@@ -27,6 +27,7 @@ endif
 NAMES += hwprofile
 NAMES += cache
 NAMES += drcov
+NAMES += ips
 
 ifeq ($(CONFIG_WIN32),y)
 SO_SUFFIX := .dll
-- 
2.39.2




* Re: [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
@ 2024-05-28 17:11   ` Alex Bennée
  2024-05-28 18:57     ` Pierrick Bouvier
  2024-05-29 12:29   ` Philippe Mathieu-Daudé
  2024-05-30  6:30   ` Paolo Bonzini
  2 siblings, 1 reply; 21+ messages in thread
From: Alex Bennée @ 2024-05-28 17:11 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:

> From: Alex Bennée <alex.bennee@linaro.org>
>
> We are about to remove direct calls to individual accelerators for
> this information and will need a central point for plugins to hook
> into time changes.
>
> From: Alex Bennée <alex.bennee@linaro.org>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>

Just a note, when patches written by other people come via your tree you
should add your s-o-b tag to indicate:

  "I'm legally okay to contribute this and happy for it to go into QEMU"
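As a concrete illustration (identity values below are placeholders, not part of the thread), `git commit -s` appends the trailer from the configured identity; a minimal sketch in a throwaway repo:

```shell
# Sketch in a throwaway repo: -s appends a Signed-off-by trailer
# built from user.name / user.email.
dir=$(mktemp -d) && cd "$dir"
git init -q .
git config user.name "Pat Committer"
git config user.email "pat@example.org"
echo demo > file && git add file
git commit -q -s -m "demo commit"
git log -1 --format=%B
# Last body line: Signed-off-by: Pat Committer <pat@example.org>
```

For commits already written, `git commit --amend -s --no-edit` adds the same trailer to the tip without changing the message body, and `git rebase --signoff` can apply it across a whole series.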

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-28 17:11   ` Alex Bennée
@ 2024-05-28 18:57     ` Pierrick Bouvier
  0 siblings, 0 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-28 18:57 UTC (permalink / raw)
  To: Alex Bennée
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

On 5/28/24 10:11, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> 
>> From: Alex Bennée <alex.bennee@linaro.org>
>>
>> We are about to remove direct calls to individual accelerators for
>> this information and will need a central point for plugins to hook
>> into time changes.
>>
>> From: Alex Bennée <alex.bennee@linaro.org>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> 
> Just a note, when patches written by other people come via your tree you
> should add your s-o-b tag to indicate:
> 
>    "I'm legally okay to contribute this and happy for it to go into QEMU"
> 

Thanks for clarifying, I didn't know it was required from a committer
(vs. the author), and checkpatch.pl does not seem to check for this.

I'll add it when reposting the series, if there are further comments.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-16 22:20 ` [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling Pierrick Bouvier
@ 2024-05-28 19:14   ` Alex Bennée
  2024-05-28 19:26     ` Pierrick Bouvier
  0 siblings, 1 reply; 21+ messages in thread
From: Alex Bennée @ 2024-05-28 19:14 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini

Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:

> This plugin uses the new time control interface to make decisions
> about the state of time during the emulation. The algorithm is
> currently very simple. The user specifies an ips rate which applies
> per core. If the core runs ahead of its allocated execution time the
> plugin sleeps for a bit to let real time catch up. Either way time is
> updated for the emulation as a function of total executed instructions
> with some adjustments for cores that idle.
>
> Examples
> --------
>
> Slow down execution of /bin/true:
> $ num_insn=$(./build/qemu-x86_64 -plugin ./build/tests/plugin/libinsn.so -d plugin /bin/true |& grep total | sed -e 's/.*: //')
> $ time ./build/qemu-x86_64 -plugin ./build/contrib/plugins/libips.so,ips=$(($num_insn/4)) /bin/true
> real 4.000s
>
> Boot a Linux kernel simulating a 250MHz cpu:
> $ ./build/qemu-system-x86_64 -kernel /boot/vmlinuz-6.1.0-21-amd64 -append "console=ttyS0" -plugin ./build/contrib/plugins/libips.so,ips=$((250*1000*1000)) -smp 1 -m 512
> check time until kernel panic on serial0
>
> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> ---
>  contrib/plugins/ips.c    | 239 +++++++++++++++++++++++++++++++++++++++
>  contrib/plugins/Makefile |   1 +
>  2 files changed, 240 insertions(+)
>  create mode 100644 contrib/plugins/ips.c
>
> diff --git a/contrib/plugins/ips.c b/contrib/plugins/ips.c
> new file mode 100644
> index 00000000000..cf3159df391
> --- /dev/null
> +++ b/contrib/plugins/ips.c
> @@ -0,0 +1,239 @@
> +/*
> + * ips rate limiting plugin.
> + *
> + * This plugin can be used to restrict the execution of a system to a
> + * particular number of Instructions Per Second (ips). This controls
> + * time as seen by the guest, so while wall-clock time may be longer,
> + * from the guest's point of view time will pass at the normal rate.
> + *
> + * This uses the new plugin API which allows the plugin to control
> + * system time.
> + *
> + * Copyright (c) 2023 Linaro Ltd
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#include <stdio.h>
> +#include <glib.h>
> +#include <qemu-plugin.h>
> +
> +QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;
> +
> +/* how many times do we update time per sec */
> +#define NUM_TIME_UPDATE_PER_SEC 10
> +#define NSEC_IN_ONE_SEC (1000 * 1000 * 1000)
> +
> +static GMutex global_state_lock;
> +
> +static uint64_t insn_per_second = 1000 * 1000; /* ips per core, per second */
> +static uint64_t insn_quantum; /* trap every N instructions */
> +static bool precise_execution; /* count every instruction */
> +static int64_t start_time_ns; /* time (ns since epoch) first vCPU started */
> +static int64_t virtual_time_ns; /* last set virtual time */
> +
> +static const void *time_handle;
> +
> +typedef enum {
> +    UNKNOWN = 0,
> +    EXECUTING,
> +    IDLE,
> +    FINISHED
> +} vCPUState;
> +
> +typedef struct {
> +    uint64_t counter;
> +    uint64_t track_insn;
> +    vCPUState state;
> +    /* timestamp when vCPU entered state */
> +    int64_t last_state_time;
> +} vCPUTime;
> +
> +struct qemu_plugin_scoreboard *vcpus;
> +
> +/* return epoch time in ns */
> +static int64_t now_ns(void)
> +{
> +    return g_get_real_time() * 1000;
> +}
> +
> +static uint64_t num_insn_during(int64_t elapsed_ns)
> +{
> +    double num_secs = elapsed_ns / (double) NSEC_IN_ONE_SEC;
> +    return num_secs * (double) insn_per_second;
> +}
> +
> +static int64_t time_for_insn(uint64_t num_insn)
> +{
> +    double num_secs = (double) num_insn / (double) insn_per_second;
> +    return num_secs * (double) NSEC_IN_ONE_SEC;
> +}
> +
> +static int64_t uptime_ns(void)
> +{
> +    int64_t now = now_ns();
> +    g_assert(now >= start_time_ns);
> +    return now - start_time_ns;
> +}
> +
> +static void vcpu_set_state(vCPUTime *vcpu, vCPUState new_state)
> +{
> +    vcpu->last_state_time = now_ns();
> +    vcpu->state = new_state;
> +}
> +
> +static void update_system_time(vCPUTime *vcpu)
> +{
> +    /* flush remaining instructions */
> +    vcpu->counter += vcpu->track_insn;
> +    vcpu->track_insn = 0;
> +
> +    int64_t uptime = uptime_ns();
> +    uint64_t expected_insn = num_insn_during(uptime);
> +
> +    if (vcpu->counter >= expected_insn) {
> +        /* this vcpu ran faster than expected, so it has to sleep */
> +        uint64_t insn_advance = vcpu->counter - expected_insn;
> +        uint64_t time_advance_ns = time_for_insn(insn_advance);
> +        int64_t sleep_us = time_advance_ns / 1000;
> +        g_usleep(sleep_us);
> +    }
> +
> +    /* based on number of instructions, what should be the new time? */
> +    int64_t new_virtual_time = time_for_insn(vcpu->counter);
> +
> +    g_mutex_lock(&global_state_lock);
> +
> +    /* Time only moves forward. Another vcpu might have updated it already. */
> +    if (new_virtual_time > virtual_time_ns) {
> +        qemu_plugin_update_ns(time_handle, new_virtual_time);
> +        virtual_time_ns = new_virtual_time;
> +    }
> +
> +    g_mutex_unlock(&global_state_lock);
> +}
> +
> +static void set_start_time(void)
> +{
> +    g_mutex_lock(&global_state_lock);
> +    if (!start_time_ns) {
> +        start_time_ns = now_ns();
> +    }
> +    g_mutex_unlock(&global_state_lock);
> +}
> +
> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
> +{
> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
> +    /* ensure start time is set first */
> +    set_start_time();
> +    /* start counter from absolute time reference */
> +    vcpu->counter = num_insn_during(uptime_ns());
> +    vcpu_set_state(vcpu, EXECUTING);
> +}
> +
> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
> +{
> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
> +    vcpu_set_state(vcpu, IDLE);
> +}
> +
> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
> +{
> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
> +    g_assert(vcpu->state == IDLE);

I'm triggering a weird race here:

  (gdb) b vcpu_init
  Breakpoint 1 at 0x7ffff7fa15f7: file /home/alex/lsrc/qemu.git/contrib/plugins/ips.c, line 127.
  (gdb) r
  The program being debugged has been started already.
  Start it from the beginning? (y or n) y
  Starting program: /home/alex/lsrc/qemu.git/builds/arm.debug/qemu-system-aarch64 -machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars -cpu cortex-a57 -smp 32 -accel tcg -device virtio-net-pci,netdev=unet -device virtio-scsi-pci -device scsi-hd,drive=hd -netdev user,id=unet,hostfwd=tcp::2222-:22 -blockdev driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/trixie-arm64,discard=unmap -serial mon:stdio -blockdev node-name=rom,driver=file,filename=/home/alex/lsrc/qemu.git/builds/arm.debug/pc-bios/edk2-aarch64-code.fd,read-only=true -blockdev node-name=efivars,driver=file,filename=/home/alex/images/qemu-arm64-efivars -m 8192 -object memory-backend-memfd,id=mem,size=8G,share=on -kernel /home/alex/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image -append root=/dev/sda2\ console=ttyAMA0 -plugin contrib/plugins/libips.so,ips=1000000000
  [Thread debugging using libthread_db enabled]
  Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
  [New Thread 0x7fffe72006c0 (LWP 360538)]
  [New Thread 0x7fffe68006c0 (LWP 360540)]
  [New Thread 0x7fffe5e006c0 (LWP 360541)]
  [New Thread 0x7fffe54006c0 (LWP 360542)]
  [New Thread 0x7fffe4a006c0 (LWP 360543)]
  [New Thread 0x7fffdfe006c0 (LWP 360544)]
  [New Thread 0x7fffdf4006c0 (LWP 360545)]
  [New Thread 0x7fffdea006c0 (LWP 360546)]
  [Switching to Thread 0x7fffdf4006c0 (LWP 360545)]

  Thread 8 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=0) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
  127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
  (gdb) c
  Continuing.
  [New Thread 0x7fffde0006c0 (LWP 360548)]
  [Switching to Thread 0x7fffdea006c0 (LWP 360546)]

  Thread 9 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=1) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
  127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
  (gdb)
  Continuing.
  [New Thread 0x7fffdd6006c0 (LWP 360549)]
  [Switching to Thread 0x7fffde0006c0 (LWP 360548)]

  Thread 10 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=2) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
  127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
  (gdb)
  Continuing.
  [New Thread 0x7fffdcc006c0 (LWP 360550)]
  [Switching to Thread 0x7fffdd6006c0 (LWP 360549)]

  Thread 11 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=3) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
  127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
  (gdb)
  Continuing.
  [New Thread 0x7fffd3e006c0 (LWP 360551)]
  [Switching to Thread 0x7fffdcc006c0 (LWP 360550)]

  Thread 12 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=4) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
  127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
  (gdb) n
  129         set_start_time();
  (gdb)
  131         vcpu->counter = num_insn_during(uptime_ns());
  (gdb)
  132         vcpu_set_state(vcpu, EXECUTING);
  (gdb)
  133     }
  (gdb) p vcpu->state
  $1 = EXECUTING
  (gdb) p &vcpu->state
  $2 = (vCPUState *) 0x555557c6b5d0
  (gdb) watch *(vCPUState *) 0x555557c6b5d0
  Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0
  (gdb) c
  Continuing.
  [Thread 0x7fffdfe006c0 (LWP 360544) exited]
  [Thread 0x7fffe5e006c0 (LWP 360541) exited]

  Thread 12 "qemu-system-aar" hit Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0

  Old value = EXECUTING
  New value = IDLE
  vcpu_set_state (vcpu=0x555557c6b5c0, new_state=IDLE) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:83
  83      }
  (gdb) c
  Continuing.
  [New Thread 0x7fffdfe006c0 (LWP 360591)]
  [New Thread 0x7fffe5e006c0 (LWP 360592)]
  [New Thread 0x7fffd34006c0 (LWP 360593)]
  [Switching to Thread 0x7fffdfe006c0 (LWP 360591)]

  Thread 14 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=6) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
  127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
  (gdb)
  Continuing.
  [New Thread 0x7fffd2a006c0 (LWP 360594)]
  **
  ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
  Bail out! ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)

  Thread 13 "qemu-system-aar" received signal SIGABRT, Aborted.
  [Switching to Thread 0x7fffd3e006c0 (LWP 360551)]
  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
  44      ./nptl/pthread_kill.c: No such file or directory.
  (gdb) bt
  #0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
  #1  0x00007ffff4ca9e8f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
  #2  0x00007ffff4c5afb2 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
  #3  0x00007ffff4c45472 in __GI_abort () at ./stdlib/abort.c:79
  #4  0x00007ffff6e46ec8 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
  #5  0x00007ffff6ea6e1a in g_assertion_message_expr () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
  #6  0x00007ffff7fa16e2 in vcpu_resume (id=10457908569352202058, cpu_index=5) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144
  #7  0x00005555562c3202 in plugin_vcpu_cb__simple (cpu=0x5555583d9540, ev=QEMU_PLUGIN_EV_VCPU_RESUME) at ../../plugins/core.c:111
  #8  0x00005555562c43f7 in qemu_plugin_vcpu_resume_cb (cpu=0x5555583d9540) at ../../plugins/core.c:523
  #9  0x0000555555d9150e in qemu_wait_io_event (cpu=0x5555583d9540) at ../../system/cpus.c:465
  #10 0x00005555562c67d0 in mttcg_cpu_thread_fn (arg=0x5555583d9540) at ../../accel/tcg/tcg-accel-ops-mttcg.c:118
  #11 0x00005555565120ff in qemu_thread_start (args=0x5555584716e0) at ../../util/qemu-thread-posix.c:541
  #12 0x00007ffff4ca8134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
  #13 0x00007ffff4d287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
  (gdb)

But I don't understand how we can ever hit the idle callback without
first hitting the init callback.

> +    int64_t idle_time = now_ns() - vcpu->last_state_time;
> +    /* accumulate expected number of instructions */
> +    vcpu->counter += num_insn_during(idle_time);
> +    vcpu_set_state(vcpu, EXECUTING);
> +}
> +
> +static void vcpu_exit(qemu_plugin_id_t id, unsigned int cpu_index)
> +{
> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
> +    vcpu_set_state(vcpu, FINISHED);
> +    update_system_time(vcpu);
> +    vcpu->counter = 0;
> +}
> +
> +static void every_insn_quantum(unsigned int cpu_index, void *udata)
> +{
> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
> +    g_assert(vcpu->track_insn >= insn_quantum);
> +    update_system_time(vcpu);
> +}
> +
> +static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
> +{
> +    size_t n_insns = qemu_plugin_tb_n_insns(tb);
> +    qemu_plugin_u64 track_insn =
> +        qemu_plugin_scoreboard_u64_in_struct(vcpus, vCPUTime, track_insn);
> +    if (precise_execution) {
> +        /* count (and eventually trap) on every instruction */
> +        for (int idx = 0; idx < qemu_plugin_tb_n_insns(tb); ++idx) {
> +            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, idx);
> +            qemu_plugin_register_vcpu_insn_exec_inline_per_vcpu(
> +                insn, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, 1);
> +            qemu_plugin_register_vcpu_insn_exec_cond_cb(
> +                insn, every_insn_quantum,
> +                QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
> +                track_insn, insn_quantum, NULL);
> +        }
> +    } else {
> +        /* count (and eventually trap) once per tb */
> +        qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
> +            tb, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, n_insns);
> +        qemu_plugin_register_vcpu_tb_exec_cond_cb(
> +            tb, every_insn_quantum,
> +            QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
> +            track_insn, insn_quantum, NULL);
> +    }
> +}
> +
> +static void plugin_exit(qemu_plugin_id_t id, void *udata)
> +{
> +    qemu_plugin_scoreboard_free(vcpus);
> +}
> +
> +QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
> +                                           const qemu_info_t *info, int argc,
> +                                           char **argv)
> +{
> +    for (int i = 0; i < argc; i++) {
> +        char *opt = argv[i];
> +        g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
> +        if (g_strcmp0(tokens[0], "ips") == 0) {
> +            insn_per_second = g_ascii_strtoull(tokens[1], NULL, 10);
> +            if (!insn_per_second && errno) {
> +                fprintf(stderr, "%s: couldn't parse %s (%s)\n",
> +                        __func__, tokens[1], g_strerror(errno));
> +                return -1;
> +            }
> +
> +        } else if (g_strcmp0(tokens[0], "precise") == 0) {
> +            if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
> +                                        &precise_execution)) {
> +                fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
> +                return -1;
> +            }
> +        } else {
> +            fprintf(stderr, "option parsing failed: %s\n", opt);
> +            return -1;
> +        }
> +    }
> +
> +    vcpus = qemu_plugin_scoreboard_new(sizeof(vCPUTime));
> +    insn_quantum = insn_per_second / NUM_TIME_UPDATE_PER_SEC;
> +
> +    time_handle = qemu_plugin_request_time_control();
> +    g_assert(time_handle);
> +
> +    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
> +    qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
> +    qemu_plugin_register_vcpu_idle_cb(id, vcpu_idle);
> +    qemu_plugin_register_vcpu_resume_cb(id, vcpu_resume);
> +    qemu_plugin_register_vcpu_exit_cb(id, vcpu_exit);
> +    qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
> +
> +    return 0;
> +}
> diff --git a/contrib/plugins/Makefile b/contrib/plugins/Makefile
> index 0b64d2c1e3a..449ead11305 100644
> --- a/contrib/plugins/Makefile
> +++ b/contrib/plugins/Makefile
> @@ -27,6 +27,7 @@ endif
>  NAMES += hwprofile
>  NAMES += cache
>  NAMES += drcov
> +NAMES += ips
>  
>  ifeq ($(CONFIG_WIN32),y)
>  SO_SUFFIX := .dll

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-28 19:14   ` Alex Bennée
@ 2024-05-28 19:26     ` Pierrick Bouvier
  2024-05-28 19:57       ` Alex Bennée
  0 siblings, 1 reply; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-28 19:26 UTC (permalink / raw)
  To: Alex Bennée
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini

On 5/28/24 12:14, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> 
>> This plugin uses the new time control interface to make decisions
>> about the state of time during the emulation. The algorithm is
>> currently very simple. The user specifies an ips rate which applies
>> per core. If the core runs ahead of its allocated execution time the
>> plugin sleeps for a bit to let real time catch up. Either way time is
>> updated for the emulation as a function of total executed instructions
>> with some adjustments for cores that idle.
>>
>> Examples
>> --------
>>
>> Slow down execution of /bin/true:
>> $ num_insn=$(./build/qemu-x86_64 -plugin ./build/tests/plugin/libinsn.so -d plugin /bin/true |& grep total | sed -e 's/.*: //')
>> $ time ./build/qemu-x86_64 -plugin ./build/contrib/plugins/libips.so,ips=$(($num_insn/4)) /bin/true
>> real 4.000s
>>
>> Boot a Linux kernel simulating a 250MHz cpu:
>> $ ./build/qemu-system-x86_64 -kernel /boot/vmlinuz-6.1.0-21-amd64 -append "console=ttyS0" -plugin ./build/contrib/plugins/libips.so,ips=$((250*1000*1000)) -smp 1 -m 512
>> check time until kernel panic on serial0
>>
>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>> ---
>>   contrib/plugins/ips.c    | 239 +++++++++++++++++++++++++++++++++++++++
>>   contrib/plugins/Makefile |   1 +
>>   2 files changed, 240 insertions(+)
>>   create mode 100644 contrib/plugins/ips.c
>>
>> diff --git a/contrib/plugins/ips.c b/contrib/plugins/ips.c
>> new file mode 100644
>> index 00000000000..cf3159df391
>> --- /dev/null
>> +++ b/contrib/plugins/ips.c
>> @@ -0,0 +1,239 @@
>> +/*
>> + * ips rate limiting plugin.
>> + *
>> + * This plugin can be used to restrict the execution of a system to a
>> + * particular number of Instructions Per Second (ips). This controls
>> + * time as seen by the guest, so while wall-clock time may be longer,
>> + * from the guest's point of view time will pass at the normal rate.
>> + *
>> + * This uses the new plugin API which allows the plugin to control
>> + * system time.
>> + *
>> + * Copyright (c) 2023 Linaro Ltd
>> + *
>> + * SPDX-License-Identifier: GPL-2.0-or-later
>> + */
>> +
>> +#include <stdio.h>
>> +#include <glib.h>
>> +#include <qemu-plugin.h>
>> +
>> +QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;
>> +
>> +/* how many times do we update time per sec */
>> +#define NUM_TIME_UPDATE_PER_SEC 10
>> +#define NSEC_IN_ONE_SEC (1000 * 1000 * 1000)
>> +
>> +static GMutex global_state_lock;
>> +
>> +static uint64_t insn_per_second = 1000 * 1000; /* ips per core, per second */
>> +static uint64_t insn_quantum; /* trap every N instructions */
>> +static bool precise_execution; /* count every instruction */
>> +static int64_t start_time_ns; /* time (ns since epoch) first vCPU started */
>> +static int64_t virtual_time_ns; /* last set virtual time */
>> +
>> +static const void *time_handle;
>> +
>> +typedef enum {
>> +    UNKNOWN = 0,
>> +    EXECUTING,
>> +    IDLE,
>> +    FINISHED
>> +} vCPUState;
>> +
>> +typedef struct {
>> +    uint64_t counter;
>> +    uint64_t track_insn;
>> +    vCPUState state;
>> +    /* timestamp when vCPU entered state */
>> +    int64_t last_state_time;
>> +} vCPUTime;
>> +
>> +struct qemu_plugin_scoreboard *vcpus;
>> +
>> +/* return epoch time in ns */
>> +static int64_t now_ns(void)
>> +{
>> +    return g_get_real_time() * 1000;
>> +}
>> +
>> +static uint64_t num_insn_during(int64_t elapsed_ns)
>> +{
>> +    double num_secs = elapsed_ns / (double) NSEC_IN_ONE_SEC;
>> +    return num_secs * (double) insn_per_second;
>> +}
>> +
>> +static int64_t time_for_insn(uint64_t num_insn)
>> +{
>> +    double num_secs = (double) num_insn / (double) insn_per_second;
>> +    return num_secs * (double) NSEC_IN_ONE_SEC;
>> +}
>> +
>> +static int64_t uptime_ns(void)
>> +{
>> +    int64_t now = now_ns();
>> +    g_assert(now >= start_time_ns);
>> +    return now - start_time_ns;
>> +}
>> +
>> +static void vcpu_set_state(vCPUTime *vcpu, vCPUState new_state)
>> +{
>> +    vcpu->last_state_time = now_ns();
>> +    vcpu->state = new_state;
>> +}
>> +
>> +static void update_system_time(vCPUTime *vcpu)
>> +{
>> +    /* flush remaining instructions */
>> +    vcpu->counter += vcpu->track_insn;
>> +    vcpu->track_insn = 0;
>> +
>> +    int64_t uptime = uptime_ns();
>> +    uint64_t expected_insn = num_insn_during(uptime);
>> +
>> +    if (vcpu->counter >= expected_insn) {
>> +        /* this vcpu ran faster than expected, so it has to sleep */
>> +        uint64_t insn_advance = vcpu->counter - expected_insn;
>> +        uint64_t time_advance_ns = time_for_insn(insn_advance);
>> +        int64_t sleep_us = time_advance_ns / 1000;
>> +        g_usleep(sleep_us);
>> +    }
>> +
>> +    /* based on number of instructions, what should be the new time? */
>> +    int64_t new_virtual_time = time_for_insn(vcpu->counter);
>> +
>> +    g_mutex_lock(&global_state_lock);
>> +
>> +    /* Time only moves forward. Another vcpu might have updated it already. */
>> +    if (new_virtual_time > virtual_time_ns) {
>> +        qemu_plugin_update_ns(time_handle, new_virtual_time);
>> +        virtual_time_ns = new_virtual_time;
>> +    }
>> +
>> +    g_mutex_unlock(&global_state_lock);
>> +}
>> +
>> +static void set_start_time(void)
>> +{
>> +    g_mutex_lock(&global_state_lock);
>> +    if (!start_time_ns) {
>> +        start_time_ns = now_ns();
>> +    }
>> +    g_mutex_unlock(&global_state_lock);
>> +}
>> +
>> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
>> +{
>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>> +    /* ensure start time is set first */
>> +    set_start_time();
>> +    /* start counter from absolute time reference */
>> +    vcpu->counter = num_insn_during(uptime_ns());
>> +    vcpu_set_state(vcpu, EXECUTING);
>> +}
>> +
>> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
>> +{
>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>> +    vcpu_set_state(vcpu, IDLE);
>> +}
>> +
>> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
>> +{
>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>> +    g_assert(vcpu->state == IDLE);
> 
> I'm triggering a weird race here:
> 
>    (gdb) b vcpu_init
>    Breakpoint 1 at 0x7ffff7fa15f7: file /home/alex/lsrc/qemu.git/contrib/plugins/ips.c, line 127.
>    (gdb) r
>    The program being debugged has been started already.
>    Start it from the beginning? (y or n) y
>    Starting program: /home/alex/lsrc/qemu.git/builds/arm.debug/qemu-system-aarch64 -machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars -cpu cortex-a57 -smp 32 -accel tcg -device virtio-net-pci,netdev=unet -device virtio-scsi-pci -device scsi-hd,drive=hd -netdev user,id=unet,hostfwd=tcp::2222-:22 -blockdev driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/trixie-arm64,discard=unmap -serial mon:stdio -blockdev node-name=rom,driver=file,filename=/home/alex/lsrc/qemu.git/builds/arm.debug/pc-bios/edk2-aarch64-code.fd,read-only=true -blockdev node-name=efivars,driver=file,filename=/home/alex/images/qemu-arm64-efivars -m 8192 -object memory-backend-memfd,id=mem,size=8G,share=on -kernel /home/alex/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image -append root=/dev/sda2\ console=ttyAMA0 -plugin contrib/plugins/libips.so,ips=1000000000
>    [Thread debugging using libthread_db enabled]
>    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>    [New Thread 0x7fffe72006c0 (LWP 360538)]
>    [New Thread 0x7fffe68006c0 (LWP 360540)]
>    [New Thread 0x7fffe5e006c0 (LWP 360541)]
>    [New Thread 0x7fffe54006c0 (LWP 360542)]
>    [New Thread 0x7fffe4a006c0 (LWP 360543)]
>    [New Thread 0x7fffdfe006c0 (LWP 360544)]
>    [New Thread 0x7fffdf4006c0 (LWP 360545)]
>    [New Thread 0x7fffdea006c0 (LWP 360546)]
>    [Switching to Thread 0x7fffdf4006c0 (LWP 360545)]
> 
>    Thread 8 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=0) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>    (gdb) c
>    Continuing.
>    [New Thread 0x7fffde0006c0 (LWP 360548)]
>    [Switching to Thread 0x7fffdea006c0 (LWP 360546)]
> 
>    Thread 9 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=1) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>    (gdb)
>    Continuing.
>    [New Thread 0x7fffdd6006c0 (LWP 360549)]
>    [Switching to Thread 0x7fffde0006c0 (LWP 360548)]
> 
>    Thread 10 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=2) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>    (gdb)
>    Continuing.
>    [New Thread 0x7fffdcc006c0 (LWP 360550)]
>    [Switching to Thread 0x7fffdd6006c0 (LWP 360549)]
> 
>    Thread 11 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=3) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>    (gdb)
>    Continuing.
>    [New Thread 0x7fffd3e006c0 (LWP 360551)]
>    [Switching to Thread 0x7fffdcc006c0 (LWP 360550)]
> 
>    Thread 12 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=4) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>    (gdb) n
>    129         set_start_time();
>    (gdb)
>    131         vcpu->counter = num_insn_during(uptime_ns());
>    (gdb)
>    132         vcpu_set_state(vcpu, EXECUTING);
>    (gdb)
>    133     }
>    (gdb) p vcpu->state
>    $1 = EXECUTING
>    (gdb) p &vcpu->state
>    $2 = (vCPUState *) 0x555557c6b5d0
>    (gdb) watch *(vCPUState *) 0x555557c6b5d0
>    Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0
>    (gdb) c
>    Continuing.
>    [Thread 0x7fffdfe006c0 (LWP 360544) exited]
>    [Thread 0x7fffe5e006c0 (LWP 360541) exited]
> 
>    Thread 12 "qemu-system-aar" hit Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0
> 
>    Old value = EXECUTING
>    New value = IDLE
>    vcpu_set_state (vcpu=0x555557c6b5c0, new_state=IDLE) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:83
>    83      }
>    (gdb) c
>    Continuing.
>    [New Thread 0x7fffdfe006c0 (LWP 360591)]
>    [New Thread 0x7fffe5e006c0 (LWP 360592)]
>    [New Thread 0x7fffd34006c0 (LWP 360593)]
>    [Switching to Thread 0x7fffdfe006c0 (LWP 360591)]
> 
>    Thread 14 "qemu-system-aar" hit Breakpoint 1, vcpu_init (id=10457908569352202058, cpu_index=6) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>    (gdb)
>    Continuing.
>    [New Thread 0x7fffd2a006c0 (LWP 360594)]
>    **
>    ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>    Bail out! ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
> 
>    Thread 13 "qemu-system-aar" received signal SIGABRT, Aborted.
>    [Switching to Thread 0x7fffd3e006c0 (LWP 360551)]
>    __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>    44      ./nptl/pthread_kill.c: No such file or directory.
>    (gdb) bt
>    #0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>    #1  0x00007ffff4ca9e8f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
>    #2  0x00007ffff4c5afb2 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
>    #3  0x00007ffff4c45472 in __GI_abort () at ./stdlib/abort.c:79
>    #4  0x00007ffff6e46ec8 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>    #5  0x00007ffff6ea6e1a in g_assertion_message_expr () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>    #6  0x00007ffff7fa16e2 in vcpu_resume (id=10457908569352202058, cpu_index=5) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144
>    #7  0x00005555562c3202 in plugin_vcpu_cb__simple (cpu=0x5555583d9540, ev=QEMU_PLUGIN_EV_VCPU_RESUME) at ../../plugins/core.c:111
>    #8  0x00005555562c43f7 in qemu_plugin_vcpu_resume_cb (cpu=0x5555583d9540) at ../../plugins/core.c:523
>    #9  0x0000555555d9150e in qemu_wait_io_event (cpu=0x5555583d9540) at ../../system/cpus.c:465
>    #10 0x00005555562c67d0 in mttcg_cpu_thread_fn (arg=0x5555583d9540) at ../../accel/tcg/tcg-accel-ops-mttcg.c:118
>    #11 0x00005555565120ff in qemu_thread_start (args=0x5555584716e0) at ../../util/qemu-thread-posix.c:541
>    #12 0x00007ffff4ca8134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
>    #13 0x00007ffff4d287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
>    (gdb)
> 
> But I don't understand how we can ever hit the idle callback without
> first hitting the init callback.
> 

More exactly, the assert we hit means that the idle callback was not 
called before the resume callback.
Any chance you can check what the current vcpu->state value is? I wonder 
whether it's an exited cpu that gets resumed afterwards.

Thanks,
Pierrick

>> +    int64_t idle_time = now_ns() - vcpu->last_state_time;
>> +    /* accumulate expected number of instructions */
>> +    vcpu->counter += num_insn_during(idle_time);
>> +    vcpu_set_state(vcpu, EXECUTING);
>> +}
>> +
>> +static void vcpu_exit(qemu_plugin_id_t id, unsigned int cpu_index)
>> +{
>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>> +    vcpu_set_state(vcpu, FINISHED);
>> +    update_system_time(vcpu);
>> +    vcpu->counter = 0;
>> +}
>> +
>> +static void every_insn_quantum(unsigned int cpu_index, void *udata)
>> +{
>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>> +    g_assert(vcpu->track_insn >= insn_quantum);
>> +    update_system_time(vcpu);
>> +}
>> +
>> +static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
>> +{
>> +    size_t n_insns = qemu_plugin_tb_n_insns(tb);
>> +    qemu_plugin_u64 track_insn =
>> +        qemu_plugin_scoreboard_u64_in_struct(vcpus, vCPUTime, track_insn);
>> +    if (precise_execution) {
>> +        /* count (and eventually trap) on every instruction */
>> +        for (int idx = 0; idx < qemu_plugin_tb_n_insns(tb); ++idx) {
>> +            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, idx);
>> +            qemu_plugin_register_vcpu_insn_exec_inline_per_vcpu(
>> +                insn, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, 1);
>> +            qemu_plugin_register_vcpu_insn_exec_cond_cb(
>> +                insn, every_insn_quantum,
>> +                QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
>> +                track_insn, insn_quantum, NULL);
>> +        }
>> +    } else {
>> +        /* count (and eventually trap) once per tb */
>> +        qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
>> +            tb, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, n_insns);
>> +        qemu_plugin_register_vcpu_tb_exec_cond_cb(
>> +            tb, every_insn_quantum,
>> +            QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
>> +            track_insn, insn_quantum, NULL);
>> +    }
>> +}
>> +
>> +static void plugin_exit(qemu_plugin_id_t id, void *udata)
>> +{
>> +    qemu_plugin_scoreboard_free(vcpus);
>> +}
>> +
>> +QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
>> +                                           const qemu_info_t *info, int argc,
>> +                                           char **argv)
>> +{
>> +    for (int i = 0; i < argc; i++) {
>> +        char *opt = argv[i];
>> +        g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
>> +        if (g_strcmp0(tokens[0], "ips") == 0) {
>> +            insn_per_second = g_ascii_strtoull(tokens[1], NULL, 10);
>> +            if (!insn_per_second && errno) {
>> +                fprintf(stderr, "%s: couldn't parse %s (%s)\n",
>> +                        __func__, tokens[1], g_strerror(errno));
>> +                return -1;
>> +            }
>> +
>> +        } else if (g_strcmp0(tokens[0], "precise") == 0) {
>> +            if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
>> +                                        &precise_execution)) {
>> +                fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
>> +                return -1;
>> +            }
>> +        } else {
>> +            fprintf(stderr, "option parsing failed: %s\n", opt);
>> +            return -1;
>> +        }
>> +    }
>> +
>> +    vcpus = qemu_plugin_scoreboard_new(sizeof(vCPUTime));
>> +    insn_quantum = insn_per_second / NUM_TIME_UPDATE_PER_SEC;
>> +
>> +    time_handle = qemu_plugin_request_time_control();
>> +    g_assert(time_handle);
>> +
>> +    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
>> +    qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
>> +    qemu_plugin_register_vcpu_idle_cb(id, vcpu_idle);
>> +    qemu_plugin_register_vcpu_resume_cb(id, vcpu_resume);
>> +    qemu_plugin_register_vcpu_exit_cb(id, vcpu_exit);
>> +    qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
>> +
>> +    return 0;
>> +}
>> diff --git a/contrib/plugins/Makefile b/contrib/plugins/Makefile
>> index 0b64d2c1e3a..449ead11305 100644
>> --- a/contrib/plugins/Makefile
>> +++ b/contrib/plugins/Makefile
>> @@ -27,6 +27,7 @@ endif
>>   NAMES += hwprofile
>>   NAMES += cache
>>   NAMES += drcov
>> +NAMES += ips
>>   
>>   ifeq ($(CONFIG_WIN32),y)
>>   SO_SUFFIX := .dll
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-28 19:26     ` Pierrick Bouvier
@ 2024-05-28 19:57       ` Alex Bennée
  2024-05-28 20:26         ` Pierrick Bouvier
  0 siblings, 1 reply; 21+ messages in thread
From: Alex Bennée @ 2024-05-28 19:57 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini

Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:

> On 5/28/24 12:14, Alex Bennée wrote:
>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>> 
>>> This plugin uses the new time control interface to make decisions
>>> about the state of time during the emulation. The algorithm is
>>> currently very simple. The user specifies an ips rate which applies
>>> per core. If the core runs ahead of its allocated execution time the
>>> plugin sleeps for a bit to let real time catch up. Either way time is
>>> updated for the emulation as a function of total executed instructions
>>> with some adjustments for cores that idle.
>>>
>>> Examples
>>> --------
>>>
>>> Slow down execution of /bin/true:
>>> $ num_insn=$(./build/qemu-x86_64 -plugin ./build/tests/plugin/libinsn.so -d plugin /bin/true |& grep total | sed -e 's/.*: //')
>>> $ time ./build/qemu-x86_64 -plugin ./build/contrib/plugins/libips.so,ips=$(($num_insn/4)) /bin/true
>>> real 4.000s
>>>
>>> Boot a Linux kernel simulating a 250MHz cpu:
>>> $ /build/qemu-system-x86_64 -kernel /boot/vmlinuz-6.1.0-21-amd64 -append "console=ttyS0" -plugin ./build/contrib/plugins/libips.so,ips=$((250*1000*1000)) -smp 1 -m 512
>>> check time until kernel panic on serial0
>>>
>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>> ---
>>>   contrib/plugins/ips.c    | 239 +++++++++++++++++++++++++++++++++++++++
>>>   contrib/plugins/Makefile |   1 +
>>>   2 files changed, 240 insertions(+)
>>>   create mode 100644 contrib/plugins/ips.c
>>>
>>> diff --git a/contrib/plugins/ips.c b/contrib/plugins/ips.c
>>> new file mode 100644
>>> index 00000000000..cf3159df391
>>> --- /dev/null
>>> +++ b/contrib/plugins/ips.c
>>> @@ -0,0 +1,239 @@
>>> +/*
>>> + * ips rate limiting plugin.
>>> + *
>>> + * This plugin can be used to restrict the execution of a system to a
>>> + * particular number of Instructions Per Second (ips). This controls
>>> + * time as seen by the guest so while wall-clock time may be longer
>>> + * from the guests point of view time will pass at the normal rate.
>>> + *
>>> + * This uses the new plugin API which allows the plugin to control
>>> + * system time.
>>> + *
>>> + * Copyright (c) 2023 Linaro Ltd
>>> + *
>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>>> + */
>>> +
>>> +#include <stdio.h>
>>> +#include <glib.h>
>>> +#include <qemu-plugin.h>
>>> +
>>> +QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;
>>> +
>>> +/* how many times do we update time per sec */
>>> +#define NUM_TIME_UPDATE_PER_SEC 10
>>> +#define NSEC_IN_ONE_SEC (1000 * 1000 * 1000)
>>> +
>>> +static GMutex global_state_lock;
>>> +
>>> +static uint64_t insn_per_second = 1000 * 1000; /* ips per core, per second */
>>> +static uint64_t insn_quantum; /* trap every N instructions */
>>> +static bool precise_execution; /* count every instruction */
>>> +static int64_t start_time_ns; /* time (ns since epoch) first vCPU started */
>>> +static int64_t virtual_time_ns; /* last set virtual time */
>>> +
>>> +static const void *time_handle;
>>> +
>>> +typedef enum {
>>> +    UNKNOWN = 0,
>>> +    EXECUTING,
>>> +    IDLE,
>>> +    FINISHED
>>> +} vCPUState;
>>> +
>>> +typedef struct {
>>> +    uint64_t counter;
>>> +    uint64_t track_insn;
>>> +    vCPUState state;
>>> +    /* timestamp when vCPU entered state */
>>> +    int64_t last_state_time;
>>> +} vCPUTime;
>>> +
>>> +struct qemu_plugin_scoreboard *vcpus;
>>> +
>>> +/* return epoch time in ns */
>>> +static int64_t now_ns(void)
>>> +{
>>> +    return g_get_real_time() * 1000;
>>> +}
>>> +
>>> +static uint64_t num_insn_during(int64_t elapsed_ns)
>>> +{
>>> +    double num_secs = elapsed_ns / (double) NSEC_IN_ONE_SEC;
>>> +    return num_secs * (double) insn_per_second;
>>> +}
>>> +
>>> +static int64_t time_for_insn(uint64_t num_insn)
>>> +{
>>> +    double num_secs = (double) num_insn / (double) insn_per_second;
>>> +    return num_secs * (double) NSEC_IN_ONE_SEC;
>>> +}
>>> +
>>> +static int64_t uptime_ns(void)
>>> +{
>>> +    int64_t now = now_ns();
>>> +    g_assert(now >= start_time_ns);
>>> +    return now - start_time_ns;
>>> +}
>>> +
>>> +static void vcpu_set_state(vCPUTime *vcpu, vCPUState new_state)
>>> +{
>>> +    vcpu->last_state_time = now_ns();
>>> +    vcpu->state = new_state;
>>> +}
>>> +
>>> +static void update_system_time(vCPUTime *vcpu)
>>> +{
>>> +    /* flush remaining instructions */
>>> +    vcpu->counter += vcpu->track_insn;
>>> +    vcpu->track_insn = 0;
>>> +
>>> +    int64_t uptime = uptime_ns();
>>> +    uint64_t expected_insn = num_insn_during(uptime);
>>> +
>>> +    if (vcpu->counter >= expected_insn) {
>>> +        /* this vcpu ran faster than expected, so it has to sleep */
>>> +        uint64_t insn_advance = vcpu->counter - expected_insn;
>>> +        uint64_t time_advance_ns = time_for_insn(insn_advance);
>>> +        int64_t sleep_us = time_advance_ns / 1000;
>>> +        g_usleep(sleep_us);
>>> +    }
>>> +
>>> +    /* based on number of instructions, what should be the new time? */
>>> +    int64_t new_virtual_time = time_for_insn(vcpu->counter);
>>> +
>>> +    g_mutex_lock(&global_state_lock);
>>> +
>>> +    /* Time only moves forward. Another vcpu might have updated it already. */
>>> +    if (new_virtual_time > virtual_time_ns) {
>>> +        qemu_plugin_update_ns(time_handle, new_virtual_time);
>>> +        virtual_time_ns = new_virtual_time;
>>> +    }
>>> +
>>> +    g_mutex_unlock(&global_state_lock);
>>> +}
>>> +
>>> +static void set_start_time()
>>> +{
>>> +    g_mutex_lock(&global_state_lock);
>>> +    if (!start_time_ns) {
>>> +        start_time_ns = now_ns();
>>> +    }
>>> +    g_mutex_unlock(&global_state_lock);
>>> +}
>>> +
>>> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
>>> +{
>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>> +    /* ensure start time is set first */
>>> +    set_start_time();
>>> +    /* start counter from absolute time reference */
>>> +    vcpu->counter = num_insn_during(uptime_ns());
>>> +    vcpu_set_state(vcpu, EXECUTING);
>>> +}
>>> +
>>> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
>>> +{
>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>> +    vcpu_set_state(vcpu, IDLE);
>>> +}
>>> +
>>> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
>>> +{
>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>> +    g_assert(vcpu->state == IDLE);
>> I'm triggering a weird race here:
>>    (gdb) b vcpu_init
>>    Breakpoint 1 at 0x7ffff7fa15f7: file /home/alex/lsrc/qemu.git/contrib/plugins/ips.c, line 127.
>>    (gdb) r
>>    The program being debugged has been started already.
>>    Start it from the beginning? (y or n) y
>>    Starting program:
>> /home/alex/lsrc/qemu.git/builds/arm.debug/qemu-system-aarch64
>> -machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars
>> -cpu cortex-a57 -smp 32 -accel tcg -device
>> virtio-net-pci,netdev=unet -device virtio-scsi-pci -device
>> scsi-hd,drive=hd -netdev user,id=unet,hostfwd=tcp::2222-:22
>> -blockdev
>> driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/trixie-arm64,discard=unmap
>> -serial mon:stdio -blockdev
>> node-name=rom,driver=file,filename=/home/alex/lsrc/qemu.git/builds/arm.debug/pc-bios/edk2-aarch64-code.fd,read-only=true
>> -blockdev
>> node-name=efivars,driver=file,filename=/home/alex/images/qemu-arm64-efivars
>> -m 8192 -object memory-backend-memfd,id=mem,size=8G,share=on -kernel
>> /home/alex/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image -append
>> root=/dev/sda2\ console=ttyAMA0 -plugin
>> contrib/plugins/libips.so,ips=1000000000
>>    [Thread debugging using libthread_db enabled]
>>    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>>    [New Thread 0x7fffe72006c0 (LWP 360538)]
>>    [New Thread 0x7fffe68006c0 (LWP 360540)]
>>    [New Thread 0x7fffe5e006c0 (LWP 360541)]
>>    [New Thread 0x7fffe54006c0 (LWP 360542)]
>>    [New Thread 0x7fffe4a006c0 (LWP 360543)]
>>    [New Thread 0x7fffdfe006c0 (LWP 360544)]
>>    [New Thread 0x7fffdf4006c0 (LWP 360545)]
>>    [New Thread 0x7fffdea006c0 (LWP 360546)]
>>    [Switching to Thread 0x7fffdf4006c0 (LWP 360545)]
>>    Thread 8 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>> (id=10457908569352202058, cpu_index=0) at
>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>    (gdb) c
>>    Continuing.
>>    [New Thread 0x7fffde0006c0 (LWP 360548)]
>>    [Switching to Thread 0x7fffdea006c0 (LWP 360546)]
>>    Thread 9 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>> (id=10457908569352202058, cpu_index=1) at
>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>    (gdb)
>>    Continuing.
>>    [New Thread 0x7fffdd6006c0 (LWP 360549)]
>>    [Switching to Thread 0x7fffde0006c0 (LWP 360548)]
>>    Thread 10 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>> (id=10457908569352202058, cpu_index=2) at
>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>    (gdb)
>>    Continuing.
>>    [New Thread 0x7fffdcc006c0 (LWP 360550)]
>>    [Switching to Thread 0x7fffdd6006c0 (LWP 360549)]
>>    Thread 11 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>> (id=10457908569352202058, cpu_index=3) at
>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>    (gdb)
>>    Continuing.
>>    [New Thread 0x7fffd3e006c0 (LWP 360551)]
>>    [Switching to Thread 0x7fffdcc006c0 (LWP 360550)]
>>    Thread 12 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>> (id=10457908569352202058, cpu_index=4) at
>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>    (gdb) n
>>    129         set_start_time();
>>    (gdb)
>>    131         vcpu->counter = num_insn_during(uptime_ns());
>>    (gdb)
>>    132         vcpu_set_state(vcpu, EXECUTING);
>>    (gdb)
>>    133     }
>>    (gdb) p vcpu->state
>>    $1 = EXECUTING
>>    (gdb) p &vcpu->state
>>    $2 = (vCPUState *) 0x555557c6b5d0
>>    (gdb) watch *(vCPUState *) 0x555557c6b5d0
>>    Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0
>>    (gdb) c
>>    Continuing.
>>    [Thread 0x7fffdfe006c0 (LWP 360544) exited]
>>    [Thread 0x7fffe5e006c0 (LWP 360541) exited]
>>    Thread 12 "qemu-system-aar" hit Hardware watchpoint 2:
>> *(vCPUState *) 0x555557c6b5d0
>>    Old value = EXECUTING
>>    New value = IDLE
>>    vcpu_set_state (vcpu=0x555557c6b5c0, new_state=IDLE) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:83
>>    83      }
>>    (gdb) c
>>    Continuing.
>>    [New Thread 0x7fffdfe006c0 (LWP 360591)]
>>    [New Thread 0x7fffe5e006c0 (LWP 360592)]
>>    [New Thread 0x7fffd34006c0 (LWP 360593)]
>>    [Switching to Thread 0x7fffdfe006c0 (LWP 360591)]
>>    Thread 14 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>> (id=10457908569352202058, cpu_index=6) at
>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>    127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>    (gdb)
>>    Continuing.
>>    [New Thread 0x7fffd2a006c0 (LWP 360594)]
>>    **
>>    ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>>    Bail out! ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>>    Thread 13 "qemu-system-aar" received signal SIGABRT, Aborted.
>>    [Switching to Thread 0x7fffd3e006c0 (LWP 360551)]
>>    __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>>    44      ./nptl/pthread_kill.c: No such file or directory.
>>    (gdb) bt
>>    #0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>>    #1  0x00007ffff4ca9e8f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
>>    #2  0x00007ffff4c5afb2 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
>>    #3  0x00007ffff4c45472 in __GI_abort () at ./stdlib/abort.c:79
>>    #4  0x00007ffff6e46ec8 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>>    #5  0x00007ffff6ea6e1a in g_assertion_message_expr () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>>    #6  0x00007ffff7fa16e2 in vcpu_resume (id=10457908569352202058, cpu_index=5) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144
>>    #7  0x00005555562c3202 in plugin_vcpu_cb__simple (cpu=0x5555583d9540, ev=QEMU_PLUGIN_EV_VCPU_RESUME) at ../../plugins/core.c:111
>>    #8  0x00005555562c43f7 in qemu_plugin_vcpu_resume_cb (cpu=0x5555583d9540) at ../../plugins/core.c:523
>>    #9  0x0000555555d9150e in qemu_wait_io_event (cpu=0x5555583d9540) at ../../system/cpus.c:465
>>    #10 0x00005555562c67d0 in mttcg_cpu_thread_fn (arg=0x5555583d9540) at ../../accel/tcg/tcg-accel-ops-mttcg.c:118
>>    #11 0x00005555565120ff in qemu_thread_start (args=0x5555584716e0) at ../../util/qemu-thread-posix.c:541
>>    #12 0x00007ffff4ca8134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
>>    #13 0x00007ffff4d287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
>>    (gdb)
>> But I don't understand how we can ever hit the idle callback without
>> first hitting the init callback.
>> 
>
> More exactly, the assert we hit means that the idle callback was not
> called before the resume callback.
> Any chance you can check what the current vcpu->state value is? I
> wonder whether it's an exited cpu that gets resumed afterwards.

No, when I looked at it, it was set to UNKNOWN. Unfortunately it doesn't
trigger if I stick breakpoints in or run under rr.

>
> Thanks,
> Pierrick
>
>>> +    int64_t idle_time = now_ns() - vcpu->last_state_time;
>>> +    /* accumulate expected number of instructions */
>>> +    vcpu->counter += num_insn_during(idle_time);
>>> +    vcpu_set_state(vcpu, EXECUTING);
>>> +}
>>> +
>>> +static void vcpu_exit(qemu_plugin_id_t id, unsigned int cpu_index)
>>> +{
>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>> +    vcpu_set_state(vcpu, FINISHED);
>>> +    update_system_time(vcpu);
>>> +    vcpu->counter = 0;
>>> +}
>>> +
>>> +static void every_insn_quantum(unsigned int cpu_index, void *udata)
>>> +{
>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>> +    g_assert(vcpu->track_insn >= insn_quantum);
>>> +    update_system_time(vcpu);
>>> +}
>>> +
>>> +static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
>>> +{
>>> +    size_t n_insns = qemu_plugin_tb_n_insns(tb);
>>> +    qemu_plugin_u64 track_insn =
>>> +        qemu_plugin_scoreboard_u64_in_struct(vcpus, vCPUTime, track_insn);
>>> +    if (precise_execution) {
>>> +        /* count (and eventually trap) on every instruction */
>>> +        for (int idx = 0; idx < qemu_plugin_tb_n_insns(tb); ++idx) {
>>> +            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, idx);
>>> +            qemu_plugin_register_vcpu_insn_exec_inline_per_vcpu(
>>> +                insn, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, 1);
>>> +            qemu_plugin_register_vcpu_insn_exec_cond_cb(
>>> +                insn, every_insn_quantum,
>>> +                QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
>>> +                track_insn, insn_quantum, NULL);
>>> +        }
>>> +    } else {
>>> +        /* count (and eventually trap) once per tb */
>>> +        qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
>>> +            tb, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, n_insns);
>>> +        qemu_plugin_register_vcpu_tb_exec_cond_cb(
>>> +            tb, every_insn_quantum,
>>> +            QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
>>> +            track_insn, insn_quantum, NULL);
>>> +    }
>>> +}
>>> +
>>> +static void plugin_exit(qemu_plugin_id_t id, void *udata)
>>> +{
>>> +    qemu_plugin_scoreboard_free(vcpus);
>>> +}
>>> +
>>> +QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
>>> +                                           const qemu_info_t *info, int argc,
>>> +                                           char **argv)
>>> +{
>>> +    for (int i = 0; i < argc; i++) {
>>> +        char *opt = argv[i];
>>> +        g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
>>> +        if (g_strcmp0(tokens[0], "ips") == 0) {
>>> +            insn_per_second = g_ascii_strtoull(tokens[1], NULL, 10);
>>> +            if (!insn_per_second && errno) {
>>> +                fprintf(stderr, "%s: couldn't parse %s (%s)\n",
>>> +                        __func__, tokens[1], g_strerror(errno));
>>> +                return -1;
>>> +            }
>>> +
>>> +        } else if (g_strcmp0(tokens[0], "precise") == 0) {
>>> +            if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
>>> +                                        &precise_execution)) {
>>> +                fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
>>> +                return -1;
>>> +            }
>>> +        } else {
>>> +            fprintf(stderr, "option parsing failed: %s\n", opt);
>>> +            return -1;
>>> +        }
>>> +    }
>>> +
>>> +    vcpus = qemu_plugin_scoreboard_new(sizeof(vCPUTime));
>>> +    insn_quantum = insn_per_second / NUM_TIME_UPDATE_PER_SEC;
>>> +
>>> +    time_handle = qemu_plugin_request_time_control();
>>> +    g_assert(time_handle);
>>> +
>>> +    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
>>> +    qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
>>> +    qemu_plugin_register_vcpu_idle_cb(id, vcpu_idle);
>>> +    qemu_plugin_register_vcpu_resume_cb(id, vcpu_resume);
>>> +    qemu_plugin_register_vcpu_exit_cb(id, vcpu_exit);
>>> +    qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
>>> +
>>> +    return 0;
>>> +}
>>> diff --git a/contrib/plugins/Makefile b/contrib/plugins/Makefile
>>> index 0b64d2c1e3a..449ead11305 100644
>>> --- a/contrib/plugins/Makefile
>>> +++ b/contrib/plugins/Makefile
>>> @@ -27,6 +27,7 @@ endif
>>>   NAMES += hwprofile
>>>   NAMES += cache
>>>   NAMES += drcov
>>> +NAMES += ips
>>>     ifeq ($(CONFIG_WIN32),y)
>>>   SO_SUFFIX := .dll
>> 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-28 19:57       ` Alex Bennée
@ 2024-05-28 20:26         ` Pierrick Bouvier
  2024-05-29 12:13           ` Alex Bennée
  0 siblings, 1 reply; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-28 20:26 UTC (permalink / raw)
  To: Alex Bennée
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini

On 5/28/24 12:57, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> 
>> On 5/28/24 12:14, Alex Bennée wrote:
>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>
>>>> This plugin uses the new time control interface to make decisions
>>>> about the state of time during the emulation. The algorithm is
>>>> currently very simple. The user specifies an ips rate which applies
>>>> per core. If the core runs ahead of its allocated execution time the
>>>> plugin sleeps for a bit to let real time catch up. Either way time is
>>>> updated for the emulation as a function of total executed instructions
>>>> with some adjustments for cores that idle.
>>>>
>>>> Examples
>>>> --------
>>>>
>>>> Slow down execution of /bin/true:
>>>> $ num_insn=$(./build/qemu-x86_64 -plugin ./build/tests/plugin/libinsn.so -d plugin /bin/true |& grep total | sed -e 's/.*: //')
>>>> $ time ./build/qemu-x86_64 -plugin ./build/contrib/plugins/libips.so,ips=$(($num_insn/4)) /bin/true
>>>> real 4.000s
>>>>
>>>> Boot a Linux kernel simulating a 250MHz cpu:
>>>> $ /build/qemu-system-x86_64 -kernel /boot/vmlinuz-6.1.0-21-amd64 -append "console=ttyS0" -plugin ./build/contrib/plugins/libips.so,ips=$((250*1000*1000)) -smp 1 -m 512
>>>> check time until kernel panic on serial0
>>>>
>>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>> ---
>>>>    contrib/plugins/ips.c    | 239 +++++++++++++++++++++++++++++++++++++++
>>>>    contrib/plugins/Makefile |   1 +
>>>>    2 files changed, 240 insertions(+)
>>>>    create mode 100644 contrib/plugins/ips.c
>>>>
>>>> diff --git a/contrib/plugins/ips.c b/contrib/plugins/ips.c
>>>> new file mode 100644
>>>> index 00000000000..cf3159df391
>>>> --- /dev/null
>>>> +++ b/contrib/plugins/ips.c
>>>> @@ -0,0 +1,239 @@
>>>> +/*
>>>> + * ips rate limiting plugin.
>>>> + *
>>>> + * This plugin can be used to restrict the execution of a system to a
>>>> + * particular number of Instructions Per Second (ips). This controls
>>>> + * time as seen by the guest so while wall-clock time may be longer
>>>> + * from the guests point of view time will pass at the normal rate.
>>>> + *
>>>> + * This uses the new plugin API which allows the plugin to control
>>>> + * system time.
>>>> + *
>>>> + * Copyright (c) 2023 Linaro Ltd
>>>> + *
>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>>>> + */
>>>> +
>>>> +#include <stdio.h>
>>>> +#include <glib.h>
>>>> +#include <qemu-plugin.h>
>>>> +
>>>> +QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;
>>>> +
>>>> +/* how many times do we update time per sec */
>>>> +#define NUM_TIME_UPDATE_PER_SEC 10
>>>> +#define NSEC_IN_ONE_SEC (1000 * 1000 * 1000)
>>>> +
>>>> +static GMutex global_state_lock;
>>>> +
>>>> +static uint64_t insn_per_second = 1000 * 1000; /* ips per core, per second */
>>>> +static uint64_t insn_quantum; /* trap every N instructions */
>>>> +static bool precise_execution; /* count every instruction */
>>>> +static int64_t start_time_ns; /* time (ns since epoch) first vCPU started */
>>>> +static int64_t virtual_time_ns; /* last set virtual time */
>>>> +
>>>> +static const void *time_handle;
>>>> +
>>>> +typedef enum {
>>>> +    UNKNOWN = 0,
>>>> +    EXECUTING,
>>>> +    IDLE,
>>>> +    FINISHED
>>>> +} vCPUState;
>>>> +
>>>> +typedef struct {
>>>> +    uint64_t counter;
>>>> +    uint64_t track_insn;
>>>> +    vCPUState state;
>>>> +    /* timestamp when vCPU entered state */
>>>> +    int64_t last_state_time;
>>>> +} vCPUTime;
>>>> +
>>>> +struct qemu_plugin_scoreboard *vcpus;
>>>> +
>>>> +/* return epoch time in ns */
>>>> +static int64_t now_ns(void)
>>>> +{
>>>> +    return g_get_real_time() * 1000;
>>>> +}
>>>> +
>>>> +static uint64_t num_insn_during(int64_t elapsed_ns)
>>>> +{
>>>> +    double num_secs = elapsed_ns / (double) NSEC_IN_ONE_SEC;
>>>> +    return num_secs * (double) insn_per_second;
>>>> +}
>>>> +
>>>> +static int64_t time_for_insn(uint64_t num_insn)
>>>> +{
>>>> +    double num_secs = (double) num_insn / (double) insn_per_second;
>>>> +    return num_secs * (double) NSEC_IN_ONE_SEC;
>>>> +}
>>>> +
>>>> +static int64_t uptime_ns(void)
>>>> +{
>>>> +    int64_t now = now_ns();
>>>> +    g_assert(now >= start_time_ns);
>>>> +    return now - start_time_ns;
>>>> +}
>>>> +
>>>> +static void vcpu_set_state(vCPUTime *vcpu, vCPUState new_state)
>>>> +{
>>>> +    vcpu->last_state_time = now_ns();
>>>> +    vcpu->state = new_state;
>>>> +}
>>>> +
>>>> +static void update_system_time(vCPUTime *vcpu)
>>>> +{
>>>> +    /* flush remaining instructions */
>>>> +    vcpu->counter += vcpu->track_insn;
>>>> +    vcpu->track_insn = 0;
>>>> +
>>>> +    int64_t uptime = uptime_ns();
>>>> +    uint64_t expected_insn = num_insn_during(uptime);
>>>> +
>>>> +    if (vcpu->counter >= expected_insn) {
>>>> +        /* this vcpu ran faster than expected, so it has to sleep */
>>>> +        uint64_t insn_advance = vcpu->counter - expected_insn;
>>>> +        uint64_t time_advance_ns = time_for_insn(insn_advance);
>>>> +        int64_t sleep_us = time_advance_ns / 1000;
>>>> +        g_usleep(sleep_us);
>>>> +    }
>>>> +
>>>> +    /* based on number of instructions, what should be the new time? */
>>>> +    int64_t new_virtual_time = time_for_insn(vcpu->counter);
>>>> +
>>>> +    g_mutex_lock(&global_state_lock);
>>>> +
>>>> +    /* Time only moves forward. Another vcpu might have updated it already. */
>>>> +    if (new_virtual_time > virtual_time_ns) {
>>>> +        qemu_plugin_update_ns(time_handle, new_virtual_time);
>>>> +        virtual_time_ns = new_virtual_time;
>>>> +    }
>>>> +
>>>> +    g_mutex_unlock(&global_state_lock);
>>>> +}
>>>> +
>>>> +static void set_start_time()
>>>> +{
>>>> +    g_mutex_lock(&global_state_lock);
>>>> +    if (!start_time_ns) {
>>>> +        start_time_ns = now_ns();
>>>> +    }
>>>> +    g_mutex_unlock(&global_state_lock);
>>>> +}
>>>> +
>>>> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
>>>> +{
>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>> +    /* ensure start time is set first */
>>>> +    set_start_time();
>>>> +    /* start counter from absolute time reference */
>>>> +    vcpu->counter = num_insn_during(uptime_ns());
>>>> +    vcpu_set_state(vcpu, EXECUTING);
>>>> +}
>>>> +
>>>> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
>>>> +{
>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>> +    vcpu_set_state(vcpu, IDLE);
>>>> +}
>>>> +
>>>> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
>>>> +{
>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>> +    g_assert(vcpu->state == IDLE);
>>> I'm triggering a weird race here:
>>>     (gdb) b vcpu_init
>>>     Breakpoint 1 at 0x7ffff7fa15f7: file /home/alex/lsrc/qemu.git/contrib/plugins/ips.c, line 127.
>>>     (gdb) r
>>>     The program being debugged has been started already.
>>>     Start it from the beginning? (y or n) y
>>>     Starting program:
>>> /home/alex/lsrc/qemu.git/builds/arm.debug/qemu-system-aarch64
>>> -machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars
>>> -cpu cortex-a57 -smp 32 -accel tcg -device
>>> virtio-net-pci,netdev=unet -device virtio-scsi-pci -device
>>> scsi-hd,drive=hd -netdev user,id=unet,hostfwd=tcp::2222-:22
>>> -blockdev
>>> driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/trixie-arm64,discard=unmap
>>> -serial mon:stdio -blockdev
>>> node-name=rom,driver=file,filename=/home/alex/lsrc/qemu.git/builds/arm.debug/pc-bios/edk2-aarch64-code.fd,read-only=true
>>> -blockdev
>>> node-name=efivars,driver=file,filename=/home/alex/images/qemu-arm64-efivars
>>> -m 8192 -object memory-backend-memfd,id=mem,size=8G,share=on -kernel
>>> /home/alex/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image -append
>>> root=/dev/sda2\ console=ttyAMA0 -plugin
>>> contrib/plugins/libips.so,ips=1000000000
>>>     [Thread debugging using libthread_db enabled]
>>>     Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>>>     [New Thread 0x7fffe72006c0 (LWP 360538)]
>>>     [New Thread 0x7fffe68006c0 (LWP 360540)]
>>>     [New Thread 0x7fffe5e006c0 (LWP 360541)]
>>>     [New Thread 0x7fffe54006c0 (LWP 360542)]
>>>     [New Thread 0x7fffe4a006c0 (LWP 360543)]
>>>     [New Thread 0x7fffdfe006c0 (LWP 360544)]
>>>     [New Thread 0x7fffdf4006c0 (LWP 360545)]
>>>     [New Thread 0x7fffdea006c0 (LWP 360546)]
>>>     [Switching to Thread 0x7fffdf4006c0 (LWP 360545)]
>>>     Thread 8 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>> (id=10457908569352202058, cpu_index=0) at
>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>     127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>     (gdb) c
>>>     Continuing.
>>>     [New Thread 0x7fffde0006c0 (LWP 360548)]
>>>     [Switching to Thread 0x7fffdea006c0 (LWP 360546)]
>>>     Thread 9 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>> (id=10457908569352202058, cpu_index=1) at
>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>     127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>     (gdb)
>>>     Continuing.
>>>     [New Thread 0x7fffdd6006c0 (LWP 360549)]
>>>     [Switching to Thread 0x7fffde0006c0 (LWP 360548)]
>>>     Thread 10 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>> (id=10457908569352202058, cpu_index=2) at
>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>     127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>     (gdb)
>>>     Continuing.
>>>     [New Thread 0x7fffdcc006c0 (LWP 360550)]
>>>     [Switching to Thread 0x7fffdd6006c0 (LWP 360549)]
>>>     Thread 11 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>> (id=10457908569352202058, cpu_index=3) at
>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>     127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>     (gdb)
>>>     Continuing.
>>>     [New Thread 0x7fffd3e006c0 (LWP 360551)]
>>>     [Switching to Thread 0x7fffdcc006c0 (LWP 360550)]
>>>     Thread 12 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>> (id=10457908569352202058, cpu_index=4) at
>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>     127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>     (gdb) n
>>>     129         set_start_time();
>>>     (gdb)
>>>     131         vcpu->counter = num_insn_during(uptime_ns());
>>>     (gdb)
>>>     132         vcpu_set_state(vcpu, EXECUTING);
>>>     (gdb)
>>>     133     }
>>>     (gdb) p vcpu->state
>>>     $1 = EXECUTING
>>>     (gdb) p &vcpu->state
>>>     $2 = (vCPUState *) 0x555557c6b5d0
>>>     (gdb) watch *(vCPUState *) 0x555557c6b5d0
>>>     Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0
>>>     (gdb) c
>>>     Continuing.
>>>     [Thread 0x7fffdfe006c0 (LWP 360544) exited]
>>>     [Thread 0x7fffe5e006c0 (LWP 360541) exited]
>>>     Thread 12 "qemu-system-aar" hit Hardware watchpoint 2:
>>> *(vCPUState *) 0x555557c6b5d0
>>>     Old value = EXECUTING
>>>     New value = IDLE
>>>     vcpu_set_state (vcpu=0x555557c6b5c0, new_state=IDLE) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:83
>>>     83      }
>>>     (gdb) c
>>>     Continuing.
>>>     [New Thread 0x7fffdfe006c0 (LWP 360591)]
>>>     [New Thread 0x7fffe5e006c0 (LWP 360592)]
>>>     [New Thread 0x7fffd34006c0 (LWP 360593)]
>>>     [Switching to Thread 0x7fffdfe006c0 (LWP 360591)]
>>>     Thread 14 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>> (id=10457908569352202058, cpu_index=6) at
>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>     127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>     (gdb)
>>>     Continuing.
>>>     [New Thread 0x7fffd2a006c0 (LWP 360594)]
>>>     **
>>>     ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>>>     Bail out! ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>>>     Thread 13 "qemu-system-aar" received signal SIGABRT, Aborted.
>>>     [Switching to Thread 0x7fffd3e006c0 (LWP 360551)]
>>>     __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>>>     44      ./nptl/pthread_kill.c: No such file or directory.
>>>     (gdb) bt
>>>     #0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>>>     #1  0x00007ffff4ca9e8f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
>>>     #2  0x00007ffff4c5afb2 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
>>>     #3  0x00007ffff4c45472 in __GI_abort () at ./stdlib/abort.c:79
>>>     #4  0x00007ffff6e46ec8 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>>>     #5  0x00007ffff6ea6e1a in g_assertion_message_expr () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>>>     #6  0x00007ffff7fa16e2 in vcpu_resume (id=10457908569352202058, cpu_index=5) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144
>>>     #7  0x00005555562c3202 in plugin_vcpu_cb__simple (cpu=0x5555583d9540, ev=QEMU_PLUGIN_EV_VCPU_RESUME) at ../../plugins/core.c:111
>>>     #8  0x00005555562c43f7 in qemu_plugin_vcpu_resume_cb (cpu=0x5555583d9540) at ../../plugins/core.c:523
>>>     #9  0x0000555555d9150e in qemu_wait_io_event (cpu=0x5555583d9540) at ../../system/cpus.c:465
>>>     #10 0x00005555562c67d0 in mttcg_cpu_thread_fn (arg=0x5555583d9540) at ../../accel/tcg/tcg-accel-ops-mttcg.c:118
>>>     #11 0x00005555565120ff in qemu_thread_start (args=0x5555584716e0) at ../../util/qemu-thread-posix.c:541
>>>     #12 0x00007ffff4ca8134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
>>>     #13 0x00007ffff4d287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
>>>     (gdb)
>>> But I don't understand how we can ever hit the idle callback without
>>> first hitting the init callback.
>>>
>>
>> More precisely, the assert we hit means that the idle callback was not
>> called before the resume callback.
>> Any chance you can check what the current vcpu->state value is? I
>> wonder if it's an exited cpu that gets resumed afterwards.
> 
> No, when I looked at it, it was set to UNKNOWN. Unfortunately it doesn't
> trigger if I stick breakpoints in or run under rr.
>

Maybe by using rr record --chaos mode?

>>
>> Thanks,
>> Pierrick
>>
>>>> +    int64_t idle_time = now_ns() - vcpu->last_state_time;
>>>> +    /* accumulate expected number of instructions */
>>>> +    vcpu->counter += num_insn_during(idle_time);
>>>> +    vcpu_set_state(vcpu, EXECUTING);
>>>> +}
>>>> +
>>>> +static void vcpu_exit(qemu_plugin_id_t id, unsigned int cpu_index)
>>>> +{
>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>> +    vcpu_set_state(vcpu, FINISHED);
>>>> +    update_system_time(vcpu);
>>>> +    vcpu->counter = 0;
>>>> +}
>>>> +
>>>> +static void every_insn_quantum(unsigned int cpu_index, void *udata)
>>>> +{
>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>> +    g_assert(vcpu->track_insn >= insn_quantum);
>>>> +    update_system_time(vcpu);
>>>> +}
>>>> +
>>>> +static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
>>>> +{
>>>> +    size_t n_insns = qemu_plugin_tb_n_insns(tb);
>>>> +    qemu_plugin_u64 track_insn =
>>>> +        qemu_plugin_scoreboard_u64_in_struct(vcpus, vCPUTime, track_insn);
>>>> +    if (precise_execution) {
>>>> +        /* count (and eventually trap) on every instruction */
>>>> +        for (int idx = 0; idx < qemu_plugin_tb_n_insns(tb); ++idx) {
>>>> +            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, idx);
>>>> +            qemu_plugin_register_vcpu_insn_exec_inline_per_vcpu(
>>>> +                insn, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, 1);
>>>> +            qemu_plugin_register_vcpu_insn_exec_cond_cb(
>>>> +                insn, every_insn_quantum,
>>>> +                QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
>>>> +                track_insn, insn_quantum, NULL);
>>>> +        }
>>>> +    } else {
>>>> +        /* count (and eventually trap) once per tb */
>>>> +        qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
>>>> +            tb, QEMU_PLUGIN_INLINE_ADD_U64, track_insn, n_insns);
>>>> +        qemu_plugin_register_vcpu_tb_exec_cond_cb(
>>>> +            tb, every_insn_quantum,
>>>> +            QEMU_PLUGIN_CB_NO_REGS, QEMU_PLUGIN_COND_GE,
>>>> +            track_insn, insn_quantum, NULL);
>>>> +    }
>>>> +}
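A plugin-free sketch of the per-TB accounting above — the inline ADD_U64 op paired with the GE conditional callback — can be written as follows; the quantum value is illustrative, not the plugin's actual insn_quantum:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative quantum: fire a time update every 1000 instructions. */
static const uint64_t insn_quantum = 1000;

typedef struct {
    uint64_t counter;    /* instructions already flushed into virtual time */
    uint64_t track_insn; /* instructions since the last flush */
} vCPUTime;

/* Mimics one TB execution: add the TB's instruction count
 * (QEMU_PLUGIN_INLINE_ADD_U64), and report whether the conditional
 * callback (QEMU_PLUGIN_COND_GE against insn_quantum) would fire,
 * flushing track_insn into counter as update_system_time() does. */
static int tb_exec(vCPUTime *vcpu, uint64_t n_insns)
{
    vcpu->track_insn += n_insns;
    if (vcpu->track_insn >= insn_quantum) {
        vcpu->counter += vcpu->track_insn;
        vcpu->track_insn = 0;
        return 1;
    }
    return 0;
}
```

In the precise case the same accounting happens per instruction, so the callback fires at most one instruction past the quantum boundary rather than up to one whole TB late.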
>>>> +
>>>> +static void plugin_exit(qemu_plugin_id_t id, void *udata)
>>>> +{
>>>> +    qemu_plugin_scoreboard_free(vcpus);
>>>> +}
>>>> +
>>>> +QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
>>>> +                                           const qemu_info_t *info, int argc,
>>>> +                                           char **argv)
>>>> +{
>>>> +    for (int i = 0; i < argc; i++) {
>>>> +        char *opt = argv[i];
>>>> +        g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
>>>> +        if (g_strcmp0(tokens[0], "ips") == 0) {
>>>> +            insn_per_second = g_ascii_strtoull(tokens[1], NULL, 10);
>>>> +            if (!insn_per_second && errno) {
>>>> +                fprintf(stderr, "%s: couldn't parse %s (%s)\n",
>>>> +                        __func__, tokens[1], g_strerror(errno));
>>>> +                return -1;
>>>> +            }
>>>> +
>>>> +        } else if (g_strcmp0(tokens[0], "precise") == 0) {
>>>> +            if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
>>>> +                                        &precise_execution)) {
>>>> +                fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
>>>> +                return -1;
>>>> +            }
>>>> +        } else {
>>>> +            fprintf(stderr, "option parsing failed: %s\n", opt);
>>>> +            return -1;
>>>> +        }
>>>> +    }
>>>> +
>>>> +    vcpus = qemu_plugin_scoreboard_new(sizeof(vCPUTime));
>>>> +    insn_quantum = insn_per_second / NUM_TIME_UPDATE_PER_SEC;
>>>> +
>>>> +    time_handle = qemu_plugin_request_time_control();
>>>> +    g_assert(time_handle);
>>>> +
>>>> +    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
>>>> +    qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
>>>> +    qemu_plugin_register_vcpu_idle_cb(id, vcpu_idle);
>>>> +    qemu_plugin_register_vcpu_resume_cb(id, vcpu_resume);
>>>> +    qemu_plugin_register_vcpu_exit_cb(id, vcpu_exit);
>>>> +    qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> diff --git a/contrib/plugins/Makefile b/contrib/plugins/Makefile
>>>> index 0b64d2c1e3a..449ead11305 100644
>>>> --- a/contrib/plugins/Makefile
>>>> +++ b/contrib/plugins/Makefile
>>>> @@ -27,6 +27,7 @@ endif
>>>>    NAMES += hwprofile
>>>>    NAMES += cache
>>>>    NAMES += drcov
>>>> +NAMES += ips
>>>>      ifeq ($(CONFIG_WIN32),y)
>>>>    SO_SUFFIX := .dll
>>>
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-28 20:26         ` Pierrick Bouvier
@ 2024-05-29 12:13           ` Alex Bennée
  2024-05-29 14:33             ` Pierrick Bouvier
  0 siblings, 1 reply; 21+ messages in thread
From: Alex Bennée @ 2024-05-29 12:13 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:

(Added Philip to CC)

> On 5/28/24 12:57, Alex Bennée wrote:
>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>> 
>>> On 5/28/24 12:14, Alex Bennée wrote:
>>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>>
>>>>> This plugin uses the new time control interface to make decisions
>>>>> about the state of time during the emulation. The algorithm is
>>>>> currently very simple. The user specifies an ips rate which applies
>>>>> per core.

<snip>

>>>>> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>> +{
>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>> +    /* ensure start time is set first */
>>>>> +    set_start_time();
>>>>> +    /* start counter from absolute time reference */
>>>>> +    vcpu->counter = num_insn_during(uptime_ns());
>>>>> +    vcpu_set_state(vcpu, EXECUTING);
>>>>> +}
>>>>> +
>>>>> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>> +{
>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>> +    vcpu_set_state(vcpu, IDLE);
>>>>> +}
>>>>> +
>>>>> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>> +{
>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>> +    g_assert(vcpu->state == IDLE);
>>>> I'm triggering a weird race here:
>>>>     <snip gdb session, quoted in full earlier in the thread>
>>>> But I don't understand how we can ever hit the idle callback without
>>>> first hitting the init callback.
>>>>
>>>
>>> More exactly, the assert we hit means that the idle callback, was not
>>> called before the resume callback.
>>> Any chance you can check what is the current vcpu->state value? I
>>> wonder if it's not an exited cpu, that gets resumed after.
>> No when I looked at it it was set as UNKNOWN. Unfortunately it
>> doesn't
>> trigger if I stick breakpoints in or run under rr.
>>
>
> By using rr record --chaos mode maybe?

Ahh yes that triggered it.

It looks like we have a race:

  (rr) c
  Continuing.

  Thread 1 hit Hardware watchpoint 7: *(int *) 0x5559ba6899b0

  Old value = -1
  New value = 10
  machvirt_init (machine=0x5559b9ac3c00) at ../../hw/arm/virt.c:2214
  2214            numa_cpu_pre_plug(&possible_cpus->cpus[cs->cpu_index], DEVICE(cpuobj),
  (rr) c
  Continuing.
  [New Thread 650125.650142]
  [Switching to Thread 650125.650142]

  Thread 44 hit Breakpoint 8, mttcg_cpu_thread_fn (arg=0x5559ba6896e0) at ../../accel/tcg/tcg-accel-ops-mttcg.c:70
  70          assert(tcg_enabled());
  (rr) p cpu->cpu_index
  $24 = 10
  (rr) c
  Continuing.
  [Switching to Thread 650125.650125]

  Thread 1 hit Breakpoint 10, cpu_common_realizefn (dev=0x5559ba6896e0, errp=0x7fff02322c10) at ../../hw/core/cpu-common.c:205
  205         Object *machine = qdev_get_machine();
  (rr) p cpu->cpu_index
  $25 = 10
  (rr) bt
  #0  cpu_common_realizefn (dev=0x5559ba6896e0, errp=0x7fff02322c10) at ../../hw/core/cpu-common.c:205
  #1  0x00005559b606e6a5 in arm_cpu_realizefn (dev=0x5559ba6896e0, errp=0x7fff02322c10) at ../../target/arm/cpu.c:2552
  #2  0x00005559b63dd5d0 in device_set_realized (obj=0x5559ba6896e0, value=true, errp=0x7fff02322d20) at ../../hw/core/qdev.c:510
  #3  0x00005559b63e84e7 in property_set_bool (obj=0x5559ba6896e0, v=0x5559ba681540, name=0x5559b68fafe1 "realized", opaque=0x5559b97f73c0, errp=0x7fff02322d20)
      at ../../qom/object.c:2354
  #4  0x00005559b63e6065 in object_property_set (obj=0x5559ba6896e0, name=0x5559b68fafe1 "realized", v=0x5559ba681540, errp=0x7fff02322d20) at ../../qom/object.c:1463
  #5  0x00005559b63ead1d in object_property_set_qobject (obj=0x5559ba6896e0, name=0x5559b68fafe1 "realized", value=0x5559ba681420, errp=0x5559b7810320 <error_fatal>)
      at ../../qom/qom-qobject.c:28
  #6  0x00005559b63e640a in object_property_set_bool (obj=0x5559ba6896e0, name=0x5559b68fafe1 "realized", value=true, errp=0x5559b7810320 <error_fatal>)
      at ../../qom/object.c:1533
  #7  0x00005559b63dccbd in qdev_realize (dev=0x5559ba6896e0, bus=0x0, errp=0x5559b7810320 <error_fatal>) at ../../hw/core/qdev.c:291
  #8  0x00005559b5fe4243 in machvirt_init (machine=0x5559b9ac3c00) at ../../hw/arm/virt.c:2295
  #9  0x00005559b5ada5d6 in machine_run_board_init (machine=0x5559b9ac3c00, mem_path=0x0, errp=0x7fff02322f30) at ../../hw/core/machine.c:1576
  #10 0x00005559b5eb416f in qemu_init_board () at ../../system/vl.c:2621
  #11 0x00005559b5eb4482 in qmp_x_exit_preconfig (errp=0x5559b7810320 <error_fatal>) at ../../system/vl.c:2713
  #12 0x00005559b5eb7088 in qemu_init (argc=35, argv=0x7fff02323268) at ../../system/vl.c:3759
  #13 0x00005559b63d7ff2 in main (argc=35, argv=0x7fff02323268) at ../../system/main.c:47

That's because of the order in which things happen in the specific CPU realisation:

    qemu_init_vcpu(cs);
    cpu_reset(cs);

    acc->parent_realize(dev, errp);

And the common realize function doesn't get to do:

    /* Plugin initialization must wait until the cpu start executing code */
#ifdef CONFIG_PLUGIN
    if (tcg_enabled()) {
        cpu->plugin_state = qemu_plugin_create_vcpu_state();
        async_run_on_cpu(cpu, qemu_plugin_vcpu_init__async, RUN_ON_CPU_NULL);
    }
#endif

before we've started executing...
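Given that ordering, one possible mitigation — a sketch of the idea only, not the committed fix — is to let the resume callback tolerate a vCPU whose async plugin init has not run yet, i.e. whose scoreboard slot is still at its zero-initialised UNKNOWN state, instead of asserting state == IDLE:

```c
#include <assert.h>

typedef enum { UNKNOWN = 0, EXECUTING, IDLE, FINISHED } vCPUState;

typedef struct {
    vCPUState state;
} vCPUTime;

/* Tolerant resume: if the init callback has not run yet (state still
 * UNKNOWN because the scoreboard is zero-initialised), treat resume as
 * the first transition to EXECUTING rather than aborting. */
static void vcpu_resume_tolerant(vCPUTime *vcpu)
{
    assert(vcpu->state == IDLE || vcpu->state == UNKNOWN);
    vcpu->state = EXECUTING;
}
```

The function name and the decision to widen the assert rather than reorder realize/init are hypothetical; the alternative is to fix the ordering in QEMU so plugin init always precedes any idle/resume event.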

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
  2024-05-28 17:11   ` Alex Bennée
@ 2024-05-29 12:29   ` Philippe Mathieu-Daudé
  2024-05-29 18:57     ` Pierrick Bouvier
  2024-05-30  6:30   ` Paolo Bonzini
  2 siblings, 1 reply; 21+ messages in thread
From: Philippe Mathieu-Daudé @ 2024-05-29 12:29 UTC (permalink / raw)
  To: Pierrick Bouvier, qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini

On 17/5/24 00:20, Pierrick Bouvier wrote:
> From: Alex Bennée <alex.bennee@linaro.org>
> 
> We are about to remove direct calls to individual accelerators for
> this information and will need a central point for plugins to hook
> into time changes.
> 
> From: Alex Bennée <alex.bennee@linaro.org>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>   include/sysemu/accel-ops.h                     | 18 +++++++++++++++++-
>   include/sysemu/cpu-timers.h                    |  3 ++-
>   ...et-virtual-clock.c => cpus-virtual-clock.c} |  5 +++++
>   system/cpus.c                                  | 11 +++++++++++
>   stubs/meson.build                              |  6 +++++-
>   5 files changed, 40 insertions(+), 3 deletions(-)


> diff --git a/stubs/meson.build b/stubs/meson.build
> index 3b9d42023cb..672213b7482 100644
> --- a/stubs/meson.build
> +++ b/stubs/meson.build
> @@ -3,6 +3,11 @@
>   # below, so that it is clear who needs the stubbed functionality.
>   
>   stub_ss.add(files('cpu-get-clock.c'))
> +stub_ss.add(files('cpus-virtual-clock.c'))

Are these lines <...

> +stub_ss.add(files('qemu-timer-notify-cb.c'))
> +stub_ss.add(files('icount.c'))
> +stub_ss.add(files('dump.c'))
> +stub_ss.add(files('error-printf.c'))

...> due to a failed git-rebase?

>   stub_ss.add(files('fdset.c'))
>   stub_ss.add(files('iothread-lock.c'))
>   stub_ss.add(files('is-daemonized.c'))
> @@ -28,7 +33,6 @@ endif
>   if have_block or have_ga
>     stub_ss.add(files('replay-tools.c'))
>     # stubs for hooks in util/main-loop.c, util/async.c etc.
> -  stub_ss.add(files('cpus-get-virtual-clock.c'))
>     stub_ss.add(files('icount.c'))
>     stub_ss.add(files('graph-lock.c'))
>     if linux_io_uring.found()



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-29 12:13           ` Alex Bennée
@ 2024-05-29 14:33             ` Pierrick Bouvier
  2024-05-29 15:39               ` Alex Bennée
  0 siblings, 1 reply; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-29 14:33 UTC (permalink / raw)
  To: Alex Bennée
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

On 5/29/24 05:13, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> 
> (Added Philip to CC)
> 
>> On 5/28/24 12:57, Alex Bennée wrote:
>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>
>>>> On 5/28/24 12:14, Alex Bennée wrote:
>>>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>>>
>>>>>> This plugin uses the new time control interface to make decisions
>>>>>> about the state of time during the emulation. The algorithm is
>>>>>> currently very simple. The user specifies an ips rate which applies
>>>>>> per core.
> 
> <snip>
> 
>>>>>> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>>> +{
>>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>> +    /* ensure start time is set first */
>>>>>> +    set_start_time();
>>>>>> +    /* start counter from absolute time reference */
>>>>>> +    vcpu->counter = num_insn_during(uptime_ns());
>>>>>> +    vcpu_set_state(vcpu, EXECUTING);
>>>>>> +}
>>>>>> +
>>>>>> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>>> +{
>>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>> +    vcpu_set_state(vcpu, IDLE);
>>>>>> +}
>>>>>> +
>>>>>> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>>> +{
>>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>> +    g_assert(vcpu->state == IDLE);
>>>>> I'm triggering a weird race here:
>>>>>      (gdb) b vcpu_init
>>>>>      Breakpoint 1 at 0x7ffff7fa15f7: file /home/alex/lsrc/qemu.git/contrib/plugins/ips.c, line 127.
>>>>>      (gdb) r
>>>>>      The program being debugged has been started already.
>>>>>      Start it from the beginning? (y or n) y
>>>>>      Starting program:
>>>>> /home/alex/lsrc/qemu.git/builds/arm.debug/qemu-system-aarch64
>>>>> -machine type=virt,virtualization=on,pflash0=rom,pflash1=efivars
>>>>> -cpu cortex-a57 -smp 32 -accel tcg -device
>>>>> virtio-net-pci,netdev=unet -device virtio-scsi-pci -device
>>>>> scsi-hd,drive=hd -netdev user,id=unet,hostfwd=tcp::2222-:22
>>>>> -blockdev
>>>>> driver=raw,node-name=hd,file.driver=host_device,file.filename=/dev/zen-ssd2/trixie-arm64,discard=unmap
>>>>> -serial mon:stdio -blockdev
>>>>> node-name=rom,driver=file,filename=/home/alex/lsrc/qemu.git/builds/arm.debug/pc-bios/edk2-aarch64-code.fd,read-only=true
>>>>> -blockdev
>>>>> node-name=efivars,driver=file,filename=/home/alex/images/qemu-arm64-efivars
>>>>> -m 8192 -object memory-backend-memfd,id=mem,size=8G,share=on -kernel
>>>>> /home/alex/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image -append
>>>>> root=/dev/sda2\ console=ttyAMA0 -plugin
>>>>> contrib/plugins/libips.so,ips=1000000000
>>>>>      [Thread debugging using libthread_db enabled]
>>>>>      Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>>>>>      [New Thread 0x7fffe72006c0 (LWP 360538)]
>>>>>      [New Thread 0x7fffe68006c0 (LWP 360540)]
>>>>>      [New Thread 0x7fffe5e006c0 (LWP 360541)]
>>>>>      [New Thread 0x7fffe54006c0 (LWP 360542)]
>>>>>      [New Thread 0x7fffe4a006c0 (LWP 360543)]
>>>>>      [New Thread 0x7fffdfe006c0 (LWP 360544)]
>>>>>      [New Thread 0x7fffdf4006c0 (LWP 360545)]
>>>>>      [New Thread 0x7fffdea006c0 (LWP 360546)]
>>>>>      [Switching to Thread 0x7fffdf4006c0 (LWP 360545)]
>>>>>      Thread 8 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>>>> (id=10457908569352202058, cpu_index=0) at
>>>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>>>      127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>      (gdb) c
>>>>>      Continuing.
>>>>>      [New Thread 0x7fffde0006c0 (LWP 360548)]
>>>>>      [Switching to Thread 0x7fffdea006c0 (LWP 360546)]
>>>>>      Thread 9 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>>>> (id=10457908569352202058, cpu_index=1) at
>>>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>>>      127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>      (gdb)
>>>>>      Continuing.
>>>>>      [New Thread 0x7fffdd6006c0 (LWP 360549)]
>>>>>      [Switching to Thread 0x7fffde0006c0 (LWP 360548)]
>>>>>      Thread 10 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>>>> (id=10457908569352202058, cpu_index=2) at
>>>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>>>      127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>      (gdb)
>>>>>      Continuing.
>>>>>      [New Thread 0x7fffdcc006c0 (LWP 360550)]
>>>>>      [Switching to Thread 0x7fffdd6006c0 (LWP 360549)]
>>>>>      Thread 11 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>>>> (id=10457908569352202058, cpu_index=3) at
>>>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>>>      127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>      (gdb)
>>>>>      Continuing.
>>>>>      [New Thread 0x7fffd3e006c0 (LWP 360551)]
>>>>>      [Switching to Thread 0x7fffdcc006c0 (LWP 360550)]
>>>>>      Thread 12 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>>>> (id=10457908569352202058, cpu_index=4) at
>>>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>>>      127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>      (gdb) n
>>>>>      129         set_start_time();
>>>>>      (gdb)
>>>>>      131         vcpu->counter = num_insn_during(uptime_ns());
>>>>>      (gdb)
>>>>>      132         vcpu_set_state(vcpu, EXECUTING);
>>>>>      (gdb)
>>>>>      133     }
>>>>>      (gdb) p vcpu->state
>>>>>      $1 = EXECUTING
>>>>>      (gdb) p &vcpu->state
>>>>>      $2 = (vCPUState *) 0x555557c6b5d0
>>>>>      (gdb) watch *(vCPUState *) 0x555557c6b5d0
>>>>>      Hardware watchpoint 2: *(vCPUState *) 0x555557c6b5d0
>>>>>      (gdb) c
>>>>>      Continuing.
>>>>>      [Thread 0x7fffdfe006c0 (LWP 360544) exited]
>>>>>      [Thread 0x7fffe5e006c0 (LWP 360541) exited]
>>>>>      Thread 12 "qemu-system-aar" hit Hardware watchpoint 2:
>>>>> *(vCPUState *) 0x555557c6b5d0
>>>>>      Old value = EXECUTING
>>>>>      New value = IDLE
>>>>>      vcpu_set_state (vcpu=0x555557c6b5c0, new_state=IDLE) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:83
>>>>>      83      }
>>>>>      (gdb) c
>>>>>      Continuing.
>>>>>      [New Thread 0x7fffdfe006c0 (LWP 360591)]
>>>>>      [New Thread 0x7fffe5e006c0 (LWP 360592)]
>>>>>      [New Thread 0x7fffd34006c0 (LWP 360593)]
>>>>>      [Switching to Thread 0x7fffdfe006c0 (LWP 360591)]
>>>>>      Thread 14 "qemu-system-aar" hit Breakpoint 1, vcpu_init
>>>>> (id=10457908569352202058, cpu_index=6) at
>>>>> /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:127
>>>>>      127         vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>      (gdb)
>>>>>      Continuing.
>>>>>      [New Thread 0x7fffd2a006c0 (LWP 360594)]
>>>>>      **
>>>>>      ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>>>>>      Bail out! ERROR:/home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144:vcpu_resume: assertion failed: (vcpu->state == IDLE)
>>>>>      Thread 13 "qemu-system-aar" received signal SIGABRT, Aborted.
>>>>>      [Switching to Thread 0x7fffd3e006c0 (LWP 360551)]
>>>>>      __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>>>>>      44      ./nptl/pthread_kill.c: No such file or directory.
>>>>>      (gdb) bt
>>>>>      #0  __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44
>>>>>      #1  0x00007ffff4ca9e8f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
>>>>>      #2  0x00007ffff4c5afb2 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
>>>>>      #3  0x00007ffff4c45472 in __GI_abort () at ./stdlib/abort.c:79
>>>>>      #4  0x00007ffff6e46ec8 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>>>>>      #5  0x00007ffff6ea6e1a in g_assertion_message_expr () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
>>>>>      #6  0x00007ffff7fa16e2 in vcpu_resume (id=10457908569352202058, cpu_index=5) at /home/alex/lsrc/qemu.git/contrib/plugins/ips.c:144
>>>>>      #7  0x00005555562c3202 in plugin_vcpu_cb__simple (cpu=0x5555583d9540, ev=QEMU_PLUGIN_EV_VCPU_RESUME) at ../../plugins/core.c:111
>>>>>      #8  0x00005555562c43f7 in qemu_plugin_vcpu_resume_cb (cpu=0x5555583d9540) at ../../plugins/core.c:523
>>>>>      #9  0x0000555555d9150e in qemu_wait_io_event (cpu=0x5555583d9540) at ../../system/cpus.c:465
>>>>>      #10 0x00005555562c67d0 in mttcg_cpu_thread_fn (arg=0x5555583d9540) at ../../accel/tcg/tcg-accel-ops-mttcg.c:118
>>>>>      #11 0x00005555565120ff in qemu_thread_start (args=0x5555584716e0) at ../../util/qemu-thread-posix.c:541
>>>>>      #12 0x00007ffff4ca8134 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
>>>>>      #13 0x00007ffff4d287dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
>>>>>      (gdb)
>>>>> But I don't understand how we can ever hit the idle callback without
>>>>> first hitting the init callback.
>>>>>
>>>>
>>>> More exactly, the assert we hit means that the idle callback was not
>>>> called before the resume callback.
>>>> Any chance you can check what the current vcpu->state value is? I
>>>> wonder if it's an exited cpu that gets resumed afterwards.
>>> No, when I looked at it, it was set to UNKNOWN. Unfortunately it
>>> doesn't
>>> trigger if I stick breakpoints in or run under rr.
>>>
>>
>> By using rr record --chaos mode maybe?
> 
> Ahh yes that triggered it.
> 
> It looks like we have a race:
> 
>    (rr) c
>    Continuing.
> 
>    Thread 1 hit Hardware watchpoint 7: *(int *) 0x5559ba6899b0
> 
>    Old value = -1
>    New value = 10
>    machvirt_init (machine=0x5559b9ac3c00) at ../../hw/arm/virt.c:2214
>    2214            numa_cpu_pre_plug(&possible_cpus->cpus[cs->cpu_index], DEVICE(cpuobj),
>    (rr) c
>    Continuing.
>    [New Thread 650125.650142]
>    [Switching to Thread 650125.650142]
> 
>    Thread 44 hit Breakpoint 8, mttcg_cpu_thread_fn (arg=0x5559ba6896e0) at ../../accel/tcg/tcg-accel-ops-mttcg.c:70
>    70          assert(tcg_enabled());
>    (rr) p cpu->cpu_index
>    $24 = 10
>    (rr) c
>    Continuing.
>    [Switching to Thread 650125.650125]
> 
>    Thread 1 hit Breakpoint 10, cpu_common_realizefn (dev=0x5559ba6896e0, errp=0x7fff02322c10) at ../../hw/core/cpu-common.c:205
>    205         Object *machine = qdev_get_machine();
>    (rr) p cpu->cpu_index
>    $25 = 10
>    (rr) bt
>    #0  cpu_common_realizefn (dev=0x5559ba6896e0, errp=0x7fff02322c10) at ../../hw/core/cpu-common.c:205
>    #1  0x00005559b606e6a5 in arm_cpu_realizefn (dev=0x5559ba6896e0, errp=0x7fff02322c10) at ../../target/arm/cpu.c:2552
>    #2  0x00005559b63dd5d0 in device_set_realized (obj=0x5559ba6896e0, value=true, errp=0x7fff02322d20) at ../../hw/core/qdev.c:510
>    #3  0x00005559b63e84e7 in property_set_bool (obj=0x5559ba6896e0, v=0x5559ba681540, name=0x5559b68fafe1 "realized", opaque=0x5559b97f73c0, errp=0x7fff02322d20)
>        at ../../qom/object.c:2354
>    #4  0x00005559b63e6065 in object_property_set (obj=0x5559ba6896e0, name=0x5559b68fafe1 "realized", v=0x5559ba681540, errp=0x7fff02322d20) at ../../qom/object.c:1463
>    #5  0x00005559b63ead1d in object_property_set_qobject (obj=0x5559ba6896e0, name=0x5559b68fafe1 "realized", value=0x5559ba681420, errp=0x5559b7810320 <error_fatal>)
>        at ../../qom/qom-qobject.c:28
>    #6  0x00005559b63e640a in object_property_set_bool (obj=0x5559ba6896e0, name=0x5559b68fafe1 "realized", value=true, errp=0x5559b7810320 <error_fatal>)
>        at ../../qom/object.c:1533
>    #7  0x00005559b63dccbd in qdev_realize (dev=0x5559ba6896e0, bus=0x0, errp=0x5559b7810320 <error_fatal>) at ../../hw/core/qdev.c:291
>    #8  0x00005559b5fe4243 in machvirt_init (machine=0x5559b9ac3c00) at ../../hw/arm/virt.c:2295
>    #9  0x00005559b5ada5d6 in machine_run_board_init (machine=0x5559b9ac3c00, mem_path=0x0, errp=0x7fff02322f30) at ../../hw/core/machine.c:1576
>    #10 0x00005559b5eb416f in qemu_init_board () at ../../system/vl.c:2621
>    #11 0x00005559b5eb4482 in qmp_x_exit_preconfig (errp=0x5559b7810320 <error_fatal>) at ../../system/vl.c:2713
>    #12 0x00005559b5eb7088 in qemu_init (argc=35, argv=0x7fff02323268) at ../../system/vl.c:3759
>    #13 0x00005559b63d7ff2 in main (argc=35, argv=0x7fff02323268) at ../../system/main.c:47
> 
> Because, looking at the order in which things happen in the CPU-specific realisation:
> 
>      qemu_init_vcpu(cs);
>      cpu_reset(cs);
> 
>      acc->parent_realize(dev, errp);
> 
> And the common realize function doesn't get to do:
> 
>      /* Plugin initialization must wait until the cpu start executing code */
> #ifdef CONFIG_PLUGIN
>      if (tcg_enabled()) {
>          cpu->plugin_state = qemu_plugin_create_vcpu_state();
>          async_run_on_cpu(cpu, qemu_plugin_vcpu_init__async, RUN_ON_CPU_NULL);
>      }
> #endif
> 
> before we've started executing.....
> 

Thanks for the analysis, Alex.
It seems to me this tree integrates the recent series from Philippe.
Maybe the race was introduced there.
Could you try without that series?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling
  2024-05-29 14:33             ` Pierrick Bouvier
@ 2024-05-29 15:39               ` Alex Bennée
  0 siblings, 0 replies; 21+ messages in thread
From: Alex Bennée @ 2024-05-29 15:39 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini,
	Philippe Mathieu-Daudé

Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:

> On 5/29/24 05:13, Alex Bennée wrote:
>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>> (Added Philip to CC)
>> 
>>> On 5/28/24 12:57, Alex Bennée wrote:
>>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>>
>>>>> On 5/28/24 12:14, Alex Bennée wrote:
>>>>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>>>>
>>>>>>> This plugin uses the new time control interface to make decisions
>>>>>>> about the state of time during the emulation. The algorithm is
>>>>>>> currently very simple. The user specifies an ips rate which applies
>>>>>>> per core.
>> <snip>
>> 
>>>>>>> +static void vcpu_init(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>>>> +{
>>>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>>> +    /* ensure start time is set first */
>>>>>>> +    set_start_time();
>>>>>>> +    /* start counter from absolute time reference */
>>>>>>> +    vcpu->counter = num_insn_during(uptime_ns());
>>>>>>> +    vcpu_set_state(vcpu, EXECUTING);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void vcpu_idle(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>>>> +{
>>>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>>> +    vcpu_set_state(vcpu, IDLE);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void vcpu_resume(qemu_plugin_id_t id, unsigned int cpu_index)
>>>>>>> +{
>>>>>>> +    vCPUTime *vcpu = qemu_plugin_scoreboard_find(vcpus, cpu_index);
>>>>>>> +    g_assert(vcpu->state == IDLE);
>> <snip>
>> 
>
> Thanks for the analysis, Alex.
> It seems to me this tree integrates the recent series from Philippe.
> Maybe the race was introduced there.
> Could you try without that series?

No, this is based off master and I think the race has always been there.
I think I was just triggering it because I was using -smp 32 for some big
system simulations. See:

  Message-Id: <20240529152219.825680-1-alex.bennee@linaro.org>
  Date: Wed, 29 May 2024 16:22:19 +0100
  Subject: [RFC PATCH] cpus: split qemu_init_vcpu and delay vCPU thread creation
  From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-29 12:29   ` Philippe Mathieu-Daudé
@ 2024-05-29 18:57     ` Pierrick Bouvier
  0 siblings, 0 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-29 18:57 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel
  Cc: Thomas Huth, Laurent Vivier, Mahmoud Mandour, Alex Bennée,
	Alexandre Iooss, Richard Henderson, Paolo Bonzini

On 5/29/24 05:29, Philippe Mathieu-Daudé wrote:
> On 17/5/24 00:20, Pierrick Bouvier wrote:
>> From: Alex Bennée <alex.bennee@linaro.org>
>>
>> We are about to remove direct calls to individual accelerators for
>> this information and will need a central point for plugins to hook
>> into time changes.
>>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
>> ---
>>    include/sysemu/accel-ops.h                     | 18 +++++++++++++++++-
>>    include/sysemu/cpu-timers.h                    |  3 ++-
>>    ...et-virtual-clock.c => cpus-virtual-clock.c} |  5 +++++
>>    system/cpus.c                                  | 11 +++++++++++
>>    stubs/meson.build                              |  6 +++++-
>>    5 files changed, 40 insertions(+), 3 deletions(-)
> 
> 
>> diff --git a/stubs/meson.build b/stubs/meson.build
>> index 3b9d42023cb..672213b7482 100644
>> --- a/stubs/meson.build
>> +++ b/stubs/meson.build
>> @@ -3,6 +3,11 @@
>>    # below, so that it is clear who needs the stubbed functionality.
>>    
>>    stub_ss.add(files('cpu-get-clock.c'))
>> +stub_ss.add(files('cpus-virtual-clock.c'))
> 
> Are these lines <...
> 
>> +stub_ss.add(files('qemu-timer-notify-cb.c'))
>> +stub_ss.add(files('icount.c'))
>> +stub_ss.add(files('dump.c'))
>> +stub_ss.add(files('error-printf.c'))
> 
> ...> due to a failed git-rebase?
> 

You're right, fixed this!

>>    stub_ss.add(files('fdset.c'))
>>    stub_ss.add(files('iothread-lock.c'))
>>    stub_ss.add(files('is-daemonized.c'))
>> @@ -28,7 +33,6 @@ endif
>>    if have_block or have_ga
>>      stub_ss.add(files('replay-tools.c'))
>>      # stubs for hooks in util/main-loop.c, util/async.c etc.
>> -  stub_ss.add(files('cpus-get-virtual-clock.c'))
>>      stub_ss.add(files('icount.c'))
>>      stub_ss.add(files('graph-lock.c'))
>>      if linux_io_uring.found()
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
  2024-05-28 17:11   ` Alex Bennée
  2024-05-29 12:29   ` Philippe Mathieu-Daudé
@ 2024-05-30  6:30   ` Paolo Bonzini
  2024-05-30 17:18     ` Pierrick Bouvier
  2 siblings, 1 reply; 21+ messages in thread
From: Paolo Bonzini @ 2024-05-30  6:30 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alex Bennée, Alexandre Iooss, Richard Henderson,
	Philippe Mathieu-Daudé

On Fri, May 17, 2024 at 12:21 AM Pierrick Bouvier
<pierrick.bouvier@linaro.org> wrote:
> diff --git a/stubs/meson.build b/stubs/meson.build
> index 3b9d42023cb..672213b7482 100644
> --- a/stubs/meson.build
> +++ b/stubs/meson.build
> @@ -3,6 +3,11 @@
>  # below, so that it is clear who needs the stubbed functionality.
>
>  stub_ss.add(files('cpu-get-clock.c'))
> +stub_ss.add(files('cpus-virtual-clock.c'))
> +stub_ss.add(files('qemu-timer-notify-cb.c'))
> +stub_ss.add(files('icount.c'))
> +stub_ss.add(files('dump.c'))
> +stub_ss.add(files('error-printf.c'))

Was there a problem when rebasing?

Paolo

>  stub_ss.add(files('fdset.c'))
>  stub_ss.add(files('iothread-lock.c'))
>  stub_ss.add(files('is-daemonized.c'))
> @@ -28,7 +33,6 @@ endif
>  if have_block or have_ga
>    stub_ss.add(files('replay-tools.c'))
>    # stubs for hooks in util/main-loop.c, util/async.c etc.
> -  stub_ss.add(files('cpus-get-virtual-clock.c'))
>    stub_ss.add(files('icount.c'))
>    stub_ss.add(files('graph-lock.c'))
>    if linux_io_uring.found()
> --
> 2.39.2
>



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp
  2024-05-16 22:20 ` [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp Pierrick Bouvier
@ 2024-05-30  6:32   ` Paolo Bonzini
  2024-05-30 17:41     ` Pierrick Bouvier
  0 siblings, 1 reply; 21+ messages in thread
From: Paolo Bonzini @ 2024-05-30  6:32 UTC (permalink / raw)
  To: Pierrick Bouvier
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alex Bennée, Alexandre Iooss, Richard Henderson

On Fri, May 17, 2024 at 12:21 AM Pierrick Bouvier
<pierrick.bouvier@linaro.org> wrote:
>
> From: Alex Bennée <alex.bennee@linaro.org>
>
> This generalises the qtest_clock_warp code to use the AccelOps
> handlers for updating its own sense of time. This will make the next
> patch, which moves the warp code, closer to pure code motion.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Acked-by: Thomas Huth <thuth@redhat.com>
> ---
>  include/sysemu/qtest.h | 1 +
>  accel/qtest/qtest.c    | 1 +
>  system/qtest.c         | 6 +++---
>  3 files changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h
> index b5d5fd34637..45f3b7e1df5 100644
> --- a/include/sysemu/qtest.h
> +++ b/include/sysemu/qtest.h
> @@ -36,6 +36,7 @@ void qtest_server_set_send_handler(void (*send)(void *, const char *),
>  void qtest_server_inproc_recv(void *opaque, const char *buf);
>
>  int64_t qtest_get_virtual_clock(void);
> +void qtest_set_virtual_clock(int64_t count);

You can move qtest_get_virtual_clock/qtest_set_virtual_clock to
accel/qtest/qtest.c instead, and make them static.

They are not used anymore in system/qtest.c, and it actually makes a
lot more sense that they aren't.

Paolo

>  #endif
>
>  #endif
> diff --git a/accel/qtest/qtest.c b/accel/qtest/qtest.c
> index f6056ac8361..53182e6c2ae 100644
> --- a/accel/qtest/qtest.c
> +++ b/accel/qtest/qtest.c
> @@ -52,6 +52,7 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
>
>      ops->create_vcpu_thread = dummy_start_vcpu_thread;
>      ops->get_virtual_clock = qtest_get_virtual_clock;
> +    ops->set_virtual_clock = qtest_set_virtual_clock;
>  };
>
>  static const TypeInfo qtest_accel_ops_type = {
> diff --git a/system/qtest.c b/system/qtest.c
> index 6da58b3874e..ee8b139e982 100644
> --- a/system/qtest.c
> +++ b/system/qtest.c
> @@ -332,14 +332,14 @@ int64_t qtest_get_virtual_clock(void)
>      return qatomic_read_i64(&qtest_clock_counter);
>  }
>
> -static void qtest_set_virtual_clock(int64_t count)
> +void qtest_set_virtual_clock(int64_t count)
>  {
>      qatomic_set_i64(&qtest_clock_counter, count);
>  }
>
>  static void qtest_clock_warp(int64_t dest)
>  {
> -    int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
> +    int64_t clock = cpus_get_virtual_clock();
>      AioContext *aio_context;
>      assert(qtest_enabled());
>      aio_context = qemu_get_aio_context();
> @@ -348,7 +348,7 @@ static void qtest_clock_warp(int64_t dest)
>                                                        QEMU_TIMER_ATTR_ALL);
>          int64_t warp = qemu_soonest_timeout(dest - clock, deadline);
>
> -        qtest_set_virtual_clock(qtest_get_virtual_clock() + warp);
> +        cpus_set_virtual_clock(cpus_get_virtual_clock() + warp);
>
>          qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
>          timerlist_run_timers(aio_context->tlg.tl[QEMU_CLOCK_VIRTUAL]);
> --
> 2.39.2
>



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/5] sysemu: add set_virtual_time to accel ops
  2024-05-30  6:30   ` Paolo Bonzini
@ 2024-05-30 17:18     ` Pierrick Bouvier
  0 siblings, 0 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-30 17:18 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alex Bennée, Alexandre Iooss, Richard Henderson,
	Philippe Mathieu-Daudé

On 5/29/24 23:30, Paolo Bonzini wrote:
> On Fri, May 17, 2024 at 12:21 AM Pierrick Bouvier
> <pierrick.bouvier@linaro.org> wrote:
>> diff --git a/stubs/meson.build b/stubs/meson.build
>> index 3b9d42023cb..672213b7482 100644
>> --- a/stubs/meson.build
>> +++ b/stubs/meson.build
>> @@ -3,6 +3,11 @@
>>   # below, so that it is clear who needs the stubbed functionality.
>>
>>   stub_ss.add(files('cpu-get-clock.c'))
>> +stub_ss.add(files('cpus-virtual-clock.c'))
>> +stub_ss.add(files('qemu-timer-notify-cb.c'))
>> +stub_ss.add(files('icount.c'))
>> +stub_ss.add(files('dump.c'))
>> +stub_ss.add(files('error-printf.c'))
> 
> Was there a problem when rebasing?
> 
> Paolo
> 

Hi Paolo,

Yes, Philippe made the same comment. I'll fix it in next revision.
Thanks!

>>   stub_ss.add(files('fdset.c'))
>>   stub_ss.add(files('iothread-lock.c'))
>>   stub_ss.add(files('is-daemonized.c'))
>> @@ -28,7 +33,6 @@ endif
>>   if have_block or have_ga
>>     stub_ss.add(files('replay-tools.c'))
>>     # stubs for hooks in util/main-loop.c, util/async.c etc.
>> -  stub_ss.add(files('cpus-get-virtual-clock.c'))
>>     stub_ss.add(files('icount.c'))
>>     stub_ss.add(files('graph-lock.c'))
>>     if linux_io_uring.found()
>> --
>> 2.39.2
>>
> 


* Re: [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp
  2024-05-30  6:32   ` Paolo Bonzini
@ 2024-05-30 17:41     ` Pierrick Bouvier
  0 siblings, 0 replies; 21+ messages in thread
From: Pierrick Bouvier @ 2024-05-30 17:41 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: qemu-devel, Thomas Huth, Laurent Vivier, Mahmoud Mandour,
	Alex Bennée, Alexandre Iooss, Richard Henderson

On 5/29/24 23:32, Paolo Bonzini wrote:
> On Fri, May 17, 2024 at 12:21 AM Pierrick Bouvier
> <pierrick.bouvier@linaro.org> wrote:
>>
>> From: Alex Bennée <alex.bennee@linaro.org>
>>
>> This generalises the qtest_clock_warp code to use the AccelOps
>> handlers for updating its own sense of time. This makes the next
>> patch, which moves the warp code, closer to pure code motion.
>>
>> From: Alex Bennée <alex.bennee@linaro.org>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> Acked-by: Thomas Huth <thuth@redhat.com>
>> ---
>>   include/sysemu/qtest.h | 1 +
>>   accel/qtest/qtest.c    | 1 +
>>   system/qtest.c         | 6 +++---
>>   3 files changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/sysemu/qtest.h b/include/sysemu/qtest.h
>> index b5d5fd34637..45f3b7e1df5 100644
>> --- a/include/sysemu/qtest.h
>> +++ b/include/sysemu/qtest.h
>> @@ -36,6 +36,7 @@ void qtest_server_set_send_handler(void (*send)(void *, const char *),
>>   void qtest_server_inproc_recv(void *opaque, const char *buf);
>>
>>   int64_t qtest_get_virtual_clock(void);
>> +void qtest_set_virtual_clock(int64_t count);
> 
> You can move qtest_get_virtual_clock/qtest_set_virtual_clock to
> accel/qtest/qtest.c instead, and make them static.
> 
> They are not used anymore in system/qtest.c, and it actually makes a
> lot more sense that they aren't.
> 

Changed for next revision,
thanks

> Paolo
> 
>>   #endif
>>
>>   #endif
>> diff --git a/accel/qtest/qtest.c b/accel/qtest/qtest.c
>> index f6056ac8361..53182e6c2ae 100644
>> --- a/accel/qtest/qtest.c
>> +++ b/accel/qtest/qtest.c
>> @@ -52,6 +52,7 @@ static void qtest_accel_ops_class_init(ObjectClass *oc, void *data)
>>
>>       ops->create_vcpu_thread = dummy_start_vcpu_thread;
>>       ops->get_virtual_clock = qtest_get_virtual_clock;
>> +    ops->set_virtual_clock = qtest_set_virtual_clock;
>>   };
>>
>>   static const TypeInfo qtest_accel_ops_type = {
>> diff --git a/system/qtest.c b/system/qtest.c
>> index 6da58b3874e..ee8b139e982 100644
>> --- a/system/qtest.c
>> +++ b/system/qtest.c
>> @@ -332,14 +332,14 @@ int64_t qtest_get_virtual_clock(void)
>>       return qatomic_read_i64(&qtest_clock_counter);
>>   }
>>
>> -static void qtest_set_virtual_clock(int64_t count)
>> +void qtest_set_virtual_clock(int64_t count)
>>   {
>>       qatomic_set_i64(&qtest_clock_counter, count);
>>   }
>>
>>   static void qtest_clock_warp(int64_t dest)
>>   {
>> -    int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
>> +    int64_t clock = cpus_get_virtual_clock();
>>       AioContext *aio_context;
>>       assert(qtest_enabled());
>>       aio_context = qemu_get_aio_context();
>> @@ -348,7 +348,7 @@ static void qtest_clock_warp(int64_t dest)
>>                                                         QEMU_TIMER_ATTR_ALL);
>>           int64_t warp = qemu_soonest_timeout(dest - clock, deadline);
>>
>> -        qtest_set_virtual_clock(qtest_get_virtual_clock() + warp);
>> +        cpus_set_virtual_clock(cpus_get_virtual_clock() + warp);
>>
>>           qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
>>           timerlist_run_timers(aio_context->tlg.tl[QEMU_CLOCK_VIRTUAL]);
>> --
>> 2.39.2
>>
> 


end of thread, other threads:[~2024-05-30 17:42 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-16 22:20 [PATCH 0/5] Implement icount=auto using TCG Plugins Pierrick Bouvier
2024-05-16 22:20 ` [PATCH 1/5] sysemu: add set_virtual_time to accel ops Pierrick Bouvier
2024-05-28 17:11   ` Alex Bennée
2024-05-28 18:57     ` Pierrick Bouvier
2024-05-29 12:29   ` Philippe Mathieu-Daudé
2024-05-29 18:57     ` Pierrick Bouvier
2024-05-30  6:30   ` Paolo Bonzini
2024-05-30 17:18     ` Pierrick Bouvier
2024-05-16 22:20 ` [PATCH 2/5] qtest: use cpu interface in qtest_clock_warp Pierrick Bouvier
2024-05-30  6:32   ` Paolo Bonzini
2024-05-30 17:41     ` Pierrick Bouvier
2024-05-16 22:20 ` [PATCH 3/5] sysemu: generalise qtest_warp_clock as qemu_clock_advance_virtual_time Pierrick Bouvier
2024-05-16 22:20 ` [PATCH 4/5] plugins: add time control API Pierrick Bouvier
2024-05-16 22:20 ` [PATCH 5/5] contrib/plugins: add ips plugin example for cost modeling Pierrick Bouvier
2024-05-28 19:14   ` Alex Bennée
2024-05-28 19:26     ` Pierrick Bouvier
2024-05-28 19:57       ` Alex Bennée
2024-05-28 20:26         ` Pierrick Bouvier
2024-05-29 12:13           ` Alex Bennée
2024-05-29 14:33             ` Pierrick Bouvier
2024-05-29 15:39               ` Alex Bennée
