* [PATCH v7 0/6] plugins: access values during a memory read/write
@ 2024-07-24 19:47 Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 1/6] plugins: save value during memory accesses Pierrick Bouvier
` (6 more replies)
0 siblings, 7 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum
This series allows plugins to know which value is read/written during a memory
access.
For every memory access, we now copy this value before calling the mem
callbacks, and those can query it using the new API function:
- qemu_plugin_mem_get_value
The mem plugin was extended to print accesses, and a new test was added to
check the functionality works as expected. In the process, a bug was found
where callbacks were not being called as expected.
This will open new use cases for plugins, such as tracking specific values in
memory.
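As an illustration, a plugin could watch for a specific value from its memory
callback with something like this (a minimal sketch; the callback is assumed
to be registered per instruction with qemu_plugin_register_vcpu_mem_cb(), as
existing plugins do, and 0xdeadbeef is an arbitrary example value):

    #include <glib.h>
    #include <inttypes.h>
    #include <qemu-plugin.h>

    static void mem_cb(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                       uint64_t vaddr, void *udata)
    {
        /* only meaningful while inside a memory callback */
        qemu_plugin_mem_value v = qemu_plugin_mem_get_value(info);

        if (v.type == QEMU_PLUGIN_MEM_VALUE_U64 && v.data.u64 == 0xdeadbeef) {
            g_autoptr(GString) s = g_string_new(NULL);
            g_string_printf(s, "watched value accessed at 0x%" PRIx64 "\n",
                            vaddr);
            qemu_plugin_outs(s->str);
        }
    }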
Needs review:
Patch 6: tests/tcg/multiarch: add test for plugin memory access
v7
- renamed variable for adding plugins tests in Makefile
- do not run any command when plugin output should not be checked (thanks Alex)
- add LICENSE + summary for tests/tcg/multiarch/test-plugin-mem-access.c
- test for mem access is now multiarch (tested on aarch64, x86_64, i386)
v6
- fix big endian offset for plugin_gen_mem_callbacks_i32
v5
- fixed width output for mem values in mem plugin
- move plugin_mem_value to CPUNegativeOffset
- tcg/tcg-op-ldst.c: only store word size mem access (do not set upper bits)
v4
- fix prototype for stub qemu_plugin_vcpu_mem_cb (inverted low/high parameter
 names)
- link gitlab bugs resolved (thanks @Anton Kochkov for reporting)
https://gitlab.com/qemu-project/qemu/-/issues/1719
https://gitlab.com/qemu-project/qemu/-/issues/2152
v3
- simplify API: return an algebraic data type for the accessed value;
 this can easily be extended when QEMU supports wider accesses
- fix Makefile test (use quiet-command instead of manually running the command)
- rename upper/lower to high/low
- reorder function parameters and code to low/high instead of high/low, to
 follow the current convention in the QEMU codebase
- add info when printing memory accesses (insn_vaddr,mem_vaddr,mem_hwaddr)
v2
- fix compilation on aarch64 (missing undef in accel/tcg/atomic_template.h)
Pierrick Bouvier (6):
plugins: save value during memory accesses
plugins: extend API to get latest memory value accessed
tests/tcg: add mechanism to run specific tests with plugins
tests/tcg: allow to check output of plugins
tests/plugin/mem: add option to print memory accesses
tests/tcg/multiarch: add test for plugin memory access
accel/tcg/atomic_template.h | 66 ++++++-
include/hw/core/cpu.h | 4 +
include/qemu/plugin.h | 4 +
include/qemu/qemu-plugin.h | 32 ++++
plugins/api.c | 33 ++++
plugins/core.c | 6 +
tcg/tcg-op-ldst.c | 66 ++++++-
tests/plugin/mem.c | 69 ++++++-
tests/tcg/multiarch/test-plugin-mem-access.c | 175 ++++++++++++++++++
accel/tcg/atomic_common.c.inc | 13 +-
accel/tcg/ldst_common.c.inc | 38 ++--
plugins/qemu-plugins.symbols | 1 +
tests/tcg/Makefile.target | 12 +-
tests/tcg/multiarch/Makefile.target | 7 +
.../tcg/multiarch/check-plugin-mem-access.sh | 30 +++
15 files changed, 524 insertions(+), 32 deletions(-)
create mode 100644 tests/tcg/multiarch/test-plugin-mem-access.c
create mode 100755 tests/tcg/multiarch/check-plugin-mem-access.sh
--
2.39.2
* [PATCH v7 1/6] plugins: save value during memory accesses
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
@ 2024-07-24 19:47 ` Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 2/6] plugins: extend API to get latest memory value accessed Pierrick Bouvier
` (5 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum
Different code paths handle memory accesses:
- tcg generated code
- load/store helpers
- atomic helpers
Each of these paths is updated to save the accessed value in
cpu->neg.plugin_mem_value_{high,low} before invoking the memory callbacks.
Values are written only for the accessed word size (upper bits are not set).
Atomic operations perform a read and a write at the same time, so we generate
two memory callbacks instead of one, to allow plugins to access both distinct
values.
For now, accesses are at most 128 bits wide, so the value is split into two
64-bit words. When QEMU supports wider operations, this can be reconsidered.
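To illustrate: for an atomic compare-and-swap that replaces 0x42 with 0xf1,
a plugin registered for QEMU_PLUGIN_MEM_RW now observes two callbacks for the
single instruction (a sketch with hypothetical values):

    read callback:  plugin_mem_value_low == 0x42 (old value)
    write callback: plugin_mem_value_low == 0xf1 (new value)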
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
accel/tcg/atomic_template.h | 66 ++++++++++++++++++++++++++++++-----
include/hw/core/cpu.h | 4 +++
include/qemu/plugin.h | 4 +++
plugins/core.c | 6 ++++
tcg/tcg-op-ldst.c | 66 +++++++++++++++++++++++++++++++----
accel/tcg/atomic_common.c.inc | 13 ++++++-
accel/tcg/ldst_common.c.inc | 38 ++++++++++++--------
7 files changed, 167 insertions(+), 30 deletions(-)
diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h
index 1dc2151dafd..89593b2502f 100644
--- a/accel/tcg/atomic_template.h
+++ b/accel/tcg/atomic_template.h
@@ -53,6 +53,14 @@
# error unsupported data size
#endif
+#if DATA_SIZE == 16
+# define VALUE_LOW(val) int128_getlo(val)
+# define VALUE_HIGH(val) int128_gethi(val)
+#else
+# define VALUE_LOW(val) val
+# define VALUE_HIGH(val) 0
+#endif
+
#if DATA_SIZE >= 4
# define ABI_TYPE DATA_TYPE
#else
@@ -83,7 +91,12 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ret = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
#endif
ATOMIC_MMU_CLEANUP;
- atomic_trace_rmw_post(env, addr, oi);
+ atomic_trace_rmw_post(env, addr,
+ VALUE_LOW(ret),
+ VALUE_HIGH(ret),
+ VALUE_LOW(newv),
+ VALUE_HIGH(newv),
+ oi);
return ret;
}
@@ -97,7 +110,12 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
ret = qatomic_xchg__nocheck(haddr, val);
ATOMIC_MMU_CLEANUP;
- atomic_trace_rmw_post(env, addr, oi);
+ atomic_trace_rmw_post(env, addr,
+ VALUE_LOW(ret),
+ VALUE_HIGH(ret),
+ VALUE_LOW(val),
+ VALUE_HIGH(val),
+ oi);
return ret;
}
@@ -109,7 +127,12 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, val); \
ATOMIC_MMU_CLEANUP; \
- atomic_trace_rmw_post(env, addr, oi); \
+ atomic_trace_rmw_post(env, addr, \
+ VALUE_LOW(ret), \
+ VALUE_HIGH(ret), \
+ VALUE_LOW(val), \
+ VALUE_HIGH(val), \
+ oi); \
return ret; \
}
@@ -145,7 +168,12 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
cmp = qatomic_cmpxchg__nocheck(haddr, old, new); \
} while (cmp != old); \
ATOMIC_MMU_CLEANUP; \
- atomic_trace_rmw_post(env, addr, oi); \
+ atomic_trace_rmw_post(env, addr, \
+ VALUE_LOW(old), \
+ VALUE_HIGH(old), \
+ VALUE_LOW(xval), \
+ VALUE_HIGH(xval), \
+ oi); \
return RET; \
}
@@ -188,7 +216,12 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr,
ret = qatomic_cmpxchg__nocheck(haddr, BSWAP(cmpv), BSWAP(newv));
#endif
ATOMIC_MMU_CLEANUP;
- atomic_trace_rmw_post(env, addr, oi);
+ atomic_trace_rmw_post(env, addr,
+ VALUE_LOW(ret),
+ VALUE_HIGH(ret),
+ VALUE_LOW(newv),
+ VALUE_HIGH(newv),
+ oi);
return BSWAP(ret);
}
@@ -202,7 +235,12 @@ ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val,
ret = qatomic_xchg__nocheck(haddr, BSWAP(val));
ATOMIC_MMU_CLEANUP;
- atomic_trace_rmw_post(env, addr, oi);
+ atomic_trace_rmw_post(env, addr,
+ VALUE_LOW(ret),
+ VALUE_HIGH(ret),
+ VALUE_LOW(val),
+ VALUE_HIGH(val),
+ oi);
return BSWAP(ret);
}
@@ -214,7 +252,12 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \
ret = qatomic_##X(haddr, BSWAP(val)); \
ATOMIC_MMU_CLEANUP; \
- atomic_trace_rmw_post(env, addr, oi); \
+ atomic_trace_rmw_post(env, addr, \
+ VALUE_LOW(ret), \
+ VALUE_HIGH(ret), \
+ VALUE_LOW(val), \
+ VALUE_HIGH(val), \
+ oi); \
return BSWAP(ret); \
}
@@ -247,7 +290,12 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \
ldn = qatomic_cmpxchg__nocheck(haddr, ldo, BSWAP(new)); \
} while (ldo != ldn); \
ATOMIC_MMU_CLEANUP; \
- atomic_trace_rmw_post(env, addr, oi); \
+ atomic_trace_rmw_post(env, addr, \
+ VALUE_LOW(old), \
+ VALUE_HIGH(old), \
+ VALUE_LOW(xval), \
+ VALUE_HIGH(xval), \
+ oi); \
return RET; \
}
@@ -281,3 +329,5 @@ GEN_ATOMIC_HELPER_FN(add_fetch, ADD, DATA_TYPE, new)
#undef SUFFIX
#undef DATA_SIZE
#undef SHIFT
+#undef VALUE_LOW
+#undef VALUE_HIGH
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 1c9c775df65..04e9ad49968 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -350,6 +350,8 @@ typedef union IcountDecr {
* from CPUArchState, via small negative offsets.
* @can_do_io: True if memory-mapped IO is allowed.
* @plugin_mem_cbs: active plugin memory callbacks
+ * @plugin_mem_value_low: 64 lower bits of latest accessed mem value.
+ * @plugin_mem_value_high: 64 higher bits of latest accessed mem value.
*/
typedef struct CPUNegativeOffsetState {
CPUTLB tlb;
@@ -358,6 +360,8 @@ typedef struct CPUNegativeOffsetState {
* The callback pointer are accessed via TCG (see gen_empty_mem_helper).
*/
GArray *plugin_mem_cbs;
+ uint64_t plugin_mem_value_low;
+ uint64_t plugin_mem_value_high;
#endif
IcountDecr icount_decr;
bool can_do_io;
diff --git a/include/qemu/plugin.h b/include/qemu/plugin.h
index af5f9db4692..9726a9ebf36 100644
--- a/include/qemu/plugin.h
+++ b/include/qemu/plugin.h
@@ -167,6 +167,8 @@ qemu_plugin_vcpu_syscall(CPUState *cpu, int64_t num, uint64_t a1,
void qemu_plugin_vcpu_syscall_ret(CPUState *cpu, int64_t num, int64_t ret);
void qemu_plugin_vcpu_mem_cb(CPUState *cpu, uint64_t vaddr,
+ uint64_t value_low,
+ uint64_t value_high,
MemOpIdx oi, enum qemu_plugin_mem_rw rw);
void qemu_plugin_flush_cb(void);
@@ -251,6 +253,8 @@ void qemu_plugin_vcpu_syscall_ret(CPUState *cpu, int64_t num, int64_t ret)
{ }
static inline void qemu_plugin_vcpu_mem_cb(CPUState *cpu, uint64_t vaddr,
+ uint64_t value_low,
+ uint64_t value_high,
MemOpIdx oi,
enum qemu_plugin_mem_rw rw)
{ }
diff --git a/plugins/core.c b/plugins/core.c
index e31a5c1c9cc..912c0da4206 100644
--- a/plugins/core.c
+++ b/plugins/core.c
@@ -600,6 +600,8 @@ void exec_inline_op(enum plugin_dyn_cb_type type,
}
void qemu_plugin_vcpu_mem_cb(CPUState *cpu, uint64_t vaddr,
+ uint64_t value_low,
+ uint64_t value_high,
MemOpIdx oi, enum qemu_plugin_mem_rw rw)
{
GArray *arr = cpu->neg.plugin_mem_cbs;
@@ -608,6 +610,10 @@ void qemu_plugin_vcpu_mem_cb(CPUState *cpu, uint64_t vaddr,
if (arr == NULL) {
return;
}
+
+ cpu->neg.plugin_mem_value_low = value_low;
+ cpu->neg.plugin_mem_value_high = value_high;
+
for (i = 0; i < arr->len; i++) {
struct qemu_plugin_dyn_cb *cb =
&g_array_index(arr, struct qemu_plugin_dyn_cb, i);
diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index 85101602581..23dc807f119 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -148,11 +148,11 @@ static TCGv_i64 plugin_maybe_preserve_addr(TCGTemp *addr)
return NULL;
}
+#ifdef CONFIG_PLUGIN
static void
plugin_gen_mem_callbacks(TCGv_i64 copy_addr, TCGTemp *orig_addr, MemOpIdx oi,
enum qemu_plugin_mem_rw rw)
{
-#ifdef CONFIG_PLUGIN
if (tcg_ctx->plugin_insn != NULL) {
qemu_plugin_meminfo_t info = make_plugin_meminfo(oi, rw);
@@ -172,6 +172,54 @@ plugin_gen_mem_callbacks(TCGv_i64 copy_addr, TCGTemp *orig_addr, MemOpIdx oi,
}
}
}
+}
+#endif
+
+static void
+plugin_gen_mem_callbacks_i32(TCGv_i32 val,
+ TCGv_i64 copy_addr, TCGTemp *orig_addr,
+ MemOpIdx oi, enum qemu_plugin_mem_rw rw)
+{
+#ifdef CONFIG_PLUGIN
+ if (tcg_ctx->plugin_insn != NULL) {
+ tcg_gen_st_i32(val, tcg_env,
+ offsetof(CPUState, neg.plugin_mem_value_low) -
+ sizeof(CPUState) + (HOST_BIG_ENDIAN * 4));
+ plugin_gen_mem_callbacks(copy_addr, orig_addr, oi, rw);
+ }
+#endif
+}
+
+static void
+plugin_gen_mem_callbacks_i64(TCGv_i64 val,
+ TCGv_i64 copy_addr, TCGTemp *orig_addr,
+ MemOpIdx oi, enum qemu_plugin_mem_rw rw)
+{
+#ifdef CONFIG_PLUGIN
+ if (tcg_ctx->plugin_insn != NULL) {
+ tcg_gen_st_i64(val, tcg_env,
+ offsetof(CPUState, neg.plugin_mem_value_low) -
+ sizeof(CPUState));
+ plugin_gen_mem_callbacks(copy_addr, orig_addr, oi, rw);
+ }
+#endif
+}
+
+static void
+plugin_gen_mem_callbacks_i128(TCGv_i128 val,
+ TCGv_i64 copy_addr, TCGTemp *orig_addr,
+ MemOpIdx oi, enum qemu_plugin_mem_rw rw)
+{
+#ifdef CONFIG_PLUGIN
+ if (tcg_ctx->plugin_insn != NULL) {
+ tcg_gen_st_i64(TCGV128_LOW(val), tcg_env,
+ offsetof(CPUState, neg.plugin_mem_value_low) -
+ sizeof(CPUState));
+ tcg_gen_st_i64(TCGV128_HIGH(val), tcg_env,
+ offsetof(CPUState, neg.plugin_mem_value_high) -
+ sizeof(CPUState));
+ plugin_gen_mem_callbacks(copy_addr, orig_addr, oi, rw);
+ }
#endif
}
@@ -203,7 +251,8 @@ static void tcg_gen_qemu_ld_i32_int(TCGv_i32 val, TCGTemp *addr,
opc = INDEX_op_qemu_ld_a64_i32;
}
gen_ldst(opc, tcgv_i32_temp(val), NULL, addr, oi);
- plugin_gen_mem_callbacks(copy_addr, addr, orig_oi, QEMU_PLUGIN_MEM_R);
+ plugin_gen_mem_callbacks_i32(val, copy_addr, addr, orig_oi,
+ QEMU_PLUGIN_MEM_R);
if ((orig_memop ^ memop) & MO_BSWAP) {
switch (orig_memop & MO_SIZE) {
@@ -271,7 +320,7 @@ static void tcg_gen_qemu_st_i32_int(TCGv_i32 val, TCGTemp *addr,
}
}
gen_ldst(opc, tcgv_i32_temp(val), NULL, addr, oi);
- plugin_gen_mem_callbacks(NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W);
+ plugin_gen_mem_callbacks_i32(val, NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W);
if (swap) {
tcg_temp_free_i32(swap);
@@ -324,7 +373,8 @@ static void tcg_gen_qemu_ld_i64_int(TCGv_i64 val, TCGTemp *addr,
opc = INDEX_op_qemu_ld_a64_i64;
}
gen_ldst_i64(opc, val, addr, oi);
- plugin_gen_mem_callbacks(copy_addr, addr, orig_oi, QEMU_PLUGIN_MEM_R);
+ plugin_gen_mem_callbacks_i64(val, copy_addr, addr, orig_oi,
+ QEMU_PLUGIN_MEM_R);
if ((orig_memop ^ memop) & MO_BSWAP) {
int flags = (orig_memop & MO_SIGN
@@ -396,7 +446,7 @@ static void tcg_gen_qemu_st_i64_int(TCGv_i64 val, TCGTemp *addr,
opc = INDEX_op_qemu_st_a64_i64;
}
gen_ldst_i64(opc, val, addr, oi);
- plugin_gen_mem_callbacks(NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W);
+ plugin_gen_mem_callbacks_i64(val, NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W);
if (swap) {
tcg_temp_free_i64(swap);
@@ -606,7 +656,8 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr,
tcg_constant_i32(orig_oi));
}
- plugin_gen_mem_callbacks(ext_addr, addr, orig_oi, QEMU_PLUGIN_MEM_R);
+ plugin_gen_mem_callbacks_i128(val, ext_addr, addr, orig_oi,
+ QEMU_PLUGIN_MEM_R);
}
void tcg_gen_qemu_ld_i128_chk(TCGv_i128 val, TCGTemp *addr, TCGArg idx,
@@ -722,7 +773,8 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr,
tcg_constant_i32(orig_oi));
}
- plugin_gen_mem_callbacks(ext_addr, addr, orig_oi, QEMU_PLUGIN_MEM_W);
+ plugin_gen_mem_callbacks_i128(val, ext_addr, addr, orig_oi,
+ QEMU_PLUGIN_MEM_W);
}
void tcg_gen_qemu_st_i128_chk(TCGv_i128 val, TCGTemp *addr, TCGArg idx,
diff --git a/accel/tcg/atomic_common.c.inc b/accel/tcg/atomic_common.c.inc
index 95a5c5ff12d..6056598c23d 100644
--- a/accel/tcg/atomic_common.c.inc
+++ b/accel/tcg/atomic_common.c.inc
@@ -14,9 +14,20 @@
*/
static void atomic_trace_rmw_post(CPUArchState *env, uint64_t addr,
+ uint64_t read_value_low,
+ uint64_t read_value_high,
+ uint64_t write_value_low,
+ uint64_t write_value_high,
MemOpIdx oi)
{
- qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_RW);
+ if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
+ read_value_low, read_value_high,
+ oi, QEMU_PLUGIN_MEM_R);
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
+ write_value_low, write_value_high,
+ oi, QEMU_PLUGIN_MEM_W);
+ }
}
/*
diff --git a/accel/tcg/ldst_common.c.inc b/accel/tcg/ldst_common.c.inc
index 87ceb954873..ebbf380d767 100644
--- a/accel/tcg/ldst_common.c.inc
+++ b/accel/tcg/ldst_common.c.inc
@@ -123,10 +123,15 @@ void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
* Load helpers for cpu_ldst.h
*/
-static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
+static void plugin_load_cb(CPUArchState *env, abi_ptr addr,
+ uint64_t value_low,
+ uint64_t value_high,
+ MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
- qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
+ value_low, value_high,
+ oi, QEMU_PLUGIN_MEM_R);
}
}
@@ -136,7 +141,7 @@ uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
- plugin_load_cb(env, addr, oi);
+ plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
@@ -147,7 +152,7 @@ uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
- plugin_load_cb(env, addr, oi);
+ plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
@@ -158,7 +163,7 @@ uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
- plugin_load_cb(env, addr, oi);
+ plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
@@ -169,7 +174,7 @@ uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
- plugin_load_cb(env, addr, oi);
+ plugin_load_cb(env, addr, ret, 0, oi);
return ret;
}
@@ -180,7 +185,7 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
- plugin_load_cb(env, addr, oi);
+ plugin_load_cb(env, addr, int128_getlo(ret), int128_gethi(ret), oi);
return ret;
}
@@ -188,10 +193,15 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
* Store helpers for cpu_ldst.h
*/
-static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
+static void plugin_store_cb(CPUArchState *env, abi_ptr addr,
+ uint64_t value_low,
+ uint64_t value_high,
+ MemOpIdx oi)
{
if (cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
- qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
+ qemu_plugin_vcpu_mem_cb(env_cpu(env), addr,
+ value_low, value_high,
+ oi, QEMU_PLUGIN_MEM_W);
}
}
@@ -199,7 +209,7 @@ void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
MemOpIdx oi, uintptr_t retaddr)
{
helper_stb_mmu(env, addr, val, oi, retaddr);
- plugin_store_cb(env, addr, oi);
+ plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
@@ -207,7 +217,7 @@ void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
- plugin_store_cb(env, addr, oi);
+ plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
@@ -215,7 +225,7 @@ void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
- plugin_store_cb(env, addr, oi);
+ plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
@@ -223,7 +233,7 @@ void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
- plugin_store_cb(env, addr, oi);
+ plugin_store_cb(env, addr, val, 0, oi);
}
void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
@@ -231,7 +241,7 @@ void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
{
tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
- plugin_store_cb(env, addr, oi);
+ plugin_store_cb(env, addr, int128_getlo(val), int128_gethi(val), oi);
}
/*
--
2.39.2
* [PATCH v7 2/6] plugins: extend API to get latest memory value accessed
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 1/6] plugins: save value during memory accesses Pierrick Bouvier
@ 2024-07-24 19:47 ` Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 3/6] tests/tcg: add mechanism to run specific tests with plugins Pierrick Bouvier
` (4 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum, Xingtao Yao
This value can be accessed only during a memory callback, using the
new qemu_plugin_mem_get_value function.
The returned value type can be extended when QEMU supports accesses wider
than 128 bits.
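A callback consuming the returned struct can dispatch on the type field, for
example (a sketch; use_u64()/use_u128() are hypothetical consumers):

    qemu_plugin_mem_value v = qemu_plugin_mem_get_value(info);
    switch (v.type) {
    case QEMU_PLUGIN_MEM_VALUE_U64:
        use_u64(v.data.u64);
        break;
    case QEMU_PLUGIN_MEM_VALUE_U128:
        /* 128-bit accesses are delivered as two 64-bit halves */
        use_u128(v.data.u128.high, v.data.u128.low);
        break;
    default:
        /* u8/u16/u32 handled similarly */
        break;
    }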
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1719
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2152
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
include/qemu/qemu-plugin.h | 32 ++++++++++++++++++++++++++++++++
plugins/api.c | 33 +++++++++++++++++++++++++++++++++
plugins/qemu-plugins.symbols | 1 +
3 files changed, 66 insertions(+)
diff --git a/include/qemu/qemu-plugin.h b/include/qemu/qemu-plugin.h
index c71c705b699..649ce89815f 100644
--- a/include/qemu/qemu-plugin.h
+++ b/include/qemu/qemu-plugin.h
@@ -262,6 +262,29 @@ enum qemu_plugin_mem_rw {
QEMU_PLUGIN_MEM_RW,
};
+enum qemu_plugin_mem_value_type {
+ QEMU_PLUGIN_MEM_VALUE_U8,
+ QEMU_PLUGIN_MEM_VALUE_U16,
+ QEMU_PLUGIN_MEM_VALUE_U32,
+ QEMU_PLUGIN_MEM_VALUE_U64,
+ QEMU_PLUGIN_MEM_VALUE_U128,
+};
+
+/* typedef qemu_plugin_mem_value - value accessed during a load/store */
+typedef struct {
+ enum qemu_plugin_mem_value_type type;
+ union {
+ uint8_t u8;
+ uint16_t u16;
+ uint32_t u32;
+ uint64_t u64;
+ struct {
+ uint64_t low;
+ uint64_t high;
+ } u128;
+ } data;
+} qemu_plugin_mem_value;
+
/**
* enum qemu_plugin_cond - condition to enable callback
*
@@ -551,6 +574,15 @@ bool qemu_plugin_mem_is_big_endian(qemu_plugin_meminfo_t info);
QEMU_PLUGIN_API
bool qemu_plugin_mem_is_store(qemu_plugin_meminfo_t info);
+/**
+ * qemu_plugin_mem_get_value() - return last value loaded/stored
+ * @info: opaque memory transaction handle
+ *
+ * Returns: memory value
+ */
+QEMU_PLUGIN_API
+qemu_plugin_mem_value qemu_plugin_mem_get_value(qemu_plugin_meminfo_t info);
+
/**
* qemu_plugin_get_hwaddr() - return handle for memory operation
* @info: opaque memory info structure
diff --git a/plugins/api.c b/plugins/api.c
index 2ff13d09de6..3316d4a04d4 100644
--- a/plugins/api.c
+++ b/plugins/api.c
@@ -351,6 +351,39 @@ bool qemu_plugin_mem_is_store(qemu_plugin_meminfo_t info)
return get_plugin_meminfo_rw(info) & QEMU_PLUGIN_MEM_W;
}
+qemu_plugin_mem_value qemu_plugin_mem_get_value(qemu_plugin_meminfo_t info)
+{
+ uint64_t low = current_cpu->neg.plugin_mem_value_low;
+ qemu_plugin_mem_value value;
+
+ switch (qemu_plugin_mem_size_shift(info)) {
+ case 0:
+ value.type = QEMU_PLUGIN_MEM_VALUE_U8;
+ value.data.u8 = (uint8_t)low;
+ break;
+ case 1:
+ value.type = QEMU_PLUGIN_MEM_VALUE_U16;
+ value.data.u16 = (uint16_t)low;
+ break;
+ case 2:
+ value.type = QEMU_PLUGIN_MEM_VALUE_U32;
+ value.data.u32 = (uint32_t)low;
+ break;
+ case 3:
+ value.type = QEMU_PLUGIN_MEM_VALUE_U64;
+ value.data.u64 = low;
+ break;
+ case 4:
+ value.type = QEMU_PLUGIN_MEM_VALUE_U128;
+ value.data.u128.low = low;
+ value.data.u128.high = current_cpu->neg.plugin_mem_value_high;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ return value;
+}
+
/*
* Virtual Memory queries
*/
diff --git a/plugins/qemu-plugins.symbols b/plugins/qemu-plugins.symbols
index ca773d8d9fe..eed9d8abd90 100644
--- a/plugins/qemu-plugins.symbols
+++ b/plugins/qemu-plugins.symbols
@@ -13,6 +13,7 @@
qemu_plugin_insn_size;
qemu_plugin_insn_symbol;
qemu_plugin_insn_vaddr;
+ qemu_plugin_mem_get_value;
qemu_plugin_mem_is_big_endian;
qemu_plugin_mem_is_sign_extended;
qemu_plugin_mem_is_store;
--
2.39.2
* [PATCH v7 3/6] tests/tcg: add mechanism to run specific tests with plugins
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 1/6] plugins: save value during memory accesses Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 2/6] plugins: extend API to get latest memory value accessed Pierrick Bouvier
@ 2024-07-24 19:47 ` Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 4/6] tests/tcg: allow to check output of plugins Pierrick Bouvier
` (3 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum, Xingtao Yao
Currently, only multiarch tests are run with plugins; we want to be able to
run per-arch tests with plugins too.
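For example (with a hypothetical test name), a target Makefile can opt a test
in with:

    ADDITIONAL_PLUGINS_TESTS += test-foo

after which the usual run-plugin-test-foo-with-<plugin>.so targets are
generated, exactly as for multiarch tests.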
Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
tests/tcg/Makefile.target | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/tcg/Makefile.target b/tests/tcg/Makefile.target
index cb8cfeb6dac..197d3de950b 100644
--- a/tests/tcg/Makefile.target
+++ b/tests/tcg/Makefile.target
@@ -152,10 +152,11 @@ PLUGINS=$(patsubst %.c, lib%.so, $(notdir $(wildcard $(PLUGIN_SRC)/*.c)))
# only expand MULTIARCH_TESTS which are common on most of our targets
# to avoid an exponential explosion as new tests are added. We also
# add some special helpers the run-plugin- rules can use below.
+# In addition, extra tests can be added using the ADDITIONAL_PLUGINS_TESTS variable.
ifneq ($(MULTIARCH_TESTS),)
$(foreach p,$(PLUGINS), \
- $(foreach t,$(MULTIARCH_TESTS),\
+ $(foreach t,$(MULTIARCH_TESTS) $(ADDITIONAL_PLUGINS_TESTS),\
$(eval run-plugin-$(t)-with-$(p): $t $p) \
$(eval RUN_TESTS+=run-plugin-$(t)-with-$(p))))
endif # MULTIARCH_TESTS
--
2.39.2
* [PATCH v7 4/6] tests/tcg: allow to check output of plugins
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
` (2 preceding siblings ...)
2024-07-24 19:47 ` [PATCH v7 3/6] tests/tcg: add mechanism to run specific tests with plugins Pierrick Bouvier
@ 2024-07-24 19:47 ` Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 5/6] tests/plugin/mem: add option to print memory accesses Pierrick Bouvier
` (2 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum, Xingtao Yao
A specific plugin test can now read and check the plugin output, to ensure
it contains the expected values.
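For example (with hypothetical names), a test can hook a checker script in
its Makefile with:

    run-plugin-test-foo-with-libmem.so: \
        CHECK_PLUGIN_OUTPUT_COMMAND=$(SRC_PATH)/tests/tcg/foo/check-foo.sh

The command is then invoked with the plugin output file ($*.pout) as its
single argument, and the run is reported as failing if it exits non-zero.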
Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
tests/tcg/Makefile.target | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/tests/tcg/Makefile.target b/tests/tcg/Makefile.target
index 197d3de950b..7cfbdead0bb 100644
--- a/tests/tcg/Makefile.target
+++ b/tests/tcg/Makefile.target
@@ -90,6 +90,7 @@ CFLAGS=
LDFLAGS=
QEMU_OPTS=
+CHECK_PLUGIN_OUTPUT_COMMAND=
# If TCG debugging, or TCI is enabled things are a lot slower
@@ -180,6 +181,10 @@ run-plugin-%:
-plugin $(PLUGIN_LIB)/$(call extract-plugin,$@)$(PLUGIN_ARGS) \
-d plugin -D $*.pout \
$(call strip-plugin,$<))
+ $(if $(CHECK_PLUGIN_OUTPUT_COMMAND), \
+ $(call quiet-command, $(CHECK_PLUGIN_OUTPUT_COMMAND) $*.pout, \
+ TEST, check plugin $(call extract-plugin,$@) output \
+ with $(call strip-plugin,$<)))
else
run-%: %
$(call run-test, $<, \
@@ -194,6 +199,10 @@ run-plugin-%:
-plugin $(PLUGIN_LIB)/$(call extract-plugin,$@)$(PLUGIN_ARGS) \
-d plugin -D $*.pout \
$(QEMU_OPTS) $(call strip-plugin,$<))
+ $(if $(CHECK_PLUGIN_OUTPUT_COMMAND), \
+ $(call quiet-command, $(CHECK_PLUGIN_OUTPUT_COMMAND) $*.pout, \
+ TEST, check plugin $(call extract-plugin,$@) output \
+ with $(call strip-plugin,$<)))
endif
gdb-%: %
--
2.39.2
* [PATCH v7 5/6] tests/plugin/mem: add option to print memory accesses
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
` (3 preceding siblings ...)
2024-07-24 19:47 ` [PATCH v7 4/6] tests/tcg: allow to check output of plugins Pierrick Bouvier
@ 2024-07-24 19:47 ` Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
2024-09-05 15:21 ` [PATCH v7 0/6] plugins: access values during a memory read/write Alex Bennée
6 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum, Xingtao Yao
By using "print-accesses=true" option, mem plugin will now print every
value accessed, with associated size, type (store vs load), symbol,
instruction address and phys/virt address accessed.
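A sample line of output (with hypothetical addresses; the CSV header comes
from the hunk below):

    insn_vaddr,insn_symbol,mem_vaddr,mem_hwaddr,access_size,access_type,mem_value
    0x400812,store_u8,0x4a2d50,0x4a2d50,8,store,0xf1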
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
tests/plugin/mem.c | 69 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 68 insertions(+), 1 deletion(-)
diff --git a/tests/plugin/mem.c b/tests/plugin/mem.c
index b650dddcce1..086e6f5bdfc 100644
--- a/tests/plugin/mem.c
+++ b/tests/plugin/mem.c
@@ -21,10 +21,15 @@ typedef struct {
uint64_t io_count;
} CPUCount;
+typedef struct {
+ uint64_t vaddr;
+ const char *sym;
+} InsnInfo;
+
static struct qemu_plugin_scoreboard *counts;
static qemu_plugin_u64 mem_count;
static qemu_plugin_u64 io_count;
-static bool do_inline, do_callback;
+static bool do_inline, do_callback, do_print_accesses;
static bool do_haddr;
static enum qemu_plugin_mem_rw rw = QEMU_PLUGIN_MEM_RW;
@@ -60,6 +65,44 @@ static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t meminfo,
}
}
+static void print_access(unsigned int cpu_index, qemu_plugin_meminfo_t meminfo,
+ uint64_t vaddr, void *udata)
+{
+ InsnInfo *insn_info = udata;
+ unsigned size = 8 << qemu_plugin_mem_size_shift(meminfo);
+ const char *type = qemu_plugin_mem_is_store(meminfo) ? "store" : "load";
+ qemu_plugin_mem_value value = qemu_plugin_mem_get_value(meminfo);
+ uint64_t hwaddr =
+ qemu_plugin_hwaddr_phys_addr(qemu_plugin_get_hwaddr(meminfo, vaddr));
+ g_autoptr(GString) out = g_string_new("");
+ g_string_printf(out,
+ "0x%"PRIx64",%s,0x%"PRIx64",0x%"PRIx64",%d,%s,",
+ insn_info->vaddr, insn_info->sym,
+ vaddr, hwaddr, size, type);
+ switch (value.type) {
+ case QEMU_PLUGIN_MEM_VALUE_U8:
+ g_string_append_printf(out, "0x%02"PRIx8, value.data.u8);
+ break;
+ case QEMU_PLUGIN_MEM_VALUE_U16:
+ g_string_append_printf(out, "0x%04"PRIx16, value.data.u16);
+ break;
+ case QEMU_PLUGIN_MEM_VALUE_U32:
+ g_string_append_printf(out, "0x%08"PRIx32, value.data.u32);
+ break;
+ case QEMU_PLUGIN_MEM_VALUE_U64:
+ g_string_append_printf(out, "0x%016"PRIx64, value.data.u64);
+ break;
+ case QEMU_PLUGIN_MEM_VALUE_U128:
+ g_string_append_printf(out, "0x%016"PRIx64"%016"PRIx64,
+ value.data.u128.high, value.data.u128.low);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+ g_string_append_printf(out, "\n");
+ qemu_plugin_outs(out->str);
+}
+
static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
{
size_t n = qemu_plugin_tb_n_insns(tb);
@@ -79,6 +122,16 @@ static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
QEMU_PLUGIN_CB_NO_REGS,
rw, NULL);
}
+ if (do_print_accesses) {
+ /* we leak this pointer, to avoid locking to keep track of it */
+ InsnInfo *insn_info = g_malloc(sizeof(InsnInfo));
+ const char *sym = qemu_plugin_insn_symbol(insn);
+ insn_info->sym = sym ? sym : "";
+ insn_info->vaddr = qemu_plugin_insn_vaddr(insn);
+ qemu_plugin_register_vcpu_mem_cb(insn, print_access,
+ QEMU_PLUGIN_CB_NO_REGS,
+ rw, (void *) insn_info);
+ }
}
}
@@ -117,6 +170,12 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
return -1;
}
+ } else if (g_strcmp0(tokens[0], "print-accesses") == 0) {
+ if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
+ &do_print_accesses)) {
+ fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
+ return -1;
+ }
} else {
fprintf(stderr, "option parsing failed: %s\n", opt);
return -1;
@@ -129,6 +188,14 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
return -1;
}
+ if (do_print_accesses) {
+ g_autoptr(GString) out = g_string_new("");
+ g_string_printf(out,
+ "insn_vaddr,insn_symbol,mem_vaddr,mem_hwaddr,"
+ "access_size,access_type,mem_value\n");
+ qemu_plugin_outs(out->str);
+ }
+
counts = qemu_plugin_scoreboard_new(sizeof(CPUCount));
mem_count = qemu_plugin_scoreboard_u64_in_struct(
counts, CPUCount, mem_count);
--
2.39.2
* [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
` (4 preceding siblings ...)
2024-07-24 19:47 ` [PATCH v7 5/6] tests/plugin/mem: add option to print memory accesses Pierrick Bouvier
@ 2024-07-24 19:47 ` Pierrick Bouvier
2024-08-29 9:03 ` Alex Bennée
2024-09-04 15:41 ` Alex Bennée
2024-09-05 15:21 ` [PATCH v7 0/6] plugins: access values during a memory read/write Alex Bennée
6 siblings, 2 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-07-24 19:47 UTC (permalink / raw)
To: qemu-devel
Cc: Alexandre Iooss, Zhao Liu, Mahmoud Mandour, Yanan Wang,
Pierrick Bouvier, Eduardo Habkost, Paolo Bonzini,
Alex Bennée, Philippe Mathieu-Daudé, Richard Henderson,
Marcel Apfelbaum, Xingtao Yao
Add an explicit test to check that the expected memory values are read/written.
8/16/32-bit load/store are tested for all architectures.
64/128-bit load/store are tested for aarch64/x64.
Atomic operations (8/16/32/64-bit) are tested for x64 only.
By default, atomic accesses are non-atomic if a single cpu is running,
so we force creation of a second one by creating a new thread first.
The load/store helper code path can't easily be triggered in user mode (no
softmmu), so we can't test it here.
The output of test-plugin-mem-access.c is the list of patterns expected in
the plugin output. By reading stdout, we can compare against the plugin
output and have a multiarch test.
Can be run with:
make -C build/tests/tcg/$ARCH-linux-user run-plugin-test-plugin-mem-access-with-libmem.so
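Each test function prints the grep pattern that must appear in the plugin
output, e.g. for the 8-bit store:

    ,store_u8,.*,8,store,0xf1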
Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
---
tests/tcg/multiarch/test-plugin-mem-access.c | 175 ++++++++++++++++++
tests/tcg/multiarch/Makefile.target | 7 +
.../tcg/multiarch/check-plugin-mem-access.sh | 30 +++
3 files changed, 212 insertions(+)
create mode 100644 tests/tcg/multiarch/test-plugin-mem-access.c
create mode 100755 tests/tcg/multiarch/check-plugin-mem-access.sh
diff --git a/tests/tcg/multiarch/test-plugin-mem-access.c b/tests/tcg/multiarch/test-plugin-mem-access.c
new file mode 100644
index 00000000000..09d1fa22e35
--- /dev/null
+++ b/tests/tcg/multiarch/test-plugin-mem-access.c
@@ -0,0 +1,175 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Check if we detect all memory accesses expected using plugin API.
+ * Used in conjunction with ./check-plugin-mem-access.sh check script.
+ * Output of this program is the list of patterns expected in plugin output.
+ *
+ * 8,16,32 load/store are tested for all arch.
+ * 64,128 load/store are tested for aarch64/x64.
+ * atomic operations (8,16,32,64) are tested for x64 only.
+ */
+
+#include <pthread.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+#if defined(__x86_64__)
+#include <emmintrin.h>
+#elif defined(__aarch64__)
+#include <arm_neon.h>
+#endif /* __x86_64__ */
+
+static void *data;
+
+/* ,store_u8,.*,8,store,0xf1 */
+#define PRINT_EXPECTED(function, type, value, action) \
+do { \
+ printf(",%s,.*,%d,%s,%s\n", \
+ #function, (int) sizeof(type) * 8, action, value); \
+} \
+while (0)
+
+#define DEFINE_STORE(name, type, value) \
+ \
+static void print_expected_store_##name(void) \
+{ \
+ PRINT_EXPECTED(store_##name, type, #value, "store"); \
+} \
+ \
+static void store_##name(void) \
+{ \
+ *((type *)data) = value; \
+ print_expected_store_##name(); \
+}
+
+#define DEFINE_ATOMIC_OP(name, type, value) \
+ \
+static void print_expected_atomic_op_##name(void) \
+{ \
+ PRINT_EXPECTED(atomic_op_##name, type, "0x0*42", "load"); \
+ PRINT_EXPECTED(atomic_op_##name, type, #value, "store"); \
+} \
+ \
+static void atomic_op_##name(void) \
+{ \
+ *((type *)data) = 0x42; \
+ __sync_val_compare_and_swap((type *)data, 0x42, value); \
+ print_expected_atomic_op_##name(); \
+}
+
+#define DEFINE_LOAD(name, type, value) \
+ \
+static void print_expected_load_##name(void) \
+{ \
+ PRINT_EXPECTED(load_##name, type, #value, "load"); \
+} \
+ \
+static void load_##name(void) \
+{ \
+ type src = *((type *) data); \
+ type dest = src; \
+ (void)src, (void)dest; \
+ print_expected_load_##name(); \
+}
+
+DEFINE_STORE(u8, uint8_t, 0xf1)
+DEFINE_LOAD(u8, uint8_t, 0xf1)
+DEFINE_STORE(u16, uint16_t, 0xf123)
+DEFINE_LOAD(u16, uint16_t, 0xf123)
+DEFINE_STORE(u32, uint32_t, 0xff112233)
+DEFINE_LOAD(u32, uint32_t, 0xff112233)
+
+#if defined(__x86_64__) || defined(__aarch64__)
+DEFINE_STORE(u64, uint64_t, 0xf123456789abcdef)
+DEFINE_LOAD(u64, uint64_t, 0xf123456789abcdef)
+
+static void print_expected_store_u128(void)
+{
+ PRINT_EXPECTED(store_u128, __int128,
+ "0xf122334455667788f123456789abcdef", "store");
+}
+
+static void store_u128(void)
+{
+#ifdef __x86_64__
+ _mm_store_si128(data, _mm_set_epi32(0xf1223344, 0x55667788,
+ 0xf1234567, 0x89abcdef));
+#else
+ const uint32_t init[4] = {0x89abcdef, 0xf1234567, 0x55667788, 0xf1223344};
+ uint32x4_t vec = vld1q_u32(init);
+ vst1q_u32(data, vec);
+#endif /* __x86_64__ */
+ print_expected_store_u128();
+}
+
+static void print_expected_load_u128(void)
+{
+ PRINT_EXPECTED(load_u128, __int128,
+ "0xf122334455667788f123456789abcdef", "load");
+}
+
+static void load_u128(void)
+{
+#ifdef __x86_64__
+ __m128i var = _mm_load_si128(data);
+#else
+ uint32x4_t var = vld1q_u32(data);
+#endif
+ (void) var;
+ print_expected_load_u128();
+}
+#endif /* __x86_64__ || __aarch64__ */
+
+#if defined(__x86_64__)
+DEFINE_ATOMIC_OP(u8, uint8_t, 0xf1)
+DEFINE_ATOMIC_OP(u16, uint16_t, 0xf123)
+DEFINE_ATOMIC_OP(u32, uint32_t, 0xff112233)
+DEFINE_ATOMIC_OP(u64, uint64_t, 0xf123456789abcdef)
+#endif /* __x86_64__ */
+
+static void *f(void *p)
+{
+ return NULL;
+}
+
+int main(void)
+{
+ /*
+ * We force creation of a second thread to enable cpu flag CF_PARALLEL.
+ * This will generate atomic operations when needed.
+ */
+ pthread_t thread;
+ pthread_create(&thread, NULL, &f, NULL);
+ pthread_join(thread, NULL);
+
+ /* allocate storage up to 128 bits */
+ data = malloc(16);
+
+ store_u8();
+ load_u8();
+
+ store_u16();
+ load_u16();
+
+ store_u32();
+ load_u32();
+
+#if defined(__x86_64__) || defined(__aarch64__)
+ store_u64();
+ load_u64();
+
+ store_u128();
+ load_u128();
+#endif /* __x86_64__ || __aarch64__ */
+
+#if defined(__x86_64__)
+ atomic_op_u8();
+ atomic_op_u16();
+ atomic_op_u32();
+ atomic_op_u64();
+#endif /* __x86_64__ */
+
+ free(data);
+}
diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
index 5e3391ec9d2..d90cbd3e521 100644
--- a/tests/tcg/multiarch/Makefile.target
+++ b/tests/tcg/multiarch/Makefile.target
@@ -170,5 +170,12 @@ run-plugin-semiconsole-with-%:
TESTS += semihosting semiconsole
endif
+# Test plugin memory access instrumentation
+run-plugin-test-plugin-mem-access-with-libmem.so: \
+ PLUGIN_ARGS=$(COMMA)print-accesses=true
+run-plugin-test-plugin-mem-access-with-libmem.so: \
+ CHECK_PLUGIN_OUTPUT_COMMAND= \
+ $(SRC_PATH)/tests/tcg/multiarch/check-plugin-mem-access.sh
+
# Update TESTS
TESTS += $(MULTIARCH_TESTS)
diff --git a/tests/tcg/multiarch/check-plugin-mem-access.sh b/tests/tcg/multiarch/check-plugin-mem-access.sh
new file mode 100755
index 00000000000..909606943bb
--- /dev/null
+++ b/tests/tcg/multiarch/check-plugin-mem-access.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+
+set -euo pipefail
+
+die()
+{
+ echo "$@" 1>&2
+ exit 1
+}
+
+check()
+{
+ file=$1
+ pattern=$2
+ grep "$pattern" "$file" > /dev/null || die "\"$pattern\" not found in $file"
+}
+
+[ $# -eq 1 ] || die "usage: plugin_out_file"
+
+plugin_out=$1
+
+expected()
+{
+ ./test-plugin-mem-access ||
+ die "running test-plugin-mem-access executable failed"
+}
+
+expected | while read line; do
+ check "$plugin_out" "$line"
+done
--
2.39.2
* Re: [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access
2024-07-24 19:47 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
@ 2024-08-29 9:03 ` Alex Bennée
2024-08-30 15:25 ` [RFC PATCH] tests/tcg: add a system test to check memory instrumentation Alex Bennée
2024-08-30 19:08 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
2024-09-04 15:41 ` Alex Bennée
1 sibling, 2 replies; 20+ messages in thread
From: Alex Bennée @ 2024-08-29 9:03 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum,
Xingtao Yao
Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> Add an explicit test to check that the expected memory values are read/written.
> 8/16/32-bit load/store are tested for all architectures.
> 64/128-bit load/store are tested for aarch64/x64.
> Atomic operations (8/16/32/64-bit) are tested for x64 only.
>
> By default, atomic accesses are non-atomic if a single cpu is running,
> so we force creation of a second one by creating a new thread first.
>
> The load/store helper code path can't easily be triggered in user mode (no
> softmmu), so we can't test it here.
>
> The output of test-plugin-mem-access.c is the list of patterns expected in
> the plugin output. By reading stdout, we can compare against the plugin
> output and have a multiarch test.
>
> Can be run with:
> make -C build/tests/tcg/$ARCH-linux-user run-plugin-test-plugin-mem-access-with-libmem.so
>
> Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> ---
> tests/tcg/multiarch/test-plugin-mem-access.c | 175 ++++++++++++++++++
> tests/tcg/multiarch/Makefile.target | 7 +
> .../tcg/multiarch/check-plugin-mem-access.sh | 30 +++
> 3 files changed, 212 insertions(+)
> create mode 100644 tests/tcg/multiarch/test-plugin-mem-access.c
> create mode 100755 tests/tcg/multiarch/check-plugin-mem-access.sh
>
> diff --git a/tests/tcg/multiarch/test-plugin-mem-access.c b/tests/tcg/multiarch/test-plugin-mem-access.c
> new file mode 100644
> index 00000000000..09d1fa22e35
> --- /dev/null
> +++ b/tests/tcg/multiarch/test-plugin-mem-access.c
> @@ -0,0 +1,175 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + *
> + * Check if we detect all memory accesses expected using plugin API.
> + * Used in conjunction with ./check-plugin-mem-access.sh check script.
> + * Output of this program is the list of patterns expected in plugin output.
> + *
> + * 8,16,32 load/store are tested for all arch.
> + * 64,128 load/store are tested for aarch64/x64.
> + * atomic operations (8,16,32,64) are tested for x64 only.
> + */
It would be nice to build this for the softmmu path as well. I'm not
sure if this can be done with a single source or whether we need a second
test. I shall have a play.
> +
> +#include <pthread.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +
> +#if defined(__x86_64__)
> +#include <emmintrin.h>
> +#elif defined(__aarch64__)
> +#include <arm_neon.h>
> +#endif /* __x86_64__ */
> +
> +static void *data;
> +
> +/* ,store_u8,.*,8,store,0xf1 */
> +#define PRINT_EXPECTED(function, type, value, action) \
> +do { \
> + printf(",%s,.*,%d,%s,%s\n", \
> + #function, (int) sizeof(type) * 8, action, value); \
> +} \
> +while (0)
> +
> +#define DEFINE_STORE(name, type, value) \
> + \
> +static void print_expected_store_##name(void) \
> +{ \
> + PRINT_EXPECTED(store_##name, type, #value, "store"); \
> +} \
> + \
> +static void store_##name(void) \
> +{ \
> + *((type *)data) = value; \
> + print_expected_store_##name(); \
> +}
> +
> +#define DEFINE_ATOMIC_OP(name, type, value) \
> + \
> +static void print_expected_atomic_op_##name(void) \
> +{ \
> + PRINT_EXPECTED(atomic_op_##name, type, "0x0*42", "load"); \
> + PRINT_EXPECTED(atomic_op_##name, type, #value, "store"); \
> +} \
> + \
> +static void atomic_op_##name(void) \
> +{ \
> + *((type *)data) = 0x42; \
> + __sync_val_compare_and_swap((type *)data, 0x42, value); \
> + print_expected_atomic_op_##name(); \
> +}
> +
> +#define DEFINE_LOAD(name, type, value) \
> + \
> +static void print_expected_load_##name(void) \
> +{ \
> + PRINT_EXPECTED(load_##name, type, #value, "load"); \
> +} \
> + \
> +static void load_##name(void) \
> +{ \
> + type src = *((type *) data); \
> + type dest = src; \
> + (void)src, (void)dest; \
> + print_expected_load_##name(); \
> +}
> +
> +DEFINE_STORE(u8, uint8_t, 0xf1)
> +DEFINE_LOAD(u8, uint8_t, 0xf1)
> +DEFINE_STORE(u16, uint16_t, 0xf123)
> +DEFINE_LOAD(u16, uint16_t, 0xf123)
> +DEFINE_STORE(u32, uint32_t, 0xff112233)
> +DEFINE_LOAD(u32, uint32_t, 0xff112233)
> +
> +#if defined(__x86_64__) || defined(__aarch64__)
> +DEFINE_STORE(u64, uint64_t, 0xf123456789abcdef)
> +DEFINE_LOAD(u64, uint64_t, 0xf123456789abcdef)
> +
> +static void print_expected_store_u128(void)
> +{
> + PRINT_EXPECTED(store_u128, __int128,
> + "0xf122334455667788f123456789abcdef", "store");
> +}
> +
> +static void store_u128(void)
> +{
> +#ifdef __x86_64__
> + _mm_store_si128(data, _mm_set_epi32(0xf1223344, 0x55667788,
> + 0xf1234567, 0x89abcdef));
> +#else
> + const uint32_t init[4] = {0x89abcdef, 0xf1234567, 0x55667788, 0xf1223344};
> + uint32x4_t vec = vld1q_u32(init);
> + vst1q_u32(data, vec);
> +#endif /* __x86_64__ */
> + print_expected_store_u128();
> +}
> +
> +static void print_expected_load_u128(void)
> +{
> + PRINT_EXPECTED(load_u128, __int128,
> + "0xf122334455667788f123456789abcdef", "load");
> +}
> +
> +static void load_u128(void)
> +{
> +#ifdef __x86_64__
> + __m128i var = _mm_load_si128(data);
> +#else
> + uint32x4_t var = vld1q_u32(data);
> +#endif
> + (void) var;
> + print_expected_load_u128();
> +}
> +#endif /* __x86_64__ || __aarch64__ */
> +
> +#if defined(__x86_64__)
> +DEFINE_ATOMIC_OP(u8, uint8_t, 0xf1)
> +DEFINE_ATOMIC_OP(u16, uint16_t, 0xf123)
> +DEFINE_ATOMIC_OP(u32, uint32_t, 0xff112233)
> +DEFINE_ATOMIC_OP(u64, uint64_t, 0xf123456789abcdef)
> +#endif /* __x86_64__ */
> +
> +static void *f(void *p)
> +{
> + return NULL;
> +}
> +
> +int main(void)
> +{
> + /*
> + * We force creation of a second thread to enable cpu flag CF_PARALLEL.
> + * This will generate atomic operations when needed.
> + */
> + pthread_t thread;
> + pthread_create(&thread, NULL, &f, NULL);
> + pthread_join(thread, NULL);
> +
> + /* allocate storage up to 128 bits */
> + data = malloc(16);
> +
> + store_u8();
> + load_u8();
> +
> + store_u16();
> + load_u16();
> +
> + store_u32();
> + load_u32();
> +
> +#if defined(__x86_64__) || defined(__aarch64__)
> + store_u64();
> + load_u64();
> +
> + store_u128();
> + load_u128();
> +#endif /* __x86_64__ || __aarch64__ */
> +
> +#if defined(__x86_64__)
> + atomic_op_u8();
> + atomic_op_u16();
> + atomic_op_u32();
> + atomic_op_u64();
> +#endif /* __x86_64__ */
> +
> + free(data);
> +}
> diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
> index 5e3391ec9d2..d90cbd3e521 100644
> --- a/tests/tcg/multiarch/Makefile.target
> +++ b/tests/tcg/multiarch/Makefile.target
> @@ -170,5 +170,12 @@ run-plugin-semiconsole-with-%:
> TESTS += semihosting semiconsole
> endif
>
> +# Test plugin memory access instrumentation
> +run-plugin-test-plugin-mem-access-with-libmem.so: \
> + PLUGIN_ARGS=$(COMMA)print-accesses=true
> +run-plugin-test-plugin-mem-access-with-libmem.so: \
> + CHECK_PLUGIN_OUTPUT_COMMAND= \
> + $(SRC_PATH)/tests/tcg/multiarch/check-plugin-mem-access.sh
> +
> # Update TESTS
> TESTS += $(MULTIARCH_TESTS)
> diff --git a/tests/tcg/multiarch/check-plugin-mem-access.sh b/tests/tcg/multiarch/check-plugin-mem-access.sh
> new file mode 100755
> index 00000000000..909606943bb
> --- /dev/null
> +++ b/tests/tcg/multiarch/check-plugin-mem-access.sh
> @@ -0,0 +1,30 @@
> +#!/usr/bin/env bash
> +
> +set -euo pipefail
> +
> +die()
> +{
> + echo "$@" 1>&2
> + exit 1
> +}
> +
> +check()
> +{
> + file=$1
> + pattern=$2
> + grep "$pattern" "$file" > /dev/null || die "\"$pattern\" not found in $file"
> +}
> +
> +[ $# -eq 1 ] || die "usage: plugin_out_file"
> +
> +plugin_out=$1
> +
> +expected()
> +{
> + ./test-plugin-mem-access ||
> + die "running test-plugin-mem-access executable failed"
I'm confused by this. We seem to be running the test again and this is
going to fail if binfmt_misc isn't set up (which we don't assume for
running the TCG tests).
> +}
> +
> +expected | while read line; do
> + check "$plugin_out" "$line"
> +done
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
* [RFC PATCH] tests/tcg: add a system test to check memory instrumentation
2024-08-29 9:03 ` Alex Bennée
@ 2024-08-30 15:25 ` Alex Bennée
2024-08-30 19:17 ` Pierrick Bouvier
2024-08-30 19:08 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
1 sibling, 1 reply; 20+ messages in thread
From: Alex Bennée @ 2024-08-30 15:25 UTC (permalink / raw)
To: qemu-devel
Cc: pierrick.bouvier, Alex Bennée, Alexandre Iooss,
Mahmoud Mandour
At first I thought I could compile the user-mode test for system mode;
however, we already have a fairly comprehensive test case for system
mode in "memory", so let's use that.
First we extend the test to report where the test_data region is. Then
we expand the pdot() function to track the total number of reads and
writes to the region. We have to add some additional pdot() calls to
take into account multiple reads/writes in the test loops.
As tracking every access with "print-accesses" would quickly build up, we
add a new mode to track groups of reads and writes to pages. Because
test_data is page aligned, we can be sure all accesses to it are
ones we can count.
Finally we add a python script to integrate the data from the plugin
and the output of the test and validate they both agree on the total
counts.
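With page-summary=true the plugin then emits a table like this (hypothetical
numbers):

    PageAddr, Reads, Read Bytes, Writes, Write Bytes
    0x0000000040200000, 12288, 98304, 16384, 131072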
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
tests/tcg/multiarch/system/memory.c | 48 +++++---
tests/tcg/plugins/mem.c | 79 ++++++++++++-
.../multiarch/system/Makefile.softmmu-target | 6 +
.../system/validate-memory-counts.py | 108 ++++++++++++++++++
4 files changed, 224 insertions(+), 17 deletions(-)
create mode 100755 tests/tcg/multiarch/system/validate-memory-counts.py
diff --git a/tests/tcg/multiarch/system/memory.c b/tests/tcg/multiarch/system/memory.c
index 6eb2eb16f7..335ecbd7f0 100644
--- a/tests/tcg/multiarch/system/memory.c
+++ b/tests/tcg/multiarch/system/memory.c
@@ -14,12 +14,16 @@
#include <stdint.h>
#include <stdbool.h>
+#include <inttypes.h>
#include <minilib.h>
#ifndef CHECK_UNALIGNED
# error "Target does not specify CHECK_UNALIGNED"
#endif
+uint32_t test_read_count;
+uint32_t test_write_count;
+
#define MEM_PAGE_SIZE 4096 /* nominal 4k "pages" */
#define TEST_SIZE (MEM_PAGE_SIZE * 4) /* 4 pages */
@@ -32,8 +36,13 @@ typedef void (*init_ufn) (int offset);
typedef bool (*read_ufn) (int offset);
typedef bool (*read_sfn) (int offset, bool nf);
-static void pdot(int count)
+static void pdot(int count, bool write)
{
+ if (write) {
+ test_write_count++;
+ } else {
+ test_read_count++;
+ }
if (count % 128 == 0) {
ml_printf(".");
}
@@ -66,7 +75,7 @@ static void init_test_data_u8(int unused_offset)
ml_printf("Filling test area with u8:");
for (i = 0; i < TEST_SIZE; i++) {
*ptr++ = BYTE_NEXT(count);
- pdot(i);
+ pdot(i, true);
}
ml_printf("done\n");
}
@@ -91,8 +100,9 @@ static void init_test_data_s8(bool neg_first)
neg_first ? "neg first" : "pos first");
for (i = 0; i < TEST_SIZE / 2; i++) {
*ptr++ = get_byte(i, neg_first);
+ pdot(i, true);
*ptr++ = get_byte(i, !neg_first);
- pdot(i);
+ pdot(i, true);
}
ml_printf("done\n");
}
@@ -107,6 +117,7 @@ static void reset_start_data(int offset)
int i;
for (i = 0; i < offset; i++) {
*ptr++ = 0;
+ pdot(i, true);
}
}
@@ -125,7 +136,7 @@ static void init_test_data_u16(int offset)
uint16_t low = BYTE_NEXT(count), high = BYTE_NEXT(count);
word = BYTE_SHIFT(high, 1) | BYTE_SHIFT(low, 0);
*ptr++ = word;
- pdot(i);
+ pdot(i, true);
}
ml_printf("done @ %p\n", ptr);
}
@@ -147,7 +158,7 @@ static void init_test_data_u32(int offset)
word = BYTE_SHIFT(b1, 3) | BYTE_SHIFT(b2, 2) | BYTE_SHIFT(b3, 1) |
BYTE_SHIFT(b4, 0);
*ptr++ = word;
- pdot(i);
+ pdot(i, true);
}
ml_printf("done @ %p\n", ptr);
}
@@ -172,7 +183,7 @@ static void init_test_data_u64(int offset)
BYTE_SHIFT(b4, 4) | BYTE_SHIFT(b5, 3) | BYTE_SHIFT(b6, 2) |
BYTE_SHIFT(b7, 1) | BYTE_SHIFT(b8, 0);
*ptr++ = word;
- pdot(i);
+ pdot(i, true);
}
ml_printf("done @ %p\n", ptr);
}
@@ -194,7 +205,7 @@ static bool read_test_data_u16(int offset)
ml_printf("Error %d < %d\n", high, low);
return false;
} else {
- pdot(i);
+ pdot(i, false);
}
}
@@ -236,7 +247,7 @@ static bool read_test_data_u32(int offset)
ml_printf("Error %d, %d, %d, %d", b1, b2, b3, b4);
return false;
} else {
- pdot(i);
+ pdot(i, false);
}
}
ml_printf("done @ %p\n", ptr);
@@ -290,7 +301,7 @@ static bool read_test_data_u64(int offset)
b1, b2, b3, b4, b5, b6, b7, b8);
return false;
} else {
- pdot(i);
+ pdot(i, false);
}
}
ml_printf("done @ %p\n", ptr);
@@ -357,9 +368,11 @@ static bool read_test_data_s8(int offset, bool neg_first)
second = *ptr++;
if (neg_first && first < 0 && second > 0) {
- pdot(i);
+ pdot(i, false);
+ pdot(i, false);
} else if (!neg_first && first > 0 && second < 0) {
- pdot(i);
+ pdot(i, false);
+ pdot(i, false);
} else {
ml_printf("Error %d %c %d\n", first, neg_first ? '<' : '>', second);
return false;
@@ -390,9 +403,9 @@ static bool read_test_data_s16(int offset, bool neg_first)
int32_t data = *ptr++;
if (neg_first && data < 0) {
- pdot(i);
+ pdot(i, false);
} else if (!neg_first && data > 0) {
- pdot(i);
+ pdot(i, false);
} else {
ml_printf("Error %d %c 0\n", data, neg_first ? '<' : '>');
return false;
@@ -423,9 +436,9 @@ static bool read_test_data_s32(int offset, bool neg_first)
int64_t data = *ptr++;
if (neg_first && data < 0) {
- pdot(i);
+ pdot(i, false);
} else if (!neg_first && data > 0) {
- pdot(i);
+ pdot(i, false);
} else {
ml_printf("Error %d %c 0\n", data, neg_first ? '<' : '>');
return false;
@@ -475,6 +488,9 @@ int main(void)
int i;
bool ok = true;
+ ml_printf("Test data start: 0x%"PRIxPTR"\n", &test_data[0]);
+ ml_printf("Test data end: 0x%"PRIxPTR"\n", &test_data[TEST_SIZE]);
+
/* Run through the unsigned tests first */
for (i = 0; i < ARRAY_SIZE(init_ufns) && ok; i++) {
ok = do_unsigned_test(init_ufns[i]);
@@ -490,6 +506,8 @@ int main(void)
ok = do_signed_reads(true);
}
+ ml_printf("Test data read: %"PRId32"\n", test_read_count);
+ ml_printf("Test data write: %"PRId32"\n", test_write_count);
ml_printf("Test complete: %s\n", ok ? "PASSED" : "FAILED");
return ok ? 0 : -1;
}
diff --git a/tests/tcg/plugins/mem.c b/tests/tcg/plugins/mem.c
index 086e6f5bdf..f9a2ab4c13 100644
--- a/tests/tcg/plugins/mem.c
+++ b/tests/tcg/plugins/mem.c
@@ -26,13 +26,27 @@ typedef struct {
const char *sym;
} InsnInfo;
+typedef struct {
+ uint64_t page_address;
+ uint64_t reads;
+ uint64_t read_bytes;
+ uint64_t writes;
+ uint64_t written_bytes;
+} PageInfo;
+
static struct qemu_plugin_scoreboard *counts;
static qemu_plugin_u64 mem_count;
static qemu_plugin_u64 io_count;
-static bool do_inline, do_callback, do_print_accesses;
+static bool do_inline, do_callback, do_print_accesses, do_page_summary;
static bool do_haddr;
static enum qemu_plugin_mem_rw rw = QEMU_PLUGIN_MEM_RW;
+static uint64_t page_size = 4096;
+static uint64_t page_mask;
+
+static GMutex lock;
+static GHashTable *pages;
+
static void plugin_exit(qemu_plugin_id_t id, void *p)
{
g_autoptr(GString) out = g_string_new("");
@@ -46,6 +60,31 @@ static void plugin_exit(qemu_plugin_id_t id, void *p)
qemu_plugin_u64_sum(io_count));
}
qemu_plugin_outs(out->str);
+
+
+ if (do_page_summary) {
+ GList *counts = g_hash_table_get_values(pages);
+
+ g_string_printf(out, "PageAddr, Reads, Read Bytes, Writes, Write Bytes\n");
+
+        if (counts) {
+            for (/* counts */; counts; counts = counts->next) {
+ PageInfo *pi = (PageInfo *) counts->data;
+
+ g_string_append_printf(out,
+ "0x%016"PRIx64", "
+ "%"PRId64", %"PRId64", "
+ "%"PRId64", %"PRId64"\n",
+ pi->page_address,
+ pi->reads,
+ pi->read_bytes,
+ pi->writes,
+ pi->written_bytes);
+ }
+ }
+ qemu_plugin_outs(out->str);
+ }
+
qemu_plugin_scoreboard_free(counts);
}
@@ -63,6 +102,31 @@ static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t meminfo,
} else {
qemu_plugin_u64_add(mem_count, cpu_index, 1);
}
+
+ if (do_page_summary) {
+ uint64_t page = vaddr & ~page_mask;
+ PageInfo *pi;
+        unsigned size = 1 << qemu_plugin_mem_size_shift(meminfo); /* in bytes */
+
+ g_mutex_lock(&lock);
+ pi = (PageInfo *) g_hash_table_lookup(pages, GUINT_TO_POINTER(page));
+
+ if (!pi) {
+ pi = g_new0(PageInfo, 1);
+ pi->page_address = page;
+ g_hash_table_insert(pages, GUINT_TO_POINTER(page), (gpointer) pi);
+ }
+
+ if (qemu_plugin_mem_is_store(meminfo)) {
+ pi->writes++;
+ pi->written_bytes += size;
+ } else {
+ pi->reads++;
+ pi->read_bytes += size;
+ }
+
+ g_mutex_unlock(&lock);
+ }
}
static void print_access(unsigned int cpu_index, qemu_plugin_meminfo_t meminfo,
@@ -117,7 +181,7 @@ static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
QEMU_PLUGIN_INLINE_ADD_U64,
mem_count, 1);
}
- if (do_callback) {
+ if (do_callback || do_page_summary) {
qemu_plugin_register_vcpu_mem_cb(insn, vcpu_mem,
QEMU_PLUGIN_CB_NO_REGS,
rw, NULL);
@@ -176,6 +240,12 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
return -1;
}
+ } else if (g_strcmp0(tokens[0], "page-summary") == 0) {
+ if (!qemu_plugin_bool_parse(tokens[0], tokens[1],
+ &do_page_summary)) {
+ fprintf(stderr, "boolean argument parsing failed: %s\n", opt);
+ return -1;
+ }
} else {
fprintf(stderr, "option parsing failed: %s\n", opt);
return -1;
@@ -196,6 +266,11 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
qemu_plugin_outs(out->str);
}
+ if (do_page_summary) {
+ page_mask = (page_size - 1);
+ pages = g_hash_table_new(NULL, g_direct_equal);
+ }
+
counts = qemu_plugin_scoreboard_new(sizeof(CPUCount));
mem_count = qemu_plugin_scoreboard_u64_in_struct(
counts, CPUCount, mem_count);
diff --git a/tests/tcg/multiarch/system/Makefile.softmmu-target b/tests/tcg/multiarch/system/Makefile.softmmu-target
index 32dc0f9830..a1b33a6973 100644
--- a/tests/tcg/multiarch/system/Makefile.softmmu-target
+++ b/tests/tcg/multiarch/system/Makefile.softmmu-target
@@ -65,3 +65,9 @@ endif
MULTIARCH_RUNS += run-gdbstub-memory run-gdbstub-interrupt \
run-gdbstub-untimely-packet run-gdbstub-registers
+
+# Test plugin memory access instrumentation
+run-plugin-memory-with-libmem.so: \
+ PLUGIN_ARGS=$(COMMA)page-summary=true
+run-plugin-memory-with-libmem.so: \
+ CHECK_PLUGIN_OUTPUT_COMMAND=$(MULTIARCH_SYSTEM_SRC)/validate-memory-counts.py $@.out
diff --git a/tests/tcg/multiarch/system/validate-memory-counts.py b/tests/tcg/multiarch/system/validate-memory-counts.py
new file mode 100755
index 0000000000..8c18bff066
--- /dev/null
+++ b/tests/tcg/multiarch/system/validate-memory-counts.py
@@ -0,0 +1,108 @@
+#!/usr/bin/env python3
+#
+# validate-memory-counts.py: check we instrumented memory properly
+#
+# This program takes two inputs:
+# - the memory binary output
+# - the mem plugin output
+#
+# Copyright (C) 2024 Linaro Ltd
+#
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+import sys
+
+def extract_counts(path):
+ """
+ Load the output from path and extract the lines containing:
+
+ Test data start: 0x40214000
+ Test data end: 0x40218001
+ Test data read: 2522280
+ Test data write: 262111
+
+    from the stream of data, and extract the values for use in the
+    validation function.
+ """
+ start_address = None
+ end_address = None
+ read_count = 0
+ write_count = 0
+ with open(path, 'r') as f:
+ for line in f:
+ if line.startswith("Test data start:"):
+ start_address = int(line.split(':')[1].strip(), 16)
+ elif line.startswith("Test data end:"):
+ end_address = int(line.split(':')[1].strip(), 16)
+ elif line.startswith("Test data read:"):
+ read_count = int(line.split(':')[1].strip())
+ elif line.startswith("Test data write:"):
+ write_count = int(line.split(':')[1].strip())
+ return start_address, end_address, read_count, write_count
+
+
+def parse_plugin_output(path, start, end):
+ """
+ Load the plugin output from path in the form of:
+
+ PageAddr, Reads, Read Bytes, Writes, Write Bytes
+    0x0000000040214000, 630296, 1964936, 69700, 139560
+    0x0000000040201000, 0, 0, 2, 16
+    0x0000000040215000, 630784, 1966080, 69632, 139264
+
+    and sum the counts for pages that fall within the test data range,
+    returning the totals.
+ """
+ total_reads = 0
+ total_read_bytes = 0
+ total_writes = 0
+ total_written_bytes = 0
+
+ with open(path, 'r') as f:
+ next(f) # Skip the header
+ for line in f:
+ parts = line.strip().split(', ')
+ if len(parts) != 5:
+ continue
+ page_addr = int(parts[0], 16)
+ reads = int(parts[1])
+ read_bytes = int(parts[2])
+ writes = int(parts[3])
+ written_bytes = int(parts[4])
+ if start <= page_addr < end: # Checking if within range
+ total_reads += reads
+ total_read_bytes += read_bytes
+ total_writes += writes
+ total_written_bytes += written_bytes
+
+ return total_reads, total_read_bytes, total_writes, total_written_bytes
+
+def main():
+ if len(sys.argv) != 3:
+ print("Usage: <script_name>.py <memory_binary_output_path> <mem_plugin_output_path>")
+ sys.exit(1)
+
+ memory_binary_output_path = sys.argv[1]
+ mem_plugin_output_path = sys.argv[2]
+
+ # Extract counts from memory binary
+ start, end, expected_reads, expected_writes = extract_counts(memory_binary_output_path)
+
+ if start is None or end is None:
+ print("Failed to extract start or end address from memory binary output.")
+ sys.exit(1)
+
+ # Parse plugin output
+ actual_reads, actual_read_bytes, actual_writes, actual_written_bytes = parse_plugin_output(mem_plugin_output_path, start, end)
+
+ # Compare and report
+ if actual_reads == expected_reads and actual_writes == expected_writes:
+ sys.exit(0)
+ else:
+        print("Fail: the memory read and write counts do not match.")
+ print(f"Expected Reads: {expected_reads}, Actual Reads: {actual_reads}")
+ print(f"Expected Writes: {expected_writes}, Actual Writes: {actual_writes}")
+ sys.exit(1)
+
+if __name__ == "__main__":
+ main()
--
2.39.2
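Assuming a softmmu target has been built with TCG tests enabled (the
aarch64 target and the build/ directory below are just examples), the new
check can be run with:

  make -C build/tests/tcg/aarch64-softmmu run-plugin-memory-with-libmem.so

The validator can also be invoked by hand on the two outputs; the file
names here are assumptions, since the harness derives them from the test
name:

  ./tests/tcg/multiarch/system/validate-memory-counts.py \
      memory.out memory-with-libmem.so.pout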
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access
2024-08-29 9:03 ` Alex Bennée
2024-08-30 15:25 ` [RFC PATCH] tests/tcg: add a system test to check memory instrumentation Alex Bennée
@ 2024-08-30 19:08 ` Pierrick Bouvier
2024-09-04 13:19 ` Alex Bennée
1 sibling, 1 reply; 20+ messages in thread
From: Pierrick Bouvier @ 2024-08-30 19:08 UTC (permalink / raw)
To: Alex Bennée
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum,
Xingtao Yao
On 8/29/24 02:03, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>
>> Add an explicit test to check expected memory values are read/written.
>> 8,16,32 load/store are tested for all arch.
>> 64,128 load/store are tested for aarch64/x64.
>> atomic operations (8,16,32,64) are tested for x64 only.
>>
>> By default, atomic accesses are non atomic if a single cpu is running,
>> so we force creation of a second one by creating a new thread first.
>>
>> load/store helpers code path can't be triggered easily in user mode (no
>> softmmu), so we can't test it here.
>>
>> Output of test-plugin-mem-access.c is the list of expected patterns in
>> plugin output. By reading stdout, we can compare to plugins output and
>> have a multiarch test.
>>
>> Can be run with:
>> make -C build/tests/tcg/$ARCH-linux-user run-plugin-test-plugin-mem-access-with-libmem.so
>>
>> Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>> ---
>> tests/tcg/multiarch/test-plugin-mem-access.c | 175 ++++++++++++++++++
>> tests/tcg/multiarch/Makefile.target | 7 +
>> .../tcg/multiarch/check-plugin-mem-access.sh | 30 +++
>> 3 files changed, 212 insertions(+)
>> create mode 100644 tests/tcg/multiarch/test-plugin-mem-access.c
>> create mode 100755 tests/tcg/multiarch/check-plugin-mem-access.sh
>>
>> diff --git a/tests/tcg/multiarch/test-plugin-mem-access.c b/tests/tcg/multiarch/test-plugin-mem-access.c
>> new file mode 100644
>> index 00000000000..09d1fa22e35
>> --- /dev/null
>> +++ b/tests/tcg/multiarch/test-plugin-mem-access.c
>> @@ -0,0 +1,175 @@
>> +/*
>> + * SPDX-License-Identifier: GPL-2.0-or-later
>> + *
>> + * Check if we detect all memory accesses expected using plugin API.
>> + * Used in conjunction with ./check-plugin-mem-access.sh check script.
>> + * Output of this program is the list of patterns expected in plugin output.
>> + *
>> + * 8,16,32 load/store are tested for all arch.
>> + * 64,128 load/store are tested for aarch64/x64.
>> + * atomic operations (8,16,32,64) are tested for x64 only.
>> + */
>
> It would be nice to build this for the softmmu path as well. I'm not
> sure if this can be done as a single source or if we need a second test.
> I shall have a play.
>
Ok, thanks.
<snip>
>> diff --git a/tests/tcg/multiarch/check-plugin-mem-access.sh b/tests/tcg/multiarch/check-plugin-mem-access.sh
>> new file mode 100755
>> index 00000000000..909606943bb
>> --- /dev/null
>> +++ b/tests/tcg/multiarch/check-plugin-mem-access.sh
>> @@ -0,0 +1,30 @@
>> +#!/usr/bin/env bash
>> +
>> +set -euo pipefail
>> +
>> +die()
>> +{
>> + echo "$@" 1>&2
>> + exit 1
>> +}
>> +
>> +check()
>> +{
>> + file=$1
>> + pattern=$2
>> + grep "$pattern" "$file" > /dev/null || die "\"$pattern\" not found in $file"
>> +}
>> +
>> +[ $# -eq 1 ] || die "usage: plugin_out_file"
>> +
>> +plugin_out=$1
>> +
>> +expected()
>> +{
>> + ./test-plugin-mem-access ||
>> + die "running test-plugin-mem-access executable failed"
>
> I'm confused by this. We seem to be running the test again and this is
> going to fail if binfmt_misc isn't set up (which we don't assume for
> running the TCG tests).
>
The test stdout is the expected output to grep. This is to avoid
an "expected file" and a "source file" somewhere else.
Could we use the compiled qemu-user binary to run it instead?
I'm trying to find a solution where "expected" is not duplicated between
several files.
>> +}
>> +
>> +expected | while read line; do
>> + check "$plugin_out" "$line"
>> +done
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [RFC PATCH] tests/tcg: add a system test to check memory instrumentation
2024-08-30 15:25 ` [RFC PATCH] tests/tcg: add a system test to check memory instrumentation Alex Bennée
@ 2024-08-30 19:17 ` Pierrick Bouvier
0 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-08-30 19:17 UTC (permalink / raw)
To: Alex Bennée, qemu-devel; +Cc: Alexandre Iooss, Mahmoud Mandour
On 8/30/24 08:25, Alex Bennée wrote:
> At first I thought I could compile the user-mode test for system mode;
> however, we already have a fairly comprehensive test case for system
> mode in "memory", so let's use that.
>
> First we extend the test to report where the test_data region is. Then
> we expand the pdot() function to track the total number of reads and
> writes to the region. We have to add some additional pdot() calls to
> take into account multiple reads/writes in the test loops.
>
> As tracking every access with "print-access" would quickly build up
> huge logs, we add a new mode to track groups of reads and writes to
> pages. Because the test_data is page aligned we can be sure all
> accesses to it are ones we can count.
>
> Finally we add a Python script to integrate the data from the plugin
> and the output of the test and validate they both agree on the total
> counts.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
> tests/tcg/multiarch/system/memory.c | 48 +++++---
> tests/tcg/plugins/mem.c | 79 ++++++++++++-
> .../multiarch/system/Makefile.softmmu-target | 6 +
> .../system/validate-memory-counts.py | 108 ++++++++++++++++++
> 4 files changed, 224 insertions(+), 17 deletions(-)
> create mode 100755 tests/tcg/multiarch/system/validate-memory-counts.py
>
<snip>
Thanks for investigating this.
Overall, it would be a good thing to have a test like this.
However, I think it misses the point of this series.
Indeed, the new API function qemu_plugin_mem_get_value() is never
called, and we just count expected reads/writes at given addresses.
There is value in this, but I definitely think this should be in a
different series, after the current one is merged.
What do you think?
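To illustrate the difference, a callback that does exercise the new API
would read the value with qemu_plugin_mem_get_value(). A minimal sketch
(the interface is the one added by this series, but the callback itself
and its output format are made up here, and the registration boilerplate
is omitted):

  #include <inttypes.h>
  #include <glib.h>
  #include <qemu-plugin.h>

  static void vcpu_mem_value(unsigned int cpu_index,
                             qemu_plugin_meminfo_t meminfo,
                             uint64_t vaddr, void *udata)
  {
      qemu_plugin_mem_value v = qemu_plugin_mem_get_value(meminfo);
      g_autoptr(GString) s = g_string_new("");

      /* format the accessed value according to its width */
      switch (v.type) {
      case QEMU_PLUGIN_MEM_VALUE_U8:
          g_string_append_printf(s, "0x%02x", v.data.u8);
          break;
      case QEMU_PLUGIN_MEM_VALUE_U16:
          g_string_append_printf(s, "0x%04x", v.data.u16);
          break;
      case QEMU_PLUGIN_MEM_VALUE_U32:
          g_string_append_printf(s, "0x%08x", v.data.u32);
          break;
      case QEMU_PLUGIN_MEM_VALUE_U64:
          g_string_append_printf(s, "0x%016"PRIx64, v.data.u64);
          break;
      case QEMU_PLUGIN_MEM_VALUE_U128:
          g_string_append_printf(s, "0x%016"PRIx64"%016"PRIx64,
                                 v.data.u128.high, v.data.u128.low);
          break;
      }
      /* report the value together with the direction of the access */
      g_string_append_printf(s, " %s at 0x%"PRIx64"\n",
                             qemu_plugin_mem_is_store(meminfo)
                             ? "stored" : "loaded", vaddr);
      qemu_plugin_outs(s->str);
  }

Checking output of that kind is what the user-mode test in patch 6/6
exercises via the mem plugin's print-accesses option.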
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access
2024-08-30 19:08 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
@ 2024-09-04 13:19 ` Alex Bennée
0 siblings, 0 replies; 20+ messages in thread
From: Alex Bennée @ 2024-09-04 13:19 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum,
Xingtao Yao
Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> On 8/29/24 02:03, Alex Bennée wrote:
>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>
<snip>
>>> diff --git a/tests/tcg/multiarch/check-plugin-mem-access.sh b/tests/tcg/multiarch/check-plugin-mem-access.sh
>>> new file mode 100755
>>> index 00000000000..909606943bb
>>> --- /dev/null
>>> +++ b/tests/tcg/multiarch/check-plugin-mem-access.sh
>>> @@ -0,0 +1,30 @@
>>> +#!/usr/bin/env bash
>>> +
>>> +set -euo pipefail
>>> +
>>> +die()
>>> +{
>>> + echo "$@" 1>&2
>>> + exit 1
>>> +}
>>> +
>>> +check()
>>> +{
>>> + file=$1
>>> + pattern=$2
>>> + grep "$pattern" "$file" > /dev/null || die "\"$pattern\" not found in $file"
>>> +}
>>> +
>>> +[ $# -eq 1 ] || die "usage: plugin_out_file"
>>> +
>>> +plugin_out=$1
>>> +
>>> +expected()
>>> +{
>>> + ./test-plugin-mem-access ||
>>> + die "running test-plugin-mem-access executable failed"
>> I'm confused by this. We seem to be running the test again and this
>> is
>> going to fail if binfmt_misc isn't set up (which we don't assume for
>> running the TCG tests).
>>
>
> The test stdout is the expected output to grep. This is to avoid
> an "expected file" and a "source file" somewhere else.
Is this really such an issue? For the system-mode test I just did:
run-plugin-memory-with-libmem.so: \
CHECK_PLUGIN_OUTPUT_COMMAND=$(MULTIARCH_SYSTEM_SRC)/validate-memory-counts.py $@.out
> Could we use the compiled qemu-user binary to run it instead?
Yes, although that would be inefficient (and you need to pass that path
in somehow anyway).
> I'm trying to find a solution where "expected" is not duplicated
> between several files.
Move it all into Python?
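For what it's worth, a rough sketch of what an all-Python checker could
look like (script and file names hypothetical; note it still re-runs the
test binary to obtain the expected patterns, so on its own it would not
address the binfmt_misc concern):

  #!/usr/bin/env python3
  # Run the test binary to get the expected patterns on stdout, then
  # check that each pattern occurs somewhere in the plugin output.
  import re
  import subprocess
  import sys

  def main():
      if len(sys.argv) != 2:
          sys.exit("usage: check-plugin-mem-access.py <plugin_out>")

      with open(sys.argv[1]) as f:
          plugin_out = f.read()

      expected = subprocess.run(["./test-plugin-mem-access"],
                                capture_output=True, text=True,
                                check=True)

      for pattern in filter(None, expected.stdout.splitlines()):
          if not re.search(pattern, plugin_out):
              sys.exit(f'"{pattern}" not found in {sys.argv[1]}')

  if __name__ == "__main__":
      main()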
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access
2024-07-24 19:47 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
2024-08-29 9:03 ` Alex Bennée
@ 2024-09-04 15:41 ` Alex Bennée
2024-09-04 16:28 ` Alex Bennée
1 sibling, 1 reply; 20+ messages in thread
From: Alex Bennée @ 2024-09-04 15:41 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum,
Xingtao Yao
Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> Add an explicit test to check expected memory values are read/written.
> 8,16,32 load/store are tested for all arch.
> 64,128 load/store are tested for aarch64/x64.
> atomic operations (8,16,32,64) are tested for x64 only.
>
> By default, atomic accesses are non atomic if a single cpu is running,
> so we force creation of a second one by creating a new thread first.
>
> load/store helpers code path can't be triggered easily in user mode (no
> softmmu), so we can't test it here.
>
> Output of test-plugin-mem-access.c is the list of expected patterns in
> plugin output. By reading stdout, we can compare to plugins output and
> have a multiarch test.
>
> Can be run with:
> make -C build/tests/tcg/$ARCH-linux-user run-plugin-test-plugin-mem-access-with-libmem.so
>
> Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
> ---
> tests/tcg/multiarch/test-plugin-mem-access.c | 175 ++++++++++++++++++
> tests/tcg/multiarch/Makefile.target | 7 +
> .../tcg/multiarch/check-plugin-mem-access.sh | 30 +++
> 3 files changed, 212 insertions(+)
> create mode 100644 tests/tcg/multiarch/test-plugin-mem-access.c
> create mode 100755 tests/tcg/multiarch/check-plugin-mem-access.sh
>
> diff --git a/tests/tcg/multiarch/test-plugin-mem-access.c b/tests/tcg/multiarch/test-plugin-mem-access.c
> new file mode 100644
> index 00000000000..09d1fa22e35
<snip>
> diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
> index 5e3391ec9d2..d90cbd3e521 100644
> --- a/tests/tcg/multiarch/Makefile.target
> +++ b/tests/tcg/multiarch/Makefile.target
> @@ -170,5 +170,12 @@ run-plugin-semiconsole-with-%:
> TESTS += semihosting semiconsole
> endif
>
Also you need:
test-plugin-mem-access: CFLAGS+=-pthread
test-plugin-mem-access: LDFLAGS+=-pthread
So that less tolerant GCCs pull in pthread (otherwise alpha-linux-user
fails). With that fix I get:
TEST check plugin libmem.so output with test-plugin-mem-access
",store_u8,.*,8,store,0xf1" not found in test-plugin-mem-access-with-libmem.so.pout
make[1]: *** [Makefile:181: run-plugin-test-plugin-mem-access-with-libmem.so] Error 1
make: *** [/home/alex/lsrc/qemu.git/tests/Makefile.include:56: run-tcg-tests-alpha-linux-user] Error 2
> +# Test plugin memory access instrumentation
> +run-plugin-test-plugin-mem-access-with-libmem.so: \
> + PLUGIN_ARGS=$(COMMA)print-accesses=true
> +run-plugin-test-plugin-mem-access-with-libmem.so: \
> + CHECK_PLUGIN_OUTPUT_COMMAND= \
> + $(SRC_PATH)/tests/tcg/multiarch/check-plugin-mem-access.sh
> +
> # Update TESTS
> TESTS += $(MULTIARCH_TESTS)
<snip>
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access
2024-09-04 15:41 ` Alex Bennée
@ 2024-09-04 16:28 ` Alex Bennée
0 siblings, 0 replies; 20+ messages in thread
From: Alex Bennée @ 2024-09-04 16:28 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum,
Xingtao Yao
Alex Bennée <alex.bennee@linaro.org> writes:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>
>> Add an explicit test to check expected memory values are read/written.
>> 8,16,32 load/store are tested for all arch.
>> 64,128 load/store are tested for aarch64/x64.
>> atomic operations (8,16,32,64) are tested for x64 only.
>>
>> By default, atomic accesses are non atomic if a single cpu is running,
>> so we force creation of a second one by creating a new thread first.
>>
>> load/store helpers code path can't be triggered easily in user mode (no
>> softmmu), so we can't test it here.
>>
>> Output of test-plugin-mem-access.c is the list of expected patterns in
>> plugin output. By reading stdout, we can compare to plugins output and
>> have a multiarch test.
>>
>> Can be run with:
>> make -C build/tests/tcg/$ARCH-linux-user run-plugin-test-plugin-mem-access-with-libmem.so
>>
>> Tested-by: Xingtao Yao <yaoxt.fnst@fujitsu.com>
>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>> ---
>> tests/tcg/multiarch/test-plugin-mem-access.c | 175 ++++++++++++++++++
>> tests/tcg/multiarch/Makefile.target | 7 +
>> .../tcg/multiarch/check-plugin-mem-access.sh | 30 +++
>> 3 files changed, 212 insertions(+)
>> create mode 100644 tests/tcg/multiarch/test-plugin-mem-access.c
>> create mode 100755 tests/tcg/multiarch/check-plugin-mem-access.sh
>>
>> diff --git a/tests/tcg/multiarch/test-plugin-mem-access.c b/tests/tcg/multiarch/test-plugin-mem-access.c
>> new file mode 100644
>> index 00000000000..09d1fa22e35
> <snip>
>> diff --git a/tests/tcg/multiarch/Makefile.target b/tests/tcg/multiarch/Makefile.target
>> index 5e3391ec9d2..d90cbd3e521 100644
>> --- a/tests/tcg/multiarch/Makefile.target
>> +++ b/tests/tcg/multiarch/Makefile.target
>> @@ -170,5 +170,12 @@ run-plugin-semiconsole-with-%:
>> TESTS += semihosting semiconsole
>> endif
>>
>
> Also you need:
>
> test-plugin-mem-access: CFLAGS+=-pthread
> test-plugin-mem-access: LDFLAGS+=-pthread
>
> So that less tolerant GCCs pull in pthread (otherwise alpha-linux-user
> fails). With that fix I get:
>
> TEST check plugin libmem.so output with test-plugin-mem-access
> ",store_u8,.*,8,store,0xf1" not found in test-plugin-mem-access-with-libmem.so.pout
> make[1]: *** [Makefile:181: run-plugin-test-plugin-mem-access-with-libmem.so] Error 1
> make: *** [/home/alex/lsrc/qemu.git/tests/Makefile.include:56: run-tcg-tests-alpha-linux-user] Error 2
And ensure we enable BWX for alpha so it emits byte stores instead of
faking it with masking:
modified tests/tcg/alpha/Makefile.target
@@ -13,3 +13,5 @@ test-cmov: test-cond.c
$(CC) $(CFLAGS) $(EXTRA_CFLAGS) $< -o $@ $(LDFLAGS)
run-test-cmov: test-cmov
+
+test-plugin-mem-access: CFLAGS+=-mbwx
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 0/6] plugins: access values during a memory read/write
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
` (5 preceding siblings ...)
2024-07-24 19:47 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
@ 2024-09-05 15:21 ` Alex Bennée
2024-09-07 1:49 ` Pierrick Bouvier
6 siblings, 1 reply; 20+ messages in thread
From: Alex Bennée @ 2024-09-05 15:21 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum
Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> This series allows plugins to know which value is read/written during a memory
> access.
>
> For every memory access, we now copy this value before calling mem callbacks,
> and those can query it using a new API function:
> - qemu_plugin_mem_get_value
Queued patches 1-5 to plugins/next, thanks.
You can send the re-spun version of 6 once the review comments have been
addressed.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 0/6] plugins: access values during a memory read/write
2024-09-05 15:21 ` [PATCH v7 0/6] plugins: access values during a memory read/write Alex Bennée
@ 2024-09-07 1:49 ` Pierrick Bouvier
2024-09-09 10:00 ` Alex Bennée
0 siblings, 1 reply; 20+ messages in thread
From: Pierrick Bouvier @ 2024-09-07 1:49 UTC (permalink / raw)
To: Alex Bennée
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum
On 9/5/24 08:21, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>
>> This series allows plugins to know which value is read/written during a memory
>> access.
>>
>> For every memory access, we now copy this value before calling mem callbacks,
>> and those can query it using a new API function:
>> - qemu_plugin_mem_get_value
>
> Queued patches 1-5 to plugins/next, thanks.
>
> You can send the re-spun version of 6 once the review comments have been
> addressed.
>
Thanks Alex,
right now, my attempts to run make check-tcg are blocked by cross
containers that don't build, so I'll wait for this to be resolved.
I still wonder if a simple aarch64/x64 test wouldn't be enough,
covering 99.9% of the bugs we could introduce here in the future.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 0/6] plugins: access values during a memory read/write
2024-09-07 1:49 ` Pierrick Bouvier
@ 2024-09-09 10:00 ` Alex Bennée
2024-09-09 19:04 ` Pierrick Bouvier
0 siblings, 1 reply; 20+ messages in thread
From: Alex Bennée @ 2024-09-09 10:00 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum
Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> On 9/5/24 08:21, Alex Bennée wrote:
>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>
>>> This series allows plugins to know which value is read/written during a memory
>>> access.
>>>
>>> For every memory access, we now copy this value before calling mem callbacks,
>>> and those can query it using a new API function:
>>> - qemu_plugin_mem_get_value
>> Queued patches 1-5 to plugins/next, thanks.
>> You can send the re-spun version of 6 once the review comments have
>> been addressed.
>>
>
> Thanks Alex,
>
> right now, my attempts to run make check-tcg are blocked by cross
> containers that don't build, so I'll wait for this to be resolved.
Which ones?
> I still wonder if a simple aarch64/x64 test wouldn't be enough,
> covering 99.9% of the bugs we could introduce here in the future.
Have you measured the code coverage of the test?
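(For reference, one rough way to measure it, assuming a gcov-enabled
tree; the configure flag and the report target are assumptions and may
differ between QEMU versions:

  ./configure --enable-gcov && make
  make -C tests/tcg/aarch64-softmmu run-plugin-memory-with-libmem.so
  ninja coverage-html    # needs gcovr installed

and then look at how much of tcg/tcg-op-ldst.c and plugins/core.c the
run actually exercises.)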
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 0/6] plugins: access values during a memory read/write
2024-09-09 10:00 ` Alex Bennée
@ 2024-09-09 19:04 ` Pierrick Bouvier
2024-09-09 20:21 ` Alex Bennée
0 siblings, 1 reply; 20+ messages in thread
From: Pierrick Bouvier @ 2024-09-09 19:04 UTC (permalink / raw)
To: Alex Bennée
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum
On 9/9/24 03:00, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>
>> On 9/5/24 08:21, Alex Bennée wrote:
>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>
>>>> This series allows plugins to know which value is read/written during a memory
>>>> access.
>>>>
>>>> For every memory access, we now copy this value before calling mem callbacks,
>>>> and those can query it using a new API function:
>>>> - qemu_plugin_mem_get_value
>>> Queued patches 1-5 to plugins/next, thanks.
>>> You can send the re-spun version of 6 once the review comments have
>>> been addressed.
>>>
>>
>> Thanks Alex,
>>
>> right now, my attempts to run make check-tcg are blocked by cross
>> containers that don't build, so I'll wait for this to be resolved.
>
> Which ones?
docker-image-debian-mips64el-cross
docker-image-debian-mipsel-cross
(they fail because of broken packages).
I saw something mentioning this recently on the mailing list, so I'm not
sure what our solution to this would be (ignoring them?)
>
>> I still wonder if a simple aarch64/x64 test wouldn't be enough,
>> covering 99.9% of the bugs we could introduce here in the future.
>
> Have you measured the code coverage of the test?
>
Nope, but all the code changed is tcg-generic, so testing this on all
arch does not bring benefit in terms of coverage.
So by focusing on the "all arch" aspect, we just test tcg implementation
itself, instead of the plugins part.
The problems we identified so far are compilation flags specific per
arch, and specific flags to emit word-sized instructions. They do not
seem related to what we really want to test here.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 0/6] plugins: access values during a memory read/write
2024-09-09 19:04 ` Pierrick Bouvier
@ 2024-09-09 20:21 ` Alex Bennée
2024-09-09 21:42 ` Pierrick Bouvier
0 siblings, 1 reply; 20+ messages in thread
From: Alex Bennée @ 2024-09-09 20:21 UTC (permalink / raw)
To: Pierrick Bouvier
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum
Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> On 9/9/24 03:00, Alex Bennée wrote:
>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>
>>> On 9/5/24 08:21, Alex Bennée wrote:
>>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>>
>>>>> This series allows plugins to know which value is read/written during a memory
>>>>> access.
>>>>>
>>>>> For every memory access, we now copy this value before calling mem callbacks,
>>>>> and those can query it using a new API function:
>>>>> - qemu_plugin_mem_get_value
>>>> Queued patches 1-5 to plugins/next, thanks.
>>>> You can send the re-spun version of 6 once the review comments have
>>>> been addressed.
>>>>
>>>
>>> Thanks Alex,
>>>
>>> right now, my attempts to run make check-tcg are blocked by cross
>>> containers that don't build, so I'll wait for this to be resolved.
>> Which ones?
>
> docker-image-debian-mips64el-cross
> docker-image-debian-mipsel-cross
> (they fail because of broken packages).
I have fixes for mipsel at least; they will be included when I post my series.
>
> I saw something mentioning this recently on the mailing list, so I'm not
> sure what our solution to this would be (ignoring them?)
>
>>
>>> I still wonder if a simple aarch64/x64 test wouldn't be enough,
>>> covering 99.9% of the bugs we could introduce here in the future.
>> Have you measured the code coverage of the test?
>>
>
> Nope, but all the code changed is tcg-generic, so testing this on all
> arch does not bring benefit in terms of coverage.
Would that it were so simple. Quite often which bits of the generic TCG
code get exercised depends on the guest architecture using it. I'm not
saying we have to go over and above to enable fiddly architectures but we
should at least understand if the reason they fail is down to them or
core code.
> So by focusing on the "all arch" aspect, we just test tcg
> implementation itself, instead of the plugins part.
>
> The problems we identified so far are compilation flags specific per
> arch, and specific flags to emit word-sized instructions. They do not
> seem related to what we really want to test here.
I'm also investigating why arm-softmmu seems to be seeing more accesses
than it should have from the test.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v7 0/6] plugins: access values during a memory read/write
2024-09-09 20:21 ` Alex Bennée
@ 2024-09-09 21:42 ` Pierrick Bouvier
0 siblings, 0 replies; 20+ messages in thread
From: Pierrick Bouvier @ 2024-09-09 21:42 UTC (permalink / raw)
To: Alex Bennée
Cc: qemu-devel, Alexandre Iooss, Zhao Liu, Mahmoud Mandour,
Yanan Wang, Eduardo Habkost, Paolo Bonzini,
Philippe Mathieu-Daudé, Richard Henderson, Marcel Apfelbaum
On 9/9/24 13:21, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>
>> On 9/9/24 03:00, Alex Bennée wrote:
>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>
>>>> On 9/5/24 08:21, Alex Bennée wrote:
>>>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>>>
>>>>>> This series allows plugins to know which value is read/written during a memory
>>>>>> access.
>>>>>>
>>>>>> For every memory access, we now copy this value before calling mem callbacks,
>>>>>> and those can query it using a new API function:
>>>>>> - qemu_plugin_mem_get_value
>>>>> Queued patches 1-5 to plugins/next, thanks.
>>>>> You can send the re-spun version of 6 once the review comments have
>>>>> been addressed.
>>>>>
>>>>
>>>> Thanks Alex,
>>>>
>>>> right now, my attempts to run make check-tcg are blocked by cross
>>>> containers that don't build, so I'll wait for this to be resolved.
>>> Which ones?
>>
>> docker-image-debian-mips64el-cross
>> docker-image-debian-mipsel-cross
>> (they fail because of broken packages).
>
> I have fixes for mipsel at least; they will be included when I post my series.
>
>>
>> I saw something mentioning this recently on the mailing list, so I'm not
>> sure what our solution to this would be (ignoring them?)
>>
>>>
>>>> I still wonder if a simple aarch64/x64 test wouldn't be enough,
>>>> covering 99.9% of the bugs we could introduce here in the future.
>>> Have you measured the code coverage of the test?
>>>
>>
>> Nope, but all the code changed is tcg-generic, so testing this on all
>> arch does not bring benefit in terms of coverage.
>
> Would that it were so simple. Quite often which bits of the generic TCG
> code get exercised depends on the guest architecture using it. I'm not
> saying we have to go over and above to enable fiddly architectures but we
> should at least understand if the reason they fail is down to them or
> core code.
I understand your point, and will try to make this work on all architectures.
>
>> So by focusing on the "all arch" aspect, we just test tcg
>> implementation itself, instead of the plugins part.
>>
>> The problems we identified so far are compilation flags specific per
>> arch, and specific flags to emit word-sized instructions. They do not
>> seem related to what we really want to test here.
>
> I'm also investigating why arm-softmmu seems to be seeing more accesses
> than it should have from the test.
>
Good!
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread
Thread overview: 20+ messages (newest: 2024-09-09 21:43 UTC)
2024-07-24 19:47 [PATCH v7 0/6] plugins: access values during a memory read/write Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 1/6] plugins: save value during memory accesses Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 2/6] plugins: extend API to get latest memory value accessed Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 3/6] tests/tcg: add mechanism to run specific tests with plugins Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 4/6] tests/tcg: allow to check output of plugins Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 5/6] tests/plugin/mem: add option to print memory accesses Pierrick Bouvier
2024-07-24 19:47 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
2024-08-29 9:03 ` Alex Bennée
2024-08-30 15:25 ` [RFC PATCH] tests/tcg: add a system test to check memory instrumentation Alex Bennée
2024-08-30 19:17 ` Pierrick Bouvier
2024-08-30 19:08 ` [PATCH v7 6/6] tests/tcg/multiarch: add test for plugin memory access Pierrick Bouvier
2024-09-04 13:19 ` Alex Bennée
2024-09-04 15:41 ` Alex Bennée
2024-09-04 16:28 ` Alex Bennée
2024-09-05 15:21 ` [PATCH v7 0/6] plugins: access values during a memory read/write Alex Bennée
2024-09-07 1:49 ` Pierrick Bouvier
2024-09-09 10:00 ` Alex Bennée
2024-09-09 19:04 ` Pierrick Bouvier
2024-09-09 20:21 ` Alex Bennée
2024-09-09 21:42 ` Pierrick Bouvier