From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
To: "Alex Bennée" <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, Mahmoud Mandour <ma.mandourr@gmail.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <richard.henderson@linaro.org>,
	Alexandre Iooss <erdnaxe@crans.org>
Subject: Re: [PATCH v2 05/14] tests/plugin: add test plugin for inline operations
Date: Tue, 30 Jan 2024 20:40:11 +0400	[thread overview]
Message-ID: <5b48f3f5-7c20-49c6-9a1b-eaf792120c51@linaro.org> (raw)
In-Reply-To: <87h6iuoovb.fsf@draig.linaro.org>

On 1/30/24 18:52, Alex Bennée wrote:
> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> 
>> On 1/26/24 20:05, Alex Bennée wrote:
>>> Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
>>>
>>>> For now, it simply performs instruction, bb and mem counts, and ensures
>>>> that the inline and callback versions produce the same result. Later, we'll
>>>> extend it when new inline operations are added.
>>>>
>>>> Using existing plugins to test that everything works is a bit cumbersome,
>>>> as different events are handled by different plugins. Hence this new one.
>>>>
>>>> Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>> ---
>>>>    tests/plugin/inline.c    | 182 +++++++++++++++++++++++++++++++++++++++
>>>>    tests/plugin/meson.build |   2 +-
>>>>    2 files changed, 183 insertions(+), 1 deletion(-)
>>>>    create mode 100644 tests/plugin/inline.c
>>>>
>>>> diff --git a/tests/plugin/inline.c b/tests/plugin/inline.c
>>>> new file mode 100644
>>>> index 00000000000..28d1c3b1e48
>>>> --- /dev/null
>>>> +++ b/tests/plugin/inline.c
>>>> @@ -0,0 +1,182 @@
>>>> +/*
>>>> + * Copyright (C) 2023, Pierrick Bouvier <pierrick.bouvier@linaro.org>
>>>> + *
>>>> + * Demonstrates and tests usage of inline ops.
>>>> + *
>>>> + * License: GNU GPL, version 2 or later.
>>>> + *   See the COPYING file in the top-level directory.
>>>> + */
>>>> +
>>>> +#include <glib.h>
>>>> +#include <stdint.h>
>>>> +#include <stdio.h>
>>>> +
>>>> +#include <qemu-plugin.h>
>>>> +
>>>> +typedef struct {
>>>> +    uint64_t count_tb;
>>>> +    uint64_t count_tb_inline;
>>>> +    uint64_t count_insn;
>>>> +    uint64_t count_insn_inline;
>>>> +    uint64_t count_mem;
>>>> +    uint64_t count_mem_inline;
>>>> +} CPUCount;
>>> I wonder if there is any way to enforce that the structure is an array
>>> of 64-bit counts? I do worry the compiler might one day decide to do
>>> something silly but legal, leading to confusion.
>>> I guess qemu_plugin_scoreboard_new could:
>>>     g_assert((element_size % sizeof(uint64_t)) == 0)
>>>
>>
>> Given the explanation on patch [02/14], do you now see that CPUCount
>> can hold any type, and that qemu_plugin_u64 allows targeting a specific
>> member in it?
>>
>> In general, the QEMU plugin runtime is simply given an offset and the
>> total size of the struct, so a compiled plugin can have any
>> optimization/padding on this struct without that affecting the result.
>>
>>> ?
>>>
>>>> +static struct qemu_plugin_scoreboard *counts;
>>>> +static qemu_plugin_u64_t count_tb;
>>>> +static qemu_plugin_u64_t count_tb_inline;
>>>> +static qemu_plugin_u64_t count_insn;
>>>> +static qemu_plugin_u64_t count_insn_inline;
>>>> +static qemu_plugin_u64_t count_mem;
>>>> +static qemu_plugin_u64_t count_mem_inline;
>>> Can't this just be a non scoreboard instance of CPUCount?
>>>
>>
>> We could always use:
>> CPUCount* count = qemu_plugin_scoreboard_get(score, i);
>> count->count_tb++;
> 
> I thought these were globals protected by a lock? I wasn't suggesting
> hiding the abstraction behind the scoreboard API.
> 

The global_count_* variables are protected by a lock (and serve as 
reference values to test the inline operations).
The scoreboard is used for inline and callback (per cpu) operations.

>>
>> However, the part where we sum all values would then need some glue
>> code, which is longer and more error-prone than having these
>> definitions here (and initialization in the install function).
>>
>>>> +
>>>> +static uint64_t global_count_tb;
>>>> +static uint64_t global_count_insn;
>>>> +static uint64_t global_count_mem;
>>>> +static unsigned int max_cpu_index;
>>>> +static GMutex tb_lock;
>>>> +static GMutex insn_lock;
>>>> +static GMutex mem_lock;
>>>> +
>>>> +QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;
>>>> +
>>>> +static void stats_insn(void)
>>>> +{
>>>> +    const uint64_t expected = global_count_insn;
>>>> +    const uint64_t per_vcpu = qemu_plugin_u64_sum(count_insn);
>>>> +    const uint64_t inl_per_vcpu =
>>>> +        qemu_plugin_u64_sum(count_insn_inline);
>>>> +    printf("insn: %" PRIu64 "\n", expected);
>>>> +    printf("insn: %" PRIu64 " (per vcpu)\n", per_vcpu);
>>>> +    printf("insn: %" PRIu64 " (per vcpu inline)\n", inl_per_vcpu);
>>>> +    g_assert(expected > 0);
>>>> +    g_assert(per_vcpu == expected);
>>>> +    g_assert(inl_per_vcpu == expected);
>>>> +}
>>>> +
>>>> +static void stats_tb(void)
>>>> +{
>>>> +    const uint64_t expected = global_count_tb;
>>>> +    const uint64_t per_vcpu = qemu_plugin_u64_sum(count_tb);
>>>> +    const uint64_t inl_per_vcpu =
>>>> +        qemu_plugin_u64_sum(count_tb_inline);
>>>> +    printf("tb: %" PRIu64 "\n", expected);
>>>> +    printf("tb: %" PRIu64 " (per vcpu)\n", per_vcpu);
>>>> +    printf("tb: %" PRIu64 " (per vcpu inline)\n", inl_per_vcpu);
>>>> +    g_assert(expected > 0);
>>>> +    g_assert(per_vcpu == expected);
>>>> +    g_assert(inl_per_vcpu == expected);
>>>> +}
>>>> +
>>>> +static void stats_mem(void)
>>>> +{
>>>> +    const uint64_t expected = global_count_mem;
>>>> +    const uint64_t per_vcpu = qemu_plugin_u64_sum(count_mem);
>>>> +    const uint64_t inl_per_vcpu =
>>>> +        qemu_plugin_u64_sum(count_mem_inline);
>>>> +    printf("mem: %" PRIu64 "\n", expected);
>>>> +    printf("mem: %" PRIu64 " (per vcpu)\n", per_vcpu);
>>>> +    printf("mem: %" PRIu64 " (per vcpu inline)\n", inl_per_vcpu);
>>>> +    g_assert(expected > 0);
>>>> +    g_assert(per_vcpu == expected);
>>>> +    g_assert(inl_per_vcpu == expected);
>>>> +}
>>>> +
>>>> +static void plugin_exit(qemu_plugin_id_t id, void *udata)
>>>> +{
>>>> +    const unsigned int num_cpus = qemu_plugin_scoreboard_size(counts);
>>>> +    g_assert(num_cpus == max_cpu_index + 1);
>>>> +
>>>> +    for (int i = 0; i < num_cpus ; ++i) {
>>>> +        const uint64_t tb = *qemu_plugin_u64_get(count_tb, i);
>>>> +        const uint64_t tb_inline = *qemu_plugin_u64_get(count_tb_inline, i);
>>>> +        const uint64_t insn = *qemu_plugin_u64_get(count_insn, i);
>>>> +        const uint64_t insn_inline = *qemu_plugin_u64_get(count_insn_inline, i);
>>>> +        const uint64_t mem = *qemu_plugin_u64_get(count_mem, i);
>>>> +        const uint64_t mem_inline = *qemu_plugin_u64_get(count_mem_inline, i);
>>>> +        printf("cpu %d: tb (%" PRIu64 ", %" PRIu64 ") | "
>>>> +               "insn (%" PRIu64 ", %" PRIu64 ") | "
>>>> +               "mem (%" PRIu64 ", %" PRIu64 ")"
>>>> +               "\n",
>>>> +               i, tb, tb_inline, insn, insn_inline, mem, mem_inline);
>>>> +        g_assert(tb == tb_inline);
>>>> +        g_assert(insn == insn_inline);
>>>> +        g_assert(mem == mem_inline);
>>>> +    }
>>>> +
>>>> +    stats_tb();
>>>> +    stats_insn();
>>>> +    stats_mem();
>>>> +
>>>> +    qemu_plugin_scoreboard_free(counts);
>>>> +}
>>>> +
>>>> +static void vcpu_tb_exec(unsigned int cpu_index, void *udata)
>>>> +{
>>>> +    (*qemu_plugin_u64_get(count_tb, cpu_index))++;
>>>> +    g_mutex_lock(&tb_lock);
>>>> +    max_cpu_index = MAX(max_cpu_index, cpu_index);
>>>> +    global_count_tb++;
>>>> +    g_mutex_unlock(&tb_lock);
>>>> +}
>>>> +
>>>> +static void vcpu_insn_exec(unsigned int cpu_index, void *udata)
>>>> +{
>>>> +    (*qemu_plugin_u64_get(count_insn, cpu_index))++;
>>>> +    g_mutex_lock(&insn_lock);
>>>> +    global_count_insn++;
>>>> +    g_mutex_unlock(&insn_lock);
>>>> +}
>>>> +
>>>> +static void vcpu_mem_access(unsigned int cpu_index,
>>>> +                            qemu_plugin_meminfo_t info,
>>>> +                            uint64_t vaddr,
>>>> +                            void *userdata)
>>>> +{
>>>> +    (*qemu_plugin_u64_get(count_mem, cpu_index))++;
>>>> +    g_mutex_lock(&mem_lock);
>>>> +    global_count_mem++;
>>>> +    g_mutex_unlock(&mem_lock);
>>>> +}
>>>> +
>>>> +static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
>>>> +{
>>>> +    qemu_plugin_register_vcpu_tb_exec_cb(
>>>> +        tb, vcpu_tb_exec, QEMU_PLUGIN_CB_NO_REGS, 0);
>>>> +    qemu_plugin_register_vcpu_tb_exec_inline_per_vcpu(
>>>> +        tb, QEMU_PLUGIN_INLINE_ADD_U64, count_tb_inline, 1);
>>>> +
>>>> +    for (int idx = 0; idx < qemu_plugin_tb_n_insns(tb); ++idx) {
>>>> +        struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, idx);
>>>> +        qemu_plugin_register_vcpu_insn_exec_cb(
>>>> +            insn, vcpu_insn_exec, QEMU_PLUGIN_CB_NO_REGS, 0);
>>>> +        qemu_plugin_register_vcpu_insn_exec_inline_per_vcpu(
>>>> +            insn, QEMU_PLUGIN_INLINE_ADD_U64, count_insn_inline, 1);
>>>> +        qemu_plugin_register_vcpu_mem_cb(insn, &vcpu_mem_access,
>>>> +                                         QEMU_PLUGIN_CB_NO_REGS,
>>>> +                                         QEMU_PLUGIN_MEM_RW, 0);
>>>> +        qemu_plugin_register_vcpu_mem_inline_per_vcpu(
>>>> +            insn, QEMU_PLUGIN_MEM_RW,
>>>> +            QEMU_PLUGIN_INLINE_ADD_U64,
>>>> +            count_mem_inline, 1);
>>>> +    }
>>>> +}
>>>> +
>>>> +QEMU_PLUGIN_EXPORT
>>>> +int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
>>>> +                        int argc, char **argv)
>>>> +{
>>>> +    counts = qemu_plugin_scoreboard_new(sizeof(CPUCount));
>>>> +    count_tb = qemu_plugin_u64_struct(counts, CPUCount, count_tb);
>>>> +    count_insn = qemu_plugin_u64_struct(counts, CPUCount, count_insn);
>>>> +    count_mem = qemu_plugin_u64_struct(counts, CPUCount, count_mem);
>>>> +    count_tb_inline = qemu_plugin_u64_struct(counts, CPUCount, count_tb_inline);
>>>> +    count_insn_inline =
>>>> +        qemu_plugin_u64_struct(counts, CPUCount, count_insn_inline);
>>>> +    count_mem_inline =
>>>> +        qemu_plugin_u64_struct(counts, CPUCount, count_mem_inline);
>>>> +    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
>>>> +    qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> diff --git a/tests/plugin/meson.build b/tests/plugin/meson.build
>>>> index e18183aaeda..9eece5bab51 100644
>>>> --- a/tests/plugin/meson.build
>>>> +++ b/tests/plugin/meson.build
>>>> @@ -1,6 +1,6 @@
>>>>    t = []
>>>>    if get_option('plugins')
>>>> -  foreach i : ['bb', 'empty', 'insn', 'mem', 'syscall']
>>>> +  foreach i : ['bb', 'empty', 'inline', 'insn', 'mem', 'syscall']
>>>>        if host_os == 'windows'
>>>>          t += shared_module(i, files(i + '.c') + '../../contrib/plugins/win32_linker.c',
>>>>                            include_directories: '../../include/qemu',
>>> Otherwise:
>>> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>>>
> 

Thread overview: 34+ messages
2024-01-18  3:23 [PATCH v2 00/14] TCG Plugin inline operation enhancement Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 01/14] plugins: implement inline operation relative to cpu_index Pierrick Bouvier
2024-01-26 12:07   ` Alex Bennée
2024-01-29 10:10     ` Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 02/14] plugins: scoreboard API Pierrick Bouvier
2024-01-26 15:14   ` Alex Bennée
2024-01-30  7:37     ` Pierrick Bouvier
2024-01-30 10:23       ` Alex Bennée
2024-01-30 11:10         ` Pierrick Bouvier
2024-01-31  7:44     ` Pierrick Bouvier
2024-02-01  5:28       ` Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 03/14] docs/devel: plugins can trigger a tb flush Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 04/14] plugins: add inline operation per vcpu Pierrick Bouvier
2024-01-26 15:17   ` Alex Bennée
2024-01-18  3:23 ` [PATCH v2 05/14] tests/plugin: add test plugin for inline operations Pierrick Bouvier
2024-01-26 16:05   ` Alex Bennée
2024-01-30  7:49     ` Pierrick Bouvier
2024-01-30 14:52       ` Alex Bennée
2024-01-30 16:40         ` Pierrick Bouvier [this message]
2024-01-18  3:23 ` [PATCH v2 06/14] tests/plugin/mem: migrate to new per_vcpu API Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 07/14] tests/plugin/insn: " Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 08/14] tests/plugin/bb: " Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 09/14] contrib/plugins/hotblocks: " Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 10/14] contrib/plugins/howvec: " Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 11/14] plugins: remove non per_vcpu inline operation from API Pierrick Bouvier
2024-01-26 16:26   ` Alex Bennée
2024-01-30  7:53     ` Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 12/14] plugins: register inline op with a qemu_plugin_u64_t Pierrick Bouvier
2024-01-18  3:23 ` [PATCH v2 13/14] MAINTAINERS: Add myself as reviewer for TCG Plugins Pierrick Bouvier
2024-01-26 16:27   ` Alex Bennée
2024-01-18  3:23 ` [PATCH v2 14/14] contrib/plugins/execlog: fix new warnings Pierrick Bouvier
2024-01-26 16:31   ` Alex Bennée
2024-01-30  7:51     ` Pierrick Bouvier
2024-01-26  9:21 ` [PATCH v2 00/14] TCG Plugin inline operation enhancement Pierrick Bouvier
