From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: Paolo Bonzini <pbonzini@redhat.com>,
linux-kselftest@vger.kernel.org, Shuah Khan <shuah@kernel.org>,
linux-kernel@vger.kernel.org,
Sean Christopherson <seanjc@google.com>
Subject: Re: [PATCH 1/1] KVM: selftests: pmu_counters_test: increase robustness of LLC cache misses
Date: Wed, 26 Jun 2024 12:08:50 -0400
Message-ID: <cefafca99f046a89e111e6972ed02199ef95c250.camel@redhat.com>
In-Reply-To: <20240621204305.1730677-2-mlevitsk@redhat.com>
On Fri, 2024-06-21 at 16:43 -0400, Maxim Levitsky wrote:
> Currently this test does a single CLFLUSH on its memory location, but
> due to speculative execution this might not cause LLC misses.
>
> Instead, do a cache flush on each loop iteration to defeat that
> speculation and make sure that LLC misses always occur.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
> .../selftests/kvm/x86_64/pmu_counters_test.c | 20 +++++++++----------
> 1 file changed, 9 insertions(+), 11 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
> index 96446134c00b7..ddc0b7e4a888e 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
> @@ -14,8 +14,8 @@
> * instructions that are needed to set up the loop and then disabled the
> * counter. 1 CLFLUSH/CLFLUSHOPT/NOP, 1 MFENCE, 2 MOV, 2 XOR, 1 WRMSR.
> */
> -#define NUM_EXTRA_INSNS 7
> -#define NUM_INSNS_RETIRED (NUM_BRANCHES + NUM_EXTRA_INSNS)
> +#define NUM_EXTRA_INSNS 5
> +#define NUM_INSNS_RETIRED (NUM_BRANCHES * 2 + NUM_EXTRA_INSNS)
>
> static uint8_t kvm_pmu_version;
> static bool kvm_has_perf_caps;
> @@ -133,9 +133,8 @@ static void guest_assert_event_count(uint8_t idx,
> * doesn't need to be clobbered as the input value, @pmc_msr, is restored
> * before the end of the sequence.
> *
> - * If CLFUSH{,OPT} is supported, flush the cacheline containing (at least) the
> - * start of the loop to force LLC references and misses, i.e. to allow testing
> - * that those events actually count.
> + * If CLFLUSH{,OPT} is supported, flush the cacheline containing the CLFLUSH{,OPT}
> + * instruction on each loop iteration to ensure that LLC misses actually happen.
> *
> * If forced emulation is enabled (and specified), force emulation on a subset
> * of the measured code to verify that KVM correctly emulates instructions and
> @@ -145,10 +144,9 @@ static void guest_assert_event_count(uint8_t idx,
> #define GUEST_MEASURE_EVENT(_msr, _value, clflush, FEP) \
> do { \
> __asm__ __volatile__("wrmsr\n\t" \
> - clflush "\n\t" \
> - "mfence\n\t" \
> - "1: mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t" \
> - FEP "loop .\n\t" \
> + " mov $" __stringify(NUM_BRANCHES) ", %%ecx\n\t" \
> + "1: " clflush "\n\t" \
> + FEP "loop 1b\n\t" \
> FEP "mov %%edi, %%ecx\n\t" \
> FEP "xor %%eax, %%eax\n\t" \
> FEP "xor %%edx, %%edx\n\t" \
> @@ -163,9 +161,9 @@ do { \
> wrmsr(pmc_msr, 0); \
> \
> if (this_cpu_has(X86_FEATURE_CLFLUSHOPT)) \
> - GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflushopt 1f", FEP); \
> + GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflushopt .", FEP); \
> else if (this_cpu_has(X86_FEATURE_CLFLUSH)) \
> - GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflush 1f", FEP); \
> + GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflush .", FEP); \
> else \
> GUEST_MEASURE_EVENT(_ctrl_msr, _value, "nop", FEP); \
> \
Any update? With this patch applied, the test survived about 3 days of running
in a loop.
Best regards,
Maxim Levitsky
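
For reference, a minimal sketch of the retired-instruction accounting after this
change, restating the hunks quoted above (the NUM_BRANCHES value below is
hypothetical; the real test defines its own):

/*
 * With CLFLUSH{,OPT} (or the NOP fallback) moved inside the loop, every
 * iteration retires exactly two instructions: the flush/NOP and the LOOP.
 * The fixed overhead left outside the loop is the ECX-setup MOV, the
 * ECX-restore MOV, the two XORs that zero EAX/EDX, and the final WRMSR
 * that disables the counter.
 */
#define NUM_BRANCHES       42   /* hypothetical value, for illustration only */
#define NUM_EXTRA_INSNS    5    /* 2 MOV + 2 XOR + 1 WRMSR */
#define NUM_INSNS_RETIRED  (NUM_BRANCHES * 2 + NUM_EXTRA_INSNS)

/*
 * Measured sequence, roughly (operands as in the patch):
 *
 *     wrmsr                        enable the event (not counted)
 *     mov  $NUM_BRANCHES, %ecx
 * 1:  clflushopt .                 flush the line holding the loop itself
 *     loop 1b                      forces an LLC reference/miss on re-fetch
 *     mov  %edi, %ecx
 *     xor  %eax, %eax
 *     xor  %edx, %edx
 *     wrmsr                        disable the counter (counted)
 */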
Thread overview: 5+ messages
2024-06-21 20:43 [PATCH 0/1] KVM: selftests: pmu_counters_test: increase robustness of LLC cache misses Maxim Levitsky
2024-06-21 20:43 ` [PATCH 1/1] " Maxim Levitsky
2024-06-26 16:08 ` Maxim Levitsky [this message]
2024-06-27 17:42 ` Sean Christopherson
2024-07-05 2:48 ` Maxim Levitsky