* [PATCH v3 00/14] Improvements for (nested) SVM testing
@ 2025-11-10 23:26 Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 01/14] scripts: Always return '2' when skipping tests Yosry Ahmed
` (15 more replies)
0 siblings, 16 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
This is a combined v2/v3 of the patch series in [1] and [2], with a
couple of extra changes.
The series mostly includes fixups and cleanups, and more importantly new
tests for selective CR0 write intercepts and LBRV, which cover bugs
recently fixed by [3] and [4].
Patches 1-2 are unrelated fixups outside of SVM that I just included
here instead of submitting separately out of pure laziness.
Patches 3-8 are cleanups and refactoring.
Patch 9 deflakes svm_tsc_scale_test in a not-so-sophisticated way.
Patches 10-11 generalize the existing selective CR0 intercept tests and
extend them with new test cases.
Patches 12-13 cleanup the existing LBRV tests and extend them with new
test cases.
Patch 14 renames some VMCB fields to match recent renames in KVM [5].
This patch is last in the series, also due to laziness.
I realize that the patch ordering is a mess, and I can rebase and
reorder if it actually matters to anyone.
[1]https://lore.kernel.org/kvm/20251028221213.1937120-1-yosry.ahmed@linux.dev/
[2]https://lore.kernel.org/kvm/20251104193016.3408754-1-yosry.ahmed@linux.dev/
[3]https://lore.kernel.org/kvm/20251024192918.3191141-1-yosry.ahmed@linux.dev/
[4]https://lore.kernel.org/kvm/20251108004524.1600006-1-yosry.ahmed@linux.dev/
[5]https://lore.kernel.org/kvm/20251110222922.613224-1-yosry.ahmed@linux.dev/
Yosry Ahmed (14):
scripts: Always return '2' when skipping tests
x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available
x86/svm: Cleanup selective cr0 write intercept test
x86/svm: Move CR0 selective write intercept test near CR3 intercept
x86/svm: Add FEP helpers for SVM tests
x86/svm: Report unsupported SVM tests
x86/svm: Move report_svm_guest() to the top of svm_tests.c
x86/svm: Print SVM test names before running tests
x86/svm: Deflake svm_tsc_scale_test
x86/svm: Generalize and improve selective CR0 write intercept test
x86/svm: Add more selective CR0 write and LMSW test cases
x86/svm: Cleanup LBRV tests
x86/svm: Add more LBRV test cases
x86/svm: Rename VMCB fields to match KVM
lib/x86/desc.h | 8 +
scripts/runtime.bash | 4 +-
x86/svm.c | 12 +-
x86/svm.h | 7 +-
x86/svm_tests.c | 402 +++++++++++++++++++++++++++++--------------
x86/vmx_tests.c | 5 +-
6 files changed, 303 insertions(+), 135 deletions(-)
--
2.51.2.1041.gc1ab5b90ca-goog
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v3 01/14] scripts: Always return '2' when skipping tests
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available Yosry Ahmed
` (14 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
runtime.bash should always return 2 (or 77 in one case) when a test is
skipped, but two cases were missed and return 0. Fix them.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
scripts/runtime.bash | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/scripts/runtime.bash b/scripts/runtime.bash
index 6805e97f90c8f..0cbe2695948b8 100644
--- a/scripts/runtime.bash
+++ b/scripts/runtime.bash
@@ -103,13 +103,13 @@ function run()
if [ -z "$GEN_SE_HEADER" ] && find_word "pv-host" "$groups"; then
print_result "SKIP" $testname "" "no gen-se-header available for pv-host test"
- return
+ return 2
fi
if [ -z "$only_group" ] && find_word nodefault "$groups" &&
skip_nodefault; then
print_result "SKIP" $testname "" "test marked as manual run only"
- return;
+ return 2
fi
if [ -n "$arch" ] && [ "$arch" != "$ARCH" ]; then
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 01/14] scripts: Always return '2' when skipping tests Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-14 0:40 ` Sean Christopherson
2025-11-10 23:26 ` [PATCH v3 03/14] x86/svm: Cleanup selective cr0 write intercept test Yosry Ahmed
` (13 subsequent siblings)
15 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
The check to skip the test is currently performed in the guest code.
There are a few TEST_ASSERTs that run before the guest is entered, which
internally call report_passed() and thereby increase the number of
passed tests.
Hence, when vmx_pf_exception_test_fep is run, report_summary() does not
return a "skip" error code because the total number of tests is larger
than the number of skipped tests.
Skip early if FEP is not available, before any assertions, such that
report_summary() finds exactly 1 skipped test and returns the
appropriate error code.
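The accounting above can be sketched as a tiny standalone model (the struct, function names, and exit codes here are illustrative assumptions, not the actual kvm-unit-tests API): a run can only be reported as skipped when every recorded result is a skip, so a single early report_passed() masks the skip.

```c
#include <assert.h>

/*
 * Illustrative model of the summary behavior described above; names and
 * exit codes are assumptions, not the real kvm-unit-tests API. A run only
 * reports "skip" when all recorded results are skips.
 */
struct results {
	int total;   /* all results reported, passes and skips alike */
	int skipped; /* results reported via report_skip() */
};

static int summary_exit_code(const struct results *r)
{
	/* 77 is a conventional "all skipped" exit code; 0 means tests ran */
	return (r->total > 0 && r->total == r->skipped) ? 77 : 0;
}
```

With early TEST_ASSERTs counted as passes, total exceeds skipped and the skip status is lost; skipping before any assertion keeps total == skipped == 1.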
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/vmx_tests.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 0b3cfe50c6142..4f214ebdbe1d9 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -10644,7 +10644,10 @@ static void vmx_pf_exception_test(void)
static void vmx_pf_exception_forced_emulation_test(void)
{
- __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
+ if (is_fep_available)
+ __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
+ else
+ report_skip("Forced emulation prefix (FEP) not available\n");
}
static void invalidate_tlb_no_vpid(void *data)
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 03/14] x86/svm: Cleanup selective cr0 write intercept test
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 01/14] scripts: Always return '2' when skipping tests Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-12 13:48 ` Manali Shukla
2025-11-10 23:26 ` [PATCH v3 04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept Yosry Ahmed
` (12 subsequent siblings)
15 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Rename the test and functions to names that describe the test more
accurately. Use X86_CR0_CD instead of hardcoding the bitmask, and
explicitly clear the bit in the prepare() function to make it clearer
that it is only set by the test itself.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 22 +++++++++-------------
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 80d5aeb108650..e911659194b3d 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -793,23 +793,19 @@ static bool check_asid_zero(struct svm_test *test)
return vmcb->control.exit_code == SVM_EXIT_ERR;
}
-static void sel_cr0_bug_prepare(struct svm_test *test)
+static void prepare_sel_cr0_intercept(struct svm_test *test)
{
+ vmcb->save.cr0 &= ~X86_CR0_CD;
vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
}
-static bool sel_cr0_bug_finished(struct svm_test *test)
-{
- return true;
-}
-
-static void sel_cr0_bug_test(struct svm_test *test)
+static void test_sel_cr0_write_intercept(struct svm_test *test)
{
unsigned long cr0;
- /* read cr0, clear CD, and write back */
+ /* read cr0, set CD, and write back */
cr0 = read_cr0();
- cr0 |= (1UL << 30);
+ cr0 |= X86_CR0_CD;
write_cr0(cr0);
/*
@@ -821,7 +817,7 @@ static void sel_cr0_bug_test(struct svm_test *test)
exit(report_summary());
}
-static bool sel_cr0_bug_check(struct svm_test *test)
+static bool check_sel_cr0_intercept(struct svm_test *test)
{
return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
}
@@ -3486,9 +3482,9 @@ struct svm_test svm_tests[] = {
{ "asid_zero", default_supported, prepare_asid_zero,
default_prepare_gif_clear, test_asid_zero,
default_finished, check_asid_zero },
- { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
- default_prepare_gif_clear, sel_cr0_bug_test,
- sel_cr0_bug_finished, sel_cr0_bug_check },
+ { "sel cr0 write intercept", default_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
{ "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
default_prepare_gif_clear, tsc_adjust_test,
default_finished, tsc_adjust_check },
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (2 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 03/14] x86/svm: Cleanup selective cr0 write intercept test Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-12 13:52 ` Manali Shukla
2025-11-10 23:26 ` [PATCH v3 05/14] x86/svm: Add FEP helpers for SVM tests Yosry Ahmed
` (11 subsequent siblings)
15 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
It makes more semantic sense for these tests to be in close proximity.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 64 ++++++++++++++++++++++++-------------------------
1 file changed, 32 insertions(+), 32 deletions(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index e911659194b3d..feeb27d61435b 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -112,6 +112,35 @@ static bool finished_rsm_intercept(struct svm_test *test)
return get_test_stage(test) == 2;
}
+static void prepare_sel_cr0_intercept(struct svm_test *test)
+{
+ vmcb->save.cr0 &= ~X86_CR0_CD;
+ vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+}
+
+static void test_sel_cr0_write_intercept(struct svm_test *test)
+{
+ unsigned long cr0;
+
+ /* read cr0, set CD, and write back */
+ cr0 = read_cr0();
+ cr0 |= X86_CR0_CD;
+ write_cr0(cr0);
+
+ /*
+ * If we are here the test failed, not sure what to do now because we
+ * are not in guest-mode anymore so we can't trigger an intercept.
+ * Trigger a tripple-fault for now.
+ */
+ report_fail("sel_cr0 test. Can not recover from this - exiting");
+ exit(report_summary());
+}
+
+static bool check_sel_cr0_intercept(struct svm_test *test)
+{
+ return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
+}
+
static void prepare_cr3_intercept(struct svm_test *test)
{
default_prepare(test);
@@ -793,35 +822,6 @@ static bool check_asid_zero(struct svm_test *test)
return vmcb->control.exit_code == SVM_EXIT_ERR;
}
-static void prepare_sel_cr0_intercept(struct svm_test *test)
-{
- vmcb->save.cr0 &= ~X86_CR0_CD;
- vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
-}
-
-static void test_sel_cr0_write_intercept(struct svm_test *test)
-{
- unsigned long cr0;
-
- /* read cr0, set CD, and write back */
- cr0 = read_cr0();
- cr0 |= X86_CR0_CD;
- write_cr0(cr0);
-
- /*
- * If we are here the test failed, not sure what to do now because we
- * are not in guest-mode anymore so we can't trigger an intercept.
- * Trigger a tripple-fault for now.
- */
- report_fail("sel_cr0 test. Can not recover from this - exiting");
- exit(report_summary());
-}
-
-static bool check_sel_cr0_intercept(struct svm_test *test)
-{
- return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
-}
-
#define TSC_ADJUST_VALUE (1ll << 32)
#define TSC_OFFSET_VALUE (~0ull << 48)
static bool ok;
@@ -3458,6 +3458,9 @@ struct svm_test svm_tests[] = {
{ "rsm", default_supported,
prepare_rsm_intercept, default_prepare_gif_clear,
test_rsm_intercept, finished_rsm_intercept, check_rsm_intercept },
+ { "sel cr0 write intercept", default_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
{ "cr3 read intercept", default_supported,
prepare_cr3_intercept, default_prepare_gif_clear,
test_cr3_intercept, default_finished, check_cr3_intercept },
@@ -3482,9 +3485,6 @@ struct svm_test svm_tests[] = {
{ "asid_zero", default_supported, prepare_asid_zero,
default_prepare_gif_clear, test_asid_zero,
default_finished, check_asid_zero },
- { "sel cr0 write intercept", default_supported,
- prepare_sel_cr0_intercept, default_prepare_gif_clear,
- test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
{ "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
default_prepare_gif_clear, tsc_adjust_test,
default_finished, tsc_adjust_check },
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 05/14] x86/svm: Add FEP helpers for SVM tests
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (3 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 06/14] x86/svm: Report unsupported " Yosry Ahmed
` (10 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Add a helper to check whether FEP is available, for use as a supported()
callback in SVM tests that exercise the emulator. Also add a macro that
conditionally executes an assembly instruction with FEP, which will make
it more convenient to write SVM tests that run both with and without FEP.
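The dispatch pattern behind the conditional-FEP macro can be sketched in isolation: one call site, two execution paths chosen at runtime. The real macro wraps inline asm with KVM's forced-emulation prefix; the counters and stand-in functions below are purely illustrative.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Standalone sketch of the conditional-FEP dispatch pattern. The real
 * macro emits inline asm with or without KVM's forced-emulation prefix;
 * these stand-in functions just record which path ran.
 */
static int fep_runs, plain_runs;

static void run_with_fep(void)    { fep_runs++;   } /* stand-in for asm_fep_safe() */
static void run_without_fep(void) { plain_runs++; } /* stand-in for asm_safe() */

static void run_conditional_fep(bool fep)
{
	if (fep)
		run_with_fep();
	else
		run_without_fep();
}
```

The payoff is that a single test body can be instantiated twice, once per path, by flipping one bool, which is exactly how the later FEP test-case variants are built.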
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
lib/x86/desc.h | 8 ++++++++
x86/svm.c | 5 +++++
x86/svm.h | 1 +
3 files changed, 14 insertions(+)
diff --git a/lib/x86/desc.h b/lib/x86/desc.h
index 68f38f3d75333..06c8be65221dc 100644
--- a/lib/x86/desc.h
+++ b/lib/x86/desc.h
@@ -284,6 +284,14 @@ extern unsigned long get_gdt_entry_limit(gdt_entry_t *entry);
#define asm_fep_safe(insn, inputs...) \
__asm_safe_out1(KVM_FEP, insn,, inputs)
+#define asm_conditional_fep_safe(fep, insn, inputs...) \
+({ \
+ if (fep) \
+ asm_fep_safe(insn, inputs); \
+ else \
+ asm_safe(insn, inputs); \
+})
+
#define __asm_safe_out1(fep, insn, output, inputs...) \
({ \
asm volatile(__ASM_TRY(fep, "1f") \
diff --git a/x86/svm.c b/x86/svm.c
index e715e270bc5b7..035367a1e90cf 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -53,6 +53,11 @@ bool default_supported(void)
return true;
}
+bool fep_supported(void)
+{
+ return is_fep_available;
+}
+
bool vgif_supported(void)
{
return this_cpu_has(X86_FEATURE_VGIF);
diff --git a/x86/svm.h b/x86/svm.h
index c1dd84afb25eb..264583a6547ef 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -417,6 +417,7 @@ u64 *npt_get_pdpe(u64 address);
u64 *npt_get_pml4e(void);
bool smp_supported(void);
bool default_supported(void);
+bool fep_supported(void);
bool vgif_supported(void);
bool lbrv_supported(void);
bool tsc_scale_supported(void);
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 06/14] x86/svm: Report unsupported SVM tests
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (4 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 05/14] x86/svm: Add FEP helpers for SVM tests Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 07/14] x86/svm: Move report_svm_guest() to the top of svm_tests.c Yosry Ahmed
` (9 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Print a message when a test is skipped due to being unsupported for
better visibility.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/x86/svm.c b/x86/svm.c
index 035367a1e90cf..5015339ddb657 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -403,8 +403,10 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
for (; svm_tests[i].name != NULL; i++) {
if (!test_wanted(svm_tests[i].name, av, ac))
continue;
- if (svm_tests[i].supported && !svm_tests[i].supported())
+ if (svm_tests[i].supported && !svm_tests[i].supported()) {
+ report_skip("%s (not supported)", svm_tests[i].name);
continue;
+ }
if (svm_tests[i].v2 == NULL) {
if (svm_tests[i].on_vcpu) {
if (cpu_count() <= svm_tests[i].on_vcpu)
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 07/14] x86/svm: Move report_svm_guest() to the top of svm_tests.c
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (5 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 06/14] x86/svm: Report unsupported " Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 08/14] x86/svm: Print SVM test names before running tests Yosry Ahmed
` (8 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Move the macro ahead of other tests that will start using it.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index feeb27d61435b..61ab63db462dc 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -37,6 +37,21 @@ u64 latclgi_max;
u64 latclgi_min;
u64 runs;
+/*
+ * Report failures from SVM guest code, and on failure, set the stage to -1 and
+ * do VMMCALL to terminate the test (host side must treat -1 as "finished").
+ * TODO: fix the tests that don't play nice with a straight report, e.g. the
+ * V_TPR test fails if report() is invoked.
+ */
+#define report_svm_guest(cond, test, fmt, args...) \
+do { \
+ if (!(cond)) { \
+ report_fail(fmt, ##args); \
+ set_test_stage(test, -1); \
+ vmmcall(); \
+ } \
+} while (0)
+
static void null_test(struct svm_test *test)
{
}
@@ -1074,21 +1089,6 @@ static bool lat_svm_insn_check(struct svm_test *test)
return true;
}
-/*
- * Report failures from SVM guest code, and on failure, set the stage to -1 and
- * do VMMCALL to terminate the test (host side must treat -1 as "finished").
- * TODO: fix the tests that don't play nice with a straight report, e.g. the
- * V_TPR test fails if report() is invoked.
- */
-#define report_svm_guest(cond, test, fmt, args...) \
-do { \
- if (!(cond)) { \
- report_fail(fmt, ##args); \
- set_test_stage(test, -1); \
- vmmcall(); \
- } \
-} while (0)
-
bool pending_event_ipi_fired;
bool pending_event_guest_run;
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 08/14] x86/svm: Print SVM test names before running tests
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (6 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 07/14] x86/svm: Move report_svm_guest() to the top of svm_tests.c Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test Yosry Ahmed
` (7 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
When SVM tests are run, the log is a sea of PASS/FAIL/SKIPs that are not
clearly separated by test. Sometimes it's hard to attribute a failure to
a specific test (e.g. if the same helper is reused by multiple tests).
Print the test name before running each test.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/x86/svm.c b/x86/svm.c
index 5015339ddb657..de9eb19443caa 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -407,6 +407,7 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
report_skip("%s (not supported)", svm_tests[i].name);
continue;
}
+ printf("SVM test: %s\n", svm_tests[i].name);
if (svm_tests[i].v2 == NULL) {
if (svm_tests[i].on_vcpu) {
if (cpu_count() <= svm_tests[i].on_vcpu)
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (7 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 08/14] x86/svm: Print SVM test names before running tests Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-14 0:34 ` Sean Christopherson
2025-11-10 23:26 ` [PATCH v3 10/14] x86/svm: Generalize and improve selective CR0 write intercept test Yosry Ahmed
` (6 subsequent siblings)
15 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
On an AMD Turin (EPYC Zen 5), svm_tsc_scale_test flakes on the last test
case with a 0.0001 TSC scaling ratio, even with the 24-bit shift for
stability. On failure, the actual value is 49 instead of the expected
50.
Use a higher scaling ratio, 0.001, which makes the test pass
consistently.
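One source of imprecision with tiny ratios can be illustrated numerically: SVM's TSC ratio MSR holds an 8.32 fixed-point value, so a decimal ratio like 0.0001 is truncated when programmed, and the relative truncation error is over an order of magnitude larger at 0.0001 than at 0.001. This is only a plausible contributor to the flake, not a claim about its exact cause.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of fixed-point truncation for SVM's 8.32 TSC ratio encoding
 * (8 integer bits, 32 fraction bits). Smaller ratios lose relatively
 * more precision to truncation.
 */
static uint64_t ratio_to_fixed_8_32(double ratio)
{
	return (uint64_t)(ratio * 4294967296.0); /* ratio * 2^32, truncated */
}

static double relative_truncation_error(double ratio)
{
	double programmed = (double)ratio_to_fixed_8_32(ratio) / 4294967296.0;

	return (ratio - programmed) / ratio;
}
```

For 0.0001 the relative error is roughly 1.7e-6, versus roughly 6.9e-8 for 0.001, so the larger ratio leaves more margin in a short measurement window.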
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 61ab63db462dc..1e7556a37adec 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -943,7 +943,7 @@ static void svm_tsc_scale_test(void)
}
svm_tsc_scale_run_testcase(50, 255, rdrand());
- svm_tsc_scale_run_testcase(50, 0.0001, rdrand());
+ svm_tsc_scale_run_testcase(50, 0.001, rdrand());
}
static void latency_prepare(struct svm_test *test)
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 10/14] x86/svm: Generalize and improve selective CR0 write intercept test
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (8 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 11/14] x86/svm: Add more selective CR0 write and LMSW test cases Yosry Ahmed
` (5 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
In preparation for adding more test cases, make the test easier to
extend. Create a generic helper that sets an arbitrary bit in CR0,
optionally using FEP. The helper also stores the value to be written in
test->scratch, making it possible to double-check whether the write was
actually executed.
Use report_svm_guest() instead of report_fail() + exit().
Make test_sel_cr0_write_intercept() use the generic helper, and add
another test case that uses FEP to exercise the interception path in the
emulator.
Finally, in check_sel_cr0_intercept(), also check that the write was not
executed by comparing the CR0 value in the VMCB12 with the
value-to-be-written stored in test->scratch.
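The scratch-based verification can be modeled standalone: the guest records the value it is about to write, and the host-side check then requires both the expected exit code and that the write never landed. The structs below are simplified stand-ins for the real VMCB/test state, not the actual kvm-unit-tests types; 0x065 is SVM's CR0_SEL_WRITE exit code.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of the "record then verify" pattern: success means the
 * selective CR0 write exit fired AND the guest CR0 in the VMCB still
 * differs from the value the guest tried to write.
 */
#define SVM_EXIT_CR0_SEL_WRITE 0x065

struct model_vmcb {
	unsigned long save_cr0; /* stand-in for vmcb->save.cr0 */
	unsigned int exit_code; /* stand-in for vmcb->control.exit_code */
};

struct model_test {
	unsigned long scratch;  /* value the guest attempted to write */
};

static bool check_sel_cr0_intercepted(const struct model_vmcb *vmcb,
				      const struct model_test *test)
{
	return vmcb->exit_code == SVM_EXIT_CR0_SEL_WRITE &&
	       vmcb->save_cr0 != test->scratch;
}
```

This catches the case where the intercept fires but the write is (incorrectly) completed anyway, which a bare exit-code check would miss.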
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 38 +++++++++++++++++++++++++-------------
1 file changed, 25 insertions(+), 13 deletions(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 1e7556a37adec..7e292a9a7b4ec 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -133,27 +133,36 @@ static void prepare_sel_cr0_intercept(struct svm_test *test)
vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
}
-static void test_sel_cr0_write_intercept(struct svm_test *test)
+static void __test_cr0_write_bit(struct svm_test *test, unsigned long bit,
+ bool intercept, bool fep)
{
unsigned long cr0;
- /* read cr0, set CD, and write back */
- cr0 = read_cr0();
- cr0 |= X86_CR0_CD;
- write_cr0(cr0);
+ cr0 = read_cr0();
+ cr0 |= bit;
+ test->scratch = cr0;
- /*
- * If we are here the test failed, not sure what to do now because we
- * are not in guest-mode anymore so we can't trigger an intercept.
- * Trigger a tripple-fault for now.
- */
- report_fail("sel_cr0 test. Can not recover from this - exiting");
- exit(report_summary());
+ asm_conditional_fep_safe(fep, "mov %0,%%cr0", "r"(cr0));
+
+ /* This code should be unreachable when an intercept is expected */
+ report_svm_guest(!intercept, test, "Expected intercept on CR0 write");
+}
+
+/* MOV-to-CR0 updating CR0.CD is intercepted by the selective intercept */
+static void test_sel_cr0_write_intercept(struct svm_test *test)
+{
+ __test_cr0_write_bit(test, X86_CR0_CD, true, false);
+}
+
+static void test_sel_cr0_write_intercept_emul(struct svm_test *test)
+{
+ __test_cr0_write_bit(test, X86_CR0_CD, true, true);
}
static bool check_sel_cr0_intercept(struct svm_test *test)
{
- return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
+ return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE &&
+ vmcb->save.cr0 != test->scratch;
}
static void prepare_cr3_intercept(struct svm_test *test)
@@ -3461,6 +3470,9 @@ struct svm_test svm_tests[] = {
{ "sel cr0 write intercept", default_supported,
prepare_sel_cr0_intercept, default_prepare_gif_clear,
test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
+ { "sel cr0 write intercept emulate", fep_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_write_intercept_emul, default_finished, check_sel_cr0_intercept},
{ "cr3 read intercept", default_supported,
prepare_cr3_intercept, default_prepare_gif_clear,
test_cr3_intercept, default_finished, check_cr3_intercept },
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 11/14] x86/svm: Add more selective CR0 write and LMSW test cases
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (9 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 10/14] x86/svm: Generalize and improve selective CR0 write intercept test Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 12/14] x86/svm: Cleanup LBRV tests Yosry Ahmed
` (4 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Add more test cases that cover:
- The priority between selective and non-selective CR0 intercepts.
- Writes to CR0 that should not intercept (e.g. CR0.MP).
- Writes to CR0 using LMSW, which should always intercept (even when
updating CR0.MP).
Emulator variants of all test cases are added as well.
The new tests exercise bugs fixed by:
https://lore.kernel.org/kvm/20251024192918.3191141-1-yosry.ahmed@linux.dev/.
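The architectural behavior these cases cover can be sketched as a small decision function: with the selective CR0 intercept armed, MOV-to-CR0 is intercepted only when a bit other than CR0.TS or CR0.MP changes, while LMSW is intercepted unconditionally (as the commit message states). This is a simplified model of the rule, ignoring the separate non-selective write intercept and its higher priority.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of the selective CR0 write intercept rule: LMSW always
 * triggers it; MOV-to-CR0 triggers it only when a bit outside TS/MP
 * changes. Ignores the non-selective intercept, which takes priority.
 */
#define X86_CR0_MP (1UL << 1)
#define X86_CR0_TS (1UL << 3)
#define X86_CR0_CD (1UL << 30)

static bool sel_cr0_write_intercepted(unsigned long old_cr0,
				      unsigned long new_cr0, bool is_lmsw)
{
	if (is_lmsw)
		return true;
	return ((old_cr0 ^ new_cr0) & ~(X86_CR0_TS | X86_CR0_MP)) != 0;
}
```

This maps directly onto the three families of cases above: CD via MOV intercepts, MP via MOV does not, and MP via LMSW does.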
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 72 insertions(+), 4 deletions(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 7e292a9a7b4ec..40e9e7e344ed8 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -129,20 +129,36 @@ static bool finished_rsm_intercept(struct svm_test *test)
static void prepare_sel_cr0_intercept(struct svm_test *test)
{
+ /* Clear CR0.MP and CR0.CD as the tests will set either of them */
+ vmcb->save.cr0 &= ~X86_CR0_MP;
vmcb->save.cr0 &= ~X86_CR0_CD;
vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
}
+static void prepare_sel_nonsel_cr0_intercepts(struct svm_test *test)
+{
+ /* Clear CR0.MP and CR0.CD as the tests will set either of them */
+ vmcb->save.cr0 &= ~X86_CR0_MP;
+ vmcb->save.cr0 &= ~X86_CR0_CD;
+ vmcb->control.intercept_cr_write |= (1ULL << 0);
+ vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+}
+
static void __test_cr0_write_bit(struct svm_test *test, unsigned long bit,
- bool intercept, bool fep)
+ bool is_lmsw, bool intercept, bool fep)
{
+ unsigned short msw;
unsigned long cr0;
cr0 = read_cr0();
cr0 |= bit;
+ msw = cr0 & 0xfUL;
test->scratch = cr0;
- asm_conditional_fep_safe(fep, "mov %0,%%cr0", "r"(cr0));
+ if (is_lmsw)
+ asm_conditional_fep_safe(fep, "lmsw %0", "r"(msw));
+ else
+ asm_conditional_fep_safe(fep, "mov %0,%%cr0", "r"(cr0));
/* This code should be unreachable when an intercept is expected */
report_svm_guest(!intercept, test, "Expected intercept on CR0 write");
@@ -151,12 +167,34 @@ static void __test_cr0_write_bit(struct svm_test *test, unsigned long bit,
/* MOV-to-CR0 updating CR0.CD is intercepted by the selective intercept */
static void test_sel_cr0_write_intercept(struct svm_test *test)
{
- __test_cr0_write_bit(test, X86_CR0_CD, true, false);
+ __test_cr0_write_bit(test, X86_CR0_CD, false, true, false);
}
static void test_sel_cr0_write_intercept_emul(struct svm_test *test)
{
- __test_cr0_write_bit(test, X86_CR0_CD, true, true);
+ __test_cr0_write_bit(test, X86_CR0_CD, false, true, true);
+}
+
+/* MOV-to-CR0 updating CR0.MP is NOT intercepted by the selective intercept */
+static void test_sel_cr0_write_nointercept(struct svm_test *test)
+{
+ __test_cr0_write_bit(test, X86_CR0_MP, false, false, false);
+}
+
+static void test_sel_cr0_write_nointercept_emul(struct svm_test *test)
+{
+ __test_cr0_write_bit(test, X86_CR0_MP, false, false, true);
+}
+
+/* LMSW updating CR0.MP is intercepted by the selective intercept */
+static void test_sel_cr0_lmsw_intercept(struct svm_test *test)
+{
+ __test_cr0_write_bit(test, X86_CR0_MP, true, false, false);
+}
+
+static void test_sel_cr0_lmsw_intercept_emul(struct svm_test *test)
+{
+ __test_cr0_write_bit(test, X86_CR0_MP, true, false, true);
}
static bool check_sel_cr0_intercept(struct svm_test *test)
@@ -165,6 +203,18 @@ static bool check_sel_cr0_intercept(struct svm_test *test)
vmcb->save.cr0 != test->scratch;
}
+static bool check_nonsel_cr0_intercept(struct svm_test *test)
+{
+ return vmcb->control.exit_code == SVM_EXIT_WRITE_CR0 &&
+ vmcb->save.cr0 != test->scratch;
+}
+
+static bool check_cr0_nointercept(struct svm_test *test)
+{
+ return vmcb->control.exit_code == SVM_EXIT_VMMCALL &&
+ vmcb->save.cr0 == test->scratch;
+}
+
static void prepare_cr3_intercept(struct svm_test *test)
{
default_prepare(test);
@@ -3473,6 +3523,24 @@ struct svm_test svm_tests[] = {
{ "sel cr0 write intercept emulate", fep_supported,
prepare_sel_cr0_intercept, default_prepare_gif_clear,
test_sel_cr0_write_intercept_emul, default_finished, check_sel_cr0_intercept},
+ { "sel cr0 write intercept priority", default_supported,
+ prepare_sel_nonsel_cr0_intercepts, default_prepare_gif_clear,
+ test_sel_cr0_write_intercept, default_finished, check_nonsel_cr0_intercept},
+ { "sel cr0 write intercept priority emulate", fep_supported,
+ prepare_sel_nonsel_cr0_intercepts, default_prepare_gif_clear,
+ test_sel_cr0_write_intercept_emul, default_finished, check_nonsel_cr0_intercept},
+ { "sel cr0 write nointercept", default_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_write_nointercept, default_finished, check_cr0_nointercept},
+ { "sel cr0 write nointercept emulate", fep_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_write_nointercept_emul, default_finished, check_cr0_nointercept},
+ { "sel cr0 lmsw intercept", default_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_lmsw_intercept, default_finished, check_sel_cr0_intercept},
+ { "sel cr0 lmsw intercept emulate", fep_supported,
+ prepare_sel_cr0_intercept, default_prepare_gif_clear,
+ test_sel_cr0_lmsw_intercept_emul, default_finished, check_sel_cr0_intercept},
{ "cr3 read intercept", default_supported,
prepare_cr3_intercept, default_prepare_gif_clear,
test_cr3_intercept, default_finished, check_cr3_intercept },
--
2.51.2.1041.gc1ab5b90ca-goog
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH v3 12/14] x86/svm: Cleanup LBRV tests
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (10 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 11/14] x86/svm: Add more selective CR0 write and LMSW test cases Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-13 11:58 ` Shivansh Dhiman
2025-11-10 23:26 ` [PATCH v3 13/14] x86/svm: Add more LBRV test cases Yosry Ahmed
` (3 subsequent siblings)
15 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
In LBRV tests, failures in the guest trigger a #UD and do not convey
useful debugging info (i.e. expected and actual values of MSRs). Also, a
lot of macros are used to perform branch checks, obscuring the tests to
an extent.
Instead, add a helper to read the branch IPs, and remove the check
macros. Consistently use TEST_EXPECT_EQ() in both virtual host and guest
code, instead of a mix of report(), TEST_EXPECT_EQ(), and #UD.
Opportunistically reorder the test checks slightly to improve readability, and
replace the report(true, ...) calls that document the tests with comments.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 138 +++++++++++++++++++++++-------------------------
1 file changed, 65 insertions(+), 73 deletions(-)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 40e9e7e344ed8..33c92b17c87db 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3006,34 +3006,17 @@ static void svm_no_nm_test(void)
"fnop with CR0.TS and CR0.EM unset no #NM exception");
}
-static u64 amd_get_lbr_rip(u32 msr)
+/* These functions have to be inlined to avoid affecting LBR registers */
+static __always_inline u64 amd_get_lbr_rip(u32 msr)
{
return rdmsr(msr) & ~AMD_LBR_RECORD_MISPREDICT;
}
-#define HOST_CHECK_LBR(from_expected, to_expected) \
-do { \
- TEST_EXPECT_EQ((u64)from_expected, amd_get_lbr_rip(MSR_IA32_LASTBRANCHFROMIP)); \
- TEST_EXPECT_EQ((u64)to_expected, amd_get_lbr_rip(MSR_IA32_LASTBRANCHTOIP)); \
-} while (0)
-
-/*
- * FIXME: Do something other than generate an exception to communicate failure.
- * Debugging without expected vs. actual is an absolute nightmare.
- */
-#define GUEST_CHECK_LBR(from_expected, to_expected) \
-do { \
- if ((u64)(from_expected) != amd_get_lbr_rip(MSR_IA32_LASTBRANCHFROMIP)) \
- asm volatile("ud2"); \
- if ((u64)(to_expected) != amd_get_lbr_rip(MSR_IA32_LASTBRANCHTOIP)) \
- asm volatile("ud2"); \
-} while (0)
-
-#define REPORT_GUEST_LBR_ERROR(vmcb) \
- report(false, "LBR guest test failed. Exit reason 0x%x, RIP = %lx, from = %lx, to = %lx, ex from = %lx, ex to = %lx", \
- vmcb->control.exit_code, vmcb->save.rip, \
- vmcb->save.br_from, vmcb->save.br_to, \
- vmcb->save.last_excp_from, vmcb->save.last_excp_to)
+static __always_inline void get_lbr_ips(u64 *from, u64 *to)
+{
+ *from = amd_get_lbr_rip(MSR_IA32_LASTBRANCHFROMIP);
+ *to = amd_get_lbr_rip(MSR_IA32_LASTBRANCHTOIP);
+}
#define DO_BRANCH(branch_name) \
asm volatile ( \
@@ -3058,55 +3041,64 @@ u64 dbgctl;
static void svm_lbrv_test_guest1(void)
{
+ u64 from_ip, to_ip;
+
/*
* This guest expects the LBR to be already enabled when it starts,
* it does a branch, and then disables the LBR and then checks.
*/
+ dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
DO_BRANCH(guest_branch0);
- dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ /* Disable LBR before the checks to avoid changing the last branch */
wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+ dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, 0);
- if (dbgctl != DEBUGCTLMSR_LBR)
- asm volatile("ud2\n");
- if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
- asm volatile("ud2\n");
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch0_from, from_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch0_to, to_ip);
- GUEST_CHECK_LBR(&guest_branch0_from, &guest_branch0_to);
asm volatile ("vmmcall\n");
}
static void svm_lbrv_test_guest2(void)
{
+ u64 from_ip, to_ip;
+
/*
* This guest expects the LBR to be disabled when it starts,
* enables it, does a branch, disables it and then checks.
*/
-
- DO_BRANCH(guest_branch1);
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, 0);
- if (dbgctl != 0)
- asm volatile("ud2\n");
+ DO_BRANCH(guest_branch1);
- GUEST_CHECK_LBR(&host_branch2_from, &host_branch2_to);
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&host_branch2_from, from_ip);
+ TEST_EXPECT_EQ((u64)&host_branch2_to, to_ip);
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
+
DO_BRANCH(guest_branch2);
wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
- if (dbgctl != DEBUGCTLMSR_LBR)
- asm volatile("ud2\n");
- GUEST_CHECK_LBR(&guest_branch2_from, &guest_branch2_to);
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch2_from, from_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch2_to, to_ip);
asm volatile ("vmmcall\n");
}
static void svm_lbrv_test0(void)
{
- report(true, "Basic LBR test");
+ u64 from_ip, to_ip;
+
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
DO_BRANCH(host_branch0);
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
@@ -3116,12 +3108,15 @@ static void svm_lbrv_test0(void)
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
TEST_EXPECT_EQ(dbgctl, 0);
- HOST_CHECK_LBR(&host_branch0_from, &host_branch0_to);
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&host_branch0_from, from_ip);
+ TEST_EXPECT_EQ((u64)&host_branch0_to, to_ip);
}
+/* Test that without LBRV enabled, guest LBR state does 'leak' to the host(1) */
static void svm_lbrv_test1(void)
{
- report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(1)");
+ u64 from_ip, to_ip;
svm_setup_vmrun((u64)svm_lbrv_test_guest1);
vmcb->control.virt_ext = 0;
@@ -3130,19 +3125,19 @@ static void svm_lbrv_test1(void)
DO_BRANCH(host_branch1);
SVM_BARE_VMRUN;
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, 0);
- if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
- REPORT_GUEST_LBR_ERROR(vmcb);
- return;
- }
+ TEST_EXPECT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
- TEST_EXPECT_EQ(dbgctl, 0);
- HOST_CHECK_LBR(&guest_branch0_from, &guest_branch0_to);
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch0_from, from_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch0_to, to_ip);
}
+/* Test that without LBRV enabled, guest LBR state does 'leak' to the host(2) */
static void svm_lbrv_test2(void)
{
- report(true, "Test that without LBRV enabled, guest LBR state does 'leak' to the host(2)");
+ u64 from_ip, to_ip;
svm_setup_vmrun((u64)svm_lbrv_test_guest2);
vmcb->control.virt_ext = 0;
@@ -3152,25 +3147,25 @@ static void svm_lbrv_test2(void)
wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
SVM_BARE_VMRUN;
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
- wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+ TEST_EXPECT_EQ(dbgctl, 0);
- if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
- REPORT_GUEST_LBR_ERROR(vmcb);
- return;
- }
+ TEST_EXPECT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
- TEST_EXPECT_EQ(dbgctl, 0);
- HOST_CHECK_LBR(&guest_branch2_from, &guest_branch2_to);
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch2_from, from_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch2_to, to_ip);
}
+/* Test that with LBRV enabled, guest LBR state doesn't leak (1) */
static void svm_lbrv_nested_test1(void)
{
+ u64 from_ip, to_ip;
+
if (!lbrv_supported()) {
report_skip("LBRV not supported in the guest");
return;
}
- report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (1)");
svm_setup_vmrun((u64)svm_lbrv_test_guest1);
vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
@@ -3181,28 +3176,26 @@ static void svm_lbrv_nested_test1(void)
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
- if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
- REPORT_GUEST_LBR_ERROR(vmcb);
- return;
- }
-
- if (vmcb->save.dbgctl != 0) {
- report(false, "unexpected virtual guest MSR_IA32_DEBUGCTLMSR value 0x%lx", vmcb->save.dbgctl);
- return;
- }
+ TEST_EXPECT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
- HOST_CHECK_LBR(&host_branch3_from, &host_branch3_to);
+ TEST_EXPECT_EQ(vmcb->save.dbgctl, 0);
+
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&host_branch3_from, from_ip);
+ TEST_EXPECT_EQ((u64)&host_branch3_to, to_ip);
}
+/* Test that with LBRV enabled, guest LBR state doesn't leak (2) */
static void svm_lbrv_nested_test2(void)
{
+ u64 from_ip, to_ip;
+
if (!lbrv_supported()) {
report_skip("LBRV not supported in the guest");
return;
}
- report(true, "Test that with LBRV enabled, guest LBR state doesn't leak (2)");
svm_setup_vmrun((u64)svm_lbrv_test_guest2);
vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
@@ -3215,14 +3208,13 @@ static void svm_lbrv_nested_test2(void)
SVM_BARE_VMRUN;
dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+ TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
- if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
- REPORT_GUEST_LBR_ERROR(vmcb);
- return;
- }
+ TEST_EXPECT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
- TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
- HOST_CHECK_LBR(&host_branch4_from, &host_branch4_to);
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&host_branch4_from, from_ip);
+ TEST_EXPECT_EQ((u64)&host_branch4_to, to_ip);
}
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 13/14] x86/svm: Add more LBRV test cases
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (11 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 12/14] x86/svm: Cleanup LBRV tests Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 14/14] x86/svm: Rename VMCB fields to match KVM Yosry Ahmed
` (2 subsequent siblings)
15 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Add tests that exercise using LBR, disabling it, then running a guest which
enables and uses LBR but does not disable it.
Make sure that when LBRV is disabled by the virtual host, the guest state
correctly leaks into the virtual host, but not when LBRV is enabled. This
also exercises KVM disabling intercepts for the LBR MSRs in L2 but
re-enabling them when exiting to L1.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm_tests.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 33c92b17c87db..47a2edfbb6c9b 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3031,11 +3031,14 @@ static __always_inline void get_lbr_ips(u64 *from, u64 *to)
extern u64 guest_branch0_from, guest_branch0_to;
extern u64 guest_branch2_from, guest_branch2_to;
+extern u64 guest_branch3_from, guest_branch3_to;
extern u64 host_branch0_from, host_branch0_to;
extern u64 host_branch2_from, host_branch2_to;
extern u64 host_branch3_from, host_branch3_to;
extern u64 host_branch4_from, host_branch4_to;
+extern u64 host_branch5_from, host_branch5_to;
+extern u64 host_branch6_from, host_branch6_to;
u64 dbgctl;
@@ -3095,6 +3098,23 @@ static void svm_lbrv_test_guest2(void)
asm volatile ("vmmcall\n");
}
+static void svm_lbrv_test_guest3(void)
+{
+ /*
+ * This guest expects LBR to be disabled, it enables LBR and does a
+ * branch, then exits to L1 without disabling LBR or doing more
+ * branches.
+ */
+ dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, 0);
+
+ wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+ DO_BRANCH(guest_branch3);
+
+ /* Do not call the vmmcall() fn to avoid overwriting the last branch */
+ asm volatile ("vmmcall\n\t");
+}
+
static void svm_lbrv_test0(void)
{
u64 from_ip, to_ip;
@@ -3156,6 +3176,33 @@ static void svm_lbrv_test2(void)
TEST_EXPECT_EQ((u64)&guest_branch2_to, to_ip);
}
+/*
+ * Test that without LBRV enabled, enabling LBR in the guest then exiting will
+ * keep LBR enabled and 'leak' state to the host correctly.
+ */
+static void svm_lbrv_test3(void)
+{
+ u64 from_ip, to_ip;
+
+ svm_setup_vmrun((u64)svm_lbrv_test_guest3);
+ vmcb->control.virt_ext = 0;
+
+ wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+ DO_BRANCH(host_branch5);
+ wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+ SVM_BARE_VMRUN;
+ dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+ TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
+
+ TEST_EXPECT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch3_from, from_ip);
+ TEST_EXPECT_EQ((u64)&guest_branch3_to, to_ip);
+}
+
/* Test that with LBRV enabled, guest LBR state doesn't leak (1) */
static void svm_lbrv_nested_test1(void)
{
@@ -3217,6 +3264,37 @@ static void svm_lbrv_nested_test2(void)
TEST_EXPECT_EQ((u64)&host_branch4_to, to_ip);
}
+/*
+ * Test that with LBRV enabled, enabling LBR in the guest then exiting does not
+ * 'leak' state to the host.
+ */
+static void svm_lbrv_nested_test3(void)
+{
+ u64 from_ip, to_ip;
+
+ if (!lbrv_supported()) {
+ report_skip("LBRV not supported in the guest");
+ return;
+ }
+
+ svm_setup_vmrun((u64)svm_lbrv_test_guest3);
+ vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+ vmcb->save.dbgctl = 0;
+
+ wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+ DO_BRANCH(host_branch6);
+ wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+
+ SVM_BARE_VMRUN;
+ dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
+ TEST_EXPECT_EQ(dbgctl, 0);
+
+ TEST_EXPECT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+
+ get_lbr_ips(&from_ip, &to_ip);
+ TEST_EXPECT_EQ((u64)&host_branch6_from, from_ip);
+ TEST_EXPECT_EQ((u64)&host_branch6_to, to_ip);
+}
// test that a nested guest which does enable INTR interception
// but doesn't enable virtual interrupt masking works
@@ -3622,8 +3700,10 @@ struct svm_test svm_tests[] = {
TEST(svm_lbrv_test0),
TEST(svm_lbrv_test1),
TEST(svm_lbrv_test2),
+ TEST(svm_lbrv_test3),
TEST(svm_lbrv_nested_test1),
TEST(svm_lbrv_nested_test2),
+ TEST(svm_lbrv_nested_test3),
TEST(svm_intr_intercept_mix_if),
TEST(svm_intr_intercept_mix_gif),
TEST(svm_intr_intercept_mix_gif2),
--
2.51.2.1041.gc1ab5b90ca-goog
* [PATCH v3 14/14] x86/svm: Rename VMCB fields to match KVM
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (12 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 13/14] x86/svm: Add more LBRV test cases Yosry Ahmed
@ 2025-11-10 23:26 ` Yosry Ahmed
2025-11-14 0:40 ` Sean Christopherson
2025-11-11 0:52 ` [PATCH v3 00/14] Improvements for (nested) SVM testing Sean Christopherson
2025-11-14 0:46 ` Sean Christopherson
15 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-10 23:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel, Yosry Ahmed
Rename nested_ctl and virt_ext to misc_ctl and misc_ctl2, respectively,
to match new names in KVM code.
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
x86/svm.c | 2 +-
x86/svm.h | 6 +++---
x86/svm_tests.c | 12 ++++++------
3 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/x86/svm.c b/x86/svm.c
index de9eb19443caa..c40ef154bcacd 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -200,7 +200,7 @@ void vmcb_ident(struct vmcb *vmcb)
ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
if (npt_supported()) {
- ctrl->nested_ctl = 1;
+ ctrl->misc_ctl = 1;
ctrl->nested_cr3 = (u64)pml4e;
ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
diff --git a/x86/svm.h b/x86/svm.h
index 264583a6547ef..00d28199f65f5 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -94,12 +94,12 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u64 exit_info_2;
u32 exit_int_info;
u32 exit_int_info_err;
- u64 nested_ctl;
+ u64 misc_ctl;
u8 reserved_4[16];
u32 event_inj;
u32 event_inj_err;
u64 nested_cr3;
- u64 virt_ext;
+ u64 misc_ctl2;
u32 clean;
u32 reserved_5;
u64 next_rip;
@@ -370,7 +370,7 @@ struct __attribute__ ((__packed__)) vmcb {
#define MSR_BITMAP_SIZE 8192
-#define LBR_CTL_ENABLE_MASK BIT_ULL(0)
+#define SVM_MISC_CTL2_LBR_CTL_ENABLE BIT_ULL(0)
struct svm_test {
const char *name;
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 47a2edfbb6c9b..49b5906965b7e 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -3139,7 +3139,7 @@ static void svm_lbrv_test1(void)
u64 from_ip, to_ip;
svm_setup_vmrun((u64)svm_lbrv_test_guest1);
- vmcb->control.virt_ext = 0;
+ vmcb->control.misc_ctl2 = 0;
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
DO_BRANCH(host_branch1);
@@ -3160,7 +3160,7 @@ static void svm_lbrv_test2(void)
u64 from_ip, to_ip;
svm_setup_vmrun((u64)svm_lbrv_test_guest2);
- vmcb->control.virt_ext = 0;
+ vmcb->control.misc_ctl2 = 0;
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
DO_BRANCH(host_branch2);
@@ -3185,7 +3185,7 @@ static void svm_lbrv_test3(void)
u64 from_ip, to_ip;
svm_setup_vmrun((u64)svm_lbrv_test_guest3);
- vmcb->control.virt_ext = 0;
+ vmcb->control.misc_ctl2 = 0;
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
DO_BRANCH(host_branch5);
@@ -3214,7 +3214,7 @@ static void svm_lbrv_nested_test1(void)
}
svm_setup_vmrun((u64)svm_lbrv_test_guest1);
- vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+ vmcb->control.misc_ctl2 = SVM_MISC_CTL2_LBR_CTL_ENABLE;
vmcb->save.dbgctl = DEBUGCTLMSR_LBR;
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
@@ -3244,7 +3244,7 @@ static void svm_lbrv_nested_test2(void)
}
svm_setup_vmrun((u64)svm_lbrv_test_guest2);
- vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+ vmcb->control.misc_ctl2 = SVM_MISC_CTL2_LBR_CTL_ENABLE;
vmcb->save.dbgctl = 0;
vmcb->save.br_from = (u64)&host_branch2_from;
@@ -3278,7 +3278,7 @@ static void svm_lbrv_nested_test3(void)
}
svm_setup_vmrun((u64)svm_lbrv_test_guest3);
- vmcb->control.virt_ext = LBR_CTL_ENABLE_MASK;
+ vmcb->control.misc_ctl2 = SVM_MISC_CTL2_LBR_CTL_ENABLE;
vmcb->save.dbgctl = 0;
wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
--
2.51.2.1041.gc1ab5b90ca-goog
* Re: [PATCH v3 00/14] Improvements for (nested) SVM testing
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (13 preceding siblings ...)
2025-11-10 23:26 ` [PATCH v3 14/14] x86/svm: Rename VMCB fields to match KVM Yosry Ahmed
@ 2025-11-11 0:52 ` Sean Christopherson
2025-11-11 0:58 ` Yosry Ahmed
2025-11-14 0:46 ` Sean Christopherson
15 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2025-11-11 0:52 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
Nit, in the future, please label the patches with "kvm-unit-tests" so that it's
super obvious what they're for, e.g. [kvm-unit-tests PATCH ...].
* Re: [PATCH v3 00/14] Improvements for (nested) SVM testing
2025-11-11 0:52 ` [PATCH v3 00/14] Improvements for (nested) SVM testing Sean Christopherson
@ 2025-11-11 0:58 ` Yosry Ahmed
0 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-11 0:58 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Mon, Nov 10, 2025 at 04:52:37PM -0800, Sean Christopherson wrote:
> Nit, in the future, please label the patches with "kvm-unit-tests" so that it's
> super obvious what they're for, e.g. [kvm-unit-tests PATCH ...].
Ugh not a nit, it's actually annoying, even for me, because I use these
labels to search for my patches. Sorry about that, I had the correct
prefix in previous versions but forgot to add it this time around.
Let me know if you want me to resend to make things easier for you
(assuming you'll be the one to apply this if/when the time comes :) ).
* Re: [PATCH v3 03/14] x86/svm: Cleanup selective cr0 write intercept test
2025-11-10 23:26 ` [PATCH v3 03/14] x86/svm: Cleanup selective cr0 write intercept test Yosry Ahmed
@ 2025-11-12 13:48 ` Manali Shukla
0 siblings, 0 replies; 30+ messages in thread
From: Manali Shukla @ 2025-11-12 13:48 UTC (permalink / raw)
To: Yosry Ahmed, Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On 11/11/2025 4:56 AM, Yosry Ahmed wrote:
> Rename the test and functions to more general names describing the test
> more accurately. Use X86_CR0_CD instead of hardcoding the bitmask, and
> explicitly clear the bit in the prepare() function to make it clearer
> that it would only be set by the test.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
> x86/svm_tests.c | 22 +++++++++-------------
> 1 file changed, 9 insertions(+), 13 deletions(-)
>
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 80d5aeb108650..e911659194b3d 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -793,23 +793,19 @@ static bool check_asid_zero(struct svm_test *test)
> return vmcb->control.exit_code == SVM_EXIT_ERR;
> }
>
> -static void sel_cr0_bug_prepare(struct svm_test *test)
> +static void prepare_sel_cr0_intercept(struct svm_test *test)
> {
> + vmcb->save.cr0 &= ~X86_CR0_CD;
> vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
> }
>
> -static bool sel_cr0_bug_finished(struct svm_test *test)
> -{
> - return true;
> -}
> -
> -static void sel_cr0_bug_test(struct svm_test *test)
> +static void test_sel_cr0_write_intercept(struct svm_test *test)
> {
> unsigned long cr0;
>
> - /* read cr0, clear CD, and write back */
> + /* read cr0, set CD, and write back */
> cr0 = read_cr0();
> - cr0 |= (1UL << 30);
> + cr0 |= X86_CR0_CD;
> write_cr0(cr0);
>
> /*
> @@ -821,7 +817,7 @@ static void sel_cr0_bug_test(struct svm_test *test)
> exit(report_summary());
> }
>
> -static bool sel_cr0_bug_check(struct svm_test *test)
> +static bool check_sel_cr0_intercept(struct svm_test *test)
> {
> return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
> }
> @@ -3486,9 +3482,9 @@ struct svm_test svm_tests[] = {
> { "asid_zero", default_supported, prepare_asid_zero,
> default_prepare_gif_clear, test_asid_zero,
> default_finished, check_asid_zero },
> - { "sel_cr0_bug", default_supported, sel_cr0_bug_prepare,
> - default_prepare_gif_clear, sel_cr0_bug_test,
> - sel_cr0_bug_finished, sel_cr0_bug_check },
> + { "sel cr0 write intercept", default_supported,
> + prepare_sel_cr0_intercept, default_prepare_gif_clear,
> + test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
> { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
> default_prepare_gif_clear, tsc_adjust_test,
> default_finished, tsc_adjust_check },
LGTM
Reviewed-by: Manali Shukla <manali.shukla@amd.com>
Tested on AMD Genoa machine. This test case works as expected.
Tested-by: Manali Shukla <manali.shukla@amd.com>
* Re: [PATCH v3 04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept
2025-11-10 23:26 ` [PATCH v3 04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept Yosry Ahmed
@ 2025-11-12 13:52 ` Manali Shukla
0 siblings, 0 replies; 30+ messages in thread
From: Manali Shukla @ 2025-11-12 13:52 UTC (permalink / raw)
To: Yosry Ahmed, Sean Christopherson
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On 11/11/2025 4:56 AM, Yosry Ahmed wrote:
> It makes more semantic sense for these tests to be in close proximity.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
> x86/svm_tests.c | 64 ++++++++++++++++++++++++-------------------------
> 1 file changed, 32 insertions(+), 32 deletions(-)
>
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index e911659194b3d..feeb27d61435b 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -112,6 +112,35 @@ static bool finished_rsm_intercept(struct svm_test *test)
> return get_test_stage(test) == 2;
> }
>
> +static void prepare_sel_cr0_intercept(struct svm_test *test)
> +{
> + vmcb->save.cr0 &= ~X86_CR0_CD;
> + vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
> +}
> +
> +static void test_sel_cr0_write_intercept(struct svm_test *test)
> +{
> + unsigned long cr0;
> +
> + /* read cr0, set CD, and write back */
> + cr0 = read_cr0();
> + cr0 |= X86_CR0_CD;
> + write_cr0(cr0);
> +
> + /*
> + * If we are here the test failed, not sure what to do now because we
> + * are not in guest-mode anymore so we can't trigger an intercept.
> + * Trigger a tripple-fault for now.
> + */
> + report_fail("sel_cr0 test. Can not recover from this - exiting");
> + exit(report_summary());
> +}
> +
> +static bool check_sel_cr0_intercept(struct svm_test *test)
> +{
> + return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
> +}
> +
> static void prepare_cr3_intercept(struct svm_test *test)
> {
> default_prepare(test);
> @@ -793,35 +822,6 @@ static bool check_asid_zero(struct svm_test *test)
> return vmcb->control.exit_code == SVM_EXIT_ERR;
> }
>
> -static void prepare_sel_cr0_intercept(struct svm_test *test)
> -{
> - vmcb->save.cr0 &= ~X86_CR0_CD;
> - vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
> -}
> -
> -static void test_sel_cr0_write_intercept(struct svm_test *test)
> -{
> - unsigned long cr0;
> -
> - /* read cr0, set CD, and write back */
> - cr0 = read_cr0();
> - cr0 |= X86_CR0_CD;
> - write_cr0(cr0);
> -
> - /*
> - * If we are here the test failed, not sure what to do now because we
> - * are not in guest-mode anymore so we can't trigger an intercept.
> - * Trigger a tripple-fault for now.
> - */
> - report_fail("sel_cr0 test. Can not recover from this - exiting");
> - exit(report_summary());
> -}
> -
> -static bool check_sel_cr0_intercept(struct svm_test *test)
> -{
> - return vmcb->control.exit_code == SVM_EXIT_CR0_SEL_WRITE;
> -}
> -
> #define TSC_ADJUST_VALUE (1ll << 32)
> #define TSC_OFFSET_VALUE (~0ull << 48)
> static bool ok;
> @@ -3458,6 +3458,9 @@ struct svm_test svm_tests[] = {
> { "rsm", default_supported,
> prepare_rsm_intercept, default_prepare_gif_clear,
> test_rsm_intercept, finished_rsm_intercept, check_rsm_intercept },
> + { "sel cr0 write intercept", default_supported,
> + prepare_sel_cr0_intercept, default_prepare_gif_clear,
> + test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
> { "cr3 read intercept", default_supported,
> prepare_cr3_intercept, default_prepare_gif_clear,
> test_cr3_intercept, default_finished, check_cr3_intercept },
> @@ -3482,9 +3485,6 @@ struct svm_test svm_tests[] = {
> { "asid_zero", default_supported, prepare_asid_zero,
> default_prepare_gif_clear, test_asid_zero,
> default_finished, check_asid_zero },
> - { "sel cr0 write intercept", default_supported,
> - prepare_sel_cr0_intercept, default_prepare_gif_clear,
> - test_sel_cr0_write_intercept, default_finished, check_sel_cr0_intercept},
> { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
> default_prepare_gif_clear, tsc_adjust_test,
> default_finished, tsc_adjust_check },
You probably want to add "No functional change intended" to the
commit message, since this is just the movement of a test case.
Reviewed-by: Manali Shukla <manali.shukla@amd.com>
* Re: [PATCH v3 12/14] x86/svm: Cleanup LBRV tests
2025-11-10 23:26 ` [PATCH v3 12/14] x86/svm: Cleanup LBRV tests Yosry Ahmed
@ 2025-11-13 11:58 ` Shivansh Dhiman
2025-11-13 14:59 ` Yosry Ahmed
0 siblings, 1 reply; 30+ messages in thread
From: Shivansh Dhiman @ 2025-11-13 11:58 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Paolo Bonzini, Sean Christopherson, Kevin Cheng, kvm,
linux-kernel
Hi Yosry,
I tested this on EPYC-Turin and found that some tests seem to be a bit flaky.
See below.
On 11-11-2025 04:56, Yosry Ahmed wrote:
> @@ -3058,55 +3041,64 @@ u64 dbgctl;
>
> static void svm_lbrv_test_guest1(void)
> {
> + u64 from_ip, to_ip;
> +
> /*
> * This guest expects the LBR to be already enabled when it starts,
> * it does a branch, and then disables the LBR and then checks.
> */
> + dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> + TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
This TEST_EXPECT_EQ is run while LBR is enabled, causing it to change the last
branch. I tried moving it below wrmsr(MSR_IA32_DEBUGCTLMSR, 0) and it works
fine that way.
>
> DO_BRANCH(guest_branch0);
>
> - dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> + /* Disable LBR before the checks to avoid changing the last branch */
> wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> + dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> + TEST_EXPECT_EQ(dbgctl, 0);
>
> - if (dbgctl != DEBUGCTLMSR_LBR)
> - asm volatile("ud2\n");
> - if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
> - asm volatile("ud2\n");
> + get_lbr_ips(&from_ip, &to_ip);
> + TEST_EXPECT_EQ((u64)&guest_branch0_from, from_ip);
> + TEST_EXPECT_EQ((u64)&guest_branch0_to, to_ip);
>
> - GUEST_CHECK_LBR(&guest_branch0_from, &guest_branch0_to);
> asm volatile ("vmmcall\n");
> }
>
> static void svm_lbrv_test_guest2(void)
> {
> + u64 from_ip, to_ip;
> +
> /*
> * This guest expects the LBR to be disabled when it starts,
> * enables it, does a branch, disables it and then checks.
> */
> -
> - DO_BRANCH(guest_branch1);
> dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> + TEST_EXPECT_EQ(dbgctl, 0);
>
> - if (dbgctl != 0)
> - asm volatile("ud2\n");
> + DO_BRANCH(guest_branch1);
>
> - GUEST_CHECK_LBR(&host_branch2_from, &host_branch2_to);
> + get_lbr_ips(&from_ip, &to_ip);
> + TEST_EXPECT_EQ((u64)&host_branch2_from, from_ip);
> + TEST_EXPECT_EQ((u64)&host_branch2_to, to_ip);
>
> wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> + TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
Same thing here as well.
> +
> DO_BRANCH(guest_branch2);
> wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>
> - if (dbgctl != DEBUGCTLMSR_LBR)
> - asm volatile("ud2\n");
> - GUEST_CHECK_LBR(&guest_branch2_from, &guest_branch2_to);
> + get_lbr_ips(&from_ip, &to_ip);
> + TEST_EXPECT_EQ((u64)&guest_branch2_from, from_ip);
> + TEST_EXPECT_EQ((u64)&guest_branch2_to, to_ip);
>
> asm volatile ("vmmcall\n");
> }
Reviewed-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Other tests look good to me, and work fine.
Tested-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
* Re: [PATCH v3 12/14] x86/svm: Cleanup LBRV tests
2025-11-13 11:58 ` Shivansh Dhiman
@ 2025-11-13 14:59 ` Yosry Ahmed
2025-11-14 4:57 ` Shivansh Dhiman
0 siblings, 1 reply; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-13 14:59 UTC (permalink / raw)
To: Shivansh Dhiman
Cc: Paolo Bonzini, Sean Christopherson, Kevin Cheng, kvm,
linux-kernel
On Thu, Nov 13, 2025 at 05:28:11PM +0530, Shivansh Dhiman wrote:
> Hi Yosry,
>
> I tested this on EPYC-Turin and found that some tests seem to be a bit flaky.
> See below.
Which ones? I was also running the tests on EPYC-Turin.
>
> On 11-11-2025 04:56, Yosry Ahmed wrote:
> > @@ -3058,55 +3041,64 @@ u64 dbgctl;
> >
> > static void svm_lbrv_test_guest1(void)
> > {
> > + u64 from_ip, to_ip;
> > +
> > /*
> > * This guest expects the LBR to be already enabled when it starts,
> > * it does a branch, and then disables the LBR and then checks.
> > */
> > + dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > + TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
>
> This TEST_EXPECT_EQ is run when LBR is enabled, causing it to change last
> branch. I tried to move it below wrmsr(MSR_IA32_DEBUGCTLMSR, 0) and it works
> fine that way.
It shouldn't matter though, because the branch we care about,
DO_BRANCH(guest_branch0) below, is executed after TEST_EXPECT_EQ(). Is it
possible that the compiler reordered them for some reason?
I liked having the check here because it's easier to follow when the
checks are done at their logical place rather than delayed after
wrmsr().
>
> >
> > DO_BRANCH(guest_branch0);
> >
> > - dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > + /* Disable LBR before the checks to avoid changing the last branch */
> > wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> > + dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > + TEST_EXPECT_EQ(dbgctl, 0);
> >
> > - if (dbgctl != DEBUGCTLMSR_LBR)
> > - asm volatile("ud2\n");
> > - if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
> > - asm volatile("ud2\n");
> > + get_lbr_ips(&from_ip, &to_ip);
> > + TEST_EXPECT_EQ((u64)&guest_branch0_from, from_ip);
> > + TEST_EXPECT_EQ((u64)&guest_branch0_to, to_ip);
> >
> > - GUEST_CHECK_LBR(&guest_branch0_from, &guest_branch0_to);
> > asm volatile ("vmmcall\n");
> > }
> >
> > static void svm_lbrv_test_guest2(void)
> > {
> > + u64 from_ip, to_ip;
> > +
> > /*
> > * This guest expects the LBR to be disabled when it starts,
> > * enables it, does a branch, disables it and then checks.
> > */
> > -
> > - DO_BRANCH(guest_branch1);
> > dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > + TEST_EXPECT_EQ(dbgctl, 0);
> >
> > - if (dbgctl != 0)
> > - asm volatile("ud2\n");
> > + DO_BRANCH(guest_branch1);
> >
> > - GUEST_CHECK_LBR(&host_branch2_from, &host_branch2_to);
> > + get_lbr_ips(&from_ip, &to_ip);
> > + TEST_EXPECT_EQ((u64)&host_branch2_from, from_ip);
> > + TEST_EXPECT_EQ((u64)&host_branch2_to, to_ip);
> >
> > wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
> > dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
> > + TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
>
> Same thing here as well.
>
> > +
> > DO_BRANCH(guest_branch2);
> > wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
> >
> > - if (dbgctl != DEBUGCTLMSR_LBR)
> > - asm volatile("ud2\n");
> > - GUEST_CHECK_LBR(&guest_branch2_from, &guest_branch2_to);
> > + get_lbr_ips(&from_ip, &to_ip);
> > + TEST_EXPECT_EQ((u64)&guest_branch2_from, from_ip);
> > + TEST_EXPECT_EQ((u64)&guest_branch2_to, to_ip);
> >
> > asm volatile ("vmmcall\n");
> > }
> Reviewed-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
>
> Other tests look good to me, and work fine.
>
> Tested-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Thanks!
>
* Re: [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test
2025-11-10 23:26 ` [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test Yosry Ahmed
@ 2025-11-14 0:34 ` Sean Christopherson
2025-11-14 5:46 ` Yosry Ahmed
0 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2025-11-14 0:34 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Mon, Nov 10, 2025, Yosry Ahmed wrote:
> On an AMD Turin (EPYC Zen 5), svm_tsc_scale_test flakes on the last test
> case with 0.0001 TSC scaling ratio, even with the 24-bit shift for
> stability. On failure, the actual value is 49 instead of the expected
> 50.
>
> Use a higher scaling ratio, 0.001, which makes the test pass
> consistently.
Top-tier analysis right here :-D
I'm going to take Jim's version instead of papering over the bug.
https://lore.kernel.org/all/CALMp9eQep3H-OtqmLe3O2MsOT-Vx4y0-LordKgN+pkp04VLSWw@mail.gmail.com
* Re: [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available
2025-11-10 23:26 ` [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available Yosry Ahmed
@ 2025-11-14 0:40 ` Sean Christopherson
2025-11-14 0:47 ` Yosry Ahmed
0 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2025-11-14 0:40 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Mon, Nov 10, 2025, Yosry Ahmed wrote:
> The check to skip the test is currently performed in the guest code.
> There are a few TEST_ASSERTs that happen before the guest is run, which
> internally call report_passed(). The latter increases the number of
> passed tests.
>
> Hence, when vmx_pf_exception_test_fep is run, report_summary() does not
> return a "skip" error code because the total number of tests is larger
> than the number of skipped tests.
>
> Skip early if FEP is not available, before any assertions, such that
> report_summary() finds exactly 1 skipped test and returns the
> appropriate error code.
>
> Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
> x86/vmx_tests.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index 0b3cfe50c6142..4f214ebdbe1d9 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -10644,7 +10644,10 @@ static void vmx_pf_exception_test(void)
>
> static void vmx_pf_exception_forced_emulation_test(void)
> {
> - __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
> + if (is_fep_available)
> + __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
> + else
> + report_skip("Forced emulation prefix (FEP) not available\n");
To be consistent with other tests, and the kernel's general pattern of:
if (<error>) {
<react>
return;
}
<do useful stuff>
I'll tweak this to
if (!is_fep_available) {
report_skip("Forced emulation prefix (FEP) not available\n");
return;
}
__vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
when applying.
* Re: [PATCH v3 14/14] x86/svm: Rename VMCB fields to match KVM
2025-11-10 23:26 ` [PATCH v3 14/14] x86/svm: Rename VMCB fields to match KVM Yosry Ahmed
@ 2025-11-14 0:40 ` Sean Christopherson
0 siblings, 0 replies; 30+ messages in thread
From: Sean Christopherson @ 2025-11-14 0:40 UTC (permalink / raw)
To: Yosry Ahmed; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Mon, Nov 10, 2025, Yosry Ahmed wrote:
> Rename nested_ctl and virt_ext to misc_ctl and misc_ctl2, respectively,
> to match new names in KVM code.
I like the change, but I'm going to hold off on this patch until the KVM changes
land, e.g. so that we don't end up going in two different directions.
* Re: [PATCH v3 00/14] Improvements for (nested) SVM testing
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
` (14 preceding siblings ...)
2025-11-11 0:52 ` [PATCH v3 00/14] Improvements for (nested) SVM testing Sean Christopherson
@ 2025-11-14 0:46 ` Sean Christopherson
2025-11-14 5:42 ` Yosry Ahmed
15 siblings, 1 reply; 30+ messages in thread
From: Sean Christopherson @ 2025-11-14 0:46 UTC (permalink / raw)
To: Sean Christopherson, Yosry Ahmed
Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Mon, 10 Nov 2025 23:26:28 +0000, Yosry Ahmed wrote:
> This is a combined v2/v3 of the patch series in [1] and [2], with a
> couple of extra changes.
>
> The series mostly includes fixups and cleanups, and more importantly new
> tests for selective CR0 write intercepts and LBRV, which cover bugs
> recently fixed by [3] and [4].
>
> [...]
Applied to kvm-x86 next. In the future, please only bundle related and/or
dependent patches, especially for one-off patches. It's a lot easier on my
end (and likely for most maintainers and reviewers) to deal with separate series.
E.g. if you need to spin a new version, then you aren't spamming everyone
with 10+ patches just to rev one patch that might not even be related to the
rest. And having separate series makes it easier to select exactly what I
want to apply.
Concretely, at a glance, this could/should be 7 different patches/series:
1, 2, 3-5 + 7 + 10-11, 6, 8, 9, 12-13, 14
[01/14] scripts: Always return '2' when skipping tests
https://github.com/kvm-x86/kvm-unit-tests/commit/1825b2d46c1a
[02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available
https://github.com/kvm-x86/kvm-unit-tests/commit/cab22b23b676
[03/14] x86/svm: Cleanup selective cr0 write intercept test
https://github.com/kvm-x86/kvm-unit-tests/commit/5f57e54c42e6
[04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept
https://github.com/kvm-x86/kvm-unit-tests/commit/0fa8b9beffba
[05/14] x86/svm: Add FEP helpers for SVM tests
https://github.com/kvm-x86/kvm-unit-tests/commit/1c5e0e1c75aa
[06/14] x86/svm: Report unsupported SVM tests
https://github.com/kvm-x86/kvm-unit-tests/commit/d7e64b50d0e3
[07/14] x86/svm: Move report_svm_guest() to the top of svm_tests.c
https://github.com/kvm-x86/kvm-unit-tests/commit/9af8f8e09dff
[08/14] x86/svm: Print SVM test names before running tests
https://github.com/kvm-x86/kvm-unit-tests/commit/044c33c54661
[09/14] x86/svm: Deflake svm_tsc_scale_test
[ DROP ]
[10/14] x86/svm: Generalize and improve selective CR0 write intercept test
https://github.com/kvm-x86/kvm-unit-tests/commit/cc34f04ac665
[11/14] x86/svm: Add more selective CR0 write and LMSW test cases
https://github.com/kvm-x86/kvm-unit-tests/commit/09e2c95edefd
[12/14] x86/svm: Cleanup LBRV tests
https://github.com/kvm-x86/kvm-unit-tests/commit/114a564310f6
[13/14] x86/svm: Add more LBRV test cases
https://github.com/kvm-x86/kvm-unit-tests/commit/ffd01c54af99
[14/14] x86/svm: Rename VMCB fields to match KVM
[ WAIT ]
--
https://github.com/kvm-x86/kvm-unit-tests/tree/next
* Re: [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available
2025-11-14 0:40 ` Sean Christopherson
@ 2025-11-14 0:47 ` Yosry Ahmed
0 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-14 0:47 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Thu, Nov 13, 2025 at 04:40:12PM -0800, Sean Christopherson wrote:
> On Mon, Nov 10, 2025, Yosry Ahmed wrote:
> > The check to skip the test is currently performed in the guest code.
> > There are a few TEST_ASSERTs that happen before the guest is run, which
> > internally call report_passed(). The latter increases the number of
> > passed tests.
> >
> > Hence, when vmx_pf_exception_test_fep is run, report_summary() does not
> > return a "skip" error code because the total number of tests is larger
> > than the number of skipped tests.
> >
> > Skip early if FEP is not available, before any assertions, such that
> > report_summary() finds exactly 1 skipped test and returns the
> > appropriate error code.
> >
> > Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> > ---
> > x86/vmx_tests.c | 5 ++++-
> > 1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> > index 0b3cfe50c6142..4f214ebdbe1d9 100644
> > --- a/x86/vmx_tests.c
> > +++ b/x86/vmx_tests.c
> > @@ -10644,7 +10644,10 @@ static void vmx_pf_exception_test(void)
> >
> > static void vmx_pf_exception_forced_emulation_test(void)
> > {
> > - __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
> > + if (is_fep_available)
> > + __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
> > + else
> > + report_skip("Forced emulation prefix (FEP) not available\n");
>
> To be consistent with other tests, and the kernel's general pattern of:
>
> if (<error>) {
> <react>
> return;
> }
>
> <do useful stuff>
>
> I'll tweak this to
>
> if (!is_fep_available) {
> report_skip("Forced emulation prefix (FEP) not available\n");
> return;
> }
>
> __vmx_pf_exception_test(NULL, NULL, vmx_pf_exception_forced_emulation_test_guest);
>
> when applying.
LGTM, thanks!
* Re: [PATCH v3 12/14] x86/svm: Cleanup LBRV tests
2025-11-13 14:59 ` Yosry Ahmed
@ 2025-11-14 4:57 ` Shivansh Dhiman
2025-11-14 5:40 ` Yosry Ahmed
0 siblings, 1 reply; 30+ messages in thread
From: Shivansh Dhiman @ 2025-11-14 4:57 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Paolo Bonzini, Sean Christopherson, Kevin Cheng, kvm,
linux-kernel
Hi Yosry,
On 13-11-2025 20:29, Yosry Ahmed wrote:
> On Thu, Nov 13, 2025 at 05:28:11PM +0530, Shivansh Dhiman wrote:
>> Hi Yosry,
>>
>> I tested this on EPYC-Turin and found that some tests seem to be a bit flaky.
>> See below.
>
> Which ones? I was also running the tests on EPYC-Turin.
Most of the nested LBRV tests had this issue. I checked your other patch to fix
this. I tested it and it does fix it for me. Thanks.
>>
>> On 11-11-2025 04:56, Yosry Ahmed wrote:
>>> @@ -3058,55 +3041,64 @@ u64 dbgctl;
>>>
>>> static void svm_lbrv_test_guest1(void)
>>> {
>>> + u64 from_ip, to_ip;
>>> +
>>> /*
>>> * This guest expects the LBR to be already enabled when it starts,
>>> * it does a branch, and then disables the LBR and then checks.
>>> */
>>> + dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>>> + TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
>>
>> This TEST_EXPECT_EQ is run when LBR is enabled, causing it to change last
>> branch. I tried to move it below wrmsr(MSR_IA32_DEBUGCTLMSR, 0) and it works
>> fine that way.
>
> It shouldn't matter though because we execute the branch we care about
> after TEST_EXPECT_EQ(), it's DO_BRANCH(guest_branch0) below. Is it
> possible that the compiler reordered them for some reason?
>
> I liked having the check here because it's easier to follow when the
> checks are done at their logical place rather than delayed after
> wrmsr().
Correct, that should be the natural order.
>>
>>>
>>> DO_BRANCH(guest_branch0);
>>>
>>> - dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>>> + /* Disable LBR before the checks to avoid changing the last branch */
>>> wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>>> + dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>>> + TEST_EXPECT_EQ(dbgctl, 0);
>>>
>>> - if (dbgctl != DEBUGCTLMSR_LBR)
>>> - asm volatile("ud2\n");
>>> - if (rdmsr(MSR_IA32_DEBUGCTLMSR) != 0)
>>> - asm volatile("ud2\n");
>>> + get_lbr_ips(&from_ip, &to_ip);
>>> + TEST_EXPECT_EQ((u64)&guest_branch0_from, from_ip);
>>> + TEST_EXPECT_EQ((u64)&guest_branch0_to, to_ip);
>>>
>>> - GUEST_CHECK_LBR(&guest_branch0_from, &guest_branch0_to);
>>> asm volatile ("vmmcall\n");
>>> }
>>>
>>> static void svm_lbrv_test_guest2(void)
>>> {
>>> + u64 from_ip, to_ip;
>>> +
>>> /*
>>> * This guest expects the LBR to be disabled when it starts,
>>> * enables it, does a branch, disables it and then checks.
>>> */
>>> -
>>> - DO_BRANCH(guest_branch1);
>>> dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>>> + TEST_EXPECT_EQ(dbgctl, 0);
>>>
>>> - if (dbgctl != 0)
>>> - asm volatile("ud2\n");
>>> + DO_BRANCH(guest_branch1);
>>>
>>> - GUEST_CHECK_LBR(&host_branch2_from, &host_branch2_to);
>>> + get_lbr_ips(&from_ip, &to_ip);
>>> + TEST_EXPECT_EQ((u64)&host_branch2_from, from_ip);
>>> + TEST_EXPECT_EQ((u64)&host_branch2_to, to_ip);
>>>
>>> wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
>>> dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
>>> + TEST_EXPECT_EQ(dbgctl, DEBUGCTLMSR_LBR);
>>
>> Same thing here as well.
>>
>>> +
>>> DO_BRANCH(guest_branch2);
>>> wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
>>>
>>> - if (dbgctl != DEBUGCTLMSR_LBR)
>>> - asm volatile("ud2\n");
>>> - GUEST_CHECK_LBR(&guest_branch2_from, &guest_branch2_to);
>>> + get_lbr_ips(&from_ip, &to_ip);
>>> + TEST_EXPECT_EQ((u64)&guest_branch2_from, from_ip);
>>> + TEST_EXPECT_EQ((u64)&guest_branch2_to, to_ip);
>>>
>>> asm volatile ("vmmcall\n");
>>> }
>> Reviewed-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
>>
>> Other tests look good to me, and work fine.
>>
>> Tested-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
>
> Thanks!
* Re: [PATCH v3 12/14] x86/svm: Cleanup LBRV tests
2025-11-14 4:57 ` Shivansh Dhiman
@ 2025-11-14 5:40 ` Yosry Ahmed
0 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-14 5:40 UTC (permalink / raw)
To: Shivansh Dhiman
Cc: Paolo Bonzini, Sean Christopherson, Kevin Cheng, kvm,
linux-kernel
On Fri, Nov 14, 2025 at 10:27:07AM +0530, Shivansh Dhiman wrote:
> Hi Yosry,
>
> On 13-11-2025 20:29, Yosry Ahmed wrote:
> > On Thu, Nov 13, 2025 at 05:28:11PM +0530, Shivansh Dhiman wrote:
> >> Hi Yosry,
> >>
> >> I tested this on EPYC-Turin and found that some tests seem to be a bit flaky.
> >> See below.
> >
> > Which ones? I was also running the tests on EPYC-Turin.
>
> Most of the nested LBRV tests had this issue. I checked your other patch to fix
> this. I tested it and it does fix it for me. Thanks.
Yeah it was an unrelated problem, and it only flaked on some machines.
Apparently, even though the APM does not mention any usage of the
higher bits in the LBR MSRs, they are in fact used. The PPRs of some
models show that some of the upper bits are reserved.
I think the failure was due to bit 62 being set, which I think is the
speculation bit, which explains why it's inconsistent.
Anyway, thanks for confirming.
* Re: [PATCH v3 00/14] Improvements for (nested) SVM testing
2025-11-14 0:46 ` Sean Christopherson
@ 2025-11-14 5:42 ` Yosry Ahmed
0 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-14 5:42 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Thu, Nov 13, 2025 at 04:46:13PM -0800, Sean Christopherson wrote:
> On Mon, 10 Nov 2025 23:26:28 +0000, Yosry Ahmed wrote:
> > This is a combined v2/v3 of the patch series in [1] and [2], with a
> > couple of extra changes.
> >
> > The series mostly includes fixups and cleanups, and more importantly new
> > tests for selective CR0 write intercepts and LBRV, which cover bugs
> > recently fixed by [3] and [4].
> >
> > [...]
>
> Applied to kvm-x86 next. In the future, please only bundle related and/or
> dependent patches, especially for one-off patches. It's a lot easier on my
> end (and likely for most maintainers and reviewers) to deal separate series.
>
> E.g. if you need to spin a new version, then you aren't spamming everyone
> with 10+ patches just to rev one patch that might not even be realted to the
> rest. And having separate series makes it easier to select exactly what I
> want to apply.
Noted. I thought that I was going to end up sending a handful of
scattered patches and it would actually end up being difficult for you
to keep track of them. To be honest, it was also easier for me :P
Anyway, will separate such changes going forward!
>
> Concretely, at a glance, this could/should be 7 different patches/series:
>
> 1, 2, 3-5 + 7 + 10-11, 6, 8, 9, 12-13, 14
3-5 + 7 + 10-11 is more-or-less what was in v1. I piled the rest on the
series :)
>
> [01/14] scripts: Always return '2' when skipping tests
> https://github.com/kvm-x86/kvm-unit-tests/commit/1825b2d46c1a
> [02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available
> https://github.com/kvm-x86/kvm-unit-tests/commit/cab22b23b676
> [03/14] x86/svm: Cleanup selective cr0 write intercept test
> https://github.com/kvm-x86/kvm-unit-tests/commit/5f57e54c42e6
> [04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept
> https://github.com/kvm-x86/kvm-unit-tests/commit/0fa8b9beffba
> [05/14] x86/svm: Add FEP helpers for SVM tests
> https://github.com/kvm-x86/kvm-unit-tests/commit/1c5e0e1c75aa
> [06/14] x86/svm: Report unsupported SVM tests
> https://github.com/kvm-x86/kvm-unit-tests/commit/d7e64b50d0e3
> [07/14] x86/svm: Move report_svm_guest() to the top of svm_tests.c
> https://github.com/kvm-x86/kvm-unit-tests/commit/9af8f8e09dff
> [08/14] x86/svm: Print SVM test names before running tests
> https://github.com/kvm-x86/kvm-unit-tests/commit/044c33c54661
> [09/14] x86/svm: Deflake svm_tsc_scale_test
> [ DROP ]
> [10/14] x86/svm: Generalize and improve selective CR0 write intercept test
> https://github.com/kvm-x86/kvm-unit-tests/commit/cc34f04ac665
> [11/14] x86/svm: Add more selective CR0 write and LMSW test cases
> https://github.com/kvm-x86/kvm-unit-tests/commit/09e2c95edefd
> [12/14] x86/svm: Cleanup LBRV tests
> https://github.com/kvm-x86/kvm-unit-tests/commit/114a564310f6
> [13/14] x86/svm: Add more LBRV test cases
> https://github.com/kvm-x86/kvm-unit-tests/commit/ffd01c54af99
> [14/14] x86/svm: Rename VMCB fields to match KVM
> [ WAIT ]
>
> --
> https://github.com/kvm-x86/kvm-unit-tests/tree/next
* Re: [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test
2025-11-14 0:34 ` Sean Christopherson
@ 2025-11-14 5:46 ` Yosry Ahmed
0 siblings, 0 replies; 30+ messages in thread
From: Yosry Ahmed @ 2025-11-14 5:46 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Paolo Bonzini, Kevin Cheng, kvm, linux-kernel
On Thu, Nov 13, 2025 at 04:34:35PM -0800, Sean Christopherson wrote:
> On Mon, Nov 10, 2025, Yosry Ahmed wrote:
> > On an AMD Turin (EPYC Zen 5), svm_tsc_scale_test flakes on the last test
> > case with 0.0001 TSC scaling ratio, even with the 24-bit shift for
> > stability. On failure, the actual value is 49 instead of the expected
> > 50.
> >
> > Use a higher scaling ratio, 0.001, which makes the test pass
> > consistently.
>
> Top-tier analysis right here :-D
Not my proudest moment :P
I saw the test is already using some arbitrary numbers and I was too
lazy to do what Jim did. I initially had a patch that allows for a
certain % error instead, like the selftest, but I opted for the simple
change :P
In my defense, I did call it not-so-sophisticated in the cover letter!
>
> I'm going to take Jim's version instead of papering over the bug.
>
> https://lore.kernel.org/all/CALMp9eQep3H-OtqmLe3O2MsOT-Vx4y0-LordKgN+pkp04VLSWw@mail.gmail.com
I am actually glad there's a better patch than mine :)
end of thread, other threads:[~2025-11-14 5:46 UTC | newest]
Thread overview: 30+ messages
2025-11-10 23:26 [PATCH v3 00/14] Improvements for (nested) SVM testing Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 01/14] scripts: Always return '2' when skipping tests Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 02/14] x86/vmx: Skip vmx_pf_exception_test_fep early if FEP is not available Yosry Ahmed
2025-11-14 0:40 ` Sean Christopherson
2025-11-14 0:47 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 03/14] x86/svm: Cleanup selective cr0 write intercept test Yosry Ahmed
2025-11-12 13:48 ` Manali Shukla
2025-11-10 23:26 ` [PATCH v3 04/14] x86/svm: Move CR0 selective write intercept test near CR3 intercept Yosry Ahmed
2025-11-12 13:52 ` Manali Shukla
2025-11-10 23:26 ` [PATCH v3 05/14] x86/svm: Add FEP helpers for SVM tests Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 06/14] x86/svm: Report unsupported " Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 07/14] x86/svm: Move report_svm_guest() to the top of svm_tests.c Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 08/14] x86/svm: Print SVM test names before running tests Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 09/14] x86/svm: Deflake svm_tsc_scale_test Yosry Ahmed
2025-11-14 0:34 ` Sean Christopherson
2025-11-14 5:46 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 10/14] x86/svm: Generalize and improve selective CR0 write intercept test Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 11/14] x86/svm: Add more selective CR0 write and LMSW test cases Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 12/14] x86/svm: Cleanup LBRV tests Yosry Ahmed
2025-11-13 11:58 ` Shivansh Dhiman
2025-11-13 14:59 ` Yosry Ahmed
2025-11-14 4:57 ` Shivansh Dhiman
2025-11-14 5:40 ` Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 13/14] x86/svm: Add more LBRV test cases Yosry Ahmed
2025-11-10 23:26 ` [PATCH v3 14/14] x86/svm: Rename VMCB fields to match KVM Yosry Ahmed
2025-11-14 0:40 ` Sean Christopherson
2025-11-11 0:52 ` [PATCH v3 00/14] Improvements for (nested) SVM testing Sean Christopherson
2025-11-11 0:58 ` Yosry Ahmed
2025-11-14 0:46 ` Sean Christopherson
2025-11-14 5:42 ` Yosry Ahmed