public inbox for kvm@vger.kernel.org
* [PATCH] KVM: selftests: Add a requirement for disabling numa balancing
@ 2024-01-17  6:44 Tao Su
  2024-01-17 15:12 ` Sean Christopherson
  0 siblings, 1 reply; 5+ messages in thread
From: Tao Su @ 2024-01-17  6:44 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, shuah, yi1.lai, tao1.su

In dirty_log_page_splitting_test, vm_get_stat(vm, "pages_4k") can
gradually drop to 0 after VM-Exit. The reason is that the host triggers
NUMA balancing, which unmaps the affected SPTEs, so the number of pages
currently mapped in the EPT (kvm->stat.pages) no longer matches the
number of pages touched by the guest. As a result,
stats_populated.pages_4k and stats_repopulated.pages_4k differ, and the
test fails.

Call trace of the SPTE unmap triggered by NUMA balancing:
	handle_changed_spte+0x64b/0x830 [kvm]
	tdp_mmu_zap_leafs+0x159/0x290 [kvm]
	kvm_tdp_mmu_unmap_gfn_range+0x7b/0xa0 [kvm]
	kvm_unmap_gfn_range+0x10f/0x130 [kvm]
	? _raw_spin_unlock+0x1d/0x40
	? hugetlb_follow_page_mask+0x1ba/0x400
	? preempt_count_add+0x86/0xd0
	kvm_mmu_notifier_invalidate_range_start+0x14d/0x380 [kvm]
	__mmu_notifier_invalidate_range_start+0x89/0x1f0
	change_protection+0xce1/0x1490
	? __pfx_tlb_is_not_lazy+0x10/0x10
	change_prot_numa+0x5d/0xb0
	? kmalloc_trace+0x2e/0xa0
	task_numa_work+0x364/0x550
	task_work_run+0x62/0xa0
	xfer_to_guest_mode_handle_work+0xc3/0xd0
	kvm_arch_vcpu_ioctl_run+0xe8e/0x1b90 [kvm]
	kvm_vcpu_ioctl+0x282/0x710 [kvm]

dirty_log_page_splitting_test assumes that kvm->stat.pages matches the
number of pages touched by the guest, but that assumption no longer
holds when NUMA balancing is enabled. Add a requirement that NUMA
balancing be disabled, to avoid confusing test failures.

In fact, any page migration (not just NUMA balancing) triggers this
issue, e.g. when running:
	./x86_64/dirty_log_page_splitting_test &
	PID=$!
	sleep 1
	migratepages $PID 0 1
It is unusual to create such an environment intentionally, but NUMA
balancing initiated by the kernel is quite likely to kick in, at least
in dirty_log_page_splitting_test.

Reported-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Tao Su <tao1.su@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
---
 .../kvm/x86_64/dirty_log_page_splitting_test.c        | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c b/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
index 634c6bfcd572..f2c796111d83 100644
--- a/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
+++ b/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
@@ -212,10 +212,21 @@ static void help(char *name)
 
 int main(int argc, char *argv[])
 {
+	FILE *f;
 	int opt;
+	int ret, numa_balancing;
 
 	TEST_REQUIRE(get_kvm_param_bool("eager_page_split"));
 	TEST_REQUIRE(get_kvm_param_bool("tdp_mmu"));
+	f = fopen("/proc/sys/kernel/numa_balancing", "r");
+	if (f) {
+		ret = fscanf(f, "%d", &numa_balancing);
+		TEST_ASSERT(ret == 1, "Error reading numa_balancing");
+		TEST_ASSERT(!numa_balancing, "please run "
+			    "'echo 0 > /proc/sys/kernel/numa_balancing'");
+		fclose(f);
+		f = NULL;
+	}
 
 	while ((opt = getopt(argc, argv, "b:hs:")) != -1) {
 		switch (opt) {

base-commit: 052d534373b7ed33712a63d5e17b2b6cdbce84fd
-- 
2.34.1




Thread overview: 5+ messages
2024-01-17  6:44 [PATCH] KVM: selftests: Add a requirement for disabling numa balancing Tao Su
2024-01-17 15:12 ` Sean Christopherson
2024-01-18  5:43   ` Tao Su
2024-01-18 17:33     ` Sean Christopherson
2024-01-19  6:05       ` Tao Su
