* [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU
@ 2025-03-27 1:23 James Houghton
2025-03-27 1:23 ` [PATCH 1/5] KVM: selftests: Extract guts of THP accessor to standalone sysfs helpers James Houghton
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: James Houghton @ 2025-03-27 1:23 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, James Houghton, cgroups, linux-kernel
This is a follow-up to Maxim's recent v2[1] and to the selftest changes
from v8 of the x86 lockless aging series[2].
With MGLRU, touching a page doesn't necessarily clear the Idle flag.
This has come up in the past, and the recommendation was to use MGLRU
generation numbers[3], which is what this series does.
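As a rough illustration of that mechanism (not part of the patches, but
mirroring run_aging_impl() in patch 5; the function name below is only for
illustration), creating a new MGLRU generation boils down to writing an
aging command to the lru_gen debugfs file:

	#include <stdio.h>

	/* Command format: "+ memcg_id node_id max_gen can_swap force_scan". */
	static void request_new_generation(unsigned long memcg_id, int node_id,
					   int max_gen)
	{
		FILE *f = fopen("/sys/kernel/debug/lru_gen", "w");

		if (!f)
			return;
		/* can_swap=1, force_scan=1 to force a full page table walk. */
		fprintf(f, "+ %lu %d %d 1 1\n", memcg_id, node_id, max_gen);
		fclose(f);
	}

Pages that are touched afterwards show up in a newer generation, which is
what the test checks instead of the page_idle Idle flag.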
With NUMA balancing, pages are temporarily mapped as PROT_NONE, so the
SPTEs will be zapped and the Accessed bits lost. The fix here is, in the
event we have lost access information, to print a warning and continue
with the test, just as is already done when the test is running in a
nested VM. A flag is added so the user can make the test always enforce
or always skip this check.
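(For example, "-w 0" makes the test always enforce the check and "-w 1"
makes it warn only; omitting the flag leaves the test in the
auto-detection mode described above. See patch 2 for the exact
semantics.)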
Based on kvm/next.
[1]: https://lore.kernel.org/all/20250325015741.2478906-1-mlevitsk@redhat.com/
[2]: https://lore.kernel.org/kvm/20241105184333.2305744-12-jthoughton@google.com/
[3]: https://lore.kernel.org/all/CAOUHufZeADNp_y=Ng+acmMMgnTR=ZGFZ7z-m6O47O=CmJauWjw@mail.gmail.com/
James Houghton (3):
cgroup: selftests: Move cgroup_util into its own library
KVM: selftests: Build and link selftests/cgroup/lib into KVM selftests
KVM: selftests: access_tracking_perf_test: Use MGLRU for access
tracking
Maxim Levitsky (1):
KVM: selftests: access_tracking_perf_test: Add option to skip the
sanity check
Sean Christopherson (1):
KVM: selftests: Extract guts of THP accessor to standalone sysfs
helpers
tools/testing/selftests/cgroup/Makefile | 21 +-
.../selftests/cgroup/{ => lib}/cgroup_util.c | 3 +-
.../cgroup/{ => lib/include}/cgroup_util.h | 4 +-
.../testing/selftests/cgroup/lib/libcgroup.mk | 12 +
tools/testing/selftests/kvm/Makefile.kvm | 4 +-
.../selftests/kvm/access_tracking_perf_test.c | 263 ++++++++++--
.../selftests/kvm/include/lru_gen_util.h | 51 +++
.../testing/selftests/kvm/include/test_util.h | 1 +
.../testing/selftests/kvm/lib/lru_gen_util.c | 383 ++++++++++++++++++
tools/testing/selftests/kvm/lib/test_util.c | 42 +-
10 files changed, 726 insertions(+), 58 deletions(-)
rename tools/testing/selftests/cgroup/{ => lib}/cgroup_util.c (99%)
rename tools/testing/selftests/cgroup/{ => lib/include}/cgroup_util.h (99%)
create mode 100644 tools/testing/selftests/cgroup/lib/libcgroup.mk
create mode 100644 tools/testing/selftests/kvm/include/lru_gen_util.h
create mode 100644 tools/testing/selftests/kvm/lib/lru_gen_util.c
base-commit: 782f9feaa9517caf33186dcdd6b50a8f770ed29b
--
2.49.0.395.g12beb8f557-goog
* [PATCH 1/5] KVM: selftests: Extract guts of THP accessor to standalone sysfs helpers
2025-03-27 1:23 [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU James Houghton
@ 2025-03-27 1:23 ` James Houghton
2025-03-27 1:23 ` [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check James Houghton
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: James Houghton @ 2025-03-27 1:23 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, James Houghton, cgroups, linux-kernel
From: Sean Christopherson <seanjc@google.com>
Extract the guts of thp_configured() and get_trans_hugepagesz() to
standalone helpers so that the core logic can be reused for other sysfs
files, e.g. to query numa_balancing.
Opportunistically assert that the initial fscanf() read at least one byte,
and add a comment explaining the second call to fscanf().
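For example, a later patch in this series reuses these helpers to query
NUMA balancing:

	bool is_numa_balancing_enabled(void)
	{
		if (!test_sysfs_path("/proc/sys/kernel/numa_balancing"))
			return false;
		return get_sysfs_val("/proc/sys/kernel/numa_balancing") == 1;
	}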
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: James Houghton <jthoughton@google.com>
---
tools/testing/selftests/kvm/lib/test_util.c | 35 ++++++++++++++-------
1 file changed, 24 insertions(+), 11 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 8ed0b74ae8373..3dc8538f5d696 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -132,37 +132,50 @@ void print_skip(const char *fmt, ...)
puts(", skipping test");
}
-bool thp_configured(void)
+static bool test_sysfs_path(const char *path)
{
- int ret;
struct stat statbuf;
+ int ret;
- ret = stat("/sys/kernel/mm/transparent_hugepage", &statbuf);
+ ret = stat(path, &statbuf);
TEST_ASSERT(ret == 0 || (ret == -1 && errno == ENOENT),
- "Error in stating /sys/kernel/mm/transparent_hugepage");
+ "Error in stat()ing '%s'", path);
return ret == 0;
}
-size_t get_trans_hugepagesz(void)
+bool thp_configured(void)
+{
+ return test_sysfs_path("/sys/kernel/mm/transparent_hugepage");
+}
+
+static size_t get_sysfs_val(const char *path)
{
size_t size;
FILE *f;
int ret;
- TEST_ASSERT(thp_configured(), "THP is not configured in host kernel");
-
- f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
- TEST_ASSERT(f != NULL, "Error in opening transparent_hugepage/hpage_pmd_size");
+ f = fopen(path, "r");
+ TEST_ASSERT(f, "Error opening '%s'", path);
ret = fscanf(f, "%ld", &size);
+ TEST_ASSERT(ret > 0, "Error reading '%s'", path);
+
+ /* Re-scan the input stream to verify the entire file was read. */
ret = fscanf(f, "%ld", &size);
- TEST_ASSERT(ret < 1, "Error reading transparent_hugepage/hpage_pmd_size");
- fclose(f);
+ TEST_ASSERT(ret < 1, "Error reading '%s'", path);
+ fclose(f);
return size;
}
+size_t get_trans_hugepagesz(void)
+{
+ TEST_ASSERT(thp_configured(), "THP is not configured in host kernel");
+
+ return get_sysfs_val("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size");
+}
+
size_t get_def_hugetlb_pagesz(void)
{
char buf[64];
--
2.49.0.395.g12beb8f557-goog
* [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check
2025-03-27 1:23 [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU James Houghton
2025-03-27 1:23 ` [PATCH 1/5] KVM: selftests: Extract guts of THP accessor to standalone sysfs helpers James Houghton
@ 2025-03-27 1:23 ` James Houghton
2025-03-28 19:32 ` Maxim Levitsky
2025-03-27 1:23 ` [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library James Houghton
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: James Houghton @ 2025-03-27 1:23 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, James Houghton, cgroups, linux-kernel
From: Maxim Levitsky <mlevitsk@redhat.com>
Add an option to skip the sanity check on the number of still-idle
pages, and skip the check by default when a hypervisor or NUMA
balancing is detected.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Co-developed-by: James Houghton <jthoughton@google.com>
Signed-off-by: James Houghton <jthoughton@google.com>
---
.../selftests/kvm/access_tracking_perf_test.c | 61 ++++++++++++++++---
.../testing/selftests/kvm/include/test_util.h | 1 +
tools/testing/selftests/kvm/lib/test_util.c | 7 +++
3 files changed, 60 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 447e619cf856e..0e594883ec13e 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -65,6 +65,16 @@ static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
/* Whether to overlap the regions of memory vCPUs access. */
static bool overlap_memory_access;
+/*
+ * Whether the test should only warn if there are too many idle pages (i.e.,
+ * whether too many idle pages are expected).
+ * -1: Not yet set.
+ * 0: We do not expect too many idle pages, so FAIL if too many idle pages.
+ * 1: Having too many idle pages is expected, so merely print a warning if
+ * too many idle pages are found.
+ */
+static int idle_pages_warn_only = -1;
+
struct test_params {
/* The backing source for the region of memory. */
enum vm_mem_backing_src_type backing_src;
@@ -177,18 +187,12 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
* arbitrary; high enough that we ensure most memory access went through
* access tracking but low enough as to not make the test too brittle
* over time and across architectures.
- *
- * When running the guest as a nested VM, "warn" instead of asserting
- * as the TLB size is effectively unlimited and the KVM doesn't
- * explicitly flush the TLB when aging SPTEs. As a result, more pages
- * are cached and the guest won't see the "idle" bit cleared.
*/
if (still_idle >= pages / 10) {
-#ifdef __x86_64__
- TEST_ASSERT(this_cpu_has(X86_FEATURE_HYPERVISOR),
+ TEST_ASSERT(idle_pages_warn_only,
"vCPU%d: Too many pages still idle (%lu out of %lu)",
vcpu_idx, still_idle, pages);
-#endif
+
printf("WARNING: vCPU%d: Too many pages still idle (%lu out of %lu), "
"this will affect performance results.\n",
vcpu_idx, still_idle, pages);
@@ -328,6 +332,31 @@ static void run_test(enum vm_guest_mode mode, void *arg)
memstress_destroy_vm(vm);
}
+static int access_tracking_unreliable(void)
+{
+#ifdef __x86_64__
+ /*
+ * When running nested, the TLB size is effectively unlimited and the
+ * KVM doesn't explicitly flush the TLB when aging SPTEs. As a result,
+ * more pages are cached and the guest won't see the "idle" bit cleared.
+ */
+ if (this_cpu_has(X86_FEATURE_HYPERVISOR)) {
+ puts("Skipping idle page count sanity check, because the test is run nested");
+ return 1;
+ }
+#endif
+ /*
+ * When NUMA balancing is enabled, guest memory can be mapped
+ * PROT_NONE, and the Accessed bits won't be queriable.
+ */
+ if (is_numa_balancing_enabled()) {
+ puts("Skipping idle page count sanity check, because NUMA balancing is enabled");
+ return 1;
+ }
+
+ return 0;
+}
+
static void help(char *name)
{
puts("");
@@ -342,6 +371,12 @@ static void help(char *name)
printf(" -v: specify the number of vCPUs to run.\n");
printf(" -o: Overlap guest memory accesses instead of partitioning\n"
" them into a separate region of memory for each vCPU.\n");
+ printf(" -w: Control whether the test warns or fails if more than 10%\n"
+ " of pages are still seen as idle/old after accessing guest\n"
+ " memory. >0 == warn only, 0 == fail, <0 == auto. For auto\n"
+ " mode, the test fails by default, but switches to warn only\n"
+ " if NUMA balancing is enabled or the test detects it's running\n"
+ " in a VM.\n");
backing_src_help("-s");
puts("");
exit(0);
@@ -359,7 +394,7 @@ int main(int argc, char *argv[])
guest_modes_append_default();
- while ((opt = getopt(argc, argv, "hm:b:v:os:")) != -1) {
+ while ((opt = getopt(argc, argv, "hm:b:v:os:w:")) != -1) {
switch (opt) {
case 'm':
guest_modes_cmdline(optarg);
@@ -376,6 +411,11 @@ int main(int argc, char *argv[])
case 's':
params.backing_src = parse_backing_src_type(optarg);
break;
+ case 'w':
+ idle_pages_warn_only =
+ atoi_non_negative("Idle pages warning",
+ optarg);
+ break;
case 'h':
default:
help(argv[0]);
@@ -388,6 +428,9 @@ int main(int argc, char *argv[])
"CONFIG_IDLE_PAGE_TRACKING is not enabled");
close(page_idle_fd);
+ if (idle_pages_warn_only == -1)
+ idle_pages_warn_only = access_tracking_unreliable();
+
for_each_guest_mode(run_test, &params);
return 0;
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 77d13d7920cb8..c6ef895fbd9ab 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -153,6 +153,7 @@ bool is_backing_src_hugetlb(uint32_t i);
void backing_src_help(const char *flag);
enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
long get_run_delay(void);
+bool is_numa_balancing_enabled(void);
/*
* Whether or not the given source type is shared memory (as opposed to
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 3dc8538f5d696..03eb99af9b8de 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -176,6 +176,13 @@ size_t get_trans_hugepagesz(void)
return get_sysfs_val("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size");
}
+bool is_numa_balancing_enabled(void)
+{
+ if (!test_sysfs_path("/proc/sys/kernel/numa_balancing"))
+ return false;
+ return get_sysfs_val("/proc/sys/kernel/numa_balancing") == 1;
+}
+
size_t get_def_hugetlb_pagesz(void)
{
char buf[64];
--
2.49.0.395.g12beb8f557-goog
* [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library
2025-03-27 1:23 [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU James Houghton
2025-03-27 1:23 ` [PATCH 1/5] KVM: selftests: Extract guts of THP accessor to standalone sysfs helpers James Houghton
2025-03-27 1:23 ` [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check James Houghton
@ 2025-03-27 1:23 ` James Houghton
2025-03-27 9:43 ` Michal Koutný
2025-03-27 1:23 ` [PATCH 4/5] KVM: selftests: Build and link selftests/cgroup/lib into KVM selftests James Houghton
2025-03-27 1:23 ` [PATCH 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking James Houghton
4 siblings, 1 reply; 12+ messages in thread
From: James Houghton @ 2025-03-27 1:23 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, James Houghton, cgroups, linux-kernel,
David Matlack
KVM selftests will soon need to use some of the cgroup creation and
deletion functionality from cgroup_util.
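As a rough sketch of the intended use (the KVM side lands later in this
series; run_test_in_cg() and params below stand in for the test's own
callback and arguments):

	char root[PATH_MAX];
	char *cg;
	int ret;

	if (cg_find_unified_root(root, sizeof(root), NULL))
		ksft_exit_skip("cgroup v2 isn't mounted\n");

	cg = cg_name(root, "access_tracking_perf_test");
	if (cg_create(cg) && errno != EEXIST)
		ksft_exit_skip("could not create new cgroup: %s\n", cg);

	/* cg_run() forks and runs the callback inside the new cgroup. */
	ret = cg_run(cg, &run_test_in_cg, &params);
	cg_destroy(cg);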
Suggested-by: David Matlack <dmatlack@google.com>
Signed-off-by: James Houghton <jthoughton@google.com>
---
tools/testing/selftests/cgroup/Makefile | 21 ++++++++++---------
.../selftests/cgroup/{ => lib}/cgroup_util.c | 3 +--
.../cgroup/{ => lib/include}/cgroup_util.h | 4 ++--
.../testing/selftests/cgroup/lib/libcgroup.mk | 12 +++++++++++
4 files changed, 26 insertions(+), 14 deletions(-)
rename tools/testing/selftests/cgroup/{ => lib}/cgroup_util.c (99%)
rename tools/testing/selftests/cgroup/{ => lib/include}/cgroup_util.h (99%)
create mode 100644 tools/testing/selftests/cgroup/lib/libcgroup.mk
diff --git a/tools/testing/selftests/cgroup/Makefile b/tools/testing/selftests/cgroup/Makefile
index 1b897152bab6e..e01584c2189ac 100644
--- a/tools/testing/selftests/cgroup/Makefile
+++ b/tools/testing/selftests/cgroup/Makefile
@@ -21,14 +21,15 @@ TEST_GEN_PROGS += test_zswap
LOCAL_HDRS += $(selfdir)/clone3/clone3_selftests.h $(selfdir)/pidfd/pidfd.h
include ../lib.mk
+include lib/libcgroup.mk
-$(OUTPUT)/test_core: cgroup_util.c
-$(OUTPUT)/test_cpu: cgroup_util.c
-$(OUTPUT)/test_cpuset: cgroup_util.c
-$(OUTPUT)/test_freezer: cgroup_util.c
-$(OUTPUT)/test_hugetlb_memcg: cgroup_util.c
-$(OUTPUT)/test_kill: cgroup_util.c
-$(OUTPUT)/test_kmem: cgroup_util.c
-$(OUTPUT)/test_memcontrol: cgroup_util.c
-$(OUTPUT)/test_pids: cgroup_util.c
-$(OUTPUT)/test_zswap: cgroup_util.c
+$(OUTPUT)/test_core: $(LIBCGROUP_O)
+$(OUTPUT)/test_cpu: $(LIBCGROUP_O)
+$(OUTPUT)/test_cpuset: $(LIBCGROUP_O)
+$(OUTPUT)/test_freezer: $(LIBCGROUP_O)
+$(OUTPUT)/test_hugetlb_memcg: $(LIBCGROUP_O)
+$(OUTPUT)/test_kill: $(LIBCGROUP_O)
+$(OUTPUT)/test_kmem: $(LIBCGROUP_O)
+$(OUTPUT)/test_memcontrol: $(LIBCGROUP_O)
+$(OUTPUT)/test_pids: $(LIBCGROUP_O)
+$(OUTPUT)/test_zswap: $(LIBCGROUP_O)
diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/lib/cgroup_util.c
similarity index 99%
rename from tools/testing/selftests/cgroup/cgroup_util.c
rename to tools/testing/selftests/cgroup/lib/cgroup_util.c
index 1e2d46636a0ca..d5649486a11df 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/lib/cgroup_util.c
@@ -16,8 +16,7 @@
#include <sys/wait.h>
#include <unistd.h>
-#include "cgroup_util.h"
-#include "../clone3/clone3_selftests.h"
+#include <cgroup_util.h>
/* Returns read len on success, or -errno on failure. */
static ssize_t read_text(const char *path, char *buf, size_t max_len)
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
similarity index 99%
rename from tools/testing/selftests/cgroup/cgroup_util.h
rename to tools/testing/selftests/cgroup/lib/include/cgroup_util.h
index 19b131ee77072..7a0441e5eb296 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/lib/include/cgroup_util.h
@@ -2,9 +2,9 @@
#include <stdbool.h>
#include <stdlib.h>
-#include "../kselftest.h"
-
+#ifndef PAGE_SIZE
#define PAGE_SIZE 4096
+#endif
#define MB(x) (x << 20)
diff --git a/tools/testing/selftests/cgroup/lib/libcgroup.mk b/tools/testing/selftests/cgroup/lib/libcgroup.mk
new file mode 100644
index 0000000000000..2cbf07337c23f
--- /dev/null
+++ b/tools/testing/selftests/cgroup/lib/libcgroup.mk
@@ -0,0 +1,12 @@
+CGROUP_DIR := $(selfdir)/cgroup
+
+LIBCGROUP_C := lib/cgroup_util.c
+
+LIBCGROUP_O := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBCGROUP_C))
+
+CFLAGS += -I$(CGROUP_DIR)/lib/include
+
+$(LIBCGROUP_O): $(OUTPUT)/%.o : $(CGROUP_DIR)/%.c
+ $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
+
+EXTRA_CLEAN += $(LIBCGROUP_O)
--
2.49.0.395.g12beb8f557-goog
* [PATCH 4/5] KVM: selftests: Build and link selftests/cgroup/lib into KVM selftests
2025-03-27 1:23 [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU James Houghton
` (2 preceding siblings ...)
2025-03-27 1:23 ` [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library James Houghton
@ 2025-03-27 1:23 ` James Houghton
2025-03-27 1:23 ` [PATCH 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking James Houghton
4 siblings, 0 replies; 12+ messages in thread
From: James Houghton @ 2025-03-27 1:23 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, James Houghton, cgroups, linux-kernel
libcgroup.o is built separately by the KVM selftests and the cgroup
selftests, so the different compiler flags used by the two selftest
suites will not conflict with each other.
Signed-off-by: James Houghton <jthoughton@google.com>
---
tools/testing/selftests/kvm/Makefile.kvm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f773f8f992494..c86a680f52b28 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -219,6 +219,7 @@ OVERRIDE_TARGETS = 1
# importantly defines, i.e. overwrites, $(CC) (unless `make -e` or `make CC=`,
# which causes the environment variable to override the makefile).
include ../lib.mk
+include ../cgroup/lib/libcgroup.mk
INSTALL_HDR_PATH = $(top_srcdir)/usr
LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
@@ -272,7 +273,7 @@ LIBKVM_S := $(filter %.S,$(LIBKVM))
LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C))
LIBKVM_S_OBJ := $(patsubst %.S, $(OUTPUT)/%.o, $(LIBKVM_S))
LIBKVM_STRING_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_STRING))
-LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(LIBKVM_STRING_OBJ)
+LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(LIBKVM_STRING_OBJ) $(LIBCGROUP_O)
SPLIT_TEST_GEN_PROGS := $(patsubst %, $(OUTPUT)/%, $(SPLIT_TESTS))
SPLIT_TEST_GEN_OBJ := $(patsubst %, $(OUTPUT)/$(ARCH)/%.o, $(SPLIT_TESTS))
--
2.49.0.395.g12beb8f557-goog
* [PATCH 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking
2025-03-27 1:23 [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU James Houghton
` (3 preceding siblings ...)
2025-03-27 1:23 ` [PATCH 4/5] KVM: selftests: Build and link selftests/cgroup/lib into KVM selftests James Houghton
@ 2025-03-27 1:23 ` James Houghton
2025-03-27 18:26 ` James Houghton
4 siblings, 1 reply; 12+ messages in thread
From: James Houghton @ 2025-03-27 1:23 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, James Houghton, cgroups, linux-kernel
By using MGLRU's debugfs for invoking test_young() and clear_young(), we
avoid page_idle's incompatibility with MGLRU, and we can mark pages as
idle (clear_young()) much faster.
The ability to use page_idle is left in, as it is useful for kernels
that do not have MGLRU built in. If MGLRU is enabled but is not usable
(e.g. we can't access the debugfs mount), the test will fail, as
page_idle is not compatible with MGLRU.
cgroup utility functions have been borrowed so that, when running with
MGLRU, we can create a memcg in which to run our test.
Other MGLRU-debugfs-specific parsing code has been added to
lru_gen_util.{c,h}.
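For reference, the stats parser in lru_gen_util.c expects the debugfs
file to be laid out roughly as (values illustrative, whitespace elided):

	memcg  (id)  (path)
	 node  (id)
	  (gen_nr)  (age_in_ms)  (nr_anon_pages)  (nr_file_pages)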
Co-developed-by: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: James Houghton <jthoughton@google.com>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../selftests/kvm/access_tracking_perf_test.c | 208 ++++++++--
.../selftests/kvm/include/lru_gen_util.h | 51 +++
.../testing/selftests/kvm/lib/lru_gen_util.c | 383 ++++++++++++++++++
4 files changed, 617 insertions(+), 26 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/lru_gen_util.h
create mode 100644 tools/testing/selftests/kvm/lib/lru_gen_util.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index c86a680f52b28..6ab0441238a7f 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -8,6 +8,7 @@ LIBKVM += lib/elf.c
LIBKVM += lib/guest_modes.c
LIBKVM += lib/io.c
LIBKVM += lib/kvm_util.c
+LIBKVM += lib/lru_gen_util.c
LIBKVM += lib/memstress.c
LIBKVM += lib/guest_sprintf.c
LIBKVM += lib/rbtree.c
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 0e594883ec13e..1c8e43e18e4c6 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -7,9 +7,11 @@
* This test measures the performance effects of KVM's access tracking.
* Access tracking is driven by the MMU notifiers test_young, clear_young, and
* clear_flush_young. These notifiers do not have a direct userspace API,
- * however the clear_young notifier can be triggered by marking a pages as idle
- * in /sys/kernel/mm/page_idle/bitmap. This test leverages that mechanism to
- * enable access tracking on guest memory.
+ * however the clear_young notifier can be triggered either by
+ * 1. marking pages as idle in /sys/kernel/mm/page_idle/bitmap OR
+ * 2. adding a new MGLRU generation using the lru_gen debugfs file.
+ * This test leverages page_idle to enable access tracking on guest memory
+ * unless MGLRU is enabled, in which case MGLRU is used.
*
* To measure performance this test runs a VM with a configurable number of
* vCPUs that each touch every page in disjoint regions of memory. Performance
@@ -17,10 +19,11 @@
* predefined region.
*
* Note that a deterministic correctness test of access tracking is not possible
- * by using page_idle as it exists today. This is for a few reasons:
+ * by using page_idle or MGLRU aging as it exists today. This is for a few
+ * reasons:
*
- * 1. page_idle only issues clear_young notifiers, which lack a TLB flush. This
- * means subsequent guest accesses are not guaranteed to see page table
+ * 1. page_idle and MGLRU only issue clear_young notifiers, which lack a TLB flush.
+ * This means subsequent guest accesses are not guaranteed to see page table
* updates made by KVM until some time in the future.
*
* 2. page_idle only operates on LRU pages. Newly allocated pages are not
@@ -48,9 +51,17 @@
#include "guest_modes.h"
#include "processor.h"
+#include "cgroup_util.h"
+#include "lru_gen_util.h"
+
+static const char *TEST_MEMCG_NAME = "access_tracking_perf_test";
+
/* Global variable used to synchronize all of the vCPU threads. */
static int iteration;
+/* The cgroup v2 root. Needed for lru_gen-based aging. */
+char cgroup_root[PATH_MAX];
+
/* Defines what vCPU threads should do during a given iteration. */
static enum {
/* Run the vCPU to access all its memory. */
@@ -75,6 +86,12 @@ static bool overlap_memory_access;
*/
static int idle_pages_warn_only = -1;
+/* Whether or not to use MGLRU instead of page_idle for access tracking */
+static bool use_lru_gen;
+
+/* Total number of pages to expect in the memcg after touching everything */
+static long total_pages;
+
struct test_params {
/* The backing source for the region of memory. */
enum vm_mem_backing_src_type backing_src;
@@ -133,8 +150,24 @@ static void mark_page_idle(int page_idle_fd, uint64_t pfn)
"Set page_idle bits for PFN 0x%" PRIx64, pfn);
}
-static void mark_vcpu_memory_idle(struct kvm_vm *vm,
- struct memstress_vcpu_args *vcpu_args)
+static void too_many_idle_pages(long idle_pages, long total_pages, int vcpu_idx)
+{
+ char prefix[18] = {};
+
+ if (vcpu_idx >= 0)
+ snprintf(prefix, 18, "vCPU%d: ", vcpu_idx);
+
+ TEST_ASSERT(idle_pages_warn_only,
+ "%sToo many pages still idle (%lu out of %lu)",
+ prefix, idle_pages, total_pages);
+
+ printf("WARNING: %sToo many pages still idle (%lu out of %lu), "
+ "this will affect performance results.\n",
+ prefix, idle_pages, total_pages);
+}
+
+static void pageidle_mark_vcpu_memory_idle(struct kvm_vm *vm,
+ struct memstress_vcpu_args *vcpu_args)
{
int vcpu_idx = vcpu_args->vcpu_idx;
uint64_t base_gva = vcpu_args->gva;
@@ -188,20 +221,81 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
* access tracking but low enough as to not make the test too brittle
* over time and across architectures.
*/
- if (still_idle >= pages / 10) {
- TEST_ASSERT(idle_pages_warn_only,
- "vCPU%d: Too many pages still idle (%lu out of %lu)",
- vcpu_idx, still_idle, pages);
-
- printf("WARNING: vCPU%d: Too many pages still idle (%lu out of %lu), "
- "this will affect performance results.\n",
- vcpu_idx, still_idle, pages);
- }
+ if (still_idle >= pages / 10)
+ too_many_idle_pages(still_idle, pages,
+ overlap_memory_access ? -1 : vcpu_idx);
close(page_idle_fd);
close(pagemap_fd);
}
+int find_generation(struct memcg_stats *stats, long total_pages)
+{
+ /*
+ * For finding the generation that contains our pages, use the same
+ * 90% threshold that page_idle uses.
+ */
+ int gen = lru_gen_find_generation(stats, total_pages * 9 / 10);
+
+ if (gen >= 0)
+ return gen;
+
+ if (!idle_pages_warn_only) {
+ TEST_FAIL("Could not find a generation with 90%% of guest memory (%ld pages).",
+ total_pages * 9 / 10);
+ return gen;
+ }
+
+ /*
+ * We couldn't find a generation with 90% of guest memory, which can
+ * happen if access tracking is unreliable. Simply look for a majority
+ * of pages.
+ */
+ puts("WARNING: Couldn't find a generation with 90% of guest memory. "
+ "Performance results may not be accurate.");
+ gen = lru_gen_find_generation(stats, total_pages / 2);
+ TEST_ASSERT(gen >= 0,
+ "Could not find a generation with 50%% of guest memory (%ld pages).",
+ total_pages / 2);
+ return gen;
+}
+
+static void lru_gen_mark_memory_idle(struct kvm_vm *vm)
+{
+ struct timespec ts_start;
+ struct timespec ts_elapsed;
+ struct memcg_stats stats;
+ int found_gens[2];
+
+ /* Find current generation the pages lie in. */
+ lru_gen_read_memcg_stats(&stats, TEST_MEMCG_NAME);
+ found_gens[0] = find_generation(&stats, total_pages);
+
+ /* Make a new generation */
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ lru_gen_do_aging(&stats, TEST_MEMCG_NAME);
+ ts_elapsed = timespec_elapsed(ts_start);
+
+ /* Check the generation again */
+ found_gens[1] = find_generation(&stats, total_pages);
+
+ /*
+ * This function should only be invoked with newly-accessed pages,
+ * so pages should always move to a newer generation.
+ */
+ if (found_gens[0] >= found_gens[1]) {
+ /* We did not move to a newer generation. */
+ long idle_pages = lru_gen_sum_memcg_stats_for_gen(found_gens[1],
+ &stats);
+
+ too_many_idle_pages(min_t(long, idle_pages, total_pages),
+ total_pages, -1);
+ }
+ pr_info("%-30s: %ld.%09lds\n",
+ "Mark memory idle (lru_gen)", ts_elapsed.tv_sec,
+ ts_elapsed.tv_nsec);
+}
+
static void assert_ucall(struct kvm_vcpu *vcpu, uint64_t expected_ucall)
{
struct ucall uc;
@@ -241,7 +335,7 @@ static void vcpu_thread_main(struct memstress_vcpu_args *vcpu_args)
assert_ucall(vcpu, UCALL_SYNC);
break;
case ITERATION_MARK_IDLE:
- mark_vcpu_memory_idle(vm, vcpu_args);
+ pageidle_mark_vcpu_memory_idle(vm, vcpu_args);
break;
}
@@ -293,15 +387,18 @@ static void access_memory(struct kvm_vm *vm, int nr_vcpus,
static void mark_memory_idle(struct kvm_vm *vm, int nr_vcpus)
{
+ if (use_lru_gen)
+ return lru_gen_mark_memory_idle(vm);
+
/*
* Even though this parallelizes the work across vCPUs, this is still a
* very slow operation because page_idle forces the test to mark one pfn
- * at a time and the clear_young notifier serializes on the KVM MMU
+ * at a time and the clear_young notifier may serialize on the KVM MMU
* lock.
*/
pr_debug("Marking VM memory idle (slow)...\n");
iteration_work = ITERATION_MARK_IDLE;
- run_iteration(vm, nr_vcpus, "Mark memory idle");
+ run_iteration(vm, nr_vcpus, "Mark memory idle (page_idle)");
}
static void run_test(enum vm_guest_mode mode, void *arg)
@@ -318,6 +415,15 @@ static void run_test(enum vm_guest_mode mode, void *arg)
pr_info("\n");
access_memory(vm, nr_vcpus, ACCESS_WRITE, "Populating memory");
+ if (use_lru_gen) {
+ struct memcg_stats stats;
+
+ /* Do an initial page table scan */
+ lru_gen_read_memcg_stats(&stats, TEST_MEMCG_NAME);
+ TEST_ASSERT(lru_gen_sum_memcg_stats(&stats) >= total_pages,
+ "Not all pages accounted for. Was the memcg set up correctly?");
+ }
+
/* As a control, read and write to the populated memory first. */
access_memory(vm, nr_vcpus, ACCESS_WRITE, "Writing to populated memory");
access_memory(vm, nr_vcpus, ACCESS_READ, "Reading from populated memory");
@@ -353,7 +459,12 @@ static int access_tracking_unreliable(void)
puts("Skipping idle page count sanity check, because NUMA balancing is enabled");
return 1;
}
+ return 0;
+}
+int run_test_in_cg(const char *cgroup, void *arg)
+{
+ for_each_guest_mode(run_test, arg);
return 0;
}
@@ -371,7 +482,7 @@ static void help(char *name)
printf(" -v: specify the number of vCPUs to run.\n");
printf(" -o: Overlap guest memory accesses instead of partitioning\n"
" them into a separate region of memory for each vCPU.\n");
- printf(" -w: Control whether the test warns or fails if more than 10%\n"
+ printf(" -w: Control whether the test warns or fails if more than 10%%\n"
" of pages are still seen as idle/old after accessing guest\n"
" memory. >0 == warn only, 0 == fail, <0 == auto. For auto\n"
" mode, the test fails by default, but switches to warn only\n"
@@ -382,6 +493,12 @@ static void help(char *name)
exit(0);
}
+void destroy_cgroup(char *cg)
+{
+ printf("Destroying cgroup: %s\n", cg);
+ cg_destroy(cg);
+}
+
int main(int argc, char *argv[])
{
struct test_params params = {
@@ -389,6 +506,7 @@ int main(int argc, char *argv[])
.vcpu_memory_bytes = DEFAULT_PER_VCPU_MEM_SIZE,
.nr_vcpus = 1,
};
+ char *new_cg = NULL;
int page_idle_fd;
int opt;
@@ -423,15 +541,53 @@ int main(int argc, char *argv[])
}
}
- page_idle_fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
- __TEST_REQUIRE(page_idle_fd >= 0,
- "CONFIG_IDLE_PAGE_TRACKING is not enabled");
- close(page_idle_fd);
+ if (lru_gen_usable()) {
+ if (cg_find_unified_root(cgroup_root, sizeof(cgroup_root), NULL))
+ ksft_exit_skip("cgroup v2 isn't mounted\n");
+
+ new_cg = cg_name(cgroup_root, TEST_MEMCG_NAME);
+ printf("Creating cgroup: %s\n", new_cg);
+ if (cg_create(new_cg) && errno != EEXIST)
+ ksft_exit_skip("could not create new cgroup: %s\n", new_cg);
+
+ use_lru_gen = true;
+ } else {
+ page_idle_fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
+ __TEST_REQUIRE(page_idle_fd >= 0,
+ "Couldn't open /sys/kernel/mm/page_idle/bitmap. "
+ "Is CONFIG_IDLE_PAGE_TRACKING enabled?");
+
+ close(page_idle_fd);
+ }
if (idle_pages_warn_only == -1)
idle_pages_warn_only = access_tracking_unreliable();
- for_each_guest_mode(run_test, &params);
+ /*
+ * If guest_page_size is larger than the host's page size, the
+ * guest (memstress) will only fault in a subset of the host's pages.
+ */
+ total_pages = params.nr_vcpus * params.vcpu_memory_bytes /
+ max(memstress_args.guest_page_size,
+ (uint64_t)getpagesize());
+
+ if (use_lru_gen) {
+ int ret;
+
+ puts("Using lru_gen for aging");
+ /*
+ * This will fork off a new process to run the test within
+ * a new memcg, so we need to properly propagate the return
+ * value up.
+ */
+ ret = cg_run(new_cg, &run_test_in_cg, &params);
+ destroy_cgroup(new_cg);
+ if (ret)
+ return ret;
+ } else {
+ puts("Using page_idle for aging");
+ for_each_guest_mode(run_test, &params);
+ }
return 0;
}
diff --git a/tools/testing/selftests/kvm/include/lru_gen_util.h b/tools/testing/selftests/kvm/include/lru_gen_util.h
new file mode 100644
index 0000000000000..d32ff5d8ffd05
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/lru_gen_util.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Tools for integrating with lru_gen, like parsing the lru_gen debugfs output.
+ *
+ * Copyright (C) 2025, Google LLC.
+ */
+#ifndef SELFTEST_KVM_LRU_GEN_UTIL_H
+#define SELFTEST_KVM_LRU_GEN_UTIL_H
+
+#include <inttypes.h>
+#include <limits.h>
+#include <stdlib.h>
+
+#include "test_util.h"
+
+#define MAX_NR_GENS 16 /* MAX_NR_GENS in include/linux/mmzone.h */
+#define MAX_NR_NODES 4 /* Maximum number of nodes supported by the test */
+
+#define LRU_GEN_DEBUGFS "/sys/kernel/debug/lru_gen"
+#define LRU_GEN_ENABLED_PATH "/sys/kernel/mm/lru_gen/enabled"
+#define LRU_GEN_ENABLED 1
+#define LRU_GEN_MM_WALK 2
+
+struct generation_stats {
+ int gen;
+ long age_ms;
+ long nr_anon;
+ long nr_file;
+};
+
+struct node_stats {
+ int node;
+ int nr_gens; /* Number of populated gens entries. */
+ struct generation_stats gens[MAX_NR_GENS];
+};
+
+struct memcg_stats {
+ unsigned long memcg_id;
+ int nr_nodes; /* Number of populated nodes entries. */
+ struct node_stats nodes[MAX_NR_NODES];
+};
+
+void lru_gen_read_memcg_stats(struct memcg_stats *stats, const char *memcg);
+long lru_gen_sum_memcg_stats(const struct memcg_stats *stats);
+long lru_gen_sum_memcg_stats_for_gen(int gen, const struct memcg_stats *stats);
+void lru_gen_do_aging(struct memcg_stats *stats, const char *memcg);
+int lru_gen_find_generation(const struct memcg_stats *stats,
+ unsigned long total_pages);
+bool lru_gen_usable(void);
+
+#endif /* SELFTEST_KVM_LRU_GEN_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/lru_gen_util.c b/tools/testing/selftests/kvm/lib/lru_gen_util.c
new file mode 100644
index 0000000000000..783a1f1028a26
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/lru_gen_util.c
@@ -0,0 +1,383 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025, Google LLC.
+ */
+
+#include <time.h>
+
+#include "lru_gen_util.h"
+
+/*
+ * Tracks state while we parse memcg lru_gen stats. The file we're parsing is
+ * structured like this (some extra whitespace elided):
+ *
+ * memcg (id) (path)
+ * node (id)
+ * (gen_nr) (age_in_ms) (nr_anon_pages) (nr_file_pages)
+ */
+struct memcg_stats_parse_context {
+ bool consumed; /* Whether or not this line was consumed */
+ /* Next parse handler to invoke */
+ void (*next_handler)(struct memcg_stats *,
+ struct memcg_stats_parse_context *, char *);
+ int current_node_idx; /* Current index in nodes array */
+ const char *name; /* The name of the memcg we're looking for */
+};
+
+static void memcg_stats_handle_searching(struct memcg_stats *stats,
+ struct memcg_stats_parse_context *ctx,
+ char *line);
+static void memcg_stats_handle_in_memcg(struct memcg_stats *stats,
+ struct memcg_stats_parse_context *ctx,
+ char *line);
+static void memcg_stats_handle_in_node(struct memcg_stats *stats,
+ struct memcg_stats_parse_context *ctx,
+ char *line);
+
+struct split_iterator {
+ char *str;
+ char *save;
+};
+
+static char *split_next(struct split_iterator *it)
+{
+ char *ret = strtok_r(it->str, " \t\n\r", &it->save);
+
+ it->str = NULL;
+ return ret;
+}
+
+static void memcg_stats_handle_searching(struct memcg_stats *stats,
+ struct memcg_stats_parse_context *ctx,
+ char *line)
+{
+ struct split_iterator it = { .str = line };
+ char *prefix = split_next(&it);
+ char *memcg_id = split_next(&it);
+ char *memcg_name = split_next(&it);
+ char *end;
+
+ ctx->consumed = true;
+
+ if (!prefix || strcmp("memcg", prefix))
+ return; /* Not a memcg line (maybe empty), skip */
+
+ TEST_ASSERT(memcg_id && memcg_name,
+ "malformed memcg line; no memcg id or memcg_name");
+
+ if (strcmp(memcg_name + 1, ctx->name))
+ return; /* Wrong memcg, skip */
+
+ /* Found it! */
+
+ stats->memcg_id = strtoul(memcg_id, &end, 10);
+ TEST_ASSERT(*end == '\0', "malformed memcg id '%s'", memcg_id);
+ if (!stats->memcg_id)
+ return; /* Removed memcg? */
+
+ ctx->next_handler = memcg_stats_handle_in_memcg;
+}
+
+static void memcg_stats_handle_in_memcg(struct memcg_stats *stats,
+ struct memcg_stats_parse_context *ctx,
+ char *line)
+{
+ struct split_iterator it = { .str = line };
+ char *prefix = split_next(&it);
+ char *id = split_next(&it);
+ long found_node_id;
+ char *end;
+
+ ctx->consumed = true;
+ ctx->current_node_idx = -1;
+
+ if (!prefix)
+ return; /* Skip empty lines */
+
+ if (!strcmp("memcg", prefix)) {
+ /* Memcg done, found next one; stop. */
+ ctx->next_handler = NULL;
+ return;
+ } else if (strcmp("node", prefix))
+ TEST_ASSERT(false, "found malformed line after 'memcg ...',"
+ "token: '%s'", prefix);
+
+ /* At this point we know we have a node line. Parse the ID. */
+
+ TEST_ASSERT(id, "malformed node line; no node id");
+
+ found_node_id = strtol(id, &end, 10);
+ TEST_ASSERT(*end == '\0', "malformed node id '%s'", id);
+
+ ctx->current_node_idx = stats->nr_nodes++;
+ TEST_ASSERT(ctx->current_node_idx < MAX_NR_NODES,
+ "memcg has stats for too many nodes, max is %d",
+ MAX_NR_NODES);
+ stats->nodes[ctx->current_node_idx].node = found_node_id;
+
+ ctx->next_handler = memcg_stats_handle_in_node;
+}
+
+static void memcg_stats_handle_in_node(struct memcg_stats *stats,
+ struct memcg_stats_parse_context *ctx,
+ char *line)
+{
+ char *my_line = strdup(line);
+ struct split_iterator it = { .str = my_line };
+ char *gen, *age, *nr_anon, *nr_file;
+ struct node_stats *node_stats;
+ struct generation_stats *gen_stats;
+ char *end;
+
+ TEST_ASSERT(it.str, "failed to copy input line");
+
+ gen = split_next(&it);
+
+ if (!gen)
+ goto out_consume; /* Skip empty lines */
+
+ if (!strcmp("memcg", gen) || !strcmp("node", gen)) {
+ /*
+ * Reached next memcg or node section. Don't consume, let the
+ * other handler deal with this.
+ */
+ ctx->next_handler = memcg_stats_handle_in_memcg;
+ goto out;
+ }
+
+ node_stats = &stats->nodes[ctx->current_node_idx];
+ TEST_ASSERT(node_stats->nr_gens < MAX_NR_GENS,
+ "found too many generation lines; max is %d",
+ MAX_NR_GENS);
+ gen_stats = &node_stats->gens[node_stats->nr_gens++];
+
+ age = split_next(&it);
+ nr_anon = split_next(&it);
+ nr_file = split_next(&it);
+
+ TEST_ASSERT(age && nr_anon && nr_file,
+ "malformed generation line; not enough tokens");
+
+ gen_stats->gen = (int)strtol(gen, &end, 10);
+ TEST_ASSERT(*end == '\0', "malformed generation number '%s'", gen);
+
+ gen_stats->age_ms = strtol(age, &end, 10);
+ TEST_ASSERT(*end == '\0', "malformed generation age '%s'", age);
+
+ gen_stats->nr_anon = strtol(nr_anon, &end, 10);
+ TEST_ASSERT(*end == '\0', "malformed anonymous page count '%s'",
+ nr_anon);
+
+ gen_stats->nr_file = strtol(nr_file, &end, 10);
+ TEST_ASSERT(*end == '\0', "malformed file page count '%s'", nr_file);
+
+out_consume:
+ ctx->consumed = true;
+out:
+ free(my_line);
+}
+
+static void print_memcg_stats(const struct memcg_stats *stats, const char *name)
+{
+ int node, gen;
+
+ pr_debug("stats for memcg %s (id %lu):\n", name, stats->memcg_id);
+ for (node = 0; node < stats->nr_nodes; ++node) {
+ pr_debug("\tnode %d\n", stats->nodes[node].node);
+ for (gen = 0; gen < stats->nodes[node].nr_gens; ++gen) {
+ const struct generation_stats *gstats =
+ &stats->nodes[node].gens[gen];
+
+ pr_debug("\t\tgen %d\tage_ms %ld"
+ "\tnr_anon %ld\tnr_file %ld\n",
+ gstats->gen, gstats->age_ms, gstats->nr_anon,
+ gstats->nr_file);
+ }
+ }
+}
+
+/* Re-read lru_gen debugfs information for @memcg into @stats. */
+void lru_gen_read_memcg_stats(struct memcg_stats *stats, const char *memcg)
+{
+ FILE *f;
+ ssize_t read = 0;
+ char *line = NULL;
+ size_t bufsz;
+ struct memcg_stats_parse_context ctx = {
+ .next_handler = memcg_stats_handle_searching,
+ .name = memcg,
+ };
+
+ memset(stats, 0, sizeof(struct memcg_stats));
+
+ f = fopen(LRU_GEN_DEBUGFS, "r");
+ TEST_ASSERT(f, "fopen(%s) failed", LRU_GEN_DEBUGFS);
+
+ while (ctx.next_handler && (read = getline(&line, &bufsz, f)) > 0) {
+ ctx.consumed = false;
+
+ do {
+ ctx.next_handler(stats, &ctx, line);
+ if (!ctx.next_handler)
+ break;
+ } while (!ctx.consumed);
+ }
+
+ if (read < 0 && !feof(f))
+ TEST_ASSERT(false, "getline(%s) failed", LRU_GEN_DEBUGFS);
+
+ TEST_ASSERT(stats->memcg_id > 0, "Couldn't find memcg: %s\n"
+ "Did the memcg get created in the proper mount?",
+ memcg);
+ if (line)
+ free(line);
+ TEST_ASSERT(!fclose(f), "fclose(%s) failed", LRU_GEN_DEBUGFS);
+
+ print_memcg_stats(stats, memcg);
+}
+
+/*
+ * Find all pages tracked by lru_gen for this memcg in generation @target_gen.
+ *
+ * If @target_gen is negative, look for all generations.
+ */
+long lru_gen_sum_memcg_stats_for_gen(int target_gen,
+ const struct memcg_stats *stats)
+{
+ int node, gen;
+ long total_nr = 0;
+
+ for (node = 0; node < stats->nr_nodes; ++node) {
+ const struct node_stats *node_stats = &stats->nodes[node];
+
+ for (gen = 0; gen < node_stats->nr_gens; ++gen) {
+ const struct generation_stats *gen_stats =
+ &node_stats->gens[gen];
+
+ if (target_gen >= 0 && gen_stats->gen != target_gen)
+ continue;
+
+ total_nr += gen_stats->nr_anon + gen_stats->nr_file;
+ }
+ }
+
+ return total_nr;
+}
+
+/* Find all pages tracked by lru_gen for this memcg. */
+long lru_gen_sum_memcg_stats(const struct memcg_stats *stats)
+{
+ return lru_gen_sum_memcg_stats_for_gen(-1, stats);
+}
+
+/*
+ * If lru_gen aging should force page table scanning.
+ *
+ * If you want to set this to false, you will need to do eviction
+ * before doing extra aging passes.
+ */
+static const bool force_scan = true;
+
+static void run_aging_impl(unsigned long memcg_id, int node_id, int max_gen)
+{
+ FILE *f = fopen(LRU_GEN_DEBUGFS, "w");
+ char *command;
+ size_t sz;
+
+ TEST_ASSERT(f, "fopen(%s) failed", LRU_GEN_DEBUGFS);
+ sz = asprintf(&command, "+ %lu %d %d 1 %d\n",
+ memcg_id, node_id, max_gen, force_scan);
+ TEST_ASSERT(sz > 0, "creating aging command failed");
+
+ pr_debug("Running aging command: %s", command);
+ if (fwrite(command, sizeof(char), sz, f) < sz) {
+ TEST_ASSERT(false, "writing aging command %s to %s failed",
+ command, LRU_GEN_DEBUGFS);
+ }
+
+ TEST_ASSERT(!fclose(f), "fclose(%s) failed", LRU_GEN_DEBUGFS);
+}
+
+void lru_gen_do_aging(struct memcg_stats *stats, const char *memcg)
+{
+ int node, gen;
+
+ pr_debug("lru_gen: invoking aging...\n");
+
+ /* Must read memcg stats to construct the proper aging command. */
+ lru_gen_read_memcg_stats(stats, memcg);
+
+ for (node = 0; node < stats->nr_nodes; ++node) {
+ int max_gen = 0;
+
+ for (gen = 0; gen < stats->nodes[node].nr_gens; ++gen) {
+ int this_gen = stats->nodes[node].gens[gen].gen;
+
+ max_gen = max_gen > this_gen ? max_gen : this_gen;
+ }
+
+ run_aging_impl(stats->memcg_id, stats->nodes[node].node,
+ max_gen);
+ }
+
+ /* Re-read so callers get updated information */
+ lru_gen_read_memcg_stats(stats, memcg);
+}
+
+/*
+ * Find which generation contains at least @pages pages, assuming that
+ * such a generation exists.
+ */
+int lru_gen_find_generation(const struct memcg_stats *stats,
+ unsigned long pages)
+{
+ int node, gen, gen_idx, min_gen = INT_MAX, max_gen = -1;
+
+ for (node = 0; node < stats->nr_nodes; ++node)
+ for (gen_idx = 0; gen_idx < stats->nodes[node].nr_gens;
+ ++gen_idx) {
+ gen = stats->nodes[node].gens[gen_idx].gen;
+ max_gen = gen > max_gen ? gen : max_gen;
+ min_gen = gen < min_gen ? gen : min_gen;
+ }
+
+ for (gen = min_gen; gen < max_gen; ++gen)
+ /* See if this generation has enough pages. */
+ if (lru_gen_sum_memcg_stats_for_gen(gen, stats) > pages)
+ return gen;
+
+ return -1;
+}
+
+bool lru_gen_usable(void)
+{
+ long required_features = LRU_GEN_ENABLED | LRU_GEN_MM_WALK;
+ int lru_gen_fd, lru_gen_debug_fd;
+ char mglru_feature_str[8] = {};
+ long mglru_features;
+
+ lru_gen_fd = open(LRU_GEN_ENABLED_PATH, O_RDONLY);
+ if (lru_gen_fd < 0) {
+ puts("lru_gen: Could not open " LRU_GEN_ENABLED_PATH);
+ return false;
+ }
+ if (read(lru_gen_fd, &mglru_feature_str, 7) < 7) {
+ puts("lru_gen: Could not read from " LRU_GEN_ENABLED_PATH);
+ close(lru_gen_fd);
+ return false;
+ }
+ close(lru_gen_fd);
+
+ mglru_features = strtol(mglru_feature_str, NULL, 16);
+ if ((mglru_features & required_features) != required_features) {
+ printf("lru_gen: missing features, got: %s", mglru_feature_str);
+ return false;
+ }
+
+ lru_gen_debug_fd = open(LRU_GEN_DEBUGFS, O_RDWR);
+ __TEST_REQUIRE(lru_gen_debug_fd >= 0,
+ "lru_gen: Could not open " LRU_GEN_DEBUGFS ", "
+ "but lru_gen is enabled, so cannot use page_idle.");
+ close(lru_gen_debug_fd);
+ return true;
+}
--
2.49.0.395.g12beb8f557-goog
* Re: [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library
2025-03-27 1:23 ` [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library James Houghton
@ 2025-03-27 9:43 ` Michal Koutný
2025-03-27 18:07 ` James Houghton
2025-03-28 2:03 ` Yafang Shao
0 siblings, 2 replies; 12+ messages in thread
From: Michal Koutný @ 2025-03-27 9:43 UTC (permalink / raw)
To: James Houghton
Cc: Sean Christopherson, kvm, Maxim Levitsky, Axel Rasmussen,
Tejun Heo, Johannes Weiner, Yu Zhao, cgroups, linux-kernel,
David Matlack, Yafang Shao
Hello James.
On Thu, Mar 27, 2025 at 01:23:48AM +0000, James Houghton <jthoughton@google.com> wrote:
> KVM selftests will soon need to use some of the cgroup creation and
> deletion functionality from cgroup_util.
Thanks, I think cross-selftest sharing is better than duplicating
similar code.
+Cc: Yafang, as it may be worth porting/unifying with
tools/testing/selftests/bpf/cgroup_helpers.h too
> --- a/tools/testing/selftests/cgroup/cgroup_util.c
> +++ b/tools/testing/selftests/cgroup/lib/cgroup_util.c
> @@ -16,8 +16,7 @@
> #include <sys/wait.h>
> #include <unistd.h>
>
> -#include "cgroup_util.h"
> -#include "../clone3/clone3_selftests.h"
> +#include <cgroup_util.h>
The clone3_selftests.h header is not needed anymore?
Michal
* Re: [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library
2025-03-27 9:43 ` Michal Koutný
@ 2025-03-27 18:07 ` James Houghton
2025-03-28 2:03 ` Yafang Shao
1 sibling, 0 replies; 12+ messages in thread
From: James Houghton @ 2025-03-27 18:07 UTC (permalink / raw)
To: mkoutny
Cc: axelrasmussen, cgroups, dmatlack, hannes, jthoughton, kvm,
laoar.shao, linux-kernel, mlevitsk, seanjc, tj, yuzhao
On Thu, Mar 27, 2025 at 2:43 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello James.
>
> On Thu, Mar 27, 2025 at 01:23:48AM +0000, James Houghton <jthoughton@google.com> wrote:
> > KVM selftests will soon need to use some of the cgroup creation and
> > deletion functionality from cgroup_util.
>
> Thanks, I think cross-selftest sharing is better than duplicating
> similar code.
>
> +Cc: Yafang as it may worth porting/unifying with
> tools/testing/selftests/bpf/cgroup_helpers.h too
>
> > --- a/tools/testing/selftests/cgroup/cgroup_util.c
> > +++ b/tools/testing/selftests/cgroup/lib/cgroup_util.c
> > @@ -16,8 +16,7 @@
> > #include <sys/wait.h>
> > #include <unistd.h>
> >
> > -#include "cgroup_util.h"
> > -#include "../clone3/clone3_selftests.h"
> > +#include <cgroup_util.h>
>
> The clone3_selftests.h header is not needed anymore?
Ah, sorry.
We do indeed still reference `sys_clone3()` from cgroup_util.c, so it should
stay in (as "../../clone3/clone3_selftests.h"). I realize now that it compiled
just fine because the call to `sys_clone3()` is dropped entirely when
clone3_selftests.h is not included.
So I'll apply the following diff:
diff --git a/tools/testing/selftests/cgroup/lib/cgroup_util.c b/tools/testing/selftests/cgroup/lib/cgroup_util.c
index d5649486a11df..fe15541f3a07d 100644
--- a/tools/testing/selftests/cgroup/lib/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/lib/cgroup_util.c
@@ -18,6 +18,8 @@
#include <cgroup_util.h>
+#include "../../clone3/clone3_selftests.h"
+
/* Returns read len on success, or -errno on failure. */
static ssize_t read_text(const char *path, char *buf, size_t max_len)
{
diff --git a/tools/testing/selftests/cgroup/lib/libcgroup.mk b/tools/testing/selftests/cgroup/lib/libcgroup.mk
index 2cbf07337c23f..12323041a5ce6 100644
--- a/tools/testing/selftests/cgroup/lib/libcgroup.mk
+++ b/tools/testing/selftests/cgroup/lib/libcgroup.mk
@@ -6,7 +6,9 @@ LIBCGROUP_O := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBCGROUP_C))
CFLAGS += -I$(CGROUP_DIR)/lib/include
-$(LIBCGROUP_O): $(OUTPUT)/%.o : $(CGROUP_DIR)/%.c
+EXTRA_HDRS := $(selfdir)/clone3/clone3_selftests.h
+
+$(LIBCGROUP_O): $(OUTPUT)/%.o : $(CGROUP_DIR)/%.c $(EXTRA_HDRS)
$(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@
EXTRA_CLEAN += $(LIBCGROUP_O)
* Re: [PATCH 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking
2025-03-27 1:23 ` [PATCH 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking James Houghton
@ 2025-03-27 18:26 ` James Houghton
0 siblings, 0 replies; 12+ messages in thread
From: James Houghton @ 2025-03-27 18:26 UTC (permalink / raw)
To: Sean Christopherson, kvm
Cc: Maxim Levitsky, Axel Rasmussen, Tejun Heo, Johannes Weiner,
mkoutny, Yu Zhao, cgroups, linux-kernel
On Wed, Mar 26, 2025 at 6:25 PM James Houghton <jthoughton@google.com> wrote:
> diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> index 0e594883ec13e..1c8e43e18e4c6 100644
> --- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
> +++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> @@ -318,6 +415,15 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> pr_info("\n");
> access_memory(vm, nr_vcpus, ACCESS_WRITE, "Populating memory");
>
> + if (use_lru_gen) {
> + struct memcg_stats stats;
> +
> + /* Do an initial page table scan */
This comment is wrong, sorry. I'll just drop it.
I initially had a lru_gen_do_aging() here to verify that everything
was tracked in the lru_gen debugfs, but everything is already tracked
anyway. Doing an aging pass here means that the "control" write after
this is writing to idle memory, so it ceases to be a control.
> + lru_gen_read_memcg_stats(&stats, TEST_MEMCG_NAME);
> + TEST_ASSERT(lru_gen_sum_memcg_stats(&stats) >= total_pages,
> + "Not all pages accounted for. Was the memcg set up correctly?");
> + }
> +
> /* As a control, read and write to the populated memory first. */
> access_memory(vm, nr_vcpus, ACCESS_WRITE, "Writing to populated memory");
> access_memory(vm, nr_vcpus, ACCESS_READ, "Reading from populated memory");
* Re: [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library
2025-03-27 9:43 ` Michal Koutný
2025-03-27 18:07 ` James Houghton
@ 2025-03-28 2:03 ` Yafang Shao
1 sibling, 0 replies; 12+ messages in thread
From: Yafang Shao @ 2025-03-28 2:03 UTC (permalink / raw)
To: Michal Koutný
Cc: James Houghton, Sean Christopherson, kvm, Maxim Levitsky,
Axel Rasmussen, Tejun Heo, Johannes Weiner, Yu Zhao, cgroups,
linux-kernel, David Matlack
On Thu, Mar 27, 2025 at 5:43 PM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello James.
>
> On Thu, Mar 27, 2025 at 01:23:48AM +0000, James Houghton <jthoughton@google.com> wrote:
> > KVM selftests will soon need to use some of the cgroup creation and
> > deletion functionality from cgroup_util.
>
> Thanks, I think cross-selftest sharing is better than duplicating
> similar code.
>
> +Cc: Yafang as it may worth porting/unifying with
> tools/testing/selftests/bpf/cgroup_helpers.h too
Thanks for pointing that out—that’s a good suggestion. We should also
consider migrating the bpf cgroup helpers into the new libcgroup.
--
Regards
Yafang
* Re: [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check
2025-03-27 1:23 ` [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check James Houghton
@ 2025-03-28 19:32 ` Maxim Levitsky
2025-03-28 21:26 ` James Houghton
0 siblings, 1 reply; 12+ messages in thread
From: Maxim Levitsky @ 2025-03-28 19:32 UTC (permalink / raw)
To: James Houghton, Sean Christopherson, kvm
Cc: Axel Rasmussen, Tejun Heo, Johannes Weiner, mkoutny, Yu Zhao,
cgroups, linux-kernel
On Thu, 2025-03-27 at 01:23 +0000, James Houghton wrote:
> From: Maxim Levitsky <mlevitsk@redhat.com>
>
> Add an option to skip sanity check of number of still idle pages,
> and set it by default to skip, in case hypervisor or NUMA balancing
> is detected.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> Co-developed-by: James Houghton <jthoughton@google.com>
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
> .../selftests/kvm/access_tracking_perf_test.c | 61 ++++++++++++++++---
> .../testing/selftests/kvm/include/test_util.h | 1 +
> tools/testing/selftests/kvm/lib/test_util.c | 7 +++
> 3 files changed, 60 insertions(+), 9 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> index 447e619cf856e..0e594883ec13e 100644
> --- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
> +++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> @@ -65,6 +65,16 @@ static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
> /* Whether to overlap the regions of memory vCPUs access. */
> static bool overlap_memory_access;
>
> +/*
> + * If the test should only warn if there are too many idle pages (i.e., it is
> + * expected).
> + * -1: Not yet set.
> + * 0: We do not expect too many idle pages, so FAIL if too many idle pages.
> + * 1: Having too many idle pages is expected, so merely print a warning if
> + * too many idle pages are found.
> + */
> +static int idle_pages_warn_only = -1;
> +
> struct test_params {
> /* The backing source for the region of memory. */
> enum vm_mem_backing_src_type backing_src;
> @@ -177,18 +187,12 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
> * arbitrary; high enough that we ensure most memory access went through
> * access tracking but low enough as to not make the test too brittle
> * over time and across architectures.
> - *
> - * When running the guest as a nested VM, "warn" instead of asserting
> - * as the TLB size is effectively unlimited and the KVM doesn't
> - * explicitly flush the TLB when aging SPTEs. As a result, more pages
> - * are cached and the guest won't see the "idle" bit cleared.
> */
> if (still_idle >= pages / 10) {
> -#ifdef __x86_64__
> - TEST_ASSERT(this_cpu_has(X86_FEATURE_HYPERVISOR),
> + TEST_ASSERT(idle_pages_warn_only,
> "vCPU%d: Too many pages still idle (%lu out of %lu)",
> vcpu_idx, still_idle, pages);
> -#endif
> +
> printf("WARNING: vCPU%d: Too many pages still idle (%lu out of %lu), "
> "this will affect performance results.\n",
> vcpu_idx, still_idle, pages);
> @@ -328,6 +332,31 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> memstress_destroy_vm(vm);
> }
>
> +static int access_tracking_unreliable(void)
> +{
> +#ifdef __x86_64__
> + /*
> + * When running nested, the TLB size is effectively unlimited and the
> + * KVM doesn't explicitly flush the TLB when aging SPTEs. As a result,
> + * more pages are cached and the guest won't see the "idle" bit cleared.
> + */
Tiny nitpick: this should say nested on KVM, because on other hypervisors it might work differently,
but overall most of them probably suffer from the same problem,
so it's probably better to say something like this:
'When running nested, the TLB size might be effectively unlimited,
for example this is the case when running on top of KVM L0'
> + if (this_cpu_has(X86_FEATURE_HYPERVISOR)) {
> + puts("Skipping idle page count sanity check, because the test is run nested");
> + return 1;
> + }
> +#endif
> + /*
> + * When NUMA balancing is enabled, guest memory can be mapped
> + * PROT_NONE, and the Accessed bits won't be queriable.
Tiny nitpick: the accessed bits in this case are lost because the secondary mapping might be
zapped, at which point KVM no longer propagates them from the secondary to the primary paging
structures; the biggest offender here is NUMA balancing.
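For context, a rough sketch of the page_idle interface the test relies on (illustrative
only, not code from the test or from this patch):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/*
 * Each bit in /sys/kernel/mm/page_idle/bitmap covers one PFN; userspace writes
 * set bits to mark pages idle, and the kernel clears a bit again only when it
 * observes an Accessed bit for that page. If the secondary (EPT/NPT) mapping
 * was zapped, the guest's accesses leave nothing to observe and the page keeps
 * looking idle.
 */
static int pfn_is_idle(uint64_t pfn)
{
	uint64_t word;
	int ret = -1;
	int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDONLY);

	if (fd < 0)
		return -1;
	/* The bitmap is an array of u64s; each 8-byte word covers 64 PFNs. */
	if (pread(fd, &word, sizeof(word), (pfn / 64) * 8) == sizeof(word))
		ret = !!(word & (1ULL << (pfn % 64)));
	close(fd);
	return ret;
}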
> + */
> + if (is_numa_balancing_enabled()) {
> + puts("Skipping idle page count sanity check, because NUMA balancing is enabled");
> + return 1;
> + }
> +
> + return 0;
> +}
Very good idea of extracting this logic into a function and documenting it.
> +
> static void help(char *name)
> {
> puts("");
> @@ -342,6 +371,12 @@ static void help(char *name)
> printf(" -v: specify the number of vCPUs to run.\n");
> printf(" -o: Overlap guest memory accesses instead of partitioning\n"
> " them into a separate region of memory for each vCPU.\n");
> + printf(" -w: Control whether the test warns or fails if more than 10%\n"
> + " of pages are still seen as idle/old after accessing guest\n"
> + " memory. >0 == warn only, 0 == fail, <0 == auto. For auto\n"
> + " mode, the test fails by default, but switches to warn only\n"
> + " if NUMA balancing is enabled or the test detects it's running\n"
> + " in a VM.\n");
> backing_src_help("-s");
> puts("");
> exit(0);
> @@ -359,7 +394,7 @@ int main(int argc, char *argv[])
>
> guest_modes_append_default();
>
> - while ((opt = getopt(argc, argv, "hm:b:v:os:")) != -1) {
> + while ((opt = getopt(argc, argv, "hm:b:v:os:w:")) != -1) {
> switch (opt) {
> case 'm':
> guest_modes_cmdline(optarg);
> @@ -376,6 +411,11 @@ int main(int argc, char *argv[])
> case 's':
> params.backing_src = parse_backing_src_type(optarg);
> break;
> + case 'w':
> + idle_pages_warn_only =
> + atoi_non_negative("Idle pages warning",
> + optarg);
> + break;
> case 'h':
> default:
> help(argv[0]);
> @@ -388,6 +428,9 @@ int main(int argc, char *argv[])
> "CONFIG_IDLE_PAGE_TRACKING is not enabled");
> close(page_idle_fd);
>
> + if (idle_pages_warn_only == -1)
> + idle_pages_warn_only = access_tracking_unreliable();
> +
> for_each_guest_mode(run_test, ¶ms);
>
> return 0;
> diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
> index 77d13d7920cb8..c6ef895fbd9ab 100644
> --- a/tools/testing/selftests/kvm/include/test_util.h
> +++ b/tools/testing/selftests/kvm/include/test_util.h
> @@ -153,6 +153,7 @@ bool is_backing_src_hugetlb(uint32_t i);
> void backing_src_help(const char *flag);
> enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
> long get_run_delay(void);
> +bool is_numa_balancing_enabled(void);
>
> /*
> * Whether or not the given source type is shared memory (as opposed to
> diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
> index 3dc8538f5d696..03eb99af9b8de 100644
> --- a/tools/testing/selftests/kvm/lib/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/test_util.c
> @@ -176,6 +176,13 @@ size_t get_trans_hugepagesz(void)
> return get_sysfs_val("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size");
> }
>
> +bool is_numa_balancing_enabled(void)
> +{
> + if (!test_sysfs_path("/proc/sys/kernel/numa_balancing"))
> + return false;
> + return get_sysfs_val("/proc/sys/kernel/numa_balancing") == 1;
> +}
> +
> size_t get_def_hugetlb_pagesz(void)
> {
> char buf[64];
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check
2025-03-28 19:32 ` Maxim Levitsky
@ 2025-03-28 21:26 ` James Houghton
0 siblings, 0 replies; 12+ messages in thread
From: James Houghton @ 2025-03-28 21:26 UTC (permalink / raw)
To: mlevitsk
Cc: axelrasmussen, cgroups, hannes, jthoughton, kvm, linux-kernel,
mkoutny, seanjc, tj, yuzhao
On Fri, Mar 28, 2025 at 12:32 PM Maxim Levitsky <mlevitsk@redhat.com> wrote:
>
> On Thu, 2025-03-27 at 01:23 +0000, James Houghton wrote:
> > From: Maxim Levitsky <mlevitsk@redhat.com>
> >
> > Add an option to skip the sanity check of the number of still-idle
> > pages, and make it default to skipping the check when a hypervisor or
> > NUMA balancing is detected.
> >
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > Co-developed-by: James Houghton <jthoughton@google.com>
> > Signed-off-by: James Houghton <jthoughton@google.com>
> > ---
> > .../selftests/kvm/access_tracking_perf_test.c | 61 ++++++++++++++++---
> > .../testing/selftests/kvm/include/test_util.h | 1 +
> > tools/testing/selftests/kvm/lib/test_util.c | 7 +++
> > 3 files changed, 60 insertions(+), 9 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> > index 447e619cf856e..0e594883ec13e 100644
> > --- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
> > +++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
> > @@ -65,6 +65,16 @@ static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
> > /* Whether to overlap the regions of memory vCPUs access. */
> > static bool overlap_memory_access;
> >
> > +/*
> > + * If the test should only warn if there are too many idle pages (i.e., it is
> > + * expected).
> > + * -1: Not yet set.
> > + * 0: We do not expect too many idle pages, so FAIL if too many idle pages.
> > + * 1: Having too many idle pages is expected, so merely print a warning if
> > + * too many idle pages are found.
> > + */
> > +static int idle_pages_warn_only = -1;
> > +
> > struct test_params {
> > /* The backing source for the region of memory. */
> > enum vm_mem_backing_src_type backing_src;
> > @@ -177,18 +187,12 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
> > * arbitrary; high enough that we ensure most memory access went through
> > * access tracking but low enough as to not make the test too brittle
> > * over time and across architectures.
> > - *
> > - * When running the guest as a nested VM, "warn" instead of asserting
> > - * as the TLB size is effectively unlimited and the KVM doesn't
> > - * explicitly flush the TLB when aging SPTEs. As a result, more pages
> > - * are cached and the guest won't see the "idle" bit cleared.
> > */
> > if (still_idle >= pages / 10) {
> > -#ifdef __x86_64__
> > - TEST_ASSERT(this_cpu_has(X86_FEATURE_HYPERVISOR),
> > + TEST_ASSERT(idle_pages_warn_only,
> > "vCPU%d: Too many pages still idle (%lu out of %lu)",
> > vcpu_idx, still_idle, pages);
> > -#endif
> > +
> > printf("WARNING: vCPU%d: Too many pages still idle (%lu out of %lu), "
> > "this will affect performance results.\n",
> > vcpu_idx, still_idle, pages);
> > @@ -328,6 +332,31 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> > memstress_destroy_vm(vm);
> > }
> >
> > +static int access_tracking_unreliable(void)
> > +{
> > +#ifdef __x86_64__
> > + /*
> > + * When running nested, the TLB size is effectively unlimited and the
> > + * KVM doesn't explicitly flush the TLB when aging SPTEs. As a result,
> > + * more pages are cached and the guest won't see the "idle" bit cleared.
> > + */
> Tiny nitpick: this should say nested on KVM, because on other hypervisors it might work differently,
> but overall most of them probably suffer from the same problem,
> so it's probably better to say something like this:
>
> 'When running nested, the TLB size might be effectively unlimited,
> for example this is the case when running on top of KVM L0'
Added this clarification (see below), thanks!
This wording was left as-is from before, but I agree that it could be better.
>
>
> > + if (this_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > + puts("Skipping idle page count sanity check, because the test is run nested");
> > + return 1;
> > + }
> > +#endif
> > + /*
> > + * When NUMA balancing is enabled, guest memory can be mapped
> > + * PROT_NONE, and the Accessed bits won't be queriable.
>
> Tiny nitpick: the accessed bits in this case are lost because the secondary mapping might be
> zapped, at which point KVM no longer propagates them from the secondary to the primary paging
> structures; the biggest offender here is NUMA balancing.
Reworded this bit. The current wording is indeed misleading.
> > + */
> > + if (is_numa_balancing_enabled()) {
> > + puts("Skipping idle page count sanity check, because NUMA balancing is enabled");
> > + return 1;
> > + }
> > +
> > + return 0;
> > +}
>
> Very good idea of extracting this logic into a function and documenting it.
:)
> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Thanks, though I've already included your Signed-off-by. I'll just keep your
Signed-off-by and From:.
I'm applying the following diff for the next version:
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 0e594883ec13e..1770998c7675b 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -336,9 +336,10 @@ static int access_tracking_unreliable(void)
{
#ifdef __x86_64__
/*
- * When running nested, the TLB size is effectively unlimited and the
- * KVM doesn't explicitly flush the TLB when aging SPTEs. As a result,
- * more pages are cached and the guest won't see the "idle" bit cleared.
+ * When running nested, the TLB size may be effectively unlimited (for
+ * example, this is the case when running on KVM L0), and KVM doesn't
+ * explicitly flush the TLB when aging SPTEs. As a result, more pages
+ * are cached and the guest won't see the "idle" bit cleared.
*/
if (this_cpu_has(X86_FEATURE_HYPERVISOR)) {
puts("Skipping idle page count sanity check, because the test is run nested");
@@ -346,8 +347,8 @@ static int access_tracking_unreliable(void)
}
#endif
/*
- * When NUMA balancing is enabled, guest memory can be mapped
- * PROT_NONE, and the Accessed bits won't be queriable.
+ * When NUMA balancing is enabled, guest memory will be unmapped to get
+ * NUMA faults, dropping the Accessed bits.
*/
if (is_numa_balancing_enabled()) {
puts("Skipping idle page count sanity check, because NUMA balancing is enabled");
^ permalink raw reply related [flat|nested] 12+ messages in thread
end of thread, other threads:[~2025-03-28 21:26 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-27 1:23 [PATCH 0/5] KVM: selftests: access_tracking_perf_test fixes for NUMA balancing and MGLRU James Houghton
2025-03-27 1:23 ` [PATCH 1/5] KVM: selftests: Extract guts of THP accessor to standalone sysfs helpers James Houghton
2025-03-27 1:23 ` [PATCH 2/5] KVM: selftests: access_tracking_perf_test: Add option to skip the sanity check James Houghton
2025-03-28 19:32 ` Maxim Levitsky
2025-03-28 21:26 ` James Houghton
2025-03-27 1:23 ` [PATCH 3/5] cgroup: selftests: Move cgroup_util into its own library James Houghton
2025-03-27 9:43 ` Michal Koutný
2025-03-27 18:07 ` James Houghton
2025-03-28 2:03 ` Yafang Shao
2025-03-27 1:23 ` [PATCH 4/5] KVM: selftests: Build and link selftests/cgroup/lib into KVM selftests James Houghton
2025-03-27 1:23 ` [PATCH 5/5] KVM: selftests: access_tracking_perf_test: Use MGLRU for access tracking James Houghton
2025-03-27 18:26 ` James Houghton