public inbox for kvm@vger.kernel.org
* [PATCH v3 0/2] KVM: selftests: Calculate memory latency stats
@ 2023-03-27 21:26 Colton Lewis
  2023-03-27 21:26 ` [PATCH v3 1/2] KVM: selftests: Provide generic way to read system counter Colton Lewis
  2023-03-27 21:26 ` [PATCH v3 2/2] KVM: selftests: Print summary stats of memory latency distribution Colton Lewis
  0 siblings, 2 replies; 5+ messages in thread
From: Colton Lewis @ 2023-03-27 21:26 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson, David Matlack, Vipin Sharma,
	Andrew Jones, Marc Zyngier, Ben Gardon, Ricardo Koller,
	Oliver Upton
  Cc: kvm, Colton Lewis

Sample the latency of memory accesses in dirty_log_perf_test and
report summary stats to give a picture of the latency
distribution. Specifically, focus on the right tail by reporting the
50th, 90th, and 99th percentiles in nanoseconds.

v3:
* Move timing functions to lib/$ARCH/processor.c
* Add memory barriers to prevent reordering measured memory accesses
* Remove floating point from conversion functions
* Change command line flag to -l for latency and provide help summary
* Do not show latency measurements unless -l is given with argument
* Change output formatting to seconds for consistency with other output

v2: https://lore.kernel.org/kvm/20230316222752.1911001-1-coltonlewis@google.com/

v1: https://lore.kernel.org/kvm/20221115173258.2530923-1-coltonlewis@google.com/

Colton Lewis (2):
  KVM: selftests: Provide generic way to read system counter
  KVM: selftests: Print summary stats of memory latency distribution

 .../selftests/kvm/access_tracking_perf_test.c |  3 +-
 .../selftests/kvm/demand_paging_test.c        |  2 +-
 .../selftests/kvm/dirty_log_perf_test.c       | 21 +++++-
 .../selftests/kvm/include/aarch64/processor.h | 13 ++++
 .../testing/selftests/kvm/include/memstress.h | 10 ++-
 .../selftests/kvm/include/x86_64/processor.h  | 13 ++++
 .../selftests/kvm/lib/aarch64/processor.c     | 12 ++++
 tools/testing/selftests/kvm/lib/memstress.c   | 68 ++++++++++++++++---
 .../selftests/kvm/lib/x86_64/processor.c      | 13 ++++
 .../kvm/memslot_modification_stress_test.c    |  2 +-
 .../kvm/system_counter_offset_test.c          | 10 +--
 11 files changed, 143 insertions(+), 24 deletions(-)

--
2.40.0.348.gf938b09366-goog


* [PATCH v3 1/2] KVM: selftests: Provide generic way to read system counter
  2023-03-27 21:26 [PATCH v3 0/2] KVM: selftests: Calculate memory latency stats Colton Lewis
@ 2023-03-27 21:26 ` Colton Lewis
  2023-03-27 21:26 ` [PATCH v3 2/2] KVM: selftests: Print summary stats of memory latency distribution Colton Lewis
  1 sibling, 0 replies; 5+ messages in thread
From: Colton Lewis @ 2023-03-27 21:26 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson, David Matlack, Vipin Sharma,
	Andrew Jones, Marc Zyngier, Ben Gardon, Ricardo Koller,
	Oliver Upton
  Cc: kvm, Colton Lewis

Provide a generic function, cycles_read(), to read the system counter
from the guest for timing purposes, and a helper, cycles_to_ns(), to
convert cycles to nanoseconds. cycles_to_ns() may overflow if the
cycles argument exceeds 10 billion, but that is far outside the
intended use of these functions; different timing instruments should
be used at that scale. clocks_calc_mult_shift() could avoid the
overflow, but importing clocksource code here is not worth the
trouble.

Substitute the previous custom implementation of a similar function in
system_counter_offset_test with this new implementation.

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 .../selftests/kvm/include/aarch64/processor.h       |  3 +++
 .../selftests/kvm/include/x86_64/processor.h        |  3 +++
 tools/testing/selftests/kvm/lib/aarch64/processor.c | 12 ++++++++++++
 tools/testing/selftests/kvm/lib/x86_64/processor.c  | 13 +++++++++++++
 .../selftests/kvm/system_counter_offset_test.c      | 10 ++--------
 5 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 5f977528e09c..f65e491763e0 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -216,4 +216,7 @@ void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,

 uint32_t guest_get_vcpuid(void);

+uint64_t cycles_read(void);
+uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);
+
 #endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 53ffa43c90db..5d977f95d5f5 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1134,4 +1134,7 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 #define PFERR_GUEST_PAGE_MASK	BIT_ULL(PFERR_GUEST_PAGE_BIT)
 #define PFERR_IMPLICIT_ACCESS	BIT_ULL(PFERR_IMPLICIT_ACCESS_BIT)

+uint64_t cycles_read(void);
+uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);
+
 #endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 5972a23b2765..5475a7e98d41 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -8,6 +8,7 @@
 #include <linux/compiler.h>
 #include <assert.h>

+#include "arch_timer.h"
 #include "guest_modes.h"
 #include "kvm_util.h"
 #include "processor.h"
@@ -551,3 +552,14 @@ void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
 	sparsebit_set_num(vm->vpages_valid, 0,
 			  (1ULL << vm->va_bits) >> vm->page_shift);
 }
+
+uint64_t cycles_read(void)
+{
+	return timer_get_cntct(VIRTUAL);
+}
+
+uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles)
+{
+	TEST_ASSERT(cycles < 10000000000, "Conversion to ns may overflow");
+	return cycles * NSEC_PER_SEC / timer_get_cntfrq();
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index ae1e573d94ce..adef76bebff3 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1270,3 +1270,16 @@ void kvm_selftest_arch_init(void)
 	host_cpu_is_intel = this_cpu_is_intel();
 	host_cpu_is_amd = this_cpu_is_amd();
 }
+
+uint64_t cycles_read(void)
+{
+	return rdtsc();
+}
+
+uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles)
+{
+	uint64_t tsc_khz = __vcpu_ioctl(vcpu, KVM_GET_TSC_KHZ, NULL);
+
+	TEST_ASSERT(cycles < 10000000000, "Conversion to ns may overflow");
+	return cycles * NSEC_PER_SEC / (tsc_khz * 1000);
+}
diff --git a/tools/testing/selftests/kvm/system_counter_offset_test.c b/tools/testing/selftests/kvm/system_counter_offset_test.c
index 7f5b330b6a1b..44101d0fcb48 100644
--- a/tools/testing/selftests/kvm/system_counter_offset_test.c
+++ b/tools/testing/selftests/kvm/system_counter_offset_test.c
@@ -39,14 +39,9 @@ static void setup_system_counter(struct kvm_vcpu *vcpu, struct test_case *test)
 			     &test->tsc_offset);
 }

-static uint64_t guest_read_system_counter(struct test_case *test)
-{
-	return rdtsc();
-}
-
 static uint64_t host_read_guest_system_counter(struct test_case *test)
 {
-	return rdtsc() + test->tsc_offset;
+	return cycles_read() + test->tsc_offset;
 }

 #else /* __x86_64__ */
@@ -63,9 +58,8 @@ static void guest_main(void)
 	int i;

 	for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
-		struct test_case *test = &test_cases[i];

-		GUEST_SYNC_CLOCK(i, guest_read_system_counter(test));
+		GUEST_SYNC_CLOCK(i, cycles_read());
 	}
 }

--
2.40.0.348.gf938b09366-goog


* [PATCH v3 2/2] KVM: selftests: Print summary stats of memory latency distribution
  2023-03-27 21:26 [PATCH v3 0/2] KVM: selftests: Calculate memory latency stats Colton Lewis
  2023-03-27 21:26 ` [PATCH v3 1/2] KVM: selftests: Provide generic way to read system counter Colton Lewis
@ 2023-03-27 21:26 ` Colton Lewis
       [not found]   ` <ZHe1wEIYC6qsgupI@google.com>
  1 sibling, 1 reply; 5+ messages in thread
From: Colton Lewis @ 2023-03-27 21:26 UTC (permalink / raw)
  To: Paolo Bonzini, Sean Christopherson, David Matlack, Vipin Sharma,
	Andrew Jones, Marc Zyngier, Ben Gardon, Ricardo Koller,
	Oliver Upton
  Cc: kvm, Colton Lewis

Print summary stats of the memory latency distribution in nanoseconds
for dirty_log_perf_test. For every iteration, this prints the minimum,
the 50th percentile, the 90th percentile, the 99th percentile, and the
maximum. These percentiles are common in latency testing to give a
picture of the right half of the distribution, particularly the worst
cases.

Stats are calculated by sorting the samples taken from all vCPUs and
picking the index corresponding to each percentile.

Because keeping all samples is impractical due to the space required
in some cases (e.g. 64 vCPUs with 64 GB/vCPU of pooled memory would
need 64 GB/vcpu * 64 vcpus * 250,000 samples/GB * 8 bytes/sample ~ 8
GB of extra memory just for samples), reservoir sampling [1] is used
to keep only a small number of samples per vCPU (settable via a
command-line argument). There is a tradeoff between stat accuracy and
memory usage: more samples means more accuracy but also more memory
use.

Because other selftests also use memstress.c, those tests are
hardcoded to take 0 samples for now.

[1] https://en.wikipedia.org/wiki/Reservoir_sampling#Simple:_Algorithm_R

Signed-off-by: Colton Lewis <coltonlewis@google.com>
---
 .../selftests/kvm/access_tracking_perf_test.c |  3 +-
 .../selftests/kvm/demand_paging_test.c        |  2 +-
 .../selftests/kvm/dirty_log_perf_test.c       | 21 +++++-
 .../selftests/kvm/include/aarch64/processor.h | 10 +++
 .../testing/selftests/kvm/include/memstress.h | 10 ++-
 .../selftests/kvm/include/x86_64/processor.h  | 10 +++
 tools/testing/selftests/kvm/lib/memstress.c   | 68 ++++++++++++++++---
 .../kvm/memslot_modification_stress_test.c    |  2 +-
 8 files changed, 110 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 3c7defd34f56..89947aa4ff3a 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -306,7 +306,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct kvm_vm *vm;
 	int nr_vcpus = params->nr_vcpus;

-	vm = memstress_create_vm(mode, nr_vcpus, params->vcpu_memory_bytes, 1,
+	vm = memstress_create_vm(mode, nr_vcpus, params->vcpu_memory_bytes,
+				 0, 1,
 				 params->backing_src, !overlap_memory_access);

 	memstress_start_vcpu_threads(nr_vcpus, vcpu_thread_main);
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index b0e1fc4de9e2..bc5c171eca9e 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -135,7 +135,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct kvm_vm *vm;
 	int i;

-	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
+	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 0, 1,
 				 p->src_type, p->partition_vcpu_memory_access);

 	demand_paging_size = get_backing_src_pagesz(p->src_type);
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index e9d6d1aecf89..79495f65c419 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -133,6 +133,7 @@ struct test_params {
 	int slots;
 	uint32_t write_percent;
 	uint32_t random_seed;
+	uint64_t samples_per_vcpu;
 	bool random_access;
 };

@@ -224,6 +225,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	int i;

 	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size,
+				 p->samples_per_vcpu,
 				 p->slots, p->backing_src,
 				 p->partition_vcpu_memory_access);

@@ -275,6 +277,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	pr_info("Populate memory time: %ld.%.9lds\n",
 		ts_diff.tv_sec, ts_diff.tv_nsec);

+	if (p->samples_per_vcpu)
+		memstress_print_percentiles(vm, nr_vcpus);
+
 	/* Enable dirty logging */
 	clock_gettime(CLOCK_MONOTONIC, &start);
 	enable_dirty_logging(vm, p->slots);
@@ -305,6 +310,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 		pr_info("Iteration %d dirty memory time: %ld.%.9lds\n",
 			iteration, ts_diff.tv_sec, ts_diff.tv_nsec);

+		if (p->samples_per_vcpu)
+			memstress_print_percentiles(vm, nr_vcpus);
+
 		clock_gettime(CLOCK_MONOTONIC, &start);
 		get_dirty_log(vm, bitmaps, p->slots);
 		ts_diff = timespec_elapsed(start);
@@ -367,8 +375,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 static void help(char *name)
 {
 	puts("");
-	printf("usage: %s [-h] [-a] [-i iterations] [-p offset] [-g] "
-	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-r random seed ] [-s mem type]"
+	printf("usage: %s [-h] [-a] [-i iterations] [-p offset] [-g] [-l samples per vcpu] "
+	       "[-m mode] [-n] [-b vcpu bytes] [-v vcpus] [-o] [-r random seed ] [-s mem type] "
 	       "[-x memslots] [-w percentage] [-c physical cpus to run test on]\n", name);
 	puts("");
 	printf(" -a: access memory randomly rather than in order.\n");
@@ -382,6 +390,9 @@ static void help(char *name)
 	       "     is not enabled).\n");
 	printf(" -p: specify guest physical test memory offset\n"
 	       "     Warning: a low offset can conflict with the loaded test code.\n");
+	printf(" -l: Turn on latency measurements and specify number of samples\n"
+	       "     to store per vcpu. More samples means more accuracy at the cost\n"
+	       "     of additional memory use. 1000 is a good value.\n");
 	guest_modes_help();
 	printf(" -n: Run the vCPUs in nested mode (L2)\n");
 	printf(" -e: Run vCPUs while dirty logging is being disabled.  This\n"
@@ -428,6 +439,7 @@ int main(int argc, char *argv[])
 		.slots = 1,
 		.random_seed = 1,
 		.write_percent = 100,
+		.samples_per_vcpu = 0,
 	};
 	int opt;

@@ -438,7 +450,7 @@ int main(int argc, char *argv[])

 	guest_modes_append_default();

-	while ((opt = getopt(argc, argv, "ab:c:eghi:m:nop:r:s:v:x:w:")) != -1) {
+	while ((opt = getopt(argc, argv, "ab:c:eghi:l:m:nop:r:s:v:x:w:")) != -1) {
 		switch (opt) {
 		case 'a':
 			p.random_access = true;
@@ -462,6 +474,9 @@ int main(int argc, char *argv[])
 		case 'i':
 			p.iterations = atoi_positive("Number of iterations", optarg);
 			break;
+		case 'l':
+			p.samples_per_vcpu = atoi_positive("Number of samples/vcpu", optarg);
+			break;
 		case 'm':
 			guest_modes_cmdline(optarg);
 			break;
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index f65e491763e0..d441f485e9c6 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -219,4 +219,14 @@ uint32_t guest_get_vcpuid(void);
 uint64_t cycles_read(void);
 uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);

+#define MEASURE_CYCLES(x)			\
+	({					\
+		uint64_t start;			\
+		start = cycles_read();		\
+		isb();				\
+		x;				\
+		dsb(nsh);			\
+		cycles_read() - start;		\
+	})
+
 #endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index 72e3e358ef7b..25a3b5e308e9 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -37,6 +37,12 @@ struct memstress_args {
 	uint64_t guest_page_size;
 	uint32_t random_seed;
 	uint32_t write_percent;
+	uint64_t samples_per_vcpu;
+	/* Store all samples in a flat array so they can be easily
+	 * sorted later.
+	 */
+	vm_vaddr_t latency_samples;
+

 	/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
 	bool nested;
@@ -56,7 +62,8 @@ struct memstress_args {
 extern struct memstress_args memstress_args;

 struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
-				   uint64_t vcpu_memory_bytes, int slots,
+				   uint64_t vcpu_memory_bytes,
+				   uint64_t samples_per_vcpu, int slots,
 				   enum vm_mem_backing_src_type backing_src,
 				   bool partition_vcpu_memory_access);
 void memstress_destroy_vm(struct kvm_vm *vm);
@@ -72,4 +79,5 @@ void memstress_guest_code(uint32_t vcpu_id);
 uint64_t memstress_nested_pages(int nr_vcpus);
 void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[]);

+void memstress_print_percentiles(struct kvm_vm *vm, int nr_vcpus);
 #endif /* SELFTEST_KVM_MEMSTRESS_H */
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 5d977f95d5f5..7352e02db4ee 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1137,4 +1137,14 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 uint64_t cycles_read(void);
 uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);

+#define MEASURE_CYCLES(x)			\
+	({					\
+		uint64_t start;			\
+		start = cycles_read();		\
+		asm volatile("mfence");		\
+		x;				\
+		asm volatile("mfence");		\
+		cycles_read() - start;		\
+	})
+
 #endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 5f1d3173c238..d9cae925d990 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -48,14 +48,17 @@ void memstress_guest_code(uint32_t vcpu_idx)
 {
 	struct memstress_args *args = &memstress_args;
 	struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
-	struct guest_random_state rand_state;
 	uint64_t gva;
 	uint64_t pages;
 	uint64_t addr;
 	uint64_t page;
 	int i;
-
-	rand_state = new_guest_random_state(args->random_seed + vcpu_idx);
+	struct guest_random_state rand_state =
+		new_guest_random_state(args->random_seed + vcpu_idx);
+	uint64_t *vcpu_samples = (uint64_t *)args->latency_samples
+		+ args->samples_per_vcpu * vcpu_idx;
+	uint64_t cycles;
+	uint32_t maybe_sample;

 	gva = vcpu_args->gva;
 	pages = vcpu_args->pages;
@@ -73,15 +76,46 @@ void memstress_guest_code(uint32_t vcpu_idx)
 			addr = gva + (page * args->guest_page_size);

 			if (guest_random_u32(&rand_state) % 100 < args->write_percent)
-				*(uint64_t *)addr = 0x0123456789ABCDEF;
+				cycles = MEASURE_CYCLES(*(uint64_t *)addr = 0x0123456789ABCDEF);
 			else
-				READ_ONCE(*(uint64_t *)addr);
+				cycles = MEASURE_CYCLES(READ_ONCE(*(uint64_t *)addr));
+
+			if (i < args->samples_per_vcpu) {
+				vcpu_samples[i] = cycles;
+				continue;
+			}
+
+			maybe_sample = guest_random_u32(&rand_state) % (i + 1);
+
+			if (maybe_sample < args->samples_per_vcpu)
+				vcpu_samples[maybe_sample] = cycles;
 		}

 		GUEST_SYNC(1);
 	}
 }

+/* Compare function for qsort; don't truncate uint64_t samples to int */
+static int memstress_qcmp(const void *a, const void *b)
+{
+	return (*(uint64_t *)a > *(uint64_t *)b) - (*(uint64_t *)a < *(uint64_t *)b);
+}
+
+void memstress_print_percentiles(struct kvm_vm *vm, int nr_vcpus)
+{
+	uint64_t n_samples = nr_vcpus * memstress_args.samples_per_vcpu;
+	uint64_t *samples = addr_gva2hva(vm, memstress_args.latency_samples);
+
+	qsort(samples, n_samples, sizeof(uint64_t), &memstress_qcmp);
+
+	pr_info("Latency distribution = min:0.%.9lds, 50th:0.%.9lds, 90th:0.%.9lds, 99th:0.%.9lds, max:0.%.9lds\n",
+		cycles_to_ns(vcpus[0], samples[0]),
+		cycles_to_ns(vcpus[0], samples[n_samples / 2]),
+		cycles_to_ns(vcpus[0], samples[n_samples * 9 / 10]),
+		cycles_to_ns(vcpus[0], samples[n_samples * 99 / 100]),
+		cycles_to_ns(vcpus[0], samples[n_samples - 1]));
+}
+
 void memstress_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
 			   struct kvm_vcpu *vcpus[],
 			   uint64_t vcpu_memory_bytes,
@@ -119,21 +153,26 @@ void memstress_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
 }

 struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
-				   uint64_t vcpu_memory_bytes, int slots,
+				   uint64_t vcpu_memory_bytes,
+				   uint64_t samples_per_vcpu, int slots,
 				   enum vm_mem_backing_src_type backing_src,
 				   bool partition_vcpu_memory_access)
 {
 	struct memstress_args *args = &memstress_args;
 	struct kvm_vm *vm;
-	uint64_t guest_num_pages, slot0_pages = 0;
+	uint64_t guest_num_pages, sample_pages, slot0_pages = 0;
 	uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
 	uint64_t region_end_gfn;
+	uint64_t sample_size;
+	uint64_t sample_slot;
+	uint64_t sample_start;
 	int i;

 	pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));

 	/* By default vCPUs will write to memory. */
 	args->write_percent = 100;
+	args->samples_per_vcpu = samples_per_vcpu;

 	/*
 	 * Snapshot the non-huge page size.  This is used by the guest code to
@@ -159,12 +198,17 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	if (args->nested)
 		slot0_pages += memstress_nested_pages(nr_vcpus);

+	sample_size = samples_per_vcpu * nr_vcpus * sizeof(uint64_t);
+	sample_pages = vm_adjust_num_guest_pages(
+		mode, 1 + sample_size / args->guest_page_size);
+
 	/*
 	 * Pass guest_num_pages to populate the page tables for test memory.
 	 * The memory is also added to memslot 0, but that's a benign side
 	 * effect as KVM allows aliasing HVAs in meslots.
 	 */
-	vm = __vm_create_with_vcpus(mode, nr_vcpus, slot0_pages + guest_num_pages,
+	vm = __vm_create_with_vcpus(mode, nr_vcpus,
+				    slot0_pages + guest_num_pages + sample_pages,
 				    memstress_guest_code, vcpus);

 	args->vm = vm;
@@ -190,7 +234,7 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 		    " nr_vcpus: %d wss: %" PRIx64 "]\n",
 		    guest_num_pages, region_end_gfn - 1, nr_vcpus, vcpu_memory_bytes);

-	args->gpa = (region_end_gfn - guest_num_pages - 1) * args->guest_page_size;
+	args->gpa = (region_end_gfn - guest_num_pages - sample_pages - 1) * args->guest_page_size;
 	args->gpa = align_down(args->gpa, backing_src_pagesz);
 #ifdef __s390x__
 	/* Align to 1M (segment size) */
@@ -213,6 +257,12 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
 	/* Do mapping for the demand paging memory slot */
 	virt_map(vm, guest_test_virt_mem, args->gpa, guest_num_pages);

+	memstress_args.latency_samples = guest_test_virt_mem + args->size;
+	sample_start = args->gpa + args->size;
+	sample_slot = MEMSTRESS_MEM_SLOT_INDEX + slots;
+	vm_userspace_mem_region_add(vm, backing_src, sample_start, sample_slot, sample_pages, 0);
+	virt_map(vm, memstress_args.latency_samples, sample_start, sample_pages);
+
 	memstress_setup_vcpus(vm, nr_vcpus, vcpus, vcpu_memory_bytes,
 			      partition_vcpu_memory_access);

diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 9855c41ca811..0a14f7d16e0c 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -95,7 +95,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
 	struct test_params *p = arg;
 	struct kvm_vm *vm;

-	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 1,
+	vm = memstress_create_vm(mode, nr_vcpus, guest_percpu_mem_size, 0, 1,
 				 VM_MEM_SRC_ANONYMOUS,
 				 p->partition_vcpu_memory_access);

--
2.40.0.348.gf938b09366-goog


* Re: [PATCH v3 2/2] KVM: selftests: Print summary stats of memory latency distribution
       [not found]   ` <ZHe1wEIYC6qsgupI@google.com>
@ 2023-06-01  5:51     ` Oliver Upton
  2023-06-01 18:12       ` Colton Lewis
  0 siblings, 1 reply; 5+ messages in thread
From: Oliver Upton @ 2023-06-01  5:51 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Colton Lewis, Paolo Bonzini, David Matlack, Vipin Sharma,
	Andrew Jones, Marc Zyngier, Ben Gardon, Ricardo Koller, kvm

On Wed, May 31, 2023 at 02:01:52PM -0700, Sean Christopherson wrote:
> On Mon, Mar 27, 2023, Colton Lewis wrote:
> > diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > index f65e491763e0..d441f485e9c6 100644
> > --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> > +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > @@ -219,4 +219,14 @@ uint32_t guest_get_vcpuid(void);
> >  uint64_t cycles_read(void);
> >  uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);
> > 
> > +#define MEASURE_CYCLES(x)			\
> > +	({					\
> > +		uint64_t start;			\
> > +		start = cycles_read();		\
> > +		isb();				\
> 
> Would it make sense to put the necessary barriers inside the cycles_read() (or
> whatever we end up calling it)?  Or does that not make sense on ARM?

+1. Additionally, the function should have a name that implies ordering,
like read_system_counter_ordered() or similar.

> > +		x;				\
> > +		dsb(nsh);			\

I assume you're doing this because you want to wait for outstanding
loads and stores to complete due to 'x', right?

My knee-jerk reaction was that you could just do an mb() and share the
implementation between arches, but it would seem the tools/ flavor of
the barrier demotes to a DMB because... reasons.

> > +		cycles_read() - start;		\
> > +	})
> > +
> >  #endif /* SELFTEST_KVM_PROCESSOR_H */
> ...
> 
> > diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> > index 5d977f95d5f5..7352e02db4ee 100644
> > --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> > +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> > @@ -1137,4 +1137,14 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> >  uint64_t cycles_read(void);
> >  uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);
> > 
> > +#define MEASURE_CYCLES(x)			\
> > +	({					\
> > +		uint64_t start;			\
> > +		start = cycles_read();		\
> > +		asm volatile("mfence");		\
> 
> This is incorrect as placing the barrier after the RDTSC allows the RDTSC to be
> executed before earlier loads, e.g. could measure memory accesses from whatever
> was before MEASURE_CYCLES().  And per the kernel's rdtsc_ordered(), it sounds like
> RDTSC can only be hoisted before prior loads, i.e. will be ordered with respect
> to future loads and stores.

Same thing goes for the arm64 variant of the function... You want to
insert an isb() immediately _before_ you read the counter register to
avoid speculation.

arch_timer_read_cntvct_el0() back over in the kernel is a good example of
this. You can very likely ignore the ECV alternative for now.

-- 
Thanks,
Oliver


* Re: [PATCH v3 2/2] KVM: selftests: Print summary stats of memory latency distribution
  2023-06-01  5:51     ` Oliver Upton
@ 2023-06-01 18:12       ` Colton Lewis
  0 siblings, 0 replies; 5+ messages in thread
From: Colton Lewis @ 2023-06-01 18:12 UTC (permalink / raw)
  To: Oliver Upton
  Cc: seanjc, pbonzini, dmatlack, vipinsh, andrew.jones, maz, bgardon,
	ricarkol, kvm

Oliver Upton <oliver.upton@linux.dev> writes:

> On Wed, May 31, 2023 at 02:01:52PM -0700, Sean Christopherson wrote:
>> On Mon, Mar 27, 2023, Colton Lewis wrote:
>> > diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
>> > index f65e491763e0..d441f485e9c6 100644
>> > --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
>> > +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
>> > @@ -219,4 +219,14 @@ uint32_t guest_get_vcpuid(void);
>> >  uint64_t cycles_read(void);
>> >  uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);
>> >
>> > +#define MEASURE_CYCLES(x)			\
>> > +	({					\
>> > +		uint64_t start;			\
>> > +		start = cycles_read();		\
>> > +		isb();				\

>> Would it make sense to put the necessary barriers inside the cycles_read()
>> (or whatever we end up calling it)?  Or does that not make sense on ARM?

> +1. Additionally, the function should have a name that implies ordering,
> like read_system_counter_ordered() or similar.

cycles_read() is currently a wrapper for timer_get_cntct(), which has
an isb() at the beginning but not the end. I think it would make more
sense to add the barrier there if there is no objection.

>> > +		x;				\
>> > +		dsb(nsh);			\

> I assume you're doing this because you want to wait for outstanding
> loads and stores to complete due to 'x', right?

Correct.

> My knee-jerk reaction was that you could just do an mb() and share the
> implementation between arches, but it would seem the tools/ flavor of
> the barrier demotes to a DMB because... reasons.

Yep, and what I read from the ARM manual says it has to be DSB.

>> > +		cycles_read() - start;		\
>> > +	})
>> > +
>> >  #endif /* SELFTEST_KVM_PROCESSOR_H */
>> ...

>> > diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
>> > index 5d977f95d5f5..7352e02db4ee 100644
>> > --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
>> > +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
>> > @@ -1137,4 +1137,14 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
>> >  uint64_t cycles_read(void);
>> >  uint64_t cycles_to_ns(struct kvm_vcpu *vcpu, uint64_t cycles);
>> >
>> > +#define MEASURE_CYCLES(x)			\
>> > +	({					\
>> > +		uint64_t start;			\
>> > +		start = cycles_read();		\
>> > +		asm volatile("mfence");		\

>> This is incorrect as placing the barrier after the RDTSC allows the RDTSC
>> to be executed before earlier loads, e.g. could measure memory accesses
>> from whatever was before MEASURE_CYCLES().  And per the kernel's
>> rdtsc_ordered(), it sounds like RDTSC can only be hoisted before prior
>> loads, i.e. will be ordered with respect to future loads and stores.

Interesting, so I will swap the fence and the cycles_read()


> Same thing goes for the arm64 variant of the function... You want to
> insert an isb() immediately _before_ you read the counter register to
> avoid speculation.

That's taken care of. See my earlier comment about timer_get_cntct()

