* [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests
@ 2020-08-31 19:34 Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 1/4] arm64: Move get_id_aa64dfr0() in processor.h Eric Auger
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Eric Auger @ 2020-08-31 19:34 UTC (permalink / raw)
To: eric.auger.pro, eric.auger, kvm, kvmarm, qemu-devel, drjones,
andrew.murray, sudeep.holla, maz, will, haibo.xu
This series implements tests exercising the Statistical Profiling
Extensions.
This was tested with associated unmerged kernel [1] and QEMU [2]
series.
Depending on the comments, I can easily add other tests checking
more configurations and additional events, and also test migration.
I hope this can be useful when respinning both series.
All SPE tests can be launched with:
./run_tests.sh -g spe
Tests also can be launched individually. For example:
./arm-run arm/spe.flat -append 'spe-buffer'
The series can be found at:
https://github.com/eauger/kut/tree/spe_rfc
References:
[1] [PATCH v2 00/18] arm64: KVM: add SPE profiling support
[2] [PATCH 0/7] target/arm: Add vSPE support to KVM guest
Eric Auger (4):
arm64: Move get_id_aa64dfr0() in processor.h
spe: Probing and Introspection Test
spe: Add profiling buffer test
spe: Test Profiling Buffer Events
arm/Makefile.common | 1 +
arm/pmu.c | 1 -
arm/spe.c | 463 ++++++++++++++++++++++++++++++++++++++
arm/unittests.cfg | 24 ++
lib/arm64/asm/barrier.h | 1 +
lib/arm64/asm/processor.h | 5 +
6 files changed, 494 insertions(+), 1 deletion(-)
create mode 100644 arm/spe.c
--
2.21.3
* [kvm-unit-tests RFC 1/4] arm64: Move get_id_aa64dfr0() in processor.h
2020-08-31 19:34 [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Eric Auger
@ 2020-08-31 19:34 ` Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 2/4] spe: Probing and Introspection Test Eric Auger
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Eric Auger @ 2020-08-31 19:34 UTC (permalink / raw)
To: eric.auger.pro, eric.auger, kvm, kvmarm, qemu-devel, drjones,
andrew.murray, sudeep.holla, maz, will, haibo.xu
We plan to use get_id_aa64dfr0() in the SPE tests,
so let's move it into the processor.h header.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
arm/pmu.c | 1 -
lib/arm64/asm/processor.h | 5 +++++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/arm/pmu.c b/arm/pmu.c
index cece53e..e2cb51e 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -167,7 +167,6 @@ static void test_overflow_interrupt(void) {}
#define ID_DFR0_PMU_V3_8_5 0b0110
#define ID_DFR0_PMU_IMPDEF 0b1111
-static inline uint32_t get_id_aa64dfr0(void) { return read_sysreg(id_aa64dfr0_el1); }
static inline uint32_t get_pmcr(void) { return read_sysreg(pmcr_el0); }
static inline void set_pmcr(uint32_t v) { write_sysreg(v, pmcr_el0); }
static inline uint64_t get_pmccntr(void) { return read_sysreg(pmccntr_el0); }
diff --git a/lib/arm64/asm/processor.h b/lib/arm64/asm/processor.h
index 02665b8..11b7564 100644
--- a/lib/arm64/asm/processor.h
+++ b/lib/arm64/asm/processor.h
@@ -88,6 +88,11 @@ static inline uint64_t get_mpidr(void)
return read_sysreg(mpidr_el1);
}
+static inline uint64_t get_id_aa64dfr0(void)
+{
+ return read_sysreg(id_aa64dfr0_el1);
+}
+
#define MPIDR_HWID_BITMASK 0xff00ffffff
extern int mpidr_to_cpu(uint64_t mpidr);
--
2.21.3
* [kvm-unit-tests RFC 2/4] spe: Probing and Introspection Test
2020-08-31 19:34 [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 1/4] arm64: Move get_id_aa64dfr0() in processor.h Eric Auger
@ 2020-08-31 19:34 ` Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 3/4] spe: Add profiling buffer test Eric Auger
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Eric Auger @ 2020-08-31 19:34 UTC (permalink / raw)
To: eric.auger.pro, eric.auger, kvm, kvmarm, qemu-devel, drjones,
andrew.murray, sudeep.holla, maz, will, haibo.xu
Test whether the Statistical Profiling Extension (SPE) is
supported and, if so, collect dimensioning data from the IDR
registers. This first test only validates that data.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
arm/Makefile.common | 1 +
arm/spe.c | 163 ++++++++++++++++++++++++++++++++++++++++++++
arm/unittests.cfg | 8 +++
3 files changed, 172 insertions(+)
create mode 100644 arm/spe.c
diff --git a/arm/Makefile.common b/arm/Makefile.common
index a123e85..4e7e4eb 100644
--- a/arm/Makefile.common
+++ b/arm/Makefile.common
@@ -8,6 +8,7 @@ tests-common = $(TEST_DIR)/selftest.flat
tests-common += $(TEST_DIR)/spinlock-test.flat
tests-common += $(TEST_DIR)/pci-test.flat
tests-common += $(TEST_DIR)/pmu.flat
+tests-common += $(TEST_DIR)/spe.flat
tests-common += $(TEST_DIR)/gic.flat
tests-common += $(TEST_DIR)/psci.flat
tests-common += $(TEST_DIR)/sieve.flat
diff --git a/arm/spe.c b/arm/spe.c
new file mode 100644
index 0000000..153c182
--- /dev/null
+++ b/arm/spe.c
@@ -0,0 +1,163 @@
+/*
+ * Copyright (C) 2020, Red Hat Inc, Eric Auger <eric.auger@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU Lesser General Public License version 2.1 and
+ * only version 2.1 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License
+ * for more details.
+ */
+#include "libcflat.h"
+#include "errata.h"
+#include "asm/barrier.h"
+#include "asm/sysreg.h"
+#include "asm/processor.h"
+#include "alloc_page.h"
+#include <bitops.h>
+
+struct spe {
+ int min_interval;
+ int maxsize;
+ int countsize;
+ bool fl_cap;
+ bool ft_cap;
+ bool fe_cap;
+ int align;
+ void *buffer;
+ bool unique_record_size;
+};
+
+static struct spe spe;
+
+#ifdef __arm__
+
+static bool spe_probe(void) { return false; }
+static void test_spe_introspection(void) { }
+
+#else
+
+#define ID_DFR0_PMSVER_SHIFT 32
+#define ID_DFR0_PMSVER_MASK 0xF
+
+#define PMBIDR_EL1_ALIGN_MASK 0xF
+#define PMBIDR_EL1_P 0x10
+#define PMBIDR_EL1_F 0x20
+
+#define PMSIDR_EL1_FE 0x1
+#define PMSIDR_EL1_FT 0x2
+#define PMSIDR_EL1_FL 0x4
+#define PMSIDR_EL1_ARCHINST 0x8
+#define PMSIDR_EL1_LDS 0x10
+#define PMSIDR_EL1_ERND 0x20
+#define PMSIDR_EL1_INTERVAL_SHIFT 8
+#define PMSIDR_EL1_INTERVAL_MASK 0xFUL
+#define PMSIDR_EL1_MAXSIZE_SHIFT 12
+#define PMSIDR_EL1_MAXSIZE_MASK 0xFUL
+#define PMSIDR_EL1_COUNTSIZE_SHIFT 16
+#define PMSIDR_EL1_COUNTSIZE_MASK 0xFUL
+
+#define PMSIDR_EL1 sys_reg(3, 0, 9, 9, 7)
+
+#define PMBIDR_EL1 sys_reg(3, 0, 9, 10, 7)
+
+static int min_interval(uint8_t idr_bits)
+{
+ switch (idr_bits) {
+ case 0x0:
+ return 256;
+ case 0x2:
+ return 512;
+ case 0x3:
+ return 768;
+ case 0x4:
+ return 1024;
+ case 0x5:
+ return 1536;
+ case 0x6:
+ return 2048;
+ case 0x7:
+ return 3072;
+ case 0x8:
+ return 4096;
+ default:
+ return -1;
+ }
+}
+
+static bool spe_probe(void)
+{
+ uint64_t pmbidr_el1, pmsidr_el1;
+ uint8_t pmsver;
+
+ pmsver = (get_id_aa64dfr0() >> ID_DFR0_PMSVER_SHIFT) & ID_DFR0_PMSVER_MASK;
+
+ report_info("PMSVer = %d", pmsver);
+ if (!pmsver || pmsver > 2)
+ return false;
+
+ pmbidr_el1 = read_sysreg_s(PMBIDR_EL1);
+ if (pmbidr_el1 & PMBIDR_EL1_P) {
+ report_info("MBIDR_EL1: Profiling buffer owned by this exception level");
+ return false;
+ }
+
+ spe.align = 1 << (pmbidr_el1 & PMBIDR_EL1_ALIGN_MASK);
+
+ pmsidr_el1 = read_sysreg_s(PMSIDR_EL1);
+
+ spe.min_interval = min_interval((pmsidr_el1 >> PMSIDR_EL1_INTERVAL_SHIFT) & PMSIDR_EL1_INTERVAL_MASK);
+ spe.maxsize = 1 << ((pmsidr_el1 >> PMSIDR_EL1_MAXSIZE_SHIFT) & PMSIDR_EL1_MAXSIZE_MASK);
+ spe.countsize = (pmsidr_el1 >> PMSIDR_EL1_COUNTSIZE_SHIFT) & PMSIDR_EL1_COUNTSIZE_MASK;
+
+ spe.fl_cap = pmsidr_el1 & PMSIDR_EL1_FL;
+ spe.ft_cap = pmsidr_el1 & PMSIDR_EL1_FT;
+ spe.fe_cap = pmsidr_el1 & PMSIDR_EL1_FE;
+
+ report_info("Align= %d bytes, Min Interval=%d Single record Max Size = %d bytes",
+ spe.align, spe.min_interval, spe.maxsize);
+ report_info("Filtering Caps: Lat=%d Type=%d Events=%d", spe.fl_cap, spe.ft_cap, spe.fe_cap);
+ if (spe.align == spe.maxsize) {
+ report_info("Each record is exactly %d bytes", spe.maxsize);
+ spe.unique_record_size = true;
+ }
+
+ spe.buffer = alloc_pages(0);
+
+ return true;
+}
+
+static void test_spe_introspection(void)
+{
+ report(spe.countsize == 0x2, "PMSIDR_EL1: CountSize = 0b0010");
+ report(spe.maxsize >= 16 && spe.maxsize <= 2048,
+ "PMSIDR_EL1: Single record max size = %d bytes", spe.maxsize);
+ report(spe.min_interval >= 256 && spe.min_interval <= 4096,
+ "PMSIDR_EL1: Minimal sampling interval = %d", spe.min_interval);
+}
+
+#endif
+
+int main(int argc, char *argv[])
+{
+ if (!spe_probe()) {
+ printf("SPE not supported, test skipped...\n");
+ return report_summary();
+ }
+
+ if (argc < 2)
+ report_abort("no test specified");
+
+ report_prefix_push("spe");
+
+ if (strcmp(argv[1], "spe-introspection") == 0) {
+ report_prefix_push(argv[1]);
+ test_spe_introspection();
+ report_prefix_pop();
+ } else {
+ report_abort("Unknown sub-test '%s'", argv[1]);
+ }
+ return report_summary();
+}
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
index f776b66..c070939 100644
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
@@ -134,6 +134,14 @@ extra_params = -append 'pmu-overflow-interrupt'
#groups = pmu
#accel = tcg
+[spe-introspection]
+file = spe.flat
+groups = spe
+arch = arm64
+extra_params = -append 'spe-introspection'
+accel = kvm
+arch = arm64
+
# Test GIC emulation
[gicv2-ipi]
file = gic.flat
--
2.21.3
* [kvm-unit-tests RFC 3/4] spe: Add profiling buffer test
2020-08-31 19:34 [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 1/4] arm64: Move get_id_aa64dfr0() in processor.h Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 2/4] spe: Probing and Introspection Test Eric Auger
@ 2020-08-31 19:34 ` Eric Auger
2020-08-31 19:34 ` [kvm-unit-tests RFC 4/4] spe: Test Profiling Buffer Events Eric Auger
2020-09-01 9:24 ` [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Alexandru Elisei
4 siblings, 0 replies; 8+ messages in thread
From: Eric Auger @ 2020-08-31 19:34 UTC (permalink / raw)
To: eric.auger.pro, eric.auger, kvm, kvmarm, qemu-devel, drjones,
andrew.murray, sudeep.holla, maz, will, haibo.xu
Add the code to prepare for profiling at EL1. The code under profiling
is a simple loop performing memory accesses. We simply check that the
profiling buffer write position increases, i.e. that the buffer gets
filled. No event is expected.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
To make sure no buffer full event is likely to be received, the number
of records to be collected should be estimated; this still needs to be
done (same for the next patch). A rough sketch of such an estimate is
given below. I tried to read PMSICR.COUNT after a single iteration but
I get a value greater than the programmed interval, so I wonder whether
this is a bug or whether reading this register simply returns an
unpredictable value.
---
arm/spe.c | 161 ++++++++++++++++++++++++++++++++++++++++
arm/unittests.cfg | 8 ++
lib/arm64/asm/barrier.h | 1 +
3 files changed, 170 insertions(+)
diff --git a/arm/spe.c b/arm/spe.c
index 153c182..7996f79 100644
--- a/arm/spe.c
+++ b/arm/spe.c
@@ -14,9 +14,11 @@
#include "errata.h"
#include "asm/barrier.h"
#include "asm/sysreg.h"
+#include "asm/page.h"
#include "asm/processor.h"
#include "alloc_page.h"
#include <bitops.h>
+#include "alloc.h"
struct spe {
int min_interval;
@@ -27,6 +29,10 @@ struct spe {
bool fe_cap;
int align;
void *buffer;
+ uint64_t pmbptr_el1;
+ uint64_t pmblimitr_el1;
+ uint64_t pmsirr_el1;
+ uint64_t pmscr_el1;
bool unique_record_size;
};
@@ -36,6 +42,7 @@ static struct spe spe;
static bool spe_probe(void) { return false; }
static void test_spe_introspection(void) { }
+static void test_spe_buffer(void) { }
#else
@@ -59,10 +66,35 @@ static void test_spe_introspection(void) { }
#define PMSIDR_EL1_COUNTSIZE_SHIFT 16
#define PMSIDR_EL1_COUNTSIZE_MASK 0xFUL
+#define PMSIRR_EL1_INTERVAL_SHIFT 8
+#define PMSIRR_EL1_INTERVAL_MASK 0xFFFFFF
+
+#define PMSFCR_EL1_FE 0x1
+#define PMSFCR_EL1_FT 0x2
+#define PMSFCR_EL1_FL 0x4
+#define PMSFCR_EL1_B 0x10000
+#define PMSFCR_EL1_LD 0x20000
+#define PMSFCR_EL1_ST 0x40000
+
+#define PMSCR_EL1 sys_reg(3, 0, 9, 9, 0)
+#define PMSICR_EL1 sys_reg(3, 0, 9, 9, 2)
+#define PMSIRR_EL1 sys_reg(3, 0, 9, 9, 3)
+#define PMSFCR_EL1 sys_reg(3, 0, 9, 9, 4)
+#define PMSEVFR_EL1 sys_reg(3, 0, 9, 9, 5)
#define PMSIDR_EL1 sys_reg(3, 0, 9, 9, 7)
+#define PMBLIMITR_EL1 sys_reg(3, 0, 9, 10, 0)
+#define PMBPTR_EL1 sys_reg(3, 0, 9, 10, 1)
+#define PMBSR_EL1 sys_reg(3, 0, 9, 10, 3)
#define PMBIDR_EL1 sys_reg(3, 0, 9, 10, 7)
+#define PMBLIMITR_EL1_E 0x1
+
+#define PMSCR_EL1_E1SPE 0x2
+#define PMSCR_EL1_PA 0x10
+#define PMSCR_EL1_TS 0x20
+#define PMSCR_EL1_PCT 0x40
+
static int min_interval(uint8_t idr_bits)
{
switch (idr_bits) {
@@ -138,6 +170,131 @@ static void test_spe_introspection(void)
"PMSIDR_EL1: Minimal sampling interval = %d", spe.min_interval);
}
+static void mem_access_loop(void *addr, int loop, uint64_t pmblimitr)
+{
+asm volatile(
+ " msr_s " xstr(PMBLIMITR_EL1) ", %[pmblimitr]\n"
+ " isb\n"
+ " mov x10, %[loop]\n"
+ "1: sub x10, x10, #1\n"
+ " ldr x9, [%[addr]]\n"
+ " cmp x10, #0x0\n"
+ " b.gt 1b\n"
+ " bfxil %[pmblimitr], xzr, 0, 1\n"
+ " msr_s " xstr(PMBLIMITR_EL1) ", %[pmblimitr]\n"
+ " isb\n"
+ :
+ : [addr] "r" (addr), [pmblimitr] "r" (pmblimitr), [loop] "r" (loop)
+ : "x8", "x9", "x10", "cc");
+}
+
+char null_buff[PAGE_SIZE] = {};
+
+static void reset(void)
+{
+ /* erase the profiling buffer, reset the start and limit addresses */
+ spe.pmbptr_el1 = (uint64_t)spe.buffer;
+ spe.pmblimitr_el1 = (uint64_t)(spe.buffer + PAGE_SIZE);
+ write_sysreg_s(spe.pmbptr_el1, PMBPTR_EL1);
+ write_sysreg_s(spe.pmblimitr_el1, PMBLIMITR_EL1);
+ isb();
+
+ /* Drain any buffered data */
+ psb_csync();
+ dsb(nsh);
+
+ memset(spe.buffer, 0, PAGE_SIZE);
+
+ /* reset the syndrome register */
+ write_sysreg_s(0, PMBSR_EL1);
+
+ /* SW must write 0 to PMSICR_EL1 before enabling sampling profiling */
+ write_sysreg_s(0, PMSICR_EL1);
+
+ /* Filtering disabled */
+ write_sysreg_s(0, PMSFCR_EL1);
+
+ /* Interval Reload Register */
+ spe.pmsirr_el1 = (spe.min_interval & PMSIRR_EL1_INTERVAL_MASK) << PMSIRR_EL1_INTERVAL_SHIFT;
+ write_sysreg_s(spe.pmsirr_el1, PMSIRR_EL1);
+
+ /* Control Register */
+ spe.pmscr_el1 = PMSCR_EL1_E1SPE | PMSCR_EL1_TS | PMSCR_EL1_PCT | PMSCR_EL1_PA;
+ write_sysreg_s(spe.pmscr_el1, PMSCR_EL1);
+
+ /* Make sure the syndrome register is void */
+ write_sysreg_s(0, PMBSR_EL1);
+}
+
+static inline void drain(void)
+{
+ /* ensure profiling data are written */
+ psb_csync();
+ dsb(nsh);
+}
+
+static void test_spe_buffer(void)
+{
+ uint64_t pmbsr_el1, val1, val2;
+ void *addr = malloc(10 * PAGE_SIZE);
+
+ reset();
+
+ val1 = read_sysreg_s(PMBPTR_EL1);
+ val2 = read_sysreg_s(PMBLIMITR_EL1);
+ report(val1 == spe.pmbptr_el1 && val2 == spe.pmblimitr_el1,
+ "PMBPTR_EL1, PMBLIMITR_EL1: reset");
+
+ val1 = read_sysreg_s(PMSIRR_EL1);
+ report(val1 == spe.pmsirr_el1, "PMSIRR_EL1: Sampling interval set to %d", spe.min_interval);
+
+ val1 = read_sysreg_s(PMSCR_EL1);
+ report(val1 == spe.pmscr_el1, "PMSCR_EL1: EL1 Statistical Profiling enabled");
+
+ val1 = read_sysreg_s(PMSFCR_EL1);
+ report(!val1, "PMSFCR_EL1: No Filter Control");
+
+ report(!memcmp(spe.buffer, null_buff, PAGE_SIZE),
+ "Profiling buffer empty before profiling");
+
+ val1 = read_sysreg_s(PMBSR_EL1);
+ report(!val1, "PMBSR_EL1: Syndrome Register void before profiling");
+
+ mem_access_loop(addr, 1, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
+ drain();
+ val1 = read_sysreg_s(PMSICR_EL1);
+ /*
+ * TODO: the value read in PMSICR_EL1.count currently seems inconsistent with the
+ * programmed interval. A reliable value would allow us to estimate the number
+ * of records to be collected in the next step.
+ */
+ report_info("count for a single iteration: PMSICR_EL1.count=%lld interval=%d",
+ val1 & GENMASK_ULL(31, 0), spe.min_interval);
+
+ /* Stuff to profile */
+
+ mem_access_loop(addr, 1000000, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
+
+ /* end of stuff to profile */
+
+ drain();
+
+ report(memcmp(spe.buffer, null_buff, PAGE_SIZE), "Profiling buffer filled");
+
+ val1 = read_sysreg_s(PMBPTR_EL1);
+ val2 = val1 - (uint64_t)spe.buffer;
+ report(val1 > (uint64_t)spe.buffer,
+ "PMBPTR_EL1: Current write position has increased: 0x%lx -> 0x%lx (%ld bytes)",
+ (uint64_t)spe.buffer, val1, val2);
+ if (spe.unique_record_size)
+ report_info("This corresponds to %ld record(s) of %d bytes",
+ val2 / spe.maxsize, spe.maxsize);
+ pmbsr_el1 = read_sysreg_s(PMBSR_EL1);
+ report(!pmbsr_el1, "PMBSR_EL1: no event");
+
+ free(addr);
+}
+
#endif
int main(int argc, char *argv[])
@@ -156,6 +313,10 @@ int main(int argc, char *argv[])
report_prefix_push(argv[1]);
test_spe_introspection();
report_prefix_pop();
+ } else if (strcmp(argv[1], "spe-buffer") == 0) {
+ report_prefix_push(argv[1]);
+ test_spe_buffer();
+ report_prefix_pop();
} else {
report_abort("Unknown sub-test '%s'", argv[1]);
}
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
index c070939..bb0e84c 100644
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
@@ -142,6 +142,14 @@ extra_params = -append 'spe-introspection'
accel = kvm
arch = arm64
+[spe-buffer]
+file = spe.flat
+groups = spe
+arch = arm64
+extra_params = -append 'spe-buffer'
+accel = kvm
+arch = arm64
+
# Test GIC emulation
[gicv2-ipi]
file = gic.flat
diff --git a/lib/arm64/asm/barrier.h b/lib/arm64/asm/barrier.h
index 0e1904c..f9ede15 100644
--- a/lib/arm64/asm/barrier.h
+++ b/lib/arm64/asm/barrier.h
@@ -23,5 +23,6 @@
#define smp_mb() dmb(ish)
#define smp_rmb() dmb(ishld)
#define smp_wmb() dmb(ishst)
+#define psb_csync() asm volatile("hint #17" : : : "memory")
#endif /* _ASMARM64_BARRIER_H_ */
--
2.21.3
* [kvm-unit-tests RFC 4/4] spe: Test Profiling Buffer Events
2020-08-31 19:34 [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Eric Auger
` (2 preceding siblings ...)
2020-08-31 19:34 ` [kvm-unit-tests RFC 3/4] spe: Add profiling buffer test Eric Auger
@ 2020-08-31 19:34 ` Eric Auger
2020-09-01 7:34 ` Auger Eric
2020-09-01 9:24 ` [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Alexandru Elisei
4 siblings, 1 reply; 8+ messages in thread
From: Eric Auger @ 2020-08-31 19:34 UTC (permalink / raw)
To: eric.auger.pro, eric.auger, kvm, kvmarm, qemu-devel, drjones,
andrew.murray, sudeep.holla, maz, will, haibo.xu
Set up the infrastructure to check the occurrence of events.
The test checks that the Buffer Full event occurs when no space
is available. The PPI is handled and we check the syndrome register
against the expected event.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
arm/spe.c | 141 +++++++++++++++++++++++++++++++++++++++++++++-
arm/unittests.cfg | 8 +++
2 files changed, 148 insertions(+), 1 deletion(-)
diff --git a/arm/spe.c b/arm/spe.c
index 7996f79..2f5ee35 100644
--- a/arm/spe.c
+++ b/arm/spe.c
@@ -19,6 +19,7 @@
#include "alloc_page.h"
#include <bitops.h>
#include "alloc.h"
+#include <asm/gic.h>
struct spe {
int min_interval;
@@ -36,13 +37,37 @@ struct spe {
bool unique_record_size;
};
+enum spe_event_exception_class {
+ EC_STAGE1_DATA_ABORT = 0x24,
+ EC_STAGE2_DATA_ABORT = 0x25,
+ EC_OTHER = 0,
+};
+
+struct spe_event {
+ enum spe_event_exception_class ec;
+ bool dl; /* data lost */
+ bool ea; /* external abort */
+ bool s; /* service */
+ bool coll; /* collision */
+ union {
+ bool buffer_filled; /* ec = other */
+ } mss;
+};
+
static struct spe spe;
+struct spe_stats {
+ struct spe_event observed;
+ bool unexpected;
+};
+static struct spe_stats spe_stats;
+
#ifdef __arm__
static bool spe_probe(void) { return false; }
static void test_spe_introspection(void) { }
static void test_spe_buffer(void) { }
+static void test_spe_events(void) { }
#else
@@ -95,6 +120,16 @@ static void test_spe_buffer(void) { }
#define PMSCR_EL1_TS 0x20
#define PMSCR_EL1_PCT 0x40
+#define PMBSR_EL1_COLL 0x10000
+#define PMBSR_EL1_S 0x20000
+#define PMBSR_EL1_EA 0x40000
+#define PMBSR_EL1_DL 0x80000
+#define PMBSR_EL1_EC_SHIFT 26
+#define PMBSR_EL1_EC_MASK 0x3F
+#define PMBSR_EL1_MISS_MASK 0xFFFF
+
+#define SPE_PPI 21
+
static int min_interval(uint8_t idr_bits)
{
switch (idr_bits) {
@@ -119,6 +154,44 @@ static int min_interval(uint8_t idr_bits)
}
}
+static int decode_syndrome_register(uint64_t sr, struct spe_event *event, bool verbose)
+{
+ if (!sr)
+ return 0;
+
+ if (sr & PMBSR_EL1_S)
+ event->s = true;
+ if (sr & PMBSR_EL1_COLL)
+ event->coll = true;
+ if (sr & PMBSR_EL1_EA)
+ event->ea = true;
+ if (sr & PMBSR_EL1_DL)
+ event->dl = true;
+ if (verbose)
+ report_info("PMBSR_EL1: Service=%d Collision=%d External Fault=%d DataLost=%d",
+ event->s, event->coll, event->ea, event->dl);
+
+ switch ((sr >> PMBSR_EL1_EC_SHIFT) & PMBSR_EL1_EC_MASK) {
+ case EC_OTHER:
+ event->ec = EC_OTHER;
+ event->mss.buffer_filled = sr & 0x1;
+ if (verbose)
+ report_info("PMBSR_EL1: EC = OTHER buffer filled=%d", event->mss.buffer_filled);
+ break;
+ case EC_STAGE1_DATA_ABORT:
+ event->ec = EC_STAGE1_DATA_ABORT;
+ report_info("PMBSR_EL1: EC = stage 1 data abort");
+ break;
+ case EC_STAGE2_DATA_ABORT:
+ event->ec = EC_STAGE2_DATA_ABORT;
+ report_info("PMBSR_EL1: EC = stage 2 data abort");
+ break;
+ default:
+ return -1;
+ }
+ return 0;
+}
+
static bool spe_probe(void)
{
uint64_t pmbidr_el1, pmsidr_el1;
@@ -224,6 +297,13 @@ static void reset(void)
/* Make sure the syndrome register is void */
write_sysreg_s(0, PMBSR_EL1);
+
+ memset(&spe_stats, 0, sizeof(spe_stats));
+}
+
+static inline bool event_match(struct spe_event *observed, struct spe_event *expected)
+{
+ return !memcmp(observed, expected, sizeof(struct spe_event));
}
static inline void drain(void)
@@ -235,6 +315,7 @@ static inline void drain(void)
static void test_spe_buffer(void)
{
+ struct spe_event observed = {}, expected = {};
uint64_t pmbsr_el1, val1, val2;
void *addr = malloc(10 * PAGE_SIZE);
@@ -290,7 +371,61 @@ static void test_spe_buffer(void)
report_info("This corresponds to %ld record(s) of %d bytes",
val2 / spe.maxsize, spe.maxsize);
pmbsr_el1 = read_sysreg_s(PMBSR_EL1);
- report(!pmbsr_el1, "PMBSR_EL1: no event");
+ report(!(decode_syndrome_register(pmbsr_el1, &observed, true)) &&
+ event_match(&observed, &expected), "PMBSR_EL1: no event");
+
+ free(addr);
+}
+
+static void irq_handler(struct pt_regs *regs)
+{
+ uint32_t irqstat, irqnr;
+
+ irqstat = gic_read_iar();
+ irqnr = gic_iar_irqnr(irqstat);
+
+ if (irqnr == SPE_PPI) {
+ uint64_t pmbsr_el1 = read_sysreg_s(PMBSR_EL1);
+
+ if (decode_syndrome_register(pmbsr_el1, &spe_stats.observed, true))
+ spe_stats.unexpected = true;
+ report_info("SPE IRQ! SR=0x%lx", pmbsr_el1);
+ write_sysreg_s(0, PMBSR_EL1);
+ } else {
+ spe_stats.unexpected = true;
+ }
+ gic_write_eoir(irqstat);
+}
+
+static inline bool has_event_occurred(struct spe_event *expected)
+{
+ return (!spe_stats.unexpected && event_match(&spe_stats.observed, expected));
+}
+
+static void test_spe_events(void)
+{
+ struct spe_event expected = {.ec = EC_OTHER, .mss.buffer_filled = true, .s = true};
+ void *addr = malloc(10 * PAGE_SIZE);
+
+ gic_enable_defaults();
+ install_irq_handler(EL1H_IRQ, irq_handler);
+ local_irq_enable();
+ gic_enable_irq(SPE_PPI);
+
+ reset();
+
+ /* Deliberately set pmblimitr to pmbptr */
+ spe.pmblimitr_el1 = spe.pmbptr_el1;
+
+ mem_access_loop(addr, 100000, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
+ drain();
+ report(has_event_occurred(&expected), "PMBSR_EL1: buffer full event");
+
+ /* redo it once */
+
+ mem_access_loop(addr, 100000, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
+ drain();
+ report(has_event_occurred(&expected), "PMBSR_EL1: buffer full event");
free(addr);
}
@@ -317,6 +452,10 @@ int main(int argc, char *argv[])
report_prefix_push(argv[1]);
test_spe_buffer();
report_prefix_pop();
+ } else if (strcmp(argv[1], "spe-events") == 0) {
+ report_prefix_push(argv[1]);
+ test_spe_events();
+ report_prefix_pop();
} else {
report_abort("Unknown sub-test '%s'", argv[1]);
}
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
index bb0e84c..b2b07be 100644
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
@@ -150,6 +150,14 @@ extra_params = -append 'spe-buffer'
accel = kvm
arch = arm64
+[spe-events]
+file = spe.flat
+groups = spe
+arch = arm64
+extra_params = -append 'spe-events'
+accel = kvm
+arch = arm64
+
# Test GIC emulation
[gicv2-ipi]
file = gic.flat
--
2.21.3
* Re: [kvm-unit-tests RFC 4/4] spe: Test Profiling Buffer Events
2020-08-31 19:34 ` [kvm-unit-tests RFC 4/4] spe: Test Profiling Buffer Events Eric Auger
@ 2020-09-01 7:34 ` Auger Eric
0 siblings, 0 replies; 8+ messages in thread
From: Auger Eric @ 2020-09-01 7:34 UTC (permalink / raw)
To: eric.auger.pro, kvm, kvmarm, qemu-devel, drjones, andrew.murray,
sudeep.holla, maz, will, haibo.xu
Hi,
On 8/31/20 9:34 PM, Eric Auger wrote:
> Setup the infrastructure to check the occurence of events.
> The test checks the Buffer Full event occurs when no space
> is available. The PPI is handled and we check the syndrome register
> against the expected event.
>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> ---
> arm/spe.c | 141 +++++++++++++++++++++++++++++++++++++++++++++-
> arm/unittests.cfg | 8 +++
> 2 files changed, 148 insertions(+), 1 deletion(-)
>
> diff --git a/arm/spe.c b/arm/spe.c
> index 7996f79..2f5ee35 100644
> --- a/arm/spe.c
> +++ b/arm/spe.c
> @@ -19,6 +19,7 @@
> #include "alloc_page.h"
> #include <bitops.h>
> #include "alloc.h"
> +#include <asm/gic.h>
>
> struct spe {
> int min_interval;
> @@ -36,13 +37,37 @@ struct spe {
> bool unique_record_size;
> };
>
> +enum spe_event_exception_class {
> + EC_STAGE1_DATA_ABORT = 0x24,
> + EC_STAGE2_DATA_ABORT = 0x25,
> + EC_OTHER = 0,
> +};
> +
> +struct spe_event {
> + enum spe_event_exception_class ec;
> + bool dl; /* data lost */
> + bool ea; /* external abort */
> + bool s; /* service */
> + bool coll; /* collision */
> + union {
> + bool buffer_filled; /* ec = other */
> + } mss;
> +};
> +
> static struct spe spe;
>
> +struct spe_stats {
> + struct spe_event observed;
> + bool unexpected;
> +};
> +static struct spe_stats spe_stats;
> +
> #ifdef __arm__
>
> static bool spe_probe(void) { return false; }
> static void test_spe_introspection(void) { }
> static void test_spe_buffer(void) { }
> +static void test_spe_events(void) { }
>
> #else
>
> @@ -95,6 +120,16 @@ static void test_spe_buffer(void) { }
> #define PMSCR_EL1_TS 0x20
> #define PMSCR_EL1_PCT 0x40
>
> +#define PMBSR_EL1_COLL 0x10000
> +#define PMBSR_EL1_S 0x20000
> +#define PMBSR_EL1_EA 0x40000
> +#define PMBSR_EL1_DL 0x80000
> +#define PMBSR_EL1_EC_SHIFT 26
> +#define PMBSR_EL1_EC_MASK 0x3F
> +#define PMBSR_EL1_MISS_MASK 0xFFFF
> +
> +#define SPE_PPI 21
> +
> static int min_interval(uint8_t idr_bits)
> {
> switch (idr_bits) {
> @@ -119,6 +154,44 @@ static int min_interval(uint8_t idr_bits)
> }
> }
>
> +static int decode_syndrome_register(uint64_t sr, struct spe_event *event, bool verbose)
> +{
> + if (!sr)
> + return 0;
> +
> + if (sr & PMBSR_EL1_S)
> + event->s = true;
> + if (sr & PMBSR_EL1_COLL)
> + event->coll = true;
> + if (sr & PMBSR_EL1_EA)
> + event->ea = true;
> + if (sr & PMBSR_EL1_DL)
> + event->dl = true;
> + if (verbose)
> + report_info("PMBSR_EL1: Service=%d Collision=%d External Fault=%d DataLost=%d",
> + event->s, event->coll, event->ea, event->dl);
> +
> + switch ((sr >> PMBSR_EL1_EC_SHIFT) & PMBSR_EL1_EC_MASK) {
> + case EC_OTHER:
> + event->ec = EC_OTHER;
> + event->mss.buffer_filled = sr & 0x1;
> + if (verbose)
> + report_info("PMBSR_EL1: EC = OTHER buffer filled=%d", event->mss.buffer_filled);
> + break;
> + case EC_STAGE1_DATA_ABORT:
> + event->ec = EC_STAGE1_DATA_ABORT;
> + report_info("PMBSR_EL1: EC = stage 1 data abort");
> + break;
> + case EC_STAGE2_DATA_ABORT:
> + event->ec = EC_STAGE2_DATA_ABORT;
> + report_info("PMBSR_EL1: EC = stage 2 data abort");
> + break;
> + default:
> + return -1;
> + }
> + return 0;
> +}
> +
> static bool spe_probe(void)
> {
> uint64_t pmbidr_el1, pmsidr_el1;
> @@ -224,6 +297,13 @@ static void reset(void)
>
> /* Make sure the syndrome register is void */
> write_sysreg_s(0, PMBSR_EL1);
> +
> + memset(&spe_stats, 0, sizeof(spe_stats));
> +}
> +
> +inline bool event_match(struct spe_event *observed, struct spe_event *expected)
> +{
> + return !memcmp(observed, expected, sizeof(struct spe_event));
> }
>
> static inline void drain(void)
> @@ -235,6 +315,7 @@ static inline void drain(void)
>
> static void test_spe_buffer(void)
> {
> + struct spe_event observed = {}, expected = {};
> uint64_t pmbsr_el1, val1, val2;
> void *addr = malloc(10 * PAGE_SIZE);
>
> @@ -290,7 +371,61 @@ static void test_spe_buffer(void)
> report_info("This corresponds to %ld record(s) of %d bytes",
> val2 / spe.maxsize, spe.maxsize);
> pmbsr_el1 = read_sysreg_s(PMBSR_EL1);
> - report(!pmbsr_el1, "PMBSR_EL1: no event");
> + report(!(decode_syndrome_register(pmbsr_el1, &observed, true)) &&
> + event_match(&observed, &expected), "PMBSR_EL1: no event");
> +
> + free(addr);
> +}
> +
> +static void irq_handler(struct pt_regs *regs)
> +{
> + uint32_t irqstat, irqnr;
> +
> + irqstat = gic_read_iar();
> + irqnr = gic_iar_irqnr(irqstat);
> +
> + if (irqnr == SPE_PPI) {
> + uint64_t pmbsr_el1 = read_sysreg_s(PMBSR_EL1);
> +
> + if (decode_syndrome_register(pmbsr_el1, &spe_stats.observed, true))
> + spe_stats.unexpected = true;
> + report_info("SPE IRQ! SR=0x%lx", pmbsr_el1);
> + write_sysreg_s(0, PMBSR_EL1);
> + } else {
> + spe_stats.unexpected = true;
> + }
> + gic_write_eoir(irqstat);
> +}
> +
> +static inline bool has_event_occurred(struct spe_event *expected)
> +{
> + return (!spe_stats.unexpected && event_match(&spe_stats.observed, expected));
> +}
> +
> +static void test_spe_events(void)
> +{
> + struct spe_event expected = {.ec = EC_OTHER, .mss.buffer_filled = true, .s = true};
> + void *addr = malloc(10 * PAGE_SIZE);
> +
> + gic_enable_defaults();
> + install_irq_handler(EL1H_IRQ, irq_handler);
> + local_irq_enable();
> + gic_enable_irq(SPE_PPI);
> +
> + reset();
> +
> + /* Willingly set pmblimitr tp pmdptr */
> + spe.pmblimitr_el1 = spe.pmbptr_el1;
> +
> + mem_access_loop(addr, 100000, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
> + drain();
> + report(has_event_occurred(&expected), "PMBSR_EL1: buffer full event");
> +
> + /* redo it once */
I noticed I must reset the stats here. I will fix that in the next version.
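A minimal sketch of that fix, assuming that clearing spe_stats between
the two runs is all that is needed:

	/* clear the previously observed event before the second run */
	memset(&spe_stats, 0, sizeof(spe_stats));

	mem_access_loop(addr, 100000, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
	drain();
	report(has_event_occurred(&expected), "PMBSR_EL1: buffer full event");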
Thanks
Eric
> +
> + mem_access_loop(addr, 100000, spe.pmblimitr_el1 | PMBLIMITR_EL1_E);
> + drain();
> + report(has_event_occurred(&expected), "PMBSR_EL1: buffer full event");
>
> free(addr);
> }
> @@ -317,6 +452,10 @@ int main(int argc, char *argv[])
> report_prefix_push(argv[1]);
> test_spe_buffer();
> report_prefix_pop();
> + } else if (strcmp(argv[1], "spe-events") == 0) {
> + report_prefix_push(argv[1]);
> + test_spe_events();
> + report_prefix_pop();
> } else {
> report_abort("Unknown sub-test '%s'", argv[1]);
> }
> diff --git a/arm/unittests.cfg b/arm/unittests.cfg
> index bb0e84c..b2b07be 100644
> --- a/arm/unittests.cfg
> +++ b/arm/unittests.cfg
> @@ -150,6 +150,14 @@ extra_params = -append 'spe-buffer'
> accel = kvm
> arch = arm64
>
> +[spe-events]
> +file = spe.flat
> +groups = spe
> +arch = arm64
> +extra_params = -append 'spe-events'
> +accel = kvm
> +arch = arm64
> +
> # Test GIC emulation
> [gicv2-ipi]
> file = gic.flat
>
* Re: [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests
2020-08-31 19:34 [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Eric Auger
` (3 preceding siblings ...)
2020-08-31 19:34 ` [kvm-unit-tests RFC 4/4] spe: Test Profiling Buffer Events Eric Auger
@ 2020-09-01 9:24 ` Alexandru Elisei
2020-09-01 10:49 ` Auger Eric
4 siblings, 1 reply; 8+ messages in thread
From: Alexandru Elisei @ 2020-09-01 9:24 UTC (permalink / raw)
To: Eric Auger, eric.auger.pro, kvm, kvmarm, qemu-devel, drjones,
andrew.murray, sudeep.holla, maz, will, haibo.xu
Hi Eric,
These patches are extremely welcome! I took over the KVM SPE patches from Andrew
Murray, and I was working on something similar to help with development.
The KVM series on the public mailing list works only by chance because it is
impossible to reliably map the SPE buffer at EL2 when profiling triggers a stage 2
data abort. That's because the DABT is reported asynchronously via the buffer
management interrupt and the faulting IPA is not reported anywhere. I'm trying to
fix this issue in the next iteration of the series, and then I'll come back to
your patches for review and testing.
Thanks,
Alex
On 8/31/20 8:34 PM, Eric Auger wrote:
> This series implements tests exercising the Statistical Profiling
> Extensions.
>
> This was tested with associated unmerged kernel [1] and QEMU [2]
> series.
>
> Depending on the comments, I can easily add other tests checking
> more configs, additional events and testing migration too. I hope
> this can be useful when respinning both series.
>
> All SPE tests can be launched with:
> ./run_tests.sh -g spe
> Tests also can be launched individually. For example:
> ./arm-run arm/spe.flat -append 'spe-buffer'
>
> The series can be found at:
> https://github.com/eauger/kut/tree/spe_rfc
>
> References:
> [1] [PATCH v2 00/18] arm64: KVM: add SPE profiling support
> [2] [PATCH 0/7] target/arm: Add vSPE support to KVM guest
>
> Eric Auger (4):
> arm64: Move get_id_aa64dfr0() in processor.h
> spe: Probing and Introspection Test
> spe: Add profiling buffer test
> spe: Test Profiling Buffer Events
>
> arm/Makefile.common | 1 +
> arm/pmu.c | 1 -
> arm/spe.c | 463 ++++++++++++++++++++++++++++++++++++++
> arm/unittests.cfg | 24 ++
> lib/arm64/asm/barrier.h | 1 +
> lib/arm64/asm/processor.h | 5 +
> 6 files changed, 494 insertions(+), 1 deletion(-)
> create mode 100644 arm/spe.c
>
* Re: [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests
2020-09-01 9:24 ` [kvm-unit-tests RFC 0/4] KVM: arm64: Statistical Profiling Extension Tests Alexandru Elisei
@ 2020-09-01 10:49 ` Auger Eric
0 siblings, 0 replies; 8+ messages in thread
From: Auger Eric @ 2020-09-01 10:49 UTC (permalink / raw)
To: Alexandru Elisei, eric.auger.pro, kvm, kvmarm, qemu-devel,
drjones, andrew.murray, sudeep.holla, maz, will, haibo.xu
Hi Alexandru,
On 9/1/20 11:24 AM, Alexandru Elisei wrote:
> Hi Eric,
>
> These patches are extremely welcome! I took over the KVM SPE patches from Andrew
> Murray, and I was working on something similar to help with development.
Cool.
>
> The KVM series on the public mailing list work only by chance because it is
> impossible to reliably map the SPE buffer at EL2 when profiling triggers a stage 2
> data abort. That's because the DABT is reported asynchronously via the buffer
> management interrupt and the faulting IPA is not reported anywhere. I'm trying to
> fix this issue in the next iteration of the series, and then I'll come back to
> your patches for review and testing.
Sure. Looking forward to reviewing your respin.
Thanks
Eric
>
> Thanks,
>
> Alex
>
> On 8/31/20 8:34 PM, Eric Auger wrote:
>> This series implements tests exercising the Statistical Profiling
>> Extensions.
>>
>> This was tested with associated unmerged kernel [1] and QEMU [2]
>> series.
>>
>> Depending on the comments, I can easily add other tests checking
>> more configs, additional events and testing migration too. I hope
>> this can be useful when respinning both series.
>>
>> All SPE tests can be launched with:
>> ./run_tests.sh -g spe
>> Tests also can be launched individually. For example:
>> ./arm-run arm/spe.flat -append 'spe-buffer'
>>
>> The series can be found at:
>> https://github.com/eauger/kut/tree/spe_rfc
>>
>> References:
>> [1] [PATCH v2 00/18] arm64: KVM: add SPE profiling support
>> [2] [PATCH 0/7] target/arm: Add vSPE support to KVM guest
>>
>> Eric Auger (4):
>> arm64: Move get_id_aa64dfr0() in processor.h
>> spe: Probing and Introspection Test
>> spe: Add profiling buffer test
>> spe: Test Profiling Buffer Events
>>
>> arm/Makefile.common | 1 +
>> arm/pmu.c | 1 -
>> arm/spe.c | 463 ++++++++++++++++++++++++++++++++++++++
>> arm/unittests.cfg | 24 ++
>> lib/arm64/asm/barrier.h | 1 +
>> lib/arm64/asm/processor.h | 5 +
>> 6 files changed, 494 insertions(+), 1 deletion(-)
>> create mode 100644 arm/spe.c
>>
>