* [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress
@ 2024-09-11 20:41 Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 01/13] KVM: Move KVM_REG_SIZE() definition to common uAPI header Sean Christopherson
` (13 more replies)
0 siblings, 14 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Marc/Oliver,
I would love a sanity check on patches 2 and 3 before I file a bug against
gcc. The code is pretty darn simple, so I don't think I've misdiagnosed the
problem, but I've also been second guessing myself _because_ it's so simple;
it seems super unlikely that no one else would have run into this before.
On to the patches...
The main purpose of this series is to convert the max_guest_memory_test into
a more generic mmu_stress_test. The patches were originally posted as part of
a KVM x86/mmu series to test the x86/mmu changes, hence the v2.
The basic gist of the "conversion" is to have the test do mprotect() on
guest memory while vCPUs are accessing said memory, e.g. to verify KVM and
mmu_notifiers are working as intended.
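As a rough illustration of the host-side pattern, here is a minimal, standalone
sketch (not the actual test): the toucher thread below only reads, so it
survives the PROT_READ window, whereas in the real test the accessors are vCPUs
whose writes fault into KVM and exercise the mmu_notifier path.  A 4KiB page
size is assumed.

  #include <pthread.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define NR_PAGES	256
  #define STRIDE	4096	/* assume 4KiB pages */

  static volatile int stop;

  static void *toucher(void *mem)
  {
  	volatile uint64_t sum = 0;
  	int i;

  	/* Keep accessing the region while the main thread flips protections. */
  	while (!stop)
  		for (i = 0; i < NR_PAGES; i++)
  			sum += *(volatile uint8_t *)((char *)mem + i * STRIDE);
  	return NULL;
  }

  int main(void)
  {
  	void *mem = mmap(NULL, NR_PAGES * STRIDE, PROT_READ | PROT_WRITE,
  			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  	pthread_t thread;

  	pthread_create(&thread, NULL, toucher, mem);
  	usleep(1000);

  	/* Downgrade to read-only while the toucher is mid-loop... */
  	mprotect(mem, NR_PAGES * STRIDE, PROT_READ);
  	usleep(1000);

  	/* ...and then restore write access, as the test's later phases do. */
  	mprotect(mem, NR_PAGES * STRIDE, PROT_READ | PROT_WRITE);

  	stop = 1;
  	pthread_join(thread, NULL);
  	munmap(mem, NR_PAGES * STRIDE);
  	return 0;
  }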
Patches 1-4 are a somewhat unexpected side quest that I can (arguably should)
post separately if that would make things easier. The original plan was that
patch 2 would be a single patch, but things snowballed.
Patch 2 reworks vcpu_get_reg() to return a value instead of using an
out-param. This is the entire motivation for including these patches;
having to define a variable just to bump the program counter on arm64
annoyed me.
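E.g. the annoyance in question, sketched with the selftest helpers (the +4 PC
bump is just illustrative):

  /* Before: an out-param forces a throwaway local just to bump the PC. */
  uint64_t pc;

  vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc), &pc);
  vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), pc + 4);

  /* After: the value can feed straight into vcpu_set_reg(). */
  vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc),
	       vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc)) + 4);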
Patch 4 adds hardening to vcpu_{g,s}et_reg() to detect potential truncation,
as KVM's uAPI allows for registers larger than the 64 bits that are supported
in the "outer" selftests APIs (vcpu_set_reg() takes a u64, and vcpu_get_reg()
now returns a u64).
Patch 1 is a change to KVM's uAPI headers to move the KVM_REG_SIZE
definition to common code so that the selftests side of things doesn't
need #ifdefs to implement the hardening in patch 4.
Patch 3 is the truly unexpected part. With the vcpu_get_reg() rework,
arm64's vpmu_counter_test fails when compiled with gcc-13, and on gcc-11
with an added "noinline". AFAICT, the failure doesn't actually have
anything to do with vcpu_get_reg(); I suspect the largely unrelated change
just happened to run afoul of a latent gcc bug.
Pending a sanity check, I will file a gcc bug. In the meantime, I am
hoping to fudge around the issue in KVM selftests so that the vcpu_get_reg()
cleanup isn't blocked, and because the hack-a-fix is arguably a cleanup
on its own.
v2:
- Rebase onto kvm/next.
- Add the aforementioned vcpu_get_reg() changes/disaster.
- Actually add arm64 support for the fancy mprotect() testcase (I did this
before v1, but managed to forget to include the changes when posting).
- Emit "mov %rax, (%rax)" on x86. [James]
- Add a comment to explain the fancy mprotect() vs. vCPUs logic.
- Drop the KVM x86 patches (applied and/or will be handled separately).
v1: https://lore.kernel.org/all/20240809194335.1726916-1-seanjc@google.com
Sean Christopherson (13):
KVM: Move KVM_REG_SIZE() definition to common uAPI header
KVM: selftests: Return a value from vcpu_get_reg() instead of using an
out-param
KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test
KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate
KVM: selftests: Check for a potential unhandled exception iff KVM_RUN
succeeded
KVM: selftests: Rename max_guest_memory_test to mmu_stress_test
KVM: selftests: Only muck with SREGS on x86 in mmu_stress_test
KVM: selftests: Compute number of extra pages needed in
mmu_stress_test
KVM: selftests: Enable mmu_stress_test on arm64
KVM: selftests: Use vcpu_arch_put_guest() in mmu_stress_test
KVM: selftests: Precisely limit the number of guest loops in
mmu_stress_test
KVM: selftests: Add a read-only mprotect() phase to mmu_stress_test
KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
arch/arm64/include/uapi/asm/kvm.h | 3 -
arch/riscv/include/uapi/asm/kvm.h | 3 -
include/uapi/linux/kvm.h | 4 +
tools/testing/selftests/kvm/Makefile | 3 +-
.../selftests/kvm/aarch64/aarch32_id_regs.c | 10 +-
.../selftests/kvm/aarch64/debug-exceptions.c | 4 +-
.../selftests/kvm/aarch64/hypercalls.c | 6 +-
.../testing/selftests/kvm/aarch64/psci_test.c | 6 +-
.../selftests/kvm/aarch64/set_id_regs.c | 18 +-
.../kvm/aarch64/vpmu_counter_access.c | 27 ++-
.../testing/selftests/kvm/include/kvm_util.h | 10 +-
.../selftests/kvm/lib/aarch64/processor.c | 8 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 3 +-
.../selftests/kvm/lib/riscv/processor.c | 66 +++----
..._guest_memory_test.c => mmu_stress_test.c} | 161 ++++++++++++++++--
.../testing/selftests/kvm/riscv/arch_timer.c | 2 +-
.../testing/selftests/kvm/riscv/ebreak_test.c | 2 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
tools/testing/selftests/kvm/s390x/resets.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 3 +-
20 files changed, 236 insertions(+), 107 deletions(-)
rename tools/testing/selftests/kvm/{max_guest_memory_test.c => mmu_stress_test.c} (60%)
base-commit: 15e1c3d65975524c5c792fcd59f7d89f00402261
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 01/13] KVM: Move KVM_REG_SIZE() definition to common uAPI header
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 02/13] KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-param Sean Christopherson
` (12 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Define KVM_REG_SIZE() in the common kvm.h header, and delete the arm64 and
RISC-V versions. As evidenced by the surrounding definitions, all aspects
of the register size encoding are generic, i.e. RISC-V should have moved
arm64's definition to common code instead of copy+pasting.
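For reference, the decoding the macro performs, as a standalone illustration
using only the existing uAPI size constants (the 0x1234 index is made up):

  #include <stdint.h>
  #include <stdio.h>
  #include <linux/kvm.h>

  int main(void)
  {
  	/* The size is encoded in bits 55:52 of the register ID as log2(bytes). */
  	uint64_t id = KVM_REG_SIZE_U64 | 0x1234;

  	/* Same computation as KVM_REG_SIZE(id); prints "8 bytes". */
  	printf("%lu bytes\n",
  	       1UL << ((id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT));
  	return 0;
  }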
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/arm64/include/uapi/asm/kvm.h | 3 ---
arch/riscv/include/uapi/asm/kvm.h | 3 ---
include/uapi/linux/kvm.h | 4 ++++
3 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 964df31da975..80b26134e59e 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -43,9 +43,6 @@
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#define KVM_DIRTY_LOG_PAGE_OFFSET 64
-#define KVM_REG_SIZE(id) \
- (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
-
struct kvm_regs {
struct user_pt_regs regs; /* sp = sp_el0 */
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index e97db3296456..4f8d0c04a47b 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -207,9 +207,6 @@ struct kvm_riscv_sbi_sta {
#define KVM_RISCV_TIMER_STATE_OFF 0
#define KVM_RISCV_TIMER_STATE_ON 1
-#define KVM_REG_SIZE(id) \
- (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
-
/* If you need to interpret the index values, here is the key: */
#define KVM_REG_RISCV_TYPE_MASK 0x00000000FF000000
#define KVM_REG_RISCV_TYPE_SHIFT 24
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..9deeb13e3e01 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1070,6 +1070,10 @@ struct kvm_dirty_tlb {
#define KVM_REG_SIZE_SHIFT 52
#define KVM_REG_SIZE_MASK 0x00f0000000000000ULL
+
+#define KVM_REG_SIZE(id) \
+ (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
+
#define KVM_REG_SIZE_U8 0x0000000000000000ULL
#define KVM_REG_SIZE_U16 0x0010000000000000ULL
#define KVM_REG_SIZE_U32 0x0020000000000000ULL
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 02/13] KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-param
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 01/13] KVM: Move KVM_REG_SIZE() definition to common uAPI header Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-12 9:11 ` Andrew Jones
2024-09-11 20:41 ` [PATCH v2 03/13] KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test Sean Christopherson
` (11 subsequent siblings)
13 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Return a uint64_t from vcpu_get_reg() instead of having the caller provide
a pointer to storage, as none of the KVM_GET_ONE_REG usage in KVM selftests
accesses a register larger than 64 bits, and vcpu_set_reg() only accepts a
64-bit value. If a use case comes along that needs to get a register that
is larger than 64 bits, then a utility can be added to assert success and
take a void pointer, but until then, forcing an out param yields ugly code
and prevents feeding the output of vcpu_get_reg() into vcpu_set_reg().
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../selftests/kvm/aarch64/aarch32_id_regs.c | 10 +--
.../selftests/kvm/aarch64/debug-exceptions.c | 4 +-
.../selftests/kvm/aarch64/hypercalls.c | 6 +-
.../testing/selftests/kvm/aarch64/psci_test.c | 6 +-
.../selftests/kvm/aarch64/set_id_regs.c | 18 ++---
.../kvm/aarch64/vpmu_counter_access.c | 19 +++---
.../testing/selftests/kvm/include/kvm_util.h | 6 +-
.../selftests/kvm/lib/aarch64/processor.c | 8 +--
.../selftests/kvm/lib/riscv/processor.c | 66 +++++++++----------
.../testing/selftests/kvm/riscv/arch_timer.c | 2 +-
.../testing/selftests/kvm/riscv/ebreak_test.c | 2 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
tools/testing/selftests/kvm/s390x/resets.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 3 +-
14 files changed, 77 insertions(+), 77 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c b/tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c
index 8e5bd07a3727..447d61cae4db 100644
--- a/tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c
+++ b/tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c
@@ -97,7 +97,7 @@ static void test_user_raz_wi(struct kvm_vcpu *vcpu)
uint64_t reg_id = raz_wi_reg_ids[i];
uint64_t val;
- vcpu_get_reg(vcpu, reg_id, &val);
+ val = vcpu_get_reg(vcpu, reg_id);
TEST_ASSERT_EQ(val, 0);
/*
@@ -106,7 +106,7 @@ static void test_user_raz_wi(struct kvm_vcpu *vcpu)
*/
vcpu_set_reg(vcpu, reg_id, BAD_ID_REG_VAL);
- vcpu_get_reg(vcpu, reg_id, &val);
+ val = vcpu_get_reg(vcpu, reg_id);
TEST_ASSERT_EQ(val, 0);
}
}
@@ -126,14 +126,14 @@ static void test_user_raz_invariant(struct kvm_vcpu *vcpu)
uint64_t reg_id = raz_invariant_reg_ids[i];
uint64_t val;
- vcpu_get_reg(vcpu, reg_id, &val);
+ val = vcpu_get_reg(vcpu, reg_id);
TEST_ASSERT_EQ(val, 0);
r = __vcpu_set_reg(vcpu, reg_id, BAD_ID_REG_VAL);
TEST_ASSERT(r < 0 && errno == EINVAL,
"unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
- vcpu_get_reg(vcpu, reg_id, &val);
+ val = vcpu_get_reg(vcpu, reg_id);
TEST_ASSERT_EQ(val, 0);
}
}
@@ -144,7 +144,7 @@ static bool vcpu_aarch64_only(struct kvm_vcpu *vcpu)
{
uint64_t val, el0;
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
+ val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
return el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY;
diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 2582c49e525a..b3f3025d2f02 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -501,7 +501,7 @@ void test_single_step_from_userspace(int test_cnt)
TEST_ASSERT(ss_enable, "Unexpected KVM_EXIT_DEBUG");
/* Check if the current pc is expected. */
- vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc), &pc);
+ pc = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc));
TEST_ASSERT(!test_pc || pc == test_pc,
"Unexpected pc 0x%lx (expected 0x%lx)",
pc, test_pc);
@@ -583,7 +583,7 @@ int main(int argc, char *argv[])
uint64_t aa64dfr0;
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &aa64dfr0);
+ aa64dfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1));
__TEST_REQUIRE(debug_version(aa64dfr0) >= 6,
"Armv8 debug architecture not supported.");
kvm_vm_free(vm);
diff --git a/tools/testing/selftests/kvm/aarch64/hypercalls.c b/tools/testing/selftests/kvm/aarch64/hypercalls.c
index 9d192ce0078d..ec54ec7726e9 100644
--- a/tools/testing/selftests/kvm/aarch64/hypercalls.c
+++ b/tools/testing/selftests/kvm/aarch64/hypercalls.c
@@ -173,7 +173,7 @@ static void test_fw_regs_before_vm_start(struct kvm_vcpu *vcpu)
const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i];
/* First 'read' should be an upper limit of the features supported */
- vcpu_get_reg(vcpu, reg_info->reg, &val);
+ val = vcpu_get_reg(vcpu, reg_info->reg);
TEST_ASSERT(val == FW_REG_ULIMIT_VAL(reg_info->max_feat_bit),
"Expected all the features to be set for reg: 0x%lx; expected: 0x%lx; read: 0x%lx",
reg_info->reg, FW_REG_ULIMIT_VAL(reg_info->max_feat_bit), val);
@@ -184,7 +184,7 @@ static void test_fw_regs_before_vm_start(struct kvm_vcpu *vcpu)
"Failed to clear all the features of reg: 0x%lx; ret: %d",
reg_info->reg, errno);
- vcpu_get_reg(vcpu, reg_info->reg, &val);
+ val = vcpu_get_reg(vcpu, reg_info->reg);
TEST_ASSERT(val == 0,
"Expected all the features to be cleared for reg: 0x%lx", reg_info->reg);
@@ -214,7 +214,7 @@ static void test_fw_regs_after_vm_start(struct kvm_vcpu *vcpu)
* Before starting the VM, the test clears all the bits.
* Check if that's still the case.
*/
- vcpu_get_reg(vcpu, reg_info->reg, &val);
+ val = vcpu_get_reg(vcpu, reg_info->reg);
TEST_ASSERT(val == 0,
"Expected all the features to be cleared for reg: 0x%lx",
reg_info->reg);
diff --git a/tools/testing/selftests/kvm/aarch64/psci_test.c b/tools/testing/selftests/kvm/aarch64/psci_test.c
index 61731a950def..544ebd2b121b 100644
--- a/tools/testing/selftests/kvm/aarch64/psci_test.c
+++ b/tools/testing/selftests/kvm/aarch64/psci_test.c
@@ -102,8 +102,8 @@ static void assert_vcpu_reset(struct kvm_vcpu *vcpu)
{
uint64_t obs_pc, obs_x0;
- vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc), &obs_pc);
- vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.regs[0]), &obs_x0);
+ obs_pc = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc));
+ obs_x0 = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.regs[0]));
TEST_ASSERT(obs_pc == CPU_ON_ENTRY_ADDR,
"unexpected target cpu pc: %lx (expected: %lx)",
@@ -143,7 +143,7 @@ static void host_test_cpu_on(void)
*/
vcpu_power_off(target);
- vcpu_get_reg(target, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), &target_mpidr);
+ target_mpidr = vcpu_get_reg(target, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1));
vcpu_args_set(source, 1, target_mpidr & MPIDR_HWID_BITMASK);
enter_guest(source);
diff --git a/tools/testing/selftests/kvm/aarch64/set_id_regs.c b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
index d20981663831..9ed667e1f445 100644
--- a/tools/testing/selftests/kvm/aarch64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
@@ -335,7 +335,7 @@ static uint64_t test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg,
uint64_t mask = ftr_bits->mask;
uint64_t val, new_val, ftr;
- vcpu_get_reg(vcpu, reg, &val);
+ val = vcpu_get_reg(vcpu, reg);
ftr = (val & mask) >> shift;
ftr = get_safe_value(ftr_bits, ftr);
@@ -345,7 +345,7 @@ static uint64_t test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg,
val |= ftr;
vcpu_set_reg(vcpu, reg, val);
- vcpu_get_reg(vcpu, reg, &new_val);
+ new_val = vcpu_get_reg(vcpu, reg);
TEST_ASSERT_EQ(new_val, val);
return new_val;
@@ -359,7 +359,7 @@ static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg,
uint64_t val, old_val, ftr;
int r;
- vcpu_get_reg(vcpu, reg, &val);
+ val = vcpu_get_reg(vcpu, reg);
ftr = (val & mask) >> shift;
ftr = get_invalid_value(ftr_bits, ftr);
@@ -373,7 +373,7 @@ static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg,
TEST_ASSERT(r < 0 && errno == EINVAL,
"Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno);
- vcpu_get_reg(vcpu, reg, &val);
+ val = vcpu_get_reg(vcpu, reg);
TEST_ASSERT_EQ(val, old_val);
}
@@ -470,7 +470,7 @@ static void test_clidr(struct kvm_vcpu *vcpu)
uint64_t clidr;
int level;
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_CLIDR_EL1), &clidr);
+ clidr = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_CLIDR_EL1));
/* find the first empty level in the cache hierarchy */
for (level = 1; level < 7; level++) {
@@ -495,7 +495,7 @@ static void test_ctr(struct kvm_vcpu *vcpu)
{
u64 ctr;
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_CTR_EL0), &ctr);
+ ctr = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_CTR_EL0));
ctr &= ~CTR_EL0_DIC_MASK;
if (ctr & CTR_EL0_IminLine_MASK)
ctr--;
@@ -511,7 +511,7 @@ static void test_vcpu_ftr_id_regs(struct kvm_vcpu *vcpu)
test_clidr(vcpu);
test_ctr(vcpu);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), &val);
+ val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1));
val++;
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), val);
@@ -524,7 +524,7 @@ static void test_assert_id_reg_unchanged(struct kvm_vcpu *vcpu, uint32_t encodin
size_t idx = encoding_to_range_idx(encoding);
uint64_t observed;
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(encoding), &observed);
+ observed = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(encoding));
TEST_ASSERT_EQ(test_reg_vals[idx], observed);
}
@@ -559,7 +559,7 @@ int main(void)
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
/* Check for AARCH64 only system */
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
+ val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
aarch64_only = (el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY);
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index d31b9f64ba14..30d9c9e7ae35 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -440,8 +440,7 @@ static void create_vpmu_vm(void *guest_code)
"Failed to create vgic-v3, skipping");
/* Make sure that PMUv3 support is indicated in the ID register */
- vcpu_get_reg(vpmu_vm.vcpu,
- KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+ dfr0 = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1));
pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
@@ -484,7 +483,7 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
create_vpmu_vm(guest_code);
vcpu = vpmu_vm.vcpu;
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+ pmcr_orig = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
pmcr = pmcr_orig;
/*
@@ -493,7 +492,7 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
*/
set_pmcr_n(&pmcr, pmcr_n);
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+ pmcr = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
if (expect_fail)
TEST_ASSERT(pmcr_orig == pmcr,
@@ -521,7 +520,7 @@ static void run_access_test(uint64_t pmcr_n)
vcpu = vpmu_vm.vcpu;
/* Save the initial sp to restore them later to run the guest again */
- vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+ sp = vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1));
run_vcpu(vcpu, pmcr_n);
@@ -572,12 +571,12 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
* Test if the 'set' and 'clr' variants of the registers
* are initialized based on the number of valid counters.
*/
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val);
+ reg_val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id));
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Initial read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx",
KVM_ARM64_SYS_REG(set_reg_id), reg_val);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val);
+ reg_val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id));
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Initial read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx",
KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
@@ -589,12 +588,12 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
*/
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), max_counters_mask);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val);
+ reg_val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id));
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx",
KVM_ARM64_SYS_REG(set_reg_id), reg_val);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val);
+ reg_val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id));
TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
"Read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx",
KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
@@ -625,7 +624,7 @@ static uint64_t get_pmcr_n_limit(void)
uint64_t pmcr;
create_vpmu_vm(guest_code);
- vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+ pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
destroy_vpmu_vm();
return get_pmcr_n(pmcr);
}
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 63c2aaae51f3..429a7f003fe3 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -678,11 +678,13 @@ static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t va
return __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
}
-static inline void vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
+static inline uint64_t vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id)
{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
+ uint64_t val;
+ struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
+ return val;
}
static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
{
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 0ac7cc89f38c..d068afee3327 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -281,8 +281,8 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
*/
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_CPACR_EL1), 3 << 20);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_SCTLR_EL1), &sctlr_el1);
- vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_TCR_EL1), &tcr_el1);
+ sctlr_el1 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_SCTLR_EL1));
+ tcr_el1 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_TCR_EL1));
/* Configure base granule size */
switch (vm->mode) {
@@ -360,8 +360,8 @@ void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
{
uint64_t pstate, pc;
- vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pstate), &pstate);
- vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc), &pc);
+ pstate = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pstate));
+ pc = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc));
fprintf(stream, "%*spstate: 0x%.16lx pc: 0x%.16lx\n",
indent, "", pstate, pc);
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 6ae47b3d6b25..dd663bcf0cc0 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -221,39 +221,39 @@ void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
{
struct kvm_riscv_core core;
- vcpu_get_reg(vcpu, RISCV_CORE_REG(mode), &core.mode);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.pc), &core.regs.pc);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.ra), &core.regs.ra);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.sp), &core.regs.sp);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.gp), &core.regs.gp);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.tp), &core.regs.tp);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t0), &core.regs.t0);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t1), &core.regs.t1);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t2), &core.regs.t2);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s0), &core.regs.s0);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s1), &core.regs.s1);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a0), &core.regs.a0);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a1), &core.regs.a1);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a2), &core.regs.a2);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a3), &core.regs.a3);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a4), &core.regs.a4);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a5), &core.regs.a5);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a6), &core.regs.a6);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a7), &core.regs.a7);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s2), &core.regs.s2);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s3), &core.regs.s3);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s4), &core.regs.s4);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s5), &core.regs.s5);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s6), &core.regs.s6);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s7), &core.regs.s7);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s8), &core.regs.s8);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s9), &core.regs.s9);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s10), &core.regs.s10);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s11), &core.regs.s11);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t3), &core.regs.t3);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t4), &core.regs.t4);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t5), &core.regs.t5);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t6), &core.regs.t6);
+ core.mode = vcpu_get_reg(vcpu, RISCV_CORE_REG(mode));
+ core.regs.pc = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.pc));
+ core.regs.ra = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.ra));
+ core.regs.sp = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.sp));
+ core.regs.gp = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.gp));
+ core.regs.tp = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.tp));
+ core.regs.t0 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t0));
+ core.regs.t1 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t1));
+ core.regs.t2 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t2));
+ core.regs.s0 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s0));
+ core.regs.s1 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s1));
+ core.regs.a0 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a0));
+ core.regs.a1 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a1));
+ core.regs.a2 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a2));
+ core.regs.a3 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a3));
+ core.regs.a4 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a4));
+ core.regs.a5 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a5));
+ core.regs.a6 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a6));
+ core.regs.a7 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.a7));
+ core.regs.s2 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s2));
+ core.regs.s3 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s3));
+ core.regs.s4 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s4));
+ core.regs.s5 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s5));
+ core.regs.s6 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s6));
+ core.regs.s7 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s7));
+ core.regs.s8 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s8));
+ core.regs.s9 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s9));
+ core.regs.s10 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s10));
+ core.regs.s11 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.s11));
+ core.regs.t3 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t3));
+ core.regs.t4 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t4));
+ core.regs.t5 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t5));
+ core.regs.t6 = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.t6));
fprintf(stream,
" MODE: 0x%lx\n", core.mode);
diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c
index 2c792228ac0b..9e370800a6a2 100644
--- a/tools/testing/selftests/kvm/riscv/arch_timer.c
+++ b/tools/testing/selftests/kvm/riscv/arch_timer.c
@@ -93,7 +93,7 @@ struct kvm_vm *test_vm_create(void)
vcpu_init_vector_tables(vcpus[i]);
/* Initialize guest timer frequency. */
- vcpu_get_reg(vcpus[0], RISCV_TIMER_REG(frequency), &timer_freq);
+ timer_freq = vcpu_get_reg(vcpus[0], RISCV_TIMER_REG(frequency));
sync_global_to_guest(vm, timer_freq);
pr_debug("timer_freq: %lu\n", timer_freq);
diff --git a/tools/testing/selftests/kvm/riscv/ebreak_test.c b/tools/testing/selftests/kvm/riscv/ebreak_test.c
index 0e0712854953..cfed6c727bfc 100644
--- a/tools/testing/selftests/kvm/riscv/ebreak_test.c
+++ b/tools/testing/selftests/kvm/riscv/ebreak_test.c
@@ -60,7 +60,7 @@ int main(void)
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_DEBUG);
- vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.pc), &pc);
+ pc = vcpu_get_reg(vcpu, RISCV_CORE_REG(regs.pc));
TEST_ASSERT_EQ(pc, LABEL_ADDRESS(sw_bp_1));
/* skip sw_bp_1 */
diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index f299cbfd23ca..f45c0ecc902d 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -608,7 +608,7 @@ static void test_vm_events_overflow(void *guest_code)
vcpu_init_vector_tables(vcpu);
/* Initialize guest timer frequency. */
- vcpu_get_reg(vcpu, RISCV_TIMER_REG(frequency), &timer_freq);
+ timer_freq = vcpu_get_reg(vcpu, RISCV_TIMER_REG(frequency));
sync_global_to_guest(vm, timer_freq);
run_vcpu(vcpu);
diff --git a/tools/testing/selftests/kvm/s390x/resets.c b/tools/testing/selftests/kvm/s390x/resets.c
index 357943f2bea8..b58f75b381e5 100644
--- a/tools/testing/selftests/kvm/s390x/resets.c
+++ b/tools/testing/selftests/kvm/s390x/resets.c
@@ -61,7 +61,7 @@ static void test_one_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t value)
{
uint64_t eval_reg;
- vcpu_get_reg(vcpu, id, &eval_reg);
+ eval_reg = vcpu_get_reg(vcpu, id);
TEST_ASSERT(eval_reg == value, "value == 0x%lx", value);
}
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index a8d3afa0b86b..cce2520af720 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -269,9 +269,8 @@ static void guest_code(int cpu)
static bool is_steal_time_supported(struct kvm_vcpu *vcpu)
{
uint64_t id = RISCV_SBI_EXT_REG(KVM_RISCV_SBI_EXT_STA);
- unsigned long enabled;
+ unsigned long enabled = vcpu_get_reg(vcpu, id);
- vcpu_get_reg(vcpu, id, &enabled);
TEST_ASSERT(enabled == 0 || enabled == 1, "Expected boolean result");
return enabled;
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 03/13] KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 01/13] KVM: Move KVM_REG_SIZE() definition to common uAPI header Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 02/13] KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-param Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-30 21:56 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 04/13] KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate Sean Christopherson
` (10 subsequent siblings)
13 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Use u64_replace_bits() instead of u64p_replace_bits() to set PMCR.N in
arm64's vPMU counter access test to fudge around what appears to be a gcc
bug. With the recent change to have vcpu_get_reg() return a value in lieu
of an out-param, some versions of gcc completely ignore the operation
performed by set_pmcr_n(), i.e. ignore the output param.
The issue is most easily observed by making set_pmcr_n() noinline and
wrapping the call with printf(), e.g. sans comments, for this code:
printf("orig = %lx, next = %lx, want = %lu\n", pmcr_orig, pmcr, pmcr_n);
set_pmcr_n(&pmcr, pmcr_n);
printf("orig = %lx, next = %lx, want = %lu\n", pmcr_orig, pmcr, pmcr_n);
gcc-13 generates:
0000000000401c90 <set_pmcr_n>:
401c90: f9400002 ldr x2, [x0]
401c94: b3751022 bfi x2, x1, #11, #5
401c98: f9000002 str x2, [x0]
401c9c: d65f03c0 ret
0000000000402660 <test_create_vpmu_vm_with_pmcr_n>:
402724: aa1403e3 mov x3, x20
402728: aa1503e2 mov x2, x21
40272c: aa1603e0 mov x0, x22
402730: aa1503e1 mov x1, x21
402734: 940060ff bl 41ab30 <_IO_printf>
402738: aa1403e1 mov x1, x20
40273c: 910183e0 add x0, sp, #0x60
402740: 97fffd54 bl 401c90 <set_pmcr_n>
402744: aa1403e3 mov x3, x20
402748: aa1503e2 mov x2, x21
40274c: aa1503e1 mov x1, x21
402750: aa1603e0 mov x0, x22
402754: 940060f7 bl 41ab30 <_IO_printf>
with the value stored at [sp + 0x60] ignored by both the second printf() above
and the test proper, resulting in a false failure due to vcpu_set_reg()
simply storing the original value, not the intended value.
$ ./vpmu_counter_access
Random seed: 0x6b8b4567
orig = 3040, next = 3040, want = 0
orig = 3040, next = 3040, want = 0
==== Test Assertion Failure ====
aarch64/vpmu_counter_access.c:505: pmcr_n == get_pmcr_n(pmcr)
pid=71578 tid=71578 errno=9 - Bad file descriptor
1 0x400673: run_access_test at vpmu_counter_access.c:522
2 (inlined by) main at vpmu_counter_access.c:643
3 0x4132d7: __libc_start_call_main at libc-start.o:0
4 0x413653: __libc_start_main at ??:0
5 0x40106f: _start at ??:0
Failed to update PMCR.N to 0 (received: 6)
Somewhat bizarrely, gcc-11 also exhibits the same behavior, but only if
set_pmcr_n() is marked noinline, whereas gcc-13 fails even if set_pmcr_n()
is inlined in its sole caller.
All signs point to this being a gcc bug, as clang doesn't exhibit the same
issue, the code generated by u64p_replace_bits() is correct, and the error
is somewhat transient, e.g. varies between gcc versions and depends on
surrounding code.
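For anyone who wants to poke at the codegen, the suspect shape boils down to
roughly the following (a simplified, standalone stand-in, not the actual test;
set_field() plays the role of u64p_replace_bits() on PMCR.N, bits 15:11, and
whether this reproduces anything will depend on compiler version and flags):

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in for set_pmcr_n()/u64p_replace_bits(): update PMCR.N via pointer. */
  static void set_field(uint64_t *reg, uint64_t n)
  {
  	*reg = (*reg & ~(0x1fULL << 11)) | ((n & 0x1f) << 11);
  }

  /* Stand-in for vcpu_get_reg(): returns a value instead of using an out-param. */
  static uint64_t __attribute__((noinline)) get_reg(void)
  {
  	return 0x3040;
  }

  int main(void)
  {
  	uint64_t reg = get_reg();

  	set_field(&reg, 0);
  	printf("reg = %lx\n", reg);	/* expect 0x40 if the update isn't dropped */
  	return 0;
  }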
For now, work around the issue to unblock the vcpu_get_reg() cleanup, and
because arguably using u64_replace_bits() makes the code a wee bit more
intuitive.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 30d9c9e7ae35..74da8252b884 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -45,11 +45,6 @@ static uint64_t get_pmcr_n(uint64_t pmcr)
return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
}
-static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
-{
- u64p_replace_bits((__u64 *) pmcr, pmcr_n, ARMV8_PMU_PMCR_N);
-}
-
static uint64_t get_counters_mask(uint64_t n)
{
uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
@@ -484,13 +479,12 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
vcpu = vpmu_vm.vcpu;
pmcr_orig = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
- pmcr = pmcr_orig;
/*
* Setting a larger value of PMCR.N should not modify the field, and
* return a success.
*/
- set_pmcr_n(&pmcr, pmcr_n);
+ pmcr = u64_replace_bits(pmcr_orig, pmcr_n, ARMV8_PMU_PMCR_N);
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
pmcr = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 04/13] KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (2 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 03/13] KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-12 9:41 ` Andrew Jones
2024-09-11 20:41 ` [PATCH v2 05/13] KVM: selftests: Check for a potential unhandled exception iff KVM_RUN succeeded Sean Christopherson
` (9 subsequent siblings)
13 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Assert that the register being read/written by vcpu_{g,s}et_reg() is
no larger than a uint64_t, i.e. that a selftest isn't unintentionally
truncating the value being read/written.
Ideally, the assert would be done at compile-time, but that would limit
the checks to hardcoded accesses and/or require fancier compile-time
assertion infrastructure to filter out dynamic usage.
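As an example of what the assert catches (hypothetical usage with a hand-built
register ID): arm64's FP/SIMD vector registers are 128 bits wide, so grabbing
one through the 64-bit helper would silently truncate without the check.

  /* KVM_REG_SIZE(id) == 16 > sizeof(uint64_t), so this now asserts. */
  uint64_t v0 = vcpu_get_reg(vcpu, KVM_REG_ARM64 | KVM_REG_SIZE_U128 |
  				 KVM_REG_ARM_CORE |
  				 KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]));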
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/include/kvm_util.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 429a7f003fe3..80230e49e35f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -683,6 +683,8 @@ static inline uint64_t vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id)
uint64_t val;
struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+ TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
+
vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
return val;
}
@@ -690,6 +692,8 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val
{
struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+ TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
+
vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
}
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 05/13] KVM: selftests: Check for a potential unhandled exception iff KVM_RUN succeeded
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (3 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 04/13] KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 06/13] KVM: selftests: Rename max_guest_memory_test to mmu_stress_test Sean Christopherson
` (8 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Don't check for an unhandled exception if KVM_RUN failed, e.g. if it
returned errno=EFAULT, as reporting unhandled exceptions is done via a
ucall, i.e. requires KVM_RUN to exit cleanly. Theoretically, checking
for a ucall on a failed KVM_RUN could get a false positive, e.g. if there
were stale data in vcpu->run from a previous exit.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/lib/kvm_util.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 56b170b725b3..0e25011d9b51 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1719,7 +1719,8 @@ int _vcpu_run(struct kvm_vcpu *vcpu)
rc = __vcpu_run(vcpu);
} while (rc == -1 && errno == EINTR);
- assert_on_unhandled_exception(vcpu);
+ if (!rc)
+ assert_on_unhandled_exception(vcpu);
return rc;
}
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 06/13] KVM: selftests: Rename max_guest_memory_test to mmu_stress_test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (4 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 05/13] KVM: selftests: Check for a potential unhandled exception iff KVM_RUN succeeded Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 07/13] KVM: selftests: Only muck with SREGS on x86 in mmu_stress_test Sean Christopherson
` (7 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Rename max_guest_memory_test to mmu_stress_test so that the name isn't
horribly misleading when future changes extend the test to verify things
like mprotect() interactions, and because the test is useful even when it's
configured to populate far less than the maximum amount of guest memory.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/Makefile | 2 +-
.../kvm/{max_guest_memory_test.c => mmu_stress_test.c} | 0
2 files changed, 1 insertion(+), 1 deletion(-)
rename tools/testing/selftests/kvm/{max_guest_memory_test.c => mmu_stress_test.c} (100%)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 48d32c5aa3eb..93d6e2596b3e 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -138,7 +138,7 @@ TEST_GEN_PROGS_x86_64 += guest_print_test
TEST_GEN_PROGS_x86_64 += hardware_disable_test
TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
TEST_GEN_PROGS_x86_64 += kvm_page_table_test
-TEST_GEN_PROGS_x86_64 += max_guest_memory_test
+TEST_GEN_PROGS_x86_64 += mmu_stress_test
TEST_GEN_PROGS_x86_64 += memslot_modification_stress_test
TEST_GEN_PROGS_x86_64 += memslot_perf_test
TEST_GEN_PROGS_x86_64 += rseq_test
diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
similarity index 100%
rename from tools/testing/selftests/kvm/max_guest_memory_test.c
rename to tools/testing/selftests/kvm/mmu_stress_test.c
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 07/13] KVM: selftests: Only muck with SREGS on x86 in mmu_stress_test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (5 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 06/13] KVM: selftests: Rename max_guest_memory_test to mmu_stress_test Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 08/13] KVM: selftests: Compute number of extra pages needed " Sean Christopherson
` (6 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Try to get/set SREGS in mmu_stress_test only when running on x86, as the
ioctls are supported only by x86 and PPC, and the latter doesn't yet
support KVM selftests.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/mmu_stress_test.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 0b9678858b6d..847da23ec1b1 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -59,10 +59,10 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
static void *vcpu_worker(void *data)
{
+ struct kvm_sregs __maybe_unused sregs;
struct vcpu_info *info = data;
struct kvm_vcpu *vcpu = info->vcpu;
struct kvm_vm *vm = vcpu->vm;
- struct kvm_sregs sregs;
vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
@@ -70,12 +70,12 @@ static void *vcpu_worker(void *data)
run_vcpu(vcpu);
rendezvous_with_boss();
+#ifdef __x86_64__
vcpu_sregs_get(vcpu, &sregs);
-#ifdef __x86_64__
/* Toggle CR0.WP to trigger a MMU context reset. */
sregs.cr0 ^= X86_CR0_WP;
-#endif
vcpu_sregs_set(vcpu, &sregs);
+#endif
rendezvous_with_boss();
run_vcpu(vcpu);
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 08/13] KVM: selftests: Compute number of extra pages needed in mmu_stress_test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (6 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 07/13] KVM: selftests: Only muck with SREGS on x86 in mmu_stress_test Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 09/13] KVM: selftests: Enable mmu_stress_test on arm64 Sean Christopherson
` (5 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Create mmu_stress_test's VM with the correct number of extra pages needed
to map all of memory in the guest. The bug hasn't been noticed before as
the test currently runs only on x86, which maps guest memory with 1GiB
pages, i.e. doesn't need much memory in the guest for page tables.
Reviewed-by: James Houghton <jthoughton@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/mmu_stress_test.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 847da23ec1b1..5467b12f5903 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -209,7 +209,13 @@ int main(int argc, char *argv[])
vcpus = malloc(nr_vcpus * sizeof(*vcpus));
TEST_ASSERT(vcpus, "Failed to allocate vCPU array");
- vm = vm_create_with_vcpus(nr_vcpus, guest_code, vcpus);
+ vm = __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus,
+#ifdef __x86_64__
+ max_mem / SZ_1G,
+#else
+ max_mem / vm_guest_mode_params[VM_MODE_DEFAULT].page_size,
+#endif
+ guest_code, vcpus);
max_gpa = vm->max_gfn << vm->page_shift;
TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb ");
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 09/13] KVM: selftests: Enable mmu_stress_test on arm64
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (7 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 08/13] KVM: selftests: Compute number of extra pages needed " Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 10/13] KVM: selftests: Use vcpu_arch_put_guest() in mmu_stress_test Sean Christopherson
` (4 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Enable the mmu_stress_test on arm64. The intent was to enable the test
across all architectures when it was first added, but a few goofs made it
unrunnable on !x86. Now that those goofs are fixed, at least for arm64,
enable the test.
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/Makefile | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 93d6e2596b3e..5150fad7a8c0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -174,6 +174,7 @@ TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
TEST_GEN_PROGS_aarch64 += kvm_page_table_test
TEST_GEN_PROGS_aarch64 += memslot_modification_stress_test
TEST_GEN_PROGS_aarch64 += memslot_perf_test
+TEST_GEN_PROGS_aarch64 += mmu_stress_test
TEST_GEN_PROGS_aarch64 += rseq_test
TEST_GEN_PROGS_aarch64 += set_memory_region_test
TEST_GEN_PROGS_aarch64 += steal_time
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 10/13] KVM: selftests: Use vcpu_arch_put_guest() in mmu_stress_test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (8 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 09/13] KVM: selftests: Enable mmu_stress_test on arm64 Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 11/13] KVM: selftests: Precisely limit the number of guest loops " Sean Christopherson
` (3 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Use vcpu_arch_put_guest() to write memory from the guest in
mmu_stress_test as an easy way to provide a bit of extra coverage.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/mmu_stress_test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 5467b12f5903..80863e8290db 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -22,7 +22,7 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
for (;;) {
for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
- *((volatile uint64_t *)gpa) = gpa;
+ vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
GUEST_SYNC(0);
}
}
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 11/13] KVM: selftests: Precisely limit the number of guest loops in mmu_stress_test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (9 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 10/13] KVM: selftests: Use vcpu_arch_put_guest() in mmu_stress_test Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 12/13] KVM: selftests: Add a read-only mprotect() phase to mmu_stress_test Sean Christopherson
` (2 subsequent siblings)
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Run the exact number of guest loops required in mmu_stress_test instead
of looping indefinitely in anticipation of adding more stages that run
different code (e.g. reads instead of writes).
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/mmu_stress_test.c | 25 ++++++++++++++-----
1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 80863e8290db..9573ed0e696d 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -19,12 +19,15 @@
static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
{
uint64_t gpa;
+ int i;
- for (;;) {
+ for (i = 0; i < 2; i++) {
for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
- GUEST_SYNC(0);
+ GUEST_SYNC(i);
}
+
+ GUEST_ASSERT(0);
}
struct vcpu_info {
@@ -51,10 +54,18 @@ static void rendezvous_with_boss(void)
}
}
-static void run_vcpu(struct kvm_vcpu *vcpu)
+static void assert_sync_stage(struct kvm_vcpu *vcpu, int stage)
+{
+ struct ucall uc;
+
+ TEST_ASSERT_EQ(get_ucall(vcpu, &uc), UCALL_SYNC);
+ TEST_ASSERT_EQ(uc.args[1], stage);
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, int stage)
{
vcpu_run(vcpu);
- TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);
+ assert_sync_stage(vcpu, stage);
}
static void *vcpu_worker(void *data)
@@ -68,7 +79,8 @@ static void *vcpu_worker(void *data)
rendezvous_with_boss();
- run_vcpu(vcpu);
+ /* Stage 0, write all of guest memory. */
+ run_vcpu(vcpu, 0);
rendezvous_with_boss();
#ifdef __x86_64__
vcpu_sregs_get(vcpu, &sregs);
@@ -78,7 +90,8 @@ static void *vcpu_worker(void *data)
#endif
rendezvous_with_boss();
- run_vcpu(vcpu);
+ /* Stage 1, re-write all of guest memory. */
+ run_vcpu(vcpu, 1);
rendezvous_with_boss();
return NULL;
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 12/13] KVM: selftests: Add a read-only mprotect() phase to mmu_stress_test
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (10 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 11/13] KVM: selftests: Precisely limit the number of guest loops " Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-11 20:41 ` [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ) Sean Christopherson
2024-09-12 11:48 ` [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Andrew Jones
13 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Add a third phase of mmu_stress_test to verify that mprotect()ing guest
memory to make it read-only doesn't cause explosions, e.g. to verify KVM
correctly handles the resulting mmu_notifier invalidations.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/mmu_stress_test.c | 22 +++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 9573ed0e696d..50c3a17418c4 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -27,6 +27,10 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
GUEST_SYNC(i);
}
+ for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+ *((volatile uint64_t *)gpa);
+ GUEST_SYNC(2);
+
GUEST_ASSERT(0);
}
@@ -94,6 +98,10 @@ static void *vcpu_worker(void *data)
run_vcpu(vcpu, 1);
rendezvous_with_boss();
+ /* Stage 2, read all of guest memory, which is now read-only. */
+ run_vcpu(vcpu, 2);
+ rendezvous_with_boss();
+
return NULL;
}
@@ -174,7 +182,7 @@ int main(int argc, char *argv[])
const uint64_t start_gpa = SZ_4G;
const int first_slot = 1;
- struct timespec time_start, time_run1, time_reset, time_run2;
+ struct timespec time_start, time_run1, time_reset, time_run2, time_ro;
uint64_t max_gpa, gpa, slot_size, max_mem, i;
int max_slots, slot, opt, fd;
bool hugepages = false;
@@ -278,14 +286,20 @@ int main(int argc, char *argv[])
rendezvous_with_vcpus(&time_reset, "reset");
rendezvous_with_vcpus(&time_run2, "run 2");
+ mprotect(mem, slot_size, PROT_READ);
+ rendezvous_with_vcpus(&time_ro, "mprotect RO");
+
+ time_ro = timespec_sub(time_ro, time_run2);
time_run2 = timespec_sub(time_run2, time_reset);
- time_reset = timespec_sub(time_reset, time_run1);
+ time_reset = timespec_sub(time_reset, time_run1);
time_run1 = timespec_sub(time_run1, time_start);
- pr_info("run1 = %ld.%.9lds, reset = %ld.%.9lds, run2 = %ld.%.9lds\n",
+ pr_info("run1 = %ld.%.9lds, reset = %ld.%.9lds, run2 = %ld.%.9lds, "
+ "ro = %ld.%.9lds\n",
time_run1.tv_sec, time_run1.tv_nsec,
time_reset.tv_sec, time_reset.tv_nsec,
- time_run2.tv_sec, time_run2.tv_nsec);
+ time_run2.tv_sec, time_run2.tv_nsec,
+ time_ro.tv_sec, time_ro.tv_nsec);
/*
* Delete even numbered slots (arbitrary) and unmap the first half of
--
2.46.0.598.g6f2099f65c-goog
* [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (11 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 12/13] KVM: selftests: Add a read-only mprotect() phase to mmu_stress_test Sean Christopherson
@ 2024-09-11 20:41 ` Sean Christopherson
2024-09-12 0:19 ` James Houghton
2024-09-12 11:48 ` [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Andrew Jones
13 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2024-09-11 20:41 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda
Cc: linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, Sean Christopherson, James Houghton
Add two phases to mmu_stress_test to verify that KVM correctly handles
guest memory that was writable, and then made read-only in the primary MMU,
and then made writable again.
Add bonus coverage for x86 and arm64 to verify that all of guest memory was
marked read-only. Making forward progress (without making memory writable)
requires arch specific code to skip over the faulting instruction, but the
test can at least verify each vCPU's starting page was made read-only for
other architectures.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/mmu_stress_test.c | 104 +++++++++++++++++-
1 file changed, 101 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 50c3a17418c4..c07c15d7cc9a 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -16,6 +16,8 @@
#include "guest_modes.h"
#include "processor.h"
+static bool mprotect_ro_done;
+
static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
{
uint64_t gpa;
@@ -31,6 +33,42 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
*((volatile uint64_t *)gpa);
GUEST_SYNC(2);
+ /*
+ * Write to the region while mprotect(PROT_READ) is underway. Keep
+ * looping until the memory is guaranteed to be read-only, otherwise
+ * vCPUs may complete their writes and advance to the next stage
+ * prematurely.
+ *
+ * For architectures that support skipping the faulting instruction,
+ * generate the store via inline assembly to ensure the exact length
+ * of the instruction is known and stable (vcpu_arch_put_guest() on
+ * fixed-length architectures should work, but the cost of paranoia
+ * is low in this case). For x86, hand-code the exact opcode so that
+ * there is no room for variability in the generated instruction.
+ */
+ do {
+ for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+#ifdef __x86_64__
+ asm volatile(".byte 0x48,0x89,0x00" :: "a"(gpa) : "memory"); /* mov %rax, (%rax) */
+#elif defined(__aarch64__)
+ asm volatile("str %0, [%0]" :: "r" (gpa) : "memory");
+#else
+ vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+#endif
+ } while (!READ_ONCE(mprotect_ro_done));
+
+ /*
+ * Only architectures that write the entire range can explicitly sync,
+ * as other architectures will be stuck on the write fault.
+ */
+#if defined(__x86_64__) || defined(__aarch64__)
+ GUEST_SYNC(3);
+#endif
+
+ for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
+ vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+ GUEST_SYNC(4);
+
GUEST_ASSERT(0);
}
@@ -78,6 +116,7 @@ static void *vcpu_worker(void *data)
struct vcpu_info *info = data;
struct kvm_vcpu *vcpu = info->vcpu;
struct kvm_vm *vm = vcpu->vm;
+ int r;
vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
@@ -100,6 +139,57 @@ static void *vcpu_worker(void *data)
/* Stage 2, read all of guest memory, which is now read-only. */
run_vcpu(vcpu, 2);
+
+ /*
+ * Stage 3, write guest memory and verify KVM returns -EFAULT once
+ * the mprotect(PROT_READ) lands. Only architectures that support
+ * validating *all* of guest memory sync for this stage, as vCPUs will
+ * be stuck on the faulting instruction for other architectures. Go to
+ * stage 3 without a rendezvous
+ */
+ do {
+ r = _vcpu_run(vcpu);
+ } while (!r);
+ TEST_ASSERT(r == -1 && errno == EFAULT,
+ "Expected EFAULT on write to RO memory, got r = %d, errno = %d", r, errno);
+
+#if defined(__x86_64__) || defined(__aarch64__)
+ /*
+ * Verify *all* writes from the guest hit EFAULT due to the VMA now
+ * being read-only. x86 and arm64 only at this time as skipping the
+ * instruction that hits the EFAULT requires advancing the program
+ * counter, which is arch specific and relies on inline assembly.
+ */
+#ifdef __x86_64__
+ vcpu->run->kvm_valid_regs = KVM_SYNC_X86_REGS;
+#endif
+ for (;;) {
+ r = _vcpu_run(vcpu);
+ if (!r)
+ break;
+ TEST_ASSERT_EQ(errno, EFAULT);
+#if defined(__x86_64__)
+ WRITE_ONCE(vcpu->run->kvm_dirty_regs, KVM_SYNC_X86_REGS);
+ vcpu->run->s.regs.regs.rip += 3;
+#elif defined(__aarch64__)
+ vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc),
+ vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc)) + 4);
+#endif
+
+ }
+ assert_sync_stage(vcpu, 3);
+#endif /* __x86_64__ || __aarch64__ */
+ rendezvous_with_boss();
+
+ /*
+ * Stage 4. Run to completion, waiting for mprotect(PROT_WRITE) to
+ * make the memory writable again.
+ */
+ do {
+ r = _vcpu_run(vcpu);
+ } while (r && errno == EFAULT);
+ TEST_ASSERT_EQ(r, 0);
+ assert_sync_stage(vcpu, 4);
rendezvous_with_boss();
return NULL;
@@ -182,7 +272,7 @@ int main(int argc, char *argv[])
const uint64_t start_gpa = SZ_4G;
const int first_slot = 1;
- struct timespec time_start, time_run1, time_reset, time_run2, time_ro;
+ struct timespec time_start, time_run1, time_reset, time_run2, time_ro, time_rw;
uint64_t max_gpa, gpa, slot_size, max_mem, i;
int max_slots, slot, opt, fd;
bool hugepages = false;
@@ -287,19 +377,27 @@ int main(int argc, char *argv[])
rendezvous_with_vcpus(&time_run2, "run 2");
mprotect(mem, slot_size, PROT_READ);
+ usleep(10);
+ mprotect_ro_done = true;
+ sync_global_to_guest(vm, mprotect_ro_done);
+
rendezvous_with_vcpus(&time_ro, "mprotect RO");
+ mprotect(mem, slot_size, PROT_READ | PROT_WRITE);
+ rendezvous_with_vcpus(&time_rw, "mprotect RW");
+ time_rw = timespec_sub(time_rw, time_ro);
time_ro = timespec_sub(time_ro, time_run2);
time_run2 = timespec_sub(time_run2, time_reset);
time_reset = timespec_sub(time_reset, time_run1);
time_run1 = timespec_sub(time_run1, time_start);
pr_info("run1 = %ld.%.9lds, reset = %ld.%.9lds, run2 = %ld.%.9lds, "
- "ro = %ld.%.9lds\n",
+ "ro = %ld.%.9lds, rw = %ld.%.9lds\n",
time_run1.tv_sec, time_run1.tv_nsec,
time_reset.tv_sec, time_reset.tv_nsec,
time_run2.tv_sec, time_run2.tv_nsec,
- time_ro.tv_sec, time_ro.tv_nsec);
+ time_ro.tv_sec, time_ro.tv_nsec,
+ time_rw.tv_sec, time_rw.tv_nsec);
/*
* Delete even numbered slots (arbitrary) and unmap the first half of
--
2.46.0.598.g6f2099f65c-goog
* Re: [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
2024-09-11 20:41 ` [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ) Sean Christopherson
@ 2024-09-12 0:19 ` James Houghton
2024-09-12 14:36 ` Sean Christopherson
0 siblings, 1 reply; 24+ messages in thread
From: James Houghton @ 2024-09-12 0:19 UTC (permalink / raw)
To: Sean Christopherson
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel
On Wed, Sep 11, 2024 at 1:42 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Add two phases to mmu_stress_test to verify that KVM correctly handles
> guest memory that was writable, and then made read-only in the primary MMU,
> and then made writable again.
>
> Add bonus coverage for x86 and arm64 to verify that all of guest memory was
> marked read-only. Making forward progress (without making memory writable)
> requires arch specific code to skip over the faulting instruction, but the
> test can at least verify each vCPU's starting page was made read-only for
> other architectures.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> tools/testing/selftests/kvm/mmu_stress_test.c | 104 +++++++++++++++++-
> 1 file changed, 101 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
> index 50c3a17418c4..c07c15d7cc9a 100644
> --- a/tools/testing/selftests/kvm/mmu_stress_test.c
> +++ b/tools/testing/selftests/kvm/mmu_stress_test.c
> @@ -16,6 +16,8 @@
> #include "guest_modes.h"
> #include "processor.h"
>
> +static bool mprotect_ro_done;
> +
> static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
> {
> uint64_t gpa;
> @@ -31,6 +33,42 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
> *((volatile uint64_t *)gpa);
> GUEST_SYNC(2);
>
> + /*
> + * Write to the region while mprotect(PROT_READ) is underway. Keep
> + * looping until the memory is guaranteed to be read-only, otherwise
> + * vCPUs may complete their writes and advance to the next stage
> + * prematurely.
> + *
> + * For architectures that support skipping the faulting instruction,
> + * generate the store via inline assembly to ensure the exact length
> + * of the instruction is known and stable (vcpu_arch_put_guest() on
> + * fixed-length architectures should work, but the cost of paranoia
> + * is low in this case). For x86, hand-code the exact opcode so that
> + * there is no room for variability in the generated instruction.
> + */
> + do {
> + for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
> +#ifdef __x86_64__
> + asm volatile(".byte 0x48,0x89,0x00" :: "a"(gpa) : "memory"); /* mov %rax, (%rax) */
I'm curious what you think about using labels (in asm, but perhaps
also in C) and *setting* the PC instead of incrementing the PC. Diff
attached (tested on x86). It might even be safe/okay to always use
vcpu_arch_put_guest(), just set the PC to a label immediately
following it.
I don't feel strongly, so feel free to ignore.
> +#elif defined(__aarch64__)
> + asm volatile("str %0, [%0]" :: "r" (gpa) : "memory");
> +#else
> + vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
> +#endif
> + } while (!READ_ONCE(mprotect_ro_done));
> +
> + /*
> + * Only architectures that write the entire range can explicitly sync,
> + * as other architectures will be stuck on the write fault.
> + */
> +#if defined(__x86_64__) || defined(__aarch64__)
> + GUEST_SYNC(3);
> +#endif
[-- Attachment #2: labels.diff --]
[-- Type: application/x-patch, Size: 2223 bytes --]
* Re: [PATCH v2 02/13] KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-param
2024-09-11 20:41 ` [PATCH v2 02/13] KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-param Sean Christopherson
@ 2024-09-12 9:11 ` Andrew Jones
2024-09-12 13:49 ` Sean Christopherson
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2024-09-12 9:11 UTC (permalink / raw)
To: Sean Christopherson
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Wed, Sep 11, 2024 at 01:41:47PM GMT, Sean Christopherson wrote:
> Return a uint64_t from vcpu_get_reg() instead of having the caller provide
> a pointer to storage, as none of the KVM_GET_ONE_REG usage in KVM selftests
"none of the vcpu_get_reg() usage"
(There is KVM_GET_ONE_REG usage accessing larger registers, but those are
done through __vcpu_get_reg(). See get-reg-list.c)
> accesses a register larger than 64 bits, and vcpu_set_reg() only accepts a
> 64-bit value. If a use case comes along that needs to get a register that
> is larger than 64 bits, then a utility can be added to assert success and
> take a void pointer, but until then, forcing an out param yields ugly code
> and prevents feeding the output of vcpu_get_reg() into vcpu_set_reg().
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> .../selftests/kvm/aarch64/aarch32_id_regs.c | 10 +--
> .../selftests/kvm/aarch64/debug-exceptions.c | 4 +-
> .../selftests/kvm/aarch64/hypercalls.c | 6 +-
> .../testing/selftests/kvm/aarch64/psci_test.c | 6 +-
> .../selftests/kvm/aarch64/set_id_regs.c | 18 ++---
> .../kvm/aarch64/vpmu_counter_access.c | 19 +++---
> .../testing/selftests/kvm/include/kvm_util.h | 6 +-
> .../selftests/kvm/lib/aarch64/processor.c | 8 +--
> .../selftests/kvm/lib/riscv/processor.c | 66 +++++++++----------
> .../testing/selftests/kvm/riscv/arch_timer.c | 2 +-
> .../testing/selftests/kvm/riscv/ebreak_test.c | 2 +-
> .../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
> tools/testing/selftests/kvm/s390x/resets.c | 2 +-
> tools/testing/selftests/kvm/steal_time.c | 3 +-
> 14 files changed, 77 insertions(+), 77 deletions(-)
>
Other than the commit message not being quite right,
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Thanks,
drew
* Re: [PATCH v2 04/13] KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate
2024-09-11 20:41 ` [PATCH v2 04/13] KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate Sean Christopherson
@ 2024-09-12 9:41 ` Andrew Jones
2024-09-12 16:17 ` Sean Christopherson
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2024-09-12 9:41 UTC (permalink / raw)
To: Sean Christopherson
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Wed, Sep 11, 2024 at 01:41:49PM GMT, Sean Christopherson wrote:
> Assert that the register being read/written by vcpu_{g,s}et_reg() is
> no larger than a uint64_t, i.e. that a selftest isn't unintentionally
> truncating the value being read/written.
>
> Ideally, the assert would be done at compile-time, but that would limit
> the checks to hardcoded accesses and/or require fancier compile-time
> assertion infrastructure to filter out dynamic usage.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> tools/testing/selftests/kvm/include/kvm_util.h | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 429a7f003fe3..80230e49e35f 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -683,6 +683,8 @@ static inline uint64_t vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id)
> uint64_t val;
> struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
>
> + TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
> +
> vcpu_ioctl(vcpu, KVM_GET_ONE_REG, ®);
> return val;
> }
> @@ -690,6 +692,8 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val
> {
> struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
>
> + TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
> +
> vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®);
> }
>
> --
> 2.46.0.598.g6f2099f65c-goog
>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Shouldn't patches 3 and 4 come before patch 2 in this series?
Thanks,
drew
* Re: [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress
2024-09-11 20:41 [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Sean Christopherson
` (12 preceding siblings ...)
2024-09-11 20:41 ` [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ) Sean Christopherson
@ 2024-09-12 11:48 ` Andrew Jones
2024-09-12 14:03 ` Sean Christopherson
13 siblings, 1 reply; 24+ messages in thread
From: Andrew Jones @ 2024-09-12 11:48 UTC (permalink / raw)
To: Sean Christopherson
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Wed, Sep 11, 2024 at 01:41:45PM GMT, Sean Christopherson wrote:
> Marc/Oliver,
>
> I would love a sanity check on patches 2 and 3 before I file a bug against
> gcc. The code is pretty darn simple, so I don't think I've misdiagnosed the
> problem, but I've also been second guessing myself _because_ it's so simple;
> it seems super unlikely that no one else would have run into this before.
>
> On to the patches...
>
> The main purpose of this series is to convert the max_guest_memory_test into
> a more generic mmu_stress_test. The patches were originally posted as part
> a KVM x86/mmu series to test the x86/mmu changes, hence the v2.
>
> The basic gist of the "conversion" is to have the test do mprotect() on
> guest memory while vCPUs are accessing said memory, e.g. to verify KVM and
> mmu_notifiers are working as intended.
>
> Patches 1-4 are a somewhat unexpected side quest that I can (arguably should)
> post separately if that would make things easier. The original plan was that
> patch 2 would be a single patch, but things snowballed.
>
> Patch 2 reworks vcpu_get_reg() to return a value instead of using an
> out-param. This is the entire motivation for including these patches;
> having to define a variable just to bump the program counter on arm64
> annoyed me.
>
> Patch 4 adds hardening to vcpu_{g,s}et_reg() to detect potential truncation,
> as KVM's uAPI allows for registers greater than the 64 bits the are supported
> in the "outer" selftests APIs ((vcpu_set_reg() takes a u64, vcpu_get_reg()
> now returns a u64).
>
> Patch 1 is a change to KVM's uAPI headers to move the KVM_REG_SIZE
> definition to common code so that the selftests side of things doesn't
> need #ifdefs to implement the hardening in patch 4.
>
> Patch 3 is the truly unexpected part. With the vcpu_get_reg() rework,
> arm64's vpmu_counter_test fails when compiled with gcc-13, and on gcc-11
> with an added "noinline". AFAICT, the failure doesn't actually have
> anything to with vcpu_get_reg(); I suspect the largely unrelated change
> just happened to run afoul of a latent gcc bug.
>
> Pending a sanity check, I will file a gcc bug. In the meantime, I am
> hoping to fudge around the issue in KVM selftests so that the vcpu_get_reg()
> cleanup isn't blocked, and because the hack-a-fix is arguably a cleanup
> on its own.
>
> v2:
> - Rebase onto kvm/next.
> - Add the aforementioned vcpu_get_reg() changes/disaster.
> - Actually add arm64 support for the fancy mprotect() testcase (I did this
> before v1, but managed to forget to include the changes when posting).
> - Emit "mov %rax, (%rax)" on x86. [James]
> - Add a comment to explain the fancy mprotect() vs. vCPUs logic.
> - Drop the KVM x86 patches (applied and/or will be handled separately).
>
> v1: https://lore.kernel.org/all/20240809194335.1726916-1-seanjc@google.com
>
> Sean Christopherson (13):
> KVM: Move KVM_REG_SIZE() definition to common uAPI header
> KVM: selftests: Return a value from vcpu_get_reg() instead of using an
> out-param
> KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test
> KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate
> KVM: selftests: Check for a potential unhandled exception iff KVM_RUN
> succeeded
> KVM: selftests: Rename max_guest_memory_test to mmu_stress_test
> KVM: selftests: Only muck with SREGS on x86 in mmu_stress_test
> KVM: selftests: Compute number of extra pages needed in
> mmu_stress_test
> KVM: selftests: Enable mmu_stress_test on arm64
> KVM: selftests: Use vcpu_arch_put_guest() in mmu_stress_test
> KVM: selftests: Precisely limit the number of guest loops in
> mmu_stress_test
> KVM: selftests: Add a read-only mprotect() phase to mmu_stress_test
> KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
>
> arch/arm64/include/uapi/asm/kvm.h | 3 -
> arch/riscv/include/uapi/asm/kvm.h | 3 -
> include/uapi/linux/kvm.h | 4 +
> tools/testing/selftests/kvm/Makefile | 3 +-
> .../selftests/kvm/aarch64/aarch32_id_regs.c | 10 +-
> .../selftests/kvm/aarch64/debug-exceptions.c | 4 +-
> .../selftests/kvm/aarch64/hypercalls.c | 6 +-
> .../testing/selftests/kvm/aarch64/psci_test.c | 6 +-
> .../selftests/kvm/aarch64/set_id_regs.c | 18 +-
> .../kvm/aarch64/vpmu_counter_access.c | 27 ++-
> .../testing/selftests/kvm/include/kvm_util.h | 10 +-
> .../selftests/kvm/lib/aarch64/processor.c | 8 +-
> tools/testing/selftests/kvm/lib/kvm_util.c | 3 +-
> .../selftests/kvm/lib/riscv/processor.c | 66 +++----
> ..._guest_memory_test.c => mmu_stress_test.c} | 161 ++++++++++++++++--
> .../testing/selftests/kvm/riscv/arch_timer.c | 2 +-
> .../testing/selftests/kvm/riscv/ebreak_test.c | 2 +-
> .../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
> tools/testing/selftests/kvm/s390x/resets.c | 2 +-
> tools/testing/selftests/kvm/steal_time.c | 3 +-
> 20 files changed, 236 insertions(+), 107 deletions(-)
> rename tools/testing/selftests/kvm/{max_guest_memory_test.c => mmu_stress_test.c} (60%)
>
>
> base-commit: 15e1c3d65975524c5c792fcd59f7d89f00402261
> --
> 2.46.0.598.g6f2099f65c-goog
I gave this test a try on riscv, but it appears to hang in
rendezvous_with_vcpus(). My platform is QEMU, so maybe I was just too
impatient. Anyway, I haven't read the test yet, so I don't even know
what it's doing. It's possible it's trying to do something not yet
supported on riscv. I'll add investigating that to my TODO, but I'm
not sure when I'll get to it.
As for this series, another patch (or a sneaky change to one
of the patches...) should add
#include "ucall_common.h"
to mmu_stress_test.c since it's not there yet despite using get_ucall().
Building for riscv failed because of that.
Thanks,
drew
* Re: [PATCH v2 02/13] KVM: selftests: Return a value from vcpu_get_reg() instead of using an out-param
2024-09-12 9:11 ` Andrew Jones
@ 2024-09-12 13:49 ` Sean Christopherson
0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-12 13:49 UTC (permalink / raw)
To: Andrew Jones
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Thu, Sep 12, 2024, Andrew Jones wrote:
> On Wed, Sep 11, 2024 at 01:41:47PM GMT, Sean Christopherson wrote:
> > Return a uint64_t from vcpu_get_reg() instead of having the caller provide
> > a pointer to storage, as none of the KVM_GET_ONE_REG usage in KVM selftests
>
> "none of the vcpu_get_reg() usage"
>
> (There is KVM_GET_ONE_REG usage accessing larger registers, but those are
> done through __vcpu_get_reg(). See get-reg-list.c)
Doh, right, which was also part of my reasoning for making the conversion (tests
can use __vcpu_get_reg() if they need to get a larger register).
* Re: [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress
2024-09-12 11:48 ` [PATCH v2 00/13] KVM: selftests: Morph max_guest_mem to mmu_stress Andrew Jones
@ 2024-09-12 14:03 ` Sean Christopherson
0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-12 14:03 UTC (permalink / raw)
To: Andrew Jones
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Thu, Sep 12, 2024, Andrew Jones wrote:
> I gave this test a try on riscv, but it appears to hang in
> rendezvous_with_vcpus(). My platform is QEMU, so maybe I was just too
> impatient.
Try running with " -m 1 -s 1", which tells the test to use only 1GiB of memory.
That should run quite quickly, even in an emulator.
> Anyway, I haven't read the test yet, so I don't even know what it's doing.
> It's possibly it's trying to do something not yet supported on riscv. I'll
> add investigating that to my TODO, but I'm not sure when I'll get to it.
>
> As for this series, another patch (or a sneaky change to one
> of the patches...) should add
>
> #include "ucall_common.h"
>
> to mmu_stress_test.c since it's not there yet despite using get_ucall().
> Building riscv faild because of that.
Roger that.
Thanks!
* Re: [PATCH v2 13/13] KVM: selftests: Verify KVM correctly handles mprotect(PROT_READ)
2024-09-12 0:19 ` James Houghton
@ 2024-09-12 14:36 ` Sean Christopherson
0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-12 14:36 UTC (permalink / raw)
To: James Houghton
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel
On Wed, Sep 11, 2024, James Houghton wrote:
> On Wed, Sep 11, 2024 at 1:42 PM Sean Christopherson <seanjc@google.com> wrote:
> > @@ -31,6 +33,42 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
> > *((volatile uint64_t *)gpa);
> > GUEST_SYNC(2);
> >
> > + /*
> > + * Write to the region while mprotect(PROT_READ) is underway. Keep
> > + * looping until the memory is guaranteed to be read-only, otherwise
> > + * vCPUs may complete their writes and advance to the next stage
> > + * prematurely.
> > + *
> > + * For architectures that support skipping the faulting instruction,
> > + * generate the store via inline assembly to ensure the exact length
> > + * of the instruction is known and stable (vcpu_arch_put_guest() on
> > + * fixed-length architectures should work, but the cost of paranoia
> > + * is low in this case). For x86, hand-code the exact opcode so that
> > + * there is no room for variability in the generated instruction.
> > + */
> > + do {
> > + for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
> > +#ifdef __x86_64__
> > + asm volatile(".byte 0x48,0x89,0x00" :: "a"(gpa) : "memory"); /* mov %rax, (%rax) */
>
> I'm curious what you think about using labels (in asm, but perhaps
> also in C) and *setting* the PC instead of incrementing the PC.
I have nothing against asm labels, but generally speaking I don't like using
_global_ labels to skip instructions. E.g. __KVM_ASM_SAFE() uses labels to compute
the instruction size, but those labels are local and never directly used outside
of the macro.
The biggest problem with global labels is that they don't scale. E.g. if we
extend this test in the future with another testcase that needs to skip a gpa,
then we'll end up with skip_page1 and skip_page2, and the code starts to become
even harder to follow.
Don't get me wrong, skipping a fixed instruction size is awful too, but in my
experience they are less painful to maintain over the long haul.
> Diff attached (tested on x86).
Nit, in the future, just copy+paste the diff for small things like this (and even
for large diffs in many cases) so that readers don't need to open an attachment
(depending on their mail client), and so that it's easier to comment on the
proposed changes.
`git am --scissors` (a.k.a. `git am -c`) can be used to essentially extract and
apply such a diff from the mail.
> It might even be safe/okay to always use vcpu_arch_put_guest(), just set the
> PC to a label immediately following it.
That would not be safe/feasible. Labels in C code are scoped to the function.
And AFAIK, labels for use with goto are also not visible symbols, they are
statements. The "standard" way to expose a label from a function is to use inline
asm, at which point there are zero guarantees that nothing necessary is generated
between vcpu_arch_put_guest() and the next asm() block.
E.g. ignoring the inline asm for the moment, the compiler could generate multiple
paths for a loop, e.g. an unrolled version for a small number of iterations, and
an actual loop for a larger number of iterations. Trying to define a label as a
singular symbol for that is nonsensical.
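To illustrate that duplication hazard, a minimal standalone sketch (not from the
series; the label name is made up): if the compiler emits the asm statement more
than once, e.g. by unrolling or peeling the loop, the non-local label is defined
multiple times and the build fails at assembly time.

	/*
	 * arm64 flavor, mirroring the test's store pattern.  Whether this
	 * actually breaks depends on whether the compiler duplicates the
	 * loop body; when it does, the assembler rejects the second
	 * "skip_store:" definition.
	 */
	static void guest_writes(unsigned long *mem, int nr)
	{
		int i;

		for (i = 0; i < nr; i++)
			asm volatile("str %0, [%0]\n\t"
				     "skip_store:"	/* non-local label */
				     :: "r"(&mem[i]) : "memory");
	}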
* Re: [PATCH v2 04/13] KVM: selftests: Assert that vcpu_{g,s}et_reg() won't truncate
2024-09-12 9:41 ` Andrew Jones
@ 2024-09-12 16:17 ` Sean Christopherson
0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-12 16:17 UTC (permalink / raw)
To: Andrew Jones
Cc: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Thu, Sep 12, 2024, Andrew Jones wrote:
> On Wed, Sep 11, 2024 at 01:41:49PM GMT, Sean Christopherson wrote:
> > Assert that the register being read/written by vcpu_{g,s}et_reg() is
> > no larger than a uint64_t, i.e. that a selftest isn't unintentionally
> > truncating the value being read/written.
> >
> > Ideally, the assert would be done at compile-time, but that would limit
> > the checks to hardcoded accesses and/or require fancier compile-time
> > assertion infrastructure to filter out dynamic usage.
> >
> > Signed-off-by: Sean Christopherson <seanjc@google.com>
> > ---
> > tools/testing/selftests/kvm/include/kvm_util.h | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> > index 429a7f003fe3..80230e49e35f 100644
> > --- a/tools/testing/selftests/kvm/include/kvm_util.h
> > +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> > @@ -683,6 +683,8 @@ static inline uint64_t vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id)
> > uint64_t val;
> > struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
> >
> > + TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
> > +
> > vcpu_ioctl(vcpu, KVM_GET_ONE_REG, ®);
> > return val;
> > }
> > @@ -690,6 +692,8 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val
> > {
> > struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
> >
> > + TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
> > +
> > vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®);
> > }
> >
> > --
> > 2.46.0.598.g6f2099f65c-goog
> >
>
> Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
>
> Shouldn't patches 3 and 4 come before patch 2 in this series?
Ideally, yes, but for this patch, it gets weird because the output param of
vcpu_get_reg() isn't actually restricted to a 64-bit value prior to patch 2.
E.g. if this patch were merged without that rework, then the assert would be
confusing and arguably flat out wrong.
As for the hack-a-fix, I deliberately ordered it after patch 2 so that it would
be easier for others to (try to) reproduce the bug. I have no objection to
swapping 2 and 3 in the next version.
* Re: [PATCH v2 03/13] KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test
2024-09-11 20:41 ` [PATCH v2 03/13] KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test Sean Christopherson
@ 2024-09-30 21:56 ` Sean Christopherson
2024-09-30 22:48 ` Sean Christopherson
0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2024-09-30 21:56 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Wed, Sep 11, 2024, Sean Christopherson wrote:
> Use u64_replace_bits() instead of u64p_replace_bits() to set PMCR.N in
> arm64's vPMU counter access test to fudge around what appears to be a gcc
> bug. With the recent change to have vcpu_get_reg() return a value in lieu
> of an out-param, some versions of gcc completely ignore the operation
> performed by set_pmcr_n(), i.e. ignore the output param.
Filed a gcc bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116912
I'll report back if anything interesting comes out of that bug.
* Re: [PATCH v2 03/13] KVM: selftests: Fudge around an apparent gcc bug in arm64's PMU test
2024-09-30 21:56 ` Sean Christopherson
@ 2024-09-30 22:48 ` Sean Christopherson
0 siblings, 0 replies; 24+ messages in thread
From: Sean Christopherson @ 2024-09-30 22:48 UTC (permalink / raw)
To: Marc Zyngier, Oliver Upton, Anup Patel, Paolo Bonzini,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
linux-arm-kernel, kvmarm, kvm, kvm-riscv, linux-riscv,
linux-kernel, James Houghton
On Mon, Sep 30, 2024, Sean Christopherson wrote:
> On Wed, Sep 11, 2024, Sean Christopherson wrote:
> > Use u64_replace_bits() instead of u64p_replace_bits() to set PMCR.N in
> > arm64's vPMU counter access test to fudge around what appears to be a gcc
> > bug. With the recent change to have vcpu_get_reg() return a value in lieu
> > of an out-param, some versions of gcc completely ignore the operation
> > performed by set_pmcr_n(), i.e. ignore the output param.
>
> Filed a gcc bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116912
>
> I'll report back if anything interesting comes out of that bug.
Well, there goes several hours that I'll never get back. Selftests are compiled
with -O2, which enables strict-aliasing optimizations, and "unsigned long" and
"unsigned long long" technically don't alias despite being the same size on 64-bit
builds, so the compiler is allowed to optimize away the load. *sigh*
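For anyone wanting to poke at this outside of the selftests, a minimal
standalone sketch of the same class of breakage (not from the thread; whether
the load is actually elided depends on the compiler version and inlining
decisions, but the cast is undefined behavior under C's aliasing rules either
way):

	/* Build with gcc -O2; adding -fno-strict-aliasing makes it "work". */
	#include <stdio.h>

	/* Stand-in for the u64p_replace_bits() path: writes via unsigned long *. */
	static __attribute__((noinline)) void set_val(unsigned long *p)
	{
		*p = 0x1234;
	}

	int main(void)
	{
		unsigned long long val = 0;	/* uint64_t in the selftests */

		set_val((unsigned long *)&val);	/* UB: wrong effective type */
		printf("%llx\n", val);		/* may legitimately print 0 */
		return 0;
	}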
I'll replace this with a patch to disable strict-aliasing, which the kernel has
done since forever (literally predates git). Grr.
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 48d32c5aa3eb..a6f92129bb02 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -235,10 +235,10 @@ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
-Wno-gnu-variable-sized-type-not-at-end -MD -MP -DCONFIG_64BIT \
-fno-builtin-memcmp -fno-builtin-memcpy \
-fno-builtin-memset -fno-builtin-strnlen \
- -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
- -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
- -I$(<D) -Iinclude/$(ARCH_DIR) -I ../rseq -I.. $(EXTRA_CFLAGS) \
- $(KHDR_INCLUDES)
+ -fno-stack-protector -fno-PIE -fno-strict-aliasing \
+ -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_TOOL_ARCH_INCLUDE) \
+ -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -Iinclude/$(ARCH_DIR) \
+ -I ../rseq -I.. $(EXTRA_CFLAGS) $(KHDR_INCLUDES)