* [kvm-unit-tests PATCH v6 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
2024-01-22 8:53 [kvm-unit-tests PATCH v6 0/4] x86: Add test cases for LAM Binbin Wu
@ 2024-01-22 8:53 ` Binbin Wu
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP Binbin Wu
` (2 subsequent siblings)
3 siblings, 0 replies; 12+ messages in thread
From: Binbin Wu @ 2024-01-22 8:53 UTC (permalink / raw)
To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu
If Linear Address Masking (LAM) is supported, VM entry allows CR3.LAM_U48
(bit 62) and CR3.LAM_U57 (bit 61) to be set in the CR3 field.
Change the test result expectations for the vmlaunch tests that set
CR3.LAM_U48 or CR3.LAM_U57 when LAM is supported.
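For reference, the two CR3 control bits involved can be sketched in plain C (the constant names mirror the patch below; this block is an illustration for the reader, not part of the diff):

```c
#include <assert.h>
#include <stdint.h>

#define BIT_ULL(n)      (1ULL << (n))
#define X86_CR3_LAM_U57 BIT_ULL(61)  /* LAM for user pointers, 57-bit masking */
#define X86_CR3_LAM_U48 BIT_ULL(62)  /* LAM for user pointers, 48-bit masking */

/* With LAM supported, these are the CR3 bits the vmlaunch host-state
 * tests must now treat as legal rather than reserved. */
static uint64_t cr3_lam_bits(uint64_t cr3)
{
	return cr3 & (X86_CR3_LAM_U57 | X86_CR3_LAM_U48);
}
```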
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
lib/x86/processor.h | 3 +++
x86/vmx_tests.c | 6 +++++-
2 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 44f4fd1e..ca8665b3 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -68,6 +68,8 @@
#define X86_CR0_PG BIT(X86_CR0_PG_BIT)
#define X86_CR3_PCID_MASK GENMASK(11, 0)
+#define X86_CR3_LAM_U57_BIT (61)
+#define X86_CR3_LAM_U48_BIT (62)
#define X86_CR4_VME_BIT (0)
#define X86_CR4_VME BIT(X86_CR4_VME_BIT)
@@ -262,6 +264,7 @@ static inline bool is_intel(void)
#define X86_FEATURE_FLUSH_L1D (CPUID(0x7, 0, EDX, 28))
#define X86_FEATURE_ARCH_CAPABILITIES (CPUID(0x7, 0, EDX, 29))
#define X86_FEATURE_PKS (CPUID(0x7, 0, ECX, 31))
+#define X86_FEATURE_LAM (CPUID(0x7, 1, EAX, 26))
/*
* Extended Leafs, a.k.a. AMD defined
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index c1540d39..63080361 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -7012,7 +7012,11 @@ static void test_host_ctl_regs(void)
cr3 = cr3_saved | (1ul << i);
vmcs_write(HOST_CR3, cr3);
report_prefix_pushf("HOST_CR3 %lx", cr3);
- test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+ if (this_cpu_has(X86_FEATURE_LAM) &&
+ ((i == X86_CR3_LAM_U57_BIT) || (i == X86_CR3_LAM_U48_BIT)))
+ test_vmx_vmlaunch(0);
+ else
+ test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
report_prefix_pop();
}
--
2.25.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP
2024-01-22 8:53 [kvm-unit-tests PATCH v6 0/4] x86: Add test cases for LAM Binbin Wu
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
@ 2024-01-22 8:53 ` Binbin Wu
2024-06-05 18:24 ` Sean Christopherson
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
3 siblings, 1 reply; 12+ messages in thread
From: Binbin Wu @ 2024-01-22 8:53 UTC (permalink / raw)
To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu
From: Robert Hoo <robert.hu@linux.intel.com>
This unit test covers:
1. CR4.LAM_SUP toggles.
2. Memory & MMIO access with a supervisor-mode address containing LAM metadata.
3. The INVLPG memory operand doesn't contain LAM metadata; if the address is
in non-canonical form, the INVLPG is the same as a NOP (no #GP).
4. The INVPCID memory operand (descriptor pointer) could contain LAM metadata;
however, the address in the descriptor should be canonical.
In x86/unittests.cfg, add 2 test cases/guest configurations, with and
without LAM.
LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368,
Chapter LINEAR ADDRESS MASKING (LAM)
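The tagging helper these tests rely on can be sketched standalone (this mirrors the set_la_non_canonical() helper added in the diff below; NONCANONICAL and the two masks are taken from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define NONCANONICAL      0xaaaaaaaaaaaaaaaaULL
#define GENMASK_ULL(h, l) (((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))
#define LAM57_MASK        GENMASK_ULL(62, 57)  /* metadata bits with LA57 */
#define LAM48_MASK        GENMASK_ULL(62, 48)  /* metadata bits without LA57 */

/* Plant a non-canonical bit pattern in the metadata bits of a linear
 * address, leaving the address bits untouched. */
static uint64_t set_la_non_canonical(uint64_t src, uint64_t mask)
{
	return (src & ~mask) | (NONCANONICAL & mask);
}
```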
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
lib/x86/processor.h | 10 ++
x86/Makefile.x86_64 | 1 +
x86/lam.c | 243 ++++++++++++++++++++++++++++++++++++++++++++
x86/unittests.cfg | 10 ++
4 files changed, 264 insertions(+)
create mode 100644 x86/lam.c
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index ca8665b3..2d1fe3f5 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -8,6 +8,14 @@
#include <stdint.h>
#define NONCANONICAL 0xaaaaaaaaaaaaaaaaull
+#define LAM57_MASK GENMASK_ULL(62, 57)
+#define LAM48_MASK GENMASK_ULL(62, 48)
+
+/* Set metadata with non-canonical pattern in mask bits of a linear address */
+static inline u64 set_la_non_canonical(u64 src, u64 mask)
+{
+ return (src & ~mask) | (NONCANONICAL & mask);
+}
#ifdef __x86_64__
# define R "r"
@@ -120,6 +128,8 @@
#define X86_CR4_CET BIT(X86_CR4_CET_BIT)
#define X86_CR4_PKS_BIT (24)
#define X86_CR4_PKS BIT(X86_CR4_PKS_BIT)
+#define X86_CR4_LAM_SUP_BIT (28)
+#define X86_CR4_LAM_SUP BIT(X86_CR4_LAM_SUP_BIT)
#define X86_EFLAGS_CF_BIT (0)
#define X86_EFLAGS_CF BIT(X86_EFLAGS_CF_BIT)
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index 2771a6fa..e5db2365 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -38,6 +38,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
tests += $(TEST_DIR)/pks.$(exe)
tests += $(TEST_DIR)/pmu_lbr.$(exe)
tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/lam.$(exe)
ifeq ($(CONFIG_EFI),y)
tests += $(TEST_DIR)/amd_sev.$(exe)
diff --git a/x86/lam.c b/x86/lam.c
new file mode 100644
index 00000000..0ad16be5
--- /dev/null
+++ b/x86/lam.c
@@ -0,0 +1,243 @@
+/*
+ * Intel LAM unit test
+ *
+ * Copyright (C) 2023 Intel
+ *
+ * Author: Robert Hoo <robert.hu@linux.intel.com>
+ * Binbin Wu <binbin.wu@linux.intel.com>
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or
+ * later.
+ */
+
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+#include "vm.h"
+#include "asm/io.h"
+#include "ioram.h"
+
+#define FLAGS_LAM_ACTIVE BIT_ULL(0)
+#define FLAGS_LA57 BIT_ULL(1)
+
+struct invpcid_desc {
+ u64 pcid : 12;
+ u64 rsv : 52;
+ u64 addr;
+};
+
+static inline bool is_la57(void)
+{
+ return !!(read_cr4() & X86_CR4_LA57);
+}
+
+static inline bool lam_sup_active(void)
+{
+ return !!(read_cr4() & X86_CR4_LAM_SUP);
+}
+
+static void cr4_set_lam_sup(void *data)
+{
+ unsigned long cr4;
+
+ cr4 = read_cr4();
+ write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
+}
+
+static void cr4_clear_lam_sup(void *data)
+{
+ unsigned long cr4;
+
+ cr4 = read_cr4();
+ write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
+}
+
+static void test_cr4_lam_set_clear(bool has_lam)
+{
+ bool fault;
+
+ fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+ report((fault != has_lam) && (lam_sup_active() == has_lam),
+ "Set CR4.LAM_SUP");
+
+ fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
+ report(!fault, "Clear CR4.LAM_SUP");
+}
+
+/* Refer to emulator.c */
+static void do_mov(void *mem)
+{
+ unsigned long t1, t2;
+
+ t1 = 0x123456789abcdefull & -1ul;
+ asm volatile("mov %[t1], (%[mem])\n\t"
+ "mov (%[mem]), %[t2]"
+ : [t2]"=r"(t2)
+ : [t1]"r"(t1), [mem]"r"(mem)
+ : "memory");
+ report(t1 == t2, "Mov result check");
+}
+
+static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
+{
+ bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+ u64 lam_mask = arg2;
+ u64 *ptr = (u64 *)arg3;
+ bool is_mmio = !!arg4;
+ bool fault;
+
+ fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+ report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
+
+ ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
+ fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+ report(fault != lam_active,"Test tagged addr (%s)",
+ is_mmio ? "MMIO" : "Memory");
+
+ return 0;
+}
+
+static void do_invlpg(void *mem)
+{
+ invlpg(mem);
+}
+
+static void do_invlpg_fep(void *mem)
+{
+ asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
+}
+
+/* invlpg with tagged address is same as NOP, no #GP expected. */
+static void test_invlpg(u64 lam_mask, void *va, bool fep)
+{
+ bool fault;
+ u64 *ptr;
+
+ ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
+ if (fep)
+ fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
+ else
+ fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
+
+ report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
+}
+
+static void do_invpcid(void *desc)
+{
+ struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
+
+ asm volatile("invpcid %0, %1" :
+ : "m" (*desc_ptr), "r" (0UL)
+ : "memory");
+}
+
+/* LAM doesn't apply to the linear address in the descriptor of invpcid */
+static void test_invpcid(u64 flags, u64 lam_mask, void *data)
+{
+ /*
+ * Reuse the memory address for the descriptor since stack memory
+ * address in KUT doesn't follow the kernel address space partitions.
+ */
+ struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
+ bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
+ bool fault;
+
+ if (!this_cpu_has(X86_FEATURE_PCID) ||
+ !this_cpu_has(X86_FEATURE_INVPCID)) {
+ report_skip("INVPCID not supported");
+ return;
+ }
+
+ memset(desc_ptr, 0, sizeof(struct invpcid_desc));
+ desc_ptr->addr = (u64)data;
+
+ fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+ report(!fault, "INVPCID: untagged pointer + untagged addr");
+
+ desc_ptr->addr = set_la_non_canonical(desc_ptr->addr, lam_mask);
+ fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+ report(fault, "INVPCID: untagged pointer + tagged addr");
+
+ desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+ lam_mask);
+ fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+ report(fault, "INVPCID: tagged pointer + tagged addr");
+
+ desc_ptr = (struct invpcid_desc *)data;
+ desc_ptr->addr = (u64)data;
+ desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+ lam_mask);
+ fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+ report(fault != lam_active, "INVPCID: tagged pointer + untagged addr");
+}
+
+static void test_lam_sup(bool has_lam, bool fep_available)
+{
+ void *vaddr, *vaddr_mmio;
+ phys_addr_t paddr;
+ u64 lam_mask = LAM48_MASK;
+ u64 flags = 0;
+ bool fault;
+
+ /*
+ * KUT initializes vfree_top to 0 for X86_64, and each virtual address
+ * allocation decreases the size from vfree_top. It's guaranteed that
+ * the return value of alloc_vpage() is considered a kernel mode
+ * address and canonical, since only a small virtual address range
+ * is allocated in this test.
+ */
+ vaddr = alloc_vpage();
+ vaddr_mmio = alloc_vpage();
+ paddr = virt_to_phys(alloc_page());
+ install_page(current_page_table(), paddr, vaddr);
+ install_page(current_page_table(), IORAM_BASE_PHYS, vaddr_mmio);
+
+ test_cr4_lam_set_clear(has_lam);
+
+ /* Set for the following LAM_SUP tests */
+ if (has_lam) {
+ fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+ report(!fault && lam_sup_active(), "Set CR4.LAM_SUP");
+ }
+
+ if (lam_sup_active())
+ flags |= FLAGS_LAM_ACTIVE;
+
+ if (is_la57()) {
+ flags |= FLAGS_LA57;
+ lam_mask = LAM57_MASK;
+ }
+
+ /* Test for normal memory */
+ test_ptr(flags, lam_mask, (u64)vaddr, false);
+ /* Test for MMIO to trigger instruction emulation */
+ test_ptr(flags, lam_mask, (u64)vaddr_mmio, true);
+ test_invpcid(flags, lam_mask, vaddr);
+ test_invlpg(lam_mask, vaddr, false);
+ if (fep_available)
+ test_invlpg(lam_mask, vaddr, true);
+}
+
+int main(int ac, char **av)
+{
+ bool has_lam;
+ bool fep_available = is_fep_available();
+
+ setup_vm();
+
+ has_lam = this_cpu_has(X86_FEATURE_LAM);
+ if (!has_lam)
+ report_info("This CPU doesn't support LAM feature\n");
+ else
+ report_info("This CPU supports LAM feature\n");
+
+ if (!fep_available)
+ report_skip("Skipping tests that require forced emulation, "
+ "use kvm.force_emulation_prefix=1 to enable\n");
+
+ test_lam_sup(has_lam, fep_available);
+
+ return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 3fe59449..224df45b 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -491,3 +491,13 @@ file = cet.flat
arch = x86_64
smp = 2
extra_params = -enable-kvm -m 2048 -cpu host
+
+[intel-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host
+
+[intel-no-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host,-lam
--
2.25.1
* Re: [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP Binbin Wu
@ 2024-06-05 18:24 ` Sean Christopherson
2024-06-18 5:48 ` Binbin Wu
0 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2024-06-05 18:24 UTC (permalink / raw)
To: Binbin Wu; +Cc: kvm, pbonzini, chao.gao, robert.hu
On Mon, Jan 22, 2024, Binbin Wu wrote:
> diff --git a/x86/lam.c b/x86/lam.c
> new file mode 100644
> index 00000000..0ad16be5
> --- /dev/null
> +++ b/x86/lam.c
> @@ -0,0 +1,243 @@
> +/*
> + * Intel LAM unit test
> + *
> + * Copyright (C) 2023 Intel
> + *
> + * Author: Robert Hoo <robert.hu@linux.intel.com>
> + * Binbin Wu <binbin.wu@linux.intel.com>
> + *
> + * This work is licensed under the terms of the GNU LGPL, version 2 or
> + * later.
> + */
> +
> +#include "libcflat.h"
> +#include "processor.h"
> +#include "desc.h"
> +#include "vmalloc.h"
> +#include "alloc_page.h"
> +#include "vm.h"
> +#include "asm/io.h"
> +#include "ioram.h"
> +
> +#define FLAGS_LAM_ACTIVE BIT_ULL(0)
> +#define FLAGS_LA57 BIT_ULL(1)
> +
> +struct invpcid_desc {
> + u64 pcid : 12;
> + u64 rsv : 52;
> + u64 addr;
> +};
> +
> +static inline bool is_la57(void)
> +{
> + return !!(read_cr4() & X86_CR4_LA57);
> +}
> +
> +static inline bool lam_sup_active(void)
Needs an "is_" prefix. And be consistent, e.g. is_lam_sup() to go with is_la57(),
or is_lam_sup_enabled() and is_la57_enabled(). I'd probably vote for the latter,
though KVM does have is_paging() and the like, so I'm fine either way.
And these belong in processor.h
> +{
> + return !!(read_cr4() & X86_CR4_LAM_SUP);
> +}
> +
> +static void cr4_set_lam_sup(void *data)
> +{
> + unsigned long cr4;
> +
> + cr4 = read_cr4();
> + write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
> +}
> +
> +static void cr4_clear_lam_sup(void *data)
> +{
> + unsigned long cr4;
> +
> + cr4 = read_cr4();
> + write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
> +}
Please drop these helpers and instead use _safe() variants when possible, e.g.
vector = write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
report(has_lam ? !vector : vector == GP_VECTOR,
"Expected CR4.LAM_SUP=1 to %s", has_lam ? "succeed" : "#GP");
vector = write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
report(!vector, "Expected CR4.LAM_SUP=0 to succeed");
> +static void test_cr4_lam_set_clear(bool has_lam)
> +{
> + bool fault;
> +
> + fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
> + report((fault != has_lam) && (lam_sup_active() == has_lam),
> + "Set CR4.LAM_SUP");
> +
> + fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
> + report(!fault, "Clear CR4.LAM_SUP");
> +}
> +
> +/* Refer to emulator.c */
> +static void do_mov(void *mem)
> +{
> + unsigned long t1, t2;
> +
> + t1 = 0x123456789abcdefull & -1ul;
> + asm volatile("mov %[t1], (%[mem])\n\t"
> + "mov (%[mem]), %[t2]"
> + : [t2]"=r"(t2)
> + : [t1]"r"(t1), [mem]"r"(mem)
> + : "memory");
> + report(t1 == t2, "Mov result check");
> +}
> +
> +static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
There's no reason to name these arg1..arg4. And unless I'm missing something,
there's no need for these flags at all. All the info is derived from vCPU state,
so just re-grab it. The cost of a CR4 read is negligible relative to the expected
runtime of these tests.
> +{
> + bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
> + u64 lam_mask = arg2;
> + u64 *ptr = (u64 *)arg3;
> + bool is_mmio = !!arg4;
> + bool fault;
> +
> + fault = test_for_exception(GP_VECTOR, do_mov, ptr);
> + report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
> +
> + ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
> + fault = test_for_exception(GP_VECTOR, do_mov, ptr);
> + report(fault != lam_active,"Test tagged addr (%s)",
> + is_mmio ? "MMIO" : "Memory");
> +
> + return 0;
> +}
> +
> +static void do_invlpg(void *mem)
> +{
> + invlpg(mem);
> +}
> +
> +static void do_invlpg_fep(void *mem)
> +{
> + asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
> +}
> +
> +/* invlpg with tagged address is same as NOP, no #GP expected. */
> +static void test_invlpg(u64 lam_mask, void *va, bool fep)
> +{
> + bool fault;
> + u64 *ptr;
> +
> + ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
> + if (fep)
> + fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
> + else
> + fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
INVLPG never faults, so don't bother with wrappers. If INVLPG faults, the test
fails, i.e. mission accomplished.
> +
> + report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
> +}
> +
> +static void do_invpcid(void *desc)
> +{
> + struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
> +
> + asm volatile("invpcid %0, %1" :
> + : "m" (*desc_ptr), "r" (0UL)
> + : "memory");
> +}
Similar thing here, invpcid() belongs in processor.h, alongside invpcid_safe().
> +/* LAM doesn't apply to the linear address in the descriptor of invpcid */
> +static void test_invpcid(u64 flags, u64 lam_mask, void *data)
> +{
> + /*
> + * Reuse the memory address for the descriptor since stack memory
> + * address in KUT doesn't follow the kernel address space partitions.
> + */
> + struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
> + bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
> + bool fault;
> +
> + if (!this_cpu_has(X86_FEATURE_PCID) ||
I don't _think_ we need to check for PCID support. It's a KVM/QEMU bug if INVPCID
is advertised but it doesn't work for the "all PCIDs" flavor.
> + !this_cpu_has(X86_FEATURE_INVPCID)) {
> + report_skip("INVPCID not supported");
> + return;
> + }
> +
...
> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
> index 3fe59449..224df45b 100644
> --- a/x86/unittests.cfg
> +++ b/x86/unittests.cfg
> @@ -491,3 +491,13 @@ file = cet.flat
> arch = x86_64
> smp = 2
> extra_params = -enable-kvm -m 2048 -cpu host
> +
> +[intel-lam]
> +file = lam.flat
> +arch = x86_64
> +extra_params = -enable-kvm -cpu host
> +
> +[intel-no-lam]
> +file = lam.flat
> +arch = x86_64
> +extra_params = -enable-kvm -cpu host,-lam
Hrm, not something that needs to be solved now, but we really need a better
interface for iterating over features in tests :-/
* Re: [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP
2024-06-05 18:24 ` Sean Christopherson
@ 2024-06-18 5:48 ` Binbin Wu
2024-06-18 6:37 ` Binbin Wu
0 siblings, 1 reply; 12+ messages in thread
From: Binbin Wu @ 2024-06-18 5:48 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, pbonzini, chao.gao, robert.hu
On 6/6/2024 2:24 AM, Sean Christopherson wrote:
> On Mon, Jan 22, 2024, Binbin Wu wrote:
>> diff --git a/x86/lam.c b/x86/lam.c
>> new file mode 100644
>> index 00000000..0ad16be5
>> --- /dev/null
>> +++ b/x86/lam.c
>> @@ -0,0 +1,243 @@
>> +/*
>> + * Intel LAM unit test
>> + *
>> + * Copyright (C) 2023 Intel
>> + *
>> + * Author: Robert Hoo <robert.hu@linux.intel.com>
>> + * Binbin Wu <binbin.wu@linux.intel.com>
>> + *
>> + * This work is licensed under the terms of the GNU LGPL, version 2 or
>> + * later.
>> + */
>> +
>> +#include "libcflat.h"
>> +#include "processor.h"
>> +#include "desc.h"
>> +#include "vmalloc.h"
>> +#include "alloc_page.h"
>> +#include "vm.h"
>> +#include "asm/io.h"
>> +#include "ioram.h"
>> +
>> +#define FLAGS_LAM_ACTIVE BIT_ULL(0)
>> +#define FLAGS_LA57 BIT_ULL(1)
>> +
>> +struct invpcid_desc {
>> + u64 pcid : 12;
>> + u64 rsv : 52;
>> + u64 addr;
>> +};
>> +
>> +static inline bool is_la57(void)
>> +{
>> + return !!(read_cr4() & X86_CR4_LA57);
>> +}
>> +
>> +static inline bool lam_sup_active(void)
> Needs an "is_" prefix. And be consistent, e.g. is_lam_sup() to go with is_la57(),
> or is_lam_sup_enabled() and is_la57_enabled(). I'd probably vote for the latter,
> though KVM does have is_paging() and the like, so I'm fine either way.
>
> And these belong in processor.h
OK, will use is_lam_sup_enabled() / is_la57_enabled() and move them to
processor.h.
>
>> +{
>> + return !!(read_cr4() & X86_CR4_LAM_SUP);
>> +}
>> +
>> +static void cr4_set_lam_sup(void *data)
>> +{
>> + unsigned long cr4;
>> +
>> + cr4 = read_cr4();
>> + write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
>> +}
>> +
>> +static void cr4_clear_lam_sup(void *data)
>> +{
>> + unsigned long cr4;
>> +
>> + cr4 = read_cr4();
>> + write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
>> +}
> Please drop these helpers and instead use _safe() variants when possible, e.g.
>
> vector = write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
> report(has_lam ? !vector : vector == GP_VECTOR,
> "Expected CR4.LAM_SUP=1 to %s" ? has_lam ? "succeed" : "#GP");
>
> vector = write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
> report(!vector, "Expected CR4.LAM_SUP=0 to succeed");
OK.
>
>> +static void test_cr4_lam_set_clear(bool has_lam)
>> +{
>> + bool fault;
>> +
>> + fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
>> + report((fault != has_lam) && (lam_sup_active() == has_lam),
>> + "Set CR4.LAM_SUP");
>> +
>> + fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
>> + report(!fault, "Clear CR4.LAM_SUP");
>> +}
>> +
>> +/* Refer to emulator.c */
>> +static void do_mov(void *mem)
>> +{
>> + unsigned long t1, t2;
>> +
>> + t1 = 0x123456789abcdefull & -1ul;
>> + asm volatile("mov %[t1], (%[mem])\n\t"
>> + "mov (%[mem]), %[t2]"
>> + : [t2]"=r"(t2)
>> + : [t1]"r"(t1), [mem]"r"(mem)
>> + : "memory");
>> + report(t1 == t2, "Mov result check");
>> +}
>> +
>> +static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
> There's no reason to name these arg1..arg4. And unless I'm missing something,
> there's no need for these flags at all. All the info is derived from vCPU state,
> so just re-grab it. The cost of a CR4 read is negligible relative to the expected
> runtime of these tests.
The reason for these arguments is that the userspace pointer will be tested
in userspace using the same function, and CR3/CR4 cannot be read in ring 3,
so the CR3/CR4 LAM/LA57-related information needs to be passed in.
Based on your comment and the comment from patch 3
https://lore.kernel.org/kvm/ZmC0GC3wAdiO0Dp2@google.com/
Does the number of arguments bother you?
If not, I can pass the CR3 and CR4 values to test_ptr() and let the test
function retrieve the LAM information itself.
I.e., the 4 inputs will be CR3, CR4, memory pointer, is_mmio.
>
>> +{
>> + bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
>> + u64 lam_mask = arg2;
>> + u64 *ptr = (u64 *)arg3;
>> + bool is_mmio = !!arg4;
>> + bool fault;
>> +
>> + fault = test_for_exception(GP_VECTOR, do_mov, ptr);
>> + report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
>> +
>> + ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
>> + fault = test_for_exception(GP_VECTOR, do_mov, ptr);
>> + report(fault != lam_active,"Test tagged addr (%s)",
>> + is_mmio ? "MMIO" : "Memory");
>> +
>> + return 0;
>> +}
>> +
>> +static void do_invlpg(void *mem)
>> +{
>> + invlpg(mem);
>> +}
>> +
>> +static void do_invlpg_fep(void *mem)
>> +{
>> + asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
>> +}
>> +
>> +/* invlpg with tagged address is same as NOP, no #GP expected. */
>> +static void test_invlpg(u64 lam_mask, void *va, bool fep)
>> +{
>> + bool fault;
>> + u64 *ptr;
>> +
>> + ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
>> + if (fep)
>> + fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
>> + else
>> + fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
> INVLPG never faults, so don't bother with wrappers. If INVPLG faults, the test
> fails, i.e. mission accomplished.
OK.
>
>> +
>> + report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
>> +}
>> +
>> +static void do_invpcid(void *desc)
>> +{
>> + struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
>> +
>> + asm volatile("invpcid %0, %1" :
>> + : "m" (*desc_ptr), "r" (0UL)
>> + : "memory");
>> +}
> Similar thing here, invpcid() belongs in processor.h, alongside invpcid_safe().
OK, I think I can use invpcid_safe() directly for test cases.
>
>> +/* LAM doesn't apply to the linear address in the descriptor of invpcid */
>> +static void test_invpcid(u64 flags, u64 lam_mask, void *data)
>> +{
>> + /*
>> + * Reuse the memory address for the descriptor since stack memory
>> + * address in KUT doesn't follow the kernel address space partitions.
>> + */
>> + struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
>> + bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
>> + bool fault;
>> +
>> + if (!this_cpu_has(X86_FEATURE_PCID) ||
> I don't _think_ we need to check for PCID support. It's a KVM/QEMU bug if INVPCID
> is advertised but it doesn't work for the "all PCIDs" flavor.
OK, will remove it.
>
>> + !this_cpu_has(X86_FEATURE_INVPCID)) {
>> + report_skip("INVPCID not supported");
>> + return;
>> + }
>> +
> ...
>
>> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
>> index 3fe59449..224df45b 100644
>> --- a/x86/unittests.cfg
>> +++ b/x86/unittests.cfg
>> @@ -491,3 +491,13 @@ file = cet.flat
>> arch = x86_64
>> smp = 2
>> extra_params = -enable-kvm -m 2048 -cpu host
>> +
>> +[intel-lam]
>> +file = lam.flat
>> +arch = x86_64
>> +extra_params = -enable-kvm -cpu host
>> +
>> +[intel-no-lam]
>> +file = lam.flat
>> +arch = x86_64
>> +extra_params = -enable-kvm -cpu host,-lam
> Hrm, not something that needs to be solved now, but we really need a better
> interface for iterating over features in tests :-/
* Re: [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP
2024-06-18 5:48 ` Binbin Wu
@ 2024-06-18 6:37 ` Binbin Wu
0 siblings, 0 replies; 12+ messages in thread
From: Binbin Wu @ 2024-06-18 6:37 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, pbonzini, chao.gao, robert.hu
On 6/18/2024 1:48 PM, Binbin Wu wrote:
>
>
> On 6/6/2024 2:24 AM, Sean Christopherson wrote:
>> On Mon, Jan 22, 2024, Binbin Wu wrote:
>>> diff --git a/x86/lam.c b/x86/lam.c
>>> new file mode 100644
>>> index 00000000..0ad16be5
>>> --- /dev/null
>>> +++ b/x86/lam.c
>>> @@ -0,0 +1,243 @@
>>> +/*
>>> + * Intel LAM unit test
>>> + *
>>> + * Copyright (C) 2023 Intel
>>> + *
>>> + * Author: Robert Hoo <robert.hu@linux.intel.com>
>>> + * Binbin Wu <binbin.wu@linux.intel.com>
>>> + *
>>> + * This work is licensed under the terms of the GNU LGPL, version 2 or
>>> + * later.
>>> + */
>>> +
>>> +#include "libcflat.h"
>>> +#include "processor.h"
>>> +#include "desc.h"
>>> +#include "vmalloc.h"
>>> +#include "alloc_page.h"
>>> +#include "vm.h"
>>> +#include "asm/io.h"
>>> +#include "ioram.h"
>>> +
>>> +#define FLAGS_LAM_ACTIVE BIT_ULL(0)
>>> +#define FLAGS_LA57 BIT_ULL(1)
>>> +
>>> +struct invpcid_desc {
>>> + u64 pcid : 12;
>>> + u64 rsv : 52;
>>> + u64 addr;
>>> +};
>>> +
>>> +static inline bool is_la57(void)
>>> +{
>>> + return !!(read_cr4() & X86_CR4_LA57);
>>> +}
>>> +
>>> +static inline bool lam_sup_active(void)
>> Needs an "is_" prefix. And be consistent, e.g. is_lam_sup() to go
>> with is_la57(),
>> or is_lam_sup_enabled() and is_la57_enabled(). I'd probably vote for
>> the latter,
>> though KVM does have is_paging() and the like, so I'm fine either way.
>>
>> And these belong in processor.h
>
> OK, will use is_lam_sup_enabled() / is_la57_enabled() and move them to
> processor.h.
>
>>
>>> +{
>>> + return !!(read_cr4() & X86_CR4_LAM_SUP);
>>> +}
>>> +
>>> +static void cr4_set_lam_sup(void *data)
>>> +{
>>> + unsigned long cr4;
>>> +
>>> + cr4 = read_cr4();
>>> + write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
>>> +}
>>> +
>>> +static void cr4_clear_lam_sup(void *data)
>>> +{
>>> + unsigned long cr4;
>>> +
>>> + cr4 = read_cr4();
>>> + write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
>>> +}
>> Please drop these helpers and instead use _safe() variants when
>> possible, e.g.
>>
>> vector = write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
>> report(has_lam ? !vector : vector == GP_VECTOR,
>> "Expected CR4.LAM_SUP=1 to %s" ? has_lam ? "succeed" :
>> "#GP");
>>
>> vector = write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
>> report(!vector, "Expected CR4.LAM_SUP=0 to succeed");
> OK.
>
>>
>>> +static void test_cr4_lam_set_clear(bool has_lam)
>>> +{
>>> + bool fault;
>>> +
>>> + fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
>>> + report((fault != has_lam) && (lam_sup_active() == has_lam),
>>> + "Set CR4.LAM_SUP");
>>> +
>>> + fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
>>> + report(!fault, "Clear CR4.LAM_SUP");
>>> +}
>>> +
>>> +/* Refer to emulator.c */
>>> +static void do_mov(void *mem)
>>> +{
>>> + unsigned long t1, t2;
>>> +
>>> + t1 = 0x123456789abcdefull & -1ul;
>>> + asm volatile("mov %[t1], (%[mem])\n\t"
>>> + "mov (%[mem]), %[t2]"
>>> + : [t2]"=r"(t2)
>>> + : [t1]"r"(t1), [mem]"r"(mem)
>>> + : "memory");
>>> + report(t1 == t2, "Mov result check");
>>> +}
>>> +
>>> +static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
>> There's no reason to name these arg1..arg4. And unless I'm missing
>> something,
>> there's no need for these flags at all. All the info is derived from
>> vCPU state,
>> so just re-grab it. The cost of a CR4 read is negligible relative to
>> the expected
>> runtime of these tests.
> The reason for these arguments is the userspace pointer will be tested
> in userspace using the same function,
> and CR3/CR4 can not be read in ring3, so these CR3/CR4 LAM/LA57
> related information needs to be passed.
Another way to simplify it: LAM identifies a pointer as a kernel or user
pointer based only on the pointer itself, so there is no need to run the
test in userspace to test user pointers.
Since SMAP is not enabled by default in KUT, we can use user pointers in
ring 0 directly, and then there is no need to pass CR3/CR4 info to the
function.
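(For context, the user/supervisor distinction LAM draws is based purely on bit 63 of the pointer, independent of the CPL at which the access is made; that is what makes testing user pointers from ring 0 viable. A minimal illustrative sketch, not part of the patch:)

```c
#include <assert.h>
#include <stdint.h>

/* LAM classifies a linear address as supervisor or user based solely on
 * bit 63 of the pointer itself, regardless of the current privilege
 * level at which the memory access is performed. */
static int lam_is_supervisor_ptr(uint64_t la)
{
	return (int)((la >> 63) & 1);
}
```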
>
> Based on your comment and the comment from patch 3
> https://lore.kernel.org/kvm/ZmC0GC3wAdiO0Dp2@google.com/
> Does the number of arguments bothering you?
> If not, I can pass CR3 and CR4 value to test_ptr() and let the test
> function to retrieve LAM information itself.
> I.e, the 4 inputs will be CR3, CR4, memory pointer, is_mmio.
>
>
>>
>>> +{
>>> + bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
>>> + u64 lam_mask = arg2;
>>> + u64 *ptr = (u64 *)arg3;
>>> + bool is_mmio = !!arg4;
>>> + bool fault;
>>> +
>>> + fault = test_for_exception(GP_VECTOR, do_mov, ptr);
>>> + report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" :
>>> "Memory");
>>> +
>>> + ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
>>> + fault = test_for_exception(GP_VECTOR, do_mov, ptr);
>>> + report(fault != lam_active,"Test tagged addr (%s)",
>>> + is_mmio ? "MMIO" : "Memory");
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static void do_invlpg(void *mem)
>>> +{
>>> + invlpg(mem);
>>> +}
>>> +
>>> +static void do_invlpg_fep(void *mem)
>>> +{
>>> + asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
>>> +}
>>> +
>>> +/* invlpg with tagged address is same as NOP, no #GP expected. */
>>> +static void test_invlpg(u64 lam_mask, void *va, bool fep)
>>> +{
>>> + bool fault;
>>> + u64 *ptr;
>>> +
>>> + ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
>>> + if (fep)
>>> + fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
>>> + else
>>> + fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
>> INVLPG never faults, so don't bother with wrappers. If INVLPG
>> faults, the test
>> fails, i.e. mission accomplished.
>
> OK.
>>
>>> +
>>> + report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
>>> +}
>>> +
>>> +static void do_invpcid(void *desc)
>>> +{
>>> + struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
>>> +
>>> + asm volatile("invpcid %0, %1" :
>>> + : "m" (*desc_ptr), "r" (0UL)
>>> + : "memory");
>>> +}
>> Similar thing here, invpcid() belongs in processor.h, alongside
>> invpcid_safe().
>
> OK, I think I can use invpcid_safe() directly for test cases.
>
>
>>
>>> +/* LAM doesn't apply to the linear address in the descriptor of
>>> invpcid */
>>> +static void test_invpcid(u64 flags, u64 lam_mask, void *data)
>>> +{
>>> + /*
>>> + * Reuse the memory address for the descriptor since stack memory
>>> + * address in KUT doesn't follow the kernel address space partitions.
>>> + */
>>> + struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
>>> + bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
>>> + bool fault;
>>> +
>>> + if (!this_cpu_has(X86_FEATURE_PCID) ||
>> I don't _think_ we need to check for PCID support. It's a KVM/QEMU
>> bug if INVPCID
>> is advertised but it doesn't work for the "all PCIDs" flavor.
> OK, will remove it.
>
>>
>>> + !this_cpu_has(X86_FEATURE_INVPCID)) {
>>> + report_skip("INVPCID not supported");
>>> + return;
>>> + }
>>> +
>> ...
>>
>>> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
>>> index 3fe59449..224df45b 100644
>>> --- a/x86/unittests.cfg
>>> +++ b/x86/unittests.cfg
>>> @@ -491,3 +491,13 @@ file = cet.flat
>>> arch = x86_64
>>> smp = 2
>>> extra_params = -enable-kvm -m 2048 -cpu host
>>> +
>>> +[intel-lam]
>>> +file = lam.flat
>>> +arch = x86_64
>>> +extra_params = -enable-kvm -cpu host
>>> +
>>> +[intel-no-lam]
>>> +file = lam.flat
>>> +arch = x86_64
>>> +extra_params = -enable-kvm -cpu host,-lam
>> Hrm, not something that needs to be solved now, but we really need a
>> better
>> interface for iterating over features in tests :-/
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* [kvm-unit-tests PATCH v6 3/4] x86: Add test cases for LAM_{U48,U57}
2024-01-22 8:53 [kvm-unit-tests PATCH v6 0/4] x86: Add test cases for LAM Binbin Wu
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 2/4] x86: Add test case for LAM_SUP Binbin Wu
@ 2024-01-22 8:53 ` Binbin Wu
2024-06-05 18:53 ` Sean Christopherson
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
3 siblings, 1 reply; 12+ messages in thread
From: Binbin Wu @ 2024-01-22 8:53 UTC (permalink / raw)
To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu
This unit test covers:
1. Toggling of CR3 LAM bits.
2. Memory/MMIO accesses with a user-mode address containing LAM metadata.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
lib/x86/processor.h | 2 ++
x86/lam.c | 76 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 78 insertions(+)
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 2d1fe3f5..e3340f13 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -77,7 +77,9 @@ static inline u64 set_la_non_canonical(u64 src, u64 mask)
#define X86_CR3_PCID_MASK GENMASK(11, 0)
#define X86_CR3_LAM_U57_BIT (61)
+#define X86_CR3_LAM_U57 BIT_ULL(X86_CR3_LAM_U57_BIT)
#define X86_CR3_LAM_U48_BIT (62)
+#define X86_CR3_LAM_U48 BIT_ULL(X86_CR3_LAM_U48_BIT)
#define X86_CR4_VME_BIT (0)
#define X86_CR4_VME BIT(X86_CR4_VME_BIT)
diff --git a/x86/lam.c b/x86/lam.c
index 0ad16be5..3aa6b313 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -18,6 +18,9 @@
#include "vm.h"
#include "asm/io.h"
#include "ioram.h"
+#include "usermode.h"
+
+#define CR3_LAM_BITS_MASK (X86_CR3_LAM_U48 | X86_CR3_LAM_U57)
#define FLAGS_LAM_ACTIVE BIT_ULL(0)
#define FLAGS_LA57 BIT_ULL(1)
@@ -38,6 +41,16 @@ static inline bool lam_sup_active(void)
return !!(read_cr4() & X86_CR4_LAM_SUP);
}
+static inline bool lam_u48_active(void)
+{
+ return (read_cr3() & CR3_LAM_BITS_MASK) == X86_CR3_LAM_U48;
+}
+
+static inline bool lam_u57_active(void)
+{
+ return !!(read_cr3() & X86_CR3_LAM_U57);
+}
+
static void cr4_set_lam_sup(void *data)
{
unsigned long cr4;
@@ -83,6 +96,7 @@ static void do_mov(void *mem)
static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
{
bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+ bool la_57 = !!(arg1 & FLAGS_LA57);
u64 lam_mask = arg2;
u64 *ptr = (u64 *)arg3;
bool is_mmio = !!arg4;
@@ -96,6 +110,17 @@ static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
report(fault != lam_active,"Test tagged addr (%s)",
is_mmio ? "MMIO" : "Memory");
+ /*
+ * This test case is only triggered when LAM_U57 is active and 4-level
+ * paging is used. In that case, an address whose bits 56:47 aren't
+ * all 0 triggers a #GP.
+ */
+ if (lam_active && (lam_mask == LAM57_MASK) && !la_57) {
+ ptr = (u64 *)set_la_non_canonical((u64)ptr, LAM48_MASK);
+ fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+ report(fault, "Test non-LAM-canonical addr (%s)",
+ is_mmio ? "MMIO" : "Memory");
+ }
+
return 0;
}
@@ -220,6 +245,56 @@ static void test_lam_sup(bool has_lam, bool fep_available)
test_invlpg(lam_mask, vaddr, true);
}
+static void test_lam_user_mode(bool has_lam, u64 lam_mask, u64 mem, u64 mmio)
+{
+ unsigned r;
+ bool raised_vector;
+ unsigned long cr3 = read_cr3() & ~CR3_LAM_BITS_MASK;
+ u64 flags = 0;
+
+ if (is_la57())
+ flags |= FLAGS_LA57;
+
+ if (has_lam) {
+ if (lam_mask == LAM48_MASK) {
+ r = write_cr3_safe(cr3 | X86_CR3_LAM_U48);
+ report((r == 0) && lam_u48_active(), "Set LAM_U48");
+ } else {
+ r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
+ report((r == 0) && lam_u57_active(), "Set LAM_U57");
+ }
+ }
+ if (lam_u48_active() || lam_u57_active())
+ flags |= FLAGS_LAM_ACTIVE;
+
+ run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mem,
+ false, &raised_vector);
+ run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mmio,
+ true, &raised_vector);
+}
+
+static void test_lam_user(bool has_lam)
+{
+ phys_addr_t paddr;
+
+ /*
+ * The physical address of AREA_NORMAL fits within 36 bits, so with the
+ * identity mapping the linear address is treated as a user-mode address
+ * from LAM's point of view, and the metadata bits are not used as
+ * address bits for either LAM48 or LAM57.
+ */
+ paddr = virt_to_phys(alloc_pages_flags(0, AREA_NORMAL));
+ _Static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL,
+ "Identical mapping range check");
+
+ /*
+ * Physical memory & MMIO have already been identity-mapped in
+ * setup_mmu().
+ */
+ test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
+ test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
+}
+
int main(int ac, char **av)
{
bool has_lam;
@@ -238,6 +313,7 @@ int main(int ac, char **av)
"use kvm.force_emulation_prefix=1 to enable\n");
test_lam_sup(has_lam, fep_available);
+ test_lam_user(has_lam);
return report_summary();
}
--
2.25.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [kvm-unit-tests PATCH v6 3/4] x86: Add test cases for LAM_{U48,U57}
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
@ 2024-06-05 18:53 ` Sean Christopherson
0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2024-06-05 18:53 UTC (permalink / raw)
To: Binbin Wu; +Cc: kvm, pbonzini, chao.gao, robert.hu
On Mon, Jan 22, 2024, Binbin Wu wrote:
> +static inline bool lam_u48_active(void)
> +{
> + return (read_cr3() & CR3_LAM_BITS_MASK) == X86_CR3_LAM_U48;
> +}
> +
> +static inline bool lam_u57_active(void)
> +{
> + return !!(read_cr3() & X86_CR3_LAM_U57);
> +}
Same comments as the supervisor patches.
> +static void test_lam_user_mode(bool has_lam, u64 lam_mask, u64 mem, u64 mmio)
> +{
> + unsigned r;
> + bool raised_vector;
> + unsigned long cr3 = read_cr3() & ~CR3_LAM_BITS_MASK;
> + u64 flags = 0;
> +
> + if (is_la57())
> + flags |= FLAGS_LA57;
> +
> + if (has_lam) {
> + if (lam_mask == LAM48_MASK) {
> + r = write_cr3_safe(cr3 | X86_CR3_LAM_U48);
> + report((r == 0) && lam_u48_active(), "Set LAM_U48");
!r is the preferred style
> + } else {
> + r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
> + report((r == 0) && lam_u57_active(), "Set LAM_U57");
> + }
> + }
> + if (lam_u48_active() || lam_u57_active())
> + flags |= FLAGS_LAM_ACTIVE;
> +
> + run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mem,
> + false, &raised_vector);
Ah, this is why test_ptr() doesn't recompute things. Hrm. Ah, I think we can at
least compute the mask dynamically, e.g. test_ptr() just needs to know if LAM48
and/or LAM57 is enabled, right?
> + run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mmio,
> + true, &raised_vector);
> +}
^ permalink raw reply [flat|nested] 12+ messages in thread
* [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM
2024-01-22 8:53 [kvm-unit-tests PATCH v6 0/4] x86: Add test cases for LAM Binbin Wu
` (2 preceding siblings ...)
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
@ 2024-01-22 8:53 ` Binbin Wu
2024-06-05 18:57 ` Sean Christopherson
3 siblings, 1 reply; 12+ messages in thread
From: Binbin Wu @ 2024-01-22 8:53 UTC (permalink / raw)
To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu
LAM applies to the linear address of the INVVPID operand; however, it
doesn't apply to the linear address in the INVVPID descriptor.
Add test cases that use a tagged operand or a tagged target invalidation
address to verify the expected behavior when LAM is on.
Also, the INVVPID case with a tagged operand can serve as a common test
case for VMX instruction VM exits.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
x86/vmx_tests.c | 46 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 45 insertions(+), 1 deletion(-)
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 63080361..a855bdcb 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3220,6 +3220,48 @@ static void invvpid_test_not_in_vmx_operation(void)
TEST_ASSERT(!vmx_on());
}
+/* LAM doesn't apply to the linear address inside the descriptor of invvpid */
+static void invvpid_test_lam(void)
+{
+ void *vaddr;
+ struct invvpid_operand *operand;
+ u64 lam_mask = LAM48_MASK;
+ bool fault;
+
+ if (!this_cpu_has(X86_FEATURE_LAM)) {
+ report_skip("LAM is not supported, skip INVVPID with LAM");
+ return;
+ }
+ write_cr4(read_cr4() | X86_CR4_LAM_SUP);
+
+ if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+ lam_mask = LAM57_MASK;
+
+ vaddr = alloc_vpage();
+ install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+ /*
+ * Since stack memory addresses in KUT don't follow the kernel address
+ * space partition rule, reuse the allocated memory address both for the
+ * descriptor and for the target address inside the invvpid descriptor.
+ */
+ operand = (struct invvpid_operand *)vaddr;
+ operand->vpid = 0xffff;
+ operand->gla = (u64)vaddr;
+ operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+ lam_mask);
+ fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+ report(!fault, "INVVPID (LAM on): tagged operand");
+
+ /*
+ * Verify that LAM doesn't apply to the address inside the descriptor
+ * even when LAM is enabled, i.e., the address in the descriptor should
+ * be canonical.
+ */
+ try_invvpid(INVVPID_ADDR, 0xffff, (u64)operand);
+
+ write_cr4(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
/*
* This does not test real-address mode, virtual-8086 mode, protected mode,
* or CPL > 0.
@@ -3269,8 +3311,10 @@ static void invvpid_test(void)
/*
* The gla operand is only validated for single-address INVVPID.
*/
- if (types & (1u << INVVPID_ADDR))
+ if (types & (1u << INVVPID_ADDR)) {
try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
+ invvpid_test_lam();
+ }
invvpid_test_gp();
invvpid_test_ss();
--
2.25.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM
2024-01-22 8:53 ` [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
@ 2024-06-05 18:57 ` Sean Christopherson
2024-06-18 5:55 ` Binbin Wu
0 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2024-06-05 18:57 UTC (permalink / raw)
To: Binbin Wu; +Cc: kvm, pbonzini, chao.gao, robert.hu
On Mon, Jan 22, 2024, Binbin Wu wrote:
> + if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
Checking for feature support seems superfluous, e.g. LA57 should never be set if
it's unsupported. Then you can do
lam_mask = is_la57_enabled() ? LAM57_MASK : LAM48_MASK;
> + lam_mask = LAM57_MASK;
> +
> + vaddr = alloc_vpage();
> + install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
> + /*
> + * Since the stack memory address in KUT doesn't follow kernel address
> + * space partition rule, reuse the memory address for descriptor and
> + * the target address in the descriptor of invvpid.
> + */
> + operand = (struct invvpid_operand *)vaddr;
Why bother backing the virtual address? MOV needs a valid translation, but
INVVPID does not (ditto for INVLPG and INVPCID, though it might be simpler and
easier to just use the allocated address for those).
> + operand->vpid = 0xffff;
> + operand->gla = (u64)vaddr;
> + operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
> + lam_mask);
> + fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
> + report(!fault, "INVVPID (LAM on): tagged operand");
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM
2024-06-05 18:57 ` Sean Christopherson
@ 2024-06-18 5:55 ` Binbin Wu
2024-06-24 8:20 ` Binbin Wu
0 siblings, 1 reply; 12+ messages in thread
From: Binbin Wu @ 2024-06-18 5:55 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, pbonzini, chao.gao, robert.hu
On 6/6/2024 2:57 AM, Sean Christopherson wrote:
> On Mon, Jan 22, 2024, Binbin Wu wrote:
>> + if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
> Checking for feature support seems superfluous, e.g. LA57 should never be set if
> it's unsupported. Then you can do
>
> lam_mask = is_la57_enabled() ? LAM57_MASK : LAM48_MASK;
OK, will drop the feature check.
>
>> + lam_mask = LAM57_MASK;
>> +
>> + vaddr = alloc_vpage();
>> + install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>> + /*
>> + * Since the stack memory address in KUT doesn't follow kernel address
>> + * space partition rule, reuse the memory address for descriptor and
>> + * the target address in the descriptor of invvpid.
>> + */
>> + operand = (struct invvpid_operand *)vaddr;
> Why bother backing the virtual address? MOV needs a valid translation, but
> INVVPID does not (ditto for INVLPG and INVPCID, though it might be simpler and
> easier to just use the allocated address for those).
OK, will remove the unnecessary code.
>
>> + operand->vpid = 0xffff;
>> + operand->gla = (u64)vaddr;
>> + operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
>> + lam_mask);
>> + fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> + report(!fault, "INVVPID (LAM on): tagged operand");
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [kvm-unit-tests PATCH v6 4/4] x86: Add test case for INVVPID with LAM
2024-06-18 5:55 ` Binbin Wu
@ 2024-06-24 8:20 ` Binbin Wu
0 siblings, 0 replies; 12+ messages in thread
From: Binbin Wu @ 2024-06-24 8:20 UTC (permalink / raw)
To: Sean Christopherson; +Cc: kvm, pbonzini, chao.gao, robert.hu
On 6/18/2024 1:55 PM, Binbin Wu wrote:
>
>
> On 6/6/2024 2:57 AM, Sean Christopherson wrote:
>> On Mon, Jan 22, 2024, Binbin Wu wrote:
>>> + if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>> Checking for feature support seems superfluous, e.g. LA57 should never
>> be set if it's unsupported. Then you can do
>>
>> lam_mask = is_la57_enabled() ? LAM57_MASK : LAM48_MASK;
> OK, will drop the feature check.
>
>>
>>> + lam_mask = LAM57_MASK;
>>> +
>>> +
>>> + vaddr = alloc_vpage();
>>> + install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>>> + /*
>>> + * Since the stack memory address in KUT doesn't follow kernel address
>>> + * space partition rule, reuse the memory address for descriptor and
>>> + * the target address in the descriptor of invvpid.
>>> + */
>>> + operand = (struct invvpid_operand *)vaddr;
>> Why bother backing the virtual address? MOV needs a valid translation,
>> but INVVPID does not (ditto for INVLPG and INVPCID, though it might be
>> simpler and easier to just use the allocated address for those).
>
> OK, will remove the unnecessary code.
Sorry, the backing is still needed here.
The target address inside the invvpid descriptor (operand->gla) doesn't
need a valid translation, but the invvpid descriptor itself needs a
valid translation.
>
>>
>>> + operand->vpid = 0xffff;
>>> + operand->gla = (u64)vaddr;
>>> + operand = (struct invvpid_operand
>>> *)set_la_non_canonical((u64)operand,
>>> + lam_mask);
>>> + fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> + report(!fault, "INVVPID (LAM on): tagged operand");
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread