public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* [kvm-unit-tests v3 0/4] x86: Add test cases for LAM
@ 2023-04-12  7:51 Binbin Wu
  2023-04-12  7:51 ` [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Binbin Wu @ 2023-04-12  7:51 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini; +Cc: binbin.wu, chao.gao, robert.hu

Intel Linear Address Masking (LAM) [1] modifies the checking that is applied to
*64-bit* linear addresses, allowing software to use the untranslated address
bits for metadata.

This patch series adds test cases for KVM LAM:

Patch 1 changes the vmlaunch tests to allow setting of the CR3 LAM bits.
Patches 2~4 add test cases for LAM supervisor mode and user mode, including:
- For supervisor mode
  CR4.LAM_SUP toggle
  Memory/MMIO access with tagged pointer
  INVLPG
  INVPCID
  INVVPID (also used to cover VMX instruction VMExit path)
- For user mode
  CR3 LAM bits toggle 
  Memory/MMIO access with tagged pointer

[1] Intel ISE https://cdrdv2.intel.com/v1/dl/getContent/671368
    Chapter Linear Address Masking (LAM)

---
Changelog
v2 --> v3:
Simplify the implementation of set_metadata() (Suggested by Chao Gao)
Move definition of X86_FEATURE_LAM to patch 1.
Remove strcpy case and unify memory & MMIO address access using MOV instruction.
Some code cleanups.
Add reviewed-by from Chao Gao in patch 1. 

v1 --> v2:
Add cases to test INVLPG, INVPCID, INVVPID with LAM_SUP
Add cases to test LAM_{U48,U57}

Binbin Wu (3):
  x86: Allow setting of CR3 LAM bits if LAM supported
  x86: Add test cases for LAM_{U48,U57}
  x86: Add test case for INVVPID with LAM

Robert Hoo (1):
  x86: Add test case for LAM_SUP

 lib/x86/processor.h |   7 +
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 315 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 x86/vmx_tests.c     |  66 +++++++++-
 5 files changed, 398 insertions(+), 1 deletion(-)
 create mode 100644 x86/lam.c

-- 
2.25.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
  2023-04-12  7:51 [kvm-unit-tests v3 0/4] x86: Add test cases for LAM Binbin Wu
@ 2023-04-12  7:51 ` Binbin Wu
  2023-04-21  2:02   ` Chao Gao
  2023-04-12  7:51 ` [kvm-unit-tests v3 2/4] x86: Add test case for LAM_SUP Binbin Wu
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Binbin Wu @ 2023-04-12  7:51 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini; +Cc: binbin.wu, chao.gao, robert.hu

If LAM is supported, VM entry allows CR3.LAM_U48 (bit 62) and CR3.LAM_U57
(bit 61) to be set in the CR3 field.

Change the test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57
on vmlaunch tests when LAM is supported.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h | 3 +++
 x86/vmx_tests.c     | 6 +++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 3d58ef7..e00a32b 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -55,6 +55,8 @@
 #define X86_CR0_PG		BIT(X86_CR0_PG_BIT)
 
 #define X86_CR3_PCID_MASK	GENMASK(11, 0)
+#define X86_CR3_LAM_U57_BIT	(61)
+#define X86_CR3_LAM_U48_BIT	(62)
 
 #define X86_CR4_VME_BIT		(0)
 #define X86_CR4_VME		BIT(X86_CR4_VME_BIT)
@@ -248,6 +250,7 @@ static inline bool is_intel(void)
 #define	X86_FEATURE_SPEC_CTRL		(CPUID(0x7, 0, EDX, 26))
 #define	X86_FEATURE_ARCH_CAPABILITIES	(CPUID(0x7, 0, EDX, 29))
 #define	X86_FEATURE_PKS			(CPUID(0x7, 0, ECX, 31))
+#define	X86_FEATURE_LAM			(CPUID(0x7, 1, EAX, 26))
 
 /*
  * Extended Leafs, a.k.a. AMD defined
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 7bba816..5ee1264 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -7000,7 +7000,11 @@ static void test_host_ctl_regs(void)
 		cr3 = cr3_saved | (1ul << i);
 		vmcs_write(HOST_CR3, cr3);
 		report_prefix_pushf("HOST_CR3 %lx", cr3);
-		test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+		if (this_cpu_has(X86_FEATURE_LAM) &&
+		    ((i == X86_CR3_LAM_U57_BIT) || ( i == X86_CR3_LAM_U48_BIT)))
+			test_vmx_vmlaunch(0);
+		else
+			test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
 		report_prefix_pop();
 	}
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [kvm-unit-tests v3 2/4] x86: Add test case for LAM_SUP
  2023-04-12  7:51 [kvm-unit-tests v3 0/4] x86: Add test cases for LAM Binbin Wu
  2023-04-12  7:51 ` [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
@ 2023-04-12  7:51 ` Binbin Wu
  2023-04-21  3:32   ` Chao Gao
  2023-04-12  7:51 ` [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
  2023-04-12  7:51 ` [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
  3 siblings, 1 reply; 13+ messages in thread
From: Binbin Wu @ 2023-04-12  7:51 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini; +Cc: binbin.wu, chao.gao, robert.hu

From: Robert Hoo <robert.hu@linux.intel.com>

This unit test covers:
1. CR4.LAM_SUP toggles.
2. Memory & MMIO access with supervisor mode address with LAM metadata.
3. INVLPG: LAM does not apply to the memory operand; if the address is in
   non-canonical form, the INVLPG is the same as a NOP (no #GP).
4. INVPCID: the memory operand (descriptor pointer) may contain LAM metadata;
   however, the address in the descriptor must be canonical.

In x86/unittests.cfg, add 2 test cases/guest conf, with and without LAM.

LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368, Chap 7
LINEAR ADDRESS MASKING (LAM)

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 lib/x86/processor.h |   2 +
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 244 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 4 files changed, 257 insertions(+)
 create mode 100644 x86/lam.c

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index e00a32b..4bb8cd7 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -107,6 +107,8 @@
 #define X86_CR4_CET		BIT(X86_CR4_CET_BIT)
 #define X86_CR4_PKS_BIT		(24)
 #define X86_CR4_PKS		BIT(X86_CR4_PKS_BIT)
+#define X86_CR4_LAM_SUP_BIT	(28)
+#define X86_CR4_LAM_SUP	BIT(X86_CR4_LAM_SUP_BIT)
 
 #define X86_EFLAGS_CF_BIT	(0)
 #define X86_EFLAGS_CF		BIT(X86_EFLAGS_CF_BIT)
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f483dea..fa11eb3 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -34,6 +34,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/lam.$(exe)
 
 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)
diff --git a/x86/lam.c b/x86/lam.c
new file mode 100644
index 0000000..63c3fde
--- /dev/null
+++ b/x86/lam.c
@@ -0,0 +1,244 @@
+/*
+ * Intel LAM unit test
+ *
+ * Copyright (C) 2023 Intel
+ *
+ * Author: Robert Hoo <robert.hu@linux.intel.com>
+ *         Binbin Wu <binbin.wu@linux.intel.com>
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or
+ * later.
+ */
+
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+#include "vm.h"
+#include "asm/io.h"
+#include "ioram.h"
+
+#define LAM57_MASK	GENMASK_ULL(62, 57)
+#define LAM48_MASK	GENMASK_ULL(62, 48)
+
+#define FLAGS_LAM_ACTIVE	BIT_ULL(0)
+#define FLAGS_LA57		BIT_ULL(1)
+
+struct invpcid_desc {
+    u64 pcid : 12;
+    u64 rsv  : 52;
+    u64 addr;
+};
+
+static inline bool is_la57(void)
+{
+	return !!(read_cr4() & X86_CR4_LA57);
+}
+
+static inline bool lam_sup_active(void)
+{
+	return !!(read_cr4() & X86_CR4_LAM_SUP);
+}
+
+/* According to LAM mode, set metadata in high bits */
+static inline u64 set_metadata(u64 src, u64 metadata_mask)
+{
+	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
+}
+
+static void cr4_set_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
+}
+
+static void cr4_clear_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
+}
+
+static void test_cr4_lam_set_clear(bool has_lam)
+{
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+	report((fault != has_lam) && (lam_sup_active() == has_lam),
+	       "Set CR4.LAM_SUP");
+
+	fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
+	report(!fault, "Clear CR4.LAM_SUP");
+}
+
+/* Refer to emulator.c */
+static void do_mov(void *mem)
+{
+	unsigned long t1, t2;
+
+	t1 = 0x123456789abcdefull & -1ul;
+	asm volatile("mov %[t1], (%[mem])\n\t"
+		     "mov (%[mem]), %[t2]"
+		     : [t2]"=r"(t2)
+		     : [t1]"r"(t1), [mem]"r"(mem)
+		     : "memory");
+	report(t1 == t2, "Mov result check");
+}
+
+static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
+{
+	bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+	u64 lam_mask = arg2;
+	u64 *ptr = (u64 *)arg3;
+	bool is_mmio = !!arg4;
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+	report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
+
+	ptr = (u64 *)set_metadata((u64)ptr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+	report(fault != lam_active,"Test tagged addr (%s)",
+	       is_mmio ? "MMIO" : "Memory");
+
+	return 0;
+}
+
+static void do_invlpg(void *mem)
+{
+	invlpg(mem);
+}
+
+static void do_invlpg_fep(void *mem)
+{
+	asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
+}
+
+/* invlpg with tagged address is same as NOP, no #GP expected. */
+static void test_invlpg(u64 lam_mask, void *va, bool fep)
+{
+	bool fault;
+	u64 *ptr;
+
+	ptr = (u64 *)set_metadata((u64)va, lam_mask);
+	if (fep)
+		fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
+	else
+		fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
+
+	report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
+}
+
+static void do_invpcid(void *desc)
+{
+	unsigned long type = 0;
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
+
+	asm volatile("invpcid %0, %1" :
+	                              : "m" (*desc_ptr), "r" (type)
+	                              : "memory");
+}
+
+/* LAM doesn't apply to the target address in the descriptor of invpcid */
+static void test_invpcid(u64 flags, u64 lam_mask, void *data)
+{
+	/*
+	 * Reuse the memory address for the descriptor since stack memory
+	 * address in KUT doesn't follow the kernel address space partitions.
+	 */
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
+	bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_PCID) ||
+	    !this_cpu_has(X86_FEATURE_INVPCID)) {
+		report_skip("INVPCID not supported");
+		return;
+	}
+
+	memset(desc_ptr, 0, sizeof(struct invpcid_desc));
+	desc_ptr->addr = (u64)data;
+
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(!fault, "INVPCID: untagged pointer + untagged addr");
+
+	desc_ptr->addr = set_metadata(desc_ptr->addr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: untagged pointer + tagged addr");
+
+	desc_ptr = (struct invpcid_desc *)set_metadata((u64)desc_ptr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: tagged pointer + tagged addr");
+
+	desc_ptr = (struct invpcid_desc *)data;
+	desc_ptr->addr = (u64)data;
+	desc_ptr = (struct invpcid_desc *)set_metadata((u64)desc_ptr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault != lam_active, "INVPCID: tagged pointer + untagged addr");
+}
+
+static void test_lam_sup(bool has_lam, bool fep_available)
+{
+	void *vaddr, *vaddr_mmio;
+	phys_addr_t paddr;
+	u64 lam_mask = LAM48_MASK;
+	u64 flags = 0;
+	bool fault;
+
+	vaddr = alloc_vpage();
+	vaddr_mmio = alloc_vpage();
+	paddr = virt_to_phys(alloc_page());
+	install_page(current_page_table(), paddr, vaddr);
+	install_page(current_page_table(), IORAM_BASE_PHYS, vaddr_mmio);
+
+	test_cr4_lam_set_clear(has_lam);
+
+	/* Set for the following LAM_SUP tests */
+	if (has_lam) {
+		fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+		report(!fault && lam_sup_active(), "Set CR4.LAM_SUP");
+	}
+
+	if (lam_sup_active())
+		flags |= FLAGS_LAM_ACTIVE;
+
+	if (is_la57()) {
+		flags |= FLAGS_LA57;
+		lam_mask = LAM57_MASK;
+	}
+
+	/* Test for normal memory */
+	test_ptr(flags, lam_mask, (u64)vaddr, false);
+	/* Test for MMIO to trigger instruction emulation */
+	test_ptr(flags, lam_mask, (u64)vaddr_mmio, true);
+	test_invpcid(flags, lam_mask, vaddr);
+	test_invlpg(lam_mask, vaddr, false);
+	if (fep_available)
+		test_invlpg(lam_mask, vaddr, true);
+}
+
+int main(int ac, char **av)
+{
+	bool has_lam;
+	bool fep_available = is_fep_available();
+
+	setup_vm();
+
+	has_lam = this_cpu_has(X86_FEATURE_LAM);
+	if (!has_lam)
+		report_info("This CPU doesn't support LAM feature\n");
+	else
+		report_info("This CPU supports LAM feature\n");
+
+	if (!fep_available)
+		report_skip("Skipping tests of forced emulation, "
+			    "use kvm.force_emulation_prefix=1 to enable\n");
+
+	test_lam_sup(has_lam, fep_available);
+
+	return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index f324e32..34b09eb 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -478,3 +478,13 @@ file = cet.flat
 arch = x86_64
 smp = 2
 extra_params = -enable-kvm -m 2048 -cpu host
+
+[intel-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host
+
+[intel-no-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host,-lam
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57}
  2023-04-12  7:51 [kvm-unit-tests v3 0/4] x86: Add test cases for LAM Binbin Wu
  2023-04-12  7:51 ` [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
  2023-04-12  7:51 ` [kvm-unit-tests v3 2/4] x86: Add test case for LAM_SUP Binbin Wu
@ 2023-04-12  7:51 ` Binbin Wu
  2023-04-21  5:06   ` Chao Gao
  2023-04-12  7:51 ` [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
  3 siblings, 1 reply; 13+ messages in thread
From: Binbin Wu @ 2023-04-12  7:51 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini; +Cc: binbin.wu, chao.gao, robert.hu

This unit test covers:
1. CR3 LAM bits toggles.
2. Memory/MMIO access with user mode address containing LAM metadata.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 lib/x86/processor.h |  2 ++
 x86/lam.c           | 71 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 73 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 4bb8cd7..a181e0b 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -56,7 +56,9 @@
 
 #define X86_CR3_PCID_MASK	GENMASK(11, 0)
 #define X86_CR3_LAM_U57_BIT	(61)
+#define X86_CR3_LAM_U57		BIT_ULL(X86_CR3_LAM_U57_BIT)
 #define X86_CR3_LAM_U48_BIT	(62)
+#define X86_CR3_LAM_U48		BIT_ULL(X86_CR3_LAM_U48_BIT)
 
 #define X86_CR4_VME_BIT		(0)
 #define X86_CR4_VME		BIT(X86_CR4_VME_BIT)
diff --git a/x86/lam.c b/x86/lam.c
index 63c3fde..50bcdf5 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -18,9 +18,11 @@
 #include "vm.h"
 #include "asm/io.h"
 #include "ioram.h"
+#include "usermode.h"
 
 #define LAM57_MASK	GENMASK_ULL(62, 57)
 #define LAM48_MASK	GENMASK_ULL(62, 48)
+#define CR3_LAM_BITS_MASK (X86_CR3_LAM_U48 | X86_CR3_LAM_U57)
 
 #define FLAGS_LAM_ACTIVE	BIT_ULL(0)
 #define FLAGS_LA57		BIT_ULL(1)
@@ -41,6 +43,16 @@ static inline bool lam_sup_active(void)
 	return !!(read_cr4() & X86_CR4_LAM_SUP);
 }
 
+static inline bool lam_u48_active(void)
+{
+	return (read_cr3() & CR3_LAM_BITS_MASK) == X86_CR3_LAM_U48;
+}
+
+static inline bool lam_u57_active(void)
+{
+	return !!(read_cr3() & X86_CR3_LAM_U57);
+}
+
 /* According to LAM mode, set metadata in high bits */
 static inline u64 set_metadata(u64 src, u64 metadata_mask)
 {
@@ -92,6 +104,7 @@ static void do_mov(void *mem)
 static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 {
 	bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+	bool la_57 = !!(arg1 & FLAGS_LA57);
 	u64 lam_mask = arg2;
 	u64 *ptr = (u64 *)arg3;
 	bool is_mmio = !!arg4;
@@ -105,6 +118,17 @@ static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 	report(fault != lam_active,"Test tagged addr (%s)",
 	       is_mmio ? "MMIO" : "Memory");
 
+	/*
+	 * This test case is triggered only when LAM_U57 is active and 4-level
+	 * paging is used; in that case, bits [56:47] not being all 0 triggers #GP.
+	 */
+	if (lam_active && (lam_mask == LAM57_MASK) && !la_57) {
+		ptr = (u64 *)set_metadata((u64)ptr, LAM48_MASK);
+		fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+		report(fault,  "Test non-LAM-canonical addr (%s)",
+		       is_mmio ? "MMIO" : "Memory");
+	}
+
 	return 0;
 }
 
@@ -221,6 +245,52 @@ static void test_lam_sup(bool has_lam, bool fep_available)
 		test_invlpg(lam_mask, vaddr, true);
 }
 
+static void test_lam_user_mode(bool has_lam, u64 lam_mask, u64 mem, u64 mmio)
+{
+	unsigned r;
+	bool raised_vector;
+	unsigned long cr3 = read_cr3() & ~CR3_LAM_BITS_MASK;
+	u64 flags = 0;
+
+	if (is_la57())
+		flags |= FLAGS_LA57;
+
+	if (has_lam) {
+		if (lam_mask == LAM48_MASK) {
+			r = write_cr3_safe(cr3 | X86_CR3_LAM_U48);
+			report((r == 0) && lam_u48_active(), "Set LAM_U48");
+		} else {
+			r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
+			report((r == 0) && lam_u57_active(), "Set LAM_U57");
+		}
+	}
+	if (lam_u48_active() || lam_u57_active())
+		flags |= FLAGS_LAM_ACTIVE;
+
+	run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mem,
+		    false, &raised_vector);
+	run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mmio,
+		    true, &raised_vector);
+}
+
+static void test_lam_user(bool has_lam)
+{
+	phys_addr_t paddr;
+	unsigned long cr3 = read_cr3();
+
+	/*
+	 * The physical address width is within 36 bits, so that using identical
+	 * mapping, the linear address will be considered as user mode address
+	 * from the view of LAM.
+	 */
+	paddr = virt_to_phys(alloc_page());
+	install_page((void *)cr3, paddr, (void *)paddr);
+	install_page((void *)cr3, IORAM_BASE_PHYS, (void *)IORAM_BASE_PHYS);
+
+	test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
+	test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
+}
+
 int main(int ac, char **av)
 {
 	bool has_lam;
@@ -239,6 +309,7 @@ int main(int ac, char **av)
 			    "use kvm.force_emulation_prefix=1 to enable\n");
 
 	test_lam_sup(has_lam, fep_available);
+	test_lam_user(has_lam);
 
 	return report_summary();
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM
  2023-04-12  7:51 [kvm-unit-tests v3 0/4] x86: Add test cases for LAM Binbin Wu
                   ` (2 preceding siblings ...)
  2023-04-12  7:51 ` [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
@ 2023-04-12  7:51 ` Binbin Wu
  2023-04-21  5:38   ` Chao Gao
  3 siblings, 1 reply; 13+ messages in thread
From: Binbin Wu @ 2023-04-12  7:51 UTC (permalink / raw)
  To: kvm, seanjc, pbonzini; +Cc: binbin.wu, chao.gao, robert.hu

When LAM is on, the linear address of the INVVPID operand can contain
metadata, and the linear address in the INVVPID descriptor can also
contain metadata.

The added cases use a tagged descriptor address and/or a tagged target
invalidation address to make sure the behaviors are as expected when
LAM is on.
Also, the INVVPID cases can be used as common test cases for the VMX
instruction VMExit path.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 5ee1264..381ca1c 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
 	TEST_ASSERT(!vmx_on());
 }
 
+#define LAM57_MASK	GENMASK_ULL(62, 57)
+#define LAM48_MASK	GENMASK_ULL(62, 48)
+
+static inline u64 set_metadata(u64 src, u64 metadata_mask)
+{
+	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
+}
+
+/* LAM applies to the target address inside the descriptor of invvpid */
+static void invvpid_test_lam(void)
+{
+	void *vaddr;
+	struct invvpid_operand *operand;
+	u64 lam_mask = LAM48_MASK;
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_LAM)) {
+		report_skip("LAM is not supported, skip INVVPID with LAM");
+		return;
+	}
+
+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+		lam_mask = LAM57_MASK;
+
+	vaddr = alloc_vpage();
+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+	/*
+	 * Since the stack memory address in KUT doesn't follow kernel address
+	 * space partition rule, reuse the memory address for descriptor and
+	 * the target address in the descriptor of invvpid.
+	 */
+	operand = (struct invvpid_operand *)vaddr;
+	operand->vpid = 0xffff;
+	operand->gla = (u64)vaddr;
+
+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
+	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
+		report_skip("Failed to enable LAM_SUP");
+		return;
+	}
+
+	operand = (struct invvpid_operand *)vaddr;
+	operand->gla = set_metadata(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
+
+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
+	operand->gla = (u64)vaddr;
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
+
+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
+	operand->gla = set_metadata(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
+
+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3282,6 +3341,7 @@ static void invvpid_test(void)
 	invvpid_test_pf();
 	invvpid_test_compatibility_mode();
 	invvpid_test_not_in_vmx_operation();
+	invvpid_test_lam();
 }
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
  2023-04-12  7:51 ` [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
@ 2023-04-21  2:02   ` Chao Gao
  2023-04-23  1:42     ` Binbin Wu
  0 siblings, 1 reply; 13+ messages in thread
From: Chao Gao @ 2023-04-21  2:02 UTC (permalink / raw)
  To: Binbin Wu; +Cc: kvm, seanjc, pbonzini, robert.hu

On Wed, Apr 12, 2023 at 03:51:31PM +0800, Binbin Wu wrote:
>If LAM is supported, VM entry allows CR3.LAM_U48 (bit 62) and CR3.LAM_U57
>(bit 61) to be set in CR3 field.
>
>Change the test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57
>on vmlaunch tests when LAM is supported.
>
>Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>Reviewed-by: Chao Gao <chao.gao@intel.com>
>---
> lib/x86/processor.h | 3 +++
> x86/vmx_tests.c     | 6 +++++-
> 2 files changed, 8 insertions(+), 1 deletion(-)
>
>diff --git a/lib/x86/processor.h b/lib/x86/processor.h
>index 3d58ef7..e00a32b 100644
>--- a/lib/x86/processor.h
>+++ b/lib/x86/processor.h
>@@ -55,6 +55,8 @@
> #define X86_CR0_PG		BIT(X86_CR0_PG_BIT)
> 
> #define X86_CR3_PCID_MASK	GENMASK(11, 0)
>+#define X86_CR3_LAM_U57_BIT	(61)
>+#define X86_CR3_LAM_U48_BIT	(62)
> 
> #define X86_CR4_VME_BIT		(0)
> #define X86_CR4_VME		BIT(X86_CR4_VME_BIT)
>@@ -248,6 +250,7 @@ static inline bool is_intel(void)
> #define	X86_FEATURE_SPEC_CTRL		(CPUID(0x7, 0, EDX, 26))
> #define	X86_FEATURE_ARCH_CAPABILITIES	(CPUID(0x7, 0, EDX, 29))
> #define	X86_FEATURE_PKS			(CPUID(0x7, 0, ECX, 31))
>+#define	X86_FEATURE_LAM			(CPUID(0x7, 1, EAX, 26))
> 
> /*
>  * Extended Leafs, a.k.a. AMD defined
>diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>index 7bba816..5ee1264 100644
>--- a/x86/vmx_tests.c
>+++ b/x86/vmx_tests.c
>@@ -7000,7 +7000,11 @@ static void test_host_ctl_regs(void)
> 		cr3 = cr3_saved | (1ul << i);
> 		vmcs_write(HOST_CR3, cr3);
> 		report_prefix_pushf("HOST_CR3 %lx", cr3);
>-		test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
>+		if (this_cpu_has(X86_FEATURE_LAM) &&
>+		    ((i == X86_CR3_LAM_U57_BIT) || ( i == X86_CR3_LAM_U48_BIT)))

						    ^ stray space

>+			test_vmx_vmlaunch(0);
>+		else
>+			test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
> 		report_prefix_pop();
> 	}
> 
>-- 
>2.25.1
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests v3 2/4] x86: Add test case for LAM_SUP
  2023-04-12  7:51 ` [kvm-unit-tests v3 2/4] x86: Add test case for LAM_SUP Binbin Wu
@ 2023-04-21  3:32   ` Chao Gao
  0 siblings, 0 replies; 13+ messages in thread
From: Chao Gao @ 2023-04-21  3:32 UTC (permalink / raw)
  To: Binbin Wu; +Cc: kvm, seanjc, pbonzini, robert.hu

On Wed, Apr 12, 2023 at 03:51:32PM +0800, Binbin Wu wrote:
>From: Robert Hoo <robert.hu@linux.intel.com>
>
>This unit test covers:
>1. CR4.LAM_SUP toggles.
>2. Memory & MMIO access with supervisor mode address with LAM metadata.
>3. INVLPG memory operand doesn't contain LAM meta data, if the address
>   is non-canonical form then the INVLPG is the same as a NOP (no #GP).
>4. INVPCID memory operand (descriptor pointer) could contain LAM meta data,
>   however, the address in the descriptor should be canonical.
>
>In x86/unittests.cfg, add 2 test cases/guest conf, with and without LAM.
>
>LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368, Chap 7
>LINEAR ADDRESS MASKING (LAM)
>
>Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
>Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
>Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

one nit below

>+static void do_invpcid(void *desc)
>+{
>+	unsigned long type = 0;

the local variable isn't needed ...

>+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
>+
>+	asm volatile("invpcid %0, %1" :
>+	                              : "m" (*desc_ptr), "r" (type)

You can simply use
	"r" (0)
.

>+	                              : "memory");
>+}

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57}
  2023-04-12  7:51 ` [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
@ 2023-04-21  5:06   ` Chao Gao
  2023-04-23  3:07     ` Binbin Wu
  0 siblings, 1 reply; 13+ messages in thread
From: Chao Gao @ 2023-04-21  5:06 UTC (permalink / raw)
  To: Binbin Wu; +Cc: kvm, seanjc, pbonzini, robert.hu

On Wed, Apr 12, 2023 at 03:51:33PM +0800, Binbin Wu wrote:
>This unit test covers:
>1. CR3 LAM bits toggles.
>2. Memory/MMIO access with user mode address containing LAM metadata.
>
>Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

two nits below:

>---
>+static void test_lam_user(bool has_lam)
>+{
>+	phys_addr_t paddr;
>+	unsigned long cr3 = read_cr3();
>+
>+	/*
>+	 * The physical address width is within 36 bits, so that using identical
>+	 * mapping, the linear address will be considered as user mode address
>+	 * from the view of LAM.
>+	 */

Why 36 bits (i.e., 64G)?

would you mind adding a comment in patch 2 to explain why the virtual
addresses are kernel mode addresses?

>+	paddr = virt_to_phys(alloc_page());
>+	install_page((void *)cr3, paddr, (void *)paddr);
>+	install_page((void *)cr3, IORAM_BASE_PHYS, (void *)IORAM_BASE_PHYS);

are the two lines necessary?

>+
>+	test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
>+	test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
>+}
>+
> int main(int ac, char **av)
> {
> 	bool has_lam;
>@@ -239,6 +309,7 @@ int main(int ac, char **av)
> 			    "use kvm.force_emulation_prefix=1 to enable\n");
> 
> 	test_lam_sup(has_lam, fep_available);
>+	test_lam_user(has_lam);
> 
> 	return report_summary();
> }
>-- 
>2.25.1
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM
  2023-04-12  7:51 ` [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
@ 2023-04-21  5:38   ` Chao Gao
  2023-04-23  3:41     ` Binbin Wu
  0 siblings, 1 reply; 13+ messages in thread
From: Chao Gao @ 2023-04-21  5:38 UTC (permalink / raw)
  To: Binbin Wu; +Cc: kvm, seanjc, pbonzini, robert.hu

On Wed, Apr 12, 2023 at 03:51:34PM +0800, Binbin Wu wrote:
>When LAM is on, the linear address of INVVPID operand can contain
>metadata, and the linear address in the INVVPID descriptor can
>contain metadata.
>
>The added cases use tagged descriptor address or/and tagged target
>invalidation address to make sure the behaviors are expected when
>LAM is on.
>Also, INVVPID cases can be used as the common test cases for VMX
>instruction VMExits.
>
>Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

with a few cosmetic comments below:

>---
> x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 60 insertions(+)
>
>diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>index 5ee1264..381ca1c 100644
>--- a/x86/vmx_tests.c
>+++ b/x86/vmx_tests.c
>@@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
> 	TEST_ASSERT(!vmx_on());
> }
> 
>+#define LAM57_MASK	GENMASK_ULL(62, 57)
>+#define LAM48_MASK	GENMASK_ULL(62, 48)
>+
>+static inline u64 set_metadata(u64 src, u64 metadata_mask)
>+{
>+	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
>+}

Can you move the duplicate defintions and functions to a header file?

>+
>+/* LAM applies to the target address inside the descriptor of invvpid */
>+static void invvpid_test_lam(void)
>+{
>+	void *vaddr;
>+	struct invvpid_operand *operand;
>+	u64 lam_mask = LAM48_MASK;
>+	bool fault;
>+
>+	if (!this_cpu_has(X86_FEATURE_LAM)) {
>+		report_skip("LAM is not supported, skip INVVPID with LAM");
>+		return;
>+	}

...

>+
>+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>+		lam_mask = LAM57_MASK;
>+
>+	vaddr = alloc_vpage();
>+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>+	/*
>+	 * Since the stack memory address in KUT doesn't follow kernel address
>+	 * space partition rule, reuse the memory address for descriptor and
>+	 * the target address in the descriptor of invvpid.
>+	 */
>+	operand = (struct invvpid_operand *)vaddr;
>+	operand->vpid = 0xffff;
>+	operand->gla = (u64)vaddr;
>+
>+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
>+	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
>+		report_skip("Failed to enable LAM_SUP");
>+		return;
>+	}

It might be better to enable LAM_SUP right after the above check for the LAM
CPUID bit. And there is no need to verify the result because there is already
a dedicated test case in patch 2.

>+
>+	operand = (struct invvpid_operand *)vaddr;
>+	operand->gla = set_metadata(operand->gla, lam_mask);
>+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>+	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
>+
>+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>+	operand->gla = (u64)vaddr;
>+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
>+
>+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>+	operand->gla = set_metadata(operand->gla, lam_mask);
>+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
>+
>+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
>+}
>+
> /*
>  * This does not test real-address mode, virtual-8086 mode, protected mode,
>  * or CPL > 0.
>@@ -3282,6 +3341,7 @@ static void invvpid_test(void)
> 	invvpid_test_pf();
> 	invvpid_test_compatibility_mode();
> 	invvpid_test_not_in_vmx_operation();
>+	invvpid_test_lam();

operand->gla is checked only in INVVPID_ADDR mode. So, the LAM test should be
moved under "if (types & (1u << INVVPID_ADDR))" a few lines above.

> }
> 
> /*
>-- 
>2.25.1
>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
  2023-04-21  2:02   ` Chao Gao
@ 2023-04-23  1:42     ` Binbin Wu
  0 siblings, 0 replies; 13+ messages in thread
From: Binbin Wu @ 2023-04-23  1:42 UTC (permalink / raw)
  To: Chao Gao; +Cc: kvm, seanjc, pbonzini, robert.hu


On 4/21/2023 10:02 AM, Chao Gao wrote:
> On Wed, Apr 12, 2023 at 03:51:31PM +0800, Binbin Wu wrote:
>> If LAM is supported, VM entry allows CR3.LAM_U48 (bit 62) and CR3.LAM_U57
>> (bit 61) to be set in CR3 field.
>>
>> Change the test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57
>> on vmlaunch tests when LAM is supported.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> Reviewed-by: Chao Gao <chao.gao@intel.com>
>> ---
>> lib/x86/processor.h | 3 +++
>> x86/vmx_tests.c     | 6 +++++-
>> 2 files changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/x86/processor.h b/lib/x86/processor.h
>> index 3d58ef7..e00a32b 100644
>> --- a/lib/x86/processor.h
>> +++ b/lib/x86/processor.h
>> @@ -55,6 +55,8 @@
>> #define X86_CR0_PG		BIT(X86_CR0_PG_BIT)
>>
>> #define X86_CR3_PCID_MASK	GENMASK(11, 0)
>> +#define X86_CR3_LAM_U57_BIT	(61)
>> +#define X86_CR3_LAM_U48_BIT	(62)
>>
>> #define X86_CR4_VME_BIT		(0)
>> #define X86_CR4_VME		BIT(X86_CR4_VME_BIT)
>> @@ -248,6 +250,7 @@ static inline bool is_intel(void)
>> #define	X86_FEATURE_SPEC_CTRL		(CPUID(0x7, 0, EDX, 26))
>> #define	X86_FEATURE_ARCH_CAPABILITIES	(CPUID(0x7, 0, EDX, 29))
>> #define	X86_FEATURE_PKS			(CPUID(0x7, 0, ECX, 31))
>> +#define	X86_FEATURE_LAM			(CPUID(0x7, 1, EAX, 26))
>>
>> /*
>>   * Extended Leafs, a.k.a. AMD defined
>> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>> index 7bba816..5ee1264 100644
>> --- a/x86/vmx_tests.c
>> +++ b/x86/vmx_tests.c
>> @@ -7000,7 +7000,11 @@ static void test_host_ctl_regs(void)
>> 		cr3 = cr3_saved | (1ul << i);
>> 		vmcs_write(HOST_CR3, cr3);
>> 		report_prefix_pushf("HOST_CR3 %lx", cr3);
>> -		test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
>> +		if (this_cpu_has(X86_FEATURE_LAM) &&
>> +		    ((i == X86_CR3_LAM_U57_BIT) || ( i == X86_CR3_LAM_U48_BIT)))
> 						    ^ stray space


Thanks, will remove it.

>
>> +			test_vmx_vmlaunch(0);
>> +		else
>> +			test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
>> 		report_prefix_pop();
>> 	}
>>
>> -- 
>> 2.25.1
>>


* Re: [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57}
  2023-04-21  5:06   ` Chao Gao
@ 2023-04-23  3:07     ` Binbin Wu
  0 siblings, 0 replies; 13+ messages in thread
From: Binbin Wu @ 2023-04-23  3:07 UTC (permalink / raw)
  To: Chao Gao; +Cc: kvm, seanjc, pbonzini, robert.hu


On 4/21/2023 1:06 PM, Chao Gao wrote:
> On Wed, Apr 12, 2023 at 03:51:33PM +0800, Binbin Wu wrote:
>> This unit test covers:
>> 1. CR3 LAM bits toggles.
>> 2. Memory/MMIO access with user mode address containing LAM metadata.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
>
> two nits below:
>
>> ---
>> +static void test_lam_user(bool has_lam)
>> +{
>> +	phys_addr_t paddr;
>> +	unsigned long cr3 = read_cr3();
>> +
>> +	/*
>> +	 * The physical address width is within 36 bits, so that using identical
>> +	 * mapping, the linear address will be considered as user mode address
>> +	 * from the view of LAM.
>> +	 */
> Why 36 bits (i.e., 64G)?

KUT memory allocator supports 4 Memory Areas:
AREA_NORMAL, AREA_HIGH, AREA_LOW and AREA_LOWEST.
AREA_NORMAL has the maximum address width, which is 36 bits.

How about changing the comment to this:

+	/*
+	 * The physical address width supported by the KUT memory allocator is
+	 * within 36 bits, so that with identity mapping, the linear address
+	 * will be considered a user mode address from the view of LAM.
+	 */


>
> would you mind adding a comment in patch 2 to explain why the virtual
> addresses are kernel mode addresses?

OK. Will add the following comments:

     /*
      * KUT initializes vfree_top to 0 for X86_64, and each virtual address
      * allocation grows downward from vfree_top. It's guaranteed that the
      * return value of alloc_vpage() is considered a kernel mode address
      * and canonical, since only a small amount of virtual address space
      * is allocated in this test.
      */



>
>> +	paddr = virt_to_phys(alloc_page());
>> +	install_page((void *)cr3, paddr, (void *)paddr);
>> +	install_page((void *)cr3, IORAM_BASE_PHYS, (void *)IORAM_BASE_PHYS);
> are the two lines necessary?

The two lines are not necessary.
See setup_mmu(): it already sets up the identity mapping for available
physical memory/MMIO.
Will remove them.


>
>> +
>> +	test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
>> +	test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
>> +}
>> +
>> int main(int ac, char **av)
>> {
>> 	bool has_lam;
>> @@ -239,6 +309,7 @@ int main(int ac, char **av)
>> 			    "use kvm.force_emulation_prefix=1 to enable\n");
>>
>> 	test_lam_sup(has_lam, fep_available);
>> +	test_lam_user(has_lam);
>>
>> 	return report_summary();
>> }
>> -- 
>> 2.25.1
>>


* Re: [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM
  2023-04-21  5:38   ` Chao Gao
@ 2023-04-23  3:41     ` Binbin Wu
  2023-04-23  6:13       ` Binbin Wu
  0 siblings, 1 reply; 13+ messages in thread
From: Binbin Wu @ 2023-04-23  3:41 UTC (permalink / raw)
  To: Chao Gao; +Cc: kvm, seanjc, pbonzini, robert.hu


On 4/21/2023 1:38 PM, Chao Gao wrote:
> On Wed, Apr 12, 2023 at 03:51:34PM +0800, Binbin Wu wrote:
>> When LAM is on, the linear address of INVVPID operand can contain
>> metadata, and the linear address in the INVVPID descriptor can
>> contain metadata.
>>
>> The added cases use tagged descriptor address or/and tagged target
>> invalidation address to make sure the behaviors are expected when
>> LAM is on.
>> Also, INVVPID cases can be used as the common test cases for VMX
>> instruction VMExits.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
>
> with a few cosmetic comments below:
>
>> ---
>> x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 60 insertions(+)
>>
>> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>> index 5ee1264..381ca1c 100644
>> --- a/x86/vmx_tests.c
>> +++ b/x86/vmx_tests.c
>> @@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
>> 	TEST_ASSERT(!vmx_on());
>> }
>>
>> +#define LAM57_MASK	GENMASK_ULL(62, 57)
>> +#define LAM48_MASK	GENMASK_ULL(62, 48)
>> +
>> +static inline u64 set_metadata(u64 src, u64 metadata_mask)
>> +{
>> +	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
>> +}
> Can you move the duplicate definitions and functions to a header file?

Then add a new header file lam.h?
Didn't find a suitable existing header file to add these definitions.


>
>> +
>> +/* LAM applies to the target address inside the descriptor of invvpid */
>> +static void invvpid_test_lam(void)
>> +{
>> +	void *vaddr;
>> +	struct invvpid_operand *operand;
>> +	u64 lam_mask = LAM48_MASK;
>> +	bool fault;
>> +
>> +	if (!this_cpu_has(X86_FEATURE_LAM)) {
>> +		report_skip("LAM is not supported, skip INVVPID with LAM");
>> +		return;
>> +	}
> ...
>
>> +
>> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>> +		lam_mask = LAM57_MASK;
>> +
>> +	vaddr = alloc_vpage();
>> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>> +	/*
>> +	 * Since the stack memory address in KUT doesn't follow kernel address
>> +	 * space partition rule, reuse the memory address for descriptor and
>> +	 * the target address in the descriptor of invvpid.
>> +	 */
>> +	operand = (struct invvpid_operand *)vaddr;
>> +	operand->vpid = 0xffff;
>> +	operand->gla = (u64)vaddr;
>> +
>> +	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
>> +	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
>> +		report_skip("Failed to enable LAM_SUP");
>> +		return;
>> +	}
> It might be better to enable LAM_SUP right after above check for the LAM CPUID
> bit. And no need to verify the result because there is a dedicated test case
> already in patch 2.

OK.


>> +
>> +	operand = (struct invvpid_operand *)vaddr;
>> +	operand->gla = set_metadata(operand->gla, lam_mask);
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
>> +
>> +	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>> +	operand->gla = (u64)vaddr;
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
>> +
>> +	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>> +	operand->gla = set_metadata(operand->gla, lam_mask);
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
>> +
>> +	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
>> +}
>> +
>> /*
>>   * This does not test real-address mode, virtual-8086 mode, protected mode,
>>   * or CPL > 0.
>> @@ -3282,6 +3341,7 @@ static void invvpid_test(void)
>> 	invvpid_test_pf();
>> 	invvpid_test_compatibility_mode();
>> 	invvpid_test_not_in_vmx_operation();
>> +	invvpid_test_lam();
> operand->gla is checked only in INVVPID_ADDR mode. So, the lam test should be
> moved under "if (types & (1u << INVVPID_ADDR))" a few lines above.

Yes, will update it.


>
>> }
>>
>> /*
>> -- 
>> 2.25.1
>>


* Re: [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM
  2023-04-23  3:41     ` Binbin Wu
@ 2023-04-23  6:13       ` Binbin Wu
  0 siblings, 0 replies; 13+ messages in thread
From: Binbin Wu @ 2023-04-23  6:13 UTC (permalink / raw)
  To: Chao Gao; +Cc: kvm, seanjc, pbonzini, robert.hu


On 4/23/2023 11:41 AM, Binbin Wu wrote:
>
> On 4/21/2023 1:38 PM, Chao Gao wrote:
>> On Wed, Apr 12, 2023 at 03:51:34PM +0800, Binbin Wu wrote:
>>> When LAM is on, the linear address of INVVPID operand can contain
>>> metadata, and the linear address in the INVVPID descriptor can
>>> contain metadata.
>>>
>>> The added cases use tagged descriptor address or/and tagged target
>>> invalidation address to make sure the behaviors are expected when
>>> LAM is on.
>>> Also, INVVPID cases can be used as the common test cases for VMX
>>> instruction VMExits.
>>>
>>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> Reviewed-by: Chao Gao <chao.gao@intel.com>
>>
>> with a few cosmetic comments below:
>>
>>> ---
>>> x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 60 insertions(+)
>>>
>>> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>>> index 5ee1264..381ca1c 100644
>>> --- a/x86/vmx_tests.c
>>> +++ b/x86/vmx_tests.c
>>> @@ -3225,6 +3225,65 @@ static void 
>>> invvpid_test_not_in_vmx_operation(void)
>>>     TEST_ASSERT(!vmx_on());
>>> }
>>>
>>> +#define LAM57_MASK    GENMASK_ULL(62, 57)
>>> +#define LAM48_MASK    GENMASK_ULL(62, 48)
>>> +
>>> +static inline u64 set_metadata(u64 src, u64 metadata_mask)
>>> +{
>>> +    return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
>>> +}
>> Can you move the duplicate definitions and functions to a header file?
>
> Then add a new header file lam.h?
> Didn't find a suitable existant header file to add these definitions.

I am about to move the definitions to lib/x86/processor.h and
rename set_metadata() to set_la_non_canonical() to be more generic.


index e00a32b..236b537 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -8,6 +8,14 @@
  #include <stdint.h>

  #define NONCANONICAL    0xaaaaaaaaaaaaaaaaull
+#define LAM57_MASK    GENMASK_ULL(62, 57)
+#define LAM48_MASK    GENMASK_ULL(62, 48)
+
+/* Set metadata with non-canonical pattern in mask bits of a linear address */
+static inline u64 set_la_non_canonical(u64 src, u64 mask)
+{
+    return (src & ~mask) | (NONCANONICAL & mask);
+}



>
>
>>
>>> +
>>> +/* LAM applies to the target address inside the descriptor of 
>>> invvpid */
>>> +static void invvpid_test_lam(void)
>>> +{
>>> +    void *vaddr;
>>> +    struct invvpid_operand *operand;
>>> +    u64 lam_mask = LAM48_MASK;
>>> +    bool fault;
>>> +
>>> +    if (!this_cpu_has(X86_FEATURE_LAM)) {
>>> +        report_skip("LAM is not supported, skip INVVPID with LAM");
>>> +        return;
>>> +    }
>> ...
>>
>>> +
>>> +    if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>>> +        lam_mask = LAM57_MASK;
>>> +
>>> +    vaddr = alloc_vpage();
>>> +    install_page(current_page_table(), virt_to_phys(alloc_page()), 
>>> vaddr);
>>> +    /*
>>> +     * Since the stack memory address in KUT doesn't follow kernel 
>>> address
>>> +     * space partition rule, reuse the memory address for 
>>> descriptor and
>>> +     * the target address in the descriptor of invvpid.
>>> +     */
>>> +    operand = (struct invvpid_operand *)vaddr;
>>> +    operand->vpid = 0xffff;
>>> +    operand->gla = (u64)vaddr;
>>> +
>>> +    write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
>>> +    if (!(read_cr4() & X86_CR4_LAM_SUP)) {
>>> +        report_skip("Failed to enable LAM_SUP");
>>> +        return;
>>> +    }
>> It might be better to enable LAM_SUP right after above check for the 
>> LAM CPUID
>> bit. And no need to verify the result because there is a dedicated 
>> test case
>> already in patch 2.
>
> OK.
>
>
>>> +
>>> +    operand = (struct invvpid_operand *)vaddr;
>>> +    operand->gla = set_metadata(operand->gla, lam_mask);
>>> +    fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +    report(!fault, "INVVPID (LAM on): untagged pointer + tagged 
>>> addr");
>>> +
>>> +    operand = (struct invvpid_operand *)set_metadata((u64)operand, 
>>> lam_mask);
>>> +    operand->gla = (u64)vaddr;
>>> +    fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +    report(!fault, "INVVPID (LAM on): tagged pointer + untagged 
>>> addr");
>>> +
>>> +    operand = (struct invvpid_operand *)set_metadata((u64)operand, 
>>> lam_mask);
>>> +    operand->gla = set_metadata(operand->gla, lam_mask);
>>> +    fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +    report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
>>> +
>>> +    write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
>>> +}
>>> +
>>> /*
>>>   * This does not test real-address mode, virtual-8086 mode, 
>>> protected mode,
>>>   * or CPL > 0.
>>> @@ -3282,6 +3341,7 @@ static void invvpid_test(void)
>>>     invvpid_test_pf();
>>>     invvpid_test_compatibility_mode();
>>>     invvpid_test_not_in_vmx_operation();
>>> +    invvpid_test_lam();
>> operand->gla is checked only in INVVPID_ADDR mode. So, the lam test 
>> should be
>> moved under "if (types & (1u << INVVPID_ADDR))" a few lines above.
>
> Yes, will update it.
>
>
>>
>>> }
>>>
>>> /*
>>> -- 
>>> 2.25.1
>>>


end of thread, other threads:[~2023-04-23  6:13 UTC | newest]

Thread overview: 13+ messages
2023-04-12  7:51 [kvm-unit-tests v3 0/4] x86: Add test cases for LAM Binbin Wu
2023-04-12  7:51 ` [kvm-unit-tests v3 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
2023-04-21  2:02   ` Chao Gao
2023-04-23  1:42     ` Binbin Wu
2023-04-12  7:51 ` [kvm-unit-tests v3 2/4] x86: Add test case for LAM_SUP Binbin Wu
2023-04-21  3:32   ` Chao Gao
2023-04-12  7:51 ` [kvm-unit-tests v3 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
2023-04-21  5:06   ` Chao Gao
2023-04-23  3:07     ` Binbin Wu
2023-04-12  7:51 ` [kvm-unit-tests v3 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
2023-04-21  5:38   ` Chao Gao
2023-04-23  3:41     ` Binbin Wu
2023-04-23  6:13       ` Binbin Wu
