public inbox for ltp@lists.linux.it
* [LTP] [PATCH v2 0/4] Add functional test for AMD VMSAVE/VMLOAD instructions
@ 2024-05-14 12:07 Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 1/4] KVM: Disable EBP register use in 32bit code Martin Doucha
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Martin Doucha @ 2024-05-14 12:07 UTC (permalink / raw)
  To: ltp

Thanks to a minor bug in the LTP KVM library, the LTP test kvm_svm02 found
a kernel bug in the emulation of the VMSAVE and VMLOAD instructions in nested
VMs. Add a thorough functional test for both instructions which can detect and
pinpoint emulation bugs.

Also implement basic printf-like formatting for tst_res() and tst_brk() so
that the test can print incorrect register values, simplifying result analysis.
Only the standard integer and character conversions are supported; floating
point conversions, field alignment and padding are not implemented.

Martin Doucha (4):
  KVM: Disable EBP register use in 32bit code
  KVM: Implement strchr() and basic sprintf()
  KVM: Implement printf-like formatting for tst_res() and tst_brk()
  KVM: Add functional test for VMSAVE/VMLOAD instructions

 configure.ac                             |   2 +
 include/mk/config.mk.in                  |   1 +
 runtest/kvm                              |   1 +
 testcases/kernel/kvm/.gitignore          |   1 +
 testcases/kernel/kvm/Makefile            |   4 +
 testcases/kernel/kvm/include/kvm_guest.h |  19 +-
 testcases/kernel/kvm/kvm_svm04.c         | 307 ++++++++++++++++++++
 testcases/kernel/kvm/lib_guest.c         | 348 ++++++++++++++++++++++-
 8 files changed, 675 insertions(+), 8 deletions(-)
 create mode 100644 testcases/kernel/kvm/kvm_svm04.c

-- 
2.44.0


-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* [LTP] [PATCH v2 1/4] KVM: Disable EBP register use in 32bit code
  2024-05-14 12:07 [LTP] [PATCH v2 0/4] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
@ 2024-05-14 12:07 ` Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 2/4] KVM: Implement strchr() and basic sprintf() Martin Doucha
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Martin Doucha @ 2024-05-14 12:07 UTC (permalink / raw)
  To: ltp

The EBP register addresses the stack segment by default, but GCC also uses
it to access the data segment without the proper segment override prefix.
This works fine on most systems because the stack and data segments are
usually identical. However, the KVM test environment intentionally enforces
strict limits on the stack segment, so accessing the data segment through
an unprefixed EBP would trigger a stack segment fault exception in 32-bit
LTP builds (stack segment limits are ignored in 64-bit mode).
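A rough sketch of the problematic pattern (hypothetical disassembly for illustration; actual codegen varies by compiler version and flags):

```asm
; Without -ffixed-ebp, GCC may repurpose EBP as a data pointer:
mov   ebp, offset my_array   ; EBP now points into the data segment
mov   eax, [ebp+4]           ; EBP-based addressing defaults to SS:,
                             ; so this is checked against the
                             ; restricted stack segment limit -> #SS
; An explicit override would make the access legal:
mov   eax, ds:[ebp+4]        ; DS: prefix -> data segment limit
```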

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---

Changes since v1:
- Detect -ffixed-ebp support in configure script and disable EBP conditionally

 configure.ac                  | 2 ++
 include/mk/config.mk.in       | 1 +
 testcases/kernel/kvm/Makefile | 4 ++++
 3 files changed, 7 insertions(+)

diff --git a/configure.ac b/configure.ac
index 1f7aa70bd..6d7009763 100644
--- a/configure.ac
+++ b/configure.ac
@@ -393,7 +393,9 @@ LTP_CHECK_SYSCALL_FCNTL
 LTP_CHECK_FSVERITY
 
 AX_CHECK_COMPILE_FLAG([-no-pie], [LTP_CFLAGS_NOPIE=1])
+AX_CHECK_COMPILE_FLAG([-ffixed-ebp], [LTP_CFLAGS_FFIXED_EBP=1])
 AC_SUBST([LTP_CFLAGS_NOPIE])
+AC_SUBST([LTP_CFLAGS_FFIXED_EBP])
 
 if test "x$with_numa" = xyes; then
 	LTP_CHECK_SYSCALL_NUMA
diff --git a/include/mk/config.mk.in b/include/mk/config.mk.in
index 145b887fa..f6e02eaeb 100644
--- a/include/mk/config.mk.in
+++ b/include/mk/config.mk.in
@@ -86,6 +86,7 @@ LDFLAGS			+= $(WLDFLAGS)
 CFLAGS			+= $(DEBUG_CFLAGS) $(OPT_CFLAGS) $(WCFLAGS) $(STDCFLAGS)
 
 LTP_CFLAGS_NOPIE	:= @LTP_CFLAGS_NOPIE@
+LTP_CFLAGS_FFIXED_EBP	:= @LTP_CFLAGS_FFIXED_EBP@
 
 ifeq ($(strip $(HOST_CFLAGS)),)
 HOST_CFLAGS := $(CFLAGS)
diff --git a/testcases/kernel/kvm/Makefile b/testcases/kernel/kvm/Makefile
index ce4a5ede2..07bdd9705 100644
--- a/testcases/kernel/kvm/Makefile
+++ b/testcases/kernel/kvm/Makefile
@@ -24,6 +24,10 @@ endif
 ifeq ($(HOST_CPU),x86)
 	GUEST_CFLAGS += -m32
 	ASFLAGS += --32
+
+	ifdef LTP_CFLAGS_FFIXED_EBP
+		GUEST_CFLAGS += -ffixed-ebp
+	endif
 endif
 
 # Some distros enable -pie by default. That breaks KVM payload linking.
-- 
2.44.0



* [LTP] [PATCH v2 2/4] KVM: Implement strchr() and basic sprintf()
  2024-05-14 12:07 [LTP] [PATCH v2 0/4] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 1/4] KVM: Disable EBP register use in 32bit code Martin Doucha
@ 2024-05-14 12:07 ` Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 3/4] KVM: Implement printf-like formatting for tst_res() and tst_brk() Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 4/4] KVM: Add functional test for VMSAVE/VMLOAD instructions Martin Doucha
  3 siblings, 0 replies; 6+ messages in thread
From: Martin Doucha @ 2024-05-14 12:07 UTC (permalink / raw)
  To: ltp

Add a basic implementation of sprintf() that supports string, pointer
and integer arguments but lacks advanced formatting options such as
field alignment and padding.

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---

Changes since v1:
- #include <stdarg.h> instead of defining custom va_list macros

The C standard requires that <stdarg.h> be available even in a freestanding
environment, so it is safe to #include it in KVM guest code.

 testcases/kernel/kvm/include/kvm_guest.h |   7 +
 testcases/kernel/kvm/lib_guest.c         | 312 +++++++++++++++++++++++
 2 files changed, 319 insertions(+)

diff --git a/testcases/kernel/kvm/include/kvm_guest.h b/testcases/kernel/kvm/include/kvm_guest.h
index 96f246155..3cfafa313 100644
--- a/testcases/kernel/kvm/include/kvm_guest.h
+++ b/testcases/kernel/kvm/include/kvm_guest.h
@@ -8,6 +8,8 @@
 #ifndef KVM_GUEST_H_
 #define KVM_GUEST_H_
 
+#include <stdarg.h>
+
 /* The main LTP include dir is intentionally excluded during payload build */
 #include "../../../../include/tst_res_flags.h"
 #undef TERRNO
@@ -49,6 +51,11 @@ void *memcpy(void *dest, const void *src, size_t size);
 char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 size_t strlen(const char *str);
+char *strchr(const char *s, int c);
+char *strrchr(const char *s, int c);
+
+int vsprintf(char *dest, const char *fmt, va_list ap);
+int sprintf(char *dest, const char *fmt, ...);
 
 /* Exit the VM by looping on a HLT instruction forever */
 void kvm_exit(void) __attribute__((noreturn));
diff --git a/testcases/kernel/kvm/lib_guest.c b/testcases/kernel/kvm/lib_guest.c
index f3e21d3d6..73a76ccb1 100644
--- a/testcases/kernel/kvm/lib_guest.c
+++ b/testcases/kernel/kvm/lib_guest.c
@@ -76,6 +76,74 @@ size_t strlen(const char *str)
 	return ret;
 }
 
+char *strchr(const char *s, int c)
+{
+	for (; *s; s++) {
+		if (*s == c)
+			return (char *)s;
+	}
+
+	return NULL;
+}
+
+char *strrchr(const char *s, int c)
+{
+	const char *ret = NULL;
+
+	for (; *s; s++) {
+		if (*s == c)
+			ret = s;
+	}
+
+	return (char *)ret;
+}
+
+#if defined(__x86_64__) && !defined(__ILP32__)
+uint64_t u64divu16(uint64_t a, uint16_t b)
+{
+	return a / b;
+}
+
+unsigned int u64modu16(uint64_t a, uint16_t b)
+{
+	return a % b;
+}
+
+#else /* defined(__x86_64__) && !defined(__ILP32__) */
+
+/* u64 short division helpers to avoid need to link libgcc on 32bit archs */
+uint64_t u64divu16(uint64_t a, uint16_t b)
+{
+	uint64_t ret = 0;
+	uint32_t tmp = a >> 32;
+
+	ret = tmp / b;
+	ret <<= 32;
+	tmp %= b;
+	tmp <<= 16;
+	tmp |= (a >> 16) & 0xffff;
+	ret |= (tmp / b) << 16;
+	tmp %= b;
+	tmp <<= 16;
+	tmp |= a & 0xffff;
+	ret |= tmp / b;
+	return ret;
+}
+
+unsigned int u64modu16(uint64_t a, uint16_t b)
+{
+	uint32_t tmp = a >> 32;
+
+	tmp %= b;
+	tmp <<= 16;
+	tmp |= (a >> 16) & 0xffff;
+	tmp %= b;
+	tmp <<= 16;
+	tmp |= a & 0xffff;
+	return tmp % b;
+}
+#endif /* defined(__x86_64__) && !defined(__ILP32__) */
+
 char *ptr2hex(char *dest, uintptr_t val)
 {
 	unsigned int i;
@@ -95,6 +163,250 @@ char *ptr2hex(char *dest, uintptr_t val)
 	return ret;
 }
 
+char *u64tostr(char *dest, uint64_t val, uint16_t base, int caps)
+{
+	unsigned int i;
+	uintptr_t tmp = u64divu16(val, base);
+	char hex = caps ? 'A' : 'a';
+	char *ret = dest;
+
+	for (i = 1; tmp; i++, tmp = u64divu16(tmp, base))
+		;
+
+	dest[i] = '\0';
+
+	do {
+		tmp = u64modu16(val, base);
+		dest[--i] = tmp + (tmp >= 10 ? hex - 10 : '0');
+		val = u64divu16(val, base);
+	} while (i);
+
+	return ret;
+}
+
+char *i64tostr(char *dest, int64_t val)
+{
+	if (val < 0) {
+		dest[0] = '-';
+		u64tostr(dest + 1, -val, 10, 0);
+		return dest;
+	}
+
+	return u64tostr(dest, val, 10, 0);
+}
+
+int vsprintf(char *dest, const char *fmt, va_list ap)
+{
+	va_list args;
+	int ret = 0;
+	char conv;
+	uint64_t u64val = 0;
+	int64_t i64val = 0;
+	const char * const uint_conv = "ouxX";
+
+	va_copy(args, ap);
+
+	for (; *fmt; fmt++) {
+		if (*fmt != '%') {
+			dest[ret++] = *fmt;
+			continue;
+		}
+
+		conv = 0;
+		fmt++;
+
+		switch (*fmt) {
+		case '%':
+			dest[ret++] = *fmt;
+			break;
+
+		case 'c':
+			dest[ret++] = va_arg(args, int);
+			break;
+
+		case 's':
+			strcpy(dest + ret, va_arg(args, const char *));
+			ret += strlen(dest + ret);
+			break;
+
+		case 'p':
+			strcpy(dest + ret, "0x");
+			ptr2hex(dest + ret + 2,
+				(uintptr_t)va_arg(args, void *));
+			ret += strlen(dest + ret);
+			break;
+
+		case 'l':
+			fmt++;
+
+			switch (*fmt) {
+			case 'l':
+				fmt++;
+
+				if (*fmt == 'd' || *fmt == 'i') {
+					i64val = va_arg(args, long long);
+					conv = *fmt;
+					break;
+				}
+
+				if (strchr(uint_conv, *fmt)) {
+					u64val = va_arg(args,
+						unsigned long long);
+					conv = *fmt;
+					break;
+				}
+
+				va_end(args);
+				return -1;
+
+			case 'd':
+			case 'i':
+				i64val = va_arg(args, long);
+				conv = *fmt;
+				break;
+
+			default:
+				if (strchr(uint_conv, *fmt)) {
+					u64val = va_arg(args,
+						unsigned long);
+					conv = *fmt;
+					break;
+				}
+
+				va_end(args);
+				return -1;
+			}
+			break;
+
+		case 'h':
+			fmt++;
+
+			switch (*fmt) {
+			case 'h':
+				fmt++;
+
+				if (*fmt == 'd' || *fmt == 'i') {
+					i64val = (signed char)va_arg(args, int);
+					conv = *fmt;
+					break;
+				}
+
+				if (strchr(uint_conv, *fmt)) {
+					u64val = (unsigned char)va_arg(args,
+						unsigned int);
+					conv = *fmt;
+					break;
+				}
+
+				va_end(args);
+				return -1;
+
+			case 'd':
+			case 'i':
+				i64val = (short int)va_arg(args, int);
+				conv = *fmt;
+				break;
+
+			default:
+				if (strchr(uint_conv, *fmt)) {
+					u64val = (unsigned short int)va_arg(
+						args, unsigned int);
+					conv = *fmt;
+					break;
+				}
+
+				va_end(args);
+				return -1;
+			}
+			break;
+
+		case 'z':
+			fmt++;
+
+			if (*fmt == 'd' || *fmt == 'i') {
+				i64val = va_arg(args, ssize_t);
+				conv = *fmt;
+				break;
+			}
+
+			if (strchr(uint_conv, *fmt)) {
+				u64val = va_arg(args, size_t);
+				conv = *fmt;
+				break;
+			}
+
+			va_end(args);
+			return -1;
+
+		case 'd':
+		case 'i':
+			i64val = va_arg(args, int);
+			conv = *fmt;
+			break;
+
+		default:
+			if (strchr(uint_conv, *fmt)) {
+				u64val = va_arg(args, unsigned int);
+				conv = *fmt;
+				break;
+			}
+
+			va_end(args);
+			return -1;
+		}
+
+		switch (conv) {
+		case 0:
+			continue;
+
+		case 'd':
+		case 'i':
+			i64tostr(dest + ret, i64val);
+			ret += strlen(dest + ret);
+			break;
+
+		case 'o':
+			u64tostr(dest + ret, u64val, 8, 0);
+			ret += strlen(dest + ret);
+			break;
+
+		case 'u':
+			u64tostr(dest + ret, u64val, 10, 0);
+			ret += strlen(dest + ret);
+			break;
+
+		case 'x':
+			u64tostr(dest + ret, u64val, 16, 0);
+			ret += strlen(dest + ret);
+			break;
+
+		case 'X':
+			u64tostr(dest + ret, u64val, 16, 1);
+			ret += strlen(dest + ret);
+			break;
+
+		default:
+			va_end(args);
+			return -1;
+		}
+	}
+
+	va_end(args);
+	dest[ret++] = '\0';
+	return ret;
+}
+
+int sprintf(char *dest, const char *fmt, ...)
+{
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
+	ret = vsprintf(dest, fmt, args);
+	va_end(args);
+	return ret;
+}
+
 void *tst_heap_alloc_aligned(size_t size, size_t align)
 {
 	uintptr_t addr = (uintptr_t)heap_end;
-- 
2.44.0



* [LTP] [PATCH v2 3/4] KVM: Implement printf-like formatting for tst_res() and tst_brk()
  2024-05-14 12:07 [LTP] [PATCH v2 0/4] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 1/4] KVM: Disable EBP register use in 32bit code Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 2/4] KVM: Implement strchr() and basic sprintf() Martin Doucha
@ 2024-05-14 12:07 ` Martin Doucha
  2024-05-14 12:07 ` [LTP] [PATCH v2 4/4] KVM: Add functional test for VMSAVE/VMLOAD instructions Martin Doucha
  3 siblings, 0 replies; 6+ messages in thread
From: Martin Doucha @ 2024-05-14 12:07 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---

Changes since v1: None

 testcases/kernel/kvm/include/kvm_guest.h | 12 +++++---
 testcases/kernel/kvm/lib_guest.c         | 36 +++++++++++++++++++++---
 2 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/testcases/kernel/kvm/include/kvm_guest.h b/testcases/kernel/kvm/include/kvm_guest.h
index 3cfafa313..0eabfb9a0 100644
--- a/testcases/kernel/kvm/include/kvm_guest.h
+++ b/testcases/kernel/kvm/include/kvm_guest.h
@@ -64,12 +64,16 @@ void kvm_exit(void) __attribute__((noreturn));
 void kvm_yield(void);
 
 void tst_res_(const char *file, const int lineno, int result,
-	const char *message);
-#define tst_res(result, msg) tst_res_(__FILE__, __LINE__, (result), (msg))
+	const char *fmt, ...)
+	__attribute__ ((format (printf, 4, 5)));
+#define tst_res(result, fmt, ...) \
+	tst_res_(__FILE__, __LINE__, (result), (fmt), ##__VA_ARGS__)
 
 void tst_brk_(const char *file, const int lineno, int result,
-	const char *message) __attribute__((noreturn));
-#define tst_brk(result, msg) tst_brk_(__FILE__, __LINE__, (result), (msg))
+	const char *fmt, ...) __attribute__((noreturn))
+	__attribute__ ((format (printf, 4, 5)));
+#define tst_brk(result, fmt, ...) \
+	tst_brk_(__FILE__, __LINE__, (result), (fmt), ##__VA_ARGS__)
 
 /*
  * Send asynchronous notification to host without stopping VM execution and
diff --git a/testcases/kernel/kvm/lib_guest.c b/testcases/kernel/kvm/lib_guest.c
index 73a76ccb1..2e3e9cb6e 100644
--- a/testcases/kernel/kvm/lib_guest.c
+++ b/testcases/kernel/kvm/lib_guest.c
@@ -443,6 +443,7 @@ static void tst_fatal_error(const char *file, const int lineno,
 	test_result->result = TBROK;
 	test_result->lineno = lineno;
 	test_result->file_addr = (uintptr_t)file;
+	/* Avoid sprintf() here in case of bugs */
 	strcpy(test_result->message, message);
 	strcat(test_result->message, " at address 0x");
 	ptr2hex(test_result->message + strlen(test_result->message), ip);
@@ -451,19 +452,46 @@ static void tst_fatal_error(const char *file, const int lineno,
 }
 
 void tst_res_(const char *file, const int lineno, int result,
-	const char *message)
+	const char *fmt, ...)
 {
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
 	test_result->result = result;
 	test_result->lineno = lineno;
 	test_result->file_addr = (uintptr_t)file;
-	strcpy(test_result->message, message);
+	ret = vsprintf(test_result->message, fmt, args);
+	va_end(args);
+
+	if (ret < 0) {
+		tst_brk_(file, lineno, TBROK, "Invalid tst_res() format: %s",
+			fmt);
+	}
+
 	kvm_yield();
 }
 
 void tst_brk_(const char *file, const int lineno, int result,
-	const char *message)
+	const char *fmt, ...)
 {
-	tst_res_(file, lineno, result, message);
+	va_list args;
+	int ret;
+
+	va_start(args, fmt);
+	test_result->result = result;
+	test_result->lineno = lineno;
+	test_result->file_addr = (uintptr_t)file;
+	ret = vsprintf(test_result->message, fmt, args);
+	va_end(args);
+
+	if (ret < 0) {
+		test_result->result = TBROK;
+		strcpy(test_result->message, "Invalid tst_brk() format: ");
+		strcat(test_result->message, fmt);
+	}
+
+	kvm_yield();
 	kvm_exit();
 }
 
-- 
2.44.0



* [LTP] [PATCH v2 4/4] KVM: Add functional test for VMSAVE/VMLOAD instructions
  2024-05-14 12:07 [LTP] [PATCH v2 0/4] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
                   ` (2 preceding siblings ...)
  2024-05-14 12:07 ` [LTP] [PATCH v2 3/4] KVM: Implement printf-like formatting for tst_res() and tst_brk() Martin Doucha
@ 2024-05-14 12:07 ` Martin Doucha
  2024-05-14 12:59   ` Petr Vorel
  3 siblings, 1 reply; 6+ messages in thread
From: Martin Doucha @ 2024-05-14 12:07 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---

Changes since v1: None

 runtest/kvm                      |   1 +
 testcases/kernel/kvm/.gitignore  |   1 +
 testcases/kernel/kvm/kvm_svm04.c | 307 +++++++++++++++++++++++++++++++
 3 files changed, 309 insertions(+)
 create mode 100644 testcases/kernel/kvm/kvm_svm04.c

diff --git a/runtest/kvm b/runtest/kvm
index 9de846a09..74a517add 100644
--- a/runtest/kvm
+++ b/runtest/kvm
@@ -1,5 +1,6 @@
 kvm_svm01 kvm_svm01
 kvm_svm02 kvm_svm02
 kvm_svm03 kvm_svm03
+kvm_svm04 kvm_svm04
 # Tests below may interfere with bug reproducibility
 kvm_pagefault01 kvm_pagefault01
diff --git a/testcases/kernel/kvm/.gitignore b/testcases/kernel/kvm/.gitignore
index 9638a6fc7..661472cae 100644
--- a/testcases/kernel/kvm/.gitignore
+++ b/testcases/kernel/kvm/.gitignore
@@ -2,3 +2,4 @@
 /kvm_svm01
 /kvm_svm02
 /kvm_svm03
+/kvm_svm04
diff --git a/testcases/kernel/kvm/kvm_svm04.c b/testcases/kernel/kvm/kvm_svm04.c
new file mode 100644
index 000000000..e69f0d4be
--- /dev/null
+++ b/testcases/kernel/kvm/kvm_svm04.c
@@ -0,0 +1,307 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2023 SUSE LLC <mdoucha@suse.cz>
+ */
+
+/*\
+ * [Description]
+ *
+ * Functional test for VMSAVE/VMLOAD instructions in KVM environment. Verify
+ * that both instructions save/load the CPU state according to CPU
+ * documentation.
+ */
+
+#include "kvm_test.h"
+
+#ifdef COMPILE_PAYLOAD
+#if defined(__i386__) || defined(__x86_64__)
+
+#include "kvm_x86_svm.h"
+
+static struct kvm_vmcb *src_vmcb, *dest_vmcb, *msr_vmcb;
+static struct kvm_sregs sregs_buf;
+
+static int check_descriptor(const char *name,
+	const struct kvm_vmcb_descriptor *data,
+	const struct kvm_vmcb_descriptor *exp)
+{
+	int ret = 0;
+
+	if (data->selector != exp->selector) {
+		tst_res(TFAIL, "%s.selector = %hx (expected %hx)",
+			name, data->selector, exp->selector);
+		ret = 1;
+	}
+
+	if (data->attrib != exp->attrib) {
+		tst_res(TFAIL, "%s.attrib = 0x%hx (expected 0x%hx)",
+			name, data->attrib, exp->attrib);
+		ret = 1;
+	}
+
+	if (data->limit != exp->limit) {
+		tst_res(TFAIL, "%s.limit = 0x%x (expected 0x%x)",
+			name, data->limit, exp->limit);
+		ret = 1;
+	}
+
+	if (data->base != exp->base) {
+		tst_res(TFAIL, "%s.base = 0x%llx (expected 0x%llx)",
+			name, data->base, exp->base);
+		ret = 1;
+	}
+
+	return ret;
+}
+
+static int check_value(const char *name, uint64_t val, uint64_t exp,
+	uint64_t backup, uint64_t reg, uint64_t nested_val)
+{
+	int ret = 0;
+
+	if (exp != backup) {
+		tst_res(TFAIL, "%s source was modified (0x%llx != 0x%llx)",
+			name, exp, backup);
+		ret = 1;
+	}
+
+	if (reg != exp) {
+		tst_res(TFAIL, "%s was not loaded (0x%llx != 0x%llx)",
+			name, reg, exp);
+		ret = 1;
+	}
+
+	if (val != exp) {
+		tst_res(TFAIL, "%s was not saved (0x%llx != 0x%llx)",
+			name, val, exp);
+		ret = 1;
+	}
+
+	if (val != nested_val) {
+		tst_res(TFAIL, "Inconsistent %s on VM exit (0x%llx != 0x%llx)",
+			name, val, nested_val);
+		ret = 1;
+	}
+
+	if (!ret)
+		tst_res(TPASS, "%s has correct value 0x%llx", name, val);
+
+	return ret;
+}
+
+static int vmsave_copy(void)
+{
+	kvm_svm_vmload(src_vmcb);
+	kvm_read_sregs(&sregs_buf);
+	msr_vmcb->star = kvm_rdmsr(MSR_STAR);
+	msr_vmcb->lstar = kvm_rdmsr(MSR_LSTAR);
+	msr_vmcb->cstar = kvm_rdmsr(MSR_CSTAR);
+	msr_vmcb->sfmask = kvm_rdmsr(MSR_SFMASK);
+	msr_vmcb->fs.base = kvm_rdmsr(MSR_FS_BASE);
+	msr_vmcb->gs.base = kvm_rdmsr(MSR_GS_BASE);
+	msr_vmcb->kernel_gs_base = kvm_rdmsr(MSR_KERNEL_GS_BASE);
+	msr_vmcb->sysenter_cs = kvm_rdmsr(MSR_SYSENTER_CS);
+	msr_vmcb->sysenter_esp = kvm_rdmsr(MSR_SYSENTER_ESP);
+	msr_vmcb->sysenter_eip = kvm_rdmsr(MSR_SYSENTER_EIP);
+	kvm_svm_vmsave(dest_vmcb);
+	return 0;
+}
+
+static int check_vmsave_result(struct kvm_vmcb *copy_vmcb,
+	struct kvm_vmcb *nested_vmcb)
+{
+	int ret = 0;
+
+	/* Nested VMCB is only compared to dest VMCB, bypass the check */
+	if (!nested_vmcb)
+		nested_vmcb = dest_vmcb;
+
+	ret = check_descriptor("FS", &dest_vmcb->fs, &src_vmcb->fs);
+	ret = check_value("FS.selector", dest_vmcb->fs.selector,
+		src_vmcb->fs.selector, copy_vmcb->fs.selector,
+		sregs_buf.fs, nested_vmcb->fs.selector) || ret;
+	ret = check_descriptor("GS", &dest_vmcb->gs, &src_vmcb->gs) || ret;
+	ret = check_value("GS.selector", dest_vmcb->gs.selector,
+		src_vmcb->gs.selector, copy_vmcb->gs.selector,
+		sregs_buf.gs, nested_vmcb->gs.selector) || ret;
+	ret = check_descriptor("LDTR", &dest_vmcb->ldtr, &src_vmcb->ldtr) ||
+		ret;
+	ret = check_descriptor("TR", &dest_vmcb->tr, &src_vmcb->tr) || ret;
+	ret = check_value("STAR", dest_vmcb->star, src_vmcb->star,
+		copy_vmcb->star, msr_vmcb->star, nested_vmcb->star) || ret;
+	ret = check_value("LSTAR", dest_vmcb->lstar, src_vmcb->lstar,
+		copy_vmcb->lstar, msr_vmcb->lstar, nested_vmcb->lstar) || ret;
+	ret = check_value("CSTAR", dest_vmcb->cstar, src_vmcb->cstar,
+		copy_vmcb->cstar, msr_vmcb->cstar, nested_vmcb->cstar) || ret;
+	ret = check_value("SFMASK", dest_vmcb->sfmask, src_vmcb->sfmask,
+		copy_vmcb->sfmask, msr_vmcb->sfmask, nested_vmcb->sfmask) ||
+		ret;
+	ret = check_value("FS.base", dest_vmcb->fs.base, src_vmcb->fs.base,
+		copy_vmcb->fs.base, msr_vmcb->fs.base, nested_vmcb->fs.base) ||
+		ret;
+	ret = check_value("GS.base", dest_vmcb->gs.base, src_vmcb->gs.base,
+		copy_vmcb->gs.base, msr_vmcb->gs.base, nested_vmcb->gs.base) ||
+		ret;
+	ret = check_value("KernelGSBase", dest_vmcb->kernel_gs_base,
+		src_vmcb->kernel_gs_base, copy_vmcb->kernel_gs_base,
+		msr_vmcb->kernel_gs_base, nested_vmcb->kernel_gs_base) || ret;
+	ret = check_value("Sysenter_CS", dest_vmcb->sysenter_cs,
+		src_vmcb->sysenter_cs, copy_vmcb->sysenter_cs,
+		msr_vmcb->sysenter_cs, nested_vmcb->sysenter_cs) || ret;
+	ret = check_value("Sysenter_ESP", dest_vmcb->sysenter_esp,
+		src_vmcb->sysenter_esp, copy_vmcb->sysenter_esp,
+		msr_vmcb->sysenter_esp, nested_vmcb->sysenter_esp) || ret;
+	ret = check_value("Sysenter_EIP", dest_vmcb->sysenter_eip,
+		src_vmcb->sysenter_eip, copy_vmcb->sysenter_eip,
+		msr_vmcb->sysenter_eip, nested_vmcb->sysenter_eip) || ret;
+
+	return ret;
+}
+
+static int create_segment_descriptor(uint64_t baseaddr, uint32_t limit,
+	unsigned int flags)
+{
+	int ret = kvm_find_free_descriptor(kvm_gdt, KVM_GDT_SIZE);
+
+	if (ret < 0)
+		tst_brk(TBROK, "Descriptor table is full");
+
+	kvm_set_segment_descriptor(kvm_gdt + ret, baseaddr, limit, flags);
+	return ret;
+}
+
+static void dirty_vmcb(struct kvm_vmcb *buf)
+{
+	buf->fs.selector = 0x60;
+	buf->fs.attrib = SEGTYPE_RWDATA | SEGFLAG_PRESENT;
+	buf->fs.limit = 0xffff;
+	buf->fs.base = 0xfff000;
+	buf->gs.selector = 0x68;
+	buf->gs.attrib = SEGTYPE_RWDATA | SEGFLAG_PRESENT;
+	buf->gs.limit = 0xffff;
+	buf->gs.base = 0xfff000;
+	buf->ldtr.selector = 0x70;
+	buf->ldtr.attrib = SEGTYPE_LDT | SEGFLAG_PRESENT;
+	buf->ldtr.limit = 0xffff;
+	buf->ldtr.base = 0xfff000;
+	buf->tr.selector = 0x78;
+	buf->tr.attrib = SEGTYPE_TSS | SEGFLAG_PRESENT;
+	buf->tr.limit = 0xffff;
+	buf->tr.base = 0xfff000;
+	buf->star = 0xffff;
+	buf->lstar = 0xffff;
+	buf->cstar = 0xffff;
+	buf->sfmask = 0xffff;
+	buf->fs.base = 0xffff;
+	buf->gs.base = 0xffff;
+	buf->kernel_gs_base = 0xffff;
+	buf->sysenter_cs = 0xffff;
+	buf->sysenter_esp = 0xffff;
+	buf->sysenter_eip = 0xffff;
+}
+
+void main(void)
+{
+	uint16_t ss;
+	uint64_t rsp;
+	struct kvm_svm_vcpu *vcpu;
+	int data_seg1, data_seg2, ldt_seg, task_seg;
+	struct segment_descriptor *ldt;
+	struct kvm_vmcb *backup_vmcb, *zero_vmcb;
+	unsigned int ldt_size = KVM_GDT_SIZE*sizeof(struct segment_descriptor);
+
+	kvm_init_svm();
+
+	src_vmcb = kvm_alloc_vmcb();
+	dest_vmcb = kvm_alloc_vmcb();
+	msr_vmcb = kvm_alloc_vmcb();
+	backup_vmcb = kvm_alloc_vmcb();
+	zero_vmcb = kvm_alloc_vmcb();
+
+	vcpu = kvm_create_svm_vcpu(vmsave_copy, 1);
+	kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMLOAD, 0);
+	kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMSAVE, 0);
+	/* Save allocated stack for later VM reinit */
+	ss = vcpu->vmcb->ss.selector >> 3;
+	rsp = vcpu->vmcb->rsp;
+
+	ldt = tst_heap_alloc_aligned(ldt_size, 8);
+	memset(ldt, 0, ldt_size);
+	data_seg1 = create_segment_descriptor(0xda7a1000, 0x1000,
+		SEGTYPE_RODATA | SEGFLAG_PRESENT);
+	data_seg2 = create_segment_descriptor(0xda7a2000, 2,
+		SEGTYPE_RWDATA | SEGFLAG_PRESENT | SEGFLAG_PAGE_LIMIT);
+	ldt_seg = create_segment_descriptor((uintptr_t)ldt, ldt_size,
+		SEGTYPE_LDT | SEGFLAG_PRESENT);
+	task_seg = create_segment_descriptor(0x7a53000, 0x1000,
+		SEGTYPE_TSS | SEGFLAG_PRESENT);
+	kvm_vmcb_copy_gdt_descriptor(&src_vmcb->fs, data_seg1);
+	kvm_vmcb_copy_gdt_descriptor(&src_vmcb->gs, data_seg2);
+	kvm_vmcb_copy_gdt_descriptor(&src_vmcb->ldtr, ldt_seg);
+	kvm_vmcb_copy_gdt_descriptor(&src_vmcb->tr, task_seg);
+
+	src_vmcb->star = 0x5742;
+	src_vmcb->lstar = 0x15742;
+	src_vmcb->cstar = 0xc5742;
+	src_vmcb->sfmask = 0xf731;
+	src_vmcb->fs.base = 0xf000;
+	src_vmcb->gs.base = 0x10000;
+	src_vmcb->kernel_gs_base = 0x20000;
+	src_vmcb->sysenter_cs = 0x595c5;
+	src_vmcb->sysenter_esp = 0x595e50;
+	src_vmcb->sysenter_eip = 0x595e10;
+
+	memcpy(backup_vmcb, src_vmcb, sizeof(struct kvm_vmcb));
+	tst_res(TINFO, "VMLOAD/VMSAVE non-zero values");
+	vmsave_copy();
+	check_vmsave_result(backup_vmcb, NULL);
+
+	memset(src_vmcb, 0, sizeof(struct kvm_vmcb));
+	tst_res(TINFO, "VMLOAD/VMSAVE zero values");
+	dirty_vmcb(dest_vmcb);
+	vmsave_copy();
+	check_vmsave_result(zero_vmcb, NULL);
+
+	memcpy(src_vmcb, backup_vmcb, sizeof(struct kvm_vmcb));
+	tst_res(TINFO, "Nested VMLOAD/VMSAVE non-zero values");
+	dirty_vmcb(vcpu->vmcb);
+	memset(dest_vmcb, 0, sizeof(struct kvm_vmcb));
+	kvm_svm_vmrun(vcpu);
+
+	if (vcpu->vmcb->exitcode != SVM_EXIT_HLT)
+		tst_brk(TBROK, "Nested VM exited unexpectedly");
+
+	check_vmsave_result(backup_vmcb, vcpu->vmcb);
+
+	memset(src_vmcb, 0, sizeof(struct kvm_vmcb));
+	tst_res(TINFO, "Nested VMLOAD/VMSAVE zero values");
+	kvm_init_guest_vmcb(vcpu->vmcb, 1, ss, (void *)rsp, vmsave_copy);
+	kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMLOAD, 0);
+	kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMSAVE, 0);
+	dirty_vmcb(vcpu->vmcb);
+	kvm_svm_vmrun(vcpu);
+
+	if (vcpu->vmcb->exitcode != SVM_EXIT_HLT)
+		tst_brk(TBROK, "Nested VM exited unexpectedly");
+
+	check_vmsave_result(zero_vmcb, vcpu->vmcb);
+}
+
+#else /* defined(__i386__) || defined(__x86_64__) */
+TST_TEST_TCONF("Test supported only on x86");
+#endif /* defined(__i386__) || defined(__x86_64__) */
+
+#else /* COMPILE_PAYLOAD */
+
+static struct tst_test test = {
+	.test_all = tst_kvm_run,
+	.setup = tst_kvm_setup,
+	.cleanup = tst_kvm_cleanup,
+	.supported_archs = (const char *const []) {
+		"x86_64",
+		"x86",
+		NULL
+	},
+};
+
+#endif /* COMPILE_PAYLOAD */
-- 
2.44.0



* Re: [LTP] [PATCH v2 4/4] KVM: Add functional test for VMSAVE/VMLOAD instructions
  2024-05-14 12:07 ` [LTP] [PATCH v2 4/4] KVM: Add functional test for VMSAVE/VMLOAD instructions Martin Doucha
@ 2024-05-14 12:59   ` Petr Vorel
  0 siblings, 0 replies; 6+ messages in thread
From: Petr Vorel @ 2024-05-14 12:59 UTC (permalink / raw)
  To: Martin Doucha; +Cc: ltp

Hi Martin,

thanks a lot, merged!

I particularly like the printf-like formatting implementation from the
previous commits.

Kind regards,
Petr

