* [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
@ 2024-04-30 12:21 ` Martin Doucha
2024-05-06 19:41 ` Petr Vorel
2024-04-30 12:21 ` [LTP] [PATCH 2/9] KVM: Implement strchr() and basic sprintf() Martin Doucha
` (7 subsequent siblings)
8 siblings, 1 reply; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:21 UTC (permalink / raw)
To: ltp
The EBP register points to the stack segment by default, but GCC uses
it to access the data segment without the proper prefix. This works fine
on most systems because the stack and data segments are usually
identical. However, the KVM environment intentionally enforces strict
limits on the stack segment, and accessing the data segment through
unprefixed EBP would trigger a stack segment fault exception in 32bit
LTP builds (stack segment limits are ignored in 64bit mode).
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/testcases/kernel/kvm/Makefile b/testcases/kernel/kvm/Makefile
index ce4a5ede2..c85790e11 100644
--- a/testcases/kernel/kvm/Makefile
+++ b/testcases/kernel/kvm/Makefile
@@ -22,7 +22,7 @@ ifeq ($(HOST_CPU),x86_64)
endif
ifeq ($(HOST_CPU),x86)
- GUEST_CFLAGS += -m32
+ GUEST_CFLAGS += -m32 -ffixed-ebp
ASFLAGS += --32
endif
--
2.44.0
--
Mailing list info: https://lists.linux.it/listinfo/ltp
* Re: [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code
2024-04-30 12:21 ` [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code Martin Doucha
@ 2024-05-06 19:41 ` Petr Vorel
2024-05-07 14:10 ` Martin Doucha
0 siblings, 1 reply; 18+ messages in thread
From: Petr Vorel @ 2024-05-06 19:41 UTC (permalink / raw)
To: Martin Doucha; +Cc: ltp
Hi Martin,
Reviewed-by: Petr Vorel <pvorel@suse.cz>
> The EBP register points to the stack segment by default, but GCC uses
> it to access the data segment without the proper prefix. This works fine
> on most systems because the stack and data segments are usually
> identical. However, the KVM environment intentionally enforces strict
> limits on the stack segment, and accessing the data segment through
> unprefixed EBP would trigger a stack segment fault exception in 32bit
> LTP builds (stack segment limits are ignored in 64bit mode).
> Signed-off-by: Martin Doucha <mdoucha@suse.cz>
> ---
> testcases/kernel/kvm/Makefile | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> diff --git a/testcases/kernel/kvm/Makefile b/testcases/kernel/kvm/Makefile
> index ce4a5ede2..c85790e11 100644
> --- a/testcases/kernel/kvm/Makefile
> +++ b/testcases/kernel/kvm/Makefile
> @@ -22,7 +22,7 @@ ifeq ($(HOST_CPU),x86_64)
> endif
> ifeq ($(HOST_CPU),x86)
> - GUEST_CFLAGS += -m32
> + GUEST_CFLAGS += -m32 -ffixed-ebp
FYI, this will fail on 32-bit builds with clang:
clang: error: unknown argument: '-ffixed-ebp'
I don't want to block this patchset, which brings an important test, but it'd
be great to fix it.
Is there a clang equivalent? Or is it even needed for clang?
Either way, we need to detect clang. I don't think a simple
ifeq ($(CXX),clang)
would be enough, because cc can be an alias for clang.
Maybe wrap it with version detection, e.g.:
ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 0)
GUEST_CFLAGS += -ffixed-ebp
endif
Kind regards,
Petr
> ASFLAGS += --32
> endif
* Re: [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code
2024-05-06 19:41 ` Petr Vorel
@ 2024-05-07 14:10 ` Martin Doucha
2024-05-07 14:22 ` Petr Vorel
2024-05-07 14:25 ` Petr Vorel
0 siblings, 2 replies; 18+ messages in thread
From: Martin Doucha @ 2024-05-07 14:10 UTC (permalink / raw)
To: Petr Vorel; +Cc: ltp
On 06. 05. 24 21:41, Petr Vorel wrote:
>> ifeq ($(HOST_CPU),x86)
>> - GUEST_CFLAGS += -m32
>> + GUEST_CFLAGS += -m32 -ffixed-ebp
>
> FYI this will fail on 32 bit build on clang:
>
> clang: error: unknown argument: '-ffixed-ebp'
>
> I don't want to block this patchset which brings important test, but it'd be
> great to fix it.
>
> Is there clang equivalent? Or is it even needed for clang?
>
> Either way, we need to detect clang. I don't think simple
>
> ifeq ($(CXX),clang)
>
> would be enough, because cc can be alias to clang.
Hmm, I need to fix this. I guess that configure should just check for
-ffixed-ebp support. Fortunately, clang doesn't generate code that would
trigger a stack segment fault, so the workaround is only needed for GCC.
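A configure-style probe for -ffixed-ebp support could look like the sketch below. This is a hypothetical illustration (the script and function names are made up, not the actual LTP configure check): it simply tries to compile an empty translation unit with the flag and reports whether the compiler accepted it.

```shell
#!/bin/sh
# Hypothetical probe: does $CC accept -ffixed-ebp?
# Assumption: a C compiler is reachable as $CC or cc.
CC="${CC:-cc}"

cc_has_flag() {
    # Compile an empty program with the candidate flag; GCC accepts
    # -ffixed-ebp while clang rejects it as an unknown argument.
    echo 'int main(void){return 0;}' | \
        $CC "$1" -x c -c -o /dev/null - 2>/dev/null
}

if cc_has_flag -ffixed-ebp; then
    echo 'GUEST_CFLAGS += -ffixed-ebp'
else
    echo 'compiler does not support -ffixed-ebp (likely clang)'
fi
```

This avoids guessing from the compiler's name entirely, which sidesteps the cc-is-an-alias-for-clang problem mentioned above.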
Could you review and merge the trivial patches (4, 5, 6, 7) so that I
don't need to resubmit everything?
--
Martin Doucha mdoucha@suse.cz
SW Quality Engineer
SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
186 00 Prague 8
Czech Republic
* Re: [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code
2024-05-07 14:10 ` Martin Doucha
@ 2024-05-07 14:22 ` Petr Vorel
2024-05-07 14:25 ` Petr Vorel
1 sibling, 0 replies; 18+ messages in thread
From: Petr Vorel @ 2024-05-07 14:22 UTC (permalink / raw)
To: Martin Doucha; +Cc: ltp
> On 06. 05. 24 21:41, Petr Vorel wrote:
> > > ifeq ($(HOST_CPU),x86)
> > > - GUEST_CFLAGS += -m32
> > > + GUEST_CFLAGS += -m32 -ffixed-ebp
> > FYI this will fail on 32 bit build on clang:
> > clang: error: unknown argument: '-ffixed-ebp'
> > I don't want to block this patchset which brings important test, but it'd be
> > great to fix it.
> > Is there clang equivalent? Or is it even needed for clang?
> > Either way, we need to detect clang. I don't think simple
> > ifeq ($(CXX),clang)
> > would be enough, because cc can be alias to clang.
> Hmm, I need to fix this. I guess that configure should just check for
> -ffixed-ebp support.
Yes, that would be the ideal solution, but I definitely don't want to force it on you.
> Fortunately, clang doesn't generate code that would
> trigger stack segment fault so the workaround is only needed for GCC.
Great!
> Could you review and merge the trivial patches (4, 5, 6, 7) so that I don't
> need to resubmit everything?
Sure!
Kind regards,
Petr
* Re: [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code
2024-05-07 14:10 ` Martin Doucha
2024-05-07 14:22 ` Petr Vorel
@ 2024-05-07 14:25 ` Petr Vorel
2024-05-07 14:45 ` Martin Doucha
1 sibling, 1 reply; 18+ messages in thread
From: Petr Vorel @ 2024-05-07 14:25 UTC (permalink / raw)
To: Martin Doucha; +Cc: ltp
Hi Martin,
...
> Could you review and merge the trivial patches (4, 5, 6, 7) so that I don't
> need to resubmit everything?
I could even rebase the 9th (missing kvm_svm04) to merge it earlier. Please
let me know whether the reordering was actually needed only for kvm_svm04 (it
does not look like it from the commit message).
Kind regards,
Petr
* Re: [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code
2024-05-07 14:25 ` Petr Vorel
@ 2024-05-07 14:45 ` Martin Doucha
0 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-05-07 14:45 UTC (permalink / raw)
To: Petr Vorel; +Cc: ltp
On 07. 05. 24 16:25, Petr Vorel wrote:
> Hi Martin,
>
> ...
>> Could you review and merge the trivial patches (4, 5, 6, 7) so that I don't
>> need to resubmit everything?
>
> I could even rebase 9th (missing kvm_svm04) to merge it before. Please let me
> know if change order was actually needed only for kvm_svm04 (it does not look
> like from the commit message).
Feel free to rebase it. It's needed for kvm_svm01 and kvm_svm02 as well.
--
Martin Doucha mdoucha@suse.cz
SW Quality Engineer
SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
186 00 Prague 8
Czech Republic
* [LTP] [PATCH 2/9] KVM: Implement strchr() and basic sprintf()
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
2024-04-30 12:21 ` [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code Martin Doucha
@ 2024-04-30 12:21 ` Martin Doucha
2024-04-30 12:21 ` [LTP] [PATCH 3/9] KVM: Implement printf-like formatting for tst_res() and tst_brk() Martin Doucha
` (6 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:21 UTC (permalink / raw)
To: ltp
Add a basic implementation of sprintf() that supports string, pointer
and integer arguments, but without advanced formatting options like
field alignment and padding.
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/include/kvm_guest.h | 11 +
testcases/kernel/kvm/lib_guest.c | 312 +++++++++++++++++++++++
2 files changed, 323 insertions(+)
diff --git a/testcases/kernel/kvm/include/kvm_guest.h b/testcases/kernel/kvm/include/kvm_guest.h
index 96f246155..f19bacc39 100644
--- a/testcases/kernel/kvm/include/kvm_guest.h
+++ b/testcases/kernel/kvm/include/kvm_guest.h
@@ -39,9 +39,15 @@ typedef unsigned int uint32_t;
typedef long long int64_t;
typedef unsigned long long uint64_t;
typedef unsigned long uintptr_t;
+typedef __builtin_va_list va_list;
#define NULL ((void *)0)
+#define va_start __builtin_va_start
+#define va_arg __builtin_va_arg
+#define va_end __builtin_va_end
+#define va_copy __builtin_va_copy
+
void *memset(void *dest, int val, size_t size);
void *memzero(void *dest, size_t size);
void *memcpy(void *dest, const void *src, size_t size);
@@ -49,6 +55,11 @@ void *memcpy(void *dest, const void *src, size_t size);
char *strcpy(char *dest, const char *src);
char *strcat(char *dest, const char *src);
size_t strlen(const char *str);
+char *strchr(const char *s, int c);
+char *strrchr(const char *s, int c);
+
+int vsprintf(char *dest, const char *fmt, va_list ap);
+int sprintf(char *dest, const char *fmt, ...);
/* Exit the VM by looping on a HLT instruction forever */
void kvm_exit(void) __attribute__((noreturn));
diff --git a/testcases/kernel/kvm/lib_guest.c b/testcases/kernel/kvm/lib_guest.c
index f3e21d3d6..73a76ccb1 100644
--- a/testcases/kernel/kvm/lib_guest.c
+++ b/testcases/kernel/kvm/lib_guest.c
@@ -76,6 +76,74 @@ size_t strlen(const char *str)
return ret;
}
+char *strchr(const char *s, int c)
+{
+ for (; *s; s++) {
+ if (*s == c)
+ return (char *)s;
+ }
+
+ return NULL;
+}
+
+char *strrchr(const char *s, int c)
+{
+ const char *ret = NULL;
+
+ for (; *s; s++) {
+ if (*s == c)
+ ret = s;
+ }
+
+ return (char *)ret;
+}
+
+#if defined(__x86_64__) && !defined(__ILP32__)
+uint64_t u64divu16(uint64_t a, uint16_t b)
+{
+ return a / b;
+}
+
+unsigned int u64modu16(uint64_t a, uint16_t b)
+{
+ return a % b;
+}
+
+#else /* defined(__x86_64__) && !defined(__ILP32__) */
+
+/* u64 short division helpers to avoid need to link libgcc on 32bit archs */
+uint64_t u64divu16(uint64_t a, uint16_t b)
+{
+ uint64_t ret = 0;
+ uint32_t tmp = a >> 32;
+
+ ret = tmp / b;
+ ret <<= 32;
+ tmp %= b;
+ tmp <<= 16;
+ tmp |= (a >> 16) & 0xffff;
+ ret |= (tmp / b) << 16;
+ tmp %= b;
+ tmp <<= 16;
+ tmp |= a & 0xffff;
+ ret |= tmp / b;
+ return ret;
+}
+
+unsigned int u64modu16(uint64_t a, uint16_t b)
+{
+ uint32_t tmp = a >> 32;
+
+ tmp %= b;
+ tmp <<= 16;
+ tmp |= (a >> 16) & 0xffff;
+ tmp %= b;
+ tmp <<= 16;
+ tmp |= a & 0xffff;
+ return tmp % b;
+}
+#endif /* defined(__x86_64__) && !defined(__ILP32__) */
+
char *ptr2hex(char *dest, uintptr_t val)
{
unsigned int i;
@@ -95,6 +163,250 @@ char *ptr2hex(char *dest, uintptr_t val)
return ret;
}
+char *u64tostr(char *dest, uint64_t val, uint16_t base, int caps)
+{
+ unsigned int i;
+ uintptr_t tmp = u64divu16(val, base);
+ char hex = caps ? 'A' : 'a';
+ char *ret = dest;
+
+ for (i = 1; tmp; i++, tmp = u64divu16(tmp, base))
+ ;
+
+ dest[i] = '\0';
+
+ do {
+ tmp = u64modu16(val, base);
+ dest[--i] = tmp + (tmp >= 10 ? hex - 10 : '0');
+ val = u64divu16(val, base);
+ } while (i);
+
+ return ret;
+}
+
+char *i64tostr(char *dest, int64_t val)
+{
+ if (val < 0) {
+ dest[0] = '-';
+ u64tostr(dest + 1, -val, 10, 0);
+ return dest;
+ }
+
+ return u64tostr(dest, val, 10, 0);
+}
+
+int vsprintf(char *dest, const char *fmt, va_list ap)
+{
+ va_list args;
+ int ret = 0;
+ char conv;
+ uint64_t u64val = 0;
+ int64_t i64val = 0;
+ const char * const uint_conv = "ouxX";
+
+ va_copy(args, ap);
+
+ for (; *fmt; fmt++) {
+ if (*fmt != '%') {
+ dest[ret++] = *fmt;
+ continue;
+ }
+
+ conv = 0;
+ fmt++;
+
+ switch (*fmt) {
+ case '%':
+ dest[ret++] = *fmt;
+ break;
+
+ case 'c':
+ dest[ret++] = va_arg(args, int);
+ break;
+
+ case 's':
+ strcpy(dest + ret, va_arg(args, const char *));
+ ret += strlen(dest + ret);
+ break;
+
+ case 'p':
+ strcpy(dest + ret, "0x");
+ ptr2hex(dest + ret + 2,
+ (uintptr_t)va_arg(args, void *));
+ ret += strlen(dest + ret);
+ break;
+
+ case 'l':
+ fmt++;
+
+ switch (*fmt) {
+ case 'l':
+ fmt++;
+
+ if (*fmt == 'd' || *fmt == 'i') {
+ i64val = va_arg(args, long long);
+ conv = *fmt;
+ break;
+ }
+
+ if (strchr(uint_conv, *fmt)) {
+ u64val = va_arg(args,
+ unsigned long long);
+ conv = *fmt;
+ break;
+ }
+
+ va_end(args);
+ return -1;
+
+ case 'd':
+ case 'i':
+ i64val = va_arg(args, long);
+ conv = *fmt;
+ break;
+
+ default:
+ if (strchr(uint_conv, *fmt)) {
+ u64val = va_arg(args,
+ unsigned long);
+ conv = *fmt;
+ break;
+ }
+
+ va_end(args);
+ return -1;
+ }
+ break;
+
+ case 'h':
+ fmt++;
+
+ switch (*fmt) {
+ case 'h':
+ fmt++;
+
+ if (*fmt == 'd' || *fmt == 'i') {
+ i64val = (signed char)va_arg(args, int);
+ conv = *fmt;
+ break;
+ }
+
+ if (strchr(uint_conv, *fmt)) {
+ u64val = (unsigned char)va_arg(args,
+ unsigned int);
+ conv = *fmt;
+ break;
+ }
+
+ va_end(args);
+ return -1;
+
+ case 'd':
+ case 'i':
+ i64val = (short int)va_arg(args, int);
+ conv = *fmt;
+ break;
+
+ default:
+ if (strchr(uint_conv, *fmt)) {
+ u64val = (unsigned short int)va_arg(
+ args, unsigned int);
+ conv = *fmt;
+ break;
+ }
+
+ va_end(args);
+ return -1;
+ }
+ break;
+
+ case 'z':
+ fmt++;
+
+ if (*fmt == 'd' || *fmt == 'i') {
+ i64val = va_arg(args, ssize_t);
+ conv = *fmt;
+ break;
+ }
+
+ if (strchr(uint_conv, *fmt)) {
+ u64val = va_arg(args, size_t);
+ conv = *fmt;
+ break;
+ }
+
+ va_end(args);
+ return -1;
+
+ case 'd':
+ case 'i':
+ i64val = va_arg(args, int);
+ conv = *fmt;
+ break;
+
+ default:
+ if (strchr(uint_conv, *fmt)) {
+ u64val = va_arg(args, unsigned int);
+ conv = *fmt;
+ break;
+ }
+
+ va_end(args);
+ return -1;
+ }
+
+ switch (conv) {
+ case 0:
+ continue;
+
+ case 'd':
+ case 'i':
+ i64tostr(dest + ret, i64val);
+ ret += strlen(dest + ret);
+ break;
+
+ case 'o':
+ u64tostr(dest + ret, u64val, 8, 0);
+ ret += strlen(dest + ret);
+ break;
+
+ case 'u':
+ u64tostr(dest + ret, u64val, 10, 0);
+ ret += strlen(dest + ret);
+ break;
+
+ case 'x':
+ u64tostr(dest + ret, u64val, 16, 0);
+ ret += strlen(dest + ret);
+ break;
+
+ case 'X':
+ u64tostr(dest + ret, u64val, 16, 1);
+ ret += strlen(dest + ret);
+ break;
+
+ default:
+ va_end(args);
+ return -1;
+ }
+ }
+
+ va_end(args);
+ dest[ret++] = '\0';
+ return ret;
+}
+
+int sprintf(char *dest, const char *fmt, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
+ ret = vsprintf(dest, fmt, args);
+ va_end(args);
+ return ret;
+}
+
void *tst_heap_alloc_aligned(size_t size, size_t align)
{
uintptr_t addr = (uintptr_t)heap_end;
--
2.44.0
* [LTP] [PATCH 3/9] KVM: Implement printf-like formatting for tst_res() and tst_brk()
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
2024-04-30 12:21 ` [LTP] [PATCH 1/9] KVM: Disable EBP register use in 32bit code Martin Doucha
2024-04-30 12:21 ` [LTP] [PATCH 2/9] KVM: Implement strchr() and basic sprintf() Martin Doucha
@ 2024-04-30 12:21 ` Martin Doucha
2024-04-30 12:22 ` [LTP] [PATCH 4/9] kvm_svm02: Fix saved stack segment index value Martin Doucha
` (5 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:21 UTC (permalink / raw)
To: ltp
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/include/kvm_guest.h | 12 +++++---
testcases/kernel/kvm/lib_guest.c | 36 +++++++++++++++++++++---
2 files changed, 40 insertions(+), 8 deletions(-)
diff --git a/testcases/kernel/kvm/include/kvm_guest.h b/testcases/kernel/kvm/include/kvm_guest.h
index f19bacc39..080d0ac2b 100644
--- a/testcases/kernel/kvm/include/kvm_guest.h
+++ b/testcases/kernel/kvm/include/kvm_guest.h
@@ -68,12 +68,16 @@ void kvm_exit(void) __attribute__((noreturn));
void kvm_yield(void);
void tst_res_(const char *file, const int lineno, int result,
- const char *message);
-#define tst_res(result, msg) tst_res_(__FILE__, __LINE__, (result), (msg))
+ const char *fmt, ...)
+ __attribute__ ((format (printf, 4, 5)));
+#define tst_res(result, fmt, ...) \
+ tst_res_(__FILE__, __LINE__, (result), (fmt), ##__VA_ARGS__)
void tst_brk_(const char *file, const int lineno, int result,
- const char *message) __attribute__((noreturn));
-#define tst_brk(result, msg) tst_brk_(__FILE__, __LINE__, (result), (msg))
+ const char *fmt, ...) __attribute__((noreturn))
+ __attribute__ ((format (printf, 4, 5)));
+#define tst_brk(result, fmt, ...) \
+ tst_brk_(__FILE__, __LINE__, (result), (fmt), ##__VA_ARGS__)
/*
* Send asynchronous notification to host without stopping VM execution and
diff --git a/testcases/kernel/kvm/lib_guest.c b/testcases/kernel/kvm/lib_guest.c
index 73a76ccb1..2e3e9cb6e 100644
--- a/testcases/kernel/kvm/lib_guest.c
+++ b/testcases/kernel/kvm/lib_guest.c
@@ -443,6 +443,7 @@ static void tst_fatal_error(const char *file, const int lineno,
test_result->result = TBROK;
test_result->lineno = lineno;
test_result->file_addr = (uintptr_t)file;
+ /* Avoid sprintf() here in case of bugs */
strcpy(test_result->message, message);
strcat(test_result->message, " at address 0x");
ptr2hex(test_result->message + strlen(test_result->message), ip);
@@ -451,19 +452,46 @@ static void tst_fatal_error(const char *file, const int lineno,
}
void tst_res_(const char *file, const int lineno, int result,
- const char *message)
+ const char *fmt, ...)
{
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
test_result->result = result;
test_result->lineno = lineno;
test_result->file_addr = (uintptr_t)file;
- strcpy(test_result->message, message);
+ ret = vsprintf(test_result->message, fmt, args);
+ va_end(args);
+
+ if (ret < 0) {
+ tst_brk_(file, lineno, TBROK, "Invalid tst_res() format: %s",
+ fmt);
+ }
+
kvm_yield();
}
void tst_brk_(const char *file, const int lineno, int result,
- const char *message)
+ const char *fmt, ...)
{
- tst_res_(file, lineno, result, message);
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
+ test_result->result = result;
+ test_result->lineno = lineno;
+ test_result->file_addr = (uintptr_t)file;
+ ret = vsprintf(test_result->message, fmt, args);
+ va_end(args);
+
+ if (ret < 0) {
+ test_result->result = TBROK;
+ strcpy(test_result->message, "Invalid tst_brk() format: ");
+ strcat(test_result->message, fmt);
+ }
+
+ kvm_yield();
kvm_exit();
}
--
2.44.0
* [LTP] [PATCH 4/9] kvm_svm02: Fix saved stack segment index value
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
` (2 preceding siblings ...)
2024-04-30 12:21 ` [LTP] [PATCH 3/9] KVM: Implement printf-like formatting for tst_res() and tst_brk() Martin Doucha
@ 2024-04-30 12:22 ` Martin Doucha
2024-04-30 12:22 ` [LTP] [PATCH 5/9] kvm_find_free_descriptor(): Skip descriptor 0 Martin Doucha
` (4 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:22 UTC (permalink / raw)
To: ltp
The VMCB init helper function expects a GDT entry index, while the VMCB
segment selector values are byte offsets. Shift the byte offset
of the saved stack segment selector to get the correct GDT index.
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/kvm_svm02.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/testcases/kernel/kvm/kvm_svm02.c b/testcases/kernel/kvm/kvm_svm02.c
index 5d2e2ce37..f72fb3812 100644
--- a/testcases/kernel/kvm/kvm_svm02.c
+++ b/testcases/kernel/kvm/kvm_svm02.c
@@ -96,7 +96,7 @@ void main(void)
vmsave_buf = kvm_alloc_vmcb();
/* Save allocated stack for later VM reinit */
- ss = vcpu->vmcb->ss.selector;
+ ss = vcpu->vmcb->ss.selector >> 3;
rsp = vcpu->vmcb->rsp;
/* Load partial state from vmsave_buf and save it to vcpu->vmcb */
--
2.44.0
* [LTP] [PATCH 5/9] kvm_find_free_descriptor(): Skip descriptor 0
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
` (3 preceding siblings ...)
2024-04-30 12:22 ` [LTP] [PATCH 4/9] kvm_svm02: Fix saved stack segment index value Martin Doucha
@ 2024-04-30 12:22 ` Martin Doucha
2024-04-30 12:22 ` [LTP] [PATCH 6/9] KVM: Add system control MSR constants Martin Doucha
` (3 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:22 UTC (permalink / raw)
To: ltp
GDT/LDT descriptor 0 should always be empty. Start the search for a free
descriptor table entry at index 1.
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/lib_x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/testcases/kernel/kvm/lib_x86.c b/testcases/kernel/kvm/lib_x86.c
index 3e6656f11..1c0e629c3 100644
--- a/testcases/kernel/kvm/lib_x86.c
+++ b/testcases/kernel/kvm/lib_x86.c
@@ -174,7 +174,7 @@ int kvm_find_free_descriptor(const struct segment_descriptor *table,
const struct segment_descriptor *ptr;
size_t i;
- for (i = 0, ptr = table; i < size; i++, ptr++) {
+ for (i = 1, ptr = table + 1; i < size; i++, ptr++) {
if (!(ptr->flags_lo & SEGFLAG_PRESENT))
return i;
--
2.44.0
* [LTP] [PATCH 6/9] KVM: Add system control MSR constants
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
` (4 preceding siblings ...)
2024-04-30 12:22 ` [LTP] [PATCH 5/9] kvm_find_free_descriptor(): Skip descriptor 0 Martin Doucha
@ 2024-04-30 12:22 ` Martin Doucha
2024-04-30 12:22 ` [LTP] [PATCH 7/9] KVM: Add VMSAVE/VMLOAD functions to x86 SVM library Martin Doucha
` (2 subsequent siblings)
8 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:22 UTC (permalink / raw)
To: ltp
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/include/kvm_x86.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/testcases/kernel/kvm/include/kvm_x86.h b/testcases/kernel/kvm/include/kvm_x86.h
index bc36c0e0f..08d3f6759 100644
--- a/testcases/kernel/kvm/include/kvm_x86.h
+++ b/testcases/kernel/kvm/include/kvm_x86.h
@@ -68,7 +68,17 @@
/* Model-specific CPU register constants */
+#define MSR_SYSENTER_CS 0x174
+#define MSR_SYSENTER_ESP 0x175
+#define MSR_SYSENTER_EIP 0x176
#define MSR_EFER 0xc0000080
+#define MSR_STAR 0xc0000081
+#define MSR_LSTAR 0xc0000082
+#define MSR_CSTAR 0xc0000083
+#define MSR_SFMASK 0xc0000084
+#define MSR_FS_BASE 0xc0000100
+#define MSR_GS_BASE 0xc0000101
+#define MSR_KERNEL_GS_BASE 0xc0000102
#define MSR_VM_CR 0xc0010114
#define MSR_VM_HSAVE_PA 0xc0010117
--
2.44.0
* [LTP] [PATCH 7/9] KVM: Add VMSAVE/VMLOAD functions to x86 SVM library
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
` (5 preceding siblings ...)
2024-04-30 12:22 ` [LTP] [PATCH 6/9] KVM: Add system control MSR constants Martin Doucha
@ 2024-04-30 12:22 ` Martin Doucha
2024-05-07 14:57 ` Petr Vorel
2024-04-30 12:22 ` [LTP] [PATCH 8/9] KVM: Add functional test for VMSAVE/VMLOAD instructions Martin Doucha
2024-04-30 12:22 ` [LTP] [PATCH 9/9] KVM: Move kvm_pagefault01 to the end of KVM runfile Martin Doucha
8 siblings, 1 reply; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:22 UTC (permalink / raw)
To: ltp
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
testcases/kernel/kvm/include/kvm_x86_svm.h | 6 ++++++
testcases/kernel/kvm/kvm_svm02.c | 12 ++----------
testcases/kernel/kvm/lib_x86.c | 18 ++++++++++++++++++
3 files changed, 26 insertions(+), 10 deletions(-)
diff --git a/testcases/kernel/kvm/include/kvm_x86_svm.h b/testcases/kernel/kvm/include/kvm_x86_svm.h
index b4b1b80e2..73563ed2d 100644
--- a/testcases/kernel/kvm/include/kvm_x86_svm.h
+++ b/testcases/kernel/kvm/include/kvm_x86_svm.h
@@ -163,4 +163,10 @@ struct kvm_svm_vcpu *kvm_create_svm_vcpu(int (*guest_main)(void),
void kvm_svm_vmrun(struct kvm_svm_vcpu *cpu);
+/* Load FS, GS, TR and LDTR state from vmsave_buf */
+void kvm_svm_vmload(struct kvm_vmcb *buf);
+
+/* Save current FS, GS, TR and LDTR state to vmsave_buf */
+void kvm_svm_vmsave(struct kvm_vmcb *buf);
+
#endif /* KVM_X86_SVM_H_ */
diff --git a/testcases/kernel/kvm/kvm_svm02.c b/testcases/kernel/kvm/kvm_svm02.c
index f72fb3812..6914fdcba 100644
--- a/testcases/kernel/kvm/kvm_svm02.c
+++ b/testcases/kernel/kvm/kvm_svm02.c
@@ -33,22 +33,14 @@ static void *vmsave_buf;
/* Load FS, GS, TR and LDTR state from vmsave_buf */
static int guest_vmload(void)
{
- asm (
- "vmload %0\n"
- :
- : "a" (vmsave_buf)
- );
+ kvm_svm_vmload(vmsave_buf);
return 0;
}
/* Save current FS, GS, TR and LDTR state to vmsave_buf */
static int guest_vmsave(void)
{
- asm (
- "vmsave %0\n"
- :
- : "a" (vmsave_buf)
- );
+ kvm_svm_vmsave(vmsave_buf);
return 0;
}
diff --git a/testcases/kernel/kvm/lib_x86.c b/testcases/kernel/kvm/lib_x86.c
index 1c0e629c3..8db3abd3f 100644
--- a/testcases/kernel/kvm/lib_x86.c
+++ b/testcases/kernel/kvm/lib_x86.c
@@ -393,3 +393,21 @@ struct kvm_svm_vcpu *kvm_create_svm_vcpu(int (*guest_main)(void),
ret->vmcb = vmcb;
return ret;
}
+
+void kvm_svm_vmload(struct kvm_vmcb *buf)
+{
+ asm (
+ "vmload %0\n"
+ :
+ : "a" (buf)
+ );
+}
+
+void kvm_svm_vmsave(struct kvm_vmcb *buf)
+{
+ asm (
+ "vmsave %0\n"
+ :
+ : "a" (buf)
+ );
+}
--
2.44.0
* [LTP] [PATCH 8/9] KVM: Add functional test for VMSAVE/VMLOAD instructions
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
` (6 preceding siblings ...)
2024-04-30 12:22 ` [LTP] [PATCH 7/9] KVM: Add VMSAVE/VMLOAD functions to x86 SVM library Martin Doucha
@ 2024-04-30 12:22 ` Martin Doucha
2024-04-30 12:22 ` [LTP] [PATCH 9/9] KVM: Move kvm_pagefault01 to the end of KVM runfile Martin Doucha
8 siblings, 0 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:22 UTC (permalink / raw)
To: ltp
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
runtest/kvm | 1 +
testcases/kernel/kvm/.gitignore | 1 +
testcases/kernel/kvm/kvm_svm04.c | 307 +++++++++++++++++++++++++++++++
3 files changed, 309 insertions(+)
create mode 100644 testcases/kernel/kvm/kvm_svm04.c
diff --git a/runtest/kvm b/runtest/kvm
index 4094a21a8..0e1b2e555 100644
--- a/runtest/kvm
+++ b/runtest/kvm
@@ -2,3 +2,4 @@ kvm_pagefault01 kvm_pagefault01
kvm_svm01 kvm_svm01
kvm_svm02 kvm_svm02
kvm_svm03 kvm_svm03
+kvm_svm04 kvm_svm04
diff --git a/testcases/kernel/kvm/.gitignore b/testcases/kernel/kvm/.gitignore
index 9638a6fc7..661472cae 100644
--- a/testcases/kernel/kvm/.gitignore
+++ b/testcases/kernel/kvm/.gitignore
@@ -2,3 +2,4 @@
/kvm_svm01
/kvm_svm02
/kvm_svm03
+/kvm_svm04
diff --git a/testcases/kernel/kvm/kvm_svm04.c b/testcases/kernel/kvm/kvm_svm04.c
new file mode 100644
index 000000000..e69f0d4be
--- /dev/null
+++ b/testcases/kernel/kvm/kvm_svm04.c
@@ -0,0 +1,307 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2023 SUSE LLC <mdoucha@suse.cz>
+ */
+
+/*\
+ * [Description]
+ *
+ * Functional test for VMSAVE/VMLOAD instructions in KVM environment. Verify
+ * that both instructions save/load the CPU state according to CPU
+ * documentation.
+ */
+
+#include "kvm_test.h"
+
+#ifdef COMPILE_PAYLOAD
+#if defined(__i386__) || defined(__x86_64__)
+
+#include "kvm_x86_svm.h"
+
+static struct kvm_vmcb *src_vmcb, *dest_vmcb, *msr_vmcb;
+static struct kvm_sregs sregs_buf;
+
+static int check_descriptor(const char *name,
+ const struct kvm_vmcb_descriptor *data,
+ const struct kvm_vmcb_descriptor *exp)
+{
+ int ret = 0;
+
+ if (data->selector != exp->selector) {
+ tst_res(TFAIL, "%s.selector = %hx (expected %hx)",
+ name, data->selector, exp->selector);
+ ret = 1;
+ }
+
+ if (data->attrib != exp->attrib) {
+ tst_res(TFAIL, "%s.attrib = 0x%hx (expected 0x%hx)",
+ name, data->attrib, exp->attrib);
+ ret = 1;
+ }
+
+ if (data->limit != exp->limit) {
+ tst_res(TFAIL, "%s.limit = 0x%x (expected 0x%x)",
+ name, data->limit, exp->limit);
+ ret = 1;
+ }
+
+ if (data->base != exp->base) {
+ tst_res(TFAIL, "%s.base = 0x%llx (expected 0x%llx)",
+ name, data->base, exp->base);
+ ret = 1;
+ }
+
+ return ret;
+}
+
+static int check_value(const char *name, uint64_t val, uint64_t exp,
+ uint64_t backup, uint64_t reg, uint64_t nested_val)
+{
+ int ret = 0;
+
+ if (exp != backup) {
+ tst_res(TFAIL, "%s source was modified (0x%llx != 0x%llx)",
+ name, exp, backup);
+ ret = 1;
+ }
+
+ if (reg != exp) {
+ tst_res(TFAIL, "%s was not loaded (0x%llx != 0x%llx)",
+ name, reg, exp);
+ ret = 1;
+ }
+
+ if (val != exp) {
+ tst_res(TFAIL, "%s was not saved (0x%llx != 0x%llx)",
+ name, val, exp);
+ ret = 1;
+ }
+
+ if (val != nested_val) {
+ tst_res(TFAIL, "Inconsistent %s on VM exit (0x%llx != 0x%llx)",
+ name, val, nested_val);
+ ret = 1;
+ }
+
+ if (!ret)
+ tst_res(TPASS, "%s has correct value 0x%llx", name, val);
+
+ return ret;
+}
+
+static int vmsave_copy(void)
+{
+ kvm_svm_vmload(src_vmcb);
+ kvm_read_sregs(&sregs_buf);
+ msr_vmcb->star = kvm_rdmsr(MSR_STAR);
+ msr_vmcb->lstar = kvm_rdmsr(MSR_LSTAR);
+ msr_vmcb->cstar = kvm_rdmsr(MSR_CSTAR);
+ msr_vmcb->sfmask = kvm_rdmsr(MSR_SFMASK);
+ msr_vmcb->fs.base = kvm_rdmsr(MSR_FS_BASE);
+ msr_vmcb->gs.base = kvm_rdmsr(MSR_GS_BASE);
+ msr_vmcb->kernel_gs_base = kvm_rdmsr(MSR_KERNEL_GS_BASE);
+ msr_vmcb->sysenter_cs = kvm_rdmsr(MSR_SYSENTER_CS);
+ msr_vmcb->sysenter_esp = kvm_rdmsr(MSR_SYSENTER_ESP);
+ msr_vmcb->sysenter_eip = kvm_rdmsr(MSR_SYSENTER_EIP);
+ kvm_svm_vmsave(dest_vmcb);
+ return 0;
+}
+
+static int check_vmsave_result(struct kvm_vmcb *copy_vmcb,
+ struct kvm_vmcb *nested_vmcb)
+{
+ int ret = 0;
+
+ /* Nested VMCB is only compared to dest VMCB, bypass the check */
+ if (!nested_vmcb)
+ nested_vmcb = dest_vmcb;
+
+ ret = check_descriptor("FS", &dest_vmcb->fs, &src_vmcb->fs);
+ ret = check_value("FS.selector", dest_vmcb->fs.selector,
+ src_vmcb->fs.selector, copy_vmcb->fs.selector,
+ sregs_buf.fs, nested_vmcb->fs.selector) || ret;
+ ret = check_descriptor("GS", &dest_vmcb->gs, &src_vmcb->gs) || ret;
+ ret = check_value("GS.selector", dest_vmcb->gs.selector,
+ src_vmcb->gs.selector, copy_vmcb->gs.selector,
+ sregs_buf.gs, nested_vmcb->gs.selector) || ret;
+ ret = check_descriptor("LDTR", &dest_vmcb->ldtr, &src_vmcb->ldtr) ||
+ ret;
+ ret = check_descriptor("TR", &dest_vmcb->tr, &src_vmcb->tr) || ret;
+ ret = check_value("STAR", dest_vmcb->star, src_vmcb->star,
+ copy_vmcb->star, msr_vmcb->star, nested_vmcb->star) || ret;
+ ret = check_value("LSTAR", dest_vmcb->lstar, src_vmcb->lstar,
+ copy_vmcb->lstar, msr_vmcb->lstar, nested_vmcb->lstar) || ret;
+ ret = check_value("CSTAR", dest_vmcb->cstar, src_vmcb->cstar,
+ copy_vmcb->cstar, msr_vmcb->cstar, nested_vmcb->cstar) || ret;
+ ret = check_value("SFMASK", dest_vmcb->sfmask, src_vmcb->sfmask,
+ copy_vmcb->sfmask, msr_vmcb->sfmask, nested_vmcb->sfmask) ||
+ ret;
+ ret = check_value("FS.base", dest_vmcb->fs.base, src_vmcb->fs.base,
+ copy_vmcb->fs.base, msr_vmcb->fs.base, nested_vmcb->fs.base) ||
+ ret;
+ ret = check_value("GS.base", dest_vmcb->gs.base, src_vmcb->gs.base,
+ copy_vmcb->gs.base, msr_vmcb->gs.base, nested_vmcb->gs.base) ||
+ ret;
+ ret = check_value("KernelGSBase", dest_vmcb->kernel_gs_base,
+ src_vmcb->kernel_gs_base, copy_vmcb->kernel_gs_base,
+ msr_vmcb->kernel_gs_base, nested_vmcb->kernel_gs_base) || ret;
+ ret = check_value("Sysenter_CS", dest_vmcb->sysenter_cs,
+ src_vmcb->sysenter_cs, copy_vmcb->sysenter_cs,
+ msr_vmcb->sysenter_cs, nested_vmcb->sysenter_cs) || ret;
+ ret = check_value("Sysenter_ESP", dest_vmcb->sysenter_esp,
+ src_vmcb->sysenter_esp, copy_vmcb->sysenter_esp,
+ msr_vmcb->sysenter_esp, nested_vmcb->sysenter_esp) || ret;
+ ret = check_value("Sysenter_EIP", dest_vmcb->sysenter_eip,
+ src_vmcb->sysenter_eip, copy_vmcb->sysenter_eip,
+ msr_vmcb->sysenter_eip, nested_vmcb->sysenter_eip) || ret;
+
+ return ret;
+}
+
+static int create_segment_descriptor(uint64_t baseaddr, uint32_t limit,
+ unsigned int flags)
+{
+ int ret = kvm_find_free_descriptor(kvm_gdt, KVM_GDT_SIZE);
+
+ if (ret < 0)
+ tst_brk(TBROK, "Descriptor table is full");
+
+ kvm_set_segment_descriptor(kvm_gdt + ret, baseaddr, limit, flags);
+ return ret;
+}
+
+static void dirty_vmcb(struct kvm_vmcb *buf)
+{
+ buf->fs.selector = 0x60;
+ buf->fs.attrib = SEGTYPE_RWDATA | SEGFLAG_PRESENT;
+ buf->fs.limit = 0xffff;
+ buf->fs.base = 0xfff000;
+ buf->gs.selector = 0x68;
+ buf->gs.attrib = SEGTYPE_RWDATA | SEGFLAG_PRESENT;
+ buf->gs.limit = 0xffff;
+ buf->gs.base = 0xfff000;
+ buf->ldtr.selector = 0x70;
+ buf->ldtr.attrib = SEGTYPE_LDT | SEGFLAG_PRESENT;
+ buf->ldtr.limit = 0xffff;
+ buf->ldtr.base = 0xfff000;
+ buf->tr.selector = 0x78;
+ buf->tr.attrib = SEGTYPE_TSS | SEGFLAG_PRESENT;
+ buf->tr.limit = 0xffff;
+ buf->tr.base = 0xfff000;
+ buf->star = 0xffff;
+ buf->lstar = 0xffff;
+ buf->cstar = 0xffff;
+ buf->sfmask = 0xffff;
+ buf->fs.base = 0xffff;
+ buf->gs.base = 0xffff;
+ buf->kernel_gs_base = 0xffff;
+ buf->sysenter_cs = 0xffff;
+ buf->sysenter_esp = 0xffff;
+ buf->sysenter_eip = 0xffff;
+}
+
+void main(void)
+{
+ uint16_t ss;
+ uint64_t rsp;
+ struct kvm_svm_vcpu *vcpu;
+ int data_seg1, data_seg2, ldt_seg, task_seg;
+ struct segment_descriptor *ldt;
+ struct kvm_vmcb *backup_vmcb, *zero_vmcb;
+ unsigned int ldt_size = KVM_GDT_SIZE*sizeof(struct segment_descriptor);
+
+ kvm_init_svm();
+
+ src_vmcb = kvm_alloc_vmcb();
+ dest_vmcb = kvm_alloc_vmcb();
+ msr_vmcb = kvm_alloc_vmcb();
+ backup_vmcb = kvm_alloc_vmcb();
+ zero_vmcb = kvm_alloc_vmcb();
+
+ vcpu = kvm_create_svm_vcpu(vmsave_copy, 1);
+ kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMLOAD, 0);
+ kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMSAVE, 0);
+ /* Save allocated stack for later VM reinit */
+ ss = vcpu->vmcb->ss.selector >> 3;
+ rsp = vcpu->vmcb->rsp;
+
+ ldt = tst_heap_alloc_aligned(ldt_size, 8);
+ memset(ldt, 0, ldt_size);
+ data_seg1 = create_segment_descriptor(0xda7a1000, 0x1000,
+ SEGTYPE_RODATA | SEGFLAG_PRESENT);
+ data_seg2 = create_segment_descriptor(0xda7a2000, 2,
+ SEGTYPE_RWDATA | SEGFLAG_PRESENT | SEGFLAG_PAGE_LIMIT);
+ ldt_seg = create_segment_descriptor((uintptr_t)ldt, ldt_size,
+ SEGTYPE_LDT | SEGFLAG_PRESENT);
+ task_seg = create_segment_descriptor(0x7a53000, 0x1000,
+ SEGTYPE_TSS | SEGFLAG_PRESENT);
+ kvm_vmcb_copy_gdt_descriptor(&src_vmcb->fs, data_seg1);
+ kvm_vmcb_copy_gdt_descriptor(&src_vmcb->gs, data_seg2);
+ kvm_vmcb_copy_gdt_descriptor(&src_vmcb->ldtr, ldt_seg);
+ kvm_vmcb_copy_gdt_descriptor(&src_vmcb->tr, task_seg);
+
+ src_vmcb->star = 0x5742;
+ src_vmcb->lstar = 0x15742;
+ src_vmcb->cstar = 0xc5742;
+ src_vmcb->sfmask = 0xf731;
+ src_vmcb->fs.base = 0xf000;
+ src_vmcb->gs.base = 0x10000;
+ src_vmcb->kernel_gs_base = 0x20000;
+ src_vmcb->sysenter_cs = 0x595c5;
+ src_vmcb->sysenter_esp = 0x595e50;
+ src_vmcb->sysenter_eip = 0x595e10;
+
+ memcpy(backup_vmcb, src_vmcb, sizeof(struct kvm_vmcb));
+ tst_res(TINFO, "VMLOAD/VMSAVE non-zero values");
+ vmsave_copy();
+ check_vmsave_result(backup_vmcb, NULL);
+
+ memset(src_vmcb, 0, sizeof(struct kvm_vmcb));
+ tst_res(TINFO, "VMLOAD/VMSAVE zero values");
+ dirty_vmcb(dest_vmcb);
+ vmsave_copy();
+ check_vmsave_result(zero_vmcb, NULL);
+
+ memcpy(src_vmcb, backup_vmcb, sizeof(struct kvm_vmcb));
+ tst_res(TINFO, "Nested VMLOAD/VMSAVE non-zero values");
+ dirty_vmcb(vcpu->vmcb);
+ memset(dest_vmcb, 0, sizeof(struct kvm_vmcb));
+ kvm_svm_vmrun(vcpu);
+
+ if (vcpu->vmcb->exitcode != SVM_EXIT_HLT)
+ tst_brk(TBROK, "Nested VM exited unexpectedly");
+
+ check_vmsave_result(backup_vmcb, vcpu->vmcb);
+
+ memset(src_vmcb, 0, sizeof(struct kvm_vmcb));
+ tst_res(TINFO, "Nested VMLOAD/VMSAVE zero values");
+ kvm_init_guest_vmcb(vcpu->vmcb, 1, ss, (void *)rsp, vmsave_copy);
+ kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMLOAD, 0);
+ kvm_vmcb_set_intercept(vcpu->vmcb, SVM_INTERCEPT_VMSAVE, 0);
+ dirty_vmcb(vcpu->vmcb);
+ kvm_svm_vmrun(vcpu);
+
+ if (vcpu->vmcb->exitcode != SVM_EXIT_HLT)
+ tst_brk(TBROK, "Nested VM exited unexpectedly");
+
+ check_vmsave_result(zero_vmcb, vcpu->vmcb);
+}
+
+#else /* defined(__i386__) || defined(__x86_64__) */
+TST_TEST_TCONF("Test supported only on x86");
+#endif /* defined(__i386__) || defined(__x86_64__) */
+
+#else /* COMPILE_PAYLOAD */
+
+static struct tst_test test = {
+ .test_all = tst_kvm_run,
+ .setup = tst_kvm_setup,
+ .cleanup = tst_kvm_cleanup,
+ .supported_archs = (const char *const []) {
+ "x86_64",
+ "x86",
+ NULL
+ },
+};
+
+#endif /* COMPILE_PAYLOAD */
--
2.44.0
* [LTP] [PATCH 9/9] KVM: Move kvm_pagefault01 to the end of KVM runfile
2024-04-30 12:21 [LTP] [PATCH 0/9] Add functional test for AMD VMSAVE/VMLOAD instructions Martin Doucha
` (7 preceding siblings ...)
2024-04-30 12:22 ` [LTP] [PATCH 8/9] KVM: Add functional test for VMSAVE/VMLOAD instructions Martin Doucha
@ 2024-04-30 12:22 ` Martin Doucha
2024-05-06 4:34 ` Petr Vorel
2024-05-07 14:59 ` Petr Vorel
8 siblings, 2 replies; 18+ messages in thread
From: Martin Doucha @ 2024-04-30 12:22 UTC (permalink / raw)
To: ltp
The kvm_pagefault01 test reloads the KVM module with special parameters
which may change kernel code paths and hide open bugs from the other KVM
tests. Move the test to the end of the KVM runfile to prevent
interference.
Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
runtest/kvm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/runtest/kvm b/runtest/kvm
index 0e1b2e555..74a517add 100644
--- a/runtest/kvm
+++ b/runtest/kvm
@@ -1,5 +1,6 @@
-kvm_pagefault01 kvm_pagefault01
kvm_svm01 kvm_svm01
kvm_svm02 kvm_svm02
kvm_svm03 kvm_svm03
kvm_svm04 kvm_svm04
+# Tests below may interfere with bug reproducibility
+kvm_pagefault01 kvm_pagefault01
--
2.44.0