public inbox for ltp@lists.linux.it
* [LTP] [PATCH 00/10] Basic KVM test for Intel VMX
@ 2025-01-21 16:44 Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 01/10] kvm_read_sregs(): Read the TR segment register Martin Doucha
                   ` (9 more replies)
  0 siblings, 10 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Since we already have some functional and CVE tests for AMD SVM, we should
also add tests for Intel VMX. This patchset adds the necessary support code
for creating nested VMs on Intel VMX and a simple functional test
for the VMREAD/VMWRITE instructions, similar to kvm_svm04.

The changes include refactoring of existing tests, so let's merge this
patchset after the upcoming LTP release.

Martin Doucha (10):
  kvm_read_sregs(): Read the TR segment register
  kvm_svm_vmrun(): Simplify VM state save/load with macros
  kvm_x86: Define CR0 flags and additional CPUID/MSR constants
  KVM: Implement helper functions for setting x86 control registers
  KVM: Add memcmp() helper function
  KVM: Add helper functions for nested Intel VMX virtualization
  lib: Add helper function for reloading kernel modules
  lib: Add helper function for reading boolean sysconf files
  kvm_pagefault01: Use library functions to reload KVM modules
  KVM: Add functional test for emulated VMREAD/VMWRITE instructions

 include/tst_module.h                       |   3 +
 include/tst_sys_conf.h                     |   2 +
 lib/tst_module.c                           |  28 ++
 lib/tst_sys_conf.c                         |  35 ++
 testcases/kernel/kvm/bootstrap_x86.S       | 123 ++++-
 testcases/kernel/kvm/bootstrap_x86_64.S    | 153 ++++--
 testcases/kernel/kvm/include/kvm_guest.h   |   2 +
 testcases/kernel/kvm/include/kvm_x86.h     |  19 +-
 testcases/kernel/kvm/include/kvm_x86_vmx.h | 201 ++++++++
 testcases/kernel/kvm/kvm_pagefault01.c     |  59 +--
 testcases/kernel/kvm/kvm_vmx01.c           | 282 +++++++++++
 testcases/kernel/kvm/lib_guest.c           |  12 +
 testcases/kernel/kvm/lib_x86.c             | 515 +++++++++++++++++++++
 13 files changed, 1322 insertions(+), 112 deletions(-)
 create mode 100644 testcases/kernel/kvm/include/kvm_x86_vmx.h
 create mode 100644 testcases/kernel/kvm/kvm_vmx01.c

-- 
2.47.0


-- 
Mailing list info: https://lists.linux.it/listinfo/ltp

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [LTP] [PATCH 01/10] kvm_read_sregs(): Read the TR segment register
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 02/10] kvm_svm_vmrun(): Simplify VM state save/load with macros Martin Doucha
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/bootstrap_x86.S    | 2 ++
 testcases/kernel/kvm/bootstrap_x86_64.S | 2 ++
 testcases/kernel/kvm/include/kvm_x86.h  | 2 +-
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/testcases/kernel/kvm/bootstrap_x86.S b/testcases/kernel/kvm/bootstrap_x86.S
index a39c6bea7..79d2218d3 100644
--- a/testcases/kernel/kvm/bootstrap_x86.S
+++ b/testcases/kernel/kvm/bootstrap_x86.S
@@ -215,6 +215,8 @@ kvm_read_sregs:
 	movw %ax, 8(%edi)
 	mov %ss, %ax
 	movw %ax, 10(%edi)
+	str %ax
+	movw %ax, 12(%edi)
 	pop %edi
 	ret
 
diff --git a/testcases/kernel/kvm/bootstrap_x86_64.S b/testcases/kernel/kvm/bootstrap_x86_64.S
index b02dd4d92..32170f7c9 100644
--- a/testcases/kernel/kvm/bootstrap_x86_64.S
+++ b/testcases/kernel/kvm/bootstrap_x86_64.S
@@ -319,6 +319,8 @@ kvm_read_sregs:
 	movw %ax, 8(%rdi)
 	mov %ss, %ax
 	movw %ax, 10(%rdi)
+	str %ax
+	movw %ax, 12(%rdi)
 	retq
 
 handle_interrupt:
diff --git a/testcases/kernel/kvm/include/kvm_x86.h b/testcases/kernel/kvm/include/kvm_x86.h
index 08d3f6759..f99fedbca 100644
--- a/testcases/kernel/kvm/include/kvm_x86.h
+++ b/testcases/kernel/kvm/include/kvm_x86.h
@@ -178,7 +178,7 @@ struct kvm_cregs {
 };
 
 struct kvm_sregs {
-	uint16_t cs, ds, es, fs, gs, ss;
+	uint16_t cs, ds, es, fs, gs, ss, tr;
 };
 
 struct kvm_regs64 {
-- 
2.47.0



* [LTP] [PATCH 02/10] kvm_svm_vmrun(): Simplify VM state save/load with macros
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 01/10] kvm_read_sregs(): Read the TR segment register Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 03/10] kvm_x86: Define CR0 flags and additional CPUID/MSR constants Martin Doucha
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/bootstrap_x86.S    | 57 +++++++++-----
 testcases/kernel/kvm/bootstrap_x86_64.S | 99 +++++++++++++++----------
 2 files changed, 98 insertions(+), 58 deletions(-)

diff --git a/testcases/kernel/kvm/bootstrap_x86.S b/testcases/kernel/kvm/bootstrap_x86.S
index 79d2218d3..f08282461 100644
--- a/testcases/kernel/kvm/bootstrap_x86.S
+++ b/testcases/kernel/kvm/bootstrap_x86.S
@@ -361,6 +361,34 @@ kvm_svm_guest_entry:
 1:	hlt
 	jmp 1b
 
+/* vcpu structure address must be in %rdi */
+.macro load_vcpu_regs
+	movl 0x04(%edi), %eax
+	movl 0x0c(%edi), %ebx
+	movl 0x14(%edi), %ecx
+	movl 0x1c(%edi), %edx
+	/* save %edi last */
+	movl 0x2c(%edi), %esi
+	movl 0x34(%edi), %ebp
+	/* skip %esp */
+	movl 0x24(%edi), %edi
+.endm
+
+/* vcpu structure address must be on top of the stack */
+.macro save_vcpu_regs
+	push %edi
+	movl 4(%esp), %edi
+	movl %eax, 0x04(%edi)
+	movl %ebx, 0x0c(%edi)
+	movl %ecx, 0x14(%edi)
+	movl %edx, 0x1c(%edi)
+	pop %eax
+	movl %eax, 0x24(%edi)
+	movl %esi, 0x2c(%edi)
+	movl %ebp, 0x34(%edi)
+	/* skip %esp */
+.endm
+
 .global kvm_svm_vmrun
 kvm_svm_vmrun:
 	push %edi
@@ -377,17 +405,11 @@ kvm_svm_vmrun:
 	vmsave
 	push %eax
 
-	/* Load guest registers */
 	push %edi
-	movl (%edi), %eax
-	/* %eax is loaded by vmrun from VMCB */
-	movl 0x0c(%edi), %ebx
-	movl 0x14(%edi), %ecx
-	movl 0x1c(%edi), %edx
-	movl 0x2c(%edi), %esi
-	movl 0x34(%edi), %ebp
-	/* %esp is loaded by vmrun from VMCB */
-	movl 0x24(%edi), %edi
+	load_vcpu_regs
+	/* %eax = vcpu->vmcb; */
+	movl (%esp), %eax
+	movl (%eax), %eax
 
 	vmload
 	vmrun
@@ -395,8 +417,9 @@ kvm_svm_vmrun:
 
 	/* Clear guest register buffer */
 	push %edi
+	push %eax
 	push %ecx
-	movl 8(%esp), %edi
+	movl 12(%esp), %edi
 	addl $4, %edi
 	xorl %eax, %eax
 	mov $32, %ecx
@@ -404,17 +427,13 @@ kvm_svm_vmrun:
 	cld
 	rep stosl
 	popfl
-
-	/* Save guest registers */
 	pop %ecx
 	pop %eax
 	pop %edi
-	movl %ebx, 0x0c(%edi)
-	movl %ecx, 0x14(%edi)
-	movl %edx, 0x1c(%edi)
-	movl %eax, 0x24(%edi)
-	movl %esi, 0x2c(%edi)
-	movl %ebp, 0x34(%edi)
+
+	save_vcpu_regs
+	pop %edi
+
 	/* Copy %eax and %esp from VMCB */
 	movl (%edi), %esi
 	movl 0x5f8(%esi), %eax
diff --git a/testcases/kernel/kvm/bootstrap_x86_64.S b/testcases/kernel/kvm/bootstrap_x86_64.S
index 32170f7c9..1e0a2952d 100644
--- a/testcases/kernel/kvm/bootstrap_x86_64.S
+++ b/testcases/kernel/kvm/bootstrap_x86_64.S
@@ -484,35 +484,16 @@ kvm_svm_guest_entry:
 1:	hlt
 	jmp 1b
 
-.global kvm_svm_vmrun
-kvm_svm_vmrun:
-	pushq %rbx
-	pushq %rbp
-	pushq %r12
-	pushq %r13
-	pushq %r14
-	pushq %r15
-
-	clgi
-
-	/* Save full host state */
-	movq $MSR_VM_HSAVE_PA, %rcx
-	rdmsr
-	shlq $32, %rdx
-	orq %rdx, %rax
-	vmsave
-	pushq %rax
-
-	/* Load guest registers */
-	pushq %rdi
-	movq (%rdi), %rax
-	/* %rax is loaded by vmrun from VMCB */
+/* vcpu structure address must be in %rdi */
+.macro load_vcpu_regs
+	movq 0x08(%rdi), %rax
 	movq 0x10(%rdi), %rbx
 	movq 0x18(%rdi), %rcx
 	movq 0x20(%rdi), %rdx
+	/* load %rdi last */
 	movq 0x30(%rdi), %rsi
 	movq 0x38(%rdi), %rbp
-	/* %rsp is loaded by vmrun from VMCB */
+	/* skip %rsp */
 	movq 0x48(%rdi), %r8
 	movq 0x50(%rdi), %r9
 	movq 0x58(%rdi), %r10
@@ -522,21 +503,21 @@ kvm_svm_vmrun:
 	movq 0x78(%rdi), %r14
 	movq 0x80(%rdi), %r15
 	movq 0x28(%rdi), %rdi
+.endm
 
-	vmload
-	vmrun
-	vmsave
-
-	/* Save guest registers */
-	movq %rdi, %rax
-	popq %rdi
+/* vcpu structure address must be on top of the stack */
+.macro save_vcpu_regs
+	pushq %rdi
+	movq 8(%rsp), %rdi
+	movq %rax, 0x08(%rdi)
 	movq %rbx, 0x10(%rdi)
 	movq %rcx, 0x18(%rdi)
 	movq %rdx, 0x20(%rdi)
-	/* %rax contains guest %rdi */
+	popq %rax
 	movq %rax, 0x28(%rdi)
 	movq %rsi, 0x30(%rdi)
 	movq %rbp, 0x38(%rdi)
+	/* skip %rsp */
 	movq %r8,  0x48(%rdi)
 	movq %r9,  0x50(%rdi)
 	movq %r10, 0x58(%rdi)
@@ -545,6 +526,52 @@ kvm_svm_vmrun:
 	movq %r13, 0x70(%rdi)
 	movq %r14, 0x78(%rdi)
 	movq %r15, 0x80(%rdi)
+.endm
+
+.macro push_local
+	pushq %rbx
+	pushq %rbp
+	pushq %r12
+	pushq %r13
+	pushq %r14
+	pushq %r15
+.endm
+
+.macro pop_local
+	popq %r15
+	popq %r14
+	popq %r13
+	popq %r12
+	popq %rbp
+	popq %rbx
+.endm
+
+.global kvm_svm_vmrun
+kvm_svm_vmrun:
+	push_local
+	clgi
+
+	/* Save full host state */
+	movq $MSR_VM_HSAVE_PA, %rcx
+	rdmsr
+	shlq $32, %rdx
+	orq %rdx, %rax
+	vmsave
+	pushq %rax
+
+	pushq %rdi
+	load_vcpu_regs
+	/* %rax = vcpu->vmcb; */
+	movq (%rsp), %rax
+	movq (%rax), %rax
+
+	vmload
+	vmrun
+	vmsave
+
+	save_vcpu_regs
+	popq %rdi
+
 	/* copy guest %rax and %rsp from VMCB*/
 	movq (%rdi), %rsi
 	movq 0x5f8(%rsi), %rax
@@ -557,13 +584,7 @@ kvm_svm_vmrun:
 	vmload
 
 	stgi
-
-	popq %r15
-	popq %r14
-	popq %r13
-	popq %r12
-	popq %rbp
-	popq %rbx
+	pop_local
 	retq
 
 .section .bss.pgtables, "aw", @nobits
-- 
2.47.0



* [LTP] [PATCH 03/10] kvm_x86: Define CR0 flags and additional CPUID/MSR constants
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 01/10] kvm_read_sregs(): Read the TR segment register Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 02/10] kvm_svm_vmrun(): Simplify VM state save/load with macros Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 04/10] KVM: Implement helper functions for setting x86 control registers Martin Doucha
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/include/kvm_x86.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/testcases/kernel/kvm/include/kvm_x86.h b/testcases/kernel/kvm/include/kvm_x86.h
index f99fedbca..c782a64ec 100644
--- a/testcases/kernel/kvm/include/kvm_x86.h
+++ b/testcases/kernel/kvm/include/kvm_x86.h
@@ -62,12 +62,14 @@
 
 
 /* CPUID constants */
+#define CPUID_GET_MODEL_INFO 0x1
 #define CPUID_GET_INPUT_RANGE 0x80000000
 #define CPUID_GET_EXT_FEATURES 0x80000001
 #define CPUID_GET_SVM_FEATURES 0x8000000a
 
 
 /* Model-specific CPU register constants */
+#define MSR_IA32_FEATURE_CONTROL 0x3a
 #define MSR_SYSENTER_CS 0x174
 #define MSR_SYSENTER_ESP 0x175
 #define MSR_SYSENTER_EIP 0x176
@@ -95,6 +97,18 @@
 #define VM_CR_SVMDIS (1 << 4)
 
 /* Control register constants */
+#define CR0_PE (1 << 0)
+#define CR0_MP (1 << 1)
+#define CR0_EM (1 << 2)
+#define CR0_TS (1 << 3)
+#define CR0_ET (1 << 4)
+#define CR0_NE (1 << 5)
+#define CR0_WP (1 << 16)
+#define CR0_AM (1 << 18)
+#define CR0_NW (1 << 29)
+#define CR0_CD (1 << 30)
+#define CR0_PG (1 << 31)
+
 #define CR4_VME (1 << 0)
 #define CR4_PVI (1 << 1)
 #define CR4_TSD (1 << 2)
-- 
2.47.0



* [LTP] [PATCH 04/10] KVM: Implement helper functions for setting x86 control registers
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (2 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 03/10] kvm_x86: Define CR0 flags and additional CPUID/MSR constants Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 05/10] KVM: Add memcmp() helper function Martin Doucha
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/include/kvm_x86.h |  3 +++
 testcases/kernel/kvm/lib_x86.c         | 27 ++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/testcases/kernel/kvm/include/kvm_x86.h b/testcases/kernel/kvm/include/kvm_x86.h
index c782a64ec..296dc3859 100644
--- a/testcases/kernel/kvm/include/kvm_x86.h
+++ b/testcases/kernel/kvm/include/kvm_x86.h
@@ -221,6 +221,9 @@ unsigned int kvm_create_stack_descriptor(struct segment_descriptor *table,
 void kvm_get_cpuid(unsigned int eax, unsigned int ecx, struct kvm_cpuid *buf);
 void kvm_read_cregs(struct kvm_cregs *buf);
 void kvm_read_sregs(struct kvm_sregs *buf);
+void kvm_set_cr0(unsigned long val);
+void kvm_set_cr3(unsigned long val);
+void kvm_set_cr4(unsigned long val);
 uint64_t kvm_rdmsr(unsigned int msr);
 void kvm_wrmsr(unsigned int msr, uint64_t value);
 
diff --git a/testcases/kernel/kvm/lib_x86.c b/testcases/kernel/kvm/lib_x86.c
index 8db3abd3f..266d7195c 100644
--- a/testcases/kernel/kvm/lib_x86.c
+++ b/testcases/kernel/kvm/lib_x86.c
@@ -214,6 +214,33 @@ void kvm_get_cpuid(unsigned int eax, unsigned int ecx, struct kvm_cpuid *buf)
 	);
 }
 
+void kvm_set_cr0(unsigned long val)
+{
+	asm (
+		"mov %0, %%cr0\n"
+		:
+		: "r" (val)
+	);
+}
+
+void kvm_set_cr3(unsigned long val)
+{
+	asm (
+		"mov %0, %%cr3\n"
+		:
+		: "r" (val)
+	);
+}
+
+void kvm_set_cr4(unsigned long val)
+{
+	asm (
+		"mov %0, %%cr4\n"
+		:
+		: "r" (val)
+	);
+}
+
 uint64_t kvm_rdmsr(unsigned int msr)
 {
 	unsigned int ret_lo, ret_hi;
-- 
2.47.0



* [LTP] [PATCH 05/10] KVM: Add memcmp() helper function
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (3 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 04/10] KVM: Implement helper functions for setting x86 control registers Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 06/10] KVM: Add helper functions for nested Intel VMX virtualization Martin Doucha
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/include/kvm_guest.h |  2 ++
 testcases/kernel/kvm/lib_guest.c         | 12 ++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/testcases/kernel/kvm/include/kvm_guest.h b/testcases/kernel/kvm/include/kvm_guest.h
index 0eabfb9a0..3f3e2f16c 100644
--- a/testcases/kernel/kvm/include/kvm_guest.h
+++ b/testcases/kernel/kvm/include/kvm_guest.h
@@ -48,6 +48,8 @@ void *memset(void *dest, int val, size_t size);
 void *memzero(void *dest, size_t size);
 void *memcpy(void *dest, const void *src, size_t size);
 
+int memcmp(const void *a, const void *b, size_t length);
+
 char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 size_t strlen(const char *str);
diff --git a/testcases/kernel/kvm/lib_guest.c b/testcases/kernel/kvm/lib_guest.c
index 2e3e9cb6e..6f0b2824c 100644
--- a/testcases/kernel/kvm/lib_guest.c
+++ b/testcases/kernel/kvm/lib_guest.c
@@ -45,6 +45,18 @@ void *memcpy(void *dest, const void *src, size_t size)
 	return dest;
 }
 
+int memcmp(const void *a, const void *b, size_t length)
+{
+	const unsigned char *x = a, *y = b;
+
+	for (; length; x++, y++, length--) {
+		if (*x != *y)
+			return (int)*x - (int)*y;
+	}
+
+	return 0;
+}
+
 char *strcpy(char *dest, const char *src)
 {
 	char *ret = dest;
-- 
2.47.0



* [LTP] [PATCH 06/10] KVM: Add helper functions for nested Intel VMX virtualization
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (4 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 05/10] KVM: Add memcmp() helper function Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 07/10] lib: Add helper function for reloading kernel modules Martin Doucha
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/bootstrap_x86.S       |  64 +++
 testcases/kernel/kvm/bootstrap_x86_64.S    |  54 +++
 testcases/kernel/kvm/include/kvm_x86_vmx.h | 201 +++++++++
 testcases/kernel/kvm/lib_x86.c             | 488 +++++++++++++++++++++
 4 files changed, 807 insertions(+)
 create mode 100644 testcases/kernel/kvm/include/kvm_x86_vmx.h

diff --git a/testcases/kernel/kvm/bootstrap_x86.S b/testcases/kernel/kvm/bootstrap_x86.S
index f08282461..f19a9ea55 100644
--- a/testcases/kernel/kvm/bootstrap_x86.S
+++ b/testcases/kernel/kvm/bootstrap_x86.S
@@ -11,6 +11,9 @@
 
 .set MSR_VM_HSAVE_PA, 0xc0010117
 
+.set VMX_VMCS_HOST_RSP, 0x6c14
+.set VMX_VMCS_HOST_RIP, 0x6c16
+
 /*
  * This section will be allocated at address 0x1000 and
  * jumped to from the reset stub provided by kvm_run.
@@ -451,6 +454,67 @@ kvm_svm_vmrun:
 	pop %edi
 	ret
 
+.global kvm_vmx_vmlaunch
+kvm_vmx_vmlaunch:
+	push %edi
+	mov 8(%esp), %edi
+	push %ebx
+	push %esi
+	push %ebp
+	push %edi
+
+	mov $VMX_VMCS_HOST_RSP, %eax
+	vmwrite %esp, %eax
+	jna vmx_vmwrite_error
+	mov $VMX_VMCS_HOST_RIP, %eax
+	lea vmx_vm_exit, %ebx
+	vmwrite %ebx, %eax
+	jna vmx_vmwrite_error
+
+	load_vcpu_regs
+	vmlaunch
+	jmp vmx_vm_exit
+
+.global kvm_vmx_vmresume
+kvm_vmx_vmresume:
+	push %edi
+	mov 8(%esp), %edi
+	push %ebx
+	push %esi
+	push %ebp
+	push %edi
+
+	mov $VMX_VMCS_HOST_RSP, %eax
+	vmwrite %esp, %eax
+	jna vmx_vmwrite_error
+	mov $VMX_VMCS_HOST_RIP, %eax
+	lea vmx_vm_exit, %ebx
+	vmwrite %ebx, %eax
+	jna vmx_vmwrite_error
+
+	load_vcpu_regs
+	vmresume
+
+vmx_vm_exit:
+	jna vmx_vmentry_error
+	save_vcpu_regs
+	xorl %eax, %eax
+
+vmx_vm_ret:
+	pop %edi
+	pop %ebp
+	pop %esi
+	pop %ebx
+	pop %edi
+	ret
+
+vmx_vmwrite_error:
+	movl $2, %eax
+	jmp vmx_vm_ret
+
+vmx_vmentry_error:
+	movl $1, %eax
+	jmp vmx_vm_ret
 
 .section .bss.pgtables, "aw", @nobits
 .global kvm_pagetable
diff --git a/testcases/kernel/kvm/bootstrap_x86_64.S b/testcases/kernel/kvm/bootstrap_x86_64.S
index 1e0a2952d..d4b501280 100644
--- a/testcases/kernel/kvm/bootstrap_x86_64.S
+++ b/testcases/kernel/kvm/bootstrap_x86_64.S
@@ -12,6 +12,9 @@
 
 .set MSR_VM_HSAVE_PA, 0xc0010117
 
+.set VMX_VMCS_HOST_RSP, 0x6c14
+.set VMX_VMCS_HOST_RIP, 0x6c16
+
 /*
  * This section will be allocated at address 0x1000 and
  * jumped to from the reset stub provided by kvm_run.
@@ -587,6 +590,57 @@ kvm_svm_vmrun:
 	pop_local
 	retq
 
+.global kvm_vmx_vmlaunch
+kvm_vmx_vmlaunch:
+	push_local
+	pushq %rdi
+
+	mov $VMX_VMCS_HOST_RSP, %rax
+	vmwrite %rsp, %rax
+	jna vmx_vmwrite_error
+	mov $VMX_VMCS_HOST_RIP, %rax
+	lea vmx_vm_exit, %rbx
+	vmwrite %rbx, %rax
+	jna vmx_vmwrite_error
+
+	load_vcpu_regs
+	vmlaunch
+	jmp vmx_vm_exit
+
+.global kvm_vmx_vmresume
+kvm_vmx_vmresume:
+	push_local
+	pushq %rdi
+
+	movq $VMX_VMCS_HOST_RSP, %rax
+	vmwrite %rsp, %rax
+	jna vmx_vmwrite_error
+	movq $VMX_VMCS_HOST_RIP, %rax
+	lea vmx_vm_exit, %rbx
+	vmwrite %rbx, %rax
+	jna vmx_vmwrite_error
+
+	load_vcpu_regs
+	vmresume
+
+vmx_vm_exit:
+	jna vmx_vmentry_error
+	save_vcpu_regs
+	xorq %rax, %rax
+
+vmx_vm_ret:
+	popq %rdi
+	pop_local
+	retq
+
+vmx_vmwrite_error:
+	movq $2, %rax
+	jmp vmx_vm_ret
+
+vmx_vmentry_error:
+	movq $1, %rax
+	jmp vmx_vm_ret
+
 .section .bss.pgtables, "aw", @nobits
 .global kvm_pagetable
 kvm_pagetable:
diff --git a/testcases/kernel/kvm/include/kvm_x86_vmx.h b/testcases/kernel/kvm/include/kvm_x86_vmx.h
new file mode 100644
index 000000000..180a114e7
--- /dev/null
+++ b/testcases/kernel/kvm/include/kvm_x86_vmx.h
@@ -0,0 +1,201 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2024 SUSE LLC <mdoucha@suse.cz>
+ *
+ * x86-specific KVM helper functions and structures for Intel VMX
+ */
+
+#ifndef KVM_X86_VMX_H_
+#define KVM_X86_VMX_H_
+
+#include "kvm_x86.h"
+
+/* CPUID_GET_MODEL_INFO flags returned in ECX */
+#define CPUID_MODEL_VMX (1 << 5)
+#define CPUID_MODEL_SMX (1 << 6)
+
+#define MSR_IA32_VMX_BASIC 0x480
+#define MSR_IA32_VMX_PINX_MASK 0x481
+#define MSR_IA32_VMX_EXECCTL_MASK 0x482
+#define MSR_IA32_VMX_EXITCTL_MASK 0x483
+#define MSR_IA32_VMX_ENTRYCTL_MASK 0x484
+#define MSR_IA32_VMX_CR0_FIXED0 0x486
+#define MSR_IA32_VMX_CR0_FIXED1 0x487
+#define MSR_IA32_VMX_CR4_FIXED0 0x488
+#define MSR_IA32_VMX_CR4_FIXED1 0x489
+#define MSR_IA32_VMX_EXECCTL2_MASK 0x48b
+#define MSR_IA32_VMX_PINX_MASK2 0x48d
+#define MSR_IA32_VMX_EXECCTL_MASK2 0x48e
+#define MSR_IA32_VMX_EXITCTL_MASK2 0x48f
+#define MSR_IA32_VMX_ENTRYCTL_MASK2 0x490
+
+#define IA32FC_LOCK (1 << 0)
+#define IA32FC_VMXON_SMX (1 << 1)
+#define IA32FC_VMXON_NORMAL (1 << 2)
+
+#define IA32_VMXBASIC_USELESS_CTL_MASKS (1ULL << 55)
+
+#define VMX_VMCS_GUEST_ES	0x800
+#define VMX_VMCS_GUEST_CS	0x802
+#define VMX_VMCS_GUEST_SS	0x804
+#define VMX_VMCS_GUEST_DS	0x806
+#define VMX_VMCS_GUEST_FS	0x808
+#define VMX_VMCS_GUEST_GS	0x80a
+#define VMX_VMCS_GUEST_LDTR	0x80c
+#define VMX_VMCS_GUEST_TR	0x80e
+#define VMX_VMCS_GUEST_INTR	0x810
+#define VMX_VMCS_HOST_ES	0xc00
+#define VMX_VMCS_HOST_CS	0xc02
+#define VMX_VMCS_HOST_SS	0xc04
+#define VMX_VMCS_HOST_DS	0xc06
+#define VMX_VMCS_HOST_FS	0xc08
+#define VMX_VMCS_HOST_GS	0xc0a
+#define VMX_VMCS_HOST_TR	0xc0c
+
+#define VMX_VMCS_LINK_POINTER	0x2800
+
+#define VMX_VMCS_GUEST_ES_LIMIT		0x4800
+#define VMX_VMCS_GUEST_CS_LIMIT		0x4802
+#define VMX_VMCS_GUEST_SS_LIMIT		0x4804
+#define VMX_VMCS_GUEST_DS_LIMIT		0x4806
+#define VMX_VMCS_GUEST_FS_LIMIT		0x4808
+#define VMX_VMCS_GUEST_GS_LIMIT		0x480a
+#define VMX_VMCS_GUEST_LDTR_LIMIT	0x480c
+#define VMX_VMCS_GUEST_TR_LIMIT		0x480e
+#define VMX_VMCS_GUEST_GDTR_LIMIT	0x4810
+#define VMX_VMCS_GUEST_IDTR_LIMIT	0x4812
+#define VMX_VMCS_GUEST_ES_ACCESS	0x4814
+#define VMX_VMCS_GUEST_CS_ACCESS	0x4816
+#define VMX_VMCS_GUEST_SS_ACCESS	0x4818
+#define VMX_VMCS_GUEST_DS_ACCESS	0x481a
+#define VMX_VMCS_GUEST_FS_ACCESS	0x481c
+#define VMX_VMCS_GUEST_GS_ACCESS	0x481e
+#define VMX_VMCS_GUEST_LDTR_ACCESS	0x4820
+#define VMX_VMCS_GUEST_TR_ACCESS	0x4822
+#define VMX_VMCS_GUEST_INTR_STATE	0x4824
+#define VMX_VMCS_GUEST_ACT_STATE	0x4826
+#define VMX_VMCS_GUEST_SMBASE		0x4828
+#define VMX_VMCS_GUEST_SYSENTER_CS	0x482a
+#define VMX_VMCS_HOST_SYSENTER_CS	0x4c00
+
+#define VMX_VMCS_GUEST_CR0		0x6800
+#define VMX_VMCS_GUEST_CR3		0x6802
+#define VMX_VMCS_GUEST_CR4		0x6804
+#define VMX_VMCS_GUEST_ES_BASE		0x6806
+#define VMX_VMCS_GUEST_CS_BASE		0x6808
+#define VMX_VMCS_GUEST_SS_BASE		0x680a
+#define VMX_VMCS_GUEST_DS_BASE		0x680c
+#define VMX_VMCS_GUEST_FS_BASE		0x680e
+#define VMX_VMCS_GUEST_GS_BASE		0x6810
+#define VMX_VMCS_GUEST_LDTR_BASE	0x6812
+#define VMX_VMCS_GUEST_TR_BASE		0x6814
+#define VMX_VMCS_GUEST_GDTR_BASE	0x6816
+#define VMX_VMCS_GUEST_IDTR_BASE	0x6818
+#define VMX_VMCS_GUEST_DR7		0x681a
+#define VMX_VMCS_GUEST_RSP		0x681c
+#define VMX_VMCS_GUEST_RIP		0x681e
+#define VMX_VMCS_GUEST_RFLAGS		0x6820
+#define VMX_VMCS_GUEST_DEBUG_EXC	0x6822
+#define VMX_VMCS_GUEST_SYSENTER_ESP	0x6824
+#define VMX_VMCS_GUEST_SYSENTER_EIP	0x6826
+#define VMX_VMCS_HOST_CR0		0x6c00
+#define VMX_VMCS_HOST_CR3		0x6c02
+#define VMX_VMCS_HOST_CR4		0x6c04
+#define VMX_VMCS_HOST_FS_BASE		0x6c06
+#define VMX_VMCS_HOST_GS_BASE		0x6c08
+#define VMX_VMCS_HOST_TR_BASE		0x6c0a
+#define VMX_VMCS_HOST_GDTR_BASE		0x6c0c
+#define VMX_VMCS_HOST_IDTR_BASE		0x6c0e
+#define VMX_VMCS_HOST_SYSENTER_ESP	0x6c10
+#define VMX_VMCS_HOST_SYSENTER_EIP	0x6c12
+#define VMX_VMCS_HOST_RSP		0x6c14
+#define VMX_VMCS_HOST_RIP		0x6c16
+
+#define VMX_VMCS_VMPINX_CTL		0x4000
+#define VMX_VMCS_VMEXEC_CTL		0x4002
+#define VMX_VMCS_VMEXIT_CTL		0x400c
+#define VMX_VMCS_VMEXIT_MSR_STORE	0x400e
+#define VMX_VMCS_VMEXIT_MSR_LOAD	0x4010
+#define VMX_VMCS_VMENTRY_CTL		0x4012
+#define VMX_VMCS_VMENTRY_MSR_LOAD	0x4014
+#define VMX_VMCS_VMENTRY_INTR		0x4016
+#define VMX_VMCS_VMENTRY_EXC		0x4018
+#define VMX_VMCS_VMENTRY_INST_LEN	0x401a
+#define VMX_VMCS_VMEXEC_CTL2		0x401e
+
+#define VMX_VMCS_VMINST_ERROR		0x4400
+#define VMX_VMCS_EXIT_REASON		0x4402
+#define VMX_VMCS_VMEXIT_INTR_INFO	0x4404
+#define VMX_VMCS_VMEXIT_INTR_ERRNO	0x4406
+#define VMX_VMCS_IDTVEC_INFO		0x4408
+#define VMX_VMCS_IDTVEC_ERRNO		0x440a
+#define VMX_VMCS_VMEXIT_INST_LEN	0x440c
+#define VMX_VMCS_VMEXIT_INST_INFO	0x440e
+#define VMX_VMCS_EXIT_QUALIFICATION	0x6400
+
+#define VMX_INTERCEPT_HLT (1 << 7)
+#define VMX_EXECCTL_ENABLE_CTL2 (1 << 31)
+
+#define VMX_EXECCTL2_SHADOW_VMCS (1 << 14)
+
+#define VMX_EXITCTL_SAVE_DR (1 << 2)
+#define VMX_EXITCTL_X64 (1 << 9)
+
+#define VMX_ENTRYCTL_LOAD_DR (1 << 2)
+#define VMX_ENTRYCTL_X64 (1 << 9)
+
+#define VMX_SHADOW_VMCS 0x80000000
+#define VMX_VMCSFIELD_64BIT 0x2000
+#define VMX_VMCSFIELD_SIZE_MASK 0x6000
+
+#define VMX_INVALID_VMCS 0xffffffffffffffffULL
+
+#define VMX_EXIT_HLT 12
+#define VMX_EXIT_FAILED_ENTRY 0x80000000
+
+struct kvm_vmcs {
+	uint32_t version;
+	uint32_t abort;
+	uint8_t data[4088];
+};
+
+struct kvm_vmx_vcpu {
+	struct kvm_vmcs *vmcs;
+	struct kvm_regs64 regs;
+	int launched;
+};
+
+/* Intel VMX virtualization helper functions */
+int kvm_is_vmx_supported(void);
+void kvm_set_vmx_state(int enabled);
+struct kvm_vmcs *kvm_alloc_vmcs(void);
+
+/* Copy GDT entry to given fields of the current VMCS */
+void kvm_vmcs_copy_gdt_descriptor(unsigned int gdt_id,
+	unsigned long vmcs_selector, unsigned long vmcs_flags,
+	unsigned long vmcs_limit, unsigned long vmcs_baseaddr);
+void kvm_init_vmx_vcpu(struct kvm_vmx_vcpu *cpu, uint16_t ss, void *rsp,
+	int (*guest_main)(void));
+struct kvm_vmx_vcpu *kvm_create_vmx_vcpu(int (*guest_main)(void),
+	int alloc_stack);
+
+/* Set the VMCS as current and update the host state fields */
+void kvm_vmx_activate_vcpu(struct kvm_vmx_vcpu *cpu);
+void kvm_vmx_vmrun(struct kvm_vmx_vcpu *cpu);
+
+void kvm_vmx_vmclear(struct kvm_vmcs *buf);
+void kvm_vmx_vmptrld(struct kvm_vmcs *buf);
+uint64_t kvm_vmx_vmptrst(void);
+uint64_t kvm_vmx_vmread(unsigned long var_id);
+void kvm_vmx_vmwrite(unsigned long var_id, uint64_t value);
+int kvm_vmx_vmlaunch(struct kvm_vmx_vcpu *buf);
+int kvm_vmx_vmresume(struct kvm_vmx_vcpu *buf);
+
+/* Read last VMX instruction error from current VMCS */
+int kvm_vmx_inst_errno(void);
+/* Get VMX instruction error description */
+const char *kvm_vmx_inst_strerr(int vmx_errno);
+/* Get description of last VMX instruction error in current VMCS */
+const char *kvm_vmx_inst_err(void);
+
+#endif /* KVM_X86_VMX_H_ */
diff --git a/testcases/kernel/kvm/lib_x86.c b/testcases/kernel/kvm/lib_x86.c
index 266d7195c..e6acc0797 100644
--- a/testcases/kernel/kvm/lib_x86.c
+++ b/testcases/kernel/kvm/lib_x86.c
@@ -6,6 +6,9 @@
  */
 
 #include "kvm_x86_svm.h"
+#include "kvm_x86_vmx.h"
+
+#define VMX_VMINST_ERR_COUNT 29
 
 void kvm_svm_guest_entry(void);
 
@@ -84,6 +87,38 @@ static uintptr_t intr_handlers[] = {
 	0
 };
 
+static const char *vmx_error_description[VMX_VMINST_ERR_COUNT] = {
+	"Success",
+	"VMCALL executed in VMX root",
+	"VMCLEAR on invalid pointer",
+	"VMCLEAR on VMXON pointer",
+	"VMLAUNCH with non-clear VMCS",
+	"VMRESUME with non-launched VMCS",
+	"VMRESUME after VMXOFF",
+	"VM entry with invalid VMCS control fields",
+	"VM entry with invalid VMCS host state",
+	"VMPTRLD with invalid pointer",
+	"VMPTRLD with VMXON pointer",
+	"VMPTRLD with incorrect VMCS version field",
+	"Invalid VMCS field code",
+	"VMWRITE to read-only VMCS field",
+	"Unknown error",
+	"VMXON called twice",
+	"VM entry with invalid executive VMCS pointer",
+	"VM entry with non-launched executive VMCS",
+	"VM entry with executive VMCS pointer != VMXON pointer",
+	"VMCALL with non-clear VMCS",
+	"VMCALL with invalid VMCS exit control fields",
+	"Unknown error",
+	"VMCALL with incorrect MSEG revision ID",
+	"VMXOFF under dual-monitor SMIs and SMM",
+	"VMCALL with invalid SMM-monitor features",
+	"VM entry with invalid executive VMCS execution control fields",
+	"VM entry with events blocked by MOV SS",
+	"Unknown error",
+	"Invalid operand to INVEPT/INVVPID"
+};
+
 static void kvm_set_intr_handler(unsigned int id, uintptr_t func)
 {
 	memset(kvm_idt + id, 0, sizeof(kvm_idt[0]));
@@ -438,3 +473,456 @@ void kvm_svm_vmsave(struct kvm_vmcb *buf)
 		: "a" (buf)
 	);
 }
+
+int kvm_is_vmx_supported(void)
+{
+	struct kvm_cpuid buf;
+
+	kvm_get_cpuid(CPUID_GET_MODEL_INFO, 0, &buf);
+	return buf.ecx & CPUID_MODEL_VMX;
+}
+
+void kvm_vmx_vmclear(struct kvm_vmcs *buf)
+{
+	uint64_t tmp = (uintptr_t)buf;
+
+	asm goto(
+		"vmclear (%0)\n"
+		"jna %l[error]\n"
+		:
+		: "r" (&tmp)
+		: "cc", "memory"
+		: error
+	);
+
+	return;
+
+error:
+	tst_brk(TBROK, "VMCLEAR(%p) failed", buf);
+}
+
+void kvm_vmx_vmptrld(struct kvm_vmcs *buf)
+{
+	uint64_t tmp = (uintptr_t)buf;
+
+	asm goto(
+		"vmptrld (%0)\n"
+		"jna %l[error]\n"
+		:
+		: "r" (&tmp)
+		: "cc"
+		: error
+	);
+
+	return;
+
+error:
+	tst_brk(TBROK, "VMPTRLD(%p) failed", buf);
+}
+
+uint64_t kvm_vmx_vmptrst(void)
+{
+	uint64_t ret;
+
+	asm (
+		"vmptrst (%0)\n"
+		:
+		: "r" (&ret)
+		: "cc", "memory"
+	);
+
+	return ret;
+}
+
+uint64_t kvm_vmx_vmread(unsigned long var_id)
+{
+	uint64_t ret = 0;
+	unsigned long tmp;
+
+#ifndef __x86_64__
+	if ((var_id & VMX_VMCSFIELD_SIZE_MASK) == VMX_VMCSFIELD_64BIT) {
+		asm goto(
+			"vmread %1, (%0)\n"
+			"jna %l[error]\n"
+			:
+			: "r" (&tmp), "r" (var_id + 1)
+			: "cc", "memory"
+			: error
+		);
+
+		ret = tmp;
+		ret <<= 32;
+	}
+#endif /* __x86_64__ */
+
+	asm goto(
+		"vmread %1, (%0)\n"
+		"jna %l[error]\n"
+		:
+		: "r" (&tmp), "r" (var_id)
+		: "cc", "memory"
+		: error
+	);
+
+	ret |= tmp;
+	return ret;
+
+error:
+	tst_brk(TBROK, "VMREAD(%lx) failed", var_id);
+}
+
+void kvm_vmx_vmwrite(unsigned long var_id, uint64_t value)
+{
+	unsigned long tmp = value;
+
+	asm goto(
+		"vmwrite %0, %1\n"
+		"jna %l[error]\n"
+		:
+		: "r" (tmp), "r" (var_id)
+		: "cc"
+		: error
+	);
+
+#ifndef __x86_64__
+	if ((var_id & VMX_VMCSFIELD_SIZE_MASK) == VMX_VMCSFIELD_64BIT) {
+		tmp = value >> 32;
+
+		asm goto(
+			"vmwrite %0, %1\n"
+			"jna %l[error]\n"
+			:
+			: "r" (tmp), "r" (var_id + 1)
+			: "cc"
+			: error
+		);
+
+	}
+#endif /* __x86_64__ */
+
+	return;
+
+error:
+	tst_brk(TBROK, "VMWRITE(%lx, %llx) failed", var_id, value);
+}
+
+static void kvm_vmx_vmxon(struct kvm_vmcs *buf)
+{
+	uint64_t tmp = (uintptr_t)buf;
+
+	asm goto(
+		"vmxon (%0)\n"
+		"jna %l[error]\n"
+		:
+		: "r" (&tmp)
+		: "cc"
+		: error
+	);
+
+	return;
+
+error:
+	tst_brk(TBROK, "VMXON(%p) failed", buf);
+}
+
+static void kvm_vmx_vmxoff(void)
+{
+	asm goto(
+		"vmxoff\n"
+		"jna %l[error]\n"
+		:
+		:
+		: "cc"
+		: error
+	);
+
+	return;
+
+error:
+	tst_brk(TBROK, "VMXOFF failed");
+}
+
+struct kvm_vmcs *kvm_alloc_vmcs(void)
+{
+	struct kvm_vmcs *ret;
+
+	ret = tst_heap_alloc_aligned(sizeof(struct kvm_vmcs), PAGESIZE);
+	memset(ret, 0, sizeof(struct kvm_vmcs));
+	ret->version = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_BASIC);
+	return ret;
+}
+
+void kvm_set_vmx_state(int enabled)
+{
+	static struct kvm_vmcs *vmm_buf;
+	uint64_t value;
+	struct kvm_cregs cregs;
+
+	if (!kvm_is_vmx_supported())
+		tst_brk(TCONF, "CPU does not support VMX");
+
+	kvm_read_cregs(&cregs);
+	kvm_set_cr0(cregs.cr0 | CR0_NE);
+	kvm_set_cr4(cregs.cr4 | CR4_VMXE);
+	value = kvm_rdmsr(MSR_IA32_FEATURE_CONTROL);
+	value |= IA32FC_LOCK | IA32FC_VMXON_NORMAL;
+	kvm_wrmsr(MSR_IA32_FEATURE_CONTROL, value);
+
+	if (!vmm_buf)
+		vmm_buf = kvm_alloc_vmcs();
+
+	if (enabled)
+		kvm_vmx_vmxon(vmm_buf);
+	else
+		kvm_vmx_vmxoff();
+}
+
+void kvm_vmcs_copy_gdt_descriptor(unsigned int gdt_id,
+	unsigned long vmcs_selector, unsigned long vmcs_flags,
+	unsigned long vmcs_limit, unsigned long vmcs_baseaddr)
+{
+	uint64_t baseaddr;
+	uint32_t limit;
+	unsigned int flags;
+
+	if (gdt_id >= KVM_GDT_SIZE)
+		tst_brk(TBROK, "GDT descriptor ID out of range");
+
+	kvm_parse_segment_descriptor(kvm_gdt + gdt_id, &baseaddr, &limit,
+		&flags);
+
+	if (!(flags & SEGFLAG_PRESENT)) {
+		gdt_id = 0;
+		baseaddr = 0;
+		flags = 0x10000;
+		limit = 0;
+	} else if (flags & SEGFLAG_PAGE_LIMIT) {
+		limit = (limit << 12) | 0xfff;
+	}
+
+	if (!(flags & 0x10000)) {
+		/* insert the reserved limit bits and force the accessed bit to 1 */
+		flags = ((flags & 0xf00) << 4) | (flags & 0xff) | 0x1;
+	}
+
+	kvm_vmx_vmwrite(vmcs_selector, gdt_id << 3);
+	kvm_vmx_vmwrite(vmcs_flags, flags);
+	kvm_vmx_vmwrite(vmcs_limit, limit);
+	kvm_vmx_vmwrite(vmcs_baseaddr, baseaddr);
+}
+
+void kvm_init_vmx_vcpu(struct kvm_vmx_vcpu *cpu, uint16_t ss, void *rsp,
+	int (*guest_main)(void))
+{
+	uint64_t old_vmcs, pinxctl, execctl, entryctl, exitctl;
+	unsigned long crx;
+	struct kvm_cregs cregs;
+	struct kvm_sregs sregs;
+
+	kvm_read_cregs(&cregs);
+	kvm_read_sregs(&sregs);
+
+	/* Clear cpu->vmcs first in case it's the current VMCS */
+	kvm_vmx_vmclear(cpu->vmcs);
+	memset(&cpu->regs, 0, sizeof(struct kvm_regs64));
+	cpu->launched = 0;
+	old_vmcs = kvm_vmx_vmptrst();
+	kvm_vmx_vmptrld(cpu->vmcs);
+
+	/* Configure VM execution control fields */
+	if (kvm_rdmsr(MSR_IA32_VMX_BASIC) & IA32_VMXBASIC_USELESS_CTL_MASKS) {
+		pinxctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_PINX_MASK2);
+		execctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_EXECCTL_MASK2);
+		exitctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_EXITCTL_MASK2);
+		entryctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_ENTRYCTL_MASK2);
+	} else {
+		pinxctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_PINX_MASK);
+		execctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_EXECCTL_MASK);
+		exitctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_EXITCTL_MASK);
+		entryctl = (uint32_t)kvm_rdmsr(MSR_IA32_VMX_ENTRYCTL_MASK);
+	}
+
+	execctl |= VMX_INTERCEPT_HLT;
+
+	if (kvm_rdmsr(MSR_EFER) & EFER_LME) {
+		entryctl |= VMX_ENTRYCTL_X64;
+		exitctl |= VMX_EXITCTL_X64;
+	}
+
+	kvm_vmx_vmwrite(VMX_VMCS_VMPINX_CTL, pinxctl);
+	kvm_vmx_vmwrite(VMX_VMCS_VMEXEC_CTL, execctl);
+	kvm_vmx_vmwrite(VMX_VMCS_VMENTRY_CTL, entryctl);
+	kvm_vmx_vmwrite(VMX_VMCS_VMEXIT_CTL, exitctl);
+	kvm_vmx_vmwrite(VMX_VMCS_LINK_POINTER, VMX_INVALID_VMCS);
+	kvm_vmcs_copy_gdt_descriptor(sregs.es >> 3, VMX_VMCS_GUEST_ES,
+		VMX_VMCS_GUEST_ES_ACCESS, VMX_VMCS_GUEST_ES_LIMIT,
+		VMX_VMCS_GUEST_ES_BASE);
+	kvm_vmcs_copy_gdt_descriptor(sregs.cs >> 3, VMX_VMCS_GUEST_CS,
+		VMX_VMCS_GUEST_CS_ACCESS, VMX_VMCS_GUEST_CS_LIMIT,
+		VMX_VMCS_GUEST_CS_BASE);
+	kvm_vmcs_copy_gdt_descriptor(ss, VMX_VMCS_GUEST_SS,
+		VMX_VMCS_GUEST_SS_ACCESS, VMX_VMCS_GUEST_SS_LIMIT,
+		VMX_VMCS_GUEST_SS_BASE);
+	kvm_vmcs_copy_gdt_descriptor(sregs.ds >> 3, VMX_VMCS_GUEST_DS,
+		VMX_VMCS_GUEST_DS_ACCESS, VMX_VMCS_GUEST_DS_LIMIT,
+		VMX_VMCS_GUEST_DS_BASE);
+	kvm_vmcs_copy_gdt_descriptor(sregs.fs >> 3, VMX_VMCS_GUEST_FS,
+		VMX_VMCS_GUEST_FS_ACCESS, VMX_VMCS_GUEST_FS_LIMIT,
+		VMX_VMCS_GUEST_FS_BASE);
+	kvm_vmcs_copy_gdt_descriptor(sregs.gs >> 3, VMX_VMCS_GUEST_GS,
+		VMX_VMCS_GUEST_GS_ACCESS, VMX_VMCS_GUEST_GS_LIMIT,
+		VMX_VMCS_GUEST_GS_BASE);
+	kvm_vmcs_copy_gdt_descriptor(sregs.tr >> 3, VMX_VMCS_GUEST_TR,
+		VMX_VMCS_GUEST_TR_ACCESS, VMX_VMCS_GUEST_TR_LIMIT,
+		VMX_VMCS_GUEST_TR_BASE);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_LDTR, 0);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_LDTR_ACCESS, 0x10000);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_LDTR_LIMIT, 0);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_LDTR_BASE, 0);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_GDTR_BASE, (uintptr_t)kvm_gdt);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_GDTR_LIMIT,
+		(KVM_GDT_SIZE * sizeof(struct segment_descriptor)) - 1);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_IDTR_BASE, (uintptr_t)kvm_idt);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_IDTR_LIMIT,
+		(X86_INTR_COUNT * sizeof(struct intr_descriptor)) - 1);
+
+	crx = cregs.cr0 & kvm_rdmsr(MSR_IA32_VMX_CR0_FIXED1);
+	crx |= kvm_rdmsr(MSR_IA32_VMX_CR0_FIXED0);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_CR0, crx);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_CR3, cregs.cr3);
+	crx = cregs.cr4 & kvm_rdmsr(MSR_IA32_VMX_CR4_FIXED1);
+	crx |= kvm_rdmsr(MSR_IA32_VMX_CR4_FIXED0);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_CR4, crx);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_RSP, (uintptr_t)rsp);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_RIP, (uintptr_t)kvm_svm_guest_entry);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_RFLAGS, 0x202); /* Interrupts enabled */
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_SYSENTER_ESP, 0);
+	kvm_vmx_vmwrite(VMX_VMCS_GUEST_SYSENTER_EIP, 0);
+	cpu->regs.rax = (uintptr_t)guest_main;
+
+	/* Reactivate previous VMCS (if any) */
+	if (old_vmcs != VMX_INVALID_VMCS)
+		kvm_vmx_vmptrld((struct kvm_vmcs *)(uintptr_t)old_vmcs);
+}
+
+struct kvm_vmx_vcpu *kvm_create_vmx_vcpu(int (*guest_main)(void),
+	int alloc_stack)
+{
+	uint16_t ss = 0;
+	char *stack = NULL;
+	struct kvm_vmcs *vmcs;
+	struct kvm_vmx_vcpu *ret;
+
+	vmcs = kvm_alloc_vmcs();
+
+	if (alloc_stack) {
+		stack = tst_heap_alloc_aligned(2 * PAGESIZE, PAGESIZE);
+		ss = kvm_create_stack_descriptor(kvm_gdt, KVM_GDT_SIZE, stack);
+		stack += 2 * PAGESIZE;
+	}
+
+	ret = tst_heap_alloc(sizeof(struct kvm_vmx_vcpu));
+	memset(ret, 0, sizeof(struct kvm_vmx_vcpu));
+	ret->vmcs = vmcs;
+	kvm_init_vmx_vcpu(ret, ss, stack, guest_main);
+	return ret;
+}
+
+void kvm_vmx_activate_vcpu(struct kvm_vmx_vcpu *cpu)
+{
+	struct kvm_cregs cregs;
+	struct kvm_sregs sregs;
+	uint64_t baseaddr;
+
+	kvm_read_cregs(&cregs);
+	kvm_read_sregs(&sregs);
+
+	kvm_vmx_vmptrld(cpu->vmcs);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_ES, sregs.es);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_CS, sregs.cs);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_SS, sregs.ss);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_DS, sregs.ds);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_FS, sregs.fs);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_GS, sregs.gs);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_TR, sregs.tr);
+
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_CR0, cregs.cr0);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_CR3, cregs.cr3);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_CR4, cregs.cr4);
+	kvm_parse_segment_descriptor(kvm_gdt + (sregs.fs >> 3), &baseaddr,
+		NULL, NULL);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_FS_BASE, baseaddr);
+	kvm_parse_segment_descriptor(kvm_gdt + (sregs.gs >> 3), &baseaddr,
+		NULL, NULL);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_GS_BASE, baseaddr);
+	kvm_parse_segment_descriptor(kvm_gdt + (sregs.tr >> 3), &baseaddr,
+		NULL, NULL);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_TR_BASE, baseaddr);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_GDTR_BASE, (uintptr_t)kvm_gdt);
+	kvm_vmx_vmwrite(VMX_VMCS_HOST_IDTR_BASE, (uintptr_t)kvm_idt);
+}
+
+void kvm_vmx_vmrun(struct kvm_vmx_vcpu *cpu)
+{
+	int ret, err;
+	uint64_t reason;
+
+	kvm_vmx_activate_vcpu(cpu);
+
+	if (cpu->launched) {
+		ret = kvm_vmx_vmresume(cpu);
+	} else {
+		ret = kvm_vmx_vmlaunch(cpu);
+		cpu->launched = 1;
+	}
+
+	if (ret) {
+		err = kvm_vmx_inst_errno();
+		tst_brk(TBROK, "VMLAUNCH/VMRESUME failed: %s (%d)",
+			kvm_vmx_inst_strerr(err), err);
+	}
+
+	reason = kvm_vmx_vmread(VMX_VMCS_EXIT_REASON);
+
+	if (reason & VMX_EXIT_FAILED_ENTRY) {
+		tst_brk(TBROK, "VM entry failed. Reason: %llu, qualification: %llu",
+			reason & 0xffff,
+			kvm_vmx_vmread(VMX_VMCS_EXIT_QUALIFICATION));
+	}
+}
+
+int kvm_vmx_inst_errno(void)
+{
+	unsigned long ret, var_id = VMX_VMCS_VMINST_ERROR;
+
+	/* Do not use kvm_vmx_vmread() to avoid tst_brk() on failure */
+	asm goto(
+		"vmread %1, (%0)\n"
+		"jna %l[error]\n"
+		:
+		: "r" (&ret), "r" (var_id)
+		: "cc", "memory"
+		: error
+	);
+
+	return ret;
+
+error:
+	return -1;
+}
+
+const char *kvm_vmx_inst_strerr(int vmx_errno)
+{
+	if (vmx_errno < 0)
+		return "Cannot read VM errno - invalid current VMCS?";
+
+	if (vmx_errno >= VMX_VMINST_ERR_COUNT)
+		return "Unknown error";
+
+	return vmx_error_description[vmx_errno];
+}
+
+const char *kvm_vmx_inst_err(void)
+{
+	return kvm_vmx_inst_strerr(kvm_vmx_inst_errno());
+}
-- 
2.47.0


-- 
Mailing list info: https://lists.linux.it/listinfo/ltp


* [LTP] [PATCH 07/10] lib: Add helper function for reloading kernel modules
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (5 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 06/10] KVM: Add helper functions for nested Intel VMX virtualization Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 08/10] lib: Add helper function for reading boolean sysconf files Martin Doucha
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 include/tst_module.h |  3 +++
 lib/tst_module.c     | 28 ++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/tst_module.h b/include/tst_module.h
index 8bbaf08f3..e55321d19 100644
--- a/include/tst_module.h
+++ b/include/tst_module.h
@@ -47,4 +47,7 @@ static inline void tst_requires_module_signature_disabled(void)
 	tst_requires_module_signature_disabled_();
 }
 
+void tst_modprobe(const char *mod_name, char *const argv[]);
+void tst_module_reload(const char *mod_name, char *const argv[]);
+
 #endif /* TST_MODULE_H */
diff --git a/lib/tst_module.c b/lib/tst_module.c
index cec20524f..42d63ede6 100644
--- a/lib/tst_module.c
+++ b/lib/tst_module.c
@@ -146,3 +146,31 @@ void tst_requires_module_signature_disabled_(void)
 	if (tst_module_signature_enforced_())
 		tst_brkm(TCONF, NULL, "module signature is enforced, skip test");
 }
+
+void tst_modprobe(const char *mod_name, char *const argv[])
+{
+	const int offset = 2; /* command name & module name */
+	int i, size = 0;
+
+	while (argv && argv[size])
+		++size;
+	size += offset;
+
+	const char *mod_argv[size + 1]; /* + terminating NULL */
+
+	mod_argv[size] = NULL;
+	mod_argv[0] = "modprobe";
+	mod_argv[1] = mod_name;
+
+	for (i = offset; i < size; ++i)
+		mod_argv[i] = argv[i - offset];
+
+	tst_cmd(NULL, mod_argv, NULL, NULL, 0);
+}
+
+void tst_module_reload(const char *mod_name, char *const argv[])
+{
+	tst_resm(TINFO, "Reloading kernel module %s", mod_name);
+	tst_module_unload_(NULL, mod_name);
+	tst_modprobe(mod_name, argv);
+}
-- 
2.47.0



* [LTP] [PATCH 08/10] lib: Add helper function for reading boolean sysconf files
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (6 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 07/10] lib: Add helper function for reloading kernel modules Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 09/10] kvm_pagefault01: Use library functions to reload KVM modules Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 10/10] KVM: Add functional test for emulated VMREAD/VMWRITE instructions Martin Doucha
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 include/tst_sys_conf.h |  2 ++
 lib/tst_sys_conf.c     | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/include/tst_sys_conf.h b/include/tst_sys_conf.h
index 4c85767be..6bbf39672 100644
--- a/include/tst_sys_conf.h
+++ b/include/tst_sys_conf.h
@@ -28,4 +28,6 @@ int tst_sys_conf_save(const struct tst_path_val *conf);
 void tst_sys_conf_restore(int verbose);
 void tst_sys_conf_dump(void);
 
+int tst_read_bool_sys_param(const char *filename);
+
 #endif
diff --git a/lib/tst_sys_conf.c b/lib/tst_sys_conf.c
index c0981dcb1..91203ea9e 100644
--- a/lib/tst_sys_conf.c
+++ b/lib/tst_sys_conf.c
@@ -7,6 +7,7 @@
 #include <stdio.h>
 #include <unistd.h>
 #include <string.h>
+#include <ctype.h>
 
 #define TST_NO_DEFAULT_MAIN
 #include "tst_test.h"
@@ -145,3 +146,37 @@ void tst_sys_conf_restore(int verbose)
 	}
 }
 
+int tst_read_bool_sys_param(const char *filename)
+{
+	char buf[PATH_MAX];
+	int i, fd, ret;
+
+	fd = open(filename, O_RDONLY);
+
+	if (fd < 0)
+		return -1;
+
+	ret = read(fd, buf, PATH_MAX - 1);
+	SAFE_CLOSE(fd);
+
+	if (ret < 1)
+		return -1;
+
+	buf[ret] = '\0';
+
+	for (i = 0; buf[i] && !isspace(buf[i]); i++)
+		;
+
+	buf[i] = '\0';
+
+	if (isdigit(buf[0])) {
+		tst_parse_int(buf, &ret, INT_MIN, INT_MAX);
+		return ret;
+	}
+
+	if (!strcasecmp(buf, "N"))
+		return 0;
+
+	/* Assume that any value other than 0 or N means the param is enabled */
+	return 1;
+}
-- 
2.47.0



* [LTP] [PATCH 09/10] kvm_pagefault01: Use library functions to reload KVM modules
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (7 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 08/10] lib: Add helper function for reading boolean sysconf files Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-21 16:44 ` [LTP] [PATCH 10/10] KVM: Add functional test for emulated VMREAD/VMWRITE instructions Martin Doucha
  9 siblings, 0 replies; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/kvm_pagefault01.c | 59 +++-----------------------
 1 file changed, 5 insertions(+), 54 deletions(-)

diff --git a/testcases/kernel/kvm/kvm_pagefault01.c b/testcases/kernel/kvm/kvm_pagefault01.c
index 16b3137c0..db526cb7e 100644
--- a/testcases/kernel/kvm/kvm_pagefault01.c
+++ b/testcases/kernel/kvm/kvm_pagefault01.c
@@ -136,70 +136,21 @@ TST_TEST_TCONF("Test supported only on x86_64");
 
 #else /* COMPILE_PAYLOAD */
 
-#include <ctype.h>
-#include <stdio.h>
-#include <unistd.h>
 #include "tst_module.h"
 
 #define TDP_MMU_SYSFILE "/sys/module/kvm/parameters/tdp_mmu"
 #define TDP_AMD_SYSFILE "/sys/module/kvm_amd/parameters/npt"
 #define TDP_INTEL_SYSFILE "/sys/module/kvm_intel/parameters/ept"
 
-#define BUF_SIZE 64
-
-static int read_bool_sys_param(const char *filename)
-{
-	char buf[BUF_SIZE];
-	int i, fd, ret;
-
-	fd = open(filename, O_RDONLY);
-
-	if (fd < 0)
-		return -1;
-
-	ret = read(fd, buf, BUF_SIZE - 1);
-	SAFE_CLOSE(fd);
-
-	if (ret < 1)
-		return -1;
-
-	buf[ret] = '\0';
-
-	for (i = 0; buf[i] && !isspace(buf[i]); i++)
-		;
-
-	buf[i] = '\0';
-
-	if (isdigit(buf[0])) {
-		tst_parse_int(buf, &ret, INT_MIN, INT_MAX);
-		return ret;
-	}
-
-	if (!strcasecmp(buf, "N"))
-		return 0;
-
-	/* Assume that any other value than 0 or N means the param is enabled */
-	return 1;
-}
-
-static void reload_module(const char *module, char *arg)
-{
-	const char *const argv[] = {"modprobe", module, arg, NULL};
-
-	tst_res(TINFO, "Reloading module %s with parameter %s", module, arg);
-	tst_module_unload(module);
-	tst_cmd(argv, NULL, NULL, 0);
-}
-
 static void disable_tdp(void)
 {
-	if (read_bool_sys_param(TDP_AMD_SYSFILE) > 0)
-		reload_module("kvm_amd", "npt=0");
+	if (tst_read_bool_sys_param(TDP_AMD_SYSFILE) > 0)
+		tst_module_reload("kvm_amd", (char *const[]){"npt=0", NULL});
 
-	if (read_bool_sys_param(TDP_INTEL_SYSFILE) > 0)
-		reload_module("kvm_intel", "ept=0");
+	if (tst_read_bool_sys_param(TDP_INTEL_SYSFILE) > 0)
+		tst_module_reload("kvm_intel", (char *const[]){"ept=0", NULL});
 
-	if (read_bool_sys_param(TDP_MMU_SYSFILE) > 0)
+	if (tst_read_bool_sys_param(TDP_MMU_SYSFILE) > 0)
 		tst_res(TINFO, "WARNING: tdp_mmu is enabled, beware of false negatives");
 }
 
-- 
2.47.0



* [LTP] [PATCH 10/10] KVM: Add functional test for emulated VMREAD/VMWRITE instructions
  2025-01-21 16:44 [LTP] [PATCH 00/10] Basic KVM test for Intel VMX Martin Doucha
                   ` (8 preceding siblings ...)
  2025-01-21 16:44 ` [LTP] [PATCH 09/10] kvm_pagefault01: Use library functions to reload KVM modules Martin Doucha
@ 2025-01-21 16:44 ` Martin Doucha
  2025-01-31  7:40   ` Petr Vorel
  9 siblings, 1 reply; 12+ messages in thread
From: Martin Doucha @ 2025-01-21 16:44 UTC (permalink / raw)
  To: ltp

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/kvm/kvm_vmx01.c | 282 +++++++++++++++++++++++++++++++
 1 file changed, 282 insertions(+)
 create mode 100644 testcases/kernel/kvm/kvm_vmx01.c

diff --git a/testcases/kernel/kvm/kvm_vmx01.c b/testcases/kernel/kvm/kvm_vmx01.c
new file mode 100644
index 000000000..c413b4148
--- /dev/null
+++ b/testcases/kernel/kvm/kvm_vmx01.c
@@ -0,0 +1,282 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2024 SUSE LLC <mdoucha@suse.cz>
+ */
+
+/*\
+ * Basic functional test for VMREAD/VMWRITE instructions in KVM environment.
+ * Verify that the VMWRITE instruction changes the contents of the current
+ * VMCS and that values written into the shadow VMCS can be read in both the
+ * parent and the nested VM.
+ */
+
+#include "kvm_test.h"
+
+#ifdef COMPILE_PAYLOAD
+#if defined(__i386__) || defined(__x86_64__)
+
+#include "kvm_x86_vmx.h"
+
+#define GUEST_READ_ERROR 1
+#define GUEST_WRITE_ERROR 2
+#define SHADOW_DATA_LENGTH 37
+#define VMCS_FIELD(x) x, #x
+
+struct vmcs_field_table {
+	unsigned long field_id;
+	const char *name;
+	uint64_t value;
+};
+
+/* Data written into shadow VMCS by the parent VM and read by the nested VM */
+static struct vmcs_field_table host_data[SHADOW_DATA_LENGTH] = {
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES), 0xe5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS), 0xc5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS), 0x55},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS), 0xd5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS), 0xf5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS), 0x65},
+	{VMCS_FIELD(VMX_VMCS_GUEST_LDTR), 0x1d72},
+	{VMCS_FIELD(VMX_VMCS_GUEST_TR), 0x72},
+	{VMCS_FIELD(VMX_VMCS_HOST_ES), 0x5e},
+	{VMCS_FIELD(VMX_VMCS_HOST_CS), 0x5c},
+	{VMCS_FIELD(VMX_VMCS_HOST_SS), 0x55},
+	{VMCS_FIELD(VMX_VMCS_HOST_DS), 0x5d},
+	{VMCS_FIELD(VMX_VMCS_HOST_FS), 0x5f},
+	{VMCS_FIELD(VMX_VMCS_HOST_GS), 0x56},
+	{VMCS_FIELD(VMX_VMCS_HOST_TR), 0x27},
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES_LIMIT), 0xe51},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS_LIMIT), 0xc51},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS_LIMIT), 0x551},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS_LIMIT), 0xd51},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS_LIMIT), 0xf51},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS_LIMIT), 0x651},
+	{VMCS_FIELD(VMX_VMCS_GUEST_LDTR_LIMIT), 0x1d721},
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES_ACCESS), 0xa0e5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS_ACCESS), 0xa0c5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS_ACCESS), 0xa055},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS_ACCESS), 0xa0d5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS_ACCESS), 0xa0f5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS_ACCESS), 0xa065},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SYSENTER_CS), 0x65c},
+	{VMCS_FIELD(VMX_VMCS_HOST_SYSENTER_CS), 0x45c},
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES_BASE), 0xe5b},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS_BASE), 0xc5b},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS_BASE), 0x55b},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS_BASE), 0xd5b},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS_BASE), 0xf5b},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS_BASE), 0x65b},
+	{VMCS_FIELD(VMX_VMCS_GUEST_LDTR_BASE), 0x1d72b}
+};
+
+/* Data written into shadow VMCS by the nested VM and read by the parent VM */
+static struct vmcs_field_table guest_data[SHADOW_DATA_LENGTH] = {
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES), 0x5e},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS), 0x5c},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS), 0x55},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS), 0x5d},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS), 0x5f},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS), 0x56},
+	{VMCS_FIELD(VMX_VMCS_GUEST_LDTR), 0x721d},
+	{VMCS_FIELD(VMX_VMCS_GUEST_TR), 0x27},
+	{VMCS_FIELD(VMX_VMCS_HOST_ES), 0xe5},
+	{VMCS_FIELD(VMX_VMCS_HOST_CS), 0xc5},
+	{VMCS_FIELD(VMX_VMCS_HOST_SS), 0x55},
+	{VMCS_FIELD(VMX_VMCS_HOST_DS), 0xd5},
+	{VMCS_FIELD(VMX_VMCS_HOST_FS), 0xf5},
+	{VMCS_FIELD(VMX_VMCS_HOST_GS), 0x65},
+	{VMCS_FIELD(VMX_VMCS_HOST_TR), 0x72},
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES_LIMIT), 0x1e5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS_LIMIT), 0x1c5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS_LIMIT), 0x155},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS_LIMIT), 0x1d5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS_LIMIT), 0x1f5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS_LIMIT), 0x165},
+	{VMCS_FIELD(VMX_VMCS_GUEST_LDTR_LIMIT), 0x11d72},
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES_ACCESS), 0xa05e},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS_ACCESS), 0xa05c},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS_ACCESS), 0xa055},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS_ACCESS), 0xa05d},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS_ACCESS), 0xa05f},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS_ACCESS), 0xa056},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SYSENTER_CS), 0x5c6},
+	{VMCS_FIELD(VMX_VMCS_HOST_SYSENTER_CS), 0x5c4},
+	{VMCS_FIELD(VMX_VMCS_GUEST_ES_BASE), 0xbe5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_CS_BASE), 0xbc5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_SS_BASE), 0xb55},
+	{VMCS_FIELD(VMX_VMCS_GUEST_DS_BASE), 0xbd5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_FS_BASE), 0xbf5},
+	{VMCS_FIELD(VMX_VMCS_GUEST_GS_BASE), 0xb65},
+	{VMCS_FIELD(VMX_VMCS_GUEST_LDTR_BASE), 0xb1d72}
+};
+
+static uint64_t vmread_buffer[SHADOW_DATA_LENGTH];
+
+int guest_main(void)
+{
+	int i;
+
+	/* kvm_vmx_vmread() calls tst_brk(); do not use it in the nested VM */
+	for (i = 0; i < SHADOW_DATA_LENGTH; i++) {
+		asm goto(
+			"vmread %1, (%0)\n"
+			"jna %l[read_error]\n"
+			"vmwrite %2, %3\n"
+			"jna %l[write_error]\n"
+			:
+			: "r" (&vmread_buffer[i]), "r" (host_data[i].field_id),
+				"r" (guest_data[i].value),
+				"r" (guest_data[i].field_id)
+			: "cc", "memory"
+			: read_error, write_error
+		);
+	}
+
+	return 0;
+
+read_error:
+	return GUEST_READ_ERROR;
+
+write_error:
+	return GUEST_WRITE_ERROR;
+}
+
+void main(void)
+{
+	struct kvm_vmx_vcpu *vcpu;
+	struct kvm_vmcs *shadow_vmcs;
+	char *vmcs_backup;
+	int i, errors;
+	uint64_t val;
+
+	kvm_set_vmx_state(1);
+
+	/* Check secondary VMCS execctl support */
+	if (kvm_rdmsr(MSR_IA32_VMX_BASIC) & IA32_VMXBASIC_USELESS_CTL_MASKS)
+		val = kvm_rdmsr(MSR_IA32_VMX_EXECCTL_MASK2);
+	else
+		val = kvm_rdmsr(MSR_IA32_VMX_EXECCTL_MASK);
+
+	if (!((val >> 32) & VMX_EXECCTL_ENABLE_CTL2))
+		tst_brk(TCONF, "CPU does not support shadow VMCS");
+
+	/* Create and configure guest VMCS */
+	shadow_vmcs = kvm_alloc_vmcs();
+	kvm_vmx_vmclear(shadow_vmcs);
+	shadow_vmcs->version |= VMX_SHADOW_VMCS;
+	vcpu = kvm_create_vmx_vcpu(guest_main, 1);
+	kvm_vmx_vmptrld(vcpu->vmcs);
+	val = kvm_vmx_vmread(VMX_VMCS_VMEXEC_CTL);
+	val |= VMX_EXECCTL_ENABLE_CTL2;
+	kvm_vmx_vmwrite(VMX_VMCS_VMEXEC_CTL, val);
+	val = kvm_rdmsr(MSR_IA32_VMX_EXECCTL2_MASK);
+
+	if (!((val >> 32) & VMX_EXECCTL2_SHADOW_VMCS))
+		tst_brk(TCONF, "CPU does not support shadow VMCS");
+
+	val = VMX_EXECCTL2_SHADOW_VMCS | (uint32_t)val;
+	kvm_vmx_vmwrite(VMX_VMCS_VMEXEC_CTL2, val);
+	kvm_vmx_vmwrite(VMX_VMCS_LINK_POINTER, (uintptr_t)shadow_vmcs);
+
+	/* Configure shadow VMCS */
+	vmcs_backup = tst_heap_alloc(sizeof(struct kvm_vmcs));
+	memcpy(vmcs_backup, shadow_vmcs, sizeof(struct kvm_vmcs));
+	kvm_vmx_vmptrld(shadow_vmcs);
+
+	for (i = 0; i < SHADOW_DATA_LENGTH; i++)
+		kvm_vmx_vmwrite(host_data[i].field_id, host_data[i].value);
+
+	/* Flush shadow VMCS just in case */
+	kvm_vmx_vmptrld(vcpu->vmcs);
+
+	if (!memcmp(vmcs_backup, shadow_vmcs, sizeof(struct kvm_vmcs)))
+		tst_res(TFAIL, "VMWRITE did not modify raw VMCS data");
+
+	/* Run nested VM */
+	memcpy(vmcs_backup, shadow_vmcs, sizeof(struct kvm_vmcs));
+	kvm_vmx_vmrun(vcpu);
+	val = kvm_vmx_vmread(VMX_VMCS_EXIT_REASON);
+
+	if (val != VMX_EXIT_HLT) {
+		tst_res(TFAIL, "Unexpected guest exit reason %llx", val);
+		return;
+	}
+
+	if (vcpu->regs.rax == GUEST_READ_ERROR) {
+		tst_res(TFAIL, "Guest failed to read shadow VMCS");
+		return;
+	}
+
+	if (vcpu->regs.rax == GUEST_WRITE_ERROR) {
+		tst_res(TFAIL, "Guest failed to write shadow VMCS");
+		return;
+	}
+
+	if (!memcmp(vmcs_backup, shadow_vmcs, sizeof(struct kvm_vmcs)))
+		tst_res(TFAIL, "Nested VMWRITE did not modify raw VMCS data");
+
+	/* Check values read by the nested VM from shadow VMCS */
+	for (i = 0, errors = 0; i < SHADOW_DATA_LENGTH; i++) {
+		if (vmread_buffer[i] == host_data[i].value)
+			continue;
+
+		errors++;
+		tst_res(TFAIL, "Shadow %s guest mismatch: %llx != %llx",
+			host_data[i].name, vmread_buffer[i],
+			host_data[i].value);
+	}
+
+	if (!errors)
+		tst_res(TPASS, "Guest read correct values from shadow VMCS");
+
+	/* Check values written by the nested VM to shadow VMCS */
+	kvm_vmx_vmptrld(shadow_vmcs);
+
+	for (i = 0, errors = 0; i < SHADOW_DATA_LENGTH; i++) {
+		val = kvm_vmx_vmread(guest_data[i].field_id);
+
+		if (val == guest_data[i].value)
+			continue;
+
+		errors++;
+		tst_res(TFAIL, "Shadow %s parent mismatch: %llx != %llx",
+			guest_data[i].name, val, guest_data[i].value);
+	}
+
+	if (!errors)
+		tst_res(TPASS, "Parent read correct values from shadow VMCS");
+}
+
+#else /* defined(__i386__) || defined(__x86_64__) */
+TST_TEST_TCONF("Test supported only on x86");
+#endif /* defined(__i386__) || defined(__x86_64__) */
+
+#else /* COMPILE_PAYLOAD */
+
+#include "tst_module.h"
+
+#define NESTED_INTEL_SYSFILE "/sys/module/kvm_intel/parameters/nested"
+
+static void setup(void)
+{
+	if (!tst_read_bool_sys_param(NESTED_INTEL_SYSFILE)) {
+		tst_module_reload("kvm_intel",
+			(char *const[]){"nested=1", NULL});
+	}
+
+	tst_kvm_setup();
+}
+
+static struct tst_test test = {
+	.test_all = tst_kvm_run,
+	.setup = setup,
+	.cleanup = tst_kvm_cleanup,
+	.needs_root = 1,
+	.supported_archs = (const char *const []) {
+		"x86_64",
+		"x86",
+		NULL
+	},
+};
+
+#endif /* COMPILE_PAYLOAD */
-- 
2.47.0



* Re: [LTP] [PATCH 10/10] KVM: Add functional test for emulated VMREAD/VMWRITE instructions
  2025-01-21 16:44 ` [LTP] [PATCH 10/10] KVM: Add functional test for emulated VMREAD/VMWRITE instructions Martin Doucha
@ 2025-01-31  7:40   ` Petr Vorel
  0 siblings, 0 replies; 12+ messages in thread
From: Petr Vorel @ 2025-01-31  7:40 UTC (permalink / raw)
  To: Martin Doucha; +Cc: ltp

Hi Martin,

whole patchset merged, excellent work. Thanks!

Kind regards,
Petr

