From: "Adalbert Lazăr" <alazar@bitdefender.com>
To: kvm@vger.kernel.org
Cc: "Tamas K Lengyel" <tamas@tklengyel.com>,
	"Wanpeng Li" <wanpengli@tencent.com>,
	"Nicușor Cîțu" <nicu.citu@icloud.com>,
	"Sean Christopherson" <seanjc@google.com>,
	"Joerg Roedel" <joro@8bytes.org>,
	virtualization@lists.linux-foundation.org,
	"Adalbert Lazăr" <alazar@bitdefender.com>,
	"Mathieu Tarral" <mathieu.tarral@protonmail.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Mihai Donțu" <mdontu@bitdefender.com>,
	"Jim Mattson" <jmattson@google.com>
Subject: [PATCH v12 60/77] KVM: introspection: add KVMI_VCPU_INJECT_EXCEPTION + KVMI_VCPU_EVENT_TRAP
Date: Wed,  6 Oct 2021 20:30:56 +0300	[thread overview]
Message-ID: <20211006173113.26445-61-alazar@bitdefender.com> (raw)
In-Reply-To: <20211006173113.26445-1-alazar@bitdefender.com>

From: Mihai Donțu <mdontu@bitdefender.com>

The KVMI_VCPU_INJECT_EXCEPTION command is used by the introspection tool
to inject exceptions into the guest, for example to make the guest bring
a page in from swap.

The exception is injected right before entering the guest, unless there
is already an exception pending. The introspection tool is notified with
a KVMI_VCPU_EVENT_TRAP event about the success of the injection. In case
of failure, the introspection tool is expected to try again later.

Signed-off-by: Mihai Donțu <mdontu@bitdefender.com>
Co-developed-by: Nicușor Cîțu <nicu.citu@icloud.com>
Signed-off-by: Nicușor Cîțu <nicu.citu@icloud.com>
Co-developed-by: Adalbert Lazăr <alazar@bitdefender.com>
Signed-off-by: Adalbert Lazăr <alazar@bitdefender.com>
---
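A minimal usage sketch for reviewers (not part of the patch): it shows how
an introspection tool might issue this command on the introspection socket.
The write()-based framing, the kvmi_fd descriptor and the
inject_page_fault() helper are assumptions for illustration only; the
kvmi_msg_hdr and kvmi_vcpu_hdr field names are taken from the existing
KVMI wire protocol.

/*
 * Illustration only, not part of this patch: ask a vCPU to take a #PF
 * for a given guest virtual address.  The tool must then wait for the
 * KVMI_VCPU_EVENT_TRAP event to see which exception was actually
 * injected.  Header field names (id/size, vcpu) are assumed from the
 * KVMI wire protocol; error handling and the command reply are omitted.
 */
#include <string.h>
#include <unistd.h>
#include <linux/types.h>
#include <linux/kvmi.h>
#include <asm/kvmi.h>

static int inject_page_fault(int kvmi_fd, __u16 vcpu, __u64 gva, __u32 error_code)
{
	struct {
		struct kvmi_msg_hdr hdr;
		struct kvmi_vcpu_hdr vcpu_hdr;
		struct kvmi_vcpu_inject_exception cmd;
	} req;

	memset(&req, 0, sizeof(req));
	req.hdr.id = KVMI_VCPU_INJECT_EXCEPTION;
	req.hdr.size = sizeof(req) - sizeof(req.hdr);
	req.vcpu_hdr.vcpu = vcpu;
	req.cmd.nr = 14;                 /* #PF */
	req.cmd.error_code = error_code; /* kept only for vectors with an error code */
	req.cmd.address = gva;           /* faulting guest virtual address */

	if (write(kvmi_fd, &req, sizeof(req)) != sizeof(req))
		return -1;

	return 0;
}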
 Documentation/virt/kvm/kvmi.rst               |  76 +++++++++++
 arch/x86/include/asm/kvmi_host.h              |  11 ++
 arch/x86/include/uapi/asm/kvmi.h              |  16 +++
 arch/x86/kvm/kvmi.c                           | 110 ++++++++++++++++
 arch/x86/kvm/kvmi.h                           |   3 +
 arch/x86/kvm/kvmi_msg.c                       |  52 +++++++-
 arch/x86/kvm/x86.c                            |   2 +
 include/uapi/linux/kvmi.h                     |  14 +-
 .../testing/selftests/kvm/x86_64/kvmi_test.c  | 124 ++++++++++++++++++
 virt/kvm/introspection/kvmi.c                 |   2 +
 virt/kvm/introspection/kvmi_int.h             |   4 +
 virt/kvm/introspection/kvmi_msg.c             |  16 ++-
 12 files changed, 416 insertions(+), 14 deletions(-)
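
One more reviewer note on the error code handling in
kvmi_arch_cmd_vcpu_inject_exception() below: the caller-supplied
error_code is preserved only when the vector architecturally delivers
one (the patch uses the kernel's x86_exception_has_error_code() for that
check, and forces error_code to 0 otherwise). The standalone sketch below
merely restates the architectural rule for the classic vectors and is an
illustration, not code from this patch.

#include <stdbool.h>

/*
 * Illustration only: the classic x86 exception vectors that push an
 * error code.  The patch itself relies on x86_exception_has_error_code().
 */
static bool vector_has_error_code(unsigned int nr)
{
	switch (nr) {
	case 8:		/* #DF */
	case 10:	/* #TS */
	case 11:	/* #NP */
	case 12:	/* #SS */
	case 13:	/* #GP */
	case 14:	/* #PF */
	case 17:	/* #AC */
		return true;
	default:
		return false;
	}
}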

diff --git a/Documentation/virt/kvm/kvmi.rst b/Documentation/virt/kvm/kvmi.rst
index a4705acddeb2..1fbc2a03f5bd 100644
--- a/Documentation/virt/kvm/kvmi.rst
+++ b/Documentation/virt/kvm/kvmi.rst
@@ -550,6 +550,7 @@ because these are sent as a result of certain commands (but they can be
 disallowed by the device manager) ::
 
 	KVMI_VCPU_EVENT_PAUSE
+	KVMI_VCPU_EVENT_TRAP
 
 The VM events (e.g. *KVMI_VM_EVENT_UNHOOK*) are controlled with
 the *KVMI_VM_CONTROL_EVENTS* command.
@@ -736,6 +737,46 @@ ID set.
 * -KVM_EINVAL - the padding is not zero
 * -KVM_EAGAIN - the selected vCPU can't be introspected yet
 
+16. KVMI_VCPU_INJECT_EXCEPTION
+------------------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Parameters:
+
+::
+
+	struct kvmi_vcpu_hdr;
+	struct kvmi_vcpu_inject_exception {
+		__u8 nr;
+		__u8 padding1;
+		__u16 padding2;
+		__u32 error_code;
+		__u64 address;
+	};
+
+:Returns:
+
+::
+
+	struct kvmi_error_code
+
+Injects a vCPU exception (``nr``) with or without an error code (``error_code``).
+For page fault exceptions, the guest virtual address (``address``)
+has to be specified too.
+
+The *KVMI_VCPU_EVENT_TRAP* event will be sent with the exception that
+was actually injected.
+
+:Errors:
+
+* -KVM_EPERM  - the *KVMI_VCPU_EVENT_TRAP* event is disallowed
+* -KVM_EINVAL - the selected vCPU is invalid
+* -KVM_EINVAL - the padding is not zero
+* -KVM_EAGAIN - the selected vCPU can't be introspected yet
+* -KVM_EBUSY - another *KVMI_VCPU_INJECT_EXCEPTION*-*KVMI_VCPU_EVENT_TRAP*
+               pair is in progress
+
 Events
 ======
 
@@ -966,3 +1007,38 @@ register (see **KVMI_VCPU_CONTROL_EVENTS**).
 (``cr``), the old value (``old_value``) and the new value (``new_value``)
 are sent to the introspection tool. The *CONTINUE* action will set the
 ``new_val``.
+
+6. KVMI_VCPU_EVENT_TRAP
+-----------------------
+
+:Architectures: x86
+:Versions: >= 1
+:Actions: CONTINUE, CRASH
+:Parameters:
+
+::
+
+	struct kvmi_vcpu_event;
+	struct kvmi_vcpu_event_trap {
+		__u8 nr;
+		__u8 padding1;
+		__u16 padding2;
+		__u32 error_code;
+		__u64 address;
+	};
+
+:Returns:
+
+::
+
+	struct kvmi_vcpu_hdr;
+	struct kvmi_vcpu_event_reply;
+
+This event is sent after a previous *KVMI_VCPU_INJECT_EXCEPTION* command
+has been processed. Because it has a high priority, it is sent before any
+other vCPU introspection event.
+
+``kvmi_vcpu_event`` (with the vCPU state), the exception/interrupt number
+(``nr``), the error code (``error_code``) and the ``address`` are sent to
+the introspection tool, which should check whether its exception was
+injected or was overridden.
diff --git a/arch/x86/include/asm/kvmi_host.h b/arch/x86/include/asm/kvmi_host.h
index edbedf031467..97f5b1a01c9e 100644
--- a/arch/x86/include/asm/kvmi_host.h
+++ b/arch/x86/include/asm/kvmi_host.h
@@ -24,6 +24,15 @@ struct kvm_vcpu_arch_introspection {
 	bool have_delayed_regs;
 
 	DECLARE_BITMAP(cr_mask, KVMI_NUM_CR);
+
+	struct {
+		u8 nr;
+		u32 error_code;
+		bool error_code_valid;
+		u64 address;
+		bool pending;
+		bool send_event;
+	} exception;
 };
 
 struct kvm_arch_introspection {
@@ -36,6 +45,7 @@ bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
 		   unsigned long old_value, unsigned long *new_value);
 bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu);
 bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu, bool enable);
+void kvmi_enter_guest(struct kvm_vcpu *vcpu);
 
 #else /* CONFIG_KVM_INTROSPECTION */
 
@@ -48,6 +58,7 @@ static inline bool kvmi_cr_event(struct kvm_vcpu *vcpu, unsigned int cr,
 static inline bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu) { return false; }
 static inline bool kvmi_monitor_cr3w_intercept(struct kvm_vcpu *vcpu,
 						bool enable) { return false; }
+static inline void kvmi_enter_guest(struct kvm_vcpu *vcpu) { }
 
 #endif /* CONFIG_KVM_INTROSPECTION */
 
diff --git a/arch/x86/include/uapi/asm/kvmi.h b/arch/x86/include/uapi/asm/kvmi.h
index 32cd17488058..aa991fbab473 100644
--- a/arch/x86/include/uapi/asm/kvmi.h
+++ b/arch/x86/include/uapi/asm/kvmi.h
@@ -79,4 +79,20 @@ struct kvmi_vcpu_event_cr_reply {
 	__u64 new_val;
 };
 
+struct kvmi_vcpu_event_trap {
+	__u8 nr;
+	__u8 padding1;
+	__u16 padding2;
+	__u32 error_code;
+	__u64 address;
+};
+
+struct kvmi_vcpu_inject_exception {
+	__u8 nr;
+	__u8 padding1;
+	__u16 padding2;
+	__u32 error_code;
+	__u64 address;
+};
+
 #endif /* _UAPI_ASM_X86_KVMI_H */
diff --git a/arch/x86/kvm/kvmi.c b/arch/x86/kvm/kvmi.c
index acd655ab770d..93123d47752c 100644
--- a/arch/x86/kvm/kvmi.c
+++ b/arch/x86/kvm/kvmi.c
@@ -15,6 +15,7 @@ void kvmi_arch_init_vcpu_events_mask(unsigned long *supported)
 	set_bit(KVMI_VCPU_EVENT_BREAKPOINT, supported);
 	set_bit(KVMI_VCPU_EVENT_CR, supported);
 	set_bit(KVMI_VCPU_EVENT_HYPERCALL, supported);
+	set_bit(KVMI_VCPU_EVENT_TRAP, supported);
 }
 
 static unsigned int kvmi_vcpu_mode(const struct kvm_vcpu *vcpu,
@@ -457,3 +458,112 @@ bool kvmi_cr3_intercepted(struct kvm_vcpu *vcpu)
 	return ret;
 }
 EXPORT_SYMBOL(kvmi_cr3_intercepted);
+
+int kvmi_arch_cmd_vcpu_inject_exception(struct kvm_vcpu *vcpu,
+					const struct kvmi_vcpu_inject_exception *req)
+{
+	struct kvm_vcpu_arch_introspection *arch = &VCPUI(vcpu)->arch;
+	bool has_error;
+
+	arch->exception.pending = true;
+
+	has_error = x86_exception_has_error_code(req->nr);
+
+	arch->exception.nr = req->nr;
+	arch->exception.error_code = has_error ? req->error_code : 0;
+	arch->exception.error_code_valid = has_error;
+	arch->exception.address = req->address;
+
+	return 0;
+}
+
+static void kvmi_queue_exception(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_arch_introspection *arch = &VCPUI(vcpu)->arch;
+	struct x86_exception e = {
+		.vector = arch->exception.nr,
+		.error_code_valid = arch->exception.error_code_valid,
+		.error_code = arch->exception.error_code,
+		.address = arch->exception.address,
+	};
+
+	if (e.vector == PF_VECTOR)
+		kvm_inject_page_fault(vcpu, &e);
+	else if (e.error_code_valid)
+		kvm_queue_exception_e(vcpu, e.vector, e.error_code);
+	else
+		kvm_queue_exception(vcpu, e.vector);
+}
+
+static void kvmi_save_injected_event(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+	vcpui->arch.exception.error_code = 0;
+	vcpui->arch.exception.error_code_valid = false;
+
+	vcpui->arch.exception.address = vcpu->arch.cr2;
+	if (vcpu->arch.exception.injected) {
+		vcpui->arch.exception.nr = vcpu->arch.exception.nr;
+		vcpui->arch.exception.error_code_valid =
+			x86_exception_has_error_code(vcpu->arch.exception.nr);
+		vcpui->arch.exception.error_code = vcpu->arch.exception.error_code;
+	} else if (vcpu->arch.interrupt.injected) {
+		vcpui->arch.exception.nr = vcpu->arch.interrupt.nr;
+	}
+}
+
+static void kvmi_inject_pending_exception(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+	if (!kvm_event_needs_reinjection(vcpu)) {
+		kvmi_queue_exception(vcpu);
+		kvm_inject_pending_exception(vcpu);
+	}
+
+	kvmi_save_injected_event(vcpu);
+
+	vcpui->arch.exception.pending = false;
+	vcpui->arch.exception.send_event = true;
+	kvm_make_request(KVM_REQ_INTROSPECTION, vcpu);
+}
+
+void kvmi_enter_guest(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_introspection *vcpui;
+	struct kvm_introspection *kvmi;
+
+	kvmi = kvmi_get(vcpu->kvm);
+	if (kvmi) {
+		vcpui = VCPUI(vcpu);
+
+		if (vcpui->arch.exception.pending)
+			kvmi_inject_pending_exception(vcpu);
+
+		kvmi_put(vcpu->kvm);
+	}
+}
+
+static void kvmi_send_trap_event(struct kvm_vcpu *vcpu)
+{
+	u32 action;
+
+	action = kvmi_msg_send_vcpu_trap(vcpu);
+	switch (action) {
+	case KVMI_EVENT_ACTION_CONTINUE:
+		break;
+	default:
+		kvmi_handle_common_event_actions(vcpu, action);
+	}
+}
+
+void kvmi_arch_send_pending_event(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+
+	if (vcpui->arch.exception.send_event) {
+		vcpui->arch.exception.send_event = false;
+		kvmi_send_trap_event(vcpu);
+	}
+}
diff --git a/arch/x86/kvm/kvmi.h b/arch/x86/kvm/kvmi.h
index 6a444428b831..265fece148d2 100644
--- a/arch/x86/kvm/kvmi.h
+++ b/arch/x86/kvm/kvmi.h
@@ -8,8 +8,11 @@ int kvmi_arch_cmd_vcpu_get_registers(struct kvm_vcpu *vcpu,
 void kvmi_arch_cmd_vcpu_set_registers(struct kvm_vcpu *vcpu,
 				      const struct kvm_regs *regs);
 int kvmi_arch_cmd_vcpu_control_cr(struct kvm_vcpu *vcpu, int cr, bool enable);
+int kvmi_arch_cmd_vcpu_inject_exception(struct kvm_vcpu *vcpu,
+				const struct kvmi_vcpu_inject_exception *req);
 
 u32 kvmi_msg_send_vcpu_cr(struct kvm_vcpu *vcpu, u32 cr, u64 old_value,
 			  u64 new_value, u64 *ret_value);
+u32 kvmi_msg_send_vcpu_trap(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/arch/x86/kvm/kvmi_msg.c b/arch/x86/kvm/kvmi_msg.c
index 4a2dbc38aef8..39cbab9f293a 100644
--- a/arch/x86/kvm/kvmi_msg.c
+++ b/arch/x86/kvm/kvmi_msg.c
@@ -153,12 +153,34 @@ static int handle_vcpu_control_cr(const struct kvmi_vcpu_msg_job *job,
 	return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
 }
 
+static int handle_vcpu_inject_exception(const struct kvmi_vcpu_msg_job *job,
+					const struct kvmi_msg_hdr *msg,
+					const void *_req)
+{
+	const struct kvmi_vcpu_inject_exception *req = _req;
+	struct kvm_vcpu *vcpu = job->vcpu;
+	int ec;
+
+	if (!kvmi_is_event_allowed(KVMI(vcpu->kvm), KVMI_VCPU_EVENT_TRAP))
+		ec = -KVM_EPERM;
+	else if (req->padding1 || req->padding2)
+		ec = -KVM_EINVAL;
+	else if (VCPUI(vcpu)->arch.exception.pending ||
+			VCPUI(vcpu)->arch.exception.send_event)
+		ec = -KVM_EBUSY;
+	else
+		ec = kvmi_arch_cmd_vcpu_inject_exception(vcpu, req);
+
+	return kvmi_msg_vcpu_reply(job, msg, ec, NULL, 0);
+}
+
 static const kvmi_vcpu_msg_job_fct msg_vcpu[] = {
-	[KVMI_VCPU_CONTROL_CR]    = handle_vcpu_control_cr,
-	[KVMI_VCPU_GET_CPUID]     = handle_vcpu_get_cpuid,
-	[KVMI_VCPU_GET_INFO]      = handle_vcpu_get_info,
-	[KVMI_VCPU_GET_REGISTERS] = handle_vcpu_get_registers,
-	[KVMI_VCPU_SET_REGISTERS] = handle_vcpu_set_registers,
+	[KVMI_VCPU_CONTROL_CR]       = handle_vcpu_control_cr,
+	[KVMI_VCPU_GET_CPUID]        = handle_vcpu_get_cpuid,
+	[KVMI_VCPU_GET_INFO]         = handle_vcpu_get_info,
+	[KVMI_VCPU_GET_REGISTERS]    = handle_vcpu_get_registers,
+	[KVMI_VCPU_INJECT_EXCEPTION] = handle_vcpu_inject_exception,
+	[KVMI_VCPU_SET_REGISTERS]    = handle_vcpu_set_registers,
 };
 
 kvmi_vcpu_msg_job_fct kvmi_arch_vcpu_msg_handler(u16 id)
@@ -190,3 +212,23 @@ u32 kvmi_msg_send_vcpu_cr(struct kvm_vcpu *vcpu, u32 cr, u64 old_value,
 
 	return action;
 }
+
+u32 kvmi_msg_send_vcpu_trap(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
+	struct kvmi_vcpu_event_trap e;
+	u32 action;
+	int err;
+
+	memset(&e, 0, sizeof(e));
+	e.nr = vcpui->arch.exception.nr;
+	e.error_code = vcpui->arch.exception.error_code;
+	e.address = vcpui->arch.exception.address;
+
+	err = __kvmi_send_vcpu_event(vcpu, KVMI_VCPU_EVENT_TRAP,
+				     &e, sizeof(e), NULL, 0, &action);
+	if (err)
+		action = KVMI_EVENT_ACTION_CONTINUE;
+
+	return action;
+}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2caa6929a78f..bc017c2bf7bb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9668,6 +9668,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto cancel_injection;
 	}
 
+	kvmi_enter_guest(vcpu);
+
 	if (req_immediate_exit) {
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
 		static_call(kvm_x86_request_immediate_exit)(vcpu);
diff --git a/include/uapi/linux/kvmi.h b/include/uapi/linux/kvmi.h
index c1d8cf02018b..263d98a5903e 100644
--- a/include/uapi/linux/kvmi.h
+++ b/include/uapi/linux/kvmi.h
@@ -36,12 +36,13 @@ enum {
 enum {
 	KVMI_VCPU_EVENT = KVMI_VCPU_MESSAGE_ID(0),
 
-	KVMI_VCPU_GET_INFO       = KVMI_VCPU_MESSAGE_ID(1),
-	KVMI_VCPU_CONTROL_EVENTS = KVMI_VCPU_MESSAGE_ID(2),
-	KVMI_VCPU_GET_REGISTERS  = KVMI_VCPU_MESSAGE_ID(3),
-	KVMI_VCPU_SET_REGISTERS  = KVMI_VCPU_MESSAGE_ID(4),
-	KVMI_VCPU_GET_CPUID      = KVMI_VCPU_MESSAGE_ID(5),
-	KVMI_VCPU_CONTROL_CR     = KVMI_VCPU_MESSAGE_ID(6),
+	KVMI_VCPU_GET_INFO         = KVMI_VCPU_MESSAGE_ID(1),
+	KVMI_VCPU_CONTROL_EVENTS   = KVMI_VCPU_MESSAGE_ID(2),
+	KVMI_VCPU_GET_REGISTERS    = KVMI_VCPU_MESSAGE_ID(3),
+	KVMI_VCPU_SET_REGISTERS    = KVMI_VCPU_MESSAGE_ID(4),
+	KVMI_VCPU_GET_CPUID        = KVMI_VCPU_MESSAGE_ID(5),
+	KVMI_VCPU_CONTROL_CR       = KVMI_VCPU_MESSAGE_ID(6),
+	KVMI_VCPU_INJECT_EXCEPTION = KVMI_VCPU_MESSAGE_ID(7),
 
 	KVMI_NEXT_VCPU_MESSAGE
 };
@@ -60,6 +61,7 @@ enum {
 	KVMI_VCPU_EVENT_HYPERCALL  = KVMI_VCPU_EVENT_ID(1),
 	KVMI_VCPU_EVENT_BREAKPOINT = KVMI_VCPU_EVENT_ID(2),
 	KVMI_VCPU_EVENT_CR         = KVMI_VCPU_EVENT_ID(3),
+	KVMI_VCPU_EVENT_TRAP       = KVMI_VCPU_EVENT_ID(4),
 
 	KVMI_NEXT_VCPU_EVENT
 };
diff --git a/tools/testing/selftests/kvm/x86_64/kvmi_test.c b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
index 86978b2ab6c9..7814626cba77 100644
--- a/tools/testing/selftests/kvm/x86_64/kvmi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvmi_test.c
@@ -49,6 +49,7 @@ struct vcpu_worker_data {
 	struct kvm_vm *vm;
 	int vcpu_id;
 	int test_id;
+	bool restart_on_shutdown;
 };
 
 enum {
@@ -633,6 +634,10 @@ static void *vcpu_worker(void *data)
 
 		vcpu_run(ctx->vm, ctx->vcpu_id);
 
+		if (run->exit_reason == KVM_EXIT_SHUTDOWN &&
+		    ctx->restart_on_shutdown)
+			continue;
+
 		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
 			"vcpu_run() failed, test_id %d, exit reason %u (%s)\n",
 			ctx->test_id, run->exit_reason,
@@ -1199,6 +1204,124 @@ static void test_cmd_vcpu_control_cr(struct kvm_vm *vm)
 	test_invalid_vcpu_control_cr(vm);
 }
 
+static void __inject_exception(int nr)
+{
+	struct {
+		struct kvmi_msg_hdr hdr;
+		struct kvmi_vcpu_hdr vcpu_hdr;
+		struct kvmi_vcpu_inject_exception cmd;
+	} req = {};
+	int r;
+
+	req.cmd.nr = nr;
+
+	r = __do_vcpu0_command(KVMI_VCPU_INJECT_EXCEPTION,
+			       &req.hdr, sizeof(req), NULL, 0);
+	TEST_ASSERT(r == 0,
+		"KVMI_VCPU_INJECT_EXCEPTION failed, error %d(%s)\n",
+		-r, kvm_strerror(-r));
+}
+
+static void test_disallowed_trap_event(struct kvm_vm *vm)
+{
+	struct {
+		struct kvmi_msg_hdr hdr;
+		struct kvmi_vcpu_hdr vcpu_hdr;
+		struct kvmi_vcpu_inject_exception cmd;
+	} req = {};
+
+	disallow_event(vm, KVMI_VCPU_EVENT_TRAP);
+	test_vcpu0_command(vm, KVMI_VCPU_INJECT_EXCEPTION,
+			   &req.hdr, sizeof(req), NULL, 0, -KVM_EPERM);
+	allow_event(vm, KVMI_VCPU_EVENT_TRAP);
+}
+
+static void receive_exception_event(int nr)
+{
+	struct kvmi_msg_hdr hdr;
+	struct {
+		struct vcpu_event vcpu_ev;
+		struct kvmi_vcpu_event_trap trap;
+	} ev;
+	struct vcpu_reply rpl = {};
+
+	receive_vcpu_event(&hdr, &ev.vcpu_ev, sizeof(ev), KVMI_VCPU_EVENT_TRAP);
+
+	pr_debug("Exception event: vector %u, error_code 0x%x, address 0x%llx\n",
+		ev.trap.nr, ev.trap.error_code, ev.trap.address);
+
+	TEST_ASSERT(ev.trap.nr == nr,
+		"Injected exception %u instead of %u\n",
+		ev.trap.nr, nr);
+
+	reply_to_event(&hdr, &ev.vcpu_ev, KVMI_EVENT_ACTION_CONTINUE,
+			&rpl, sizeof(rpl));
+}
+
+static void test_succeeded_ud_injection(void)
+{
+	__u8 ud_vector = 6;
+
+	__inject_exception(ud_vector);
+
+	receive_exception_event(ud_vector);
+}
+
+static void test_failed_ud_injection(struct kvm_vm *vm,
+				     struct vcpu_worker_data *data)
+{
+	struct kvmi_msg_hdr hdr;
+	struct {
+		struct vcpu_event vcpu_ev;
+		struct kvmi_vcpu_event_breakpoint bp;
+	} ev;
+	struct vcpu_reply rpl = {};
+	__u8 ud_vector = 6, bp_vector = 3;
+
+	WRITE_ONCE(data->test_id, GUEST_TEST_BP);
+
+	receive_vcpu_event(&hdr, &ev.vcpu_ev, sizeof(ev),
+			   KVMI_VCPU_EVENT_BREAKPOINT);
+
+	/* skip the breakpoint instruction, next time guest_bp_test() runs */
+	ev.vcpu_ev.common.arch.regs.rip += ev.bp.insn_len;
+	__set_registers(vm, &ev.vcpu_ev.common.arch.regs);
+
+	__inject_exception(ud_vector);
+
+	/* reinject the #BP exception because of the continue action */
+	reply_to_event(&hdr, &ev.vcpu_ev, KVMI_EVENT_ACTION_CONTINUE,
+			&rpl, sizeof(rpl));
+
+	receive_exception_event(bp_vector);
+}
+
+static void test_cmd_vcpu_inject_exception(struct kvm_vm *vm)
+{
+	struct vcpu_worker_data data = {
+		.vm = vm,
+		.vcpu_id = VCPU_ID,
+		.restart_on_shutdown = true,
+	};
+	pthread_t vcpu_thread;
+
+	if (!is_intel_cpu()) {
+		print_skip("TODO: %s() - make it work with AMD", __func__);
+		return;
+	}
+
+	test_disallowed_trap_event(vm);
+
+	enable_vcpu_event(vm, KVMI_VCPU_EVENT_BREAKPOINT);
+	vcpu_thread = start_vcpu_worker(&data);
+
+	test_succeeded_ud_injection();
+	test_failed_ud_injection(vm, &data);
+
+	wait_vcpu_worker(vcpu_thread);
+	disable_vcpu_event(vm, KVMI_VCPU_EVENT_BREAKPOINT);
+}
+
 static void test_introspection(struct kvm_vm *vm)
 {
 	srandom(time(0));
@@ -1223,6 +1346,7 @@ static void test_introspection(struct kvm_vm *vm)
 	test_event_breakpoint(vm);
 	test_cmd_vm_control_cleanup(vm);
 	test_cmd_vcpu_control_cr(vm);
+	test_cmd_vcpu_inject_exception(vm);
 
 	unhook_introspection(vm);
 }
diff --git a/virt/kvm/introspection/kvmi.c b/virt/kvm/introspection/kvmi.c
index b5fcd8a3d8ae..5d65314db7b8 100644
--- a/virt/kvm/introspection/kvmi.c
+++ b/virt/kvm/introspection/kvmi.c
@@ -846,6 +846,8 @@ void kvmi_handle_requests(struct kvm_vcpu *vcpu)
 	if (!kvmi)
 		goto out;
 
+	kvmi_arch_send_pending_event(vcpu);
+
 	for (;;) {
 		kvmi_run_jobs(vcpu);
 
diff --git a/virt/kvm/introspection/kvmi_int.h b/virt/kvm/introspection/kvmi_int.h
index b1877a770fcb..0a7a8285b981 100644
--- a/virt/kvm/introspection/kvmi_int.h
+++ b/virt/kvm/introspection/kvmi_int.h
@@ -48,6 +48,9 @@ int kvmi_msg_send_unhook(struct kvm_introspection *kvmi);
 int kvmi_send_vcpu_event(struct kvm_vcpu *vcpu, u32 ev_id,
 			 void *ev, size_t ev_size,
 			 void *rpl, size_t rpl_size, u32 *action);
+int __kvmi_send_vcpu_event(struct kvm_vcpu *vcpu, u32 ev_id,
+			   void *ev, size_t ev_size,
+			   void *rpl, size_t rpl_size, u32 *action);
 int kvmi_msg_vcpu_reply(const struct kvmi_vcpu_msg_job *job,
 			const struct kvmi_msg_hdr *msg, int err,
 			const void *rpl, size_t rpl_size);
@@ -100,5 +103,6 @@ bool kvmi_arch_is_agent_hypercall(struct kvm_vcpu *vcpu);
 void kvmi_arch_breakpoint_event(struct kvm_vcpu *vcpu, u64 gva, u8 insn_len);
 int kvmi_arch_cmd_control_intercept(struct kvm_vcpu *vcpu,
 				    unsigned int event_id, bool enable);
+void kvmi_arch_send_pending_event(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/virt/kvm/introspection/kvmi_msg.c b/virt/kvm/introspection/kvmi_msg.c
index 74061304ca83..1fedfa6d4814 100644
--- a/virt/kvm/introspection/kvmi_msg.c
+++ b/virt/kvm/introspection/kvmi_msg.c
@@ -718,9 +718,9 @@ static int kvmi_fill_and_sent_vcpu_event(struct kvm_vcpu *vcpu,
 	return kvmi_sock_write(kvmi, vec, n, msg_size);
 }
 
-int kvmi_send_vcpu_event(struct kvm_vcpu *vcpu, u32 ev_id,
-			 void *ev, size_t ev_size,
-			 void *rpl, size_t rpl_size, u32 *action)
+int __kvmi_send_vcpu_event(struct kvm_vcpu *vcpu, u32 ev_id,
+			   void *ev, size_t ev_size,
+			   void *rpl, size_t rpl_size, u32 *action)
 {
 	struct kvm_vcpu_introspection *vcpui = VCPUI(vcpu);
 	struct kvm_introspection *kvmi = KVMI(vcpu->kvm);
@@ -750,6 +750,16 @@ int kvmi_send_vcpu_event(struct kvm_vcpu *vcpu, u32 ev_id,
 	return err;
 }
 
+int kvmi_send_vcpu_event(struct kvm_vcpu *vcpu,
+			 u32 ev_id, void *ev, size_t ev_size,
+			 void *rpl, size_t rpl_size, u32 *action)
+{
+	kvmi_arch_send_pending_event(vcpu);
+
+	return __kvmi_send_vcpu_event(vcpu, ev_id, ev, ev_size,
+				      rpl, rpl_size, action);
+}
+
 u32 kvmi_msg_send_vcpu_pause(struct kvm_vcpu *vcpu)
 {
 	u32 action;