From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, linux-arch@vger.kernel.org,
Will Deacon <will@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
Mark Rutland <mark.rutland@arm.com>,
Kees Cook <keescook@chromium.org>,
Keno Fischer <keno@juliacomputing.com>,
Paolo Bonzini <pbonzini@redhat.com>,
kvm@vger.kernel.org,
Gabriel Krisman Bertazi <krisman@collabora.com>,
Sean Christopherson <sean.j.christopherson@intel.com>
Subject: [patch V5 05/15] entry: Provide infrastructure for work before transitioning to guest mode
Date: Wed, 22 Jul 2020 23:59:59 +0200
Message-ID: <20200722220519.833296398@linutronix.de>
In-Reply-To: <20200722215954.464281930@linutronix.de>
Entering a guest is similar to exiting to user space. Pending work like
handling signals, rescheduling, task work etc. needs to be handled before
that.
Provide generic infrastructure to avoid duplication of the same handling
code all over the place.
The transfer-to-guest-mode handling differs from the exit-to-user-mode
handling, e.g. with respect to rseq and live patching, so a separate
function is used.
The initial list of work items handled is:
TIF_SIGPENDING, TIF_NEED_RESCHED, TIF_NOTIFY_RESUME
Architecture specific TIF flags can be added via defines in the
architecture specific include files.
The calling convention also differs from the syscall/interrupt entry
functions, as KVM invokes this from the outer vcpu_run() loop with
interrupts and preemption enabled. To prevent missing a pending work item,
KVM invokes a check for pending TIF work from interrupt-disabled code right
before transitioning to guest mode. The lockdep, RCU and tracing state
handling is also done directly around the switch to and from guest mode.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V5: Rename exit -> xfer (Sean)
V3: Reworked and simplified version adopted to recent X86 and KVM changes
V2: Moved KVM specific functions to kvm (Paolo)
Added lockdep assert (Andy)
Dropped live patching from enter guest mode work (Miroslav)
---
include/linux/entry-kvm.h | 80 ++++++++++++++++++++++++++++++++++++++++++++++
include/linux/kvm_host.h | 8 ++++
kernel/entry/Makefile | 3 +
kernel/entry/kvm.c | 51 +++++++++++++++++++++++++++++
virt/kvm/Kconfig | 3 +
5 files changed, 144 insertions(+), 1 deletion(-)
--- /dev/null
+++ b/include/linux/entry-kvm.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_ENTRYKVM_H
+#define __LINUX_ENTRYKVM_H
+
+#include <linux/entry-common.h>
+
+/* Transfer to guest mode work */
+#ifdef CONFIG_KVM_XFER_TO_GUEST_WORK
+
+#ifndef ARCH_XFER_TO_GUEST_MODE_WORK
+# define ARCH_XFER_TO_GUEST_MODE_WORK (0)
+#endif
+
+#define XFER_TO_GUEST_MODE_WORK \
+ (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+ _TIF_NOTIFY_RESUME | ARCH_XFER_TO_GUEST_MODE_WORK)
+
+struct kvm_vcpu;
+
+/**
+ * arch_xfer_to_guest_mode_work - Architecture specific xfer to guest mode
+ * work function.
+ * @vcpu: Pointer to current's VCPU data
+ * @ti_work: Cached TIF flags gathered in xfer_to_guest_mode()
+ *
+ * Invoked from xfer_to_guest_mode_work(). Defaults to NOOP. Can be
+ * replaced by architecture specific code.
+ */
+static inline int arch_xfer_to_guest_mode_work(struct kvm_vcpu *vcpu,
+ unsigned long ti_work);
+
+#ifndef arch_xfer_to_guest_mode_work
+static inline int arch_xfer_to_guest_mode_work(struct kvm_vcpu *vcpu,
+ unsigned long ti_work)
+{
+ return 0;
+}
+#endif
+
+/**
+ * xfer_to_guest_mode - Check and handle pending work which needs to be
+ * handled before returning to guest mode
+ * @vcpu: Pointer to current's VCPU data
+ *
+ * Returns: 0 or an error code
+ */
+int xfer_to_guest_mode(struct kvm_vcpu *vcpu);
+
+/**
+ * __xfer_to_guest_mode_work_pending - Check if work is pending
+ *
+ * Returns: True if work pending, False otherwise.
+ *
+ * Bare variant of xfer_to_guest_mode_work_pending(). Can be called from
+ * interrupt enabled code for racy quick checks with care.
+ */
+static inline bool __xfer_to_guest_mode_work_pending(void)
+{
+ unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+
+ return !!(ti_work & XFER_TO_GUEST_MODE_WORK);
+}
+
+/**
+ * xfer_to_guest_mode_work_pending - Check if work is pending which needs to be
+ * handled before returning to guest mode
+ *
+ * Returns: True if work pending, False otherwise.
+ *
+ * Has to be invoked with interrupts disabled before the transition to
+ * guest mode.
+ */
+static inline bool xfer_to_guest_mode_work_pending(void)
+{
+ lockdep_assert_irqs_disabled();
+ return __xfer_to_guest_mode_work_pending();
+}
+#endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
+
+#endif
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1439,4 +1439,12 @@ int kvm_vm_create_worker_thread(struct k
uintptr_t data, const char *name,
struct task_struct **thread_ptr);
+#ifdef CONFIG_KVM_XFER_TO_GUEST_WORK
+static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
+{
+ vcpu->run->exit_reason = KVM_EXIT_INTR;
+ vcpu->stat.signal_exits++;
+}
+#endif /* CONFIG_KVM_XFER_TO_GUEST_WORK */
+
#endif
--- a/kernel/entry/Makefile
+++ b/kernel/entry/Makefile
@@ -9,4 +9,5 @@ KCOV_INSTRUMENT := n
CFLAGS_REMOVE_common.o = -fstack-protector -fstack-protector-strong
CFLAGS_common.o += -fno-stack-protector
-obj-$(CONFIG_GENERIC_ENTRY) += common.o
+obj-$(CONFIG_GENERIC_ENTRY) += common.o
+obj-$(CONFIG_KVM_XFER_TO_GUEST_WORK) += kvm.o
--- /dev/null
+++ b/kernel/entry/kvm.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/entry-kvm.h>
+#include <linux/kvm_host.h>
+
+static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
+{
+ do {
+ int ret;
+
+ if (ti_work & _TIF_SIGPENDING) {
+ kvm_handle_signal_exit(vcpu);
+ return -EINTR;
+ }
+
+ if (ti_work & _TIF_NEED_RESCHED)
+ schedule();
+
+ if (ti_work & _TIF_NOTIFY_RESUME) {
+ clear_thread_flag(TIF_NOTIFY_RESUME);
+ tracehook_notify_resume(NULL);
+ }
+
+ ret = arch_xfer_to_guest_mode_work(vcpu, ti_work);
+ if (ret)
+ return ret;
+
+ ti_work = READ_ONCE(current_thread_info()->flags);
+ } while (ti_work & XFER_TO_GUEST_MODE_WORK || need_resched());
+ return 0;
+}
+
+int xfer_to_guest_mode(struct kvm_vcpu *vcpu)
+{
+ unsigned long ti_work;
+
+ /*
+ * This is invoked from the outer guest loop with interrupts and
+ * preemption enabled.
+ *
+ * KVM invokes xfer_to_guest_mode_work_pending() with interrupts
+ * disabled in the inner loop before going into guest mode. No need
+ * to disable interrupts here.
+ */
+ ti_work = READ_ONCE(current_thread_info()->flags);
+ if (!(ti_work & XFER_TO_GUEST_MODE_WORK))
+ return 0;
+
+ return xfer_to_guest_mode_work(vcpu, ti_work);
+}
+EXPORT_SYMBOL_GPL(xfer_to_guest_mode);
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -60,3 +60,6 @@ config HAVE_KVM_VCPU_RUN_PID_CHANGE
config HAVE_KVM_NO_POLL
bool
+
+config KVM_XFER_TO_GUEST_WORK
+ bool