From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <20250908212925.708769718@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Michael Jeanson, Jens Axboe, Paolo Bonzini, Sean Christopherson,
 Wei Liu, Dexuan Cui, Mathieu Desnoyers, Peter Zijlstra,
 "Paul E. McKenney", Boqun Feng, x86@kernel.org, Arnd Bergmann,
 Heiko Carstens, Christian Borntraeger, Sven Schnelle, Huacai Chen,
 Paul Walmsley, Palmer Dabbelt
Subject: [patch V4 07/36] rseq, virt: Retrigger RSEQ after vcpu_run()
References: <20250908212737.353775467@linutronix.de>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Mon, 8 Sep 2025 23:31:38 +0200 (CEST)

Hypervisors invoke resume_user_mode_work() before entering the guest,
which clears TIF_NOTIFY_RESUME.
The @regs argument is NULL as there is no user space context available
to them, so the rseq notify handler skips inspecting the critical
section, but updates the CPU/MM CID values unconditionally so that the
eventual pending rseq event is not lost on the way to user space.

This is a pointless exercise as the task might be rescheduled before
actually returning to user space and it creates unnecessary work in the
vcpu_run() loops.

It's way more efficient to ignore that invocation based on
@regs == NULL and let the hypervisors re-raise TIF_NOTIFY_RESUME after
returning from the vcpu_run() loop before returning from the ioctl().
This ensures that a pending RSEQ update is not lost and the IDs are
updated before returning to user space.

Once the RSEQ handling is decoupled from TIF_NOTIFY_RESUME, this turns
into a NOOP.

Signed-off-by: Thomas Gleixner
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Wei Liu
Cc: Dexuan Cui
---
V3: Add the missing rseq.h include for HV - 0-day
---
 drivers/hv/mshv_root_main.c |    3 +
 include/linux/rseq.h        |   17 +++++++++
 kernel/rseq.c               |   76 +++++++++++++++++++++++---------------------
 virt/kvm/kvm_main.c         |    3 +
 4 files changed, 63 insertions(+), 36 deletions(-)

--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include <linux/rseq.h>

 #include "mshv_eventfd.h"
 #include "mshv.h"
@@ -585,6 +586,8 @@ static long mshv_run_vp_with_root_schedu
 		}
 	} while (!vp->run.flags.intercept_suspend);

+	rseq_virt_userspace_exit();
+
 	return ret;
 }

--- a/include/linux/rseq.h
+++ b/include/linux/rseq.h
@@ -38,6 +38,22 @@ static __always_inline void rseq_exit_to
 }

 /*
+ * KVM/HYPERV invoke resume_user_mode_work() before entering guest mode,
+ * which clears TIF_NOTIFY_RESUME. To avoid updating user space RSEQ in
+ * that case just to do it eventually again before returning to user space,
+ * the entry resume_user_mode_work() invocation is ignored as the register
+ * argument is NULL.
+ *
+ * After returning from guest mode, they have to invoke this function to
+ * re-raise TIF_NOTIFY_RESUME if necessary.
+ */
+static inline void rseq_virt_userspace_exit(void)
+{
+	if (current->rseq_event_pending)
+		set_tsk_thread_flag(current, TIF_NOTIFY_RESUME);
+}
+
+/*
  * If parent process has a registered restartable sequences area, the
  * child inherits. Unregister rseq for a clone with CLONE_VM set.
  */
@@ -68,6 +84,7 @@ static inline void rseq_execve(struct ta
 static inline void rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs) { }
 static inline void rseq_signal_deliver(struct ksignal *ksig, struct pt_regs *regs) { }
 static inline void rseq_sched_switch_event(struct task_struct *t) { }
+static inline void rseq_virt_userspace_exit(void) { }
 static inline void rseq_fork(struct task_struct *t, unsigned long clone_flags) { }
 static inline void rseq_execve(struct task_struct *t) { }
 static inline void rseq_exit_to_user_mode(void) { }

--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -422,50 +422,54 @@ void __rseq_handle_notify_resume(struct
 {
 	struct task_struct *t = current;
 	int ret, sig;
+	bool event;
+
+	/*
+	 * If invoked from hypervisors before entering the guest via
+	 * resume_user_mode_work(), then @regs is a NULL pointer.
+	 *
+	 * resume_user_mode_work() clears TIF_NOTIFY_RESUME and re-raises
+	 * it before returning from the ioctl() to user space when
+	 * rseq_event.sched_switch is set.
+	 *
+	 * So it's safe to ignore here instead of pointlessly updating it
+	 * in the vcpu_run() loop.
+	 */
+	if (!regs)
+		return;

 	if (unlikely(t->flags & PF_EXITING))
 		return;

 	/*
-	 * If invoked from hypervisors or IO-URING, then @regs is a NULL
-	 * pointer, so fixup cannot be done. If the syscall which led to
-	 * this invocation was invoked inside a critical section, then it
-	 * will either end up in this code again or a possible violation of
-	 * a syscall inside a critical region can only be detected by the
-	 * debug code in rseq_syscall() in a debug enabled kernel.
+	 * Read and clear the event pending bit first. If the task
+	 * was not preempted or migrated or a signal is on the way,
+	 * there is no point in doing any of the heavy lifting here
+	 * on production kernels. In that case TIF_NOTIFY_RESUME
+	 * was raised by some other functionality.
+	 *
+	 * This is correct because the read/clear operation is
+	 * guarded against scheduler preemption, which makes it CPU
+	 * local atomic. If the task is preempted right after
+	 * re-enabling preemption then TIF_NOTIFY_RESUME is set
+	 * again and this function is invoked another time _before_
+	 * the task is able to return to user mode.
+	 *
+	 * On a debug kernel, invoke the fixup code unconditionally
+	 * with the result handed in to allow the detection of
+	 * inconsistencies.
 	 */
-	if (regs) {
-		/*
-		 * Read and clear the event pending bit first. If the task
-		 * was not preempted or migrated or a signal is on the way,
-		 * there is no point in doing any of the heavy lifting here
-		 * on production kernels. In that case TIF_NOTIFY_RESUME
-		 * was raised by some other functionality.
-		 *
-		 * This is correct because the read/clear operation is
-		 * guarded against scheduler preemption, which makes it CPU
-		 * local atomic. If the task is preempted right after
-		 * re-enabling preemption then TIF_NOTIFY_RESUME is set
-		 * again and this function is invoked another time _before_
-		 * the task is able to return to user mode.
-		 *
-		 * On a debug kernel, invoke the fixup code unconditionally
-		 * with the result handed in to allow the detection of
-		 * inconsistencies.
-		 */
-		bool event;
-
-		scoped_guard(RSEQ_EVENT_GUARD) {
-			event = t->rseq_event_pending;
-			t->rseq_event_pending = false;
-		}
+	scoped_guard(RSEQ_EVENT_GUARD) {
+		event = t->rseq_event_pending;
+		t->rseq_event_pending = false;
+	}

-		if (IS_ENABLED(CONFIG_DEBUG_RSEQ) || event) {
-			ret = rseq_ip_fixup(regs, event);
-			if (unlikely(ret < 0))
-				goto error;
-		}
+	if (IS_ENABLED(CONFIG_DEBUG_RSEQ) || event) {
+		ret = rseq_ip_fixup(regs, event);
+		if (unlikely(ret < 0))
+			goto error;
 	}
+
 	if (unlikely(rseq_update_cpu_node_id(t)))
 		goto error;
 	return;

--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -49,6 +49,7 @@
 #include
 #include
 #include
+#include <linux/rseq.h>
 #include
 #include
@@ -4466,6 +4467,8 @@ static long kvm_vcpu_ioctl(struct file *
 	r = kvm_arch_vcpu_ioctl_run(vcpu);
 	vcpu->wants_to_run = false;

+	rseq_virt_userspace_exit();
+
 	trace_kvm_userspace_exit(vcpu->run->exit_reason, r);
 	break;
 }