From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <20250823161655.127882103@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: Jens Axboe, Mathieu Desnoyers, Peter Zijlstra, "Paul E. McKenney",
 Boqun Feng, Paolo Bonzini, Sean Christopherson, Wei Liu, Dexuan Cui,
 x86@kernel.org, Arnd Bergmann, Heiko Carstens, Christian Borntraeger,
 Sven Schnelle, Huacai Chen, Paul Walmsley, Palmer Dabbelt
Subject: [patch V2 29/37] entry: Split up exit_to_user_mode_prepare()
References: <20250823161326.635281786@linutronix.de>
Date: Sat, 23 Aug 2025 18:40:19 +0200 (CEST)

exit_to_user_mode_prepare() is used for both interrupts and syscalls, but
there is extra rseq work, which is only required in the interrupt exit
case. Split up the function and provide wrappers for syscalls and
interrupts, which allows separating the rseq exit work in the next step.
Signed-off-by: Thomas Gleixner
---
 include/linux/entry-common.h     |  2 +-
 include/linux/irq-entry-common.h | 42 +++++++++++++++++++++++++++++++++-----
 2 files changed, 38 insertions(+), 6 deletions(-)

--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -156,7 +156,7 @@ static __always_inline void syscall_exit
 	if (unlikely(work & SYSCALL_WORK_EXIT))
 		syscall_exit_work(regs, work);
 	local_irq_disable_exit_to_user();
-	exit_to_user_mode_prepare(regs);
+	syscall_exit_to_user_mode_prepare(regs);
 }
 
 /**
--- a/include/linux/irq-entry-common.h
+++ b/include/linux/irq-entry-common.h
@@ -201,7 +201,7 @@ void arch_do_signal_or_restart(struct pt
 unsigned long exit_to_user_mode_loop(struct pt_regs *regs, unsigned long ti_work);
 
 /**
- * exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
+ * __exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
 * @regs:	Pointer to pt_regs on entry stack
 *
 * 1) check that interrupts are disabled
@@ -209,8 +209,10 @@ unsigned long exit_to_user_mode_loop(str
 * 3) call exit_to_user_mode_loop() if any flags from
 *    EXIT_TO_USER_MODE_WORK are set
 * 4) check that interrupts are still disabled
+ *
+ * Don't invoke directly, use the syscall/irqentry_ prefixed variants below
 */
-static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
+static __always_inline void __exit_to_user_mode_prepare(struct pt_regs *regs)
 {
 	unsigned long ti_work;
 
@@ -224,15 +226,45 @@ static __always_inline void exit_to_user
 		ti_work = exit_to_user_mode_loop(regs, ti_work);
 
 	arch_exit_to_user_mode_prepare(regs, ti_work);
+}
 
-	rseq_exit_to_user_mode();
-
+static __always_inline void __exit_to_user_mode_validate(void)
+{
 	/* Ensure that kernel state is sane for a return to userspace */
 	kmap_assert_nomap();
 	lockdep_assert_irqs_disabled();
 	lockdep_sys_exit();
 }
+
+/**
+ * syscall_exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
+ * @regs:	Pointer to pt_regs on entry stack
+ *
+ * Wrapper around __exit_to_user_mode_prepare() to separate the exit work for
+ * syscalls and interrupts.
+ */
+static __always_inline void syscall_exit_to_user_mode_prepare(struct pt_regs *regs)
+{
+	__exit_to_user_mode_prepare(regs);
+	rseq_exit_to_user_mode();
+	__exit_to_user_mode_validate();
+}
+
+/**
+ * irqentry_exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
+ * @regs:	Pointer to pt_regs on entry stack
+ *
+ * Wrapper around __exit_to_user_mode_prepare() to separate the exit work for
+ * syscalls and interrupts.
+ */
+static __always_inline void irqentry_exit_to_user_mode_prepare(struct pt_regs *regs)
+{
+	__exit_to_user_mode_prepare(regs);
+	rseq_exit_to_user_mode();
+	__exit_to_user_mode_validate();
+}
+
 /**
 * exit_to_user_mode - Fixup state when exiting to user mode
 *
@@ -297,7 +329,7 @@ static __always_inline void irqentry_ent
 static __always_inline void irqentry_exit_to_user_mode(struct pt_regs *regs)
 {
 	instrumentation_begin();
-	exit_to_user_mode_prepare(regs);
+	irqentry_exit_to_user_mode_prepare(regs);
 	instrumentation_end();
 	exit_to_user_mode();
 }