From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Apr 2026 17:03:18 +0200
From: Peter Zijlstra
To: Thomas Gleixner
Cc: Mathias Stearn, Dmitry Vyukov, Jinjie Ruan, linux-man@vger.kernel.org,
 Mark Rutland, Mathieu Desnoyers, Catalin Marinas, Will Deacon, Boqun Feng,
 "Paul E. McKenney", Chris Kennelly, regressions@lists.linux.dev,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 Ingo Molnar, Blake Oler
Subject: Re: [REGRESSION] rseq: refactoring in v6.19 broke everyone on arm64 and tcmalloc everywhere
Message-ID: <20260424150318.GE641209@noisy.programming.kicks-ass.net>
References: <87wlxy22x7.ffs@tglx> <87ik9i0xlj.ffs@tglx> <87a4ut1njh.ffs@tglx>
 <87v7dgzbo7.ffs@tglx>
In-Reply-To: <87v7dgzbo7.ffs@tglx>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On Fri, Apr 24, 2026 at 04:16:08PM +0200, Thomas Gleixner wrote:
> On Fri, Apr 24 2026 at 10:32, Mathias Stearn wrote:
> > On Fri, Apr 24, 2026 at 9:57 AM Dmitry Vyukov wrote:
> >> The only problem is with membarrier (it used to force write to
> >> __rseq_abi.cpu_id_start for all threads, but now it does not).
> >> Otherwise the caching scheme works.
> >
> > I almost wrote a message last night saying that we didn't need
> > cpu_id_start invalidation on preemption. However, I remembered that
> > the Grow() function[1] does a load outside of a critical section,
> > then stores a derived value inside the critical section, guarded only
> > by the cpu_id_start invalidation check in StoreCurrentCpu[2]. It
> > really should be doing a compare against the original value inside
> > the critical section (or just do the whole thing inside), but it
> > doesn't.
> > I haven't reasoned end-to-end through this fully to prove corruption
> > is possible, but I suspect that it is, if another thread preempts it
> > on the same CPU between the loads and the store and updates the
> > header before the original thread resumes and writes its originally
> > intended header value. Ditto for signals, which sometimes allocate
> > even though they shouldn't.
> >
> > I was really hoping that the "redundant" cpu_id_start writes would
> > only be needed on membarrier_rseq IPIs, where it really is
> > pay-for-what-you-use functionality,
>
> That's fine and can be solved without adding this sequence overhead into
> the scheduler hotpath.

Something like so? (probably needs help for !GENERIC bits)

---
diff --git a/include/asm-generic/thread_info_tif.h b/include/asm-generic/thread_info_tif.h
index 528e6fc7efe9..1d786003e42a 100644
--- a/include/asm-generic/thread_info_tif.h
+++ b/include/asm-generic/thread_info_tif.h
@@ -48,7 +48,10 @@
 #define TIF_RSEQ		11	// Run RSEQ fast path
 #define _TIF_RSEQ		BIT(TIF_RSEQ)
 
-#define TIF_HRTIMER_REARM	12	// re-arm the timer
+#define TIF_RSEQ_FORCE_RESTART	12	// Reset RSEQ-CS from membarrier
+#define _TIF_RSEQ_FORCE_RESTART	BIT(TIF_RSEQ_FORCE_RESTART)
+
+#define TIF_HRTIMER_REARM	13	// re-arm the timer
 #define _TIF_HRTIMER_REARM	BIT(TIF_HRTIMER_REARM)
 
 #endif /* _ASM_GENERIC_THREAD_INFO_TIF_H_ */
diff --git a/include/linux/rseq.h b/include/linux/rseq.h
index b9d62fc2140d..2cbee6d41198 100644
--- a/include/linux/rseq.h
+++ b/include/linux/rseq.h
@@ -158,6 +158,8 @@ static inline unsigned int rseq_alloc_align(void)
 	return 1U << get_count_order(offsetof(struct rseq, end));
 }
 
+extern void rseq_prepare_membarrier(struct mm_struct *mm);
+
 #else /* CONFIG_RSEQ */
 static inline void rseq_handle_slowpath(struct pt_regs *regs) { }
 static inline void rseq_signal_deliver(struct ksignal *ksig, struct pt_regs *regs) { }
@@ -167,6 +169,7 @@ static inline void rseq_force_update(void) { }
 static inline void rseq_virt_userspace_exit(void) { }
 static inline void rseq_fork(struct task_struct *t, u64 clone_flags) { }
 static inline void rseq_execve(struct task_struct *t) { }
+static inline void rseq_prepare_membarrier(struct mm_struct *mm) { }
 #endif /* !CONFIG_RSEQ */
 
 #ifdef CONFIG_DEBUG_RSEQ
diff --git a/include/linux/rseq_entry.h b/include/linux/rseq_entry.h
index f11ebd34f8b9..3dfaca776971 100644
--- a/include/linux/rseq_entry.h
+++ b/include/linux/rseq_entry.h
@@ -686,7 +686,12 @@ static __always_inline bool __rseq_exit_to_user_mode_restart(struct pt_regs *reg
 #ifdef CONFIG_HAVE_GENERIC_TIF_BITS
 static __always_inline bool test_tif_rseq(unsigned long ti_work)
 {
-	return ti_work & _TIF_RSEQ;
+	return ti_work & (_TIF_RSEQ | _TIF_RSEQ_FORCE_RESTART);
+}
+
+static __always_inline void clear_tif_rseq_force_restart(void)
+{
+	clear_thread_flag(TIF_RSEQ_FORCE_RESTART);
 }
 
 static __always_inline void clear_tif_rseq(void)
@@ -696,6 +701,7 @@ static __always_inline void clear_tif_rseq(void)
 }
 #else
 static __always_inline bool test_tif_rseq(unsigned long ti_work) { return true; }
+static __always_inline void clear_tif_rseq_force_restart(void) { }
 static __always_inline void clear_tif_rseq(void) { }
 #endif
 
@@ -703,6 +709,11 @@ static __always_inline bool rseq_exit_to_user_mode_restart(struct pt_regs *regs,
 							   unsigned long ti_work)
 {
 	if (unlikely(test_tif_rseq(ti_work))) {
+		if (unlikely(ti_work & _TIF_RSEQ_FORCE_RESTART)) {
+			current->rseq.event.sched_switch = true;
+			current->rseq.event.ids_changed = true;
+			clear_tif_rseq_force_restart();
+		}
 		if (unlikely(__rseq_exit_to_user_mode_restart(regs))) {
 			current->rseq.event.slowpath = true;
 			set_tsk_thread_flag(current, TIF_NOTIFY_RESUME);
diff --git a/kernel/rseq.c b/kernel/rseq.c
index 38d3ef540760..9adc7f63adf5 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -255,6 +255,19 @@ static bool rseq_handle_cs(struct task_struct *t, struct pt_regs *regs)
 	return false;
 }
 
+void rseq_prepare_membarrier(struct mm_struct *mm)
+{
+	struct task_struct *t;
+
+	guard(mutex)(&mm->mm_cid.mutex);
+
+	hlist_for_each_entry(t, &mm->mm_cid.user_list, mm_cid.node) {
+		if (t == current)
+			continue;
+		set_tsk_thread_flag(t, TIF_RSEQ_FORCE_RESTART);
+	}
+}
+
 static void rseq_slowpath_update_usr(struct pt_regs *regs)
 {
 	/*
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 623445603725..696988bb991b 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -334,6 +334,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
 		    MEMBARRIER_STATE_PRIVATE_EXPEDITED_RSEQ_READY))
 			return -EPERM;
 		ipi_func = ipi_rseq;
+		rseq_prepare_membarrier(mm);
 	} else {
 		WARN_ON_ONCE(flags);
 		if (!(atomic_read(&mm->membarrier_state) &