Date: Wed, 30 Oct 2024 08:33:39 -0500
From: Segher Boessenkool
To: "Nysal Jan K.A."
Cc: Andrew Morton, Michael Ellerman, Mathieu Desnoyers, Stephen Rothwell,
    Peter Zijlstra, linuxppc-dev@lists.ozlabs.org, Nathan Chancellor,
    Nick Desaulniers, Bill Wendling, Justin Stitt, Vlastimil Babka,
    "Liam R. Howlett", Mark Brown, Michal Hocko, Kent Overstreet,
    Rick Edgecombe, linux-kernel@vger.kernel.org, llvm@lists.linux.dev
Subject: Re: [PATCH v2] sched/membarrier: Fix redundant load of membarrier_state
Message-ID: <20241030133339.GQ29862@gate.crashing.org>
References: <20241007053936.833392-1-nysal@linux.ibm.com>
 <20241029055133.121418-1-nysal@linux.ibm.com>
In-Reply-To: <20241029055133.121418-1-nysal@linux.ibm.com>

Hi!

On Tue, Oct 29, 2024 at 11:21:28AM +0530, Nysal Jan K.A. wrote:
> On architectures where ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> is not selected, sync_core_before_usermode() is a no-op.
> In membarrier_mm_sync_core_before_usermode() the compiler does not
> eliminate the redundant branches and the load of mm->membarrier_state
> in this case, as the atomic_read() cannot be optimized away.
>
> Here's a snippet of the code generated for finish_task_switch() on powerpc
> prior to this change:
>
> 1b786c:   ld      r26,2624(r30)   # mm = rq->prev_mm;
> .......
> 1b78c8:   cmpdi   cr7,r26,0
> 1b78cc:   beq     cr7,1b78e4
> 1b78d0:   ld      r9,2312(r13)    # current
> 1b78d4:   ld      r9,1888(r9)     # current->mm
> 1b78d8:   cmpd    cr7,r26,r9
> 1b78dc:   beq     cr7,1b7a70
> 1b78e0:   hwsync
> 1b78e4:   cmplwi  cr7,r27,128
> .......
> 1b7a70:   lwz     r9,176(r26)     # atomic_read(&mm->membarrier_state)
> 1b7a74:   b       1b78e0
>
> This was found while analyzing "perf c2c" reports on kernels prior
> to commit c1753fd02a00 ("mm: move mm_count into its own cache line"),
> where mm_count was false-sharing with membarrier_state.
>
> There is a minor improvement in the size of finish_task_switch().
> The following are bloat-o-meter results for ppc64le:
>
> GCC 7.5.0
> ---------
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-32 (-32)
> Function                     old     new   delta
> finish_task_switch           884     852     -32
>
> GCC 12.2.1
> ----------
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-32 (-32)
> Function                     old     new   delta
> finish_task_switch.isra      852     820     -32
>
> LLVM 17.0.6
> -----------
> add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-36 (-36)
> Function                     old     new   delta
> rt_mutex_schedule            120     104     -16
> finish_task_switch           792     772     -20
>
> Results on aarch64:
>
> GCC 14.1.1
> ----------
> add/remove: 0/2 grow/shrink: 1/1 up/down: 4/-60 (-56)
> Function                     old     new   delta
> get_nohz_timer_target        352     356      +4
> e843419@0b02_0000d7e7_408      8       -      -8
> e843419@01bb_000021d2_868      8       -      -8
> finish_task_switch.isra      592     548     -44
>
> Signed-off-by: Nysal Jan K.A.
> ---
> V1 -> V2:
>  - Add results for aarch64
>  - Add a comment describing the changes
> ---
>  include/linux/sched/mm.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 928a626725e6..b13474825130 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -531,6 +531,13 @@ enum {
>
>  static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
>  {
> +	/*
> +	 * The atomic_read() below prevents CSE. The following should
> +	 * help the compiler generate more efficient code on architectures
> +	 * where sync_core_before_usermode() is a no-op.
> +	 */
> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE))
> +		return;
>  	if (current->mm != mm)
>  		return;
>  	if (likely(!(atomic_read(&mm->membarrier_state) &

I'd say "CSE and similar transformations", but yeah, in this case CSE.
The point is that any access to a volatile object is a necessary
side-effect, so it has to be performed on the actual machine just as on
the abstract machine (on all the same paths, and as often).

It might be nice to have an atomic_read (for PowerPC) that can generate
better machine code. Not a trivial task though!

Reviewed-by: Segher Boessenkool


Segher
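
P.S.: A minimal sketch (mine, not from the patch) of the volatile point
above. A compiler may merge the two plain loads in plain_twice() into
one via CSE, but it must emit both loads in volatile_twice(), because
every access to a volatile object is an observable side-effect.
atomic_read() in the kernel boils down to such a volatile access (via
READ_ONCE()), which is why it cannot be optimized away.

	/* The compiler is free to CSE the two loads into one. */
	int plain_twice(int *p)
	{
		return *p + *p;
	}

	/* Both loads must be performed, on every call. */
	int volatile_twice(volatile int *p)
	{
		return *p + *p;
	}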