From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 10 Aug 2017 13:49:34 -0700
From: "Paul E. McKenney"
To: Peter Zijlstra
Cc: Boqun Feng, Waiman Long, Ingo Molnar, linux-kernel@vger.kernel.org,
	Pan Xinhui, Andrea Parri, Will Deacon
Subject: Re: [RESEND PATCH v5] locking/pvqspinlock: Relax cmpxchg's to improve performance on some archs
Reply-To: paulmck@linux.vnet.ibm.com
References: <1495633108-12818-1-git-send-email-longman@redhat.com>
 <20170809150603.6z43zkxnz3hew3jb@hirez.programming.kicks-ass.net>
 <20170809151533.ipzeu7vmwi5ttcab@hirez.programming.kicks-ass.net>
 <20170810081213.wtd7hgunnnekvm56@tardis>
 <20170810091317.hv6smfz5fxgolu2n@hirez.programming.kicks-ass.net>
In-Reply-To: <20170810091317.hv6smfz5fxgolu2n@hirez.programming.kicks-ass.net>
Message-Id: <20170810204934.GS3730@linux.vnet.ibm.com>
On Thu, Aug 10, 2017 at 11:13:17AM +0200, Peter Zijlstra wrote:
> On Thu, Aug 10, 2017 at 04:12:13PM +0800, Boqun Feng wrote:
>
> > > Or is the reason this doesn't work on PPC that it's RCpc?
>
> So that :-)
>
> > Here is an example of why PPC needs a sync() before the cmpxchg():
> >
> > https://marc.info/?l=linux-kernel&m=144485396224519&w=2
> >
> > and Paul McKenney's detailed explanation about why this could happen:
> >
> > https://marc.info/?l=linux-kernel&m=144485909826241&w=2
> >
> > (Somehow, I feel like he was answering a similar question to the one
> > you ask here ;-))
>
> Yes, and I had vague memories of having gone over this before, but
> couldn't quickly find things. Thanks!
>
> > And I think aarch64 doesn't have a problem here because it is "(other)
> > multi-copy atomic". Will?
>
> Right, it's the RCpc vs RCsc thing. The ARM64 release is, as you say,
> multi-copy atomic, whereas the PPC lwsync is not.
>
> This still leaves us with the situation that we need an smp_mb() between
> smp_store_release() and a possibly failing cmpxchg() if we want to
> guarantee that the cmpxchg()'s load comes after the store-release.

For whatever it is worth, this is why C11 allows specifying one
memory-order strength for the success case and another for the failure
case. But it is not immediately clear that we need another level of
combinatorial API explosion...

							Thanx, Paul