From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org,
	rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com,
	fweisbec@gmail.com, oleg@redhat.com, bobby.prani@gmail.com,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 20/88] atomics: Add header comment so spin_unlock_wait()
Date: Thu, 25 May 2017 14:58:53 -0700
X-Mailer: git-send-email 2.5.2
In-Reply-To: <20170525215934.GA11578@linux.vnet.ibm.com>
References: <20170525215934.GA11578@linux.vnet.ibm.com>
Message-Id: <1495749601-21574-20-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

There is material describing the ordering guarantees provided by
spin_unlock_wait(), but it is not necessarily easy to find. This commit
therefore adds a docbook header comment to this function informally
describing its semantics.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
---
 include/linux/spinlock.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 59248dcc6ef3..d9510e8522d4 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -369,6 +369,26 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_unlock_wait - Interpose between successive critical sections
+ * @lock: the spinlock whose critical sections are to be interposed.
+ *
+ * Semantically this is equivalent to a spin_lock() immediately
+ * followed by a spin_unlock(). However, most architectures have
+ * more efficient implementations in which the spin_unlock_wait()
+ * cannot block concurrent lock acquisition, and in some cases
+ * where spin_unlock_wait() does not write to the lock variable.
+ * Nevertheless, spin_unlock_wait() can have high overhead, so if
+ * you feel the need to use it, please check to see if there is
+ * a better way to get your job done.
+ *
+ * The ordering guarantees provided by spin_unlock_wait() are:
+ *
+ * 1. All accesses preceding the spin_unlock_wait() happen before
+ *    any accesses in later critical sections for this same lock.
+ * 2. All accesses following the spin_unlock_wait() happen after
+ *    any accesses in earlier critical sections for this same lock.
+ */
 static __always_inline void spin_unlock_wait(spinlock_t *lock)
 {
 	raw_spin_unlock_wait(&lock->rlock);
-- 
2.5.2
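[Editor's illustration, not part of the patch.] For readers who want a concrete picture of the two guarantees documented above, here is a minimal sketch of the classic use case, assuming hypothetical names (struct foo, foo_use(), foo_shutdown()): mark an object as dying, wait for any in-flight critical section to drain, then tear the object down.

#include <linux/spinlock.h>
#include <linux/types.h>

struct foo {
	spinlock_t lock;
	bool dead;
	int data;
};

/* Ordinary critical section: use ->data unless the object is dying. */
static bool foo_use(struct foo *f)
{
	bool used = false;

	spin_lock(&f->lock);
	if (!f->dead) {
		f->data++;
		used = true;
	}
	spin_unlock(&f->lock);
	return used;
}

/* Teardown path: mark the object dead, then wait out any users. */
static void foo_shutdown(struct foo *f)
{
	f->dead = true;

	/*
	 * Guarantee 1 above: the store to ->dead happens before any
	 * accesses in critical sections that begin later, so later
	 * foo_use() calls will skip ->data.
	 * Guarantee 2: accesses made by critical sections that
	 * completed earlier happen before the code below, so it is
	 * safe to tear ->data down here.
	 */
	spin_unlock_wait(&f->lock);

	f->data = 0;	/* stand-in for the real teardown */
}

A spin_lock(&f->lock) immediately followed by spin_unlock(&f->lock) would give the same guarantees, which is exactly the semantic equivalence the new comment states; spin_unlock_wait() simply avoids contending for the lock on architectures that can do better.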