From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754907AbcH2RiS (ORCPT );
	Mon, 29 Aug 2016 13:38:18 -0400
Received: from mx2.suse.de ([195.135.220.15]:33090 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751195AbcH2RiR (ORCPT );
	Mon, 29 Aug 2016 13:38:17 -0400
Date: Mon, 29 Aug 2016 10:38:02 -0700
From: Davidlohr Bueso
To: Manfred Spraul
Cc: benh@kernel.crashing.org, paulmck@linux.vnet.ibm.com,
	Ingo Molnar, Boqun Feng, Peter Zijlstra, Andrew Morton,
	LKML, 1vier1@web.de
Subject: Re: [PATCH 1/4 v4] spinlock: Document memory barrier rules
Message-ID: <20160829173802.GA27002@linux-80c1.suse>
References: <1472477669-27508-1-git-send-email-manfred@colorfullife.com>
 <1472477669-27508-2-git-send-email-manfred@colorfullife.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
In-Reply-To: <1472477669-27508-2-git-send-email-manfred@colorfullife.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 29 Aug 2016, Manfred Spraul wrote:

>Right now, the spinlock machinery tries to guarantee barriers even for
>unorthodox locking cases, which ends up as a constant stream of updates
>as the architectures try to support new unorthodox ideas.
>
>The patch proposes to clarify the rules:
>spin_lock is ACQUIRE, spin_unlock is RELEASE.
>spin_unlock_wait is also ACQUIRE.
>Code that needs further guarantees must use appropriate explicit barriers.
>
>Architectures that can implement some barriers for free can define the
>barriers as NOPs.
>
>As the initial step, the patch converts ipc/sem.c to the new defines:
>- With commit 2c6100227116
>  ("locking/qspinlock: Fix spin_unlock_wait() some more"),
>  (and the commits for the other archs), spin_unlock_wait() is an
>  ACQUIRE.
>  Therefore the smp_rmb() after spin_unlock_wait() can be removed.
>- smp_mb__after_spin_lock() instead of a direct smp_mb().
>  This allows architectures to override it with a less expensive
>  barrier if that is sufficient for their hardware/spinlock
>  implementation.
>
>For overriding, the same approach as for smp_mb__before_spin_lock()
>is used: if smp_mb__after_spin_lock is already defined, it is left
>unchanged.
>
>Signed-off-by: Manfred Spraul
>---
> Documentation/locking/spinlocks.txt |  5 +++++
> include/linux/spinlock.h            | 12 ++++++++++++
> ipc/sem.c                           | 16 +---------------

Preferably this would have been two patches, especially since you also
remove the redundant barrier in complexmode_enter(), which mixes core
spinlock changes with core sysv sem changes. But anyway, this will be
the patch that we _don't_ backport to stable, right?

Reviewed-by: Davidlohr Bueso