From: Sergey Fedorov
Date: Wed, 18 May 2016 16:59:18 +0300
Message-ID: <573C7536.7080104@gmail.com>
In-Reply-To: <20160517231809.GA17517@flamenco>
Subject: Re: [Qemu-devel] [PATCH v5 07/18] qemu-thread: add simple test-and-set spinlock
To: "Emilio G. Cota"
Cc: QEMU Developers, MTTCG Devel, Alex Bennée, Paolo Bonzini,
    Peter Crosthwaite, Richard Henderson

On 18/05/16 02:18, Emilio G. Cota wrote:
> On Tue, May 17, 2016 at 23:35:57 +0300, Sergey Fedorov wrote:
>> On 17/05/16 22:38, Emilio G. Cota wrote:
>>> On Tue, May 17, 2016 at 20:13:24 +0300, Sergey Fedorov wrote:
>>>> On 14/05/16 06:34, Emilio G. Cota wrote:
>> (snip)
>>>>> +        while (atomic_read(&spin->value)) {
>>>>> +            cpu_relax();
>>>>> +        }
>>>>> +    }
>>>> Looks like relaxed atomic access can be subject to various
>>>> optimisations according to
>>>> https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync#Relaxed.
>>> The important thing here is that the read actually happens
>>> on every iteration; this is achieved with atomic_read().
>>> Barriers etc. do not matter here because once we exit
>>> the loop, we try to acquire the lock -- and if we succeed,
>>> we then emit the right barrier.
>> I just can't find where it is stated that an expression like
>> "__atomic_load(ptr, &_val, __ATOMIC_RELAXED)" has _compiler_ barrier
>> or volatile access semantics. Hopefully, cpu_relax() serves as a
>> compiler barrier. If we rely on that, we'd better put a comment
>> about it.
> I treat atomic_read/set as ACCESS_ONCE [1], i.e. a volatile cast.
> From docs/atomics.txt:
>
>     COMPARISON WITH LINUX KERNEL MEMORY BARRIERS
>     ============================================
>     [...]
>     - atomic_read and atomic_set in Linux give no guarantee at all;
>       atomic_read and atomic_set in QEMU include a compiler barrier
>       (similar to the ACCESS_ONCE macro in Linux).
>
> [1] https://lwn.net/Articles/508991/

But actually (cf. include/qemu/atomic.h) we can have:

#define atomic_read(ptr)                                  \
    ({                                                    \
    QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *));     \
    typeof(*ptr) _val;                                    \
     __atomic_load(ptr, &_val, __ATOMIC_RELAXED);         \
    _val;                                                 \
    })

I can't find anywhere whether this __atomic_load() has volatile or
compiler barrier semantics...

Kind regards,
Sergey
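
P.S. To make the distinction concrete, here is a minimal sketch
(illustrative names only, not the patch's or QEMU's code) of three
flavours of the spin-wait loop: a plain read, which the compiler may
hoist out of the loop; an ACCESS_ONCE-style volatile cast, which forces
a re-read on every iteration; and a relaxed __atomic_load_n(), whose
re-read guarantee is exactly the open question above.

/* Sketch only: names below are made up for illustration. */

/* Plain read: nothing stops the compiler from hoisting the load out of
 * the loop and spinning forever on a stale register copy. */
static void spin_wait_plain(int *lockval)
{
    while (*lockval) {
        /* may be compiled into an endless loop on a cached value */
    }
}

/* ACCESS_ONCE-style volatile cast: every iteration re-reads memory,
 * which is the property the spin-wait loop actually relies on. */
#define READ_ONCE_SKETCH(x) (*(volatile typeof(x) *)&(x))

static void spin_wait_volatile(int *lockval)
{
    while (READ_ONCE_SKETCH(*lockval)) {
        /* cpu_relax() would go here in real code */
    }
}

/* Relaxed __atomic_load_n(): gives no ordering, and whether it also
 * guarantees the access is not hoisted or fused is the open question
 * in this thread; in practice current GCC does not seem to optimise
 * __atomic loads across loop iterations, but the documentation does
 * not promise it. */
static void spin_wait_relaxed(int *lockval)
{
    while (__atomic_load_n(lockval, __ATOMIC_RELAXED)) {
        /* cpu_relax() would go here in real code */
    }
}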
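
And, for completeness, a minimal test-and-test-and-set spinlock sketch
on top of the GCC __atomic built-ins (again just an illustration, not
the qemu-thread code from the patch), showing the ordering argument
made above: the inner wait loop only needs the value to be re-read each
iteration, while the acquire/release ordering comes from the exchange
that takes the lock and the store that releases it.

typedef struct {
    int value;                     /* 0 = unlocked, 1 = locked */
} spinlock_sketch;

static inline void spin_lock_sketch(spinlock_sketch *spin)
{
    /* The exchange that actually takes the lock provides the acquire
     * ordering. */
    while (__atomic_exchange_n(&spin->value, 1, __ATOMIC_ACQUIRE)) {
        /* Contended: spin with relaxed reads until the lock looks
         * free, then retry the exchange.  Ordering does not matter
         * here; only the fact that the value is re-read on every
         * iteration does. */
        while (__atomic_load_n(&spin->value, __ATOMIC_RELAXED)) {
            __asm__ __volatile__("" ::: "memory"); /* stand-in for cpu_relax() */
        }
    }
}

static inline void spin_unlock_sketch(spinlock_sketch *spin)
{
    /* The release store pairs with the acquire exchange in the locker. */
    __atomic_store_n(&spin->value, 0, __ATOMIC_RELEASE);
}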