From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933838AbcBZVO1 (ORCPT );
	Fri, 26 Feb 2016 16:14:27 -0500
Received: from mail-lf0-f47.google.com ([209.85.215.47]:35198 "EHLO
	mail-lf0-f47.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1755242AbcBZVOZ (ORCPT );
	Fri, 26 Feb 2016 16:14:25 -0500
To: linux-kernel@vger.kernel.org
From: Sergey Fedorov
Subject: Documentation/memory-barriers.txt: How can READ_ONCE() and
 WRITE_ONCE() provide cache coherence?
Cc: "Paul E. McKenney"
Message-ID: <56D0C02D.6000905@gmail.com>
Date: Sat, 27 Feb 2016 00:14:21 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
 Thunderbird/38.5.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

I just can't understand how this kind of compiler barrier macro can
provide any form of cache coherence. Such a compiler barrier is
certainly necessary to "reliably" access a variable from multiple CPUs,
but why is it stated that these macros *provide* cache coherence?

From Documentation/memory-barriers.txt:

> The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
> optimizations that, while perfectly safe in single-threaded code, can
> be fatal in concurrent code. Here are some examples of these sorts
> of optimizations:
>
> (*) The compiler is within its rights to reorder loads and stores
>     to the same variable, and in some cases, the CPU is within its
>     rights to reorder loads to the same variable. This means that
>     the following code:
>
>         a[0] = x;
>         a[1] = x;
>
>     Might result in an older value of x stored in a[1] than in a[0].
>     Prevent both the compiler and the CPU from doing this as follows:
>
>         a[0] = READ_ONCE(x);
>         a[1] = READ_ONCE(x);
>
> In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
> accesses from multiple CPUs to a single variable.

Thanks,
Sergey