From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Peter Zijlstra (Intel)", Linus Torvalds, Thomas Gleixner, Ingo Molnar, Jari Ruusu
Subject: [PATCH 4.9 149/222] x86/atomic: Fix smp_mb__{before,after}_atomic()
Date: Fri, 22 Nov 2019 11:28:09 +0100
Message-Id: <20191122100913.528035379@linuxfoundation.org>
In-Reply-To: <20191122100830.874290814@linuxfoundation.org>
References: <20191122100830.874290814@linuxfoundation.org>

From: Peter Zijlstra

commit 69d927bba39517d0980462efc051875b7f4db185 upstream.

Recent probing at the Linux Kernel Memory Model uncovered a 'surprise'.
Strongly ordered architectures where the atomic RmW primitive implies
full memory ordering and smp_mb__{before,after}_atomic() are a simple
barrier() (such as x86) fail for:

	*x = 1;
	atomic_inc(u);
	smp_mb__after_atomic();
	r0 = *y;

Because, while the atomic_inc() implies memory order, it (surprisingly)
does not provide a compiler barrier.
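For reference, the pre-patch 4.9 atomic_inc() shown in the diff below
constrains only v->counter and names no "memory" clobber, so the
compiler assumes no other memory is read or written by the asm:

	static __always_inline void atomic_inc(atomic_t *v)
	{
		asm volatile(LOCK_PREFIX "incl %0"
			     : "+m" (v->counter));
	}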
This then allows the compiler to re-order like so:

	atomic_inc(u);
	*x = 1;
	smp_mb__after_atomic();
	r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

	atomic_inc(u);
	r0 = *y;
	*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

NOTE: atomic_{or,and,xor} and the bitops already had the compiler
barrier.

Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Signed-off-by: Ingo Molnar
Signed-off-by: Jari Ruusu
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/atomic.h      |    8 ++++----
 arch/x86/include/asm/atomic64_64.h |    8 ++++----
 arch/x86/include/asm/barrier.h     |    4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -49,7 +49,7 @@ static __always_inline void atomic_add(i
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
 		     : "+m" (v->counter)
-		     : "ir" (i));
+		     : "ir" (i) : "memory");
 }
 
 /**
@@ -63,7 +63,7 @@ static __always_inline void atomic_sub(i
 {
 	asm volatile(LOCK_PREFIX "subl %1,%0"
 		     : "+m" (v->counter)
-		     : "ir" (i));
+		     : "ir" (i) : "memory");
 }
 
 /**
@@ -89,7 +89,7 @@ static __always_inline bool atomic_sub_a
 static __always_inline void atomic_inc(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "incl %0"
-		     : "+m" (v->counter));
+		     : "+m" (v->counter) :: "memory");
 }
 
 /**
@@ -101,7 +101,7 @@ static __always_inline void atomic_inc(a
 static __always_inline void atomic_dec(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "decl %0"
-		     : "+m" (v->counter));
+		     : "+m" (v->counter) :: "memory");
 }
 
 /**
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -44,7 +44,7 @@ static __always_inline void atomic64_add
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
-		     : "er" (i), "m" (v->counter));
+		     : "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -58,7 +58,7 @@ static inline void atomic64_sub(long i,
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
-		     : "er" (i), "m" (v->counter));
+		     : "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -85,7 +85,7 @@ static __always_inline void atomic64_inc
 {
 	asm volatile(LOCK_PREFIX "incq %0"
 		     : "=m" (v->counter)
-		     : "m" (v->counter));
+		     : "m" (v->counter) : "memory");
 }
 
 /**
@@ -98,7 +98,7 @@ static __always_inline void atomic64_dec
 {
 	asm volatile(LOCK_PREFIX "decq %0"
 		     : "=m" (v->counter)
-		     : "m" (v->counter));
+		     : "m" (v->counter) : "memory");
 }
 
 /**
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -105,8 +105,8 @@ do { \
 #endif
 
 /* Atomic operations are already serializing on x86 */
-#define __smp_mb__before_atomic()	barrier()
-#define __smp_mb__after_atomic()	barrier()
+#define __smp_mb__before_atomic()	do { } while (0)
+#define __smp_mb__after_atomic()	do { } while (0)
 
 #include <asm-generic/barrier.h>
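As a standalone illustration of the fixed pattern (an editor's sketch,
not part of the patch): a minimal user-space analogue of the patched
atomic_inc(), assuming an x86-64 target and GCC-style inline asm; the
names atomic_t and atomic_inc_fixed below are local to this example:

	#include <stdio.h>

	typedef struct { volatile int counter; } atomic_t;

	static inline void atomic_inc_fixed(atomic_t *v)
	{
		/* "lock incl" provides the SMP-atomic RmW; the "memory"
		 * clobber (the point of this patch) also makes the asm a
		 * compiler barrier, so a prior store such as "*x = 1"
		 * cannot be sunk below the increment. */
		asm volatile("lock incl %0"
			     : "+m" (v->counter)
			     :
			     : "memory");
	}

	int main(void)
	{
		atomic_t u = { 0 };
		int x = 0;

		x = 1;			/* may not be compiler-reordered... */
		atomic_inc_fixed(&u);	/* ...below this increment */

		printf("u=%d x=%d\n", u.counter, x);
		return 0;
	}

This is also why the barrier.h hunk is safe: once every atomic RmW
carries its own "memory" clobber, __smp_mb__{before,after}_atomic() no
longer need to supply barrier() themselves and can become empty
statements on x86.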