From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH v4] lib/dlock-list: Scale dlock_lists_empty()
From: Waiman Long
Organization: Red Hat
To: Andreas Dilger, Jan Kara
Cc: Davidlohr Bueso, Alexander Viro, Jan Kara, Jeff Layton,
	"J. Bruce Fields", Tejun Heo, Christoph Lameter,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar, Peter Zijlstra, Andi Kleen, Dave Chinner, Boqun Feng
Date: Tue, 7 Nov 2017 13:57:10 -0500
Message-ID: <4486fb94-a9fc-5bee-5241-e1e7558eeaa7@redhat.com>
References: <1509475860-16139-1-git-send-email-longman@redhat.com>
	<1509475860-16139-2-git-send-email-longman@redhat.com>
	<20171102170431.oq3i5mxtjcg53uot@linux-n805>
	<81bb3365-63f3-fea8-d238-e3880a4c8033@redhat.com>
	<20171103133420.pngmrsfmtimataz4@linux-n805>
	<20171103142254.d55bu2n44xe4aruf@linux-n805>
	<20171106184708.kmwfcchjwjzucuja@linux-n805>
	<20171107115921.GC11391@quack2.suse.cz>
X-Mailing-List: linux-kernel@vger.kernel.org
On 11/07/2017 12:59 PM, Andreas Dilger wrote:
> On Nov 7, 2017, at 4:59 AM, Jan Kara wrote:
>> On Mon 06-11-17 10:47:08, Davidlohr Bueso wrote:
>>> +	/*
>>> +	 * Serialize dlist->used_lists such that a 0->1 transition is not
>>> +	 * missed by another thread checking if any of the dlock lists are
>>> +	 * used.
>>> +	 *
>>> +	 * CPU0                             CPU1
>>> +	 * dlock_list_add()                 dlock_lists_empty()
>>> +	 * [S] atomic_inc(used_lists);
>>> +	 *     smp_mb__after_atomic();
>>> +	 *                                  smp_mb__before_atomic();
>>> +	 *                                  [L] atomic_read(used_lists)
>>> +	 * list_add()
>>> +	 */
>>> +	smp_mb__before_atomic();
>>> +	return !atomic_read(&dlist->used_lists);
>
> Just a general kernel programming question here - I thought the whole point
> of atomics is that they are, well, atomic across all CPUs so there is no
> need for a memory barrier?  If there is a need for a memory barrier for
> each atomic access (assuming it isn't accessed under another lock, which
> would make the use of atomic types pointless, IMHO) then I'd think there is
> a lot of code in the kernel that isn't doing this properly.
>
> What am I missing here?

Atomic updates and memory barriers are two different things. An atomic
update means that other CPUs see either the value before the update or
the value after it; they never see anything in between. For a counter,
that means we won't miss any counts.

However, not all atomic operations give an ordering guarantee.
atomic_read() and atomic_inc() are examples that provide no memory
ordering guarantee at all. See Documentation/memory-barriers.txt for
more information about this.

A CPU can perform atomic operations 1 and 2 in program order, yet other
CPUs may observe operation 2 before operation 1. A memory barrier can be
used here to guarantee that other CPUs see the memory updates in a
particular order.

Hope this helps.

Longman