From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <56AA45B4.20401@hpe.com>
Date: Thu, 28 Jan 2016 11:45:40 -0500
From: Waiman Long
To: Peter Zijlstra
CC: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Alexander Viro,
    linux-fsdevel@vger.kernel.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch
Subject: Re: [RFC PATCH 1/3] lib/list_batch: A simple list insertion/deletion batching facility
References: <1453824219-51437-1-git-send-email-Waiman.Long@hpe.com>
    <1453824219-51437-2-git-send-email-Waiman.Long@hpe.com>
    <20160127163450.GU6357@twins.programming.kicks-ass.net>
    <56A926FB.80609@hpe.com>
    <20160127205400.GZ6357@twins.programming.kicks-ass.net>
In-Reply-To: <20160127205400.GZ6357@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 01/27/2016 03:54 PM, Peter Zijlstra wrote:
> On Wed, Jan 27, 2016 at 03:22:19PM -0500, Waiman Long wrote:
>
>>>> +	/*
>>>> +	 * Put itself into the list_batch queue
>>>> +	 */
>>>> +	node.next = NULL;
>>>> +	node.entry = entry;
>>>> +	node.cmd = cmd;
>>>> +	node.state = lb_state_waiting;
>>>> +
>>> Here we rely on the release barrier implied by xchg() to ensure the node
>>> initialization is complete before the xchg() publishes the thing.
>>>
>>> But do we also need the acquire part of this barrier? From what I could
>>> tell, the primitive as a whole does not imply any ordering.
>> I think we probably won't need the acquire part, but I don't have a non-x86
>> machine that can really test out the more relaxed versions of the atomic
>> ops. That is why I use the strict versions. We can always relax it later on
>> with additional patches.
> Yeah, I have no hardware either. But at least we should comment the bits
> we do know to rely upon.

Using xchg_release() looks OK to me. As this feature is enabled only on
x86 for this patch, we can make the change, and whoever enables it on
other architectures that have a real release function will have to test
it.

>>>> +	if (!next) {
>>>> +		/*
>>>> +		 * The queue tail should be equal to nptr, so clear it to
>>>> +		 * mark the queue as empty.
>>>> +		 */
>>>> +		if (cmpxchg(&batch->tail, nptr, NULL) != nptr) {
>>>> +			/*
>>>> +			 * Queue not empty, wait until the next pointer is
>>>> +			 * initialized.
>>>> +			 */
>>>> +			while (!(next = READ_ONCE(nptr->next)))
>>>> +				cpu_relax();
>>>> +		}
>>>> +		/* The above cmpxchg acts as a memory barrier */
>>> for what? :-)
>>>
>>> Also, if that cmpxchg() fails, it very much does _not_ act as one.
>>>
>>> I suspect you want smp_store_release() setting the state_done, just as
>>> above, and then use cmpxchg_relaxed().
>> You are right. I did forget that there is no memory barrier guarantee
>> when cmpxchg() fails. However, in that case, the READ_ONCE() and
>> WRITE_ONCE() macros should still provide the necessary ordering, IMO.
> READ/WRITE_ONCE() provide _no_ ordering whatsoever. And the issue here is
> that we must not do any other stores to nptr after the state_done.
>

I thought that if those macros access the same cacheline, the compiler
would not reorder them and the hardware would take care of the proper
ordering. Anyway, I do intend to change the code to use
smp_store_release() for safety.
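Something like the following is what I have in mind (an untested
sketch; I am assuming the done state is named lb_state_done, to match
the lb_state_waiting initializer above):

	if (!next) {
		/*
		 * The tail should be equal to nptr, so clear it to mark
		 * the queue as empty. The cmpxchg itself needs no
		 * ordering; the smp_store_release() below provides it.
		 */
		if (cmpxchg_relaxed(&batch->tail, nptr, NULL) != nptr) {
			/* Queue not empty, wait for next to be set */
			while (!(next = READ_ONCE(nptr->next)))
				cpu_relax();
		}
	}
	/*
	 * Order all prior updates done on behalf of nptr before its
	 * state is set to done, and don't touch nptr afterwards as
	 * its owner may have returned already.
	 */
	smp_store_release(&nptr->state, lb_state_done);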
>> I can certainly
>> change it to use cmpxchg_relaxed() and smp_store_release() instead.
> That seems a safe combination and would still generate the exact same
> code on x86.

Cheers,
Longman
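P.S. For completeness, the enqueue side with xchg_release() would then
look something like this (again an untested sketch; prev here is just
a local holding the old tail returned by the xchg):

	node.next = NULL;
	node.entry = entry;
	node.cmd = cmd;
	node.state = lb_state_waiting;

	/*
	 * The release in xchg_release() orders the node initialization
	 * above before the node is published via batch->tail; the
	 * acquire half of a full xchg() is not needed here.
	 */
	prev = xchg_release(&batch->tail, &node);
	if (prev)
		WRITE_ONCE(prev->next, &node);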