From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nick Piggin
Subject: Re: [patch 10/10] fs: brlock vfsmount_lock
Date: Fri, 20 Aug 2010 20:09:48 +1000
Message-ID: <20100820100948.GD12105@amd>
References: <20100817183729.613117146@kernel.dk>
	<20100817184122.344010346@kernel.dk>
	<87eidwkocs.fsf@basil.nowhere.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Nick Piggin , Al Viro , linux-fsdevel@vger.kernel.org
To: Andi Kleen
Return-path:
Received: from ipmail06.adl2.internode.on.net ([150.101.137.129]:47634 "EHLO
	ipmail06.adl2.internode.on.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751906Ab0HTKJw (ORCPT );
	Fri, 20 Aug 2010 06:09:52 -0400
Content-Disposition: inline
In-Reply-To: <87eidwkocs.fsf@basil.nowhere.org>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On Wed, Aug 18, 2010 at 04:05:39PM +0200, Andi Kleen wrote:
> Nick Piggin writes:
>
> BTW one way to make the slow path faster would be to start sharing
> per cpu locks inside a core on SMT at least. The same cores have the same
> caches and sharing cache lines is free. That would cut it in half
> on a 2x HT system.

Yes, it's possible. The brlock code is encapsulated, so you could
experiment with that.

One problem is that the vfsmount lock gets held for read for a relatively
long time in the store-free path walk patches, so you could end up with
multiple threads contending on the shared lock.

>
> > -
> >  static int event;
> >  static DEFINE_IDA(mnt_id_ida);
> >  static DEFINE_IDA(mnt_group_ida);
> > +static DEFINE_SPINLOCK(mnt_id_lock);
>
> Can you add a scope comment to that lock?

It protects mnt_id_ida; I should have commented that explicitly. I'll
put a patch to do that at the head of the next queue I submit.

Thanks for reviewing.

>
> > @@ -623,39 +653,43 @@ static inline void __mntput(struct vfsmo
> >  void mntput_no_expire(struct vfsmount *mnt)
> >  {
> >  repeat:
> > -	if (atomic_dec_and_lock(&mnt->mnt_count, &vfsmount_lock)) {
> > -		if (likely(!mnt->mnt_pinned)) {
> > -			spin_unlock(&vfsmount_lock);
> > -			__mntput(mnt);
> > -			return;
> > -		}
> > -		atomic_add(mnt->mnt_pinned + 1, &mnt->mnt_count);
> > -		mnt->mnt_pinned = 0;
> > -		spin_unlock(&vfsmount_lock);
> > -		acct_auto_close_mnt(mnt);
> > -		goto repeat;
> > +	if (atomic_add_unless(&mnt->mnt_count, -1, 1))
> > +		return;
>
> Hmm that's a unrelated change?

It's because we don't have atomic_dec_and_br_lock()...

>
> The rest looks all good and quite straight forward
>
> Reviewed-by: Andi Kleen

Thanks,
Nick
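
To make the SMT idea above a bit more concrete, here is a rough, untested
sketch of what one-lock-per-core sharing could look like. This is not the
actual brlock code; the names (vfsmount_percpu_lock, vfsmount_lock_slot)
are made up for illustration, and the write side would additionally have
to be careful to take each shared lock exactly once:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/topology.h>
#include <linux/smp.h>

/*
 * Illustration only: one spinlock per physical core rather than one per
 * CPU, so SMT siblings share a lock cache line they effectively share
 * anyway.  The per-cpu locks would still need spin_lock_init() at boot,
 * which is omitted here.
 */
static DEFINE_PER_CPU(spinlock_t, vfsmount_percpu_lock);

static inline spinlock_t *vfsmount_lock_slot(int cpu)
{
	/*
	 * Pick one representative CPU among this core's SMT siblings
	 * (topology_sibling_cpumask() in newer kernels).
	 */
	int owner = cpumask_first(topology_thread_cpumask(cpu));

	return &per_cpu(vfsmount_percpu_lock, owner);
}

static inline void vfsmount_read_lock(void)
{
	spin_lock(vfsmount_lock_slot(get_cpu()));
}

static inline void vfsmount_read_unlock(void)
{
	spin_unlock(vfsmount_lock_slot(smp_processor_id()));
	put_cpu();
}

The downside is the one noted above: with the store-free path walk patches
holding the read side for a long time, two siblings walking paths at the
same time would serialise on the shared lock, where today each spins only
on its own per-CPU lock.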
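
For the record, the scope comment I have in mind for mnt_id_lock is just a
one-liner next to the definition, something like (exact wording to be
settled in the follow-up patch):

static DEFINE_IDA(mnt_id_ida);
static DEFINE_IDA(mnt_group_ida);
static DEFINE_SPINLOCK(mnt_id_lock);	/* protects mnt_id_ida */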
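
To expand on the atomic_dec_and_br_lock() point: atomic_dec_and_lock()
combines "decrement, and take the spinlock if the count reached zero" in a
single primitive, and there is no equivalent combined operation for a
brlock. So the fast path now drops a reference without touching
vfsmount_lock whenever it is not the last one, and only a possibly-final
put takes the lock and re-checks the count under it. Roughly like this --
the slow path below is only a sketch, not the exact hunk from the patch:

void mntput_no_expire(struct vfsmount *mnt)
{
repeat:
	/* Fast path: not the last reference, no need for vfsmount_lock. */
	if (atomic_add_unless(&mnt->mnt_count, -1, 1))
		return;

	/*
	 * Possibly the last reference.  There is no atomic_dec_and_br_lock(),
	 * so take the lock first, then do the decrement and re-check the
	 * count under it.
	 */
	br_write_lock(vfsmount_lock);
	if (!atomic_dec_and_test(&mnt->mnt_count)) {
		/* Someone else took a reference in the meantime. */
		br_write_unlock(vfsmount_lock);
		return;
	}
	if (likely(!mnt->mnt_pinned)) {
		br_write_unlock(vfsmount_lock);
		__mntput(mnt);
		return;
	}
	atomic_add(mnt->mnt_pinned + 1, &mnt->mnt_count);
	mnt->mnt_pinned = 0;
	br_write_unlock(vfsmount_lock);
	acct_auto_close_mnt(mnt);
	goto repeat;
}

The net effect is the same as the old atomic_dec_and_lock() version, just
without relying on a combined decrement-and-lock primitive.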