From mboxrd@z Thu Jan 1 00:00:00 1970 From: Andi Kleen Subject: Re: [patch 10/10] fs: brlock vfsmount_lock Date: Wed, 18 Aug 2010 16:05:39 +0200 Message-ID: <87eidwkocs.fsf@basil.nowhere.org> References: <20100817183729.613117146@kernel.dk> <20100817184122.344010346@kernel.dk> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: Al Viro , linux-fsdevel@vger.kernel.org To: Nick Piggin Return-path: Received: from one.firstfloor.org ([213.235.205.2]:35894 "EHLO one.firstfloor.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752827Ab0HROFl (ORCPT ); Wed, 18 Aug 2010 10:05:41 -0400 In-Reply-To: <20100817184122.344010346@kernel.dk> (Nick Piggin's message of "Wed, 18 Aug 2010 04:37:39 +1000") Sender: linux-fsdevel-owner@vger.kernel.org List-ID: Nick Piggin writes: BTW, one way to make the slow path faster would be to start sharing per-CPU locks between SMT siblings inside a core, at least. Siblings on the same core share the same caches, so sharing cache lines between them is free. That would cut the number of locks to take in half on a 2x HT system. > - > static int event; > static DEFINE_IDA(mnt_id_ida); > static DEFINE_IDA(mnt_group_ida); > +static DEFINE_SPINLOCK(mnt_id_lock); Can you add a scope comment to that lock? > @@ -623,39 +653,43 @@ static inline void __mntput(struct vfsmo > void mntput_no_expire(struct vfsmount *mnt) > { > repeat: > - if (atomic_dec_and_lock(&mnt->mnt_count, &vfsmount_lock)) { > - if (likely(!mnt->mnt_pinned)) { > - spin_unlock(&vfsmount_lock); > - __mntput(mnt); > - return; > - } > - atomic_add(mnt->mnt_pinned + 1, &mnt->mnt_count); > - mnt->mnt_pinned = 0; > - spin_unlock(&vfsmount_lock); > - acct_auto_close_mnt(mnt); > - goto repeat; > + if (atomic_add_unless(&mnt->mnt_count, -1, 1)) > + return; Hmm, that's an unrelated change? The rest all looks good and is quite straightforward. Reviewed-by: Andi Kleen -Andi -- ak@linux.intel.com -- Speaking for myself only.
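[Editor's note: for readers unfamiliar with the fast path in the quoted hunk, atomic_add_unless(&v, a, u) atomically adds `a` to `v` unless `v == u`, so the common mntput case can drop a reference without ever taking vfsmount_lock. The sketch below is a standalone illustration of that primitive using C11 atomics; the function name and CAS loop are illustrative, not the kernel's implementation.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of atomic_add_unless() semantics: add `a` to `*v` unless
 * `*v == u`.  Returns true if the add was performed. */
static bool atomic_add_unless_sketch(atomic_int *v, int a, int u)
{
    int c = atomic_load(v);
    while (c != u) {
        /* Try to swing the counter from c to c + a in one step. */
        if (atomic_compare_exchange_weak(v, &c, c + a))
            return true;   /* add happened without `*v` ever being `u` */
        /* CAS failed: `c` now holds the fresh value, loop and retry. */
    }
    return false;          /* counter is at `u`; caller takes the slow path */
}
```

In the patched mntput_no_expire(), the (-1, 1) arguments mean the reference is dropped locklessly only while the count stays above 1; once it would reach 1, the call returns false and the function falls through to the locked slow path that handles mnt_pinned and __mntput().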