From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Miller
Subject: Re: [rfc][patch] store-free path walking
Date: Thu, 08 Oct 2009 23:15:27 -0700 (PDT)
Message-ID: <20091008.231527.251721755.davem@davemloft.net>
References: <20091008123622.GA30316@wotan.suse.de> <20091009035050.GC4287@wotan.suse.de>
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Cc: torvalds@linux-foundation.org, jens.axboe@oracle.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, kiran@scalex86.org, peterz@infradead.org
To: npiggin@suse.de
Return-path:
In-Reply-To: <20091009035050.GC4287@wotan.suse.de>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

From: Nick Piggin
Date: Fri, 9 Oct 2009 05:50:50 +0200

> OK, I got rid of this guy from the RCU walk. Basically I now hold
> vfsmount_lock over the entire RCU path walk (which also pins the mnt)
> and use a seqlock in the fs struct to get a consistent mnt,dentry
> pair. This also simplifies the walk because we don't need the
> complexity to avoid mntget/mntput (just do one final mntget on the
> resulting mnt before dropping vfsmount_lock).
>
> vfsmount_lock adds one per-cpu atomic for the spinlock, and we
> remove two thread-shared atomics for fs->lock, so it's a net win for
> both single-threaded performance and thread-shared scalability.
> Latency is no problem because we hold rcu_read_lock for the same
> length of time anyway.
>
> The parallel git diff workload is improved by several percent.

Sounds sweet, Nick, can't wait to play with your next set of patches here.
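
For reference, roughly the shape of the read side I'm picturing from that
description (a sketch only; the helper names vfsmount_read_lock() /
vfsmount_read_unlock() and the fs->seq seqlock field are placeholders I made
up, not taken from the actual patch):

	struct path root;
	struct nameidata nd;
	unsigned seq;

	rcu_read_lock();
	vfsmount_read_lock();	/* placeholder: read side of the per-cpu
				   vfsmount_lock, pins every mnt for the
				   duration of the walk */

	/* Sample a consistent (mnt, dentry) starting point without taking
	 * fs->lock; retry if a concurrent chdir/chroot races with us. */
	do {
		seq = read_seqbegin(&current->fs->seq);	/* placeholder seqlock
							   in fs_struct */
		root = current->fs->root;
	} while (read_seqretry(&current->fs->seq, seq));

	/* ... store-free walk starting from root.mnt / root.dentry,
	 * filling in nd.path ... */

	/* One mntget on the final result is enough, since vfsmount_lock
	 * kept every intermediate mnt pinned, and it happens before the
	 * lock is dropped. */
	mntget(nd.path.mnt);
	vfsmount_read_unlock();
	rcu_read_unlock();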