linux-fsdevel.vger.kernel.org archive mirror
From: Nick Piggin <npiggin@kernel.dk>
To: Valerie Aurora <vaurora@redhat.com>
Cc: Nick Piggin <npiggin@kernel.dk>,
	Al Viro <viro@zeniv.linux.org.uk>,
	linux-fsdevel@vger.kernel.org
Subject: Re: [patch 04/10] fs: fs_struct rwlock to spinlock
Date: Fri, 20 Aug 2010 20:05:44 +1000	[thread overview]
Message-ID: <20100820100544.GC12105@amd> (raw)
In-Reply-To: <20100817231442.GN5556@shell>

On Tue, Aug 17, 2010 at 07:14:42PM -0400, Valerie Aurora wrote:
> On Wed, Aug 18, 2010 at 04:37:33AM +1000, Nick Piggin wrote:
> > -	read_lock(&current->fs->lock);
> > +	spin_lock(&current->fs->lock);
> >  	root = dget(current->fs->root.dentry);
> > -	read_unlock(&current->fs->lock);
> > +	spin_unlock(&current->fs->lock);
> >  
> >  	spin_lock(&dcache_lock);
> 
> Your reasoning makes sense to me.  Shared reader access seems very
> unlikely whereas the cost of taking the lock is certain.

Yes, shared reader access only helps if multiple threads are inside the
critical section at the same time, and if they can actually extract extra
parallelism from it (which they can't here, because all they're doing is
hitting another contended cacheline).

I doubt this will change scalability at all, but it should improve
single-threaded performance a little bit.

With the store-free path walk, I can actually do some tricks to entirely
remove the fs->lock spinlock in the common case here (using a seqlock
instead). Combined with avoiding refcounts on the cwd, that removes all
the contention between threads.



Thread overview: 25+ messages
2010-08-17 18:37 [patch 00/10] first set of vfs scale patches Nick Piggin
2010-08-17 18:37 ` [patch 01/10] fs: fix do_lookup false negative Nick Piggin
2010-08-17 22:45   ` Valerie Aurora
2010-08-17 23:04   ` Sage Weil
2010-08-18 13:41   ` Andi Kleen
2010-08-17 18:37 ` [patch 02/10] fs: dentry allocation consolidation Nick Piggin
2010-08-17 22:45   ` Valerie Aurora
2010-08-17 18:37 ` [patch 03/10] apparmor: use task path helpers Nick Piggin
2010-08-17 22:59   ` Valerie Aurora
2010-08-17 18:37 ` [patch 04/10] fs: fs_struct rwlock to spinlock Nick Piggin
2010-08-17 23:14   ` Valerie Aurora
2010-08-20 10:05     ` Nick Piggin [this message]
2010-08-17 18:37 ` [patch 05/10] fs: remove extra lookup in __lookup_hash Nick Piggin
2010-08-18 13:57   ` Andi Kleen
2010-08-18 21:13     ` Andi Kleen
2010-08-18 19:34   ` Valerie Aurora
2010-08-17 18:37 ` [patch 06/10] fs: cleanup files_lock locking Nick Piggin
2010-08-18 19:46   ` Valerie Aurora
2010-08-17 18:37 ` [patch 07/10] tty: fix fu_list abuse Nick Piggin
2010-08-17 18:37 ` [patch 08/10] lglock: introduce special lglock and brlock spin locks Nick Piggin
2010-08-17 18:37 ` [patch 09/10] fs: scale files_lock Nick Piggin
2010-08-17 18:37 ` [patch 10/10] fs: brlock vfsmount_lock Nick Piggin
2010-08-18 14:05   ` Andi Kleen
2010-08-20 10:09     ` Nick Piggin
2010-08-17 21:14 ` [patch 00/10] first set of vfs scale patches Al Viro
