From: Nick Piggin <npiggin@suse.de>
To: Al Viro <viro@ZenIV.linux.org.uk>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 00/27] [rfc] vfs scalability patchset
Date: Sat, 25 Apr 2009 07:02:32 +0200 [thread overview]
Message-ID: <20090425050232.GA10088@wotan.suse.de> (raw)
In-Reply-To: <20090425041829.GX8633@ZenIV.linux.org.uk>
Thanks for taking a look. I'll spend some time going over your
feedback.
On Sat, Apr 25, 2009 at 05:18:29AM +0100, Al Viro wrote:
> On Sat, Apr 25, 2009 at 11:20:20AM +1000, npiggin@suse.de wrote:
> > Here is my current patchset for improving vfs locking scalability. Since
> > last posting, I have fixed several bugs, solved several more problems, and
> > done an initial sweep of filesystems (autofs4 is probably the trickiest,
> > and unfortunately I don't have a good test setup here for that yet, but
> > at least I've looked through it).
> >
> > Also started to tackle files_lock, vfsmount_lock, and inode_lock.
> > (I included my mnt_want_write patches before the vfsmount_lock scalability
> > stuff because that just made it a bit easier...). These appear to be the
> > problematic global locks in the vfs.
> >
> > It's running stably so far under basic stress testing here on several file
> > systems (xfs, tmpfs, ext?). But it still might eat your data, of course.
> >
> > Would be very interested in any feedback.
>
> First of all, I happily admit that wrt locking I'm a barbarian, and proud
> of it. I.e. a simpler locking scheme beats a theoretical improvement, unless
> we have really good evidence that there's a real-world problem. All things
> equal, complexity loses. All things not quite equal - ditto. The amount of
> fuckups is at least quadratic in the number of lock types, with quite a big
> chunk on top added by each per-something kind of lock.
Yes, definitely. What recently prompted me to finally look at this is
the nasty-looking "batched dput/iput" stuff that came out of Google.
Unfortunately I don't remember seeing a description of the workload,
but I'll ping them.
I do know that SGI has had problems with these locks on NFS server
workloads too (and not on insanely sized systems). I should be able
to get a recipe for reproducing this.
And this is an open call for anyone else seeing scalability problems
here too.
> Said that, I like mnt_want_write part, vfsmount_lock splitup (modulo
> several questions) and _maybe_ doing something about files_lock.
> Like as in "would seriously consider merging next cycle".
OK, that's a good start. I admit I didn't take enough time to grok
the tty stuff :P But I'll try to get it into shape.
> I'd keep
> dcache and icache parts separate for now.
Yes they need a lot more review and results.
> However, files_lock part 2 looks very dubious - if nothing else, I would
> expect that you'll get *more* cross-CPU traffic that way, since the CPU
> where final fput() runs will correlate only weakly (if at all) with one
> where open() had been done. So you are getting more cachelines bouncing.
You think? Weakly? Well, I guess it will depend on the workload; in
some cases it will. Although the alternative is all CPUs bouncing a
single lock cacheline, so with multiple lock cachelines we at least
have less contention at the cache-coherency level (i.e. we can have
multiple cacheline bounces in flight across the entire machine).
But... enough handwaving from me; I agree it needs results.
> I want to see the numbers for this one, and on different kinds of loads,
> but as it is I'm very sceptical. BTW, could you try to collect stats
> along the lines of "CPU #i has done N_{i,j} removals from sb list for
> files that had been in list #j"?
>
> Splitting files_lock on per-sb basis might be an interesting variant, too.
Yes that could help, although I had been trying to keep in mind
single-sb scalability too.
> Another thing: could you pull outright bugfixes as early as possible in the
> queue?
Sure thing.
Thanks,
Nick
Thread overview (50+ messages):
2009-04-25 1:20 [patch 00/27] [rfc] vfs scalability patchset npiggin
2009-04-25 1:20 ` [patch 01/27] fs: cleanup files_lock npiggin
2009-04-25 3:20 ` Al Viro
2009-04-25 5:35 ` Eric W. Biederman
2009-04-26 6:12 ` Nick Piggin
2009-04-25 9:42 ` Alan Cox
2009-04-26 6:15 ` Nick Piggin
2009-04-25 1:20 ` [patch 02/27] fs: scale files_lock npiggin
2009-04-25 3:32 ` Al Viro
2009-04-25 1:20 ` [patch 03/27] fs: mnt_want_write speedup npiggin
2009-04-25 1:20 ` [patch 04/27] fs: introduce mnt_clone_write npiggin
2009-04-25 3:35 ` Al Viro
2009-04-25 1:20 ` [patch 05/27] fs: brlock vfsmount_lock npiggin
2009-04-25 3:50 ` Al Viro
2009-04-26 6:36 ` Nick Piggin
2009-04-25 1:20 ` [patch 06/27] fs: dcache fix LRU ordering npiggin
2009-04-25 1:20 ` [patch 07/27] fs: dcache scale hash npiggin
2009-04-25 1:20 ` [patch 08/27] fs: dcache scale lru npiggin
2009-04-25 1:20 ` [patch 09/27] fs: dcache scale nr_dentry npiggin
2009-04-25 1:20 ` [patch 10/27] fs: dcache scale dentry refcount npiggin
2009-04-25 1:20 ` [patch 11/27] fs: dcache scale d_unhashed npiggin
2009-04-25 1:20 ` [patch 12/27] fs: dcache scale subdirs npiggin
2009-04-25 1:20 ` [patch 13/27] fs: scale inode alias list npiggin
2009-04-25 1:20 ` [patch 14/27] fs: use RCU / seqlock logic for reverse and multi-step operations npiggin
2009-04-25 1:20 ` [patch 15/27] fs: dcache remove dcache_lock npiggin
2009-04-25 1:20 ` [patch 16/27] fs: dcache reduce dput locking npiggin
2009-04-25 1:20 ` [patch 17/27] fs: dcache per-bucket dcache hash locking npiggin
2009-04-25 1:20 ` [patch 18/27] fs: dcache reduce dcache_inode_lock npiggin
2009-04-25 1:20 ` [patch 19/27] fs: dcache per-inode inode alias locking npiggin
2009-04-25 1:20 ` [patch 20/27] fs: icache lock s_inodes list npiggin
2009-04-25 1:20 ` [patch 21/27] fs: icache lock inode hash npiggin
2009-04-25 1:20 ` [patch 22/27] fs: icache lock i_state npiggin
2009-04-25 1:20 ` [patch 23/27] fs: icache lock i_count npiggin
2009-04-25 1:20 ` [patch 24/27] fs: icache atomic inodes_stat npiggin
2009-04-25 1:20 ` [patch 25/27] fs: icache lock lru/writeback lists npiggin
2009-04-25 1:20 ` [patch 26/27] fs: icache protect inode state npiggin
2009-04-25 1:20 ` [patch 27/27] fs: icache remove inode_lock npiggin
2009-04-25 4:18 ` [patch 00/27] [rfc] vfs scalability patchset Al Viro
2009-04-25 5:02 ` Nick Piggin [this message]
2009-04-25 8:01 ` Christoph Hellwig
2009-04-25 8:06 ` Al Viro
2009-04-28 9:09 ` Christoph Hellwig
2009-04-28 9:48 ` Nick Piggin
2009-04-28 10:58 ` Peter Zijlstra
2009-04-28 11:32 ` Eric W. Biederman
2009-04-30 6:14 ` Nick Piggin
2009-04-25 19:08 ` Eric W. Biederman
2009-04-25 19:31 ` Al Viro
2009-04-25 20:29 ` Eric W. Biederman
2009-04-25 22:05 ` Theodore Tso