From: Nick Piggin <npiggin@suse.de>
To: Anton Blanchard <anton@samba.org>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-fsdevel@vger.kernel.org,
Ravikiran G Thirumalai <kiran@scalex86.org>,
Peter Zijlstra <peterz@infradead.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Jens Axboe <axboe@kernel.dk>
Subject: Re: Latest vfs scalability patch
Date: Thu, 15 Oct 2009 13:48:56 +0200
Message-ID: <20091015114856.GF3127@wotan.suse.de>
In-Reply-To: <20091015114119.GE3127@wotan.suse.de>
On Thu, Oct 15, 2009 at 01:41:19PM +0200, Nick Piggin wrote:
> On Thu, Oct 15, 2009 at 10:23:29PM +1100, Anton Blanchard wrote:
> >
> > Hi Nick,
> >
> > > I wonder what other good performance tests you can add to your test
> > > framework? creat/unlink is another easy one. And for each case, the
> > > variants are threads in their own cwd versus a common cwd.
> >
> > I did try the two combinations of creat/unlink but haven't had a chance
> > to digest the profiles yet. I've attached them (taken at 64 cores, i.e.
> > the worst case :)
> >
> > In both cases performance was significantly better than mainline.
> >
> > > BTW. for these cases in your tests it would be nice if you could run
> > > on ramfs, because that isolates purely the VFS. Perhaps also include
> > > other filesystems as you get time, but I think ramfs is the most
> > > useful for us to start with.
> >
> > Good point. I'll add that into the setup scripts.
> >
> > Anton
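
(Aside: the ramfs setup can be as small as a single mount(2) call, as in
the sketch below. The mount point /mnt/vfstest and the minimal error
handling are made up for illustration; this is not taken from Anton's
actual setup scripts.)

/* Mount a fresh ramfs so the create/unlink load exercises only the
 * VFS and never touches a real filesystem or disk. Needs root; the
 * mount point is an arbitrary choice for this sketch. */
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>

int main(void)
{
	mkdir("/mnt/vfstest", 0755);	/* ignore EEXIST for the sketch */
	/* equivalent to: mount -t ramfs none /mnt/vfstest */
	if (mount("none", "/mnt/vfstest", "ramfs", 0, NULL) != 0) {
		perror("mount ramfs");
		return 1;
	}
	return 0;
}
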
>
> > # Samples: 82617
> > #
> > # Overhead Command Shared Object Symbol
> > # ........ ............... ................................. ......
> > #
> > 99.16% unlink1_process [kernel] [k] ._spin_lock
> > |
> > |--99.98%-- ._spin_lock
> > | |
> > | |--49.80%-- .path_get
> > | |--49.58%-- .dput
>
> Hmm, both your profiles look like they are hammering on a common cwd
> here. The lock-free path walk can probably be extended to help a bit,
> but you would still end up hitting locks on the parent dentry/inode
> when doing the create/destroy. My 64-way numbers look like this:
>
>
> create-unlink 1 processes separate-cwd 105306.58 ops/s
> create-unlink 2 processes separate-cwd 103004.20 ops/s
> create-unlink 4 processes separate-cwd 92438.69 ops/s
> create-unlink 8 processes separate-cwd 91138.93 ops/s
> create-unlink 16 processes separate-cwd 91025.36 ops/s
> create-unlink 32 processes separate-cwd 83757.75 ops/s
> create-unlink 64 processes separate-cwd 81718.29 ops/s
A dumb profile for this guy looks like this:
206681 total 0.0270
25851 _spin_lock 161.5687
13628 kmem_cache_free 7.3427
9890 _spin_unlock 61.8125
7087 kmem_cache_alloc 6.5138
6770 _read_lock 35.2604
5587 __call_rcu 4.8498
5580 __link_path_walk 0.5571
5246 do_filp_open 0.9476
4946 __rcu_process_callbacks 2.0608
4904 __percpu_counter_add 11.7885
3933 d_alloc 5.1211
3906 memset 3.6989
3807 path_init_rcu 3.2154
3370 __mutex_init 35.1042
3254 mnt_want_write 4.6222
oprofile isn't working on this guy either, and I no longer have
the patience to try working out where such locking is coming from
without lockdep or perf ;) But it sure is a lot better than your
profiles...
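
For reference, the create/unlink worker in a test like this boils down
to something like the sketch below. This is my own illustration, not
the actual test harness: the file names, iteration count, and the
SEPARATE_CWD toggle are all invented for the example.

/* Sketch of a create/unlink scalability microbenchmark: fork nproc
 * workers, each creat()ing and unlink()ing its own file in a loop.
 * With SEPARATE_CWD each worker chdir()s into a private directory,
 * so the parent dentry/inode locks are not shared; with it off, all
 * workers hammer the locks of one common cwd. Illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

#define SEPARATE_CWD 1

static void worker(int id, long iters)
{
	char file[32];

	snprintf(file, sizeof(file), "f%d", id);
#if SEPARATE_CWD
	char dir[32];
	snprintf(dir, sizeof(dir), "dir%d", id);
	mkdir(dir, 0755);	/* private parent dir per worker */
	chdir(dir);
#endif
	for (long i = 0; i < iters; i++) {
		int fd = creat(file, 0644);
		if (fd >= 0)
			close(fd);
		unlink(file);
	}
}

int main(int argc, char **argv)
{
	int nproc = (argc > 1) ? atoi(argv[1]) : 4;

	for (int i = 0; i < nproc; i++) {
		if (fork() == 0) {
			worker(i, 100000);	/* arbitrary iteration count */
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;	/* reap all workers */
	return 0;
}

Run it from inside the ramfs mount and time it externally, e.g. with
time(1), at 1, 2, 4, ... 64 processes; the common-cwd variant should
show the parent dentry/inode lock contention discussed above.
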