From: Jens Axboe <jens.axboe@oracle.com>
To: Nick Piggin <npiggin@suse.de>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-fsdevel@vger.kernel.org,
Ravikiran G Thirumalai <kiran@scalex86.org>,
Peter Zijlstra <peterz@infradead.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
samba-technical@lists.samba.org
Subject: Re: [rfc][patch] store-free path walking
Date: Mon, 12 Oct 2009 10:20:04 +0200
Message-ID: <20091012082004.GY9228@kernel.dk>
In-Reply-To: <20091012055920.GD25882@wotan.suse.de>

On Mon, Oct 12 2009, Nick Piggin wrote:
> On Mon, Oct 12, 2009 at 05:58:43AM +0200, Nick Piggin wrote:
> > On Wed, Oct 07, 2009 at 11:56:57AM +0200, Jens Axboe wrote:
> > Try changing the 'statvfs' syscall in dbench to 'statfs'.
> > glibc has to do some nasty stuff parsing /proc/mounts to
> > make statvfs work. On my 2s8c opteron it goes like this:
> > clients   vanilla kernel   vfs scale (MB/s)
> >       1              476                447
> >       2             1092               1128
> >       4             2027               2260
> >       8             2398               4200
> >
> > Single threaded performance isn't as good so I need to look
> > at the reasons for that :(. But it's practically linearly
> > scalable now. The dropoff at 8 I'd say is probably due to
> > the memory controllers running out of steam rather than
> > cacheline or lock contention.
>
> Ah, no on a bigger machine it starts slowing down again due
> to shared cwd contention, possibly due to creat/unlink type
> events. This could be improved by not restarting the entire
> path walk when we run into trouble but just trying to proceed
> from the last successful element.
>
I was starting to do a few runs, but there's something funky going on
here. The throughput rates are consistent throughout a single run, but
not at all between runs. I suspect this may be due to CPU placement.
The numbers also look pretty odd, here's an example from a patched
kernel with dbench using statfs:
Clients   Patched
-----------------
      1      1.00
      2      1.23
      4      2.96
      8      1.22
     16      0.89
     32      0.83
     64      0.83
On top of that, the numbers fluctuate by as much as 20% from run to run.
OK, so it seems the FAIR_SLEEPERS sched feature is responsible for this; if
I turn it off, I get more consistent numbers. The table below is -git vs
the vfs patches on top of -git. Baseline is -git with 1 client, > 1.00 is
faster, and vice versa.
Clients   Vanilla   VFS scale
-----------------------------
      1      1.00        0.96
      2      1.69        1.71
      4      2.16        2.98
      8      0.99        1.00
     16      0.90        0.85
As you can see, it still quickly spirals into spending most of the time
(> 95%) spinning on a lock, which kills scaling.
> Anyway, if you do get a chance to run dbench with this
> modification, I would appreciate seeing a profile with call
> traces (my bigger system is ia64 which doesn't do perf yet).
For what number of clients?
--
Jens Axboe