From: Nick Piggin <npiggin@kernel.dk>
To: Nick Piggin <npiggin@kernel.dk>
Cc: Dave Chinner <david@fromorbit.com>,
Nick Piggin <npiggin@gmail.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Eric Dumazet <eric.dumazet@gmail.com>,
Al Viro <viro@zeniv.linux.org.uk>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [patch 1/6] fs: icache RCU free inodes
Date: Wed, 17 Nov 2010 16:56:33 +1100 [thread overview]
Message-ID: <20101117055633.GA3861@amd> (raw)
In-Reply-To: <20101117041812.GD3302@amd>
On Wed, Nov 17, 2010 at 03:18:12PM +1100, Nick Piggin wrote:
> On Wed, Nov 17, 2010 at 12:12:54PM +1100, Dave Chinner wrote:
> > On Tue, Nov 16, 2010 at 02:49:06PM +1100, Nick Piggin wrote:
> > > On Tue, Nov 16, 2010 at 02:02:43PM +1100, Dave Chinner wrote:
> > > > On Mon, Nov 15, 2010 at 03:21:00PM +1100, Nick Piggin wrote:
> > > > > This is 30K inodes per second per CPU, versus nearly 800K per second
> > > > > number that I measured the 12% slowdown with. About 25x slower.
> > > >
> > > > Hi Nick, the ramfs (800k/12%) numbers are not the context I was
> > > > responding to - you're comparing apples to oranges. I was responding to
> > > > the "XFS [on a ramdisk] is about 4.9% slower" result.
> > >
> > > Well xfs on ramdisk was (85k/4.9%).
> >
> > How many threads? On a 2.26GHz nehalem-class Xeon CPU, I'm seeing:
> >
> > threads files/s
> > 1 45k
> > 2 70k
> > 4 130k
> > 8 230k
> >
> > With scalability mainly limited by the dcache_lock. I'm not sure
> > what you 85k number relates to in the above chart. Is it a single
>
> Yes, a single thread. 86385 inodes created and destroyed per second.
> upstream kernel.
92K actually, with delaylog. Still a long way off ext4, which itself is
a very long way off ramfs. Do you have lots of people migrating off xfs
to ext4 because it is so much quicker? I doubt it, because I'm sure xfs
is often as good or better at what people are actually doing.
Yes, it's great if it can avoid hitting the disk and run from cache,
but my point was that real workloads are not going to follow the
busy-loop create/destroy pattern in the slightest. And real IO will
quite often get in the way.
So you are going to be a long way off even the 4-5% theoretical worst
case. Every time a creat is followed by something other than an unlink
(eg. another creat, a lock, some IO, some calculation, a write), that
gap shrinks further.
So the closest creat/unlink intensive benchmark I have found was fs_mark
with 0 file size, and no syncs. It's basically just inode create and
destroy in something slightly better than a busy loop.
I ran that on ramdisk, on xfs with delaylog. 100 times.
Average files/s:
vanilla - 39648.76
rcu - 39916.66
I.e., RCU actually had a slightly higher mean, but assuming a normal
distribution there was no significant difference at 95% confidence.
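The significance claim can be checked with a standard two-sample test. A minimal sketch, using the normal approximation (reasonable with 100 runs per kernel); note the thread only reports the two means, so any per-run samples fed to this are hypothetical:

```python
import math
from statistics import mean, stdev

def significant_at_95(a, b):
    """Welch-style two-sample z test on two lists of run results.

    Returns True if the difference in means is significant at the
    two-sided 95% level, assuming roughly normal run-to-run noise.
    """
    # Standard error of the difference in means.
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    if se == 0:
        return mean(a) != mean(b)
    return abs(mean(b) - mean(a)) / se > 1.96
```

With run-to-run spread of even a percent or two (typical for filesystem benchmarks), a ~270 files/s difference between means of ~39.6k and ~39.9k is well within noise, which is consistent with the "no significant difference" conclusion above.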
Mind you, this is still 40k files/s -- so it's still on the high side
compared to anything doing _real_ work, doing real IO, or anything
non-trivial with the damn things.
So there. I restate my case. I have put up the numbers, and I have
shown that even the worst case is not the end of the world. I don't
know why I've had to repeat it so many times, but honestly at this
point I've done enough. The case is closed until any *actual*
significant numbers to the contrary turn up.
I've been much more diligent than most people at examining worst cases
and doing benchmarks, and we really shouldn't hold up kernel
development beyond that without actual numbers as a basis.
Thanks,
Nick
Thread overview: 43+ messages
2010-11-09 12:46 [patch 1/6] fs: icache RCU free inodes Nick Piggin
2010-11-09 12:47 ` [patch 2/6] fs: icache avoid RCU freeing for pseudo fs Nick Piggin
2010-11-09 12:58 ` [patch 3/6] fs: dcache documentation cleanup Nick Piggin
2010-11-09 16:24 ` Christoph Hellwig
2010-11-09 22:06 ` Nick Piggin
2010-11-10 16:27 ` Christoph Hellwig
2010-11-09 13:01 ` [patch 4/6] fs: d_delete change Nick Piggin
2010-11-09 16:25 ` Christoph Hellwig
2010-11-09 22:08 ` Nick Piggin
2010-11-10 16:32 ` Christoph Hellwig
2010-11-11 0:27 ` Nick Piggin
2010-11-11 22:07 ` Linus Torvalds
2010-11-09 13:02 ` [patch 5/6] fs: d_compare change for rcu-walk Nick Piggin
2010-11-09 16:25 ` Christoph Hellwig
2010-11-10 1:48 ` Nick Piggin
2010-11-09 13:03 ` [patch 6/6] fs: d_hash " Nick Piggin
2010-11-09 14:19 ` [patch 1/6] fs: icache RCU free inodes Andi Kleen
2010-11-09 21:36 ` Nick Piggin
2010-11-10 14:47 ` Andi Kleen
2010-11-11 4:27 ` Nick Piggin
2010-11-09 16:02 ` Linus Torvalds
2010-11-09 16:21 ` Christoph Hellwig
2010-11-09 21:48 ` Nick Piggin
2010-11-09 16:21 ` Eric Dumazet
2010-11-09 17:08 ` Linus Torvalds
2010-11-09 17:15 ` Christoph Hellwig
2010-11-09 21:55 ` Nick Piggin
2010-11-09 22:05 ` Nick Piggin
2010-11-12 1:24 ` Nick Piggin
2010-11-12 4:48 ` Linus Torvalds
2010-11-12 6:02 ` Nick Piggin
2010-11-12 6:49 ` Nick Piggin
2010-11-12 17:33 ` Linus Torvalds
2010-11-12 23:17 ` Nick Piggin
2010-11-15 1:00 ` Dave Chinner
2010-11-15 4:21 ` Nick Piggin
2010-11-16 3:02 ` Dave Chinner
2010-11-16 3:49 ` Nick Piggin
2010-11-17 1:12 ` Dave Chinner
2010-11-17 4:18 ` Nick Piggin
2010-11-17 5:56 ` Nick Piggin [this message]
2010-11-17 6:04 ` Nick Piggin
2010-11-09 21:44 ` Nick Piggin