From: john stultz <johnstul@us.ibm.com>
To: "Ted Ts'o" <tytso@mit.edu>
Cc: Ext4 Developers List <linux-ext4@vger.kernel.org>,
Keith Mannthey <kmannth@us.ibm.com>,
Eric Whitney <eric.whitney@hp.com>
Subject: Re: [PATCH] jbd2: Use atomic variables to avoid taking t_handle_lock in jbd2_journal_stop
Date: Mon, 02 Aug 2010 17:53:43 -0700
Message-ID: <1280796823.3966.74.camel@localhost.localdomain>
In-Reply-To: <20100803000609.GI25653@thunk.org>
On Mon, 2010-08-02 at 20:06 -0400, Ted Ts'o wrote:
> On Mon, Aug 02, 2010 at 04:02:32PM -0700, john stultz wrote:
> > From these numbers, it looks like the atomic variables are a minor
> > improvement for -rt, but the improvement isn't as drastic as the earlier
> > j_state lock change, or the vfs scalability patchset.
>
> Thanks for doing this quick test run! I was expecting to see a more
> dramatic difference, since the j_state_lock patch removed one of the
> two global locks in jbd2_journal_stop, and the t_handle_lock patch
> removed the second of the two global locks. But I guess the
> j_state_lock contention in start_this_handle() is still the dominating factor.
>
> It's interesting that apparently the latest t_handle_lock patch
> doesn't seem to make much difference unless the VFS scalability patch
> is also applied. I'm not sure why that makes a difference, but it's
> nice to know that with the VFS scalability patch it does seem to help,
> even if it doesn't help as much as I had hoped.
Well, it's likely that with the -rt kernel and without the
vfs-scalability changes, we're just burning way more time on vfs lock
contention than we are on anything in the ext4 code. Just a theory, but
I can try to verify with perf logs if you'd like.
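For reference, here's a rough sketch of the kind of conversion being
measured here (the names are illustrative only, not the actual jbd2
patch): counters that used to be bumped under t_handle_lock become
atomic_t, so the common "stop a handle" path never takes the spinlock
at all.

/*
 * Illustrative sketch only -- not the real jbd2 code.  The spinlock is
 * kept for the remaining slow paths; the per-transaction counters
 * become atomics so the fast path is lock-free.
 */
#include <linux/atomic.h>
#include <linux/spinlock.h>

struct example_transaction {
	spinlock_t	t_handle_lock;		/* still taken on slow paths */
	atomic_t	t_updates;		/* was a plain int under the lock */
	atomic_t	t_outstanding_credits;	/* likewise */
};

/*
 * Fast path: return unused credits and drop our updater reference
 * without touching t_handle_lock.
 */
static void example_stop_handle(struct example_transaction *t,
				int unused_credits)
{
	atomic_sub(unused_credits, &t->t_outstanding_credits);

	if (atomic_dec_and_test(&t->t_updates)) {
		/* Last updater: a real implementation would wake the
		 * commit thread waiting for the transaction to drain. */
	}
}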
> OK, I guess we'll have to start working on the more aggressive
> scalability fix ups....
I've generated mainline results with and without Nick's current
vfs-scalability tree. So far any benefit from the atomic patch seems to
be < 1% there, but I'm probably not hitting much contention at only 8
cores:
2.6.35-rc6
  Throughput 2345.72 MB/sec 8 procs
  Throughput 1424.11 MB/sec 4 procs
  Throughput 811.371 MB/sec 2 procs
  Throughput 444.129 MB/sec 1 procs

2.6.35-rc6 + atomic
  Throughput 2354.66 MB/sec 8 procs
  Throughput 1427.64 MB/sec 4 procs
  Throughput 794.961 MB/sec 2 procs
  Throughput 443.464 MB/sec 1 procs

2.6.35-rc6-vfs
  Throughput 2639.04 MB/sec 8 procs
  Throughput 1583.28 MB/sec 4 procs
  Throughput 858.337 MB/sec 2 procs
  Throughput 452.774 MB/sec 1 procs

2.6.35-rc6-vfs + atomic
  Throughput 2648.42 MB/sec 8 procs
  Throughput 1586.68 MB/sec 4 procs
  Throughput 851.545 MB/sec 2 procs
  Throughput 453.106 MB/sec 1 procs
thanks
-john
Thread overview: 7+ messages
2010-08-02 12:48 [PATCH] jbd2: Use atomic variables to avoid taking t_handle_lock in jbd2_journal_stop Theodore Ts'o
2010-08-02 23:02 ` john stultz
2010-08-03 0:06 ` Ted Ts'o
2010-08-03 0:53 ` john stultz [this message]
2010-08-03 2:52 ` john stultz
2010-08-03 16:06 ` Ted Ts'o
2010-08-03 19:22 ` Eric Whitney