From: Chris Mason <chris.mason@fusionio.com>
To: Dave Chinner <david@fromorbit.com>
Cc: <xfs@oss.sgi.com>, <linux-fsdevel@vger.kernel.org>
Subject: Re: [BULK] Re: Some baseline tests on new hardware (was Re: [PATCH] xfs: optimise CIL insertion during transaction commit [RFC])
Date: Mon, 8 Jul 2013 21:54:19 -0400 [thread overview]
Message-ID: <20130709015419.3855.98373@localhost.localdomain> (raw)
In-Reply-To: <20130709012614.GH3438@dastard>
Quoting Dave Chinner (2013-07-08 21:26:14)
> On Mon, Jul 08, 2013 at 09:15:33PM -0400, Chris Mason wrote:
> > Quoting Dave Chinner (2013-07-08 08:44:53)
> > > [cc fsdevel because after all the XFS stuff I did a some testing on
> > > mmotm w.r.t per-node LRU lock contention avoidance, and also some
> > > scalability tests against ext4 and btrfs for comparison on some new
> > > hardware. That bit ain't pretty. ]
> > >
> > > And, well, the less said about btrfs unlinks the better:
> > >
> > > + 37.14% [kernel] [k] _raw_spin_unlock_irqrestore
> > > + 33.18% [kernel] [k] __write_lock_failed
> > > + 17.96% [kernel] [k] __read_lock_failed
> > > + 1.35% [kernel] [k] _raw_spin_unlock_irq
> > > + 0.82% [kernel] [k] __do_softirq
> > > + 0.53% [kernel] [k] btrfs_tree_lock
> > > + 0.41% [kernel] [k] btrfs_tree_read_lock
> > > + 0.41% [kernel] [k] do_raw_read_lock
> > > + 0.39% [kernel] [k] do_raw_write_lock
> > > + 0.38% [kernel] [k] btrfs_clear_lock_blocking_rw
> > > + 0.37% [kernel] [k] free_extent_buffer
> > > + 0.36% [kernel] [k] btrfs_tree_read_unlock
> > > + 0.32% [kernel] [k] do_raw_write_unlock
> > >
> >
> > Hi Dave,
> >
> > Thanks for doing these runs. At least on Btrfs the best way to resolve
> > the tree locking today is to break things up into more subvolumes.
>
> Sure, but you can't do that for most workloads. Only on specialised
> workloads (e.g. hashed directory tree based object stores) is this
> really a viable option....
Yes and no. It makes a huge difference even when you have 8 procs all
working on the same 8 subvolumes. It's not perfect but it's all I
have ;)
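[For readers outside the thread: "breaking things up into more subvolumes" means giving each worker its own subvolume, so each operates under a separate btree root and the root-lock contention is spread out. A minimal setup sketch follows; the /mnt mount point, the worker-$i naming, and the count of 8 are illustrative assumptions, not anything from the thread.]

```shell
#!/bin/sh
# Sketch: create one btrfs subvolume per worker so each worker's
# metadata updates contend on its own tree root rather than a shared one.
# Assumes a btrfs filesystem is already mounted at /mnt (hypothetical path).
set -e
for i in $(seq 0 7); do
    btrfs subvolume create "/mnt/worker-$i"
done
# Each of the 8 processes then confines its file operations to its own
# /mnt/worker-$i directory tree.
```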
>
> > I've
> > got another run at the root lock contention in the queue after I get
> > the skiplists in place in a few other parts of the Btrfs code.
>
> It will be interesting to see how these new structures play out ;)
The skiplists don't translate well to the tree roots, so I'll probably
have to do something different there. But I'll get the onion peeled one
way or another.
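[Aside for readers: the btrfs skiplist patches under discussion aren't shown in this thread. As background on the structure itself, a skiplist keeps sorted keys in probabilistically stacked linked lists, so search and insert are O(log n) expected without tree rebalancing. Below is a minimal user-space sketch of the classic structure, not btrfs code; all names are made up for illustration.]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LEVEL 8

struct sl_node {
	int key;
	struct sl_node *next[MAX_LEVEL];
};

struct skiplist {
	struct sl_node head;	/* sentinel; key unused */
	int level;		/* highest level currently in use */
};

/* Flip coins: each extra level has probability 1/2. */
static int random_level(void)
{
	int lvl = 1;

	while (lvl < MAX_LEVEL && (rand() & 1))
		lvl++;
	return lvl;
}

static void sl_init(struct skiplist *sl)
{
	memset(sl, 0, sizeof(*sl));
	sl->level = 1;
}

static void sl_insert(struct skiplist *sl, int key)
{
	struct sl_node *update[MAX_LEVEL];
	struct sl_node *x = &sl->head;
	int i, lvl;

	/* Record the predecessor of the new key at every active level. */
	for (i = sl->level - 1; i >= 0; i--) {
		while (x->next[i] && x->next[i]->key < key)
			x = x->next[i];
		update[i] = x;
	}

	lvl = random_level();
	if (lvl > sl->level) {
		for (i = sl->level; i < lvl; i++)
			update[i] = &sl->head;
		sl->level = lvl;
	}

	x = calloc(1, sizeof(*x));
	x->key = key;
	for (i = 0; i < lvl; i++) {
		x->next[i] = update[i]->next[i];
		update[i]->next[i] = x;
	}
}

/* Returns 1 if key is present, 0 otherwise. */
static int sl_search(struct skiplist *sl, int key)
{
	struct sl_node *x = &sl->head;
	int i;

	/* Descend from the sparsest level toward the base list. */
	for (i = sl->level - 1; i >= 0; i--)
		while (x->next[i] && x->next[i]->key < key)
			x = x->next[i];
	x = x->next[0];
	return x && x->key == key;
}
```

The appeal in a locking context is that updates only touch a node's predecessors at each level, which lends itself to finer-grained locking than a rotation-based balanced tree.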
-chris
Thread overview: 12+ messages
[not found] <1372657476-9241-1-git-send-email-david@fromorbit.com>
2013-07-08 12:44 ` Some baseline tests on new hardware (was Re: [PATCH] xfs: optimise CIL insertion during transaction commit [RFC]) Dave Chinner
2013-07-08 13:59 ` Jan Kara
2013-07-08 15:22 ` Marco Stornelli
2013-07-08 15:38 ` Jan Kara
2013-07-09 0:15 ` Dave Chinner
2013-07-09 0:56 ` Theodore Ts'o
2013-07-09 0:43 ` Zheng Liu
2013-07-09 1:23 ` Dave Chinner
2013-07-09 1:15 ` Chris Mason
2013-07-09 1:26 ` Dave Chinner
2013-07-09 1:54 ` Chris Mason [this message]
2013-07-09 8:26 ` Dave Chinner