From: "A. James Lewis" <james@fsck.co.uk>
To: linux-btrfs@vger.kernel.org
Subject: Re: 2.6.35 performance results
Date: Sun, 08 Aug 2010 05:18:24 +0100 [thread overview]
Message-ID: <1281241104.2492.11.camel@hardline> (raw)
> On Fri, Aug 06, 2010 at 01:44:11PM -0500, Steven Pratt wrote:
> > Here is the latest set of performance runs from the 2.6.35-rc5 tree.
> > Included is a refresh of all the other filesystems with some changes
> > for barriers on and off since this has been somewhat of a hot topic
> > recently.
> >
> > New data linked in to the history graphs here:
> > http://btrfs.boxacle.net/repository/raid/history/History.html
> >
> > From a BTRFS performance perspective, we took a major regression on
> > write heavy workloads. As much as a 10x hit! The problem seems to
> > be due to this changeset:
> > http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commit;h=5da9d01b66458b180a6bee0e637a1d0a3effc622
>
> Ouch! The problem is we're not being aggressive enough about
> allocating chunks for data, which makes the flusher come in and start
> data IO.
>
> Thanks a lot for finding the regression, my machine definitely didn't
> show this.
>
> I'll reproduce and fix it up.
>
The guys testing this in Ubuntu's Maverick Alphas have noticed this too.
It only happens in some configurations: for example, I tested in a VM and
saw no issue, but when installing for real on the same hardware,
performance fell through the floor.
https://bugs.launchpad.net/ubuntu/+source/linux-meta/+bug/601299?comments=all
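(For anyone trying to picture the mechanism Chris describes above, here is a
very rough userspace sketch of the behaviour change as I read the changeset.
This is not btrfs code, and the names in it -- reserve_data_space,
shrink_delalloc_sync, kick_async_flusher -- are invented for illustration
only.  The idea: before the change, a writer that ran short of reserved
space could poke an async flusher and carry on; after it, every writer does
the shrinking itself, synchronously, on the write path.)

/* Illustration only -- not btrfs code; all names are invented. */
#include <stdbool.h>
#include <stdio.h>

static long free_space = 128;           /* pretend pool of reservable bytes */

static void kick_async_flusher(void)    { /* wake a background thread */ }
static void shrink_delalloc_sync(void)  { free_space += 64; /* blocks, does IO */ }

static bool reserve_data_space(long bytes, bool sync_shrink)
{
        if (free_space < bytes) {
                if (sync_shrink) {
                        /* post-5da9d01b behaviour: the writer blocks here and
                         * flushes delalloc itself until enough space frees up */
                        while (free_space < bytes)
                                shrink_delalloc_sync();
                } else {
                        /* old behaviour: fire off the async flusher and let
                         * the caller retry later */
                        kick_async_flusher();
                        return false;
                }
        }
        free_space -= bytes;
        return true;
}

int main(void)
{
        /* With sync_shrink enabled, every caller can stall here under load. */
        if (reserve_data_space(256, true))
                printf("reserved, %ld bytes left\n", free_space);
        return 0;
}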
>
> -chris
>
> > Btrfs: Shrink delay allocated space in a synchronized
> >
> > Shrink delayed allocation space in a synchronized manner is more
> > controllable than flushing all delay allocated space in an async
> > thread.
> >
> > This changeset introduced "btrfs_start_one_delalloc_inode" in
> > http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commitdiff;h=5da9d01b66458b180a6bee0e637a1d0a3effc622
> >
> > In heavy write workloads this new function is now dominating the profiles:
> >
> > samples  %        app name                      symbol name
> > 8914973  65.1261  btrfs.ko                      btrfs_start_one_delalloc_inode
> > 1024841   7.4867  vmlinux-2.6.35-rc5-autokern1  rb_get_reader_page
> >  716046   5.2309  vmlinux-2.6.35-rc5-autokern1  ring_buffer_consume
> >  315354   2.3037  oprofile.ko                   add_event_entry
> >  202484   1.4792  vmlinux-2.6.35-rc5-autokern1  write_inode_now
> >  195018   1.4247  btrfs.ko                      btrfs_tree_lock
> >
> >
> > Appears to be major contention on the spin lock, as this gets worse
> > with more threads. This needs to be redone.
> >
> >
> > Steve
>
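(The profile above, with btrfs_start_one_delalloc_inode taking ~65% of the
samples and the cost growing with thread count, is what you would expect if
every writer walks a shared delalloc list under one spinlock.  The snippet
below is a standalone userspace model of that pattern, not btrfs code --
WRITERS, ROUNDS, delalloc_lock and the rest are invented names -- but it
shows why the walk scales so badly: the lock serialises all writers, so
adding threads mostly adds spinning.)

/* Contention sketch only -- build with: gcc -O2 -pthread contention.c */
#include <pthread.h>
#include <stdio.h>

#define WRITERS   8          /* simulated writer threads */
#define ROUNDS    100000     /* reservation shortfalls per writer */
#define N_INODES  256        /* stand-in for the delalloc inode list */

static pthread_spinlock_t delalloc_lock;
static long delalloc_inodes[N_INODES];

static void *writer(void *arg)
{
        (void)arg;
        for (int r = 0; r < ROUNDS; r++) {
                /* Every shortfall walks the whole list under one global
                 * lock, so all writers serialise on delalloc_lock. */
                pthread_spin_lock(&delalloc_lock);
                for (int i = 0; i < N_INODES; i++)
                        delalloc_inodes[i]++;    /* pretend to start IO */
                pthread_spin_unlock(&delalloc_lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[WRITERS];

        pthread_spin_init(&delalloc_lock, PTHREAD_PROCESS_PRIVATE);
        for (int i = 0; i < WRITERS; i++)
                pthread_create(&tid[i], NULL, writer, NULL);
        for (int i = 0; i < WRITERS; i++)
                pthread_join(tid[i], NULL);
        printf("each inode touched %ld times\n", delalloc_inodes[0]);
        return 0;
}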