From: Chris Mason <clm@fb.com>
To: Dave Chinner <david@fromorbit.com>
Cc: <linux-btrfs@vger.kernel.org>
Subject: Re: [4.8] btrfs heats my room with lock contention
Date: Fri, 5 Aug 2016 15:34:28 -0400	[thread overview]
Message-ID: <3b07c682-e885-13fc-048c-b4b1b657e6a6@fb.com> (raw)
In-Reply-To: <20160805030127.GW12670@dastard>



On 08/04/2016 11:01 PM, Dave Chinner wrote:
> On Thu, Aug 04, 2016 at 10:28:44AM -0400, Chris Mason wrote:
>>
>>
>> On 08/04/2016 02:41 AM, Dave Chinner wrote:
>>>
>>> Simple test. 8GB pmem device on a 16p machine:
>>>
>>> # mkfs.btrfs /dev/pmem1
>>> # mount /dev/pmem1 /mnt/scratch
>>> # dbench -t 60 -D /mnt/scratch 16
>>>
>>> And heat your room with the warm air rising from your CPUs. Top
>>> half of the btrfs profile looks like:
> .....
>>> Performance vs CPU usage is:
>>>
>>> nprocs		throughput		cpu usage
>>> 1		440MB/s			 50%
>>> 2		770MB/s			100%
>>> 4		880MB/s			250%
>>> 8		690MB/s			450%
>>> 16		280MB/s			950%
>>>
>>> In comparison, at 8-16 threads ext4 is running at ~2600MB/s and
>>> XFS is running at ~3800MB/s. Even if I throw 300-400 processes at
>>> ext4 and XFS, they only drop to ~1500-2000MB/s as they hit internal
>>> limits.
>>>
>> Yes, with dbench btrfs does much much better if you make a subvol
>> per dbench dir.  The difference is pretty dramatic.  I'm working on
>> it this month, but focusing more on database workloads right now.
>
> You've been giving this answer to lock contention reports for the
> past 6-7 years, Chris.  I really don't care about getting big
> benchmark numbers with contrived setups - the "use multiple
> subvolumes" solution is simply not practical for users or their
> workloads.  The default config should behave sanely and not
> contribute to global warming like this.
>

The btree setup that causes the lock contention here makes some other 
benchmarks faster.  Needing to create subvolumes in order to fix 
performance problems with dbench is far from ideal, but in production 
here the tradeoffs have been worth it.
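
For reference, the per-client subvolume workaround looks roughly like 
the sketch below.  This assumes dbench puts each client's tree under 
clients/clientN beneath the -D directory (the exact layout can vary by 
dbench version), so the paths here are illustrative only:

# mkfs.btrfs -f /dev/pmem1
# mount /dev/pmem1 /mnt/scratch
# mkdir /mnt/scratch/clients
# for i in $(seq 0 15); do btrfs subvolume create /mnt/scratch/clients/client$i; done
# dbench -t 60 -D /mnt/scratch 16

The idea is that each client then works against its own fs tree instead 
of all 16 contending on a single subvolume's btree root.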

Basically this one definitely comes up during dbench and fs_mark, and 
much less often elsewhere.  For the workloads that hit this lock 
contention, splitting things out into subvolumes hugely reduces metadata 
fragmentation on reads.  So it's not just CPU we're helping with 
subvolumes, but spindle time too.

It's true I haven't invested time into guessing when the admin wants to 
split on a per-subvolume basis.  Still, I do love the polar bears, so 
I'll take another shot at the btree lock.

-chris

Thread overview: 4+ messages
2016-08-04  6:41 [4.8] btrfs heats my room with lock contention Dave Chinner
2016-08-04 14:28 ` Chris Mason
2016-08-05  3:01   ` Dave Chinner
2016-08-05 19:34     ` Chris Mason [this message]
