linux-btrfs.vger.kernel.org archive mirror
* [4.8] btrfs heats my room with lock contention
@ 2016-08-04  6:41 Dave Chinner
  2016-08-04 14:28 ` Chris Mason
  0 siblings, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2016-08-04  6:41 UTC (permalink / raw)
  To: linux-btrfs


Simple test. 8GB pmem device on a 16p machine:

# mkfs.btrfs /dev/pmem1
# mount /dev/pmem1 /mnt/scratch
# dbench -t 60 -D /mnt/scratch 16

And heat your room with the warm air rising from your CPUs. Top
half of the btrfs profile looks like:

  36.71%  [kernel]  [k] _raw_spin_unlock_irqrestore
  32.29%  [kernel]  [k] native_queued_spin_lock_slowpath
   5.14%  [kernel]  [k] queued_write_lock_slowpath
   2.46%  [kernel]  [k] _raw_spin_unlock_irq
   2.15%  [kernel]  [k] queued_read_lock_slowpath
   1.54%  [kernel]  [k] _find_next_bit.part.0
   1.06%  [kernel]  [k] __crc32c_le
   0.82%  [kernel]  [k] btrfs_tree_lock
   0.79%  [kernel]  [k] steal_from_bitmap.part.29
   0.70%  [kernel]  [k] __copy_user_nocache
   0.69%  [kernel]  [k] btrfs_tree_read_lock
   0.69%  [kernel]  [k] delay_tsc
   0.64%  [kernel]  [k] btrfs_set_lock_blocking_rw
   0.63%  [kernel]  [k] copy_user_generic_string
   0.51%  [kernel]  [k] do_raw_read_unlock
   0.48%  [kernel]  [k] do_raw_spin_lock
   0.47%  [kernel]  [k] do_raw_read_lock
   0.46%  [kernel]  [k] btrfs_clear_lock_blocking_rw
   0.44%  [kernel]  [k] do_raw_write_lock
   0.41%  [kernel]  [k] __do_softirq
   0.28%  [kernel]  [k] __memcpy
   0.24%  [kernel]  [k] map_private_extent_buffer
   0.23%  [kernel]  [k] find_next_zero_bit
   0.22%  [kernel]  [k] btrfs_tree_read_unlock

Performance vs CPU usage is:

nprocs		throughput		cpu usage
1		440MB/s			 50%
2		770MB/s			100%
4		880MB/s			250%
8		690MB/s			450%
16		280MB/s			950%

In comparison, at 8-16 threads ext4 is running at ~2600MB/s and
XFS is running at ~3800MB/s. Even if I throw 300-400 processes at
ext4 and XFS, they only drop to ~1500-2000MB/s as they hit internal
limits.
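
To put the scaling collapse in numbers, here is a back-of-envelope
calculation dividing the throughput above by the CPUs consumed (my own
sketch using only the figures in this mail, not part of the original
report):

```shell
#!/bin/sh
# Throughput per busy CPU for the dbench runs above.
# Data copied verbatim from the table: nprocs, MB/s, CPU%.
OUT=$(awk 'BEGIN {
    n = split("1:440:50 2:770:100 4:880:250 8:690:450 16:280:950", runs, " ")
    print "nprocs   MB/s  CPUs  MB/s-per-CPU"
    for (i = 1; i <= n; i++) {
        split(runs[i], r, ":")
        cpus = r[3] / 100
        printf "%6d %6d %5.1f %13.1f\n", r[1], r[2], cpus, r[2] / cpus
    }
}')
echo "$OUT"
```

Per-CPU efficiency drops roughly 30x (880 down to ~29 MB/s per busy
CPU) between 1 and 16 clients, which is the signature of the lock
contention dominating the profile above.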

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [4.8] btrfs heats my room with lock contention
  2016-08-04  6:41 [4.8] btrfs heats my room with lock contention Dave Chinner
@ 2016-08-04 14:28 ` Chris Mason
  2016-08-05  3:01   ` Dave Chinner
  0 siblings, 1 reply; 4+ messages in thread
From: Chris Mason @ 2016-08-04 14:28 UTC (permalink / raw)
  To: Dave Chinner, linux-btrfs



On 08/04/2016 02:41 AM, Dave Chinner wrote:
>
> Simple test. 8GB pmem device on a 16p machine:
>
> # mkfs.btrfs /dev/pmem1
> # mount /dev/pmem1 /mnt/scratch
> # dbench -t 60 -D /mnt/scratch 16
>
> And heat your room with the warm air rising from your CPUs. Top
> half of the btrfs profile looks like:
>
>   36.71%  [kernel]  [k] _raw_spin_unlock_irqrestore
>   32.29%  [kernel]  [k] native_queued_spin_lock_slowpath
>    5.14%  [kernel]  [k] queued_write_lock_slowpath
>    2.46%  [kernel]  [k] _raw_spin_unlock_irq
>    2.15%  [kernel]  [k] queued_read_lock_slowpath
>    1.54%  [kernel]  [k] _find_next_bit.part.0
>    1.06%  [kernel]  [k] __crc32c_le
>    0.82%  [kernel]  [k] btrfs_tree_lock
>    0.79%  [kernel]  [k] steal_from_bitmap.part.29
>    0.70%  [kernel]  [k] __copy_user_nocache
>    0.69%  [kernel]  [k] btrfs_tree_read_lock
>    0.69%  [kernel]  [k] delay_tsc
>    0.64%  [kernel]  [k] btrfs_set_lock_blocking_rw
>    0.63%  [kernel]  [k] copy_user_generic_string
>    0.51%  [kernel]  [k] do_raw_read_unlock
>    0.48%  [kernel]  [k] do_raw_spin_lock
>    0.47%  [kernel]  [k] do_raw_read_lock
>    0.46%  [kernel]  [k] btrfs_clear_lock_blocking_rw
>    0.44%  [kernel]  [k] do_raw_write_lock
>    0.41%  [kernel]  [k] __do_softirq
>    0.28%  [kernel]  [k] __memcpy
>    0.24%  [kernel]  [k] map_private_extent_buffer
>    0.23%  [kernel]  [k] find_next_zero_bit
>    0.22%  [kernel]  [k] btrfs_tree_read_unlock
>
> Performance vs CPU usage is:
>
> nprocs		throughput		cpu usage
> 1		440MB/s			 50%
> 2		770MB/s			100%
> 4		880MB/s			250%
> 8		690MB/s			450%
> 16		280MB/s			950%
>
> In comparison, at 8-16 threads ext4 is running at ~2600MB/s and
> XFS is running at ~3800MB/s. Even if I throw 300-400 processes at
> ext4 and XFS, they only drop to ~1500-2000MB/s as they hit internal
> limits.
>
Yes, with dbench btrfs does much much better if you make a subvol per 
dbench dir.  The difference is pretty dramatic.  I'm working on it this 
month, but focusing more on database workloads right now.
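
A rough sketch of that workaround (untested; it assumes dbench creates
its per-client work areas as clients/clientN under the -D directory --
check the layout your dbench version actually uses).  Printed as a dry
run so nothing is executed; drop the echo to run it for real:

```shell
#!/bin/sh
# One btrfs subvolume per dbench client directory, so each client
# works against its own btree instead of contending on one shared
# fs tree.  Mount point and client count taken from the original test.
MNT=/mnt/scratch
NPROCS=16

echo "mkdir -p $MNT/clients"
i=0
while [ "$i" -lt "$NPROCS" ]; do
    echo "btrfs subvolume create $MNT/clients/client$i"
    i=$((i + 1))
done
echo "dbench -t 60 -D $MNT $NPROCS"
```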

-chris



* Re: [4.8] btrfs heats my room with lock contention
  2016-08-04 14:28 ` Chris Mason
@ 2016-08-05  3:01   ` Dave Chinner
  2016-08-05 19:34     ` Chris Mason
  0 siblings, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2016-08-05  3:01 UTC (permalink / raw)
  To: Chris Mason; +Cc: linux-btrfs

On Thu, Aug 04, 2016 at 10:28:44AM -0400, Chris Mason wrote:
> 
> 
> On 08/04/2016 02:41 AM, Dave Chinner wrote:
> >
> >Simple test. 8GB pmem device on a 16p machine:
> >
> ># mkfs.btrfs /dev/pmem1
> ># mount /dev/pmem1 /mnt/scratch
> ># dbench -t 60 -D /mnt/scratch 16
> >
> >And heat your room with the warm air rising from your CPUs. Top
> >half of the btrfs profile looks like:
.....
> >Performance vs CPU usage is:
> >
> >nprocs		throughput		cpu usage
> >1		440MB/s			 50%
> >2		770MB/s			100%
> >4		880MB/s			250%
> >8		690MB/s			450%
> >16		280MB/s			950%
> >
> >In comparison, at 8-16 threads ext4 is running at ~2600MB/s and
> >XFS is running at ~3800MB/s. Even if I throw 300-400 processes at
> >ext4 and XFS, they only drop to ~1500-2000MB/s as they hit internal
> >limits.
> >
> Yes, with dbench btrfs does much much better if you make a subvol
> per dbench dir.  The difference is pretty dramatic.  I'm working on
> it this month, but focusing more on database workloads right now.

You've been giving this answer to lock contention reports for the
past 6-7 years, Chris.  I really don't care about getting big
benchmark numbers with contrived setups - the "use multiple
subvolumes" solution is simply not practical for users or their
workloads.  The default config should behave sanely and not
contribute to global warming like this.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [4.8] btrfs heats my room with lock contention
  2016-08-05  3:01   ` Dave Chinner
@ 2016-08-05 19:34     ` Chris Mason
  0 siblings, 0 replies; 4+ messages in thread
From: Chris Mason @ 2016-08-05 19:34 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-btrfs



On 08/04/2016 11:01 PM, Dave Chinner wrote:
> On Thu, Aug 04, 2016 at 10:28:44AM -0400, Chris Mason wrote:
>>
>>
>> On 08/04/2016 02:41 AM, Dave Chinner wrote:
>>>
>>> Simple test. 8GB pmem device on a 16p machine:
>>>
>>> # mkfs.btrfs /dev/pmem1
>>> # mount /dev/pmem1 /mnt/scratch
>>> # dbench -t 60 -D /mnt/scratch 16
>>>
>>> And heat your room with the warm air rising from your CPUs. Top
>>> half of the btrfs profile looks like:
> .....
>>> Performance vs CPU usage is:
>>>
>>> nprocs		throughput		cpu usage
>>> 1		440MB/s			 50%
>>> 2		770MB/s			100%
>>> 4		880MB/s			250%
>>> 8		690MB/s			450%
>>> 16		280MB/s			950%
>>>
>>> In comparison, at 8-16 threads ext4 is running at ~2600MB/s and
>>> XFS is running at ~3800MB/s. Even if I throw 300-400 processes at
>>> ext4 and XFS, they only drop to ~1500-2000MB/s as they hit internal
>>> limits.
>>>
>> Yes, with dbench btrfs does much much better if you make a subvol
>> per dbench dir.  The difference is pretty dramatic.  I'm working on
>> it this month, but focusing more on database workloads right now.
>
> You've been giving this answer to lock contention reports for the
> past 6-7 years, Chris.  I really don't care about getting big
> benchmark numbers with contrived setups - the "use multiple
> subvolumes" solution is simply not practical for users or their
> workloads.  The default config should behave sanely and not
> contribute to global warming like this.
>

The btree setup that causes the lock contention here makes some other 
benchmarks faster.  Needing to create subvolumes to fix performance 
problems on dbench is far from ideal, but in production here the 
tradeoffs have been worth it.

This one definitely comes up during dbench and fs_mark and much less 
often elsewhere.  For the workloads that do hit this lock contention, 
splitting things out into subvolumes hugely reduces metadata 
fragmentation on reads.  So it's not just CPU that subvolumes help 
with, but spindle time too.

It's true I haven't invested time into guessing when the admin wants to 
split on a per-subvolume basis.  Still, I do love the polar bears, so 
I'll take another shot at the btree lock.

-chris

