linux-btrfs.vger.kernel.org archive mirror
* Excessive slab use deteriorates performance
@ 2015-09-26 11:59 Ferry Toth
  2015-09-29 12:25 ` David Sterba
  2015-10-04 20:35 ` Ferry Toth
  0 siblings, 2 replies; 3+ messages in thread
From: Ferry Toth @ 2015-09-26 11:59 UTC (permalink / raw)
  To: linux-btrfs

We have 2 almost identical servers, with the following differences:
1. 16GB RAM, 4 disks in RAID10
2. 8GB RAM, 2 disks in RAID1 (we also tried RAID0)

The 2nd machine actually started life as a restore of a snapshot of the 
first, so it runs much the same services, except the one we intentionally 
disabled.

We have noticed on both machines that after a day of uptime, logging into 
KDE (as well as other file-intensive tasks, like apt) is extremely slow 
(especially compared to desktop/laptop machines with a slower CPU and a 
single disk).

It seems that the machine with the most memory actually suffers the most. 
User-space memory use is very low (< 1GB), yet total memory consumption is 
almost 100%. That would be fine if it went to file buffers, but it doesn't: 
it goes to slab, specifically to btrfs_inode, dentry, radix_tree_node and 
btrfs_extent_buffer, leaving almost no memory available for buffers or user 
space.

As far as we know these slabs should be freed when needed, and there should 
be no reason to tinker with them. However, we found that:
sync ; echo 2 > /proc/sys/vm/drop_caches  # free dentries and inodes
restores performance for another day.
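
For reference, the slab breakdown can be inspected with something like the 
following (assuming the slabtop tool from procps is installed):

  grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo  # total slab and its reclaimable split
  slabtop -o -s c | head -n 15                          # largest slab caches, sorted by cache size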

Looking into the jobs that run at night, it appears to be caused by the cron 
script that runs updatedb (to create the database for locate).
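
If updatedb really is the trigger, one possible mitigation (the paths below 
are only examples, adjust to your layout) is to prune filesystems and paths 
in /etc/updatedb.conf so the nightly scan touches far fewer inodes:

  PRUNE_BIND_MOUNTS="yes"
  PRUNEFS="NFS nfs nfs4 afs proc sysfs tmpfs"
  PRUNEPATHS="/tmp /var/spool /media /mnt /.snapshots"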

In the past (when using ext4) we never saw this problem, but then (in the 
old days) the server didn't have so much memory.

Could there be any relation to btrfs causing the mentioned slabs to not be 
automatically freed?

In the meantime we have set:
vfs_cache_pressure = 10000
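
We apply it at runtime and persist it roughly like this (the sysctl name is 
vm.vfs_cache_pressure; the file name under /etc/sysctl.d is just an example):

  sysctl -w vm.vfs_cache_pressure=10000                                   # takes effect immediately
  echo 'vm.vfs_cache_pressure = 10000' > /etc/sysctl.d/90-vfs-cache.conf  # reapplied on every boot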

And this seems to keep the slab total at 2.5GB (with btrfs_inode at 
1.7GB).

Still, manually doing drop_caches reduces the slab total to 0.4GB, with 
btrfs_inode at 0.01GB.

I'm not sure, but a read-only operation scanning all files on disk using 
kernel memory to cache things that are barely used 12 hours later, while 
keeping more useful file caches from growing, seems not really optimal.

Has anybody else seen this behavior?

---
Ferry Toth




* Re: Excessive slab use deteriorates performance
  2015-09-26 11:59 Excessive slab use deteriorates performance Ferry Toth
@ 2015-09-29 12:25 ` David Sterba
  2015-10-04 20:35 ` Ferry Toth
  1 sibling, 0 replies; 3+ messages in thread
From: David Sterba @ 2015-09-29 12:25 UTC (permalink / raw)
  To: Ferry Toth; +Cc: linux-btrfs

On Sat, Sep 26, 2015 at 01:59:13PM +0200, Ferry Toth wrote:
> Could there be any relation to btrfs causing the mentioned slabs to not be 
> automatically freed?
> 
> In the meantime we have set:
> vfs_cache_pressure = 10000
> 
> And this seems to keep the slab total at 2.5GB (with btrfs_inode at 
> 1.7GB).
> 
> Still, manually doing drop_caches reduces the slab total to 0.4GB, with 
> btrfs_inode at 0.01GB.
> 
> I'm not sure, but a read-only operation scanning all files on disk using 
> kernel memory to cache things that are barely used 12 hours later, while 
> keeping more useful file caches from growing, seems not really optimal.
> 
> Has anybody else seen this behavior?

Yes, a friend showed me an almost identical situation some time ago. Slab
caches full, not reclaimed, and dropping caches "fixed" that, likely
caused by overnight cron jobs. At that time I was interested in whether it
was a leak, but as the slab usage did drop I did not debug it further.

The sysctl vfs_cache_pressure should help to tune the system. Btrfs
allocates its slabs with the "reclaimable" flag set, so there should not be
any direct obstacle to reclaiming them; something else must be going on.
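
A quick way to check that split and the flag itself, assuming a SLUB kernel 
that exposes /sys/kernel/slab:

  grep -E 'SReclaimable|SUnreclaim' /proc/meminfo    # reclaimable vs. unreclaimable slab
  cat /sys/kernel/slab/btrfs_inode/reclaim_account   # 1 = cache created with SLAB_RECLAIM_ACCOUNT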


* Re: Excessive slab use deteriorates performance
  2015-09-26 11:59 Excessive slab use deteriorates performance Ferry Toth
  2015-09-29 12:25 ` David Sterba
@ 2015-10-04 20:35 ` Ferry Toth
  1 sibling, 0 replies; 3+ messages in thread
From: Ferry Toth @ 2015-10-04 20:35 UTC (permalink / raw)
  To: linux-btrfs

David Sterba wrote:

> On Sat, Sep 26, 2015 at 01:59:13PM +0200, Ferry Toth wrote:
>> Could there be any relation to btrfs causing the mentioned slabs to not
>> be automatically freed?
>> 
>> In the meantime we have set:
>> vfs_cache_pressure = 10000
>> 
>> And this seems to keep the slab total at 2.5GB (with btrfs_inode at
>> 1.7GB).
>> 
>> Still, manually doing drop_caches reduces the slab total to 0.4GB, with
>> btrfs_inode at 0.01GB.
>> 
>> I'm not sure, but a read-only operation scanning all files on disk using
>> kernel memory to cache things that are barely used 12 hours later, while
>> keeping more useful file caches from growing, seems not really optimal.
>> 
>> Has anybody else seen this behavior?
> 
> Yes, a friend showed me an almost identical situation some time ago. Slab
> caches full, not reclaimed, and dropping caches "fixed" that, likely
> caused by overnight cron jobs. At that time I was interested in whether it
> was a leak, but as the slab usage did drop I did not debug it further.

In our nightly cron jobs I found both locate and mlocate. We now suspect 
locate and have removed it; as we understand it, mlocate is a more efficient 
replacement for locate, so having both was probably redundant.

> The sysctl vfs_cache_pressure should help to tune the system. Btrfs
> allocates its slabs with the "reclaimable" flag set, so there should not be
> any direct obstacle to reclaiming them; something else must be going on.

Exactly. Since it reclaims without trouble when we manually drop the slabs, 
and reclaims better when vfs_cache_pressure is set excessively high, there 
might be something wrong either in btrfs or in the reclaim mechanism 
elsewhere in the kernel.

But if it is the latter, you would expect ext users to have similar 
troubles, at least with excessive slabs for dentry.
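
For a per-cache comparison across machines, the raw numbers can be read from 
/proc/slabinfo (needs root), e.g.:

  grep -E '^(dentry|btrfs_inode|ext4_inode_cache)' /proc/slabinfo   # active objs, total objs, object size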

I would like to report this as a bug, but am not sure where to put it.

---
Ferry 


