From: Dave Chinner <david@fromorbit.com>
To: Lin Feng <linf@chinanetcenter.com>
Cc: dchinner@redhat.com, xfs@oss.sgi.com
Subject: Re: [BUG REPORT] missing memory counter introduced by xfs
Date: Thu, 8 Sep 2016 07:22:06 +1000
Message-ID: <20160907212206.GP30056@dastard>
In-Reply-To: <57CFEDA3.9000005@chinanetcenter.com>
On Wed, Sep 07, 2016 at 06:36:19PM +0800, Lin Feng wrote:
> Hi all nice xfs folks,
>
> I'm a rookie, still quite new to xfs, and I've run into an issue the
> same as the one described in the following link:
> http://oss.sgi.com/archives/xfs/2014-04/msg00058.html
>
> On my box (running a cephfs OSD on xfs, kernel 2.6.32-358) I summed
> all the memory counters I could find, but nearly 26GB of memory
> seems to have gone missing. It comes back after I echo 2 >
> /proc/sys/vm/drop_caches, so it seems this memory can be reclaimed
> by slab.
It isn't "reclaimed by slab". The XFS metadata buffer cache is
reclaimed by a memory shrinker; shrinkers reclaim objects from
caches that aren't the page cache. "echo 2 >
/proc/sys/vm/drop_caches" runs the memory shrinkers rather than
page cache reclaim. Many slab caches are backed by memory
shrinkers, which is why "2" is commonly thought of as "slab
reclaim"....
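
For example, you can watch that reclaim happen directly (a minimal
sketch, assuming root and a standard /proc layout; the exact
counter values are illustrative):

    # snapshot the obvious counters before reclaim
    grep -E '^(MemFree|Cached|Slab)' /proc/meminfo

    # "2" runs the shrinkers, not page cache reclaim
    echo 2 > /proc/sys/vm/drop_caches

    # the "missing" memory should reappear as MemFree
    grep -E '^(MemFree|Cached|Slab)' /proc/meminfo
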
> And according to what David said in a reply on the list:
..
> That's where your memory is - in metadata buffers. The xfs_buf slab
> entries are just the handles - the metadata pages in the buffers
> usually take much more space and it's not accounted to the slab
> cache nor the page cache.
That's exactly the case.
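
You can see that discrepancy from userspace (a rough sketch,
assuming root and a standard /proc layout): the slab accounting for
xfs_buf covers only the handle structures, while the data pages
each buffer holds show up in neither Slab nor Cached:

    # handle structures only - a few hundred bytes per buffer
    grep -E '^xfs_(buf|inode) ' /proc/slabinfo

    # the buffer data pages are missing from both of these
    grep -E '^(Slab|Cached)' /proc/meminfo
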
> Minimum / Average / Maximum Object : 0.02K / 0.33K / 4096.00K
>
> OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
> 4383036 4383014 99% 1.00K 1095759 4 4383036K xfs_inode
> 5394610 5394544 99% 0.38K 539461 10 2157844K xfs_buf
So, you have *5.4 million* active metadata buffers. Each buffer will
hold 1 or 2 4k pages on your kernel, so simple math says 4M * 4k +
1.4M * 8k = 26G. There's no missing counter here....
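
Spelling that arithmetic out (a back-of-the-envelope sketch; the
~4M single-page / ~1.4M two-page split is an assumption for
illustration, not a measured breakdown):

    # ~4M buffers holding one 4k page + ~1.4M holding two
    echo $(( (4000000 * 4096 + 1400000 * 8192) / 1048576 ))
    # prints 26562 (MiB), i.e. ~26G of unaccounted buffer pages
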
Obviously your workload is doing something extremely
metadata-intensive to have a cache footprint like this - you have
more cached buffers than inodes, dentries, etc. That in itself is
very unusual - can you describe what is stored on that filesystem
and how large the attributes stored in each inode are?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com