public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Jianan Wang <wangjianan.zju@gmail.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: Question on the xfs inode slab memory
Date: Fri, 2 Jun 2023 07:43:57 +1000	[thread overview]
Message-ID: <ZHkRHW9Fd19du0Zv@dread.disaster.area> (raw)
In-Reply-To: <7572072d-8132-d918-285c-3391cb041cff@gmail.com>

On Wed, May 31, 2023 at 11:21:41PM -0700, Jianan Wang wrote:
> Seems the auto-wrapping issue is on my gmail.... using thunderbird should be better...

Thanks!

> Resend the slabinfo and meminfo output here:
> 
> Linux # cat /proc/slabinfo
> slabinfo - version: 2.1
> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
.....
> xfs_dqtrx              0      0    528   31    4 : tunables    0    0    0 : slabdata      0      0      0
> xfs_dquot              0      0    496   33    4 : tunables    0    0    0 : slabdata      0      0      0
> xfs_buf           2545661 3291582    384   42    4 : tunables    0    0    0 : slabdata  78371  78371      0
> xfs_rui_item           0      0    696   47    8 : tunables    0    0    0 : slabdata      0      0      0
> xfs_rud_item           0      0    176   46    2 : tunables    0    0    0 : slabdata      0      0      0
> xfs_inode         23063278 77479540   1024   32    8 : tunables    0    0    0 : slabdata 2425069 2425069      0
> xfs_efd_item        4662   4847    440   37    4 : tunables    0    0    0 : slabdata    131    131      0
> xfs_buf_item        8610   8760    272   30    2 : tunables    0    0    0 : slabdata    292    292      0
> xfs_trans           1925   1925    232   35    2 : tunables    0    0    0 : slabdata     55     55      0
> xfs_da_state        1632   1632    480   34    4 : tunables    0    0    0 : slabdata     48     48      0
> xfs_btree_cur       1728   1728    224   36    2 : tunables    0    0    0 : slabdata     48     48      0

There's no xfs_ili slab cache - this kernel must be using merged
slabs, so I'm going to have to infer how many inodes are dirty from
other slabs. The inode log item is ~190 bytes in size, so....

> skbuff_ext_cache  16454495 32746392    192   42    2 : tunables    0    0    0 : slabdata 779676 779676      0

Yup, there they are - a 192 byte slab with 16 million active
objects. Not all of those inodes will be dirty right now, but ~65%
of the inodes cached in memory have been dirty at some point.
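As a back-of-the-envelope check, that ratio can be computed directly
from the two slabinfo lines quoted above. Note it is only an upper
bound on the dirtied fraction, since the merged 192 byte cache also
counts unrelated allocations of the same size:

```shell
# Ratio of active objects in the merged 192-byte cache (which holds the
# ~190-byte inode log items, among other things) to active xfs_inode
# objects.  In slabinfo, field 1 is the cache name and field 2 the
# active object count.  Input is the two lines quoted above.
awk '{ active[$1] = $2 }
     END { printf "%.0f%%\n",
           100 * active["skbuff_ext_cache"] / active["xfs_inode"] }' <<'EOF'
xfs_inode         23063278 77479540   1024   32    8 : tunables    0    0    0 : slabdata 2425069 2425069      0
skbuff_ext_cache  16454495 32746392    192   42    2 : tunables    0    0    0 : slabdata 779676 779676      0
EOF
```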

So, yes, it is highly likely that your memory reclaim/OOM problems
are caused by blocking on dirty inodes in memory reclaim, which you
can only fix by upgrading to a newer kernel.
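For what it's worth, the merged-slab deduction can be confirmed
directly on a SLUB kernel: when merging is active,
/sys/kernel/slab/<cache> is a symlink into the merged cache, and
booting with slub_nomerge disables the behaviour. A sketch, assuming
the slab sysfs is mounted:

```shell
# On SLUB, a merged cache appears as a symlink in /sys/kernel/slab
# (e.g. pointing at a target like :a-0000192).  Resolving it shows
# which cache the inode log items actually landed in.
if [ -e /sys/kernel/slab/xfs_ili ]; then
    readlink -f /sys/kernel/slab/xfs_ili
else
    echo "xfs_ili not present - XFS not loaded or slab sysfs unavailable"
fi
```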

-Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 10+ messages
2023-05-31 21:29 Question on the xfs inode slab memory Jianan Wang
2023-06-01  0:08 ` Dave Chinner
2023-06-01  5:25   ` Jianan Wang
2023-06-01 15:06     ` Darrick J. Wong
2023-06-01  6:21   ` Jianan Wang
2023-06-01 21:43     ` Dave Chinner [this message]
2023-06-01 23:59       ` Jianan Wang
2023-06-06 23:00       ` Jianan Wang
2023-06-07  2:21         ` Dave Chinner
2023-06-27 18:40           ` Jianan Wang

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=ZHkRHW9Fd19du0Zv@dread.disaster.area \
    --to=david@fromorbit.com \
    --cc=linux-xfs@vger.kernel.org \
    --cc=wangjianan.zju@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox