From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: Chris Holcombe <cholcombe@box.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: XFS: possible memory allocation deadlock in kmem_alloc
Date: Mon, 4 Nov 2019 16:01:38 -0800
Message-ID: <20191105000138.GT4153244@magnolia>
In-Reply-To: <CAL3_v4PZLtb4hVWksWR_tkia+A6rjeR2Xc3H-buCp7pMySxE2Q@mail.gmail.com>
On Mon, Nov 04, 2019 at 03:38:12PM -0800, Chris Holcombe wrote:
> After upgrading from Scientific Linux 6 to CentOS 7 I'm starting to
> see a sharp uptick in dmesg lines about XFS hitting a possible memory
> allocation deadlock. All the searching I did through previous mailing
> list archives and blog posts points to large files having too many
> extents.
> I don't think that is the case with these servers, so I'm reaching out
> in the hope of finding out what is going on. The largest files I can
> find on the servers are roughly 15GB with maybe 9 extents total; the
> vast majority are small, with only a few extents each.
> I've set up a cron job to drop the cache every 5 minutes, which is
> helping but not eliminating the problem. These servers are dedicated
> to storing data that is written through nginx WebDAV. AFAIK an nginx
> WebDAV PUT does not use sparse files.
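> For reference, the cron entry is essentially the following (paraphrased
> from memory, path hypothetical; drop_caches=3 asks the kernel to free
> both the page cache and reclaimable slab caches such as dentries and
> inodes):

```
# /etc/cron.d/drop-caches -- hypothetical file; runs every 5 minutes as root
*/5 * * * * root /bin/sh -c 'echo 3 > /proc/sys/vm/drop_caches'
```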
>
> Some info about the servers this issue is occurring on:
>
> nginx is writing to 82TB filesystems:
> xfs_info /dev/sdb1
> meta-data=/dev/sdb1              isize=512    agcount=82, agsize=268435424 blks
>          =                       sectsz=4096  attr=2, projid32bit=1
>          =                       crc=1        finobt=0 spinodes=0
> data     =                       bsize=4096   blocks=21973302784, imaxpct=1
>          =                       sunit=16     swidth=144 blks
> naming   =version 2              bsize=65536  ascii-ci=0 ftype=1
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> xfs_db -r /dev/sdb1
> xfs_db> frag
> actual 6565, ideal 5996, fragmentation factor 8.67%
> Note, this number is largely meaningless.
> Files on this filesystem average 1.09 extents per file
>
> I see dmesg lines like the following, with varying size numbers:
> [6262080.803537] XFS: nginx(2514) possible memory allocation deadlock size 50184 in kmem_alloc (mode:0x250)
Full kernel logs, please. There's not enough info here to tell what's
trying to grab a 50K memory buffer.
--D
> Typical extents for the largest files on the filesystem are:
>
> find /mnt/jbod/ -type f -size +15G -printf '%s %p\n' -exec xfs_bmap -vp {} \; | tee extents
> 17093242444 /mnt/jbod/boxfiler3038-sdb1/data/220190411/ephemeral/2019-08-12/18/0f6bee4d6ee0136af3b58eef611e2586.enc
> /mnt/jbod/boxfiler3038-sdb1/data/220190411/ephemeral/2019-08-12/18/0f6bee4d6ee0136af3b58eef611e2586.enc:
> EXT: FILE-OFFSET            BLOCK-RANGE                AG AG-OFFSET                 TOTAL FLAGS
>   0: [0..1919]:             51660187008..51660188927   24 (120585600..120587519)     1920 00010
>   1: [1920..8063]:          51660189056..51660195199   24 (120587648..120593791)     6144 00011
>   2: [8064..4194175]:       51660210816..51664396927   24 (120609408..124795519)  4186112 00001
>   3: [4194176..11552759]:   51664560768..51671919351   24 (124959360..132317943)  7358584 00101
>   4: [11552760..33385239]:  51678355840..51700188319   24 (138754432..160586911) 21832480 00111
>
>
> Memory size:
> free -m
>               total        used        free      shared  buff/cache   available
> Mem:          64150        6338         421           2       57390       57123
> Swap:          2047           6        2041
>
> cat /etc/redhat-release
> CentOS Linux release 7.6.1810 (Core)
>
> cat /proc/buddyinfo
> Node 0, zone      DMA      0      0      1      0      1      0      0      0      0      1      3
> Node 0, zone    DMA32  31577     88      2      0      0      0      0      0      0      0      0
> Node 0, zone   Normal  33331   3323    582     87      0      0      0      0      0      0      0
> Node 1, zone   Normal  51121   6343    822     77      1      0      0      0      0      0      0
>
> tuned-adm shows 'balanced' as the current tuning profile.
>
> Thanks for your help!
Thread overview: 18+ messages
2019-11-04 23:38 XFS: possible memory allocation deadlock in kmem_alloc Chris Holcombe
2019-11-05 0:01 ` Darrick J. Wong [this message]
2019-11-05 0:31 ` Eric Sandeen
[not found] ` <CAC752AmahECFry9x=pvqDkwQUj1PEJjoWGa2KFG1uaTzT1Bbnw@mail.gmail.com>
2019-11-05 4:21 ` Eric Sandeen
2019-11-05 16:25 ` Chris Holcombe
2019-11-05 17:11 ` Eric Sandeen
2019-11-05 19:53 ` Chris Holcombe
2019-11-05 20:08 ` Eric Sandeen
[not found] ` <CAC752AnZ4biDGk6V17URQm5YVp=MwZBhiMH8=t733zaypxUsmA@mail.gmail.com>
2019-11-05 20:47 ` Eric Sandeen
[not found] ` <CAC752A=y9PMEQ1e4mXskha1GFeKXWi8PsdBW-nX40pgFCYp1Uw@mail.gmail.com>
2019-11-05 21:23 ` Eric Sandeen