public inbox for linux-xfs@vger.kernel.org
* XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
From: Emmanuel Lacour @ 2013-11-28  9:13 UTC
  To: xfs


Dear XFS users,


I run a Ceph cluster using XFS on Debian wheezy servers with Linux 3.10
(Debian backports), and I see the following line in our logs:

XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
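
If I am decoding the mode correctly against a 3.10-era include/linux/gfp.h
(please correct me if not), 0x250 breaks down as:

	0x250 = __GFP_WAIT (0x10) | __GFP_IO (0x40) | __GFP_NOWARN (0x200)
	      = GFP_NOFS | __GFP_NOWARN

i.e. an allocation made in filesystem context that may sleep but must not
recurse back into the filesystem to reclaim memory.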

Does this reveal a problem in my setup, or can I safely ignore it? If it
is a problem, can someone give me a hint on how to solve it?


* XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
From: Josef 'Jeff' Sipek @ 2013-08-21 15:24 UTC
  To: xfs

We've started experimenting with larger directory block sizes to avoid
directory fragmentation.  Everything seems to work fine, except that the log
is spammed with these lovely debug messages:

	XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
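
For reference, the directory block size is chosen at mkfs time; a sketch of
the sort of invocation involved (the device name is illustrative, but the
64k value matches the naming section of the xfs_info output further down):

	# mkfs.xfs -n size=64k /dev/sda9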

From looking at the code, it looks like each of those messages (there are
thousands) equates to 100 trips through the allocation retry loop.  My guess
is that the larger directory blocks require multi-page allocations, which
are harder to satisfy.  This is with a 3.10 kernel.
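
The loop in question, paraphrased from memory of fs/xfs/kmem.c in 3.10-era
kernels (a sketch, not a verbatim copy; check your exact tree):

	void *
	kmem_alloc(size_t size, xfs_km_flags_t flags)
	{
		int	retries = 0;
		gfp_t	lflags = kmem_flags_convert(flags);
		void	*ptr;

		do {
			ptr = kmalloc(size, lflags);
			if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
				return ptr;
			/* complain once per 100 failed attempts */
			if (!(++retries % 100))
				xfs_err(NULL,
		"possible memory allocation deadlock in %s (mode:0x%x)",
						__func__, lflags);
			/* back off and give writeback a chance to free memory */
			congestion_wait(BLK_RW_ASYNC, HZ/50);
		} while (1);
	}

So each message corresponds to at least 100 kmalloc() failures, each
followed by up to a 20ms backoff.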

The hardware is something like this (I can find out the exact config if you want):

	32 cores
	128 GB RAM
	LSI 9271-8i RAID (one big RAID-60 with 36 disks, partitioned)

As I hinted at earlier, we end up with pretty big directories.  We can
semi-reliably trigger this when we rsync the data between two (identical)
hosts over 10GbE.

# xfs_info /dev/sda9 
meta-data=/dev/sda9              isize=256    agcount=6, agsize=268435455 blks 
         =                       sectsz=512   attr=2 
data     =                       bsize=4096   blocks=1454213211, imaxpct=5 
         =                       sunit=0      swidth=0 blks 
naming   =version 2              bsize=65536  ascii-ci=0 
log      =internal               bsize=4096   blocks=521728, version=2 
         =                       sectsz=512   sunit=0 blks, lazy-count=1 
realtime =none                   extsz=4096   blocks=0, rtextents=0

/proc/slabinfo: https://www.copy.com/s/1x1yZFjYO2EI/slab.txt
sysrq m output: https://www.copy.com/s/mYfMYfJJl2EB/sysrq-m.txt
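
Incidentally, if the multi-page-allocation guess above is right (with 4k
pages, a 64k contiguous kmalloc is an order-4 allocation), one way to see
whether high-order pages are actually scarce is to watch the buddy
allocator state; each column is the count of free blocks of order
0, 1, 2, ... per zone:

	# cat /proc/buddyinfo

If the columns from order 4 upward sit at or near zero while the message is
being logged, memory fragmentation is the likely culprit.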


While I realize that the message itself isn't an error, it does mean that
the system is having a hard time allocating memory.  This could potentially
lead to bad performance, or even an actual deadlock.  Do you have any
suggestions?

Thanks,

Jeff.

-- 
The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man.
		- George Bernard Shaw


Thread overview: 11+ messages

2013-11-28  9:13 XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250) Emmanuel Lacour
2013-11-28 10:05 ` Dave Chinner
2013-12-03  9:53   ` Emmanuel Lacour
2013-12-03 12:50     ` Dave Chinner
2013-12-03 16:28       ` Yann Dupont
2013-12-09  9:47       ` Emmanuel Lacour
2013-12-11 20:22       ` Ben Myers
2013-12-11 23:53         ` Dave Chinner
  -- strict thread matches above, loose matches on Subject: below --
2013-08-21 15:24 Josef 'Jeff' Sipek
2013-08-22  2:25 ` Dave Chinner
2013-08-22 15:07   ` Josef 'Jeff' Sipek
