linux-xfs.vger.kernel.org archive mirror
* XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
@ 2016-12-03 19:08 Cyril Peponnet
  2016-12-04 21:49 ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-03 19:08 UTC (permalink / raw)
  To: linux-xfs

Hi xfs community :),

We have a glusterfs setup running under CentOS 7.2, using XFS as the underlying storage for the bricks.
The volume is used to store VM snapshots that get created dynamically from the hypervisors through glusterfs mount points.

While this is working fine, we have issues from time to time that make the mount points hang.

We have the famous "XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)" errors appearing from time to time on some of our gluster nodes.

Here is a related RH solution link that does not provide a definitive fix:
https://access.redhat.com/solutions/532663

While a cache drop (echo 3 > /proc/sys/vm/drop_caches) and an xfs_fsr run can fix the issue for a random amount of time, it still occurs from time to time.

As a workaround I implemented a cache drop whenever the message appears, plus a defrag every 2 days… but that’s not really convenient and we still have the issue.
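
A rough sketch of that workaround, run as root on the affected gluster node (the brick path here is just our local layout):

  # echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries and inodes when the message appears
  # xfs_fsr -v /export/raid/scratch        # defragment the brick filesystem (from cron, every 2 days)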

I know that setting an extent size hint would fix that for those files, but I cannot use it as the hypervisors create their snapshots through the glusterfs or NFS protocol (and it’s not recommended to write directly on the glusterfs bricks either :/).

Do you have any idea on how to fix this deadlock in our setup?

Let me know if you need more information about our systems (like kernel version, underlying storage…)

Thanks a lot for your input :)

—
Regards,
Cyril


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-03 19:08 XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup Cyril Peponnet
@ 2016-12-04 21:49 ` Dave Chinner
  2016-12-04 22:07   ` Cyril Peponnet
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2016-12-04 21:49 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Sat, Dec 03, 2016 at 11:08:58AM -0800, Cyril Peponnet wrote:
> Hi xfs community :),
> 
> We have a glusterfs setup running under centos7.2. We are using
> xfs as underlying storage for the bricks.  The volume is used to
> store vm snapshots that get created dynamically from the
> hypervisors through glusterfs mount points.
> 
> While this is working fine we have some issues from time to time
> that make the mount points hang.
> 
> We have the famous XFS: possible memory allocation deadlock in
> kmem_alloc (mode:0x250) errors appearing from time to time on some
> of our gluster nodes.

And the complete output in dmesg is?

> Let me know if you need more information about our systems (like
> kernel version, underlying storage…)

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

-Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-04 21:49 ` Dave Chinner
@ 2016-12-04 22:07   ` Cyril Peponnet
  2016-12-04 22:46     ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-04 22:07 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs

Hi, here are the details. The issue is on the scratch RAID array (used to store KVM snapshots); the other RAID array is fine (no snapshot storage).

	• kernel version (uname -a) : Linux gluster05 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

	• xfsprogs version (xfs_repair -V) : xfs_repair version 3.2.0-alpha2

	• number of CPUs: 2x Intel® Xeon® CPU E5-2630 v3 @ 2.40GHz (8 cores each)

	• contents of /proc/meminfo
MemTotal:       65699268 kB
MemFree:         2058304 kB
MemAvailable:   62753028 kB
Buffers:              12 kB
Cached:         57664044 kB
SwapCached:        14840 kB
Active:         26757700 kB
Inactive:       31967204 kB
Active(anon):     502064 kB
Inactive(anon):   719452 kB
Active(file):   26255636 kB
Inactive(file): 31247752 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194300 kB
SwapFree:        3679388 kB
Dirty:              6804 kB
Writeback:           120 kB
AnonPages:       1048576 kB
Mapped:            55104 kB
Shmem:            160888 kB
Slab:            3999548 kB
SReclaimable:    3529220 kB
SUnreclaim:       470328 kB
KernelStack:        4240 kB
PageTables:         9464 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    37043932 kB
Committed_AS:    2200816 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      422024 kB
VmallocChunk:   34324311040 kB
HardwareCorrupted:     0 kB
AnonHugePages:    899072 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      111936 kB
DirectMap2M:     7118848 kB
DirectMap1G:    61865984 kB

	• contents of /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=32842984k,nr_inodes=8210746,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/system-root_vol / xfs rw,relatime,attr2,inode64,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=36,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
sunrpc /proc/fs/nfsd nfsd rw,relatime 0 0
/dev/sdc1 /boot ext4 rw,relatime,data=ordered 0 0
/dev/sda /export/raid/data xfs rw,noatime,nodiratime,attr2,nobarrier,inode64,logbsize=128k,sunit=256,swidth=1536,noquota 0 0
/dev/sdb /export/raid/scratch xfs rw,noatime,nodiratime,attr2,nobarrier,inode64,logbsize=128k,sunit=256,swidth=1024,noquota 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
	
	• contents of /proc/partitions
major minor  #blocks  name

   8        0 11717836800 sda
   8       16 7811891200 sdb
   8       32  500107608 sdc
   8       33     512000 sdc1
   8       34    4194304 sdc2
   8       35  495399936 sdc3
 253        0  495386624 dm-0
	• RAID layout (hardware and/or software) - Hardware
VD LIST :
=======

----------------------------------------------------------------
DG/VD TYPE  State Access Consist Cache Cac sCC     Size Name
----------------------------------------------------------------
0/0   RAID0 Optl  RW     Yes     RAWBC -   ON  7.275 TB scratch
----------------------------------------------------------------

Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|B=Blocked|Consist=Consistent|
R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency

Physical Drives = 4

PD LIST :
=======

-----------------------------------------------------------------------------
EID:Slt DID State DG     Size Intf Med SED PI SeSz Model                  Sp
-----------------------------------------------------------------------------
252:0     8 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD2000FYYZ-01UL1B2 U
252:1    10 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD2000FYYZ-01UL1B2 U
252:2    11 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD2000FYYZ-01UL1B2 U
252:3     9 Onln   0 1.818 TB SATA HDD N   N  512B WDC WD2000FYYZ-01UL1B2 U
-----------------------------------------------------------------------------

	• xfs_info output on the filesystem in question

xfs_info /export/raid/scratch/
meta-data=/dev/sdb               isize=256    agcount=32, agsize=61030368 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=1952971776, imaxpct=5
         =                       sunit=32     swidth=128 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

	• dmesg output showing all error messages and stack traces

Nothing relevant in dmesg except several occurrences of the following.

[7649583.386283] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
[7649585.370830] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
[7649587.241290] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
[7649589.243881] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)

Thanks

> On Dec 4, 2016, at 1:49 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Sat, Dec 03, 2016 at 11:08:58AM -0800, Cyril Peponnet wrote:
>> Hi xfs community :),
>> 
>> We have a glusterfs setup running under centos7.2. We are using
>> xfs as underlying storage for the bricks.  The volume is used to
>> store vm snapshots that’s gets created dynamically from the
>> hypervisors through glusterfs mount points.
>> 
>> While this is working fine we have some issues from time to time
>> that make the mount points hang.
>> 
>> We have the famous XFS: possible memory allocation deadlock in
>> kmem_alloc (mode:0x250) errors appearing from time to time on some
>> of our gluster nodes.
> 
> And the complete output in dmesg is?
> 
>> Let me know if you need more information about our systems (like
>> kernel version, underlying storage…)
> 
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> 
> -Dave.
> -- 
> Dave Chinner
> david@fromorbit.com



* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-04 22:07   ` Cyril Peponnet
@ 2016-12-04 22:46     ` Dave Chinner
  2016-12-04 23:24       ` Cyril Peponnet
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2016-12-04 22:46 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Sun, Dec 04, 2016 at 02:07:18PM -0800, Cyril Peponnet wrote:
> Hi, here are the details. The issue is on the scratch RAID array
> (used to store kvm snapshots). The other raid array is fine (no
> snapshot storage).

How do you know that? There's no indication which filesystem is
generating the warnings....

FWIW, the only gluster snapshot proposal that I was aware of
was this one:

https://lists.gnu.org/archive/html/gluster-devel/2013-08/msg00004.html

Which used LVM snapshots to take snapshots of the entire brick. I
don't see any LVM in your config, so I'm not sure what snapshot
implementation you are using here. What are you using to take the
snapshots of your VM image files? Are you actually using the
qemu qcow2 snapshot functionality rather than anything native to
gluster?

Also, can you attach the 'xfs_bmap -vp' output of some of these
image files and their snapshots?

> MemTotal:       65699268 kB

64GB RAM...

> MemFree:         2058304 kB
> MemAvailable:   62753028 kB
> Buffers:              12 kB
> Cached:         57664044 kB

56GB of cached file data. If you're getting high order allocation
failures (which I suspect is the problem) then this is a memory
fragmentation problem more than anything.
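
You can get a rough picture of that from the buddy allocator stats; each column is the count of free blocks of that order, so if the high-order columns are near zero while the messages fire, fragmentation is the likely culprit:

  # cat /proc/buddyinfo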

> ----------------------------------------------------------------
> DG/VD TYPE  State Access Consist Cache Cac sCC     Size Name
> ----------------------------------------------------------------
> 0/0   RAID0 Optl  RW     Yes     RAWBC -   ON  7.275 TB scratch
> ----------------------------------------------------------------
> 
> Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
> Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|B=Blocked|Consist=Consistent|
> R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
> AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled

IIRC, AWB means that if the cache goes into degraded/offline mode,
you're vulnerable to corruption/loss on power failure...

> xfs_info /export/raid/scratch/
> meta-data=/dev/sdb               isize=256    agcount=32, agsize=61030368 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0
> data     =                       bsize=4096   blocks=1952971776, imaxpct=5
>          =                       sunit=32     swidth=128 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=32 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

Nothing unusual there.

> Nothing relevant in dmesg except several occurrences of the following.
> 
> [7649583.386283] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
> [7649585.370830] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
> [7649587.241290] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
> [7649589.243881] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)

Ah, the kernel is old enough it doesn't have the added reporting to
tell us what the process and size of the allocation being requested
is.

Hmm - it's an xfs_err() call, that means we should be able to get a
stack trace out of the kernel if we turn the error level up to 11.

# echo 11 > /proc/sys/fs/xfs/error_level

And wait for it to happen again. That should give a stack trace
telling us where the issue is.
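
If the box might get rebooted before it triggers again, the same knob is also exposed as a sysctl, so something like this (the file name is just a suggestion) keeps it set across boots:

  # echo 'fs.xfs.error_level = 11' > /etc/sysctl.d/99-xfs-debug.conf
  # sysctl -p /etc/sysctl.d/99-xfs-debug.conf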

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-04 22:46     ` Dave Chinner
@ 2016-12-04 23:24       ` Cyril Peponnet
  2016-12-04 23:50         ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-04 23:24 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs


> On Dec 4, 2016, at 2:46 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Sun, Dec 04, 2016 at 02:07:18PM -0800, Cyril Peponnet wrote:
>> Hi here is the details. The issue is on the scratch RAID array
>> (used to store kvm snapshots). The other raid array is fine (no
>> snapshot storage).
> 
> How do you know that? There's no indication which filesystem is
> generating the warnings….

Because on the hypervisors I have both mount points, and only the one that accesses the scratch array was hanging when the deadlock occurred.

> 
> FWIW, the only gluster snapshot proposal that I was aware of
> was this one:
> 
> https://lists.gnu.org/archive/html/gluster-devel/2013-08/msg00004.html
> 
> Which used LVM snapshots to take snapshots of the entire brick. I
> don't see any LVM in your config, so I'm not sure what snapshot
> implementation you are using here. What are you using to take the
> snapshots of your VM image files? Are you actually using the
> qemu qcow2 snapshot functionality rather than anything native to
> gluster?
> 

Yes, sorry it was not clear enough: these are qemu-img snapshots, not native gluster snapshots.

> Also, can you attach the 'xfs_bmap -vp' output of some of these
> image files and their snapshots?

A snapshot: https://gist.github.com/CyrilPeponnet/8108c74b9e8fd1d9edbf239b2872378d (let me know if you need more; basically there are around 600 live snapshots sitting here).


> 
>> MemTotal:       65699268 kB
> 
> 64GB RAM...
> 
>> MemFree:         2058304 kB
>> MemAvailable:   62753028 kB
>> Buffers:              12 kB
>> Cached:         57664044 kB
> 
> 56GB of cached file data. If you're getting high order allocation
> failures (which I suspect is the problem) then this is a memory
> fragmentation problem more than anything.
> 
>> ----------------------------------------------------------------
>> DG/VD TYPE  State Access Consist Cache Cac sCC     Size Name
>> ----------------------------------------------------------------
>> 0/0   RAID0 Optl  RW     Yes     RAWBC -   ON  7.275 TB scratch
>> ----------------------------------------------------------------
>> 
>> Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|dgrd=Degraded
>> Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|B=Blocked|Consist=Consistent|
>> R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
>> AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
> 
> IIRC, AWB means that if the cache goes into degraded/offline mode,
> you’re vulnerable to corruption/loss on power failure…

Yes we have BBU + redundant PSU to address that.

> 
>> xfs_info /export/raid/scratch/
>> meta-data=/dev/sdb               isize=256    agcount=32, agsize=61030368 blks
>>         =                       sectsz=512   attr=2, projid32bit=1
>>         =                       crc=0
>> data     =                       bsize=4096   blocks=1952971776, imaxpct=5
>>         =                       sunit=32     swidth=128 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>> log      =internal               bsize=4096   blocks=521728, version=2
>>         =                       sectsz=512   sunit=32 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> Nothing unusual there.
> 
>> Nothing relevant in dmesg except several occurences of the following.
>> 
>> [7649583.386283] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
>> [7649585.370830] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
>> [7649587.241290] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
>> [7649589.243881] XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
> 
> Ah, the kernel is old enough it doesn't have the added reporting to
> tell us what the process and size of the allocation being requested
> is.
> 
> Hmm - it's an xfs_err() call, that means we should be able to get a
> stack trace out of the kernel if we turn the error level up to 11.
> 
> # echo 11 > /proc/sys/fs/xfs/error_level
> 
> And wait for it to happen again. That should give a stack trace
> telling us where the issue is.

It’s done; I will post it as soon as the issue occurs again.

Thanks Dave.

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com



* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-04 23:24       ` Cyril Peponnet
@ 2016-12-04 23:50         ` Dave Chinner
  2016-12-05  1:14           ` Cyril Peponnet
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2016-12-04 23:50 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Sun, Dec 04, 2016 at 03:24:50PM -0800, Cyril Peponnet wrote:
> > On Dec 4, 2016, at 2:46 PM, Dave Chinner <david@fromorbit.com>
> > Which used LVM snapshots to take snapshots of the entire brick.
> > I don't see any LVM in your config, so I'm not sure what
> > snapshot implementation you are using here. What are you using
> > to take the snapshots of your VM image files? Are you actually
> > using the qemu qcow2 snapshot functionality rather than anything
> > native to gluster?
> > 
> 
> Yes sorry it was not clear enough, qemu-img snapshots no native
> snapshots.

Ok, so that's a fragmentation problem in its own right: both
internal qcow2 fragmentation and file fragmentation.

> > Also, can you attach the 'xfs_bmap -vp' output of some of these
> > image files and their snapshots?
> 
> A snapshot:
> https://gist.github.com/CyrilPeponnet/8108c74b9e8fd1d9edbf239b2872378d
> (let me know if you need more basically there is around 600 live
> snapshots sitting here).

1200 extents, mostly small, almost entirely adjacent. Typical qcow2
file fragmentation pattern. That's not going to cause your memory
allocation problems - can you find one that has hundreds of
thousands of extents?
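
If it helps, a quick-and-dirty way to rank the images by extent count (the glob is a guess at your brick layout, and the count includes hole records so it is only approximate):

  for f in /export/raid/scratch/*.qcow2; do
      # xfs_bmap prints one header line, then roughly one line per extent
      echo "$(($(xfs_bmap "$f" | wc -l) - 1)) $f"
  done | sort -n | tail -5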

> > 
> > 56GB of cached file data. If you're getting high order
> > allocation failures (which I suspect is the problem) then this
> > is a memory fragmentation problem more than anything.
> > 
> >> ----------------------------------------------------------------
> >> DG/VD TYPE  State Access Consist Cache Cac sCC     Size Name
> >> ----------------------------------------------------------------
> >> 0/0   RAID0 Optl  RW     Yes     RAWBC -   ON  7.275 TB scratch
> >> ----------------------------------------------------------------
> >> 
> >> Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially
> >> Degraded|dgrd=Degraded Optl=Optimal|RO=Read Only|RW=Read
> >> Write|HD=Hidden|B=Blocked|Consist=Consistent| R=Read Ahead
> >> Always|NR=No Read Ahead|WB=WriteBack| AWB=Always
> >> WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
> > 
> > IIRC, AWB means that if the cache goes into degraded/offline
> > mode, you’re vulnerable to corruption/loss on power
> > failure…
> 
> Yes we have BBU + redundant PSU to address that.

BBU fails, data center loses power, corruption/data loss still
occurs. Not my problem, though.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-04 23:50         ` Dave Chinner
@ 2016-12-05  1:14           ` Cyril Peponnet
  2016-12-05  1:22             ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-05  1:14 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs


> On Dec 4, 2016, at 3:50 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Sun, Dec 04, 2016 at 03:24:50PM -0800, Cyril Peponnet wrote:
>>> On Dec 4, 2016, at 2:46 PM, Dave Chinner <david@fromorbit.com>
>>> Which used LVM snapshots to take snapshots of the entire brick.
>>> I don't see any LVM in your config, so I'm not sure what
>>> snapshot implementation you are using here. What are you using
>>> to take the snapshots of your VM image files? Are you actually
>>> using the qemu qcow2 snapshot functionality rather than anything
>>> native to gluster?
>>> 
>> 
>> Yes sorry it was not clear enough, qemu-img snapshots no native
>> snapshots.
> 
> Ok, so that's a fragmentation problem in its own right: both
> internal qcow2 fragmentation and file fragmentation.
> 
>>> Also, can you attach the 'xfs_bmap -vp' output of some of these
>>> image files and their snapshots?
>> 
>> A snapshot:
>> https://gist.github.com/CyrilPeponnet/8108c74b9e8fd1d9edbf239b2872378d
>> (let me know if you need more basically there is around 600 live
>> snapshots sitting here).
> 
> 1200 extents, mostly small, almost entirely adjacent. Typical qcow2
> file fragmentation pattern. That's not going to cause your memory
> allocation problems - can you find one that has hundreds of
> thousands of extents?

I found one with 10799109 extents :/ It is 576GB in size (I need to find out why this one is so big; this is not normal…)… Could it lead to the issue? I mean, could one file cause the entire FS to deadlock?

> 
>>> 
>>> 56GB of cached file data. If you're getting high order
>>> allocation failures (which I suspect is the problem) then this
>>> is a memory fragmentation problem more than anything.
>>> 
>>>> ----------------------------------------------------------------
>>>> DG/VD TYPE  State Access Consist Cache Cac sCC     Size Name
>>>> ----------------------------------------------------------------
>>>> 0/0   RAID0 Optl  RW     Yes     RAWBC -   ON  7.275 TB scratch
>>>> ----------------------------------------------------------------
>>>> 
>>>> Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially
>>>> Degraded|dgrd=Degraded Optl=Optimal|RO=Read Only|RW=Read
>>>> Write|HD=Hidden|B=Blocked|Consist=Consistent| R=Read Ahead
>>>> Always|NR=No Read Ahead|WB=WriteBack| AWB=Always
>>>> WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
>>> 
>>> IIRC, AWB means that if the cache goes into degraded/offline
>>> mode, you’re vulnerable to corruption/loss on power
>>> failure…
>> 
>> Yes we have BBU + redundant PSU to address that.
> 
> BBU fails, data center loses power, corruption/data loss still
> occurs. Not my problem, though.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com



* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-05  1:14           ` Cyril Peponnet
@ 2016-12-05  1:22             ` Dave Chinner
  2016-12-05  1:48               ` Cyril Peponnet
       [not found]               ` <C07DD929-5600-4934-A6B0-C0A7D83D7247@nuagenetworks.net>
  0 siblings, 2 replies; 15+ messages in thread
From: Dave Chinner @ 2016-12-05  1:22 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Sun, Dec 04, 2016 at 05:14:51PM -0800, Cyril Peponnet wrote:
> 
> > On Dec 4, 2016, at 3:50 PM, Dave Chinner <david@fromorbit.com>
> > wrote:
> > 
> > On Sun, Dec 04, 2016 at 03:24:50PM -0800, Cyril Peponnet wrote:
> >>> On Dec 4, 2016, at 2:46 PM, Dave Chinner <david@fromorbit.com>
> >>> Which used LVM snapshots to take snapshots of the entire
> >>> brick.  I don't see any LVM in your config, so I'm not sure
> >>> what snapshot implementation you are using here. What are you
> >>> using to take the snapshots of your VM image files? Are you
> >>> actually using the qemu qcow2 snapshot functionality rather
> >>> than anything native to gluster?
> >>> 
> >> 
> >> Yes sorry it was not clear enough, qemu-img snapshots no native
> >> snapshots.
> > 
> > Ok, so that's a fragmentation problem in it's own right. both
> > internal qcow2 fragmentation and file fragmentation.
> > 
> >>> Also, can you attach the 'xfs_bmap -vp' output of some of
> >>> these image files and their snapshots?
> >> 
> >> A snapshot:
> >> https://gist.github.com/CyrilPeponnet/8108c74b9e8fd1d9edbf239b2872378d
> >> (let me know if you need more basically there is around 600
> >> live snapshots sitting here).
> > 
> > 1200 extents, mostly small, almost entirely adjacent. Typical
> > qcow2 file fragmentation pattern. That's not going to cause your
> > memory allocation problems - can you find one that has hundreds
> > of thousands of extents?
> 
> I found one with 10799109 :/ 576GB in size (I need to find why
> this one is so big this is not normal…)… Could it lead to
> the issue?

The memory allocation issue, yes. 10 million extents is
unusually high even for VM image files...

> I mean could one file cause the deadlock of the entire
> FS?

What deadlock is that? XFS is reporting memory allocation issues,
not that there is a filesystem deadlock. Your comments that dropping
caches make the problem go away indicate that there isn't any
deadlock, just blocking on memory allocation that is taking a long
time to resolve...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-05  1:22             ` Dave Chinner
@ 2016-12-05  1:48               ` Cyril Peponnet
       [not found]               ` <C07DD929-5600-4934-A6B0-C0A7D83D7247@nuagenetworks.net>
  1 sibling, 0 replies; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-05  1:48 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs


> On Dec 4, 2016, at 5:22 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Sun, Dec 04, 2016 at 05:14:51PM -0800, Cyril Peponnet wrote:
>> 
>>> On Dec 4, 2016, at 3:50 PM, Dave Chinner <david@fromorbit.com>
>>> wrote:
>>> 
>>> On Sun, Dec 04, 2016 at 03:24:50PM -0800, Cyril Peponnet wrote:
>>>>> On Dec 4, 2016, at 2:46 PM, Dave Chinner <david@fromorbit.com>
>>>>> Which used LVM snapshots to take snapshots of the entire
>>>>> brick.  I don't see any LVM in your config, so I'm not sure
>>>>> what snapshot implementation you are using here. What are you
>>>>> using to take the snapshots of your VM image files? Are you
>>>>> actually using the qemu qcow2 snapshot functionality rather
>>>>> than anything native to gluster?
>>>>> 
>>>> 
>>>> Yes sorry it was not clear enough, qemu-img snapshots no native
>>>> snapshots.
>>> 
>>> Ok, so that's a fragmentation problem in it's own right. both
>>> internal qcow2 fragmentation and file fragmentation.
>>> 
>>>>> Also, can you attach the 'xfs_bmap -vp' output of some of
>>>>> these image files and their snapshots?
>>>> 
>>>> A snapshot:
>>>> https://gist.github.com/CyrilPeponnet/8108c74b9e8fd1d9edbf239b2872378d
>>>> (let me know if you need more basically there is around 600
>>>> live snapshots sitting here).
>>> 
>>> 1200 extents, mostly small, almost entirely adjacent. Typical
>>> qcow2 file fragmentation pattern. That's not going to cause your
>>> memory allocation problems - can you find one that has hundreds
>>> of thousands of extents?
>> 
>> I found one with 10799109 :/ 576GB in size (I need to find why
>> this one is so big this is not normal…)… Could it lead to
>> the issue?
> 
> The memory allocation issue, yes. 10 million extents is
> unusually high even for VM image files…

Will dig into that one. Its size is also unusual…

> 
>> I mean could one file cause the deadlock of the entire
>> FS?
> 
> What deadlock is that? XFS is reporting memory allocation issues,
> not that there is a filesystem deadlock. Your comments that dropping
> caches make the problem go away indicate that there isn't any
> deadlock, just blocking on memory allocation that is taking a long
> time to resolve…

You are right; by deadlock I meant that the mount point hangs for ls commands, for instance (from the server itself), and basically the glusterfs mount on the hypervisors also hangs for all VMs (which makes the VMs remount read-only).

On this server I have 1200 live snapshots, usually without much write activity except for some of them…

Is there a way to optimize the memory allocation? Or do we likely need to add more RAM…

Thanks for your time, Dave; I appreciate it.

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
       [not found]               ` <C07DD929-5600-4934-A6B0-C0A7D83D7247@nuagenetworks.net>
@ 2016-12-05  7:46                 ` Dave Chinner
  2016-12-05 15:51                   ` Cyril Peponnet
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2016-12-05  7:46 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Sun, Dec 04, 2016 at 05:47:04PM -0800, Cyril Peponnet wrote:
> > On Dec 4, 2016, at 5:22 PM, Dave Chinner <david@fromorbit.com>
> > wrote: What deadlock is that? XFS is reporting memory allocation
> > issues, not that there is a filesystem deadlock. Your comments
> > that dropping caches make the problem go away indicate that
> > there isn't any deadlock, just blocking on memory allocation
> > that is taking a long time to resolve…
> 
> You are right by deadlock I meant that mount point hang for ls
> commands for instance (from the server it self) and basically the
> glusterfs mount on the hypervisors is also hanging for all vms (it
> makes the vms to remount RO).

So everything is /blocked/, not deadlocked. If the memory allocation
then makes progress, we're all ok.

> On this server I have 1200 living snapshots, usually not too much
> write in it except for some of them…
> 
> Is there a way to optimize the memory allocation? Or we likely
> need to add more ram…

More RAM won't help - it's likely a memory fragmentation problem
made worse by the fact that older kernels won't do memory compaction
(i.e. memory defrag) when high order memory allocation fails. If the
problem is file fragmentation, then the best you can probably do
right now is limit fragmentation.
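
If this kernel has CONFIG_COMPACTION enabled you can at least poke the compactor by hand when the messages start (a band-aid, not a fix):

  # echo 1 > /proc/sys/vm/compact_memory   # ask the kernel to compact free memory in all zones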

Stopping qcow2 from fragmenting the crap out of the files will
prevent the problem from re-occurring. This is the primary use case
for extent size hints in XFS these days, though I know you can't
actually control the gluster back end to use this. Perhaps there are
other tweaks to qcow formats (cluster size?) or gluster config to
limit the amount of fragmentation....
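
On the cluster size idea: qcow2 lets you pick it at image creation time, up to 2MB. Whether that reduces fragmentation enough here is untested, and bigger clusters mean more copy-on-write amplification, but a sketch would be (file names are illustrative):

  $ qemu-img create -f qcow2 -o cluster_size=2M -b base.qcow2 disk0.snapshot.qcow2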

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-05  7:46                 ` Dave Chinner
@ 2016-12-05 15:51                   ` Cyril Peponnet
  2016-12-05 21:45                     ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-05 15:51 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs


> On Dec 4, 2016, at 11:46 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Sun, Dec 04, 2016 at 05:47:04PM -0800, Cyril Peponnet wrote:
>>> On Dec 4, 2016, at 5:22 PM, Dave Chinner <david@fromorbit.com>
>>> wrote: What deadlock is that? XFS is reporting memory allocation
>>> issues, not that there is a filesystem deadlock. Your comments
>>> that dropping caches make the problem go away indicate that
>>> there isn't any deadlock, just blocking on memory allocation
>>> that is taking a long time to resolve…
>> 
>> You are right by deadlock I meant that mount point hang for ls
>> commands for instance (from the server it self) and basically the
>> glusterfs mount on the hypervisors is also hanging for all vms (it
>> makes the vms to remount RO).
> 
> So everything is /blocked/, not deadlocked. If the memory allocation
> then makes progress, we're all ok.
> 
>> On this server I have 1200 living snapshots, usually not too much
>> write in it except for some of them…
>> 
>> Is there a way to optimize the memory allocation? Or we likely
>> need to add more ram…
> 
> More RAM won't help - it's likely a memory fragmentation problem
> made worse by the fact that older kernels won't do memory compaction
> (i.e. memory defrag) when high order memory allocation fails. If the
> problem is file fragmentation, then the best you can probably do
> right now is limit fragmentation.
> 
> Stopping qcow2 from fragmenting the crap out of the files will
> prevent the problem from re-occurring. This is the primary use case
> for extent size hints in XFS these days, though I know you can't
> actually control the gluster back end to use this. perhaps there are
> other tweaks to qcow formats (cluster size?) or gluster config to
> limit the amount of fragmentation….


I had the issue again, but I don’t have any more output in dmesg or journalctl, even with echo 11 > /proc/sys/fs/xfs/error_level set.

Is there another location I should look at?

Thanks

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com



* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-05 15:51                   ` Cyril Peponnet
@ 2016-12-05 21:45                     ` Dave Chinner
  2016-12-06 17:54                       ` Cyril Peponnet
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2016-12-05 21:45 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Mon, Dec 05, 2016 at 07:51:45AM -0800, Cyril Peponnet wrote:
> I had the issue again but I don’t have more output in dmesg or
> journalctl even with the echo 11 > /proc/sys/fs/xfs/error_level
> set.

Which means your kernel does not have this commit:

commit 847f9f6875fb02b576035e3dc31f5e647b7617a7
Author: Eric Sandeen <sandeen@redhat.com>
Date:   Mon Oct 12 16:04:45 2015 +1100

    xfs: more info from kmem deadlocks and high-level error msgs
    
    In an effort to get more useful out of "possible memory
    allocation deadlock" messages, print the size of the
    requested allocation, and dump the stack if the xfs error
    level is tuned high.
    
    The stack dump is implemented in define_xfs_printk_level()
    for error levels >= LOGLEVEL_ERR, partly because it
    seems generically useful, and also because kmem.c has
    no knowledge of xfs error level tunables or other such bits,
    it's very kmem-specific.
    
    Signed-off-by: Eric Sandeen <sandeen@redhat.com>
    Reviewed-by: Dave Chinner <dchinner@redhat.com>
    Signed-off-by: Dave Chinner <david@fromorbit.com>

> Is there another location I should look at?

Nope, there's nothing in your kernel we can use to identify the
source of memory allocations. I'm pretty sure that RH have used
systemtap scripts to pull this information from these kernels for
RHEL customers - we've added additional debug help here to avoid
that need, but your kernel doesn't have that code....

Essentially, best guess is that it's file fragmentation causing
problems with extent list allocation. Finding out why that one
snapshot is fragmenting so much and mitigating it is probably the
only thing you can do right now (i.e. extent size hints). Long term
is to get gluster to do the mitigation for VM images automatically.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-05 21:45                     ` Dave Chinner
@ 2016-12-06 17:54                       ` Cyril Peponnet
  2016-12-07  6:16                         ` Dave Chinner
       [not found]                         ` <473936408.4772.1481091425441@itfw6.prod.google.com>
  0 siblings, 2 replies; 15+ messages in thread
From: Cyril Peponnet @ 2016-12-06 17:54 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-xfs


> On Dec 5, 2016, at 1:45 PM, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Mon, Dec 05, 2016 at 07:51:45AM -0800, Cyril Peponnet wrote:
>> I had the issue again but I don’t have more output in dmesg or
>> journalctl even with the echo 11 > /proc/sys/fs/xfs/error_level
>> set.
> 
> Which means your kernel does not have this commit:
> 
> commit 847f9f6875fb02b576035e3dc31f5e647b7617a7
> Author: Eric Sandeen <sandeen@redhat.com>
> Date:   Mon Oct 12 16:04:45 2015 +1100
> 
>    xfs: more info from kmem deadlocks and high-level error msgs
> 
>    In an effort to get more useful out of "possible memory
>    allocation deadlock" messages, print the size of the
>    requested allocation, and dump the stack if the xfs error
>    level is tuned high.
> 
>    The stack dump is implemented in define_xfs_printk_level()
>    for error levels >= LOGLEVEL_ERR, partly because it
>    seems generically useful, and also because kmem.c has
>    no knowledge of xfs error level tunables or other such bits,
>    it's very kmem-specific.
> 
>    Signed-off-by: Eric Sandeen <sandeen@redhat.com>
>    Reviewed-by: Dave Chinner <dchinner@redhat.com>
>    Signed-off-by: Dave Chinner <david@fromorbit.com>

Indeed we should plan an upgrade window.

> 
>> Is there another location where I should look at ?
> 
> Nope, there's nothing in your kernel we can use to identify the
> source of memory allocations. I'm pretty sure that RH have used
> systemtap scripts to pull this information from these kernels for
> RHEL customers - we've added additional debug help here to avoid
> that need, but your kernel doesn't have that code....
> 
> Essentially, best guess is that it's file fragmentation causing
> problems with extent list allocation. Finding out why that one
> snapshot is fragmenting so much and mitigating it is probably the
> only thing you can do right now (i.e. extent size hints). Long term
> is to get gluster to do the mitigation for VM images automatically.
> 

Looks like it’s better since I disabled the VM that was taking a lot of disk space:

qemu-img info disk0.snapshot.qcow2
image: disk0.snapshot.qcow2
file format: qcow2
virtual size: 265G (284541583360 bytes)
disk size: 798G
cluster_size: 65536
backing file: base.qcow2

Note the virtual size vs. the disk size; it looks pretty fragmented.

I will follow up with the gluster guys.

One dumb question: can the extent size hint be set at the root level? That way all new files would get the extent size hint by inheritance. Maybe that’s overkill or simply will not work; just wanted to know :)

Anyway, thanks Dave for all the detailed answers you provided. That was really helpful.


> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com



* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
  2016-12-06 17:54                       ` Cyril Peponnet
@ 2016-12-07  6:16                         ` Dave Chinner
       [not found]                         ` <473936408.4772.1481091425441@itfw6.prod.google.com>
  1 sibling, 0 replies; 15+ messages in thread
From: Dave Chinner @ 2016-12-07  6:16 UTC (permalink / raw)
  To: Cyril Peponnet; +Cc: linux-xfs

On Tue, Dec 06, 2016 at 09:54:37AM -0800, Cyril Peponnet wrote:
> 
> > On Dec 5, 2016, at 1:45 PM, Dave Chinner <david@fromorbit.com> wrote:
> > 
> > On Mon, Dec 05, 2016 at 07:51:45AM -0800, Cyril Peponnet wrote:
> >> I had the issue again but I don’t have more output in dmesg or
> >> journalctl even with the echo 11 > /proc/sys/fs/xfs/error_level
> >> set.
> > 
> > Which means your kernel does not have this commit:
> > 
> > commit 847f9f6875fb02b576035e3dc31f5e647b7617a7
> > Author: Eric Sandeen <sandeen@redhat.com>
> > Date:   Mon Oct 12 16:04:45 2015 +1100
> > 
> >    xfs: more info from kmem deadlocks and high-level error msgs
> > 
> >    In an effort to get more useful out of "possible memory
> >    allocation deadlock" messages, print the size of the
> >    requested allocation, and dump the stack if the xfs error
> >    level is tuned high.
> > 
> >    The stack dump is implemented in define_xfs_printk_level()
> >    for error levels >= LOGLEVEL_ERR, partly because it
> >    seems generically useful, and also because kmem.c has
> >    no knowledge of xfs error level tunables or other such bits,
> >    it's very kmem-specific.
> > 
> >    Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> >    Reviewed-by: Dave Chinner <dchinner@redhat.com>
> >    Signed-off-by: Dave Chinner <david@fromorbit.com>
> 
> Indeed we should plan an upgrade window.
> 
> > 
> >> Is there another location where I should look at ?
> > 
> > Nope, there's nothing in your kernel we can use to identify the
> > source of memory allocations. I'm pretty sure that RH have used
> > systemtap scripts to pull this information from these kernels for
> > RHEL customers - we've added additional debug help here to avoid
> > that need, but your kernel doesn't have that code....
> > 
> > Essentially, best guess is that it's file fragmentation causing
> > problems with extent list allocation. Finding out why that one
> > snapshot is fragmenting so much and mitigating it is probably the
> > only thing you can do right now (i.e. extent size hints). Long term
> > is to get gluster to do the mitigation for VM images automatically.
> > 
> 
> Looks like it’s better since I disabled the vm that was taking a lot of disk space:
> 
> qemu-img info disk0.snapshot.qcow2
> image: disk0.snapshot.qcow2
> file format: qcow2
> virtual size: 265G (284541583360 bytes)
> disk size: 798G
> cluster_size: 65536
> backing file: base.qcow2
> 
> Note the virtual size vs the disk size, looks pretty fragmented.
> 
> I will follow up with glusters guys.
> 
> One dumb question, can the extent size hint be done at the root
> level?

Yes. Just set it immediately after mkfs on the root directory inode
and everything in the filesystem will inherit that extent size hint
at create time.
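
In other words, something like this when (re)creating the brick filesystem; the 16m hint is only an example, pick something that matches your image allocation pattern:

  # mkfs.xfs -f /dev/sdb                           # destroys the existing fs; add your usual mkfs options
  # mount /dev/sdb /export/raid/scratch
  # xfs_io -c 'extsize 16m' /export/raid/scratch   # new files and subdirectories inherit this hint

Running xfs_io -c extsize on the directory with no value prints the current hint if you want to check it.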

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS: possible memory allocation deadlock in kmem_alloc on glusterfs setup
       [not found]                           ` <8176484246282250577@unknownmsgid>
@ 2016-12-07 19:44                             ` Dave Chinner
  0 siblings, 0 replies; 15+ messages in thread
From: Dave Chinner @ 2016-12-07 19:44 UTC (permalink / raw)
  To: Cyril PEPONNET; +Cc: linux-xfs@vger.kernel.org

On Wed, Dec 07, 2016 at 07:16:55AM -0800, Cyril PEPONNET wrote:
> Our snapshots are deleted / recreated quite often. If I apply the extent
> size hint to the root folder of an existing tree, will it still apply
> to newly created files?

Only for new files/directories created in the root directory. The
hint is not retroactively applied. You'd need to walk the tree
setting the hint on all directories to get it to apply to all new
files created.
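
A sketch of that walk, run on the brick itself (16m is again just an example value):

  # find /export/raid/scratch -type d -exec xfs_io -c 'extsize 16m' {} \;

Existing files keep their current layout; only files created after the hint is set pick it up.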

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

