linux-xfs.vger.kernel.org archive mirror
* 10GB of memory occupied by XFS
@ 2014-04-04  7:26 daiguochao
  2014-04-04 20:24 ` Stan Hoeppner
  2014-04-11  2:40 ` daiguochao
  0 siblings, 2 replies; 7+ messages in thread
From: daiguochao @ 2014-04-04  7:26 UTC (permalink / raw)
  To: xfs

Hello folks,

I am using an XFS filesystem on kernel-2.6.32-220.13.1.el6.x86_64 to store
pictures. After roughly 100 days of uptime the system's memory is exhausted
and some nginx processes are killed by the oom-killer. I checked
/proc/meminfo and confirmed that the memory had gone missing. Finally, I
tried unmounting the XFS filesystem, and about 10 GB of memory came back. I
searched the XFS bugzilla but found no such bug, and I have no idea what is
causing this.

Cheers,

Guochao.

Some memory info:

0> free -m
             total       used       free     shared    buffers     cached
Mem:         11887      11668        219          0          0          2
-/+ buffers/cache:      11665        222
Swap:            0          0          0

130> cat /proc/meminfo 
MemTotal:       12173268 kB
MemFree:          223044 kB
Buffers:             244 kB
Cached:             4540 kB
SwapCached:            0 kB
Active:             1700 kB
Inactive:           5312 kB
Active(anon):       1616 kB
Inactive(anon):     1128 kB
Active(file):         84 kB
Inactive(file):     4184 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:          2556 kB
Mapped:             1088 kB
Shmem:               196 kB
Slab:             509708 kB
SReclaimable:       7596 kB
SUnreclaim:       502112 kB
KernelStack:        1096 kB
PageTables:          748 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6086632 kB
Committed_AS:       9440 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      303488 kB
VmallocChunk:   34359426132 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6152 kB
DirectMap2M:     2070528 kB
DirectMap1G:    10485760 kB



--
View this message in context: http://xfs.9218.n7.nabble.com/10GB-memorys-occupied-by-XFS-tp35015.html
Sent from the Xfs - General mailing list archive at Nabble.com.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: 10GB of memory occupied by XFS
  2014-04-04  7:26 10GB of memory occupied by XFS daiguochao
@ 2014-04-04 20:24 ` Stan Hoeppner
       [not found]   ` <76016fc7.13c84.14546bad411.Coremail.dx-wl@163.com>
  2014-04-11  2:40 ` daiguochao
  1 sibling, 1 reply; 7+ messages in thread
From: Stan Hoeppner @ 2014-04-04 20:24 UTC (permalink / raw)
  To: daiguochao, xfs

On 4/4/2014 2:26 AM, daiguochao wrote:
> Hello folks,

Hello,

Note that your problems are not XFS specific, but can occur with any
Linux filesystem.

> I am using an XFS filesystem on kernel-2.6.32-220.13.1.el6.x86_64 to store
> pictures. After roughly 100 days of uptime the system's memory is exhausted
> and some nginx processes are killed by the oom-killer. I checked
> /proc/meminfo and confirmed that the memory had gone missing. Finally, I
> tried unmounting the XFS filesystem, and about 10 GB of memory came back. I
> searched the XFS bugzilla but found no such bug, and I have no idea what is
> causing this.
> 
> Cheers,
> 
> Guochao.
> 
> Some memory info:
> 
> 0> free -m
>              total       used       free     shared    buffers     cached
> Mem:         11887      11668        219          0          0          2
> -/+ buffers/cache:      11665        222
> Swap:            0          0          0


First problem:  no swap
Second problem: cache is not being reclaimed
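
For the first problem, a minimal sketch of adding a swap file (the 2 GB
size and the /swapfile path are arbitrary examples, not tuned
recommendations for your workload):

	# create a 2 GB swap file and restrict its permissions
	dd if=/dev/zero of=/swapfile bs=1M count=2048
	chmod 600 /swapfile
	# format and enable it
	mkswap /swapfile
	swapon /swapfile
	# persist across reboots
	echo '/swapfile none swap sw 0 0' >> /etc/fstab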

Read vfs_cache_pressure at:
https://www.kernel.org/doc/Documentation/sysctl/vm.txt

You've likely set this value to zero.  Changing it to 200 should prompt
the kernel to reclaim dentries and inodes aggressively, preventing the
oom-killer from kicking in.
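
For example, using the sysctl utility (200 is just the value suggested
above, not a universal tuning):

	# check the current value
	sysctl vm.vfs_cache_pressure
	# raise it on the running kernel
	sysctl -w vm.vfs_cache_pressure=200
	# persist across reboots
	echo 'vm.vfs_cache_pressure = 200' >> /etc/sysctl.conf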

Cheers,

Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Reply: Re: 10GB of memory occupied by XFS
       [not found]   ` <76016fc7.13c84.14546bad411.Coremail.dx-wl@163.com>
@ 2014-04-10  1:41     ` 戴国超
  2014-04-11  5:09     ` Stan Hoeppner
  1 sibling, 0 replies; 7+ messages in thread
From: 戴国超 @ 2014-04-10  1:41 UTC (permalink / raw)
  To: stan@hardwarefreak.com; +Cc: xfs@oss.sgi.com

Dear Stan,
Thank you for your kind assistance.

In accordance with your suggestion, we executed "echo 3 > /proc/sys/vm/drop_caches" to try to release VFS dentries and inodes, and indeed our lost memory came back. But as we understand it, dentry and inode memory is allocated from the slab, and our /proc/meminfo shows "Slab: 509708 kB": the slab holds only about 500 MB, of which xfs_buf accounts for about 450 MB. Yet /proc/meminfo suggests our system memory is anomalous; roughly 10 GB is missing from the statistics. We would like to know how to observe the memory used by VFS dentries and inodes through a system interface. If this memory usage is not reflected in /proc/meminfo and we cannot find the statistics anywhere else, we would consider it a bug in XFS.

Our vm.vfs_cache_pressure is 100. We think the system should proactively reclaim this memory when memory runs low, rather than letting the oom-killer kill our worker processes. The /proc/meminfo data captured while the problem was occurring is below:
130> cat /proc/meminfo
MemTotal:       12173268 kB
MemFree:          223044 kB
Buffers:             244 kB
Cached:             4540 kB
SwapCached:            0 kB
Active:             1700 kB
Inactive:           5312 kB
Active(anon):       1616 kB
Inactive(anon):     1128 kB
Active(file):         84 kB
Inactive(file):     4184 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:          2556 kB
Mapped:             1088 kB
Shmem:               196 kB
Slab:             509708 kB
SReclaimable:       7596 kB
SUnreclaim:       502112 kB
KernelStack:        1096 kB
PageTables:          748 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6086632 kB
Committed_AS:       9440 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      303488 kB
VmallocChunk:   34359426132 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6152 kB
DirectMap2M:     2070528 kB
DirectMap1G:    10485760 kB 

I look forward to hearing from you and thank you very much for your kind assistance.

Best Regards,

Guochao


At 2014-04-05 04:24:12, "Stan Hoeppner" <stan@hardwarefreak.com> wrote:
>On 4/4/2014 2:26 AM, daiguochao wrote:
>> Hello folks,
>
>Hello,
>
>Note that your problems are not XFS specific, but can occur with any 
>Linux filesystem.
>
>> I am using an XFS filesystem on kernel-2.6.32-220.13.1.el6.x86_64 to store
>> pictures. After roughly 100 days of uptime the system's memory is exhausted
>> and some nginx processes are killed by the oom-killer. I checked
>> /proc/meminfo and confirmed that the memory had gone missing. Finally, I
>> tried unmounting the XFS filesystem, and about 10 GB of memory came back. I
>> searched the XFS bugzilla but found no such bug, and I have no idea what is
>> causing this.
>> 
>> Cheers,
>> 
>> Guochao.
>> 
>> Some memory info:
>> 
>> 0> free -m
>>              total       used       free     shared    buffers     cached
>> Mem:         11887      11668        219          0          0          2
>> -/+ buffers/cache:      11665        222
>> Swap:            0          0          0
>
>
>First problem:  no swap
>Second problem: cache is not being reclaimed
>
>Read vfs_cache_pressure at:
>https://www.kernel.org/doc/Documentation/sysctl/vm.txt
>
>You've likely set this value to zero.  Changing it to 200 should prompt 
>the kernel to reclaim dentries and inodes aggressively, preventing the 
>oom-killer from kicking in.
>
>Cheers,
>
>Stan
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: 10GB of memory occupied by XFS
  2014-04-04  7:26 10GB of memory occupied by XFS daiguochao
  2014-04-04 20:24 ` Stan Hoeppner
@ 2014-04-11  2:40 ` daiguochao
  2014-04-11  4:26   ` Dave Chinner
  2014-04-11 21:35   ` Stan Hoeppner
  1 sibling, 2 replies; 7+ messages in thread
From: daiguochao @ 2014-04-11  2:40 UTC (permalink / raw)
  To: xfs

Dear Stan, I can't send email to you directly, so I am leaving a message
here. I hope this doesn't bother you.
Thank you for your kind assistance.

In accordance with your suggestion, we executed "echo 3 >
/proc/sys/vm/drop_caches" to try to release VFS dentries and inodes, and
indeed our lost memory came back. But as we understand it, dentry and inode
memory is allocated from the slab, and our /proc/meminfo shows "Slab:
509708 kB": the slab holds only about 500 MB, of which xfs_buf accounts for
about 450 MB. Yet /proc/meminfo suggests our system memory is anomalous;
roughly 10 GB is missing from the statistics. We would like to know how to
observe the memory used by VFS dentries and inodes through a system
interface. If this memory usage is not reflected in /proc/meminfo and we
cannot find the statistics anywhere else, we would consider it a bug in
XFS.

Our vm.vfs_cache_pressure is 100. We think the system should proactively
reclaim this memory when memory runs low, rather than letting the
oom-killer kill our worker processes. The /proc/meminfo data captured while
the problem was occurring is below:
130> cat /proc/meminfo 
MemTotal:       12173268 kB 
MemFree:          223044 kB 
Buffers:             244 kB 
Cached:             4540 kB 
SwapCached:            0 kB 
Active:             1700 kB 
Inactive:           5312 kB 
Active(anon):       1616 kB 
Inactive(anon):     1128 kB 
Active(file):         84 kB 
Inactive(file):     4184 kB 
Unevictable:           0 kB 
Mlocked:               0 kB 
SwapTotal:             0 kB 
SwapFree:              0 kB 
Dirty:                 0 kB 
Writeback:             0 kB 
AnonPages:          2556 kB 
Mapped:             1088 kB 
Shmem:               196 kB 
Slab:             509708 kB 
SReclaimable:       7596 kB 
SUnreclaim:       502112 kB 
KernelStack:        1096 kB 
PageTables:          748 kB 
NFS_Unstable:          0 kB 
Bounce:                0 kB 
WritebackTmp:          0 kB 
CommitLimit:     6086632 kB 
Committed_AS:       9440 kB 
VmallocTotal:   34359738367 kB 
VmallocUsed:      303488 kB 
VmallocChunk:   34359426132 kB 
HardwareCorrupted:     0 kB 
AnonHugePages:         0 kB 
HugePages_Total:       0 
HugePages_Free:        0 
HugePages_Rsvd:        0 
HugePages_Surp:        0 
Hugepagesize:       2048 kB 
DirectMap4k:        6152 kB 
DirectMap2M:     2070528 kB 
DirectMap1G:    10485760 kB

Best Regards,

Guochao



--
View this message in context: http://xfs.9218.n7.nabble.com/10GB-memorys-occupied-by-XFS-tp35015p35016.html
Sent from the Xfs - General mailing list archive at Nabble.com.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: 10GB of memory occupied by XFS
  2014-04-11  2:40 ` daiguochao
@ 2014-04-11  4:26   ` Dave Chinner
  2014-04-11 21:35   ` Stan Hoeppner
  1 sibling, 0 replies; 7+ messages in thread
From: Dave Chinner @ 2014-04-11  4:26 UTC (permalink / raw)
  To: daiguochao; +Cc: xfs

On Thu, Apr 10, 2014 at 07:40:44PM -0700, daiguochao wrote:
> Dear Stan, I can't send email to you directly, so I am leaving a message
> here. I hope this doesn't bother you.
> Thank you for your kind assistance.
> 
> In accordance with your suggestion, we executed "echo 3 >
> /proc/sys/vm/drop_caches" to try to release VFS dentries and inodes, and
> indeed our lost memory came back. But as we understand it, dentry and
> inode memory is allocated from the slab, and our /proc/meminfo shows
> "Slab: 509708 kB": the slab holds only about 500 MB, of which xfs_buf
> accounts for about 450 MB.

That's where your memory is - in metadata buffers. The xfs_buf slab
entries are just the handles; the metadata pages held by the buffers
usually take much more space, and that memory is accounted to neither
the slab cache nor the page cache.
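
As a rough back-of-the-envelope sketch (the ~400 bytes per handle and one
page per buffer are illustrative assumptions, not measured values):

	450 MB of xfs_buf slab / ~400 bytes per handle ≈ 1.2 million buffers
	1.2 million buffers x at least one 4 kB page ≈ 4.8 GB of metadata
	pages that show up in neither Slab nor Cached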

Can you post the output of /proc/slabinfo, and what is the output of
xfs_info on the filesystem in question? Also, a description of your
workload that is resulting in large amounts of cached metadata
buffers but no inodes or dentries would be helpful.
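
A sketch of commands to collect that (replace /data with the actual mount
point; the last two also answer your earlier question about observing
dentry and inode usage, as object counts):

	# slab object counts and sizes; xfs_buf entries are the handles
	grep -E 'xfs|dentry|inode_cache' /proc/slabinfo
	# filesystem geometry
	xfs_info /data
	# VFS-level dentry and inode counts
	cat /proc/sys/fs/dentry-state
	cat /proc/sys/fs/inode-nr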

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: 10GB of memory occupied by XFS
       [not found]   ` <76016fc7.13c84.14546bad411.Coremail.dx-wl@163.com>
  2014-04-10  1:41     ` Reply: " 戴国超
@ 2014-04-11  5:09     ` Stan Hoeppner
  1 sibling, 0 replies; 7+ messages in thread
From: Stan Hoeppner @ 2014-04-11  5:09 UTC (permalink / raw)
  To: 戴国超; +Cc: xfs

On 4/9/2014 8:43 AM, 戴国超 wrote:
> Dear Stan,
> Thank you for your kind assistance.
> 
> In accordance with your suggestion, we executed "echo 3 > /proc/sys/vm/drop_caches" to try to release VFS dentries and inodes, and indeed our lost memory came back.
> But as we understand it, dentry and inode memory is allocated from the slab, and our /proc/meminfo shows "Slab: 509708 kB": the slab holds only about 500 MB, of which xfs_buf accounts for about 450 MB.

To free pagecache:
	echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
	echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
	echo 3 > /proc/sys/vm/drop_caches
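
A usage sketch: run sync first, since drop_caches only discards clean
objects and dirty data must be written back before it can be freed:

	sync
	echo 3 > /proc/sys/vm/drop_caches

This is a one-time release for diagnosis; it does not change how the
kernel reclaims memory going forward.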

> Yet /proc/meminfo suggests our system memory is anomalous; roughly 10 GB is missing from the statistics. We would like to know how to observe the memory used by VFS dentries and inodes through a system interface. If this memory usage is not reflected in /proc/meminfo and we cannot find the statistics anywhere else, we would consider it a bug in XFS.

It seems much of this 10 GB of memory is being consumed by pagecache,
not dentries and inodes.  So the question is:

Why is pagecache not being reclaimed without manual intervention?

> Our vm.vfs_cache_pressure is 100. We think the system should proactively reclaim this memory when memory runs low, rather than letting the oom-killer kill our worker processes. The /proc/meminfo data captured while the problem was occurring is below:

Except that most of your slab is reported as unreclaimable; see below.

> 130> cat /proc/meminfo 
> MemTotal:       12173268 kB 
> MemFree:          223044 kB 
> Buffers:             244 kB 
> Cached:             4540 kB  <------

Only 4.5 MB is reported as used by the pagecache.  But some 10 GB is being
consumed by the page cache and not reported here.

> SwapCached:            0 kB 
> Active:             1700 kB 
> Inactive:           5312 kB 
> Active(anon):       1616 kB 
> Inactive(anon):     1128 kB 
> Active(file):         84 kB 
> Inactive(file):     4184 kB 
> Unevictable:           0 kB 
> Mlocked:               0 kB 
> SwapTotal:             0 kB 
> SwapFree:              0 kB 
> Dirty:                 0 kB 
> Writeback:             0 kB 
> AnonPages:          2556 kB 
> Mapped:             1088 kB 
> Shmem:               196 kB 
> Slab:             509708 kB  <------
> SReclaimable:       7596 kB  <------
> SUnreclaim:       502112 kB  <------

This indicates that your slab is not being reclaimed, but not why.
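
To see which caches dominate that unreclaimable slab, a sketch using
slabtop from procps (-o prints once, -s c sorts by cache size):

	slabtop -o -s c | head -20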

> KernelStack:        1096 kB 
> PageTables:          748 kB 
> NFS_Unstable:          0 kB 
> Bounce:                0 kB 
> WritebackTmp:          0 kB 
> CommitLimit:     6086632 kB 
> Committed_AS:       9440 kB 
> VmallocTotal:   34359738367 kB 
> VmallocUsed:      303488 kB 
> VmallocChunk:   34359426132 kB 
> HardwareCorrupted:     0 kB 
> AnonHugePages:         0 kB 
> HugePages_Total:       0 
> HugePages_Free:        0 
> HugePages_Rsvd:        0 
> HugePages_Surp:        0 
> Hugepagesize:       2048 kB 
> DirectMap4k:        6152 kB 
> DirectMap2M:     2070528 kB 
> DirectMap1G:    10485760 kB 
> 
> I look forward to hearing from you and thank you very much for your kind assistance.

Unfortunately I don't have solid answers for you at this point, nor a
solution.  This is beyond my expertise.  Maybe someone else with more
knowledge/experience will jump in.  I suspect your application may be
doing something a bit unusual.

Cheers,

Stan



> Best Regards,
> 
> Guochao
> 
> 
> At 2014-04-05 04:24:12,"Stan Hoeppner" <stan@hardwarefreak.com> wrote:
>> On 4/4/2014 2:26 AM, daiguochao wrote:
>>>  Hello folks,
>>
>> Hello,
>>
>> Note that your problems are not XFS specific, but can occur with any
>> Linux filesystem.
>>
>>>  I am using an XFS filesystem on kernel-2.6.32-220.13.1.el6.x86_64 to store
>>>  pictures. After roughly 100 days of uptime the system's memory is exhausted
>>>  and some nginx processes are killed by the oom-killer. I checked
>>>  /proc/meminfo and confirmed that the memory had gone missing. Finally, I
>>>  tried unmounting the XFS filesystem, and about 10 GB of memory came back. I
>>>  searched the XFS bugzilla but found no such bug, and I have no idea what is
>>>  causing this.
>>>  
>>>  Cheers,
>>>  
>>>  Guochao.
>>>  
>>>  Some memory info:
>>>  
>>>  0> free -m
>>>               total       used       free     shared    buffers     cached
>>>  Mem:         11887      11668        219          0          0          2
>>>  -/+ buffers/cache:      11665        222
>>>  Swap:            0          0          0
>>
>>
>> First problem:  no swap
>> Second problem: cache is not being reclaimed
>>
>> Read vfs_cache_pressure at:
>> https://www.kernel.org/doc/Documentation/sysctl/vm.txt
>>
>> You've likely set this value to zero.  Changing it to 200 should prompt
>> the kernel to reclaim dentries and inodes aggressively, preventing the
>> oom-killer from kicking in.
>>
>> Cheers,
>>
>> Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: 10GB of memory occupied by XFS
  2014-04-11  2:40 ` daiguochao
  2014-04-11  4:26   ` Dave Chinner
@ 2014-04-11 21:35   ` Stan Hoeppner
  1 sibling, 0 replies; 7+ messages in thread
From: Stan Hoeppner @ 2014-04-11 21:35 UTC (permalink / raw)
  To: daiguochao, xfs

On 4/10/2014 9:40 PM, daiguochao wrote:
> Dear Stan, I can't send email to you directly, so I am leaving a message
> here. I hope this doesn't bother you.
> Thank you for your kind assistance.

I received all of the ones you sent to the list and that should always
be the case.  One that you sent directly to me was rejected but I think
I've fixed that now.  And I think my delayed reply made things seem
worse than they are.

Anyway, Dave replied while I was typing my last response.  He'll be much
more able to assist you.  Your problem seems beyond the edge of my
knowledge.

Cheers,

Stan



> In accordance with your suggestion, we executed "echo 3 >
> /proc/sys/vm/drop_caches" to try to release VFS dentries and inodes, and
> indeed our lost memory came back. But as we understand it, dentry and inode
> memory is allocated from the slab, and our /proc/meminfo shows "Slab:
> 509708 kB": the slab holds only about 500 MB, of which xfs_buf accounts for
> about 450 MB. Yet /proc/meminfo suggests our system memory is anomalous;
> roughly 10 GB is missing from the statistics. We would like to know how to
> observe the memory used by VFS dentries and inodes through a system
> interface. If this memory usage is not reflected in /proc/meminfo and we
> cannot find the statistics anywhere else, we would consider it a bug in
> XFS.
> 
> Our vm.vfs_cache_pressure is 100. We think the system should proactively
> reclaim this memory when memory runs low, rather than letting the
> oom-killer kill our worker processes. The /proc/meminfo data captured
> while the problem was occurring is below:
> 130> cat /proc/meminfo 
> MemTotal:       12173268 kB 
> MemFree:          223044 kB 
> Buffers:             244 kB 
> Cached:             4540 kB 
> SwapCached:            0 kB 
> Active:             1700 kB 
> Inactive:           5312 kB 
> Active(anon):       1616 kB 
> Inactive(anon):     1128 kB 
> Active(file):         84 kB 
> Inactive(file):     4184 kB 
> Unevictable:           0 kB 
> Mlocked:               0 kB 
> SwapTotal:             0 kB 
> SwapFree:              0 kB 
> Dirty:                 0 kB 
> Writeback:             0 kB 
> AnonPages:          2556 kB 
> Mapped:             1088 kB 
> Shmem:               196 kB 
> Slab:             509708 kB 
> SReclaimable:       7596 kB 
> SUnreclaim:       502112 kB 
> KernelStack:        1096 kB 
> PageTables:          748 kB 
> NFS_Unstable:          0 kB 
> Bounce:                0 kB 
> WritebackTmp:          0 kB 
> CommitLimit:     6086632 kB 
> Committed_AS:       9440 kB 
> VmallocTotal:   34359738367 kB 
> VmallocUsed:      303488 kB 
> VmallocChunk:   34359426132 kB 
> HardwareCorrupted:     0 kB 
> AnonHugePages:         0 kB 
> HugePages_Total:       0 
> HugePages_Free:        0 
> HugePages_Rsvd:        0 
> HugePages_Surp:        0 
> Hugepagesize:       2048 kB 
> DirectMap4k:        6152 kB 
> DirectMap2M:     2070528 kB 
> DirectMap1G:    10485760 kB
> 
> Best Regards,
> 
> Guochao
> 
> 
> 
> --
> View this message in context: http://xfs.9218.n7.nabble.com/10GB-memorys-occupied-by-XFS-tp35015p35016.html
> Sent from the Xfs - General mailing list archive at Nabble.com.
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2014-04-11 21:35 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-04-04  7:26 10GB of memory occupied by XFS daiguochao
2014-04-04 20:24 ` Stan Hoeppner
     [not found]   ` <76016fc7.13c84.14546bad411.Coremail.dx-wl@163.com>
2014-04-10  1:41     ` Reply: " 戴国超
2014-04-11  5:09     ` Stan Hoeppner
2014-04-11  2:40 ` daiguochao
2014-04-11  4:26   ` Dave Chinner
2014-04-11 21:35   ` Stan Hoeppner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).