public inbox for linux-kernel@vger.kernel.org
* Very aggressive memory reclaim
@ 2011-03-28 16:39 John Lepikhin
  2011-03-28 17:42 ` Steven Rostedt
  2011-03-28 21:53 ` Dave Chinner
  0 siblings, 2 replies; 11+ messages in thread
From: John Lepikhin @ 2011-03-28 16:39 UTC (permalink / raw)
  To: linux-kernel

Hello,

I run a heavily loaded machine with 10M+ inodes on XFS, 50+ GB of
memory, intensive HDD traffic, and 20-50 forks per second, on a
vanilla 2.6.37.4 kernel. The problem is that the kernel frees memory
very aggressively.

For example:

25% of memory is used by processes
50% for page caches
7% for slabs, etc.
18% free.

That's not ideal, but it works. After a few hours:

25% of memory is used by processes
62% for page caches
7% for slabs, etc.
5% free.

Most files are cached and everything works perfectly. This is the
moment when the kernel decides to free some memory. After reclaim:

25% of memory is used by processes
25% for page caches(!)
7% for slabs, etc.
43% free(!)

The page cache is dropped and the server becomes too slow. This is the
beginning of a new cycle.
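The breakdown above can be watched directly from /proc/meminfo. A
minimal sketch (field names as in 2.6.x kernels; percentages computed
the same way as in the report above) for monitoring the reclaim cycle:

```shell
#!/bin/sh
# Summarize memory the way the report above does: page cache, slab,
# and free memory as a percentage of total RAM, read from meminfo.
mem_summary() {
    awk '
        /^MemTotal:/ { total  = $2 }
        /^MemFree:/  { free   = $2 }
        /^Cached:/   { cached = $2 }
        /^Slab:/     { slab   = $2 }
        END {
            printf "cache: %d%%\n", cached * 100 / total
            printf "slab: %d%%\n",  slab   * 100 / total
            printf "free: %d%%\n",  free   * 100 / total
        }
    ' "$1"
}

# Print the current breakdown; run it in a loop (e.g. every 10s)
# to catch the moment reclaim kicks in.
[ -r /proc/meminfo ] && mem_summary /proc/meminfo
```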

I didn't find any huge allocations at that moment. It looks like,
because of the large number of small allocations (forks), the kernel
makes a pessimistic forecast about future memory usage and frees too
much memory. Are there any options for tuning this? Any other
suggestions?
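For reference, these are the reclaim-related sysctls most often
examined for this kind of behavior. This is only a sketch (values
shown are the usual 2.6.3x defaults; whether any of them helps depends
on what is actually triggering the reclaim):

```
# /etc/sysctl.conf fragment -- reclaim-related tunables
vm.min_free_kbytes = 65536    # free-memory watermark that triggers reclaim
                              # (default is computed from RAM size)
vm.vfs_cache_pressure = 100   # default; values < 100 make the kernel keep
                              # dentry/inode caches longer
vm.swappiness = 60            # default; higher values prefer swapping
                              # anonymous pages over dropping page cache
vm.zone_reclaim_mode = 0      # on NUMA machines, 1 causes aggressive
                              # local-node reclaim; 0 disables it
```

On a 50+ GB machine, zone_reclaim_mode is worth checking first, since
node-local reclaim on NUMA hardware can drop large amounts of page
cache even when plenty of memory is free on other nodes.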

Thanks!


Thread overview: 11+ messages
2011-03-28 16:39 Very aggressive memory reclaim John Lepikhin
2011-03-28 17:42 ` Steven Rostedt
2011-03-28 21:53 ` Dave Chinner
2011-03-28 22:52   ` Minchan Kim
2011-03-29  2:55     ` KOSAKI Motohiro
2011-03-29  7:33       ` John Lepikhin
2011-03-29  7:22     ` John Lepikhin
2011-03-28 23:58   ` Andi Kleen
2011-03-29  1:57     ` Dave Chinner
2011-03-29  7:26   ` John Lepikhin
2011-03-29  8:59     ` Avi Kivity
