linux-mm.kvack.org archive mirror
From: Rik van Riel <riel@redhat.com>
To: Pavel Machek <pavel@ucw.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	LKML <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	Wu Fengguang <fengguang.wu@intel.com>
Subject: Re: [PATCH -mm] throttle direct reclaim when too many pages are isolated already (v3)
Date: Wed, 29 Jul 2009 12:19:34 -0400
Message-ID: <4A707696.6080301@redhat.com>
In-Reply-To: <20090729150443.GB1534@ucw.cz>

Pavel Machek wrote:
> On Wed 2009-07-15 23:53:18, Rik van Riel wrote:
>> When way too many processes go into direct reclaim, it is possible
>> for all of the pages to be taken off the LRU.  One result of this
>> is that the next process in the page reclaim code thinks there are
>> no reclaimable pages left and triggers an out of memory kill.
>>
>> One solution to this problem is to never let so many processes into
>> the page reclaim path that the entire LRU is emptied.  Limiting the
>> system to only having half of each inactive list isolated for
>> reclaim should be safe.
> 
> Is this still racy? Like on a 100-CPU machine with an LRU size of 50...?

If a 100 CPU system gets down to just 100 reclaimable pages,
getting the OOM killer to trigger sounds desirable.

The goal of this patch is to avoid _false_ OOM kills, when
the system still has enough reclaimable memory available.
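
For concreteness, the limit described above amounts to comparing the
number of pages already isolated from a zone's inactive list against
the number of pages still on that list. Below is a simplified sketch
of such a check; the function and counter names (too_many_isolated(),
NR_ISOLATED_FILE/ANON, the congestion_wait() back-off) follow the
form this change took in mainline, but treat it as an illustration of
the idea rather than a quote of the patch:

/*
 * Throttle direct reclaim: a direct reclaimer may only isolate more
 * pages while the number already isolated from this zone's inactive
 * list stays below the number of pages still on the list, i.e. at
 * most about half of each inactive list may be off the LRU at once.
 */
static int too_many_isolated(struct zone *zone, int file)
{
	unsigned long inactive, isolated;

	/* Never throttle kswapd itself, only direct reclaimers. */
	if (current_is_kswapd())
		return 0;

	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	return isolated > inactive;
}

	/*
	 * In shrink_inactive_list(), direct reclaimers then back off
	 * instead of pulling yet more pages off the LRU:
	 */
	while (unlikely(too_many_isolated(zone, file)))
		congestion_wait(BLK_RW_ASYNC, HZ/10);

Because the NR_INACTIVE_* counters drop as pages are isolated,
"isolated > inactive" triggers once roughly half of the pages that
were on the list are currently off it, which matches the "half of
each inactive list" limit in the changelog.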

-- 
All rights reversed.


Thread overview: 19+ messages
2009-07-16  2:38 [PATCH -mm] throttle direct reclaim when too many pages are isolated already Rik van Riel
2009-07-16  2:48 ` Andrew Morton
2009-07-16  3:10   ` Rik van Riel
2009-07-16  3:21     ` Andrew Morton
2009-07-16  3:28       ` Rik van Riel
2009-07-16  3:38         ` Andrew Morton
2009-07-16  3:42           ` Rik van Riel
2009-07-16  3:51             ` Andrew Morton
2009-07-16  3:53           ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already (v3) Rik van Riel
2009-07-16  4:02             ` Andrew Morton
2009-07-16  4:09               ` Rik van Riel
2009-07-16  4:26                 ` Andrew Morton
2009-07-29 15:04             ` Pavel Machek
2009-07-29 16:19               ` Rik van Riel [this message]
2009-07-16  3:36   ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already (v2) Rik van Riel
2009-07-16  3:19 ` [PATCH -mm] throttle direct reclaim when too many pages are isolated already KAMEZAWA Hiroyuki
2009-07-16  3:32   ` Rik van Riel
2009-07-16  3:42     ` KAMEZAWA Hiroyuki
2009-07-16  3:47       ` Rik van Riel
