public inbox for linux-kernel@vger.kernel.org
From: Paul Jackson <pj@sgi.com>
To: Christoph Lameter <clameter@engr.sgi.com>
Cc: ak@suse.de, linux-kernel@vger.kernel.org, akpm@osdl.org
Subject: Re: OOM behavior in constrained memory situations
Date: Mon, 6 Feb 2006 14:59:22 -0800	[thread overview]
Message-ID: <20060206145922.3eb3c404.pj@sgi.com> (raw)
In-Reply-To: <Pine.LNX.4.62.0602061253020.18594@schroedinger.engr.sgi.com>

Christoph wrote:
> There are situations in which memory allocations are restricted by policy, 
> by a cpuset or by type of allocation. 
> 
> I propose that we need different OOM behavior for the cases in which the
> user has imposed a limit on what type of memory to be allocated. In that 
> case the application should be terminated with OOM. The OOM killer should 
> not run.

I'll duck the discussion that followed your post as to whether some
sort of error or null return would be better than killing something.

If it is the case that some code path leads to the OOM killer, then
I don't agree that memory restrictions such as cpuset constraints
should mean we avoid the OOM killer.

I've already changed the OOM killer to only go after tasks in or
overlapping with the same cpuset.

static struct task_struct *select_bad_process(unsigned long *ppoints)
{
	...
	do_each_thread(g, p) {
		...
		/* If p's nodes don't overlap ours, it won't help to kill p. */
		if (!cpuset_excl_nodes_overlap(p))
			continue;
		...

I am guessing (you don't say) that your concern is that it seems
unfair for an app in some small cpuset to be able to trigger the
system-wide OOM killer.  The basic problem this caused, killing
unrelated processes in entirely non-overlapping cpusets (which did
nothing to relieve the memory stress in the faulting cpuset), is no
longer a problem.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@sgi.com> 1.925.600.0401

Thread overview: 22+ messages
2006-02-06 20:59 OOM behavior in constrained memory situations Christoph Lameter
2006-02-06 21:10 ` Andrew Morton
2006-02-06 21:22   ` Andi Kleen
2006-02-06 22:16     ` Christoph Lameter
2006-02-06 22:25       ` Andi Kleen
2006-02-06 22:30       ` Andrew Morton
2006-02-07  0:03         ` Christoph Lameter
2006-02-09 23:08           ` David Gibson
2006-02-06 22:11   ` Christoph Lameter
2006-02-06 22:26     ` Andrew Morton
2006-02-06 22:59 ` Paul Jackson [this message]
2006-02-07  0:39   ` Christoph Lameter
2006-02-07  1:55   ` Christoph Lameter
2006-02-07  9:23     ` Andi Kleen
2006-02-07 17:29       ` Christoph Lameter
2006-02-07 17:45         ` Andi Kleen
2006-02-07 17:51           ` Christoph Lameter
2006-02-07 17:58             ` Andi Kleen
2006-02-07 18:10               ` Christoph Lameter
2006-02-07 18:19               ` Christoph Lameter
2006-02-07 18:31                 ` Andi Kleen
2006-02-07 19:00                   ` Christoph Lameter
