public inbox for linux-kernel@vger.kernel.org
* oom kill oddness.
@ 2006-09-27 20:54 Dave Jones
  2006-09-27 23:59 ` Andrew Morton
  2006-09-28 23:03 ` Roman Zippel
  0 siblings, 2 replies; 9+ messages in thread
From: Dave Jones @ 2006-09-27 20:54 UTC (permalink / raw)
  To: Linux Kernel

So I have two boxes that are very similar.
Both have 2GB of RAM & 1GB of swap space.
One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.

The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
without incident. (Although it takes ~4 minutes longer than a -j2)

The faster box goes absolutely nuts, oomkilling everything in sight,
until eventually after about 10 minutes, the box locks up dead,
and won't even respond to pings.

Oh, the only other difference - the slower box has 1 disk, whereas the
faster box has two in RAID0.   I'm not surprised that stuff is getting
oom-killed given the pathological scenario, but the fact that the
box never recovered at all is a little odd.  Does md lack some means
of dealing with low memory scenarios ?

	Dave

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: oom kill oddness.
  2006-09-27 20:54 Dave Jones
@ 2006-09-27 23:59 ` Andrew Morton
  2006-09-28 23:03 ` Roman Zippel
  1 sibling, 0 replies; 9+ messages in thread
From: Andrew Morton @ 2006-09-27 23:59 UTC (permalink / raw)
  To: Dave Jones; +Cc: Linux Kernel

On Wed, 27 Sep 2006 16:54:35 -0400
Dave Jones <davej@redhat.com> wrote:

> So I have two boxes that are very similar.
> Both have 2GB of RAM & 1GB of swap space.
> One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
>
> The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> without incident. (Although it takes ~4 minutes longer than a -j2)
>
> The faster box goes absolutely nuts, oomkilling everything in sight,
> until eventually after about 10 minutes, the box locks up dead,
> and won't even respond to pings.
> 
> Oh, the only other difference - the slower box has 1 disk, whereas the
> faster box has two in RAID0.   I'm not surprised that stuff is getting
> oom-killed given the pathological scenario, but the fact that the
> box never recovered at all is a little odd.  Does md lack some means
> of dealing with low memory scenarios ?

Are you sure it isn't a memory leak?

Suggest you kill things just before it locks up, then have a look at
/proc/meminfo, /proc/slabinfo, sysrq-M, echo 3 > /proc/sys/vm/drop_caches,
etc.



* Re: oom kill oddness.
  2006-09-27 20:54 Dave Jones
  2006-09-27 23:59 ` Andrew Morton
@ 2006-09-28 23:03 ` Roman Zippel
  2006-09-29  0:17   ` Andrew Morton
  1 sibling, 1 reply; 9+ messages in thread
From: Roman Zippel @ 2006-09-28 23:03 UTC (permalink / raw)
  To: Dave Jones; +Cc: Linux Kernel

Hi,

On Wed, 27 Sep 2006, Dave Jones wrote:

> So I have two boxes that are very similar.
> Both have 2GB of RAM & 1GB of swap space.
> One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
> 
> The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> without incident. (Although it takes ~4 minutes longer than a -j2)
> 
> The faster box goes absolutely nuts, oomkilling everything in sight,
> until eventually after about 10 minutes, the box locks up dead,
> and won't even respond to pings.
> 
> Oh, the only other difference - the slower box has 1 disk, whereas the
> faster box has two in RAID0.   I'm not surprised that stuff is getting
> oom-killed given the pathological scenario, but the fact that the
> box never recovered at all is a little odd.  Does md lack some means
> of dealing with low memory scenarios ?

I think I see the same thing at the other end, on slow machines.  Here it
only takes a single compile job which doesn't quite fit into memory, plus
another task (like top) which occasionally wakes up and tries to allocate
memory - and then kills the compile job.  That's very annoying.

AFAICT the basic problem is that "did_some_progress" in __alloc_pages() is
rather local information: other processes can still make progress while
keeping this process from making any, so it gets grumpy and starts killing.
What's happening here is that most memory is either mapped or in the swap
cache, so we have a race between processes trying to free memory from the
cache and processes mapping memory back into their address space.

If someone wants to play with the problem, the example program below
triggers it relatively easily (booting with little RAM helps).  It starts
a number of readers, which touch a bit more memory than is available, and
a few writers, which occasionally allocate memory.

bye, Roman


#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define MEM_SIZE (24 << 20)

int main(int ac, char **av)
{
	volatile char *mem;
	int i, memsize;

	memsize = MEM_SIZE;
	if (ac > 1)
		memsize = atoi(av[1]) << 20;
	mem = malloc(memsize);
	if (!mem)
		exit(1);

	memset((char *)mem, 0, memsize);
	/* readers: keep touching random pages of a working set a bit
	 * larger than available memory */
	for (i = 0; i < 32; i++) {
		if (!fork()) {
			while (1) {
				*(mem + random() % memsize);
			}
		}
	}
	/* writers: wake up every few seconds and briefly allocate
	 * some memory */
	for (i = 0; i < 5; i++) {
		if (!fork()) {
			while (1) {
				volatile char *p;
				struct timespec ts;
				int t = random() % 5000;

				ts.tv_sec = t / 1000;
				ts.tv_nsec = (t % 1000) * 1000000;
				nanosleep(&ts, NULL);
				p = malloc(1 << 16);
				if (!p)
					continue;
				memset((char *)p, 0, 1 << 16);
				free((char *)p);
			}
		}
	}
	while (1)
		pause();
}


* Re: oom kill oddness.
  2006-09-28 23:03 ` Roman Zippel
@ 2006-09-29  0:17   ` Andrew Morton
  2006-09-29  0:22     ` Dave Jones
  2006-09-29  0:57     ` Roman Zippel
  0 siblings, 2 replies; 9+ messages in thread
From: Andrew Morton @ 2006-09-29  0:17 UTC (permalink / raw)
  To: Roman Zippel; +Cc: Dave Jones, Linux Kernel

On Fri, 29 Sep 2006 01:03:16 +0200 (CEST)
Roman Zippel <zippel@linux-m68k.org> wrote:

> Hi,
> 
> On Wed, 27 Sep 2006, Dave Jones wrote:
> 
> > So I have two boxes that are very similar.
> > Both have 2GB of RAM & 1GB of swap space.
> > One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
> > 
> > The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
> > without incident. (Although it takes ~4 minutes longer than a -j2)
> > 
> > The faster box goes absolutely nuts, oomkilling everything in sight,
> > until eventually after about 10 minutes, the box locks up dead,
> > and won't even respond to pings.
> > 
> > Oh, the only other difference - the slower box has 1 disk, whereas the
> > faster box has two in RAID0.   I'm not surprised that stuff is getting
> > oom-killed given the pathological scenario, but the fact that the
> > box never recovered at all is a little odd.  Does md lack some means
> > of dealing with low memory scenarios ?
> 
> I think I see the same thing at the other end, on slow machines.  Here it
> only takes a single compile job which doesn't quite fit into memory, plus
> another task (like top) which occasionally wakes up and tries to allocate
> memory - and then kills the compile job.  That's very annoying.
> 
> AFAICT the basic problem is that "did_some_progress" in __alloc_pages() is
> rather local information: other processes can still make progress while
> keeping this process from making any, so it gets grumpy and starts killing.
> What's happening here is that most memory is either mapped or in the swap
> cache, so we have a race between processes trying to free memory from the
> cache and processes mapping memory back into their address space.

Kernel versions please, guys.  There have been a lot of oom-killer changes
post-2.6.18.

> If someone wants to play with the problem, the example program below 
> triggers the problem relatively easily (booting with only little ram 
> helps), it starts a number of readers, which should touch a bit more 
> memory than is available and a few writers, which occasionally allocate 
> memory.
> 

How much ram, how much swap?



* Re: oom kill oddness.
  2006-09-29  0:17   ` Andrew Morton
@ 2006-09-29  0:22     ` Dave Jones
  2006-09-29  0:57     ` Roman Zippel
  1 sibling, 0 replies; 9+ messages in thread
From: Dave Jones @ 2006-09-29  0:22 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Roman Zippel, Linux Kernel

On Thu, Sep 28, 2006 at 05:17:06PM -0700, Andrew Morton wrote:
 > On Fri, 29 Sep 2006 01:03:16 +0200 (CEST)
 > Roman Zippel <zippel@linux-m68k.org> wrote:
 > 
 > > Hi,
 > > 
 > > On Wed, 27 Sep 2006, Dave Jones wrote:
 > > 
 > > > So I have two boxes that are very similar.
 > > > Both have 2GB of RAM & 1GB of swap space.
 > > > One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
 > > > 
 > > > The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
 > > > without incident. (Although it takes ~4 minutes longer than a -j2)
 > > > 
 > > > The faster box goes absolutely nuts, oomkilling everything in sight,
 > > > until eventually after about 10 minutes, the box locks up dead,
 > > > and won't even respond to pings.
 > > > 
 > > > Oh, the only other difference - the slower box has 1 disk, whereas the
 > > > faster box has two in RAID0.   I'm not surprised that stuff is getting
 > > > oom-killed given the pathological scenario, but the fact that the
 > > > box never recovered at all is a little odd.  Does md lack some means
 > > > of dealing with low memory scenarios ?
 > > 
 > > I think I see the same thing at the other end, on slow machines.  Here it
 > > only takes a single compile job which doesn't quite fit into memory, plus
 > > another task (like top) which occasionally wakes up and tries to allocate
 > > memory - and then kills the compile job.  That's very annoying.
 > > 
 > > AFAICT the basic problem is that "did_some_progress" in __alloc_pages() is
 > > rather local information: other processes can still make progress while
 > > keeping this process from making any, so it gets grumpy and starts killing.
 > > What's happening here is that most memory is either mapped or in the swap
 > > cache, so we have a race between processes trying to free memory from the
 > > cache and processes mapping memory back into their address space.
 > 
 > Kernel versions please, guys.  There have been a lot of oom-killer changes
 > post-2.6.18.

Sorry, I've been stuck on 2.6.18 as that's what we're shipping in FC6 soon.

	Dave


* Re: oom kill oddness.
  2006-09-29  0:17   ` Andrew Morton
  2006-09-29  0:22     ` Dave Jones
@ 2006-09-29  0:57     ` Roman Zippel
  2006-09-29  1:39       ` Nick Piggin
  1 sibling, 1 reply; 9+ messages in thread
From: Roman Zippel @ 2006-09-29  0:57 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Dave Jones, Linux Kernel

Hi,

On Thu, 28 Sep 2006, Andrew Morton wrote:

> Kernel versions please, guys.  There have been a lot of oom-killer changes
> post-2.6.18.

Last I tested this was with 2.6.18.
The latest changes to vmscan.c should help...

> > If someone wants to play with the problem, the example program below 
> > triggers the problem relatively easily (booting with only little ram 
> > helps), it starts a number of readers, which should touch a bit more 
> > memory than is available and a few writers, which occasionally allocate 
> > memory.
> > 
> 
> How much ram, how much swap?

I tested it with 32MB and 64MB and plenty of swap.

bye, Roman


* Re: oom kill oddness.
  2006-09-29  0:57     ` Roman Zippel
@ 2006-09-29  1:39       ` Nick Piggin
  0 siblings, 0 replies; 9+ messages in thread
From: Nick Piggin @ 2006-09-29  1:39 UTC (permalink / raw)
  To: Roman Zippel; +Cc: Andrew Morton, Dave Jones, Linux Kernel

Roman Zippel wrote:

>Hi,
>
>On Thu, 28 Sep 2006, Andrew Morton wrote:
>
>
>>Kernel versions please, guys.  There have been a lot of oom-killer changes
>>post-2.6.18.
>>
>
>Last I tested this was with 2.6.18.
>The latest changes to vmscan.c should help...
>

It would be good if you could confirm that. I basically got the kernel to
the point where it used up all swap before going OOM on the workload I
was looking at (MySQL running in virtual machines).



* Re: oom kill oddness.
@ 2006-09-29 20:03 Larry Woodman
  2006-09-29 21:34 ` Dave Jones
  0 siblings, 1 reply; 9+ messages in thread
From: Larry Woodman @ 2006-09-29 20:03 UTC (permalink / raw)
  To: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1227 bytes --]

>
>
>So I have two boxes that are very similar.
>Both have 2GB of RAM & 1GB of swap space.
>One has a 2.8GHz CPU, the other a 2.93GHz CPU, both dualcore.
>
>The slower box survives a 'make -j bzImage' of a 2.6.18 kernel tree
>without incident. (Although it takes ~4 minutes longer than a -j2)
>
>The faster box goes absolutely nuts, oomkilling everything in sight,
>until eventually after about 10 minutes, the box locks up dead,
>and won't even respond to pings.
>
>Oh, the only other difference - the slower box has 1 disk, whereas the
>faster box has two in RAID0.   I'm not surprised that stuff is getting
>oom-killed given the pathological scenario, but the fact that the
>box never recovered at all is a little odd.  Does md lack some means
>of dealing with low memory scenarios ?
>
>	Dave
>
Dave, this has been a problem since the out_of_memory() function was
changed between 2.6.10 and 2.6.11.  Before this change, out_of_memory()
required multiple calls within 5 seconds before actually OOM killing a
process.  After the change (in 2.6.11), a single call to out_of_memory()
results in OOM killing a process.  The following patch allows the 2.6.18
system to run under much more memory pressure before it OOM kills.




[-- Attachment #2: oomkill.patch --]
[-- Type: text/plain, Size: 2191 bytes --]

--- linux-2.6.18.noarch/mm/oom_kill.c.orig
+++ linux-2.6.18.noarch/mm/oom_kill.c
@@ -306,6 +306,69 @@ static int oom_kill_process(struct task_
 	return oom_kill_task(p, message);
 }
 
+int should_oom_kill(void)
+{
+	static spinlock_t oom_lock = SPIN_LOCK_UNLOCKED;
+	static unsigned long first, last, count, lastkill;
+	unsigned long now, since;
+	int ret = 0;
+
+	spin_lock(&oom_lock);
+	now = jiffies;
+	since = now - last;
+	last = now;
+
+	/*
+	 * If it's been a long time since last failure,
+	 * we're not oom.
+	 */
+	if (since > 5*HZ)
+		goto reset;
+
+	/*
+	 * If we haven't tried for at least one second,
+	 * we're not really oom.
+	 */
+	since = now - first;
+	if (since < HZ)
+		goto out_unlock;
+
+	/*
+	 * If we have gotten only a few failures,
+	 * we're not really oom.
+	 */
+	if (++count < 10)
+		goto out_unlock;
+
+	/*
+	 * If we just killed a process, wait a while
+	 * to give that task a chance to exit. This
+	 * avoids killing multiple processes needlessly.
+	 */
+	since = now - lastkill;
+	if (since < HZ*5)
+		goto out_unlock;
+
+	/*
+	 * Ok, really out of memory. Kill something.
+	 */
+	lastkill = now;
+	ret = 1;
+
+reset:
+/*
+ * We dropped the lock above, so check to be sure the variable
+ * first only ever increases to prevent false OOM's.
+ */
+	if (time_after(now, first))
+		first = now;
+	count = 0;
+
+out_unlock:
+	spin_unlock(&oom_lock);
+	return ret;
+}
+
 /**
  * out_of_memory - kill the "best" process when we run out of memory
  *
@@ -326,6 +389,9 @@ void out_of_memory(struct zonelist *zone
 		show_mem();
 	}
 
+	if (!should_oom_kill())
+		return;
+
 	cpuset_lock();
 	read_lock(&tasklist_lock);
 
--- linux-2.6.18.noarch/mm/vmscan.c.orig
+++ linux-2.6.18.noarch/mm/vmscan.c
@@ -999,10 +999,8 @@ unsigned long try_to_free_pages(struct z
 			reclaim_state->reclaimed_slab = 0;
 		}
 		total_scanned += sc.nr_scanned;
-		if (nr_reclaimed >= sc.swap_cluster_max) {
-			ret = 1;
+		if (nr_reclaimed >= sc.swap_cluster_max)
 			goto out;
-		}
 
 		/*
 		 * Try to write back as many pages as we just scanned.  This
@@ -1030,6 +1028,8 @@ out:
 
 		zone->prev_priority = zone->temp_priority;
 	}
+	if (nr_reclaimed)
+		ret = 1;
 	return ret;
 }
 


* Re: oom kill oddness.
  2006-09-29 20:03 oom kill oddness Larry Woodman
@ 2006-09-29 21:34 ` Dave Jones
  0 siblings, 0 replies; 9+ messages in thread
From: Dave Jones @ 2006-09-29 21:34 UTC (permalink / raw)
  To: Larry Woodman; +Cc: linux-kernel

On Fri, Sep 29, 2006 at 04:03:14PM -0400, Larry Woodman wrote:
 
 > Dave, this has been a problem since the out_of_memory() function was
 > changed between 2.6.10 and 2.6.11.  Before this change, out_of_memory()
 > required multiple calls within 5 seconds before actually OOM killing a
 > process.  After the change (in 2.6.11), a single call to out_of_memory()
 > results in OOM killing a process.  The following patch allows the 2.6.18
 > system to run under much more memory pressure before it OOM kills.

Some of these tests do seem to have been re-added in Linus' current tree.

[PATCH] oom: don't kill current when another OOM in progress

went in earlier today, for example.
I'm curious why these checks were ever removed in the first place, though.

	Dave



