From: Marcelo Tosatti <marcelo.tosatti@cyclades.com>
To: Roy Keene <rkeene@psislidell.com>
Cc: Alexander Nyberg <alexn@telia.com>,
Anthony DiSante <theant@nodivisions.com>,
andrea@suse.de, akpm@osdl.org,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: oom-killings, but I'm not out of memory!
Date: Sun, 3 Jul 2005 19:45:55 -0300
Message-ID: <20050703224555.GB21450@logos.cnet>
In-Reply-To: <Pine.LNX.4.62.0507032142290.25063@hammer.psislidell.com>
Hi Roy,
On Sun, Jul 03, 2005 at 09:44:37PM -0500, Roy Keene wrote:
> I think I'm having the same issue.
>
> I have 2 systems with 4GB of RAM and 2GB of swap that kill processes when
> they get a lot of disk I/O. I've attached the full dmesg output which
> includes portions where it killed stuff despite having massive amounts of
> free memory.
What kernel version is that?
> oom-killer: gfp_mask=0xd0
> Mem-info:
> DMA per-cpu:
> cpu 0 hot: low 2, high 6, batch 1
> cpu 0 cold: low 0, high 2, batch 1
> cpu 1 hot: low 2, high 6, batch 1
> cpu 1 cold: low 0, high 2, batch 1
> cpu 2 hot: low 2, high 6, batch 1
> cpu 2 cold: low 0, high 2, batch 1
> cpu 3 hot: low 2, high 6, batch 1
> cpu 3 cold: low 0, high 2, batch 1
> Normal per-cpu:
> cpu 0 hot: low 32, high 96, batch 16
> cpu 0 cold: low 0, high 32, batch 16
> cpu 1 hot: low 32, high 96, batch 16
> cpu 1 cold: low 0, high 32, batch 16
> cpu 2 hot: low 32, high 96, batch 16
> cpu 2 cold: low 0, high 32, batch 16
> cpu 3 hot: low 32, high 96, batch 16
> cpu 3 cold: low 0, high 32, batch 16
> HighMem per-cpu:
> cpu 0 hot: low 32, high 96, batch 16
> cpu 0 cold: low 0, high 32, batch 16
> cpu 1 hot: low 32, high 96, batch 16
> cpu 1 cold: low 0, high 32, batch 16
> cpu 2 hot: low 32, high 96, batch 16
> cpu 2 cold: low 0, high 32, batch 16
> cpu 3 hot: low 32, high 96, batch 16
> cpu 3 cold: low 0, high 32, batch 16
>
> Free pages: 14304kB (1664kB HighMem)
> Active:7971 inactive:994335 dirty:327523 writeback:25721 unstable:0 free:3576 slab:29113 mapped:7996 pagetables:341
There is about 100MB of writeout data in flight - I suppose that's too much.
A guess: can you switch from CFQ to a different I/O scheduler, or reduce its queue size?
IIRC you can do the latter by lowering /sys/block/<device>/queue/nr_requests.
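For example, something along these lines (just a sketch - "sda" is a placeholder
for whichever disk is doing the writeout, and switching the elevator at runtime
only works if the other scheduler is built into your kernel):

    # see which elevator is active and how deep its request queue is
    cat /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/nr_requests

    # try a smaller queue, e.g. 64 instead of the usual default of 128
    echo 64 > /sys/block/sda/queue/nr_requests

    # or switch away from cfq, e.g. to deadline
    echo deadline > /sys/block/sda/queue/scheduler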
> DMA free:12640kB min:16kB low:32kB high:48kB active:0kB inactive:0kB present:16384kB pages_scanned:877 all_unreclaimable? yes
> protections[]: 0 0 0
>
> Normal free:0kB min:928kB low:1856kB high:2784kB active:0kB inactive:739100kB present:901120kB pages_scanned:1556742 all_unreclaimable? yes
> protections[]: 0 0 0
You've got no reservations for the normal zone either.
What does /proc/sys/vm/lowmem_reserve_ratio look like?
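Something like this would show it (the reserve each lower zone keeps back is
roughly the size of the zones above it divided by that ratio, so smaller
numbers mean a larger reserve; the write below is only an illustration, don't
change it blindly):

    # one ratio per lower zone (DMA, Normal, ...)
    cat /proc/sys/vm/lowmem_reserve_ratio

    # illustrative values only: halving a ratio roughly doubles that zone's reserve
    echo "128 16" > /proc/sys/vm/lowmem_reserve_ratio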
Thread overview: 8+ messages
2005-06-28 16:24 oom-killings, but I'm not out of memory! Anthony DiSante
2005-06-28 16:44 ` Alexander Nyberg
2005-06-28 16:52 ` Anthony DiSante
2005-06-29 12:57 ` Alexander Nyberg
2005-07-03 20:53 ` Marcelo Tosatti
2005-07-04 2:44 ` Roy Keene
2005-07-03 22:45 ` Marcelo Tosatti [this message]
2005-07-04 3:52 ` Roy Keene