From: James Stevens <James.Stevens@jrcs.co.uk>
To: balbir@linux.vnet.ibm.com, kvm@vger.kernel.org
Subject: Re: KVM and the OOM-Killer
Date: Fri, 14 May 2010 09:43:04 +0100
Message-ID: <4BED0D18.80209@jrcs.co.uk>
In-Reply-To: <20100514082106.GG3296@balbir.in.ibm.com>
> Have you looked at memory cgroups and using that with limits with VMs?
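(For context, the suggestion would look roughly like this - a sketch
using the cgroup v1 memory controller of that era; the mount point,
group name and 4GB limit are illustrative, not something we actually
run:)

  # mount the memory controller and create a group per guest
  mkdir -p /cgroup/memory
  mount -t cgroup -o memory none /cgroup/memory
  mkdir /cgroup/memory/vm1
  # cap the group at 4GB, then move the qemu-kvm process in
  echo $((4*1024*1024*1024)) > /cgroup/memory/vm1/memory.limit_in_bytes
  echo $QEMU_PID > /cgroup/memory/vm1/tasks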
The problem was *NOT* that my VMs exhausted all memory. I know that is
what "normally" triggers the oom-killer, but you have to understand
that mine was a very different scenario, hence I wanted to bring it to
people's attention. I had about 10GB of *FREE* HIGHMEM and 34GB of
*FREE* SWAP when the oom-killer was activated - yep, it didn't make
sense to me either. If you want to study the logs:
https://bugzilla.kernel.org/show_bug.cgi?id=15058
It looks like the problem was LOWMEM exhaustion triggering the
oom-killer. Which is dumb, because it was cache that was exhausting
LOWMEM, and killing userland processes isn't a great way to deal with
that issue.
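(You can watch this happening: on a 32-bit kernel the low zone gets
its own lines in /proc/meminfo, however much total RAM there is, and
vm.vfs_cache_pressure is one knob for how hard the dentry/inode
caches get reclaimed - the figures below are illustrative, not from
my logs:)

  # the ~896MB low zone vs. HIGHMEM
  grep -iE 'lowtotal|lowfree|hightotal|highfree' /proc/meminfo
  #   LowTotal:     869384 kB      <- illustrative values
  #   LowFree:       12040 kB
  # default is 100; raising it reclaims dentries/inodes harder
  sysctl vm.vfs_cache_pressure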
[My] VMs generally allocate all their resources at start-up and that's
it. For reference, from /proc/meminfo at the time:
Committed_AS: 14345016 kB
I tried "vm.overcommit_memory=2" and that didn't help. On a 48GB
system, the oom-killer should NEVER be invoked with that kind of
memory profile. It's a quirk of running a 32-bit kernel with *so* much
memory, and of the way pre-2.6.33 kernels handled LOWMEM.
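(The arithmetic shows why mode 2 couldn't have saved us anyway - a
sketch, assuming the default overcommit_ratio of 50:)

  sysctl -w vm.overcommit_memory=2   # strict accounting
  sysctl -w vm.overcommit_ratio=50   # CommitLimit = SwapTotal
                                     #   + 50% of RAM
  grep Commit /proc/meminfo          # CommitLimit / Committed_AS
  # with 34GB swap + half of 48GB RAM, CommitLimit is ~58GB, so our
  # Committed_AS of ~14GB was nowhere near it - the kill came from
  # LOWMEM zone exhaustion, not address-space overcommit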
We've now moved all VM guests onto one server, in preparation for
re-installing the other with a 64-bit host O/S.
Tests with 2.6.33.3 (+ the latest qemu) appear to show this issue is
fixed in the latest kernel (I can see it has much improved LOWMEM
management), but we've only been running it for a few days, and the
problem can take 3 to 4 weeks to trigger.
FYI: we run about 100 VM guests on 7 VM hosts in five data centres -
mostly production, some development. We've been using KVM in a
production environment for a while now - starting [in production] at
about KVM-82 on 2.6.28 - our oldest live systems now are two on KVM-84
on 2.6.28.4 and they are rock solid (one gets more punishment than it
deserves) - but they only have 16GB, so they aren't seeing LOWMEM
exhaustion because their memory map is *so* much smaller.
James
Thread overview: 13+ messages
2010-05-13 12:20 KVM and the OOM-Killer James Stevens
2010-05-13 12:39 ` Avi Kivity
2010-05-13 13:39 ` James Stevens
2010-05-13 13:53 ` Avi Kivity
2010-05-13 18:55 ` David S. Ahern
2010-05-13 13:42 ` Johannes Stezenbach
2010-05-14 7:33 ` Athanasius
2010-05-14 8:10 ` James Stevens
2010-05-14 8:21 ` Balbir Singh
2010-05-14 8:43 ` James Stevens [this message]
2010-05-14 12:28 ` Balbir Singh
2010-05-14 13:01 ` James Stevens
2010-05-14 8:19 ` Balbir Singh