kvm.vger.kernel.org archive mirror
From: Avi Kivity <avi@redhat.com>
To: dbareiro@gmx.net, KVM General <kvm@vger.kernel.org>
Subject: Re: Very high memory usage with KVM
Date: Sun, 26 Jul 2009 14:31:57 +0300	[thread overview]
Message-ID: <4A6C3EAD.9040303@redhat.com> (raw)
In-Reply-To: <20090725174340.GA21733@defiant.freesoftware.org>

On 07/25/2009 08:43 PM, Daniel Bareiro wrote:
> Hi all!
>
> I have an installation with Ubuntu Hardy Heron server amd64 with KVM-62
> from Ubuntu repositories installed on an HP Proliant DL380 G5 with two
> Xeon E5405 quadcore processors and 16 GiB of RAM which has six VMs with
> the following configuration of memory:
>
> Hostname       |      RAM
> ===============+===============
> Ganimedes      |    2 GiB
> Os             |    1 GiB
> Aprender       |    2 GiB
> Aps0           |    2 GiB
> Aps2           |    4 GiB
> Ratatoskr      |    4 GiB
> ===============+===============
> TOTAL          |   15 GiB
>
>
> Initially the host was created with a 1 GiB swap partition (1 GiB more
> than what was left free for the host's own use), but over time this
> proved too small and I had to add a 7 GiB LV as additional swap, for a
> total of 8 GiB of swap, of which only 9% is currently free. Is this
> memory usage 'normal'?
>
> root@ss02:~# ps -e --sort -rss -Ho user,start_time,pid,pcpu,pmem,rss,size,vsz,args
> USER     START   PID %CPU %MEM   RSS    SZ    VSZ COMMAND
> [...]
> root     Jul06 27471 52.3 24.4 4023232 4292200 4350296   kvm<ratatoskr>
> root     Jul24  9955  137 23.8 3923620 4308592 4350308   kvm<aps2>
> root     Jul06  8751  5.8  8.3 1368228 2171808 2229888   kvm<aps0>
> root     Jul07  8565  2.7  5.2 862844 2204704 2246416   kvm<aprender>
> root     Apr22  7842  0.6  3.6 600072 2172056 2230136   kvm<ganimedes>
> root     Jul01  7944  0.6  2.0 334860 1119916 1177996   kvm<os>
>
> root@ss02:~# free
>               total       used       free     shared    buffers     cached
> Mem:      16463388   16377844      85544          0     894216      66328
> -/+ buffers/cache:   15417300    1046088
> Swap:      8319948    7621916     698032
>
>
> Could updating to KVM-84 or later improve this situation?
>    
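A quick way to sanity-check how the guests' resident memory adds up against physical RAM is to sum the RSS column for the kvm processes. A sketch, assuming the guest processes are named `kvm` as in the ps output above:

```shell
# Sum the resident set size (RSS, in KiB) of all kvm guest processes,
# as reported by ps, and print the total in GiB (1 GiB = 1048576 KiB).
ps -C kvm -o rss= | awk '{ total += $1 } END { printf "%.1f GiB resident\n", total / 1048576 }'
```

Comparing that total against the configured guest memory (15 GiB here) shows how much of the guests' allocation is actually resident versus swapped out.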

What is the storage configuration?  Are you using qcow2?  What are the 
image logical and physical sizes?
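One way to answer the image-size question is with `qemu-img info`, which reports both the logical (virtual) size and the space actually used on disk. A sketch; the image directory and `.qcow2` suffix are assumptions to adjust for the actual setup:

```shell
# Show logical (virtual) vs physical (disk) size for each guest image.
# /var/lib/libvirt/images/*.qcow2 is an assumed location -- adjust as needed.
for img in /var/lib/libvirt/images/*.qcow2; do
    echo "== $img =="
    qemu-img info "$img" | grep -E '^(virtual size|disk size)'
done
```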

What is the host kernel (uname -a)?

-- 
error compiling committee.c: too many arguments to function


Thread overview: 12+ messages
2009-07-25 17:43 Very high memory usage with KVM Daniel Bareiro
2009-07-26 11:31 ` Avi Kivity [this message]
2009-07-26 14:56   ` Daniel Bareiro
2009-07-26 15:11     ` Avi Kivity
2009-07-26 15:50       ` Daniel Bareiro
2009-07-26 16:19         ` Avi Kivity
2009-08-08  0:54           ` Daniel Bareiro
2009-08-09  9:12             ` Avi Kivity
2009-08-10  6:40             ` Bernhard Held
2009-08-10 15:22               ` Daniel Bareiro
2009-08-10 16:15                 ` Bernhard Held
2009-08-22  2:28                   ` Daniel Bareiro
