From: Dan Magenheimer <dan.magenheimer@oracle.com>
To: Andi Kleen <andi@firstfloor.org>, Daniel Kiper <dkiper@net-space.pl>
Cc: jeremy@goop.org, xen-devel@lists.xensource.com,
linux-kernel@vger.kernel.org
Subject: RE: Re: GSoC 2010 - Migration from memory ballooning to memory hotplug in Xen
Date: Thu, 8 Jul 2010 16:12:01 -0700 (PDT)
Message-ID: <592dc9b4-d329-4ce3-a5a1-9e6e7044b90c@default>
In-Reply-To: <871vbdr4ey.fsf@basil.nowhere.org>
> From: Andi Kleen [mailto:andi@firstfloor.org]
>
> Daniel Kiper <dkiper@net-space.pl> writes:
> >
> > OK, let's go to details. When I was playing with Xen I saw that
> > ballooning does not give the possibility to extend memory over the
> > boundary declared at the start of the system. Yes, I know that is
> > by design; however, I thought that it is a limitation which could
> > be very annoying in some environments (I am thinking especially of
> > servers). That is why I decided to develop some code which removes
> > it. At the beginning I thought that ballooning should simply be
> > replaced by memory hotplug; however, after some tests and a
> > discussion with Jeremy we decided to link ballooning (for memory
> > removal) with memory hotplug (for extending memory above the
> > boundary declared at the startup of the system). Additionally, we
> > decided to implement this solution for Linux Xen guests in all
> > forms (PV/i386,x86_64 and HVM/i386,x86_64).
>
> While you can do that, the value is not very large, because you
> could just start the guests with more memory, but ballooned in
> the first place (so that they don't actually use it).
>
> The only advantage of using memory hotadd is that the mem_map doesn't
> need to be pre-allocated, but that's only a few percent of the memory.
>
> So it would only help if you want to add gigantic amounts of memory
> to a VM (like >20-30x of what it already has).
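For scale: on x86_64 a struct page is on the order of 64 bytes and
describes a 4 KiB page, so the pre-allocated mem_map costs roughly
64/4096, about 1.6%, of the memory it covers. That is why the
break-even point sits at the 20-30x above: only when maxmem is that
large a multiple of the populated size does the saved mem_map grow
to a third to a half of the memory the guest actually has.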
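To make the ballooning/hotplug split Daniel describes concrete, here
is a minimal sketch of what the guest-side target handler might look
like. The names current_pages(), balloon_in(), balloon_out() and
boot_max_pfn are hypothetical stand-ins, not the actual Xen balloon
driver interface; add_memory() is the generic Linux memory-hotplug
entry point.

/*
 * Sketch only: shrink via the balloon, grow past the boot-time
 * boundary via memory hotplug.  current_pages(), balloon_in(),
 * balloon_out() and boot_max_pfn are hypothetical helpers;
 * add_memory() is the generic Linux hotplug entry point.
 */
#include <linux/memory_hotplug.h>
#include <linux/pfn.h>

static unsigned long boot_max_pfn;	/* highest PFN present at boot */

static int set_memory_target(unsigned long target_pages)
{
	unsigned long cur = current_pages();

	if (target_pages <= cur)
		/* plain ballooning: hand pages back to the hypervisor */
		return balloon_out(cur - target_pages);

	if (target_pages <= boot_max_pfn)
		/* still below the boot-time boundary: repopulate */
		return balloon_in(target_pages - cur);

	/*
	 * Above the boundary: hot-add a new memory section first,
	 * then let the balloon populate it from the hypervisor.
	 */
	return add_memory(0, PFN_PHYS(boot_max_pfn),
			  PFN_PHYS(target_pages - boot_max_pfn));
}

In a real driver the hot-added section would still have to be
onlined, and the balloon would have to track which of its pages are
actually backed by the hypervisor, but the decision above is the
core of the design.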
One can envision a scenario where a cloud customer launches a
business-critical VM with some reasonably large "maxmem" set,
balloons up to the max, then finds out it isn't enough after
all and would like to avoid rebooting. Or a cloud provider
might charge for a specific maxmem, but allow the customer
to increase maxmem if they pay more money.
Dan