From: Luiz Capitulino <lcapitulino@redhat.com>
To: Adam Litke <alitke@redhat.com>
Cc: msivak@redhat.com, Daniel Kiper <daniel.kiper@oracle.com>,
	dfediuck@redhat.com, virtualization@lists.linuxfoundation.org
Subject: Re: Users of ballooning, please come forth!
Date: Thu, 20 Feb 2014 08:42:46 -0500	[thread overview]
Message-ID: <20140220084246.2b22c0bc@redhat.com> (raw)
In-Reply-To: <20140220131706.GB18487@redhat.com>

On Thu, 20 Feb 2014 08:17:06 -0500
Adam Litke <alitke@redhat.com> wrote:

> On 20/02/14 14:53 +1030, Rusty Russell wrote:
> >Adam Litke <alitke@redhat.com> writes:
> >>> On Tue Feb 11 06:01:10 UTC 2014, Rusty Russell wrote:
> >>> Hi all!
> >>>
> >>>         We're debating the design of the balloon for the OASIS spec.
> >>> No one likes the current one, but there are fundamental usage pattern
> >>> questions which we're fumbling with.
> >>>
> >>> So, do you know anyone who is using it in production?  If so, how?  In
> >>> particular, would you be happy with guests simply giving the host back
> >>> whatever memory they can spare (as Xen's self-balloon does)?  Or do
> >>> you require the host-forcing approach?  Comment or email, please!
> >>
> >> Hi Rusty,
> >>
> >> I do not maintain any production setups but I have played with
> >> ballooning (especially automatic ballooning) for quite some time now.
> >> Most recently, I am working with the oVirt project [1] to enable
> >> memory over-commitment and offer SLAs around VM memory usage.
> >
> >Hi Adam,
> >
> >        Thanks for the comprehensive thoughts.
> >
> >> To address the question about whether the Xen self-balloon approach
> >> would be enough...  I think a guest-driven approach such as this would
> >> work very well in self-hosted/private cloud deployments where a single
> >> entity owns all of the virtual machines that are sharing memory.  As
> >> soon as you move to a "public" cloud environment where multiple
> >> customers are sharing a single host, you will need a "bad cop" to
> >> enforce some limits.  (Yes, I know ballooning always requires guest
> >> cooperation, but when you combine it with punitive cgroups on the host
> >> the guest has a compelling reason to cooperate.)  When I say "bad
> >> cop", I mean a completely host-controlled balloon as we currently do
> >> in oVirt with the Memory Overcommitment Manager [2].  This allows
> >> customers to expect a certain minimum amount of performance.
> >
> >It's interesting that Dan Magenheimer made the opposite point: that
> >if you're charging customers by the MB of memory, it's easy to get them
> >to balloon themselves.
> 
> Sure, it's all about how the incentives are structured and what the
> workload is.  Some people will insist on having a certain amount of
> memory "reserved" and available immediately.  If you meter memory
> usage, you would certainly shift the burden of conservation onto the
> guest, and some customers may prefer that.
> 
> >
> >> In order to support both modes of operation (at the same time), how
> >> about supporting two virtio configuration variables in the balloon
> >> driver: auto_min and auto_max.  These variables would allow the host
> >> to restrict the range in which the auto-balloon algorithm may operate.
> >> Setting both to 0 would disable auto-ballooning and require all
> >> inflate/deflate commands to come from the host.  I think there are
> >> some very interesting possibilities how auto-balloon can be combined
> >> with host directed ballooning to yield good results in a variety of
> >> configurations [3].
> >
> >I think we're headed to the same destination here; the variant which I
> >came up with (and suggested to Daniel and Luiz, CC'd) is similar: the
> >guest self-balloons, giving up pages when it can, but the host sets a
> >ceiling.
> >
> >This way, if the host really needs to set a limit, it can: a disobedient
> >guest will start paging.  But generally, a guest should use its
> >judgement to balloon its own pages as it can (below the ceiling).
> 
> It sounds similar, but you seem to be suggesting one limit value while
> I am suggesting two.  Your ceiling value sounds like a soft limit on
> total guest memory (aka a minimum balloon size).  This is the more
> important limit of the two I have suggested.  Do you think it's also
> worthwhile to have a maximum balloon size (a floor on guest memory)
> to keep the allowable balloon size between two points?

I was already planning for everything you asked (QMP commands to
disable/enable automatic ballooning, plus min and max sizes). I still
have to think a bit about how those settings will fit into the guest-led
design, but it should be fine in principle.
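
To make that concrete, here is a rough sketch of how I picture the
config space and the guest-side clamping. All of the names below are
made up for the sake of discussion -- nothing like this is in the spec
or in QEMU today:

  #include <stdint.h>

  /*
   * Hypothetical balloon config layout: num_pages/actual as today,
   * plus a host-imposed window for the automatic algorithm.  All
   * sizes are balloon sizes, in pages.
   */
  struct virtio_balloon_config_sketch {
      uint32_t num_pages; /* host-requested target (existing field) */
      uint32_t actual;    /* pages currently in the balloon (existing field) */
      uint32_t auto_min;  /* minimum balloon size, i.e. Rusty's ceiling
                             on total guest memory */
      uint32_t auto_max;  /* maximum balloon size, i.e. a floor on
                             total guest memory */
  };

  /*
   * Guest-side policy: derive a preferred balloon size from local
   * memory pressure, then clamp it to the host's window.  Following
   * Adam's convention, auto_min == auto_max == 0 disables automatic
   * ballooning and the guest simply obeys num_pages.
   */
  static uint32_t
  choose_balloon_target(const struct virtio_balloon_config_sketch *cfg,
                        uint32_t guest_preferred)
  {
      if (cfg->auto_min == 0 && cfg->auto_max == 0)
          return cfg->num_pages;    /* fully host-driven */
      if (guest_preferred < cfg->auto_min)
          return cfg->auto_min;     /* must stay at least this inflated */
      if (guest_preferred > cfg->auto_max)
          return cfg->auto_max;     /* may not inflate beyond this */
      return guest_preferred;
  }

The QMP commands would then be thin wrappers that let management (MoM,
in the oVirt case) move auto_min/auto_max at runtime.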
