virtualization.lists.linux-foundation.org archive mirror
* Users of ballooning, please come forth!
From: Rusty Russell @ 2014-02-11  6:01 UTC
  To: virtualization

Hi all!

        We're debating the design of the balloon for the OASIS spec.
No one likes the current one, but there are fundamental usage pattern
questions which we're fumbling with.

So, do you know anyone who is using it in production?  If so, how?  In
particular, would you be happy with guests simply giving the host back
whatever memory they can spare (as Xen's self-balloon does)?  Or do you
require the host-forcing approach?  Comment or email please!

Thanks,
Rusty.


* Re: Users of ballooning, please come forth!
From: Adam Litke @ 2014-02-19 14:49 UTC
  To: rusty; +Cc: msivak, virtualization, dfediuck

> On Tue Feb 11 06:01:10 UTC 2014, Rusty Russell wrote:
> Hi all!
> 
>         We're debating the design of the balloon for the OASIS spec.
> No one likes the current one, but there are fundamental usage pattern
> questions which we're fumbling with.
> 
> So, do you know anyone who is using it in production?  If so, how?  In
> particular, would you be happy with guests simply giving the host back
> whatever memory they can spare (as Xen's self-balloon does)?  Or do
> you
> require the host-forcing approach?  Comment or email please!

Hi Rusty,

I do not maintain any production setups but I have played with
ballooning (especially automatic ballooning) for quite some time now.
Most recently, I am working with the oVirt project [1] to enable
memory over-commitment and offer SLAs around VM memory usage.

To address the question about whether the Xen self-balloon approach
would be enough...  I think a guest-driven approach such as this would
work very well in self-hosted/private cloud deployments where a single
entity owns all of the virtual machines that are sharing memory.  As
soon as you move to a "public" cloud environment where multiple
customers are sharing a single host, you will need a "bad cop" to
enforce some limits.  (Yes, I know ballooning always requires guest
cooperation, but when you combine it with punitive cgroups on the host
the guest has a compelling reason to cooperate.)  When I say "bad
cop", I mean a completely host-controlled balloon as we currently do
in oVirt with the Memory Overcommitment Manager [2].  This allows
customers to expect a certain minimum amount of performance.

To support both modes of operation (at the same time), how
about supporting two virtio configuration variables in the balloon
driver: auto_min and auto_max.  These variables would allow the host
to restrict the range in which the auto-balloon algorithm may operate.
Setting both to 0 would disable auto-ballooning and require all
inflate/deflate commands to come from the host.  I think there are
some very interesting possibilities for how auto-balloon can be combined
with host directed ballooning to yield good results in a variety of
configurations [3].
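
To make that concrete, here is a rough sketch of how the device config
space might look.  num_pages and actual are the fields the driver
already has; auto_min and auto_max are the proposed additions, and the
exact layout is just for illustration:

/* Illustrative only -- not the layout of any existing spec or driver. */
struct virtio_balloon_config {
        /* Existing fields: the host's requested target and the current
         * balloon size, both counted in pages. */
        __le32 num_pages;
        __le32 actual;

        /* Proposed fields: the range, in pages, within which the guest's
         * auto-balloon logic may move the balloon on its own.  Setting
         * both to 0 disables auto-ballooning entirely. */
        __le32 auto_min;
        __le32 auto_max;
};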

[1] http://www.ovirt.org/Home
[2] http://www.ovirt.org/MoM

[3] While composing this email I thought of an idea for making limited
use of auto-balloon in a public cloud environment to provide the host
with a memory stress heuristic for guests.  In this scenario, auto_min
and auto_max would be zero (most of the time) and ballooning would be
controlled by MOM.  Occasionally, auto_min and auto_max would be set
to values slightly above and below the current balloon size.  MOM
would then observe the change in balloon size to gauge whether the
guest currently has a memory surplus or deficit.
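
In host-side pseudo-C, the probe might look something like this (every
helper, type, and constant below is made up for the example; none of it
is an existing MOM or libvirt interface):

/* Sketch only: the guest handle, helpers and constants below are
 * stand-ins for whatever MOM/libvirt would really provide. */
#include <stdint.h>
#include <unistd.h>

#define PROBE_PAGES   256     /* arbitrary probe window for the example */
#define PROBE_SECONDS 5

struct guest;                                    /* opaque per-guest handle */
extern uint32_t balloon_size(struct guest *g);   /* current size, in pages  */
extern void set_auto_range(struct guest *g, uint32_t min, uint32_t max);

enum mem_state { MEM_DEFICIT = -1, MEM_NEUTRAL = 0, MEM_SURPLUS = 1 };

enum mem_state probe_guest_memory_pressure(struct guest *g)
{
        uint32_t before = balloon_size(g);
        uint32_t lo = before > PROBE_PAGES ? before - PROBE_PAGES : 0;

        /* Briefly let the guest drift a little in either direction. */
        set_auto_range(g, lo, before + PROBE_PAGES);
        sleep(PROBE_SECONDS);

        uint32_t after = balloon_size(g);

        /* auto_min == auto_max == 0: back to fully host-driven ballooning. */
        set_auto_range(g, 0, 0);

        if (after > before)
                return MEM_SURPLUS;   /* guest inflated: pages to spare      */
        if (after < before)
                return MEM_DEFICIT;   /* guest deflated: it wanted pages back */
        return MEM_NEUTRAL;
}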

-- 
Adam Litke


* Re: Users of ballooning, please come forth!
From: Rusty Russell @ 2014-02-20  4:23 UTC
  To: Adam Litke
  Cc: msivak, Daniel Kiper, virtualization, dfediuck, Luiz Capitulino

Adam Litke <alitke@redhat.com> writes:
>> On Tue Feb 11 06:01:10 UTC 2014, Rusty Russell wrote:
>> Hi all!
>> 
>>         We're debating the design of the balloon for the OASIS spec.
>> No one likes the current one, but there are fundamental usage pattern
>> questions which we're fumbling with.
>> 
>> So, do you know anyone who is using it in production?  If so, how?  In
>> particular, would you be happy with guests simply giving the host back
>> whatever memory they can spare (as Xen's self-balloon does)?  Or do
>> you
>> require the host-forcing approach?  Comment or email please!
>
> Hi Rusty,
>
> I do not maintain any production setups but I have played with
> ballooning (especially automatic ballooning) for quite some time now.
> Most recently, I am working with the oVirt project [1] to enable
> memory over-commitment and offer SLAs around VM memory usage.

Hi Adam,

        Thanks for the comprehensive thoughts.

> To address the question about whether the Xen self-balloon approach
> would be enough...  I think a guest-driven approach such as this would
> work very well in self-hosted/private cloud deployments where a single
> entity owns all of the virtual machines that are sharing memory.  As
> soon as you move to a "public" cloud environment where multiple
> customers are sharing a single host, you will need a "bad cop" to
> enforce some limits.  (Yes, I know ballooning always requires guest
> cooperation, but when you combine it with punitive cgroups on the host
> the guest has a compelling reason to cooperate.)  When I say "bad
> cop", I mean a completely host-controlled balloon as we currently do
> in oVirt with the Memory Overcommitment Manager [2].  This allows
> customers to expect a certain minimum amount of performance.

It's interesting that Dan Magenheimer made the opposite point: that
if you're charging customers by the MB of memory, it's easy to get them
to balloon themselves.

> To support both modes of operation (at the same time), how
> about supporting two virtio configuration variables in the balloon
> driver: auto_min and auto_max.  These variables would allow the host
> to restrict the range in which the auto-balloon algorithm may operate.
> Setting both to 0 would disable auto-ballooning and require all
> inflate/deflate commands to come from the host.  I think there are
> some very interesting possibilities for how auto-balloon can be combined
> with host directed ballooning to yield good results in a variety of
> configurations [3].

I think we're headed to the same destination here; the variant which I
came up with (and suggested to Daniel and Luiz, CC'd) is similar: the
guest self-balloons, giving up pages when it can, but the host sets a
ceiling.

This way, if the host really needs to set a limit, it can: a disobedient
guest will start paging.  But generally, a guest should use its
judgement to balloon its own pages as it can (below the ceiling).
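
In pseudo-C, the guest-side policy boils down to something like the
sketch below; every name in it is invented for illustration, nothing is
from the spec or the current driver:

/* Sketch only: every name below is invented; nothing comes from the
 * spec or the existing balloon driver. */
#include <stdint.h>

struct balloon;  /* opaque driver state */

extern uint32_t total_guest_pages(struct balloon *b);
extern uint32_t read_host_ceiling(struct balloon *b);      /* pages we may keep */
extern uint32_t preferred_balloon_size(struct balloon *b); /* guest's judgement */
extern void set_balloon_target(struct balloon *b, uint32_t pages);

static void balloon_update(struct balloon *b)
{
        uint32_t total   = total_guest_pages(b);
        uint32_t ceiling = read_host_ceiling(b);
        uint32_t wanted  = preferred_balloon_size(b);

        /* Staying under the ceiling means keeping at least this many
         * pages in the balloon, however generous the guest feels. */
        uint32_t required = (total > ceiling) ? total - ceiling : 0;

        set_balloon_target(b, wanted > required ? wanted : required);
}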

Thoughts?
Rusty.


* Re: Users of ballooning, please come forth!
From: Adam Litke @ 2014-02-20 13:17 UTC
  To: Rusty Russell
  Cc: msivak, Daniel Kiper, virtualization, dfediuck, Luiz Capitulino

On 20/02/14 14:53 +1030, Rusty Russell wrote:
>Adam Litke <alitke@redhat.com> writes:
>>> On Tue Feb 11 06:01:10 UTC 2014, Rusty Russell wrote:
>>> Hi all!
>>>
>>>         We're debating the design of the balloon for the OASIS spec.
>>> No one likes the current one, but there are fundamental usage pattern
>>> questions which we're fumbling with.
>>>
>>> So, do you know anyone who is using it in production?  If so, how?  In
>>> particular, would you be happy with guests simply giving the host back
>>> whatever memory they can spare (as Xen's self-balloon does)?  Or do
>>> you
>>> require the host-forcing approach?  Comment or email please!
>>
>> Hi Rusty,
>>
>> I do not maintain any production setups but I have played with
>> ballooning (especially automatic ballooning) for quite some time now.
>> Most recently, I am working with the oVirt project [1] to enable
>> memory over-commitment and offer SLAs around VM memory usage.
>
>Hi Adam,
>
>        Thanks for the comprehensive thoughts.
>
>> To address the question about whether the Xen self-balloon approach
>> would be enough...  I think a guest-driven approach such as this would
>> work very well in self-hosted/private cloud deployments where a single
>> entity owns all of the virtual machines that are sharing memory.  As
>> soon as you move to a "public" cloud environment where multiple
>> customers are sharing a single host, you will need a "bad cop" to
>> enforce some limits.  (Yes, I know ballooning always requires guest
>> cooperation, but when you combine it with punitive cgroups on the host
>> the guest has a compelling reason to cooperate.)  When I say "bad
>> cop", I mean a completely host-controlled balloon as we currently do
>> in oVirt with the Memory Overcommitment Manager [2].  This allows
>> customers to expect a certain minimum amount of performance.
>
>It's interesting that Dan Magenheimer made the opposite point: that
>if you're charging customers by the MB of memory, it's easy to get them
>to balloon themselves.

Sure, it's all about how the incentives are structured and what the
workload is.  Some people will insist on having a certain amount of
memory "reserved" and available immediately.  If you meter memory
usage, you would certainly shift the burden of conservation onto the
guest, and this could be preferred by some customers.

>
>> To support both modes of operation (at the same time), how
>> about supporting two virtio configuration variables in the balloon
>> driver: auto_min and auto_max.  These variables would allow the host
>> to restrict the range in which the auto-balloon algorithm may operate.
>> Setting both to 0 would disable auto-ballooning and require all
>> inflate/deflate commands to come from the host.  I think there are
>> some very interesting possibilities for how auto-balloon can be combined
>> with host directed ballooning to yield good results in a variety of
>> configurations [3].
>
>I think we're headed to the same destination here; the variant which I
>came up with (and suggested to Daniel and Luiz, CC'd) is similar: the
>guest self-balloons, giving up pages when it can, but the host sets a
>ceiling.
>
>This way, if the host really needs to set a limit, it can: a disobedient
>guest will start paging.  But generally, a guest should use its
>judgement to balloon its own pages as it can (below the ceiling).

It sounds similar, but you are suggesting one limit value and I am
suggesting two.  Your ceiling value sounds like a soft
limit on total guest memory (aka minimum balloon size).  This is the
more important limit of the two I have suggested.  Do you think it's
also worthwhile to have a maximum balloon size (floor value) to keep
the allowable balloon size between two points?
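
In code terms the difference is just one clamp versus two.  Something
like the sketch below, where min_balloon would enforce your ceiling on
guest memory and max_balloon would be the extra limit I am asking about
(all names invented for the example):

#include <stdint.h>

/* Illustration only: the guest's desired balloon size clamped by host limits. */
static uint32_t clamp_balloon_target(uint32_t wanted,
                                     uint32_t min_balloon,  /* your ceiling   */
                                     uint32_t max_balloon)  /* my extra limit */
{
        if (wanted < min_balloon)
                wanted = min_balloon;
        if (wanted > max_balloon)
                wanted = max_balloon;
        return wanted;
}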

-- 
Adam Litke


* Re: Users of ballooning, please come forth!
From: Luiz Capitulino @ 2014-02-20 13:42 UTC
  To: Adam Litke; +Cc: msivak, Daniel Kiper, dfediuck, virtualization

On Thu, 20 Feb 2014 08:17:06 -0500
Adam Litke <alitke@redhat.com> wrote:

> On 20/02/14 14:53 +1030, Rusty Russell wrote:
> >Adam Litke <alitke@redhat.com> writes:
> >>> On Tue Feb 11 06:01:10 UTC 2014, Rusty Russell wrote:
> >>> Hi all!
> >>>
> >>>         We're debating the design of the balloon for the OASIS spec.
> >>> No one likes the current one, but there are fundamental usage pattern
> >>> questions which we're fumbling with.
> >>>
> >>> So, do you know anyone who is using it in production?  If so, how?  In
> >>> particular, would you be happy with guests simply giving the host back
> >>> whatever memory they can spare (as Xen's self-balloon does)?  Or do
> >>> you
> >>> require the host-forcing approach?  Comment or email please!
> >>
> >> Hi Rusty,
> >>
> >> I do not maintain any production setups but I have played with
> >> ballooning (especially automatic ballooning) for quite some time now.
> >> Most recently, I am working with the oVirt project [1] to enable
> >> memory over-commitment and offer SLAs around VM memory usage.
> >
> >Hi Adam,
> >
> >        Thanks for the comprehensive thoughts.
> >
> >> To address the question about whether the Xen self-balloon approach
> >> would be enough...  I think a guest-driven approach such as this would
> >> work very well in self-hosted/private cloud deployments where a single
> >> entity owns all of the virtual machines that are sharing memory.  As
> >> soon as you move to a "public" cloud environment where multiple
> >> customers are sharing a single host, you will need a "bad cop" to
> >> enforce some limits.  (Yes, I know ballooning always requires guest
> >> cooperation, but when you combine it with punitive cgroups on the host
> >> the guest has a compelling reason to cooperate.)  When I say "bad
> >> cop", I mean a completely host-controlled balloon as we currently do
> >> in oVirt with the Memory Overcommitment Manager [2].  This allows
> >> customers to expect a certain minimum amount of performance.
> >
> >It's interesting that Dan Magenheimer made the opposite point: that
> >if you're charging customers by the MB of memory, it's easy to get them
> >to balloon themselves.
> 
> Sure, it's all about how the incentives are structured and what the
> workload is.  Some people will insist on having a certain amount of
> memory "reserved" and available immediately.  If you meter memory
> usage, you would certainly shift the burden of conservation onto the
> guest, and this could be preferred by some customers.
> 
> >
> >> To support both modes of operation (at the same time), how
> >> about supporting two virtio configuration variables in the balloon
> >> driver: auto_min and auto_max.  These variables would allow the host
> >> to restrict the range in which the auto-balloon algorithm may operate.
> >> Setting both to 0 would disable auto-ballooning and require all
> >> inflate/deflate commands to come from the host.  I think there are
> >> some very interesting possibilities for how auto-balloon can be combined
> >> with host directed ballooning to yield good results in a variety of
> >> configurations [3].
> >
> >I think we're headed to the same destination here; the variant which I
> >came up with (and suggested to Daniel and Luiz, CC'd) is similar: the
> >guest self-balloons, giving up pages when it can, but the host sets a
> >ceiling.
> >
> >This way, if the host really needs to set a limit, it can: a disobedient
> >guest will start paging.  But generally, a guest should use its
> >judgement to balloon its own pages as it can (below the ceiling).
> 
> It sounds similar, but you are suggesting one limit value and I am
> suggesting two.  Your ceiling value sounds like a soft
> limit on total guest memory (aka minimum balloon size).  This is the
> more important limit of the two I have suggested.  Do you think it's
> also worthwhile to have a maximum balloon size (floor value) to keep
> the allowable balloon size between two points?

I was already planning for everything you asked about (QMP commands to
disable/enable automatic ballooning, plus min and max sizes). I still
have to think a bit about how those settings will fit in the guest-led
design, but it should be fine in principle.


* Re: Users of ballooning, please come forth!
From: Rusty Russell @ 2014-02-21  1:28 UTC
  To: Adam Litke
  Cc: msivak, Daniel Kiper, virtualization, dfediuck, Luiz Capitulino

Adam Litke <alitke@redhat.com> writes:
> On 20/02/14 14:53 +1030, Rusty Russell wrote:
>>I think we're headed to the same destination here; the variant which I
>>came up with (and suggested to Daniel and Luiz, CC'd) is similar: the
>>guest self-balloons, giving up pages when it can, but the host sets a
>>ceiling.
>>
>>This way, if the host really needs to set a limit, it can: a disobedient
>>guest will start paging.  But generally, a guest should use its
>>judgement to balloon its own pages as it can (below the ceiling).
>
> It sounds similar, but you are suggesting one limit value and I am
> suggesting two.  Your ceiling value sounds like a soft
> limit on total guest memory (aka minimum balloon size).  This is the
> more important limit of the two I have suggested.  Do you think it's
> also worthwhile to have a maximum balloon size (floor value) to keep
> the allowable balloon size between two points?

It's a little simpler to have a ceiling only.

And if everyone (guests and host alike) feels like they have plenty of
memory, the spare memory is probably best sitting in the host.

1) It can quickly go to a guest if necessary.
2) The host could coalesce/shuffle memory.
3) The host could turn off RAM to save power.

Cheers,
Rusty.


Thread overview: 6 messages
2014-02-11  6:01 Users of ballooning, please come forth! Rusty Russell
2014-02-19 14:49 ` Adam Litke
2014-02-20  4:23   ` Rusty Russell
2014-02-20 13:17     ` Adam Litke
2014-02-20 13:42       ` Luiz Capitulino
2014-02-21  1:28       ` Rusty Russell
