xen-devel.lists.xenproject.org archive mirror
* Q about System-wide Memory Management Strategies
@ 2010-08-02 21:38 Joanna Rutkowska
  2010-08-02 23:57 ` Dan Magenheimer
  0 siblings, 1 reply; 8+ messages in thread
From: Joanna Rutkowska @ 2010-08-02 21:38 UTC (permalink / raw)
  To: xen-devel@lists.xensource.com, Dan Magenheimer; +Cc: qubes-devel



Dan, Xen.org'ers,

I have a few questions regarding strategies for optimal memory
assignment among VMs (PV DomUs and Dom0, all Linux-based).

We've been thinking about implementing a "Direct Ballooning" strategy
(as described on slide #20 of Dan's slides [1]), i.e. writing a daemon
that would run in Dom0 and, based on statistics provided by ballond
daemons running in the DomUs, adjust the memory assigned to all VMs in
the system (via xm mem-set).
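
For concreteness, here is a rough sketch of the kind of Dom0 daemon we
have in mind (untested; the VM names and the xenstore key that ballond
would publish its stats under are made up for illustration):

#!/usr/bin/env python
# Hypothetical Dom0 "direct ballooning" daemon -- an untested sketch.
# Assumes each DomU's ballond publishes its Committed_AS (in KiB)
# under a made-up xenstore key, memory/committed_as.
import subprocess
import time

DOMAINS = ["work", "personal", "untrusted"]   # example VM names
POLL_INTERVAL = 10                            # seconds between adjustments
CACHE_MARGIN = 1.3                            # ~30% headroom for the fs cache

def dom_id(name):
    return subprocess.check_output(["xm", "domid", name]).decode().strip()

def committed_as_kib(name):
    # Statistic exported by the in-guest ballond (hypothetical key).
    path = "/local/domain/%s/memory/committed_as" % dom_id(name)
    return int(subprocess.check_output(["xenstore-read", path]).decode())

def set_mem_mib(name, mib):
    subprocess.check_call(["xm", "mem-set", name, str(mib)])

while True:
    for name in DOMAINS:
        # NB: a real daemon would also cap the sum of targets at the
        # amount of host RAM left after Dom0 and the service VMs.
        target_mib = int(committed_as_kib(name) * CACHE_MARGIN) // 1024
        set_mem_mib(name, target_mib)
    time.sleep(POLL_INTERVAL)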

Rather than trying to maximize the number of VMs we could run at the
same time, in Qubes OS we are more interested in optimizing the user
experience for running a "reasonable number" of VMs (i.e.
minimizing/eliminating swapping). In other words, given the number of
VMs that the user feels the need to run at the same time (in practice
usually between 3 and 6), and given the amount of RAM in the system
(4-6 GB in practice today), how do we optimally distribute it among
the VMs? In our model we assume the disk backend(s) are in Dom0.

Some specific questions:
1) What is the best estimator of the "ideal" amount of RAM each VM
would like to have? Dan mentions [1] the Committed_AS value from
/proc/meminfo, but what about the fs cache? I would expect that we
should (ideally) allocate Committed_AS + some_cache amount of RAM, no?
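
Something like this, perhaps (a minimal sketch of what the in-VM
ballond could report; the 50% cache fraction below is just a
placeholder we made up, not a recommendation):

# Minimal sketch of the per-VM "ideal RAM" estimate that ballond
# could report; the cache fraction is an arbitrary placeholder.
def meminfo_kib():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are in kB
    return info

def ideal_ram_kib(cache_fraction=0.5):
    m = meminfo_kib()
    committed = m["Committed_AS"]
    cache_now = m.get("Cached", 0) + m.get("Buffers", 0)
    # "ideal" = what applications have committed, plus some room
    # for the fs cache
    return int(committed + cache_fraction * cache_now)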

2) What's the best estimator of the "minimal reasonable" amount of RAM
for a VM (below which swapping would kill the performance for good)?
The rationale behind this is that if we couldn't allocate the "ideal"
amount of RAM (point 1 above), we would scale the available RAM down,
but only to this "reasonable minimum" value. Below this, we would
display a message to the user that they should close some VMs (or we
would close "inactive" ones automatically), and we would also refuse
to start any new AppVMs.

3) Assuming we have enough RAM to satisfy all the VMs' "ideal"
requests, what should we do with the excess RAM? The options are:
a) distribute it among all the VMs (more per-VM RAM means larger FS
caches, which means faster I/O), or
b) assign it to Dom0, where the disk backend is running (a larger FS
cache means faster disk backends, which means faster I/O in each VM?)
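
To illustrate the kind of policy we would put in the Dom0 daemon for
points 2) and 3), here is a sketch (it assumes each VM reports an
"ideal" and a "minimum" figure; the equal scale-down rule and the
option a) surplus split are just one possible choice, not a proposal
set in stone):

# Sketch of a Dom0 allocation policy for points 2) and 3) above --
# assumes each VM reports its "minimum" and "ideal" RAM in MiB.
def allocate(requests, total_mib):
    """requests: {vm_name: (minimum_mib, ideal_mib)}"""
    minimum = sum(lo for lo, hi in requests.values())
    ideal = sum(hi for lo, hi in requests.values())

    if total_mib < minimum:
        # ask the user to close some VMs, refuse to start new AppVMs
        return None

    if total_mib >= ideal:
        # option a): split the surplus equally among the VMs
        surplus = (total_mib - ideal) // len(requests)
        return dict((vm, hi + surplus)
                    for vm, (lo, hi) in requests.items())

    # otherwise scale everyone down between "ideal" and "minimum"
    # by the same factor, never going below the per-VM minimum
    factor = float(total_mib - minimum) / (ideal - minimum)
    return dict((vm, int(lo + factor * (hi - lo)))
                for vm, (lo, hi) in requests.items())

# Example: three VMs on a box with ~3 GB left after Dom0:
# allocate({"work": (400, 1500), "web": (300, 1200),
#           "mail": (200, 800)}, 3072)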

Thanks,
joanna.

[1]
http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf



* RE: Q about System-wide Memory Management Strategies
  2010-08-02 21:38 Q about System-wide Memory Management Strategies Joanna Rutkowska
@ 2010-08-02 23:57 ` Dan Magenheimer
  2010-08-03 22:33   ` Joanna Rutkowska
  0 siblings, 1 reply; 8+ messages in thread
From: Dan Magenheimer @ 2010-08-02 23:57 UTC (permalink / raw)
  To: Joanna Rutkowska, xen-devel; +Cc: qubes-devel

Hi Joanna --

The slides you refer to are over two years old, and there's
been a lot of progress in this area since then.  I suggest
you google for "Transcendent Memory" and especially
my presentation at the most recent Xen Summit North America
and/or http://oss.oracle.com/projects/tmem 

Specifically, I now have "selfballooning" built into
the guest kernel... I don't see direct ballooning as
feasible (certainly without other guest changes such
as cleancache and frontswap).

Anyway, I have limited availability in the next couple of
weeks but would love to talk (or email) more about
this topic after that (but would welcome clarification
questions in the meantime).

Dan



* Re: Q about System-wide Memory Management Strategies
  2010-08-02 23:57 ` Dan Magenheimer
@ 2010-08-03 22:33   ` Joanna Rutkowska
  2010-08-04 14:52     ` Dan Magenheimer
                       ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Joanna Rutkowska @ 2010-08-03 22:33 UTC (permalink / raw)
  To: Dan Magenheimer; +Cc: xen-devel, qubes-devel



On 08/03/10 01:57, Dan Magenheimer wrote:
> Hi Joanna --
> 
> The slides you refer to are over two years old, and there's
> been a lot of progress in this area since then.  I suggest
> you google for "Transcendent Memory" and especially
> my presentation at the most recent Xen Summit North America
> and/or http://oss.oracle.com/projects/tmem 
> 

Thanks Dan. I've been aware of tmem, but I've been skeptical about it
for two reasons: it's complex, and it seems rather unportable to other
OSes, specifically Windows, which is a concern for us, as we plan to
support Windows AppVMs in Qubes in the future.

(Hmm, is it really unportable? Perhaps one could create a
pseudo-filesystem driver that would behave like precache, and a
pseudo-disk driver that would behave like preswap?)

From reading the papers on tmem (the hogs were really cute :), I
understand now that the single most important advantage of using tmem
vs. just-ballooning is: no memory inertia for needy VMs, correct? I'm
tempted to think that this might not be such a big deal for the
Qubes-specific types of workload -- after all, if some apps start
slowing down, the user will temporarily stop "operating" them and let
the system recover within a few seconds, as the balloon returns some
more memory. Or am I wrong here, and the recovery is not so easy in
practice?

> Specifically, I now have "selfballooning" built into
> the guest kernel...

In your latest presentation you mention selfballooning implemented in
the kernel, rather than via a userland daemon -- is there any
significant benefit to this? I've been thinking of trying
selfballooning using a 2.6.34-xenlinux kernel with a usermode
balloond...

How should we initially provision the VMs for selfballooning, i.e. how
should we set mem and memmax? I'm tempted to set memmax to the amount
of all physical memory minus the memory reserved for Dom0 and the
other service VMs (which would get a fixed, small amount). The
rationale behind this is that we don't know what type of tasks the
user will end up doing in any given VM, and she might very well end up
with something reaaally memory-hungry (sure, we will not let any other
VMs run at the same time in that case, but we should still be able to
handle this, I think).
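
I.e., roughly this arithmetic (the reserve figures below are invented
placeholders, not proposed defaults):

# Rough provisioning arithmetic for memmax -- the reserve figures
# are invented placeholders, not proposed defaults.
TOTAL_RAM_MIB = 4096
DOM0_RESERVE_MIB = 512
SERVICE_VM_MIB = {"netvm": 200, "firewallvm": 128}  # fixed, small

def appvm_memmax_mib():
    return TOTAL_RAM_MIB - DOM0_RESERVE_MIB - sum(SERVICE_VM_MIB.values())

# Each AppVM would get memmax = appvm_memmax_mib(), a modest initial
# mem, and (self)ballooning would move it within [mem, memmax].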

> I don't see direct ballooning as feasible (certainly without other
> guest changes such as cleancache and frontswap).
> 

Why is that? Intuitively it sounds like the most straightforward
solution -- only Dom0 can see the system-wide picture of all the VMs'
needs (and priorities).

What happens if too many guests request too much memory, i.e. each
within its maxmem limit, but such that the overall total exceeds what
is available in the system? I guess whoever was first and lucky would
get the memory, while the last ones would get nothing, right? Whereas
if we had centrally-managed allocation, we would be able to e.g. scale
down the target memory sizes equally, or tell the user that some VMs
must be closed for smooth operation of the others (or close them
automatically).

> Anyway, I have limited availability in the next couple of
> weeks but would love to talk (or email) more about
> this topic after that (but would welcome clarification
> questions in the meantime).
> 

No problem. Hopefully some of the above questions would fall into the
"clarification" category :) And maybe others will answer the others :)

Thanks,
joanna.





* RE: Q about System-wide Memory Management Strategies
  2010-08-03 22:33   ` Joanna Rutkowska
@ 2010-08-04 14:52     ` Dan Magenheimer
  2010-08-19 11:39     ` Joanna Rutkowska
  2010-08-20 17:26     ` Daniel Kiper
  2 siblings, 0 replies; 8+ messages in thread
From: Dan Magenheimer @ 2010-08-04 14:52 UTC (permalink / raw)
  To: Joanna Rutkowska; +Cc: xen-devel, qubes-devel

> From: Joanna Rutkowska [mailto:joanna@invisiblethingslab.com]
> Subject: Re: Q about System-wide Memory Management Strategies
> 
> On 08/03/10 01:57, Dan Magenheimer wrote:
> > Hi Joanna --
> >
> > The slides you refer to are over two years old, and there's
> > been a lot of progress in this area since then.  I suggest
> > you google for "Transcendent Memory" and especially
> > my presentation at the most recent Xen Summit North America
> > and/or http://oss.oracle.com/projects/tmem
> 
> Thanks Dan. I've been aware of tmem, but I've been skeptical about it
> for two reasons: it's complex, and seems rather unportable to other
> OSes, specifically Windows, which is a concern for us, as we plan to
> support Windows AppVMs in the future in Qubes.

Thanks for the comments and review.  It's definitely complex.
If it were easy, the problem would have been solved long ago. :-)
 
> (Hhm, is it really unportable? Perhaps one could create
> pseudo-filesystem driver that would behave like precache, and a
> pseudo-disk driver that would behave like preswap?)

I know nothing about Windows drivers.  I think tmem could
definitely be implemented on Windows, with source code changes
("enlightenments").  It could probably be implemented in drivers
but would likely lose a lot of its value and take a performance
hit.

> From reading the papers on tmem (the hogs were really cute :), I
> understand now that the single most important advantage of using tmem
> vs. just-ballooning is: no memory inertia for needy VMs, correct? I'm
> tempted to think that this might not be such a big deal for the
> Qubes-specific types of workload -- after all, if some apps starts
> slowing down, the user will temporarily stop "operating" them, and let
> the system recover within a few seconds, when the balloon will return
> some more memory. Or am I wrong here, and the recovery is not so easy
> in practice?

If you have a perfect "directed ballooning" daemon in dom0 that
can correctly predict the future, moving memory that won't be
needed (in the future) by guest A to guest B (that does need
it real soon now), neither self-ballooning nor tmem is necessary.
Sadly, crystal balls are hard to come by, even for one single
guest.  And when you are dealing with multiple dynamically-changing
guests, you quickly get to a bin-packing problem (which I am
pretty sure is NP-complete).

One partial solution is to "pad" the amount of memory given
to each guest, but then you are trying to predict how much
padding is needed... also unguessable.

My 2008 solution was to "aggressively" take memory away from each
guest to approach a knowable per-guest target (which can be done
from dom0 via xenstore or in the guest itself).  But this
sometimes/frequently causes the same problems as just giving
each guest less memory to start with, including performance issues
like lots of paging and swapping, as well as bad things like OOMs
and swapstorms.

IMHO, this is sometimes "not so easy to recover from in practice".

Tmem is designed to complement aggressive ballooning (regardless
of where the ballooning decisions are made) by reducing or
eliminating the problems that result from it, and at the same
time reducing "memory inertia" so that a large amount of memory
can be quickly moved to where it is most needed (including,
when necessary, launching or migrate-receiving more guests).

> > Specifically, I now have "selfballooning" built into
> > the guest kernel...
> 
> In your latest presentation you mention selfballooning implemented in
> kernel, rather than via a userland daemon -- any significant benefit of
> this? I've been thinking of trying selfballooning using 2.6.34-xenlinux
> kernel with usermode balloond...

It's all a question of response time.  If the policy/mechanism
is in dom0, it's difficult to react quickly enough to one guest,
let alone "many".  If the policy/mechanism is in the guest but
in userland, well, sometimes user processes don't get much
attention (other than being gratuitously killed) when the kernel
is under memory pressure.

So, since tmem requires kernel changes anyway, I moved the
selfballooning policy into the Xen balloon driver, with a lot
of tunables in sysfs that can be tweaked.
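
The gist of the policy is roughly this (a much-simplified Python
sketch, not the driver code itself; the hysteresis values are just
illustrative stand-ins for the sysfs tunables):

# Much-simplified sketch of a Committed_AS-driven selfballoon policy;
# not the actual Xen balloon driver code.  The hysteresis divisors
# stand in for the real sysfs tunables.
DOWN_HYSTERESIS = 8     # give memory back to Xen gradually
UP_HYSTERESIS = 1       # reclaim memory from Xen quickly
MIN_KIB = 64 * 1024     # never balloon below this floor

def next_size_kib(current_kib, committed_as_kib):
    target = max(committed_as_kib, MIN_KIB)
    if target < current_kib:
        # shrink toward the target in small steps
        return current_kib - (current_kib - target) // DOWN_HYSTERESIS
    # grow toward the target (quickly) when the guest needs memory
    return current_kib + (target - current_kib) // UP_HYSTERESIS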

> How to initially provision the VMs in selfballooning, i.e. how to set
> mem and memmax? I'm tempted to set memmax to the amount of all physical
> memory minus memory reserved for Dom0, and other service VMs (which
> would get fixed, small, amount). The rationale behind this is that we
> don't know what type of tasks the user will end up doing in any given
> VM, and she might very well end up with something reaaally memory-
> hungry
> (sure, we will not let any other VMs to run at the same time in that
> case, but we should still be able to handle this I think).

Memmax for each guest can be essentially unlimited, since Xen reserves
its own memory and dom0's memory.  Only the ballooning policy cares.
But in practice, I think users think "physical", e.g. how much RAM
does this physical machine need, so they tend to prefer to think about
memory as one single value.  As a result, everything should work
properly when mem=memmax.

> > I don't see direct ballooning as feasible (certainly without other
> > guest changes such as cleancache and frontswap).
> 
> Why is that? Intuitively it sounds like the most straightforward
> solution -- only Dom0 can see the system-wide picture of all the VM
> needs (and priorities).

It is straightforward.  And it will work most of the time
for many workloads.  But it responds too slowly for many
other workloads.

> What happens if too many guests would request too much memory, i.e.
> within their maxmem limits, but such that the overall total exceeds the
> total available in the system? I guess then whoever was first and lucky
> would get the memory, but the last ones would get nothing, right? While
> if we had centrally-managed allocation, we would be able to e.g. scale
> down the target memory sizes equally, or tell the user that some VMs
> must be closed for smooth operation of the others (or close them
> automatically).

"First and lucky" creates problems when all the guests are
happy to absorb as much memory as you give them.

Tmem has some built-in policy to avoid the worst of this and
some tool-specifiable parameters to optionally enforce load
balancing with prioritization.

But if, in your product environment, users can just be told to
shut down a VM, sure, that's a good solution.

> > Anyway, I have limited availability in the next couple of
> > weeks but would love to talk (or email) more about
> > this topic after that (but would welcome clarification
> > questions in the meantime).
> 
> No problem. Hopefully some of the above questions would fall into the
> "clarification" category :) And maybe others will answer the others :)

Since this topic is near and dear to me (having spent the
better part of the last two years on it), I tend to get
long-winded in my answers... and procrastinate on other things
that are higher priority :-(



* Re: Q about System-wide Memory Management Strategies
  2010-08-03 22:33   ` Joanna Rutkowska
  2010-08-04 14:52     ` Dan Magenheimer
@ 2010-08-19 11:39     ` Joanna Rutkowska
  2010-08-19 11:39       ` Jean Guyader
  2010-08-19 15:02       ` Dan Magenheimer
  2010-08-20 17:26     ` Daniel Kiper
  2 siblings, 2 replies; 8+ messages in thread
From: Joanna Rutkowska @ 2010-08-19 11:39 UTC (permalink / raw)
  To: Joanna Rutkowska; +Cc: Dan Magenheimer, xen-devel, qubes-devel



On 08/04/10 00:33, Joanna Rutkowska wrote:
> 
> No problem. Hopefully some of the above questions would fall into the
> "clarification" category :) And maybe others will answer the others :)

Just been wondering -- how does XenClient deal with this? Is the memory
assigned statically to the guests?

Thanks,
joanna.



* Re: Re: Q about System-wide Memory Management Strategies
  2010-08-19 11:39     ` Joanna Rutkowska
@ 2010-08-19 11:39       ` Jean Guyader
  2010-08-19 15:02       ` Dan Magenheimer
  1 sibling, 0 replies; 8+ messages in thread
From: Jean Guyader @ 2010-08-19 11:39 UTC (permalink / raw)
  To: Joanna Rutkowska; +Cc: Dan Magenheimer, xen-devel, qubes-devel

On 19 August 2010 12:39, Joanna Rutkowska <joanna@invisiblethingslab.com> wrote:
> On 08/04/10 00:33, Joanna Rutkowska wrote:
>>
>> No problem. Hopefully some of the above questions would fall into the
>> "clarification" category :) And maybe others will answer the others :)
>
> Just been wondering -- how does XenClient deal with this? Is the memory
> assigned statically to the guests?
>

Right now yes.

Jean


* RE: Q about System-wide Memory Management Strategies
  2010-08-19 11:39     ` Joanna Rutkowska
  2010-08-19 11:39       ` Jean Guyader
@ 2010-08-19 15:02       ` Dan Magenheimer
  1 sibling, 0 replies; 8+ messages in thread
From: Dan Magenheimer @ 2010-08-19 15:02 UTC (permalink / raw)
  To: Joanna Rutkowska; +Cc: xen-devel, qubes-devel

> From: Joanna Rutkowska [mailto:joanna@invisiblethingslab.com]
> Sent: Thursday, August 19, 2010 5:39 AM
> To: Joanna Rutkowska
> Cc: Dan Magenheimer; xen-devel@lists.xensource.com; qubes-
> devel@googlegroups.com
> Subject: Re: Q about System-wide Memory Management Strategies
> 
> On 08/04/10 00:33, Joanna Rutkowska wrote:
> >
> > No problem. Hopefully some of the above questions would fall into the
> > "clarification" category :) And maybe others will answer the others
> :)
> 
> Just been wondering -- how does XenClient deal with this? Is the memory
> assigned statically to the guests?

I don't know anything about XenClient but I suspect
static assignment is the right answer.


* Re: Re: Q about System-wide Memory Management Strategies
  2010-08-03 22:33   ` Joanna Rutkowska
  2010-08-04 14:52     ` Dan Magenheimer
  2010-08-19 11:39     ` Joanna Rutkowska
@ 2010-08-20 17:26     ` Daniel Kiper
  2 siblings, 0 replies; 8+ messages in thread
From: Daniel Kiper @ 2010-08-20 17:26 UTC (permalink / raw)
  To: Joanna Rutkowska; +Cc: Dan Magenheimer, xen-devel, qubes-devel

Hi,

On Wed, Aug 04, 2010 at 12:33:17AM +0200, Joanna Rutkowska wrote:
[...]
> How to initially provision the VMs in selfballooning, i.e. how to set
> mem and memmax? I'm tempted to set memmax to the amount of all physical
> memory minus memory reserved for Dom0, and other service VMs (which
> would get fixed, small, amount). The rationale behind this is that we
> don't know what type of tasks the user will end up doing in any given
> VM, and she might very well end up with something reaaally memory-hungry
> (sure, we will not let any other VMs to run at the same time in that
> case, but we should still be able to handle this I think).

There is a memory hotplug mechanism (Linux-only for now) under
development. I think that solution would allow you to expand guest
memory above the limit declared at system startup (if you
underestimated it at that stage).

Feel free to test it:
git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git,
xen/memory-hotplug head.

If you have any questions please drop me a line.

Daniel



Thread overview: 8+ messages
2010-08-02 21:38 Q about System-wide Memory Management Strategies Joanna Rutkowska
2010-08-02 23:57 ` Dan Magenheimer
2010-08-03 22:33   ` Joanna Rutkowska
2010-08-04 14:52     ` Dan Magenheimer
2010-08-19 11:39     ` Joanna Rutkowska
2010-08-19 11:39       ` Jean Guyader
2010-08-19 15:02       ` Dan Magenheimer
2010-08-20 17:26     ` Daniel Kiper
