xen-devel.lists.xenproject.org archive mirror
* Q about System-wide Memory Management Strategies
@ 2010-08-02 21:38 Joanna Rutkowska
  2010-08-02 23:57 ` Dan Magenheimer
  0 siblings, 1 reply; 8+ messages in thread
From: Joanna Rutkowska @ 2010-08-02 21:38 UTC (permalink / raw)
  To: xen-devel@lists.xensource.com, Dan Magenheimer; +Cc: qubes-devel



Dan, Xen.org'ers,

I have a few questions regarding strategies for optimal memory
assignment among VMs (PV DomU and Dom0, all Linux-based).

We've been thinking about implementing a "Direct Ballooning" strategy
(as described on slide #20 of Dan's slides [1]), i.e. writing a daemon
that would run in Dom0 and, based on the statistics provided by the
ballond daemons running in the DomUs, adjust the memory assigned to all
VMs in the system (via xm mem-set).
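Roughly, the policy core of such a daemon could look like the sketch
below. This is only an illustration of the idea, not working code: the
proportional-split policy and the stats dictionary are our assumptions,
and a real daemon would gather the numbers from the ballond daemons and
actually invoke xm mem-set instead of doing a dry run.

```python
import subprocess

def rebalance(stats, total_kb):
    """Given per-VM memory requests in KB (as reported by the ballond
    daemons), split total_kb among the VMs proportionally to demand."""
    total_requested = sum(stats.values())
    return {vm: total_kb * req // total_requested
            for vm, req in stats.items()}

def apply_targets(targets, dry_run=True):
    """Push the new allocations to the hypervisor via 'xm mem-set'
    (xm expects megabytes); dry_run only prints the commands."""
    for vm, kb in targets.items():
        cmd = ["xm", "mem-set", vm, str(kb // 1024)]
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)

# Example: Dom0 and two DomUs sharing 4 GB (numbers are made up)
stats = {"Dom0": 1_000_000, "work": 2_000_000, "personal": 1_000_000}
targets = rebalance(stats, 4 * 1024 * 1024)
apply_targets(targets)  # dry run: just prints the xm commands
```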

Rather than trying to maximize the number of VMs we could run at the
same time, in Qubes OS we are more interested in optimizing the user
experience for a "reasonable number" of VMs, i.e. in minimizing or
eliminating swapping. In other words, given the number of VMs the user
feels the need to run at the same time (in practice usually between 3
and 6), and given the amount of RAM in the system (4-6 GB in practice
today), how do we optimally distribute it among the VMs? In our model
we assume the disk backend(s) run in Dom0.

Some specific questions:
1) What is the best estimator of the "ideal" amount of RAM each VM
would like to have? Dan mentions [1] the Committed_AS value from
/proc/meminfo, but what about the fs cache? I would expect that we
should (ideally) allocate Committed_AS + some_cache amount of RAM, no?
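To make the question concrete, something like the following is what we
have in mind. The cache_fraction weight is a made-up tunable (how much
of the current page cache is "worth keeping" is exactly the open
question); the parsing of /proc/meminfo itself is standard.

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into a dict
    mapping field name to its value in KB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info

def ideal_ram_kb(meminfo, cache_fraction=0.5):
    """Committed_AS plus some fraction of the current page cache.
    The 0.5 default is an arbitrary assumption for illustration."""
    return meminfo["Committed_AS"] + int(meminfo.get("Cached", 0) * cache_fraction)

sample = """\
MemTotal:        1048576 kB
Committed_AS:     400000 kB
Cached:           200000 kB
"""
print(ideal_ram_kb(parse_meminfo(sample)))  # prints 500000
```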

2) What's the best estimator of the "minimal reasonable" amount of RAM
for a VM (below which swapping would kill performance for good)? The
rationale is that if we couldn't allocate the "ideal" amount of RAM
(point 1 above), we would scale the available RAM down, but only to
this "reasonable minimum". Below that, we would display a message
telling the user to close some VMs (or close "inactive" ones
automatically), and we would also refuse to start any new AppVMs.
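The scale-down policy we are picturing could be sketched like so. The
per-VM "minimum" numbers are hypothetical inputs here (estimating them
is precisely question 2); the function only shows the intended shape of
the policy: scale proportionally, clamp at the minimum, and signal when
even the minimums no longer fit.

```python
def scale_down(ideal, minimum, total_kb):
    """Scale each VM's ideal allocation (KB) down to fit total_kb,
    never going below its per-VM minimum.  Returns (allocations, ok);
    ok is False when even the minimums do not fit, i.e. the user must
    close some VMs and no new AppVM should be started."""
    if sum(minimum.values()) > total_kb:
        return dict(minimum), False    # overcommitted even at the floor
    if sum(ideal.values()) <= total_kb:
        return dict(ideal), True       # everything fits at the ideal
    # Distribute the room above the minimums proportionally to how much
    # each VM asked for beyond its minimum.
    spare = total_kb - sum(minimum.values())
    extra = {vm: ideal[vm] - minimum[vm] for vm in ideal}
    total_extra = sum(extra.values()) or 1
    return ({vm: minimum[vm] + spare * extra[vm] // total_extra
             for vm in ideal}, True)
```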

3) Assuming we have enough RAM to satisfy all the VMs' "ideal"
requests, what should we do with the excess RAM? The options are:
a) distribute it among all the VMs (more per-VM RAM means larger FS
caches, which means faster I/O), or
b) assign it to Dom0, where the disk backend is running (a larger FS
cache means faster disk backends, which means faster I/O in every VM?).
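One could of course treat (a) vs (b) as a tunable rather than a binary
choice, e.g. as sketched below. The dom0_share knob and its 50/50
default are purely an assumption for illustration; finding the right
setting (or proving one extreme is always better) is the question.

```python
def distribute_surplus(alloc, surplus_kb, dom0_share=0.5):
    """Give dom0_share of the surplus to Dom0 (bigger cache for the
    disk backends) and split the rest evenly among the DomUs.
    dom0_share=0.0 is pure option (a), 1.0 is pure option (b)."""
    out = dict(alloc)
    dom0_bonus = int(surplus_kb * dom0_share)
    out["Dom0"] = out.get("Dom0", 0) + dom0_bonus
    domus = [vm for vm in out if vm != "Dom0"]
    if domus:
        per_vm = (surplus_kb - dom0_bonus) // len(domus)
        for vm in domus:
            out[vm] += per_vm
    return out
```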

Thanks,
joanna.

[1]
http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf




Thread overview: 8+ messages
2010-08-02 21:38 Q about System-wide Memory Management Strategies Joanna Rutkowska
2010-08-02 23:57 ` Dan Magenheimer
2010-08-03 22:33   ` Joanna Rutkowska
2010-08-04 14:52     ` Dan Magenheimer
2010-08-19 11:39     ` Joanna Rutkowska
2010-08-19 11:39       ` Jean Guyader
2010-08-19 15:02       ` Dan Magenheimer
2010-08-20 17:26     ` Daniel Kiper
