From: Joanna Rutkowska
To: xen-devel@lists.xensource.com, Dan Magenheimer
Cc: qubes-devel@googlegroups.com
Subject: Q about System-wide Memory Management Strategies
Date: Mon, 02 Aug 2010 23:38:56 +0200
Message-ID: <4C573AF0.2050400@invisiblethingslab.com>

Dan, Xen.org'ers,

I have a few questions regarding strategies for optimal memory assignment
among VMs (PV DomUs and Dom0, all Linux-based).

We have been thinking about implementing a "Direct Ballooning" strategy (as
described on slide #20 of Dan's slides [1]), i.e. writing a daemon that runs
in Dom0 and, based on the statistics provided by the ballond daemons running
in the DomUs, adjusts the memory assigned to every VM in the system (via xm
mem-set).

Rather than trying to maximize the number of VMs we can run at the same time,
in Qubes OS we are more interested in optimizing the user experience for a
"reasonable number" of VMs, i.e. minimizing or eliminating swapping. In other
words, given the number of VMs the user needs to run at the same time (in
practice usually 3-6) and the amount of RAM in the system (4-6 GB in practice
today), how do we distribute it optimally among the VMs? In our model we
assume the disk backend(s) run in Dom0.

Some specific questions:

1) What is the best estimator of the "ideal" amount of RAM each VM would like
to have? Dan mentions [1] the Committed_AS value from /proc/meminfo, but what
about the fs cache? I would expect that we should (ideally) allocate
Committed_AS plus some amount for cache, no?

2) What is the best estimator of the "minimal reasonable" amount of RAM for a
VM, below which swapping would kill performance for good? The rationale is
that if we cannot allocate the "ideal" amount of RAM (point 1 above), we would
scale the available RAM down, but only as far as this "reasonable minimum".
Below that, we would tell the user to close some VMs (or close an "inactive"
one automatically), and we would also refuse to start any new AppVMs.

3) Assuming we have enough RAM to satisfy all the VMs' "ideal" requests, what
should we do with the excess RAM? The options are:

a) distribute it among all the VMs (more per-VM RAM means larger fs caches,
which means faster I/O), or

b) assign it to Dom0, where the disk backend is running (a larger fs cache
there means faster disk backends, which means faster I/O in each VM?).
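To make this concrete, below is a rough, purely illustrative sketch of the
policy we have in mind for the Dom0 daemon -- not an existing implementation.
The xenstore path, the cache margin, the per-VM minimum and the hardcoded
domids are made-up placeholders; the only real pieces assumed are Committed_AS
from /proc/meminfo, xenstore-read, and xm mem-set.

#!/usr/bin/env python
# Illustrative sketch of the proposed Dom0 balancing daemon.
#
# Assumed division of labour:
#   * ballond in each DomU reads Committed_AS from /proc/meminfo and writes it
#     (in kB) to a xenstore key such as
#     /local/domain/<domid>/memory/committed-as  (placeholder path)
#   * this daemon, in Dom0, reads those keys and applies targets via xm mem-set

import subprocess
import time

CACHE_MARGIN = 1.3    # "ideal" = Committed_AS * margin, leaving room for fs cache
MIN_MB = 200          # "reasonable minimum" per VM; never shrink below this
HOST_MB = 4096        # RAM available to DomUs (host RAM minus Dom0/Xen overhead)
INTERVAL = 10         # seconds between balancing passes

def committed_as_mb(domid):
    # Read the value ballond published for this domain (kB -> MiB).
    out = subprocess.check_output(
        ["xenstore-read", "/local/domain/%d/memory/committed-as" % domid])
    return int(out) // 1024

def balance(domids):
    ideal = dict((d, max(MIN_MB, int(committed_as_mb(d) * CACHE_MARGIN)))
                 for d in domids)
    total = sum(ideal.values())
    if total > HOST_MB:
        # Not enough RAM for everyone's "ideal": scale down proportionally,
        # but never below MIN_MB. (If even the minima do not fit, this is
        # the point where we would ask the user to close a VM instead.)
        scale = float(HOST_MB) / total
        target = dict((d, max(MIN_MB, int(m * scale)))
                      for d, m in ideal.items())
    else:
        # Surplus RAM: option 3a, spread it evenly across the VMs; option 3b
        # would hand it to Dom0 for the disk backend's cache instead.
        extra = (HOST_MB - total) // len(domids)
        target = dict((d, m + extra) for d, m in ideal.items())
    for d, mb in target.items():
        subprocess.call(["xm", "mem-set", str(d), str(mb)])

if __name__ == "__main__":
    domids = [1, 2, 3]      # domids of the running AppVMs (hardcoded here)
    while True:
        balance(domids)
        time.sleep(INTERVAL)

The proportional scale-down above is just one possible interpretation of
question 2; a better estimator of the per-VM minimum is exactly what we are
asking about.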
Thanks,
joanna.

[1] http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf