From: Michal Novotny <minovotn@redhat.com>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [PATCH] Fix restore handling checks
Date: Wed, 23 Jun 2010 13:21:34 +0200	[thread overview]
Message-ID: <4C21EE3E.2090906@redhat.com> (raw)
In-Reply-To: <b0eca5df-d715-41f4-b774-04f183293ac5@default>

On 06/22/2010 10:46 PM, Dan Magenheimer wrote:
> Correct me if I am wrong, but I think your patch assumes
> that the amount of free memory in the system can be
> computed by assuming each guest memory is fixed size.
> Due to various features in Xen 4.0, this is no longer
> a safe assumption.  Tmem has a libxc call to freeze
> and unfreeze its use of memory so dynamic memory use
> by tmem can be stopped

Maybe it's stopped, but domain_getinfo() in libxc should still report 
the original guest memory value even while it is frozen; the only 
difference is that the memory should not be accessible, since it's 
locked somehow. Is my understanding of tmem freeze correct?
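
Just to make sure we are talking about the same values, this is roughly 
how I read them through the Python bindings (a quick sketch only, not 
the actual xend code, and I'm assuming the usual Xen 4.0 
xen.lowlevel.xc interface here):

import xen.lowlevel.xc

handle = xen.lowlevel.xc.xc()

# domain_getinfo() returns one dict per domain; 'mem_kb' is the
# memory currently populated and 'maxmem_kb' the maximum the
# domain may grow to.
for dom in handle.domain_getinfo():
    print '%(domid)d: mem_kb=%(mem_kb)d maxmem_kb=%(maxmem_kb)d' % dom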

> and another libxc call to
> determine "freeable" memory, and another to free it.
> I don't know if the page-sharing functionality added
> at 4.0 has anything similar.
>
> But in any case, simple algorithms to add up current
> (or max) guest memory will have many false-positive
> and false-negative results.
>

Why should it give many false positives/negatives? The handling there 
is to sum up the guests' memory and subtract that from the total host 
memory reported by physinfo() in libxc, with the minimal memory for 
dom0 (dom0-min-mem) also taken into account. Here's an example from my 
configuration: I have 8G of RAM in total, so if I start one guest with 
2G of RAM allocated, we should have 8 - 2 = 6G available now, no matter 
how much memory is currently allocated to dom0, since physinfo() gets 
the total memory information from the hypervisor directly (i.e. dom0 
could hold 4G while the host machine has 8G of RAM in total).
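
In code, the computation I mean is roughly the following (again only a 
sketch of the idea against the xen.lowlevel.xc bindings, with error 
handling left out; that physinfo() reports 'total_memory' in MiB is my 
assumption):

import xen.lowlevel.xc

def free_memory_kb(handle, dom0_min_mem_mb):
    # physinfo() reads the host total straight from the hypervisor,
    # regardless of how much memory dom0 currently holds.
    total_kb = handle.physinfo()['total_memory'] * 1024

    # Subtract every running guest's allocation; dom0 itself is
    # covered by the dom0-min-mem reservation below.
    for dom in handle.domain_getinfo():
        if dom['domid'] == 0:
            continue
        total_kb -= dom['maxmem_kb']   # maxmem_kb vs mem_kb: see below

    # Keep dom0-min-mem reserved for dom0.
    return total_kb - dom0_min_mem_mb * 1024

handle = xen.lowlevel.xc.xc()
print '%d kB would remain available' % free_memory_kb(handle, 1024)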

1. total physical memory = 8G
2. dom0_mem = 4G, dom0-min-mem = 1G
3. create guest A with 2G of RAM -> 6G in total are available now
4. create guest B with 4G of RAM -> 4G still shows as available, 
because the guest is in the middle of migration/restore and only 2G of 
it is populated so far
5. in the middle of the restore/migration from step 4 (guest B), we 
start another migration/restore of a 2G guest (guest C); since guest B 
has only populated 2G so far, "mem_kb" equals 2G for guest B (instead 
of 4G), so we have to use "maxmem_kb" instead (i.e. the 4G value) to 
work out that we don't have enough memory to create guest C.

If we used "mem_kb" in all cases (even during migration/restore), the 
sum would be 2 + 2 (it should be 4, since the guest is restoring right 
now) + 2 = 6G, which is less than 8G (total memory) - 1G (dom0-min-mem) 
= 7G, so the guest creation would be allowed, only for the 
migration/restore of guest C to fail later, leaving guest C destroyed 
after an incomplete memory transfer.

That's why I used "maxmem_kb" in the computation instead: for this 
scenario the sum is 2 + 4 + 2 = 8G, which is bigger than 7G (total 
memory - dom0-min-mem), so we disallow the guest restore immediately.
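
With the numbers from the scenario plugged in (hypothetical values, 
only to show the difference between the two fields):

GB = 1024 * 1024   # kB per GB

total_kb     = 8 * GB
dom0_min_kb  = 1 * GB
new_guest_kb = 2 * GB   # guest C

# Guest B is in the middle of a restore: 2G populated, 4G reserved.
guests = [
    {'mem_kb': 2 * GB, 'maxmem_kb': 2 * GB},   # guest A
    {'mem_kb': 2 * GB, 'maxmem_kb': 4 * GB},   # guest B (restoring)
]

limit = total_kb - dom0_min_kb                                  # 7G

mem_sum    = sum(g['mem_kb'] for g in guests) + new_guest_kb    # 6G
maxmem_sum = sum(g['maxmem_kb'] for g in guests) + new_guest_kb # 8G

print 'mem_kb check allows C:   ', mem_sum <= limit     # True  (wrong)
print 'maxmem_kb check allows C:', maxmem_sum <= limit  # False (right)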

So why would those calculations give many false positives or 
false negatives?

Michal

-- 
Michal Novotny<minovotn@redhat.com>, RHCE
Virtualization Team (xen userspace), Red Hat

Thread overview: 30+ messages
2010-06-21 16:30 [PATCH] Fix restore handling checks Michal Novotny
2010-06-21 18:04 ` Keir Fraser
2010-06-22  5:43   ` Michal Novotny
2010-06-22  6:11     ` Michal Novotny
2010-06-22  6:14       ` Keir Fraser
2010-06-22  6:17         ` Michal Novotny
2010-06-22 12:56           ` Michal Novotny
2010-06-22 14:10             ` Konrad Rzeszutek Wilk
2010-06-23 11:27               ` Michal Novotny
2010-06-22 20:46             ` Dan Magenheimer
2010-06-23  9:59               ` Michal Novotny
2010-06-23 11:21               ` Michal Novotny [this message]
2010-06-23 13:51                 ` Dan Magenheimer
2010-06-23 14:14                   ` Michal Novotny
2010-06-22 14:56     ` Ian Jackson
2010-06-23 11:19       ` Paolo Bonzini
2010-06-23 11:27         ` Ian Jackson
2010-06-23 11:29           ` Michal Novotny
2010-06-23 11:50             ` Ian Jackson
2010-06-23 11:54               ` Michal Novotny
2010-06-23 12:04                 ` Ian Jackson
2010-06-23 12:10                   ` Paolo Bonzini
2010-06-23 12:20                     ` Michal Novotny
2010-06-23 12:20                     ` Michal Novotny
2010-06-23 12:12                   ` Michal Novotny
2010-06-23 12:33                   ` Alan Cox
2010-06-23 15:12                     ` George Dunlap
2010-06-23 16:26                     ` Dan Magenheimer
2010-06-23 12:35             ` Alan Cox
2010-06-23 12:37               ` Michal Novotny
