From: Uri Lublin <uril@redhat.com>
To: Anthony Liguori <anthony@codemonkey.ws>
Cc: Glauber Costa <glommer@redhat.com>, Dor Laor <dlaor@redhat.com>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] ram_save_live: add a no-progress convergence rule
Date: Wed, 20 May 2009 19:56:57 +0300
Message-ID: <4A143659.6050108@redhat.com>
In-Reply-To: <4A12F7D5.7050608@codemonkey.ws>

On 05/19/2009 09:17 PM, Anthony Liguori wrote:
> Glauber Costa wrote:
>> On Tue, May 19, 2009 at 05:59:14PM +0300, Dor Laor wrote:
>>
>>> We can also make it configurable using the monitor migrate command.
>>> For example:
>>> migrate -d -no_progress -threshold=x tcp:....
>> it can be done, but it fits better as a different monitor command
>>
>> anthony, do you have any strong opinions here, or is this scheme
>> acceptable?
>
> Threshold is a bad metric. There's no way to choose a right number. If
> we were going to have a means to support metrics-based forced
> convergence (and I really think this belongs in libvirt) I'd rather see
> something based on bandwidth or wall clock time.
>
> Let me put it this way, why 50? What were the guidelines for choosing
> that number and how would you explain what number a user should choose?

I've changed the threshold of the first convergence rule to 50 from 10. Why 10?
For this rule the threshold (number of dirty pages) and the number of bytes left
to transfer are equivalent.

50 pages is about 200K, which can still be sent quickly.
I've added debug messages and noticed we never hit a number smaller than 10
(excluding 0). In truth there were very few runs with fewer than 50 dirty pages
either. I don't mind leaving it at 10 (it should be configurable too).
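
To make the arithmetic concrete, here is a minimal, standalone sketch of that
first rule. The names and the self-contained form are illustrative only (this
is not the actual ram_save_live code), and it assumes 4K target pages:

/* Sketch of the first convergence rule: stop the live iteration once
 * the RAM still dirty is small enough to be sent quickly during the
 * final stop-and-copy stage.  Illustrative names, not the real code. */
#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_SIZE        4096
#define DIRTY_PAGES_THRESHOLD   50   /* 50 * 4K ~= 200K left to send */

static bool dirty_ram_small_enough(uint64_t dirty_pages)
{
    /* Threshold is expressed in pages; since each page is 4K, the
     * byte count left to transfer is just dirty_pages * 4K, so the
     * two views of the threshold are equivalent. */
    return dirty_pages <= DIRTY_PAGES_THRESHOLD;
}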

For the second migration convergence rule I've set the limit to 10, which seems
much larger than what I've needed (in all the runs I've made, 2-4 no-progress
iterations were good enough, as the behavior seems to repeat itself afterwards),
but I've enlarged it "just in case". No real research work was done here.

Note that a no-progress iteration depends on both network bandwidth and guest 
actions.
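
A similarly rough, standalone sketch of that second rule (again, names and
structure are illustrative assumptions, not the patch itself): count consecutive
iterations in which the dirty-page count fails to drop, and force convergence
once that count reaches the limit.

/* Sketch of the no-progress convergence rule: if the guest dirties
 * memory at least as fast as it is transferred for several iterations
 * in a row, stop waiting and force the final stage. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_NO_PROGRESS_ITERATIONS 10   /* limit discussed above */

static uint64_t prev_dirty_pages = UINT64_MAX;
static unsigned no_progress_count;

static bool no_progress_converged(uint64_t dirty_pages)
{
    if (dirty_pages >= prev_dirty_pages) {
        /* No progress this iteration: how often this happens depends
         * on both the network bandwidth and what the guest is doing. */
        no_progress_count++;
    } else {
        no_progress_count = 0;
    }
    prev_dirty_pages = dirty_pages;

    return no_progress_count >= MAX_NO_PROGRESS_ITERATIONS;
}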

Regards,
     Uri.

