From: Anthony Liguori <anthony@codemonkey.ws>
To: Wen Congyang <wency@cn.fujitsu.com>
Cc: qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] stop the iteration when too many pages are transferred
Date: Fri, 19 Nov 2010 20:23:55 -0600
Message-ID: <4CE7313B.1070103@codemonkey.ws>
In-Reply-To: <4CE49053.3000608@cn.fujitsu.com>
On 11/17/2010 08:32 PM, Wen Congyang wrote:
> When the total size of the sent pages is larger than max_factor
> times the size of the guest OS's memory, stop the
> iteration.
> The default value of max_factor is 3.
>
> This is similar to Xen.
>
>
> Signed-off-by: Wen Congyang
>
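(For scale: with the default max_factor of 3, a 4 GiB guest, which is
about a million 4 KiB target pages, would have its iteration forcibly
ended once roughly three million pages had been sent, however much of
its memory is still dirty.)
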
I'm strongly opposed to doing this. I think Xen gets this totally wrong.

Migration is a contract. When you set the maximum downtime, you're saying
that you only want the guest to experience a fixed amount of downtime.
Stopping the guest after some arbitrary number of iterations makes the
downtime non-deterministic. With a very large guest, this could wreak
havoc, causing dropped network connections, etc.

It's totally unsafe.
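
To put hypothetical numbers on it: if the cutoff fires while, say, 8 GB
of pages are still dirty, the final stop-and-copy imposes roughly 7
seconds of downtime over a 10 Gbit/s link, and over a minute at 1 Gbit/s,
orders of magnitude beyond a typical 30 ms max downtime setting.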

If a management tool wants this behavior, it can set a timeout and
explicitly stop the guest during the live migration. IMHO, such a
management tool is not doing its job properly, but the behavior can
still be implemented that way.
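
For example, the tool can drive this itself from the monitor; a rough
sketch (destination address and timeout policy are illustrative, not
prescribed):

    (qemu) migrate -d tcp:dest:4444    # start live migration, detached
    ... tool polls "info migrate" until its own timeout expires ...
    (qemu) stop                        # pause the guest; the dirty set
                                       # stops growing and migration
                                       # converges
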
Regards,
Anthony Liguori
> ---
>  arch_init.c |   13 ++++++++++++-
>  1 files changed, 12 insertions(+), 1 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 4486925..67e90f8 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -212,6 +212,14 @@ uint64_t ram_bytes_total(void)
>      return total;
>  }
>
> +static uint64_t ram_blocks_total(void)
> +{
> +    return ram_bytes_total() / TARGET_PAGE_SIZE;
> +}
> +
> +static uint64_t blocks_transferred = 0;
> +static int max_factor = 3;
> +
>  int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>  {
>      ram_addr_t addr;
> @@ -234,6 +242,7 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>          bytes_transferred = 0;
>          last_block = NULL;
>          last_offset = 0;
> +        blocks_transferred = 0;
>
>          /* Make sure all dirty bits are set */
>          QLIST_FOREACH(block, &ram_list.blocks, next) {
> @@ -266,6 +275,7 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>
>          bytes_sent = ram_save_block(f);
>          bytes_transferred += bytes_sent;
> +        blocks_transferred += !!bytes_sent;
>          if (bytes_sent == 0) { /* no more blocks */
>              break;
>          }
> @@ -295,7 +305,8 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
>
>      expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;
>
> -    return (stage == 2) && (expected_time <= migrate_max_downtime());
> +    return (stage == 2) && ((expected_time <= migrate_max_downtime())
> +               || (blocks_transferred > ram_blocks_total() * max_factor));
>  }
>
>  static inline void *host_from_stream_offset(QEMUFile *f,
>
Thread overview:
2010-11-18  2:32 [Qemu-devel] [PATCH] stop the iteration when too many pages are transferred Wen Congyang
2010-11-20 2:23 ` Anthony Liguori [this message]
2010-11-22 2:25 ` Wen Congyang
2010-11-22 7:11 ` KAMEZAWA Hiroyuki
2010-11-24 9:23 ` [Qemu-devel] " Michael S. Tsirkin