From: Quan Xu <quan.xu0@gmail.com>
To: quintela@redhat.com
Cc: qemu-devel@nongnu.org, dgilbert@redhat.com, kvm <kvm@vger.kernel.org>
Subject: Re: [Qemu-devel] [PATCH RFC] migration: make sure to run iterate precopy during the bulk stage
Date: Tue, 4 Sep 2018 20:48:51 +0800 [thread overview]
Message-ID: <4602076e-2c15-39dc-8e79-e8b1492a8c80@gmail.com> (raw)
In-Reply-To: <87va7lvd71.fsf@trasno.org>
on 2018/9/4 17:12, Juan Quintela wrote:
> Quan Xu <quan.xu0@gmail.com> wrote:
>> From 8dbf7370e7ea1caab0b769d0d4dcdd072d14d421 Mon Sep 17 00:00:00 2001
>> From: Quan Xu <quan.xu0@gmail.com>
>> Date: Wed, 29 Aug 2018 21:33:14 +0800
>> Subject: [PATCH RFC] migration: make sure to run iterate precopy during the
>> bulk stage
>>
>> Since the bulk stage assumes (in migration_bitmap_find_dirty) that every
>> page is dirty, return a rough total RAM size as the pending size to make
>> sure that the migration thread continues to run iterate precopy during
>> the bulk stage.
>>
>> Otherwise the downtime grows unpredictably, as the migration thread needs
>> to send both the rest of the pages and the dirty pages during complete
>> precopy.
>>
>> Signed-off-by: Quan Xu <quan.xu0@gmail.com>
>> ---
>> migration/ram.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 79c8942..cfa304c 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -3308,7 +3308,8 @@ static void ram_save_pending(QEMUFile *f, void
>> *opaque, uint64_t max_size,
>> /* We can do postcopy, and all the data is postcopiable */
>> *res_compatible += remaining_size;
>> } else {
>> - *res_precopy_only += remaining_size;
>> + *res_precopy_only += (rs->ram_bulk_stage ?
>> + ram_bytes_total() : remaining_size);
>> }
>> }
>
> Hi
>
> I don't oppose the change.
> But what I don't understand is _why_ it is needed (or, to put it another
> way, how it worked until now).
I ran migration over a slow network (about ~500 Mbps).
In my opinion, with such slow throughput there are more 'breaks'
during iterate precopy (due to MAX_WAIT).
As said in the patch description, even when sending both the rest of the
pages and the dirty pages, the downtime would still fall within an
acceptable range on a higher-throughput network.
> I was wondering about the opposite
> direction: just initialize the number of dirty pages at the
> beginning of the loop and then let it decrease for each processed page.
>
I understand your concern. I also considered fixing it as you suggest.
However, to me, maintaining another count during migration would add
overhead.
Quan
> I don't remember either how big the speedup of not walking the
> bitmap on the 1st stage was to start with.
>
> Later, Juan.
>
Thread overview: 5+ messages
2018-08-29 13:40 [Qemu-devel] [PATCH RFC] migration: make sure to run iterate precopy during the bulk stage Quan Xu
2018-09-04 9:09 ` Dr. David Alan Gilbert
2018-09-04 13:34 ` Quan Xu
2018-09-04 9:12 ` Juan Quintela
2018-09-04 12:48 ` Quan Xu [this message]