From: Hailiang Zhang <zhang.zhanghailiang@huawei.com>
To: "Li, Liang Z" <liang.z.li@intel.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Cc: "amit.shah@redhat.com" <amit.shah@redhat.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"quintela@redhat.com" <quintela@redhat.com>,
peter.huangpeng@huawei.com,
"dgilbert@redhat.com" <dgilbert@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
Date: Mon, 18 Jan 2016 17:01:00 +0800 [thread overview]
Message-ID: <569CA9CC.80603@huawei.com> (raw)
In-Reply-To: <F2CBF3009FA73547804AE4C663CAB28E0373069F@SHSMSX101.ccr.corp.intel.com>
Hi,
On 2016/1/15 18:24, Li, Liang Z wrote:
>> It seems that this patch is incorrect: if non-zero pages are zeroed again
>> during !ram_bulk_stage, we don't send the newly zeroed page, so there will
>> be an error.
>>
>>
>
> If we are not in ram_bulk_stage, the header is still sent. Could you explain why it's wrong?
>
> Liang
>
I made a mistake; you are right. This patch can reduce live migration time,
and the effect is more obvious when there are many zero pages.
I like this idea. Did you test it with postcopy? Does it break postcopy?
Thanks,
zhanghailiang
>>> For a guest that uses only a small portion of its RAM, this change avoids
>>> allocating all of the guest's RAM pages on the destination node after
>>> live migration. Another benefit is that the destination QEMU can save lots
>>> of CPU cycles on zero page checking.
>>>
>>> Signed-off-by: Liang Li <liang.z.li@intel.com>
>>> ---
>>> migration/ram.c | 10 ++++++----
>>> 1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/migration/ram.c b/migration/ram.c
>>> index 4e606ab..c4821d1 100644
>>> --- a/migration/ram.c
>>> +++ b/migration/ram.c
>>> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>>>
>>>      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>>>          acct_info.dup_pages++;
>>> -        *bytes_transferred += save_page_header(f, block,
>>> -                                               offset | RAM_SAVE_FLAG_COMPRESS);
>>> -        qemu_put_byte(f, 0);
>>> -        *bytes_transferred += 1;
>>> +        if (!ram_bulk_stage) {
>>> +            *bytes_transferred += save_page_header(f, block, offset |
>>> +                                                   RAM_SAVE_FLAG_COMPRESS);
>>> +            qemu_put_byte(f, 0);
>>> +            *bytes_transferred += 1;
>>> +        }
>>>          pages = 1;
>>>      }
>>>
>>>
Thread overview: 17+ messages
2016-01-15 9:48 [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage Liang Li
2016-01-15 10:17 ` Hailiang Zhang
2016-01-15 10:24 ` Li, Liang Z
2016-01-18 9:01 ` Hailiang Zhang [this message]
2016-01-19 1:26 ` Li, Liang Z
2016-01-19 3:11 ` Hailiang Zhang
2016-01-19 3:17 ` Li, Liang Z
2016-01-20 9:55 ` Paolo Bonzini
2016-01-20 9:59 ` Li, Liang Z
2016-01-19 3:25 ` Hailiang Zhang
2016-01-19 3:36 ` Li, Liang Z
2016-01-15 11:39 ` Paolo Bonzini
2016-01-16 14:12 ` Li, Liang Z
2016-01-15 18:57 ` Dr. David Alan Gilbert
2016-01-16 14:25 ` Li, Liang Z
2016-01-18 9:33 ` Dr. David Alan Gilbert
2016-01-18 9:17 ` Hailiang Zhang