* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 9:48 [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage Liang Li
@ 2016-01-15 10:17 ` Hailiang Zhang
2016-01-15 10:24 ` Li, Liang Z
2016-01-15 11:39 ` Paolo Bonzini
2016-01-15 18:57 ` Dr. David Alan Gilbert
2 siblings, 1 reply; 17+ messages in thread
From: Hailiang Zhang @ 2016-01-15 10:17 UTC (permalink / raw)
To: Liang Li, qemu-devel
Cc: amit.shah, pbonzini, peter.huangpeng, dgilbert, quintela
On 2016/1/15 17:48, Liang Li wrote:
> Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
> with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
> when hugetlbfs is used), there is no need to send the zero page header
> to the destination.
>
It seems that this patch is incorrect: if non-zero pages are zeroed again
during !ram_bulk_stage, we don't send the newly zeroed pages, so there will be an error.
> For a guest that uses only a small portion of its RAM, this change can avoid
> allocating all of the guest's RAM pages on the destination node after
> live migration. Another benefit is that the destination QEMU can save lots of
> CPU cycles on zero page checking.
>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> ---
> migration/ram.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 4e606ab..c4821d1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>
> if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> acct_info.dup_pages++;
> - *bytes_transferred += save_page_header(f, block,
> - offset | RAM_SAVE_FLAG_COMPRESS);
> - qemu_put_byte(f, 0);
> - *bytes_transferred += 1;
> + if (!ram_bulk_stage) {
> + *bytes_transferred += save_page_header(f, block, offset |
> + RAM_SAVE_FLAG_COMPRESS);
> + qemu_put_byte(f, 0);
> + *bytes_transferred += 1;
> + }
> pages = 1;
> }
>
>
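The zero-initialization assumption in the commit message above can be illustrated with a minimal stand-alone sketch (plain C, not QEMU code; the mapping size is arbitrary): pages of a fresh anonymous private mmap() read back as zeroes until something writes to them.

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 4096;              /* stand-in for "guest RAM" */
    unsigned char *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(ram != MAP_FAILED);

    static const unsigned char zero[4096];
    for (size_t off = 0; off < len; off += 4096) {
        /* every page of a fresh anonymous mapping reads as zero */
        assert(memcmp(ram + off, zero, 4096) == 0);
    }
    printf("all pages start out zero-filled\n");
    munmap(ram, len);
    return 0;
}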
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 10:17 ` Hailiang Zhang
@ 2016-01-15 10:24 ` Li, Liang Z
2016-01-18 9:01 ` Hailiang Zhang
0 siblings, 1 reply; 17+ messages in thread
From: Li, Liang Z @ 2016-01-15 10:24 UTC (permalink / raw)
To: Hailiang Zhang, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com, quintela@redhat.com,
peter.huangpeng@huawei.com, dgilbert@redhat.com
> It seems that this patch is incorrect: if non-zero pages are zeroed again
> during !ram_bulk_stage, we don't send the newly zeroed pages, so there will be
> an error.
>
When not in ram_bulk_stage, the header is still sent, so could you explain why it's wrong?
Liang
> > For a guest that uses only a small portion of its RAM, this change can avoid
> > allocating all of the guest's RAM pages on the destination node after
> > live migration. Another benefit is that the destination QEMU can save lots of
> > CPU cycles on zero page checking.
> >
> > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > ---
> > migration/ram.c | 10 ++++++----
> > 1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 4e606ab..c4821d1 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> >
> > if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> > acct_info.dup_pages++;
> > - *bytes_transferred += save_page_header(f, block,
> > - offset | RAM_SAVE_FLAG_COMPRESS);
> > - qemu_put_byte(f, 0);
> > - *bytes_transferred += 1;
> > + if (!ram_bulk_stage) {
> > + *bytes_transferred += save_page_header(f, block, offset |
> > + RAM_SAVE_FLAG_COMPRESS);
> > + qemu_put_byte(f, 0);
> > + *bytes_transferred += 1;
> > + }
> > pages = 1;
> > }
> >
> >
>
>
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 10:24 ` Li, Liang Z
@ 2016-01-18 9:01 ` Hailiang Zhang
2016-01-19 1:26 ` Li, Liang Z
0 siblings, 1 reply; 17+ messages in thread
From: Hailiang Zhang @ 2016-01-18 9:01 UTC (permalink / raw)
To: Li, Liang Z, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com, quintela@redhat.com,
peter.huangpeng, dgilbert@redhat.com
Hi,
On 2016/1/15 18:24, Li, Liang Z wrote:
>> It seems that this patch is incorrect: if non-zero pages are zeroed again
>> during !ram_bulk_stage, we don't send the newly zeroed pages, so there will be
>> an error.
>>
>
> When not in ram_bulk_stage, the header is still sent, so could you explain why it's wrong?
>
> Liang
>
I made a mistake; yes, this patch can speed up the live migration time,
and the effect will be more obvious when there are many zero pages.
I like this idea. Did you test it with postcopy? Does it break postcopy?
Thanks,
zhanghailiang
>>> For a guest that uses only a small portion of its RAM, this change can avoid
>>> allocating all of the guest's RAM pages on the destination node after
>>> live migration. Another benefit is that the destination QEMU can save lots of
>>> CPU cycles on zero page checking.
>>>
>>> Signed-off-by: Liang Li <liang.z.li@intel.com>
>>> ---
>>> migration/ram.c | 10 ++++++----
>>> 1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/migration/ram.c b/migration/ram.c
>>> index 4e606ab..c4821d1 100644
>>> --- a/migration/ram.c
>>> +++ b/migration/ram.c
>>> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>>>
>>> if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>>> acct_info.dup_pages++;
>>> - *bytes_transferred += save_page_header(f, block,
>>> - offset | RAM_SAVE_FLAG_COMPRESS);
>>> - qemu_put_byte(f, 0);
>>> - *bytes_transferred += 1;
>>> + if (!ram_bulk_stage) {
>>> + *bytes_transferred += save_page_header(f, block, offset |
>>> + RAM_SAVE_FLAG_COMPRESS);
>>> + qemu_put_byte(f, 0);
>>> + *bytes_transferred += 1;
>>> + }
>>> pages = 1;
>>> }
>>>
>>>
>>
>>
>
>
> .
>
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-18 9:01 ` Hailiang Zhang
@ 2016-01-19 1:26 ` Li, Liang Z
2016-01-19 3:11 ` Hailiang Zhang
0 siblings, 1 reply; 17+ messages in thread
From: Li, Liang Z @ 2016-01-19 1:26 UTC (permalink / raw)
To: Hailiang Zhang, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com,
peter.huangpeng@huawei.com, dgilbert@redhat.com,
quintela@redhat.com
> On 2016/1/15 18:24, Li, Liang Z wrote:
> >> It seems that this patch is incorrect: if non-zero pages are zeroed
> >> again during !ram_bulk_stage, we don't send the newly zeroed pages,
> >> so there will be an error.
> >>
> >
> > When not in ram_bulk_stage, the header is still sent, so could you explain
> > why it's wrong?
> >
> > Liang
> >
>
> I made a mistake; yes, this patch can speed up the live migration time,
> and the effect will be more obvious when there are many zero pages.
> I like this idea. Did you test it with postcopy? Does it break postcopy?
>
Not yet. I saw Dave's comments; it will break postcopy, but that's not hard to fix.
A more important thing is Paolo's comment: I don't know in which case this patch will break live migration. Do you have any idea about this?
I hope that QEMU doesn't write data to the block 'pc.ram'.
Liang
> Thanks,
> zhanghailiang
>
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-19 1:26 ` Li, Liang Z
@ 2016-01-19 3:11 ` Hailiang Zhang
2016-01-19 3:17 ` Li, Liang Z
2016-01-19 3:25 ` Hailiang Zhang
0 siblings, 2 replies; 17+ messages in thread
From: Hailiang Zhang @ 2016-01-19 3:11 UTC (permalink / raw)
To: Li, Liang Z, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com, peter.huangpeng,
dgilbert@redhat.com, quintela@redhat.com
On 2016/1/19 9:26, Li, Liang Z wrote:
>> On 2016/1/15 18:24, Li, Liang Z wrote:
>>>> It seems that this patch is incorrect: if non-zero pages are zeroed
>>>> again during !ram_bulk_stage, we don't send the newly zeroed pages,
>>>> so there will be an error.
>>>>
>>>
>>> When not in ram_bulk_stage, the header is still sent, so could you explain
>>> why it's wrong?
>>>
>>> Liang
>>>
>>
>> I made a mistake; yes, this patch can speed up the live migration time,
>> and the effect will be more obvious when there are many zero pages.
>> I like this idea. Did you test it with postcopy? Does it break postcopy?
>>
>
> Not yet. I saw Dave's comments; it will break postcopy, but that's not hard to fix.
> A more important thing is Paolo's comment: I don't know in which case this patch will break live migration. Do you have any idea about this?
> I hope that QEMU doesn't write data to the block 'pc.ram'.
>
Paolo is right: for the VM on the destination, QEMU may write to the VM's memory before the VM starts.
So your assumption that "VM's RAM pages are initialized to zero" is incorrect.
This patch will break live migration.
> Liang
>
>> Thanks,
>> zhanghailiang
>>
>
> .
>
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-19 3:11 ` Hailiang Zhang
@ 2016-01-19 3:17 ` Li, Liang Z
2016-01-20 9:55 ` Paolo Bonzini
2016-01-19 3:25 ` Hailiang Zhang
1 sibling, 1 reply; 17+ messages in thread
From: Li, Liang Z @ 2016-01-19 3:17 UTC (permalink / raw)
To: Hailiang Zhang, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com,
peter.huangpeng@huawei.com, dgilbert@redhat.com,
quintela@redhat.com
> > Not yet. I saw Dave's comments; it will break postcopy, but that's not hard
> > to fix.
> > A more important thing is Paolo's comment: I don't know in which case
> > this patch will break live migration. Do you have any idea about this?
> > I hope that QEMU doesn't write data to the block 'pc.ram'.
> >
>
> Paolo is right: for the VM on the destination, QEMU may write to the VM's
> memory before the VM starts.
> So your assumption that "VM's RAM pages are initialized to zero" is incorrect.
> This patch will break live migration.
>
Which portion of the VM's RAM pages will be written by QEMU? Do you have any specific details?
I can't wait for Paolo's response.
Liang
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-19 3:17 ` Li, Liang Z
@ 2016-01-20 9:55 ` Paolo Bonzini
2016-01-20 9:59 ` Li, Liang Z
0 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2016-01-20 9:55 UTC (permalink / raw)
To: Li, Liang Z, Hailiang Zhang, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, peter.huangpeng@huawei.com,
dgilbert@redhat.com, quintela@redhat.com
On 19/01/2016 04:17, Li, Liang Z wrote:
> > Paolo is right: for the VM on the destination, QEMU may write to the VM's
> > memory before the VM starts.
> > So your assumption that "VM's RAM pages are initialized to zero" is incorrect.
> > This patch will break live migration.
>
> Which portion of the VM's RAM pages will be written by QEMU? Do you have any specific details?
> I can't wait for Paolo's response.
It is basically anything that uses rom_add_file_fixed or
rom_add_blob_fixed with an address that points into RAM.
Paolo
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-20 9:55 ` Paolo Bonzini
@ 2016-01-20 9:59 ` Li, Liang Z
0 siblings, 0 replies; 17+ messages in thread
From: Li, Liang Z @ 2016-01-20 9:59 UTC (permalink / raw)
To: Paolo Bonzini, Hailiang Zhang, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, peter.huangpeng@huawei.com,
dgilbert@redhat.com, quintela@redhat.com
> > > This patch will break live migration.
> >
> > Which portion of the VM's RAM pages will be written by QEMU? Do you
> > have any specific details?
> > I can't wait for Paolo's response.
>
> It is basically anything that uses rom_add_file_fixed or rom_add_blob_fixed
> with an address that points into RAM.
>
> Paolo
Thanks a lot!
Liang
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-19 3:11 ` Hailiang Zhang
2016-01-19 3:17 ` Li, Liang Z
@ 2016-01-19 3:25 ` Hailiang Zhang
2016-01-19 3:36 ` Li, Liang Z
1 sibling, 1 reply; 17+ messages in thread
From: Hailiang Zhang @ 2016-01-19 3:25 UTC (permalink / raw)
To: Li, Liang Z, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com, peter.huangpeng,
dgilbert@redhat.com, quintela@redhat.com
On 2016/1/19 11:11, Hailiang Zhang wrote:
> On 2016/1/19 9:26, Li, Liang Z wrote:
>>> On 2016/1/15 18:24, Li, Liang Z wrote:
>>>>> It seems that this patch is incorrect: if non-zero pages are zeroed
>>>>> again during !ram_bulk_stage, we don't send the newly zeroed pages,
>>>>> so there will be an error.
>>>>>
>>>>
>>>> When not in ram_bulk_stage, the header is still sent, so could you explain
>>>> why it's wrong?
>>>>
>>>> Liang
>>>>
>>>
>>> I made a mistake; yes, this patch can speed up the live migration time,
>>> and the effect will be more obvious when there are many zero pages.
>>> I like this idea. Did you test it with postcopy? Does it break postcopy?
>>>
>>
>> Not yet. I saw Dave's comments; it will break postcopy, but that's not hard to fix.
>> A more important thing is Paolo's comment: I don't know in which case this patch will break live migration. Do you have any idea about this?
>> I hope that QEMU doesn't write data to the block 'pc.ram'.
>>
>
> Paolo is right: for the VM on the destination, QEMU may write to the VM's memory before the VM starts.
> So your assumption that "VM's RAM pages are initialized to zero" is incorrect.
> This patch will break live migration.
>
Actually, someone did this before and it caused a migration bug;
see commit f1c72795af573b24a7da5eb52375c9aba8a37972, and
the fixing patch is
commit 9ef051e5536b6368a1076046ec6c4ec4ac12b5c6
Revert "migration: do not sent zero pages in bulk stage"
>> Liang
>>
>>> Thanks,
>>> zhanghailiang
>>>
>>
>> .
>>
>
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-19 3:25 ` Hailiang Zhang
@ 2016-01-19 3:36 ` Li, Liang Z
0 siblings, 0 replies; 17+ messages in thread
From: Li, Liang Z @ 2016-01-19 3:36 UTC (permalink / raw)
To: Hailiang Zhang, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, pbonzini@redhat.com,
peter.huangpeng@huawei.com, dgilbert@redhat.com,
quintela@redhat.com
> Actually, someone did this before and it caused a migration bug; see
> commit f1c72795af573b24a7da5eb52375c9aba8a37972, and the fixing patch is
> commit 9ef051e5536b6368a1076046ec6c4ec4ac12b5c6
> Revert "migration: do not sent zero pages in bulk stage"
Thanks for the information; I hadn't noticed that before. Maybe there is a workaround solution instead of reverting; I need to investigate more.
Liang
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 9:48 [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage Liang Li
2016-01-15 10:17 ` Hailiang Zhang
@ 2016-01-15 11:39 ` Paolo Bonzini
2016-01-16 14:12 ` Li, Liang Z
2016-01-15 18:57 ` Dr. David Alan Gilbert
2 siblings, 1 reply; 17+ messages in thread
From: Paolo Bonzini @ 2016-01-15 11:39 UTC (permalink / raw)
To: Liang Li, qemu-devel; +Cc: amit.shah, zhang.zhanghailiang, dgilbert, quintela
On 15/01/2016 10:48, Liang Li wrote:
> Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
> with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
> when hugetlbfs is used), there is no need to send the zero page header
> to the destination.
>
> For a guest that uses only a small portion of its RAM, this change can avoid
> allocating all of the guest's RAM pages on the destination node after
> live migration. Another benefit is that the destination QEMU can save lots of
> CPU cycles on zero page checking.
>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
This does not work. Depending on the board, some pages are written by
QEMU before the guest starts. If the guest rewrites them with zeroes,
this change breaks migration.
Paolo
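To make the failure mode concrete, here is a toy, stand-alone model of the sequence Paolo describes (plain C; the buffers, flags and the 0xAB pattern are invented purely for illustration): the destination page already holds data QEMU wrote before the guest started, the source guest later zeroed its copy, and the bulk-stage optimisation sends nothing, so the two sides end up different.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static int is_zero_page(const unsigned char *p)
{
    for (int i = 0; i < PAGE_SIZE; i++) {
        if (p[i]) {
            return 0;
        }
    }
    return 1;
}

int main(void)
{
    unsigned char src[PAGE_SIZE] = { 0 };   /* source guest zeroed this page */
    unsigned char dst[PAGE_SIZE];

    /* destination QEMU wrote ROM/firmware-like data here before the guest ran */
    memset(dst, 0xAB, PAGE_SIZE);

    int ram_bulk_stage = 1;                 /* first (bulk) pass of migration */
    int skip_zero_in_bulk = 1;              /* the behaviour this patch proposes */

    if (is_zero_page(src) && !(ram_bulk_stage && skip_zero_in_bulk)) {
        /* without the patch, a zero-page marker is sent and the
         * destination clears the page */
        memset(dst, 0, PAGE_SIZE);
    }
    /* with the patch, nothing is sent for this page in the bulk stage,
     * so the destination keeps its stale 0xAB contents */

    printf("pages %s after the \"migration\"\n",
           memcmp(src, dst, PAGE_SIZE) == 0 ? "match" : "differ");
    return 0;
}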
> ---
> migration/ram.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 4e606ab..c4821d1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>
> if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> acct_info.dup_pages++;
> - *bytes_transferred += save_page_header(f, block,
> - offset | RAM_SAVE_FLAG_COMPRESS);
> - qemu_put_byte(f, 0);
> - *bytes_transferred += 1;
> + if (!ram_bulk_stage) {
> + *bytes_transferred += save_page_header(f, block, offset |
> + RAM_SAVE_FLAG_COMPRESS);
> + qemu_put_byte(f, 0);
> + *bytes_transferred += 1;
> + }
> pages = 1;
> }
>
>
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 11:39 ` Paolo Bonzini
@ 2016-01-16 14:12 ` Li, Liang Z
0 siblings, 0 replies; 17+ messages in thread
From: Li, Liang Z @ 2016-01-16 14:12 UTC (permalink / raw)
To: Paolo Bonzini, qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, zhang.zhanghailiang@huawei.com,
dgilbert@redhat.com, quintela@redhat.com
> On 15/01/2016 10:48, Liang Li wrote:
> > Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
> > with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
> > when hugetlbfs is used), there is no need to send the zero page header
> > to the destination.
> >
> > For a guest that uses only a small portion of its RAM, this change can avoid
> > allocating all of the guest's RAM pages on the destination node after
> > live migration. Another benefit is that the destination QEMU can save lots of
> > CPU cycles on zero page checking.
> >
> > Signed-off-by: Liang Li <liang.z.li@intel.com>
>
> This does not work. Depending on the board, some pages are written by
> QEMU before the guest starts. If the guest rewrites them with zeroes, this
> change breaks migration.
>
> Paolo
Hi Paolo,
Luckily I CC'd you. Could you give an example of a case in which this patch will break migration?
Then I can understand your comments better. Much appreciated!
Liang
>
> > ---
> > migration/ram.c | 10 ++++++----
> > 1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 4e606ab..c4821d1 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
> >
> > if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> > acct_info.dup_pages++;
> > - *bytes_transferred += save_page_header(f, block,
> > - offset | RAM_SAVE_FLAG_COMPRESS);
> > - qemu_put_byte(f, 0);
> > - *bytes_transferred += 1;
> > + if (!ram_bulk_stage) {
> > + *bytes_transferred += save_page_header(f, block, offset |
> > + RAM_SAVE_FLAG_COMPRESS);
> > + qemu_put_byte(f, 0);
> > + *bytes_transferred += 1;
> > + }
> > pages = 1;
> > }
> >
> >
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 9:48 [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage Liang Li
2016-01-15 10:17 ` Hailiang Zhang
2016-01-15 11:39 ` Paolo Bonzini
@ 2016-01-15 18:57 ` Dr. David Alan Gilbert
2016-01-16 14:25 ` Li, Liang Z
2016-01-18 9:17 ` Hailiang Zhang
2 siblings, 2 replies; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2016-01-15 18:57 UTC (permalink / raw)
To: Liang Li; +Cc: amit.shah, pbonzini, zhang.zhanghailiang, qemu-devel, quintela
* Liang Li (liang.z.li@intel.com) wrote:
> Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
> with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
> when hugetlbfs is used), there is no need to send the zero page header
> to the destination.
>
> For a guest that uses only a small portion of its RAM, this change can avoid
> allocating all of the guest's RAM pages on the destination node after
> live migration. Another benefit is that the destination QEMU can save lots of
> CPU cycles on zero page checking.
I think this would break postcopy, because the zero pages wouldn't be
filled in, so accessing them would still generate a userfault.
So you'd have to disable this optimisation if postcopy is enabled
(even during the precopy bulk stage).
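A rough sketch of the gating Dave suggests, with stand-in flags (illustration only, not a tested patch; in real code the second flag would come from the migration state, e.g. a helper along the lines of migrate_postcopy_ram()):

#include <stdbool.h>
#include <stdio.h>

static bool ram_bulk_stage = true;       /* stand-in for the bulk-stage flag */
static bool postcopy_enabled = false;    /* stand-in for the postcopy capability */

/* Send the zero-page header unless we are in the bulk stage AND
 * postcopy is not enabled. */
static bool should_send_zero_header(void)
{
    return !ram_bulk_stage || postcopy_enabled;
}

int main(void)
{
    printf("send zero page header: %s\n",
           should_send_zero_header() ? "yes" : "no");
    return 0;
}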
Also, are you sure about the benefits?
The destination guest's RAM should not be allocated on receiving a zero
page; see ram_handle_compressed, which doesn't write to the page if
it's zero, so it shouldn't cause an allocation. I think you're probably
correct about the zero page test on the destination; I wonder if we
can speed that up.
Dave
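For readers following the thread, the destination-side behaviour Dave refers to has roughly the shape below (a paraphrase of the idea, not the exact QEMU source; is_zero_range() is approximated here with a simple byte loop):

#include <stdint.h>
#include <string.h>

/* Approximation of QEMU's is_zero_range() helper. */
static int is_zero_range(const uint8_t *p, uint64_t size)
{
    for (uint64_t i = 0; i < size; i++) {
        if (p[i]) {
            return 0;
        }
    }
    return 1;
}

/* Rough shape of ram_handle_compressed(): only write when the page would
 * actually change, so an incoming zero page does not touch (and therefore
 * does not allocate) destination memory that is already zero. */
static void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
{
    if (ch != 0 || !is_zero_range(host, size)) {
        memset(host, ch, size);
    }
}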
>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> ---
> migration/ram.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 4e606ab..c4821d1 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>
> if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> acct_info.dup_pages++;
> - *bytes_transferred += save_page_header(f, block,
> - offset | RAM_SAVE_FLAG_COMPRESS);
> - qemu_put_byte(f, 0);
> - *bytes_transferred += 1;
> + if (!ram_bulk_stage) {
> + *bytes_transferred += save_page_header(f, block, offset |
> + RAM_SAVE_FLAG_COMPRESS);
> + qemu_put_byte(f, 0);
> + *bytes_transferred += 1;
> + }
> pages = 1;
> }
>
> --
> 1.9.1
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 18:57 ` Dr. David Alan Gilbert
@ 2016-01-16 14:25 ` Li, Liang Z
2016-01-18 9:33 ` Dr. David Alan Gilbert
2016-01-18 9:17 ` Hailiang Zhang
1 sibling, 1 reply; 17+ messages in thread
From: Li, Liang Z @ 2016-01-16 14:25 UTC (permalink / raw)
To: Dr. David Alan Gilbert
Cc: amit.shah@redhat.com, pbonzini@redhat.com,
zhang.zhanghailiang@huawei.com, qemu-devel@nongnu.org,
quintela@redhat.com
> * Liang Li (liang.z.li@intel.com) wrote:
> > Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
> > with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
> > when hugetlbfs is used), there is no need to send the zero page header
> > to the destination.
> >
> > For a guest that uses only a small portion of its RAM, this change can avoid
> > allocating all of the guest's RAM pages on the destination node after
> > live migration. Another benefit is that the destination QEMU can save lots of
> > CPU cycles on zero page checking.
>
> I think this would break postcopy, because the zero pages wouldn't be filled
> in, so accessing them would still generate a userfault.
> So you'd have to disable this optimisation if postcopy is enabled (even during
> the precopy bulk stage).
>
> Also, are you sure about the benefits?
> The destination guest's RAM should not be allocated on receiving a zero page;
> see ram_handle_compressed, which doesn't write to the page if it's zero, so it
> shouldn't cause an allocation. I think you're probably correct about the zero
> page test on the destination; I wonder if we can speed that up.
>
> Dave
I have tested the performance: with an 8G guest that has just booted, this patch can reduce the total live migration time by about 10%.
Unfortunately, Paolo said this patch would break live migration in some cases...
For the zero page test on the destination, if the page really is a zero page, the test is faster than writing a whole page of zeroes.
Liang
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-16 14:25 ` Li, Liang Z
@ 2016-01-18 9:33 ` Dr. David Alan Gilbert
0 siblings, 0 replies; 17+ messages in thread
From: Dr. David Alan Gilbert @ 2016-01-18 9:33 UTC (permalink / raw)
To: Li, Liang Z
Cc: amit.shah@redhat.com, pbonzini@redhat.com,
zhang.zhanghailiang@huawei.com, qemu-devel@nongnu.org,
quintela@redhat.com
* Li, Liang Z (liang.z.li@intel.com) wrote:
> > * Liang Li (liang.z.li@intel.com) wrote:
> > > Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
> > > with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
> > > when hugetlbfs is used), there is no need to send the zero page header
> > > to the destination.
> > >
> > > For a guest that uses only a small portion of its RAM, this change can avoid
> > > allocating all of the guest's RAM pages on the destination node after
> > > live migration. Another benefit is that the destination QEMU can save lots of
> > > CPU cycles on zero page checking.
> >
> > I think this would break postcopy, because the zero pages wouldn't be filled
> > in, so accessing them would still generate a userfault.
> > So you'd have to disable this optimisation if postcopy is enabled (even during
> > the precopy bulk stage).
> >
> > Also, are you sure about the benefits?
> > The destination guest's RAM should not be allocated on receiving a zero page;
> > see ram_handle_compressed, which doesn't write to the page if it's zero, so it
> > shouldn't cause an allocation. I think you're probably correct about the zero
> > page test on the destination; I wonder if we can speed that up.
> >
> > Dave
>
> I have tested the performance: with an 8G guest that has just booted, this patch can reduce the total live migration time by about 10%.
> Unfortunately, Paolo said this patch would break live migration in some cases...
>
> For the zero page test on the destination, if the page really is a zero page, the test is faster than writing a whole page of zeroes.
There shouldn't be a write on the destination though; it does a check if
the page is already zero and only if it's non-zero does it do the write;
it should rarely be non-zero.
Dave
>
> Liang
>
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
2016-01-15 18:57 ` Dr. David Alan Gilbert
2016-01-16 14:25 ` Li, Liang Z
@ 2016-01-18 9:17 ` Hailiang Zhang
1 sibling, 0 replies; 17+ messages in thread
From: Hailiang Zhang @ 2016-01-18 9:17 UTC (permalink / raw)
To: Dr. David Alan Gilbert, Liang Li
Cc: amit.shah, pbonzini, quintela, peter.huangpeng, qemu-devel
On 2016/1/16 2:57, Dr. David Alan Gilbert wrote:
> * Liang Li (liang.z.li@intel.com) wrote:
>> Now that the VM's RAM pages are initialized to zero (the VM's RAM is allocated
>> with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
>> when hugetlbfs is used), there is no need to send the zero page header
>> to the destination.
>>
>> For a guest that uses only a small portion of its RAM, this change can avoid
>> allocating all of the guest's RAM pages on the destination node after
>> live migration. Another benefit is that the destination QEMU can save lots of
>> CPU cycles on zero page checking.
>
> I think this would break postcopy, because the zero pages wouldn't be
> filled in, so accessing them would still generate a userfault.
> So you'd have to disable this optimisation if postcopy is enabled
> (even during the precopy bulk stage).
>
> Also, are you sure about the benefits?
> The destination guest's RAM should not be allocated on receiving a zero
> page; see ram_handle_compressed, which doesn't write to the page if
> it's zero, so it shouldn't cause an allocation. I think you're probably
> correct about the zero page test on the destination; I wonder if we
> can speed that up.
>
Yes, we have already optimized the zero page allocation on the destination,
but this patch can reduce the amount of data transferred and the
time spent checking for zero pages, which can reduce the migration time.
> Dave
>
>>
>> Signed-off-by: Liang Li <liang.z.li@intel.com>
>> ---
>> migration/ram.c | 10 ++++++----
>> 1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 4e606ab..c4821d1 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>>
>> if (is_zero_range(p, TARGET_PAGE_SIZE)) {
>> acct_info.dup_pages++;
>> - *bytes_transferred += save_page_header(f, block,
>> - offset | RAM_SAVE_FLAG_COMPRESS);
>> - qemu_put_byte(f, 0);
>> - *bytes_transferred += 1;
>> + if (!ram_bulk_stage) {
>> + *bytes_transferred += save_page_header(f, block, offset |
>> + RAM_SAVE_FLAG_COMPRESS);
>> + qemu_put_byte(f, 0);
>> + *bytes_transferred += 1;
>> + }
>> pages = 1;
>> }
>>
>> --
>> 1.9.1
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
> .
>