* [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
@ 2016-05-18 11:20 Jitendra Kolhe
2016-05-30 10:49 ` Jitendra Kolhe
2016-12-23 2:50 ` Li, Liang Z
0 siblings, 2 replies; 11+ messages in thread
From: Jitendra Kolhe @ 2016-05-18 11:20 UTC (permalink / raw)
To: qemu-devel
Cc: pbonzini, crosthwaite.peter, rth, eblake, lcapitulino, stefanha,
armbru, quintela, mst, den, JBottomley, borntraeger, amit.shah,
dgilbert, ehabkost, jitendra.kolhe, mohan_parthasarathy, simhan,
renganathan.meenakshisundaram
While measuring live migration performance for a qemu/kvm guest, it was observed
that QEMU does not maintain any intelligence about guest RAM pages released
by the guest balloon driver and treats such pages like any other
normal guest RAM pages. This has a direct impact on the overall migration time
for a guest which has released (ballooned out) memory to the host.
On large systems, where we can configure guests with 1TB or more of RAM and
a considerable amount of memory released by the balloon driver to the host,
the migration time gets noticeably worse.
The solution proposed below is local to qemu (and does not require any
modification to the Linux kernel or any guest driver). We have verified the fix
for large guests >= 1TB on HPE Superdome X (which can support up to 240 cores
and 12TB of memory).
During live migration, the first iteration in ram_save_iterate() ->
ram_find_and_save_block() will try to migrate RAM pages even if they have been
released by the virtio-balloon driver (balloon inflate). Although these pages
which are returned to the host by the virtio-balloon driver are zero pages,
the migration algorithm still ends up scanning each entire page via
ram_find_and_save_block() -> ram_save_page()/ram_save_compressed_page() ->
save_zero_page() -> is_zero_range(). We also end up sending header information
over the network for these pages during migration. This adds to the total
migration time.
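To see the cost involved, the zero-page scan can be sketched with a minimal stand-in (this `is_zero_range` is a simplified byte-wise illustration, not QEMU's optimized implementation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for is_zero_range(): byte-wise scan of a buffer.
 * QEMU's real version is vectorized, but the work is still O(page size). */
static bool is_zero_range(const unsigned char *p, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (p[i] != 0) {
            return false;   /* found a non-zero byte: not a zero page */
        }
    }
    return true;            /* scanned the whole range: it is all zeroes */
}
```

For a ballooned-out page this full-page scan, plus a per-page header on the wire, is pure overhead that the optimization below avoids.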
The solution creates a balloon bitmap ramblock as part of virtio-balloon
device initialization. Each bit in the balloon bitmap represents a guest RAM
page of size 1UL << VIRTIO_BALLOON_PFN_SHIFT, i.e. 4K. If TARGET_PAGE_BITS <=
VIRTIO_BALLOON_PFN_SHIFT, the ram_addr offset of a dirty page (as used by the
dirty page bitmap during migration) is checked against the balloon bitmap as
is; if the bit is set in the balloon bitmap, the corresponding RAM page is
excluded from scanning and from sending header information during migration.
If TARGET_PAGE_BITS > VIRTIO_BALLOON_PFN_SHIFT, then for a given dirty page
ram_addr all sub-pages of size 1UL << VIRTIO_BALLOON_PFN_SHIFT must be
ballooned out to avoid the zero-page scan and header transmission.
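The two page-size cases can be sketched as follows (a minimal, self-contained illustration; the function and bitmap names here are hypothetical, not the patch's actual code, and the page-size handling is folded into one loop):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define VIRTIO_BALLOON_PFN_SHIFT 12   /* the balloon device works in 4K pages */
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static bool test_bit(const unsigned long *map, unsigned long nr)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

static void set_bit(unsigned long *map, unsigned long nr)
{
    map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/*
 * Return true if the target page at ram_addr is entirely ballooned out
 * and can therefore skip the zero-page scan and header transmission.
 */
static bool balloon_page_skippable(const unsigned long *map,
                                   unsigned long ram_addr,
                                   unsigned int target_page_bits)
{
    unsigned long base = ram_addr >> VIRTIO_BALLOON_PFN_SHIFT;
    unsigned long subpages = 1;

    if (target_page_bits > VIRTIO_BALLOON_PFN_SHIFT) {
        /* One target page spans several 4K sub-pages: all must be set. */
        subpages = 1UL << (target_page_bits - VIRTIO_BALLOON_PFN_SHIFT);
    }
    /* With target_page_bits <= the balloon shift, this checks one bit as is. */
    for (unsigned long i = 0; i < subpages; i++) {
        if (!test_bit(map, base + i)) {
            return false;
        }
    }
    return true;
}
```

With 4K target pages the dirty-page offset indexes the bitmap directly; with 64K target pages (as on some ppc64/aarch64 configurations) all 16 covered balloon bits must be set before the page may be skipped.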
The bitmap represents the entire guest RAM up to the maximum configured memory.
Guest RAM pages claimed by the virtio-balloon driver are represented by 1
in the bitmap. Since the bitmap is maintained as a ramblock, it is migrated to
the target as part of the migration's ram iterative and ram complete phases, so
that subsequent migrations from the target can continue to use the optimization.
A new migration capability called skip-balloon is introduced. The user can
disable the capability in cases where little benefit is expected or where
the migration target runs an older version.
During live migration setup, the optimization is set to a disabled state if
. no virtio-balloon device is initialized.
. the skip-balloon migration capability is disabled.
. the guest virtio-balloon driver has not negotiated the
VIRTIO_BALLOON_F_MUST_TELL_HOST flag, which means the guest may start
reusing RAM pages freed by the guest balloon driver even before the
host/QEMU is aware of it. In that case, the optimization is disabled so
that RAM pages in use by the guest continue to be scanned and migrated.
The balloon bitmap ramblock size is set to zero if the optimization is disabled,
to avoid the overhead of migrating the bitmap. If the bitmap is not migrated to
the target, the destination starts with a fresh bitmap and tracks
ballooning operations thereafter.
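Taken together, the setup-time decision can be sketched as below (hypothetical names and a plain-data form of the inputs; the real checks live in QEMU's migration and balloon code):

```c
#include <assert.h>
#include <stdbool.h>

/* Inputs to the setup-time decision, flattened for illustration. */
struct balloon_migration_state {
    bool balloon_device_present;   /* a virtio-balloon device is initialized */
    bool skip_balloon_capability;  /* skip-balloon migration capability is on */
    bool must_tell_host;           /* guest negotiated F_MUST_TELL_HOST */
};

/*
 * The optimization is enabled only when all three conditions hold;
 * otherwise ballooned pages are scanned and migrated as usual, and the
 * balloon bitmap ramblock is resized to zero so it is not migrated.
 */
static bool skip_balloon_enabled(const struct balloon_migration_state *s)
{
    return s->balloon_device_present &&
           s->skip_balloon_capability &&
           s->must_tell_host;
}
```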
Jitendra Kolhe (4):
balloon: maintain bitmap for pages released by guest balloon driver.
balloon: add balloon bitmap migration capability and setup bitmap
migration status.
balloon: reset balloon bitmap ramblock size on source and target.
migration: skip scanning and migrating ram pages released by
virtio-balloon driver.
Changes in v2:
- Resolved compilation issue for qemu-user binaries in exec.c
- Localize balloon bitmap test to save_zero_page().
- Updated version string for newly added migration capability to 2.7.
- Made minor modifications to patch commit text.
Changes in v3:
- Add balloon bitmap to RAMBlock.
- Resolve bitmap offset calculation by translating host addr back to a
RAMBlock and ram_addr
- Add balloon bitmap support for case if TARGET_PAGE_BITS
> VIRTIO_BALLOON_PFN_SHIFT.
- Remove dependency of skip-balloon migration capability on postcopy
migration.
- Disable optimization if the guest balloon driver does not support
VIRTIO_BALLOON_F_MUST_TELL_HOST feature.
- Split single patch into 4 small patches.
balloon.c | 196 ++++++++++++++++++++++++++++++++++++-
exec.c | 6 ++
hw/virtio/virtio-balloon.c | 42 +++++++-
include/hw/virtio/virtio-balloon.h | 1 +
include/migration/migration.h | 1 +
include/sysemu/balloon.h | 13 ++-
migration/migration.c | 9 ++
migration/ram.c | 9 +-
qapi-schema.json | 5 +-
9 files changed, 276 insertions(+), 6 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2016-05-18 11:20 [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver Jitendra Kolhe
@ 2016-05-30 10:49 ` Jitendra Kolhe
2016-06-15 6:15 ` Jitendra Kolhe
2016-12-23 2:50 ` Li, Liang Z
1 sibling, 1 reply; 11+ messages in thread
From: Jitendra Kolhe @ 2016-05-30 10:49 UTC (permalink / raw)
To: Jitendra Kolhe
Cc: qemu-devel, Renganathan, JBottomley, ehabkost, crosthwaite.peter,
simhan, quintela, armbru, lcapitulino, borntraeger, mst,
mohan_parthasarathy, stefanha, den, amit.shah, pbonzini, dgilbert,
rth
ping...
for entire v3 version of the patchset.
http://patchwork.ozlabs.org/project/qemu-devel/list/?submitter=68462
- Jitendra
On Wed, May 18, 2016 at 4:50 PM, Jitendra Kolhe <jitendra.kolhe@hpe.com> wrote:
> [...]
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2016-05-30 10:49 ` Jitendra Kolhe
@ 2016-06-15 6:15 ` Jitendra Kolhe
0 siblings, 0 replies; 11+ messages in thread
From: Jitendra Kolhe @ 2016-06-15 6:15 UTC (permalink / raw)
To: qemu-devel
Cc: Jitendra Kolhe, Renganathan, ehabkost, crosthwaite.peter, simhan,
quintela, armbru, lcapitulino, borntraeger, mst,
mohan_parthasarathy, stefanha, den, amit.shah, pbonzini, dgilbert,
rth
ping ...
I also received bounces from a few individual email IDs,
so please consider this one a resend.
Thanks,
- Jitendra
On 5/30/2016 4:19 PM, Jitendra Kolhe wrote:
> ping...
> for entire v3 version of the patchset.
> http://patchwork.ozlabs.org/project/qemu-devel/list/?submitter=68462
>
> - Jitendra
>
> On Wed, May 18, 2016 at 4:50 PM, Jitendra Kolhe <jitendra.kolhe@hpe.com> wrote:
>> [...]
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2016-05-18 11:20 [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver Jitendra Kolhe
2016-05-30 10:49 ` Jitendra Kolhe
@ 2016-12-23 2:50 ` Li, Liang Z
2016-12-26 11:47 ` Jitendra Kolhe
2017-01-02 8:54 ` David Hildenbrand
1 sibling, 2 replies; 11+ messages in thread
From: Li, Liang Z @ 2016-12-23 2:50 UTC (permalink / raw)
To: Jitendra Kolhe, qemu-devel@nongnu.org
Cc: renganathan.meenakshisundaram@hpe.com, JBottomley@Odin.com,
ehabkost@redhat.com, crosthwaite.peter@gmail.com, simhan@hpe.com,
quintela@redhat.com, armbru@redhat.com, lcapitulino@redhat.com,
borntraeger@de.ibm.com, mst@redhat.com,
mohan_parthasarathy@hpe.com, stefanha@redhat.com, den@openvz.org,
amit.shah@redhat.com, pbonzini@redhat.com, dgilbert@redhat.com,
rth@twiddle.net
> [...]
>
I have a better way to get rid of the bitmap.
We should not maintain the inflated pages in a bitmap; instead, we
can get them from the guest when needed, just like what we did for the guest's
unused pages. Then we can combine the inflated-page info with the unused-page
info and skip both during live migration.
Thanks!
Liang
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2016-12-23 2:50 ` Li, Liang Z
@ 2016-12-26 11:47 ` Jitendra Kolhe
2017-01-02 8:54 ` David Hildenbrand
1 sibling, 0 replies; 11+ messages in thread
From: Jitendra Kolhe @ 2016-12-26 11:47 UTC (permalink / raw)
To: Li, Liang Z, qemu-devel@nongnu.org
Cc: renganathan.meenakshisundaram@hpe.com, JBottomley@Odin.com,
ehabkost@redhat.com, crosthwaite.peter@gmail.com, simhan@hpe.com,
quintela@redhat.com, armbru@redhat.com, lcapitulino@redhat.com,
borntraeger@de.ibm.com, mst@redhat.com,
mohan_parthasarathy@hpe.com, stefanha@redhat.com, den@openvz.org,
amit.shah@redhat.com, pbonzini@redhat.com, dgilbert@redhat.com,
rth@twiddle.net
On 12/23/2016 8:20 AM, Li, Liang Z wrote:
>> [...]
>
> I have a better way to get rid of the bitmap.
> We should not maintain the inflated pages in a bitmap; instead, we
> can get them from the guest when needed, just like what we did for the guest's
> unused pages. Then we can combine the inflated-page info with the unused-page
> info and skip both during live migration.
>
Thanks for your response. I will try to work on top of your patch set and use
the same framework that skips migrating the guest's unused pages for
ballooned-out pages too.
Thanks,
- Jitendra
> Thanks!
> Liang
>
>
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2016-12-23 2:50 ` Li, Liang Z
2016-12-26 11:47 ` Jitendra Kolhe
@ 2017-01-02 8:54 ` David Hildenbrand
2017-01-05 1:33 ` Li, Liang Z
1 sibling, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2017-01-02 8:54 UTC (permalink / raw)
To: Li, Liang Z, Jitendra Kolhe, qemu-devel@nongnu.org
Cc: crosthwaite.peter@gmail.com, ehabkost@redhat.com,
quintela@redhat.com, simhan@hpe.com, JBottomley@Odin.com,
armbru@redhat.com, lcapitulino@redhat.com,
renganathan.meenakshisundaram@hpe.com, borntraeger@de.ibm.com,
mst@redhat.com, mohan_parthasarathy@hpe.com, stefanha@redhat.com,
pbonzini@redhat.com, amit.shah@redhat.com, den@openvz.org,
dgilbert@redhat.com, rth@twiddle.net
Am 23.12.2016 um 03:50 schrieb Li, Liang Z:
>> [...]
>
> I have a better way to get rid of the bitmap.
> We should not maintain the inflating pages in a bitmap; instead, we
> can get them from the guest when needed, just like what we did for the guest's
> unused pages. Then we can combine the inflating page info with the unused page
> info and skip them during live migration.
>
If we want to actually host enforce (disallow) access to inflated pages,
having such a bitmap in QEMU is required. Getting them from the guest
doesn't make sense in that context anymore.
Thanks,
David
> Thanks!
> Liang
>
>
--
David
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2017-01-02 8:54 ` David Hildenbrand
@ 2017-01-05 1:33 ` Li, Liang Z
2017-01-05 7:04 ` Jitendra Kolhe
2017-01-05 9:41 ` David Hildenbrand
0 siblings, 2 replies; 11+ messages in thread
From: Li, Liang Z @ 2017-01-05 1:33 UTC (permalink / raw)
To: David Hildenbrand, Jitendra Kolhe, qemu-devel@nongnu.org
Cc: crosthwaite.peter@gmail.com, ehabkost@redhat.com,
quintela@redhat.com, simhan@hpe.com, JBottomley@Odin.com,
armbru@redhat.com, lcapitulino@redhat.com,
renganathan.meenakshisundaram@hpe.com, borntraeger@de.ibm.com,
mst@redhat.com, mohan_parthasarathy@hpe.com, stefanha@redhat.com,
pbonzini@redhat.com, amit.shah@redhat.com, den@openvz.org,
dgilbert@redhat.com, rth@twiddle.net
> Am 23.12.2016 um 03:50 schrieb Li, Liang Z:
> >> [...]
> >
> > I have a better way to get rid of the bitmap.
> > We should not maintain the inflating pages in the bitmap, instead, we
> > can get them from the guest if it's needed, just like what we did for
> > the guest's unused pages. Then we can combine the inflating page info
> > with the unused page info together, and skip them during live migration.
> >
>
> If we want to actually host enforce (disallow) access to inflated pages, having
Is that a new feature?
> such a bitmap in QEMU is required. Getting them from the guest doesn't
> make sense in that context anymore.
Even if a bitmap is required, we should avoid sending it in the stop & copy stage.
Thanks!
Liang
>
> Thanks,
> David
>
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2017-01-05 1:33 ` Li, Liang Z
@ 2017-01-05 7:04 ` Jitendra Kolhe
2017-01-05 9:41 ` David Hildenbrand
1 sibling, 0 replies; 11+ messages in thread
From: Jitendra Kolhe @ 2017-01-05 7:04 UTC (permalink / raw)
To: Li, Liang Z, David Hildenbrand, qemu-devel@nongnu.org
Cc: crosthwaite.peter@gmail.com, ehabkost@redhat.com,
quintela@redhat.com, simhan@hpe.com, JBottomley@Odin.com,
armbru@redhat.com, lcapitulino@redhat.com,
renganathan.meenakshisundaram@hpe.com, borntraeger@de.ibm.com,
mst@redhat.com, mohan_parthasarathy@hpe.com, stefanha@redhat.com,
pbonzini@redhat.com, amit.shah@redhat.com, den@openvz.org,
dgilbert@redhat.com, rth@twiddle.net
On 1/5/2017 7:03 AM, Li, Liang Z wrote:
>> Am 23.12.2016 um 03:50 schrieb Li, Liang Z:
>>>> [...]
>>>
>>> I have a better way to get rid of the bitmap.
>>> We should not maintain the inflating pages in the bitmap, instead, we
>>> can get them from the guest if it's needed, just like what we did for
>>> the guest's unused pages. Then we can combine the inflating page info
>>> with the unused page info together, and skip them during live migration.
>>>
>>
>> If we want to actually host enforce (disallow) access to inflated pages, having
>
> Is that a new feature?
>
>> such a bitmap in QEMU is required. Getting them from the guest doesn't
>> make sense in that context anymore.
>
> Even if a bitmap is required, we should avoid sending it in the stop & copy stage.
>
How about using both ways? If the guest is capable of sharing ballooned-out page
information through the unused-page framework, use it. If the guest is not capable,
we maintain a bitmap in qemu. This way we get the advantages of both approaches.
1. In case the guest is capable - we avoid sending the bitmap in the stop and copy stage.
2. In case the guest is not capable - the host can still disallow migration of ballooned
out pages.
But since this approach may add a bit of code churn, we recommend going with maintaining
the bitmap in qemu in the first phase (existing implementation) and then enhancing it later
to use the unused-page approach (querying the guest) based on the guest capability.
Thanks,
- Jitendra
> Thanks!
> Liang
>>
>> Thanks,
>> David
>>
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2017-01-05 1:33 ` Li, Liang Z
2017-01-05 7:04 ` Jitendra Kolhe
@ 2017-01-05 9:41 ` David Hildenbrand
2017-01-05 10:15 ` Li, Liang Z
1 sibling, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2017-01-05 9:41 UTC (permalink / raw)
To: Li, Liang Z, Jitendra Kolhe, qemu-devel@nongnu.org
Cc: crosthwaite.peter@gmail.com, ehabkost@redhat.com,
quintela@redhat.com, simhan@hpe.com, JBottomley@Odin.com,
armbru@redhat.com, lcapitulino@redhat.com,
renganathan.meenakshisundaram@hpe.com, borntraeger@de.ibm.com,
mst@redhat.com, mohan_parthasarathy@hpe.com, stefanha@redhat.com,
pbonzini@redhat.com, amit.shah@redhat.com, den@openvz.org,
dgilbert@redhat.com, rth@twiddle.net
>>
>> If we want to actually host enforce (disallow) access to inflated pages, having
>
> Is that a new feature?
Still in the design phase. Andrea Arcangeli briefly mentioned it in his
reply to v5 of your patch series:
"We also plan to use userfaultfd to make the balloon driver host
enforced (will work fine on hugetlbfs 2M and tmpfs too) but that's
going to be invisible to the ABI so it's not strictly relevant for
this discussion."
>
>> such a bitmap in QEMU is required. Getting them from the guest doesn't
>> make sense in that context anymore.
>
> Even a bitmap is required, we should avoid to send it in the stop & copy stage.
yes, I agree.
Thanks!
>
> Thanks!
> Liang
>>
>> Thanks,
>> David
>>
--
David
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2017-01-05 9:41 ` David Hildenbrand
@ 2017-01-05 10:15 ` Li, Liang Z
2017-01-05 13:47 ` David Hildenbrand
0 siblings, 1 reply; 11+ messages in thread
From: Li, Liang Z @ 2017-01-05 10:15 UTC (permalink / raw)
To: David Hildenbrand, Jitendra Kolhe, qemu-devel@nongnu.org
Cc: crosthwaite.peter@gmail.com, ehabkost@redhat.com,
quintela@redhat.com, simhan@hpe.com, JBottomley@Odin.com,
armbru@redhat.com, lcapitulino@redhat.com,
renganathan.meenakshisundaram@hpe.com, borntraeger@de.ibm.com,
mst@redhat.com, mohan_parthasarathy@hpe.com, stefanha@redhat.com,
pbonzini@redhat.com, amit.shah@redhat.com, den@openvz.org,
dgilbert@redhat.com, rth@twiddle.net
> >> If we want to actually host enforce (disallow) access to inflated
> >> pages, having
> >
> > Is that a new feature?
>
> Still in the design phase. Andrea Arcangeli briefly mentioned it in his reply to
> v5 of your patch series:
>
> "We also plan to use userfaultfd to make the balloon driver host enforced
> (will work fine on hugetlbfs 2M and tmpfs too) but that's going to be invisible
> to the ABI so it's not strictly relevant for this discussion."
>
Looking forward to seeing more details. Is there any thread about the design?
Thanks!
Liang
> >
> >> such a bitmap in QEMU is required. Getting them from the guest
> >> doesn't make sense in that context anymore.
> >
> > Even if a bitmap is required, we should avoid sending it in the stop & copy stage.
>
> yes, I agree.
>
> Thanks!
* Re: [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver.
2017-01-05 10:15 ` Li, Liang Z
@ 2017-01-05 13:47 ` David Hildenbrand
0 siblings, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2017-01-05 13:47 UTC (permalink / raw)
To: Li, Liang Z, Jitendra Kolhe, qemu-devel@nongnu.org
Cc: crosthwaite.peter@gmail.com, ehabkost@redhat.com,
quintela@redhat.com, simhan@hpe.com, JBottomley@Odin.com,
armbru@redhat.com, lcapitulino@redhat.com,
renganathan.meenakshisundaram@hpe.com, borntraeger@de.ibm.com,
mst@redhat.com, mohan_parthasarathy@hpe.com, stefanha@redhat.com,
pbonzini@redhat.com, amit.shah@redhat.com, den@openvz.org,
dgilbert@redhat.com, rth@twiddle.net
Am 05.01.2017 um 11:15 schrieb Li, Liang Z:
>>>> If we want to actually host enforce (disallow) access to inflated
>>>> pages, having
>>>
>>> Is that a new feature?
>>
>> Still in the design phase. Andrea Arcangeli briefly mentioned it in his reply to
>> v5 of your patch series:
>>
>> "We also plan to use userfaultfd to make the balloon driver host enforced
>> (will work fine on hugetlbfs 2M and tmpfs too) but that's going to be invisible
>> to the ABI so it's not strictly relevant for this discussion."
>>
>
> Looking forward to seeing more details. Is there any thread about the design?
>
Not yet, we're still evaluating possible approaches. As soon as we have
sorted out the details, we'll start a discussion thread. I'll put you
on cc!
Thanks!
> Thanks!
>
> Liang
>
>>>
>>>> such a bitmap in QEMU is required. Getting them from the guest
>>>> doesn't make sense in that context anymore.
>>>
>>> Even if a bitmap is required, we should avoid sending it in the stop & copy stage.
>>
>> yes, I agree.
>>
>> Thanks!
--
David
end of thread, other threads:[~2017-01-05 13:47 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
2016-05-18 11:20 [Qemu-devel] [PATCH v3 0/4] migration: skip scanning and migrating ram pages released by virtio-balloon driver Jitendra Kolhe
2016-05-30 10:49 ` Jitendra Kolhe
2016-06-15 6:15 ` Jitendra Kolhe
2016-12-23 2:50 ` Li, Liang Z
2016-12-26 11:47 ` Jitendra Kolhe
2017-01-02 8:54 ` David Hildenbrand
2017-01-05 1:33 ` Li, Liang Z
2017-01-05 7:04 ` Jitendra Kolhe
2017-01-05 9:41 ` David Hildenbrand
2017-01-05 10:15 ` Li, Liang Z
2017-01-05 13:47 ` David Hildenbrand