From: Peter Lieven <pl@kamp.de>
To: qemu-devel@nongnu.org
Cc: Peter Lieven <pl@kamp.de>
Subject: [Qemu-devel] [PATCHv2 7/9] migration: do not send zero pages in bulk stage
Date: Fri, 15 Mar 2013 16:50:16 +0100 [thread overview]
Message-ID: <1363362619-3190-8-git-send-email-pl@kamp.de> (raw)
In-Reply-To: <1363362619-3190-1-git-send-email-pl@kamp.de>
During the bulk stage of RAM migration, if a page is a zero page, do not
send it at all. The memory at the destination reads as zero anyway.

Even with an madvise with QEMU_MADV_DONTNEED at the target upon receipt
of a zero page, I have observed that the target starts swapping if the
memory is overcommitted. It appears that the pages are dropped
asynchronously.
Signed-off-by: Peter Lieven <pl@kamp.de>
---
arch_init.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch_init.c b/arch_init.c
index e5531e8..a3dc20d 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -432,9 +432,11 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
             bytes_sent = -1;
             if (buffer_is_zero(p, TARGET_PAGE_SIZE)) {
                 acct_info.dup_pages++;
-                bytes_sent = save_block_hdr(f, block, offset, cont,
-                                            RAM_SAVE_FLAG_COMPRESS);
-                qemu_put_byte(f, *p);
+                if (!ram_bulk_stage) {
+                    bytes_sent = save_block_hdr(f, block, offset, cont,
+                                                RAM_SAVE_FLAG_COMPRESS);
+                    qemu_put_byte(f, *p);
+                }
                 bytes_sent += 1;
             } else if (migrate_use_xbzrle()) {
                 current_addr = block->offset + offset;
--
1.7.9.5
Thread overview: 27+ messages
2013-03-15 15:50 [Qemu-devel] [PATCHv2 0/9] buffer_is_zero / migration optimizations Peter Lieven
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 1/9] move vector definitions to qemu-common.h Peter Lieven
2013-03-19 15:35 ` Eric Blake
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 2/9] cutils: add a function to find non-zero content in a buffer Peter Lieven
2013-03-19 15:54 ` Eric Blake
2013-03-19 16:18 ` Peter Lieven
2013-03-19 16:43 ` Eric Blake
2013-03-19 19:42 ` Peter Lieven
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 3/9] buffer_is_zero: use vector optimizations if possible Peter Lieven
2013-03-19 16:08 ` Eric Blake
2013-03-19 16:14 ` Peter Lieven
2013-03-19 19:44 ` Peter Lieven
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 4/9] bitops: use vector algorithm to optimize find_next_bit() Peter Lieven
2013-03-19 16:49 ` Eric Blake
2013-03-19 19:40 ` Peter Lieven
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 5/9] migration: search for zero instead of dup pages Peter Lieven
2013-03-19 16:55 ` Eric Blake
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 6/9] migration: add an indicator for bulk state of ram migration Peter Lieven
2013-03-19 17:32 ` Eric Blake
2013-03-15 15:50 ` Peter Lieven [this message]
2013-03-19 17:36   ` [Qemu-devel] [PATCHv2 7/9] migration: do not send zero pages in bulk stage Eric Blake
2013-03-19 19:35 ` Peter Lieven
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 8/9] migration: do not search dirty " Peter Lieven
2013-03-19 17:40 ` Eric Blake
2013-03-19 19:29 ` Peter Lieven
2013-03-15 15:50 ` [Qemu-devel] [PATCHv2 9/9] migration: use XBZRLE only after " Peter Lieven
2013-03-19 17:43 ` Eric Blake