From: Peter Lieven <pl@kamp.de>
To: qemu-devel@nongnu.org
Cc: quintela@redhat.com, Stefan Hajnoczi <stefanha@gmail.com>,
	Peter Lieven <pl@kamp.de>, Orit Wasserman <owasserm@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCHv5 08/10] migration: do not send zero pages in bulk stage
Date: Tue, 26 Mar 2013 10:58:37 +0100	[thread overview]
Message-ID: <1364291919-19563-9-git-send-email-pl@kamp.de> (raw)
In-Reply-To: <1364291919-19563-1-git-send-email-pl@kamp.de>

During the bulk stage of RAM migration, if a page is a
zero page, do not send it at all; the memory at the
destination reads as zero anyway.
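
For reference, a minimal sketch of the zero-page test this relies
on, assuming a plain byte-wise scan (the function name is
illustrative; the series actually vectorizes this check in
earlier patches):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustration only: true if every byte of the page is zero.
     * The real check in this series uses vector operations. */
    static bool is_zero_page_sketch(const uint8_t *p, size_t page_size)
    {
        size_t i;
        for (i = 0; i < page_size; i++) {
            if (p[i] != 0) {
                return false;
            }
        }
        return true;
    }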

Even if the target calls madvise() with QEMU_MADV_DONTNEED
upon receipt of a zero page, I have observed that the target
starts swapping if its memory is overcommitted. It seems that
the pages are dropped asynchronously.
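
As a hedged sketch of what such a destination-side drop might
look like (names are illustrative, not the actual QEMU code;
madvise() is only advice to the kernel, which fits the
asynchronous drop observed above):

    #include <string.h>
    #include <sys/mman.h>

    /* Sketch: discard a received zero page at the destination.
     * MADV_DONTNEED is advisory; the kernel may reclaim the pages
     * later, so an overcommitted host can still start swapping. */
    static void drop_zero_page(void *host_addr, size_t page_size)
    {
        if (madvise(host_addr, page_size, MADV_DONTNEED) != 0) {
            /* Fall back to clearing the page explicitly. */
            memset(host_addr, 0, page_size);
        }
    }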

This patch also updates QMP to return the number of
skipped pages in MigrationStats.
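
An illustrative query-migrate exchange showing where the new
field surfaces (values are made up, not from a real run):

    -> { "execute": "query-migrate" }
    <- { "return": { "status": "active",
                     "ram": { "transferred": 4194304, "remaining": 8388608,
                              "total": 16777216, "duplicate": 120,
                              "skipped": 1005, "normal": 1024,
                              "normal-bytes": 4194304,
                              "dirty-pages-rate": 0 } } }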

Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 arch_init.c                   |   24 ++++++++++++++++++++----
 hmp.c                         |    2 ++
 include/migration/migration.h |    2 ++
 migration.c                   |    3 ++-
 qapi-schema.json              |    8 +++++---
 qmp-commands.hx               |    3 ++-
 6 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 1291bd2..3a0d02e 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -183,6 +183,7 @@ int64_t xbzrle_cache_resize(int64_t new_size)
 /* accounting for migration statistics */
 typedef struct AccountingInfo {
     uint64_t dup_pages;
+    uint64_t skipped_pages;
     uint64_t norm_pages;
     uint64_t iterations;
     uint64_t xbzrle_bytes;
@@ -208,6 +209,16 @@ uint64_t dup_mig_pages_transferred(void)
     return acct_info.dup_pages;
 }
 
+uint64_t skipped_mig_bytes_transferred(void)
+{
+    return acct_info.skipped_pages * TARGET_PAGE_SIZE;
+}
+
+uint64_t skipped_mig_pages_transferred(void)
+{
+    return acct_info.skipped_pages;
+}
+
 uint64_t norm_mig_bytes_transferred(void)
 {
     return acct_info.norm_pages * TARGET_PAGE_SIZE;
@@ -440,10 +451,15 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
             bytes_sent = -1;
             if (is_zero_page(p)) {
                 acct_info.dup_pages++;
-                bytes_sent = save_block_hdr(f, block, offset, cont,
-                                            RAM_SAVE_FLAG_COMPRESS);
-                qemu_put_byte(f, 0);
-                bytes_sent++;
+                if (!ram_bulk_stage) {
+                    bytes_sent = save_block_hdr(f, block, offset, cont,
+                                                RAM_SAVE_FLAG_COMPRESS);
+                    qemu_put_byte(f, 0);
+                    bytes_sent++;
+                } else {
+                    acct_info.skipped_pages++;
+                    bytes_sent = 0;
+                }
             } else if (migrate_use_xbzrle()) {
                 current_addr = block->offset + offset;
                 bytes_sent = save_xbzrle_page(f, p, current_addr, block,
diff --git a/hmp.c b/hmp.c
index b0a861c..e3e833e 100644
--- a/hmp.c
+++ b/hmp.c
@@ -173,6 +173,8 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
                        info->ram->total >> 10);
         monitor_printf(mon, "duplicate: %" PRIu64 " pages\n",
                        info->ram->duplicate);
+        monitor_printf(mon, "skipped: %" PRIu64 " pages\n",
+                       info->ram->skipped);
         monitor_printf(mon, "normal: %" PRIu64 " pages\n",
                        info->ram->normal);
         monitor_printf(mon, "normal bytes: %" PRIu64 " kbytes\n",
diff --git a/include/migration/migration.h b/include/migration/migration.h
index bb617fd..e2acec6 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -96,6 +96,8 @@ extern SaveVMHandlers savevm_ram_handlers;
 
 uint64_t dup_mig_bytes_transferred(void);
 uint64_t dup_mig_pages_transferred(void);
+uint64_t skipped_mig_bytes_transferred(void);
+uint64_t skipped_mig_pages_transferred(void);
 uint64_t norm_mig_bytes_transferred(void);
 uint64_t norm_mig_pages_transferred(void);
 uint64_t xbzrle_mig_bytes_transferred(void);
diff --git a/migration.c b/migration.c
index 185d112..7fb2147 100644
--- a/migration.c
+++ b/migration.c
@@ -197,11 +197,11 @@ MigrationInfo *qmp_query_migrate(Error **errp)
         info->ram->remaining = ram_bytes_remaining();
         info->ram->total = ram_bytes_total();
         info->ram->duplicate = dup_mig_pages_transferred();
+        info->ram->skipped = skipped_mig_pages_transferred();
         info->ram->normal = norm_mig_pages_transferred();
         info->ram->normal_bytes = norm_mig_bytes_transferred();
         info->ram->dirty_pages_rate = s->dirty_pages_rate;
 
-
         if (blk_mig_active()) {
             info->has_disk = true;
             info->disk = g_malloc0(sizeof(*info->disk));
@@ -227,6 +227,7 @@ MigrationInfo *qmp_query_migrate(Error **errp)
         info->ram->remaining = 0;
         info->ram->total = ram_bytes_total();
         info->ram->duplicate = dup_mig_pages_transferred();
+        info->ram->skipped = skipped_mig_pages_transferred();
         info->ram->normal = norm_mig_pages_transferred();
         info->ram->normal_bytes = norm_mig_bytes_transferred();
         break;
diff --git a/qapi-schema.json b/qapi-schema.json
index 088f4e1..d6a8812 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -496,7 +496,9 @@
 #
 # @total: total amount of bytes involved in the migration process
 #
-# @duplicate: number of duplicate pages (since 1.2)
+# @duplicate: number of duplicate (zero) pages (since 1.2)
+#
+# @skipped: number of skipped zero pages (since 1.5)
 #
 # @normal : number of normal pages (since 1.2)
 #
@@ -509,8 +511,8 @@
 ##
 { 'type': 'MigrationStats',
   'data': {'transferred': 'int', 'remaining': 'int', 'total': 'int' ,
-           'duplicate': 'int', 'normal': 'int', 'normal-bytes': 'int',
-           'dirty-pages-rate' : 'int' } }
+           'duplicate': 'int', 'skipped': 'int', 'normal': 'int',
+           'normal-bytes': 'int', 'dirty-pages-rate' : 'int' } }
 
 ##
 # @XBZRLECacheStats
diff --git a/qmp-commands.hx b/qmp-commands.hx
index b370060..fed74c6 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2442,7 +2442,8 @@ The main json-object contains the following:
          - "transferred": amount transferred (json-int)
          - "remaining": amount remaining (json-int)
          - "total": total (json-int)
-         - "duplicate": number of duplicated pages (json-int)
+         - "duplicate": number of duplicated (zero) pages (json-int)
+         - "skipped": number of skipped zero pages (json-int)
          - "normal" : number of normal pages transferred (json-int)
          - "normal-bytes" : number of normal bytes transferred (json-int)
 - "disk": only present if "status" is "active" and it is a block migration,
-- 
1.7.9.5
