From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Philippe Mathieu-Daudé" <f4bug@amsat.org>,
"Juan Quintela" <quintela@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Yanan Wang" <wangyanan55@huawei.com>
Subject: [PATCH v2 5/6] multifd: Only sync once each full round of memory
Date: Thu, 28 Jul 2022 13:59:56 +0200 [thread overview]
Message-ID: <20220728115957.5554-6-quintela@redhat.com> (raw)
In-Reply-To: <20220728115957.5554-1-quintela@redhat.com>
Instead of synchronizing the multifd channels after every section, only do it once per full pass over guest memory. We need a new flag on the stream to mark the point where the destination has to sync.
Notice that we still synchronize at the end of the setup and at the end
of the complete stages.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Added the missing qemu_fflush(); all tests now pass consistently.
---
migration/migration.c | 2 +-
migration/ram.c | 27 ++++++++++++++++++++++++++-
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index ebca4f2d8a..7905145d7d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4393,7 +4393,7 @@ static Property migration_properties[] = {
DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
/* We will change to false when we introduce the new mechanism */
DEFINE_PROP_BOOL("multifd-sync-after-each-section", MigrationState,
- multifd_sync_after_each_section, true),
+ multifd_sync_after_each_section, false),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
DEFINE_PROP_MIG_CAP("x-rdma-pin-all", MIGRATION_CAPABILITY_RDMA_PIN_ALL),
diff --git a/migration/ram.c b/migration/ram.c
index 1507ba1991..234603ee4f 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -82,6 +82,7 @@
#define RAM_SAVE_FLAG_XBZRLE 0x40
/* 0x80 is reserved in migration.h start with 0x100 next */
#define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
+#define RAM_SAVE_FLAG_MULTIFD_SYNC 0x200
XBZRLECacheStats xbzrle_counters;
@@ -1540,6 +1541,7 @@ retry:
* associated with the search process.
*
* Returns:
+ * <0: An error happened
* PAGE_ALL_CLEAN: no dirty page found, give up
* PAGE_TRY_AGAIN: no dirty page found, retry for next block
* PAGE_DIRTY_FOUND: dirty page found
@@ -1572,6 +1574,14 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
pss->page = 0;
pss->block = QLIST_NEXT_RCU(pss->block, next);
if (!pss->block) {
+ if (!migrate_multifd_sync_after_each_section()) {
+ int ret = multifd_send_sync_main(rs->f);
+ if (ret < 0) {
+ return ret;
+ }
+ qemu_put_be64(rs->f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+ qemu_fflush(rs->f);
+ }
/*
* If memory migration starts over, we will meet a dirtied page
* which may still exists in compression threads's ring, so we
@@ -2556,6 +2566,9 @@ static int ram_find_and_save_block(RAMState *rs)
break;
} else if (res == PAGE_TRY_AGAIN) {
continue;
+ } else if (res < 0) {
+ pages = res;
+ break;
}
}
}
@@ -3232,6 +3245,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
return ret;
}
+ if (!migrate_multifd_sync_after_each_section()) {
+ qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+ }
+
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
@@ -3419,6 +3436,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
return ret;
}
+ if (!migrate_multifd_sync_after_each_section()) {
+ qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+ }
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
@@ -4087,7 +4107,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
}
decompress_data_with_multi_threads(f, page_buffer, len);
break;
-
+ case RAM_SAVE_FLAG_MULTIFD_SYNC:
+ multifd_recv_sync_main();
+ break;
case RAM_SAVE_FLAG_EOS:
/* normal exit */
if (migrate_multifd_sync_after_each_section()) {
@@ -4367,6 +4389,9 @@ static int ram_load_precopy(QEMUFile *f)
break;
}
break;
+ case RAM_SAVE_FLAG_MULTIFD_SYNC:
+ multifd_recv_sync_main();
+ break;
case RAM_SAVE_FLAG_EOS:
/* normal exit */
if (migrate_multifd_sync_after_each_section()) {
--
2.37.1