From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Yanan Wang" <wangyanan55@huawei.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Juan Quintela" <quintela@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Peter Xu" <peterx@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Leonardo Bras" <leobras@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: [PATCH v8 1/3] multifd: Create property multifd-flush-after-each-section
Date: Tue, 25 Apr 2023 18:31:12 +0200 [thread overview]
Message-ID: <20230425163114.2609-2-quintela@redhat.com> (raw)
In-Reply-To: <20230425163114.2609-1-quintela@redhat.com>
We used to flush all channels at the end of each RAM section
sent. That is not needed, so this prepares for flushing only once per
full iteration through all of RAM.
The property defaults to false, but
migrate_multifd_flush_after_each_section() keeps returning "true"
until the following patches implement the new behavior.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
Rename each-iteration to after-each-section
Rename multifd-sync-after-each-section to
multifd-flush-after-each-section
---
hw/core/machine.c | 1 +
migration/migration.c | 13 +++++++++++++
migration/migration.h | 13 +++++++++++++
3 files changed, 27 insertions(+)
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 2ce97a5d3b..32bd9277b3 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -60,6 +60,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
GlobalProperty hw_compat_7_0[] = {
{ "arm-gicv3-common", "force-8-bit-prio", "on" },
{ "nvme-ns", "eui64-default", "on"},
+ { "migration", "multifd-flush-after-each-section", "on"},
};
const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
diff --git a/migration/migration.c b/migration/migration.c
index d91fe9fd86..4b94360f05 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2697,6 +2697,17 @@ bool migrate_use_multifd(void)
return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
}
+bool migrate_multifd_flush_after_each_section(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ /*
+ * Until the patch that removes this comment, we always return that
+ * the property is enabled.
+ */
+ return true || s->multifd_flush_after_each_section;
+}
+
bool migrate_pause_before_switchover(void)
{
MigrationState *s;
@@ -4448,6 +4459,8 @@ static Property migration_properties[] = {
send_section_footer, true),
DEFINE_PROP_BOOL("decompress-error-check", MigrationState,
decompress_error_check, true),
+ DEFINE_PROP_BOOL("multifd-flush-after-each-section", MigrationState,
+ multifd_flush_after_each_section, true),
DEFINE_PROP_UINT8("x-clear-bitmap-shift", MigrationState,
clear_bitmap_shift, CLEAR_BITMAP_SHIFT_DEFAULT),
DEFINE_PROP_BOOL("x-preempt-pre-7-2", MigrationState,
diff --git a/migration/migration.h b/migration/migration.h
index 04e0860b4e..50b4006e93 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -404,6 +404,18 @@ struct MigrationState {
*/
bool preempt_pre_7_2;
+ /*
+ * flush every channel after each section sent.
+ *
+ * This ensures that we can't mix pages from one iteration through
+ * the RAM pages with pages from the following iteration. We really
+ * only need to flush once we have gone through all the
+ * dirty pages. For historical reasons, we do it after each
+ * section, which is suboptimal (we flush too many times).
+ * Default value is false. Setting this property has no effect
+ * until the patch that removes this comment. (since 8.1)
+ */
+ bool multifd_flush_after_each_section;
/*
* This decides the size of guest memory chunk that will be used
* to track dirty bitmap clearing. The size of memory chunk will
@@ -463,6 +475,7 @@ int migrate_multifd_channels(void);
MultiFDCompression migrate_multifd_compression(void);
int migrate_multifd_zlib_level(void);
int migrate_multifd_zstd_level(void);
+bool migrate_multifd_flush_after_each_section(void);
#ifdef CONFIG_LINUX
bool migrate_use_zero_copy_send(void);
--
2.40.0
Thread overview:
2023-04-25 16:31 [PATCH v8 0/3] Eliminate multifd flush Juan Quintela
2023-04-25 16:31 ` Juan Quintela [this message]
2023-04-25 18:42 ` [PATCH v8 1/3] multifd: Create property multifd-flush-after-each-section Peter Xu
2023-04-26 16:55 ` Juan Quintela
2023-04-25 16:31 ` [PATCH v8 2/3] multifd: Protect multifd_send_sync_main() calls Juan Quintela
2023-04-25 16:31 ` [PATCH v8 3/3] multifd: Only flush once each full round of memory Juan Quintela
2023-04-25 18:49 ` [PATCH v8 0/3] Eliminate multifd flush Peter Xu