* [PATCH v5 0/3] Eliminate multifd flush
From: Juan Quintela @ 2023-02-13 2:57 UTC
To: qemu-devel
Cc: Markus Armbruster, Philippe Mathieu-Daudé, Juan Quintela,
Marcel Apfelbaum, Yanan Wang, Dr. David Alan Gilbert, Eric Blake,
Eduardo Habkost
Hi
In this v5:
- Remove the RAM flags documentation (already in the PULL request).
- Rebase on top of the PULL request.
Please review.
Based-on: <20230213025150.71537-1-quintela@redhat.com>
Migration 20230213 patches
In this v4:
- Rebased on top of the migration-20230209 PULL request.
- Integrated two patches of this series into that PULL request.
- Rebased.
- Addressed Eric's reviews.
Please review.
In this v3:
- Updated to the latest upstream.
- Fixed checkpatch errors.
Please review.
In this v2:
- Updated to the latest upstream.
- Changed the 0, 1, 2 values to defines.
- Added documentation for SAVE_VM_FLAGS.
- Added a missing qemu_fflush(); without it the migration test hung
  at random (only with TLS, no clue why).
Please review.
[v1]
The upstream multifd code synchronizes all threads after each RAM
section is sent. That is suboptimal.
Change it to flush only after we have gone through all of RAM.
Preserve the old semantics for old machine types.
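As a rough sketch of where the flush moves: multifd_send_sync_main()
and the capability helper are the real names from the patches, but the
two wrapper functions and their call sites below are made up, purely
to illustrate the intended control flow.

    /* Sketch only, not part of the series; wrapper names are invented. */
    static void ram_section_done(QEMUFile *f)
    {
        /* Old sync point: the end of every RAM section sent. */
        if (migrate_multifd_sync_after_each_section()) {
            /* Old machine types keep the per-section flush. */
            multifd_send_sync_main(f);
        }
    }

    static void ram_full_round_done(QEMUFile *f)
    {
        /* New sync point: once per complete pass over guest RAM. */
        multifd_send_sync_main(f);
    }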
Juan Quintela (3):
multifd: Create property multifd-sync-after-each-section
multifd: Protect multifd_send_sync_main() calls
multifd: Only sync once each full round of memory
 qapi/migration.json   | 10 +++++++++-
 migration/migration.h |  1 +
 hw/core/machine.c     |  1 +
 migration/migration.c | 13 +++++++++++--
 migration/ram.c       | 44 +++++++++++++++++++++++++++++++++++++------
 5 files changed, 60 insertions(+), 9 deletions(-)
--
2.39.1
* [PATCH v5 1/3] multifd: Create property multifd-sync-after-each-section
From: Juan Quintela @ 2023-02-13 2:57 UTC
To: qemu-devel
Cc: Markus Armbruster, Philippe Mathieu-Daudé, Juan Quintela,
Marcel Apfelbaum, Yanan Wang, Dr. David Alan Gilbert, Eric Blake,
Eduardo Habkost
We used to synchronize all channels at the end of each RAM section
sent. That is not needed, so this prepares us to synchronize only once
per full round through RAM in later patches.
Notice that we initialize the property as true. We will change the
default when we introduce the new mechanism.
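For orientation: once the rest of the series is in, the early "return
true" in the helper below is expected to disappear, leaving a plain
capability lookup. This is an assumption drawn from the paragraph
above, not part of this patch:

    bool migrate_multifd_sync_after_each_section(void)
    {
        MigrationState *s = migrate_get_current();

        return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
    }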
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Rename each-iteration to after-each-section
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 qapi/migration.json   | 10 +++++++++-
 migration/migration.h |  1 +
 hw/core/machine.c     |  1 +
 migration/migration.c | 15 +++++++++++++--
 4 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index c84fa10e86..2907241b9c 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -478,6 +478,13 @@
 #                     should not affect the correctness of postcopy migration.
 #                     (since 7.1)
 #
+# @multifd-sync-after-each-section: Synchronize channels after each
+#                                   section is sent.  We used to do
+#                                   that in the past, but it is
+#                                   suboptimal.
+#                                   Default value is true until all code is in.
+#                                   (since 8.0)
+#
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
 #
@@ -492,7 +499,8 @@
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt',
+           'multifd-sync-after-each-section'] }
 
 ##
 # @MigrationCapabilityStatus:
diff --git a/migration/migration.h b/migration/migration.h
index 2da2f8a164..cf84520196 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -424,6 +424,7 @@ int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_multifd_sync_after_each_section(void);
 
 #ifdef CONFIG_LINUX
 bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index f73fc4c45c..dc86849402 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -54,6 +54,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
 
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-sync-after-each-section", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
diff --git a/migration/migration.c b/migration/migration.c
index 90fca70cb7..406c27bc82 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
     MIGRATION_CAPABILITY_XBZRLE,
     MIGRATION_CAPABILITY_X_COLO,
     MIGRATION_CAPABILITY_VALIDATE_UUID,
-    MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+    MIGRATION_CAPABILITY_ZERO_COPY_SEND,
+    MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION);
 
 /* When we add fault tolerance, we could have several
    migrations at once.  For now we don't need to add
@@ -2701,6 +2702,15 @@ bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_multifd_sync_after_each_section(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    /* Always true for now; honour the capability when the code gets in. */
+    return true;
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4535,7 +4545,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
                         MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
-
+    DEFINE_PROP_MIG_CAP("multifd-sync-after-each-section",
+                        MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION),
     DEFINE_PROP_END_OF_LIST(),
 };
--
2.39.1