qemu-devel.nongnu.org archive mirror
* [PATCH v5 0/8] Eliminate multifd flush
@ 2023-02-13  8:57 Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 1/8] migration/multifd: Change multifd_load_cleanup() signature and usage Juan Quintela
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum

Hi

In this v5:
- Remove RAM flags documentation (already in the PULL request)
- Rebase on top of that PULL request.

Please review.

Based-on: <20230213025150.71537-1-quintela@redhat.com>
          Migration 20230213 patches

In this v4:
- Rebased on top of migration-20230209 PULL request
- Integrate two patches in that pull request
- Rebase
- Address Eric's reviews.

Please review.

In this v3:
- update to latest upstream.
- fix checkpatch errors.

Please, review.

In this v2:
- update to latest upstream
- change 0, 1, 2 values to defines
- Add documentation for SAVE_VM_FLAGS
- Add missing qemu_fflush(); its absence caused random hangs in the
  migration test (only with TLS, no clue why).

Please, review.

[v1]
Upstream multifd code synchronizes all threads after each RAM section.
This is suboptimal.
Change it to only flush after we go through all of RAM.

Preserve all semantics for old machine types.

Juan Quintela (4):
  ram: Document migration ram flags
  multifd: Create property multifd-sync-after-each-section
  multifd: Protect multifd_send_sync_main() calls
  multifd: Only sync once each full round of memory

Leonardo Bras (4):
  migration/multifd: Change multifd_load_cleanup() signature and usage
  migration/multifd: Remove unnecessary assignment on
    multifd_load_cleanup()
  migration/multifd: Join all multifd threads in order to avoid leaks
  migration/multifd: Move load_cleanup inside incoming_state_destroy

 qapi/migration.json   | 10 +++++++-
 migration/migration.h |  1 +
 migration/multifd.h   |  3 ++-
 hw/core/machine.c     |  1 +
 migration/migration.c | 29 ++++++++++++---------
 migration/multifd.c   | 17 +++++++-----
 migration/ram.c       | 60 ++++++++++++++++++++++++++++++++++---------
 7 files changed, 89 insertions(+), 32 deletions(-)

-- 
2.39.1




* [PATCH v5 1/8] migration/multifd: Change multifd_load_cleanup() signature and usage
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
@ 2023-02-13  8:57 ` Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 2/8] migration/multifd: Remove unnecessary assignment on multifd_load_cleanup() Juan Quintela
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum, Leonardo Bras, Li Xiaohui,
	Peter Xu

From: Leonardo Bras <leobras@redhat.com>

Since its introduction in commit f986c3d256 ("migration: Create multifd
migration threads"), multifd_load_cleanup() has never returned any value
other than 0, nor set any error on errp.

Even so, in process_incoming_migration_bh() an if clause uses its
return value to decide whether to set autostart = false, which will
never happen.

In order to simplify the codebase, change the signature of
multifd_load_cleanup() to 'void multifd_load_cleanup(void)', and at
every call site remove the error handling and any decision based on a
non-zero return value.

Fixes: b5eea99ec2 ("migration: Add yank feature")
Reported-by: Li Xiaohui <xiaohli@redhat.com>
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h   |  2 +-
 migration/migration.c | 14 ++++----------
 migration/multifd.c   |  6 ++----
 3 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index ff3aa2e2e9..9a7e1a8826 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -16,7 +16,7 @@
 int multifd_save_setup(Error **errp);
 void multifd_save_cleanup(void);
 int multifd_load_setup(Error **errp);
-int multifd_load_cleanup(Error **errp);
+void multifd_load_cleanup(void);
 bool multifd_recv_all_channels_created(void);
 void multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
 void multifd_recv_sync_main(void);
diff --git a/migration/migration.c b/migration/migration.c
index a5c22e327d..5bf332fdd2 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -559,13 +559,7 @@ static void process_incoming_migration_bh(void *opaque)
      */
     qemu_announce_self(&mis->announce_timer, migrate_announce_params());
 
-    if (multifd_load_cleanup(&local_err) != 0) {
-        error_report_err(local_err);
-        autostart = false;
-    }
-    /* If global state section was not received or we are in running
-       state, we need to obey autostart. Any other state is set with
-       runstate_set. */
+    multifd_load_cleanup();
 
     dirty_bitmap_mig_before_vm_start();
 
@@ -665,9 +659,9 @@ fail:
     migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
                       MIGRATION_STATUS_FAILED);
     qemu_fclose(mis->from_src_file);
-    if (multifd_load_cleanup(&local_err) != 0) {
-        error_report_err(local_err);
-    }
+
+    multifd_load_cleanup();
+
     exit(EXIT_FAILURE);
 }
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 99a59830c8..cac8496edc 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1013,12 +1013,12 @@ static void multifd_recv_terminate_threads(Error *err)
     }
 }
 
-int multifd_load_cleanup(Error **errp)
+void multifd_load_cleanup(void)
 {
     int i;
 
     if (!migrate_use_multifd()) {
-        return 0;
+        return;
     }
     multifd_recv_terminate_threads(NULL);
     for (i = 0; i < migrate_multifd_channels(); i++) {
@@ -1058,8 +1058,6 @@ int multifd_load_cleanup(Error **errp)
     multifd_recv_state->params = NULL;
     g_free(multifd_recv_state);
     multifd_recv_state = NULL;
-
-    return 0;
 }
 
 void multifd_recv_sync_main(void)
-- 
2.39.1




* [PATCH v5 2/8] migration/multifd: Remove unnecessary assignment on multifd_load_cleanup()
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 1/8] migration/multifd: Change multifd_load_cleanup() signature and usage Juan Quintela
@ 2023-02-13  8:57 ` Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 3/8] migration/multifd: Join all multifd threads in order to avoid leaks Juan Quintela
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum, Leonardo Bras, Li Xiaohui,
	Peter Xu

From: Leonardo Bras <leobras@redhat.com>

Before assigning "p->quit = true" for every multifd channel,
multifd_load_cleanup() calls multifd_recv_terminate_threads(), which
already does the same assignment while holding a mutex.

So there is no point in doing the same assignment again.

Fixes: b5eea99ec2 ("migration: Add yank feature")
Reported-by: Li Xiaohui <xiaohli@redhat.com>
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index cac8496edc..3dd569d0c9 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1025,7 +1025,6 @@ void multifd_load_cleanup(void)
         MultiFDRecvParams *p = &multifd_recv_state->params[i];
 
         if (p->running) {
-            p->quit = true;
             /*
              * multifd_recv_thread may hung at MULTIFD_FLAG_SYNC handle code,
              * however try to wakeup it without harm in cleanup phase.
-- 
2.39.1




* [PATCH v5 3/8] migration/multifd: Join all multifd threads in order to avoid leaks
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 1/8] migration/multifd: Change multifd_load_cleanup() signature and usage Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 2/8] migration/multifd: Remove unnecessary assignment on multifd_load_cleanup() Juan Quintela
@ 2023-02-13  8:57 ` Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 4/8] migration/multifd: Move load_cleanup inside incoming_state_destroy Juan Quintela
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum, Leonardo Bras, Li Xiaohui,
	Peter Xu

From: Leonardo Bras <leobras@redhat.com>

The current approach only joins threads that are still running.

For the threads that are not joined, resources and private memory are
kept in the process space and never reclaimed before process end, which
risks serious memory leaks.

This should usually not be a big problem, since multifd migration is
usually run at most a few times, and after it succeeds there is not
much left to do before the process exits.

Still, joining all of them should not hurt performance.

Fixes: b5eea99ec2 ("migration: Add yank feature")
Reported-by: Li Xiaohui <xiaohli@redhat.com>
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index 3dd569d0c9..840d5814e4 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1030,8 +1030,9 @@ void multifd_load_cleanup(void)
              * however try to wakeup it without harm in cleanup phase.
              */
             qemu_sem_post(&p->sem_sync);
-            qemu_thread_join(&p->thread);
         }
+
+        qemu_thread_join(&p->thread);
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDRecvParams *p = &multifd_recv_state->params[i];
-- 
2.39.1




* [PATCH v5 4/8] migration/multifd: Move load_cleanup inside incoming_state_destroy
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
                   ` (2 preceding siblings ...)
  2023-02-13  8:57 ` [PATCH v5 3/8] migration/multifd: Join all multifd threads in order to avoid leaks Juan Quintela
@ 2023-02-13  8:57 ` Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 5/8] ram: Document migration ram flags Juan Quintela
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum, Leonardo Bras, Li Xiaohui,
	Peter Xu

From: Leonardo Bras <leobras@redhat.com>

Currently running migration_incoming_state_destroy() without first running
multifd_load_cleanup() will cause a yank error:

qemu-system-x86_64: ../util/yank.c:107: yank_unregister_instance:
Assertion `QLIST_EMPTY(&entry->yankfns)' failed.
(core dumped)

The above error happens on the target host when multifd is used for
precopy, postcopy is then triggered, and the migration finishes.  This
crashes the VM on the target host.

To avoid that, move multifd_load_cleanup() inside
migration_incoming_state_destroy(), so that the load cleanup becomes part
of the incoming state destroying process.

Running multifd_load_cleanup() twice could become an issue, though, but
the only scenario where it could run twice is
process_incoming_migration_bh().  So that extra call has to be removed.

On the other hand, that multifd_load_cleanup() call happens well before
migration_incoming_state_destroy(), and having it run before
dirty_bitmap_mig_before_vm_start() and vm_start() may be needed.

So introduce a new function multifd_load_shutdown() that mainly stops
all multifd threads and closes their QIOChannels.  Then use it instead
of multifd_load_cleanup() to make sure nothing else is received before
dirty_bitmap_mig_before_vm_start().

Fixes: b5eea99ec2 ("migration: Add yank feature")
Reported-by: Li Xiaohui <xiaohli@redhat.com>
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/multifd.h   | 1 +
 migration/migration.c | 4 +++-
 migration/multifd.c   | 7 +++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 9a7e1a8826..7cfc265148 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -17,6 +17,7 @@ int multifd_save_setup(Error **errp);
 void multifd_save_cleanup(void);
 int multifd_load_setup(Error **errp);
 void multifd_load_cleanup(void);
+void multifd_load_shutdown(void);
 bool multifd_recv_all_channels_created(void);
 void multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
 void multifd_recv_sync_main(void);
diff --git a/migration/migration.c b/migration/migration.c
index 5bf332fdd2..90fca70cb7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -315,6 +315,8 @@ void migration_incoming_state_destroy(void)
 {
     struct MigrationIncomingState *mis = migration_incoming_get_current();
 
+    multifd_load_cleanup();
+
     if (mis->to_src_file) {
         /* Tell source that we are done */
         migrate_send_rp_shut(mis, qemu_file_get_error(mis->from_src_file) != 0);
@@ -559,7 +561,7 @@ static void process_incoming_migration_bh(void *opaque)
      */
     qemu_announce_self(&mis->announce_timer, migrate_announce_params());
 
-    multifd_load_cleanup();
+    multifd_load_shutdown();
 
     dirty_bitmap_mig_before_vm_start();
 
diff --git a/migration/multifd.c b/migration/multifd.c
index 840d5814e4..5e85c3ea9b 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1013,6 +1013,13 @@ static void multifd_recv_terminate_threads(Error *err)
     }
 }
 
+void multifd_load_shutdown(void)
+{
+    if (migrate_use_multifd()) {
+        multifd_recv_terminate_threads(NULL);
+    }
+}
+
 void multifd_load_cleanup(void)
 {
     int i;
-- 
2.39.1




* [PATCH v5 5/8] ram: Document migration ram flags
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
                   ` (3 preceding siblings ...)
  2023-02-13  8:57 ` [PATCH v5 4/8] migration/multifd: Move load_cleanup inside incoming_state_destroy Juan Quintela
@ 2023-02-13  8:57 ` Juan Quintela
  2023-02-13  8:57 ` [PATCH v5 6/8] multifd: Create property multifd-sync-after-each-section Juan Quintela
  2023-02-13  9:02 ` [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum

0x80 is RAM_SAVE_FLAG_HOOK; it is in qemu-file now.
Note that the biggest usable flag is 0x200.
We can reuse RAM_SAVE_FLAG_FULL.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/ram.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 18ac68b181..521912385d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -67,21 +67,25 @@
 /***********************************************************/
 /* ram save/restore */
 
-/* RAM_SAVE_FLAG_ZERO used to be named RAM_SAVE_FLAG_COMPRESS, it
- * worked for pages that where filled with the same char.  We switched
+/*
+ * RAM_SAVE_FLAG_ZERO used to be named RAM_SAVE_FLAG_COMPRESS, it
+ * worked for pages that were filled with the same char.  We switched
  * it to only search for the zero value.  And to avoid confusion with
- * RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
+ * RAM_SAVE_FLAG_COMPRESS_PAGE just rename it.
  */
-
-#define RAM_SAVE_FLAG_FULL     0x01 /* Obsolete, not used anymore */
+/*
+ * RAM_SAVE_FLAG_FULL was obsoleted in 2009, it can be reused now
+ */
+#define RAM_SAVE_FLAG_FULL     0x01
 #define RAM_SAVE_FLAG_ZERO     0x02
 #define RAM_SAVE_FLAG_MEM_SIZE 0x04
 #define RAM_SAVE_FLAG_PAGE     0x08
 #define RAM_SAVE_FLAG_EOS      0x10
 #define RAM_SAVE_FLAG_CONTINUE 0x20
 #define RAM_SAVE_FLAG_XBZRLE   0x40
-/* 0x80 is reserved in migration.h start with 0x100 next */
+/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
 #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100
+/* We can't use any flag that is bigger than 0x200 */
 
 int (*xbzrle_encode_buffer_func)(uint8_t *, uint8_t *, int,
      uint8_t *, int) = xbzrle_encode_buffer;
-- 
2.39.1




* [PATCH v5 6/8] multifd: Create property multifd-sync-after-each-section
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
                   ` (4 preceding siblings ...)
  2023-02-13  8:57 ` [PATCH v5 5/8] ram: Document migration ram flags Juan Quintela
@ 2023-02-13  8:57 ` Juan Quintela
  2023-02-13  9:02 ` [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  8:57 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Juan Quintela, Philippe Mathieu-Daudé, Eric Blake,
	Markus Armbruster, Marcel Apfelbaum

We used to synchronize all channels at the end of each RAM section
sent.  That is not needed, so the latest patches prepare to synchronize
only once every full round.

Notice that we initialize the property as true.  We will change the
default when we introduce the new mechanism.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

---

Rename each-iteration to after-each-section

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 qapi/migration.json   | 10 +++++++++-
 migration/migration.h |  1 +
 hw/core/machine.c     |  1 +
 migration/migration.c | 15 +++++++++++++--
 4 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/qapi/migration.json b/qapi/migration.json
index c84fa10e86..2907241b9c 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -478,6 +478,13 @@
 #                    should not affect the correctness of postcopy migration.
 #                    (since 7.1)
 #
+# @multifd-sync-after-each-section: Synchronize channels after each
+#                                   section is sent.  We used to do
+#                                   that in the past, but it is
+#                                   suboptimal.
+#                                   Default value is true until all code is in.
+#                                   (since 8.0)
+#
 # Features:
 # @unstable: Members @x-colo and @x-ignore-shared are experimental.
 #
@@ -492,7 +499,8 @@
            'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
            { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
            'validate-uuid', 'background-snapshot',
-           'zero-copy-send', 'postcopy-preempt'] }
+           'zero-copy-send', 'postcopy-preempt',
+           'multifd-sync-after-each-section'] }
 
 ##
 # @MigrationCapabilityStatus:
diff --git a/migration/migration.h b/migration/migration.h
index 2da2f8a164..cf84520196 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -424,6 +424,7 @@ int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
 int migrate_multifd_zstd_level(void);
+bool migrate_multifd_sync_after_each_section(void);
 
 #ifdef CONFIG_LINUX
 bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index f73fc4c45c..dc86849402 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -54,6 +54,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
 GlobalProperty hw_compat_7_0[] = {
     { "arm-gicv3-common", "force-8-bit-prio", "on" },
     { "nvme-ns", "eui64-default", "on"},
+    { "migration", "multifd-sync-after-each-section", "on"},
 };
 const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
 
diff --git a/migration/migration.c b/migration/migration.c
index 90fca70cb7..406c27bc82 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
     MIGRATION_CAPABILITY_XBZRLE,
     MIGRATION_CAPABILITY_X_COLO,
     MIGRATION_CAPABILITY_VALIDATE_UUID,
-    MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+    MIGRATION_CAPABILITY_ZERO_COPY_SEND,
+    MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION);
 
 /* When we add fault tolerance, we could have several
    migrations at once.  For now we don't need to add
@@ -2701,6 +2702,15 @@ bool migrate_use_multifd(void)
     return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
 }
 
+bool migrate_multifd_sync_after_each_section(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return true;
+    // We will change this when code gets in.
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
+}
+
 bool migrate_pause_before_switchover(void)
 {
     MigrationState *s;
@@ -4535,7 +4545,8 @@ static Property migration_properties[] = {
     DEFINE_PROP_MIG_CAP("x-zero-copy-send",
             MIGRATION_CAPABILITY_ZERO_COPY_SEND),
 #endif
-
+    DEFINE_PROP_MIG_CAP("multifd-sync-after-each-section",
+                        MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-- 
2.39.1




* Re: [PATCH v5 0/8] Eliminate multifd flush
  2023-02-13  8:57 [PATCH v5 0/8] Eliminate multifd flush Juan Quintela
                   ` (5 preceding siblings ...)
  2023-02-13  8:57 ` [PATCH v5 6/8] multifd: Create property multifd-sync-after-each-section Juan Quintela
@ 2023-02-13  9:02 ` Juan Quintela
  6 siblings, 0 replies; 8+ messages in thread
From: Juan Quintela @ 2023-02-13  9:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
	Philippe Mathieu-Daudé, Eric Blake, Markus Armbruster,
	Marcel Apfelbaum

Juan Quintela <quintela@redhat.com> wrote:
> Hi

nack.

Mail server is giving me a bad time, sorry.




