* [PATCH v4 0/4] Eliminate multifd flush
From: Juan Quintela @ 2023-02-09 23:37 UTC
To: qemu-devel
Cc: Eduardo Habkost, Juan Quintela, Yanan Wang, Markus Armbruster,
Dr. David Alan Gilbert, Marcel Apfelbaum, Eric Blake,
Philippe Mathieu-Daudé
Hi
In this v4:
- Rebased on top of the migration-20230209 PULL request
- Integrated two patches into that pull request
- Rebased
- Addressed Eric's reviews.
Please review.
In this v3:
- Updated to latest upstream.
- Fixed checkpatch errors.
Please review.
In this v2:
- Updated to latest upstream
- Changed the 0, 1, 2 values to defines
- Added documentation for SAVE_VM_FLAGS
- Added missing qemu_fflush(); its absence caused random hangs in the
migration test (only with TLS, no clue why).
Please review.
[v1]
Upstream multifd code synchronizes all threads after each RAM section.
This is suboptimal.
Change it to flush only after we go through all of RAM.
Preserve all semantics for old machine types.
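To make the intent concrete, here is a rough sketch of where the sync
moves (illustrative pseudo-C; ram_left_to_send(), send_ram_section()
and finished_full_round() are made-up helpers, while
multifd_send_sync_main() and RAM_SAVE_FLAG_EOS are the real ones this
series touches):

    /* Old behaviour: every RAM section ends with a full channel sync. */
    while (ram_left_to_send()) {
        send_ram_section(f);
        multifd_send_sync_main(f);           /* sync after every section */
        qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
    }

    /* New behaviour: sync only when a complete pass over RAM finishes. */
    while (ram_left_to_send()) {
        send_ram_section(f);
        if (finished_full_round()) {
            multifd_send_sync_main(f);       /* once per full round */
        }
        qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
    }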
Juan Quintela (4):
multifd: Create property multifd-sync-after-each-section
multifd: Protect multifd_send_sync_main() calls
multifd: Only sync once each full round of memory
ram: Document migration ram flags
qapi/migration.json | 10 +++++++-
migration/migration.h | 1 +
hw/core/machine.c | 1 +
migration/migration.c | 13 ++++++++--
migration/ram.c | 56 +++++++++++++++++++++++++++++++++++--------
5 files changed, 68 insertions(+), 13 deletions(-)
--
2.39.1
* [PATCH v4 1/4] multifd: Create property multifd-sync-after-each-section
From: Juan Quintela @ 2023-02-09 23:37 UTC
To: qemu-devel
Cc: Eduardo Habkost, Juan Quintela, Yanan Wang, Markus Armbruster,
Dr. David Alan Gilbert, Marcel Apfelbaum, Eric Blake,
Philippe Mathieu-Daudé
We used to synchronize all channels at the end of each RAM section
sent. That is not needed, so this prepares for synchronizing only once
per full round of RAM in later patches.
Notice that we initialize the property as true. We will change the
default when we introduce the new mechanism.
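As a usage sketch (assuming the capability lands under this name), it
can be toggled like any other migration capability via QMP:

    { "execute": "migrate-set-capabilities",
      "arguments": {
          "capabilities": [
              { "capability": "multifd-sync-after-each-section",
                "state": true } ] } }

Old machine types get it forced on through the hw_compat_7_0 entry in
this patch, so guests started with those types keep the old stream
behaviour.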
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
Rename each-iteration to after-each-section
---
qapi/migration.json | 10 +++++++++-
migration/migration.h | 1 +
hw/core/machine.c | 1 +
migration/migration.c | 15 +++++++++++++--
4 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index c84fa10e86..2907241b9c 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -478,6 +478,13 @@
# should not affect the correctness of postcopy migration.
# (since 7.1)
#
+# @multifd-sync-after-each-section: Synchronize channels after each
+# section is sent. We used to do
+# that in the past, but it is
+# suboptimal.
+# Default value is true until all code is in.
+# (since 8.0)
+#
# Features:
# @unstable: Members @x-colo and @x-ignore-shared are experimental.
#
@@ -492,7 +499,8 @@
'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
{ 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
'validate-uuid', 'background-snapshot',
- 'zero-copy-send', 'postcopy-preempt'] }
+ 'zero-copy-send', 'postcopy-preempt',
+ 'multifd-sync-after-each-section'] }
##
# @MigrationCapabilityStatus:
diff --git a/migration/migration.h b/migration/migration.h
index 2da2f8a164..cf84520196 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -424,6 +424,7 @@ int migrate_multifd_channels(void);
MultiFDCompression migrate_multifd_compression(void);
int migrate_multifd_zlib_level(void);
int migrate_multifd_zstd_level(void);
+bool migrate_multifd_sync_after_each_section(void);
#ifdef CONFIG_LINUX
bool migrate_use_zero_copy_send(void);
diff --git a/hw/core/machine.c b/hw/core/machine.c
index f73fc4c45c..dc86849402 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -54,6 +54,7 @@ const size_t hw_compat_7_1_len = G_N_ELEMENTS(hw_compat_7_1);
GlobalProperty hw_compat_7_0[] = {
{ "arm-gicv3-common", "force-8-bit-prio", "on" },
{ "nvme-ns", "eui64-default", "on"},
+ { "migration", "multifd-sync-after-each-section", "on"},
};
const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
diff --git a/migration/migration.c b/migration/migration.c
index a5c22e327d..b2844d374f 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
MIGRATION_CAPABILITY_XBZRLE,
MIGRATION_CAPABILITY_X_COLO,
MIGRATION_CAPABILITY_VALIDATE_UUID,
- MIGRATION_CAPABILITY_ZERO_COPY_SEND);
+ MIGRATION_CAPABILITY_ZERO_COPY_SEND,
+ MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION);
/* When we add fault tolerance, we could have several
migrations at once. For now we don't need to add
@@ -2705,6 +2706,15 @@ bool migrate_use_multifd(void)
return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
}
+bool migrate_multifd_sync_after_each_section(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ return true;
+ // We will change this when code gets in.
+ return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
+}
+
bool migrate_pause_before_switchover(void)
{
MigrationState *s;
@@ -4539,7 +4549,8 @@ static Property migration_properties[] = {
DEFINE_PROP_MIG_CAP("x-zero-copy-send",
MIGRATION_CAPABILITY_ZERO_COPY_SEND),
#endif
-
+ DEFINE_PROP_MIG_CAP("multifd-sync-after-each-section",
+ MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION),
DEFINE_PROP_END_OF_LIST(),
};
--
2.39.1
* [PATCH v4 2/4] multifd: Protect multifd_send_sync_main() calls
From: Juan Quintela @ 2023-02-09 23:37 UTC
To: qemu-devel
Cc: Eduardo Habkost, Juan Quintela, Yanan Wang, Markus Armbruster,
Dr. David Alan Gilbert, Marcel Apfelbaum, Eric Blake,
Philippe Mathieu-Daudé
We only need to do that in the ram_save_iterate() call on the sending
side, and on the destination when we get a RAM_SAVE_FLAG_EOS.
In setup() and complete() we need to sync in both the new and old
cases, so don't add a check there.
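In summary, the sync points end up as follows (a sketch of the
behaviour after this patch; only the iterate and EOS paths become
conditional):

    /*
     * Source:
     *   ram_save_setup():     always sync (old and new behaviour)
     *   ram_save_iterate():   sync only if multifd-sync-after-each-section
     *   ram_save_complete():  always sync (old and new behaviour)
     * Destination:
     *   multifd_recv_sync_main() on RAM_SAVE_FLAG_EOS becomes
     *   conditional on the same capability.
     */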
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
Remove the wrappers that we took out in patch 5.
---
migration/ram.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 8d114afd4b..899e2cd8af 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3406,9 +3406,11 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
out:
if (ret >= 0
&& migration_is_setup_or_active(migrate_get_current()->state)) {
- ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
- if (ret < 0) {
- return ret;
+ if (migrate_multifd_sync_after_each_section()) {
+ ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
+ if (ret < 0) {
+ return ret;
+ }
}
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
@@ -4168,7 +4170,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
case RAM_SAVE_FLAG_EOS:
/* normal exit */
- multifd_recv_sync_main();
+ if (migrate_multifd_sync_after_each_section()) {
+ multifd_recv_sync_main();
+ }
break;
default:
error_report("Unknown combination of migration flags: 0x%x"
@@ -4439,7 +4443,9 @@ static int ram_load_precopy(QEMUFile *f)
break;
case RAM_SAVE_FLAG_EOS:
/* normal exit */
- multifd_recv_sync_main();
+ if (migrate_multifd_sync_after_each_section()) {
+ multifd_recv_sync_main();
+ }
break;
default:
if (flags & RAM_SAVE_FLAG_HOOK) {
--
2.39.1
* [PATCH v4 3/4] multifd: Only sync once each full round of memory
From: Juan Quintela @ 2023-02-09 23:37 UTC
To: qemu-devel
Cc: Eduardo Habkost, Juan Quintela, Yanan Wang, Markus Armbruster,
Dr. David Alan Gilbert, Marcel Apfelbaum, Eric Blake,
Philippe Mathieu-Daudé
We need to add a new flag to mark the point in the stream where we
want to sync.
Notice that we still synchronize at the end of the setup and complete
stages.
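The stream ends up looking roughly like this (an illustrative layout,
not byte-accurate):

    old:  setup EOS | pages EOS | pages EOS | ... | complete EOS
          (destination syncs the multifd channels at every EOS)

    new:  setup MULTIFD_SYNC EOS | pages EOS | ... |
          pages MULTIFD_SYNC pages EOS | ... | complete MULTIFD_SYNC EOS
          (destination syncs only on RAM_SAVE_FLAG_MULTIFD_SYNC, i.e.
          once per full round over RAM, plus setup and complete)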
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Add missing qemu_fflush(); now it passes all tests consistently.
---
qapi/migration.json | 2 +-
migration/migration.c | 2 --
migration/ram.c | 28 +++++++++++++++++++++++++++-
3 files changed, 28 insertions(+), 4 deletions(-)
diff --git a/qapi/migration.json b/qapi/migration.json
index 2907241b9c..5d0efa4590 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -482,7 +482,7 @@
# section is sent. We used to do
# that in the past, but it is
# suboptimal.
-# Default value is true until all code is in.
+# Default value is false.
# (since 8.0)
#
# Features:
diff --git a/migration/migration.c b/migration/migration.c
index b2844d374f..9eb061319d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2710,8 +2710,6 @@ bool migrate_multifd_sync_after_each_section(void)
{
MigrationState *s = migrate_get_current();
- return true;
- // We will change this when code gets in.
return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
}
diff --git a/migration/ram.c b/migration/ram.c
index 899e2cd8af..32fab7b5ee 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -82,6 +82,7 @@
#define RAM_SAVE_FLAG_XBZRLE 0x40
/* 0x80 is reserved in migration.h start with 0x100 next */
#define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
+#define RAM_SAVE_FLAG_MULTIFD_SYNC 0x200
int (*xbzrle_encode_buffer_func)(uint8_t *, uint8_t *, int,
uint8_t *, int) = xbzrle_encode_buffer;
@@ -1593,6 +1594,7 @@ retry:
* associated with the search process.
*
* Returns:
+ * <0: An error happened
* PAGE_ALL_CLEAN: no dirty page found, give up
* PAGE_TRY_AGAIN: no dirty page found, retry for next block
* PAGE_DIRTY_FOUND: dirty page found
@@ -1620,6 +1622,15 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
pss->page = 0;
pss->block = QLIST_NEXT_RCU(pss->block, next);
if (!pss->block) {
+ if (!migrate_multifd_sync_after_each_section()) {
+ QEMUFile *f = rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
+ int ret = multifd_send_sync_main(f);
+ if (ret < 0) {
+ return ret;
+ }
+ qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+ qemu_fflush(f);
+ }
/*
* If memory migration starts over, we will meet a dirtied page
* which may still exists in compression threads's ring, so we
@@ -2612,6 +2623,9 @@ static int ram_find_and_save_block(RAMState *rs)
break;
} else if (res == PAGE_TRY_AGAIN) {
continue;
+ } else if (res < 0) {
+ pages = res;
+ break;
}
}
}
@@ -3298,6 +3312,10 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
return ret;
}
+ if (!migrate_multifd_sync_after_each_section()) {
+ qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+ }
+
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
@@ -3483,6 +3501,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
return ret;
}
+ if (!migrate_multifd_sync_after_each_section()) {
+ qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_SYNC);
+ }
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
@@ -4167,7 +4188,9 @@ int ram_load_postcopy(QEMUFile *f, int channel)
}
decompress_data_with_multi_threads(f, page_buffer, len);
break;
-
+ case RAM_SAVE_FLAG_MULTIFD_SYNC:
+ multifd_recv_sync_main();
+ break;
case RAM_SAVE_FLAG_EOS:
/* normal exit */
if (migrate_multifd_sync_after_each_section()) {
@@ -4441,6 +4464,9 @@ static int ram_load_precopy(QEMUFile *f)
break;
}
break;
+ case RAM_SAVE_FLAG_MULTIFD_SYNC:
+ multifd_recv_sync_main();
+ break;
case RAM_SAVE_FLAG_EOS:
/* normal exit */
if (migrate_multifd_sync_after_each_section()) {
--
2.39.1
* [PATCH v4 4/4] ram: Document migration ram flags
From: Juan Quintela @ 2023-02-09 23:37 UTC
To: qemu-devel
Cc: Eduardo Habkost, Juan Quintela, Yanan Wang, Markus Armbruster,
Dr. David Alan Gilbert, Marcel Apfelbaum, Eric Blake,
Philippe Mathieu-Daudé
0x80 is RAM_SAVE_FLAG_HOOK; it lives in qemu-file now.
The biggest usable flag is 0x200; note that in a comment.
We can reuse RAM_SAVe_FLAG_FULL.
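For context on why the flag space is this tight: the flags travel in
the same be64 word as the page offset, which is page aligned, so only
the low bits are free. A sketch of the scheme (the receiver side
mirrors what ram_load_precopy() does; that the 0x200 ceiling comes
from the smallest supported target page size is my assumption, not
something stated in the patch):

    /* Sender: OR the flags into the page-aligned offset. */
    qemu_put_be64(f, offset | RAM_SAVE_FLAG_PAGE);

    /* Receiver: split address and flags back apart. */
    uint64_t addr = qemu_get_be64(f);
    int flags = addr & ~TARGET_PAGE_MASK;  /* low bits carry the flags */
    addr &= TARGET_PAGE_MASK;              /* the rest is the page address */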
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 32fab7b5ee..3648cfc357 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -67,22 +67,26 @@
/***********************************************************/
/* ram save/restore */
-/* RAM_SAVE_FLAG_ZERO used to be named RAM_SAVE_FLAG_COMPRESS, it
+/*
+ * RAM_SAVE_FLAG_ZERO used to be named RAM_SAVE_FLAG_COMPRESS, it
* worked for pages that where filled with the same char. We switched
* it to only search for the zero value. And to avoid confusion with
* RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
*/
-
-#define RAM_SAVE_FLAG_FULL 0x01 /* Obsolete, not used anymore */
+/*
+ * RAM_SAVE_FLAG_FULL was obsoleted in 2009, it can be reused now
+ */
+#define RAM_SAVE_FLAG_FULL 0x01
#define RAM_SAVE_FLAG_ZERO 0x02
#define RAM_SAVE_FLAG_MEM_SIZE 0x04
#define RAM_SAVE_FLAG_PAGE 0x08
#define RAM_SAVE_FLAG_EOS 0x10
#define RAM_SAVE_FLAG_CONTINUE 0x20
#define RAM_SAVE_FLAG_XBZRLE 0x40
-/* 0x80 is reserved in migration.h start with 0x100 next */
+/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
#define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
#define RAM_SAVE_FLAG_MULTIFD_SYNC 0x200
+/* We can't use any flag that is bigger than 0x200 */
int (*xbzrle_encode_buffer_func)(uint8_t *, uint8_t *, int,
uint8_t *, int) = xbzrle_encode_buffer;
--
2.39.1
* Re: [PATCH v4 1/4] multifd: Create property multifd-sync-after-each-section
From: Markus Armbruster @ 2023-02-10 6:28 UTC
To: Juan Quintela
Cc: qemu-devel, Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
Marcel Apfelbaum, Eric Blake, Philippe Mathieu-Daudé
Juan Quintela <quintela@redhat.com> writes:
> We used to synchronize all channels at the end of each RAM section
> sent. That is not needed, so this prepares for synchronizing only once
> per full round of RAM in later patches.
>
> Notice that we initialize the property as true. We will change the
> default when we introduce the new mechanism.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> ---
>
> Rename each-iteration to after-each-section
>
> ---
[...]
> diff --git a/migration/migration.c b/migration/migration.c
> index a5c22e327d..b2844d374f 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -167,7 +167,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
> MIGRATION_CAPABILITY_XBZRLE,
> MIGRATION_CAPABILITY_X_COLO,
> MIGRATION_CAPABILITY_VALIDATE_UUID,
> - MIGRATION_CAPABILITY_ZERO_COPY_SEND);
> + MIGRATION_CAPABILITY_ZERO_COPY_SEND,
> + MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION);
>
> /* When we add fault tolerance, we could have several
> migrations at once. For now we don't need to add
> @@ -2705,6 +2706,15 @@ bool migrate_use_multifd(void)
> return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD];
> }
>
> +bool migrate_multifd_sync_after_each_section(void)
> +{
> + MigrationState *s = migrate_get_current();
> +
> + return true;
> + // We will change this when code gets in.
It seems the time for that would be right now, doesn't it? ;)
> + return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
> +}
> +
> bool migrate_pause_before_switchover(void)
> {
> MigrationState *s;
> @@ -4539,7 +4549,8 @@ static Property migration_properties[] = {
> DEFINE_PROP_MIG_CAP("x-zero-copy-send",
> MIGRATION_CAPABILITY_ZERO_COPY_SEND),
> #endif
> -
> + DEFINE_PROP_MIG_CAP("multifd-sync-after-each-section",
> + MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION),
> DEFINE_PROP_END_OF_LIST(),
> };
* Re: [PATCH v4 3/4] multifd: Only sync once each full round of memory
From: Markus Armbruster @ 2023-02-10 6:29 UTC
To: Juan Quintela
Cc: qemu-devel, Eduardo Habkost, Yanan Wang, Dr. David Alan Gilbert,
Marcel Apfelbaum, Eric Blake, Philippe Mathieu-Daudé
Juan Quintela <quintela@redhat.com> writes:
> We need to add a new flag to mark the point in the stream where we
> want to sync.
> Notice that we still synchronize at the end of the setup and complete
> stages.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> ---
>
> Add missing qemu_fflush(); now it passes all tests consistently.
> ---
[...]
> diff --git a/migration/migration.c b/migration/migration.c
> index b2844d374f..9eb061319d 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2710,8 +2710,6 @@ bool migrate_multifd_sync_after_each_section(void)
> {
> MigrationState *s = migrate_get_current();
>
> - return true;
> - // We will change this when code gets in.
> return s->enabled_capabilities[MIGRATION_CAPABILITY_MULTIFD_SYNC_AFTER_EACH_SECTION];
> }
Ah, nevermind.
[...]
* Re: [PATCH v4 4/4] ram: Document migration ram flags
From: Eric Blake @ 2023-02-10 20:56 UTC
To: Juan Quintela
Cc: qemu-devel, Eduardo Habkost, Yanan Wang, Markus Armbruster,
Dr. David Alan Gilbert, Marcel Apfelbaum,
Philippe Mathieu-Daudé
On Fri, Feb 10, 2023 at 12:37:30AM +0100, Juan Quintela wrote:
> 0x80 is RAM_SAVE_FLAG_HOOK; it lives in qemu-file now.
> The biggest usable flag is 0x200; note that in a comment.
> We can reuse RAM_SAVe_FLAG_FULL.
SAVE
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/ram.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 32fab7b5ee..3648cfc357 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -67,22 +67,26 @@
> /***********************************************************/
> /* ram save/restore */
>
> -/* RAM_SAVE_FLAG_ZERO used to be named RAM_SAVE_FLAG_COMPRESS, it
> +/*
> + * RAM_SAVE_FLAG_ZERO used to be named RAM_SAVE_FLAG_COMPRESS, it
> * worked for pages that where filled with the same char. We switched
As long as you're in the area,
s/where/were/
> * it to only search for the zero value. And to avoid confusion with
> * RAM_SSAVE_FLAG_COMPRESS_PAGE just rename it.
s/SSAVE/SAVE/
> */
> -
> -#define RAM_SAVE_FLAG_FULL 0x01 /* Obsolete, not used anymore */
> +/*
> + * RAM_SAVE_FLAG_FULL was obsoleted in 2009, it can be reused now
> + */
> +#define RAM_SAVE_FLAG_FULL 0x01
> #define RAM_SAVE_FLAG_ZERO 0x02
> #define RAM_SAVE_FLAG_MEM_SIZE 0x04
> #define RAM_SAVE_FLAG_PAGE 0x08
> #define RAM_SAVE_FLAG_EOS 0x10
> #define RAM_SAVE_FLAG_CONTINUE 0x20
> #define RAM_SAVE_FLAG_XBZRLE 0x40
> -/* 0x80 is reserved in migration.h start with 0x100 next */
> +/* 0x80 is reserved in qemu-file.h for RAM_SAVE_FLAG_HOOK */
> #define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
> #define RAM_SAVE_FLAG_MULTIFD_SYNC 0x200
> +/* We can't use any flag that is bigger than 0x200 */
Spelling fixes are trivial; feel free to add:
Reviewed-by: Eric Blake <eblake@redhat.com>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org