* [PULL 00/23] Next patches
@ 2026-05-05 20:26 Peter Xu
From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw)
To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu
The following changes since commit ac0cc20ad2fe0b8df2e5d9458e90a095ac711ab1:
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2026-05-01 14:41:49 -0400)
are available in the Git repository at:
https://gitlab.com/peterx/qemu.git tags/next-pull-request
for you to fetch changes up to 348c67cb16ecb479994b0c957699f906bd40a6c2:
tests/qtest/migration: Fix A-B file build (2026-05-05 12:35:25 -0400)
----------------------------------------------------------------
Migration and mem pull request
- Fabiano's fix on migrate_set_parameter crash with multifd & zerocopy
- Pranav's fix for postcopy getting stuck in device state when the ack is lost
- Samuel's new migration parameter x-rdma-chunk-size for RDMA
- PeterX's vfio/migration series to report remaining data and fix downtime calc
- PeterM's MemoryRegionOps .impl cleanup series
- Fabiano's fix to build a-b migration bootfiles for all archs
----------------------------------------------------------------
CJ Chen (2):
hw/riscv: iommu-trap: remove .impl.unaligned = true
system/memory: assert on invalid MemoryRegionOps .unaligned combo
Fabiano Rosas (2):
migration: Use QAPI_CLONE_MEMBERS in migrate_params_test_apply
tests/qtest/migration: Fix A-B file build
Peter Maydell (2):
hw/npcm7xx_fiu: Specify .impl for npcm7xx_fiu_flash_ops
hw/xtensa/mx_pic: Specify xtensa_mx_pic_ops .impl settings
Peter Xu (15):
migration: Fix low possibility downtime violation
migration/qapi: Rename MigrationStats to MigrationRAMStats
vfio/migration: Cache stop size in VFIOMigration
migration/treewide: Merge @state_pending_{exact|estimate} APIs
migration: Use the new save_query_pending() API directly
migration: Introduce stopcopy_bytes in save_query_pending()
vfio/migration: Fix incorrect reporting for VFIO pending data
migration: Move iteration counter out of RAM
migration: Introduce a helper to return switchover bw estimate
migration: Calculate expected downtime on demand
migration: Fix calculation of expected_downtime to take VFIO info
migration: Remember total dirty bytes in mig_stats
migration/qapi: Introduce system-wide "remaining" reports
migration/qapi: Update unit for avail-switchover-bandwidth
vfio/migration: Add tracepoints for precopy/stopcopy query ioctls
Pranav Tyagi (1):
migration: Fix blocking in POSTCOPY_DEVICE during package load
Samuel Zhang (1):
migration/rdma: add x-rdma-chunk-size parameter
docs/about/removed-features.rst | 2 +-
docs/devel/migration/main.rst | 9 +-
docs/devel/migration/vfio.rst | 9 +-
qapi/migration.json | 45 +-
hw/vfio/vfio-migration-internal.h | 8 +
include/migration/register.h | 59 +-
migration/migration-stats.h | 20 +-
migration/migration.h | 3 +-
migration/options.h | 1 +
migration/savevm.h | 7 +-
tests/qtest/migration/aarch64/a-b-kernel.h | 7 +-
tests/qtest/migration/bootfile.h | 11 +
tests/qtest/migration/i386/a-b-bootblock.h | 8 +-
tests/qtest/migration/ppc64/a-b-kernel.h | 8 +-
tests/qtest/migration/s390x/a-b-bios.h | 611 +++++++++++++--------
hw/riscv/riscv-iommu.c | 1 -
hw/s390x/s390-stattrib.c | 9 +-
hw/ssi/npcm7xx_fiu.c | 5 +
hw/vfio/migration.c | 125 +++--
hw/xtensa/mx_pic.c | 27 +
migration/block-dirty-bitmap.c | 10 +-
migration/migration-hmp-cmds.c | 16 +
migration/migration.c | 225 +++++---
migration/options.c | 59 +-
migration/ram.c | 40 +-
migration/rdma.c | 30 +-
migration/savevm.c | 42 +-
system/memory.c | 1 +
tests/qtest/migration/s390x/a-b-bios.c | 66 ++-
hw/vfio/trace-events | 5 +-
migration/trace-events | 3 +-
tests/qtest/migration/Makefile | 11 +-
tests/qtest/migration/aarch64/Makefile | 4 +-
tests/qtest/migration/aarch64/a-b-kernel.S | 2 +-
tests/qtest/migration/i386/Makefile | 4 +-
tests/qtest/migration/i386/a-b-bootblock.S | 2 +-
tests/qtest/migration/ppc64/Makefile | 6 +-
tests/qtest/migration/ppc64/a-b-kernel.S | 2 +-
tests/qtest/migration/s390x/Makefile | 8 +-
39 files changed, 957 insertions(+), 554 deletions(-)
--
2.53.0
^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PULL 01/23] migration: Fix blocking in POSTCOPY_DEVICE during package load
From: Peter Xu @ 2026-05-05 20:26 UTC
To: qemu-devel
Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Pranav Tyagi, Juraj Marcin

From: Pranav Tyagi <prtyagi@redhat.com>

The package_loaded event is not set if MIG_RP_MSG_PONG does not arrive
on the source from the destination in the return path thread.  The
migration thread would then be blocked waiting for the package_loaded
event indefinitely in the POSTCOPY_DEVICE state.  Whereas in such a
condition, the source VM can safely resume, as the destination has not
yet started.  The pong message can get lost in case of a network
failure or a destination crash before sending the pong.

This patch removes the package_loaded event and uses rp_sem instead of
kicking multiple events.  The error is detected in case of a network
failure or destination crash, and rp_sem is set in the out path of the
return path thread.  This kicks the migration thread out of its
indefinite wait on rp_sem.  The migration thread then fails early and
breaks from the migration loop to resume the VM on the source side.

Fixes: 7b842fe354c6 ("migration: Introduce POSTCOPY_DEVICE state")
Signed-off-by: Pranav Tyagi <prtyagi@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20260423094438.43556-1-prtyagi@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.h |  1 -
 migration/migration.c | 48 ++++++++++++++++++++++++++++---------------
 2 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/migration/migration.h b/migration/migration.h
index b6888daced..9081e6a612 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -512,7 +512,6 @@ struct MigrationState {
     bool rdma_migration;
 
     bool postcopy_package_loaded;
-    QemuEvent postcopy_package_loaded_event;
 
     GSource *hup_source;
diff --git a/migration/migration.c b/migration/migration.c
index 5c9aaa6e58..6e4988a590 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1661,7 +1661,6 @@ int migrate_init(MigrationState *s, Error **errp)
     migration_reset_vfio_bytes_transferred();
 
     s->postcopy_package_loaded = false;
-    qemu_event_reset(&s->postcopy_package_loaded_event);
 
     return 0;
 }
@@ -2317,7 +2316,7 @@ static void *source_return_path_thread(void *opaque)
             if (tmp32 == QEMU_VM_PING_PACKAGED_LOADED) {
                 trace_source_return_path_thread_postcopy_package_loaded();
                 ms->postcopy_package_loaded = true;
-                qemu_event_set(&ms->postcopy_package_loaded_event);
+                migration_rp_kick(ms);
             }
             break;
@@ -2388,16 +2387,21 @@ out:
         trace_source_return_path_thread_bad_end();
     }
 
-    if (ms->state == MIGRATION_STATUS_POSTCOPY_RECOVER) {
+    if (ms->state == MIGRATION_STATUS_POSTCOPY_RECOVER ||
+        ms->state == MIGRATION_STATUS_POSTCOPY_DEVICE) {
         /*
-         * this will be extremely unlikely: that we got yet another network
-         * issue during recovering of the 1st network failure.. during this
-         * period the main migration thread can be waiting on rp_sem for
-         * this thread to sync with the other side.
+         * The migration thread can get stuck waiting for rp_sem if the
+         * return path fails to sync with the destination. This handles
+         * two specific cases:
          *
-         * When this happens, explicitly kick the migration thread out of
-         * RECOVER stage and back to PAUSED, so the admin can try
-         * everything again.
+         * POSTCOPY_RECOVER: A failure occurs during a recovery attempt.
+         * We kick the migration thread back to PAUSED so the admin can
+         * retry.
+         *
+         * POSTCOPY_DEVICE: The MIG_RP_MSG_PONG is lost due to a
+         * network failure or destination crash. We kick the migration
+         * thread out of its wait so it can fail the migration and safely
+         * resume the VM on the source.
          */
         migration_rp_kick(ms);
     }
@@ -3226,12 +3230,24 @@ static MigIterateState migration_iteration_run(MigrationState *s)
     if (s->state == MIGRATION_STATUS_POSTCOPY_DEVICE &&
         (s->postcopy_package_loaded || complete_ready)) {
         /*
-         * If package has been loaded, the event is set and we will
-         * immediatelly transition to POSTCOPY_ACTIVE. If we are ready for
-         * completion, we need to wait for destination to load the postcopy
-         * package before actually completing.
+         * We will immediately transition to POSTCOPY_ACTIVE.
+         * If we are ready for completion, we need to wait for
+         * destination to load the postcopy package before actually
+         * completing.
          */
-        qemu_event_wait(&s->postcopy_package_loaded_event);
+        while (!s->postcopy_package_loaded) {
+            if (migration_rp_wait(s)) {
+                /*
+                 * Error happened. Migration thread was stuck waiting in
+                 * POSTCOPY_DEVICE for rp_sem which was never set.
+                 */
+                migrate_set_state(&s->state,
+                                  MIGRATION_STATUS_POSTCOPY_DEVICE,
+                                  MIGRATION_STATUS_FAILING);
+                return MIG_ITERATE_BREAK;
+            }
+        }
+        /* Acknowledgement received from the destination */
         migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_DEVICE,
                           MIGRATION_STATUS_POSTCOPY_ACTIVE);
     }
@@ -3863,7 +3879,6 @@ static void migration_instance_finalize(Object *obj)
     qemu_sem_destroy(&ms->rp_state.rp_pong_acks);
     qemu_sem_destroy(&ms->postcopy_qemufile_src_sem);
     error_free(ms->error);
-    qemu_event_destroy(&ms->postcopy_package_loaded_event);
 }
 
 static void migration_instance_init(Object *obj)
@@ -3885,7 +3900,6 @@ static void migration_instance_init(Object *obj)
     qemu_sem_init(&ms->wait_unplug_sem, 0);
     qemu_sem_init(&ms->postcopy_qemufile_src_sem, 0);
     qemu_mutex_init(&ms->qemu_file_lock);
-    qemu_event_init(&ms->postcopy_package_loaded_event, 0);
 }
 
 /*
-- 
2.53.0
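For readers unfamiliar with the rp_sem pattern above, here is a single-threaded toy model of the new wait loop (all names — MigState, rp_wait(), rp_fail_and_kick() — are hypothetical stand-ins; the real code blocks on a QemuSemaphore across two threads):

```c
#include <assert.h>
#include <stdbool.h>

/* rp_sem is modeled as a counter of pending kicks; rp_wait() returns
 * non-zero when the return-path thread recorded an error before kicking. */
typedef enum { DEVICE, ACTIVE, FAILING } Status;

typedef struct {
    int rp_kicks;          /* pending posts on the modeled rp_sem */
    bool rp_error;         /* set by the return path on failure */
    bool package_loaded;   /* set when QEMU_VM_PING_PACKAGED_LOADED arrives */
    Status status;
} MigState;

/* Return-path side: on a lost PONG, record the error and kick the
 * migration thread out of its wait, as the patch does in the out: path. */
static void rp_fail_and_kick(MigState *s)
{
    s->rp_error = true;
    s->rp_kicks++;
}

static int rp_wait(MigState *s)
{
    assert(s->rp_kicks > 0);   /* the real code would block here */
    s->rp_kicks--;
    return s->rp_error ? -1 : 0;
}

/* Migration-thread side: the replacement for qemu_event_wait(). */
static Status wait_for_package(MigState *s)
{
    while (!s->package_loaded) {
        if (rp_wait(s)) {
            return s->status = FAILING;   /* bail out, resume source VM */
        }
    }
    return s->status = ACTIVE;
}
```

The point of the sketch: a dedicated event can only ever be *set*, so a lost PONG leaves the waiter stuck forever, while a semaphore kicked from the return path's error exit gives the waiter a second wake-up reason.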
* [PULL 02/23] migration: Use QAPI_CLONE_MEMBERS in migrate_params_test_apply
From: Peter Xu @ 2026-05-05 20:26 UTC
To: qemu-devel
Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, qemu-stable,
	Maciej S. Szmigiero, Maciej S. Szmigiero

From: Fabiano Rosas <farosas@suse.de>

Use QAPI_CLONE_MEMBERS instead of making an assignment. The QAPI
method makes the handling of the TLS strings more intuitive because it
clones them as well.

This also fixes a segfault when a NULL TLS option is accessed as part
of a validation check for another option (e.g. in the zero-copy +
multifd compression case). Details follow:

Currently, after copying s->parameters to the temporary
MigrationParameters object before migrate_params_check(), the
references in the temporary object to the TLS options are dropped,
either because:

a) the user set a new option, in which case that's fine as
   s->parameters still holds the reference to the old memory or,

b) the user did not set a new option, in which case keeping the
   references in the temporary object would later cause them to be
   freed along with it, leading to double-free when s->parameters is
   also freed later on.

In this second case, it was overlooked that the TLS options can be
accessed already during migrate_params_check() as part of validation
of another option. Those pointers should not have been cleared.

Using QAPI_CLONE_MEMBERS fixes the issue because the temporary object
is not stealing a reference from s->parameters anymore.

Cc: qemu-stable <qemu-stable@nongnu.org>
Fixes: aed97f0563 ("migration: Normalize tls arguments")
Reported-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Link: https://lore.kernel.org/r/a65a1049-9f19-460a-8e27-a62bb30d2727@maciej.szmigiero.name
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Tested-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Link: https://lore.kernel.org/r/20260414223718.23965-1-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/options.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/migration/options.c b/migration/options.c
index 7556fbc06b..68441f0276 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -1279,9 +1279,9 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
 static void migrate_params_test_apply(MigrationParameters *params,
                                       MigrationParameters *dest)
 {
-    *dest = migrate_get_current()->parameters;
+    MigrationState *s = migrate_get_current();
 
-    /* TODO use QAPI_CLONE() instead of duplicating it inline */
+    QAPI_CLONE_MEMBERS(MigrationParameters, dest, &s->parameters);
 
     if (params->has_throttle_trigger_threshold) {
         dest->throttle_trigger_threshold = params->throttle_trigger_threshold;
@@ -1300,24 +1300,18 @@ static void migrate_params_test_apply(MigrationParameters *params,
     }
 
     if (params->tls_creds) {
+        qapi_free_StrOrNull(dest->tls_creds);
         dest->tls_creds = QAPI_CLONE(StrOrNull, params->tls_creds);
-    } else {
-        /* clear the reference, it's owned by s->parameters */
-        dest->tls_creds = NULL;
     }
 
     if (params->tls_hostname) {
+        qapi_free_StrOrNull(dest->tls_hostname);
         dest->tls_hostname = QAPI_CLONE(StrOrNull, params->tls_hostname);
-    } else {
-        /* clear the reference, it's owned by s->parameters */
-        dest->tls_hostname = NULL;
     }
 
     if (params->tls_authz) {
+        qapi_free_StrOrNull(dest->tls_authz);
         dest->tls_authz = QAPI_CLONE(StrOrNull, params->tls_authz);
-    } else {
-        /* clear the reference, it's owned by s->parameters */
-        dest->tls_authz = NULL;
     }
 
     if (params->has_max_bandwidth) {
@@ -1374,8 +1368,9 @@ static void migrate_params_test_apply(MigrationParameters *params,
     }
 
     if (params->has_block_bitmap_mapping) {
-        dest->has_block_bitmap_mapping = true;
-        dest->block_bitmap_mapping = params->block_bitmap_mapping;
+        qapi_free_BitmapMigrationNodeAliasList(dest->block_bitmap_mapping);
+        dest->block_bitmap_mapping = QAPI_CLONE(BitmapMigrationNodeAliasList,
+                                                params->block_bitmap_mapping);
     }
 
     if (params->has_x_vcpu_dirty_limit_period) {
@@ -1399,7 +1394,8 @@ static void migrate_params_test_apply(MigrationParameters *params,
     }
 
     if (params->has_cpr_exec_command) {
-        dest->cpr_exec_command = params->cpr_exec_command;
+        qapi_free_strList(dest->cpr_exec_command);
+        dest->cpr_exec_command = QAPI_CLONE(strList, params->cpr_exec_command);
     }
 }
 
@@ -1555,4 +1551,6 @@ void qmp_migrate_set_parameters(MigrationParameters *params, Error **errp)
     }
 
     migrate_tls_opts_free(&tmp);
+    qapi_free_BitmapMigrationNodeAliasList(tmp.block_bitmap_mapping);
+    qapi_free_strList(tmp.cpr_exec_command);
 }
-- 
2.53.0
* [PULL 03/23] migration/rdma: add x-rdma-chunk-size parameter
From: Peter Xu @ 2026-05-05 20:26 UTC
To: qemu-devel
Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Samuel Zhang,
	Markus Armbruster, Li Zhijian

From: Samuel Zhang <guoqing.zhang@amd.com>

The default 1MB RDMA chunk size causes slow live migration because
each chunk triggers a write_flush (ibv_post_send). For 8GB RAM, 1MB
chunk size produces ~15000 flushes vs ~3700 with 1024MB chunk size.

Add x-rdma-chunk-size parameter to configure the RDMA chunk size for
faster migration.

Usage: `migrate_set_parameter x-rdma-chunk-size 1024M`

Performance with RDMA live migration of 8GB RAM VM:

| x-rdma-chunk-size (B) | time (s) | throughput (MB/s) |
|-----------------------|----------|-------------------|
| 1M (default)          |   37.915 |             1,007 |
| 32M                   |   17.880 |             2,260 |
| 1024M                 |    4.368 |            17,529 |

Signed-off-by: Samuel Zhang <guoqing.zhang@amd.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Li Zhijian <lizhijian@fujitsu.com>
Tested-by: Li Zhijian <lizhijian@fujitsu.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Acked-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20260427031401.3895523-1-guoqing.zhang@amd.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 qapi/migration.json            | 13 +++++++++++--
 migration/options.h            |  1 +
 migration/migration-hmp-cmds.c | 11 +++++++++++
 migration/options.c            | 33 ++++++++++++++++++++++++++++++++-
 migration/rdma.c               | 30 ++++++++++++++++--------------
 5 files changed, 71 insertions(+), 17 deletions(-)

diff --git a/qapi/migration.json b/qapi/migration.json
index 7134d4ce47..0db115ec5e 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -806,7 +806,7 @@
 #
 # Features:
 #
-# @unstable: Members @x-checkpoint-delay and
+# @unstable: Members @x-checkpoint-delay, @x-rdma-chunk-size, and
 #     @x-vcpu-dirty-limit-period are experimental.
 #
 # Since: 2.4
@@ -831,6 +831,7 @@
            'mode',
            'zero-page-detection',
            'direct-io',
+           { 'name': 'x-rdma-chunk-size', 'features': [ 'unstable' ] },
            'cpr-exec-command'] }
 
 ##
@@ -1007,9 +1008,15 @@
 #     is @cpr-exec.  The first list element is the program's filename,
 #     the remainder its arguments.  (Since 10.2)
 #
+# @x-rdma-chunk-size: RDMA memory registration chunk size in bytes.
+#     Default is 1MiB.  Must be a power of 2 in the range
+#     [1MiB, 1024MiB].  Only applies when migrating via RDMA.
+#     Must be set to the same value on both source and destination
+#     before migration starts.  (Since 11.1)
+#
 # Features:
 #
-# @unstable: Members @x-checkpoint-delay and
+# @unstable: Members @x-checkpoint-delay, @x-rdma-chunk-size, and
 #     @x-vcpu-dirty-limit-period are experimental.
 #
 # Since: 2.4
@@ -1046,6 +1053,8 @@
             '*mode': 'MigMode',
             '*zero-page-detection': 'ZeroPageDetection',
             '*direct-io': 'bool',
+            '*x-rdma-chunk-size': { 'type': 'uint64',
+                                    'features': [ 'unstable' ] },
             '*cpr-exec-command': [ 'str' ]} }
 
 ##
diff --git a/migration/options.h b/migration/options.h
index b502871097..b46221998a 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -87,6 +87,7 @@ const char *migrate_tls_creds(void);
 const char *migrate_tls_hostname(void);
 uint64_t migrate_xbzrle_cache_size(void);
 ZeroPageDetection migrate_zero_page_detection(void);
+uint64_t migrate_rdma_chunk_size(void);
 
 /* parameters helpers */
 
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 0a193b8f54..4f6c1dbf89 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -451,6 +451,13 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
                        params->direct_io ? "on" : "off");
     }
 
+    if (params->has_x_rdma_chunk_size) {
+        monitor_printf(mon, "%s: %" PRIu64 " bytes\n",
+                       MigrationParameter_str(
+                           MIGRATION_PARAMETER_X_RDMA_CHUNK_SIZE),
+                       params->x_rdma_chunk_size);
+    }
+
     assert(params->has_cpr_exec_command);
     monitor_print_cpr_exec_command(mon, params->cpr_exec_command);
 }
@@ -734,6 +741,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_direct_io = true;
         visit_type_bool(v, param, &p->direct_io, &err);
         break;
+    case MIGRATION_PARAMETER_X_RDMA_CHUNK_SIZE:
+        p->has_x_rdma_chunk_size = true;
+        visit_type_size(v, param, &p->x_rdma_chunk_size, &err);
+        break;
     case MIGRATION_PARAMETER_CPR_EXEC_COMMAND:
     {
         /*
          * NOTE: g_autofree will only auto g_free() the strv array when
diff --git a/migration/options.c b/migration/options.c
index 68441f0276..5cbfd29099 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -13,6 +13,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/error-report.h"
+#include "qemu/units.h"
 #include "exec/target_page.h"
 #include "qapi/clone-visitor.h"
 #include "qapi/error.h"
@@ -90,6 +91,7 @@ const PropertyInfo qdev_prop_StrOrNull;
 
 #define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT_PERIOD     1000    /* milliseconds */
 #define DEFAULT_MIGRATE_VCPU_DIRTY_LIMIT            1       /* MB/s */
+#define DEFAULT_MIGRATE_X_RDMA_CHUNK_SIZE           MiB
 
 const Property migration_properties[] = {
     DEFINE_PROP_BOOL("store-global-state", MigrationState,
@@ -183,6 +185,9 @@ const Property migration_properties[] = {
     DEFINE_PROP_ZERO_PAGE_DETECTION("zero-page-detection", MigrationState,
                        parameters.zero_page_detection,
                        ZERO_PAGE_DETECTION_MULTIFD),
+    DEFINE_PROP_UINT64("x-rdma-chunk-size", MigrationState,
+                       parameters.x_rdma_chunk_size,
+                       DEFAULT_MIGRATE_X_RDMA_CHUNK_SIZE),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -1000,6 +1005,15 @@ ZeroPageDetection migrate_zero_page_detection(void)
     return s->parameters.zero_page_detection;
 }
 
+uint64_t migrate_rdma_chunk_size(void)
+{
+    MigrationState *s = migrate_get_current();
+    uint64_t size = s->parameters.x_rdma_chunk_size;
+
+    assert(MiB <= size && size <= GiB && is_power_of_2(size));
+    return size;
+}
+
 /* parameters helpers */
 
 AnnounceParameters *migrate_announce_params(void)
@@ -1062,7 +1076,7 @@ static void migrate_mark_all_params_present(MigrationParameters *p)
         &p->has_announce_step, &p->has_block_bitmap_mapping,
         &p->has_x_vcpu_dirty_limit_period, &p->has_vcpu_dirty_limit,
         &p->has_mode, &p->has_zero_page_detection, &p->has_direct_io,
-        &p->has_cpr_exec_command,
+        &p->has_x_rdma_chunk_size, &p->has_cpr_exec_command,
     };
 
     len = ARRAY_SIZE(has_fields);
@@ -1273,6 +1287,15 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
         return false;
     }
 
+    if (params->has_x_rdma_chunk_size &&
+        (params->x_rdma_chunk_size < MiB ||
+         params->x_rdma_chunk_size > GiB ||
+         !is_power_of_2(params->x_rdma_chunk_size))) {
+        error_setg(errp, "Option x_rdma_chunk_size expects "
+                         "a power of 2 in the range 1MiB to 1024MiB");
+        return false;
+    }
+
     return true;
 }
@@ -1393,6 +1416,10 @@ static void migrate_params_test_apply(MigrationParameters *params,
         dest->direct_io = params->direct_io;
     }
 
+    if (params->has_x_rdma_chunk_size) {
+        dest->x_rdma_chunk_size = params->x_rdma_chunk_size;
+    }
+
     if (params->has_cpr_exec_command) {
         qapi_free_strList(dest->cpr_exec_command);
         dest->cpr_exec_command = QAPI_CLONE(strList, params->cpr_exec_command);
@@ -1520,6 +1547,10 @@ static void migrate_params_apply(MigrationParameters *params)
         s->parameters.direct_io = params->direct_io;
     }
 
+    if (params->has_x_rdma_chunk_size) {
+        s->parameters.x_rdma_chunk_size = params->x_rdma_chunk_size;
+    }
+
     if (params->has_cpr_exec_command) {
         qapi_free_strList(s->parameters.cpr_exec_command);
         s->parameters.cpr_exec_command =
diff --git a/migration/rdma.c b/migration/rdma.c
index 55ab85650a..3e37a1d440 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -45,10 +45,12 @@
 #define RDMA_RESOLVE_TIMEOUT_MS 10000
 
 /* Do not merge data if larger than this. */
-#define RDMA_MERGE_MAX (2 * 1024 * 1024)
-#define RDMA_SIGNALED_SEND_MAX (RDMA_MERGE_MAX / 4096)
+static inline uint64_t rdma_merge_max(void)
+{
+    return migrate_rdma_chunk_size() * 2;
+}
 
-#define RDMA_REG_CHUNK_SHIFT 20 /* 1 MB */
+#define RDMA_SIGNALED_SEND_MAX 512
 
 /*
  * This is only for non-live state being migrated.
@@ -527,21 +529,21 @@ static int qemu_rdma_exchange_send(RDMAContext *rdma, RDMAControlHeader *head,
 static inline uint64_t ram_chunk_index(const uint8_t *start,
                                        const uint8_t *host)
 {
-    return ((uintptr_t) host - (uintptr_t) start) >> RDMA_REG_CHUNK_SHIFT;
+    return ((uintptr_t) host - (uintptr_t) start) / migrate_rdma_chunk_size();
 }
 
 static inline uint8_t *ram_chunk_start(const RDMALocalBlock *rdma_ram_block,
                                        uint64_t i)
 {
     return (uint8_t *)(uintptr_t)(rdma_ram_block->local_host_addr +
-                                  (i << RDMA_REG_CHUNK_SHIFT));
+                                  (i * migrate_rdma_chunk_size()));
 }
 
 static inline uint8_t *ram_chunk_end(const RDMALocalBlock *rdma_ram_block,
                                      uint64_t i)
 {
     uint8_t *result = ram_chunk_start(rdma_ram_block, i) +
-                      (1UL << RDMA_REG_CHUNK_SHIFT);
+                      migrate_rdma_chunk_size();
 
     if (result > (rdma_ram_block->local_host_addr + rdma_ram_block->length)) {
         result = rdma_ram_block->local_host_addr + rdma_ram_block->length;
@@ -1841,6 +1843,7 @@ static int qemu_rdma_write_one(RDMAContext *rdma,
     struct ibv_send_wr *bad_wr;
     int reg_result_idx, ret, count = 0;
     uint64_t chunk, chunks;
+    uint64_t chunk_size = migrate_rdma_chunk_size();
     uint8_t *chunk_start, *chunk_end;
     RDMALocalBlock *block = &(rdma->local_ram_blocks.block[current_index]);
     RDMARegister reg;
@@ -1861,22 +1864,21 @@ retry:
     chunk_start = ram_chunk_start(block, chunk);
 
     if (block->is_ram_block) {
-        chunks = length / (1UL << RDMA_REG_CHUNK_SHIFT);
+        chunks = length / chunk_size;
 
-        if (chunks && ((length % (1UL << RDMA_REG_CHUNK_SHIFT)) == 0)) {
+        if (chunks && ((length % chunk_size) == 0)) {
             chunks--;
         }
     } else {
-        chunks = block->length / (1UL << RDMA_REG_CHUNK_SHIFT);
+        chunks = block->length / chunk_size;
 
-        if (chunks && ((block->length % (1UL << RDMA_REG_CHUNK_SHIFT)) == 0)) {
+        if (chunks && ((block->length % chunk_size) == 0)) {
             chunks--;
         }
     }
 
     trace_qemu_rdma_write_one_top(chunks + 1,
-                                  (chunks + 1) *
-                                  (1UL << RDMA_REG_CHUNK_SHIFT) / 1024 / 1024);
+                                  (chunks + 1) * chunk_size / 1024 / 1024);
 
     chunk_end = ram_chunk_end(block, chunk + chunks);
 
@@ -2176,7 +2178,7 @@ static int qemu_rdma_write(RDMAContext *rdma,
     rdma->current_length += len;
 
     /* flush it if buffer is too large */
-    if (rdma->current_length >= RDMA_MERGE_MAX) {
+    if (rdma->current_length >= rdma_merge_max()) {
         return qemu_rdma_write_flush(rdma, errp);
     }
 
@@ -3522,7 +3524,7 @@ int rdma_registration_handle(QEMUFile *f)
                 } else {
                     chunk = reg->key.chunk;
                     host_addr = block->local_host_addr +
-                        (reg->key.chunk * (1UL << RDMA_REG_CHUNK_SHIFT));
+                        (reg->key.chunk * migrate_rdma_chunk_size());
                 /* Check for particularly bad chunk value */
                 if (host_addr < (void *)block->local_host_addr) {
                     error_report("rdma: bad chunk for block %s"
-- 
2.53.0
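The chunk arithmetic the patch changes reduces to replacing a fixed shift with a runtime size. A standalone sketch (the helper names are hypothetical, not QEMU's; the flush model assumes one write_flush per rdma_merge_max() == 2 * chunk_size bytes of contiguous merged data, ignoring non-contiguous writes, so it will not reproduce the exact counts from the commit message):

```c
#include <assert.h>
#include <stdint.h>

/* Chunk index for a byte offset within a RAM block, with the chunk size
 * as a runtime parameter instead of the fixed RDMA_REG_CHUNK_SHIFT. */
static uint64_t chunk_index(uint64_t chunk_size, uint64_t offset)
{
    return offset / chunk_size;
}

/* Simplified flush count for a region: one flush per merge_max bytes,
 * where merge_max = 2 * chunk_size as in rdma_merge_max(). */
static uint64_t flushes_for(uint64_t region_bytes, uint64_t chunk_size)
{
    uint64_t merge_max = 2 * chunk_size;
    return (region_bytes + merge_max - 1) / merge_max;   /* ceiling div */
}
```

Even in this simplified model, growing the chunk size from 1MiB to 1GiB cuts the flush count for an 8GiB region by three orders of magnitude, which is the effect the performance table above demonstrates.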
* [PULL 04/23] migration: Fix low possibility downtime violation
From: Peter Xu @ 2026-05-05 20:26 UTC
To: qemu-devel
Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, qemu-stable, Juraj Marcin

When QEMU queries the estimated version of pending data and thinks
it's ready to converge, it'll send another accurate query to make sure
of it.  It is needed to make sure we collect the latest reports and
that the equation still holds true.

However we missed one tiny difference here on "<" vs. "<=" when
comparing pending_size (A) to threshold_size (B): the source only
re-queries if A<B, but will kick off switchover if A<=B.

It means that if A (as an estimate only so far) accidentally equals B,
the re-query won't happen and switchover will proceed without
considering newly dirtied data.

It turns out it was an accident in my commit 7aaa1fc072 when
refactoring the code around.  Fix this by using the same comparison in
both places.

Fixes: 7aaa1fc072 ("migration: Rewrite the migration complete detect logic")
Cc: qemu-stable@nongnu.org
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20260421202110.306051-3-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 migration/migration.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 6e4988a590..5f4efb1fe5 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3258,7 +3258,7 @@ static MigIterateState migration_iteration_run(MigrationState *s)
      * postcopy started, so ESTIMATE should always match with EXACT
      * during postcopy phase.
      */
-    if (pending_size < s->threshold_size) {
+    if (pending_size <= s->threshold_size) {
         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
         pending_size = must_precopy + can_postcopy;
         trace_migrate_pending_exact(pending_size, must_precopy,
-- 
2.53.0
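The off-by-one-comparison window is easy to model with a toy predicate (not QEMU code): with the old strict `<`, an estimate exactly equal to the threshold skipped the accurate re-query yet still passed the `<=` switchover check, so switchover could start based on a stale estimate.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Returns true iff switchover would start *without* the accurate re-query
 * having run first. "fixed" selects the comparison used in the re-query
 * test: "<=" after this patch, "<" before it. */
static bool switchover_on_stale_estimate(uint64_t estimate, uint64_t threshold,
                                         bool fixed)
{
    bool requery = fixed ? (estimate <= threshold) : (estimate < threshold);
    return (estimate <= threshold) && !requery;
}
```

With the fix, every estimate that can trigger switchover also triggers the accurate query, closing the window at estimate == threshold.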
* [PULL 05/23] migration/qapi: Rename MigrationStats to MigrationRAMStats
From: Peter Xu @ 2026-05-05 20:26 UTC
To: qemu-devel
Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Daniel P. Berrangé,
	devel, Markus Armbruster, Juraj Marcin, Michal Privoznik

These stats are only about RAM; make the name say so.  This paves the
way for statistics covering all devices.

Thanks to Markus, who pointed out that docs/devel/qapi-code-gen.rst
has a section "Compatibility considerations" which states:

  Since type names are not visible in the Client JSON Protocol, types
  may be freely renamed.  Even certain refactorings are invisible,
  such as splitting members from one type into a common base type.

Hence this change is not an ABI violation according to the document.

While at it, touch up the lines to read better, and correct the
restriction on migration status being 'active' or 'completed': over
time we grew too many new statuses that also report the "ram" section.

Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: devel@lists.libvirt.org
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Link: https://lore.kernel.org/r/20260421202110.306051-4-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 docs/about/removed-features.rst |  2 +-
 qapi/migration.json             | 10 +++++-----
 migration/migration-stats.h     |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/about/removed-features.rst b/docs/about/removed-features.rst
index e75db08410..626162022a 100644
--- a/docs/about/removed-features.rst
+++ b/docs/about/removed-features.rst
@@ -699,7 +699,7 @@ was superseded by ``sections``.
 ``query-migrate`` return value member ``skipped`` (removed in 9.1)
 ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
 
-Member ``skipped`` of the ``MigrationStats`` struct hasn't been used
+Member ``skipped`` of the ``MigrationRAMStats`` struct hasn't been used
 for more than 10 years.  Removed with no replacement.
 
 ``migrate`` command option ``inc`` (removed in 9.1)
diff --git a/qapi/migration.json b/qapi/migration.json
index 0db115ec5e..ed475e4261 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -12,7 +12,7 @@
 { 'include': 'sockets.json' }
 
 ##
-# @MigrationStats:
+# @MigrationRAMStats:
 #
 # Detailed migration status.
 #
@@ -64,7 +64,7 @@
 #
 # Since: 0.14
 ##
-{ 'struct': 'MigrationStats',
+{ 'struct': 'MigrationRAMStats',
   'data': {'transferred': 'int', 'remaining': 'int', 'total': 'int' ,
            'duplicate': 'int',
            'normal': 'int',
@@ -209,8 +209,8 @@
 #     If this field is not returned, no migration process has been
 #     initiated
 #
-# @ram: `MigrationStats` containing detailed migration status, only
-#     returned if status is 'active' or 'completed'(since 1.2)
+# @ram: Detailed migration RAM statistics, only returned if migration
+#     is in progress or completed (since 1.2)
 #
 # @xbzrle-cache: `XBZRLECacheStats` containing detailed XBZRLE
 #     migration statistics, only returned if XBZRLE feature is on and
@@ -309,7 +309,7 @@
 # Since: 0.14
 ##
 { 'struct': 'MigrationInfo',
-  'data': {'*status': 'MigrationStatus', '*ram': 'MigrationStats',
+  'data': {'*status': 'MigrationStatus', '*ram': 'MigrationRAMStats',
            '*vfio': 'VfioStats',
            '*xbzrle-cache': 'XBZRLECacheStats',
            '*total-time': 'int',
diff --git a/migration/migration-stats.h b/migration/migration-stats.h
index c0f50144c9..1153520f7a 100644
--- a/migration/migration-stats.h
+++ b/migration/migration-stats.h
@@ -27,7 +27,7 @@
 
 /*
  * These are the ram migration statistic counters.  It is loosely
- * based on MigrationStats.
+ * based on MigrationRAMStats.
  */
 typedef struct {
     /*
-- 
2.53.0
* [PULL 06/23] vfio/migration: Cache stop size in VFIOMigration 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (4 preceding siblings ...) 2026-05-05 20:26 ` [PULL 05/23] migration/qapi: Rename MigrationStats to MigrationRAMStats Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 07/23] migration/treewide: Merge @state_pending_{exact|estimate} APIs Peter Xu ` (17 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Avihai Horon Add a field to cache the stop size. Note that there's an initial value change in vfio_save_setup for the stop size default, but it shouldn't matter since it is immediately followed by a MIN() against VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE. Document that all three sizes we read from VFIO's uAPI (the dirty and stop sizes) are estimates, so QEMU must always remember they can be anything. Reviewed-by: Avihai Horon <avihaih@nvidia.com> Link: https://lore.kernel.org/r/20260421202110.306051-5-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- hw/vfio/vfio-migration-internal.h | 8 +++++ hw/vfio/migration.c | 50 ++++++++++++++++++------------- 2 files changed, 38 insertions(+), 20 deletions(-) diff --git a/hw/vfio/vfio-migration-internal.h b/hw/vfio/vfio-migration-internal.h index 814fbd9eba..a15fc74703 100644 --- a/hw/vfio/vfio-migration-internal.h +++ b/hw/vfio/vfio-migration-internal.h @@ -45,8 +45,16 @@ typedef struct VFIOMigration { void *data_buffer; size_t data_buffer_size; uint64_t mig_flags; + /* + * NOTE: all three sizes cached are reported from VFIO's uAPI, which + * are defined as estimate only. QEMU should not trust these values + * but only use them to do best-effort estimates. Always be prepared + * that these sizes may either grow or even shrink in reality while + * read()ing from the VFIO fds. 
+ */ uint64_t precopy_init_size; uint64_t precopy_dirty_size; + uint64_t stopcopy_size; bool multifd_transfer; VFIOMultifd *multifd; bool initial_data_sent; diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index 83327b6573..5d5fca09bd 100644 --- a/hw/vfio/migration.c +++ b/hw/vfio/migration.c @@ -41,6 +41,12 @@ */ #define VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE (1 * MiB) +/* + * Migration size of VFIO devices can be as little as a few KBs or as big as + * many GBs. This value should be big enough to cover the worst case. + */ +#define VFIO_MIG_STOP_COPY_SIZE (100 * GiB) + static unsigned long bytes_transferred; static const char *mig_state_to_str(enum vfio_device_mig_state state) @@ -314,8 +320,7 @@ static void vfio_migration_cleanup(VFIODevice *vbasedev) migration->data_fd = -1; } -static int vfio_query_stop_copy_size(VFIODevice *vbasedev, - uint64_t *stop_copy_size) +static int vfio_query_stop_copy_size(VFIODevice *vbasedev) { uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) + sizeof(struct vfio_device_feature_mig_data_size), @@ -323,16 +328,22 @@ static int vfio_query_stop_copy_size(VFIODevice *vbasedev, struct vfio_device_feature *feature = (struct vfio_device_feature *)buf; struct vfio_device_feature_mig_data_size *mig_data_size = (struct vfio_device_feature_mig_data_size *)feature->data; + VFIOMigration *migration = vbasedev->migration; feature->argsz = sizeof(buf); feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIG_DATA_SIZE; if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) { + /* + * If getting pending migration size fails, VFIO_MIG_STOP_COPY_SIZE + * is reported so downtime limit won't be violated. 
+ */ + migration->stopcopy_size = VFIO_MIG_STOP_COPY_SIZE; return -errno; } - *stop_copy_size = mig_data_size->stop_copy_length; + migration->stopcopy_size = mig_data_size->stop_copy_length; return 0; } @@ -409,6 +420,16 @@ static void vfio_update_estimated_pending_data(VFIOMigration *migration, return; } + /* + * The total size remaining requires separate accounting. Do not trust + * the counter, so what we have read() may be more than what reported. + */ + if (migration->stopcopy_size > data_size) { + migration->stopcopy_size -= data_size; + } else { + migration->stopcopy_size = 0; + } + if (migration->precopy_init_size) { uint64_t init_size = MIN(migration->precopy_init_size, data_size); @@ -463,7 +484,6 @@ static int vfio_save_setup(QEMUFile *f, void *opaque, Error **errp) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; - uint64_t stop_copy_size = VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE; int ret; if (!vfio_multifd_setup(vbasedev, false, errp)) { @@ -472,9 +492,9 @@ static int vfio_save_setup(QEMUFile *f, void *opaque, Error **errp) qemu_put_be64(f, VFIO_MIG_FLAG_DEV_SETUP_STATE); - vfio_query_stop_copy_size(vbasedev, &stop_copy_size); + vfio_query_stop_copy_size(vbasedev); migration->data_buffer_size = MIN(VFIO_MIG_DEFAULT_DATA_BUFFER_SIZE, - stop_copy_size); + migration->stopcopy_size); migration->data_buffer = g_try_malloc0(migration->data_buffer_size); if (!migration->data_buffer) { error_setg(errp, "%s: Failed to allocate migration data buffer", @@ -570,32 +590,22 @@ static void vfio_state_pending_estimate(void *opaque, uint64_t *must_precopy, migration->precopy_dirty_size); } -/* - * Migration size of VFIO devices can be as little as a few KBs or as big as - * many GBs. This value should be big enough to cover the worst case. 
- */ -#define VFIO_MIG_STOP_COPY_SIZE (100 * GiB) - static void vfio_state_pending_exact(void *opaque, uint64_t *must_precopy, uint64_t *can_postcopy) { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; - uint64_t stop_copy_size = VFIO_MIG_STOP_COPY_SIZE; - /* - * If getting pending migration size fails, VFIO_MIG_STOP_COPY_SIZE is - * reported so downtime limit won't be violated. - */ - vfio_query_stop_copy_size(vbasedev, &stop_copy_size); - *must_precopy += stop_copy_size; + vfio_query_stop_copy_size(vbasedev); + *must_precopy += migration->stopcopy_size; if (vfio_device_state_is_precopy(vbasedev)) { vfio_query_precopy_size(migration); } trace_vfio_state_pending_exact(vbasedev->name, *must_precopy, *can_postcopy, - stop_copy_size, migration->precopy_init_size, + migration->stopcopy_size, + migration->precopy_init_size, migration->precopy_dirty_size); } -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
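The two mechanics of this patch — clamping the data buffer with MIN() against the cached stop-copy size, and decrementing that cache with a saturating subtraction because the uAPI values are only estimates — can be sketched in isolation. This is a minimal stand-alone illustration, not QEMU code: the constant names mirror the ones in the patch but the helpers (`buffer_size_for`, `account_read`) are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the constants used in hw/vfio/migration.c: a 1 MiB
 * buffer cap and the 100 GiB worst-case fallback reported when the
 * VFIO_DEVICE_FEATURE_MIG_DATA_SIZE query fails. */
#define MIG_DEFAULT_DATA_BUFFER_SIZE (1ULL << 20)
#define MIG_STOP_COPY_SIZE           (100ULL << 30)

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Clamp the data buffer to the (estimated) stop-copy size, as
 * vfio_save_setup() does right after vfio_query_stop_copy_size(). */
static uint64_t buffer_size_for(uint64_t stopcopy_size)
{
    return MIN(MIG_DEFAULT_DATA_BUFFER_SIZE, stopcopy_size);
}

/* Saturating decrement: the cached size is only an estimate, so the
 * bytes actually read() from the device may exceed what the kernel
 * reported; never underflow to a huge bogus remainder. */
static uint64_t account_read(uint64_t stopcopy_size, uint64_t data_size)
{
    return stopcopy_size > data_size ? stopcopy_size - data_size : 0;
}
```

The saturating form is what makes the "don't trust the counter" note in the diff safe: overshooting the estimate leaves the cached remainder at zero rather than wrapping around.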
* [PULL 07/23] migration/treewide: Merge @state_pending_{exact|estimate} APIs 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (5 preceding siblings ...) 2026-05-05 20:26 ` [PULL 06/23] vfio/migration: Cache stop size in VFIOMigration Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 08/23] migration: Use the new save_query_pending() API directly Peter Xu ` (16 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Halil Pasic, Christian Borntraeger, Eric Farman, Matthew Rosato, Richard Henderson, Ilya Leoshkevich, David Hildenbrand, Cornelia Huck, Eric Blake, Vladimir Sementsov-Ogievskiy, John Snow, Jason J. Herne, Juraj Marcin, Avihai Horon These two APIs largely duplicate each other. For example, a few users directly pass the same function to both hooks. Providing two hooks is also error prone, as it makes it easier for a module to report different things via the two paths. In reality, they should always report the same thing; the only difference is whether a fast path should be used when the slow path might be too slow, as QEMU may query this information quite frequently during the migration process. Merge them into one API, with a bool indicating whether the query is an exact one. No functional change intended. Export qemu_savevm_query_pending(). New users should invoke this API directly when they need to do the query; this will happen very soon. 
Cc: Halil Pasic <pasic@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Eric Farman <farman@linux.ibm.com> Cc: Matthew Rosato <mjrosato@linux.ibm.com> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Ilya Leoshkevich <iii@linux.ibm.com> Cc: David Hildenbrand <david@kernel.org> Cc: Cornelia Huck <cohuck@redhat.com> Cc: Eric Blake <eblake@redhat.com> Cc: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Cc: John Snow <jsnow@redhat.com> Reviewed-by: Jason J. Herne <jjherne@linux.ibm.com> Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Reviewed-by: Avihai Horon <avihaih@nvidia.com> Acked-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Link: https://lore.kernel.org/r/20260421202110.306051-6-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- docs/devel/migration/main.rst | 9 ++---- docs/devel/migration/vfio.rst | 9 ++---- include/migration/register.h | 52 ++++++++++++---------------------- migration/savevm.h | 3 ++ hw/s390x/s390-stattrib.c | 9 +++--- hw/vfio/migration.c | 48 ++++++++++++++----------------- migration/block-dirty-bitmap.c | 10 +++---- migration/ram.c | 33 +++++++-------------- migration/savevm.c | 42 +++++++++++++-------------- hw/vfio/trace-events | 3 +- 10 files changed, 86 insertions(+), 132 deletions(-) diff --git a/docs/devel/migration/main.rst b/docs/devel/migration/main.rst index 2de7050764..430673a499 100644 --- a/docs/devel/migration/main.rst +++ b/docs/devel/migration/main.rst @@ -515,13 +515,8 @@ An iterative device must provide: - A ``load_setup`` function that initialises the data structures on the destination. - - A ``state_pending_exact`` function that indicates how much more - data we must save. The core migration code will use this to - determine when to pause the CPUs and complete the migration. - - - A ``state_pending_estimate`` function that indicates how much more - data we must save. 
When the estimated amount is smaller than the - threshold, we call ``state_pending_exact``. + - A ``save_query_pending`` function that indicates how much more + data we must save. - A ``save_live_iterate`` function should send a chunk of data until the point that stream bandwidth limits tell it to stop. Each call diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst index 0790e5031d..691061d182 100644 --- a/docs/devel/migration/vfio.rst +++ b/docs/devel/migration/vfio.rst @@ -50,13 +50,8 @@ VFIO implements the device hooks for the iterative approach as follows: * A ``load_setup`` function that sets the VFIO device on the destination in _RESUMING state. -* A ``state_pending_estimate`` function that reports an estimate of the - remaining pre-copy data that the vendor driver has yet to save for the VFIO - device. - -* A ``state_pending_exact`` function that reads pending_bytes from the vendor - driver, which indicates the amount of data that the vendor driver has yet to - save for the VFIO device. +* A ``save_query_pending`` function that reports the remaining data that + the vendor driver has yet to save for the VFIO device. * An ``is_active_iterate`` function that indicates ``save_live_iterate`` is active only when the VFIO device is in pre-copy states. 
diff --git a/include/migration/register.h b/include/migration/register.h index d0f37f5f43..e2117e8dd4 100644 --- a/include/migration/register.h +++ b/include/migration/register.h @@ -16,6 +16,13 @@ #include "hw/core/vmstate-if.h" +typedef struct MigPendingData { + /* Amount of pending bytes can be transferred in precopy or stopcopy */ + uint64_t precopy_bytes; + /* Amount of pending bytes can be transferred in postcopy */ + uint64_t postcopy_bytes; +} MigPendingData; + /** * struct SaveVMHandlers: handler structure to finely control * migration of complex subsystems and devices, such as RAM, block and @@ -197,46 +204,23 @@ typedef struct SaveVMHandlers { bool (*save_postcopy_prepare)(QEMUFile *f, void *opaque, Error **errp); /** - * @state_pending_estimate - * - * This estimates the remaining data to transfer + * @save_query_pending * - * Sum of @can_postcopy and @must_postcopy is the whole amount of - * pending data. - * - * @opaque: data pointer passed to register_savevm_live() - * @must_precopy: amount of data that must be migrated in precopy - * or in stopped state, i.e. that must be migrated - * before target start. - * @can_postcopy: amount of data that can be migrated in postcopy - * or in stopped state, i.e. after target start. - * Some can also be migrated during precopy (RAM). - * Some must be migrated after source stops - * (block-dirty-bitmap) - */ - void (*state_pending_estimate)(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy); - - /** - * @state_pending_exact + * This estimates the remaining data to transfer on the source side. * - * This calculates the exact remaining data to transfer + * When @exact is true, a module must report accurate results. When + * @exact is false, a module may report estimates. * - * Sum of @can_postcopy and @must_postcopy is the whole amount of - * pending data. 
+ * It's highly recommended that modules implement a faster version of + * the query path (for example, by proper caching on the counters) if + * an accurate query will be time-consuming. * * @opaque: data pointer passed to register_savevm_live() - * @must_precopy: amount of data that must be migrated in precopy - * or in stopped state, i.e. that must be migrated - * before target start. - * @can_postcopy: amount of data that can be migrated in postcopy - * or in stopped state, i.e. after target start. - * Some can also be migrated during precopy (RAM). - * Some must be migrated after source stops - * (block-dirty-bitmap) + * @pending: pointer to a MigPendingData struct + * @exact: set to true for an accurate (slow) query */ - void (*state_pending_exact)(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy); + void (*save_query_pending)(void *opaque, MigPendingData *pending, + bool exact); /** * @load_state diff --git a/migration/savevm.h b/migration/savevm.h index b3d1e8a13c..e4efd243f3 100644 --- a/migration/savevm.h +++ b/migration/savevm.h @@ -14,6 +14,8 @@ #ifndef MIGRATION_SAVEVM_H #define MIGRATION_SAVEVM_H +#include "migration/register.h" + #define QEMU_VM_FILE_MAGIC 0x5145564d #define QEMU_VM_FILE_VERSION_COMPAT 0x00000002 #define QEMU_VM_FILE_VERSION 0x00000003 @@ -43,6 +45,7 @@ int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy); void qemu_savevm_state_cleanup(void); void qemu_savevm_state_complete_postcopy(QEMUFile *f); int qemu_savevm_state_complete_precopy(MigrationState *s); +void qemu_savevm_query_pending(MigPendingData *pending, bool exact); void qemu_savevm_state_pending_exact(uint64_t *must_precopy, uint64_t *can_postcopy); void qemu_savevm_state_pending_estimate(uint64_t *must_precopy, diff --git a/hw/s390x/s390-stattrib.c b/hw/s390x/s390-stattrib.c index 2e83aa211c..dfbd452e44 100644 --- a/hw/s390x/s390-stattrib.c +++ b/hw/s390x/s390-stattrib.c @@ -187,15 +187,15 @@ static int cmma_save_setup(QEMUFile *f, void *opaque, Error 
**errp) return 0; } -static void cmma_state_pending(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy) +static void cmma_state_pending(void *opaque, MigPendingData *pending, + bool exact) { S390StAttribState *sas = S390_STATTRIB(opaque); S390StAttribClass *sac = S390_STATTRIB_GET_CLASS(sas); long long res = sac->get_dirtycount(sas); if (res >= 0) { - *must_precopy += res; + pending->precopy_bytes += res; } } @@ -340,8 +340,7 @@ static SaveVMHandlers savevm_s390_stattrib_handlers = { .save_setup = cmma_save_setup, .save_live_iterate = cmma_save_iterate, .save_complete = cmma_save_complete, - .state_pending_exact = cmma_state_pending, - .state_pending_estimate = cmma_state_pending, + .save_query_pending = cmma_state_pending, .save_cleanup = cmma_save_cleanup, .load_state = cmma_load, .is_active = cmma_active, diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index 5d5fca09bd..e965ba51fb 100644 --- a/hw/vfio/migration.c +++ b/hw/vfio/migration.c @@ -571,42 +571,39 @@ static void vfio_save_cleanup(void *opaque) trace_vfio_save_cleanup(vbasedev->name); } -static void vfio_state_pending_estimate(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy) +static void vfio_state_pending_sync(VFIODevice *vbasedev) { - VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; - if (!vfio_device_state_is_precopy(vbasedev)) { - return; - } - - *must_precopy += - migration->precopy_init_size + migration->precopy_dirty_size; + vfio_query_stop_copy_size(vbasedev); - trace_vfio_state_pending_estimate(vbasedev->name, *must_precopy, - *can_postcopy, - migration->precopy_init_size, - migration->precopy_dirty_size); + if (vfio_device_state_is_precopy(vbasedev)) { + vfio_query_precopy_size(migration); + } } -static void vfio_state_pending_exact(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy) +static void vfio_state_pending(void *opaque, MigPendingData *pending, + bool exact) { VFIODevice *vbasedev = opaque; VFIOMigration 
*migration = vbasedev->migration; + uint64_t remain; - vfio_query_stop_copy_size(vbasedev); - *must_precopy += migration->stopcopy_size; - - if (vfio_device_state_is_precopy(vbasedev)) { - vfio_query_precopy_size(migration); + if (exact) { + vfio_state_pending_sync(vbasedev); + remain = migration->stopcopy_size; + } else { + if (!vfio_device_state_is_precopy(vbasedev)) { + return; + } + remain = migration->precopy_init_size + migration->precopy_dirty_size; } - trace_vfio_state_pending_exact(vbasedev->name, *must_precopy, *can_postcopy, - migration->stopcopy_size, - migration->precopy_init_size, - migration->precopy_dirty_size); + pending->precopy_bytes += remain; + + trace_vfio_state_pending(vbasedev->name, migration->stopcopy_size, + migration->precopy_init_size, + migration->precopy_dirty_size, exact); } static bool vfio_is_active_iterate(void *opaque) @@ -851,8 +848,7 @@ static const SaveVMHandlers savevm_vfio_handlers = { .save_prepare = vfio_save_prepare, .save_setup = vfio_save_setup, .save_cleanup = vfio_save_cleanup, - .state_pending_estimate = vfio_state_pending_estimate, - .state_pending_exact = vfio_state_pending_exact, + .save_query_pending = vfio_state_pending, .is_active_iterate = vfio_is_active_iterate, .save_live_iterate = vfio_save_iterate, .save_complete = vfio_save_complete_precopy, diff --git a/migration/block-dirty-bitmap.c b/migration/block-dirty-bitmap.c index a061aad817..15d417013c 100644 --- a/migration/block-dirty-bitmap.c +++ b/migration/block-dirty-bitmap.c @@ -766,9 +766,8 @@ static int dirty_bitmap_save_complete(QEMUFile *f, void *opaque) return 0; } -static void dirty_bitmap_state_pending(void *opaque, - uint64_t *must_precopy, - uint64_t *can_postcopy) +static void dirty_bitmap_state_pending(void *opaque, MigPendingData *data, + bool exact) { DBMSaveState *s = &((DBMState *)opaque)->save; SaveBitmapState *dbms; @@ -788,7 +787,7 @@ static void dirty_bitmap_state_pending(void *opaque, trace_dirty_bitmap_state_pending(pending); - 
*can_postcopy += pending; + data->postcopy_bytes += pending; } /* First occurrence of this bitmap. It should be created if doesn't exist */ @@ -1250,8 +1249,7 @@ static SaveVMHandlers savevm_dirty_bitmap_handlers = { .save_setup = dirty_bitmap_save_setup, .save_complete = dirty_bitmap_save_complete, .has_postcopy = dirty_bitmap_has_postcopy, - .state_pending_exact = dirty_bitmap_state_pending, - .state_pending_estimate = dirty_bitmap_state_pending, + .save_query_pending = dirty_bitmap_state_pending, .save_live_iterate = dirty_bitmap_save_iterate, .is_active_iterate = dirty_bitmap_is_active_iterate, .load_state = dirty_bitmap_load, diff --git a/migration/ram.c b/migration/ram.c index 2046f16caa..44503bf3f7 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -3449,30 +3449,18 @@ static int ram_save_complete(QEMUFile *f, void *opaque) return qemu_fflush(f); } -static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy) -{ - RAMState **temp = opaque; - RAMState *rs = *temp; - - uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE; - - if (migrate_postcopy_ram()) { - /* We can do postcopy, and all the data is postcopiable */ - *can_postcopy += remaining_size; - } else { - *must_precopy += remaining_size; - } -} - -static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy, - uint64_t *can_postcopy) +static void ram_state_pending(void *opaque, MigPendingData *pending, + bool exact) { RAMState **temp = opaque; RAMState *rs = *temp; uint64_t remaining_size; - if (!migration_in_postcopy()) { + /* + * Sync is not needed either with: (1) a fast query, or (2) after + * postcopy has started (no new dirty will generate anymore). 
+ */ + if (exact && !migration_in_postcopy()) { bql_lock(); WITH_RCU_READ_LOCK_GUARD() { migration_bitmap_sync_precopy(false); @@ -3484,9 +3472,9 @@ static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy, if (migrate_postcopy_ram()) { /* We can do postcopy, and all the data is postcopiable */ - *can_postcopy += remaining_size; + pending->postcopy_bytes += remaining_size; } else { - *must_precopy += remaining_size; + pending->precopy_bytes += remaining_size; } } @@ -4709,8 +4697,7 @@ static SaveVMHandlers savevm_ram_handlers = { .save_live_iterate = ram_save_iterate, .save_complete = ram_save_complete, .has_postcopy = ram_has_postcopy, - .state_pending_exact = ram_state_pending_exact, - .state_pending_estimate = ram_state_pending_estimate, + .save_query_pending = ram_state_pending, .load_state = ram_load, .save_cleanup = ram_save_cleanup, .load_setup = ram_load_setup, diff --git a/migration/savevm.c b/migration/savevm.c index 765df8ce2d..41f1906598 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -1796,46 +1796,44 @@ int qemu_savevm_state_complete_precopy(MigrationState *s) return qemu_fflush(f); } -/* Give an estimate of the amount left to be transferred, - * the result is split into the amount for units that can and - * for units that can't do postcopy. 
- */ -void qemu_savevm_state_pending_estimate(uint64_t *must_precopy, - uint64_t *can_postcopy) +void qemu_savevm_query_pending(MigPendingData *pending, bool exact) { SaveStateEntry *se; - *must_precopy = 0; - *can_postcopy = 0; + pending->precopy_bytes = 0; + pending->postcopy_bytes = 0; QTAILQ_FOREACH(se, &savevm_state.handlers, entry) { - if (!se->ops || !se->ops->state_pending_estimate) { + if (!se->ops || !se->ops->save_query_pending) { continue; } if (!qemu_savevm_state_active(se)) { continue; } - se->ops->state_pending_estimate(se->opaque, must_precopy, can_postcopy); + se->ops->save_query_pending(se->opaque, pending, exact); } } +void qemu_savevm_state_pending_estimate(uint64_t *must_precopy, + uint64_t *can_postcopy) +{ + MigPendingData pending; + + qemu_savevm_query_pending(&pending, false); + + *must_precopy = pending.precopy_bytes; + *can_postcopy = pending.postcopy_bytes; +} + void qemu_savevm_state_pending_exact(uint64_t *must_precopy, uint64_t *can_postcopy) { - SaveStateEntry *se; + MigPendingData pending; - *must_precopy = 0; - *can_postcopy = 0; + qemu_savevm_query_pending(&pending, true); - QTAILQ_FOREACH(se, &savevm_state.handlers, entry) { - if (!se->ops || !se->ops->state_pending_exact) { - continue; - } - if (!qemu_savevm_state_active(se)) { - continue; - } - se->ops->state_pending_exact(se->opaque, must_precopy, can_postcopy); - } + *must_precopy = pending.precopy_bytes; + *can_postcopy = pending.postcopy_bytes; } void qemu_savevm_state_cleanup(void) diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index 846e3625c5..287df0b8cb 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -173,8 +173,7 @@ vfio_save_device_config_state(const char *name) " (%s)" vfio_save_iterate(const char *name, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) precopy initial size %"PRIu64" precopy dirty size %"PRIu64 vfio_save_iterate_start(const char *name) " (%s)" vfio_save_setup(const char *name, uint64_t data_buffer_size) " (%s) 
data buffer size %"PRIu64 -vfio_state_pending_estimate(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) precopy %"PRIu64" postcopy %"PRIu64" precopy initial size %"PRIu64" precopy dirty size %"PRIu64 -vfio_state_pending_exact(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t stopcopy_size, uint64_t precopy_init_size, uint64_t precopy_dirty_size) " (%s) precopy %"PRIu64" postcopy %"PRIu64" stopcopy size %"PRIu64" precopy initial size %"PRIu64" precopy dirty size %"PRIu64 +vfio_state_pending(const char *name, uint64_t stopcopy_size, uint64_t precopy_init_size, uint64_t precopy_dirty_size, bool exact) " (%s) stopcopy size %"PRIu64" precopy initial size %"PRIu64" precopy dirty size %"PRIu64 " exact %d" vfio_vmstate_change(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s" vfio_vmstate_change_prepare(const char *name, int running, const char *reason, const char *dev_state) " (%s) running %d reason %s device state %s" -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 08/23] migration: Use the new save_query_pending() API directly 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (6 preceding siblings ...) 2026-05-05 20:26 ` [PULL 07/23] migration/treewide: Merge @state_pending_{exact|estimate} APIs Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 09/23] migration: Introduce stopcopy_bytes in save_query_pending() Peter Xu ` (15 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Juraj Marcin, Avihai Horon It's easier to use the new API directly in the migration iterations. This also paves the way for follow-up patches to add new data to be reported directly to the iterator function. While at it, merge the two tracepoints into one. No functional change intended. Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Reviewed-by: Avihai Horon <avihaih@nvidia.com> Link: https://lore.kernel.org/r/20260421202110.306051-7-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- migration/savevm.h | 4 ---- migration/migration.c | 16 +++++++--------- migration/savevm.c | 23 ++--------------------- migration/trace-events | 3 +-- 4 files changed, 10 insertions(+), 36 deletions(-) diff --git a/migration/savevm.h b/migration/savevm.h index e4efd243f3..96fdf96d4e 100644 --- a/migration/savevm.h +++ b/migration/savevm.h @@ -46,10 +46,6 @@ void qemu_savevm_state_cleanup(void); void qemu_savevm_state_complete_postcopy(QEMUFile *f); int qemu_savevm_state_complete_precopy(MigrationState *s); void qemu_savevm_query_pending(MigPendingData *pending, bool exact); -void qemu_savevm_state_pending_exact(uint64_t *must_precopy, - uint64_t *can_postcopy); -void qemu_savevm_state_pending_estimate(uint64_t *must_precopy, - uint64_t *can_postcopy); int qemu_savevm_state_complete_precopy_iterable(QEMUFile *f, bool in_postcopy); bool qemu_savevm_state_postcopy_prepare(QEMUFile *f, Error **errp); void 
qemu_savevm_state_end(QEMUFile *f); diff --git a/migration/migration.c b/migration/migration.c index 5f4efb1fe5..c75ad01b64 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -3208,17 +3208,17 @@ typedef enum { */ static MigIterateState migration_iteration_run(MigrationState *s) { - uint64_t must_precopy, can_postcopy, pending_size; Error *local_err = NULL; bool in_postcopy = (s->state == MIGRATION_STATUS_POSTCOPY_DEVICE || s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE); bool can_switchover = migration_can_switchover(s); + MigPendingData pending = { }; + uint64_t pending_size; bool complete_ready; /* Fast path - get the estimated amount of pending data */ - qemu_savevm_state_pending_estimate(&must_precopy, &can_postcopy); - pending_size = must_precopy + can_postcopy; - trace_migrate_pending_estimate(pending_size, must_precopy, can_postcopy); + qemu_savevm_query_pending(&pending, false); + pending_size = pending.precopy_bytes + pending.postcopy_bytes; if (in_postcopy) { /* @@ -3259,14 +3259,12 @@ static MigIterateState migration_iteration_run(MigrationState *s) * during postcopy phase. */ if (pending_size <= s->threshold_size) { - qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy); - pending_size = must_precopy + can_postcopy; - trace_migrate_pending_exact(pending_size, must_precopy, - can_postcopy); + qemu_savevm_query_pending(&pending, true); + pending_size = pending.precopy_bytes + pending.postcopy_bytes; } /* Should we switch to postcopy now? 
*/ - if (must_precopy <= s->threshold_size && + if (pending.precopy_bytes <= s->threshold_size && can_switchover && qatomic_read(&s->start_postcopy)) { if (postcopy_start(s, &local_err)) { migrate_error_propagate(s, error_copy(local_err)); diff --git a/migration/savevm.c b/migration/savevm.c index 41f1906598..72454e15ad 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -1812,28 +1812,9 @@ void qemu_savevm_query_pending(MigPendingData *pending, bool exact) } se->ops->save_query_pending(se->opaque, pending, exact); } -} - -void qemu_savevm_state_pending_estimate(uint64_t *must_precopy, - uint64_t *can_postcopy) -{ - MigPendingData pending; - - qemu_savevm_query_pending(&pending, false); - - *must_precopy = pending.precopy_bytes; - *can_postcopy = pending.postcopy_bytes; -} - -void qemu_savevm_state_pending_exact(uint64_t *must_precopy, - uint64_t *can_postcopy) -{ - MigPendingData pending; - - qemu_savevm_query_pending(&pending, true); - *must_precopy = pending.precopy_bytes; - *can_postcopy = pending.postcopy_bytes; + trace_qemu_savevm_query_pending(exact, pending->precopy_bytes, + pending->postcopy_bytes); } void qemu_savevm_state_cleanup(void) diff --git a/migration/trace-events b/migration/trace-events index 34143b14b4..ca7dfd4cb7 100644 --- a/migration/trace-events +++ b/migration/trace-events @@ -7,6 +7,7 @@ qemu_loadvm_state_section_partend(uint32_t section_id) "%u" qemu_loadvm_state_post_main(int ret) "%d" qemu_loadvm_state_section_startfull(uint32_t section_id, const char *idstr, uint32_t instance_id, uint32_t version_id) "%u(%s) %u %u" qemu_savevm_send_packaged(void) "" +qemu_savevm_query_pending(bool exact, uint64_t precopy, uint64_t postcopy) "exact=%d, precopy=%"PRIu64", postcopy=%"PRIu64 loadvm_state_switchover_ack_needed(unsigned int switchover_ack_pending_num) "Switchover ack pending num=%u" loadvm_state_setup(void) "" loadvm_state_cleanup(void) "" @@ -161,8 +162,6 @@ migration_cleanup(void) "" migrate_error(const char *error_desc) 
"error=%s" migration_cancel(void) "" migrate_handle_rp_req_pages(const char *rbname, size_t start, size_t len) "in %s at 0x%zx len 0x%zx" -migrate_pending_exact(uint64_t size, uint64_t pre, uint64_t post) "exact pending size %" PRIu64 " (pre = %" PRIu64 " post=%" PRIu64 ")" -migrate_pending_estimate(uint64_t size, uint64_t pre, uint64_t post) "estimate pending size %" PRIu64 " (pre = %" PRIu64 " post=%" PRIu64 ")" migrate_send_rp_message(int msg_type, uint16_t len) "%d: len %d" migrate_send_rp_recv_bitmap(char *name, int64_t size) "block '%s' size 0x%"PRIi64 migration_completion_file_err(void) "" -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
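The control flow the diff puts into migration_iteration_run() — a cheap estimated query on every pass, falling back to the slow exact query only once the estimate dips under the downtime threshold — can be isolated into a small toy. This is an assumption-laden sketch, not QEMU code: `query_pending` fakes the two query modes with fixed "true" and "estimated" values, and `ready_to_switchover` is a hypothetical helper.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t precopy_bytes;
    uint64_t postcopy_bytes;
} MigPendingData;

/* Toy query: the exact mode returns the true remaining data, the
 * estimate may be off in either direction (here it is a fixed value). */
static void query_pending(MigPendingData *pending, bool exact,
                          uint64_t true_precopy, uint64_t est_precopy)
{
    pending->precopy_bytes = exact ? true_precopy : est_precopy;
    pending->postcopy_bytes = 0;
}

/* Mirrors the fast/slow split in migration_iteration_run(): pay for
 * the exact (slow) query only when the cheap estimate already looks
 * small enough, then decide switchover on the exact number. */
static bool ready_to_switchover(uint64_t threshold,
                                uint64_t true_precopy, uint64_t est_precopy)
{
    MigPendingData pending;

    query_pending(&pending, false, true_precopy, est_precopy);
    if (pending.precopy_bytes + pending.postcopy_bytes <= threshold) {
        query_pending(&pending, true, true_precopy, est_precopy);
    }
    return pending.precopy_bytes <= threshold;
}
```

The point of the structure is that the expensive sync (for RAM, a full dirty-bitmap sync under the BQL) is never triggered while the estimate alone already rules out switchover.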
* [PULL 09/23] migration: Introduce stopcopy_bytes in save_query_pending() 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (7 preceding siblings ...) 2026-05-05 20:26 ` [PULL 08/23] migration: Use the new save_query_pending() API directly Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 10/23] vfio/migration: Fix incorrect reporting for VFIO pending data Peter Xu ` (14 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Avihai Horon, Juraj Marcin Allow modules to report data that can only be migrated after the VM is stopped. With this concept introduced, the stopcopy size still needs to be accounted as part of pending_size, as before. However, when there is data that can only be migrated in the stopcopy phase, the old "pending_size" may never drop low enough to kick off the slow version of the query sync. Previously that was almost guaranteed to happen, because no prior iterative module had stopcopy-only data. VFIO may change that by having some data that must be copied during the stop phase. So make sure QEMU kicks off a synchronized version of the pending query once all precopy data has been migrated. This is important for VFIO to keep making progress even when the downtime limit cannot yet be satisfied. So far, this patch should introduce no functional change, as no module reports a stopcopy size yet. It paves the way for VFIO to properly report its pending data sizes, which will start to include stop-only data. 
Reviewed-by: Avihai Horon <avihaih@nvidia.com> Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-8-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- include/migration/register.h | 7 ++++ migration/migration.c | 65 ++++++++++++++++++++++++++++++------ migration/savevm.c | 10 ++++-- migration/trace-events | 2 +- 4 files changed, 70 insertions(+), 14 deletions(-) diff --git a/include/migration/register.h b/include/migration/register.h index e2117e8dd4..5e5e0ee432 100644 --- a/include/migration/register.h +++ b/include/migration/register.h @@ -21,6 +21,13 @@ typedef struct MigPendingData { uint64_t precopy_bytes; /* Amount of pending bytes can be transferred in postcopy */ uint64_t postcopy_bytes; + /* Amount of pending bytes can be transferred only in stopcopy */ + uint64_t stopcopy_bytes; + /* + * Total pending data, modules do not need to update this field, it + * will be automatically calculated by migration core API. + */ + uint64_t total_bytes; } MigPendingData; /** diff --git a/migration/migration.c b/migration/migration.c index c75ad01b64..049b69fbe7 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -3202,6 +3202,54 @@ typedef enum { MIG_ITERATE_BREAK, /* Break the loop */ } MigIterateState; +/* Are we ready to move to the next iteration phase? */ +static bool migration_iteration_next_ready(MigrationState *s, + MigPendingData *pending) +{ + /* + * If the estimated values already suggest us to switchover, mark this + * iteration finished, time to do a slow sync. + */ + if (pending->total_bytes <= s->threshold_size) { + return true; + } + + /* + * Since we may have modules reporting stop-only data, we also want to + * re-query with slow mode if all precopy data is moved over. This + * will also mark the current iteration done. + * + * This could happen when e.g. 
a module (like VFIO) reports stopcopy + size too large so it will never yet satisfy the downtime with the + current setup (above check). Here, slow version of re-query helps + because we keep trying the best to move whatever we have. + */ + if (pending->precopy_bytes == 0) { + return true; + } + + return false; +} + +static void migration_iteration_go_next(MigPendingData *pending) +{ + /* + * Do a slow sync will achieve this. TODO: move RAM iteration code + * into the core layer. + */ + qemu_savevm_query_pending(pending, true); +} + +static bool postcopy_should_start(MigrationState *s, MigPendingData *pending) +{ + /* If postcopy's switchover will violate user specified downtime, stop */ + if (pending->precopy_bytes + pending->stopcopy_bytes > s->threshold_size) { + return false; + } + + return qatomic_read(&s->start_postcopy); +} + /* * Return true if continue to the next iteration directly, false * otherwise. @@ -3213,12 +3261,10 @@ static MigIterateState migration_iteration_run(MigrationState *s) s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE); bool can_switchover = migration_can_switchover(s); MigPendingData pending = { }; - uint64_t pending_size; bool complete_ready; /* Fast path - get the estimated amount of pending data */ qemu_savevm_query_pending(&pending, false); - pending_size = pending.precopy_bytes + pending.postcopy_bytes; if (in_postcopy) { /* @@ -3226,7 +3272,7 @@ static MigIterateState migration_iteration_run(MigrationState *s) * postcopy completion doesn't rely on can_switchover, because when * POSTCOPY_ACTIVE it means switchover already happened. */ - complete_ready = !pending_size; + complete_ready = !pending.total_bytes; if (s->state == MIGRATION_STATUS_POSTCOPY_DEVICE && (s->postcopy_package_loaded || complete_ready)) { /* @@ -3258,14 +3304,12 @@ static MigIterateState migration_iteration_run(MigrationState *s) * postcopy started, so ESTIMATE should always match with EXACT * during postcopy phase. 
*/ - if (pending_size <= s->threshold_size) { - qemu_savevm_query_pending(&pending, true); - pending_size = pending.precopy_bytes + pending.postcopy_bytes; + if (migration_iteration_next_ready(s, &pending)) { + migration_iteration_go_next(&pending); } /* Should we switch to postcopy now? */ - if (pending.precopy_bytes <= s->threshold_size && - can_switchover && qatomic_read(&s->start_postcopy)) { + if (can_switchover && postcopy_should_start(s, &pending)) { if (postcopy_start(s, &local_err)) { migrate_error_propagate(s, error_copy(local_err)); error_report_err(local_err); @@ -3280,11 +3324,12 @@ static MigIterateState migration_iteration_run(MigrationState *s) * (2) Pending size is no more than the threshold specified * (which was calculated from expected downtime) */ - complete_ready = can_switchover && (pending_size <= s->threshold_size); + complete_ready = can_switchover && + (pending.total_bytes <= s->threshold_size); } if (complete_ready) { - trace_migration_thread_low_pending(pending_size); + trace_migration_thread_low_pending(pending.total_bytes); migration_completion(s); return MIG_ITERATE_BREAK; } diff --git a/migration/savevm.c b/migration/savevm.c index 72454e15ad..39430470aa 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -1800,8 +1800,7 @@ void qemu_savevm_query_pending(MigPendingData *pending, bool exact) { SaveStateEntry *se; - pending->precopy_bytes = 0; - pending->postcopy_bytes = 0; + memset(pending, 0, sizeof(*pending)); QTAILQ_FOREACH(se, &savevm_state.handlers, entry) { if (!se->ops || !se->ops->save_query_pending) { @@ -1813,8 +1812,13 @@ void qemu_savevm_query_pending(MigPendingData *pending, bool exact) se->ops->save_query_pending(se->opaque, pending, exact); } + pending->total_bytes = pending->precopy_bytes + + pending->stopcopy_bytes + pending->postcopy_bytes; + trace_qemu_savevm_query_pending(exact, pending->precopy_bytes, - pending->postcopy_bytes); + pending->stopcopy_bytes, + pending->postcopy_bytes, + 
pending->total_bytes); } void qemu_savevm_state_cleanup(void) diff --git a/migration/trace-events b/migration/trace-events index ca7dfd4cb7..de99d976ab 100644 --- a/migration/trace-events +++ b/migration/trace-events @@ -7,7 +7,7 @@ qemu_loadvm_state_section_partend(uint32_t section_id) "%u" qemu_loadvm_state_post_main(int ret) "%d" qemu_loadvm_state_section_startfull(uint32_t section_id, const char *idstr, uint32_t instance_id, uint32_t version_id) "%u(%s) %u %u" qemu_savevm_send_packaged(void) "" -qemu_savevm_query_pending(bool exact, uint64_t precopy, uint64_t postcopy) "exact=%d, precopy=%"PRIu64", postcopy=%"PRIu64 +qemu_savevm_query_pending(bool exact, uint64_t precopy, uint64_t stopcopy, uint64_t postcopy, uint64_t total) "exact=%d, precopy=%"PRIu64", stopcopy=%"PRIu64", postcopy=%"PRIu64", total=%"PRIu64 loadvm_state_switchover_ack_needed(unsigned int switchover_ack_pending_num) "Switchover ack pending num=%u" loadvm_state_setup(void) "" loadvm_state_cleanup(void) "" -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
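The decision logic introduced by this patch can be condensed into a small standalone sketch. This is an illustrative model only, not the exact QEMU code: the struct and function names mirror the patch, but everything else (no locking, no module list) is simplified:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t precopy_bytes;   /* can move while the VM runs */
    uint64_t postcopy_bytes;  /* can move after switchover, in postcopy */
    uint64_t stopcopy_bytes;  /* can only move once the VM is stopped */
    uint64_t total_bytes;     /* filled in by the core, not by modules */
} MigPendingData;

/* Mirror of what qemu_savevm_query_pending() now does for total_bytes */
static void pending_update_total(MigPendingData *p)
{
    p->total_bytes = p->precopy_bytes + p->stopcopy_bytes + p->postcopy_bytes;
}

/*
 * Ready for a slow (exact) re-query?  Either the estimated total is
 * already below the switchover threshold, or all precopy data has been
 * moved and only stop-only/postcopy data remains.
 */
static bool iteration_next_ready(const MigPendingData *p, uint64_t threshold)
{
    return p->total_bytes <= threshold || p->precopy_bytes == 0;
}
```

The second clause is the new behavior: a device with a huge stop-only payload keeps `total_bytes` above the threshold forever, yet QEMU still re-queries once precopy drains.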
* [PULL 10/23] vfio/migration: Fix incorrect reporting for VFIO pending data 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (8 preceding siblings ...) 2026-05-05 20:26 ` [PULL 09/23] migration: Introduce stopcopy_bytes in save_query_pending() Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 11/23] migration: Move iteration counter out of RAM Peter Xu ` (13 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Avihai Horon VFIO reports different things in its fast/slow versions of the pending query results. That was because it wanted to make sure precopy data could reach 0, which is needed to make sure sync queries happen periodically over time. Now, with the stopcopy size reporting facility, it doesn't need this hack anymore. Fix this by reporting the same values in the fast/slow versions of the pending query, except that the slow version will do a slow sync with the hardware. While at it, remove the special casing for vfio_device_state_is_precopy(), which may report nothing in a fast query. Then the reporting will be consistent with VFIO devices that do not support the precopy phase. Copying stable might be too much; just skip it and skip the Fixes tag. 
Reviewed-by: Avihai Horon <avihaih@nvidia.com> Tested-by: Avihai Horon <avihaih@nvidia.com> Link: https://lore.kernel.org/r/20260421202110.306051-9-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- hw/vfio/migration.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index e965ba51fb..e6e6a0d53d 100644 --- a/hw/vfio/migration.c +++ b/hw/vfio/migration.c @@ -587,19 +587,23 @@ static void vfio_state_pending(void *opaque, MigPendingData *pending, { VFIODevice *vbasedev = opaque; VFIOMigration *migration = vbasedev->migration; - uint64_t remain; + uint64_t precopy_size, stopcopy_size; if (exact) { vfio_state_pending_sync(vbasedev); - remain = migration->stopcopy_size; + } + + precopy_size = + migration->precopy_init_size + migration->precopy_dirty_size; + + if (migration->stopcopy_size > precopy_size) { + stopcopy_size = migration->stopcopy_size - precopy_size; } else { - if (!vfio_device_state_is_precopy(vbasedev)) { - return; - } - remain = migration->precopy_init_size + migration->precopy_dirty_size; + stopcopy_size = 0; } - pending->precopy_bytes += remain; + pending->precopy_bytes += precopy_size; + pending->stopcopy_bytes += stopcopy_size; trace_vfio_state_pending(vbasedev->name, migration->stopcopy_size, migration->precopy_init_size, -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
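The new accounting boils down to: report all of init+dirty as precopy, and only the excess of the stop-copy size over that as stop-only data. A simplified model of that split (names shortened; this is an illustrative sketch, not the VFIO code itself):

```c
#include <stdint.h>

/* Split VFIO's reported sizes into precopy vs stop-only portions. */
static void vfio_split_pending(uint64_t init_size, uint64_t dirty_size,
                               uint64_t stopcopy_size,
                               uint64_t *precopy_out, uint64_t *stoponly_out)
{
    uint64_t precopy = init_size + dirty_size;

    *precopy_out = precopy;
    /* Stop-only data is whatever the stop-copy size exceeds precopy by */
    *stoponly_out = stopcopy_size > precopy ? stopcopy_size - precopy : 0;
}
```

Note the `>` guard: without it the subtraction would wrap around when the precopy estimate already covers the stop-copy size.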
* [PULL 11/23] migration: Move iteration counter out of RAM 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (9 preceding siblings ...) 2026-05-05 20:26 ` [PULL 10/23] vfio/migration: Fix incorrect reporting for VFIO pending data Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 12/23] migration: Introduce a helper to return switchover bw estimate Peter Xu ` (12 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Hyman Huang, Prasad Pandit, Juraj Marcin It used to hide in the RAM dirty sync path. Now, with more modules being able to slow-sync their dirty information, keeping it there may not be good anymore, because iterations are not RAM's own concept: all modules should follow. More importantly, mgmt may try to query dirty info (to make policy decisions like adjusting downtime) by listening to iteration count changes via QMP events. So we must make sure the boost of the iteration count only happens _after_ the dirty sync operations, in whatever form (RAM's dirty bitmap sync, or VFIO's various ioctls to fetch the latest dirty info from the kernel). Move this to the core migration path to manage, together with the event generation, so that it can be well ordered with the sync operations for all modules. This brings a good side effect: we used to have an old issue where cpu_throttle_dirty_sync_timer_tick() could randomly boost the iteration count (because it invokes sync ops). Now it won't, which is actually the right behavior. That said, we have code (not only QEMU, but likely mgmt tools too) assuming the 1st iteration will always show a dirty count of 1. Initialize it to 1 this time, because with the counter boosted elsewhere we would otherwise miss counting the dirty sync done in setup(). 
Reviewed-by: Hyman Huang <yong.huang@smartx.com> Reviewed-by: Prasad Pandit <pjp@fedoraproject.org> Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-10-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- migration/migration-stats.h | 3 ++- migration/migration.c | 29 ++++++++++++++++++++++++++--- migration/ram.c | 6 ------ 3 files changed, 28 insertions(+), 10 deletions(-) diff --git a/migration/migration-stats.h b/migration/migration-stats.h index 1153520f7a..326ddb0088 100644 --- a/migration/migration-stats.h +++ b/migration/migration-stats.h @@ -43,7 +43,8 @@ typedef struct { */ uint64_t dirty_pages_rate; /* - * Number of times we have synchronized guest bitmaps. + * Number of times we have synchronized guest bitmaps. This always + * starts from 1 for the 1st iteration. */ uint64_t dirty_sync_count; /* diff --git a/migration/migration.c b/migration/migration.c index 049b69fbe7..8abc7e0327 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -1654,10 +1654,15 @@ int migrate_init(MigrationState *s, Error **errp) s->threshold_size = 0; s->switchover_acked = false; s->rdma_migration = false; + /* - * set mig_stats memory to zero for a new migration + * set mig_stats memory to zero for a new migration.. except the + * iteration counter, which we want to make sure it returns 1 for the + * first iteration. */ memset(&mig_stats, 0, sizeof(mig_stats)); + mig_stats.dirty_sync_count = 1; + migration_reset_vfio_bytes_transferred(); s->postcopy_package_loaded = false; @@ -3234,10 +3239,28 @@ static bool migration_iteration_next_ready(MigrationState *s, static void migration_iteration_go_next(MigPendingData *pending) { /* - * Do a slow sync will achieve this. TODO: move RAM iteration code - * into the core layer. + * Do a slow sync first before boosting the iteration count. */ qemu_savevm_query_pending(pending, true); + + /* + * Boost dirty sync count to reflect we finished one iteration. 
+ * + * NOTE: we need to make sure when this happens (together with the + * event sent below) all modules have slow-synced the pending data + * above. That means a write mem barrier, but qatomic_add() should be + * enough. + * + * It's because a mgmt could wait on the iteration event to query again + * on pending data for policy changes (e.g. downtime adjustments). The + * ordering will make sure the query will fetch the latest results from + * all the modules. + */ + qatomic_add(&mig_stats.dirty_sync_count, 1); + + if (migrate_events()) { + qapi_event_send_migration_pass(mig_stats.dirty_sync_count); + } } static bool postcopy_should_start(MigrationState *s, MigPendingData *pending) diff --git a/migration/ram.c b/migration/ram.c index 44503bf3f7..ecd4b6165c 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1136,8 +1136,6 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage) RAMBlock *block; int64_t end_time; - qatomic_add(&mig_stats.dirty_sync_count, 1); - if (!rs->time_last_bitmap_sync) { rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME); } @@ -1172,10 +1170,6 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage) rs->num_dirty_pages_period = 0; rs->bytes_xfer_prev = migration_transferred_bytes(); } - if (migrate_events()) { - uint64_t generation = qatomic_read(&mig_stats.dirty_sync_count); - qapi_event_send_migration_pass(generation); - } } void migration_bitmap_sync_precopy(bool last_stage) -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 12/23] migration: Introduce a helper to return switchover bw estimate 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (10 preceding siblings ...) 2026-05-05 20:26 ` [PULL 11/23] migration: Move iteration counter out of RAM Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 13/23] migration: Calculate expected downtime on demand Peter Xu ` (11 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Juraj Marcin Add a helper migration_get_switchover_bw() to return an estimate of the switchover bandwidth. Use it to simplify the current code. This will be used later to remove expected_downtime. While at it, remove two qatomic_read() calls to shrink the lines, because atomic ops are not needed when it's always the same thread that does the updates. Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-11-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- migration/migration.c | 48 +++++++++++++++++++++---------------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/migration/migration.c b/migration/migration.c index 8abc7e0327..4e19fe3409 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -984,6 +984,21 @@ void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value) migrate_send_rp_message(mis, MIG_RP_MSG_RESUME_ACK, sizeof(buf), &buf); } +/* + * Returns the estimated switchover bandwidth (unit: bytes / seconds) + */ +static double migration_get_switchover_bw(MigrationState *s) +{ + uint64_t switchover_bw = migrate_avail_switchover_bandwidth(); + + if (switchover_bw) { + /* If user specified, prioritize this value and don't estimate */ + return (double)switchover_bw; + } + + return s->mbps / 8 * 1000 * 1000; +} + bool migration_is_running(void) { MigrationState *s = current_migration; @@ -3130,37 +3145,22 @@ static 
void migration_update_counters(MigrationState *s, { uint64_t transferred, transferred_pages, time_spent; uint64_t current_bytes; /* bytes transferred since the beginning */ - uint64_t switchover_bw; - /* Expected bandwidth when switching over to destination QEMU */ - double expected_bw_per_ms; - double bandwidth; + double switchover_bw_per_ms; if (current_time < s->iteration_start_time + BUFFER_DELAY) { return; } - switchover_bw = migrate_avail_switchover_bandwidth(); current_bytes = migration_transferred_bytes(); transferred = current_bytes - s->iteration_initial_bytes; time_spent = current_time - s->iteration_start_time; - bandwidth = (double)transferred / time_spent; - - if (switchover_bw) { - /* - * If the user specified a switchover bandwidth, let's trust the - * user so that can be more accurate than what we estimated. - */ - expected_bw_per_ms = (double)switchover_bw / 1000; - } else { - /* If the user doesn't specify bandwidth, we use the estimated */ - expected_bw_per_ms = bandwidth; - } - - s->threshold_size = expected_bw_per_ms * migrate_downtime_limit(); - s->mbps = (((double) transferred * 8.0) / ((double) time_spent / 1000.0)) / 1000.0 / 1000.0; + /* NOTE: only update this after bandwidth (s->mbps) updated */ + switchover_bw_per_ms = migration_get_switchover_bw(s) / 1000; + s->threshold_size = switchover_bw_per_ms * migrate_downtime_limit(); + transferred_pages = ram_get_total_transferred_pages() - s->iteration_initial_pages; s->pages_per_second = (double) transferred_pages / @@ -3170,10 +3170,9 @@ static void migration_update_counters(MigrationState *s, * if we haven't sent anything, we don't want to * recalculate. 
10000 is a small enough number for our purposes */ - if (qatomic_read(&mig_stats.dirty_pages_rate) && - transferred > 10000) { + if (mig_stats.dirty_pages_rate && transferred > 10000) { s->expected_downtime = - qatomic_read(&mig_stats.dirty_bytes_last_sync) / expected_bw_per_ms; + mig_stats.dirty_bytes_last_sync / switchover_bw_per_ms; } migration_rate_reset(); @@ -3182,7 +3181,8 @@ static void migration_update_counters(MigrationState *s, trace_migrate_transferred(transferred, time_spent, /* Both in unit bytes/ms */ - bandwidth, switchover_bw / 1000, + (uint64_t)s->mbps, + (uint64_t)switchover_bw_per_ms, s->threshold_size); } -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
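The unit conversions in this hunk are easy to get wrong: `s->mbps` is megabits per second, while the threshold is in bytes and the downtime limit is in milliseconds. A small sketch of the arithmetic (illustrative only; the real logic lives in migration_get_switchover_bw() and migration_update_counters()):

```c
#include <stdint.h>

/* Switchover bandwidth in bytes/sec: user value wins over the estimate. */
static double switchover_bw_bytes_per_sec(double mbps, uint64_t user_bw)
{
    if (user_bw) {
        /* A user-specified value takes priority over the estimate */
        return (double)user_bw;
    }
    return mbps / 8 * 1000 * 1000; /* megabits/s -> bytes/s */
}

/* threshold_size = bytes we can still move within the downtime limit */
static uint64_t switchover_threshold(double bw_bytes_per_sec,
                                     uint64_t downtime_limit_ms)
{
    return (uint64_t)(bw_bytes_per_sec / 1000 * downtime_limit_ms);
}
```

So an 80 Mbps link with a 300 ms downtime limit yields a 3 MB threshold: once pending data drops below that, switchover can complete in time.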
* [PULL 13/23] migration: Calculate expected downtime on demand 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (11 preceding siblings ...) 2026-05-05 20:26 ` [PULL 12/23] migration: Introduce a helper to return switchover bw estimate Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-07 19:57 ` Peter Maydell 2026-05-05 20:26 ` [PULL 14/23] migration: Fix calculation of expected_downtime to take VFIO info Peter Xu ` (10 subsequent siblings) 23 siblings, 1 reply; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Juraj Marcin This value does not need to be calculated as frequently. Only calculate it on demand when query-migrate happens. With that we can remove the variable in MigrationState. This paves the way for fixing this value to include all modules (not only RAM but others too). Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-12-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- migration/migration.h | 2 +- migration/migration.c | 25 ++++++++++++------------- 2 files changed, 13 insertions(+), 14 deletions(-) diff --git a/migration/migration.h b/migration/migration.h index 9081e6a612..a5e064a1ac 100644 --- a/migration/migration.h +++ b/migration/migration.h @@ -359,7 +359,6 @@ struct MigrationState { /* Timestamp when VM is down (ms) to migrate the last stuff */ int64_t downtime_start; int64_t downtime; - int64_t expected_downtime; bool capabilities[MIGRATION_CAPABILITY__MAX]; int64_t setup_time; @@ -585,6 +584,7 @@ void migration_cancel(void); void migration_populate_vfio_info(MigrationInfo *info); void migration_reset_vfio_bytes_transferred(void); void postcopy_temp_page_reset(PostcopyTmpPage *tmp_page); +int64_t migration_downtime_calc_expected(MigrationState *s); /* * Migration thread waiting for return path thread. 
Return non-zero if an diff --git a/migration/migration.c b/migration/migration.c index 4e19fe3409..d740d9df85 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -1041,6 +1041,17 @@ static bool migrate_show_downtime(MigrationState *s) return (s->state == MIGRATION_STATUS_COMPLETED) || migration_in_postcopy(); } +/* Return expected downtime (unit: milliseconds) */ +int64_t migration_downtime_calc_expected(MigrationState *s) +{ + if (mig_stats.dirty_sync_count <= 1) { + return migrate_downtime_limit(); + } + + return mig_stats.dirty_bytes_last_sync / + migration_get_switchover_bw(s) * 1000; +} + static void populate_time_info(MigrationInfo *info, MigrationState *s) { info->has_status = true; @@ -1061,7 +1072,7 @@ static void populate_time_info(MigrationInfo *info, MigrationState *s) info->downtime = s->downtime; } else { info->has_expected_downtime = true; - info->expected_downtime = s->expected_downtime; + info->expected_downtime = migration_downtime_calc_expected(s); } } @@ -1649,7 +1660,6 @@ int migrate_init(MigrationState *s, Error **errp) s->mbps = 0.0; s->pages_per_second = 0.0; s->downtime = 0; - s->expected_downtime = 0; s->setup_time = 0; s->start_postcopy = false; s->migration_thread_running = false; @@ -3166,15 +3176,6 @@ static void migration_update_counters(MigrationState *s, s->pages_per_second = (double) transferred_pages / (((double) time_spent / 1000.0)); - /* - * if we haven't sent anything, we don't want to - * recalculate. 
10000 is a small enough number for our purposes - */ - if (mig_stats.dirty_pages_rate && transferred > 10000) { - s->expected_downtime = - mig_stats.dirty_bytes_last_sync / switchover_bw_per_ms; - } - migration_rate_reset(); update_iteration_initial_status(s); @@ -3841,8 +3842,6 @@ void migration_start_outgoing(MigrationState *s) bool resume = (s->state == MIGRATION_STATUS_POSTCOPY_RECOVER_SETUP); int ret; - s->expected_downtime = migrate_downtime_limit(); - if (resume) { /* This is a resumed migration */ rate_limit = migrate_max_postcopy_bandwidth(); -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
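With the bandwidth helper in place, expected_downtime becomes a pure function of the last sync's dirty bytes and the switchover bandwidth. A standalone sketch of the computation (illustrative; parameter names are mine, the real code is migration_downtime_calc_expected()):

```c
#include <stdint.h>

/*
 * Expected downtime in ms: dirty bytes left at the last sync divided by
 * the switchover bandwidth (bytes/sec), scaled to milliseconds.  Before
 * the first real sync there is nothing to base an estimate on, so fall
 * back to the configured downtime limit.
 */
static int64_t expected_downtime_ms(uint64_t dirty_sync_count,
                                    uint64_t dirty_bytes_last_sync,
                                    double bw_bytes_per_sec,
                                    int64_t downtime_limit_ms)
{
    if (dirty_sync_count <= 1) {
        return downtime_limit_ms;
    }
    return (int64_t)(dirty_bytes_last_sync / bw_bytes_per_sec * 1000);
}
```

Note this sketch inherits the same hazard the patch has: a zero bandwidth makes the division produce infinity, which the follow-up discussion below this patch flags as undefined behavior on conversion to int64_t.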
* Re: [PULL 13/23] migration: Calculate expected downtime on demand 2026-05-05 20:26 ` [PULL 13/23] migration: Calculate expected downtime on demand Peter Xu @ 2026-05-07 19:57 ` Peter Maydell 2026-05-11 13:48 ` Peter Xu 0 siblings, 1 reply; 27+ messages in thread From: Peter Maydell @ 2026-05-07 19:57 UTC (permalink / raw) To: Peter Xu; +Cc: qemu-devel, Fabiano Rosas, Paolo Bonzini, Juraj Marcin On Tue, 5 May 2026 at 21:29, Peter Xu <peterx@redhat.com> wrote: > > This value does not need to be calculated as frequent. Only calculate it > on demand when query-migrate happened. With that we can remove the > variable in MigrationState. > > This paves way for fixing this value to include all modules (not only RAM > but others too). > > Reviewed-by: Juraj Marcin <jmarcin@redhat.com> > Link: https://lore.kernel.org/r/20260421202110.306051-12-peterx@redhat.com > Signed-off-by: Peter Xu <peterx@redhat.com> Hi; I'm seeing a clang undefined-behaviour sanitizer failure in the code introduced in this change when running the aarch64 migration-test via "make check" on an x86-64 host. It seems to happen fairly reliably when I do a "make check -j20", but not when I run the test on its own, so it's probably load dependent. 
Here's the backtrace: ../../migration/migration.c:1051:12: runtime error: inf is outside the range of representable values of type 'long' #0 0x57b49d635c0d in migration_downtime_calc_expected /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1051:12 #1 0x57b49d63e860 in populate_time_info /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1075:35 #2 0x57b49d63617e in fill_source_migration_info /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1184:9 #3 0x57b49d63617e in qmp_query_migrate /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1264:5 #4 0x57b49e4aed75 in qmp_marshal_query_migrate /home/pm215/qemu/build/arm-clang/qapi/qapi-commands-migration.c:48:14 #5 0x57b49e526814 in do_qmp_dispatch_bh /home/pm215/qemu/build/arm-clang/../../qapi/qmp-dispatch.c:128:5 #6 0x57b49e58c35a in aio_bh_call /home/pm215/qemu/build/arm-clang/../../util/async.c:173:5 #7 0x57b49e58c698 in aio_bh_poll /home/pm215/qemu/build/arm-clang/../../util/async.c:220:13 #8 0x57b49e542fc1 in aio_dispatch /home/pm215/qemu/build/arm-clang/../../util/aio-posix.c:390:5 #9 0x57b49e58f10a in aio_ctx_dispatch /home/pm215/qemu/build/arm-clang/../../util/async.c:365:5 #10 0x7c74a09b8584 (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x5d584) (BuildId: 116e142b9b52c8a4dfd403e759e71ab8f95d8bb3) #11 0x7c74a09b86cf in g_main_context_dispatch (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x5d6cf) (BuildId: 116e142b9b52c8a4dfd403e759e71ab8f95d8bb3) #12 0x57b49e5901cb in glib_pollfds_poll /home/pm215/qemu/build/arm-clang/../../util/main-loop.c:290:9 #13 0x57b49e5901cb in os_host_main_loop_wait /home/pm215/qemu/build/arm-clang/../../util/main-loop.c:313:5 #14 0x57b49e5901cb in main_loop_wait /home/pm215/qemu/build/arm-clang/../../util/main-loop.c:592:11 #15 0x57b49d5f5486 in qemu_main_loop /home/pm215/qemu/build/arm-clang/../../system/runstate.c:948:9 #16 0x57b49e42cdfb in qemu_default_main /home/pm215/qemu/build/arm-clang/../../system/main.c:50:14 #17 0x57b49e42cdd3 in main 
/home/pm215/qemu/build/arm-clang/../../system/main.c:93:9 > +/* Return expected downtime (unit: milliseconds) */ > +int64_t migration_downtime_calc_expected(MigrationState *s) > +{ > + if (mig_stats.dirty_sync_count <= 1) { > + return migrate_downtime_limit(); > + } > + > + return mig_stats.dirty_bytes_last_sync / > + migration_get_switchover_bw(s) * 1000; > +} Presumably in this function migration_get_switchover_bw() returns 0, so the (floating-point) division results in Infinity. That's fine until we have to convert it to int64_t to return it, which is the UB that the sanitizer is complaining about... thanks -- PMM ^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PULL 13/23] migration: Calculate expected downtime on demand 2026-05-07 19:57 ` Peter Maydell @ 2026-05-11 13:48 ` Peter Xu 0 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-11 13:48 UTC (permalink / raw) To: Peter Maydell; +Cc: qemu-devel, Fabiano Rosas, Paolo Bonzini, Juraj Marcin On Thu, May 07, 2026 at 08:57:24PM +0100, Peter Maydell wrote: > On Tue, 5 May 2026 at 21:29, Peter Xu <peterx@redhat.com> wrote: > > > > This value does not need to be calculated as frequent. Only calculate it > > on demand when query-migrate happened. With that we can remove the > > variable in MigrationState. > > > > This paves way for fixing this value to include all modules (not only RAM > > but others too). > > > > Reviewed-by: Juraj Marcin <jmarcin@redhat.com> > > Link: https://lore.kernel.org/r/20260421202110.306051-12-peterx@redhat.com > > Signed-off-by: Peter Xu <peterx@redhat.com> > > Hi; I'm seeing a clang undefined-behaviour sanitizer failure > in the code introduced in this change when running the > aarch64 migration-test via "make check" on an x86-64 host. > > It seems to happen fairly reliably when I do a "make check -j20", > but not when I run the test on its own, so it's probably load dependent. 
> > Here's the backtrace: > > ../../migration/migration.c:1051:12: runtime error: inf is outside the > range of representable values of type 'long' > #0 0x57b49d635c0d in migration_downtime_calc_expected > /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1051:12 > #1 0x57b49d63e860 in populate_time_info > /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1075:35 > #2 0x57b49d63617e in fill_source_migration_info > /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1184:9 > #3 0x57b49d63617e in qmp_query_migrate > /home/pm215/qemu/build/arm-clang/../../migration/migration.c:1264:5 > #4 0x57b49e4aed75 in qmp_marshal_query_migrate > /home/pm215/qemu/build/arm-clang/qapi/qapi-commands-migration.c:48:14 > #5 0x57b49e526814 in do_qmp_dispatch_bh > /home/pm215/qemu/build/arm-clang/../../qapi/qmp-dispatch.c:128:5 > #6 0x57b49e58c35a in aio_bh_call > /home/pm215/qemu/build/arm-clang/../../util/async.c:173:5 > #7 0x57b49e58c698 in aio_bh_poll > /home/pm215/qemu/build/arm-clang/../../util/async.c:220:13 > #8 0x57b49e542fc1 in aio_dispatch > /home/pm215/qemu/build/arm-clang/../../util/aio-posix.c:390:5 > #9 0x57b49e58f10a in aio_ctx_dispatch > /home/pm215/qemu/build/arm-clang/../../util/async.c:365:5 > #10 0x7c74a09b8584 > (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x5d584) (BuildId: > 116e142b9b52c8a4dfd403e759e71ab8f95d8bb3) > #11 0x7c74a09b86cf in g_main_context_dispatch > (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x5d6cf) (BuildId: > 116e142b9b52c8a4dfd403e759e71ab8f95d8bb3) > #12 0x57b49e5901cb in glib_pollfds_poll > /home/pm215/qemu/build/arm-clang/../../util/main-loop.c:290:9 > #13 0x57b49e5901cb in os_host_main_loop_wait > /home/pm215/qemu/build/arm-clang/../../util/main-loop.c:313:5 > #14 0x57b49e5901cb in main_loop_wait > /home/pm215/qemu/build/arm-clang/../../util/main-loop.c:592:11 > #15 0x57b49d5f5486 in qemu_main_loop > /home/pm215/qemu/build/arm-clang/../../system/runstate.c:948:9 > #16 0x57b49e42cdfb in qemu_default_main > 
/home/pm215/qemu/build/arm-clang/../../system/main.c:50:14 > #17 0x57b49e42cdd3 in main > /home/pm215/qemu/build/arm-clang/../../system/main.c:93:9 > > > > +/* Return expected downtime (unit: milliseconds) */ > > +int64_t migration_downtime_calc_expected(MigrationState *s) > > +{ > > + if (mig_stats.dirty_sync_count <= 1) { > > + return migrate_downtime_limit(); > > + } > > + > > + return mig_stats.dirty_bytes_last_sync / > > + migration_get_switchover_bw(s) * 1000; > > +} > > Presumably in this function migration_get_switchover_bw() returns 0, > so the (floating-point) division results in Infinity. That's fine > until we have to convert it to int64_t to return it, which is the > UB that the sanitizer is complaining about... True, I can easily reproduce the warning too. I'll send a patch. Thanks, -- Peter Xu ^ permalink raw reply [flat|nested] 27+ messages in thread
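One possible shape for such a fix (hypothetical — the actual patch had not been posted in this thread) is to guard the bandwidth before dividing, so a freshly started migration with zero observed throughput cannot produce an infinity that is then converted to int64_t (undefined behavior per C's floating-to-integer conversion rules):

```c
#include <stdint.h>

/* Guarded variant: avoid inf when the bandwidth estimate is still zero. */
static int64_t expected_downtime_ms_safe(uint64_t dirty_bytes_last_sync,
                                         double bw_bytes_per_sec,
                                         int64_t downtime_limit_ms)
{
    if (bw_bytes_per_sec <= 0.0) {
        /* No throughput observed yet; fall back to the configured limit */
        return downtime_limit_ms;
    }
    return (int64_t)(dirty_bytes_last_sync / bw_bytes_per_sec * 1000);
}
```

This also explains why the failure is load dependent: the query has to race in before the first BUFFER_DELAY window has produced a nonzero s->mbps.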
* [PULL 14/23] migration: Fix calculation of expected_downtime to take VFIO info 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (12 preceding siblings ...) 2026-05-05 20:26 ` [PULL 13/23] migration: Calculate expected downtime on demand Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 15/23] migration: Remember total dirty bytes in mig_stats Peter Xu ` (9 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Cédric Le Goater, Juraj Marcin QEMU provides an expected downtime for the whole system during migration, by remembering the total dirty RAM that we synced the last time, divided by the estimated switchover bandwidth. That was flawed when VFIO is taken into account: consider a VFIO GPU device that contains GBs of data to migrate during the stop phase. Those will not be accounted for in this math. Fix it by updating dirty_bytes_last_sync properly only when we go to the next iteration, rather than hiding this update in the RAM code. Meanwhile, fetch the total (rather than RAM-only) portion of dirty bytes, so as to include GPU device state too. Update the comment of the field to reflect its new meaning. After this change, the expected-downtime read from query-migrate should be very accurate even with VFIO devices involved. 
Tested-by: Cédric Le Goater <clg@redhat.com> Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-13-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- migration/migration-stats.h | 8 +++----- migration/migration.c | 11 ++++++++--- migration/ram.c | 1 - 3 files changed, 11 insertions(+), 9 deletions(-) diff --git a/migration/migration-stats.h b/migration/migration-stats.h index 326ddb0088..1775b916df 100644 --- a/migration/migration-stats.h +++ b/migration/migration-stats.h @@ -31,11 +31,9 @@ */ typedef struct { /* - * Number of bytes that were dirty last time that we synced with - * the guest memory. We use that to calculate the downtime. As - * the remaining dirty amounts to what we know that is still dirty - * since last iteration, not counting what the guest has dirtied - * since we synchronized bitmaps. + * Number of bytes that were reported dirty after the latest + * system-wise synchronization of dirty information. It is used to do + * best-effort estimation on expected downtime. */ uint64_t dirty_bytes_last_sync; /* diff --git a/migration/migration.c b/migration/migration.c index d740d9df85..ab09dcbcf4 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -3244,18 +3244,23 @@ static void migration_iteration_go_next(MigPendingData *pending) */ qemu_savevm_query_pending(pending, true); + /* + * Update the dirty information for the whole system for this + * iteration. This value is used to calculate expected downtime. + */ + qatomic_set(&mig_stats.dirty_bytes_last_sync, pending->total_bytes); + /* * Boost dirty sync count to reflect we finished one iteration. * * NOTE: we need to make sure when this happens (together with the * event sent below) all modules have slow-synced the pending data - * above. That means a write mem barrier, but qatomic_add() should be - * enough. + * above and updated corresponding fields (e.g. dirty_bytes_last_sync). 
* * It's because a mgmt could wait on the iteration event to query again * on pending data for policy changes (e.g. downtime adjustments). The * ordering will make sure the query will fetch the latest results from - * all the modules. + * all the modules on everything. */ qatomic_add(&mig_stats.dirty_sync_count, 1); diff --git a/migration/ram.c b/migration/ram.c index ecd4b6165c..fc38ffbf8a 100644 --- a/migration/ram.c +++ b/migration/ram.c @@ -1148,7 +1148,6 @@ static void migration_bitmap_sync(RAMState *rs, bool last_stage) RAMBLOCK_FOREACH_NOT_IGNORED(block) { ramblock_sync_dirty_bitmap(rs, block); } - qatomic_set(&mig_stats.dirty_bytes_last_sync, ram_bytes_remaining()); } } -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 15/23] migration: Remember total dirty bytes in mig_stats 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (13 preceding siblings ...) 2026-05-05 20:26 ` [PULL 14/23] migration: Fix calculation of expected_downtime to take VFIO info Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 16/23] migration/qapi: Introduce system-wide "remaining" reports Peter Xu ` (8 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Juraj Marcin Introduce this new counter to remember the total dirty bytes for the whole system. It will be used by the query-migrate command to fetch system-wide remaining data. A prior attempt was made to avoid this counter and query directly from all the modules in a QMP handler, but it exposed complexity both in migration state machine race conditions (where the query may be invoked at any point of the state machine) and in locking implications (where some of the query hooks may take the BQL, which is illegal at least in a QMP handler). For more information, see: https://lore.kernel.org/r/aeZMtxqrKWAMKzdN@x1.local This one-liner resolves everything, except that it is not as accurate. The hope is that it is a worthwhile trade-off, given the challenges above. Now there is one more reason to keep each invocation of save_live_iterate() lightweight: this counter only gets updated once per loop over all save_live_iterate() hooks. But that has always been the goal. 
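The counter above is written by the migration thread and read from a query handler, so the patch publishes it with qatomic_set()/qatomic_read(). A standalone C11 sketch of that single-writer/any-reader pattern (using stdatomic rather than QEMU's qatomic helpers, as an illustration only):

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * One writer (the migration thread) publishes the latest total; any
 * reader (e.g. a query handler) sees a torn-free, possibly slightly
 * stale value. Illustrative stand-in for qatomic_set()/qatomic_read().
 */
static _Atomic uint64_t dirty_bytes_total;

static void update_dirty_total(uint64_t total_bytes)
{
    atomic_store_explicit(&dirty_bytes_total, total_bytes,
                          memory_order_relaxed);
}

static uint64_t query_dirty_total(void)
{
    return atomic_load_explicit(&dirty_bytes_total, memory_order_relaxed);
}
```

Relaxed ordering is enough here because the value is a best-effort estimate; no other data is synchronized through it.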
Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-14-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- migration/migration-stats.h | 7 +++++++ migration/savevm.c | 7 +++++++ 2 files changed, 14 insertions(+) diff --git a/migration/migration-stats.h b/migration/migration-stats.h index 1775b916df..9f9a8eb9eb 100644 --- a/migration/migration-stats.h +++ b/migration/migration-stats.h @@ -36,6 +36,13 @@ typedef struct { * best-effort estimation on expected downtime. */ uint64_t dirty_bytes_last_sync; + /* + * Number of bytes that were reported dirty now. This is an estimate + * value and will be updated every time migration thread queries from + * modules in an iteration loop. It is used to provide best-effort + * estimation on total remaining data. + */ + uint64_t dirty_bytes_total; /* * Number of pages dirtied per second. */ diff --git a/migration/savevm.c b/migration/savevm.c index 39430470aa..d1dd696c17 100644 --- a/migration/savevm.c +++ b/migration/savevm.c @@ -1815,6 +1815,13 @@ void qemu_savevm_query_pending(MigPendingData *pending, bool exact) pending->total_bytes = pending->precopy_bytes + pending->stopcopy_bytes + pending->postcopy_bytes; + /* + * Update system remaining dirty bytes whenever QEMU queries. It will + * make the value to be not as accurate, but should still be pretty + * close to reality when this got invoked frequently while iterating. + */ + mig_stats.dirty_bytes_total = pending->total_bytes; + trace_qemu_savevm_query_pending(exact, pending->precopy_bytes, pending->stopcopy_bytes, pending->postcopy_bytes, -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 16/23] migration/qapi: Introduce system-wide "remaining" reports 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (14 preceding siblings ...) 2026-05-05 20:26 ` [PULL 15/23] migration: Remember total dirty bytes in mig_stats Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 17/23] migration/qapi: Update unit for avail-switchover-bandwidth Peter Xu ` (7 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Aseef Imran, Juraj Marcin, Dr. David Alan Gilbert, Markus Armbruster Currently, mgmt can only query for remaining RAM using QMP command "query-migrate" and monitor the "ram" section. There's no way to report system-wide remaining data including VFIO devices. It was not a problem before, because for a very long time RAM was the only part that mattered. After VFIO migration landed upstream, that may no longer be enough. There can be GPU devices that contain GBs of device state. Mgmt may want to know how much remains for special devices like VFIO, because all of it will be accounted as VM data to migrate and will contribute to downtime in the switchover phase. Add a new "remaining" field at the top level of query-migrate results, reflecting system-wide remaining data, which includes everything, VFIO devices included. This information will be useful for mgmt to implement a generic way of stall detection that covers all system resources. For example, when system-wide remaining data (especially if sampled right after each migration iteration) stops decreasing for a relatively long period of time, it may imply the migration is struggling to converge; mgmt can react based on how this value changes over time. Before this patch, "expected_downtime" almost played this role. 
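The stall-detection idea sketched above is mgmt-side policy, not QEMU code; one minimal, assumption-laden way a management tool could track the new "remaining" value (names and the threshold scheme are purely illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical mgmt-side sketch: flag a convergence stall when the
 * system-wide "remaining" value (as sampled from query-migrate after
 * each iteration) has not decreased for `threshold` consecutive
 * samples. Not a QEMU API.
 */
typedef struct {
    uint64_t last_remaining;   /* initialize to UINT64_MAX */
    unsigned flat_samples;     /* consecutive non-decreasing samples */
} StallDetector;

static bool stall_detector_sample(StallDetector *d, uint64_t remaining,
                                  unsigned threshold)
{
    if (remaining < d->last_remaining) {
        d->flat_samples = 0;   /* progress was made */
    } else {
        d->flat_samples++;     /* no decrease this sample */
    }
    d->last_remaining = remaining;
    return d->flat_samples >= threshold;
}
```

A tool could feed this from each query-migrate reply and, on a stall, raise the bandwidth limit, extend downtime-limit, or abort the migration.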
For example, monitoring "expected_downtime" at the beginning of each iteration can in most cases also reflect the progress of migration system-wide. That said, "expected_downtime" was always calculated based on a bandwidth value that can fluctuate if avail-switchover-bandwidth is not used. This new "remaining" field removes that part of the uncertainty for mgmt, whether or not avail-switchover-bandwidth is used. With the new field, HMP "info migrate" now reports this: (qemu) info migrate Status: active Time (ms): total=12080, setup=14, exp_down=300 Remaining: 1.36 GiB <--- this is the new line RAM info: Throughput (Mbps): 840.50 Sizes: pagesize=4 KiB, total=4.02 GiB Transfers: transferred=1.18 GiB, remain=1.36 GiB Channels: precopy=1.18 GiB, multifd=0 B, postcopy=0 B Page Types: normal=307923, zero=388148 Page Rates (pps): transfer=25660 Others: dirty_syncs=1 When VFIO is not involved, the value reported in the new field should be approximately the same as reported in the "remaining" field of the RAM section. It is only approximate because the system-wide remaining data is a cached value, which gets frequently updated by the migration core. OTOH, the RAM's remaining data is accurate. When VFIO is involved, the new value reported should normally be larger, because it will include the size of the VFIO remaining data too. Cc: Aseef Imran <aimran@redhat.com> Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Reviewed-by: Dr. 
David Alan Gilbert <dave@treblig.org> Acked-by: Markus Armbruster <armbru@redhat.com> # QAPI schema Link: https://lore.kernel.org/r/20260421202110.306051-15-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- qapi/migration.json | 4 ++++ migration/migration-hmp-cmds.c | 5 +++++ migration/migration.c | 7 +++++++ 3 files changed, 16 insertions(+) diff --git a/qapi/migration.json b/qapi/migration.json index ed475e4261..9a4817ec73 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -300,6 +300,9 @@ # average memory load of the virtual CPU indirectly. Note that # zero means guest doesn't dirty memory. (Since 8.1) # +# @remaining: amount of bytes remaining to be migrated system-wide, +# includes both RAM and all devices (like VFIO). (Since 11.1) +# # Features: # # @unstable: Members @postcopy-latency, @postcopy-vcpu-latency, @@ -310,6 +313,7 @@ ## { 'struct': 'MigrationInfo', 'data': {'*status': 'MigrationStatus', '*ram': 'MigrationRAMStats', + '*remaining': 'size', '*vfio': 'VfioStats', '*xbzrle-cache': 'XBZRLECacheStats', '*total-time': 'int', diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c index 4f6c1dbf89..2703448c52 100644 --- a/migration/migration-hmp-cmds.c +++ b/migration/migration-hmp-cmds.c @@ -178,6 +178,11 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict) } } + if (info->has_remaining) { + g_autofree char *remaining = size_to_str(info->remaining); + monitor_printf(mon, "Remaining: \t\t%s\n", remaining); + } + if (info->has_socket_address) { SocketAddressList *addr; diff --git a/migration/migration.c b/migration/migration.c index ab09dcbcf4..ecc69dc4d2 100644 --- a/migration/migration.c +++ b/migration/migration.c @@ -1076,6 +1076,12 @@ static void populate_time_info(MigrationInfo *info, MigrationState *s) } } +static void populate_global_info(MigrationInfo *info, MigrationState *s) +{ + info->has_remaining = true; + info->remaining = qatomic_read(&mig_stats.dirty_bytes_total); +} + static void 
populate_ram_info(MigrationInfo *info, MigrationState *s) { size_t page_size = qemu_target_page_size(); @@ -1177,6 +1183,7 @@ static void fill_source_migration_info(MigrationInfo *info) /* TODO add some postcopy stats */ populate_time_info(info, s); populate_ram_info(info, s); + populate_global_info(info, s); migration_populate_vfio_info(info); break; case MIGRATION_STATUS_COLO: -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 17/23] migration/qapi: Update unit for avail-switchover-bandwidth 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (15 preceding siblings ...) 2026-05-05 20:26 ` [PULL 16/23] migration/qapi: Introduce system-wide "remaining" reports Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 18/23] vfio/migration: Add tracepoints for precopy/stopcopy query ioctls Peter Xu ` (6 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Markus Armbruster, Juraj Marcin Add ", in bytes per second". Unfortunately indentations need to be updated completely, but no change on the rest. Cc: Markus Armbruster <armbru@redhat.com> Suggested-by: Juraj Marcin <jmarcin@redhat.com> Reviewed-by: Juraj Marcin <jmarcin@redhat.com> Acked-by: Markus Armbruster <armbru@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-16-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- qapi/migration.json | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/qapi/migration.json b/qapi/migration.json index 9a4817ec73..27a7970556 100644 --- a/qapi/migration.json +++ b/qapi/migration.json @@ -922,15 +922,15 @@ # (Since 2.8) # # @avail-switchover-bandwidth: to set the available bandwidth that -# migration can use during switchover phase. **Note:** this does -# not limit the bandwidth during switchover, but only for -# calculations when making decisions to switchover. By default, -# this value is zero, which means QEMU will estimate the bandwidth -# automatically. This can be set when the estimated value is not -# accurate, while the user is able to guarantee such bandwidth is -# available when switching over. When specified correctly, this -# can make the switchover decision much more accurate. -# (Since 8.2) +# migration can use during switchover phase, in bytes per +# second. 
**Note:** this does not limit the bandwidth during +# switchover, but only for calculations when making decisions to +# switchover. By default, this value is zero, which means QEMU +# will estimate the bandwidth automatically. This can be set +# when the estimated value is not accurate, while the user is +# able to guarantee such bandwidth is available when switching +# over. When specified correctly, this can make the switchover +# decision much more accurate. (Since 8.2) # # @downtime-limit: set maximum tolerated downtime for migration. # maximum downtime in milliseconds (Since 2.8) -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 18/23] vfio/migration: Add tracepoints for precopy/stopcopy query ioctls 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (16 preceding siblings ...) 2026-05-05 20:26 ` [PULL 17/23] migration/qapi: Update unit for avail-switchover-bandwidth Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 19/23] hw/riscv: iommu-trap: remove .impl.unaligned = true Peter Xu ` (5 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Avihai Horon, Cédric Le Goater Add two tracepoints for both precopy and stopcopy query ioctls. When at it, add one warn_report_once() for each of them when it fails. Reviewed-by: Avihai Horon <avihaih@nvidia.com> Tested-by: Cédric Le Goater <clg@redhat.com> Reviewed-by: Cédric Le Goater <clg@redhat.com> Link: https://lore.kernel.org/r/20260421202110.306051-17-peterx@redhat.com Signed-off-by: Peter Xu <peterx@redhat.com> --- hw/vfio/migration.c | 35 +++++++++++++++++++++++++---------- hw/vfio/trace-events | 2 ++ 2 files changed, 27 insertions(+), 10 deletions(-) diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c index e6e6a0d53d..150e28656e 100644 --- a/hw/vfio/migration.c +++ b/hw/vfio/migration.c @@ -329,6 +329,7 @@ static int vfio_query_stop_copy_size(VFIODevice *vbasedev) struct vfio_device_feature_mig_data_size *mig_data_size = (struct vfio_device_feature_mig_data_size *)feature->data; VFIOMigration *migration = vbasedev->migration; + int ret; feature->argsz = sizeof(buf); feature->flags = @@ -340,12 +341,19 @@ static int vfio_query_stop_copy_size(VFIODevice *vbasedev) * is reported so downtime limit won't be violated. 
*/ migration->stopcopy_size = VFIO_MIG_STOP_COPY_SIZE; - return -errno; + ret = -errno; + warn_report_once("VFIO device %s ioctl(VFIO_DEVICE_FEATURE) on " + "VFIO_DEVICE_FEATURE_MIG_DATA_SIZE failed (%d)", + vbasedev->name, ret); + } else { + migration->stopcopy_size = mig_data_size->stop_copy_length; + ret = 0; } - migration->stopcopy_size = mig_data_size->stop_copy_length; + trace_vfio_query_stop_copy_size(vbasedev->name, + migration->stopcopy_size, ret); - return 0; + return ret; } static int vfio_query_precopy_size(VFIOMigration *migration) @@ -353,18 +361,25 @@ static int vfio_query_precopy_size(VFIOMigration *migration) struct vfio_precopy_info precopy = { .argsz = sizeof(precopy), }; - - migration->precopy_init_size = 0; - migration->precopy_dirty_size = 0; + int ret; if (ioctl(migration->data_fd, VFIO_MIG_GET_PRECOPY_INFO, &precopy)) { - return -errno; + migration->precopy_init_size = 0; + migration->precopy_dirty_size = 0; + ret = -errno; + warn_report_once("VFIO device %s ioctl(VFIO_MIG_GET_PRECOPY_INFO) " + "failed (%d)", migration->vbasedev->name, ret); + } else { + migration->precopy_init_size = precopy.initial_bytes; + migration->precopy_dirty_size = precopy.dirty_bytes; + ret = 0; } - migration->precopy_init_size = precopy.initial_bytes; - migration->precopy_dirty_size = precopy.dirty_bytes; + trace_vfio_query_precopy_size(migration->vbasedev->name, + migration->precopy_init_size, + migration->precopy_dirty_size, ret); - return 0; + return ret; } /* Returns the size of saved data on success and -errno on error */ diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events index 287df0b8cb..2049159015 100644 --- a/hw/vfio/trace-events +++ b/hw/vfio/trace-events @@ -162,6 +162,8 @@ vfio_migration_realize(const char *name) " (%s)" vfio_migration_set_device_state(const char *name, const char *state) " (%s) state %s" vfio_migration_set_state(const char *name, const char *new_state, const char *recover_state) " (%s) new state %s, recover state %s" 
vfio_migration_state_notifier(const char *name, int state) " (%s) state %d" +vfio_query_precopy_size(const char *name, uint64_t init_size, uint64_t dirty_size, int ret) " (%s) init %"PRIu64" dirty %"PRIu64" ret %d" +vfio_query_stop_copy_size(const char *name, uint64_t size, int ret) " (%s) stopcopy size %"PRIu64" ret %d" vfio_save_block(const char *name, int data_size) " (%s) data_size %d" vfio_save_block_precopy_empty_hit(const char *name) " (%s)" vfio_save_cleanup(const char *name) " (%s)" -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 19/23] hw/riscv: iommu-trap: remove .impl.unaligned = true 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (17 preceding siblings ...) 2026-05-05 20:26 ` [PULL 18/23] vfio/migration: Add tracepoints for precopy/stopcopy query ioctls Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 20/23] hw/npcm7xx_fiu: Specify .impl for npcm7xx_fiu_flash_ops Peter Xu ` (4 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, CJ Chen, Tomoyuki Hirose, Daniel Henrique Barboza, Peter Maydell, Alistair Francis, Philippe Mathieu-Daudé From: CJ Chen <cjchen@igel.co.jp> The riscv_iommu_trap_ops MemoryRegionOps specifies that unaligned accesses are not valid for this device but that it does implement them. This doesn't make much sense, and we want to add an assertion that registered MRs don't specify this invalid combination of settings. Drop .impl.unaligned = true, with no behaviour change. 
Signed-off-by: CJ Chen <cjchen@igel.co.jp> Acked-by: Tomoyuki Hirose <hrstmyk811m@gmail.com> Reported-by: Tomoyuki Hirose <hrstmyk811m@gmail.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> [PMM: reworded commit message] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Link: https://lore.kernel.org/r/20260428093339.2087081-2-peter.maydell@linaro.org Signed-off-by: Peter Xu <peterx@redhat.com> --- hw/riscv/riscv-iommu.c | 1 - 1 file changed, 1 deletion(-) diff --git a/hw/riscv/riscv-iommu.c b/hw/riscv/riscv-iommu.c index 7ba3240552..2b40ab2ce0 100644 --- a/hw/riscv/riscv-iommu.c +++ b/hw/riscv/riscv-iommu.c @@ -2434,7 +2434,6 @@ static const MemoryRegionOps riscv_iommu_trap_ops = { .impl = { .min_access_size = 4, .max_access_size = 8, - .unaligned = true, }, .valid = { .min_access_size = 4, -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 20/23] hw/npcm7xx_fiu: Specify .impl for npcm7xx_fiu_flash_ops 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (18 preceding siblings ...) 2026-05-05 20:26 ` [PULL 19/23] hw/riscv: iommu-trap: remove .impl.unaligned = true Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 21/23] hw/xtensa/mx_pic: Specify xtensa_mx_pic_ops .impl settings Peter Xu ` (3 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Peter Maydell, CJ Chen From: Peter Maydell <peter.maydell@linaro.org> Currently npcm7xx_fiu_flash_ops provides no .impl substruct; this means that it gets the default of "implements 1, 2 and 4 byte aligned accesses". This is more constrained than the device permits in its .valid substruct, and also narrower than the functions are written to handle. Add a .impl substruct matching the .valid substruct; this means that all guest accesses are handled directly by the read and write functions, and are never synthesized by the memory subsystem performing multiple accesses to the device (which would not behave correctly, as these read and write functions have side effects). 
Based-on-a-patch-by: CJ Chen <cjchen@igel.co.jp> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Link: https://lore.kernel.org/r/20260428093339.2087081-3-peter.maydell@linaro.org Signed-off-by: Peter Xu <peterx@redhat.com> --- hw/ssi/npcm7xx_fiu.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/hw/ssi/npcm7xx_fiu.c b/hw/ssi/npcm7xx_fiu.c index 02707de350..2d5bed005a 100644 --- a/hw/ssi/npcm7xx_fiu.c +++ b/hw/ssi/npcm7xx_fiu.c @@ -250,6 +250,11 @@ static const MemoryRegionOps npcm7xx_fiu_flash_ops = { .read = npcm7xx_fiu_flash_read, .write = npcm7xx_fiu_flash_write, .endianness = DEVICE_LITTLE_ENDIAN, + .impl = { + .min_access_size = 1, + .max_access_size = 8, + .unaligned = true, + }, .valid = { .min_access_size = 1, .max_access_size = 8, -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 21/23] hw/xtensa/mx_pic: Specify xtensa_mx_pic_ops .impl settings 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (19 preceding siblings ...) 2026-05-05 20:26 ` [PULL 20/23] hw/npcm7xx_fiu: Specify .impl for npcm7xx_fiu_flash_ops Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 22/23] system/memory: assert on invalid MemoryRegionOps .unaligned combo Peter Xu ` (2 subsequent siblings) 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, Peter Maydell, CJ Chen, Max Filippov From: Peter Maydell <peter.maydell@linaro.org> The xtensa mx-pic interrupt controller has a rather odd register setup, where some registers are 32 bits but are decoded at offsets only one apart from each other. The QEMU implementation handles this correctly, but it did not set .impl.unaligned = true. This has worked until now because QEMU has entirely ignored .impl.unaligned, and just allowed through unaligned accesses when .valid.unaligned is set. To allow the possibility of properly implementing synthesis of unaligned accesses by the memory subsystem when they are valid but the device doesn't implement them, and for clarity of intention, state explicitly that this MR's read and write functions directly handle unaligned accesses, by setting .impl.unaligned = true. While we are adjusting the MemoryRegionOps, we also set the minimum and maximum allowed access sizes. Since the only way to get at this device is via the CPU's RER and WER instructions, which always operate at 32-bit sizes (see the HELPER(rer) and HELPER(wer) functions in target/xtensa/op_helper.c), we know we will always get 32-bit accesses. Specify explicitly that that is what is valid and implemented for the MR. Add a comment to clarify that the hardware behaviour here is not "true memory-mapped registers", so the odd-looking implementation is correct. 
Based-on-a-patch-by: CJ Chen <cjchen@igel.co.jp> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> Link: https://lore.kernel.org/r/20260428093339.2087081-4-peter.maydell@linaro.org Signed-off-by: Peter Xu <peterx@redhat.com> --- hw/xtensa/mx_pic.c | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/hw/xtensa/mx_pic.c b/hw/xtensa/mx_pic.c index 07c3731aef..098c1aaf85 100644 --- a/hw/xtensa/mx_pic.c +++ b/hw/xtensa/mx_pic.c @@ -69,6 +69,26 @@ struct XtensaMxPic { } cpu[MX_MAX_CPU]; }; +/* + * Note that decode for these registers is rather strange by the usual + * MMIO standards -- the MIROUT and MIPICAUSE areas can be read and + * written at 32-bit length, returning different values for each byte + * offset, because the low bits of the address are treated as selecting + * an IRQ or a processor: + * + * 00nn 0...0p..p Interrupt Routing, route IRQ n to processor p + * 01pp 0...0d..d 16 bits (d) 'ored' as single IPI to processor p + * + * This is because (like x86 IO port in/out accesses) the offset is + * not a memory-mapped address but is really a register number, + * accessed via the Xtensa RER/WER "external register" instructions. + * + * We set .valid and .impl to both allow unaligned = true to permit + * these byte-offsets. Because this device is not a true memory mapped + * device but is accessible only via the Xtensa RER/WER "external + * register" interface, all accesses are guaranteed 32 bits. 
+ */ + static uint64_t xtensa_mx_pic_ext_reg_read(void *opaque, hwaddr offset, unsigned size) { @@ -267,7 +287,14 @@ static const MemoryRegionOps xtensa_mx_pic_ops = { .read = xtensa_mx_pic_ext_reg_read, .write = xtensa_mx_pic_ext_reg_write, .endianness = DEVICE_NATIVE_ENDIAN, + .impl = { + .min_access_size = 4, + .max_access_size = 4, + .unaligned = true, + }, .valid = { + .min_access_size = 4, + .max_access_size = 4, .unaligned = true, }, }; -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 22/23] system/memory: assert on invalid MemoryRegionOps .unaligned combo 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (20 preceding siblings ...) 2026-05-05 20:26 ` [PULL 21/23] hw/xtensa/mx_pic: Specify xtensa_mx_pic_ops .impl settings Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-05 20:26 ` [PULL 23/23] tests/qtest/migration: Fix A-B file build Peter Xu 2026-05-06 18:19 ` [PULL 00/23] Next patches Stefan Hajnoczi 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu, CJ Chen, Tomoyuki Hirose, Philippe Mathieu-Daudé, Peter Maydell From: CJ Chen <cjchen@igel.co.jp> The pattern of .valid.unaligned = false combined with .impl.unaligned = true is effectively contradictory. The .valid structure indicates that unaligned accesses should be rejected at the access validation phase, yet .impl suggests the underlying device implementation can handle unaligned operations. As a result, the upper-layer code will never even reach the .impl logic. Add an assertion that the MemoryRegionOps doesn't specify this invalid combination. 
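The invariant the new g_assert() enforces can be shown in isolation. In this sketch the field names mirror MemoryRegionOps, but the types are standalone stand-ins, not QEMU code:

```c
#include <assert.h>
#include <stdbool.h>

/* Cut-down stand-in for the .valid/.impl substructs of MemoryRegionOps. */
typedef struct {
    struct { bool unaligned; } valid;
    struct { bool unaligned; } impl;
} Ops;

/*
 * Invalid only when .impl claims unaligned support that .valid rejects:
 * such accesses are filtered out at validation, so the .impl setting
 * could never take effect.
 */
static bool ops_unaligned_combo_valid(const Ops *ops)
{
    return !(ops->impl.unaligned && !ops->valid.unaligned);
}
```

All other combinations remain legal, including valid-but-not-implemented unaligned accesses, which the memory subsystem may synthesize from aligned ones.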
Signed-off-by: CJ Chen <cjchen@igel.co.jp> Tested-by: CJ Chen <cjchen@igel.co.jp> Suggested-by: Peter Xu <peterx@redhat.com> Acked-by: Tomoyuki Hirose <hrstmyk811m@gmail.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> [PMM: tweaked commit message] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Link: https://lore.kernel.org/r/20260428093339.2087081-5-peter.maydell@linaro.org Signed-off-by: Peter Xu <peterx@redhat.com> --- system/memory.c | 1 + 1 file changed, 1 insertion(+) diff --git a/system/memory.c b/system/memory.c index 225bbe38c3..739ba11da6 100644 --- a/system/memory.c +++ b/system/memory.c @@ -1573,6 +1573,7 @@ void memory_region_init_io(MemoryRegion *mr, Object *owner, const MemoryRegionOps *ops, void *opaque, const char *name, uint64_t size) { + g_assert(!ops || !(ops->impl.unaligned && !ops->valid.unaligned)); memory_region_init(mr, owner, name, size); memory_region_set_ops(mr, ops, opaque); } -- 2.53.0 ^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PULL 23/23] tests/qtest/migration: Fix A-B file build 2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu ` (21 preceding siblings ...) 2026-05-05 20:26 ` [PULL 22/23] system/memory: assert on invalid MemoryRegionOps .unaligned combo Peter Xu @ 2026-05-05 20:26 ` Peter Xu 2026-05-06 18:19 ` [PULL 00/23] Next patches Stefan Hajnoczi 23 siblings, 0 replies; 27+ messages in thread From: Peter Xu @ 2026-05-05 20:26 UTC (permalink / raw) To: qemu-devel; +Cc: Fabiano Rosas, Paolo Bonzini, Peter Xu From: Fabiano Rosas <farosas@suse.de> The generation of the guest workloads for migration-test is all broken: 1) We moved code around and forgot to update the includes from the guest workloads at qtest/migration/<ARCH>/. 2) Since this code doesn't build by default due to the need for cross compilation, we also broke the s390x code entirely by removing the libc implementation from sclp.c. 3) The build adds a blank line at the end of the header files due to how the sed command used to remove extra variable declarations is written. 4) It must have been a long time since we last built this thing, because powerpc compilers used to output BE by default; nowadays we need -mbig-endian to force it. 5) The format of comments needs fixing to please checkpatch. Fix everything. Most changes are trivial, except for s390x where I had to reimplement some of the routines from sclp.c. Compiling that object file in doesn't work because of its various includes that this standalone code can't provide. 
Testing this change (assuming x86 host):

  $ cd build
  $ pushd tests/qtest/migration
  $ make clean
  $ make i386
  $ make CROSS_PREFIX=your-favorite-cross-compiler- s390x
  $ make CROSS_PREFIX=your-favorite-cross-compiler- aarch64
  $ make CROSS_PREFIX=your-favorite-cross-compiler- ppc64
  $ popd
  $ make -j$(nproc)  # must build the tests again
  $ QTEST_QEMU_BINARY=./qemu-system-<ARCH> migration-test

Fixes: 9f4278837d ("pc-bios/s390-ccw: Use the libc from SLOF and remove sclp prints")
Fixes: e1803dabdc ("tests/qtest/migration: Move common test code")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Link: https://lore.kernel.org/r/20260429231025.30818-1-farosas@suse.de
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tests/qtest/migration/aarch64/a-b-kernel.h |   7 +-
 tests/qtest/migration/bootfile.h           |  11 +
 tests/qtest/migration/i386/a-b-bootblock.h |   8 +-
 tests/qtest/migration/ppc64/a-b-kernel.h   |   8 +-
 tests/qtest/migration/s390x/a-b-bios.h     | 611 +++++++++++++--------
 tests/qtest/migration/s390x/a-b-bios.c     |  66 ++-
 tests/qtest/migration/Makefile             |  11 +-
 tests/qtest/migration/aarch64/Makefile     |   4 +-
 tests/qtest/migration/aarch64/a-b-kernel.S |   2 +-
 tests/qtest/migration/i386/Makefile        |   4 +-
 tests/qtest/migration/i386/a-b-bootblock.S |   2 +-
 tests/qtest/migration/ppc64/Makefile       |   6 +-
 tests/qtest/migration/ppc64/a-b-kernel.S   |   2 +-
 tests/qtest/migration/s390x/Makefile       |   8 +-
 14 files changed, 492 insertions(+), 258 deletions(-)

diff --git a/tests/qtest/migration/aarch64/a-b-kernel.h b/tests/qtest/migration/aarch64/a-b-kernel.h
index 34e518d061..c444cdaf73 100644
--- a/tests/qtest/migration/aarch64/a-b-kernel.h
+++ b/tests/qtest/migration/aarch64/a-b-kernel.h
@@ -1,6 +1,7 @@
-/* This file is automatically generated from the assembly file in
- * tests/migration/aarch64. Edit that file and then run "make all"
- * inside tests/migration to update, and then remember to send both
+/*
+ * This file is automatically generated from the assembly file in
+ * tests/qtest/migration/aarch64. Edit that file and then run "make"
+ * inside tests/qtest/migration to update, and then remember to send both
  * the header and the assembler differences in your patch submission.
  */
 unsigned char aarch64_kernel[] = {
diff --git a/tests/qtest/migration/bootfile.h b/tests/qtest/migration/bootfile.h
index 96e784b163..0ce5b34433 100644
--- a/tests/qtest/migration/bootfile.h
+++ b/tests/qtest/migration/bootfile.h
@@ -8,6 +8,15 @@
 #ifndef BOOTFILE_H
 #define BOOTFILE_H
 
+/*
+ * This file is included from qtest, but also from the assembly code
+ * that generates the header file containing the guest workload. Don't
+ * expose the function declarations and non-standard types to it.
+ */
+#ifdef __ASSEMBLER__
+#define MIGRATION_GUEST_CODE
+#endif
+
 /* Common */
 #define TEST_MEM_PAGE_SIZE 4096
 
@@ -33,8 +42,10 @@
  */
 #define ARM_TEST_MAX_KERNEL_SIZE (512 * 1024)
 
+#ifndef MIGRATION_GUEST_CODE
 void bootfile_delete(void);
 char *bootfile_create(const char *arch, const char *dir, bool suspend_me);
 char *bootfile_get(void);
+#endif
 
 #endif /* BOOTFILE_H */
diff --git a/tests/qtest/migration/i386/a-b-bootblock.h b/tests/qtest/migration/i386/a-b-bootblock.h
index c83f8711db..3b5de60161 100644
--- a/tests/qtest/migration/i386/a-b-bootblock.h
+++ b/tests/qtest/migration/i386/a-b-bootblock.h
@@ -1,6 +1,7 @@
-/* This file is automatically generated from the assembly file in
- * tests/migration/i386. Edit that file and then run "make all"
- * inside tests/migration to update, and then remember to send both
+/*
+ * This file is automatically generated from the assembly file in
+ * tests/qtest/migration/i386. Edit that file and then run "make"
+ * inside tests/qtest/migration to update, and then remember to send both
  * the header and the assembler differences in your patch submission.
  */
 unsigned char x86_bootsect[] = {
@@ -48,7 +49,6 @@ unsigned char x86_bootsect[] = {
 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x55, 0xaa
 };
-
 #define SYM_do_zero 0x00007c3d
 #define SYM_gdt 0x00007ca0
 #define SYM_gdtdesc 0x00007cb8
diff --git a/tests/qtest/migration/ppc64/a-b-kernel.h b/tests/qtest/migration/ppc64/a-b-kernel.h
index 673317efdb..8dc2bcd61a 100644
--- a/tests/qtest/migration/ppc64/a-b-kernel.h
+++ b/tests/qtest/migration/ppc64/a-b-kernel.h
@@ -1,6 +1,7 @@
-/* This file is automatically generated from the assembly file in
- * tests/migration/ppc64. Edit that file and then run "make all"
- * inside tests/migration to update, and then remember to send both
+/*
+ * This file is automatically generated from the assembly file in
+ * tests/qtest/migration/ppc64. Edit that file and then run "make"
+ * inside tests/qtest/migration to update, and then remember to send both
  * the header and the assembler differences in your patch submission.
  */
 unsigned char ppc64_kernel[] = {
@@ -39,4 +40,3 @@ unsigned char ppc64_kernel[] = {
 0x38, 0xa0, 0x00, 0x01, 0x38, 0xc0, 0x00, 0x42, 0x78, 0xc6, 0xc1, 0xc6,
 0x44, 0x00, 0x00, 0x22, 0x4b, 0xff, 0xff, 0xcc
 };
-
diff --git a/tests/qtest/migration/s390x/a-b-bios.h b/tests/qtest/migration/s390x/a-b-bios.h
index 96103dadbb..f8e385c75c 100644
--- a/tests/qtest/migration/s390x/a-b-bios.h
+++ b/tests/qtest/migration/s390x/a-b-bios.h
@@ -1,15 +1,16 @@
-/* This file is automatically generated from the a-b-bios.c file in
- * tests/migration/s390x. Edit that file and then run "make all"
- * inside tests/migration to update, and then remember to send both
+/*
+ * This file is automatically generated from the assembly file in
+ * tests/qtest/migration/s390x. Edit that file and then run "make"
+ * inside tests/qtest/migration to update, and then remember to send both
  * the header and the assembler differences in your patch submission.
*/ unsigned char s390x_elf[] = { 0x7f, 0x45, 0x4c, 0x46, 0x02, 0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x16, 0x00, 0x00, 0x00, 0x01, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xa8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x80, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x98, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x38, 0x00, 0x07, 0x00, 0x40, - 0x00, 0x0d, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x04, + 0x00, 0x10, 0x00, 0x0f, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x88, 0x00, 0x00, 0x00, 0x00, @@ -21,171 +22,86 @@ unsigned char s390x_elf[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xac, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xac, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x88, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x88, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x06, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xb8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x17, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xb8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x38, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x01, 0x18, 0x48, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0xd8, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x1e, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0xd8, + 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x30, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x01, 0x21, 0x28, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x07, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xb8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xb8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x01, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x20, + 0x00, 0x00, 0x0e, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0xd8, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0xd8, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x01, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x64, 0x74, 0xe5, 0x51, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x64, 0x74, 0xe5, 0x52, 0x00, 0x00, 0x00, 0x04, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xb8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x17, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xb8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x38, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x01, 0x38, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0xd8, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x1e, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0xd8, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x28, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x01, 0x28, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x2f, 0x6c, 0x69, 0x62, 0x2f, 0x6c, 0x64, 0x36, 0x34, 0x2e, 0x73, 0x6f, - 0x2e, 0x31, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, - 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x2e, 0x31, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x02, 0x48, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x03, 0xa8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0c, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0xf0, 0xeb, 0xef, 0xf0, 0x70, - 0x00, 0x24, 0xa7, 0xfb, 0xff, 0x60, 0xc0, 0xe5, 0x00, 0x00, 0x01, 0x5f, - 0xc0, 0x20, 0x00, 0x00, 0x02, 0xa8, 0xc0, 0xe5, 0x00, 0x00, 0x01, 0x75, - 0xa5, 0x2e, 0x00, 0x10, 0xa7, 0x19, 0x63, 0x00, 0x92, 0x00, 0x20, 0x00, - 0xa7, 0x2b, 0x10, 0x00, 0xa7, 0x17, 0xff, 0xfc, 0xa5, 0x1e, 0x00, 0x10, - 0xa7, 0x29, 0x63, 0x00, 0xe3, 0x30, 0x10, 0x00, 0x00, 0x90, 0xa7, 0x3a, - 0x00, 0x01, 0x42, 0x30, 0x10, 0x00, 0xa7, 0x1b, 0x10, 0x00, 0xa7, 0x27, - 0xff, 0xf7, 0xc0, 0x20, 0x00, 0x00, 0x02, 0x8a, 0xc0, 0xe5, 0x00, 0x00, - 0x01, 0x56, 0xa7, 0xf4, 0xff, 0xeb, 0x07, 0x07, 0xc0, 0xf0, 0x00, 0x00, - 0x4e, 0x5c, 0xc0, 0x20, 0x00, 0x00, 0x00, 0x7d, 0xe3, 0x20, 0x20, 0x00, - 0x00, 0x04, 0xc0, 0x30, 0x00, 0x00, 0x96, 0xa3, 0xb9, 0x0b, 0x00, 0x32, - 0xb9, 0x02, 0x00, 0x33, 0xa7, 0x84, 0x00, 0x19, 0xa7, 0x3b, 0xff, 0xff, - 0xeb, 0x43, 0x00, 0x08, 0x00, 0x0c, 0xb9, 0x02, 0x00, 0x44, 0xb9, 0x04, - 0x00, 0x12, 0xa7, 0x84, 0x00, 0x09, 0xd7, 0xff, 0x10, 0x00, 0x10, 0x00, - 0x41, 0x10, 0x11, 0x00, 0xa7, 0x47, 0xff, 0xfb, 0xc0, 0x20, 0x00, 0x00, - 0x00, 0x0d, 0x44, 0x30, 0x20, 0x00, 0xc0, 0x20, 0x00, 0x00, 0x00, 0x5b, - 0xd2, 0x0f, 0x01, 0xd0, 0x20, 
0x00, 0xa7, 0xf4, 0xff, 0xa1, 0xd7, 0x00, - 0x10, 0x00, 0x10, 0x00, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x50, 0xb2, 0xb2, - 0x10, 0x00, 0xa7, 0xf4, 0x00, 0x00, 0xeb, 0x00, 0xf0, 0x00, 0x00, 0x25, - 0x96, 0x02, 0xf0, 0x06, 0xeb, 0x00, 0xf0, 0x00, 0x00, 0x2f, 0xc0, 0x10, - 0x00, 0x00, 0x00, 0x2a, 0xe3, 0x10, 0x01, 0xb8, 0x00, 0x24, 0xc0, 0x10, - 0x00, 0x00, 0x00, 0x4b, 0xd2, 0x07, 0x01, 0xb0, 0x10, 0x00, 0xc0, 0x10, - 0x00, 0x00, 0x00, 0x3d, 0xb2, 0xb2, 0x10, 0x00, 0xeb, 0x66, 0xf0, 0x00, - 0x00, 0x25, 0x96, 0xff, 0xf0, 0x04, 0xeb, 0x66, 0xf0, 0x00, 0x00, 0x2f, - 0xc0, 0x10, 0x00, 0x00, 0x00, 0x1a, 0xe3, 0x10, 0x01, 0xf8, 0x00, 0x24, - 0xc0, 0x10, 0x00, 0x00, 0x00, 0x36, 0xd2, 0x07, 0x01, 0xf0, 0x10, 0x00, - 0xc0, 0x10, 0x00, 0x00, 0x00, 0x24, 0xb2, 0xb2, 0x10, 0x00, 0xeb, 0x00, - 0xf0, 0x00, 0x00, 0x25, 0x94, 0xfd, 0xf0, 0x06, 0xeb, 0x00, 0xf0, 0x00, - 0x00, 0x2f, 0x07, 0xfe, 0xeb, 0x66, 0xf0, 0x00, 0x00, 0x25, 0x94, 0x00, - 0xf0, 0x04, 0xeb, 0x66, 0xf0, 0x00, 0x00, 0x2f, 0x07, 0xfe, 0x07, 0x07, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0xf0, 0x00, 0x02, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x07, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x70, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x00, + 0xeb, 0xdf, 0xf0, 0x68, 0x00, 0x24, 0xc0, 0xd0, 0x00, 0x00, 0x01, 0x05, + 0xa7, 0xfb, 0xff, 0x60, 0xc0, 0x10, 0x00, 0x00, 0x96, 0xc0, 0xa7, 0x28, + 0x00, 0x1c, 0x40, 0x20, 0x10, 0x00, 0xa5, 0x2c, 0x00, 0x04, 0xe3, 0x20, + 0x10, 0x0a, 0x00, 0x24, 0xa7, 0x28, 0x00, 0x40, 0x40, 0x20, 0x10, 0x12, + 0x58, 0x20, 0xd0, 0x00, 0xb2, 0x20, 0x00, 0x21, 0xb2, 0x22, 0x00, 0x30, + 0x88, 0x30, 0x00, 0x1c, 0xc0, 0xe5, 0x00, 0x00, 0x00, 0x63, 0xa7, 0x29, + 0x00, 0x41, 0xc0, 0xe5, 0x00, 0x00, 0x00, 0xbf, 0xa5, 0x2e, 0x00, 0x10, + 0xa7, 0x19, 0x63, 0x00, 0x92, 
0x00, 0x20, 0x00, 0xa7, 0x2b, 0x10, 0x00, + 0xa7, 0x17, 0xff, 0xfc, 0xa5, 0x1e, 0x00, 0x10, 0xa7, 0x29, 0x63, 0x00, + 0xe3, 0x30, 0x10, 0x00, 0x00, 0x90, 0xa7, 0x3a, 0x00, 0x01, 0x42, 0x30, + 0x10, 0x00, 0xa7, 0x1b, 0x10, 0x00, 0xa7, 0x27, 0xff, 0xf7, 0xa7, 0x29, + 0x00, 0x42, 0xc0, 0xe5, 0x00, 0x00, 0x00, 0xa1, 0xa7, 0xf4, 0xff, 0xec, + 0xc0, 0xf0, 0x00, 0x00, 0x56, 0x30, 0xc0, 0x20, 0x00, 0x00, 0x0e, 0x7d, + 0xe3, 0x20, 0x20, 0x00, 0x00, 0x04, 0xc0, 0x30, 0x00, 0x00, 0x9e, 0x77, + 0xb9, 0x0b, 0x00, 0x32, 0xb9, 0x02, 0x00, 0x33, 0xa7, 0x84, 0x00, 0x19, + 0xa7, 0x3b, 0xff, 0xff, 0xeb, 0x43, 0x00, 0x08, 0x00, 0x0c, 0xb9, 0x02, + 0x00, 0x44, 0xb9, 0x04, 0x00, 0x12, 0xa7, 0x84, 0x00, 0x09, 0xd7, 0xff, + 0x10, 0x00, 0x10, 0x00, 0x41, 0x10, 0x11, 0x00, 0xa7, 0x47, 0xff, 0xfb, + 0xc0, 0x20, 0x00, 0x00, 0x00, 0x0d, 0x44, 0x30, 0x20, 0x00, 0xc0, 0x20, + 0x00, 0x00, 0x00, 0x57, 0xd2, 0x0f, 0x01, 0xd0, 0x20, 0x00, 0xa7, 0xf4, + 0xff, 0x89, 0xd7, 0x00, 0x10, 0x00, 0x10, 0x00, 0xc0, 0x10, 0x00, 0x00, + 0x00, 0x4c, 0xb2, 0xb2, 0x10, 0x00, 0xa7, 0xf4, 0x00, 0x00, 0xeb, 0x00, + 0xf0, 0x00, 0x00, 0x25, 0x96, 0x02, 0xf0, 0x06, 0xeb, 0x00, 0xf0, 0x00, + 0x00, 0x2f, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x2a, 0xe3, 0x10, 0x01, 0xb8, + 0x00, 0x24, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x47, 0xd2, 0x07, 0x01, 0xb0, + 0x10, 0x00, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x39, 0xb2, 0xb2, 0x10, 0x00, + 0xeb, 0x66, 0xf0, 0x00, 0x00, 0x25, 0x96, 0xff, 0xf0, 0x04, 0xeb, 0x66, + 0xf0, 0x00, 0x00, 0x2f, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x1a, 0xe3, 0x10, + 0x01, 0xf8, 0x00, 0x24, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x32, 0xd2, 0x07, + 0x01, 0xf0, 0x10, 0x00, 0xc0, 0x10, 0x00, 0x00, 0x00, 0x20, 0xb2, 0xb2, + 0x10, 0x00, 0xeb, 0x00, 0xf0, 0x00, 0x00, 0x25, 0x94, 0xfd, 0xf0, 0x06, + 0xeb, 0x00, 0xf0, 0x00, 0x00, 0x2f, 0x07, 0xfe, 0xeb, 0x66, 0xf0, 0x00, + 0x00, 0x25, 0x94, 0x00, 0xf0, 0x04, 0xeb, 0x66, 0xf0, 0x00, 0x00, 0x2f, + 0x07, 0xfe, 0x07, 0x07, 0x00, 0x02, 0x00, 0x01, 0x80, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x03, 0x02, 0x00, 0x01, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x03, 0x02, 0x00, 0x01, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x80, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x01, 0x80, 0x00, 0x00, 0x00, 0xeb, 0xbf, 0xf0, 0x58, - 0x00, 0x24, 0xc0, 0x10, 0x00, 0x00, 0x4e, 0x0d, 0xa7, 0xfb, 0xff, 0x60, - 0xb2, 0x20, 0x00, 0x21, 0xb2, 0x22, 0x00, 0xb0, 0x88, 0xb0, 0x00, 0x1c, - 0xc0, 0xe5, 0xff, 0xff, 0xff, 0x91, 0xa7, 0xbe, 0x00, 0x03, 0xa7, 0x84, - 0x00, 0x13, 0xa7, 0xbe, 0x00, 0x02, 0xa7, 0x28, 0x00, 0x00, 0xa7, 0x74, - 0x00, 0x04, 0xa7, 0x28, 0xff, 0xfe, 0xe3, 0x40, 0xf1, 0x10, 0x00, 0x04, - 0xb9, 0x14, 0x00, 0x22, 0xeb, 0xbf, 0xf0, 0xf8, 0x00, 0x04, 0x07, 0xf4, - 0xa7, 0x28, 0xff, 0xff, 0xa7, 0xf4, 0xff, 0xf5, 0x07, 0x07, 0x07, 0x07, - 0xeb, 0xbf, 0xf0, 0x58, 0x00, 0x24, 0xc0, 0xd0, 0x00, 0x00, 0x01, 0x25, - 0xa7, 0xfb, 0xff, 0x60, 0xa7, 0xb9, 0x00, 0x00, 0xa7, 0x19, 0x00, 0x00, - 0xc0, 0x40, 0x00, 0x00, 0x4d, 0xd8, 0xa7, 0x3b, 0x00, 0x01, 0xa7, 0x37, - 0x00, 0x23, 0xc0, 0x20, 0x00, 0x00, 0x4d, 0xd1, 0x18, 0x31, 0xa7, 0x1a, - 0x00, 0x06, 0x40, 0x10, 0x20, 0x08, 0xa7, 0x3a, 0x00, 0x0e, 0xa7, 0x18, - 0x1a, 0x00, 0x40, 0x30, 0x20, 0x00, 0x92, 0x00, 0x20, 0x02, 0x40, 0x10, - 0x20, 0x0a, 0xe3, 0x20, 0xd0, 0x00, 0x00, 0x04, 0xc0, 0xe5, 0xff, 0xff, - 0xff, 0xac, 0xe3, 0x40, 0xf1, 0x10, 0x00, 0x04, 0xb9, 0x04, 0x00, 0x2b, - 0xeb, 0xbf, 0xf0, 0xf8, 0x00, 0x04, 0x07, 0xf4, 0xb9, 0x04, 0x00, 0x51, - 0xa7, 0x5b, 0x00, 0x01, 0xa7, 0x09, 0x0f, 0xf7, 0xb9, 0x21, 0x00, 0x50, - 0xa7, 0x24, 0xff, 0xd7, 0x41, 0xeb, 0x20, 0x00, 0x95, 0x0a, 0xe0, 0x00, - 0xa7, 0x74, 0x00, 0x08, 0x41, 0x11, 0x40, 0x0e, 0x92, 0x0d, 0x10, 0x00, - 0xb9, 0x04, 0x00, 0x15, 0x43, 0x5b, 0x20, 0x00, 0x42, 0x51, 0x40, 0x0e, - 0xa7, 0xbb, 0x00, 0x01, 0x41, 0x10, 0x10, 0x01, 0xa7, 0xf4, 0xff, 0xbf, - 0xc0, 0x50, 0x00, 0x00, 0x00, 0xd8, 0xc0, 0x10, 0x00, 0x00, 0x4d, 0x8d, - 0xa7, 0x48, 0x00, 0x1c, 0x40, 
0x40, 0x10, 0x00, 0x50, 0x20, 0x10, 0x0c, - 0xa7, 0x48, 0x00, 0x04, 0xe3, 0x20, 0x50, 0x00, 0x00, 0x04, 0x40, 0x40, - 0x10, 0x0a, 0x50, 0x30, 0x10, 0x10, 0xc0, 0xf4, 0xff, 0xff, 0xff, 0x6b, - 0xa7, 0x39, 0x00, 0x40, 0xa7, 0x29, 0x00, 0x00, 0xc0, 0xf4, 0xff, 0xff, - 0xff, 0xe4, 0x07, 0x07, 0xb9, 0x04, 0x00, 0x13, 0xa7, 0x2a, 0xff, 0xff, - 0xb9, 0x04, 0x00, 0x34, 0xa7, 0x48, 0x00, 0x01, 0x15, 0x24, 0xa7, 0x24, - 0x00, 0x07, 0xb9, 0x04, 0x00, 0x21, 0xc0, 0xf4, 0xff, 0xff, 0xff, 0x7f, - 0xa7, 0x29, 0xff, 0xff, 0x07, 0xfe, 0x07, 0x07, 0xa7, 0x39, 0x00, 0x00, - 0x41, 0x13, 0x20, 0x00, 0x95, 0x00, 0x10, 0x00, 0xa7, 0x74, 0x00, 0x05, - 0xc0, 0xf4, 0xff, 0xff, 0xff, 0x70, 0xa7, 0x3b, 0x00, 0x01, 0xa7, 0xf4, - 0xff, 0xf5, 0x07, 0x07, 0xeb, 0xbf, 0xf0, 0x58, 0x00, 0x24, 0xc0, 0xd0, - 0x00, 0x00, 0x00, 0x95, 0xa7, 0xfb, 0xff, 0x60, 0xb9, 0x04, 0x00, 0xb2, - 0xa7, 0x19, 0x00, 0x20, 0xc0, 0x20, 0x00, 0x00, 0x4d, 0x40, 0x92, 0x00, - 0x20, 0x00, 0xa7, 0x2b, 0x00, 0x01, 0xa7, 0x17, 0xff, 0xfc, 0xc0, 0x10, - 0x00, 0x00, 0x4d, 0x37, 0xa7, 0x28, 0x10, 0x00, 0x40, 0x20, 0x10, 0x00, - 0xe3, 0x20, 0xd0, 0x00, 0x00, 0x04, 0xc0, 0xe5, 0xff, 0xff, 0xff, 0x1d, - 0x12, 0x22, 0xa7, 0x74, 0x00, 0x19, 0xa7, 0x19, 0x00, 0x00, 0xc0, 0x40, - 0x00, 0x00, 0x00, 0x79, 0xa7, 0x39, 0x00, 0x08, 0xc0, 0x20, 0x00, 0x00, - 0x4d, 0x2c, 0x41, 0x21, 0x20, 0x00, 0xe3, 0x20, 0x20, 0x00, 0x00, 0x90, - 0x43, 0x22, 0x40, 0x00, 0x42, 0x21, 0xb0, 0x00, 0xa7, 0x1b, 0x00, 0x01, - 0xa7, 0x37, 0xff, 0xf2, 0xe3, 0x40, 0xf1, 0x10, 0x00, 0x04, 0xeb, 0xbf, - 0xf0, 0xf8, 0x00, 0x04, 0x07, 0xf4, 0x07, 0x07, 0xeb, 0xaf, 0xf0, 0x50, - 0x00, 0x24, 0xc0, 0xd0, 0x00, 0x00, 0x00, 0x55, 0xa7, 0xfb, 0xff, 0x60, - 0xa7, 0x19, 0x0f, 0xf8, 0xb9, 0x21, 0x00, 0x31, 0xb9, 0x04, 0x00, 0xa2, - 0xa7, 0xc4, 0x00, 0x2a, 0xa7, 0xb9, 0x0f, 0xf8, 0xc0, 0x10, 0x00, 0x00, - 0x4c, 0xf6, 0xa7, 0x28, 0x10, 0x00, 0x40, 0x20, 0x10, 0x00, 0x92, 0x00, - 0x10, 0x02, 0xe3, 0x20, 0xd0, 0x00, 0x00, 0x04, 0xc0, 0xe5, 0xff, 0xff, - 0xfe, 0xda, 0xa7, 0xbb, 0x00, 
0x01, 0xa7, 0x19, 0x00, 0x00, 0xa7, 0xb7, - 0x00, 0x17, 0xc0, 0x10, 0x00, 0x00, 0x4c, 0xe1, 0xe3, 0x40, 0xf1, 0x10, - 0x00, 0x04, 0xe3, 0x20, 0x10, 0x08, 0x00, 0x91, 0xa7, 0x2a, 0xff, 0xf9, - 0xb9, 0x14, 0x00, 0x22, 0xeb, 0xaf, 0xf0, 0xf0, 0x00, 0x04, 0x07, 0xf4, - 0xb9, 0x04, 0x00, 0xb3, 0xa7, 0xf4, 0xff, 0xd8, 0xc0, 0x20, 0x00, 0x00, - 0x4c, 0xcc, 0x41, 0x31, 0xa0, 0x00, 0x41, 0x21, 0x20, 0x00, 0xa7, 0x1b, - 0x00, 0x01, 0xd2, 0x00, 0x30, 0x00, 0x20, 0x0f, 0xa7, 0xf4, 0xff, 0xdd, - 0x07, 0x07, 0x07, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x76, 0x00, 0x05, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x78, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x02, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x77, 0x00, 0x05, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x20, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x3c, 0x28, 0x2b, 0x7c, 0x26, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x21, 0x24, 0x2a, 0x29, 0x3b, 0x2e, - 0x2d, 0x2f, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2c, - 0x25, 0x5f, 0x3e, 0x3f, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x60, 0x3a, 0x23, 0x40, 0x27, 0x3d, 0x22, 0x2e, 0x61, 0x62, 0x63, - 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, 0x70, 0x71, 0x72, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, - 0x79, 0x7a, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x2e, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 
0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, - 0x51, 0x52, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x53, 0x54, - 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, 0x2e, - 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x2e, 0x2e, - 0x2e, 0x2e, 0x2e, 0x2e, 0x41, 0x00, 0x42, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x6f, 0xff, 0xfe, 0xf5, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xd8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x02, 0x28, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xf8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x30, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x6f, 0xff, 0xff, 0xfb, - 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, - 0x6f, 0xff, 0xff, 0xf9, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x01, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x80, 0x00, 0x00, 0x00, 0x07, 0xfe, 0x07, 0x07, 0x07, 0x07, 0x07, 0x07, + 0xc0, 0x50, 0x00, 0x00, 0x00, 0x20, 0xc0, 0x10, 0x00, 0x00, 0x95, 0xe1, + 0x42, 0x20, 0x10, 0x0e, 0xa7, 0x28, 0x00, 0x0f, 0x40, 0x20, 0x10, 0x00, + 0x58, 0x20, 0x50, 0x04, 0x50, 
0x20, 0x10, 0x08, 0x92, 0x00, 0x10, 0x02, + 0x58, 0x20, 0x50, 0x00, 0xb2, 0x20, 0x00, 0x21, 0xb2, 0x22, 0x00, 0x30, + 0x88, 0x30, 0x00, 0x1c, 0xc0, 0xf4, 0xff, 0xff, 0xff, 0x85, 0x07, 0x07, + 0x07, 0x07, 0x07, 0x07, 0x00, 0x76, 0x00, 0x05, 0x00, 0x07, 0x1a, 0x00, + 0x00, 0x78, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, @@ -193,87 +109,340 @@ unsigned char s390x_elf[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x17, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x47, 0x43, 0x43, 0x3a, - 0x20, 0x28, 0x55, 0x62, 0x75, 0x6e, 0x74, 0x75, 0x20, 0x31, 0x31, 0x2e, - 0x34, 0x2e, 0x30, 0x2d, 0x31, 0x75, 0x62, 0x75, 0x6e, 0x74, 0x75, 0x31, - 0x7e, 0x32, 0x32, 0x2e, 0x30, 0x34, 0x29, 0x20, 0x31, 0x31, 0x2e, 0x34, - 0x2e, 0x30, 0x00, 0x00, 0x2e, 0x73, 0x68, 0x73, 0x74, 0x72, 0x74, 0x61, - 0x62, 0x00, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x00, 0x2e, 0x67, - 0x6e, 0x75, 0x2e, 0x68, 0x61, 0x73, 0x68, 0x00, 0x2e, 0x64, 0x79, 0x6e, - 0x73, 0x79, 0x6d, 0x00, 0x2e, 0x64, 0x79, 0x6e, 0x73, 0x74, 0x72, 0x00, - 0x2e, 0x72, 0x65, 0x6c, 0x61, 0x2e, 0x64, 0x79, 0x6e, 0x00, 0x2e, 0x74, - 0x65, 0x78, 0x74, 0x00, 0x2e, 0x72, 0x6f, 0x64, 0x61, 0x74, 0x61, 0x00, - 0x2e, 0x64, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x00, 0x2e, 0x67, 0x6f, - 0x74, 0x00, 0x2e, 0x62, 0x73, 0x73, 0x00, 0x2e, 0x63, 0x6f, 0x6d, 0x6d, - 0x65, 0x6e, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xc8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xc8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x0f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xd8, + 
0x00, 0x00, 0x00, 0x00, 0x6f, 0xff, 0xfe, 0xf5, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x50, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x20, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0b, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x02, 0x58, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, + 0x00, 0x00, 0x00, 0x00, 0x6f, 0xff, 0xff, 0xfb, 0x00, 0x00, 0x00, 0x00, + 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x6f, 0xff, 0xff, 0xf9, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x6f, 0xff, 0xff, 0xf6, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0xd8, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x00, + 0x47, 0x43, 0x43, 0x3a, 0x20, 0x28, 0x53, 0x55, 0x53, 0x45, 0x20, 0x4c, + 0x69, 0x6e, 0x75, 0x78, 0x29, 0x20, 0x31, 0x32, 0x2e, 0x33, 0x2e, 0x30, + 
0x00, 0x00, 0x2e, 0x73, 0x68, 0x73, 0x74, 0x72, 0x74, 0x61, 0x62, 0x00, + 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x70, 0x00, 0x2e, 0x67, 0x6e, 0x75, + 0x2e, 0x68, 0x61, 0x73, 0x68, 0x00, 0x2e, 0x64, 0x79, 0x6e, 0x73, 0x79, + 0x6d, 0x00, 0x2e, 0x64, 0x79, 0x6e, 0x73, 0x74, 0x72, 0x00, 0x2e, 0x72, + 0x65, 0x6c, 0x61, 0x2e, 0x64, 0x79, 0x6e, 0x00, 0x2e, 0x74, 0x65, 0x78, + 0x74, 0x00, 0x2e, 0x72, 0x6f, 0x64, 0x61, 0x74, 0x61, 0x00, 0x2e, 0x65, + 0x68, 0x5f, 0x66, 0x72, 0x61, 0x6d, 0x65, 0x00, 0x2e, 0x64, 0x79, 0x6e, + 0x61, 0x6d, 0x69, 0x63, 0x00, 0x2e, 0x67, 0x6f, 0x74, 0x00, 0x2e, 0x64, + 0x61, 0x74, 0x61, 0x00, 0x2e, 0x62, 0x73, 0x73, 0x00, 0x2e, 0x63, 0x6f, + 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x01, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xd8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00, 0x03, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1d, - 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xf8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x01, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, - 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, - 0x00, 0x00, 0x00, 0x25, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, - 
0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x28, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x28, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2d, 0x00, 0x00, 0x00, 0x04, + 0x00, 0x00, 0x01, 0xc8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xc8, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, + 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xd8, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x01, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x28, + 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, + 0x00, 0x00, 0x00, 0x13, 0x6f, 0xff, 0xff, 0xf6, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1d, 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x02, 0x30, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x30, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x03, + 0x00, 0x00, 0x02, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x20, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x00, 0x00, 0x00, 0x05, + 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x25, + 
0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x50, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x02, 0x50, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x2d, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x58, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x58, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x37, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x02, 0x70, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x70, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x37, - 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x48, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x02, 0x48, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x40, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3d, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x78, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x04, 0x78, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x3d, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x88, - 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x88, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x01, 0x24, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x88, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x88, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0x06, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4f, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x17, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xb8, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x20, 0x00, 0x00, 0x00, 0x04, + 0x00, 0x00, 0x1e, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0xd8, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x10, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x4e, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x58, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0xd8, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x08, 0xd8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1f, 0xe8, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x0f, 0xe8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x53, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, + 0x00, 0x00, 0x00, 0x5d, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0xf0, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x01, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x58, 0x00, 0x00, 0x00, 0x01, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0xf0, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2b, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, - 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, - 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x63, 0x00, 0x00, 0x00, 0x08, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x30, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x08, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x68, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x10, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x19, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x09, 0x1b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x61, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, - 0x00, 0x00, 0x00, 0x01, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x21, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x71, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00
 };
diff --git a/tests/qtest/migration/s390x/a-b-bios.c b/tests/qtest/migration/s390x/a-b-bios.c
index ff99a3ef57..3999254dde 100644
--- a/tests/qtest/migration/s390x/a-b-bios.c
+++ b/tests/qtest/migration/s390x/a-b-bios.c
@@ -9,23 +9,75 @@
  * option) any later version.
  */
 
-#define LOADPARM_LEN 8 /* Needed for sclp.h */
+/* for sclp.h */
+#define LOADPARM_LEN 8
+typedef unsigned int uint32_t;
+typedef unsigned short uint16_t;
+typedef unsigned char uint8_t;
 
-#include <libc.h>
-#include <s390-ccw.h>
 #include <sclp.h>
 
-char stack[0x8000] __attribute__((aligned(4096)));
-
 #define START_ADDRESS (1024 * 1024)
 #define END_ADDRESS (100 * 1024 * 1024)
 
+/* at pc-bios/s390-ccw/start.S */
+
+void __attribute__((weak)) consume_sclp_int(void)
+{
+}
+
+static char _sccb[4096] __attribute__((__aligned__(4096)));
+char stack[0x8000] __attribute__((aligned(4096)));
+
+static void sclp_service_call(unsigned int command)
+{
+    int cc;
+
+    asm volatile(
+        "    .insn rre,0xb2200000,%1,%2\n" /* servc %1,%2 */
+        "    ipm %0\n"
+        "    srl %0,28"
+        : "=&d" (cc) : "d" (command), "a" (__pa(_sccb))
+        : "cc", "memory");
+    consume_sclp_int();
+}
+
+static void sclp_setup(void)
+{
+    WriteEventMask *sccb = (void *)_sccb;
+
+    sccb->h.length = sizeof(WriteEventMask);
+
+    sccb->mask_length = sizeof(unsigned int);
+    sccb->cp_receive_mask = 0;
+    sccb->cp_send_mask = SCLP_EVENT_MASK_MSG_ASCII;
+
+    sclp_service_call(SCLP_CMD_WRITE_EVENT_MASK);
+}
+
+static void putc(const char c)
+{
+    WriteEventData *sccb = (void *)_sccb;
+    int len = 1;
+
+    sccb->data[0] = c;
+
+    sccb->h.length = sizeof(WriteEventData) + len;
+    sccb->h.function_code = SCLP_FC_NORMAL_WRITE;
+
+    sccb->ebh.length = sizeof(EventBufferHeader) + len;
+    sccb->ebh.type = SCLP_EVENT_ASCII_CONSOLE_DATA;
+    sccb->ebh.flags = 0;
+
+    sclp_service_call(SCLP_CMD_WRITE_EVENT_DATA);
+}
+
 void main(void)
 {
     unsigned long addr;
 
     sclp_setup();
-    sclp_print("A");
+    putc('A');
 
     /*
      * Make sure all of the pages have consistent contents before incrementing
@@ -39,6 +91,6 @@ void main(void)
         for (addr = START_ADDRESS; addr < END_ADDRESS; addr += 4096) {
             *(volatile char *)addr += 1; /* Change pages */
         }
-        sclp_print("B");
+        putc('B');
     }
 }
diff --git a/tests/qtest/migration/Makefile b/tests/qtest/migration/Makefile
index 2c5ee287ec..c183b69941 100644
--- a/tests/qtest/migration/Makefile
+++ b/tests/qtest/migration/Makefile
@@ -7,7 +7,7 @@
 
 TARGET_LIST = i386 aarch64 s390x ppc64
 
-SRC_PATH = ../..
+SRC_PATH = ../../../..
 
 .PHONY: help $(TARGET_LIST)
 
 help:
@@ -23,16 +23,17 @@ help:
 	@echo "  Possible targets are: $(TARGET_LIST)"
 
 override define __note
-/* This file is automatically generated from the assembly file in
- * tests/migration/$@. Edit that file and then run "make all"
- * inside tests/migration to update, and then remember to send both
+/*
+ * This file is automatically generated from the assembly file in
+ * tests/qtest/migration/$@. Edit that file and then run "make"
+ * inside tests/qtest/migration to update, and then remember to send both
  * the header and the assembler differences in your patch submission.
 */
 endef
 export __note
 
 $(TARGET_LIST):
-	$(MAKE) CROSS_PREFIX=$(CROSS_PREFIX) -C $@
+	$(MAKE) SRC_PATH=$(SRC_PATH) CROSS_PREFIX=$(CROSS_PREFIX) -C $@
 
 clean:
 	for target in $(TARGET_LIST); do \
diff --git a/tests/qtest/migration/aarch64/Makefile b/tests/qtest/migration/aarch64/Makefile
index 9c4fa18e76..fd102ab776 100644
--- a/tests/qtest/migration/aarch64/Makefile
+++ b/tests/qtest/migration/aarch64/Makefile
@@ -6,7 +6,7 @@ all: a-b-kernel.h
 
 a-b-kernel.h: aarch64.kernel
 	echo "$$__note" > $@
-	xxd -i $< | sed -e 's/.*int.*//' >> $@
+	xxd -i $< | sed -e '/.*int.*/d' >> $@
 
 aarch64.kernel: aarch64.elf
 	$(CROSS_PREFIX)objcopy -O binary $< $@
@@ -15,4 +15,4 @@ aarch64.elf: a-b-kernel.S
 	$(CROSS_PREFIX)gcc -o $@ -nostdlib -Wl,--build-id=none $<
 
 clean:
-	$(RM) *.kernel *.elf
+	$(RM) *.kernel *.elf *.h
diff --git a/tests/qtest/migration/aarch64/a-b-kernel.S b/tests/qtest/migration/aarch64/a-b-kernel.S
index a4103ecb71..8f29efbb4a 100644
--- a/tests/qtest/migration/aarch64/a-b-kernel.S
+++ b/tests/qtest/migration/aarch64/a-b-kernel.S
@@ -11,7 +11,7 @@
 # pc-relative address. Also the branch instructions should use relative
 # addresses only.
 
-#include "../migration-test.h"
+#include "../bootfile.h"
 
 .section .text
 
diff --git a/tests/qtest/migration/i386/Makefile b/tests/qtest/migration/i386/Makefile
index 37a72ae353..cbf63103af 100644
--- a/tests/qtest/migration/i386/Makefile
+++ b/tests/qtest/migration/i386/Makefile
@@ -6,7 +6,7 @@ all: a-b-bootblock.h
 
 a-b-bootblock.h: x86.bootsect x86.o
 	echo "$$__note" > header.tmp
-	xxd -i $< | sed -e 's/.*int.*//' >> header.tmp
+	xxd -i $< | sed -e '/.*int.*/d' >> header.tmp
 	nm x86.o | awk '{print "#define SYM_"$$3" 0x"$$1}' >> header.tmp
 	mv header.tmp $@
 
@@ -20,4 +20,4 @@
 x86.o: a-b-bootblock.S
 	$(CROSS_PREFIX)gcc -I.. -m32 -march=i486 -c $< -o $@
 
 clean:
-	@rm -rf *.boot *.o *.bootsect
+	@rm -rf *.boot *.o *.bootsect *.h
diff --git a/tests/qtest/migration/i386/a-b-bootblock.S b/tests/qtest/migration/i386/a-b-bootblock.S
index 6f39eb6051..99253f5ee5 100644
--- a/tests/qtest/migration/i386/a-b-bootblock.S
+++ b/tests/qtest/migration/i386/a-b-bootblock.S
@@ -9,7 +9,7 @@
 #
 # Author: dgilbert@redhat.com
 
-#include "migration-test.h"
+#include "../bootfile.h"
 
 #define ACPI_ENABLE 0xf1
 #define ACPI_PORT_SMI_CMD 0xb2
diff --git a/tests/qtest/migration/ppc64/Makefile b/tests/qtest/migration/ppc64/Makefile
index a3a2d98ac8..1bd6d35799 100644
--- a/tests/qtest/migration/ppc64/Makefile
+++ b/tests/qtest/migration/ppc64/Makefile
@@ -3,13 +3,13 @@ all: a-b-kernel.h
 
 a-b-kernel.h: ppc64.kernel
 	echo "$$__note" > $@
-	xxd -i $< | sed -e 's/.*int.*//' >> $@
+	xxd -i $< | sed -e '/.*int.*/d' >> $@
 
 ppc64.kernel: ppc64.elf
 	$(CROSS_PREFIX)objcopy -O binary -S $< $@
 
 ppc64.elf: a-b-kernel.S
-	$(CROSS_PREFIX)gcc -static -o $@ -nostdlib -Wl,--build-id=none $<
+	$(CROSS_PREFIX)gcc -mbig-endian -static -o $@ -nostdlib -Wl,--build-id=none $<
 
 clean:
-	$(RM) *.kernel *.elf
+	$(RM) *.kernel *.elf *.h
diff --git a/tests/qtest/migration/ppc64/a-b-kernel.S b/tests/qtest/migration/ppc64/a-b-kernel.S
index 0613a8d18e..c0009ecd10 100644
--- a/tests/qtest/migration/ppc64/a-b-kernel.S
+++ b/tests/qtest/migration/ppc64/a-b-kernel.S
@@ -4,7 +4,7 @@
 # This work is licensed under the terms of the GNU GPL, version 2 or later.
 # See the COPYING file in the top-level directory.
 
-#include "../migration-test.h"
+#include "../bootfile.h"
 
 .section .text
 
diff --git a/tests/qtest/migration/s390x/Makefile b/tests/qtest/migration/s390x/Makefile
index 6671de2efc..a2399821a7 100644
--- a/tests/qtest/migration/s390x/Makefile
+++ b/tests/qtest/migration/s390x/Makefile
@@ -3,7 +3,7 @@
 .PHONY: all clean
 all: a-b-bios.h
 
-fwdir=../../../pc-bios/s390-ccw
+fwdir=$(SRC_PATH)/pc-bios/s390-ccw
 
 CFLAGS+=-ffreestanding -fno-delete-null-pointer-checks -fPIE -Os \
 	-msoft-float -march=z900 -fno-asynchronous-unwind-tables \
@@ -11,14 +11,14 @@ CFLAGS+=-ffreestanding -fno-delete-null-pointer-checks -fPIE -Os \
 
 a-b-bios.h: s390x.elf
 	echo "$$__note" > header.tmp
-	xxd -i $< | sed -e 's/.*int.*//' >> header.tmp
+	xxd -i $< | sed -e '/.*int.*/d' >> header.tmp
 	mv header.tmp $@
 
 # We use common-page-size=16 to avoid big padding in the ELF file
 s390x.elf: a-b-bios.c
 	$(CROSS_PREFIX)gcc $(CFLAGS) -I$(fwdir) $(fwdir)/start.S \
-	$(fwdir)/sclp.c -Wl,-zcommon-page-size=16 -o $@ $<
+	-Wl,-zcommon-page-size=16 -o $@ $<
 	$(CROSS_PREFIX)strip $@
 
 clean:
-	@rm -rf *.elf *.o
+	@rm -rf *.elf *.o *.h
-- 
2.53.0

^ permalink raw reply related	[flat|nested] 27+ messages in thread
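[Editorial note on the recurring `sed` change in these Makefiles: `s/.*int.*//` only empties any line containing "int" (so the `unsigned int ..._len` declaration that `xxd -i` appends becomes a blank line), while `/.*int.*/d` deletes such lines outright. A quick sketch of the difference on hand-written sample `xxd -i` output; the `blob_bin` array name is made up for illustration:]

```shell
# Hand-written sample of what `xxd -i blob.bin` emits (array name hypothetical)
raw='unsigned char blob_bin[] = {
  0x41, 0x42
};
unsigned int blob_bin_len = 2;'

# Old filter: substitutes the matching line with nothing -> a blank line remains
old=$(( $(printf '%s\n' "$raw" | sed -e 's/.*int.*//' | wc -l) ))

# New filter: deletes the matching line outright
new=$(( $(printf '%s\n' "$raw" | sed -e '/.*int.*/d' | wc -l) ))

echo "old filter keeps $old lines, new filter keeps $new lines"
# -> old filter keeps 4 lines, new filter keeps 3 lines
```

Either output compiles as C, but the delete form avoids stray blank lines in the regenerated headers.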
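[Editorial note on the `$(MAKE) SRC_PATH=$(SRC_PATH) CROSS_PREFIX=$(CROSS_PREFIX) -C $@` hunk: plain make variables are not inherited by sub-makes unless exported or passed on the command line, which is why `SRC_PATH` has to be forwarded explicitly before the s390x Makefile's `fwdir=$(SRC_PATH)/pc-bios/s390-ccw` can resolve. A minimal sketch of that behavior; the directory layout and variable value are invented for the demo:]

```shell
dir=$(mktemp -d)
mkdir "$dir/sub"

# Top-level Makefile: defines SRC_PATH and invokes the sub-make twice --
# once without forwarding the variable, once forwarding it explicitly
printf 'SRC_PATH = ../..\nall:\n\t@$(MAKE) --no-print-directory -C sub\n\t@$(MAKE) --no-print-directory SRC_PATH=$(SRC_PATH) -C sub\n' > "$dir/Makefile"

# Sub-Makefile just reports what it sees
printf 'all:\n\t@echo "sub sees SRC_PATH=[$(SRC_PATH)]"\n' > "$dir/sub/Makefile"

make --no-print-directory -C "$dir"
# first line:  sub sees SRC_PATH=[]       (plain variables do not propagate)
# second line: sub sees SRC_PATH=[../..]  (command-line assignments do)
```

Command-line assignments such as `SRC_PATH=../..` also override any value set inside the sub-Makefile, which is the usual reason build systems forward them this way.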
* Re: [PULL 00/23] Next patches
@ 2026-05-06 18:19 Stefan Hajnoczi
From: Stefan Hajnoczi @ 2026-05-06 18:19 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Fabiano Rosas, Paolo Bonzini, Peter Xu

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/11.1
for any user-visible changes.

^ permalink raw reply	[flat|nested] 27+ messages in thread
Thread overview: 27+ messages
2026-05-05 20:26 [PULL 00/23] Next patches Peter Xu
2026-05-05 20:26 ` [PULL 01/23] migration: Fix blocking in POSTCOPY_DEVICE during package load Peter Xu
2026-05-05 20:26 ` [PULL 02/23] migration: Use QAPI_CLONE_MEMBERS in migrate_params_test_apply Peter Xu
2026-05-05 20:26 ` [PULL 03/23] migration/rdma: add x-rdma-chunk-size parameter Peter Xu
2026-05-05 20:26 ` [PULL 04/23] migration: Fix low possibility downtime violation Peter Xu
2026-05-05 20:26 ` [PULL 05/23] migration/qapi: Rename MigrationStats to MigrationRAMStats Peter Xu
2026-05-05 20:26 ` [PULL 06/23] vfio/migration: Cache stop size in VFIOMigration Peter Xu
2026-05-05 20:26 ` [PULL 07/23] migration/treewide: Merge @state_pending_{exact|estimate} APIs Peter Xu
2026-05-05 20:26 ` [PULL 08/23] migration: Use the new save_query_pending() API directly Peter Xu
2026-05-05 20:26 ` [PULL 09/23] migration: Introduce stopcopy_bytes in save_query_pending() Peter Xu
2026-05-05 20:26 ` [PULL 10/23] vfio/migration: Fix incorrect reporting for VFIO pending data Peter Xu
2026-05-05 20:26 ` [PULL 11/23] migration: Move iteration counter out of RAM Peter Xu
2026-05-05 20:26 ` [PULL 12/23] migration: Introduce a helper to return switchover bw estimate Peter Xu
2026-05-05 20:26 ` [PULL 13/23] migration: Calculate expected downtime on demand Peter Xu
2026-05-07 19:57 ` Peter Maydell
2026-05-11 13:48 ` Peter Xu
2026-05-05 20:26 ` [PULL 14/23] migration: Fix calculation of expected_downtime to take VFIO info Peter Xu
2026-05-05 20:26 ` [PULL 15/23] migration: Remember total dirty bytes in mig_stats Peter Xu
2026-05-05 20:26 ` [PULL 16/23] migration/qapi: Introduce system-wide "remaining" reports Peter Xu
2026-05-05 20:26 ` [PULL 17/23] migration/qapi: Update unit for avail-switchover-bandwidth Peter Xu
2026-05-05 20:26 ` [PULL 18/23] vfio/migration: Add tracepoints for precopy/stopcopy query ioctls Peter Xu
2026-05-05 20:26 ` [PULL 19/23] hw/riscv: iommu-trap: remove .impl.unaligned = true Peter Xu
2026-05-05 20:26 ` [PULL 20/23] hw/npcm7xx_fiu: Specify .impl for npcm7xx_fiu_flash_ops Peter Xu
2026-05-05 20:26 ` [PULL 21/23] hw/xtensa/mx_pic: Specify xtensa_mx_pic_ops .impl settings Peter Xu
2026-05-05 20:26 ` [PULL 22/23] system/memory: assert on invalid MemoryRegionOps .unaligned combo Peter Xu
2026-05-05 20:26 ` [PULL 23/23] tests/qtest/migration: Fix A-B file build Peter Xu
2026-05-06 18:19 ` [PULL 00/23] Next patches Stefan Hajnoczi