* [PULL v2 0/3] Migration patches for 2025-03-10
@ 2025-03-10 22:34 Fabiano Rosas
2025-03-10 22:34 ` [PULL v2 1/3] migration: Fix UAF for incoming migration on MigrationState Fabiano Rosas
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Fabiano Rosas @ 2025-03-10 22:34 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu
The following changes since commit 5136598e2667f35ef3dc1d757616a266bd5eb3a2:
Merge tag 'accel-cpus-20250309' of https://github.com/philmd/qemu into staging (2025-03-10 13:40:48 +0800)
are available in the Git repository at:
https://gitlab.com/farosas/qemu.git tags/migration-20250310-pull-request
for you to fetch changes up to baa41af1c083446971feac39b0da845e547ca068:
migration: Prioritize RDMA in ram_save_target_page() (2025-03-10 12:09:24 -0300)
----------------------------------------------------------------
Migration pull request
- Fix use-after-free in incoming migration
- Improve cpr migration blocker for volatile ram
- Fix RDMA migration
----------------------------------------------------------------
Li Zhijian (1):
migration: Prioritize RDMA in ram_save_target_page()
Peter Xu (1):
migration: Fix UAF for incoming migration on MigrationState
Steve Sistare (1):
migration: ram block cpr blockers
include/exec/memory.h | 3 ++
include/exec/ramblock.h | 1 +
migration/migration.c | 40 +++++++++++++++++++++++--
migration/ram.c | 9 +++---
migration/savevm.c | 2 ++
system/physmem.c | 66 +++++++++++++++++++++++++++++++++++++++++
6 files changed, 115 insertions(+), 6 deletions(-)
--
2.35.3
* [PULL v2 1/3] migration: Fix UAF for incoming migration on MigrationState
2025-03-10 22:34 [PULL v2 0/3] Migration patches for 2025-03-10 Fabiano Rosas
@ 2025-03-10 22:34 ` Fabiano Rosas
2025-03-10 22:34 ` [PULL v2 2/3] migration: ram block cpr blockers Fabiano Rosas
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Fabiano Rosas @ 2025-03-10 22:34 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Yan Fu
From: Peter Xu <peterx@redhat.com>
On the incoming migration side, QEMU uses a coroutine to load all the VM
states. Inside, it may reference MigrationState for global state such as
migration capabilities, parameters, error state, shared mutexes and more.
However, there is nothing yet to make sure MigrationState won't get
destroyed (e.g. after migration_shutdown()). Meanwhile, there is also no API
available to remove the incoming coroutine in migration_shutdown(), which
would prevent it from accessing the freed elements.
There is a bug report showing that this can happen and crash the destination
QEMU when the migration is cancelled on the source.
When it happens, the destination main thread is trying to clean up everything:
#0 qemu_aio_coroutine_enter
#1 aio_dispatch_handler
#2 aio_poll
#3 monitor_cleanup
#4 qemu_cleanup
#5 qemu_default_main
It then finds the incoming migration coroutine and schedules it (even after
migration_shutdown()), causing the crash:
#0 __pthread_kill_implementation
#1 __pthread_kill_internal
#2 __GI_raise
#3 __GI_abort
#4 __assert_fail_base
#5 __assert_fail
#6 qemu_mutex_lock_impl
#7 qemu_lockable_mutex_lock
#8 qemu_lockable_lock
#9 qemu_lockable_auto_lock
#10 migrate_set_error
#11 process_incoming_migration_co
#12 coroutine_trampoline
To fix it, take a refcount when the incoming setup is properly done, i.e. when
qmp_migrate_incoming() succeeds for the first time. As this happens in a QMP
handler, which requires the BQL, the main loop must still be alive (it has not
entered the cleanup path, which also requires the BQL).
The refcount is only released once the incoming migration coroutine finishes
or fails. Hence the refcount covers both (1) the setup phase of the incoming
ports, mostly IO watches (e.g. qio_channel_add_watch_full()), and (2) the
incoming coroutine itself (process_incoming_migration_co()).
Note that we can't unref in migration_incoming_state_destroy(), because both
qmp_xen_load_devices_state() and load_snapshot() will use it without an
incoming migration. Those hold the BQL, so they're not prone to this issue.
PS: I suspect nobody uses the Xen command at all, as it doesn't register yank;
hence, AFAIU, the command should crash on master when trying to unregister
yank in migration_incoming_state_destroy()... but that's another story.
Also note that in some incoming failure cases we may not always unref the
MigrationState refcount, which is a trade-off to keep things simple. We
could make it accurate, but it would be overkill. Some examples:
- Unlike most of the other protocols, socket_start_incoming_migration()
  may create a net listener after the incoming port has been set up
  successfully. It means we can't unref in migration_channel_process_incoming()
  as a generic path, because the socket protocol might keep using MigrationState.
- For either socket or file, multiple IO watches might be created, which means
  each IO watch would logically need to take one refcount on MigrationState to
  be 100% accurate about ownership of the refcount taken.
In general, we would at least need per-protocol handling to make it accurate,
which would be overkill when we know the incoming migration failed after all.
Add a short comment explaining this where the refcount is taken in
qmp_migrate_incoming().
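A minimal sketch of the resulting refcount pairing, distilled from the diff
below (the two helpers are introduced by this patch and wrap object_ref() /
object_unref() on migrate_get_current(); the comments are illustrative, not
the real call sites):

    /* In qmp_migrate_incoming(), after the first successful incoming setup
     * (runs in a QMP handler with the BQL held, main loop still alive): */
    migrate_incoming_ref_outgoing_state();

    /* In process_incoming_migration_co(), on every exit path
     * (success, postcopy hand-off, failure): */
    migrate_incoming_unref_outgoing_state();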
Bugzilla: https://issues.redhat.com/browse/RHEL-69775
Tested-by: Yan Fu <yafu@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20250220132459.512610-1-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/migration.c | 40 ++++++++++++++++++++++++++++++++++++++--
1 file changed, 38 insertions(+), 2 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 1833cfe358..d46e776e24 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -116,6 +116,27 @@ static void migration_downtime_start(MigrationState *s)
s->downtime_start = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
}
+/*
+ * This is unfortunate: incoming migration actually needs the outgoing
+ * migration state (MigrationState) to be there too, e.g. to query
+ * capabilities and parameters, take locks, set errors, etc.
+ *
+ * NOTE: when calling this, make sure current_migration exists and has not
+ * been freed yet! Otherwise trying to access the refcount is already a
+ * use-after-free itself.
+ *
+ * TODO: Move the shared part of incoming / outgoing into a separate object.
+ * Then this is not needed.
+ */
+static void migrate_incoming_ref_outgoing_state(void)
+{
+ object_ref(migrate_get_current());
+}
+static void migrate_incoming_unref_outgoing_state(void)
+{
+ object_unref(migrate_get_current());
+}
+
static void migration_downtime_end(MigrationState *s)
{
int64_t now = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -863,7 +884,7 @@ process_incoming_migration_co(void *opaque)
* postcopy thread.
*/
trace_process_incoming_migration_co_postcopy_end_main();
- return;
+ goto out;
}
/* Else if something went wrong then just fall out of the normal exit */
}
@@ -879,7 +900,8 @@ process_incoming_migration_co(void *opaque)
}
migration_bh_schedule(process_incoming_migration_bh, mis);
- return;
+ goto out;
+
fail:
migrate_set_state(&mis->state, MIGRATION_STATUS_ACTIVE,
MIGRATION_STATUS_FAILED);
@@ -896,6 +918,9 @@ fail:
exit(EXIT_FAILURE);
}
+out:
+ /* Pairs with the refcount taken in qmp_migrate_incoming() */
+ migrate_incoming_unref_outgoing_state();
}
/**
@@ -1901,6 +1926,17 @@ void qmp_migrate_incoming(const char *uri, bool has_channels,
return;
}
+ /*
+ * Make sure MigrationState is available until the incoming migration
+ * completes.
+ *
+ * NOTE: QEMU _might_ leak this refcount in some failure paths, but
+ * that's OK. This is the minimum change needed to at least make sure
+ * the success case is clean on the refcount. We could try harder to
+ * make it accurate for any kind of failure, but it might be overkill
+ * and doesn't bring us much benefit.
+ */
+ migrate_incoming_ref_outgoing_state();
once = false;
}
--
2.35.3
* [PULL v2 2/3] migration: ram block cpr blockers
2025-03-10 22:34 [PULL v2 0/3] Migration patches for 2025-03-10 Fabiano Rosas
2025-03-10 22:34 ` [PULL v2 1/3] migration: Fix UAF for incoming migration on MigrationState Fabiano Rosas
@ 2025-03-10 22:34 ` Fabiano Rosas
2025-03-10 22:34 ` [PULL v2 3/3] migration: Prioritize RDMA in ram_save_target_page() Fabiano Rosas
2025-03-11 5:04 ` [PULL v2 0/3] Migration patches for 2025-03-10 Stefan Hajnoczi
3 siblings, 0 replies; 5+ messages in thread
From: Fabiano Rosas @ 2025-03-10 22:34 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Steve Sistare, David Hildenbrand
From: Steve Sistare <steven.sistare@oracle.com>
Unlike cpr-reboot mode, cpr-transfer mode cannot save volatile ram blocks
in the migration stream file and recreate them later, because the physical
memory for the blocks is pinned and registered for vfio. Add a blocker
for volatile ram blocks.
Also add a blocker for RAM_GUEST_MEMFD. Preserving guest_memfd may be
sufficient for CPR, but it has not been tested yet.
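The physmem.c comment below notes that some non-migratable blocks are instead
covered by a device-level CPR blocker. A hypothetical sketch of what such a
blocker could look like with the same API this patch uses (the device name,
variable and error text are made up for illustration; only error_setg(),
migrate_add_blocker_modes() and migrate_del_blocker() come from the patch):

    /* Hypothetical device-level CPR blocker, mirroring the ram block usage below. */
    static Error *my_dev_cpr_blocker;

    static void my_dev_block_cpr(Error **errp)
    {
        error_setg(&my_dev_cpr_blocker, "my-dev does not support CPR");
        migrate_add_blocker_modes(&my_dev_cpr_blocker, errp,
                                  MIG_MODE_CPR_TRANSFER, -1);
    }

    static void my_dev_unblock_cpr(void)
    {
        migrate_del_blocker(&my_dev_cpr_blocker);
    }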
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-ID: <1740667681-257312-1-git-send-email-steven.sistare@oracle.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
include/exec/memory.h | 3 ++
include/exec/ramblock.h | 1 +
migration/savevm.c | 2 ++
system/physmem.c | 66 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 72 insertions(+)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 78c4e0aec8..d09af58c97 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -3203,6 +3203,9 @@ bool ram_block_discard_is_disabled(void);
*/
bool ram_block_discard_is_required(void);
+void ram_block_add_cpr_blocker(RAMBlock *rb, Error **errp);
+void ram_block_del_cpr_blocker(RAMBlock *rb);
+
#endif
#endif
diff --git a/include/exec/ramblock.h b/include/exec/ramblock.h
index 0babd105c0..64484cd821 100644
--- a/include/exec/ramblock.h
+++ b/include/exec/ramblock.h
@@ -39,6 +39,7 @@ struct RAMBlock {
/* RCU-enabled, writes protected by the ramlist lock */
QLIST_ENTRY(RAMBlock) next;
QLIST_HEAD(, RAMBlockNotifier) ramblock_notifiers;
+ Error *cpr_blocker;
int fd;
uint64_t fd_offset;
int guest_memfd;
diff --git a/migration/savevm.c b/migration/savevm.c
index 5c4fdfd95e..ce158c3512 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -3514,12 +3514,14 @@ void vmstate_register_ram(MemoryRegion *mr, DeviceState *dev)
qemu_ram_set_idstr(mr->ram_block,
memory_region_name(mr), dev);
qemu_ram_set_migratable(mr->ram_block);
+ ram_block_add_cpr_blocker(mr->ram_block, &error_fatal);
}
void vmstate_unregister_ram(MemoryRegion *mr, DeviceState *dev)
{
qemu_ram_unset_idstr(mr->ram_block);
qemu_ram_unset_migratable(mr->ram_block);
+ ram_block_del_cpr_blocker(mr->ram_block);
}
void vmstate_register_ram_global(MemoryRegion *mr)
diff --git a/system/physmem.c b/system/physmem.c
index a6af555f4b..e97de3ef65 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -71,7 +71,10 @@
#include "qemu/pmem.h"
+#include "qapi/qapi-types-migration.h"
+#include "migration/blocker.h"
#include "migration/cpr.h"
+#include "migration/options.h"
#include "migration/vmstate.h"
#include "qemu/range.h"
@@ -1904,6 +1907,14 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
qemu_mutex_unlock_ramlist();
goto out_free;
}
+
+ error_setg(&new_block->cpr_blocker,
+ "Memory region %s uses guest_memfd, "
+ "which is not supported with CPR.",
+ memory_region_name(new_block->mr));
+ migrate_add_blocker_modes(&new_block->cpr_blocker, errp,
+ MIG_MODE_CPR_TRANSFER,
+ -1);
}
ram_size = (new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS;
@@ -4095,3 +4106,58 @@ bool ram_block_discard_is_required(void)
return qatomic_read(&ram_block_discard_required_cnt) ||
qatomic_read(&ram_block_coordinated_discard_required_cnt);
}
+
+/*
+ * Return true if ram is compatible with CPR. Do not exclude rom,
+ * because the rom file could change in new QEMU.
+ */
+static bool ram_is_cpr_compatible(RAMBlock *rb)
+{
+ MemoryRegion *mr = rb->mr;
+
+ if (!mr || !memory_region_is_ram(mr)) {
+ return true;
+ }
+
+ /* Ram device is remapped in new QEMU */
+ if (memory_region_is_ram_device(mr)) {
+ return true;
+ }
+
+ /*
+ * A file descriptor is passed to new QEMU and remapped, or its backing
+ * file is reopened and mapped. It must be shared to avoid COW.
+ */
+ if (rb->fd >= 0 && qemu_ram_is_shared(rb)) {
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * Add a blocker for each volatile ram block. This function should only be
+ * called after we know that the block is migratable. Non-migratable blocks
+ * are either re-created in new QEMU, or are handled specially, or are covered
+ * by a device-level CPR blocker.
+ */
+void ram_block_add_cpr_blocker(RAMBlock *rb, Error **errp)
+{
+ assert(qemu_ram_is_migratable(rb));
+
+ if (ram_is_cpr_compatible(rb)) {
+ return;
+ }
+
+ error_setg(&rb->cpr_blocker,
+ "Memory region %s is not compatible with CPR. share=on is "
+ "required for memory-backend objects, and aux-ram-share=on is "
+ "required.", memory_region_name(rb->mr));
+ migrate_add_blocker_modes(&rb->cpr_blocker, errp, MIG_MODE_CPR_TRANSFER,
+ -1);
+}
+
+void ram_block_del_cpr_blocker(RAMBlock *rb)
+{
+ migrate_del_blocker(&rb->cpr_blocker);
+}
--
2.35.3
* [PULL v2 3/3] migration: Prioritize RDMA in ram_save_target_page()
2025-03-10 22:34 [PULL v2 0/3] Migration patches for 2025-03-10 Fabiano Rosas
2025-03-10 22:34 ` [PULL v2 1/3] migration: Fix UAF for incoming migration on MigrationState Fabiano Rosas
2025-03-10 22:34 ` [PULL v2 2/3] migration: ram block cpr blockers Fabiano Rosas
@ 2025-03-10 22:34 ` Fabiano Rosas
2025-03-11 5:04 ` [PULL v2 0/3] Migration patches for 2025-03-10 Stefan Hajnoczi
3 siblings, 0 replies; 5+ messages in thread
From: Fabiano Rosas @ 2025-03-10 22:34 UTC (permalink / raw)
To: qemu-devel; +Cc: Peter Xu, Li Zhijian
From: Li Zhijian <lizhijian@fujitsu.com>
Address an error in RDMA-based migration by ensuring RDMA is prioritized
when saving pages in `ram_save_target_page()`.
Previously, the RDMA protocol's page-saving step was placed after the other
protocols due to a refactoring in commit bc38dc2f5f3. This led to migration
failures with unknown control messages and state-loading errors:
destination:
(qemu) qemu-system-x86_64: Unknown control message QEMU FILE
qemu-system-x86_64: error while loading state section id 1(ram)
qemu-system-x86_64: load of migration failed: Operation not permitted
source:
(qemu) qemu-system-x86_64: RDMA is in an error state waiting migration to abort!
qemu-system-x86_64: failed to save SaveStateEntry with id(name): 1(ram): -1
qemu-system-x86_64: rdma migration: recv polling control error!
qemu-system-x86_64: warning: Early error. Sending error.
qemu-system-x86_64: warning: rdma migration: send polling control error
RDMA migration implements its own protocol/method to send pages to the
destination side, so hand over to RDMA first to prevent pages from being
saved by another protocol.
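The shape of the fix, distilled from the diff below (a sketch of the resulting
ordering inside ram_save_target_page(), not the full function):

    /* RDMA first: control_save_page() returns true when RDMA handled the page. */
    if (control_save_page(pss, offset, &res)) {
        return res;
    }
    /* Only then fall through to zero-page detection, multifd and ram_save_page(). */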
Fixes: bc38dc2f5f3 ("migration: refactor ram_save_target_page functions")
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Message-ID: <20250305062825.772629-2-lizhijian@fujitsu.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/ram.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 589b6505eb..424df6d9f1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1964,6 +1964,11 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
int res;
+ /* Hand over to RDMA first */
+ if (control_save_page(pss, offset, &res)) {
+ return res;
+ }
+
if (!migrate_multifd()
|| migrate_zero_page_detection() == ZERO_PAGE_DETECTION_LEGACY) {
if (save_zero_page(rs, pss, offset)) {
@@ -1976,10 +1981,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
return ram_save_multifd_page(block, offset);
}
- if (control_save_page(pss, offset, &res)) {
- return res;
- }
-
return ram_save_page(rs, pss);
}
--
2.35.3
* Re: [PULL v2 0/3] Migration patches for 2025-03-10
2025-03-10 22:34 [PULL v2 0/3] Migration patches for 2025-03-10 Fabiano Rosas
` (2 preceding siblings ...)
2025-03-10 22:34 ` [PULL v2 3/3] migration: Prioritize RDMA in ram_save_target_page() Fabiano Rosas
@ 2025-03-11 5:04 ` Stefan Hajnoczi
3 siblings, 0 replies; 5+ messages in thread
From: Stefan Hajnoczi @ 2025-03-11 5:04 UTC (permalink / raw)
To: Fabiano Rosas; +Cc: qemu-devel, Peter Xu
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/10.0 for any user-visible changes.