* [PATCH] migration: Fix a possible crash when halting a guest during migration
@ 2025-12-08 13:51 Thomas Huth
2025-12-08 14:45 ` Stefan Hajnoczi
2025-12-08 15:45 ` Peter Xu
0 siblings, 2 replies; 6+ messages in thread
From: Thomas Huth @ 2025-12-08 13:51 UTC (permalink / raw)
To: Peter Xu, Fabiano Rosas, qemu-devel, Kevin Wolf
Cc: qemu-block, Eric Blake, Stefan Hajnoczi, Paolo Bonzini,
Daniel P . Berrangé, Markus Armbruster, Peter Maydell,
Alex Bennée
From: Thomas Huth <thuth@redhat.com>
When shutting down a guest that is currently in the process of being
migrated, there is a chance that QEMU might crash during bdrv_delete().
The backtrace looks like this:
Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
(gdb) bt
#0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
#1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
The problem is apparently that the migration thread is still active
(migration_shutdown() only asks it to stop the current migration, but
does not wait for it to finish), while the main thread continues with
bdrv_close_all(), which destroys all block drivers. So the two threads
are racing here over the destruction of the migration-related block drivers.
I was able to bisect the problem and the race has apparently been introduced
by commit c2a189976e211c9ff782 ("migration/block-active: Remove global active
flag"), so reverting it might be an option as well, but waiting for the
migration thread to finish before continuing with the further clean-ups
during shutdown seems less intrusive.
Note: I used the Claude AI assistant for analyzing the crash, and it
came up with the idea of waiting for the migration thread to finish
in migration_shutdown() before proceeding with the further clean-up,
but the patch itself has been 100% written by myself.
Fixes: c2a189976e ("migration/block-active: Remove global active flag")
Signed-off-by: Thomas Huth <thuth@redhat.com>
---
migration/migration.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index b316ee01ab2..6f4bb6d8438 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -380,6 +380,16 @@ void migration_bh_schedule(QEMUBHFunc *cb, void *opaque)
qemu_bh_schedule(bh);
}
+static void migration_thread_join(MigrationState *s)
+{
+ if (s && s->migration_thread_running) {
+ bql_unlock();
+ qemu_thread_join(&s->thread);
+ s->migration_thread_running = false;
+ bql_lock();
+ }
+}
+
void migration_shutdown(void)
{
/*
@@ -393,6 +403,13 @@ void migration_shutdown(void)
* stop the migration using this structure
*/
migration_cancel();
+ /*
+ * Wait for migration thread to finish to prevent a possible race where
+ * the migration thread is still running and accessing host block drivers
+ * while the main cleanup proceeds to remove them in bdrv_close_all()
+ * later.
+ */
+ migration_thread_join(migrate_get_current());
object_unref(OBJECT(current_migration));
/*
@@ -1499,12 +1516,7 @@ static void migration_cleanup(MigrationState *s)
close_return_path_on_source(s);
- if (s->migration_thread_running) {
- bql_unlock();
- qemu_thread_join(&s->thread);
- s->migration_thread_running = false;
- bql_lock();
- }
+ migration_thread_join(s);
WITH_QEMU_LOCK_GUARD(&s->qemu_file_lock) {
/*
--
2.52.0
* Re: [PATCH] migration: Fix a possible crash when halting a guest during migration
2025-12-08 13:51 [PATCH] migration: Fix a possible crash when halting a guest during migration Thomas Huth
@ 2025-12-08 14:45 ` Stefan Hajnoczi
2025-12-08 15:26 ` Fabiano Rosas
2025-12-08 15:45 ` Peter Xu
1 sibling, 1 reply; 6+ messages in thread
From: Stefan Hajnoczi @ 2025-12-08 14:45 UTC (permalink / raw)
To: Thomas Huth
Cc: Peter Xu, Fabiano Rosas, qemu-devel, Kevin Wolf, qemu-block,
Eric Blake, Paolo Bonzini, Daniel P . Berrangé,
Markus Armbruster, Peter Maydell, Alex Bennée
On Mon, Dec 08, 2025 at 02:51:01PM +0100, Thomas Huth wrote:
> From: Thomas Huth <thuth@redhat.com>
>
> When shutting down a guest that is currently in progress of being
> migrated, there is a chance that QEMU might crash during bdrv_delete().
> The backtrace looks like this:
>
> Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
>
> [Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
> 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
> 5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
> (gdb) bt
> #0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
> #1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
> Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
>
> The problem is apparently that the migration thread is still active
> (migration_shutdown() only asks it to stop the current migration, but
> does not wait for it to finish), while the main thread continues to
> bdrv_close_all() that will destroy all block drivers. So the two threads
> are racing here for the destruction of the migration-related block drivers.
>
> I was able to bisect the problem and the race has apparently been introduced
> by commit c2a189976e211c9ff782 ("migration/block-active: Remove global active
> flag"), so reverting it might be an option as well, but waiting for the
> migration thread to finish before continuing with the further clean-ups
> during shutdown seems less intrusive.
>
> Note: I used the Claude AI assistant for analyzing the crash, and it
> came up with the idea of waiting for the migration thread to finish
> in migration_shutdown() before proceeding with the further clean-up,
> but the patch itself has been 100% written by myself.
It sounds like the migration thread does not hold block graph refcounts
and assumes the BlockDriverStates it uses have a long enough lifetime.
I don't know the migration code well enough to say whether joining in
migration_shutdown() is okay. Another option would be explicitly holding
the necessary refcounts in the migration thread.
>
> Fixes: c2a189976e ("migration/block-active: Remove global active flag")
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
> migration/migration.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index b316ee01ab2..6f4bb6d8438 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -380,6 +380,16 @@ void migration_bh_schedule(QEMUBHFunc *cb, void *opaque)
> qemu_bh_schedule(bh);
> }
>
> +static void migration_thread_join(MigrationState *s)
> +{
> + if (s && s->migration_thread_running) {
> + bql_unlock();
> + qemu_thread_join(&s->thread);
> + s->migration_thread_running = false;
> + bql_lock();
> + }
> +}
> +
> void migration_shutdown(void)
> {
> /*
> @@ -393,6 +403,13 @@ void migration_shutdown(void)
> * stop the migration using this structure
> */
> migration_cancel();
> + /*
> + * Wait for migration thread to finish to prevent a possible race where
> + * the migration thread is still running and accessing host block drivers
> + * while the main cleanup proceeds to remove them in bdrv_close_all()
> + * later.
> + */
> + migration_thread_join(migrate_get_current());
> object_unref(OBJECT(current_migration));
>
> /*
> @@ -1499,12 +1516,7 @@ static void migration_cleanup(MigrationState *s)
>
> close_return_path_on_source(s);
>
> - if (s->migration_thread_running) {
> - bql_unlock();
> - qemu_thread_join(&s->thread);
> - s->migration_thread_running = false;
> - bql_lock();
> - }
> + migration_thread_join(s);
>
> WITH_QEMU_LOCK_GUARD(&s->qemu_file_lock) {
> /*
> --
> 2.52.0
>
* Re: [PATCH] migration: Fix a possible crash when halting a guest during migration
2025-12-08 14:45 ` Stefan Hajnoczi
@ 2025-12-08 15:26 ` Fabiano Rosas
2025-12-12 17:18 ` Thomas Huth
0 siblings, 1 reply; 6+ messages in thread
From: Fabiano Rosas @ 2025-12-08 15:26 UTC (permalink / raw)
To: Stefan Hajnoczi, Thomas Huth
Cc: Peter Xu, qemu-devel, Kevin Wolf, qemu-block, Eric Blake,
Paolo Bonzini, Daniel P . Berrangé, Markus Armbruster,
Peter Maydell, Alex Bennée
Stefan Hajnoczi <stefanha@redhat.com> writes:
> On Mon, Dec 08, 2025 at 02:51:01PM +0100, Thomas Huth wrote:
>> From: Thomas Huth <thuth@redhat.com>
>>
>> When shutting down a guest that is currently in progress of being
>> migrated, there is a chance that QEMU might crash during bdrv_delete().
>> The backtrace looks like this:
>>
>> Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
>>
>> [Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
>> 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
>> 5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
>> (gdb) bt
>> #0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
>> #1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
>> Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
>>
How does the migration thread reach this point? Is this from
migration_block_inactivate()?
>> The problem is apparently that the migration thread is still active
>> (migration_shutdown() only asks it to stop the current migration, but
>> does not wait for it to finish)
"asks it to stop", more like pulls the plug abruptly. Note that setting
the CANCELLING state technically has nothing to do with this; the actual
cancelling lies in the not-so-gentle:
if (s->to_dst_file) {
qemu_file_shutdown(s->to_dst_file);
}
>> , while the main thread continues to
>> bdrv_close_all() that will destroy all block drivers. So the two threads
>> are racing here for the destruction of the migration-related block drivers.
>>
>> I was able to bisect the problem and the race has apparently been introduced
>> by commit c2a189976e211c9ff782 ("migration/block-active: Remove global active
>> flag"), so reverting it might be an option as well, but waiting for the
>> migration thread to finish before continuing with the further clean-ups
>> during shutdown seems less intrusive.
>>
>> Note: I used the Claude AI assistant for analyzing the crash, and it
>> came up with the idea of waiting for the migration thread to finish
>> in migration_shutdown() before proceeding with the further clean-up,
>> but the patch itself has been 100% written by myself.
>
> It sounds like the migration thread does not hold block graph refcounts
> and assumes the BlockDriverStates it uses have a long enough lifetime.
>
> I don't know the migration code well enough to say whether joining in
> migration_shutdown() is okay. Another option would be expicitly holding
> the necessary refcounts in the migration thread.
>
I agree. In principle and also because shuffling the joining around
feels like something that's prone to introduce other bugs.
>>
>> Fixes: c2a189976e ("migration/block-active: Remove global active flag")
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>> ---
>> migration/migration.c | 24 ++++++++++++++++++------
>> 1 file changed, 18 insertions(+), 6 deletions(-)
>>
>> diff --git a/migration/migration.c b/migration/migration.c
>> index b316ee01ab2..6f4bb6d8438 100644
>> --- a/migration/migration.c
>> +++ b/migration/migration.c
>> @@ -380,6 +380,16 @@ void migration_bh_schedule(QEMUBHFunc *cb, void *opaque)
>> qemu_bh_schedule(bh);
>> }
>>
>> +static void migration_thread_join(MigrationState *s)
>> +{
>> + if (s && s->migration_thread_running) {
>> + bql_unlock();
>> + qemu_thread_join(&s->thread);
>> + s->migration_thread_running = false;
>> + bql_lock();
>> + }
>> +}
>> +
>> void migration_shutdown(void)
>> {
>> /*
>> @@ -393,6 +403,13 @@ void migration_shutdown(void)
>> * stop the migration using this structure
>> */
>> migration_cancel();
>> + /*
>> + * Wait for migration thread to finish to prevent a possible race where
>> + * the migration thread is still running and accessing host block drivers
>> + * while the main cleanup proceeds to remove them in bdrv_close_all()
>> + * later.
>> + */
>> + migration_thread_join(migrate_get_current());
>> object_unref(OBJECT(current_migration));
>>
>> /*
>> @@ -1499,12 +1516,7 @@ static void migration_cleanup(MigrationState *s)
>>
>> close_return_path_on_source(s);
>>
>> - if (s->migration_thread_running) {
>> - bql_unlock();
>> - qemu_thread_join(&s->thread);
>> - s->migration_thread_running = false;
>> - bql_lock();
>> - }
>> + migration_thread_join(s);
>>
>> WITH_QEMU_LOCK_GUARD(&s->qemu_file_lock) {
>> /*
>> --
>> 2.52.0
>>
* Re: [PATCH] migration: Fix a possible crash when halting a guest during migration
2025-12-08 15:26 ` Fabiano Rosas
@ 2025-12-12 17:18 ` Thomas Huth
2025-12-12 21:26 ` Fabiano Rosas
0 siblings, 1 reply; 6+ messages in thread
From: Thomas Huth @ 2025-12-12 17:18 UTC (permalink / raw)
To: Fabiano Rosas, Stefan Hajnoczi
Cc: Peter Xu, qemu-devel, Kevin Wolf, qemu-block, Eric Blake,
Paolo Bonzini, Daniel P . Berrangé, Markus Armbruster,
Peter Maydell, Alex Bennée
On 08/12/2025 16.26, Fabiano Rosas wrote:
> Stefan Hajnoczi <stefanha@redhat.com> writes:
>
>> On Mon, Dec 08, 2025 at 02:51:01PM +0100, Thomas Huth wrote:
>>> From: Thomas Huth <thuth@redhat.com>
>>>
>>> When shutting down a guest that is currently in progress of being
>>> migrated, there is a chance that QEMU might crash during bdrv_delete().
>>> The backtrace looks like this:
>>>
>>> Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
>>>
>>> [Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
>>> 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
>>> 5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
>>> (gdb) bt
>>> #0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
>>> #1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
>>> Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
>>>
>
> How does the migration thread reaches here? Is this from
> migration_block_inactivate()?
Unfortunately, gdb was not very helpful here (claiming that it cannot access
the memory and stack anymore), so I had to do some printf debugging. This is
what seems to happen:
Main thread: qemu_cleanup() calls migration_shutdown() -->
migration_cancel() which signals the migration thread to cancel the migration.
Migration thread: migration_thread() got kicked out of the loop and calls
migration_iteration_finish(), which tries to take the BQL via bql_lock(),
but it is currently held by another thread, so the migration thread is
blocked here.
Main thread: qemu_cleanup() advances to bdrv_close_all() that uses
blockdev_close_all_bdrv_states() to unref all BDS. The BDS with the name
'libvirt-1-storage' gets deleted via bdrv_delete() that way.
Migration thread: Later, migration_iteration_finish() finally gets the BQL,
and calls the migration_block_activate() function in the
MIGRATION_STATUS_CANCELLING case statement. This calls bdrv_activate_all().
bdrv_activate_all() gets a pointer to that 'libvirt-1-storage' BDS again
from bdrv_first(), and during bdrv_next() that BDS gets unref'ed again,
which causes the crash.
==> Why is bdrv_first() still providing a BDS that has been deleted by
another thread earlier?
>> It sounds like the migration thread does not hold block graph refcounts
>> and assumes the BlockDriverStates it uses have a long enough lifetime.
>>
>> I don't know the migration code well enough to say whether joining in
>> migration_shutdown() is okay. Another option would be expicitly holding
>> the necessary refcounts in the migration thread.
>
> I agree. In principle and also because shuffling the joining around
> feels like something that's prone to introduce other bugs.
I'm a little bit lost here right now ... Can you suggest a place where we
would need to increase the refcounts in the migration thread?
Thomas
* Re: [PATCH] migration: Fix a possible crash when halting a guest during migration
2025-12-12 17:18 ` Thomas Huth
@ 2025-12-12 21:26 ` Fabiano Rosas
0 siblings, 0 replies; 6+ messages in thread
From: Fabiano Rosas @ 2025-12-12 21:26 UTC (permalink / raw)
To: Thomas Huth, Stefan Hajnoczi
Cc: Peter Xu, qemu-devel, Kevin Wolf, qemu-block, Eric Blake,
Paolo Bonzini, Daniel P . Berrangé, Markus Armbruster,
Peter Maydell, Alex Bennée
Thomas Huth <thuth@redhat.com> writes:
> On 08/12/2025 16.26, Fabiano Rosas wrote:
>> Stefan Hajnoczi <stefanha@redhat.com> writes:
>>
>>> On Mon, Dec 08, 2025 at 02:51:01PM +0100, Thomas Huth wrote:
>>>> From: Thomas Huth <thuth@redhat.com>
>>>>
>>>> When shutting down a guest that is currently in progress of being
>>>> migrated, there is a chance that QEMU might crash during bdrv_delete().
>>>> The backtrace looks like this:
>>>>
>>>> Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
>>>>
>>>> [Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
>>>> 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
>>>> 5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
>>>> (gdb) bt
>>>> #0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
>>>> #1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
>>>> Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
>>>>
>>
>> How does the migration thread reaches here? Is this from
>> migration_block_inactivate()?
>
> Unfortunately, gdb was not very helpful here (claiming that it cannot access
> the memory and stack anymore), so I had to do some printf debugging. This is
> what seems to happen:
>
> Main thread: qemu_cleanup() calls migration_shutdown() -->
> migration_cancel() which signals the migration thread to cancel the migration.
>
> Migration thread: migration_thread() got kicked out the loop and calls
> migration_iteration_finish(), which tries to get the BQL via bql_lock() but
> that is currently held by another thread, so the migration thread is blocked
> here.
>
> Main thread: qemu_cleanup() advances to bdrv_close_all() that uses
> blockdev_close_all_bdrv_states() to unref all BDS. The BDS with the name
> 'libvirt-1-storage' gets deleted via bdrv_delete() that way.
>
Has qmp_blockdev_del() ever been called to remove the BDS from the
monitor_bdrv_states list? Otherwise your debugging seems to indicate
blockdev_close_all_bdrv_states() is dropping the last reference to bs,
but it's still accessible from bdrv_next() via
bdrv_next_monitor_owned().
> Migration thread: Later, migration_iteration_finish() finally gets the BQL,
> and calls the migration_block_activate() function in the
> MIGRATION_STATUS_CANCELLING case statement. This calls bdrv_activate_all().
> bdrv_activate_all() gets a pointer to that 'libvirt-1-storage' BDS again
> from bdrv_first(), and during the bdrv_next() that BDS gets unref'ed again
> which is causing the crash.
>
> ==> Why is bdrv_first() still providing a BDS that have been deleted by
> other threads earlier?
>
>>> It sounds like the migration thread does not hold block graph refcounts
>>> and assumes the BlockDriverStates it uses have a long enough lifetime.
>>>
>>> I don't know the migration code well enough to say whether joining in
>>> migration_shutdown() is okay. Another option would be expicitly holding
>>> the necessary refcounts in the migration thread.
>>
>> I agree. In principle and also because shuffling the joining around
>> feels like something that's prone to introduce other bugs.
>
> I'm a little bit lost here right now ... Can you suggest a place where we
> would need to increase the refcounts in the migration thread?
>
> Thomas
* Re: [PATCH] migration: Fix a possible crash when halting a guest during migration
2025-12-08 13:51 [PATCH] migration: Fix a possible crash when halting a guest during migration Thomas Huth
2025-12-08 14:45 ` Stefan Hajnoczi
@ 2025-12-08 15:45 ` Peter Xu
1 sibling, 0 replies; 6+ messages in thread
From: Peter Xu @ 2025-12-08 15:45 UTC (permalink / raw)
To: Thomas Huth
Cc: Fabiano Rosas, qemu-devel, Kevin Wolf, qemu-block, Eric Blake,
Stefan Hajnoczi, Paolo Bonzini, Daniel P . Berrangé,
Markus Armbruster, Peter Maydell, Alex Bennée
On Mon, Dec 08, 2025 at 02:51:01PM +0100, Thomas Huth wrote:
> From: Thomas Huth <thuth@redhat.com>
>
> When shutting down a guest that is currently in progress of being
> migrated, there is a chance that QEMU might crash during bdrv_delete().
> The backtrace looks like this:
>
> Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
>
> [Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
> 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
> 5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
> (gdb) bt
> #0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
> #1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
> Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
>
> The problem is apparently that the migration thread is still active
> (migration_shutdown() only asks it to stop the current migration, but
> does not wait for it to finish), while the main thread continues to
> bdrv_close_all() that will destroy all block drivers. So the two threads
> are racing here for the destruction of the migration-related block drivers.
>
> I was able to bisect the problem and the race has apparently been introduced
> by commit c2a189976e211c9ff782 ("migration/block-active: Remove global active
> flag"), so reverting it might be an option as well, but waiting for the
> migration thread to finish before continuing with the further clean-ups
> during shutdown seems less intrusive.
>
> Note: I used the Claude AI assistant for analyzing the crash, and it
> came up with the idea of waiting for the migration thread to finish
> in migration_shutdown() before proceeding with the further clean-up,
> but the patch itself has been 100% written by myself.
>
> Fixes: c2a189976e ("migration/block-active: Remove global active flag")
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
> migration/migration.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index b316ee01ab2..6f4bb6d8438 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -380,6 +380,16 @@ void migration_bh_schedule(QEMUBHFunc *cb, void *opaque)
> qemu_bh_schedule(bh);
> }
>
> +static void migration_thread_join(MigrationState *s)
> +{
> + if (s && s->migration_thread_running) {
> + bql_unlock();
> + qemu_thread_join(&s->thread);
> + s->migration_thread_running = false;
> + bql_lock();
> + }
> +}
> +
> void migration_shutdown(void)
> {
> /*
> @@ -393,6 +403,13 @@ void migration_shutdown(void)
> * stop the migration using this structure
> */
> migration_cancel();
> + /*
> + * Wait for migration thread to finish to prevent a possible race where
> + * the migration thread is still running and accessing host block drivers
> + * while the main cleanup proceeds to remove them in bdrv_close_all()
> + * later.
> + */
> + migration_thread_join(migrate_get_current());
Not joining the thread was intentional, per commit 892ae715b6bc81, and
then I found I had asked this question before; Dave answered here:
https://lore.kernel.org/all/20190228114019.GB4970@work-vm/
I wonder if we can still investigate the other approach Stefan
mentioned, as a join() here may introduce other hang risks before we can
justify that it's safe..
Thanks,
> object_unref(OBJECT(current_migration));
>
> /*
> @@ -1499,12 +1516,7 @@ static void migration_cleanup(MigrationState *s)
>
> close_return_path_on_source(s);
>
> - if (s->migration_thread_running) {
> - bql_unlock();
> - qemu_thread_join(&s->thread);
> - s->migration_thread_running = false;
> - bql_lock();
> - }
> + migration_thread_join(s);
>
> WITH_QEMU_LOCK_GUARD(&s->qemu_file_lock) {
> /*
> --
> 2.52.0
>
--
Peter Xu