From: Stefan Hajnoczi <stefanha@redhat.com>
To: Thomas Huth <thuth@redhat.com>
Cc: "Peter Xu" <peterx@redhat.com>, "Fabiano Rosas" <farosas@suse.de>,
qemu-devel@nongnu.org, "Kevin Wolf" <kwolf@redhat.com>,
qemu-block@nongnu.org, "Eric Blake" <eblake@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Daniel P . Berrangé" <berrange@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Peter Maydell" <peter.maydell@linaro.org>,
"Alex Bennée" <alex.bennee@linaro.org>
Subject: Re: [PATCH] migration: Fix a possible crash when halting a guest during migration
Date: Mon, 8 Dec 2025 09:45:25 -0500
Message-ID: <20251208144525.GA1341938@fedora>
In-Reply-To: <20251208135101.271417-1-thuth@redhat.com>
On Mon, Dec 08, 2025 at 02:51:01PM +0100, Thomas Huth wrote:
> From: Thomas Huth <thuth@redhat.com>
>
> When shutting down a guest that is currently in the process of being
> migrated, there is a chance that QEMU might crash during bdrv_delete().
> The backtrace looks like this:
>
> Thread 74 "mig/src/main" received signal SIGSEGV, Segmentation fault.
>
> [Switching to Thread 0x3f7de7fc8c0 (LWP 2161436)]
> 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
> 5560 QTAILQ_REMOVE(&graph_bdrv_states, bs, node_list);
> (gdb) bt
> #0 0x000002aa00664012 in bdrv_delete (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:5560
> #1 bdrv_unref (bs=0x2aa00f875c0) at ../../devel/qemu/block.c:7170
> Backtrace stopped: Cannot access memory at address 0x3f7de7f83e0
>
> The problem is apparently that the migration thread is still active
> (migration_shutdown() only asks it to stop the current migration, but
> does not wait for it to finish), while the main thread continues on to
> bdrv_close_all(), which destroys all block drivers. So the two threads
> are racing over the destruction of the migration-related block drivers.
>
> I was able to bisect the problem: the race was apparently introduced
> by commit c2a189976e211c9ff782 ("migration/block-active: Remove global active
> flag"), so reverting that commit might be an option as well, but waiting
> for the migration thread to finish before continuing with the further
> clean-ups during shutdown seems less intrusive.
>
> Note: I used the Claude AI assistant for analyzing the crash, and it
> came up with the idea of waiting for the migration thread to finish
> in migration_shutdown() before proceeding with the further clean-up,
> but the patch itself has been 100% written by myself.
It sounds like the migration thread does not hold block graph refcounts
and assumes the BlockDriverStates it uses have a long enough lifetime.
I don't know the migration code well enough to say whether joining in
migration_shutdown() is okay. Another option would be explicitly holding
the necessary refcounts in the migration thread.
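Purely as an illustration of that second option (the helper names and the
single-bs parameter are invented here, and I'm assuming the migration
thread knows which node it touches and holds the BQL around these calls),
something along the lines of:

static void migration_block_node_pin(BlockDriverState *bs)
{
    bdrv_ref(bs);    /* node stays alive until we drop this reference */
}

static void migration_block_node_unpin(BlockDriverState *bs)
{
    bdrv_unref(bs);  /* may delete the node if this was the last reference */
}

The migration thread would pin each node before first use and unpin it
once it is done, so a concurrent bdrv_close_all() cannot delete the node
underneath it.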
>
> Fixes: c2a189976e ("migration/block-active: Remove global active flag")
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
> migration/migration.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index b316ee01ab2..6f4bb6d8438 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -380,6 +380,16 @@ void migration_bh_schedule(QEMUBHFunc *cb, void *opaque)
> qemu_bh_schedule(bh);
> }
>
> +static void migration_thread_join(MigrationState *s)
> +{
> + if (s && s->migration_thread_running) {
> + bql_unlock();
> + qemu_thread_join(&s->thread);
> + s->migration_thread_running = false;
> + bql_lock();
> + }
> +}
> +
> void migration_shutdown(void)
> {
> /*
> @@ -393,6 +403,13 @@ void migration_shutdown(void)
> * stop the migration using this structure
> */
> migration_cancel();
> + /*
> + * Wait for migration thread to finish to prevent a possible race where
> + * the migration thread is still running and accessing host block drivers
> + * while the main cleanup proceeds to remove them in bdrv_close_all()
> + * later.
> + */
> + migration_thread_join(migrate_get_current());
> object_unref(OBJECT(current_migration));
>
> /*
> @@ -1499,12 +1516,7 @@ static void migration_cleanup(MigrationState *s)
>
> close_return_path_on_source(s);
>
> - if (s->migration_thread_running) {
> - bql_unlock();
> - qemu_thread_join(&s->thread);
> - s->migration_thread_running = false;
> - bql_lock();
> - }
> + migration_thread_join(s);
>
> WITH_QEMU_LOCK_GUARD(&s->qemu_file_lock) {
> /*
> --
> 2.52.0
>