From: Fabiano Rosas <farosas@suse.de>
To: Peter Xu <peterx@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [PATCH 1/5] migration: Fix use-after-free of migration state object
Date: Mon, 22 Jan 2024 13:55:45 -0300 [thread overview]
Message-ID: <87le8hgve6.fsf@suse.de> (raw)
In-Reply-To: <Za5BkH5au-5h0imh@x1n>
Peter Xu <peterx@redhat.com> writes:
> On Mon, Jan 22, 2024 at 05:49:01PM +0800, Peter Xu wrote:
>> On Fri, Jan 19, 2024 at 08:39:18PM -0300, Fabiano Rosas wrote:
>> > We're currently allowing the process_incoming_migration_bh bottom-half
>> > to run without holding a reference to the 'current_migration' object,
>> > which leads to a segmentation fault if the BH is still live after
>> > migration_shutdown() has dropped the last reference to
>> > current_migration.
>> >
>> > In my system the bug manifests as migrate_multifd() returning true
>> > when it shouldn't and multifd_load_shutdown() calling
>> > multifd_recv_terminate_threads() which crashes due to an uninitialized
>> > multifd_recv_state.
>> >
>> > Fix the issue by holding a reference to the object when scheduling the
>> > BH and dropping it before returning from the BH. The same is already
>> > done for the cleanup_bh at migrate_fd_cleanup_schedule().
>> >
>> > Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> > ---
>> > migration/migration.c | 2 ++
>> > 1 file changed, 2 insertions(+)
>> >
>> > diff --git a/migration/migration.c b/migration/migration.c
>> > index 219447dea1..cf17b68e57 100644
>> > --- a/migration/migration.c
>> > +++ b/migration/migration.c
>> > @@ -648,6 +648,7 @@ static void process_incoming_migration_bh(void *opaque)
>> > MIGRATION_STATUS_COMPLETED);
>> > qemu_bh_delete(mis->bh);
>> > migration_incoming_state_destroy();
>> > + object_unref(OBJECT(migrate_get_current()));
>> > }
>> >
>> > static void coroutine_fn
>> > @@ -713,6 +714,7 @@ process_incoming_migration_co(void *opaque)
>> > }
>> >
>> > mis->bh = qemu_bh_new(process_incoming_migration_bh, mis);
>> > + object_ref(OBJECT(migrate_get_current()));
>> > qemu_bh_schedule(mis->bh);
>> > return;
>> > fail:
>> > --
>> > 2.35.3
>> >
>>
> I know I missed something, but I'd better ask: use-after-free needs to
> happen only after migration_shutdown() / qemu_cleanup(), am I right?
>
> If so, shouldn't qemu_main_loop() have already returned? Then how could any BH
> keep running (including migration's) without qemu_main_loop()?
The aio_bh_poll() -> aio_bh_call() sequence doesn't depend on
qemu_main_loop(). In the stack you found below, it happens via
aio_wait_bh_oneshot() -> AIO_WAIT_WHILE -> aio_poll().
The stack I see is:
#0 0x00005625c97bffc0 in multifd_recv_terminate_threads (err=0x0) at ../migration/multifd.c:992
#1 0x00005625c97c0086 in multifd_load_shutdown () at ../migration/multifd.c:1012
#2 0x00005625c97b6163 in process_incoming_migration_bh (opaque=0x5625cbce59f0) at ../migration/migration.c:624
#3 0x00005625c9c740c2 in aio_bh_call (bh=0x5625cc9e1320) at ../util/async.c:169
#4 0x00005625c9c741de in aio_bh_poll (ctx=0x5625cba2a670) at ../util/async.c:216
here^
#5 0x00005625c9af0599 in bdrv_graph_wrunlock () at ../block/graph-lock.c:170
#6 0x00005625c9aba8bd in bdrv_close (bs=0x5625cbcb3d80) at ../block.c:5099
#7 0x00005625c9abb71a in bdrv_delete (bs=0x5625cbcb3d80) at ../block.c:5501
#8 0x00005625c9abe840 in bdrv_unref (bs=0x5625cbcb3d80) at ../block.c:7075
#9 0x00005625c9abe865 in bdrv_schedule_unref_bh (opaque=0x5625cbcb3d80) at ../block.c:7083
#10 0x00005625c9c740c2 in aio_bh_call (bh=0x5625cbde09d0) at ../util/async.c:169
#11 0x00005625c9c741de in aio_bh_poll (ctx=0x5625cba2a670) at ../util/async.c:216
#12 0x00005625c9af0599 in bdrv_graph_wrunlock () at ../block/graph-lock.c:170
#13 0x00005625c9ae05db in blk_remove_bs (blk=0x5625cbcc1070) at ../block/block-backend.c:907
#14 0x00005625c9adfb1b in blk_remove_all_bs () at ../block/block-backend.c:571
#15 0x00005625c9abab0d in bdrv_close_all () at ../block.c:5146
#16 0x00005625c97892e4 in qemu_cleanup (status=0) at ../system/runstate.c:880
#17 0x00005625c9a58081 in qemu_default_main () at ../system/main.c:38
#18 0x00005625c9a580af in main (argc=35, argv=0x7ffe30ab0578) at ../system/main.c:48
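At the bottom of that stack the BH dispatch is completely generic:
aio_bh_call() just invokes the callback, and nothing on that path cares
whether qemu_main_loop() is still running. Roughly (simplified from
util/async.c, reentrancy bookkeeping omitted):

void aio_bh_call(QEMUBH *bh)
{
    /* run the callback with the opaque pointer captured at qemu_bh_new() time */
    bh->cb(bh->opaque);
}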
> Hmm, I saw a pretty old stack mentioned in commit fd392cfa8e6:
>
> Original output:
> qemu-system-x86_64: terminating on signal 15 from pid 31980 (<unknown process>)
> =================================================================
> ==31958==ERROR: AddressSanitizer: heap-use-after-free on address 0x61900001d210
> at pc 0x555558a535ca bp 0x7fffffffb190 sp 0x7fffffffb188
> READ of size 8 at 0x61900001d210 thread T0 (qemu-vm-0)
> #0 0x555558a535c9 in migrate_fd_cleanup migration/migration.c:1502:23
> #1 0x5555594fde0a in aio_bh_call util/async.c:90:5
> #2 0x5555594fe522 in aio_bh_poll util/async.c:118:13
> #3 0x555559524783 in aio_poll util/aio-posix.c:725:17
> #4 0x555559504fb3 in aio_wait_bh_oneshot util/aio-wait.c:71:5
> #5 0x5555573bddf6 in virtio_blk_data_plane_stop
> hw/block/dataplane/virtio-blk.c:282:5
> #6 0x5555589d5c09 in virtio_bus_stop_ioeventfd hw/virtio/virtio-bus.c:246:9
> #7 0x5555589e9917 in virtio_pci_stop_ioeventfd hw/virtio/virtio-pci.c:287:5
> #8 0x5555589e22bf in virtio_pci_vmstate_change hw/virtio/virtio-pci.c:1072:9
> #9 0x555557628931 in virtio_vmstate_change hw/virtio/virtio.c:2257:9
> #10 0x555557c36713 in vm_state_notify vl.c:1605:9
> #11 0x55555716ef53 in do_vm_stop cpus.c:1074:9
> #12 0x55555716eeff in vm_shutdown cpus.c:1092:12
> #13 0x555557c4283e in main vl.c:4617:5
> #14 0x7fffdfdb482f in __libc_start_main
> (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
> #15 0x555556ecb118 in _start (x86_64-softmmu/qemu-system-x86_64+0x1977118)
>
> Would that be the same case that you mentioned here? As vm_shutdown() is
> indeed after migration_shutdown().
>
> Even if so, two follow up comments..
>
> (1) How did it help if process_incoming_migration_bh() took ref on
> MigrationState*? I didn't even see it touching the object (instead, it
> uses the incoming object)?
We touch MigrationState every time we check for a capability. See the
stack I posted above: process_incoming_migration_bh() ->
multifd_load_shutdown().
void multifd_load_shutdown(void)
{
if (migrate_multifd()) { <-- HERE
multifd_recv_terminate_threads(NULL);
}
}
The bug reproduces even *without* multifd enabled: because of the
use-after-free, that check incorrectly passes and we go into multifd
code that was never initialized.
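For reference, the capability check reads straight out of the global
MigrationState, so once current_migration has been freed it is reading
freed memory and can return anything. Roughly (paraphrased from
migration/options.c, may not match your tree exactly):

bool migrate_multifd(void)
{
    MigrationState *s = migrate_get_current(); /* i.e. current_migration */

    /* 's' points at freed memory in the bug scenario */
    return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
}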
side note: we should probably introduce a MigrationOutgoingState to pair
with MigrationIncomingState and have both inside a global MigrationState
that contains the common elements. If you agree I can add this to our
todo list.
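Something along these lines, just to illustrate the shape (all names
here are placeholders, nothing like this exists yet):

struct MigrationState {
    /* common elements: capabilities, parameters, status, ... */
    MigrationOutgoingState outgoing;  /* hypothetical */
    MigrationIncomingState incoming;
};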
> (2) This is what I'm just wondering.. on whether we should clear
> current_migration to NULL in migration_shutdown() after we unref it.
> Maybe it'll make such issues abort in an even clearer way.
It hits the assert at migrate_get_current():
#4 0x00005643006e22ae in migrate_get_current () at ../migration/migration.c:246
#5 0x00005643006f0415 in migrate_late_block_activate () at ../migration/options.c:275
#6 0x00005643006e30e0 in process_incoming_migration_bh (opaque=0x564303b279f0) at ../migration/migration.c:603
#7 0x0000564300ba10cd in aio_bh_call (bh=0x564304823320) at ../util/async.c:169
#8 0x0000564300ba11e9 in aio_bh_poll (ctx=0x56430386c670) at ../util/async.c:216
...
#20 0x00005643006b62e4 in qemu_cleanup (status=0) at ../system/runstate.c:880
#21 0x000056430098508c in qemu_default_main () at ../system/main.c:38
#22 0x00005643009850ba in main (argc=35, argv=0x7ffc8bf703c8) at ../system/main.c:48
Note that this breaks at migrate_late_block_activate(), which is even
earlier than the bug scenario at multifd_load_shutdown().
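For clarity, the experiment was just this (a sketch of the idea, not
the actual diff I tested):

void migration_shutdown(void)
{
    /* ... existing cleanup ... */
    object_unref(OBJECT(current_migration));
    current_migration = NULL; /* late users now trip the assert below */
}

with migrate_get_current() already doing:

MigrationState *migrate_get_current(void)
{
    assert(current_migration);
    return current_migration;
}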
However, we cannot set it to NULL currently because the BHs are still
running after migration_shutdown(). I don't know of a safe way to
cancel/delete a BH once it has (potentially) already been scheduled.