From: Fabiano Rosas
To: Peter Xu
Cc: qemu-devel@nongnu.org
Subject: Re: [PATCH 1/5] migration: Fix use-after-free of migration state object
In-Reply-To:
References: <20240119233922.32588-1-farosas@suse.de> <20240119233922.32588-2-farosas@suse.de>
Date: Mon, 22 Jan 2024 13:55:45 -0300
Message-ID: <87le8hgve6.fsf@suse.de>

Peter Xu writes:

> On Mon, Jan 22, 2024 at 05:49:01PM +0800, Peter Xu wrote:
>> On Fri, Jan 19, 2024 at 08:39:18PM -0300, Fabiano Rosas wrote:
>> > We're currently allowing the process_incoming_migration_bh bottom-half
>> > to run without holding a reference to the 'current_migration' object,
>> > which leads to a segmentation fault if the BH is still live after
>> > migration_shutdown() has dropped the last reference to
>> > current_migration.
>> >
>> > In my system the bug manifests as migrate_multifd() returning true
>> > when it shouldn't and multifd_load_shutdown() calling
>> > multifd_recv_terminate_threads(), which crashes due to an uninitialized
>> > multifd_recv_state.
>> >
>> > Fix the issue by holding a reference to the object when scheduling the
>> > BH and dropping it before returning from the BH. The same is already
>> > done for the cleanup_bh at migrate_fd_cleanup_schedule().
>> >
>> > Signed-off-by: Fabiano Rosas
>> > ---
>> >  migration/migration.c | 2 ++
>> >  1 file changed, 2 insertions(+)
>> >
>> > diff --git a/migration/migration.c b/migration/migration.c
>> > index 219447dea1..cf17b68e57 100644
>> > --- a/migration/migration.c
>> > +++ b/migration/migration.c
>> > @@ -648,6 +648,7 @@ static void process_incoming_migration_bh(void *opaque)
>> >                        MIGRATION_STATUS_COMPLETED);
>> >      qemu_bh_delete(mis->bh);
>> >      migration_incoming_state_destroy();
>> > +    object_unref(OBJECT(migrate_get_current()));
>> >  }
>> >
>> >  static void coroutine_fn
>> > @@ -713,6 +714,7 @@ process_incoming_migration_co(void *opaque)
>> >      }
>> >
>> >      mis->bh = qemu_bh_new(process_incoming_migration_bh, mis);
>> > +    object_ref(OBJECT(migrate_get_current()));
>> >      qemu_bh_schedule(mis->bh);
>> >      return;
>> >  fail:
>> > --
>> > 2.35.3
>> >
>
> I know I missed something, but I'd better ask: the use-after-free needs to
> happen only after migration_shutdown() / qemu_cleanup(), am I right?
>
> If so, shouldn't qemu_main_loop() have already returned? Then how could any
> BH keep running (including migration's) without qemu_main_loop()?

The aio_bh_poll() -> aio_bh_call() sequence doesn't depend on
qemu_main_loop(). In the stack you found below it happens after
aio_wait_bh_oneshot() -> AIO_WAIT_WHILE -> aio_poll(). The stack I see is:

#0  0x00005625c97bffc0 in multifd_recv_terminate_threads (err=0x0) at ../migration/multifd.c:992
#1  0x00005625c97c0086 in multifd_load_shutdown () at ../migration/multifd.c:1012
#2  0x00005625c97b6163 in process_incoming_migration_bh (opaque=0x5625cbce59f0) at ../migration/migration.c:624
#3  0x00005625c9c740c2 in aio_bh_call (bh=0x5625cc9e1320) at ../util/async.c:169
#4  0x00005625c9c741de in aio_bh_poll (ctx=0x5625cba2a670) at ../util/async.c:216
              here^
#5  0x00005625c9af0599 in bdrv_graph_wrunlock () at ../block/graph-lock.c:170
#6  0x00005625c9aba8bd in bdrv_close (bs=0x5625cbcb3d80) at ../block.c:5099
#7  0x00005625c9abb71a in bdrv_delete (bs=0x5625cbcb3d80) at ../block.c:5501
#8  0x00005625c9abe840 in bdrv_unref (bs=0x5625cbcb3d80) at ../block.c:7075
#9  0x00005625c9abe865 in bdrv_schedule_unref_bh (opaque=0x5625cbcb3d80) at ../block.c:7083
#10 0x00005625c9c740c2 in aio_bh_call (bh=0x5625cbde09d0) at ../util/async.c:169
#11 0x00005625c9c741de in aio_bh_poll (ctx=0x5625cba2a670) at ../util/async.c:216
#12 0x00005625c9af0599 in bdrv_graph_wrunlock () at ../block/graph-lock.c:170
#13 0x00005625c9ae05db in blk_remove_bs (blk=0x5625cbcc1070) at ../block/block-backend.c:907
#14 0x00005625c9adfb1b in blk_remove_all_bs () at ../block/block-backend.c:571
#15 0x00005625c9abab0d in bdrv_close_all () at ../block.c:5146
#16 0x00005625c97892e4 in qemu_cleanup (status=0) at ../system/runstate.c:880
#17 0x00005625c9a58081 in qemu_default_main () at ../system/main.c:38
#18 0x00005625c9a580af in main (argc=35, argv=0x7ffe30ab0578) at ../system/main.c:48
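If it helps, here's a stripped-down toy model of that point -- none of
this is actual QEMU code (the "BH list" is a single slot and every name
here is made up); it just shows that a scheduled BH fires whenever
something polls the context, main loop or not:

#include <stdio.h>

typedef struct BH {
    void (*cb)(void *);
    void *opaque;
    int scheduled;
} BH;

static BH pending;  /* toy stand-in for the AioContext's BH list */

static void bh_schedule(BH *bh)
{
    pending = *bh;
    pending.scheduled = 1;
}

/* toy stand-in for aio_bh_poll(): reachable from aio_poll(), which
 * cleanup code such as bdrv_close_all() still ends up calling after
 * the main loop has returned */
static void aio_bh_poll_model(void)
{
    if (pending.scheduled) {
        pending.scheduled = 0;
        pending.cb(pending.opaque);  /* the aio_bh_call() step */
    }
}

static void migration_bh(void *opaque)
{
    printf("BH ran %s\n", (char *)opaque);
}

int main(void)
{
    BH bh = { migration_bh, "after the main loop exited", 0 };
    bh_schedule(&bh);
    /* "qemu_main_loop()" has already returned at this point... */
    aio_bh_poll_model();  /* ...yet the BH still runs */
    return 0;
}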
> Hmm, I saw a pretty old stack mentioned in commit fd392cfa8e6:
>
> Original output:
> qemu-system-x86_64: terminating on signal 15 from pid 31980 ()
> =================================================================
> ==31958==ERROR: AddressSanitizer: heap-use-after-free on address 0x61900001d210
>   at pc 0x555558a535ca bp 0x7fffffffb190 sp 0x7fffffffb188
> READ of size 8 at 0x61900001d210 thread T0 (qemu-vm-0)
>     #0 0x555558a535c9 in migrate_fd_cleanup migration/migration.c:1502:23
>     #1 0x5555594fde0a in aio_bh_call util/async.c:90:5
>     #2 0x5555594fe522 in aio_bh_poll util/async.c:118:13
>     #3 0x555559524783 in aio_poll util/aio-posix.c:725:17
>     #4 0x555559504fb3 in aio_wait_bh_oneshot util/aio-wait.c:71:5
>     #5 0x5555573bddf6 in virtio_blk_data_plane_stop hw/block/dataplane/virtio-blk.c:282:5
>     #6 0x5555589d5c09 in virtio_bus_stop_ioeventfd hw/virtio/virtio-bus.c:246:9
>     #7 0x5555589e9917 in virtio_pci_stop_ioeventfd hw/virtio/virtio-pci.c:287:5
>     #8 0x5555589e22bf in virtio_pci_vmstate_change hw/virtio/virtio-pci.c:1072:9
>     #9 0x555557628931 in virtio_vmstate_change hw/virtio/virtio.c:2257:9
>     #10 0x555557c36713 in vm_state_notify vl.c:1605:9
>     #11 0x55555716ef53 in do_vm_stop cpus.c:1074:9
>     #12 0x55555716eeff in vm_shutdown cpus.c:1092:12
>     #13 0x555557c4283e in main vl.c:4617:5
>     #14 0x7fffdfdb482f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f)
>     #15 0x555556ecb118 in _start (x86_64-softmmu/qemu-system-x86_64+0x1977118)
>
> Would that be the same case that you mentioned here? As vm_shutdown() is
> indeed after migration_shutdown().
>
> Even if so, two follow-up comments..
>
> (1) How did it help if process_incoming_migration_bh() took a ref on
> MigrationState*? I didn't even see it touching the object (instead, it
> uses the incoming object)?

We touch MigrationState every time we check for a capability. See the
stack I posted above: process_incoming_migration_bh() ->
multifd_load_shutdown().

void multifd_load_shutdown(void)
{
    if (migrate_multifd()) {    <-- HERE
        multifd_recv_terminate_threads(NULL);
    }
}

The bug reproduces *without* multifd, because that check passes and we
go into multifd code that has not been initialized.

Side note: we should probably introduce a MigrationOutgoingState to pair
with MigrationIncomingState and have both inside a global MigrationState
that contains the common elements. If you agree I can add this to our
todo list.
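Roughly what I have in mind -- a standalone sketch, where
MigrationOutgoingState is hypothetical and every field is a made-up
placeholder, not the real contents of these structs:

#include <stdbool.h>

/* placeholder: state only the incoming side needs */
typedef struct MigrationIncomingState {
    bool transport_cleanup_pending;
} MigrationIncomingState;

/* hypothetical new type: state only the outgoing side needs */
typedef struct MigrationOutgoingState {
    bool cleanup_bh_scheduled;
} MigrationOutgoingState;

/* the single global object holding the common elements */
typedef struct MigrationState {
    bool multifd_enabled;  /* capabilities/parameters read by both sides */
    MigrationIncomingState incoming;
    MigrationOutgoingState outgoing;
} MigrationState;

That way capability checks such as migrate_multifd() would only ever
touch the common part, and each direction's state could get its own
setup and teardown.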
> (2) This is what I'm just wondering: whether we should clear
> current_migration to NULL in migration_shutdown() after we unref it.
> Maybe it'll make such issues abort in an even clearer way.

It hits the assert at migrate_get_current():

#4  0x00005643006e22ae in migrate_get_current () at ../migration/migration.c:246
#5  0x00005643006f0415 in migrate_late_block_activate () at ../migration/options.c:275
#6  0x00005643006e30e0 in process_incoming_migration_bh (opaque=0x564303b279f0) at ../migration/migration.c:603
#7  0x0000564300ba10cd in aio_bh_call (bh=0x564304823320) at ../util/async.c:169
#8  0x0000564300ba11e9 in aio_bh_poll (ctx=0x56430386c670) at ../util/async.c:216
...
#20 0x00005643006b62e4 in qemu_cleanup (status=0) at ../system/runstate.c:880
#21 0x000056430098508c in qemu_default_main () at ../system/main.c:38
#22 0x00005643009850ba in main (argc=35, argv=0x7ffc8bf703c8) at ../system/main.c:48

Note that this breaks at migrate_late_block_activate(), which is even
earlier than the bug scenario at multifd_load_shutdown(). However, we
cannot set it to NULL currently because the BHs are still running after
migration_shutdown(). I don't know of a safe way to cancel/delete a BH
after it has (potentially) been scheduled already.
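For reference, roughly what (2) would look like -- an abbreviated sketch
against the real functions, not a working patch, precisely because BHs
scheduled before shutdown can still run afterwards and would then hit
the assert even on the normal incoming path:

/* sketch only: the existing shutdown steps are elided */
void migration_shutdown(void)
{
    /* ... cancel migration, free resources ... */
    object_unref(OBJECT(current_migration));
    current_migration = NULL;  /* make any later access abort loudly */
}

MigrationState *migrate_get_current(void)
{
    /* the assert the stack above trips; a stale BH would now fail
     * here instead of silently reading freed memory */
    assert(current_migration);
    return current_migration;
}

If we ever get a safe way to cancel the BH (or guarantee it holds a
reference, as this patch does), clearing the pointer would turn any
remaining lifetime bug into an immediate, debuggable abort.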