* [PULL 1/2] coroutine-ucontext: Save fake stack for pooled coroutine
From: Stefan Hajnoczi @ 2024-01-22 16:01 UTC (permalink / raw)
To: qemu-devel
Cc: Hanna Reitz, Stefan Hajnoczi, Peter Maydell, qemu-block,
Fam Zheng, Kevin Wolf, Akihiko Odaki, Marc-André Lureau
From: Akihiko Odaki <akihiko.odaki@daynix.com>
A coroutine may be pooled even after COROUTINE_TERMINATE if
CONFIG_COROUTINE_POOL is enabled, and the fake stack should be saved in
such a case to keep AddressSanitizer's use-after-return detection
working. Even worse, I'm seeing stack corruption without the fake stack
being saved.
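For readers unfamiliar with the interface involved, below is a minimal
sketch of the ASan fiber-switch protocol that the patch builds on. This
is not QEMU code; the struct and helper names are invented for the
illustration, and only the two __sanitizer_* functions are the real API
from <sanitizer/common_interface_defs.h>. It shows why a coroutine that
may be resumed later (e.g. because it goes back into a pool) must pass a
non-NULL fake-stack pointer:

```c
/* Minimal illustration of the ASan fiber-switch calls (hypothetical
 * names, compile with -fsanitize=address). */
#include <stdbool.h>
#include <stddef.h>
#include <sanitizer/common_interface_defs.h>

struct fiber {
    void *stack;          /* base of the fiber's real stack */
    size_t stack_size;
    void *fake_stack;     /* handle to ASan's fake stack for this fiber */
};

/* Call right before transferring control to 'to'. */
static void fiber_pre_switch(struct fiber *from, struct fiber *to,
                             bool from_is_gone_for_good)
{
    /*
     * Passing NULL destroys the current fiber's fake stack; passing
     * &from->fake_stack preserves it so the fiber can be resumed later.
     * A pooled coroutine will be resumed, so its fake stack must be
     * saved even when the switch action is COROUTINE_TERMINATE.
     */
    __sanitizer_start_switch_fiber(
        from_is_gone_for_good ? NULL : &from->fake_stack,
        to->stack, to->stack_size);
}

/* Call first thing after control has arrived back on this fiber. */
static void fiber_post_switch(struct fiber *self)
{
    __sanitizer_finish_switch_fiber(self->fake_stack, NULL, NULL);
}
```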
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20240117-asan-v2-1-26f9e1ea6e72@daynix.com>
---
util/coroutine-ucontext.c | 35 ++++++++++++++++++++++++++---------
1 file changed, 26 insertions(+), 9 deletions(-)
diff --git a/util/coroutine-ucontext.c b/util/coroutine-ucontext.c
index 7b304c79d9..8ef603d081 100644
--- a/util/coroutine-ucontext.c
+++ b/util/coroutine-ucontext.c
@@ -119,13 +119,11 @@ void finish_switch_fiber(void *fake_stack_save)
/* always_inline is required to avoid TSan runtime fatal errors. */
static inline __attribute__((always_inline))
-void start_switch_fiber_asan(CoroutineAction action, void **fake_stack_save,
+void start_switch_fiber_asan(void **fake_stack_save,
const void *bottom, size_t size)
{
#ifdef CONFIG_ASAN
- __sanitizer_start_switch_fiber(
- action == COROUTINE_TERMINATE ? NULL : fake_stack_save,
- bottom, size);
+ __sanitizer_start_switch_fiber(fake_stack_save, bottom, size);
#endif
}
@@ -165,7 +163,7 @@ static void coroutine_trampoline(int i0, int i1)
if (!sigsetjmp(self->env, 0)) {
CoroutineUContext *leaderp = get_ptr_leader();
- start_switch_fiber_asan(COROUTINE_YIELD, &fake_stack_save,
+ start_switch_fiber_asan(&fake_stack_save,
leaderp->stack, leaderp->stack_size);
start_switch_fiber_tsan(&fake_stack_save, self, true); /* true=caller */
siglongjmp(*(sigjmp_buf *)co->entry_arg, 1);
@@ -226,8 +224,7 @@ Coroutine *qemu_coroutine_new(void)
/* swapcontext() in, siglongjmp() back out */
if (!sigsetjmp(old_env, 0)) {
- start_switch_fiber_asan(COROUTINE_YIELD, &fake_stack_save, co->stack,
- co->stack_size);
+ start_switch_fiber_asan(&fake_stack_save, co->stack, co->stack_size);
start_switch_fiber_tsan(&fake_stack_save,
co, false); /* false=not caller */
@@ -269,10 +266,28 @@ static inline void valgrind_stack_deregister(CoroutineUContext *co)
#endif
#endif
+#if defined(CONFIG_ASAN) && defined(CONFIG_COROUTINE_POOL)
+static void coroutine_fn terminate_asan(void *opaque)
+{
+ CoroutineUContext *to = DO_UPCAST(CoroutineUContext, base, opaque);
+
+ set_current(opaque);
+ start_switch_fiber_asan(NULL, to->stack, to->stack_size);
+ G_STATIC_ASSERT(!IS_ENABLED(CONFIG_TSAN));
+ siglongjmp(to->env, COROUTINE_ENTER);
+}
+#endif
+
void qemu_coroutine_delete(Coroutine *co_)
{
CoroutineUContext *co = DO_UPCAST(CoroutineUContext, base, co_);
+#if defined(CONFIG_ASAN) && defined(CONFIG_COROUTINE_POOL)
+ co_->entry_arg = qemu_coroutine_self();
+ co_->entry = terminate_asan;
+ qemu_coroutine_switch(co_->entry_arg, co_, COROUTINE_ENTER);
+#endif
+
#ifdef CONFIG_VALGRIND_H
valgrind_stack_deregister(co);
#endif
@@ -305,8 +320,10 @@ qemu_coroutine_switch(Coroutine *from_, Coroutine *to_,
ret = sigsetjmp(from->env, 0);
if (ret == 0) {
- start_switch_fiber_asan(action, &fake_stack_save, to->stack,
- to->stack_size);
+ start_switch_fiber_asan(IS_ENABLED(CONFIG_COROUTINE_POOL) ||
+ action != COROUTINE_TERMINATE ?
+ &fake_stack_save : NULL,
+ to->stack, to->stack_size);
start_switch_fiber_tsan(&fake_stack_save,
to, false); /* false=not caller */
siglongjmp(to->env, action);
--
2.43.0
* [PULL 2/2] block/io: clear BDRV_BLOCK_RECURSE flag after recursing in bdrv_co_block_status
From: Stefan Hajnoczi @ 2024-01-22 16:01 UTC (permalink / raw)
To: qemu-devel
Cc: Hanna Reitz, Stefan Hajnoczi, Peter Maydell, qemu-block,
Fam Zheng, Kevin Wolf, Fiona Ebner, Vladimir Sementsov-Ogievskiy
From: Fiona Ebner <f.ebner@proxmox.com>
Using fleecing backup like in [0] on a qcow2 image (with metadata
preallocation) can lead to the following assertion failure:
> bdrv_co_do_block_status: Assertion `!(ret & BDRV_BLOCK_ZERO)' failed.
In the reproducer [0], it happens because the BDRV_BLOCK_RECURSE flag
will be set by the qcow2 driver, so the caller will recursively check
the file child. Then the BDRV_BLOCK_ZERO flag will be set too. Later up the call
chain, in bdrv_co_do_block_status() for the snapshot-access driver,
the assertion failure will happen, because both flags are set.
To fix it, clear the recurse flag after the recursive check was done.
In detail:
> #0 qcow2_co_block_status
Returns 0x45 = BDRV_BLOCK_RECURSE | BDRV_BLOCK_DATA |
BDRV_BLOCK_OFFSET_VALID.
> #1 bdrv_co_do_block_status
Because of the data flag, bdrv_co_do_block_status() will now also set
BDRV_BLOCK_ALLOCATED. Because of the recurse flag,
bdrv_co_do_block_status() for the bdrv_file child will be called,
which returns 0x16 = BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_OFFSET_VALID |
BDRV_BLOCK_ZERO. Now the return value inherits the zero flag.
Returns 0x57 = BDRV_BLOCK_RECURSE | BDRV_BLOCK_DATA |
BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_ZERO.
> #2 bdrv_co_common_block_status_above
> #3 bdrv_co_block_status_above
> #4 bdrv_co_block_status
> #5 cbw_co_snapshot_block_status
> #6 bdrv_co_snapshot_block_status
> #7 snapshot_access_co_block_status
> #8 bdrv_co_do_block_status
Return value is propagated all the way up to here, where the assertion
failure happens, because BDRV_BLOCK_RECURSE and BDRV_BLOCK_ZERO are
both set.
> #9 bdrv_co_common_block_status_above
> #10 bdrv_co_block_status_above
> #11 block_copy_block_status
> #12 block_copy_dirty_clusters
> #13 block_copy_common
> #14 block_copy_async_co_entry
> #15 coroutine_trampoline
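For convenience, the flag arithmetic in frames #0 and #1 can be checked
in isolation. The sketch below is an illustration rather than QEMU code;
the bit values are reproduced from the hex numbers quoted above (0x45,
0x16, 0x57):

```c
/* Standalone check of the flag arithmetic described in the trace. */
#include <assert.h>
#include <stdio.h>

#define BDRV_BLOCK_DATA         0x01
#define BDRV_BLOCK_ZERO         0x02
#define BDRV_BLOCK_OFFSET_VALID 0x04
#define BDRV_BLOCK_ALLOCATED    0x10
#define BDRV_BLOCK_RECURSE      0x40

int main(void)
{
    /* #0: what qcow2_co_block_status() returns */
    int ret = BDRV_BLOCK_RECURSE | BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
    assert(ret == 0x45);

    /* #1: the file child reports 0x16 ... */
    int ret2 = BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_ZERO;
    assert(ret2 == 0x16);

    /* ... bdrv_co_do_block_status() adds ALLOCATED and inherits ZERO */
    ret |= BDRV_BLOCK_ALLOCATED;
    ret |= ret2 & BDRV_BLOCK_ZERO;
    assert(ret == 0x57);

    /* #8: without the fix, RECURSE and ZERO are both still set here,
     * which is exactly what the assertion forbids */
    printf("RECURSE && ZERO both set: %d\n",
           (ret & BDRV_BLOCK_RECURSE) && (ret & BDRV_BLOCK_ZERO));

    /* the fix: drop RECURSE once the recursive check is done */
    ret &= ~BDRV_BLOCK_RECURSE;
    assert(ret == 0x17);
    return 0;
}
```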
[0]:
> #!/bin/bash
> rm /tmp/disk.qcow2
> ./qemu-img create /tmp/disk.qcow2 -o preallocation=metadata -f qcow2 1G
> ./qemu-img create /tmp/fleecing.qcow2 -f qcow2 1G
> ./qemu-img create /tmp/backup.qcow2 -f qcow2 1G
> ./qemu-system-x86_64 --qmp stdio \
> --blockdev qcow2,node-name=node0,file.driver=file,file.filename=/tmp/disk.qcow2 \
> --blockdev qcow2,node-name=node1,file.driver=file,file.filename=/tmp/fleecing.qcow2 \
> --blockdev qcow2,node-name=node2,file.driver=file,file.filename=/tmp/backup.qcow2 \
> <<EOF
> {"execute": "qmp_capabilities"}
> {"execute": "blockdev-add", "arguments": { "driver": "copy-before-write", "file": "node0", "target": "node1", "node-name": "node3" } }
> {"execute": "blockdev-add", "arguments": { "driver": "snapshot-access", "file": "node3", "node-name": "snap0" } }
> {"execute": "blockdev-backup", "arguments": { "device": "snap0", "target": "node1", "sync": "full", "job-id": "backup0" } }
> EOF
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-id: 20240116154839.401030-1-f.ebner@proxmox.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/io.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/block/io.c b/block/io.c
index 8fa7670571..33150c0359 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2584,6 +2584,16 @@ bdrv_co_do_block_status(BlockDriverState *bs, bool want_zero,
ret |= (ret2 & BDRV_BLOCK_ZERO);
}
}
+
+ /*
+ * Now that the recursive search was done, clear the flag. Otherwise,
+ * with more complicated block graphs like snapshot-access ->
+ * copy-before-write -> qcow2, where the return value will be propagated
+ * further up to a parent bdrv_co_do_block_status() call, both the
+ * BDRV_BLOCK_RECURSE and BDRV_BLOCK_ZERO flags would be set, which is
+ * not allowed.
+ */
+ ret &= ~BDRV_BLOCK_RECURSE;
}
out:
--
2.43.0
* Re: [PULL 0/2] Block patches
From: Peter Maydell @ 2024-01-25 15:11 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: qemu-devel, Hanna Reitz, qemu-block, Fam Zheng, Kevin Wolf
On Mon, 22 Jan 2024 at 16:01, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
> The following changes since commit 09be34717190c1620f0c6e5c8765b8da354aeb4b:
>
> Merge tag 'pull-request-2024-01-19' of https://gitlab.com/thuth/qemu into staging (2024-01-20 17:22:16 +0000)
>
> are available in the Git repository at:
>
> https://gitlab.com/stefanha/qemu.git tags/block-pull-request
>
> for you to fetch changes up to 8a9be7992426c8920d4178e7dca59306a18c7a3a:
>
> block/io: clear BDRV_BLOCK_RECURSE flag after recursing in bdrv_co_block_status (2024-01-22 11:00:12 -0500)
>
> ----------------------------------------------------------------
> Pull request
>
> ----------------------------------------------------------------
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/9.0
for any user-visible changes.
-- PMM
* Re: [PULL 0/2] Block patches
From: Michael Tokarev @ 2024-01-25 15:47 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel
Cc: Hanna Reitz, Peter Maydell, qemu-block, Fam Zheng, Kevin Wolf,
Akihiko Odaki
On 22.01.2024 19:01, Stefan Hajnoczi wrote:
> Akihiko Odaki (1):
> coroutine-ucontext: Save fake stack for pooled coroutine
>
> Fiona Ebner (1):
> block/io: clear BDRV_BLOCK_RECURSE flag after recursing in
> bdrv_co_block_status
These also look like -stable material, both of the changes.
Please let me know if it's not.
Thanks,
/mjt