* [PATCH for-8.2 0/4] block: Fix deadlocks with the stream job
@ 2023-11-15 17:20 Kevin Wolf
From: Kevin Wolf @ 2023-11-15 17:20 UTC
To: qemu-block; +Cc: kwolf, hreitz, stefanha, qemu-devel
This series contains three fixes for deadlocks that follow the same
pattern: a nested event loop in the main thread waits for an iothread
to make progress, but the main loop still holds that iothread's
AioContext lock, so the iothread can never make progress.

We're planning to fully remove the AioContext lock in 9.0, which would
automatically get rid of this kind of bug, but the lock is still there
in 8.2, so let's fix the deadlocks individually for this release.
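In pseudo-code, the pattern looks roughly like this (a simplified
sketch, not actual QEMU code; iothread_ctx is a placeholder for the
iothread's AioContext):

    /* Main thread, while holding the iothread's AioContext lock: */
    aio_context_acquire(iothread_ctx);
    ...
    /* Nested event loop, e.g. while draining or waiting for a job: */
    while (!done) {
        aio_poll(qemu_get_aio_context(), true);
        /*
         * done can only become true after the iothread has run its
         * handlers, but the iothread is blocked in
         * aio_context_acquire(iothread_ctx) -> deadlock.
         */
    }
    aio_context_release(iothread_ctx);    /* never reached */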
Kevin Wolf (4):
  block: Fix bdrv_graph_wrlock() call in blk_remove_bs()
  block: Fix deadlocks in bdrv_graph_wrunlock()
  stream: Fix AioContext locking during bdrv_graph_wrlock()
  iotests: Test two stream jobs in a single iothread
 include/block/graph-lock.h                    | 15 +++-
 block.c                                       | 26 +++----
 block/backup.c                                |  2 +-
 block/blklogwrites.c                          |  4 +-
 block/blkverify.c                             |  2 +-
 block/block-backend.c                         | 10 ++-
 block/commit.c                                | 10 +--
 block/graph-lock.c                            | 23 +++++-
 block/mirror.c                                | 14 ++--
 block/qcow2.c                                 |  2 +-
 block/quorum.c                                |  4 +-
 block/replication.c                           | 10 +--
 block/snapshot.c                              |  2 +-
 block/stream.c                                | 10 +--
 block/vmdk.c                                  | 10 +--
 blockdev.c                                    |  4 +-
 blockjob.c                                    |  8 +-
 tests/unit/test-bdrv-drain.c                  | 20 ++---
 tests/unit/test-bdrv-graph-mod.c              | 10 +--
 scripts/block-coroutine-wrapper.py            |  2 +-
 tests/qemu-iotests/tests/iothreads-stream     | 73 +++++++++++++++++++
 tests/qemu-iotests/tests/iothreads-stream.out | 11 +++
 22 files changed, 197 insertions(+), 75 deletions(-)
 create mode 100755 tests/qemu-iotests/tests/iothreads-stream
 create mode 100644 tests/qemu-iotests/tests/iothreads-stream.out
--
2.41.0
* [PATCH for-8.2 1/4] block: Fix bdrv_graph_wrlock() call in blk_remove_bs()
From: Kevin Wolf @ 2023-11-15 17:20 UTC
To: qemu-block; +Cc: kwolf, hreitz, stefanha, qemu-devel
While not all callers of blk_remove_bs() are correct in this respect,
the assumption in the function is that callers hold the AioContext lock
of the BlockBackend (this is required by the drain calls in it).

To avoid a deadlock in the nested event loop, bdrv_graph_wrlock() must
therefore be called with the root BlockDriverState as its parameter
instead of NULL, so that this AioContext lock is temporarily dropped.
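For illustration, the locking contract this assumes on the caller side
looks roughly like this (a sketch only, error handling omitted;
blk_get_aio_context() is just the usual way to obtain the lock to take):

    AioContext *ctx = blk_get_aio_context(blk);

    aio_context_acquire(ctx);
    blk_remove_bs(blk);    /* polls; with this fix, ctx is dropped meanwhile */
    aio_context_release(ctx);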
Fixes: https://issues.redhat.com/browse/RHEL-1761
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/block-backend.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 4053134781..f6f05ead28 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -882,6 +882,8 @@ BlockBackend *blk_by_public(BlockBackendPublic *public)
 
 /*
  * Disassociates the currently associated BlockDriverState from @blk.
+ *
+ * The caller must hold the AioContext lock for the BlockBackend.
  */
 void blk_remove_bs(BlockBackend *blk)
 {
@@ -916,7 +918,7 @@ void blk_remove_bs(BlockBackend *blk)
     root = blk->root;
     blk->root = NULL;
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock(root->bs);
     bdrv_root_unref_child(root);
     bdrv_graph_wrunlock();
 }
--
2.41.0
* [PATCH for-8.2 2/4] block: Fix deadlocks in bdrv_graph_wrunlock()
From: Kevin Wolf @ 2023-11-15 17:20 UTC
To: qemu-block; +Cc: kwolf, hreitz, stefanha, qemu-devel
bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that
start a nested event loop. Nested event loops can depend on other
iothreads making progress, so bdrv_graph_wrunlock() must not hold the
AioContext lock of another thread while calling aio_poll().

This introduces a @bs parameter to bdrv_graph_wrunlock() whose
AioContext lock is temporarily released (which matches
bdrv_graph_wrlock()), and a bdrv_graph_wrunlock_ctx() that can be used
if the BlockDriverState doesn't necessarily exist any more when
unlocking.
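The intended usage looks roughly like this (a sketch distilled from the
hunks below, with the blk_remove_bs() change as the model for the _ctx
variant):

    /* Normal case: the node is still there when unlocking */
    bdrv_graph_wrlock(bs);
    /* ... modify the graph ... */
    bdrv_graph_wrunlock(bs);      /* releases bs's AioContext around aio_poll() */

    /* If the node may be gone by unlock time, save its context first */
    AioContext *ctx = bdrv_get_aio_context(root->bs);

    bdrv_graph_wrlock(root->bs);
    bdrv_root_unref_child(root);  /* may delete root->bs */
    bdrv_graph_wrunlock_ctx(ctx);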
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/graph-lock.h         | 15 ++++++++++++++-
 block.c                            | 26 +++++++++++++-------------
 block/backup.c                     |  2 +-
 block/blklogwrites.c               |  4 ++--
 block/blkverify.c                  |  2 +-
 block/block-backend.c              |  6 ++++--
 block/commit.c                     | 10 +++++-----
 block/graph-lock.c                 | 23 ++++++++++++++++++++++-
 block/mirror.c                     | 14 +++++++-------
 block/qcow2.c                      |  2 +-
 block/quorum.c                     |  4 ++--
 block/replication.c                | 10 +++++-----
 block/snapshot.c                   |  2 +-
 block/stream.c                     |  8 ++++----
 block/vmdk.c                       | 10 +++++-----
 blockdev.c                         |  4 ++--
 blockjob.c                         |  8 ++++----
 tests/unit/test-bdrv-drain.c       | 20 ++++++++++----------
 tests/unit/test-bdrv-graph-mod.c   | 10 +++++-----
 scripts/block-coroutine-wrapper.py |  2 +-
 20 files changed, 109 insertions(+), 73 deletions(-)
diff --git a/include/block/graph-lock.h b/include/block/graph-lock.h
index 6f1cd12745..22b5db1ed9 100644
--- a/include/block/graph-lock.h
+++ b/include/block/graph-lock.h
@@ -123,8 +123,21 @@ bdrv_graph_wrlock(BlockDriverState *bs);
* bdrv_graph_wrunlock:
* Write finished, reset global has_writer to 0 and restart
* all readers that are waiting.
+ *
+ * If @bs is non-NULL, its AioContext is temporarily released.
+ */
+void no_coroutine_fn TSA_RELEASE(graph_lock) TSA_NO_TSA
+bdrv_graph_wrunlock(BlockDriverState *bs);
+
+/*
+ * bdrv_graph_wrunlock_ctx:
+ * Write finished, reset global has_writer to 0 and restart
+ * all readers that are waiting.
+ *
+ * If @ctx is non-NULL, its lock is temporarily released.
*/
-void bdrv_graph_wrunlock(void) TSA_RELEASE(graph_lock) TSA_NO_TSA;
+void no_coroutine_fn TSA_RELEASE(graph_lock) TSA_NO_TSA
+bdrv_graph_wrunlock_ctx(AioContext *ctx);
/*
* bdrv_graph_co_rdlock:
diff --git a/block.c b/block.c
index eac105a504..efb91d4ffb 100644
--- a/block.c
+++ b/block.c
@@ -1713,7 +1713,7 @@ open_failed:
bdrv_unref_child(bs, bs->file);
assert(!bs->file);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(bs->opaque);
bs->opaque = NULL;
@@ -3577,7 +3577,7 @@ int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
bdrv_drained_begin(drain_bs);
bdrv_graph_wrlock(backing_hd);
ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(backing_hd);
bdrv_drained_end(drain_bs);
bdrv_unref(drain_bs);
@@ -3796,7 +3796,7 @@ BdrvChild *bdrv_open_child(const char *filename,
child = bdrv_attach_child(parent, bs, bdref_key, child_class, child_role,
errp);
aio_context_release(ctx);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
return child;
}
@@ -4652,7 +4652,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
bdrv_graph_wrlock(NULL);
tran_commit(tran);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
QTAILQ_FOREACH_REVERSE(bs_entry, bs_queue, entry) {
BlockDriverState *bs = bs_entry->state.bs;
@@ -4671,7 +4671,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
abort:
bdrv_graph_wrlock(NULL);
tran_abort(tran);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
QTAILQ_FOREACH_SAFE(bs_entry, bs_queue, entry, next) {
if (bs_entry->prepared) {
@@ -4857,7 +4857,7 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
tran, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(new_child_bs);
if (old_ctx != ctx) {
aio_context_release(ctx);
@@ -5216,7 +5216,7 @@ static void bdrv_close(BlockDriverState *bs)
assert(!bs->backing);
assert(!bs->file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
g_free(bs->opaque);
bs->opaque = NULL;
@@ -5511,7 +5511,7 @@ int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
bdrv_drained_begin(child_bs);
bdrv_graph_wrlock(bs);
ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
bdrv_drained_end(child_bs);
return ret;
@@ -5593,7 +5593,7 @@ out:
tran_finalize(tran, ret);
bdrv_refresh_limits(bs_top, NULL, NULL);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs_top);
bdrv_drained_end(bs_top);
bdrv_drained_end(bs_new);
@@ -5631,7 +5631,7 @@ int bdrv_replace_child_bs(BdrvChild *child, BlockDriverState *new_bs,
tran_finalize(tran, ret);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(new_bs);
bdrv_drained_end(old_bs);
bdrv_drained_end(new_bs);
bdrv_unref(old_bs);
@@ -5720,7 +5720,7 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
bdrv_drained_begin(new_node_bs);
bdrv_graph_wrlock(new_node_bs);
ret = bdrv_replace_node(bs, new_node_bs, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(new_node_bs);
bdrv_drained_end(new_node_bs);
bdrv_drained_end(bs);
bdrv_unref(bs);
@@ -6015,7 +6015,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
* That's a FIXME.
*/
bdrv_replace_node_common(top, base, false, false, &local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(base);
if (local_err) {
error_report_err(local_err);
@@ -6052,7 +6052,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
goto exit;
exit_wrlock:
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(base);
exit:
bdrv_drained_end(base);
bdrv_unref(top);
diff --git a/block/backup.c b/block/backup.c
index 5bad7d116f..8aae5836d7 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -499,7 +499,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
bdrv_graph_wrlock(target);
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(target);
return &job->common;
diff --git a/block/blklogwrites.c b/block/blklogwrites.c
index a0d70729bb..3678f6cf42 100644
--- a/block/blklogwrites.c
+++ b/block/blklogwrites.c
@@ -253,7 +253,7 @@ fail_log:
if (ret < 0) {
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->log_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
s->log_file = NULL;
}
fail:
@@ -268,7 +268,7 @@ static void blk_log_writes_close(BlockDriverState *bs)
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->log_file);
s->log_file = NULL;
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static int64_t coroutine_fn GRAPH_RDLOCK
diff --git a/block/blkverify.c b/block/blkverify.c
index a96905db35..9b17c46644 100644
--- a/block/blkverify.c
+++ b/block/blkverify.c
@@ -154,7 +154,7 @@ static void blkverify_close(BlockDriverState *bs)
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->test_file);
s->test_file = NULL;
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static int64_t coroutine_fn GRAPH_RDLOCK
diff --git a/block/block-backend.c b/block/block-backend.c
index f6f05ead28..e100915838 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -889,6 +889,7 @@ void blk_remove_bs(BlockBackend *blk)
{
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
BdrvChild *root;
+ AioContext *ctx;
GLOBAL_STATE_CODE();
@@ -918,9 +919,10 @@ void blk_remove_bs(BlockBackend *blk)
root = blk->root;
blk->root = NULL;
+ ctx = bdrv_get_aio_context(root->bs);
bdrv_graph_wrlock(root->bs);
bdrv_root_unref_child(root);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock_ctx(ctx);
}
/*
@@ -938,7 +940,7 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
blk->perm, blk->shared_perm,
blk, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
if (blk->root == NULL) {
return -EPERM;
}
diff --git a/block/commit.c b/block/commit.c
index eb3dc01f45..69cc75be0c 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -102,7 +102,7 @@ static void commit_abort(Job *job)
bdrv_drained_begin(commit_top_backing_bs);
bdrv_graph_wrlock(commit_top_backing_bs);
bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(commit_top_backing_bs);
bdrv_drained_end(commit_top_backing_bs);
bdrv_unref(s->commit_top_bs);
@@ -370,19 +370,19 @@ void commit_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
iter_shared_perms, errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
goto fail;
}
}
if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
goto fail;
}
s->chain_frozen = true;
ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
if (ret < 0) {
goto fail;
@@ -436,7 +436,7 @@ fail:
bdrv_drained_begin(top);
bdrv_graph_wrlock(top);
bdrv_replace_node(commit_top_bs, top, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
bdrv_drained_end(top);
}
}
diff --git a/block/graph-lock.c b/block/graph-lock.c
index e5525ee2db..079e878d9b 100644
--- a/block/graph-lock.c
+++ b/block/graph-lock.c
@@ -161,11 +161,21 @@ void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
}
}
-void bdrv_graph_wrunlock(void)
+void no_coroutine_fn bdrv_graph_wrunlock_ctx(AioContext *ctx)
{
GLOBAL_STATE_CODE();
assert(qatomic_read(&has_writer));
+ /*
+ * Release only non-mainloop AioContext. The mainloop often relies on the
+ * BQL and doesn't lock the main AioContext before doing things.
+ */
+ if (ctx && ctx != qemu_get_aio_context()) {
+ aio_context_release(ctx);
+ } else {
+ ctx = NULL;
+ }
+
WITH_QEMU_LOCK_GUARD(&aio_context_list_lock) {
/*
* No need for memory barriers, this works in pair with
@@ -187,6 +197,17 @@ void bdrv_graph_wrunlock(void)
* progress.
*/
aio_bh_poll(qemu_get_aio_context());
+
+ if (ctx) {
+ aio_context_acquire(ctx);
+ }
+}
+
+void no_coroutine_fn bdrv_graph_wrunlock(BlockDriverState *bs)
+{
+ AioContext *ctx = bs ? bdrv_get_aio_context(bs) : NULL;
+
+ bdrv_graph_wrunlock_ctx(ctx);
}
void coroutine_fn bdrv_graph_co_rdlock(void)
diff --git a/block/mirror.c b/block/mirror.c
index 2096fade90..cd9d3ad4a8 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -773,7 +773,7 @@ static int mirror_exit_common(Job *job)
"would not lead to an abrupt change of visible data",
to_replace->node_name, target_bs->node_name);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(target_bs);
bdrv_drained_end(to_replace);
if (local_err) {
error_report_err(local_err);
@@ -798,7 +798,7 @@ static int mirror_exit_common(Job *job)
block_job_remove_all_bdrv(bjob);
bdrv_graph_wrlock(mirror_top_bs);
bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(mirror_top_bs);
bdrv_drained_end(target_bs);
bdrv_unref(target_bs);
@@ -1920,7 +1920,7 @@ static BlockJob *mirror_start_job(
BLK_PERM_CONSISTENT_READ,
errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
@@ -1965,17 +1965,17 @@ static BlockJob *mirror_start_job(
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
iter_shared_perms, errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
}
if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
QTAILQ_INIT(&s->ops_in_flight);
@@ -2006,7 +2006,7 @@ fail:
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
&error_abort);
bdrv_replace_node(mirror_top_bs, bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
bdrv_drained_end(bs);
bdrv_unref(mirror_top_bs);
diff --git a/block/qcow2.c b/block/qcow2.c
index cf2468858f..13e032bd5e 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -2809,7 +2809,7 @@ qcow2_do_close(BlockDriverState *bs, bool close_data_file)
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->data_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
s->data_file = NULL;
bdrv_graph_rdlock_main_loop();
}
diff --git a/block/quorum.c b/block/quorum.c
index d3ffc2ee33..505b8b3e18 100644
--- a/block/quorum.c
+++ b/block/quorum.c
@@ -1044,7 +1044,7 @@ close_exit:
}
bdrv_unref_child(bs, s->children[i]);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(s->children);
g_free(opened);
exit:
@@ -1061,7 +1061,7 @@ static void quorum_close(BlockDriverState *bs)
for (i = 0; i < s->num_children; i++) {
bdrv_unref_child(bs, s->children[i]);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(s->children);
}
diff --git a/block/replication.c b/block/replication.c
index 43e259444b..5ded5f1ca9 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -568,7 +568,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
aio_context_release(aio_context);
return;
}
@@ -579,7 +579,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
BDRV_CHILD_DATA, &local_err);
if (local_err) {
error_propagate(errp, local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
aio_context_release(aio_context);
return;
}
@@ -592,7 +592,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
if (!top_bs || !bdrv_is_root_node(top_bs) ||
!check_top_bs(top_bs, bs)) {
error_setg(errp, "No top_bs or it is invalid");
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
reopen_backing_file(bs, false, NULL);
aio_context_release(aio_context);
return;
@@ -600,7 +600,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
bdrv_op_block_all(top_bs, s->blocker);
bdrv_op_unblock(top_bs, BLOCK_OP_TYPE_DATAPLANE, s->blocker);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
s->backup_job = backup_job_create(
NULL, s->secondary_disk->bs, s->hidden_disk->bs,
@@ -696,7 +696,7 @@ static void replication_done(void *opaque, int ret)
s->secondary_disk = NULL;
bdrv_unref_child(bs, s->hidden_disk);
s->hidden_disk = NULL;
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
s->error = 0;
} else {
diff --git a/block/snapshot.c b/block/snapshot.c
index 55974273ae..ec8cf4810b 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -292,7 +292,7 @@ int bdrv_snapshot_goto(BlockDriverState *bs,
/* .bdrv_open() will re-attach it */
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, fallback);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
ret = bdrv_snapshot_goto(fallback_bs, snapshot_id, errp);
open_ret = drv->bdrv_open(bs, options, bs->open_flags, &local_err);
diff --git a/block/stream.c b/block/stream.c
index 0b92410c00..e3aa696289 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -101,7 +101,7 @@ static int stream_prepare(Job *job)
bdrv_graph_wrlock(base);
bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(base);
/*
* This call will do I/O, so the graph can change again from here on.
@@ -369,7 +369,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
bdrv_graph_wrlock(bs);
if (block_job_add_bdrv(&s->common, "active node", bs, 0,
basic_flags | BLK_PERM_WRITE, errp)) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
@@ -389,11 +389,11 @@ void stream_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
basic_flags, errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
s->base_overlay = base_overlay;
s->above_base = above_base;
diff --git a/block/vmdk.c b/block/vmdk.c
index dda783f06b..d87f6d9aaa 100644
--- a/block/vmdk.c
+++ b/block/vmdk.c
@@ -283,7 +283,7 @@ static void vmdk_free_extents(BlockDriverState *bs)
bdrv_unref_child(bs, e->file);
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(s->extents);
}
@@ -1237,7 +1237,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
goto out;
}
@@ -1256,7 +1256,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
goto out;
}
@@ -1267,7 +1267,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
goto out;
}
@@ -1277,7 +1277,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
ret = -ENOTSUP;
goto out;
diff --git a/blockdev.c b/blockdev.c
index 5bc921236c..4c1177e8db 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1613,7 +1613,7 @@ static void external_snapshot_abort(void *opaque)
bdrv_drained_begin(state->new_bs);
bdrv_graph_wrlock(state->old_bs);
bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(state->old_bs);
bdrv_drained_end(state->new_bs);
bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
@@ -3692,7 +3692,7 @@ void qmp_x_blockdev_change(const char *parent, const char *child,
}
out:
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
BlockJobInfoList *qmp_query_block_jobs(Error **errp)
diff --git a/blockjob.c b/blockjob.c
index af44322cbe..b7a29052b9 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -212,7 +212,7 @@ void block_job_remove_all_bdrv(BlockJob *job)
g_slist_free_1(l);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock_ctx(job->job.aio_context);
}
bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs)
@@ -523,7 +523,7 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
job = job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_context(bs),
flags, cb, opaque, errp);
if (job == NULL) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
return NULL;
}
@@ -563,11 +563,11 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
goto fail;
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
return job;
fail:
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
job_early_fail(&job->job);
return NULL;
}
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 8d05538bf6..704d1a3f36 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -809,7 +809,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
bdrv_graph_wrlock(target);
block_job_add_bdrv(job, "target", target, 0, BLK_PERM_ALL, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(target);
switch (result) {
case TEST_JOB_SUCCESS:
@@ -995,7 +995,7 @@ static void bdrv_test_top_close(BlockDriverState *bs)
QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) {
bdrv_unref_child(bs, c);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static int coroutine_fn GRAPH_RDLOCK
@@ -1088,7 +1088,7 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
bdrv_graph_wrlock(NULL);
bdrv_attach_child(bs, null_bs, "null-child", &child_of_bds,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
/* This child will be the one to pass to requests through to, and
* it will stall until a drain occurs */
@@ -1101,7 +1101,7 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
&child_of_bds,
BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
/* This child is just there to be deleted
* (for detach_instead_of_delete == true) */
@@ -1110,7 +1110,7 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
bdrv_graph_wrlock(NULL);
bdrv_attach_child(bs, null_bs, "null-child", &child_of_bds, BDRV_CHILD_DATA,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
blk = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);
blk_insert_bs(blk, bs, &error_abort);
@@ -1200,7 +1200,7 @@ static void no_coroutine_fn detach_indirect_bh(void *opaque)
data->child_c = bdrv_attach_child(data->parent_b, data->c, "PB-C",
&child_of_bds, BDRV_CHILD_DATA,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static void coroutine_mixed_fn detach_by_parent_aio_cb(void *opaque, int ret)
@@ -1308,7 +1308,7 @@ static void TSA_NO_TSA test_detach_indirect(bool by_parent_cb)
bdrv_attach_child(parent_a, a, "PA-A",
by_parent_cb ? &child_of_bds : &detach_by_driver_cb_class,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_assert_cmpint(parent_a->refcnt, ==, 1);
g_assert_cmpint(parent_b->refcnt, ==, 1);
@@ -1735,7 +1735,7 @@ static void test_drop_intermediate_poll(void)
&chain_child_class, BDRV_CHILD_COW, &error_abort);
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
job = block_job_create("job", &test_simple_job_driver, NULL, job_node,
0, BLK_PERM_ALL, 0, 0, NULL, NULL, &error_abort);
@@ -1985,7 +1985,7 @@ static void do_test_replace_child_mid_drain(int old_drain_count,
bdrv_graph_wrlock(NULL);
bdrv_attach_child(parent_bs, old_child_bs, "child", &child_of_bds,
BDRV_CHILD_COW, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
parent_s->setup_completed = true;
for (i = 0; i < old_drain_count; i++) {
@@ -2018,7 +2018,7 @@ static void do_test_replace_child_mid_drain(int old_drain_count,
bdrv_drained_begin(new_child_bs);
bdrv_graph_wrlock(NULL);
bdrv_replace_node(old_child_bs, new_child_bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_drained_end(new_child_bs);
bdrv_drained_end(old_child_bs);
g_assert(parent_bs->quiesce_counter == new_drain_count);
diff --git a/tests/unit/test-bdrv-graph-mod.c b/tests/unit/test-bdrv-graph-mod.c
index 878544dbd5..074adcbb93 100644
--- a/tests/unit/test-bdrv-graph-mod.c
+++ b/tests/unit/test-bdrv-graph-mod.c
@@ -140,7 +140,7 @@ static void test_update_perm_tree(void)
bdrv_graph_wrlock(NULL);
bdrv_attach_child(filter, bs, "child", &child_of_bds,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
aio_context_acquire(qemu_get_aio_context());
ret = bdrv_append(filter, bs, NULL);
@@ -210,7 +210,7 @@ static void test_should_update_child(void)
g_assert(target->backing->bs == bs);
bdrv_attach_child(filter, target, "target", &child_of_bds,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
aio_context_acquire(qemu_get_aio_context());
bdrv_append(filter, bs, &error_abort);
aio_context_release(qemu_get_aio_context());
@@ -260,7 +260,7 @@ static void test_parallel_exclusive_write(void)
&error_abort);
bdrv_replace_node(fl1, fl2, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_drained_end(fl2);
bdrv_drained_end(fl1);
@@ -380,7 +380,7 @@ static void test_parallel_perm_update(void)
bdrv_attach_child(fl2, base, "backing", &child_of_bds,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
/* Select fl1 as first child to be active */
s->selected = c_fl1;
@@ -438,7 +438,7 @@ static void test_append_greedy_filter(void)
bdrv_attach_child(top, base, "backing", &child_of_bds,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
aio_context_acquire(qemu_get_aio_context());
bdrv_append(fl, base, &error_abort);
diff --git a/scripts/block-coroutine-wrapper.py b/scripts/block-coroutine-wrapper.py
index a601c3c672..a38e5833fb 100644
--- a/scripts/block-coroutine-wrapper.py
+++ b/scripts/block-coroutine-wrapper.py
@@ -262,7 +262,7 @@ def gen_no_co_wrapper(func: FuncDecl) -> str:
graph_unlock=' bdrv_graph_rdunlock_main_loop();'
elif func.graph_wrlock:
graph_lock=' bdrv_graph_wrlock(NULL);'
- graph_unlock=' bdrv_graph_wrunlock();'
+ graph_unlock=' bdrv_graph_wrunlock(NULL);'
return f"""\
/*
--
2.41.0
* [PATCH for-8.2 3/4] stream: Fix AioContext locking during bdrv_graph_wrlock()
From: Kevin Wolf @ 2023-11-15 17:20 UTC
To: qemu-block; +Cc: kwolf, hreitz, stefanha, qemu-devel
In stream_prepare(), we need to temporarily drop the AioContext lock
that job_prepare_locked() took for us while calling the graph write
lock functions, which can poll.

All block nodes related to this block job are in the same AioContext,
so we can pass any of them to bdrv_graph_wrlock()/bdrv_graph_wrunlock().
Unfortunately, the one we picked is base, which can be NULL; in that
case, the AioContext lock is not released and deadlocks can occur.

Fix this by passing s->target_bs, which is never NULL.
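Sketched against the code below, the failure mode is (base is NULL
when, for example, the stream job removes the whole backing chain):

    bdrv_graph_wrlock(base);      /* base == NULL: no AioContext lock is dropped */
    bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
    bdrv_graph_wrunlock(base);    /* both calls can poll -> deadlock */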
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/stream.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/stream.c b/block/stream.c
index e3aa696289..01fe7c0f16 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -99,9 +99,9 @@ static int stream_prepare(Job *job)
         }
     }
 
-    bdrv_graph_wrlock(base);
+    bdrv_graph_wrlock(s->target_bs);
     bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
-    bdrv_graph_wrunlock(base);
+    bdrv_graph_wrunlock(s->target_bs);
 
     /*
      * This call will do I/O, so the graph can change again from here on.
--
2.41.0
* [PATCH for-8.2 4/4] iotests: Test two stream jobs in a single iothread
From: Kevin Wolf @ 2023-11-15 17:20 UTC
To: qemu-block; +Cc: kwolf, hreitz, stefanha, qemu-devel
This tests two parallel stream jobs that complete around the same time
and run on two different disks in the same iothread. It is loosely
based on the bug report at https://issues.redhat.com/browse/RHEL-1761.

For me, this test hangs reliably with the originally reported bug in
blk_remove_bs(). With that one fixed, it still hangs intermittently due
to the bugs fixed by the later patches, namely the missing AioContext
unlocking in bdrv_graph_wrunlock() and in stream_prepare(). The
deadlocks seem to happen more frequently when the test directory is on
tmpfs.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/tests/iothreads-stream     | 73 +++++++++++++++++++
 tests/qemu-iotests/tests/iothreads-stream.out | 11 +++
 2 files changed, 84 insertions(+)
create mode 100755 tests/qemu-iotests/tests/iothreads-stream
create mode 100644 tests/qemu-iotests/tests/iothreads-stream.out
diff --git a/tests/qemu-iotests/tests/iothreads-stream b/tests/qemu-iotests/tests/iothreads-stream
new file mode 100755
index 0000000000..dfd52ac2cc
--- /dev/null
+++ b/tests/qemu-iotests/tests/iothreads-stream
@@ -0,0 +1,73 @@
+#!/usr/bin/env python3
+# group: rw quick auto
+#
+# Copyright (C) 2023 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+
+import iotests
+
+iotests.script_initialize(supported_fmts=['qcow2'],
+                          supported_platforms=['linux'])
+iotests.verify_virtio_scsi_pci_or_ccw()
+
+with iotests.FilePath('disk1.img') as base1_path, \
+     iotests.FilePath('disk1-snap.img') as snap1_path, \
+     iotests.FilePath('disk2.img') as base2_path, \
+     iotests.FilePath('disk2-snap.img') as snap2_path, \
+     iotests.VM() as vm:
+
+    img_size = '10M'
+
+    # Only one iothread for both disks
+    vm.add_object('iothread,id=iothread0')
+    vm.add_device('virtio-scsi,iothread=iothread0')
+
+    iotests.log('Preparing disks...')
+    for i, base_path, snap_path in ((0, base1_path, snap1_path),
+                                    (1, base2_path, snap2_path)):
+        iotests.qemu_img_create('-f', iotests.imgfmt, base_path, img_size)
+        iotests.qemu_img_create('-f', iotests.imgfmt, '-b', base_path,
+                                '-F', iotests.imgfmt, snap_path)
+
+        iotests.qemu_io_log('-c', f'write 0 {img_size}', base_path)
+
+        vm.add_blockdev(f'file,node-name=disk{i}-base-file,filename={base_path}')
+        vm.add_blockdev(f'qcow2,node-name=disk{i}-base,file=disk{i}-base-file')
+        vm.add_blockdev(f'file,node-name=disk{i}-file,filename={snap_path}')
+        vm.add_blockdev(f'qcow2,node-name=disk{i},file=disk{i}-file,'
+                        f'backing=disk{i}-base')
+        vm.add_device(f'scsi-hd,drive=disk{i}')
+
+    iotests.log('Launching VM...')
+    vm.launch()
+
+    iotests.log('Starting stream jobs...')
+    iotests.log(vm.qmp('block-stream', device='disk0', job_id='job0'))
+    iotests.log(vm.qmp('block-stream', device='disk1', job_id='job1'))
+
+    finished = 0
+    while True:
+        try:
+            ev = vm.event_wait('JOB_STATUS_CHANGE', timeout=0.1)
+            if ev is not None and ev['data']['status'] == 'null':
+                finished += 1
+                # The test is done once both jobs are gone
+                if finished == 2:
+                    break
+        except TimeoutError:
+            pass
+        vm.cmd('query-jobs')
diff --git a/tests/qemu-iotests/tests/iothreads-stream.out b/tests/qemu-iotests/tests/iothreads-stream.out
new file mode 100644
index 0000000000..ef134165e5
--- /dev/null
+++ b/tests/qemu-iotests/tests/iothreads-stream.out
@@ -0,0 +1,11 @@
+Preparing disks...
+wrote 10485760/10485760 bytes at offset 0
+10 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 10485760/10485760 bytes at offset 0
+10 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+Launching VM...
+Starting stream jobs...
+{"return": {}}
+{"return": {}}
--
2.41.0
* Re: [PATCH for-8.2 0/4] block: Fix deadlocks with the stream job
From: Stefan Hajnoczi @ 2023-11-15 18:36 UTC
To: Kevin Wolf; +Cc: qemu-block, hreitz, qemu-devel
On Wed, Nov 15, 2023 at 06:20:08PM +0100, Kevin Wolf wrote:
> This series contains three fixes for deadlocks that follow the same
> pattern: a nested event loop in the main thread waits for an iothread
> to make progress, but the main loop still holds that iothread's
> AioContext lock, so the iothread can never make progress.
>
> We're planning to fully remove the AioContext lock in 9.0, which would
> automatically get rid of this kind of bug, but the lock is still there
> in 8.2, so let's fix the deadlocks individually for this release.
>
> Kevin Wolf (4):
>   block: Fix bdrv_graph_wrlock() call in blk_remove_bs()
>   block: Fix deadlocks in bdrv_graph_wrunlock()
>   stream: Fix AioContext locking during bdrv_graph_wrlock()
>   iotests: Test two stream jobs in a single iothread
>
>  include/block/graph-lock.h                    | 15 +++-
>  block.c                                       | 26 +++----
>  block/backup.c                                |  2 +-
>  block/blklogwrites.c                          |  4 +-
>  block/blkverify.c                             |  2 +-
>  block/block-backend.c                         | 10 ++-
>  block/commit.c                                | 10 +--
>  block/graph-lock.c                            | 23 +++++-
>  block/mirror.c                                | 14 ++--
>  block/qcow2.c                                 |  2 +-
>  block/quorum.c                                |  4 +-
>  block/replication.c                           | 10 +--
>  block/snapshot.c                              |  2 +-
>  block/stream.c                                | 10 +--
>  block/vmdk.c                                  | 10 +--
>  blockdev.c                                    |  4 +-
>  blockjob.c                                    |  8 +-
>  tests/unit/test-bdrv-drain.c                  | 20 ++---
>  tests/unit/test-bdrv-graph-mod.c              | 10 +--
>  scripts/block-coroutine-wrapper.py            |  2 +-
>  tests/qemu-iotests/tests/iothreads-stream     | 73 +++++++++++++++++++
>  tests/qemu-iotests/tests/iothreads-stream.out | 11 +++
>  22 files changed, 197 insertions(+), 75 deletions(-)
>  create mode 100755 tests/qemu-iotests/tests/iothreads-stream
>  create mode 100644 tests/qemu-iotests/tests/iothreads-stream.out
>
> --
> 2.41.0
>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>