From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, qemu-devel@nongnu.org
Subject: [PULL 09/26] block: Mark drain related functions GRAPH_RDLOCK
Date: Thu, 12 Oct 2023 18:22:07 +0200
Message-ID: <20231012162224.240535-10-kwolf@redhat.com>
In-Reply-To: <20231012162224.240535-1-kwolf@redhat.com>
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Draining recursively traverses the graph, so we need to make sure that
such graph accesses are also protected by the graph rdlock.
There are three different drain callers to consider:

1. drain in the main loop: no issue at all, the rdlock is a nop.
2. drain in an iothread: the rdlock only works in the main loop or in
   coroutines, so disallow it.
3. drain in a coroutine (regardless of the AioContext it runs in): the
   drain mechanism schedules a BH in bs->aio_context, which then performs
   the actual draining. This is wrong, because as pointed out in (2) the
   rdlock won't work if bs->aio_context is an iothread. Therefore change
   bdrv_co_yield_to_drain to schedule the BH in the main loop instead (a
   rough sketch of this main-loop hand-off follows the list).
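
As a loose illustration of point (3), here is a small, self-contained C
program that hands work from a worker thread to the main loop using plain
GLib, which QEMU's main loop is built on. It is only an analogy for the
behaviour this patch gives bdrv_co_yield_to_drain (the real code uses a
bottom half on the main AioContext, not g_idle_add()), and the names
do_drain_in_main_loop() and iothread_fn() are invented for the sketch:

/* Build with: gcc bh_demo.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

static GMainLoop *loop;

/* Stand-in for the drain work that must run in the main loop thread. */
static gboolean do_drain_in_main_loop(gpointer data)
{
    printf("draining '%s' in the main loop thread\n", (const char *) data);
    g_main_loop_quit(loop);
    return G_SOURCE_REMOVE;
}

/*
 * Stand-in for a coroutine running in an iothread: instead of draining
 * locally, it queues the work as an idle callback on the default (main)
 * context and lets the main loop thread execute it.
 */
static gpointer iothread_fn(gpointer data)
{
    g_idle_add(do_drain_in_main_loop, data);
    return NULL;
}

int main(void)
{
    static char node_name[] = "node0";
    GThread *t;

    loop = g_main_loop_new(NULL, FALSE);
    t = g_thread_new("iothread", iothread_fn, node_name);
    g_main_loop_run(loop);   /* dispatches the queued callback, then quits */
    g_thread_join(t);
    g_main_loop_free(loop);
    return 0;
}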
Caller (2) also implies that we need to modify test-bdrv-drain.c so that
it no longer drains in iothreads.
For some places, we know that they will hold the lock, but we don't have
the GRAPH_RDLOCK annotations yet. In this case, add assume_graph_lock()
with a FIXME comment. These assume_graph_lock() calls will be removed once
everything is properly annotated.
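
For readers unfamiliar with how GRAPH_RDLOCK, GRAPH_RDLOCK_PTR and
assume_graph_lock() catch mistakes at compile time, the following
standalone sketch shows the clang Thread Safety Analysis attributes such
macros are built on. Everything here (GraphLock, graph_rdlock(),
walk_parents(), assume_lock_held(), ...) is invented for illustration and
is not QEMU's API; compile it with clang -Wthread-safety to see the
analysis in action:

/* Compile with: clang -Wthread-safety tsa_demo.c -o tsa_demo */

/* A dummy capability standing in for the block graph lock. */
typedef struct __attribute__((capability("mutex"))) GraphLock {
    int dummy;
} GraphLock;

static GraphLock graph_lock;

/*
 * Reader lock primitives: the interface annotations tell the analysis what
 * callers may assume afterwards; the bodies themselves are excluded from
 * the analysis, as is usual when implementing locking primitives.
 */
static void __attribute__((acquire_shared_capability(graph_lock)))
            __attribute__((no_thread_safety_analysis))
graph_rdlock(void)
{
    /* take the real lock here */
}

static void __attribute__((release_shared_capability(graph_lock)))
            __attribute__((no_thread_safety_analysis))
graph_rdunlock(void)
{
    /* drop the real lock here */
}

/*
 * Roughly what a GRAPH_RDLOCK annotation expresses: callers must already
 * hold the capability, at least in shared (reader) mode.
 */
static void __attribute__((requires_shared_capability(graph_lock)))
walk_parents(void)
{
    /* recurse through the parent list here */
}

/*
 * Roughly analogous to assume_graph_lock(): tell the analysis that the lock
 * is held even though it cannot prove it, as a stopgap for call sites that
 * are not annotated yet.
 */
static void __attribute__((assert_shared_capability(graph_lock)))
            __attribute__((no_thread_safety_analysis))
assume_lock_held(void)
{
}

static void annotated_caller(void)
{
    graph_rdlock();
    walk_parents();      /* ok: the analysis knows the lock is held */
    graph_rdunlock();
}

static void fixme_caller(void)
{
    /* FIXME: annotate this function and its callers, then drop this. */
    assume_lock_held();
    walk_parents();      /* ok: the assertion vouched for the lock */
}

int main(void)
{
    annotated_caller();
    fixme_caller();
    /*
     * Calling walk_parents() directly here would instead produce a
     * -Wthread-safety-analysis warning that it requires holding
     * 'graph_lock'.
     */
    return 0;
}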
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20230929145157.45443-6-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
include/block/block-io.h | 23 ++++++++++++++++++-----
include/block/block_int-common.h | 6 +++---
block.c | 6 +++---
block/io.c | 32 ++++++++++++++++++++++++++++----
tests/unit/test-bdrv-drain.c | 4 ++--
5 files changed, 54 insertions(+), 17 deletions(-)
diff --git a/include/block/block-io.h b/include/block/block-io.h
index 6485b194a4..9707eb3eff 100644
--- a/include/block/block-io.h
+++ b/include/block/block-io.h
@@ -370,7 +370,7 @@ bdrv_writev_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos);
*
* Begin a quiesced section for the parent of @c.
*/
-void bdrv_parent_drained_begin_single(BdrvChild *c);
+void GRAPH_RDLOCK bdrv_parent_drained_begin_single(BdrvChild *c);
/**
* bdrv_parent_drained_poll_single:
@@ -378,14 +378,14 @@ void bdrv_parent_drained_begin_single(BdrvChild *c);
* Returns true if there is any pending activity to cease before @c can be
* called quiesced, false otherwise.
*/
-bool bdrv_parent_drained_poll_single(BdrvChild *c);
+bool GRAPH_RDLOCK bdrv_parent_drained_poll_single(BdrvChild *c);
/**
* bdrv_parent_drained_end_single:
*
* End a quiesced section for the parent of @c.
*/
-void bdrv_parent_drained_end_single(BdrvChild *c);
+void GRAPH_RDLOCK bdrv_parent_drained_end_single(BdrvChild *c);
/**
* bdrv_drain_poll:
@@ -398,8 +398,9 @@ void bdrv_parent_drained_end_single(BdrvChild *c);
*
* This is part of bdrv_drained_begin.
*/
-bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
- bool ignore_bds_parents);
+bool GRAPH_RDLOCK
+bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
+ bool ignore_bds_parents);
/**
* bdrv_drained_begin:
@@ -407,6 +408,12 @@ bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
* Begin a quiesced section for exclusive access to the BDS, by disabling
* external request sources including NBD server, block jobs, and device model.
*
+ * This function can only be invoked by the main loop or a coroutine
+ * (regardless of the AioContext where it is running).
+ * If the coroutine is running in an Iothread AioContext, this function will
+ * just schedule a BH to run in the main loop.
+ * However, it cannot be directly called by an Iothread.
+ *
* This function can be recursive.
*/
void bdrv_drained_begin(BlockDriverState *bs);
@@ -423,6 +430,12 @@ void bdrv_do_drained_begin_quiesce(BlockDriverState *bs, BdrvChild *parent);
* bdrv_drained_end:
*
* End a quiescent section started by bdrv_drained_begin().
+ *
+ * This function can only be invoked by the main loop or a coroutine
+ * (regardless of the AioContext where it is running).
+ * If the coroutine is running in an Iothread AioContext, this function will
+ * just schedule a BH to run in the main loop.
+ * However, it cannot be directly called by an Iothread.
*/
void bdrv_drained_end(BlockDriverState *bs);
diff --git a/include/block/block_int-common.h b/include/block/block_int-common.h
index 2ca3758cb8..8ef68176a5 100644
--- a/include/block/block_int-common.h
+++ b/include/block/block_int-common.h
@@ -963,15 +963,15 @@ struct BdrvChildClass {
* Note that this can be nested. If drained_begin() was called twice, new
* I/O is allowed only after drained_end() was called twice, too.
*/
- void (*drained_begin)(BdrvChild *child);
- void (*drained_end)(BdrvChild *child);
+ void GRAPH_RDLOCK_PTR (*drained_begin)(BdrvChild *child);
+ void GRAPH_RDLOCK_PTR (*drained_end)(BdrvChild *child);
/*
* Returns whether the parent has pending requests for the child. This
* callback is polled after .drained_begin() has been called until all
* activity on the child has stopped.
*/
- bool (*drained_poll)(BdrvChild *child);
+ bool GRAPH_RDLOCK_PTR (*drained_poll)(BdrvChild *child);
/*
* Notifies the parent that the filename of its child has changed (e.g.
diff --git a/block.c b/block.c
index 701f77fe2f..6cc4115510 100644
--- a/block.c
+++ b/block.c
@@ -1192,19 +1192,19 @@ static char *bdrv_child_get_parent_desc(BdrvChild *c)
return g_strdup_printf("node '%s'", bdrv_get_node_name(parent));
}
-static void bdrv_child_cb_drained_begin(BdrvChild *child)
+static void GRAPH_RDLOCK bdrv_child_cb_drained_begin(BdrvChild *child)
{
BlockDriverState *bs = child->opaque;
bdrv_do_drained_begin_quiesce(bs, NULL);
}
-static bool bdrv_child_cb_drained_poll(BdrvChild *child)
+static bool GRAPH_RDLOCK bdrv_child_cb_drained_poll(BdrvChild *child)
{
BlockDriverState *bs = child->opaque;
return bdrv_drain_poll(bs, NULL, false);
}
-static void bdrv_child_cb_drained_end(BdrvChild *child)
+static void GRAPH_RDLOCK bdrv_child_cb_drained_end(BdrvChild *child)
{
BlockDriverState *bs = child->opaque;
bdrv_drained_end(bs);
diff --git a/block/io.c b/block/io.c
index 5821a4d1f5..e2e846d37d 100644
--- a/block/io.c
+++ b/block/io.c
@@ -46,9 +46,12 @@ static void bdrv_parent_cb_resize(BlockDriverState *bs);
static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
int64_t offset, int64_t bytes, BdrvRequestFlags flags);
-static void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
+static void GRAPH_RDLOCK
+bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
{
BdrvChild *c, *next;
+ IO_OR_GS_CODE();
+ assert_bdrv_graph_readable();
QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
if (c == ignore) {
@@ -70,9 +73,12 @@ void bdrv_parent_drained_end_single(BdrvChild *c)
}
}
-static void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
+static void GRAPH_RDLOCK
+bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
{
BdrvChild *c;
+ IO_OR_GS_CODE();
+ assert_bdrv_graph_readable();
QLIST_FOREACH(c, &bs->parents, next_parent) {
if (c == ignore) {
@@ -84,17 +90,22 @@ static void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
bool bdrv_parent_drained_poll_single(BdrvChild *c)
{
+ IO_OR_GS_CODE();
+
if (c->klass->drained_poll) {
return c->klass->drained_poll(c);
}
return false;
}
-static bool bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
- bool ignore_bds_parents)
+static bool GRAPH_RDLOCK
+bdrv_parent_drained_poll(BlockDriverState *bs, BdrvChild *ignore,
+ bool ignore_bds_parents)
{
BdrvChild *c, *next;
bool busy = false;
+ IO_OR_GS_CODE();
+ assert_bdrv_graph_readable();
QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
if (c == ignore || (ignore_bds_parents && c->klass->parent_is_bds)) {
@@ -114,6 +125,7 @@ void bdrv_parent_drained_begin_single(BdrvChild *c)
c->quiesced_parent = true;
if (c->klass->drained_begin) {
+ /* called with rdlock taken, but it doesn't really need it. */
c->klass->drained_begin(c);
}
}
@@ -263,6 +275,9 @@ bool bdrv_drain_poll(BlockDriverState *bs, BdrvChild *ignore_parent,
static bool bdrv_drain_poll_top_level(BlockDriverState *bs,
BdrvChild *ignore_parent)
{
+ GLOBAL_STATE_CODE();
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
+
return bdrv_drain_poll(bs, ignore_parent, false);
}
@@ -362,6 +377,7 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent,
/* Stop things in parent-to-child order */
if (qatomic_fetch_inc(&bs->quiesce_counter) == 0) {
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
bdrv_parent_drained_begin(bs, parent);
if (bs->drv && bs->drv->bdrv_drain_begin) {
bs->drv->bdrv_drain_begin(bs);
@@ -408,12 +424,16 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
bdrv_co_yield_to_drain(bs, false, parent, false);
return;
}
+
+ /* At this point, we should be always running in the main loop. */
+ GLOBAL_STATE_CODE();
assert(bs->quiesce_counter > 0);
GLOBAL_STATE_CODE();
/* Re-enable things in child-to-parent order */
old_quiesce_counter = qatomic_fetch_dec(&bs->quiesce_counter);
if (old_quiesce_counter == 1) {
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
if (bs->drv && bs->drv->bdrv_drain_end) {
bs->drv->bdrv_drain_end(bs);
}
@@ -437,6 +457,8 @@ void bdrv_drain(BlockDriverState *bs)
static void bdrv_drain_assert_idle(BlockDriverState *bs)
{
BdrvChild *child, *next;
+ GLOBAL_STATE_CODE();
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
assert(qatomic_read(&bs->in_flight) == 0);
QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
@@ -450,7 +472,9 @@ static bool bdrv_drain_all_poll(void)
{
BlockDriverState *bs = NULL;
bool result = false;
+
GLOBAL_STATE_CODE();
+ GRAPH_RDLOCK_GUARD_MAINLOOP();
/* bdrv_drain_poll() can't make changes to the graph and we are holding the
* main AioContext lock, so iterating bdrv_next_all_states() is safe. */
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index bdd3757615..d734829778 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -1196,7 +1196,7 @@ static void coroutine_mixed_fn detach_by_parent_aio_cb(void *opaque, int ret)
}
}
-static void detach_by_driver_cb_drained_begin(BdrvChild *child)
+static void GRAPH_RDLOCK detach_by_driver_cb_drained_begin(BdrvChild *child)
{
struct detach_by_parent_data *data = &detach_by_parent_data;
@@ -1233,7 +1233,7 @@ static BdrvChildClass detach_by_driver_cb_class;
* state is messed up, but if it is only polled in the single
* BDRV_POLL_WHILE() at the end of the drain, this should work fine.
*/
-static void test_detach_indirect(bool by_parent_cb)
+static void TSA_NO_TSA test_detach_indirect(bool by_parent_cb)
{
BlockBackend *blk;
BlockDriverState *parent_a, *parent_b, *a, *b, *c;
--
2.41.0