From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, mreitz@redhat.com, famz@redhat.com,
pbonzini@redhat.com, slp@redhat.com, jsnow@redhat.com,
qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH v2 15/17] test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()
Date: Thu, 13 Sep 2018 14:52:15 +0200
Message-ID: <20180913125217.23173-16-kwolf@redhat.com>
In-Reply-To: <20180913125217.23173-1-kwolf@redhat.com>
This is a regression test for a deadlock that could occur in callbacks
called from the aio_poll() in bdrv_drain_poll_top_level(). The
AioContext lock wasn't released around that poll, so a callback would
take it a second time; an AIO_WAIT_WHILE() in the callback, which
releases the lock only once, would then hang.
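
For illustration only, here is a standalone sketch (not QEMU code) of
the underlying problem: a recursive mutex that is locked twice but
unlocked only once is still owned, so another thread waiting for it can
never make progress. Build with -lpthread:

    /* Standalone illustration of the hang; names are generic, not QEMU's. */
    #include <pthread.h>
    #include <stdio.h>
    #include <errno.h>

    static pthread_mutex_t lock;

    /* Stands in for a thread that needs the lock to make progress. */
    static void *other_thread(void *arg)
    {
        (void)arg;
        if (pthread_mutex_trylock(&lock) == EBUSY) {
            printf("lock still held after one unlock: this wait never ends\n");
        } else {
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);   /* drain caller holds the context lock */
        pthread_mutex_lock(&lock);   /* callback from the nested poll locks it again */
        pthread_mutex_unlock(&lock); /* a single release (as in AIO_WAIT_WHILE()) is not enough */

        pthread_create(&t, NULL, other_thread, NULL);
        pthread_join(t, NULL);

        pthread_mutex_unlock(&lock);
        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }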
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
---
tests/test-bdrv-drain.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index c3c17b9ff7..e105c0ae84 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -636,6 +636,17 @@ static void test_iothread_aio_cb(void *opaque, int ret)
     qemu_event_set(&done_event);
 }
 
+static void test_iothread_main_thread_bh(void *opaque)
+{
+    struct test_iothread_data *data = opaque;
+
+    /* Test that the AioContext is not yet locked in a random BH that is
+     * executed during drain, otherwise this would deadlock. */
+    aio_context_acquire(bdrv_get_aio_context(data->bs));
+    bdrv_flush(data->bs);
+    aio_context_release(bdrv_get_aio_context(data->bs));
+}
+
 /*
  * Starts an AIO request on a BDS that runs in the AioContext of iothread 1.
  * The request involves a BH on iothread 2 before it can complete.
@@ -705,6 +716,8 @@ static void test_iothread_common(enum drain_type drain_type, int drain_thread)
             aio_context_acquire(ctx_a);
         }
 
+        aio_bh_schedule_oneshot(ctx_a, test_iothread_main_thread_bh, &data);
+
         /* The request is running on the IOThread a. Draining its block device
          * will make sure that it has completed as far as the BDS is concerned,
          * but the drain in this thread can continue immediately after
--
2.13.6
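
As an aside (a sketch of the intent only, not a quote of the actual
call sites): the BH added above can only work if whichever thread runs
the nested poll during drain drops the AioContext lock around it,
roughly like this:

    /* Illustrative pattern only; the real call sites are in the drain /
     * AIO_WAIT_WHILE() code, not spelled out here. */
    aio_context_release(ctx);    /* drop the lock taken by the drain caller */
    aio_poll(ctx, false);        /* may dispatch test_iothread_main_thread_bh() */
    aio_context_acquire(ctx);    /* re-take it before continuing the drain */

With the lock dropped, the BH's own aio_context_acquire() is the only
acquisition, so the AIO_WAIT_WHILE() inside bdrv_flush() can release it
and complete.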
Thread overview: 47+ messages
2018-09-13 12:52 [Qemu-devel] [PATCH v2 00/17] Fix some jobs/drain/aio_poll related hangs Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 01/17] job: Fix missing locking due to mismerge Kevin Wolf
2018-09-13 13:56 ` Max Reitz
2018-09-13 17:38 ` John Snow
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 02/17] blockjob: Wake up BDS when job becomes idle Kevin Wolf
2018-09-13 14:31 ` Max Reitz
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 03/17] aio-wait: Increase num_waiters even in home thread Kevin Wolf
2018-09-13 15:11 ` Paolo Bonzini
2018-09-13 17:21 ` Kevin Wolf
2018-09-14 15:14 ` Paolo Bonzini
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 04/17] test-bdrv-drain: Drain with block jobs in an I/O thread Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 05/17] test-blockjob: Acquire AioContext around job_cancel_sync() Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 06/17] job: Use AIO_WAIT_WHILE() in job_finish_sync() Kevin Wolf
2018-09-13 14:45 ` Max Reitz
2018-09-13 15:15 ` Paolo Bonzini
2018-09-13 17:39 ` Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 07/17] test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 08/17] block: Add missing locking in bdrv_co_drain_bh_cb() Kevin Wolf
2018-09-13 14:58 ` Max Reitz
2018-09-13 15:17 ` Paolo Bonzini
2018-09-13 17:36 ` Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 09/17] block-backend: Add .drained_poll callback Kevin Wolf
2018-09-13 15:01 ` Max Reitz
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 10/17] block-backend: Fix potential double blk_delete() Kevin Wolf
2018-09-13 15:19 ` Paolo Bonzini
2018-09-13 19:50 ` Max Reitz
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 11/17] block-backend: Decrease in_flight only after callback Kevin Wolf
2018-09-13 15:10 ` Paolo Bonzini
2018-09-13 16:59 ` Kevin Wolf
2018-09-14 7:47 ` Fam Zheng
2018-09-14 15:12 ` Paolo Bonzini
2018-09-14 17:14 ` Kevin Wolf
2018-09-14 17:38 ` Paolo Bonzini
2018-09-13 20:50 ` Max Reitz
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 12/17] mirror: Fix potential use-after-free in active commit Kevin Wolf
2018-09-13 20:55 ` Max Reitz
2018-09-13 21:43 ` Max Reitz
2018-09-14 16:25 ` Kevin Wolf
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 13/17] blockjob: Lie better in child_job_drained_poll() Kevin Wolf
2018-09-13 21:52 ` Max Reitz
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 14/17] block: Remove aio_poll() in bdrv_drain_poll variants Kevin Wolf
2018-09-13 21:55 ` Max Reitz
2018-09-13 12:52 ` Kevin Wolf [this message]
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 16/17] job: Avoid deadlocks in job_completed_txn_abort() Kevin Wolf
2018-09-13 22:01 ` Max Reitz
2018-09-13 12:52 ` [Qemu-devel] [PATCH v2 17/17] test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort Kevin Wolf
2018-09-13 22:05 ` Max Reitz