* [Qemu-devel] [PATCH 0/2] blockjob: Fix coroutine thread after AioContext change
From: Kevin Wolf @ 2019-05-03 17:17 UTC (permalink / raw)
To: qemu-block; +Cc: kwolf, mreitz, qemu-devel

Kevin Wolf (2):
  blockjob: Fix coroutine thread after AioContext change
  test-block-iothread: Job coroutine thread after AioContext switch

 job.c                       |   2 +-
 tests/test-block-iothread.c | 107 ++++++++++++++++++++++++++++++++++++
 2 files changed, 108 insertions(+), 1 deletion(-)

-- 
2.20.1
* [Qemu-devel] [PATCH 1/2] blockjob: Fix coroutine thread after AioContext change
From: Kevin Wolf @ 2019-05-03 17:17 UTC (permalink / raw)
To: qemu-block; +Cc: kwolf, mreitz, qemu-devel

Commit 463e0be10 ('blockjob: add AioContext attached callback') tried to
make block jobs robust against AioContext changes of their main node, but
it never made sure that the job coroutine actually runs in the new thread.

Instead of waking up the job coroutine in whatever thread it ran before,
let's always pass the AioContext where it should be running now.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 job.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/job.c b/job.c
index da8e4b7bf2..2167d53717 100644
--- a/job.c
+++ b/job.c
@@ -432,7 +432,7 @@ void job_enter_cond(Job *job, bool(*fn)(Job *job))
     timer_del(&job->sleep_timer);
     job->busy = true;
     job_unlock();
-    aio_co_wake(job->co);
+    aio_co_enter(job->aio_context, job->co);
 }
 
 void job_enter(Job *job)
-- 
2.20.1
* [Qemu-devel] [PATCH 2/2] test-block-iothread: Job coroutine thread after AioContext switch
From: Kevin Wolf @ 2019-05-03 17:17 UTC (permalink / raw)
To: qemu-block; +Cc: kwolf, mreitz, qemu-devel

This tests that a job coroutine always runs in the right iothread after
the AioContext of its main node has changed.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-block-iothread.c | 107 ++++++++++++++++++++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/tests/test-block-iothread.c b/tests/test-block-iothread.c
index 97ac0b159d..036ed9a3b3 100644
--- a/tests/test-block-iothread.c
+++ b/tests/test-block-iothread.c
@@ -354,6 +354,111 @@ static void test_sync_op(const void *opaque)
     blk_unref(blk);
 }
 
+typedef struct TestBlockJob {
+    BlockJob common;
+    bool should_complete;
+    int n;
+} TestBlockJob;
+
+static int test_job_prepare(Job *job)
+{
+    g_assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+    return 0;
+}
+
+static int coroutine_fn test_job_run(Job *job, Error **errp)
+{
+    TestBlockJob *s = container_of(job, TestBlockJob, common.job);
+
+    job_transition_to_ready(&s->common.job);
+    while (!s->should_complete) {
+        s->n++;
+        g_assert(qemu_get_current_aio_context() == job->aio_context);
+
+        /* Avoid job_sleep_ns() because it marks the job as !busy. We want to
+         * emulate some actual activity (probably some I/O) here so that the
+         * drain involved in AioContext switches has to wait for this activity
+         * to stop. */
+        qemu_co_sleep_ns(QEMU_CLOCK_REALTIME, 1000000);
+
+        job_pause_point(&s->common.job);
+    }
+
+    g_assert(qemu_get_current_aio_context() == job->aio_context);
+    return 0;
+}
+
+static void test_job_complete(Job *job, Error **errp)
+{
+    TestBlockJob *s = container_of(job, TestBlockJob, common.job);
+    s->should_complete = true;
+}
+
+BlockJobDriver test_job_driver = {
+    .job_driver = {
+        .instance_size = sizeof(TestBlockJob),
+        .free          = block_job_free,
+        .user_resume   = block_job_user_resume,
+        .drain         = block_job_drain,
+        .run           = test_job_run,
+        .complete      = test_job_complete,
+        .prepare       = test_job_prepare,
+    },
+};
+
+static void test_attach_blockjob(void)
+{
+    IOThread *iothread = iothread_new();
+    AioContext *ctx = iothread_get_aio_context(iothread);
+    BlockBackend *blk;
+    BlockDriverState *bs;
+    TestBlockJob *tjob;
+
+    blk = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs = bdrv_new_open_driver(&bdrv_test, "base", BDRV_O_RDWR, &error_abort);
+    blk_insert_bs(blk, bs, &error_abort);
+
+    tjob = block_job_create("job0", &test_job_driver, NULL, bs,
+                            0, BLK_PERM_ALL,
+                            0, 0, NULL, NULL, &error_abort);
+    job_start(&tjob->common.job);
+
+    while (tjob->n == 0) {
+        aio_poll(qemu_get_aio_context(), false);
+    }
+
+    blk_set_aio_context(blk, ctx);
+
+    tjob->n = 0;
+    while (tjob->n == 0) {
+        aio_poll(qemu_get_aio_context(), false);
+    }
+
+    aio_context_acquire(ctx);
+    blk_set_aio_context(blk, qemu_get_aio_context());
+    aio_context_release(ctx);
+
+    tjob->n = 0;
+    while (tjob->n == 0) {
+        aio_poll(qemu_get_aio_context(), false);
+    }
+
+    blk_set_aio_context(blk, ctx);
+
+    tjob->n = 0;
+    while (tjob->n == 0) {
+        aio_poll(qemu_get_aio_context(), false);
+    }
+
+    aio_context_acquire(ctx);
+    job_complete_sync(&tjob->common.job, &error_abort);
+    blk_set_aio_context(blk, qemu_get_aio_context());
+    aio_context_release(ctx);
+
+    bdrv_unref(bs);
+    blk_unref(blk);
+}
+
 int main(int argc, char **argv)
 {
     int i;
@@ -368,5 +473,7 @@ int main(int argc, char **argv)
         g_test_add_data_func(t->name, t, test_sync_op);
     }
 
+    g_test_add_func("/attach/blockjob", test_attach_blockjob);
+
     return g_test_run();
 }
-- 
2.20.1
* Re: [Qemu-devel] [PATCH 0/2] blockjob: Fix coroutine thread after AioContext change
From: Kevin Wolf @ 2019-05-08 12:15 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, mreitz

On 03.05.2019 at 19:17, Kevin Wolf wrote:
> Kevin Wolf (2):
>   blockjob: Fix coroutine thread after AioContext change
>   test-block-iothread: Job coroutine thread after AioContext switch

Applied to the block branch.

Kevin