From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
jjherne@linux.vnet.ibm.com, Fam Zheng <famz@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>, Jeff Cody <jcody@redhat.com>,
mreitz@redhat.com, Stefan Hajnoczi <stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH v4 4/5] mirror: follow AioContext change gracefully
Date: Tue, 14 Jun 2016 19:17:07 +0100
Message-ID: <1465928228-1184-5-git-send-email-stefanha@redhat.com>
In-Reply-To: <1465928228-1184-1-git-send-email-stefanha@redhat.com>
Add block_job_pause_point() calls to mark quiescent points and make sure
to complete in-flight requests when switching AioContexts.
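Roughly, the pause point helper added earlier in this series behaves like the
simplified sketch below. This is only an illustration of the semantics the
mirror changes rely on, not the exact code from "blockjob: add pause points":

    /* Simplified sketch: called from the job coroutine at points where it is
     * safe to stop, i.e. where any in-flight requests can first be drained by
     * the driver's .pause callback.
     */
    void coroutine_fn block_job_pause_point(BlockJob *job)
    {
        if (job->pause_count == 0 || block_job_is_cancelled(job)) {
            return;                   /* nobody asked the job to pause */
        }

        if (job->driver->pause) {
            job->driver->pause(job);  /* mirror_pause() waits for in-flight I/O */
        }

        job->paused = true;
        job->busy = false;
        qemu_coroutine_yield();       /* woken up by block_job_resume() */
        job->busy = true;
        job->paused = false;

        if (job->driver->resume) {
            job->driver->resume(job); /* the mirror job installs no resume callback */
        }
    }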
This patch fixes undefined behavior in the mirror block job when the BDS
AioContext is changed by dataplane.
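The new driver callbacks are invoked from the block job core. Patch 3/5 wires
this up by registering AioContext attach/detach notifiers for the job, so that
when dataplane moves the BDS the job is first paused (draining in-flight
requests through .pause), then the .attached_aio_context callback runs, and
finally the job is resumed. A rough sketch of the attach side, simplified
rather than quoted from that patch:

    static void block_job_attached_aio_context(AioContext *new_context,
                                               void *opaque)
    {
        BlockJob *job = opaque;

        if (job->driver->attached_aio_context) {
            /* mirror_attached_aio_context() also moves the target BlockBackend */
            job->driver->attached_aio_context(job, new_context);
        }

        block_job_resume(job);        /* undo the pause taken when detaching */
    }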
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
---
block/mirror.c | 45 +++++++++++++++++++++++++++++++++++++--------
1 file changed, 37 insertions(+), 8 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index 80fd3c7..4c4e55b 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -331,6 +331,8 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
         mirror_wait_for_io(s);
     }
 
+    block_job_pause_point(&s->common);
+
     /* Find the number of consective dirty chunks following the first dirty
      * one, and wait for in flight requests in them. */
     while (nb_chunks * sectors_per_chunk < (s->buf_size >> BDRV_SECTOR_BITS)) {
@@ -581,6 +583,8 @@ static void coroutine_fn mirror_run(void *opaque)
             if (now - last_pause_ns > SLICE_TIME) {
                 last_pause_ns = now;
                 block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, 0);
+            } else {
+                block_job_pause_point(&s->common);
             }
 
             if (block_job_is_cancelled(&s->common)) {
@@ -612,6 +616,8 @@ static void coroutine_fn mirror_run(void *opaque)
             goto immediate_exit;
         }
 
+        block_job_pause_point(&s->common);
+
         cnt = bdrv_get_dirty_count(s->dirty_bitmap);
         /* s->common.offset contains the number of bytes already processed so
          * far, cnt is the number of dirty sectors remaining and
@@ -781,18 +787,41 @@ static void mirror_complete(BlockJob *job, Error **errp)
     block_job_enter(&s->common);
 }
 
+/* There is no matching mirror_resume() because mirror_run() will begin
+ * iterating again when the job is resumed.
+ */
+static void mirror_pause(BlockJob *job)
+{
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+
+    while (s->in_flight > 0) {
+        aio_poll(blk_get_aio_context(job->blk), true);
+    }
+}
+
+static void mirror_attached_aio_context(BlockJob *job, AioContext *new_context)
+{
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+
+    blk_set_aio_context(s->target, new_context);
+}
+
 static const BlockJobDriver mirror_job_driver = {
-    .instance_size = sizeof(MirrorBlockJob),
-    .job_type      = BLOCK_JOB_TYPE_MIRROR,
-    .set_speed     = mirror_set_speed,
-    .complete      = mirror_complete,
+    .instance_size          = sizeof(MirrorBlockJob),
+    .job_type               = BLOCK_JOB_TYPE_MIRROR,
+    .set_speed              = mirror_set_speed,
+    .complete               = mirror_complete,
+    .pause                  = mirror_pause,
+    .attached_aio_context   = mirror_attached_aio_context,
 };
 
 static const BlockJobDriver commit_active_job_driver = {
-    .instance_size = sizeof(MirrorBlockJob),
-    .job_type      = BLOCK_JOB_TYPE_COMMIT,
-    .set_speed     = mirror_set_speed,
-    .complete      = mirror_complete,
+    .instance_size          = sizeof(MirrorBlockJob),
+    .job_type               = BLOCK_JOB_TYPE_COMMIT,
+    .set_speed              = mirror_set_speed,
+    .complete               = mirror_complete,
+    .pause                  = mirror_pause,
+    .attached_aio_context   = mirror_attached_aio_context,
 };
 
 static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
--
2.5.5