From: Fam Zheng <famz@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
pbonzini@redhat.com, Jeff Cody <jcody@redhat.com>,
qemu-block@nongnu.org, mreitz@redhat.com
Subject: [Qemu-devel] [PATCH v8 2/2] mirror: Add mirror_wait_for_io
Date: Thu, 24 Dec 2015 11:15:14 +0800
Message-ID: <1450926914-12509-3-git-send-email-famz@redhat.com>
In-Reply-To: <1450926914-12509-1-git-send-email-famz@redhat.com>

These three lines are duplicated a number of times now; refactor them
into a helper function.

Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
---
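For reviewers' reference: the helper pairs with the AIO completion path
in block/mirror.c, which re-enters the mirror coroutine once an
in-flight request completes. A simplified sketch of that counterpart
(the body of mirror_iteration_done is elided down to the relevant
check; this is illustration only, not part of this patch):

    /* Runs in AIO callback context when a mirror request completes. */
    static void mirror_iteration_done(MirrorOp *op, int ret)
    {
        MirrorBlockJob *s = op->s;

        /* ... release buffer chunks, decrement s->in_flight ... */

        if (s->waiting_for_io) {
            /* Wake the coroutine parked in mirror_wait_for_io(). */
            qemu_coroutine_enter(s->common.co, NULL);
        }
    }

The assert(!s->waiting_for_io) in the new helper guards against nested
waits; only the single mirror coroutine ever parks on this flag.
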
block/mirror.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index 0081c2e..07ad068 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -206,6 +206,14 @@ static int mirror_cow_align(MirrorBlockJob *s,
return diff;
}
+static inline void mirror_wait_for_io(MirrorBlockJob *s)
+{
+ assert(!s->waiting_for_io);
+ s->waiting_for_io = true;
+ qemu_coroutine_yield();
+ s->waiting_for_io = false;
+}
+
/* Submit async read while handling COW.
* Returns: nb_sectors if no alignment is necessary, or
* (new_end - sector_num) if tail is rounded up or down due to
@@ -238,9 +246,7 @@ static int mirror_do_read(MirrorBlockJob *s, int64_t sector_num,
while (s->buf_free_count < nb_chunks) {
trace_mirror_yield_in_flight(s, sector_num, s->in_flight);
- s->waiting_for_io = true;
- qemu_coroutine_yield();
- s->waiting_for_io = false;
+ mirror_wait_for_io(s);
}
/* Allocate a MirrorOp that is used as an AIO callback. */
@@ -331,9 +337,7 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
break;
}
trace_mirror_yield_in_flight(s, next_sector, s->in_flight);
- s->waiting_for_io = true;
- qemu_coroutine_yield();
- s->waiting_for_io = false;
+ mirror_wait_for_io(s);
/* Now retry. */
} else {
hbitmap_next = hbitmap_iter_next(&s->hbi);
@@ -423,9 +427,7 @@ static void mirror_free_init(MirrorBlockJob *s)
static void mirror_drain(MirrorBlockJob *s)
{
while (s->in_flight > 0) {
- s->waiting_for_io = true;
- qemu_coroutine_yield();
- s->waiting_for_io = false;
+ mirror_wait_for_io(s);
}
}
@@ -613,9 +615,7 @@ static void coroutine_fn mirror_run(void *opaque)
if (s->in_flight == MAX_IN_FLIGHT || s->buf_free_count == 0 ||
(cnt == 0 && s->in_flight > 0)) {
trace_mirror_yield(s, s->in_flight, s->buf_free_count, cnt);
- s->waiting_for_io = true;
- qemu_coroutine_yield();
- s->waiting_for_io = false;
+ mirror_wait_for_io(s);
continue;
} else if (cnt != 0) {
delay_ns = mirror_iteration(s);
--
2.4.3