From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Michal Privoznik <mprivozn@redhat.com>,
Jeff Cody <jcody@redhat.com>,
Joaquim Barrera <jbarrera@ac.upc.edu>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH for-2.0 1/2] mirror: fix throttling delay calculation
Date: Fri, 21 Mar 2014 13:55:18 +0100
Message-ID: <1395406519-21410-2-git-send-email-stefanha@redhat.com>
In-Reply-To: <1395406519-21410-1-git-send-email-stefanha@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
The throttling delay calculation used the fixed sectors_per_chunk value
to calculate the time to sleep, rather than the number of sectors
actually transferred. This broke rate-limiting for the block mirror
job.
Move the delay calculation into mirror_iteration() where we know how
many sectors were transferred. This lets us calculate an accurate delay
time.
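For reference, here is a minimal standalone sketch of the rate-limiting
idea (a simplification, not QEMU's actual ratelimit implementation; the
ToyRateLimit type and toy_calculate_delay name are made up for
illustration). The key point is that the delay must scale with the
number of units actually transferred, which is exactly what passing the
fixed sectors_per_chunk value failed to do:

    #include <stdint.h>

    typedef struct {
        uint64_t speed;   /* allowed units per second; 0 = unlimited */
    } ToyRateLimit;

    /* Return the number of nanoseconds to sleep after transferring n
     * units so the long-run rate stays at or below limit->speed. */
    static uint64_t toy_calculate_delay(ToyRateLimit *limit, uint64_t n)
    {
        if (limit->speed == 0) {
            return 0;
        }
        /* n units at `speed` units/sec take n/speed seconds. */
        return n * 1000000000ULL / limit->speed;
    }

QEMU's real helper additionally amortizes the accounting over fixed
time slices, but the proportionality to n is the same.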
Reported-by: Joaquim Barrera <jbarrera@ac.upc.edu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
block/mirror.c | 28 +++++++++++++++-------------
trace-events | 2 +-
2 files changed, 16 insertions(+), 14 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index dd5ee05..adb09cf 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -139,11 +139,12 @@ static void mirror_read_complete(void *opaque, int ret)
mirror_write_complete, op);
}
-static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
+static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
{
BlockDriverState *source = s->common.bs;
int nb_sectors, sectors_per_chunk, nb_chunks;
int64_t end, sector_num, next_chunk, next_sector, hbitmap_next_sector;
+ uint64_t delay_ns;
MirrorOp *op;
s->sector_num = hbitmap_iter_next(&s->hbi);
@@ -231,7 +232,12 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
nb_chunks += added_chunks;
next_sector += added_sectors;
next_chunk += added_chunks;
- } while (next_sector < end);
+ if (!s->synced && s->common.speed) {
+ delay_ns = ratelimit_calculate_delay(&s->limit, added_sectors);
+ } else {
+ delay_ns = 0;
+ }
+ } while (delay_ns == 0 && next_sector < end);
/* Allocate a MirrorOp that is used as an AIO callback. */
op = g_slice_new(MirrorOp);
@@ -268,6 +274,7 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
trace_mirror_one_iteration(s, sector_num, nb_sectors);
bdrv_aio_readv(source, sector_num, &op->qiov, nb_sectors,
mirror_read_complete, op);
+ return delay_ns;
}
static void mirror_free_init(MirrorBlockJob *s)
@@ -362,7 +369,7 @@ static void coroutine_fn mirror_run(void *opaque)
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
for (;;) {
- uint64_t delay_ns;
+ uint64_t delay_ns = 0;
int64_t cnt;
bool should_complete;
@@ -386,8 +393,10 @@ static void coroutine_fn mirror_run(void *opaque)
qemu_coroutine_yield();
continue;
} else if (cnt != 0) {
- mirror_iteration(s);
- continue;
+ delay_ns = mirror_iteration(s);
+ if (delay_ns == 0) {
+ continue;
+ }
}
}
@@ -432,17 +441,10 @@ static void coroutine_fn mirror_run(void *opaque)
}
ret = 0;
- trace_mirror_before_sleep(s, cnt, s->synced);
+ trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
if (!s->synced) {
/* Publish progress */
s->common.offset = (end - cnt) * BDRV_SECTOR_SIZE;
-
- if (s->common.speed) {
- delay_ns = ratelimit_calculate_delay(&s->limit, sectors_per_chunk);
- } else {
- delay_ns = 0;
- }
-
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
diff --git a/trace-events b/trace-events
index 002c260..3b7ff4d 100644
--- a/trace-events
+++ b/trace-events
@@ -82,7 +82,7 @@ mirror_start(void *bs, void *s, void *co, void *opaque) "bs %p s %p co %p opaque
mirror_restart_iter(void *s, int64_t cnt) "s %p dirty count %"PRId64
mirror_before_flush(void *s) "s %p"
mirror_before_drain(void *s, int64_t cnt) "s %p dirty count %"PRId64
-mirror_before_sleep(void *s, int64_t cnt, int synced) "s %p dirty count %"PRId64" synced %d"
+mirror_before_sleep(void *s, int64_t cnt, int synced, uint64_t delay_ns) "s %p dirty count %"PRId64" synced %d delay %"PRIu64"ns"
mirror_one_iteration(void *s, int64_t sector_num, int nb_sectors) "s %p sector_num %"PRId64" nb_sectors %d"
mirror_iteration_done(void *s, int64_t sector_num, int nb_sectors, int ret) "s %p sector_num %"PRId64" nb_sectors %d ret %d"
mirror_yield(void *s, int64_t cnt, int buf_free_count, int in_flight) "s %p dirty count %"PRId64" free buffers %d in_flight %d"
--
1.8.5.3