From: Fam Zheng <famz@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] block/throttle: Use host clock type
Date: Fri, 13 Mar 2015 16:27:34 +0800
Message-ID: <20150313082734.GA3527@ad.nay.redhat.com>
In-Reply-To: <55029B1A.6000802@redhat.com>
On Fri, 03/13 09:08, Paolo Bonzini wrote:
>
>
> On 13/03/2015 07:35, Fam Zheng wrote:
> > Throttle timers won't make any progress when the VCPU is not running,
> > which tends to stall the request queue in cases like the utilities, qtest,
> > suspend, and live migration, unless handled carefully. What we do now is
> > crude: for example, in bdrv_drain_all requests are resumed immediately
> > without consulting the throttling timer. Unfortunately bdrv_drain_all is
> > so widely used that there may be too many holes through which the guest
> > could bypass throttling.
> >
> > If we use the host clock, we can just trust the nested poll when waiting
> > for requests.
> >
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> > block.c | 2 +-
> > tests/test-throttle.c | 14 +++++++-------
>
> I think test-throttle.c should use the vm_clock. At some point it was
> managing the clock manually (by overriding cpu_get_clock from
> libqemustub.a), and that's only possible with QEMU_CLOCK_VIRTUAL.
Ah! That is in iotests 093 (hint: authored by Fam Zheng :-/), which WILL get
complicated if block.c switches away from QEMU_CLOCK_VIRTUAL. But I'll do the
work if we decide to make this change.
As to tests/test-throttle.c, I don't see a dependency on the clock type there,
so either way should work, and I don't mind keeping it as-is.
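For the record, the manual-clock trick Paolo mentions would look roughly like
this (just a sketch; cpu_get_clock() and qemu_clock_run_all_timers() are real,
but fake_clock_ns and advance_clock are made-up names):

    #include "qemu/timer.h"

    /* Provide our own cpu_get_clock() instead of the libqemustub.a stub,
     * so QEMU_CLOCK_VIRTUAL can be advanced by hand from the test. */
    static int64_t fake_clock_ns;

    int64_t cpu_get_clock(void)
    {
        return fake_clock_ns;
    }

    static void advance_clock(int64_t ns)
    {
        fake_clock_ns += ns;
        qemu_clock_run_all_timers();   /* fire whatever timers became due */
    }

That only works for the virtual clock, which is exactly Paolo's point.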
>
> As to block.c, I'll leave the review to the block folks. But I think
> QEMU_CLOCK_REALTIME is preferable.
The real time clock should be fine, but we should check that the code handles
the clock going backwards.
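Something along these lines is what I have in mind (illustrative sketch only,
not the actual util/throttle.c code; throttle_leak_sketch is a made-up name):

    #include "qemu/timer.h"
    #include "qemu/throttle.h"

    /* Sketch: leak the buckets for the time elapsed since the last leak.
     * The host clock can step backwards (e.g. after an NTP adjustment), so
     * a negative delta must not "un-leak" anything. */
    static void throttle_leak_sketch(ThrottleState *ts)
    {
        int64_t now = qemu_clock_get_ns(QEMU_CLOCK_HOST);
        int64_t delta_ns = now - ts->previous_leak;

        ts->previous_leak = now;
        if (delta_ns <= 0) {
            return;   /* clock went backwards or no time passed */
        }
        /* ... leak delta_ns worth of budget from each bucket ... */
    }

If the leak path already guards against a negative delta we are fine;
otherwise that is the spot to fix.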
Fam
>
> > 2 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/block.c b/block.c
> > index 191a847..11f9065 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -184,7 +184,7 @@ void bdrv_io_limits_enable(BlockDriverState *bs)
> > assert(!bs->io_limits_enabled);
> > throttle_init(&bs->throttle_state,
> > bdrv_get_aio_context(bs),
> > - QEMU_CLOCK_VIRTUAL,
> > + QEMU_CLOCK_HOST,
> > bdrv_throttle_read_timer_cb,
> > bdrv_throttle_write_timer_cb,
> > bs);
> > diff --git a/tests/test-throttle.c b/tests/test-throttle.c
> > index d8ba415..1fb1792 100644
> > --- a/tests/test-throttle.c
> > +++ b/tests/test-throttle.c
> > @@ -107,11 +107,11 @@ static void test_init(void)
> > memset(&ts, 1, sizeof(ts));
> >
> > /* init the structure */
> > - throttle_init(&ts, ctx, QEMU_CLOCK_VIRTUAL,
> > + throttle_init(&ts, ctx, QEMU_CLOCK_HOST,
> > read_timer_cb, write_timer_cb, &ts);
> >
> > /* check initialized fields */
> > - g_assert(ts.clock_type == QEMU_CLOCK_VIRTUAL);
> > + g_assert(ts.clock_type == QEMU_CLOCK_HOST);
> > g_assert(ts.timers[0]);
> > g_assert(ts.timers[1]);
> >
> > @@ -130,7 +130,7 @@ static void test_init(void)
> > static void test_destroy(void)
> > {
> > int i;
> > - throttle_init(&ts, ctx, QEMU_CLOCK_VIRTUAL,
> > + throttle_init(&ts, ctx, QEMU_CLOCK_HOST,
> > read_timer_cb, write_timer_cb, &ts);
> > throttle_destroy(&ts);
> > for (i = 0; i < 2; i++) {
> > @@ -170,7 +170,7 @@ static void test_config_functions(void)
> >
> > orig_cfg.op_size = 1;
> >
> > - throttle_init(&ts, ctx, QEMU_CLOCK_VIRTUAL,
> > + throttle_init(&ts, ctx, QEMU_CLOCK_HOST,
> > read_timer_cb, write_timer_cb, &ts);
> > /* structure reset by throttle_init previous_leak should be null */
> > g_assert(!ts.previous_leak);
> > @@ -330,7 +330,7 @@ static void test_have_timer(void)
> > g_assert(!throttle_have_timer(&ts));
> >
> > /* init the structure */
> > - throttle_init(&ts, ctx, QEMU_CLOCK_VIRTUAL,
> > + throttle_init(&ts, ctx, QEMU_CLOCK_HOST,
> > read_timer_cb, write_timer_cb, &ts);
> >
> > /* timer set by init should return true */
> > @@ -345,7 +345,7 @@ static void test_detach_attach(void)
> > memset(&ts, 0, sizeof(ts));
> >
> > /* init the structure */
> > - throttle_init(&ts, ctx, QEMU_CLOCK_VIRTUAL,
> > + throttle_init(&ts, ctx, QEMU_CLOCK_HOST,
> > read_timer_cb, write_timer_cb, &ts);
> >
> > /* timer set by init should return true */
> > @@ -387,7 +387,7 @@ static bool do_test_accounting(bool is_ops, /* are we testing bps or ops */
> >
> > cfg.op_size = op_size;
> >
> > - throttle_init(&ts, ctx, QEMU_CLOCK_VIRTUAL,
> > + throttle_init(&ts, ctx, QEMU_CLOCK_HOST,
> > read_timer_cb, write_timer_cb, &ts);
> > throttle_config(&ts, &cfg);
> >
> >