From: Paolo Bonzini <pbonzini@redhat.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>,
	jjherne@linux.vnet.ibm.com, qemu-devel@nongnu.org,
	Peter Xu <peterx@redhat.com>, Juan Quintela <quintela@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 5/5] cpu: throttle: fix throttle time slice
Date: Fri, 31 Mar 2017 15:46:57 -0400 (EDT)
Message-ID: <3171819.10011849.1490989617665.JavaMail.zimbra@redhat.com>
In-Reply-To: <20170331191321.GI2408@work-vm>



> > So I'm inclined _not_ to take your patch.  One possibility could be to
> > do the following:
> > 
> > - for throttling between 0% and 80%, use the current algorithm.  At 66%,
> > the CPU will work for 10 ms and sleep for 40 ms.
> > 
> > - for throttling above 80% adapt your algorithm to have a variable
> > timeslice, going from 50 ms at 66% to 100 ms at 100%.  This way, the CPU
> > time will shrink below 10 ms and the sleep time will grow.

Oops, all of the "66%" figures above should read "80%".

> It seems odd to have a threshold like that on something that's supposedly
> a linear scale.

I futzed a bit with the threshold until the first derivative of the CPU
time was zero at the threshold, and the result was 80%.  That is, if you
put the switch point below 80%, the CPU time per slice grows beyond 10 ms
right after the threshold and only then starts shrinking.
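
To make that calculation explicit: per slice, the CPU works for
(1 - pct) * T(pct), where T is the total slice length (work + sleep).
Requiring T to be continuous at the switch point p0 gives
T(p0) = 10 ms / (1 - p0), and the derivative of the CPU time at p0 then
works out to 100 ms - 20 ms / (1 - p0), which vanishes exactly at
p0 = 0.8.

Here is a quick standalone sketch of the combined scheme, just to make
the numbers reproducible; this is not the actual QEMU code, and all of
the names are made up for illustration:

#include <stdio.h>

/*
 * pct is the throttling percentage, 0.0 .. 1.0.
 * Below the 80% threshold: fixed 10 ms of work per slice, with
 * sleep = pct / (1 - pct) * 10 ms (the current algorithm).
 * Above 80%: the slice length grows linearly from 50 ms at 80%
 * to 100 ms at 100%, and the work is the (1 - pct) share of it.
 */
static void throttle_times(double pct, double *work_ms, double *sleep_ms)
{
    if (pct <= 0.8) {
        *work_ms = 10.0;
        *sleep_ms = pct / (1.0 - pct) * 10.0;
    } else {
        double slice_ms = 50.0 + (pct - 0.8) / 0.2 * 50.0;
        *work_ms = (1.0 - pct) * slice_ms;
        *sleep_ms = slice_ms - *work_ms;
    }
}

int main(void)
{
    static const double pcts[] = { 0.5, 0.8, 0.9, 0.99 };
    for (int i = 0; i < 4; i++) {
        double work, sleep;
        throttle_times(pcts[i], &work, &sleep);
        printf("%2.0f%%: work %6.3f ms, sleep %6.3f ms\n",
               pcts[i] * 100, work, sleep);
    }
    return 0;
}

At 99% this prints 0.975 ms of work in a 97.5 ms slice, matching the
numbers below, and the two branches agree at the 80% boundary
(10 ms of work, 40 ms of sleep either way).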

> > It looks like this: http://i.imgur.com/lyFie04.png
> > 
> > So at 99% the timeslice will be 97.5 ms; the CPU will work for 975 µs
> > and sleep for the rest (10x more than with just your patch).  But I'm
> > not sure it's really worth it.
> 
> Can you really run a CPU for 975 µs?

It's 2-3 million clock cycles; that should be doable.
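
(Spelling out the arithmetic: 975 µs at an assumed 2.5 GHz clock is
975e-6 s * 2.5e9 Hz ~= 2.4 million cycles, so a 2-3 GHz part lands in
the 2-3 million range.)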

Paolo

