qemu-devel.nongnu.org archive mirror
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Rik van Riel <riel@redhat.com>
Cc: kvm@vger.kernel.org, Mike Galbraith <efault@gmx.de>,
	qemu-devel@nongnu.org, vatsa@linux.vnet.ibm.com,
	Chris Wright <chrisw@sous-sol.org>,
	Anthony Liguori <aliguori@linux.vnet.ibm.com>,
	Avi Kivity <avi@redhat.com>,
	Avi@gnu.org
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
Date: Wed, 01 Dec 2010 20:35:36 +0100	[thread overview]
Message-ID: <1291232136.32004.1964.camel@laptop> (raw)
In-Reply-To: <4CF6A0E4.1050108@redhat.com>

On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
> On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> >> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
> 
> >> The pause loop exiting & directed yield patches I am working on
> >> preserve inter-vcpu fairness by round-robining among the vcpus
> >> inside one KVM guest.
> >
> > I don't necessarily think that's enough.
> >
> > Suppose you've got 4 vcpus: one is holding a lock and the other 3 are
> > spinning on it. All three will end up donating some time to the 4th.
> >
> > The only way to make that fair again is if, due to future contention,
> > the 4th vcpu donates an equal amount of time back to the respective
> > vcpus it got time from. Guest lock patterns and host scheduling don't
> > provide that guarantee.
> 
> You have no guarantees when running virtualized, guest
> CPU time could be taken away by another guest just as
> easily as by another VCPU.
> 
> Even if we equalized the amount of CPU time each VCPU
> ends up getting across some time interval, that is no
> guarantee they get useful work done, or that the time
> gets fairly divided to _user processes_ running inside
> the guest.

Right, and Jeremy was working on making the guest load-balancer aware of
that, so that user-space still gets scheduled fairly on the service the
vcpus actually receive (of course, that assumes you run a Linux guest
with that logic in it).
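
A minimal sketch of that steal-time idea, assuming a guest kernel that
exposes a per-cpu "steal" field in /proc/stat; this illustrates the
concept only and is not Jeremy's actual load-balancer work:

    /*
     * Illustrative only: measure how much run time the host "stole"
     * from a vcpu over one second, using the per-cpu steal field in
     * /proc/stat (8th value: user nice system idle iowait irq
     * softirq steal).  A steal-aware guest load balancer would fold
     * this into its per-cpu capacity estimates.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Steal counter of @cpu in USER_HZ ticks; -1 if /proc/stat is
     * unreadable, 0 if the field is absent. */
    static long long read_steal_ticks(int cpu)
    {
        char tag[16], line[256];
        long long v[8] = { 0 };
        FILE *f = fopen("/proc/stat", "r");

        if (!f)
            return -1;
        snprintf(tag, sizeof(tag), "cpu%d ", cpu);
        while (fgets(line, sizeof(line), f)) {
            if (!strncmp(line, tag, strlen(tag))) {
                sscanf(line + strlen(tag),
                       "%lld %lld %lld %lld %lld %lld %lld %lld",
                       &v[0], &v[1], &v[2], &v[3],
                       &v[4], &v[5], &v[6], &v[7]);
                break;
            }
        }
        fclose(f);
        return v[7];
    }

    int main(void)
    {
        long long s0 = read_steal_ticks(0);
        sleep(1);
        long long s1 = read_steal_ticks(0);

        if (s0 >= 0 && s1 >= 0)
            printf("cpu0 steal: %lld ticks over 1s\n", s1 - s0);
        return 0;
    }

A steal-aware load balancer would discount a vcpu's apparent capacity
by the measured steal fraction, so tasks migrate away from vcpus the
host is starving.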

> The VCPU could be running something lock-happy when
> it temporarily gives up the CPU, and get extra CPU time
> back when running something userspace intensive.
> 
> In-between, it may well have scheduled to another task
> (allowing it to get more CPU time).
> 
> I'm not convinced the kind of fairness you suggest is
> possible or useful.

Well, physical cpus get equal service, but yeah, time lost to
contention could probably be considered equivalent to non-equal service
in the vcpu case.

Anyway, don't take it as a critique per se; your approach sounds like
the sanest proposal yet.
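
For reference, a rough sketch of the round-robin selection Rik
describes: on a pause-loop exit, scan the guest's other vcpus starting
just past the last one boosted and yield to the first plausible lock
holder. All names below (struct vm, pick_yield_target, ...) are
illustrative, not the actual KVM interfaces:

    #include <stdbool.h>
    #include <stddef.h>

    struct vcpu {
        int id;
        bool runnable;      /* has work but was preempted by the host */
        bool in_spinloop;   /* itself stuck in a pause loop */
    };

    struct vm {
        struct vcpu *vcpus;
        size_t nr_vcpus;
        size_t last_boosted;    /* rotates to spread the boost around */
    };

    /* Pick the next yield target round-robin; NULL if none qualifies. */
    static struct vcpu *pick_yield_target(struct vm *vm, struct vcpu *self)
    {
        for (size_t i = 1; i <= vm->nr_vcpus; i++) {
            size_t idx = (vm->last_boosted + i) % vm->nr_vcpus;
            struct vcpu *v = &vm->vcpus[idx];

            /* Skip ourselves, halted vcpus, and fellow spinners. */
            if (v == self || !v->runnable || v->in_spinloop)
                continue;
            vm->last_boosted = idx;
            return v;
        }
        return NULL;
    }

    int main(void)
    {
        struct vcpu v[4] = {
            { 0, true,  true  },   /* the spinner that just exited */
            { 1, true,  false },   /* preempted lock holder */
            { 2, true,  true  },   /* another spinner */
            { 3, false, false },   /* halted */
        };
        struct vm vm = { v, 4, 0 };
        struct vcpu *t = pick_yield_target(&vm, &v[0]);

        return t ? t->id : -1;  /* yields to vcpu 1, the likely holder */
    }

Rotating last_boosted is what spreads the boost across the guest's
vcpus; Peter's point above is that nothing in this scheme forces the
boosted vcpu to ever pay the donated time back.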


Thread overview: 40+ messages
2010-11-23 16:49 [Qemu-devel] [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2) Anthony Liguori
2010-11-23 19:35 ` Blue Swirl
2010-11-23 21:46   ` Anthony Liguori
2010-11-23 23:43     ` Paolo Bonzini
2010-11-24  1:15       ` Anthony Liguori
2010-11-24  2:08         ` Paolo Bonzini
2010-11-24  8:18 ` [Qemu-devel] " Avi Kivity
2010-11-24 13:58   ` Anthony Liguori
2010-11-24 14:23     ` Avi Kivity
2010-12-01 12:37       ` Srivatsa Vaddagiri
2010-12-01 12:56         ` Avi Kivity
2010-12-01 16:12           ` Srivatsa Vaddagiri
2010-12-01 16:25             ` Peter Zijlstra
2010-12-01 17:17               ` Chris Wright
2010-12-01 17:22                 ` Peter Zijlstra
2010-12-01 17:26                   ` Rik van Riel
2010-12-01 19:07                     ` Peter Zijlstra
2010-12-01 19:24                       ` Rik van Riel
2010-12-01 19:35                         ` Peter Zijlstra [this message]
2010-12-01 19:42                           ` Rik van Riel
2010-12-01 19:47                             ` Peter Zijlstra
2010-12-02  9:07                       ` Avi Kivity
2010-12-01 17:46                   ` Chris Wright
2010-12-01 17:29               ` Srivatsa Vaddagiri
2010-12-01 17:45                 ` Peter Zijlstra
2010-12-01 18:00                   ` Srivatsa Vaddagiri
2010-12-01 19:09                     ` Peter Zijlstra
2010-12-02  9:17                       ` Avi Kivity
2010-12-02 11:47                         ` Srivatsa Vaddagiri
2010-12-02 12:22                           ` Srivatsa Vaddagiri
2010-12-02 12:41                           ` Avi Kivity
2010-12-02 13:13                             ` Srivatsa Vaddagiri
2010-12-02 13:49                               ` Avi Kivity
2010-12-02 15:27                                 ` Srivatsa Vaddagiri
2010-12-02 15:28                                   ` Srivatsa Vaddagiri
2010-12-02 15:33                                   ` Avi Kivity
2010-12-02 15:44                                     ` Srivatsa Vaddagiri
2010-12-02 12:19                         ` Srivatsa Vaddagiri
2010-12-02 12:42                           ` Avi Kivity
2010-12-02  9:14                 ` Avi Kivity

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1291232136.32004.1964.camel@laptop \
    --to=a.p.zijlstra@chello.nl \
    --cc=Avi@gnu.org \
    --cc=aliguori@linux.vnet.ibm.com \
    --cc=avi@redhat.com \
    --cc=chrisw@sous-sol.org \
    --cc=efault@gmx.de \
    --cc=kvm@vger.kernel.org \
    --cc=qemu-devel@nongnu.org \
    --cc=riel@redhat.com \
    --cc=vatsa@linux.vnet.ibm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.