public inbox for kvm@vger.kernel.org
From: Avi Kivity <avi@redhat.com>
To: Rik van Riel <riel@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Mike Galbraith <efault@gmx.de>,
	Chris Wright <chrisw@sous-sol.org>
Subject: Re: [RFC -v4 PATCH 0/3] directed yield for Pause Loop Exiting
Date: Thu, 13 Jan 2011 15:12:22 +0200	[thread overview]
Message-ID: <4D2EFA36.2040100@redhat.com> (raw)
In-Reply-To: <20110113002108.3abdf953@annuminas.surriel.com>

On 01/13/2011 07:21 AM, Rik van Riel wrote:
> When running SMP virtual machines, it is possible for one VCPU to be
> spinning on a spinlock, while the VCPU that holds the spinlock is not
> currently running, because the host scheduler preempted it to run
> something else.
>
> Both Intel and AMD CPUs have a feature that detects when a virtual
> CPU is spinning on a lock and will trap to the host.
>
> The current KVM code sleeps for a bit whenever that happens, which
> results in e.g. a 64 VCPU Windows guest taking forever and a bit to
> boot up.  This is because the VCPU holding the lock is actually
> running and not sleeping, so the pause is counter-productive.
>
> In other workloads a pause can also be counter-productive, with
> spinlock detection resulting in one guest giving up its CPU time
> to the others.  Instead of spinning, it ends up simply not running
> much at all.
>
> This patch series aims to fix that, by having a VCPU that spins
> give the remainder of its timeslice to another VCPU in the same
> guest before yielding the CPU - one that is runnable but got
> preempted, hopefully the lock holder.

Can you share some benchmark results?

I'm mostly interested in moderately sized guests (4-8 vcpus) under 
conditions of no overcommit, and high overcommit (2x).

For no overcommit, I'd like to see comparisons against mainline with PLE 
disabled, to be sure there aren't significant regressions. For 
overcommit, comparisons against the no overcommit case.  Comparisons 
against mainline, with PLE enabled or disabled, are uninteresting since 
we know it sucks both ways.

-- 
error compiling committee.c: too many arguments to function



Thread overview: 9+ messages
2011-01-13  5:21 [RFC -v4 PATCH 0/3] directed yield for Pause Loop Exiting Rik van Riel
2011-01-13  5:22 ` [RFC -v4 PATCH 1/3] kvm: keep track of which task is running a KVM vcpu Rik van Riel
2011-01-13  5:26 ` [RFC -v4 PATCH 2/3] sched: Add yield_to(task, preempt) functionality Rik van Riel
2011-01-13  5:27 ` [RFC -v4 PATCH 3/3] kvm: use yield_to instead of sleep in kvm_vcpu_on_spin Rik van Riel
2011-01-13 13:16   ` Avi Kivity
2011-01-13 15:06     ` Rik van Riel
2011-01-13 15:23       ` Avi Kivity
2011-01-14  0:10         ` Rik van Riel
2011-01-13 13:12 ` Avi Kivity [this message]
