From: Attilio Rao <attilio.rao@citrix.com>
To: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@elte.hu>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, Avi Kivity <avi@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>, KVM <kvm@vger.kernel.org>,
	Andi Kleen <andi@firstfloor.org>,
	Xen Devel <xen-devel@lists.xensource.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Virtualization <virtualization@lists.linux-foundation.org>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>,
	Stephan Diestelhorst <stephan.diestelhorst@amd.com>,
	Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [PATCH RFC V6 1/11]  x86/spinlock: replace pv spinlocks with pv ticketlocks
Date: Wed, 21 Mar 2012 13:04:25 +0000
Message-ID: <4F69D1D9.9080107@citrix.com>
In-Reply-To: <20120321102052.473.40193.sendpatchset@codeblue.in.ibm.com>

On 21/03/12 10:20, Raghavendra K T wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
>
> Rather than outright replacing the entire spinlock implementation in
> order to paravirtualize it, keep the ticket lock implementation but add
> a couple of pvops hooks on the slow path (long spin on lock, unlocking
> a contended lock).
>
> Ticket locks have a number of nice properties, but they also have some
> surprising behaviours in virtual environments.  They enforce a strict
> FIFO ordering on cpus trying to take a lock; however, if the hypervisor
> scheduler does not schedule the cpus in the correct order, the system can
> waste a huge amount of time spinning until the next cpu can take the lock.
>
> (See Thomas Friebel's talk "Prevent Guests from Spinning Around"
> http://www.xen.org/files/xensummitboston08/LHP.pdf  for more details.)
>
> To address this, we add two hooks:
>   - __ticket_spin_lock which is called after the cpu has been
>     spinning on the lock for a significant number of iterations but has
>     failed to take the lock (presumably because the cpu holding the lock
>     has been descheduled).  The lock_spinning pvop is expected to block
>     the cpu until it has been kicked by the current lock holder.
>   - __ticket_spin_unlock, which, on releasing a contended lock
>     (there are still more cpus with tail tickets), checks whether the
>     next cpu is blocked and wakes it if so.
>
> When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
> functions causes all the extra code to go away.
>    
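To make the shape of the change concrete, here is a minimal userspace
model of that design (a sketch in C11 atomics, not the patch's actual
code: the names, the SPIN_THRESHOLD value and the empty stub hooks are
illustrative, while the kernel instead uses an xadd on a combined
head/tail ticket word plus pvops indirection):

#include <stdatomic.h>

#define SPIN_THRESHOLD 2048	/* illustrative spin budget before the slow path */

struct ticketlock {
	atomic_uint head;	/* ticket currently being served */
	atomic_uint tail;	/* next ticket to be handed out */
};

/*
 * Slow-path hooks.  With CONFIG_PARAVIRT_SPINLOCKS=n these stubs stay
 * empty and the compiler optimizes the extra code away; a hypervisor
 * backend would instead block the vcpu here and kick it on unlock.
 */
static inline void lock_spinning(struct ticketlock *l, unsigned ticket) { }
static inline void unlock_kick(struct ticketlock *l, unsigned next) { }

static void ticket_lock(struct ticketlock *l)
{
	unsigned me = atomic_fetch_add(&l->tail, 1);	/* grab a ticket */

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {			/* fast path: bounded spin */
			if (atomic_load(&l->head) == me)
				return;
		} while (--count);

		lock_spinning(l, me);	/* slow path: block until kicked */
	}
}

static void ticket_unlock(struct ticketlock *l)
{
	unsigned next = atomic_fetch_add(&l->head, 1) + 1;

	if (atomic_load(&l->tail) != next)	/* waiters remain: kick next */
		unlock_kick(l, next);
}

With the stubs empty this folds back into a plain ticket lock, which is
the CONFIG_PARAVIRT_SPINLOCKS=n case mentioned above.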

I've run some real-world benchmarks based on this series of patches 
applied on top of a vanilla Linux-3.3-rc6 (commit 
4704fe65e55fb088fbcb1dc0b15ff7cc8bff3685), with CONFIG_PARAVIRT_SPINLOCKS 
both enabled and disabled, which means essentially 4 kernels 
compared:
* vanilla - CONFIG_PARAVIRT_SPINLOCKS - patch
* vanilla + CONFIG_PARAVIRT_SPINLOCKS - patch
* vanilla - CONFIG_PARAVIRT_SPINLOCKS + patch
* vanilla + CONFIG_PARAVIRT_SPINLOCKS + patch

(You can check out the monolithic kernel configurations I used, and 
verify that this option is the sole difference between them, here: 
http://xenbits.xen.org/people/attilio/jeremy-spinlock/kernel-configs/)

Tests, information and results are summarized below.

== System information:
* Machine is a Xeon X3450, 2.6GHz, 8-way system:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/dmesg
* OS version, Debian Squeeze 6.0.4:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/debian-version
* gcc version, 4.4.5:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/gcc-version

== Tests performed
* pgbench, based on PostgreSQL 9.2 (development version) as it includes 
a lot of scalability improvements:
http://www.postgresql.org/docs/devel/static/install-getsource.html

I used a stock installation, with only this simple configuration change:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/postsgresql.conf.patch

For collecting data I used this simple script, which runs the test 10 
times for each thread count (from 1 to 64). Please note that the first 
8 runs serve to cache all the data in memory in order to avoid 
subsequent I/O, thus they are discarded from sampling and calculation:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/pgbench_script
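For illustration, the driver loop amounts to roughly the following. 
This is a hypothetical re-creation in C: the pgbench flags, the 60s 
duration, the database name and the thread-count steps are all 
assumptions, and the actual script is at the URL above.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Assumed thread counts: 1 to 64, doubling each step. */
	for (int clients = 1; clients <= 64; clients *= 2) {
		for (int run = 1; run <= 10; run++) {	/* 10 samples each */
			char cmd[256];
			snprintf(cmd, sizeof(cmd),
				 "pgbench -c %d -j %d -T 60 pgbench"
				 " >> pgbench-%d.log",
				 clients, clients, clients);
			if (system(cmd) != 0)
				return EXIT_FAILURE;
		}
	}
	return EXIT_SUCCESS;
}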

Here is the raw data (please bear in mind this is tps, thus the higher 
the better):
http://xenbits.xen.org/people/attilio/jeremy-spinlock/pgbench-crude-datas/

And here are the data charted with the ministat tool, comparing all 4 
kernel configurations for every thread count:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/pgbench-9.2-total.bench

As you can see, the patch doesn't show a statistically meaningful 
difference for this workload, excluding the single-thread run for the 
patched + CONFIG_PARAVIRT_SPINLOCKS=y case, which seems about 5% faster.
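For reference, the comparison behind those ministat charts is 
essentially a two-sample Student's t test. Here is a self-contained 
sketch of that arithmetic at 95% confidence; the sample values are made 
up, and N and the critical value assume 10 runs per kernel.

#include <math.h>
#include <stdio.h>

#define N 10			/* runs per kernel configuration */
#define T_CRIT_DF18 2.101	/* two-tailed 95% value for df = 2N-2 */

static double mean(const double *x)
{
	double s = 0;
	for (int i = 0; i < N; i++)
		s += x[i];
	return s / N;
}

static double var(const double *x, double m)	/* sample variance */
{
	double s = 0;
	for (int i = 0; i < N; i++)
		s += (x[i] - m) * (x[i] - m);
	return s / (N - 1);
}

int main(void)
{
	/* Placeholder tps samples: baseline vs. patched kernel. */
	double a[N] = { 980, 1002, 995, 1010, 988, 990, 1005, 998, 992, 1000 };
	double b[N] = { 985, 1008, 990, 1012, 991, 995, 1001, 999, 994, 1003 };

	double ma = mean(a), mb = mean(b);
	double se = sqrt((var(a, ma) + var(b, mb)) / N);	/* pooled SE */
	double t = (mb - ma) / se;

	printf("mean diff %+.2f, t = %.3f -> %s at 95%%\n", mb - ma, t,
	       fabs(t) > T_CRIT_DF18 ? "significant" : "no proven difference");
	return 0;
}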


* pbzip2, a parallel version of bzip2, intended to represent a 
CPU-intensive, multithreaded application.
The file chosen for compression is 1GB in size, generated from 
/dev/urandom (this is not published, but I kept it, so if you need it 
for more tests please just ask), and all the I/O is done on a tmpfs 
volume in order to avoid I/O noise.

For collecting data I used this simple script, which runs the test 10 
times for each thread count (from 1 to 64):
http://xenbits.xen.org/people/attilio/jeremy-spinlock/pbzip2bench_script

Here is the raw data (please bear in mind this is time(1) output, thus 
the lower the better):
http://xenbits.xen.org/people/attilio/jeremy-spinlock/pbzip2-crude-datas/

And here are the data charted with the ministat tool, comparing all 4 
kernel configurations for every thread count:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/pbzip2-1.1.1-total.bench

As you can see, the patch doesn't really show a statistically meaningful 
difference for this workload.


* kernbench-0.50, run doing its I/O on a 10GB tmpfs volume (thus no 
actual disk I/O involved), with the following invocation:
./kernbench -n10 -s -c16 -M -f

(I had to do that because kernbench wasn't picking a good maximum 
thread count at all, thus I disabled the default maximum and forced 16 
threads.)

Here is the raw data (please bear in mind this is time(1) output, thus 
the lower the better):
http://xenbits.xen.org/people/attilio/jeremy-spinlock/kernbench-crude-datas/

Please note that kernbench already calculates the standard deviation 
for these. However, I also wanted a ministat summary in order to 
quickly display any possible difference, thus I just replicated each 
value 3 times (the minimum ministat requires) and charted them:
http://xenbits.xen.org/people/attilio/jeremy-spinlock/kernbench-0.50-total.bench

Again, there doesn't seem to be any statistically meaningful difference.

== Results
These tests point in the direction that Jeremy's rebased patches don't 
introduce a performance penalty at all, but also that we could likely 
consider removing the CONFIG_PARAVIRT_SPINLOCKS option, or turning it 
on by default and suggesting it be disabled only on very old CPUs 
(assuming a performance regression can be proven there).

If you have questions please let me know.

Thanks,
Attilio
