linux-kernel.vger.kernel.org archive mirror
From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
To: Avi Kivity <avi@redhat.com>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
	Ingo Molnar <mingo@kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Jeremy Fitzhardinge <jeremy@goop.org>,
	Greg Kroah-Hartman <gregkh@suse.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Marcelo Tosatti <mtosatti@redhat.com>, X86 <x86@kernel.org>,
	Gleb Natapov <gleb@redhat.com>, Ingo Molnar <mingo@redhat.com>,
	Attilio Rao <attilio.rao@citrix.com>,
	Virtualization <virtualization@lists.linux-foundation.org>,
	Xen Devel <xen-devel@lists.xensource.com>,
	linux-doc@vger.kernel.org, KVM <kvm@vger.kernel.org>,
	Andi Kleen <andi@firstfloor.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Stephan Diestelhorst <stephan.diestelhorst@amd.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Thomas Gleixner <tglx@linutronix.de>,
	"Nikunj A. Dadhania" <nikunj@linux.vnet.ibm.com>
Subject: Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks
Date: Wed, 16 May 2012 08:49:00 +0530	[thread overview]
Message-ID: <4FB31CA4.5070908@linux.vnet.ibm.com> (raw)
In-Reply-To: <4FB0014A.90604@linux.vnet.ibm.com>

On 05/14/2012 12:15 AM, Raghavendra K T wrote:
> On 05/07/2012 08:22 PM, Avi Kivity wrote:
>
> I could not come up with pv-flush results (also, Nikunj had clarified
> that the result was on non-PLE).
>
>> I'd like to see those numbers, then.
>>
>> Ingo, please hold on the kvm-specific patches, meanwhile.
>>
>
> 3 guests with 8GB RAM each: 1 used for kernbench
> (kernbench -f -H -M -o 20), the others for cpuhog (a shell script
> running hackbench in a while-true loop)
>
> 1x: no hogs
> 2x: 8 hogs in one guest
> 3x: 8 hogs each in two guests
>
> kernbench on PLE:
> Machine : IBM xSeries with Intel(R) Xeon(R) X7560 2.27GHz CPU with 32
> cores, with 8 online cpus and 4*64GB RAM.
>
> The average is taken over 4 iterations with 3 runs each (4*3=12), and
> the stddev is calculated over the means reported in each run.
>
>
> A): 8 vcpu guest
>
>              BASE                   BASE+patch              %improvement w.r.t
>              mean (sd)              mean (sd)               patched kernel time
> case 1*1x:   61.7075 (1.17872)      60.93 (1.475625)        1.27605
> case 1*2x:   107.2125 (1.3821349)   97.506675 (1.3461878)   9.95401
> case 1*3x:   144.3515 (1.8203927)   138.9525 (0.58309319)   3.8855
>
>
> B): 16 vcpu guest
>              BASE                   BASE+patch              %improvement w.r.t
>              mean (sd)              mean (sd)               patched kernel time
> case 2*1x:   70.524 (1.5941395)     69.68866 (1.9392529)    1.19867
> case 2*2x:   133.0738 (1.4558653)   124.8568 (1.4544986)    6.58114
> case 2*3x:   206.0094 (1.3437359)   181.4712 (2.9134116)    13.5218
>
> C): 32 vcpu guest
>              BASE                    BASE+patch              %improvement w.r.t
>              mean (sd)               mean (sd)               patched kernel time
> case 4*1x:   100.61046 (2.7603485)   85.48734 (2.6035035)    17.6905
>
> It seems that while we do not see any improvement in the low contention
> case, the benefit becomes evident with overcommit and large guests. I am
> continuing analysis with other benchmarks (now with pgbench, to check if
> it has acceptable improvement/degradation in the low contention case).
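
For clarity, the %improvement figures in the quoted kernbench tables match
(base - patch) / patch * 100, i.e. they are expressed relative to the
patched kernel's (lower-is-better) run time. A quick sketch of that
arithmetic in plain Python, reproducing one row from the table:

```python
def improvement_wrt_patched(base_mean, patched_mean):
    """Percent improvement of BASE+patch over BASE, expressed relative
    to the patched kernel's (lower-is-better) mean run time."""
    return (base_mean - patched_mean) / patched_mean * 100.0

# Case 4*1x from the 32-vcpu kernbench table quoted above:
print(round(improvement_wrt_patched(100.61046, 85.48734), 4))  # 17.6905
```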

Here are the results for pgbench and sysbench. These results are on a 
single guest.

Machine : IBM xSeries with Intel(R) Xeon(R) X7560 2.27GHz CPU with 32 
cores, with 8 online cpus and 4*64GB RAM.

Guest config: 8GB RAM

pgbench
==========

   unit=tps (higher is better)
   pgbench based on pgsql 9.2-dev:
	http://www.postgresql.org/ftp/snapshot/dev/ (link given by Attilio)

   tool used to collect benchmark data: 
git://git.postgresql.org/git/pgbench-tools.git
   config: MAX_WORKER=16 SCALE=32 run for NRCLIENTS = 1, 8, 64

Average taken over 10 iterations.

      8 vcpu guest

      N   base    patch   %improvement
      1   5271    5235    -0.687679
      8   37953   38202    0.651798
      64  37546   37774    0.60359


      16 vcpu guest

      N   base    patch   %improvement
      1   5229    5239     0.190876
      8   34908   36048    3.16245
      64  51796   52852    1.99803
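
The pgbench %improvement column follows the same convention as the
kernbench numbers: it is the change relative to the patched run, here in
tps (higher is better). A sketch in plain Python, reproducing the first
row of the 8-vcpu table:

```python
def improvement_tps(base_tps, patched_tps):
    """Percent change in tps (higher is better), relative to the
    patched run's throughput."""
    return (patched_tps - base_tps) / patched_tps * 100.0

# N=1 row of the 8-vcpu pgbench table above:
print(round(improvement_tps(5271, 5235), 6))  # -0.687679
```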

sysbench
==========
sysbench 0.4.12 configured for the postgres driver, run with
sysbench --num-threads=8/16/32 --max-requests=100000 --test=oltp 
--oltp-table-size=500000 --db-driver=pgsql --oltp-read-only run
and analysed with ministat, where
x patch
+ base

8 vcpu guest
---------------
1) num_threads = 8
     N           Min           Max        Median           Avg        Stddev
x  10       20.7805         21.55       20.9667      21.03502    0.22682186
+  10        21.025       22.3122      21.29535      21.41793    0.39542349
Difference at 98.0% confidence
	1.82035% +/- 1.74892%

2) num_threads = 16
     N           Min           Max        Median           Avg        Stddev
x  10       20.8786       21.3967       21.1566      21.14441    0.15490983
+  10       21.3992       21.9437      21.46235      21.58724     0.2089425
Difference at 98.0% confidence
	2.09431% +/- 0.992732%

3) num_threads = 32
     N           Min           Max        Median           Avg        Stddev
x  10       21.1329       21.3726      21.33415       21.2893    0.08324195
+  10       21.5692       21.8966       21.6441      21.65679   0.093430003
Difference at 98.0% confidence
	1.72617% +/- 0.474343%
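
The ministat "Difference" lines above give the change of the '+' (base)
dataset's mean relative to the 'x' (patch) dataset's mean, as a
percentage (consistent with the averages in the tables). A sketch of that
arithmetic in plain Python, reproducing the num_threads=8 case:

```python
def ministat_diff_pct(avg_x, avg_plus):
    """Relative difference of the '+' dataset's mean versus the 'x'
    dataset's mean, as a percentage of the 'x' mean."""
    return (avg_plus - avg_x) / avg_x * 100.0

# num_threads=8, 8-vcpu guest: x = patch (21.03502), + = base (21.41793)
print(round(ministat_diff_pct(21.03502, 21.41793), 5))  # 1.82035
```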


16 vcpu guest
---------------
1) num_threads = 8
     N           Min           Max        Median           Avg        Stddev
x  10       23.5314       25.6118      24.76145      24.64517    0.74856264
+  10       22.2675       26.6204       22.9131      23.50554      1.345386
No difference proven at 98.0% confidence

2) num_threads = 16
     N           Min           Max        Median           Avg        Stddev
x  10       12.0095       12.2305      12.15575      12.13926   0.070872722
+  10        11.413       11.6986       11.4817        11.493   0.080007819
Difference at 98.0% confidence
	-5.32372% +/- 0.710561%

3) num_threads = 32
     N           Min           Max        Median           Avg        Stddev
x  10       12.1378       12.3567      12.21675      12.22703     0.0670695
+  10        11.573       11.7438       11.6306      11.64905   0.062780221
Difference at 98.0% confidence
	-4.72707% +/- 0.606349%


32 vcpu guest
---------------
1) num_threads = 8
     N           Min           Max        Median           Avg        Stddev
x  10       30.5602       41.4756      37.45155      36.43752     3.5490215
+  10       21.1183       49.2599      22.60845      29.61119     11.269393
No difference proven at 98.0% confidence

2) num_threads = 16
     N           Min           Max        Median           Avg        Stddev
x  10       12.2556       12.9023       12.4968      12.55764    0.25330459
+  10       11.7627       11.9959       11.8419      11.86256   0.088563903
Difference at 98.0% confidence
	-5.53512% +/- 1.72448%

3) num_threads = 32
     N           Min           Max        Median           Avg        Stddev
x  10       16.8751       17.0756      16.97335      16.96765   0.063197191
+  10       21.3763       21.8111       21.6799      21.66438    0.13059888
Difference at 98.0% confidence
	27.6805% +/- 0.690056%


To summarise: with a 32-vcpu guest and nr_threads=32 we get around 27% 
improvement. In very low/undercommitted systems we may see a very small 
improvement, or a small degradation that is acceptable given the gains 
under contention.

(IMO, with more overcommit/contention we can get more than 15% on these 
benchmarks, and we do.)

  Please let me know if you have any suggestions to try.
(My PLE machine lease has currently expired; it may take some time to 
get it back :()

  Ingo, Avi ?


>
> Avi,
> Can the patch series go ahead for inclusion into the tree, for the
> following reasons:
>
> The patch series brings fairness with ticketlocks (and hence
> predictability: during contention, a vcpu trying to acquire a lock is
> sure to get its turn in less than the total number of vcpus contending
> for the lock), which is very much desired irrespective of its low
> benefit/degradation (if any) in low contention scenarios.
>
> Of course, ticketlocks had the undesirable effect of exacerbating the
> LHP (lock holder preemption) problem, and the series addresses that with
> improved scheduling, and by sleeping instead of burning cpu time.
>
> Finally, a less famous point: it brings almost PLE-equivalent capability
> to all non-PLE hardware (TBH I always preferred my experiment kernel to
> be compiled in my pv guest, which saves more than 30 minutes of time for
> each run).
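
The fairness bound quoted above (a contending vcpu waits at most
nr_vcpus - 1 lock hand-offs) follows directly from the ticket discipline.
A toy model in plain Python (an illustration only, not the kernel
implementation) makes the point:

```python
from collections import deque

def ticket_lock_order(arrivals):
    """Toy ticket-lock model: each contender atomically takes the next
    ticket on arrival, and the lock is granted strictly in ticket order.
    Service order therefore equals arrival order (FIFO): a vcpu that
    arrives behind k others waits exactly k hand-offs, bounded by
    nr_contenders - 1."""
    next_ticket = 0
    waiters = deque()
    for vcpu in arrivals:
        waiters.append((next_ticket, vcpu))  # models fetch-and-add of the ticket counter
        next_ticket += 1
    return [vcpu for _, vcpu in sorted(waiters)]

order = ticket_lock_order(["vcpu3", "vcpu0", "vcpu7", "vcpu1"])
print(order)  # ['vcpu3', 'vcpu0', 'vcpu7', 'vcpu1'] -- FIFO, worst wait = 3 hand-offs
```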



Thread overview: 53+ messages
2012-05-02 10:06 [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Raghavendra K T
2012-05-02 10:06 ` [PATCH RFC V8 1/17] x86/spinlock: Replace pv spinlocks with pv ticketlocks Raghavendra K T
2012-05-02 10:06 ` [PATCH RFC V8 2/17] x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks Raghavendra K T
2012-05-02 10:06 ` [PATCH RFC V8 3/17] x86/ticketlock: Collapse a layer of functions Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 4/17] xen: Defer spinlock setup until boot CPU setup Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 5/17] xen/pvticketlock: Xen implementation for PV ticket locks Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 6/17] xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 7/17] x86/pvticketlock: Use callee-save for lock_spinning Raghavendra K T
2012-05-02 10:07 ` [PATCH RFC V8 8/17] x86/pvticketlock: When paravirtualizing ticket locks, increment by 2 Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 9/17] Split out rate limiting from jump_label.h Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 10/17] x86/ticketlock: Add slowpath logic Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 11/17] xen/pvticketlock: Allow interrupts to be enabled while blocking Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 12/17] xen: Enable PV ticketlocks on HVM Xen Raghavendra K T
2012-05-02 10:08 ` [PATCH RFC V8 13/17] kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 14/17] kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 15/17] kvm guest : Add configuration support to enable debug information for KVM Guests Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 16/17] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor Raghavendra K T
2012-05-02 10:09 ` [PATCH RFC V8 17/17] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock Raghavendra K T
2012-05-30 11:54   ` Jan Kiszka
2012-05-30 13:44     ` Raghavendra K T
2012-05-07  8:29 ` [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks Ingo Molnar
2012-05-07  8:32   ` Avi Kivity
2012-05-07 10:58     ` Raghavendra K T
2012-05-07 12:06       ` Avi Kivity
2012-05-07 13:20         ` Raghavendra K T
2012-05-07 13:22           ` Avi Kivity
2012-05-07 13:38             ` Raghavendra K T
2012-05-07 13:46               ` Srivatsa Vaddagiri
2012-05-07 13:49                 ` Avi Kivity
2012-05-07 13:53                   ` Raghavendra K T
2012-05-07 13:58                     ` Avi Kivity
2012-05-07 14:47                       ` Raghavendra K T
2012-05-07 14:52                         ` Avi Kivity
2012-05-07 14:54                           ` Avi Kivity
2012-05-07 17:25                           ` Ingo Molnar
2012-05-07 20:42                             ` Thomas Gleixner
2012-05-08  6:46                               ` Nikunj A Dadhania
2012-05-15 11:26                             ` [Xen-devel] " Jan Beulich
2012-05-08  5:25                           ` Raghavendra K T
2012-05-13 18:45                           ` Raghavendra K T
2012-05-14  4:57                             ` Nikunj A Dadhania
2012-05-14  9:01                               ` Raghavendra K T
2012-05-14  7:38                             ` Jeremy Fitzhardinge
2012-05-14  8:11                               ` Raghavendra K T
2012-05-16  3:19                             ` Raghavendra K T [this message]
2012-05-30 11:26                               ` Raghavendra K T
2012-06-14 12:21                                 ` Raghavendra K T
2012-05-07 13:55                   ` Srivatsa Vaddagiri
2012-05-07 23:15                   ` Jeremy Fitzhardinge
2012-05-08  1:13                     ` Raghavendra K T
2012-05-08  9:08                     ` Avi Kivity
2012-05-07 13:56                 ` Raghavendra K T
2012-05-13 17:59         ` Raghavendra K T
