xen-devel.lists.xenproject.org archive mirror
From: Ingo Molnar <mingo@kernel.org>
To: Juergen Gross <jgross@suse.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	xen-devel@lists.xensource.com, kvm@vger.kernel.org,
	konrad.wilk@oracle.com, gleb@kernel.org, x86@kernel.org,
	akataria@vmware.com, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, chrisw@sous-sol.org,
	mingo@redhat.com, david.vrabel@citrix.com, hpa@zytor.com,
	pbonzini@redhat.com, tglx@linutronix.de,
	boris.ostrovsky@oracle.com
Subject: Re: [PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Date: Sun, 17 May 2015 07:30:36 +0200	[thread overview]
Message-ID: <20150517053036.GB16607@gmail.com> (raw)
In-Reply-To: <554A0132.3070802@suse.com>


* Juergen Gross <jgross@suse.com> wrote:

> On 05/05/2015 07:21 PM, Jeremy Fitzhardinge wrote:
> >On 05/03/2015 10:55 PM, Juergen Gross wrote:
> >>I did a small measurement of the pure locking functions on bare metal
> >>without and with my patches.
> >>
> >>spin_lock() for the first time (lock and code not in cache) dropped from
> >>about 600 to 500 cycles.
> >>
> >>spin_unlock() for first time dropped from 145 to 87 cycles.
> >>
> >>spin_lock() in a loop dropped from 48 to 45 cycles.
> >>
> >>spin_unlock() in the same loop dropped from 24 to 22 cycles.
> >
> >Did you isolate icache hot/cold from dcache hot/cold? It seems to me the
> >main difference will be whether the branch predictor is warmed up rather
> >than whether the lock itself is in dcache, but it's much more likely that
> >the lock code is in icache if the code is lock intensive, making the cold
> >case moot. But that's pure speculation.
> >
> >Could you see any differences in workloads beyond microbenchmarks?
> >
> >Not that it's my call at all, but I think we'd need to see some concrete
> >improvements in real workloads before adding the complexity of more pvops.
> 
> I did another test on a larger machine:
> 
> 25 kernel builds (time make -j 32) on a 32 core machine. Before each
> build "make clean" was called; the first result after boot was omitted
> to avoid disk cache warmup effects.
> 
> System time without my patches: 861.5664 +/- 3.3665 s
>                with my patches: 852.2269 +/- 3.6629 s

So what does the profile look like in the guest, before/after the PV 
spinlock patches? I'm a bit surprised to see so much spinlock 
overhead.

Thanks,

	Ingo


Thread overview: 17+ messages
2015-04-30 10:53 [PATCH 0/6] x86: reduce paravirtualized spinlock overhead Juergen Gross
2015-04-30 10:53 ` [PATCH 1/6] x86: use macro instead of "0" for setting TICKET_SLOWPATH_FLAG Juergen Gross
2015-04-30 10:53 ` [PATCH 2/6] x86: move decision about clearing slowpath flag into arch_spin_lock() Juergen Gross
2015-04-30 10:54 ` [PATCH 3/6] x86: introduce new pvops function clear_slowpath Juergen Gross
2015-04-30 10:54 ` [PATCH 4/6] x86: introduce new pvops function spin_unlock Juergen Gross
2015-04-30 10:54 ` [PATCH 5/6] x86: switch config from UNINLINE_SPIN_UNLOCK to INLINE_SPIN_UNLOCK Juergen Gross
2015-04-30 10:54 ` [PATCH 6/6] x86: remove no longer needed paravirt_ticketlocks_enabled Juergen Gross
2015-04-30 16:39 ` [PATCH 0/6] x86: reduce paravirtualized spinlock overhead Jeremy Fitzhardinge
2015-05-04  5:55   ` Juergen Gross
2015-05-05 17:21     ` Jeremy Fitzhardinge
2015-05-06 11:55       ` Juergen Gross
2015-05-17  5:30         ` Ingo Molnar [this message]
2015-05-18  8:11           ` Juergen Gross
2015-05-15 12:16 ` Juergen Gross
2015-06-08  4:09 ` Juergen Gross
2015-06-16 14:37 ` Juergen Gross
2015-06-16 15:18   ` Thomas Gleixner
