From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <waiman.long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-arch@vger.kernel.org, Gleb Natapov <gleb@redhat.com>,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Rik van Riel <riel@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
	Scott J Norton <scott.norton@hp.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>,
	Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky>
Subject: Re: [PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
Date: Wed, 19 Mar 2014 11:25:29 -0400	[thread overview]
Message-ID: <20140319152529.GA16565@phenom.dumpdata.com> (raw)
In-Reply-To: <53290AEF.8000802@hp.com>

On Tue, Mar 18, 2014 at 11:11:43PM -0400, Waiman Long wrote:
> On 03/17/2014 03:10 PM, Konrad Rzeszutek Wilk wrote:
> >On Mon, Mar 17, 2014 at 01:44:34PM -0400, Waiman Long wrote:
> >>On 03/14/2014 04:30 AM, Peter Zijlstra wrote:
> >>>On Thu, Mar 13, 2014 at 04:05:19PM -0400, Waiman Long wrote:
> >>>>On 03/13/2014 11:15 AM, Peter Zijlstra wrote:
> >>>>>On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote:
> >>>>>>+static inline void arch_spin_lock(struct qspinlock *lock)
> >>>>>>+{
> >>>>>>+	if (static_key_false(&paravirt_unfairlocks_enabled))
> >>>>>>+		queue_spin_lock_unfair(lock);
> >>>>>>+	else
> >>>>>>+		queue_spin_lock(lock);
> >>>>>>+}
> >>>>>So I would have expected something like:
> >>>>>
> >>>>>	if (static_key_false(&paravirt_spinlock)) {
> >>>>>		while (!queue_spin_trylock(lock))
> >>>>>			cpu_relax();
> >>>>>		return;
> >>>>>	}
> >>>>>
> >>>>>At the top of queue_spin_lock_slowpath().
> >>>>I don't like the idea of constantly spinning on the lock. That can cause all
> >>>>sorts of performance issues.
> >>>It's bloody virt; _that_ is a performance issue to begin with.
> >>>
> >>>Anybody half sane stops using virt (esp. if they care about
> >>>performance).
> >>>
> >>>>My version of the unfair lock tries to grab the
> >>>>lock ignoring whether there are others waiting in the queue or not. So instead
> >>>>of doing a cmpxchg of the whole 32-bit word, I just do a cmpxchg of the
> >>>>lock byte in the unfair version. A CPU has only one chance to steal the
> >>>>lock. If it can't, it will be lined up in the queue just like the fair
> >>>>version. It is not as unfair as the other unfair locking schemes that spin
> >>>>on the lock repetitively. So lock starvation should be less of a problem.
> >>>>
> >>>>On the other hand, it may not perform as well as the other unfair locking
> >>>>schemes. It is a compromise to provide some lock unfairness without
> >>>>sacrificing the good cacheline behavior of the queue spinlock.
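
(To make the "one chance to steal" path described above concrete, here is a
minimal user-space sketch of the idea, using C11 atomics rather than the
kernel's cmpxchg(). The struct layout and the queue_spin_trylock_unfair()
name are illustrative only, not the actual patch.)

	#include <stdatomic.h>
	#include <stdbool.h>

	/*
	 * Illustrative stand-in for the qspinlock: only the lock byte is
	 * touched by the unfair fast path; the pending/tail state in the
	 * rest of the 32-bit word is left alone.
	 */
	struct qspinlock_model {
		atomic_uchar locked;		/* 0 = free, 1 = held */
		/* ... pending/tail bits of the 32-bit word not shown ... */
	};

	/*
	 * One-shot steal: a single cmpxchg on the lock byte, ignoring any
	 * queued waiters.  On failure the caller does not retry; it falls
	 * back to the normal fair queueing path, which is what keeps the
	 * starvation bounded.
	 */
	static bool queue_spin_trylock_unfair(struct qspinlock_model *lock)
	{
		unsigned char old = 0;

		return atomic_compare_exchange_strong(&lock->locked, &old, 1);
	}
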
> >>>But but but,.. any kind of queueing gets you into a world of hurt with
> >>>virt.
> >>>
> >>>The simple test-and-set lock (as per the above) still sucks due to lock
> >>>holder preemption, but at least the suckage doesn't queue. Because with
> >>>queueing you not only have to worry about the lock holder getting
> >>>preempted, but also about the waiter(s).
> >>>
> >>>Take the situation of 3 (v)CPUs where cpu0 holds the lock but is
> >>>preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after
> >>>which cpu0 gets back online.
> >>>
> >>>The simple test-and-set lock will now let cpu2 acquire. Your queue
> >>>however will just sit there spinning, waiting for cpu1 to come back from
> >>>holiday.
> >>>
> >>>I think you're way over-engineering this. Just do the simple
> >>>test-and-set lock for virt && !paravirt (as I think Paolo Bonzini
> >>>suggested RHEL6 already does).
> >>The PV ticketlock code was designed to handle lock holder preemption
> >>by redirecting the CPU resources of a preempted guest to another guest
> >>that can make better use of them, and then getting the preempted CPU
> >>back sooner.
> >>
> >>Using a simple test-and-set lock will not allow us to enable this PV
> >>spinlock functionality as there is no structure to decide who does
> >>what. I can extend the current unfair lock code to allow those
> >And what would be needed to do 'decide who does what'?
> >
> >
> 
> Sorry for not being very clear in my previous mail. The current PV code
> will halt the waiting CPUs if the lock isn't acquired within a certain
> time. When the lock holder comes back, it will wake up the next CPU in
> the ticket lock queue. With a simple test-and-set lock, there is no
> structure to decide which one to wake up. So you still need to have
> some sort of queue to do that; you can't just wake up all of them
> and let them fight for the lock.
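
(A rough user-space model of the spin/halt/wake handshake described above,
assuming C11 atomics. hv_halt()/hv_kick() are no-op stand-ins for the real
halt and kick hypercalls, SPIN_THRESHOLD is an illustrative constant, and
waiting_ticket[] plays the role of the per-CPU bookkeeping that tells the
releaser whom to wake; none of these names come from the actual patches.)

	#include <stdatomic.h>

	#define MAX_CPUS	64
	#define SPIN_THRESHOLD	(1 << 15)	/* illustrative; the real value is tuned */

	/* No-op stand-ins: a real guest would block in hv_halt() until kicked. */
	static void hv_halt(int cpu) { (void)cpu; }
	static void hv_kick(int cpu) { (void)cpu; }

	struct pv_ticketlock_model {
		atomic_uint head;	/* ticket currently being served */
		atomic_uint tail;	/* next ticket to hand out */
	};

	/* Which ticket each vCPU is spinning on; ~0u means "not waiting". */
	static atomic_uint waiting_ticket[MAX_CPUS];

	static void pv_ticket_lock(struct pv_ticketlock_model *lock, int cpu)
	{
		unsigned int me = atomic_fetch_add(&lock->tail, 1);

		atomic_store(&waiting_ticket[cpu], me);
		for (;;) {
			/* Spin for a bounded time first ... */
			for (int i = 0; i < SPIN_THRESHOLD; i++)
				if (atomic_load(&lock->head) == me)
					goto got_it;
			/* ... then give the CPU back to the hypervisor. */
			hv_halt(cpu);
		}
	got_it:
		atomic_store(&waiting_ticket[cpu], ~0u);
	}

	static void pv_ticket_unlock(struct pv_ticketlock_model *lock, int nr_cpus)
	{
		unsigned int next = atomic_fetch_add(&lock->head, 1) + 1;

		/*
		 * Wake exactly the CPU holding the next ticket: this lookup is
		 * the "structure to decide which one to wake up" that a plain
		 * test-and-set lock does not have.  A kick to a CPU that is
		 * still spinning (not halted) is simply ignored in this model.
		 */
		for (int cpu = 0; cpu < nr_cpus; cpu++)
			if (atomic_load(&waiting_ticket[cpu]) == next)
				hv_kick(cpu);
	}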

Why not? That is what bytelocks did, and it worked OK in a
virtualized environment. The reason it was OK is that the
hypervisor decided which VCPU to schedule, and that one would get the
lock. Granted, it did have a per-cpu what-lock-am-I-waiting-for
structure to aid in this.

In some form, the hypervisor serializes who is going to get
the lock, as it ultimately decides which of the kicked VCPUs
wakes up; and since the lock is not FIFO, it doesn't
matter which one gets it.

With the per-cpu 'what-lock-am-I-waiting-for' structure you can
also round-robin which vCPU you kick, to keep it fair.
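
(Again only a sketch, under the same C11-model assumptions as above: a
test-and-set byte lock plus a per-cpu "what lock am I waiting for" slot that
each waiter fills in before halting. The release path kicks every registered
waiter and lets the hypervisor pick the winner. Names such as
bytelock_unlock_and_kick() are made up for the illustration and are not the
old Xen code.)

	#include <stdatomic.h>
	#include <stddef.h>

	#define MAX_CPUS 64

	/* No-op stand-in for the kick hypercall. */
	static void hv_kick(int cpu) { (void)cpu; }

	struct bytelock_model {
		atomic_uchar locked;	/* plain test-and-set byte, no FIFO ordering */
	};

	/*
	 * Per-CPU "what lock am I waiting for" slot, in the spirit of the old
	 * Xen PV bytelock: a waiter stores the lock pointer here before
	 * halting, and clears it once it gets the lock.
	 */
	static struct bytelock_model *_Atomic waiting_on[MAX_CPUS];

	static void bytelock_unlock_and_kick(struct bytelock_model *lock, int nr_cpus)
	{
		atomic_store(&lock->locked, 0);

		/*
		 * Kick every vCPU registered as waiting on this lock.  Which
		 * one wins the subsequent test-and-set is up to the hypervisor
		 * scheduler; since the lock is not FIFO, any winner is fine.
		 * A fairer variant remembers where the last scan stopped and
		 * starts from there next time (round-robin).
		 */
		for (int cpu = 0; cpu < nr_cpus; cpu++)
			if (atomic_load(&waiting_on[cpu]) == lock)
				hv_kick(cpu);
	}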

You might want to take a look at the PV bytelock that existed
in the Xen code prior to PV ticketlock (so 3.10) to see how
it did that.


Thread overview: 59+ messages
2014-03-12 18:54 [PATCH v6 00/11] qspinlock: a 4-byte queue spinlock with PV support Waiman Long
2014-03-12 18:54 ` [PATCH v6 01/11] qspinlock: A generic 4-byte queue spinlock implementation Waiman Long
2014-03-12 18:54 ` [PATCH v6 02/11] qspinlock, x86: Enable x86-64 to use queue spinlock Waiman Long
2014-03-12 18:54 ` [PATCH v6 03/11] qspinlock: More optimized code for smaller NR_CPUS Waiman Long
2014-03-12 18:54 ` [PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks Waiman Long
2014-03-12 19:08   ` Waiman Long
2014-03-13 13:57     ` Peter Zijlstra
2014-03-17 17:23       ` Waiman Long
2014-03-12 18:54 ` [PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest Waiman Long
2014-03-13 10:54   ` David Vrabel
2014-03-13 13:16     ` Paolo Bonzini
2014-03-17 19:05       ` Konrad Rzeszutek Wilk
2014-03-18  8:14         ` Paolo Bonzini
2014-03-19  3:15           ` Waiman Long
2014-03-19 10:07             ` Paolo Bonzini
2014-03-19 16:58               ` Waiman Long
2014-03-19 17:08                 ` Paolo Bonzini
2014-03-13 19:03     ` Waiman Long
2014-03-13 15:15   ` Peter Zijlstra
2014-03-13 20:05     ` Waiman Long
2014-03-14  8:30       ` Peter Zijlstra
2014-03-14  8:48           ` Paolo Bonzini
2014-03-17 17:44         ` Waiman Long
2014-03-17 18:54           ` Peter Zijlstra
2014-03-18  8:16             ` Paolo Bonzini
2014-03-19  3:08             ` Waiman Long
2014-03-17 19:10           ` Konrad Rzeszutek Wilk
2014-03-19  3:11             ` Waiman Long
2014-03-19 15:25               ` Konrad Rzeszutek Wilk [this message]
2014-03-12 18:54 ` [PATCH v6 06/11] pvqspinlock, x86: Allow unfair queue spinlock in a KVM guest Waiman Long
2014-03-12 18:54 ` [PATCH v6 07/11] pvqspinlock, x86: Allow unfair queue spinlock in a XEN guest Waiman Long
2014-03-12 18:54 ` [PATCH v6 08/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled Waiman Long
2014-03-12 18:54 ` [PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support Waiman Long
2014-03-13 11:21   ` David Vrabel
2014-03-13 13:57     ` Paolo Bonzini
2014-03-13 13:57       ` Paolo Bonzini
2014-03-13 19:49       ` Waiman Long
2014-03-14  9:44         ` Paolo Bonzini
2014-03-13 19:05     ` Waiman Long
2014-03-12 18:54 ` [PATCH RFC v6 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM Waiman Long
2014-03-13 13:59   ` Paolo Bonzini
2014-03-13 19:13     ` Waiman Long
2014-03-14  8:42       ` Paolo Bonzini
2014-03-17 17:47         ` Waiman Long
2014-03-18  8:18           ` Paolo Bonzini
2014-03-13 15:25   ` Peter Zijlstra
2014-03-13 20:09     ` Waiman Long
2014-03-12 18:54 ` [PATCH RFC v6 11/11] pvqspinlock, x86: Enable qspinlock PV support for XEN Waiman Long
