From: "Radim Krčmář" <rkrcmar@redhat.com>
To: Waiman Long <Waiman.Long@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Peter Zijlstra <peterz@infradead.org>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Paolo Bonzini <paolo.bonzini@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Rik van Riel <riel@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Oleg Nesterov <oleg@redhat.com>, Gleb Natapov <gleb@redhat.com>,
	Scott J Norton <scott.norton@hp.com>,
	Chegu Vinod <chegu_vinod@hp.com>
Subject: Re: [PATCH v10 03/19] qspinlock: Add pending bit
Date: Mon, 12 May 2014 17:22:08 +0200
Message-ID: <20140512152208.GA12309@potion.brq.redhat.com>
In-Reply-To: <1399474907-22206-4-git-send-email-Waiman.Long@hp.com>

2014-05-07 11:01-0400, Waiman Long:
> From: Peter Zijlstra <peterz@infradead.org>
> 
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.

I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets descheduled.
    - we have PLE and the lock holder isn't running [1]
    - the hypervisor randomly preempts us
3) Lock holder unlocks while the pending VCPU is waiting in queue.
4) Subsequent lockers will see a free lock with the pending bit set and
   will loop in trylock's 'for (;;)' (modeled in the sketch below)
    - the worst case is lock starvation [2]
    - PLE can save us from wasting a whole timeslice
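
To make step 4 concrete, here is a minimal user-space model of that
loop (the bit values and the loop shape are my reconstruction of the
v10 code, not a quote from the patch):

#include <stdatomic.h>

#define _Q_LOCKED_VAL	0x01	/* lock bit, assumed layout */
#define _Q_PENDING_VAL	0x100	/* pending bit, assumed layout */

/* A locker that observes a free lock with the pending bit set keeps
 * re-reading the lock word, assuming the pending VCPU will take the
 * lock soon.  If that VCPU is descheduled, this spins indefinitely. */
static void pending_wait_model(atomic_int *lock)
{
	int val = atomic_load(lock);

	while (val == _Q_PENDING_VAL)	/* free lock, stale pending bit */
		val = atomic_load(lock);
}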

A retry threshold is the easiest solution, regardless of its ugliness [4].

Another minor design flaw is that the formerly-first VCPU gets appended
to the tail when it decides to queue [3];
is the performance gain worth it?

Thanks.


---
1: Pause Loop Exiting is almost certain to vmexit in that case: the
   window defaults to 4096 TSC cycles on KVM, and the pending loop runs
   longer than that whenever one iteration costs more than 4 cycles
   (4096/PSPIN_THRESHOLD).
   We would also vmexit if the critical section were longer than 4k
   cycles.
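   Spelled out (PSPIN_THRESHOLD = 4096/4 = 1024 follows from the figure
   above): a 1024-iteration pending loop outlasts one 4096-cycle PLE
   window as soon as a single iteration, which includes at least one
   cpu_relax(), takes more than 4 TSC cycles.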

2: In this example, VCPUs 1 and 2 use the lock while 3 never gets it.
   VCPU:  1      2      3
        lock()                   // we are the holder
              pend()             // we have the pending bit
              vmexit             // while in the PSPIN_THRESHOLD loop
        unlock()
                     vmentry
                     SPINNING    // in trylock's 'for (;;)' loop
                     vmexit
              vmentry
              lock()
        pend()
        vmexit
              unlock()
                     vmentry
                     SPINNING
                     vmexit
        vmentry
        --- loop ---

   The window is (should be) too small for this to happen on bare metal.

3: The pending VCPU was first in line, but when it decides to queue, it
   must go to the tail.

4:
The idea is to prevent unfairness by queueing after a while of useless
looping.  The magic value should be set a bit above the time it takes
an active pending-bit holder to go through the loop; 4 looks like enough.
We can use either pv_qspinlock_enabled() or cpu_has_hypervisor.
I presume that we never want this to happen in a VM and that we won't
have pv_qspinlock_enabled() without cpu_has_hypervisor.

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 37b5c7f..cd45c27 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -573,7 +573,7 @@ static __always_inline int get_qlock(struct qspinlock *lock)
 static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 {
 	u32 old, new, val = *pval;
-	int retry = 1;
+	int retry = 0;
 
 	/*
 	 * trylock || pending
@@ -595,9 +595,9 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 			 * a while to see if that either bit will be cleared.
 			 * If that is no change, we return and be queued.
 			 */
-			if (!retry)
+			if (retry)
 				return 0;
-			retry--;
+			retry++;
 			cpu_relax();
 			cpu_relax();
 			*pval = val = atomic_read(&lock->val);
@@ -608,7 +608,11 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 			 * Assuming that the pending bit holder is going to
 			 * set the lock bit and clear the pending bit soon,
 			 * it is better to wait than to exit at this point.
+			 * Our assumption does not hold on hypervisors, where
+			 * the pending bit holder doesn't have to be running.
 			 */
+			if (cpu_has_hypervisor && ++retry > MAGIC)
+				return 0;
 			cpu_relax();
 			*pval = val = atomic_read(&lock->val);
 			continue;
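
For completeness, the bounded version from the diff above as a
standalone user-space sketch (MAGIC, the bit value and the hypervisor
flag are stand-ins, not the kernel's symbols):

#include <stdatomic.h>
#include <stdbool.h>

#define _Q_PENDING_VAL	0x100	/* pending bit, assumed layout */
#define MAGIC		4	/* retry cap, see note [4] */

/* Re-read the lock word while only the pending bit is set.  Under a
 * hypervisor, give up after MAGIC re-reads and tell the caller to
 * queue, because the pending-bit holder may not be running at all. */
static bool pending_wait_bounded(atomic_int *lock, bool has_hypervisor)
{
	int retry = 0;
	int val = atomic_load(lock);

	while (val == _Q_PENDING_VAL) {
		if (has_hypervisor && ++retry > MAGIC)
			return false;		/* stop spinning; go queue */
		val = atomic_load(lock);
	}
	return true;				/* holder made progress */
}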
