From: Waiman Long <waiman.long@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Rik van Riel <riel@redhat.com>, Arnd Bergmann <arnd@arndb.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Daniel J Blueman <daniel@numascale.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>,
	Thomas Gleixner <tglx@linutro>
Subject: Re: [PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
Date: Thu, 27 Feb 2014 15:42:19 -0500	[thread overview]
Message-ID: <530FA32B.8010202@hp.com> (raw)
In-Reply-To: <20140226162057.GW6835@laptop.programming.kicks-ass.net>

On 02/26/2014 11:20 AM, Peter Zijlstra wrote:
> You don't happen to have a proper state diagram for this thing do you?
>
> I suppose I'm going to have to make one; this is all getting a bit
> unwieldy, and those xchg() + fixup things are hard to read.

I don't have a state diagram on hand, but I will add more comments to 
describe the 4 possible cases and how to handle them.

>
> On Wed, Feb 26, 2014 at 10:14:23AM -0500, Waiman Long wrote:
>> +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
>> +{
>> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
>> +	u16		     old;
>> +
>> +	/*
>> +	 * Fall into the quick spinning code path only if no one is waiting
>> +	 * or the lock is available.
>> +	 */
>> +	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
>> +		     (qsval != _QSPINLOCK_WAITING)))
>> +		return 0;
>> +
>> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
>> +
>> +	if (old == 0) {
>> +		/*
>> +		 * Got the lock, can clear the waiting bit now
>> +		 */
>> +		smp_u8_store_release(&qlock->wait, 0);
>
> So we just did an atomic op, and now you're trying to optimize this
> write. Why do you need a whole byte for that?
>
> Surely a cmpxchg loop with the right atomic op can't be _that_ much
> slower? Its far more readable and likely avoids that steal fail below as
> well.

At low contention levels, atomic operations that require a lock prefix 
are the major contributor to the total execution time. I have seen 
estimates online that a lock-prefixed instruction can easily take 50X 
longer than a regular instruction that can be pipelined. That is why I 
try to use as few lock-prefixed instructions as possible. If I have to 
do an atomic cmpxchg, it probably won't be faster than the regular 
qspinlock slowpath.

Given that speed at low contention level, which is the common case, is 
important to get this patch accepted, I have to do what I can to make it 
run as fast as possible in this 2-contending-task case.

>> +		return 1;
>> +	} else if (old == _QSPINLOCK_LOCKED) {
>> +try_again:
>> +		/*
>> +		 * Wait until the lock byte is cleared to get the lock
>> +		 */
>> +		do {
>> +			cpu_relax();
>> +		} while (ACCESS_ONCE(qlock->lock));
>> +		/*
>> +		 * Set the lock bit & clear the waiting bit
>> +		 */
>> +		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
>> +			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
>> +			return 1;
>> +		/*
>> +		 * Someone has stolen the lock, so wait again
>> +		 */
>> +		goto try_again;
> That's just a fail.. steals should not ever be allowed. It's a fair lock
> after all.

The code is unfair, but this unfairness helps it run faster than the 
ticket spinlock in this particular case. The regular qspinlock slowpath 
remains fair; a little bit of unfairness in this particular case helps 
its speed.

>> +	} else if (old == _QSPINLOCK_WAITING) {
>> +		/*
>> +		 * Another task is already waiting while it steals the lock.
>> +		 * A bit of unfairness here won't change the big picture.
>> +		 * So just take the lock and return.
>> +		 */
>> +		return 1;
>> +	}
>> +	/*
>> +	 * Nothing need to be done if the old value is
>> +	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
>> +	 */
>> +	return 0;
>> +}
>
>
>
>> @@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>>   		return;
>>   	}
>>
>> +#ifdef queue_code_xchg
>> +	prev_qcode = queue_code_xchg(lock, my_qcode);
>> +#else
>>   	/*
>>   	 * Exchange current copy of the queue node code
>>   	 */
>> @@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>>   	} else
>>   		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
>>   	my_qcode &= ~_QSPINLOCK_LOCKED;
>> +#endif /* queue_code_xchg */
>>
>>   	if (prev_qcode) {
>>   		/*
> That's just horrible.. please just make the entire #else branch another
> version of that same queue_code_xchg() function.

OK, I will wrap it in another function.

Regards,
Longman
