public inbox for linux-kernel@vger.kernel.org
From: Waiman Long <waiman.long@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, Jason Low <jason.low2@hp.com>,
	mingo@kernel.org, paulmck@linux.vnet.ibm.com,
	torvalds@linux-foundation.org, tglx@linutronix.de,
	riel@redhat.com, akpm@linux-foundation.org, davidlohr@hp.com,
	hpa@zytor.com, andi@firstfloor.org, aswin@hp.com,
	scott.norton@hp.com, chegu_vinod@hp.com
Subject: Re: [PATCH 7/8] locking: Introduce qrwlock
Date: Fri, 14 Feb 2014 14:01:43 -0500	[thread overview]
Message-ID: <52FE6817.5050708@hp.com> (raw)
In-Reply-To: <20140213172657.GF3545@laptop.programming.kicks-ass.net>

On 02/13/2014 12:26 PM, Peter Zijlstra wrote:
> On Thu, Feb 13, 2014 at 05:35:46PM +0100, Peter Zijlstra wrote:
>> On Tue, Feb 11, 2014 at 03:12:59PM -0500, Waiman Long wrote:
>>> Using the same locktest program to repetitively take a single rwlock with
>>> programmable number of threads and count their execution times. Each
>>> thread takes the lock 5M times on a 4-socket 40-core Westmere-EX
>>> system. I bound all the threads to different CPUs with the following
>>> 3 configurations:
>>>
>>>   1) Both CPUs and lock are in the same node
>>>   2) CPUs and lock are in different nodes
>>>   3) Half of the CPUs are in the same node as the lock & the other
>>>      half are remote
>> I can't find these configurations in the below numbers; esp the first is
>> interesting because most computers out there have no nodes.
>>
>>> Two types of qrwlock are tested:
>>>   1) Use MCS lock
>>>   2) Use ticket lock
>> arch_spinlock_t; you forget that if you change that to an MCS style lock
>> this one goes along for free.
> Furthermore; comparing the current rwlock to the ticket-rwlock already
> shows an improvement, so on that aspect its worth it as well.

As I said in my previous email, I am not against your change.

> And there's also the paravirt people to consider; a fair rwlock will
> make them unhappy; and I'm hoping that their current paravirt ticket
> stuff is sufficient to deal with the ticket-rwlock without them having
> to come and wreck things again.

Actually, my original qrwlock patch has an unfair option. With some 
minor changes, it can be made unfair pretty easily. So we can use the 
paravirt config macro to switch it to unfair if that is what the 
virtualization people want.
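
To illustrate what I mean, here is a rough userspace sketch (this is
not the kernel code; the bit names _QW_WAITING/_QW_LOCKED and the
function names are made up for illustration) of how a fair reader
fastpath differs from an unfair one. The fair reader backs off for a
merely *waiting* writer; the unfair reader only yields to an *active*
writer, so it can overtake the queue:

```c
#include <stdatomic.h>

#define _QW_WAITING 0x100u          /* a writer is queued, waiting (illustrative) */
#define _QW_LOCKED  0x200u          /* a writer holds the lock (illustrative) */

struct qrw {
	atomic_uint cnts;           /* low bits: reader count; high bits: writer state */
};

/* Fair variant: back off if any writer is present, even one that is
 * only waiting, so writers are never starved by a reader stream. */
static int rw_fair_read_trylock(struct qrw *l)
{
	unsigned c = atomic_fetch_add(&l->cnts, 1);  /* optimistic reader inc */

	if (c & (_QW_WAITING | _QW_LOCKED)) {
		atomic_fetch_sub(&l->cnts, 1);       /* undo; would go queue up */
		return 0;
	}
	return 1;
}

/* Unfair variant: ignore waiting writers entirely; only an active
 * writer stops the reader. This is the behavior a paravirt config
 * option could select. */
static int rw_unfair_read_trylock(struct qrw *l)
{
	unsigned c = atomic_fetch_add(&l->cnts, 1);

	if (c & _QW_LOCKED) {
		atomic_fetch_sub(&l->cnts, 1);
		return 0;
	}
	return 1;
}
```

The only difference is which writer bits the reader tests, which is
why flipping between the two under a config macro is a small change.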

> Similarly; qspinlock needs paravirt support.
>
>

The current paravirt code hard-codes the use of the ticket spinlock. 
That is why I have to disable my qspinlock code if paravirt is enabled.

I have been thinking about that paravirt support. Since the waiting 
tasks are queued up, it should be possible, by maintaining some kind of 
heartbeat signal, to make a waiting task jump the queue if the previous 
one in the queue doesn't seem to be alive. I will work on that next 
once I am done with the current qspinlock patch.
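
Roughly, the idea is something like the following userspace sketch
(purely speculative; none of these names exist in the kernel, and a
real implementation would need a calibrated stall threshold rather
than a single sample). Each queued waiter bumps a per-node counter
while it spins; a successor that sees the counter stop advancing can
presume the predecessor's vCPU was preempted and jump over it:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical MCS-style queue node with a heartbeat counter. */
struct hb_node {
	atomic_uint heartbeat;      /* bumped by the waiter while it spins */
	struct hb_node *next;
};

/* Called by a waiter on each iteration of its spin loop. */
static void hb_tick(struct hb_node *n)
{
	atomic_fetch_add(&n->heartbeat, 1);
}

/* A successor remembers the predecessor's count, waits a while, then
 * checks again: an unchanged count suggests the predecessor is not
 * running (e.g. its vCPU was preempted), so it may be skipped. */
static bool hb_stalled(struct hb_node *prev, unsigned last_seen)
{
	return atomic_load(&prev->heartbeat) == last_seen;
}
```

The hard parts, of course, are picking the stall interval and safely
unlinking a node whose owner may wake up again later; this sketch only
shows the detection side.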

-Longman


Thread overview: 35+ messages
2014-02-10 19:58 [PATCH 0/8] locking/core patches Peter Zijlstra
2014-02-10 19:58 ` [PATCH 1/8] locking: Move mcs_spinlock.h into kernel/locking/ Peter Zijlstra
2014-02-10 19:58 ` [PATCH 2/8] mutex: In mutex_can_spin_on_owner(), return false if task need_resched() Peter Zijlstra
2014-02-10 21:02   ` Peter Zijlstra
2014-02-10 19:58 ` [PATCH 3/8] mutex: Modify the way optimistic spinners are queued Peter Zijlstra
2014-02-11  1:33   ` Jason Low
2014-02-11  7:20     ` Peter Zijlstra
2014-02-10 19:58 ` [PATCH 4/8] mutex: Unlock the mutex without the wait_lock Peter Zijlstra
2014-02-10 19:58 ` [PATCH 5/8] locking, mutex: Cancelable MCS lock for adaptive spinning Peter Zijlstra
2014-02-10 21:15   ` Jason Low
2014-02-10 21:32     ` Peter Zijlstra
2014-02-10 22:04       ` Jason Low
2014-02-11  9:18         ` Peter Zijlstra
2014-02-11  9:38           ` Ingo Molnar
2014-02-25 19:56   ` Jason Low
2014-02-26  9:22     ` Peter Zijlstra
2014-02-26 17:45       ` Jason Low
2014-02-10 19:58 ` [PATCH 6/8] mutex: Extra reschedule point Peter Zijlstra
2014-02-10 22:59   ` Andrew Morton
2014-02-10 19:58 ` [PATCH 7/8] locking: Introduce qrwlock Peter Zijlstra
2014-02-11 18:17   ` Waiman Long
2014-02-11 20:12     ` Waiman Long
2014-02-13 16:35       ` Peter Zijlstra
2014-02-13 17:26         ` Peter Zijlstra
2014-02-14 19:01           ` Waiman Long [this message]
2014-02-14 18:48         ` Waiman Long
2014-02-10 19:58 ` [PATCH 8/8] x86,locking: Enable qrwlock Peter Zijlstra
2014-02-10 23:02 ` [PATCH 0/8] locking/core patches Andrew Morton
2014-02-11  7:17   ` Peter Zijlstra
2014-02-11  8:03     ` Andrew Morton
2014-02-11  8:45       ` Ingo Molnar
2014-02-11  8:57         ` Peter Zijlstra
2014-02-11 21:37           ` Waiman Long
2014-02-25 19:26   ` Jason Low
2014-02-26 21:40 ` Paul E. McKenney
