linux-kernel.vger.kernel.org archive mirror
From: Rik van Riel <riel@redhat.com>
To: Michel Lespinasse <walken@google.com>
Cc: Ingo Molnar <mingo@redhat.com>,
	"Paul E. McKenney" <paulmck@us.ibm.com>,
	David Howells <dhowells@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Eric Dumazet <edumazet@google.com>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Manfred Spraul <manfred@colorfullife.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 4/6] kernel: faster queue spinlock implementation
Date: Wed, 23 Jan 2013 16:55:25 -0500	[thread overview]
Message-ID: <51005C4D.1090105@redhat.com> (raw)
In-Reply-To: <1358896415-28569-5-git-send-email-walken@google.com>

On 01/22/2013 06:13 PM, Michel Lespinasse wrote:

> Because of these limitations, the MCS queue spinlock implementation does
> not always compare favorably to ticket spinlocks under moderate contention.
>
> This alternative queue spinlock implementation has some nice properties:
>
> - One single atomic operation (xchg) during acquire
> - One single memory store for unlock. No busy looping either.
>    Actually, the unlock is so short that we can just inline it.
> - Same basic API as with the MCS spinlock

There is one thing I do not understand about these locks.

> +static inline void
> +q_spin_unlock(struct q_spinlock *lock, struct q_spinlock_node *node)
> +{
> +	q_spin_unlock_mb();	/* guarantee release store semantics */
> +	ACCESS_ONCE(node->token->wait) = false;
> +	preempt_enable();
> +}

Here you set wait to false in the current CPU's queue lock token.
Afterwards, the same CPU could try to take another lock, re-using
that same token...

> +DEFINE_PER_CPU(struct q_spinlock_token *, q_spinlock_token[2]);
> +
> +static inline struct q_spinlock_token *
> +____q_spin_lock(struct q_spinlock *lock,
> +		struct q_spinlock_token **percpu_token)
>   {
> +	/*
> +	 * Careful about reentrancy here - if we are interrupted and the code
> +	 * running in that interrupt tries to get another queue spinlock,
> +	 * it must not use the same percpu_token that we're using here.
> +	 */
> +
> +	struct q_spinlock_token *token, *prev;
> +
> +	token = __this_cpu_read(*percpu_token);
> +	token->wait = true;
> +	prev = xchg(&lock->token, token);
> +	__this_cpu_write(*percpu_token, prev);
> +	while (ACCESS_ONCE(prev->wait))
>   		cpu_relax();
>   	q_spin_lock_mb();	/* guarantee acquire load semantics */
> +	return token;
>   }

Here a CPU trying to take the lock will spin on the previous
CPU's token.

However, the previous CPU can immediately re-use its token for
the next lock it takes.

It looks like it might then be possible for the CPU trying to
acquire the lock to miss the window in which prev->wait is false,
and continue spinning indefinitely.

If this lock type is widely used, could that lead to a deadlock?

Is there something in your code that guarantees the scenario
I described cannot happen, and I just missed it?


Thread overview: 25+ messages
2013-01-22 23:13 [RFC PATCH 0/6] fast queue spinlocks Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 1/6] kernel: implement queue spinlock API Michel Lespinasse
2013-02-07 22:34   ` Paul E. McKenney
2013-02-07 22:56     ` Eric Dumazet
2013-02-07 23:53       ` Paul E. McKenney
2013-02-07 23:58       ` Michel Lespinasse
2013-02-08  0:03         ` Eric Dumazet
2013-02-08  0:40           ` Paul E. McKenney
2013-02-08  3:48             ` Michel Lespinasse
2013-02-08  4:36               ` Paul E. McKenney
2013-02-08  5:03                 ` Paul E. McKenney
2013-02-08  5:11                   ` Michel Lespinasse
2013-02-08 16:17                     ` Paul E. McKenney
2013-02-07 23:14     ` John Stultz
2013-02-08  0:35     ` Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 2/6] net: convert qdisc busylock to use " Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 3/6] ipc: convert ipc objects " Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 4/6] kernel: faster queue spinlock implementation Michel Lespinasse
2013-01-23 21:55   ` Rik van Riel [this message]
2013-01-23 23:52     ` Michel Lespinasse
2013-01-24  0:18   ` Eric Dumazet
2013-01-25 20:30   ` [RFC PATCH 7/6] kernel: document fast queue spinlocks Rik van Riel
2013-01-22 23:13 ` [RFC PATCH 5/6] net: qdisc busylock updates to account for queue spinlock api change Michel Lespinasse
2013-01-22 23:13 ` [RFC PATCH 6/6] ipc: object locking " Michel Lespinasse
2013-01-22 23:17 ` [RFC PATCH 0/6] fast queue spinlocks Michel Lespinasse
