From: Waiman Long <waiman.long@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>,
Rik van Riel <riel@redhat.com>,
Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
David Howells <dhowells@redhat.com>,
Ingo Molnar <mingo@kernel.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mutex: Fix mutex_can_spin_on_owner
Date: Fri, 19 Jul 2013 15:08:36 -0400 [thread overview]
Message-ID: <51E98EB4.3080307@hp.com> (raw)
In-Reply-To: <20130719183101.GA20909@twins.programming.kicks-ass.net>
On 07/19/2013 02:31 PM, Peter Zijlstra wrote:
> mutex_can_spin_on_owner() is broken in that it would allow the compiler
> to load lock->owner twice, seeing a non-NULL pointer the first time and a
> NULL pointer the second time.
>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
> kernel/mutex.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/mutex.c b/kernel/mutex.c
> index ff05f4b..7ff48c5 100644
> --- a/kernel/mutex.c
> +++ b/kernel/mutex.c
> @@ -209,11 +209,13 @@ int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
> */
> static inline int mutex_can_spin_on_owner(struct mutex *lock)
> {
> + struct task_struct *owner;
> int retval = 1;
>
> rcu_read_lock();
> - if (lock->owner)
> - retval = lock->owner->on_cpu;
> + owner = ACCESS_ONCE(lock->owner);
> + if (owner)
> + retval = owner->on_cpu;
> rcu_read_unlock();
> /*
> * if lock->owner is not set, the mutex owner may have just acquired
I am fine with this change. However, in this case the compiler is already
smart enough not to issue two loads for the same memory location, so the
patch does not change the generated code. Below is the relevant x86 code
for that section:
0x00000000000005d2 <+34>: mov 0x18(%rdi),%rdx
0x00000000000005d6 <+38>: mov $0x1,%eax
0x00000000000005db <+43>: test %rdx,%rdx
0x00000000000005de <+46>: je 0x5e3 <__mutex_lock_slowpath+51>
0x00000000000005e0 <+48>: mov 0x28(%rdx),%eax
0x00000000000005e3 <+51>: test %eax,%eax
0x00000000000005e5 <+53>: je 0x6d3 <__mutex_lock_slowpath+291>
Only one memory access is done.
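For reference, ACCESS_ONCE() is just a volatile cast, so the point of the
patch is to make the single-load assumption explicit rather than to change
today's code generation. A minimal userspace sketch of the same pattern
follows; the struct types are illustrative stand-ins and the RCU read-side
section of the real function is omitted:

/*
 * Kernel definition of ACCESS_ONCE(): a volatile cast that forces the
 * compiler to emit exactly one load for this expression instead of
 * re-reading the location later.
 */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

struct task { int on_cpu; };            /* stand-in for task_struct */
struct lock { struct task *owner; };    /* stand-in for struct mutex */

int can_spin_on_owner(struct lock *l)
{
	struct task *owner;
	int retval = 1;

	/*
	 * Without ACCESS_ONCE() the compiler may legally reload l->owner
	 * after the NULL check; a concurrent unlock could then make the
	 * second load return NULL and the dereference below would fault.
	 */
	owner = ACCESS_ONCE(l->owner);
	if (owner)
		retval = owner->on_cpu;
	return retval;
}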
Acked-by: Waiman Long <Waiman.Long@hp.com>
Thread overview: 10+ messages
2013-07-19 18:31 [PATCH] mutex: Fix mutex_can_spin_on_owner Peter Zijlstra
2013-07-19 18:36 ` Davidlohr Bueso
2013-07-19 19:08 ` Waiman Long [this message]
2013-07-19 19:41 ` Thomas Gleixner
2013-07-19 19:48 ` Linus Torvalds
2013-07-19 20:58 ` Waiman Long
2013-07-25 12:18 ` Jan-Simon Möller
2013-07-20 11:16 ` Peter Zijlstra
2013-07-19 19:36 ` Rik van Riel
2013-07-23 7:46 ` [tip:core/locking] mutex: Fix/document access-once assumption in mutex_can_spin_on_owner() tip-bot for Peter Zijlstra