From: Jason Low
To: Peter Zijlstra, Linus Torvalds, Ingo Molnar, Davidlohr Bueso
Cc: LKML, Jason Low
Subject: [PATCH] locking/mutex: Refactor mutex_spin_on_owner()
Date: Mon, 09 Mar 2015 13:14:54 -0700
Message-ID: <1425932094.2475.400.camel@j-VirtualBox>
List-ID: <linux-kernel.vger.kernel.org>

This patch applies on top of tip.

-------------------------------------------------------------------

Similar to what Linus suggested for rwsem_spin_on_owner(), in
mutex_spin_on_owner(), instead of having while (true) and breaking
out of the spin loop on lock->owner != owner, we can have the loop
directly check for while (lock->owner == owner). This improves the
readability of the code.
Signed-off-by: Jason Low
---
 kernel/locking/mutex.c | 17 +++++------------
 1 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 16b2d3c..1c3b7c5 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -224,16 +224,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
 static noinline
 bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 {
-	bool ret;
-
 	rcu_read_lock();
-	while (true) {
-		/* Return success when the lock owner changed */
-		if (lock->owner != owner) {
-			ret = true;
-			break;
-		}
-
+	while (lock->owner == owner) {
 		/*
 		 * Ensure we emit the owner->on_cpu, dereference _after_
 		 * checking lock->owner still matches owner, if that fails,
@@ -242,16 +234,17 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 		 */
 		barrier();
 
+		/* Stop spinning when need_resched or owner is not running. */
 		if (!owner->on_cpu || need_resched()) {
-			ret = false;
-			break;
+			rcu_read_unlock();
+			return false;
 		}
 
 		cpu_relax_lowlatency();
 	}
 	rcu_read_unlock();
 
-	return ret;
+	return true;
 }
 
 /*
-- 
1.7.2.5