From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <20101223225115.676407152@goodmis.org>
User-Agent: quilt/0.48-1
Date: Thu, 23 Dec 2010 17:47:56 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Thomas Gleixner, Peter Zijlstra, Lai Jiangshan
Subject: [RFC][RT][PATCH 1/4] rtmutex: Only save lock depth once in spin_slowlock
References: <20101223224755.078983538@goodmis.org>
Content-Disposition: inline; filename=0001-rtmutex-Only-save-lock-depth-once-in-spin_slowlock.patch
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From: Steven Rostedt

The task that enters rt_spin_lock_slowlock() will not take or release
the kernel lock again until it leaves the function. There is no reason
to save and restore the lock depth on each iteration of the try_lock
loop. Just save and reset the lock depth once before entering the loop
and restore it upon exit.
Signed-off-by: Steven Rostedt
---
 kernel/rtmutex.c |   16 +++++++++-------
 1 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 348b1e7..f0ce334 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -821,6 +821,7 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
 	struct rt_mutex_waiter waiter;
 	unsigned long saved_state, flags;
 	struct task_struct *orig_owner;
+	int saved_lock_depth;
 
 	debug_rt_mutex_init_waiter(&waiter);
 	waiter.task = NULL;
@@ -843,8 +844,14 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
 	 */
 	saved_state = rt_set_current_blocked_state(current->state);
 
+	/*
+	 * Prevent schedule() to drop BKL, while waiting for
+	 * the lock ! We restore lock_depth when we come back.
+	 */
+	saved_lock_depth = current->lock_depth;
+	current->lock_depth = -1;
+
 	for (;;) {
-		int saved_lock_depth = current->lock_depth;
 
 		/* Try to acquire the lock */
 		if (do_try_to_take_rt_mutex(lock, STEAL_LATERAL))
@@ -863,11 +870,6 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
 			continue;
 		}
 
-		/*
-		 * Prevent schedule() to drop BKL, while waiting for
-		 * the lock ! We restore lock_depth when we come back.
-		 */
-		current->lock_depth = -1;
 		orig_owner = rt_mutex_owner(lock);
 		get_task_struct(orig_owner);
 		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
@@ -883,9 +885,9 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
 		put_task_struct(orig_owner);
 		raw_spin_lock_irqsave(&lock->wait_lock, flags);
 
-		current->lock_depth = saved_lock_depth;
 		saved_state = rt_set_current_blocked_state(saved_state);
 	}
+	current->lock_depth = saved_lock_depth;
 
 	rt_restore_current_state(saved_state);
 
-- 
1.7.2.3