From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754189AbdDMOE2 (ORCPT );
	Thu, 13 Apr 2017 10:04:28 -0400
Received: from mail-pf0-f172.google.com ([209.85.192.172]:35426 "EHLO
	mail-pf0-f172.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753869AbdDMODe (ORCPT );
	Thu, 13 Apr 2017 10:03:34 -0400
From: Alex Shi 
To: peterz@infradead.org, mingo@redhat.com, corbet@lwn.net,
	linux-kernel@vger.kernel.org (open list:LOCKING PRIMITIVES)
Cc: linux-kernel@vger.kernel.org, Alex Shi, Steven Rostedt,
	Sebastian Siewior, Thomas Gleixner
Subject: [PATCH 2/3] rtmutex: deboost priority conditionally when unlocking
	rt-mutex
Date: Thu, 13 Apr 2017 22:02:53 +0800
Message-Id: <1492092174-31734-3-git-send-email-alex.shi@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1492092174-31734-1-git-send-email-alex.shi@linaro.org>
References: <1492092174-31734-1-git-send-email-alex.shi@linaro.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

rt_mutex_fastunlock() deboosts the 'current' task only when it needs
to be deboosted, but rt_mutex_slowunlock() sets the 'deboost' flag
unconditionally. That causes unnecessary priority adjustments.

'current' is releasing this lock, so 'current' should be a higher prio
task than the next top waiter, unless the current prio was inherited
from this top waiter; only in that case does 'current' need to be
deboosted after the lock release.

Signed-off-by: Alex Shi 
Cc: Steven Rostedt 
Cc: Sebastian Siewior 
To: linux-kernel@vger.kernel.org
To: Ingo Molnar 
To: Peter Zijlstra 
Cc: Thomas Gleixner 
---
 kernel/locking/rtmutex.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 6edc32e..05ff685 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1037,10 +1037,11 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
  *
  * Called with lock->wait_lock held and interrupts disabled.
  */
-static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
+static bool mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 				    struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter *waiter;
+	bool deboost = false;
 
 	raw_spin_lock(&current->pi_lock);
 
@@ -1055,6 +1056,15 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	rt_mutex_dequeue_pi(current, waiter);
 
 	/*
+	 * 'current' releases this lock, so 'current' should be a higher
+	 * prio task than the next top waiter, unless the current prio was
+	 * inherited from this top waiter; if so, we need to deboost
+	 * 'current' after the lock release.
+	 */
+	if (current->prio == waiter->prio)
+		deboost = true;
+
+	/*
 	 * As we are waking up the top waiter, and the waiter stays
 	 * queued on the lock until it gets the lock, this lock
 	 * obviously has waiters. Just set the bit here and this has
@@ -1067,6 +1077,8 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	raw_spin_unlock(&current->pi_lock);
 
 	wake_q_add(wake_q, waiter->task);
+
+	return deboost;
 }
 
 /*
@@ -1336,6 +1348,7 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 					struct wake_q_head *wake_q)
 {
 	unsigned long flags;
+	bool deboost = false;
 
 	/* irqsave required to support early boot calls */
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
@@ -1389,12 +1402,12 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 	 *
 	 * Queue the next waiter for wakeup once we release the wait_lock.
 	 */
-	mark_wakeup_next_waiter(wake_q, lock);
+	deboost = mark_wakeup_next_waiter(wake_q, lock);
 
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* check PI boosting */
-	return true;
+	return deboost;
 }
 
 /*
-- 
1.9.1
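
As an aside, the decision rule this patch adds can be demonstrated with
a minimal, self-contained user-space sketch. Everything below (the
'struct task', the needs_deboost() helper, the numeric priorities) is an
illustrative assumption, not kernel code; as in the kernel, a
numerically lower ->prio means a higher priority.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the lock owner; not the kernel's task_struct. */
struct task {
	int prio;	/* effective priority, possibly inherited (lower = higher) */
};

/*
 * The unlocking owner only needs a deboost when its effective priority
 * was inherited from the top waiter it is about to wake; priority
 * inheritance raises the owner exactly to the top waiter's prio, so
 * equality is the signature of a boosted owner. If the owner is
 * genuinely higher-prio than the waiter, no adjustment is needed.
 */
static bool needs_deboost(const struct task *owner, int top_waiter_prio)
{
	return owner->prio == top_waiter_prio;
}

int main(void)
{
	struct task owner = { .prio = 90 };

	/* Owner was boosted to 90 by a prio-90 waiter: deboost on unlock. */
	printf("boosted owner:   deboost = %d\n", needs_deboost(&owner, 90));

	/* Owner is natively prio 50, above the prio-90 waiter: no deboost. */
	owner.prio = 50;
	printf("high-prio owner: deboost = %d\n", needs_deboost(&owner, 90));

	return 0;
}

This mirrors the 'current->prio == waiter->prio' test added to
mark_wakeup_next_waiter() above: when the priorities differ, the owner's
priority cannot have come from that waiter, so the slow path may now
return false and skip the deboost.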