From: Matt Fleming
To: Peter Zijlstra, Ingo Molnar
Cc: Byungchul Park, Frederic Weisbecker, Luca Abeni, Rik van Riel,
	Thomas Gleixner, Wanpeng Li, Yuyang Du, Petr Mladek, Jan Kara,
	Sergey Senozhatsky, linux-kernel@vger.kernel.org, Mel Gorman,
	Mike Galbraith, Matt Fleming
Subject: [PATCH v2 6/7] sched/fair: Push rq lock pin/unpin into idle_balance()
Date: Wed, 21 Sep 2016 14:38:12 +0100
Message-Id: <20160921133813.31976-7-matt@codeblueprint.co.uk>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20160921133813.31976-1-matt@codeblueprint.co.uk>
References: <20160921133813.31976-1-matt@codeblueprint.co.uk>

Future patches will emit warnings if rq_clock() is called before
update_rq_clock() inside a rq_pin_lock()/rq_unpin_lock() pair. Since
there is only one caller of idle_balance() we can push the unpin/repin
there.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Mike Galbraith
Cc: Mel Gorman
Signed-off-by: Matt Fleming
---
 kernel/sched/fair.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f0032827fb79..df9a5b16e1df 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3232,7 +3232,7 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
-static int idle_balance(struct rq *this_rq);
+static int idle_balance(struct rq *this_rq, struct rq_flags *rf);
 
 #else /* CONFIG_SMP */
 
@@ -3261,7 +3261,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int idle_balance(struct rq *rq)
+static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -5825,15 +5825,8 @@ simple:
 	return p;
 
 idle:
-	/*
-	 * This is OK, because current is on_cpu, which avoids it being picked
-	 * for load-balance and preemption/IRQs are still disabled avoiding
-	 * further scheduler activity on it and we're being very careful to
-	 * re-start the picking loop.
-	 */
-	rq_unpin_lock(rq, rf);
-	new_tasks = idle_balance(rq);
-	rq_repin_lock(rq, rf);
+	new_tasks = idle_balance(rq, rf);
+
 	/*
 	 * Because idle_balance() releases (and re-acquires) rq->lock, it is
 	 * possible for any higher priority task to appear. In that case we
@@ -7767,7 +7760,7 @@ update_next_balance(struct sched_domain *sd, unsigned long *next_balance)
  * idle_balance is called by schedule() if this_cpu is about to become
  * idle. Attempts to pull tasks from other CPUs.
  */
-static int idle_balance(struct rq *this_rq)
+static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
@@ -7781,6 +7774,14 @@ static int idle_balance(struct rq *this_rq)
 	 */
 	this_rq->idle_stamp = rq_clock(this_rq);
 
+	/*
+	 * This is OK, because current is on_cpu, which avoids it being picked
+	 * for load-balance and preemption/IRQs are still disabled avoiding
+	 * further scheduler activity on it and we're being very careful to
+	 * re-start the picking loop.
+	 */
+	rq_unpin_lock(this_rq, rf);
+
 	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
 	    !this_rq->rd->overload) {
 		rcu_read_lock();
@@ -7858,6 +7859,8 @@ out:
 	if (pulled_task)
 		this_rq->idle_stamp = 0;
 
+	rq_repin_lock(this_rq, rf);
+
 	return pulled_task;
 }
 
-- 
2.9.3
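
For readers following the series: the "future patches" in the changelog
refer to debug checks that warn when rq_clock() is read while the clock
is stale inside a pinned section. The standalone C mock below sketches
that invariant; it is not kernel code, and the clock_updated/pinned
fields, the exact warning text, and the main() driver are invented for
illustration (the real machinery tracks this via rq_flags and
clock-update flags on the rq).

/*
 * Standalone mock (not kernel code) of the invariant described in the
 * changelog: rq_clock() must not be used before update_rq_clock()
 * while the rq lock is pinned.  Build with: cc -o pin-demo pin-demo.c
 */
#include <stdbool.h>
#include <stdio.h>

struct rq {
	unsigned long long clock;	/* mock of rq->clock */
	bool clock_updated;		/* stand-in for a clock-update flag */
	bool pinned;			/* stand-in for the pin tracked via rq_flags */
};

static void update_rq_clock(struct rq *rq)
{
	rq->clock++;			/* stand-in for reading sched_clock() */
	rq->clock_updated = true;
}

static unsigned long long rq_clock(struct rq *rq)
{
	/* This is where the warning from the changelog would fire. */
	if (rq->pinned && !rq->clock_updated)
		fprintf(stderr, "WARN: rq_clock() before update_rq_clock()\n");
	return rq->clock;
}

static void rq_pin_lock(struct rq *rq)
{
	rq->pinned = true;
	rq->clock_updated = false;	/* clock is stale until updated again */
}

static void rq_unpin_lock(struct rq *rq)
{
	rq->pinned = false;
}

int main(void)
{
	struct rq rq = { 0 };

	rq_pin_lock(&rq);
	rq_clock(&rq);			/* triggers the warning */
	update_rq_clock(&rq);
	rq_clock(&rq);			/* fine: clock was just updated */
	rq_unpin_lock(&rq);

	return 0;
}

In those terms, moving rq_unpin_lock()/rq_repin_lock() into
idle_balance() means the function brackets its own lock-dropping
section, rather than relying on pick_next_task_fair() to do it around
the call.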