From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752262AbdANMmy (ORCPT );
	Sat, 14 Jan 2017 07:42:54 -0500
Received: from terminus.zytor.com ([198.137.202.10]:51664 "EHLO
	terminus.zytor.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752158AbdANMmw (ORCPT );
	Sat, 14 Jan 2017 07:42:52 -0500
Date: Sat, 14 Jan 2017 04:41:52 -0800
From: tip-bot for Matt Fleming
Message-ID: 
Cc: sergey.senozhatsky.work@gmail.com, yuyang.du@intel.com,
	torvalds@linux-foundation.org, pmladek@suse.com,
	linux-kernel@vger.kernel.org, wanpeng.li@hotmail.com,
	riel@redhat.com, umgwanakikbuti@gmail.com, luca.abeni@unitn.it,
	fweisbec@gmail.com, byungchul.park@lge.com,
	matt@codeblueprint.co.uk, jack@suse.cz, peterz@infradead.org,
	hpa@zytor.com, mgorman@techsingularity.net, mingo@kernel.org,
	efault@gmx.de, tglx@linutronix.de
Reply-To: wanpeng.li@hotmail.com, riel@redhat.com,
	linux-kernel@vger.kernel.org, pmladek@suse.com,
	torvalds@linux-foundation.org, yuyang.du@intel.com,
	sergey.senozhatsky.work@gmail.com, tglx@linutronix.de,
	mingo@kernel.org, efault@gmx.de, mgorman@techsingularity.net,
	hpa@zytor.com, peterz@infradead.org, matt@codeblueprint.co.uk,
	jack@suse.cz, byungchul.park@lge.com, fweisbec@gmail.com,
	luca.abeni@unitn.it, umgwanakikbuti@gmail.com
In-Reply-To: <20160921133813.31976-7-matt@codeblueprint.co.uk>
References: <20160921133813.31976-7-matt@codeblueprint.co.uk>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:sched/core] sched/fair: Push rq lock pin/unpin into idle_balance()
Git-Commit-ID: 46f69fa33712ad12ccaa723e46ed5929ee93589b
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  46f69fa33712ad12ccaa723e46ed5929ee93589b
Gitweb:     http://git.kernel.org/tip/46f69fa33712ad12ccaa723e46ed5929ee93589b
Author:     Matt Fleming
AuthorDate: Wed, 21 Sep 2016 14:38:12 +0100
Committer:  Ingo Molnar
CommitDate: Sat, 14 Jan 2017 11:29:32 +0100

sched/fair: Push rq lock pin/unpin into idle_balance()

Future patches will emit warnings if rq_clock() is called before
update_rq_clock() inside a rq_pin_lock()/rq_unpin_lock() pair.

Since there is only one caller of idle_balance() we can push the
unpin/repin there.

Signed-off-by: Matt Fleming
Signed-off-by: Peter Zijlstra (Intel)
Cc: Byungchul Park
Cc: Frederic Weisbecker
Cc: Jan Kara
Cc: Linus Torvalds
Cc: Luca Abeni
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Petr Mladek
Cc: Rik van Riel
Cc: Sergey Senozhatsky
Cc: Thomas Gleixner
Cc: Wanpeng Li
Cc: Yuyang Du
Link: http://lkml.kernel.org/r/20160921133813.31976-7-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4904412..faf80e1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3424,7 +3424,7 @@ static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 	return cfs_rq->avg.load_avg;
 }
 
-static int idle_balance(struct rq *this_rq);
+static int idle_balance(struct rq *this_rq, struct rq_flags *rf);
 
 #else /* CONFIG_SMP */
 
@@ -3453,7 +3453,7 @@ attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 static inline void
 detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
 
-static inline int idle_balance(struct rq *rq)
+static inline int idle_balance(struct rq *rq, struct rq_flags *rf)
 {
 	return 0;
 }
@@ -6320,15 +6320,8 @@ simple:
 	return p;
 
 idle:
-	/*
-	 * This is OK, because current is on_cpu, which avoids it being picked
-	 * for load-balance and preemption/IRQs are still disabled avoiding
-	 * further scheduler activity on it and we're being very careful to
-	 * re-start the picking loop.
-	 */
-	rq_unpin_lock(rq, rf);
-	new_tasks = idle_balance(rq);
-	rq_repin_lock(rq, rf);
+	new_tasks = idle_balance(rq, rf);
+
 	/*
 	 * Because idle_balance() releases (and re-acquires) rq->lock, it is
 	 * possible for any higher priority task to appear. In that case we
@@ -8297,7 +8290,7 @@ update_next_balance(struct sched_domain *sd, unsigned long *next_balance)
  * idle_balance is called by schedule() if this_cpu is about to become
  * idle. Attempts to pull tasks from other CPUs.
  */
-static int idle_balance(struct rq *this_rq)
+static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
 {
 	unsigned long next_balance = jiffies + HZ;
 	int this_cpu = this_rq->cpu;
@@ -8311,6 +8304,14 @@ static int idle_balance(struct rq *this_rq)
 	 */
 	this_rq->idle_stamp = rq_clock(this_rq);
 
+	/*
+	 * This is OK, because current is on_cpu, which avoids it being picked
+	 * for load-balance and preemption/IRQs are still disabled avoiding
+	 * further scheduler activity on it and we're being very careful to
+	 * re-start the picking loop.
+	 */
+	rq_unpin_lock(this_rq, rf);
+
 	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
 	    !this_rq->rd->overload) {
 		rcu_read_lock();
@@ -8388,6 +8389,8 @@ out:
 	if (pulled_task)
 		this_rq->idle_stamp = 0;
 
+	rq_repin_lock(this_rq, rf);
+
 	return pulled_task;
 }