From: Frederic Weisbecker <fweisbec@gmail.com>
To: LKML <linux-kernel@vger.kernel.org>
Cc: Frederic Weisbecker, Alessio Igor Bogani, Andrew Morton,
	Avi Kivity, Chris Metcalf, Christoph Lameter, Geoff Levand,
	Gilad Ben Yossef, Hakan Akkan, Ingo Molnar, "Paul E. McKenney",
	Paul Gortmaker, Peter Zijlstra, Steven Rostedt, Thomas Gleixner
Subject: [PATCH 18/24] sched: Update nohz rq clock before searching busiest group on load balancing
Date: Thu, 20 Dec 2012 19:33:05 +0100
Message-Id: <1356028391-14427-19-git-send-email-fweisbec@gmail.com>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1356028391-14427-1-git-send-email-fweisbec@gmail.com>
References: <1356028391-14427-1-git-send-email-fweisbec@gmail.com>

While load balancing an rq target, we look for the busiest group. This
operation may require an up-to-date rq clock if we end up calling
scale_rt_power(). To this end, update it manually if the target is
running tickless.

DOUBT: don't we actually also need this in the vanilla kernel, in case
this_cpu is in dyntick-idle mode?

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Avi Kivity
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
---
 kernel/sched/fair.c | 13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 291e225..b1b791d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4795,6 +4795,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	schedstat_inc(sd, lb_count[idle]);
 
+	/*
+	 * find_busiest_group() may need an up-to-date clock for
+	 * this_rq (see scale_rt_power()). If the CPU runs full
+	 * nohz, its clock may be stale: update it manually before
+	 * searching for the busiest group.
+	 */
+	if (tick_nohz_full_cpu(this_cpu)) {
+		local_irq_save(flags);
+		raw_spin_lock(&this_rq->lock);
+		update_rq_clock(this_rq);
+		raw_spin_unlock(&this_rq->lock);
+		local_irq_restore(flags);
+	}
+
 redo:
 	group = find_busiest_group(&env, balance);
 
-- 
1.7.5.4
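
P.S. for readers following along outside the kernel tree: the hunk uses
the common pattern for refreshing a runqueue clock from a path that does
not already hold the rq lock -- disable local interrupts, take the raw
spinlock, update the cached clock, release both. Below is a minimal
user-space C sketch of that lock-then-update pattern; the struct rq,
now_ns() and the pthread spinlock are illustrative stand-ins, not the
kernel's types or APIs.

#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for struct rq: a lock protecting a cached clock snapshot. */
struct rq {
	pthread_spinlock_t lock;
	uint64_t clock_ns;	/* analogous to rq->clock */
};

/* Monotonic time in nanoseconds (illustrative helper). */
static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Analogue of update_rq_clock(): refresh the snapshot under the lock,
 * so concurrent readers never see a half-updated value. */
static void refresh_rq_clock(struct rq *rq)
{
	pthread_spin_lock(&rq->lock);
	rq->clock_ns = now_ns();
	pthread_spin_unlock(&rq->lock);
}

int main(void)
{
	struct rq rq;

	pthread_spin_init(&rq.lock, PTHREAD_PROCESS_PRIVATE);
	refresh_rq_clock(&rq);
	printf("rq clock snapshot: %" PRIu64 " ns\n", rq.clock_ns);
	pthread_spin_destroy(&rq.lock);
	return 0;
}

Build with "cc -o rqclock rqclock.c -lpthread". In the kernel the
local_irq_save() step matters because rq->lock can be taken from
interrupt context; a user-space spinlock has no such constraint, which
is why the sketch only models the lock-protected clock update itself.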