From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753453Ab2L2QoE (ORCPT );
	Sat, 29 Dec 2012 11:44:04 -0500
Received: from mail-we0-f175.google.com ([74.125.82.175]:35079 "EHLO
	mail-we0-f175.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753168Ab2L2Qnz (ORCPT );
	Sat, 29 Dec 2012 11:43:55 -0500
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker,
	Alessio Igor Bogani,
	Andrew Morton,
	Chris Metcalf,
	Christoph Lameter,
	Geoff Levand,
	Gilad Ben Yossef,
	Hakan Akkan,
	Ingo Molnar,
	"Paul E. McKenney",
	Paul Gortmaker,
	Peter Zijlstra,
	Steven Rostedt,
	Thomas Gleixner
Subject: [PATCH 18/27] sched: Update nohz rq clock before searching busiest
	group on load balancing
Date: Sat, 29 Dec 2012 17:42:57 +0100
Message-Id: <1356799386-4212-19-git-send-email-fweisbec@gmail.com>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To: <1356799386-4212-1-git-send-email-fweisbec@gmail.com>
References: <1356799386-4212-1-git-send-email-fweisbec@gmail.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

While load balancing an rq target, we look for the busiest group. This
operation may require an up-to-date rq clock if we end up calling
scale_rt_power(). To this end, update it manually if the target is
running tickless.

DOUBT: don't we actually also need this in the vanilla kernel, in case
this_cpu is in dyntick-idle mode?

Signed-off-by: Frederic Weisbecker
Cc: Alessio Igor Bogani
Cc: Andrew Morton
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Geoff Levand
Cc: Gilad Ben Yossef
Cc: Hakan Akkan
Cc: Ingo Molnar
Cc: Paul E. McKenney
Cc: Paul Gortmaker
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
---
 kernel/sched/fair.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 698137d..473f50f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5023,6 +5023,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	schedstat_inc(sd, lb_count[idle]);
 
+	/*
+	 * find_busiest_group() may need an up-to-date cpu clock
+	 * (see scale_rt_power()). If the CPU is running
+	 * tickless, its clock may be stale.
+	 */
+	if (tick_nohz_full_cpu(this_cpu)) {
+		local_irq_save(flags);
+		raw_spin_lock(&this_rq->lock);
+		update_rq_clock(this_rq);
+		raw_spin_unlock(&this_rq->lock);
+		local_irq_restore(flags);
+	}
+
 redo:
 	group = find_busiest_group(&env, balance);
 
-- 
1.7.5.4