From mboxrd@z Thu Jan 1 00:00:00 1970
From: Frederic Weisbecker
To: Peter Zijlstra
Cc: LKML, Frederic Weisbecker, Byungchul Park, Chris Metcalf,
	Thomas Gleixner, Luiz Capitulino, Christoph Lameter,
	"Paul E. McKenney", Mike Galbraith, Rik van Riel
Subject: [PATCH 1/4] sched: Don't account tickless CPU load on tick
Date: Wed, 13 Jan 2016 17:01:28 +0100
Message-Id: <1452700891-21807-2-git-send-email-fweisbec@gmail.com>
X-Mailer: git-send-email 2.6.4
In-Reply-To: <1452700891-21807-1-git-send-email-fweisbec@gmail.com>
References: <1452700891-21807-1-git-send-email-fweisbec@gmail.com>

The cpu load update on tick doesn't care about dynticks and as such is
buggy when occurring on nohz ticks (including idle ticks), as it resets
the jiffies snapshot that was recorded on nohz entry. As a result we
ignore the potentially long tickless load that happened before the tick.

We can fix this in two ways:

1) Handle the tickless load, but then we must make sure that a
   freshly woken task's load doesn't get accounted as the whole
   previous tickless load.

2) Ignore nohz ticks and delay the accounting to the nohz exit point.

For simplicity, this patch proposes to fix the issue with the second
solution.

Cc: Byungchul Park
Cc: Mike Galbraith
Cc: Chris Metcalf
Cc: Christoph Lameter
Cc: Luiz Capitulino
Cc: Paul E. McKenney
Cc: Rik van Riel
Cc: Thomas Gleixner
Signed-off-by: Frederic Weisbecker
---
 kernel/sched/fair.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1093873..b849ea8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4518,10 +4518,20 @@ void update_cpu_load_nohz(int active)
  */
 void update_cpu_load_active(struct rq *this_rq)
 {
-	unsigned long load = weighted_cpuload(cpu_of(this_rq));
+	unsigned long load;
+
+	/*
+	 * If the tick is stopped, we can't reliably update the
+	 * load without risking to spuriously account the weight
+	 * of a freshly woken task as the whole weight of a long
+	 * tickless period.
+	 */
+	if (tick_nohz_tick_stopped())
+		return;
 	/*
 	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
 	 */
+	load = weighted_cpuload(cpu_of(this_rq));
 	this_rq->last_load_update_tick = jiffies;
 	__update_cpu_load(this_rq, load, 1, 1);
 }
-- 
2.6.4
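[Editor's illustration, not part of the patch.] The scheme the patch chooses, skipping per-tick load sampling while the tick is stopped so the nohz-entry jiffies snapshot survives until nohz exit, can be sketched with a small user-space toy model. All names here (`toy_rq`, `toy_tick`, `toy_nohz_exit`) are hypothetical and greatly simplified; they are not the kernel's data structures or load-accounting math.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a runqueue's load-accounting state. */
struct toy_rq {
	unsigned long last_load_update_tick; /* jiffies snapshot */
	unsigned long cpu_load;              /* last sampled load */
	bool tick_stopped;                   /* nohz (dynticks) state */
};

/*
 * Per-tick update, mirroring update_cpu_load_active() after the fix:
 * if the tick is stopped, bail out so we neither clobber the snapshot
 * recorded at nohz entry nor account a freshly woken task's weight as
 * the whole tickless period's load.
 */
static void toy_tick(struct toy_rq *rq, unsigned long now,
		     unsigned long load)
{
	if (rq->tick_stopped)
		return;
	rq->cpu_load = load;
	rq->last_load_update_tick = now;
}

/*
 * Nohz exit: because the snapshot was preserved, the full tickless
 * span is still visible and can be accounted for here (solution 2 in
 * the changelog). Returns the length of the tickless span in ticks.
 */
static unsigned long toy_nohz_exit(struct toy_rq *rq, unsigned long now)
{
	unsigned long tickless_span = now - rq->last_load_update_tick;

	rq->tick_stopped = false;
	rq->cpu_load = 0; /* the tickless span is accounted as idle */
	rq->last_load_update_tick = now;
	return tickless_span;
}
```

With the early return, a spurious tick at jiffy 150 during a nohz period that started at jiffy 110 no longer resets the snapshot, so the exit path at jiffy 160 still sees the whole 50-tick span rather than just the last 10 ticks.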