From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758902Ab2KVWwy (ORCPT );
	Thu, 22 Nov 2012 17:52:54 -0500
Received: from mail-ea0-f174.google.com ([209.85.215.174]:43246 "EHLO
	mail-ea0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1758873Ab2KVWvt (ORCPT );
	Thu, 22 Nov 2012 17:51:49 -0500
From: Ingo Molnar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter,
	Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins
Subject: [PATCH 30/33] sched: Average the fault stats longer
Date: Thu, 22 Nov 2012 23:49:51 +0100
Message-Id: <1353624594-1118-31-git-send-email-mingo@kernel.org>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1353624594-1118-1-git-send-email-mingo@kernel.org>
References: <1353624594-1118-1-git-send-email-mingo@kernel.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

We will rely on the per-CPU fault statistics and their shared/private
derivatives even more in the future, so stabilize this metric further.

The staged updates introduced in commit:

  sched: Introduce staged average NUMA faults

already stabilized this key metric significantly, but in real workloads
it was still reacting too quickly to temporary load-balancing transients.

Slow it down by weighting the running average more heavily towards its
previous value. The weighting value was found via experimentation.

Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Peter Zijlstra
Cc: Andrea Arcangeli
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Hugh Dickins
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 24a5588..a5f3ad7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -914,8 +914,8 @@ static void task_numa_placement(struct task_struct *p)
 		p->numa_faults_curr[idx] = 0;

 		/* Keep a simple running average: */
-		p->numa_faults[idx] += new_faults;
-		p->numa_faults[idx] /= 2;
+		p->numa_faults[idx] = p->numa_faults[idx]*7 + new_faults;
+		p->numa_faults[idx] /= 8;

 		faults += p->numa_faults[idx];
 		total[priv] += p->numa_faults[idx];
-- 
1.7.11.7
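
For illustration only, the following standalone user-space sketch (not part of
the patch; the sample values and variable names avg_half, avg_78 and samples[]
are made up) compares the old (avg + new)/2 running average with the new
(avg*7 + new)/8 weighting when a single transient spike hits the per-tick
fault count, showing how much less the weighted average reacts to it:

/*
 * Sketch: feed both averaging schemes the same fault samples and
 * print how each responds to a one-off spike at tick 3.
 */
#include <stdio.h>

int main(void)
{
	/* Hypothetical per-tick fault samples: steady load, one transient spike. */
	unsigned long samples[] = { 100, 100, 100, 1000, 100, 100, 100, 100 };
	unsigned long avg_half = 100;	/* old scheme: (avg + new) / 2   */
	unsigned long avg_78   = 100;	/* new scheme: (avg*7 + new) / 8 */
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		avg_half = (avg_half + samples[i]) / 2;
		avg_78   = (avg_78 * 7 + samples[i]) / 8;

		printf("tick %u: sample=%4lu  avg/2=%4lu  avg*7/8=%4lu\n",
		       i, samples[i], avg_half, avg_78);
	}

	return 0;
}

With these numbers the 1/2-weight average jumps from 100 to 550 on the spike,
while the 7/8-weight average only moves to 212, at the cost of decaying back
towards the steady-state value more slowly.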