From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from outbound-smtp02.blacknight.com ([81.17.249.8]:45792 "EHLO
	outbound-smtp02.blacknight.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752677AbdGJMnG (ORCPT );
	Mon, 10 Jul 2017 08:43:06 -0400
Received: from mail.blacknight.com (pemlinmail05.blacknight.ie [81.17.254.26])
	by outbound-smtp02.blacknight.com (Postfix) with ESMTPS id 9771298E69
	for ; Mon, 10 Jul 2017 12:37:53 +0000 (UTC)
From: Mel Gorman
To: Linux-Stable
Cc: Mel Gorman
Subject: [PATCH 5/9] sched/numa: Override part of migrate_degrades_locality() when idle balancing
Date: Mon, 10 Jul 2017 13:37:48 +0100
Message-Id: <20170710123752.7563-6-mgorman@techsingularity.net>
In-Reply-To: <20170710123752.7563-1-mgorman@techsingularity.net>
References: <20170710123752.7563-1-mgorman@techsingularity.net>
Sender: stable-owner@vger.kernel.org
List-ID:

From: Rik van Riel

commit 739294fb03f590401bbd7faa6d31a507e3ffada5 upstream.

Several tests in the NAS benchmark seem to run a lot slower with
NUMA balancing enabled than with NUMA balancing disabled. The
slower run time corresponds with increased idle time.

Overriding the final test of migrate_degrades_locality (but still
doing the other NUMA tests first) seems to improve performance
of those benchmarks.
Reported-by: Jirka Hladky
Signed-off-by: Rik van Riel
Cc: Linus Torvalds
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20170623165530.22514-2-riel@redhat.com
Signed-off-by: Ingo Molnar
Signed-off-by: Mel Gorman
---
 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 85ed4d2df424..bbf45ed4a370 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6687,6 +6687,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 	if (dst_nid == p->numa_preferred_nid)
 		return 0;
 
+	/* Leaving a core idle is often worse than degrading locality. */
+	if (env->idle != CPU_NOT_IDLE)
+		return -1;
+
 	if (numa_group) {
 		src_faults = group_faults(p, src_nid);
 		dst_faults = group_faults(p, dst_nid);
-- 
2.13.1