From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from outbound-smtp05.blacknight.com ([81.17.249.38]:40700 "EHLO
	outbound-smtp05.blacknight.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753715AbdGJMnT (ORCPT );
	Mon, 10 Jul 2017 08:43:19 -0400
Received: from mail.blacknight.com (pemlinmail05.blacknight.ie [81.17.254.26])
	by outbound-smtp05.blacknight.com (Postfix) with ESMTPS id BB44398E6C
	for ; Mon, 10 Jul 2017 12:37:53 +0000 (UTC)
From: Mel Gorman
To: Linux-Stable
Cc: Mel Gorman
Subject: [PATCH 6/9] sched/fair: Simplify wake_affine() for the single socket case
Date: Mon, 10 Jul 2017 13:37:49 +0100
Message-Id: <20170710123752.7563-7-mgorman@techsingularity.net>
In-Reply-To: <20170710123752.7563-1-mgorman@techsingularity.net>
References: <20170710123752.7563-1-mgorman@techsingularity.net>
Sender: stable-owner@vger.kernel.org
List-ID:

From: Rik van Riel

commit 7d894e6e34a5cdd12309c7e4a3f830277ad4b7bf upstream.

When 'this_cpu' and 'prev_cpu' are in the same socket, select_idle_sibling()
will do its thing regardless of the return value of wake_affine().

Just return true and don't look at all the other things.
Signed-off-by: Rik van Riel
Cc: Linus Torvalds
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: jhladky@redhat.com
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20170623165530.22514-3-riel@redhat.com
Signed-off-by: Ingo Molnar
Signed-off-by: Mel Gorman
---
 kernel/sched/fair.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bbf45ed4a370..c82ac75547d5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5399,6 +5399,13 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	this_load = target_load(this_cpu, idx);
 
 	/*
+	 * Common case: CPUs are in the same socket, and select_idle_sibling()
+	 * will do its thing regardless of what we return:
+	 */
+	if (cpus_share_cache(prev_cpu, this_cpu))
+		return true;
+
+	/*
 	 * If sync wakeup then subtract the (maximum possible)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
@@ -6012,11 +6019,15 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 	if (affine_sd) {
 		sd = NULL; /* Prefer wake_affine over balance flags */
-		if (cpu != prev_cpu && wake_affine(affine_sd, p, prev_cpu, sync))
+		if (cpu == prev_cpu)
+			goto pick_cpu;
+
+		if (wake_affine(affine_sd, p, prev_cpu, sync))
 			new_cpu = cpu;
 	}
 
 	if (!sd) {
+	pick_cpu:
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
-- 
2.13.1