From: Ingo Molnar
Subject: [PATCH 29/52] sched: Implement NUMA scanning backoff
Date: Sun, 2 Dec 2012 19:43:21 +0100
Message-Id: <1354473824-19229-30-git-send-email-mingo@kernel.org>
In-Reply-To: <1354473824-19229-1-git-send-email-mingo@kernel.org>
References: <1354473824-19229-1-git-send-email-mingo@kernel.org>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter, Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli, Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins

Back off slowly from scanning, up to sysctl_sched_numa_scan_period_max
(1.6 seconds). Scan faster again if we were forced to switch to
another node.

This makes sure that workloads in equilibrium don't get scanned as
often as workloads that are still converging.

Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Andrea Arcangeli
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Hugh Dickins
Signed-off-by: Ingo Molnar
---
 kernel/sched/core.c | 6 ++++++
 kernel/sched/fair.c | 8 +++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8ef9a46..39cf991 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6029,6 +6029,12 @@ void sched_setnuma(struct task_struct *p, int node, int shared)
 	if (on_rq)
 		enqueue_task(rq, p, 0);
 	task_rq_unlock(rq, p, &flags);
+
+	/*
+	 * Reset the scanning period. If the task converges
+	 * on this node then we'll back off again:
+	 */
+	p->numa_scan_period = sysctl_sched_numa_scan_period_min;
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8f0e6ba..59fea2e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -865,8 +865,10 @@ static void task_numa_placement(struct task_struct *p)
 		}
 	}
 
-	if (max_node != p->numa_max_node)
+	if (max_node != p->numa_max_node) {
 		sched_setnuma(p, max_node, task_numa_shared(p));
+		goto out_backoff;
+	}
 
 	p->numa_migrate_seq++;
 	if (sched_feat(NUMA_SETTLE) &&
@@ -882,7 +884,11 @@ static void task_numa_placement(struct task_struct *p)
 	if (shared != task_numa_shared(p)) {
 		sched_setnuma(p, p->numa_max_node, shared);
 		p->numa_migrate_seq = 0;
+		goto out_backoff;
 	}
+	return;
+out_backoff:
+	p->numa_scan_period = min(p->numa_scan_period * 2, sysctl_sched_numa_scan_period_max);
 }
 
 /*
-- 
1.7.11.7
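
[ For reference, a minimal user-space sketch of the backoff policy this
  patch implements, not kernel code: the scan period doubles on each
  settled placement pass up to the maximum, and drops back to the
  minimum when the task is moved to another node. The 100 ms minimum is
  an assumed placeholder (the patch reads sysctl_sched_numa_scan_period_min,
  whose value is not quoted here); only the 1.6 s maximum is stated in
  the changelog. ]

#include <stdio.h>

#define SCAN_PERIOD_MIN_MS	 100	/* assumed placeholder minimum */
#define SCAN_PERIOD_MAX_MS	1600	/* 1.6 s, per the changelog */

static unsigned int scan_period = SCAN_PERIOD_MIN_MS;

/* Double the period after a settled placement pass, capped at the max. */
static void scan_backoff(void)
{
	scan_period *= 2;
	if (scan_period > SCAN_PERIOD_MAX_MS)
		scan_period = SCAN_PERIOD_MAX_MS;
}

/* Drop back to the minimum when the task is moved to another node. */
static void scan_reset_on_node_switch(void)
{
	scan_period = SCAN_PERIOD_MIN_MS;
}

int main(void)
{
	int i;

	for (i = 0; i < 6; i++) {
		scan_backoff();
		printf("pass %d: scan period %u ms\n", i, scan_period);
	}

	scan_reset_on_node_switch();
	printf("after node switch: scan period %u ms\n", scan_period);
	return 0;
}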