linux-mm.kvack.org archive mirror
* [PATCH 0/2] numa/core updates
@ 2012-12-02 16:13 Ingo Molnar
  2012-12-02 16:13 ` [PATCH 1/2] sched: Exclude pinned tasks from the NUMA-balancing logic Ingo Molnar
  2012-12-02 16:13 ` [PATCH 2/2] sched: Add RSS filter to NUMA-balancing Ingo Molnar
  0 siblings, 2 replies; 4+ messages in thread
From: Ingo Molnar @ 2012-12-02 16:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter,
	Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins

I've been testing wider workloads and here are two more small and
obvious patches rounding out numa/core behavior around the edges.

The NUMA code should now be pretty unintrusive to all but the
long-running, memory-intensive workloads where it's expected to
make a (positive) difference.

Short-run workloads like kbuild or hackbench don't trigger the
NUMA code now. The limits can be reconsidered later on,
iteratively - the goal now is to not regress.

Thanks,

	Ingo

-------------->
Ingo Molnar (2):
  sched: Exclude pinned tasks from the NUMA-balancing logic
  sched: Add RSS filter to NUMA-balancing

 include/linux/sched.h   |  1 +
 kernel/sched/core.c     |  6 ++++++
 kernel/sched/debug.c    |  1 +
 kernel/sched/fair.c     | 53 +++++++++++++++++++++++++++++++++++++++++++++----
 kernel/sched/features.h |  1 +
 kernel/sysctl.c         |  7 +++++++
 6 files changed, 65 insertions(+), 4 deletions(-)

-- 
1.7.11.7

* [PATCH 1/2] sched: Exclude pinned tasks from the NUMA-balancing logic
  2012-12-02 16:13 [PATCH 0/2] numa/core updates Ingo Molnar
@ 2012-12-02 16:13 ` Ingo Molnar
  2012-12-02 16:13 ` [PATCH 2/2] sched: Add RSS filter to NUMA-balancing Ingo Molnar
  1 sibling, 0 replies; 4+ messages in thread
From: Ingo Molnar @ 2012-12-02 16:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter,
	Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins

Don't try to NUMA-balance hard-bound tasks in vain. This
also makes it easier to compare hard-bound workloads against
NUMA-balanced workloads, because the NUMA code will
be completely inactive for those hard-bound tasks.

( Keep a debugging feature flag around: for development it
  makes sense to observe what NUMA balancing tries to do
  with hard-affine tasks. )

[ Note: the duplicated test condition will be consolidated
  in the next patch. ]
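
( For reference, a task becomes 'hard-bound' in this sense once
  user-space narrows its affinity mask below the full set of
  online CPUs - a minimal userspace sketch; pinning to CPU 0 is
  just an example: )

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	int main(void)
	{
		cpu_set_t mask;

		CPU_ZERO(&mask);
		CPU_SET(0, &mask);	/* allow CPU 0 only */

		/*
		 * nr_cpus_allowed becomes 1 here, which differs from
		 * num_online_cpus() on any multi-CPU system, so the
		 * NUMA-balancing logic above will skip this task:
		 */
		if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
			perror("sched_setaffinity");

		return 0;
	}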

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c     | 6 ++++++
 kernel/sched/debug.c    | 1 +
 kernel/sched/fair.c     | 7 +++++++
 kernel/sched/features.h | 1 +
 4 files changed, 15 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 85fd67c..69b18b3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4664,6 +4664,12 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 
 	cpumask_copy(&p->cpus_allowed, new_mask);
 	p->nr_cpus_allowed = cpumask_weight(new_mask);
+
+#ifdef CONFIG_NUMA_BALANCING
+	/* Don't disturb hard-bound tasks: */
+	if (sched_feat(NUMA_EXCLUDE_AFFINE) && (p->nr_cpus_allowed != num_online_cpus()))
+		p->numa_shared = -1;
+#endif
 }
 
 /*
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 2cd3c1b..e10b714 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -448,6 +448,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 
 	nr_switches = p->nvcsw + p->nivcsw;
 
+	P(nr_cpus_allowed);
 #ifdef CONFIG_SCHEDSTATS
 	PN(se.statistics.wait_start);
 	PN(se.statistics.sleep_start);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eaff006..9667191 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2495,6 +2495,13 @@ static void task_tick_numa(struct rq *rq, struct task_struct *curr)
 	if (!curr->mm || (curr->flags & PF_EXITING) || !curr->numa_faults)
 		return;
 
+	/* Don't disturb hard-bound tasks: */
+	if (sched_feat(NUMA_EXCLUDE_AFFINE) && (curr->nr_cpus_allowed != num_online_cpus())) {
+		if (curr->numa_shared >= 0)
+			curr->numa_shared = -1;
+		return;
+	}
+
 	task_tick_numa_scan(rq, curr);
 	task_tick_numa_placement(rq, curr);
 }
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 1775b80..5598f63 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -77,6 +77,7 @@ SCHED_FEAT(WAKE_ON_IDEAL_CPU,		false)
 SCHED_FEAT(NUMA,			true)
 SCHED_FEAT(NUMA_BALANCE_ALL,		false)
 SCHED_FEAT(NUMA_BALANCE_INTERNODE,	false)
+SCHED_FEAT(NUMA_EXCLUDE_AFFINE,		true)
 SCHED_FEAT(NUMA_LB,			false)
 SCHED_FEAT(NUMA_GROUP_LB_COMPRESS,	true)
 SCHED_FEAT(NUMA_GROUP_LB_SPREAD,	true)
-- 
1.7.11.7

* [PATCH 2/2] sched: Add RSS filter to NUMA-balancing
  2012-12-02 16:13 [PATCH 0/2] numa/core updates Ingo Molnar
  2012-12-02 16:13 ` [PATCH 1/2] sched: Exclude pinned tasks from the NUMA-balancing logic Ingo Molnar
@ 2012-12-02 16:13 ` Ingo Molnar
  2012-12-02 19:45   ` [PATCH 2/2, v2] " Ingo Molnar
  1 sibling, 1 reply; 4+ messages in thread
From: Ingo Molnar @ 2012-12-02 16:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter,
	Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins

NUMA-balancing, combined with NUMA-affine memory migration,
is a relatively long-term process (compared to the typical
time scale of scheduling) that takes time to establish and
converge - on the time scale of several seconds or more.

Small tasks are usually short-lived and don't have much of a
NUMA placement cost to begin with, so don't NUMA-balance them.
A task needs to execute long enough and needs to establish a
large enough user-space memory image to benefit from more
intelligent NUMA balancing.

We already have a CPU time limit before tasks are affected
by NUMA balancing - this change adds the memory equivalent,
by introducing an RSS limit of 128 MB.
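
( With the common 4 KiB page size, i.e. PAGE_SHIFT == 12, the
  MB-to-pages conversion used in the patch below works out to:

	128 MB == 128 << (20 - 12) pages == 32768 pages. )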

In practice this excludes most short-lived tasks - the limit
is in fact probably a bit on the conservative side - but with
intrusive kernel features conservative is good.

The /proc/sys/kernel/sched_numa_rss_threshold_mb value can be
tuned at runtime - setting it to 0 turns off this filter.
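
( For example - the 256 MB value below is just an illustration -
  the limit can be raised, or the filter disabled, like this: )

	echo 256 > /proc/sys/kernel/sched_numa_rss_threshold_mb
	echo   0 > /proc/sys/kernel/sched_numa_rss_threshold_mb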

To implement the RSS filter, first factor out a clean
task_numa_candidate() function and comment on the various
reasons why we wouldn't want to begin NUMA-balancing a
particular task (yet). Then add the RSS check.

Note, we are using the p->mm->hiwater_rss value instead of the
current RSS size, to avoid tasks flipping in and out of the
limit if their RSS fluctuates around it. The RSS high-water
value increases monotonically over the lifetime of a task, so
there's a single, precise transition to NUMA-balancing as the
limit is crossed.
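
( A self-contained sketch of why a monotonic high-water mark
  gives a single transition - hypothetical helper, not code
  from this patch: )

	#include <stdbool.h>

	static unsigned long hiwater_rss;	/* only ever grows */

	static bool rss_above_limit(unsigned long rss_pages, unsigned long limit_pages)
	{
		if (rss_pages > hiwater_rss)
			hiwater_rss = rss_pages;

		/*
		 * Once the limit has been crossed this stays true,
		 * even if the current RSS later drops below the
		 * limit again:
		 */
		return hiwater_rss >= limit_pages;
	}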

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 50 ++++++++++++++++++++++++++++++++++++++++++++------
 kernel/sysctl.c       |  7 +++++++
 3 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ce834e7..6a29dfd 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2059,6 +2059,7 @@ extern unsigned int sysctl_sched_numa_scan_period_min;
 extern unsigned int sysctl_sched_numa_scan_period_max;
 extern unsigned int sysctl_sched_numa_scan_size_min;
 extern unsigned int sysctl_sched_numa_scan_size_max;
+extern unsigned int sysctl_sched_numa_rss_threshold;
 extern unsigned int sysctl_sched_numa_settle_count;
 
 #ifdef CONFIG_SCHED_DEBUG
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9667191..eb49f07 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -812,6 +812,8 @@ unsigned int sysctl_sched_numa_scan_period_max	__read_mostly = 100*16;	/* ms */
 unsigned int sysctl_sched_numa_scan_size_min	__read_mostly =  32;	/* MB */
 unsigned int sysctl_sched_numa_scan_size_max	__read_mostly = 512;	/* MB */
 
+unsigned int sysctl_sched_numa_rss_threshold	__read_mostly = 128;	/* MB */
+
 /*
  * Wait for the 2-sample stuff to settle before migrating again
  */
@@ -2486,17 +2488,53 @@ static void task_tick_numa_placement(struct rq *rq, struct task_struct *curr)
 	task_work_add(curr, work, true);
 }
 
-static void task_tick_numa(struct rq *rq, struct task_struct *curr)
+/*
+ * Is this task worth NUMA-scanning and NUMA-balancing?
+ */
+static bool task_numa_candidate(struct task_struct *p)
 {
+	unsigned long rss_pages;
+
+	/* kthreads don't have any user-space memory to scan: */
+	if (!p->mm || !p->numa_faults)
+		return false;
+
 	/*
-	 * We don't care about NUMA placement if we don't have memory
-	 * or are exiting:
+	 * Exiting tasks won't touch any user-space memory in the future,
+	 * and this also avoids a race with work_exit():
 	 */
-	if (!curr->mm || (curr->flags & PF_EXITING) || !curr->numa_faults)
-		return;
+	if (p->flags & PF_EXITING)
+		return false;
 
 	/* Don't disturb hard-bound tasks: */
-	if (sched_feat(NUMA_EXCLUDE_AFFINE) && (curr->nr_cpus_allowed != num_online_cpus())) {
+	if (sched_feat(NUMA_EXCLUDE_AFFINE)) {
+		if (p->nr_cpus_allowed != num_online_cpus())
+			return false;
+	}
+
+	/*
+	 * NUMA-balancing, combined with NUMA memory migration,
+	 * is a long-term process that takes time to establish
+	 * and converge, on the time scale of several seconds
+	 * or more.
+	 *
+	 * Small tasks are usually short-lived and don't have much
+	 * of a NUMA placement cost to begin with, so don't
+	 * NUMA-balance them:
+	 */
+	rss_pages = sysctl_sched_numa_rss_threshold;
+	rss_pages <<= 20 - PAGE_SHIFT; /* MB to pages */
+
+	if (p->mm->hiwater_rss < rss_pages)
+		return false;
+
+	return true;
+}
+
+static void task_tick_numa(struct rq *rq, struct task_struct *curr)
+{
+	/* Cheap checks first: */
+	if (!task_numa_candidate(curr)) {
 		if (curr->numa_shared >= 0)
 			curr->numa_shared = -1;
 		return;
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index b6ddfae..75ab895 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -388,6 +388,13 @@ static struct ctl_table kern_table[] = {
 		.proc_handler	= proc_dointvec,
 	},
 	{
+		.procname	= "sched_numa_rss_threshold_mb",
+		.data		= &sysctl_sched_numa_rss_threshold,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
+	{
 		.procname	= "sched_numa_settle_count",
 		.data		= &sysctl_sched_numa_settle_count,
 		.maxlen		= sizeof(unsigned int),
-- 
1.7.11.7

* [PATCH 2/2, v2] sched: Add RSS filter to NUMA-balancing
  2012-12-02 16:13 ` [PATCH 2/2] sched: Add RSS filter to NUMA-balancing Ingo Molnar
@ 2012-12-02 19:45   ` Ingo Molnar
  0 siblings, 0 replies; 4+ messages in thread
From: Ingo Molnar @ 2012-12-02 19:45 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter,
	Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli,
	Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins


Updated -v2 patch: the RSS high-water calculation has a performance
trick, so mm->hiwater_rss must be used together with get_mm_rss().
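
( A minimal sketch of the combined check - assuming the max()
  idiom that the stock get_mm_hiwater_rss() helper in
  <linux/mm.h> implements: )

	/*
	 * mm->hiwater_rss is only updated at specific points, so
	 * the current RSS can temporarily exceed it - take the
	 * larger of the two before comparing against the threshold:
	 */
	rss_pages = max(get_mm_rss(p->mm), p->mm->hiwater_rss);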

Thanks,

	Ingo

--------------------------->

end of thread, other threads:[~2012-12-02 19:45 UTC | newest]

Thread overview: 4+ messages
2012-12-02 16:13 [PATCH 0/2] numa/core updates Ingo Molnar
2012-12-02 16:13 ` [PATCH 1/2] sched: Exclude pinned tasks from the NUMA-balancing logic Ingo Molnar
2012-12-02 16:13 ` [PATCH 2/2] sched: Add RSS filter to NUMA-balancing Ingo Molnar
2012-12-02 19:45   ` [PATCH 2/2, v2] " Ingo Molnar
