From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 06/15] sched: Reschedule task on preferred NUMA node once selected
Date: Sat, 6 Jul 2013 00:08:53 +0100 [thread overview]
Message-ID: <1373065742-9753-7-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1373065742-9753-1-git-send-email-mgorman@suse.de>
A preferred node is selected based on the node that incurred the most
NUMA hinting faults. There is no guarantee that the task is running on
that node at the time, so this patch reschedules the task to run on the
idlest CPU of the preferred node once that node is selected. This avoids
waiting for the load balancer to make a decision.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
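For illustration only, here is a minimal userspace sketch of the selection
heuristic that find_idlest_cpu_node() implements below. It is not kernel
code: struct fake_cpu, find_idlest_cpu_sketch() and the sample values are
hypothetical stand-ins for weighted_cpuload(), cpumask_of_node() and the
rq->curr checks in the real function.

/*
 * Userspace illustration only -- NOT kernel code. struct fake_cpu and
 * the sample values below are hypothetical stand-ins for the per-CPU
 * runqueue state that find_idlest_cpu_node() inspects.
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_cpu {
	unsigned long load;	/* stands in for weighted_cpuload(cpu) */
	bool curr_is_kthread;	/* PF_KTHREAD set on the running task? */
	int curr_preferred_nid;	/* running task's numa_preferred_nid */
	int curr_cpus_allowed;	/* running task's nr_cpus_allowed */
};

/* Least loaded CPU on @nid whose current task may be preempted. */
static int find_idlest_cpu_sketch(const struct fake_cpu *cpus,
				  const int *node_cpus, int n,
				  int nid, int this_cpu)
{
	unsigned long min_load = ULONG_MAX;
	int idlest_cpu = this_cpu;
	int i;

	for (i = 0; i < n; i++) {
		const struct fake_cpu *c = &cpus[node_cpus[i]];

		if (c->load >= min_load)
			continue;

		/*
		 * Kernel threads can be preempted. For others, do not
		 * preempt if running on their preferred node or pinned.
		 */
		if (c->curr_is_kthread ||
		    (c->curr_preferred_nid != nid && c->curr_cpus_allowed > 1)) {
			min_load = c->load;
			idlest_cpu = node_cpus[i];
		}
	}
	return idlest_cpu;
}

int main(void)
{
	/* CPUs 2 and 3 sit on node 1; CPU 3 is lighter but pinned. */
	struct fake_cpu cpus[4] = {
		[2] = { .load = 400, .curr_preferred_nid = 0,
			.curr_cpus_allowed = 4 },
		[3] = { .load = 100, .curr_preferred_nid = 0,
			.curr_cpus_allowed = 1 },
	};
	int node1_cpus[] = { 2, 3 };

	printf("idlest eligible cpu: %d\n",
	       find_idlest_cpu_sketch(cpus, node1_cpus, 2, 1, 0));
	return 0;
}

On the sample data it selects CPU 2: CPU 3 carries less load, but its
running task is pinned to that CPU (nr_cpus_allowed == 1), so it is
skipped.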
kernel/sched/core.c | 17 ++++++++++++++++
kernel/sched/fair.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++-
kernel/sched/sched.h | 1 +
3 files changed, 72 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5e02507..e4c1832 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -992,6 +992,23 @@ struct migration_arg {
 
 static int migration_cpu_stop(void *data);
 
+#ifdef CONFIG_NUMA_BALANCING
+/* Migrate current task p to target_cpu */
+int migrate_task_to(struct task_struct *p, int target_cpu)
+{
+	struct migration_arg arg = { p, target_cpu };
+	int curr_cpu = task_cpu(p);
+
+	if (curr_cpu == target_cpu)
+		return 0;
+
+	if (!cpumask_test_cpu(target_cpu, tsk_cpus_allowed(p)))
+		return -EINVAL;
+
+	return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
+}
+#endif
+
 /*
 * wait_task_inactive - wait for a thread to unschedule.
 *
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5055bf9..5a01dcb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -800,6 +800,40 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;
 */
 unsigned int sysctl_numa_balancing_settle_count __read_mostly = 3;
 
+static unsigned long weighted_cpuload(const int cpu);
+
+
+static int
+find_idlest_cpu_node(int this_cpu, int nid)
+{
+	unsigned long load, min_load = ULONG_MAX;
+	int i, idlest_cpu = this_cpu;
+
+	BUG_ON(cpu_to_node(this_cpu) == nid);
+
+	rcu_read_lock();
+	for_each_cpu(i, cpumask_of_node(nid)) {
+		load = weighted_cpuload(i);
+
+		if (load < min_load) {
+			/*
+			 * Kernel threads can be preempted. For others, do
+			 * not preempt if running on their preferred node
+			 * or pinned.
+			 */
+			struct task_struct *p = cpu_rq(i)->curr;
+			if ((p->flags & PF_KTHREAD) ||
+			    (p->numa_preferred_nid != nid && p->nr_cpus_allowed > 1)) {
+				min_load = load;
+				idlest_cpu = i;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	return idlest_cpu;
+}
+
 static void task_numa_placement(struct task_struct *p)
 {
 	int seq, nid, max_nid = 0;
@@ -829,10 +863,29 @@ static void task_numa_placement(struct task_struct *p)
 		}
 	}
 
-	/* Update the tasks preferred node if necessary */
+	/*
+	 * Record the preferred node as the node with the most faults,
+	 * requeue the task to be running on the idlest CPU on the
+	 * preferred node and reset the scanning rate to recheck
+	 * the working set placement.
+	 */
 	if (max_faults && max_nid != p->numa_preferred_nid) {
+		int preferred_cpu;
+
+		/*
+		 * If the task is not on the preferred node then find the most
+		 * idle CPU to migrate to.
+		 */
+		preferred_cpu = task_cpu(p);
+		if (cpu_to_node(preferred_cpu) != max_nid) {
+			preferred_cpu = find_idlest_cpu_node(preferred_cpu,
+							     max_nid);
+		}
+
+		/* Update the preferred nid and migrate task if possible */
 		p->numa_preferred_nid = max_nid;
 		p->numa_migrate_seq = 0;
+		migrate_task_to(p, preferred_cpu);
 	}
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c5f773d..795346d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -504,6 +504,7 @@ DECLARE_PER_CPU(struct rq, runqueues);
 #define raw_rq()		(&__raw_get_cpu_var(runqueues))
 
 #ifdef CONFIG_NUMA_BALANCING
+extern int migrate_task_to(struct task_struct *p, int cpu);
 static inline void task_numa_free(struct task_struct *p)
 {
 	kfree(p->numa_faults);
--
1.8.1.4