From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>, Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Ingo Molnar <mingo@kernel.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 15/27] sched: Track NUMA hinting faults on per-node basis
Date: Thu, 8 Aug 2013 15:00:27 +0100
Message-ID: <1375970439-5111-16-git-send-email-mgorman@suse.de>
In-Reply-To: <1375970439-5111-1-git-send-email-mgorman@suse.de>
This patch tracks which nodes NUMA hinting faults were incurred on.
This information is later used to schedule a task on the node storing
the pages the task faults on most frequently.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
include/linux/sched.h | 2 ++
kernel/sched/core.c | 3 +++
kernel/sched/fair.c | 11 ++++++++++-
kernel/sched/sched.h | 12 ++++++++++++
4 files changed, 27 insertions(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 59c473b..702a5b6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1334,6 +1334,8 @@ struct task_struct {
unsigned int numa_scan_period_max;
u64 node_stamp; /* migration stamp */
struct callback_head numa_work;
+
+ unsigned long *numa_faults;
#endif /* CONFIG_NUMA_BALANCING */
struct rcu_head rcu;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e148975..e6dda1b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1635,6 +1635,7 @@ static void __sched_fork(struct task_struct *p)
p->numa_migrate_seq = p->mm ? p->mm->numa_scan_seq - 1 : 0;
p->numa_scan_period = sysctl_numa_balancing_scan_delay;
p->numa_work.next = &p->numa_work;
+ p->numa_faults = NULL;
#endif /* CONFIG_NUMA_BALANCING */
}
@@ -1896,6 +1897,8 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
if (mm)
mmdrop(mm);
if (unlikely(prev_state == TASK_DEAD)) {
+ task_numa_free(prev);
+
/*
* Remove function-return probe instances associated with this
* task and put them back on the free list.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d77bb32..babac71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -902,7 +902,14 @@ void task_numa_fault(int node, int pages, bool migrated)
if (!sched_feat_numa(NUMA))
return;
- /* FIXME: Allocate task-specific structure for placement policy here */
+ /* Allocate buffer to track faults on a per-node basis */
+ if (unlikely(!p->numa_faults)) {
+ int size = sizeof(*p->numa_faults) * nr_node_ids;
+
+ p->numa_faults = kzalloc(size, GFP_KERNEL|__GFP_NOWARN);
+ if (!p->numa_faults)
+ return;
+ }
/*
* If pages are properly placed (did not migrate) then scan slower.
@@ -918,6 +925,8 @@ void task_numa_fault(int node, int pages, bool migrated)
}
task_numa_placement(p);
+
+ p->numa_faults[node] += pages;
}
static void reset_ptenuma_scan(struct task_struct *p)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ef0a7b2..c2f1c86 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -6,6 +6,7 @@
#include <linux/spinlock.h>
#include <linux/stop_machine.h>
#include <linux/tick.h>
+#include <linux/slab.h>
#include "cpupri.h"
#include "cpuacct.h"
@@ -553,6 +554,17 @@ static inline u64 rq_clock_task(struct rq *rq)
return rq->clock_task;
}
+#ifdef CONFIG_NUMA_BALANCING
+static inline void task_numa_free(struct task_struct *p)
+{
+ kfree(p->numa_faults);
+}
+#else /* CONFIG_NUMA_BALANCING */
+static inline void task_numa_free(struct task_struct *p)
+{
+}
+#endif /* CONFIG_NUMA_BALANCING */
+
#ifdef CONFIG_SMP
#define rcu_dereference_check_sched_domain(p) \
--
1.8.1.4
--