From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>, Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Ingo Molnar <mingo@kernel.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 23/27] mm: numa: Scan pages with elevated page_mapcount
Date: Thu, 8 Aug 2013 15:00:35 +0100
Message-ID: <1375970439-5111-24-git-send-email-mgorman@suse.de>
In-Reply-To: <1375970439-5111-1-git-send-email-mgorman@suse.de>
Currently automatic NUMA balancing is unable to distinguish between falsely
shared and genuinely private pages except by ignoring pages with an elevated
page_mapcount entirely. This avoids shared pages bouncing between the
nodes whose tasks are using them, but it also means quite a lot of data
is ignored.
This patch kicks away the training wheels in preparation for adding support
for identifying shared/private pages. The ordering is such that the impact
of the shared/private detection can be easily measured. Note that the
patch does not migrate shared, file-backed pages within VMAs marked
VM_EXEC as these are generally shared library pages. Migrating such pages
is not beneficial as there is an expectation they are read-shared between
caches, and iTLB and iCache pressure is generally low.
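To make the effect easy to see at a glance, the relaxed check that replaces
the old page_mapcount() hammer can be sketched as below. Note that
skip_numa_migrate() is a hypothetical helper used purely for illustration;
the patch itself open-codes this test in migrate_misplaced_page():

	/*
	 * Sketch only: summarises the new policy. Previously any page
	 * with page_mapcount() > 1 was refused migration. Now only
	 * file-backed pages mapped by multiple processes through an
	 * executable VMA (i.e. probable shared library text) are refused.
	 */
	static bool skip_numa_migrate(struct page *page,
				      struct vm_area_struct *vma)
	{
		return page_mapcount(page) != 1 &&
		       page_is_file_cache(page) &&
		       (vma->vm_flags & VM_EXEC);
	}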
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
include/linux/migrate.h | 7 ++++---
mm/huge_memory.c | 5 +----
mm/memory.c | 7 ++-----
mm/migrate.c | 17 ++++++-----------
mm/mprotect.c | 4 +---
5 files changed, 14 insertions(+), 26 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index a405d3dc..e7e26af 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -92,11 +92,12 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_NUMA_BALANCING
-extern int migrate_misplaced_page(struct page *page, int node);
+extern int migrate_misplaced_page(struct page *page,
+ struct vm_area_struct *vma, int node);
extern bool migrate_ratelimited(int node);
#else
-static inline int migrate_misplaced_page(struct page *page, int node)
+static inline int migrate_misplaced_page(struct page *page,
+ struct vm_area_struct *vma, int node)
{
return -EAGAIN; /* can't migrate now */
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 52c4706..a6153eb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1478,12 +1478,9 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
ret = HPAGE_PMD_NR;
BUG_ON(pmd_write(entry));
} else {
- struct page *page = pmd_page(*pmd);
ret = 1;
- /* only check non-shared pages */
- if (page_mapcount(page) == 1 &&
- !pmd_numa(*pmd)) {
+ if (!pmd_numa(*pmd)) {
entry = pmdp_get_and_clear(mm, addr, pmd);
entry = pmd_mknuma(entry);
ret = HPAGE_PMD_NR;
diff --git a/mm/memory.c b/mm/memory.c
index 7170707..0e7010c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3579,7 +3579,7 @@ int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
}
/* Migrate to the requested node */
- migrated = migrate_misplaced_page(page, target_nid);
+ migrated = migrate_misplaced_page(page, vma, target_nid);
if (migrated)
page_nid = target_nid;
@@ -3644,16 +3644,13 @@ static int do_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
page = vm_normal_page(vma, addr, pteval);
if (unlikely(!page))
continue;
- /* only check non-shared pages */
- if (unlikely(page_mapcount(page) != 1))
- continue;
last_nid = page_nid_last(page);
page_nid = page_to_nid(page);
target_nid = numa_migrate_prep(page, vma, addr, page_nid);
pte_unmap_unlock(pte, ptl);
if (target_nid != -1) {
- migrated = migrate_misplaced_page(page, target_nid);
+ migrated = migrate_misplaced_page(page, vma, target_nid);
if (migrated)
page_nid = target_nid;
} else {
diff --git a/mm/migrate.c b/mm/migrate.c
index 6f0c244..08ac3ba 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1596,7 +1596,8 @@ int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
* node. Caller is expected to have an elevated reference count on
* the page that will be dropped by this function before returning.
*/
-int migrate_misplaced_page(struct page *page, int node)
+int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
+ int node)
{
pg_data_t *pgdat = NODE_DATA(node);
int isolated;
@@ -1604,10 +1605,11 @@ int migrate_misplaced_page(struct page *page, int node)
LIST_HEAD(migratepages);
/*
- * Don't migrate pages that are mapped in multiple processes.
- * TODO: Handle false sharing detection instead of this hammer
+ * Don't migrate file pages that are mapped in multiple processes
+ * with execute permissions as they are probably shared libraries.
*/
- if (page_mapcount(page) != 1)
+ if (page_mapcount(page) != 1 && page_is_file_cache(page) &&
+ (vma->vm_flags & VM_EXEC))
goto out;
/*
@@ -1658,13 +1660,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
int page_lru = page_is_file_cache(page);
/*
- * Don't migrate pages that are mapped in multiple processes.
- * TODO: Handle false sharing detection instead of this hammer
- */
- if (page_mapcount(page) != 1)
- goto out_dropref;
-
- /*
* Rate-limit the amount of data that is being migrated to a node.
* Optimal placement is no good if the memory bus is saturated and
* all the time is being spent migrating!
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 8e7e9bd..df64356 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -69,9 +69,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
if (last_nid != this_nid)
all_same_node = false;
- /* only check non-shared pages */
- if (!pte_numa(oldpte) &&
- page_mapcount(page) == 1) {
+ if (!pte_numa(oldpte)) {
ptent = pte_mknuma(ptent);
updated = true;
}
--
1.8.1.4