From: Mel Gorman <mgorman@suse.de>
To: Alex Thorlton <athorlton@sgi.com>
Cc: Rik van Riel <riel@redhat.com>, Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 13/15] mm: numa: Avoid unnecessary disruption of NUMA hinting during migration
Date: Tue, 3 Dec 2013 08:52:00 +0000
Message-ID: <1386060721-3794-14-git-send-email-mgorman@suse.de>
In-Reply-To: <1386060721-3794-1-git-send-email-mgorman@suse.de>

do_huge_pmd_numa_page() handles the case where there is parallel THP
migration. However, by the time the check is made the NUMA hinting
information has already been disrupted. This patch adds an earlier check
with some helpers. The pmd_trans_migrating() helper is also reused to
warn if a huge pmd copy takes place in parallel with THP migration, as
that potentially leads to corruption.
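
For illustration, the new check runs before any NUMA hinting state is
touched. A simplified sketch of the fault-path ordering introduced
below (illustrative only, see the do_huge_pmd_numa_page() hunk for the
exact code):

	spin_lock(&mm->page_table_lock);
	if (unlikely(pmd_trans_migrating(*pmdp))) {
		/*
		 * Migration holds the page lock for its duration, so
		 * PageLocked() on the huge page reliably indicates a
		 * migration in flight.
		 */
		spin_unlock(&mm->page_table_lock);
		wait_migrate_huge_page(vma->anon_vma, pmdp);
		goto out;
	}

	/* Only now is the NUMA hinting information examined */
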
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
include/linux/migrate.h | 9 +++++++++
mm/huge_memory.c | 22 ++++++++++++++++------
mm/migrate.c | 17 +++++++++++++++++
3 files changed, 42 insertions(+), 6 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 8d3c57f..804651c 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -90,10 +90,19 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_NUMA_BALANCING
+extern bool pmd_trans_migrating(pmd_t pmd);
+extern void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd);
extern int migrate_misplaced_page(struct page *page, int node);
extern bool migrate_ratelimited(int node);
#else
+static inline bool pmd_trans_migrating(pmd_t pmd)
+{
+ return false;
+}
+static inline void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd)
+{
+}
static inline int migrate_misplaced_page(struct page *page, int node)
{
return -EAGAIN; /* can't migrate now */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fa277fa..4c7abd7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -884,6 +884,10 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
ret = 0;
goto out_unlock;
}
+
+ /* mmap_sem prevents this happening but warn if that changes */
+ WARN_ON(pmd_trans_migrating(pmd));
+
if (unlikely(pmd_trans_splitting(pmd))) {
/* split huge page running from under us */
spin_unlock(&src_mm->page_table_lock);
@@ -1294,6 +1298,17 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
if (unlikely(!pmd_same(pmd, *pmdp)))
goto out_unlock;
+ /*
+ * If there are potential migrations, wait for completion and retry
+ * without disrupting NUMA hinting information. Do not relock and
+ * check_same as the page may no longer be mapped.
+ */
+ if (unlikely(pmd_trans_migrating(*pmdp))) {
+ spin_unlock(&mm->page_table_lock);
+ wait_migrate_huge_page(vma->anon_vma, pmdp);
+ goto out;
+ }
+
page = pmd_page(pmd);
page_nid = page_to_nid(page);
count_vm_numa_event(NUMA_HINT_FAULTS);
@@ -1312,12 +1327,7 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
goto clear_pmdnuma;
}
- /*
- * If there are potential migrations, wait for completion and retry. We
- * do not relock and check_same as the page may no longer be mapped.
- * Furtermore, even if the page is currently misplaced, there is no
- * guarantee it is still misplaced after the migration completes.
- */
+ /* Migration could have started since the pmd_trans_migrating check */
if (!page_locked) {
spin_unlock(&mm->page_table_lock);
wait_on_page_locked(page);
diff --git a/mm/migrate.c b/mm/migrate.c
index e429206..5dfd552 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1645,6 +1645,23 @@ int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
return 1;
}
+bool pmd_trans_migrating(pmd_t pmd) {
+ struct page *page = pmd_page(pmd);
+ return PageLocked(page);
+}
+
+void wait_migrate_huge_page(struct anon_vma *anon_vma, pmd_t *pmd)
+{
+ struct page *page = pmd_page(*pmd);
+ if (get_page_unless_zero(page)) {
+ wait_on_page_locked(page);
+ put_page(page);
+ }
+
+ /* Guarantee that the newly migrated PTE is visible */
+ smp_rmb();
+}
+
/*
* Attempt to migrate a misplaced page to the specified destination
* node. Caller is expected to have an elevated reference count on
--
1.8.4