From: tip-bot for Peter Zijlstra <a.p.zijlstra@chello.nl>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, hpa@zytor.com, mingo@kernel.org,
a.p.zijlstra@chello.nl, tglx@linutronix.de
Subject: [tip:numa/core] sched/numa/mm: Avoid pointless TLB invalidation from page-migration
Date: Thu, 18 Oct 2012 10:04:41 -0700
Message-ID: <tip-h1utred3dhv2ausjg1wqwuym@git.kernel.org>
Commit-ID: 5cc4a4cb0abc63699b6741d7737e07e49b502782
Gitweb: http://git.kernel.org/tip/5cc4a4cb0abc63699b6741d7737e07e49b502782
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Thu, 11 Oct 2012 17:42:06 +0200
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 15 Oct 2012 13:56:50 +0200
sched/numa/mm: Avoid pointless TLB invalidation from page-migration
When we do migrate-on-fault, we have faulted on a PROT_NONE entry,
which is a !present entry. We then replace it with a regular (present)
entry and proceed with page-migration.

Page-migration in turn replaces the now-valid entry with a
migration-PTE (which is again !present), which requires a TLB
invalidate.

Instead, leave the PROT_NONE entry in place when we need to migrate,
such that the PROT_NONE -> migration-PTE transition is a !present ->
!present transition and doesn't require a TLB invalidate.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-h1utred3dhv2ausjg1wqwuym@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
mm/memory.c | 63 +++++++++++++++++++++++++++++------------------------------
1 file changed, 31 insertions(+), 32 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 9ada7ed..8b1ad86 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3442,38 +3442,12 @@ static bool pte_prot_none(struct vm_area_struct *vma, pte_t pte)
return pte_same(pte, pte_modify(pte, vma_prot_none(vma)));
}
-#ifdef CONFIG_NUMA
-
-
-static void do_prot_none_numa(struct mm_struct *mm, struct vm_area_struct *vma,
- unsigned long address, struct page *page)
-{
- int node, page_nid = page_to_nid(page);
-
- task_numa_placement();
-
- /*
- * For NUMA systems we use the special PROT_NONE maps to drive
- * lazy page migration, see MPOL_MF_LAZY and related.
- */
- node = mpol_misplaced(page, vma, address);
- if (node != -1 && !migrate_misplaced_page(mm, page, node))
- page_nid = node;
-
- task_numa_fault(page_nid);
-}
-#else
-static void do_prot_none_numa(struct mm_struct *mm, struct vm_area_struct *vma,
- unsigned long address, struct page *page)
-{
-}
-#endif /* CONFIG_NUMA */
-
static int do_prot_none(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, pte_t *ptep, pmd_t *pmd,
unsigned int flags, pte_t entry)
{
struct page *page = NULL;
+ int node, page_nid = -1;
spinlock_t *ptl;
ptl = pte_lockptr(mm, pmd);
@@ -3481,6 +3455,16 @@ static int do_prot_none(struct mm_struct *mm, struct vm_area_struct *vma,
if (unlikely(!pte_same(*ptep, entry)))
goto unlock;
+ page = vm_normal_page(vma, address, entry);
+ if (page) {
+ get_page(page);
+ page_nid = page_to_nid(page);
+ node = mpol_misplaced(page, vma, address);
+ if (node != -1)
+ goto migrate;
+ }
+
+fixup:
flush_cache_page(vma, address, pte_pfn(entry));
ptep_modify_prot_start(mm, address, ptep);
@@ -3489,17 +3473,32 @@ static int do_prot_none(struct mm_struct *mm, struct vm_area_struct *vma,
update_mmu_cache(vma, address, ptep);
- page = vm_normal_page(vma, address, entry);
- if (page)
- get_page(page);
-
unlock:
pte_unmap_unlock(ptep, ptl);
+out:
if (page) {
- do_prot_none_numa(mm, vma, address, page);
+ task_numa_fault(page_nid, 1);
put_page(page);
}
+
return 0;
+
+migrate:
+ pte_unmap_unlock(ptep, ptl);
+
+ if (!migrate_misplaced_page(mm, page, node)) {
+ page_nid = node;
+ goto out;
+ }
+
+ ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
+ if (!pte_same(*ptep, entry)) {
+ put_page(page);
+ page = NULL;
+ goto unlock;
+ }
+
+ goto fixup;
}
/*