From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>, Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Ingo Molnar <mingo@kernel.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>
Subject: [PATCH 13/27] mm: Only flush TLBs if a transhuge PMD is modified for NUMA pte scanning
Date: Thu, 8 Aug 2013 15:00:25 +0100 [thread overview]
Message-ID: <1375970439-5111-14-git-send-email-mgorman@suse.de> (raw)
In-Reply-To: <1375970439-5111-1-git-send-email-mgorman@suse.de>
NUMA PTE scanning is expensive both in terms of the scanning itself and
the TLB flush if any PTEs are updated. The flush is already avoided when
no PTEs are updated, but a bug causes transhuge PMDs to be treated as
updated even when they are already pmd_numa. This patch fixes the
accounting so such PMDs no longer trigger unnecessary TLB flushes.
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
mm/huge_memory.c | 20 +++++++++++++++++---
mm/mprotect.c | 14 ++++++++++----
2 files changed, 27 insertions(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bf59194..e6beb0f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1456,6 +1456,12 @@ out:
 	return ret;
 }
 
+/*
+ * Returns
+ *  - 0 if PMD could not be locked
+ *  - 1 if PMD was locked but protections unchanged and TLB flush unnecessary
+ *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
+ */
 int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long addr, pgprot_t newprot, int prot_numa)
 {
@@ -1464,22 +1470,30 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
 	if (__pmd_trans_huge_lock(pmd, vma) == 1) {
 		pmd_t entry;
-		entry = pmdp_get_and_clear(mm, addr, pmd);
+		ret = 1;
 		if (!prot_numa) {
+			entry = pmdp_get_and_clear(mm, addr, pmd);
 			entry = pmd_modify(entry, newprot);
+			ret = HPAGE_PMD_NR;
 			BUG_ON(pmd_write(entry));
 		} else {
 			struct page *page = pmd_page(*pmd);
 
+			ret = 1;
 			/* only check non-shared pages */
 			if (page_mapcount(page) == 1 &&
 			    !pmd_numa(*pmd)) {
+				entry = pmdp_get_and_clear(mm, addr, pmd);
 				entry = pmd_mknuma(entry);
+				ret = HPAGE_PMD_NR;
 			}
 		}
-		set_pmd_at(mm, addr, pmd, entry);
+
+		/* Set PMD if cleared earlier */
+		if (ret == HPAGE_PMD_NR)
+			set_pmd_at(mm, addr, pmd, entry);
+
 		spin_unlock(&vma->vm_mm->page_table_lock);
-		ret = 1;
 	}
 
 	return ret;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 94722a4..1f4ab1c 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -143,10 +143,16 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		if (pmd_trans_huge(*pmd)) {
 			if (next - addr != HPAGE_PMD_SIZE)
 				split_huge_page_pmd(vma, addr, pmd);
-			else if (change_huge_pmd(vma, pmd, addr, newprot,
-					 prot_numa)) {
-				pages += HPAGE_PMD_NR;
-				continue;
+			else {
+				int nr_ptes = change_huge_pmd(vma, pmd, addr,
+						newprot, prot_numa);
+
+				if (nr_ptes) {
+					if (nr_ptes == HPAGE_PMD_NR)
+						pages += HPAGE_PMD_NR;
+
+					continue;
+				}
 			}
 			/* fall through */
 		}
--
1.8.1.4