From: Ingo Molnar <mingo@kernel.org>
To: Linus Torvalds <torvalds@linux-foundation.org>,
David Rientjes <rientjes@google.com>,
Mel Gorman <mgorman@suse.de>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Paul Turner <pjt@google.com>,
Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
Christoph Lameter <cl@linux.com>, Rik van Riel <riel@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Johannes Weiner <hannes@cmpxchg.org>,
Hugh Dickins <hughd@google.com>
Subject: [PATCH] mm, numa: Turn 4K pte NUMA faults into effective hugepage ones
Date: Tue, 20 Nov 2012 16:29:33 +0100
Message-ID: <20121120152933.GA17996@gmail.com>
In-Reply-To: <20121120071704.GA14199@gmail.com>
* Ingo Molnar <mingo@kernel.org> wrote:
> * Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > On Mon, Nov 19, 2012 at 12:36 PM, Ingo Molnar <mingo@kernel.org> wrote:
> > >
> > > Hugepages is a must for most forms of NUMA/HPC. This alone
> > > questions the relevance of most of your prior numa/core testing
> > > results. I now have to strongly dispute your other conclusions
> > > as well.
> >
> > Ingo, stop doing this kind of crap.
> >
> > Let's make it clear: if the NUMA patches continue to regress
> > performance for reasonable loads (and that very much includes
> > "no THP") then they won't be merged.
> [...]
>
> No doubt numa/core should not regress with THP off or on and
> I'll fix that.
Once it was clear how Mel's workload was configured, I could
immediately reproduce it myself as well, and the fix was easy
and straightforward: the attached patch should do the trick.
(Lightly tested.)
Updated 32-warehouse SPECjbb benchmark results on a 4x4, 64 GB
system:

  mainline:                395 k/sec
  numa/core +patch:        512 k/sec  [ +29.6% ]

  mainline +THP:           524 k/sec
  numa/core +patch +THP:   654 k/sec  [ +24.8% ]
So here on my box the reported 32-warehouse SPECjbb regressions
are fixed to the best of my knowledge, and numa/core is now a
nice unconditional speedup over mainline.
CONFIG_NUMA_BALANCING=y brings roughly as much of a speedup over
mainline as CONFIG_TRANSPARENT_HUGEPAGE=y itself does - and the
two features together yield roughly the combination of the two
speedups: +65% over vanilla mainline, which looks pretty
impressive.
This fix had no impact on the good "+THP +NUMA" results that
were already reproducible with -v16.

Mel, David, could you give this patch a whirl too? It should
improve !THP workloads.
( The 4x JVM regression is still an open bug I think - I'll
re-check and fix that one next, no need to re-report it,
I'm on it. )
Thanks,
Ingo
----------------------------->
Subject: mm, numa: Turn 4K pte NUMA faults into effective hugepage ones
From: Ingo Molnar <mingo@kernel.org>
Date: Tue Nov 20 15:48:26 CET 2012
Reduce the 4K NUMA page fault count by looking around the
faulting address and processing nearby ptes as well, if possible.

To keep the logic simple and straightforward we make a couple of
simplifications:

 - we only scan within the HPAGE_SIZE range of the faulting
   address

 - we only go as far as the vma allows us
While at it, also simplify the do_numa_page() flow, and fix the
double faulting we previously incurred due to not properly
fixing up freshly migrated ptes.
Suggested-by: Mel Gorman <mgorman@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
mm/memory.c | 101 +++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 66 insertions(+), 35 deletions(-)
Index: linux/mm/memory.c
===================================================================
--- linux.orig/mm/memory.c
+++ linux/mm/memory.c
@@ -3455,64 +3455,94 @@ static int do_nonlinear_fault(struct mm_
return __do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
}
-static int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
+static int __do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long address, pte_t *ptep, pmd_t *pmd,
- unsigned int flags, pte_t entry)
+ unsigned int flags, pte_t entry, spinlock_t *ptl)
{
- struct page *page = NULL;
- int node, page_nid = -1;
- int last_cpu = -1;
- spinlock_t *ptl;
-
- ptl = pte_lockptr(mm, pmd);
- spin_lock(ptl);
- if (unlikely(!pte_same(*ptep, entry)))
- goto out_unlock;
+ struct page *page;
+ int new_node;
page = vm_normal_page(vma, address, entry);
if (page) {
- get_page(page);
- page_nid = page_to_nid(page);
- last_cpu = page_last_cpu(page);
- node = mpol_misplaced(page, vma, address);
- if (node != -1 && node != page_nid)
+ int page_nid = page_to_nid(page);
+ int last_cpu = page_last_cpu(page);
+
+ new_node = mpol_misplaced(page, vma, address);
+ if (new_node != -1 && new_node != page_nid)
goto migrate;
+ task_numa_fault(page_nid, last_cpu, 1);
}
-out_pte_upgrade_unlock:
+out_pte_upgrade:
flush_cache_page(vma, address, pte_pfn(entry));
-
ptep_modify_prot_start(mm, address, ptep);
entry = pte_modify(entry, vma->vm_page_prot);
+ if (pte_dirty(entry))
+ entry = pte_mkwrite(entry);
ptep_modify_prot_commit(mm, address, ptep, entry);
-
/* No TLB flush needed because we upgraded the PTE */
-
update_mmu_cache(vma, address, ptep);
-
-out_unlock:
- pte_unmap_unlock(ptep, ptl);
-
- if (page) {
- task_numa_fault(page_nid, last_cpu, 1);
- put_page(page);
- }
out:
return 0;
migrate:
+ get_page(page);
pte_unmap_unlock(ptep, ptl);
- if (migrate_misplaced_page(page, node)) {
+ migrate_misplaced_page(page, new_node);
+
+ /* Re-check after migration: */
+
+ ptl = pte_lockptr(mm, pmd);
+ spin_lock(ptl);
+ entry = ACCESS_ONCE(*ptep);
+
+ if (!pte_numa(vma, entry))
goto out;
- }
- page = NULL;
- ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
- if (!pte_same(*ptep, entry))
- goto out_unlock;
+ page = vm_normal_page(vma, address, entry);
+ goto out_pte_upgrade;
+}
- goto out_pte_upgrade_unlock;
+/*
+ * Add a simple loop to also fetch ptes within the same pmd:
+ */
+static int do_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
+ unsigned long addr0, pte_t *ptep0, pmd_t *pmd,
+ unsigned int flags, pte_t entry0)
+{
+ unsigned long addr0_pmd = addr0 & PMD_MASK;
+ unsigned long addr_start;
+ unsigned long addr;
+ spinlock_t *ptl;
+ int entries = 0;
+ pte_t *ptep;
+
+ addr_start = max(addr0_pmd, vma->vm_start);
+ ptep = pte_offset_map(pmd, addr_start);
+
+ ptl = pte_lockptr(mm, pmd);
+ spin_lock(ptl);
+
+ for (addr = addr_start; addr < vma->vm_end; addr += PAGE_SIZE, ptep++) {
+ pte_t entry;
+
+ entry = ACCESS_ONCE(*ptep);
+
+ if ((addr & PMD_MASK) != addr0_pmd)
+ break;
+ if (!pte_present(entry))
+ continue;
+ if (!pte_numa(vma, entry))
+ continue;
+
+ __do_numa_page(mm, vma, addr, ptep, pmd, flags, entry, ptl);
+ entries++;
+ }
+
+ pte_unmap_unlock(ptep, ptl);
+
+ return 0;
}
/*
@@ -3536,6 +3566,7 @@ int handle_pte_fault(struct mm_struct *m
spinlock_t *ptl;
entry = ACCESS_ONCE(*pte);
+
if (!pte_present(entry)) {
if (pte_none(entry)) {
if (vma->vm_ops) {
--