From: Mel Gorman <mgorman@suse.de>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rik van Riel <riel@redhat.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Thomas Gleixner <tglx@linutronix.de>,
Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Paul Turner <pjt@google.com>, Ingo Molnar <mingo@kernel.org>
Subject: Re: [PATCH 03/31] mm/thp: Preserve pgprot across huge page split
Date: Thu, 1 Nov 2012 10:22:53 +0000
Message-ID: <20121101102253.GO3888@suse.de>
In-Reply-To: <20121025124832.689253266@chello.nl>
On Thu, Oct 25, 2012 at 02:16:20PM +0200, Peter Zijlstra wrote:
> We're going to play games with page-protections, ensure we don't lose
> them over a THP split.
>
Why?
If PROT_NONE becomes a present pte, we lose samples. If a present pte
becomes PROT_NONE, we get spurious faults and some sampling oddities.
Both situations only apply when a THP is being split, which implies
disruption anyway (mprotect, page reclaim etc.), and neither seems that
serious. It seems premature at this point in the series and looks like
it could have been dropped entirely.
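
For illustration (my own sketch, not code from this series; it assumes
a later patch marks huge pmds PROT_NONE to force hinting faults), the
old split path rebuilt the ptes from the vma rather than from the pmd:

	/* old __split_huge_page_map(): prot comes from the vma */
	entry = mk_pte(page + i, vma->vm_page_prot);

	/*
	 * If the huge pmd had been made PROT_NONE for fault sampling,
	 * these rebuilt ptes are ordinary present ptes, so the range
	 * silently stops generating hinting faults -- the "lose
	 * samples" case above.
	 */
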
> Collapse seems to always allocate a new (huge) page which should
> already end up on the new target node so loosing protections there
> isn't a problem.
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Reviewed-by: Rik van Riel <riel@redhat.com>
> Cc: Paul Turner <pjt@google.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
> arch/x86/include/asm/pgtable.h | 1
> mm/huge_memory.c | 105 +++++++++++++++++++----------------------
> 2 files changed, 51 insertions(+), 55 deletions(-)
>
> Index: tip/arch/x86/include/asm/pgtable.h
> ===================================================================
> --- tip.orig/arch/x86/include/asm/pgtable.h
> +++ tip/arch/x86/include/asm/pgtable.h
> @@ -349,6 +349,7 @@ static inline pgprot_t pgprot_modify(pgp
> }
>
> #define pte_pgprot(x) __pgprot(pte_flags(x) & PTE_FLAGS_MASK)
> +#define pmd_pgprot(x) __pgprot(pmd_val(x) & ~_HPAGE_CHG_MASK)
>
> #define canon_pgprot(p) __pgprot(massage_pgprot(p))
>
> Index: tip/mm/huge_memory.c
> ===================================================================
> --- tip.orig/mm/huge_memory.c
> +++ tip/mm/huge_memory.c
> @@ -1343,63 +1343,60 @@ static int __split_huge_page_map(struct
> int ret = 0, i;
> pgtable_t pgtable;
> unsigned long haddr;
> + pgprot_t prot;
>
> spin_lock(&mm->page_table_lock);
> pmd = page_check_address_pmd(page, mm, address,
> PAGE_CHECK_ADDRESS_PMD_SPLITTING_FLAG);
> - if (pmd) {
> - pgtable = pgtable_trans_huge_withdraw(mm);
> - pmd_populate(mm, &_pmd, pgtable);
> -
> - haddr = address;
> - for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
> - pte_t *pte, entry;
> - BUG_ON(PageCompound(page+i));
> - entry = mk_pte(page + i, vma->vm_page_prot);
> - entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> - if (!pmd_write(*pmd))
> - entry = pte_wrprotect(entry);
> - else
> - BUG_ON(page_mapcount(page) != 1);
> - if (!pmd_young(*pmd))
> - entry = pte_mkold(entry);
> - pte = pte_offset_map(&_pmd, haddr);
> - BUG_ON(!pte_none(*pte));
> - set_pte_at(mm, haddr, pte, entry);
> - pte_unmap(pte);
> - }
> -
> - smp_wmb(); /* make pte visible before pmd */
> - /*
> - * Up to this point the pmd is present and huge and
> - * userland has the whole access to the hugepage
> - * during the split (which happens in place). If we
> - * overwrite the pmd with the not-huge version
> - * pointing to the pte here (which of course we could
> - * if all CPUs were bug free), userland could trigger
> - * a small page size TLB miss on the small sized TLB
> - * while the hugepage TLB entry is still established
> - * in the huge TLB. Some CPU doesn't like that. See
> - * http://support.amd.com/us/Processor_TechDocs/41322.pdf,
> - * Erratum 383 on page 93. Intel should be safe but is
> - * also warns that it's only safe if the permission
> - * and cache attributes of the two entries loaded in
> - * the two TLB is identical (which should be the case
> - * here). But it is generally safer to never allow
> - * small and huge TLB entries for the same virtual
> - * address to be loaded simultaneously. So instead of
> - * doing "pmd_populate(); flush_tlb_range();" we first
> - * mark the current pmd notpresent (atomically because
> - * here the pmd_trans_huge and pmd_trans_splitting
> - * must remain set at all times on the pmd until the
> - * split is complete for this pmd), then we flush the
> - * SMP TLB and finally we write the non-huge version
> - * of the pmd entry with pmd_populate.
> - */
> - pmdp_invalidate(vma, address, pmd);
> - pmd_populate(mm, pmd, pgtable);
> - ret = 1;
> + if (!pmd)
> + goto unlock;
> +
*whinge*
Changing the pmd check like this churned the code more than necessary,
making it harder to review. It forces me to move back and forth to
figure out exactly what you added. If you wanted to do this cleanup, it
should have been a separate patch.
> + prot = pmd_pgprot(*pmd);
> + pgtable = pgtable_trans_huge_withdraw(mm);
> + pmd_populate(mm, &_pmd, pgtable);
> +
> + for (i = 0, haddr = address; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
> + pte_t *pte, entry;
> +
> + BUG_ON(PageCompound(page+i));
> + entry = mk_pte(page + i, prot);
> + entry = pte_mkdirty(entry);
For example, because of the churn, it's not obvious that the

	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
	if (!pmd_write(*pmd))
		entry = pte_wrprotect(entry);
	else
		BUG_ON(page_mapcount(page) != 1);

checks went away and that you are instead using the prot flags
retrieved by pmd_pgprot() to preserve _PAGE_RW, which I think is the
actual point of the patch even if it's not obvious from the diff.
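
Stripped of the churn, my reading of the new flow is this (a sketch,
not a quote from the patch):

	pgprot_t prot = pmd_pgprot(*pmd);	/* carries _PAGE_RW */

	/*
	 * As prot is lifted straight from the pmd, a read-only huge
	 * pmd yields read-only ptes and a writable one yields
	 * writable ptes, with no explicit pmd_write()/pte_wrprotect()
	 * dance needed:
	 */
	entry = pte_mkdirty(mk_pte(page + i, prot));
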
> + if (!pmd_young(*pmd))
> + entry = pte_mkold(entry);
> + pte = pte_offset_map(&_pmd, haddr);
> + BUG_ON(!pte_none(*pte));
> + set_pte_at(mm, haddr, pte, entry);
> + pte_unmap(pte);
> }
> +
> + smp_wmb(); /* make ptes visible before pmd, see __pte_alloc */
> + /*
> + * Up to this point the pmd is present and huge.
> + *
> + * If we overwrite the pmd with the not-huge version, we could trigger
> + * a small page size TLB miss on the small sized TLB while the hugepage
> + * TLB entry is still established in the huge TLB.
> + *
> + * Some CPUs don't like that. See
> + * http://support.amd.com/us/Processor_TechDocs/41322.pdf, Erratum 383
> + * on page 93.
> + *
> + * Thus it is generally safer to never allow small and huge TLB entries
> + * for overlapping virtual addresses to be loaded. So we first mark the
> + * current pmd not present, then we flush the TLB and finally we write
> + * the non-huge version of the pmd entry with pmd_populate.
> + *
> + * The above needs to be done under the ptl because pmd_trans_huge and
> + * pmd_trans_splitting must remain set on the pmd until the split is
> + * complete. The ptl also protects against concurrent faults due to
> + * making the pmd not-present.
> + */
> + set_pmd_at(mm, address, pmd, pmd_mknotpresent(*pmd));
> + flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> + pmd_populate(mm, pmd, pgtable);
> + ret = 1;
> +
> +unlock:
> spin_unlock(&mm->page_table_lock);
>
> return ret;
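
As an aside, unless I'm misreading it, the set_pmd_at() plus
flush_tlb_range() pair above is pmdp_invalidate() open-coded. The
generic implementation in mm/pgtable-generic.c is (from memory):

	void pmdp_invalidate(struct vm_area_struct *vma,
			     unsigned long address, pmd_t *pmdp)
	{
		set_pmd_at(vma->vm_mm, address, pmdp,
			   pmd_mknotpresent(*pmdp));
		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	}

so the old pmdp_invalidate() call would have done the same job with
less churn.
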
> @@ -2287,10 +2284,8 @@ static void khugepaged_do_scan(void)
> {
> struct page *hpage = NULL;
> unsigned int progress = 0, pass_through_head = 0;
> - unsigned int pages = khugepaged_pages_to_scan;
> bool wait = true;
> -
> - barrier(); /* write khugepaged_pages_to_scan to local stack */
> + unsigned int pages = ACCESS_ONCE(khugepaged_pages_to_scan);
>
> while (progress < pages) {
> if (!khugepaged_prealloc_page(&hpage, &wait))
>
This hunk looks fine but has nothing to do with the patch or the changelog.
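
That said, ACCESS_ONCE() is the better idiom for snapshotting a value
that a sysfs write can change underneath us. It is just a volatile
cast, roughly

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

which guarantees a single read of khugepaged_pages_to_scan into the
local, where the old barrier() trick only implied it. It still belongs
in a separate patch though.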
--
Mel Gorman
SUSE Labs