From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: aarcange@redhat.com, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, paulus@samba.org, akpm@linux-foundation.org,
linuxppc-dev@lists.ozlabs.org, kirill.shutemov@linux.intel.com
Subject: Re: [PATCH V2 1/2] mm/thp: Split out pmd collapse flush into a separate function
Date: Thu, 7 May 2015 12:20:00 +0300
Message-ID: <20150507092000.GA18516@node.dhcp.inet.fi>
In-Reply-To: <1430983408-24924-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
On Thu, May 07, 2015 at 12:53:27PM +0530, Aneesh Kumar K.V wrote:
> After this patch pmdp_* functions operate only on hugepage pte,
> and not on regular pmd_t values pointing to page table.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
> arch/powerpc/include/asm/pgtable-ppc64.h | 4 ++
> arch/powerpc/mm/pgtable_64.c | 76 +++++++++++++++++---------------
> include/asm-generic/pgtable.h | 19 ++++++++
> mm/huge_memory.c | 2 +-
> 4 files changed, 65 insertions(+), 36 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
> index 43e6ad424c7f..50830c9a2116 100644
> --- a/arch/powerpc/include/asm/pgtable-ppc64.h
> +++ b/arch/powerpc/include/asm/pgtable-ppc64.h
> @@ -576,6 +576,10 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
> extern void pmdp_splitting_flush(struct vm_area_struct *vma,
> unsigned long address, pmd_t *pmdp);
>
> +#define __HAVE_ARCH_PMDP_COLLAPSE_FLUSH
> +extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> + unsigned long address, pmd_t *pmdp);
> +
> #define __HAVE_ARCH_PGTABLE_DEPOSIT
> extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
> pgtable_t pgtable);
> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> index 59daa5eeec25..9171c1a37290 100644
> --- a/arch/powerpc/mm/pgtable_64.c
> +++ b/arch/powerpc/mm/pgtable_64.c
> @@ -560,41 +560,47 @@ pmd_t pmdp_clear_flush(struct vm_area_struct *vma, unsigned long address,
> pmd_t pmd;
>
> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> - if (pmd_trans_huge(*pmdp)) {
> - pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
> - } else {
> - /*
> - * khugepaged calls this for normal pmd
> - */
> - pmd = *pmdp;
> - pmd_clear(pmdp);
> - /*
> - * Wait for all pending hash_page to finish. This is needed
> - * in case of subpage collapse. When we collapse normal pages
> - * to hugepage, we first clear the pmd, then invalidate all
> - * the PTE entries. The assumption here is that any low level
> - * page fault will see a none pmd and take the slow path that
> - * will wait on mmap_sem. But we could very well be in a
> - * hash_page with local ptep pointer value. Such a hash page
> - * can result in adding new HPTE entries for normal subpages.
> - * That means we could be modifying the page content as we
> - * copy them to a huge page. So wait for parallel hash_page
> - * to finish before invalidating HPTE entries. We can do this
> - * by sending an IPI to all the cpus and executing a dummy
> - * function there.
> - */
> - kick_all_cpus_sync();
> - /*
> - * Now invalidate the hpte entries in the range
> - * covered by pmd. This make sure we take a
> - * fault and will find the pmd as none, which will
> - * result in a major fault which takes mmap_sem and
> - * hence wait for collapse to complete. Without this
> - * the __collapse_huge_page_copy can result in copying
> - * the old content.
> - */
> - flush_tlb_pmd_range(vma->vm_mm, &pmd, address);
> - }
> + VM_BUG_ON(!pmd_trans_huge(*pmdp));
> + pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
> + return pmd;
The patches are in reverse order: you need to change pmdp_get_and_clear
first, otherwise you break bisectability.

Or better, merge the two patches together.
> +}
> +
> +pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
> + pmd_t *pmdp)
> +{
> + pmd_t pmd;
> +
> + VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> + VM_BUG_ON(pmd_trans_huge(*pmdp));
> +
> + pmd = *pmdp;
> + pmd_clear(pmdp);
> + /*
> + * Wait for all pending hash_page to finish. This is needed
> + * in case of subpage collapse. When we collapse normal pages
> + * to hugepage, we first clear the pmd, then invalidate all
> + * the PTE entries. The assumption here is that any low level
> + * page fault will see a none pmd and take the slow path that
> + * will wait on mmap_sem. But we could very well be in a
> + * hash_page with local ptep pointer value. Such a hash page
> + * can result in adding new HPTE entries for normal subpages.
> + * That means we could be modifying the page content as we
> + * copy them to a huge page. So wait for parallel hash_page
> + * to finish before invalidating HPTE entries. We can do this
> + * by sending an IPI to all the cpus and executing a dummy
> + * function there.
> + */
> + kick_all_cpus_sync();
> + /*
> + * Now invalidate the hpte entries in the range
> + * covered by pmd. This make sure we take a
> + * fault and will find the pmd as none, which will
> + * result in a major fault which takes mmap_sem and
> + * hence wait for collapse to complete. Without this
> + * the __collapse_huge_page_copy can result in copying
> + * the old content.
> + */
> + flush_tlb_pmd_range(vma->vm_mm, &pmd, address);
> return pmd;
> }
>
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index 39f1d6a2b04d..80e6d415cd57 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -189,6 +189,25 @@ extern void pmdp_splitting_flush(struct vm_area_struct *vma,
> unsigned long address, pmd_t *pmdp);
> #endif
>
> +#ifndef __HAVE_ARCH_PMDP_COLLAPSE_FLUSH
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> + unsigned long address,
> + pmd_t *pmdp)
> +{
> + return pmdp_clear_flush(vma, address, pmdp);
> +}
> +#else
> +static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> + unsigned long address,
> + pmd_t *pmdp)
> +{
> + BUILD_BUG();
> + return __pmd(0);
> +}
> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> +#endif
> +
> #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
> extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
> pgtable_t pgtable);
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 078832cf3636..88f695a4e38b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2499,7 +2499,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> * huge and small TLB entries for the same virtual address
> * to avoid the risk of CPU bugs in that area.
> */
> - _pmd = pmdp_clear_flush(vma, address, pmd);
> + _pmd = pmdp_collapse_flush(vma, address, pmd);
Why? pmdp_clear_flush() does kick_all_cpus_sync() already.
> spin_unlock(pmd_ptl);
> mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
>
> --
> 2.1.4
>
--
Kirill A. Shutemov
Thread overview: 8+ messages
2015-05-07 7:23 [PATCH V2 1/2] mm/thp: Split out pmd collapse flush into a separate function Aneesh Kumar K.V
2015-05-07 7:23 ` [PATCH V2 2/2] powerpc/thp: Serialize pmd clear against a linux page table walk Aneesh Kumar K.V
2015-05-08 22:21 ` Andrew Morton
2015-05-11 6:30 ` Aneesh Kumar K.V
2015-05-07 9:20 ` Kirill A. Shutemov [this message]
2015-05-07 11:18 ` [PATCH V2 1/2] mm/thp: Split out pmd collapse flush into a separate function Aneesh Kumar K.V
2015-05-08 22:24 ` Andrew Morton
2015-05-11 6:32 ` Aneesh Kumar K.V