From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Kirill A. Shutemov"
Subject: [PATCHv2 07/29] mm: introduce __p4d_alloc()
Date: Tue, 27 Dec 2016 04:53:51 +0300
Message-ID: <20161227015413.187403-8-kirill.shutemov@linux.intel.com>
References: <20161227015413.187403-1-kirill.shutemov@linux.intel.com>
In-Reply-To: <20161227015413.187403-1-kirill.shutemov@linux.intel.com>
Sender: owner-linux-mm@kvack.org
To: Linus Torvalds, Andrew Morton, x86@kernel.org, Thomas Gleixner,
 Ingo Molnar, Arnd Bergmann, "H. Peter Anvin"
Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, linux-arch@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 "Kirill A. Shutemov"
List-Id: linux-arch.vger.kernel.org

For full 5-level paging we need a helper to allocate a p4d page table.

Signed-off-by: Kirill A. Shutemov
---
 mm/memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 267cee723aa1..de3582840cbc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3851,6 +3851,29 @@ int handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 }
 EXPORT_SYMBOL_GPL(handle_mm_fault);
 
+#ifndef __PAGETABLE_P4D_FOLDED
+/*
+ * Allocate p4d page table.
+ * We've already handled the fast-path in-line.
+ */
+int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
+{
+	p4d_t *new = p4d_alloc_one(mm, address);
+	if (!new)
+		return -ENOMEM;
+
+	smp_wmb(); /* See comment in __pte_alloc */
+
+	spin_lock(&mm->page_table_lock);
+	if (pgd_present(*pgd))		/* Another has populated it */
+		p4d_free(mm, new);
+	else
+		pgd_populate(mm, pgd, new);
+	spin_unlock(&mm->page_table_lock);
+	return 0;
+}
+#endif /* __PAGETABLE_P4D_FOLDED */
+
 #ifndef __PAGETABLE_PUD_FOLDED
 /*
  * Allocate page upper directory.
-- 
2.11.0
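
[Editor's note] __p4d_alloc() above is only the slow path: the commit comment
says the fast path is handled inline at the call site. A minimal sketch of what
that inline wrapper would look like, assuming it mirrors the existing
pud_alloc()/pmd_alloc() pattern in include/linux/mm.h (the actual wrapper is
introduced by a separate patch in this series, not here):

	/*
	 * Illustrative sketch, not part of this patch: fall through to
	 * __p4d_alloc() only when the pgd entry has not been populated yet.
	 */
	static inline p4d_t *p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
			unsigned long address)
	{
		return (unlikely(pgd_none(*pgd)) && __p4d_alloc(mm, pgd, address)) ?
			NULL : p4d_offset(pgd, address);
	}

With this split, two threads can both observe pgd_none() and enter
__p4d_alloc() concurrently; the pgd_present() recheck under
mm->page_table_lock makes the loser free its freshly allocated table, and the
smp_wmb() orders the new table's initialization before pgd_populate() makes it
visible to other walkers.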