From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail-pl0-f71.google.com (mail-pl0-f71.google.com [209.85.160.71])
	by kanga.kvack.org (Postfix) with ESMTP id 2E8046B0007
	for ; Tue, 26 Jun 2018 10:22:54 -0400 (EDT)
Received: by mail-pl0-f71.google.com with SMTP id p91-v6so10062484plb.12
	for ; Tue, 26 Jun 2018 07:22:54 -0700 (PDT)
Received: from mga11.intel.com (mga11.intel.com. [192.55.52.93])
	by mx.google.com with ESMTPS id y16-v6si1608398pfl.11.2018.06.26.07.22.52
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 26 Jun 2018 07:22:53 -0700 (PDT)
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv4 03/18] mm/page_alloc: Unify alloc_hugepage_vma()
Date: Tue, 26 Jun 2018 17:22:30 +0300
Message-Id: <20180626142245.82850-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20180626142245.82850-1-kirill.shutemov@linux.intel.com>
References: <20180626142245.82850-1-kirill.shutemov@linux.intel.com>
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Ingo Molnar, x86@kernel.org, Thomas Gleixner, "H. Peter Anvin",
	Tom Lendacky
Cc: Dave Hansen, Kai Huang, Jacob Pan, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, "Kirill A. Shutemov"

We don't need separate implementations of alloc_hugepage_vma() for NUMA
and non-NUMA: the variant based on alloc_pages_vma() covers both cases.

This is a preparation patch for allocating encrypted pages:
alloc_pages_vma() will handle the allocation of encrypted pages, so with
this change we don't need to cover alloc_hugepage_vma() separately.

The change makes a typo in Alpha's implementation of
__alloc_zeroed_user_highpage() visible ('vmaddr' instead of 'vaddr').
Fix it too.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/alpha/include/asm/page.h | 2 +-
 include/linux/gfp.h           | 6 ++----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index f3fb2848470a..9a6fbb5269f3 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -18,7 +18,7 @@ extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 
 #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr)
+	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
 
 extern void copy_page(void * _to, void * _from);
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a6afcec53795..66f395737990 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -494,21 +494,19 @@ alloc_pages(gfp_t gfp_mask, unsigned int order)
 extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
 			struct vm_area_struct *vma, unsigned long addr,
 			int node, bool hugepage);
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages_vma(gfp_mask, order, vma, addr, numa_node_id(), true)
 #else
 #define alloc_pages(gfp_mask, order) \
 		alloc_pages_node(numa_node_id(), gfp_mask, order)
 #define alloc_pages_vma(gfp_mask, order, vma, addr, node, false)\
 	alloc_pages(gfp_mask, order)
-#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
-	alloc_pages(gfp_mask, order)
 #endif
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
 #define alloc_page_vma(gfp_mask, vma, addr)			\
 	alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false)
 #define alloc_page_vma_node(gfp_mask, vma, addr, node)		\
 	alloc_pages_vma(gfp_mask, 0, vma, addr, node, false)
+#define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
+	alloc_pages_vma(gfp_mask, order, vma, addr, numa_node_id(), true)
 
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
-- 
2.18.0
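
Not part of the patch itself: below is a minimal sketch, assuming the usual
THP fault-handler context, of how the now-unified alloc_hugepage_vma() is
meant to be called. example_alloc_thp() is a made-up name used only for
illustration; GFP_TRANSHUGE, HPAGE_PMD_ORDER and alloc_pages_vma() are
existing kernel symbols.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/huge_mm.h>

/* Hypothetical caller: allocate a PMD-sized page for a fault at @haddr. */
static struct page *example_alloc_thp(struct vm_area_struct *vma,
				      unsigned long haddr)
{
	/*
	 * With CONFIG_NUMA this expands to
	 *   alloc_pages_vma(GFP_TRANSHUGE, HPAGE_PMD_ORDER, vma, haddr,
	 *                   numa_node_id(), true);
	 * without CONFIG_NUMA the alloc_pages_vma() stub falls back to
	 * plain alloc_pages(), so the same single definition serves both
	 * configurations.
	 */
	return alloc_hugepage_vma(GFP_TRANSHUGE, vma, haddr, HPAGE_PMD_ORDER);
}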