From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton, Zi Yan
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 3/4] mm: Add vma_alloc_folio()
Date: Mon, 4 Apr 2022 20:30:05 +0100
Message-Id: <20220404193006.1429250-4-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220404193006.1429250-1-willy@infradead.org>
References: <20220404193006.1429250-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This wrapper around alloc_pages_vma() calls prep_transhuge_page(),
removing the obligation from the caller.  This is in the same spirit as
__folio_alloc().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/gfp.h |  8 ++++++--
 mm/mempolicy.c      | 13 +++++++++++++
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 761f8f1885c7..3e3d36fc2109 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -613,9 +613,11 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 #ifdef CONFIG_NUMA
 struct page *alloc_pages(gfp_t gfp, unsigned int order);
 struct folio *folio_alloc(gfp_t gfp, unsigned order);
-extern struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
+struct page *alloc_pages_vma(gfp_t gfp_mask, int order,
 		struct vm_area_struct *vma, unsigned long addr,
 		bool hugepage);
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+		unsigned long addr, bool hugepage);
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
 	alloc_pages_vma(gfp_mask, order, vma, addr, true)
 #else
@@ -627,8 +629,10 @@ static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
 {
 	return __folio_alloc_node(gfp, order, numa_node_id());
 }
-#define alloc_pages_vma(gfp_mask, order, vma, addr, false)\
+#define alloc_pages_vma(gfp_mask, order, vma, addr, hugepage) \
 	alloc_pages(gfp_mask, order)
+#define vma_alloc_folio(gfp, order, vma, addr, hugepage) \
+	folio_alloc(gfp, order)
 #define alloc_hugepage_vma(gfp_mask, vma, addr, order) \
 	alloc_pages(gfp_mask, order)
 #endif
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a2516d31db6c..ec15f4f4b714 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2227,6 +2227,19 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL(alloc_pages_vma);
 
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+		unsigned long addr, bool hugepage)
+{
+	struct folio *folio;
+
+	folio = (struct folio *)alloc_pages_vma(gfp, order, vma, addr,
+			hugepage);
+	if (folio && order > 1)
+		prep_transhuge_page(&folio->page);
+
+	return folio;
+}
+
 /**
  * alloc_pages - Allocate pages.
  * @gfp: GFP flags.
-- 
2.34.1