From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-mm@kvack.org
Cc: Mark Rutland <mark.rutland@arm.com>,
linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org,
Peter Zijlstra <peterz@infradead.org>,
James Hogan <jhogan@kernel.org>,
Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
Heiko Carstens <heiko.carstens@de.ibm.com>,
Michal Hocko <mhocko@kernel.org>,
Dave Hansen <dave.hansen@intel.com>,
Paul Mackerras <paulus@samba.org>,
sparclinux@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
Andrea Arcangeli <aarcange@redhat.com>,
linux-s390@vger.kernel.org, x86@kernel.org,
Russell King - ARM Linux <linux@armlinux.org.uk>,
Matthew Wilcox <willy@infradead.org>,
Steven Price <Steven.Price@arm.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Gerald Schaefer <gerald.schaefer@de.ibm.com>,
David Rientjes <rientjes@google.com>,
linux-snps-arc@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
Kees Cook <keescook@chromium.org>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Masahiro Yamada <yamada.masahiro@socionext.com>,
linuxppc-dev@lists.ozlabs.org, Mark Brown <broonie@kernel.org>,
"Kirill A . Shutemov" <kirill@shutemov.name>,
Dan Williams <dan.j.williams@intel.com>,
Vlastimil Babka <vbabka@suse.cz>,
Oscar Salvador <osalvador@suse.de>,
Sri Krishna chowdary <schowdary@nvidia.com>,
Ard Biesheuvel <ard.biesheuvel@linaro.org>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
linux-mips@vger.kernel.org, Ralf Baechle <ralf@linux-mips.org>,
linux-kernel@vger.kernel.org, Paul Burton <paul.burton@mips.com>,
Mike Rapoport <rppt@linux.vnet.ibm.com>,
Vineet Gupta <vgupta@synopsys.com>,
Martin Schwidefsky <schwidefsky@de.ibm.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
"David S. Miller" <davem@davemloft.net>,
Mike Kravetz <mike.kravetz@oracle.com>
Subject: [PATCH V6 1/2] mm/page_alloc: Make alloc_gigantic_page() available for general use
Date: Tue, 15 Oct 2019 14:51:41 +0530
Message-ID: <1571131302-32290-2-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1571131302-32290-1-git-send-email-anshuman.khandual@arm.com>
alloc_gigantic_page() implements an allocation method where it scans over
various zones looking for a large contiguous memory block which cannot be
allocated through the buddy allocator. A subsequent patch which tests arch
page table helpers needs such a method to allocate a PUD_SIZE sized memory
block. In the future such methods might have other use cases as well. So
alloc_gigantic_page() has been split up, carving out the actual memory
allocation method, which is now made available via the new
alloc_gigantic_page_order() wrapped under CONFIG_CONTIG_ALLOC.
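As an illustration (not part of this patch), a prospective caller such as
the page table helper test could derive the order from PUD_SHIFT and invoke
the new helper roughly as below; the GFP mask and nodemask shown here are
assumptions, not mandated by this patch:

	/* Hypothetical caller sketch, assuming CONFIG_CONTIG_ALLOC=y */
	unsigned int order = PUD_SHIFT - PAGE_SHIFT;
	struct page *page;

	/* Scan applicable zones for a free, movable PUD_SIZE sized range */
	page = alloc_gigantic_page_order(order, GFP_KERNEL, nid,
					 &node_states[N_MEMORY]);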
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Steven Price <Steven.Price@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sri Krishna chowdary <schowdary@nvidia.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-ia64@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/gfp.h |  3 ++
mm/hugetlb.c        | 76 +----------------------------------
mm/page_alloc.c     | 98 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 102 insertions(+), 75 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index fb07b503dc45..379ad23437d1 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -589,6 +589,9 @@ static inline bool pm_suspended_storage(void)
/* The below functions must be run on a range from a single zone. */
extern int alloc_contig_range(unsigned long start, unsigned long end,
unsigned migratetype, gfp_t gfp_mask);
+extern struct page *alloc_gigantic_page_order(unsigned int order,
+ gfp_t gfp_mask, int nid,
+ nodemask_t *nodemask);
#endif
void free_contig_range(unsigned long pfn, unsigned int nr_pages);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 977f9a323a7a..d199556a4a2c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1023,86 +1023,12 @@ static void free_gigantic_page(struct page *page, unsigned int order)
}
#ifdef CONFIG_CONTIG_ALLOC
-static int __alloc_gigantic_page(unsigned long start_pfn,
- unsigned long nr_pages, gfp_t gfp_mask)
-{
- unsigned long end_pfn = start_pfn + nr_pages;
- return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
- gfp_mask);
-}
-
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i, end_pfn = start_pfn + nr_pages;
- struct page *page;
-
- for (i = start_pfn; i < end_pfn; i++) {
- if (!pfn_valid(i))
- return false;
-
- page = pfn_to_page(i);
-
- if (page_zone(page) != z)
- return false;
-
- if (PageReserved(page))
- return false;
-
- if (page_count(page) > 0)
- return false;
-
- if (PageHuge(page))
- return false;
- }
-
- return true;
-}
-
-static bool zone_spans_last_pfn(const struct zone *zone,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long last_pfn = start_pfn + nr_pages - 1;
- return zone_spans_pfn(zone, last_pfn);
-}
-
static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
int nid, nodemask_t *nodemask)
{
unsigned int order = huge_page_order(h);
- unsigned long nr_pages = 1 << order;
- unsigned long ret, pfn, flags;
- struct zonelist *zonelist;
- struct zone *zone;
- struct zoneref *z;
-
- zonelist = node_zonelist(nid, gfp_mask);
- for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), nodemask) {
- spin_lock_irqsave(&zone->lock, flags);
- pfn = ALIGN(zone->zone_start_pfn, nr_pages);
- while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
- if (pfn_range_valid_gigantic(zone, pfn, nr_pages)) {
- /*
- * We release the zone lock here because
- * alloc_contig_range() will also lock the zone
- * at some point. If there's an allocation
- * spinning on this lock, it may win the race
- * and cause alloc_contig_range() to fail...
- */
- spin_unlock_irqrestore(&zone->lock, flags);
- ret = __alloc_gigantic_page(pfn, nr_pages, gfp_mask);
- if (!ret)
- return pfn_to_page(pfn);
- spin_lock_irqsave(&zone->lock, flags);
- }
- pfn += nr_pages;
- }
-
- spin_unlock_irqrestore(&zone->lock, flags);
- }
-
- return NULL;
+ return alloc_gigantic_page_order(order, gfp_mask, nid, nodemask);
}
static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6ab8eb670fd3..0f67367213c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8497,6 +8497,104 @@ int alloc_contig_range(unsigned long start, unsigned long end,
pfn_max_align_up(end), migratetype);
return ret;
}
+
+static int __alloc_gigantic_page(unsigned long start_pfn,
+ unsigned long nr_pages, gfp_t gfp_mask)
+{
+ unsigned long end_pfn = start_pfn + nr_pages;
+
+ return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
+ gfp_mask);
+}
+
+static bool pfn_range_valid_gigantic(struct zone *z, unsigned long start_pfn,
+ unsigned long nr_pages)
+{
+ unsigned long i, end_pfn = start_pfn + nr_pages;
+ struct page *page;
+
+ for (i = start_pfn; i < end_pfn; i++) {
+ if (!pfn_valid(i))
+ return false;
+
+ page = pfn_to_page(i);
+
+ if (page_zone(page) != z)
+ return false;
+
+ if (PageReserved(page))
+ return false;
+
+ if (page_count(page) > 0)
+ return false;
+
+ if (PageHuge(page))
+ return false;
+ }
+ return true;
+}
+
+static bool zone_spans_last_pfn(const struct zone *zone,
+ unsigned long start_pfn, unsigned long nr_pages)
+{
+ unsigned long last_pfn = start_pfn + nr_pages - 1;
+
+ return zone_spans_pfn(zone, last_pfn);
+}
+
+/**
+ * alloc_gigantic_page_order() - tries to allocate a given order of pages
+ * @order: allocation order (greater than MAX_ORDER)
+ * @gfp_mask: GFP mask to use during compaction
+ * @nid: allocation node
+ * @nodemask: allocation nodemask
+ *
+ * This routine is a wrapper around alloc_contig_range() which scans over
+ * all zones on an applicable zonelist to find a contiguous pfn range which
+ * can then be allocated with alloc_contig_range(). This routine is intended
+ * to be used for allocations greater than MAX_ORDER.
+ *
+ * Return: page on success or NULL on failure. On success, a memory block
+ * of the given 'order' starting with 'page' has been allocated. Memory
+ * allocated here needs to be freed with free_contig_range().
+ */
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+ int nid, nodemask_t *nodemask)
+{
+ unsigned long nr_pages = 1 << order;
+ unsigned long ret, pfn, flags;
+ struct zonelist *zonelist;
+ struct zone *zone;
+ struct zoneref *z;
+
+ zonelist = node_zonelist(nid, gfp_mask);
+ for_each_zone_zonelist_nodemask(zone, z, zonelist,
+ gfp_zone(gfp_mask), nodemask) {
+ spin_lock_irqsave(&zone->lock, flags);
+
+ pfn = ALIGN(zone->zone_start_pfn, nr_pages);
+ while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
+ if (pfn_range_valid_gigantic(zone, pfn, nr_pages)) {
+ /*
+ * We release the zone lock here because
+ * alloc_contig_range() will also lock the zone
+ * at some point. If there's an allocation
+ * spinning on this lock, it may win the race
+ * and cause alloc_contig_range() to fail...
+ */
+ spin_unlock_irqrestore(&zone->lock, flags);
+ ret = __alloc_gigantic_page(pfn, nr_pages,
+ gfp_mask);
+ if (!ret)
+ return pfn_to_page(pfn);
+ spin_lock_irqsave(&zone->lock, flags);
+ }
+ pfn += nr_pages;
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ }
+ return NULL;
+}
#endif /* CONFIG_CONTIG_ALLOC */
void free_contig_range(unsigned long pfn, unsigned int nr_pages)
--
2.20.1
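A minimal lifecycle sketch following the kernel-doc contract above; 'order'
and 'nid' are assumed to come from the caller, and GFP_KERNEL is only one
possible mask:

	struct page *page;

	page = alloc_gigantic_page_order(order, GFP_KERNEL, nid,
					 &node_states[N_MEMORY]);
	if (!page)
		return -ENOMEM;

	/* ... use the physically contiguous 1UL << order pages ... */

	free_contig_range(page_to_pfn(page), 1 << order);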