From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 2/2] mm: Rename page_order() to buddy_order()
Date: Thu, 1 Oct 2020 16:22:59 +0100
Message-Id: <20201001152259.14932-2-willy@infradead.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20201001152259.14932-1-willy@infradead.org>
References: <20201001152259.14932-1-willy@infradead.org>

The current page_order() can only be called on pages in the buddy
allocator.  For compound pages, you have to use compound_order().
This is confusing and led to a bug, so rename page_order() to
buddy_order().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/compaction.c     |  6 +++---
 mm/internal.h       |  8 ++++----
 mm/page_alloc.c     | 30 +++++++++++++++---------------
 mm/page_isolation.c |  4 ++--
 mm/page_owner.c     |  6 +++---
 mm/page_reporting.c |  2 +-
 mm/shuffle.c        |  2 +-
 7 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 6c63844fc061..6e0ee5641788 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -625,7 +625,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		}

 		/* Found a free page, will break it into order-0 pages */
-		order = page_order(page);
+		order = buddy_order(page);
 		isolated = __isolate_free_page(page, order);
 		if (!isolated)
 			break;
@@ -898,7 +898,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * potential isolation targets.
 		 */
 		if (PageBuddy(page)) {
-			unsigned long freepage_order = page_order_unsafe(page);
+			unsigned long freepage_order = buddy_order_unsafe(page);

 			/*
 			 * Without lock, we cannot be sure that what we got is
@@ -1172,7 +1172,7 @@ static bool suitable_migration_target(struct compact_control *cc,
 		 * the only small danger is that we skip a potentially suitable
 		 * pageblock, so it's not worth to check order for valid range.
 		 */
-		if (page_order_unsafe(page) >= pageblock_order)
+		if (buddy_order_unsafe(page) >= pageblock_order)
 			return false;
 	}

diff --git a/mm/internal.h b/mm/internal.h
index 6345b08ce86c..c43ccdddb0f6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -270,16 +270,16 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
  * page from being allocated in parallel and returning garbage as the order.
  * If a caller does not hold page_zone(page)->lock, it must guarantee that the
  * page cannot be allocated or merged in parallel. Alternatively, it must
- * handle invalid values gracefully, and use page_order_unsafe() below.
+ * handle invalid values gracefully, and use buddy_order_unsafe() below.
  */
-static inline unsigned int page_order(struct page *page)
+static inline unsigned int buddy_order(struct page *page)
 {
 	/* PageBuddy() must be checked by the caller */
 	return page_private(page);
 }

 /*
- * Like page_order(), but for callers who cannot afford to hold the zone lock.
+ * Like buddy_order(), but for callers who cannot afford to hold the zone lock.
  * PageBuddy() should be checked first by the caller to minimize race window,
  * and invalid values must be handled gracefully.
  *
@@ -289,7 +289,7 @@ static inline unsigned int page_order(struct page *page)
  * times, potentially observing different values in the tests and the actual
  * use of the result.
  */
-#define page_order_unsafe(page)		READ_ONCE(page_private(page))
+#define buddy_order_unsafe(page)	READ_ONCE(page_private(page))

 static inline bool is_cow_mapping(vm_flags_t flags)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7012d67a302d..2e1d379d73b3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -763,7 +763,7 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype) {}
 #endif

-static inline void set_page_order(struct page *page, unsigned int order)
+static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
 	__SetPageBuddy(page);
@@ -788,7 +788,7 @@ static inline bool page_is_buddy(struct page *page, struct page *buddy,
 	if (!page_is_guard(buddy) && !PageBuddy(buddy))
 		return false;

-	if (page_order(buddy) != order)
+	if (buddy_order(buddy) != order)
 		return false;

 	/*
@@ -1026,7 +1026,7 @@ static inline void __free_one_page(struct page *page,
 	}

 done_merging:
-	set_page_order(page, order);
+	set_buddy_order(page, order);

 	if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
@@ -2132,7 +2132,7 @@ static inline void expand(struct zone *zone, struct page *page,
 			continue;

 		add_to_free_list(&page[size], zone, high, migratetype);
-		set_page_order(&page[size], high);
+		set_buddy_order(&page[size], high);
 	}
 }

@@ -2346,7 +2346,7 @@ static int move_freepages(struct zone *zone,
 		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);

-		order = page_order(page);
+		order = buddy_order(page);
 		move_to_free_list(page, zone, order, migratetype);
 		page += 1 << order;
 		pages_moved += 1 << order;
@@ -2470,7 +2470,7 @@ static inline void boost_watermark(struct zone *zone)
 static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		unsigned int alloc_flags, int start_type, bool whole_block)
 {
-	unsigned int current_order = page_order(page);
+	unsigned int current_order = buddy_order(page);
 	int free_pages, movable_pages, alike_pages;
 	int old_block_type;

@@ -8296,7 +8296,7 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 		 */
 		if (!page_ref_count(page)) {
 			if (PageBuddy(page))
-				iter += (1 << page_order(page)) - 1;
+				iter += (1 << buddy_order(page)) - 1;
 			continue;
 		}

@@ -8509,7 +8509,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	}

 	if (outer_start != start) {
-		order = page_order(pfn_to_page(outer_start));
+		order = buddy_order(pfn_to_page(outer_start));

 		/*
 		 * outer_start page could be small order buddy page and
@@ -8734,7 +8734,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)

 		BUG_ON(page_count(page));
 		BUG_ON(!PageBuddy(page));
-		order = page_order(page);
+		order = buddy_order(page);
 		del_page_from_free_list(page, zone, order);
 		pfn += (1 << order);
 	}
@@ -8753,7 +8753,7 @@ bool is_free_buddy_page(struct page *page)
 	for (order = 0; order < MAX_ORDER; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));

-		if (PageBuddy(page_head) && page_order(page_head) >= order)
+		if (PageBuddy(page_head) && buddy_order(page_head) >= order)
 			break;
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -8790,7 +8790,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,

 		if (current_buddy != target) {
 			add_to_free_list(current_buddy, zone, high, migratetype);
-			set_page_order(current_buddy, high);
+			set_buddy_order(current_buddy, high);
 			page = next_page;
 		}
 	}
@@ -8810,16 +8810,16 @@ bool take_page_off_buddy(struct page *page)
 	spin_lock_irqsave(&zone->lock, flags);
 	for (order = 0; order < MAX_ORDER; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
-		int buddy_order = page_order(page_head);
+		int page_order = buddy_order(page_head);

-		if (PageBuddy(page_head) && buddy_order >= order) {
+		if (PageBuddy(page_head) && page_order >= order) {
 			unsigned long pfn_head = page_to_pfn(page_head);
 			int migratetype = get_pfnblock_migratetype(page_head, pfn_head);

-			del_page_from_free_list(page_head, zone, buddy_order);
+			del_page_from_free_list(page_head, zone, page_order);
 			break_down_buddy_pages(zone, page_head, page, 0,
-						buddy_order, migratetype);
+						page_order, migratetype);
 			ret = true;
 			break;
 		}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..ca0a71be0e7d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -88,7 +88,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * these pages to be merged.
 	 */
 	if (PageBuddy(page)) {
-		order = page_order(page);
+		order = buddy_order(page);
 		if (order >= pageblock_order) {
 			pfn = page_to_pfn(page);
 			buddy_pfn = __find_buddy_pfn(pfn, order);
@@ -256,7 +256,7 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 			 * the correct MIGRATE_ISOLATE freelist. There is no
 			 * simple way to verify that as VM_BUG_ON(), though.
 			 */
-			pfn += 1 << page_order(page);
+			pfn += 1 << buddy_order(page);
 		else if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
 			/* A HWPoisoned page cannot be also PageBuddy */
 			pfn++;
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 4ca3051a1035..b735a8eafcdb 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -295,7 +295,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 		if (PageBuddy(page)) {
 			unsigned long freepage_order;

-			freepage_order = page_order_unsafe(page);
+			freepage_order = buddy_order_unsafe(page);
 			if (freepage_order < MAX_ORDER)
 				pfn += (1UL << freepage_order) - 1;
 			continue;
@@ -490,7 +490,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)

 	page = pfn_to_page(pfn);
 	if (PageBuddy(page)) {
-		unsigned long freepage_order = page_order_unsafe(page);
+		unsigned long freepage_order = buddy_order_unsafe(page);

 		if (freepage_order < MAX_ORDER)
 			pfn += (1UL << freepage_order) - 1;
@@ -584,7 +584,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 		 * heavy lock contention.
 		 */
 		if (PageBuddy(page)) {
-			unsigned long order = page_order_unsafe(page);
+			unsigned long order = buddy_order_unsafe(page);

 			if (order > 0 && order < MAX_ORDER)
 				pfn += (1UL << order) - 1;
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index aaaa3605123d..cd8e13d41df4 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -92,7 +92,7 @@ page_reporting_drain(struct page_reporting_dev_info *prdev,
 		 * report on the new larger page when we make our way
 		 * up to that higher order.
 		 */
-		if (PageBuddy(page) && page_order(page) == order)
+		if (PageBuddy(page) && buddy_order(page) == order)
 			__SetPageReported(page);
 	} while ((sg = sg_next(sg)));

diff --git a/mm/shuffle.c b/mm/shuffle.c
index 9b5cd4b004b0..9c2e145a747a 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -60,7 +60,7 @@ static struct page * __meminit shuffle_valid_page(struct zone *zone,
 	 * ...is the page on the same list as the page we will
 	 * shuffle it with?
 	 */
-	if (page_order(page) != order)
+	if (buddy_order(page) != order)
 		return NULL;

 	return page;
-- 
2.28.0