From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kiryl Shutsemau
To: Andrew Morton, Muchun Song, David Hildenbrand, Matthew Wilcox, Usama Arif, Frank van der Linden
Cc: Oscar Salvador, Mike Rapoport, Vlastimil Babka, Lorenzo Stoakes, Zi Yan, Baoquan He, Michal Hocko, Johannes Weiner, Jonathan Corbet, Huacai Chen, WANG Xuerui, Palmer Dabbelt, Paul Walmsley, Albert Ou, Alexandre Ghiti, kernel-team@meta.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, loongarch@lists.linux.dev, linux-riscv@lists.infradead.org, Kiryl Shutsemau
Subject: [PATCHv6 02/17] mm: Change the interface of prep_compound_tail()
Date: Mon, 2 Feb 2026 15:56:18 +0000
Message-ID: <20260202155634.650837-3-kas@kernel.org>
X-Mailer: git-send-email 2.51.2
In-Reply-To: <20260202155634.650837-1-kas@kernel.org>
References: <20260202155634.650837-1-kas@kernel.org>

Instead of passing down the head page and the tail page index, pass the
tail and head pages directly, along with the order of the compound page.

This is preparation for changing how the head position is encoded in the
tail page.
Signed-off-by: Kiryl Shutsemau
Reviewed-by: Muchun Song
Reviewed-by: Zi Yan
---
 include/linux/page-flags.h |  4 +++-
 mm/hugetlb.c               |  8 +++++---
 mm/internal.h              | 12 ++++++------
 mm/mm_init.c               |  2 +-
 mm/page_alloc.c            |  2 +-
 5 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f7a0e4af0c73..8a3694369e15 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -865,7 +865,9 @@ static inline bool folio_test_large(const struct folio *folio)
 	return folio_test_head(folio);
 }
 
-static __always_inline void set_compound_head(struct page *page, struct page *head)
+static __always_inline void set_compound_head(struct page *page,
+					      const struct page *head,
+					      unsigned int order)
 {
 	WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6e855a32de3d..54ba7cd05a86 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3168,6 +3168,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 
 /* Initialize [start_page:end_page_number] tail struct pages of a hugepage */
 static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
+					struct hstate *h,
 					unsigned long start_page_number,
 					unsigned long end_page_number)
 {
@@ -3176,6 +3177,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 	struct page *page = folio_page(folio, start_page_number);
 	unsigned long head_pfn = folio_pfn(folio);
 	unsigned long pfn, end_pfn = head_pfn + end_page_number;
+	unsigned int order = huge_page_order(h);
 
 	/*
 	 * As we marked all tail pages with memblock_reserved_mark_noinit(),
@@ -3183,7 +3185,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 	 */
 	for (pfn = head_pfn + start_page_number; pfn < end_pfn; page++, pfn++) {
 		__init_single_page(page, pfn, zone, nid);
-		prep_compound_tail((struct page *)folio, pfn - head_pfn);
+		prep_compound_tail(page, &folio->page, order);
 		set_page_count(page, 0);
 	}
 }
@@ -3203,7 +3205,7 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
 	__folio_set_head(folio);
 	ret = folio_ref_freeze(folio, 1);
 	VM_BUG_ON(!ret);
-	hugetlb_folio_init_tail_vmemmap(folio, 1, nr_pages);
+	hugetlb_folio_init_tail_vmemmap(folio, h, 1, nr_pages);
 	prep_compound_head(&folio->page, huge_page_order(h));
 }
 
@@ -3260,7 +3262,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 			 * time as this is early in boot and there should
 			 * be no contention.
 			 */
-			hugetlb_folio_init_tail_vmemmap(folio,
+			hugetlb_folio_init_tail_vmemmap(folio, h,
 					HUGETLB_VMEMMAP_RESERVE_PAGES,
 					pages_per_huge_page(h));
 		}
diff --git a/mm/internal.h b/mm/internal.h
index d67e8bb75734..037ddcda25ff 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -879,13 +879,13 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
 	INIT_LIST_HEAD(&folio->_deferred_list);
 }
 
-static inline void prep_compound_tail(struct page *head, int tail_idx)
+static inline void prep_compound_tail(struct page *tail,
+				      const struct page *head,
+				      unsigned int order)
 {
-	struct page *p = head + tail_idx;
-
-	p->mapping = TAIL_MAPPING;
-	set_compound_head(p, head);
-	set_page_private(p, 0);
+	tail->mapping = TAIL_MAPPING;
+	set_compound_head(tail, head, order);
+	set_page_private(tail, 0);
 }
 
 void post_alloc_hook(struct page *page, unsigned int order, gfp_t gfp_flags);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1a29a719af58..ba50f4c4337b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1099,7 +1099,7 @@ static void __ref memmap_init_compound(struct page *head,
 		struct page *page = pfn_to_page(pfn);
 
 		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
-		prep_compound_tail(head, pfn - head_pfn);
+		prep_compound_tail(page, head, order);
 		set_page_count(page, 0);
 	}
 	prep_compound_head(head, order);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e4104973e22f..00c7ea958767 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -744,7 +744,7 @@ void prep_compound_page(struct page *page, unsigned int order)
 
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++)
-		prep_compound_tail(page, i);
+		prep_compound_tail(page + i, page, order);
 
 	prep_compound_head(page, order);
 }
-- 
2.51.2

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv