Date: Sat, 23 Aug 2025 11:59:50 +0300
From: Mike Rapoport
To: David Hildenbrand
Cc: Mika Penttilä, linux-kernel@vger.kernel.org, Alexander Potapenko,
	Andrew Morton, Brendan Jackman, Christoph Lameter, Dennis Zhou,
	Dmitry Vyukov, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev,
	io-uring@vger.kernel.org, Jason Gunthorpe, Jens Axboe,
	Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com,
	kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds,
	linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver,
	Marek Szyprowski, Michal Hocko, Muchun Song, netdev@vger.kernel.org,
	Oscar Salvador, Peter Xu, Robin Murphy, Suren Baghdasaryan,
	Tejun Heo, virtualization@lists.linux.dev, Vlastimil Babka,
	wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: Re: [PATCH RFC 10/35] mm/hugetlb: cleanup hugetlb_folio_init_tail_vmemmap()
References: <20250821200701.1329277-1-david@redhat.com>
	<20250821200701.1329277-11-david@redhat.com>
	<9156d191-9ec4-4422-bae9-2e8ce66f9d5e@redhat.com>
	<7077e09f-6ce9-43ba-8f87-47a290680141@redhat.com>
In-Reply-To: <7077e09f-6ce9-43ba-8f87-47a290680141@redhat.com>

On Fri, Aug 22, 2025 at 08:24:31AM +0200, David Hildenbrand wrote:
> On 22.08.25 06:09, Mika Penttilä wrote:
> > 
> > On 8/21/25 23:06, David Hildenbrand wrote:
> > 
> > > All pages were already initialized and set to PageReserved() with a
> > > refcount of 1 by MM init code.
> > 
> > Just to be sure, how is this working with MEMBLOCK_RSRV_NOINIT, where
> > MM is supposed not to initialize struct pages?
> 
> Excellent point, I did not know about that one.
> 
> Spotting that we don't do the same for the head page made me assume that
> it's just a misuse of __init_single_page().
> 
> But the nasty thing is that we use memblock_reserved_mark_noinit() to only
> mark the tail pages ...

And the even nastier thing is that when CONFIG_DEFERRED_STRUCT_PAGE_INIT is
disabled, struct pages are initialized regardless of
memblock_reserved_mark_noinit().
I think this patch should go in before your updates:

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 753f99b4c718..1c51788339a5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3230,6 +3230,22 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	return 1;
 }
 
+/*
+ * Tail pages in a huge folio allocated from memblock are marked as 'noinit',
+ * which means that when CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled their
+ * struct page won't be initialized
+ */
+#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+static void __init hugetlb_init_tail_page(struct page *page, unsigned long pfn,
+					  enum zone_type zone, int nid)
+{
+	__init_single_page(page, pfn, zone, nid);
+}
+#else
+static inline void hugetlb_init_tail_page(struct page *page, unsigned long pfn,
+					  enum zone_type zone, int nid) {}
+#endif
+
 /* Initialize [start_page:end_page_number] tail struct pages of a hugepage */
 static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 					unsigned long start_page_number,
@@ -3244,7 +3260,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
 	for (pfn = head_pfn + start_page_number; pfn < end_pfn; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__init_single_page(page, pfn, zone, nid);
+		hugetlb_init_tail_page(page, pfn, zone, nid);
 		prep_compound_tail((struct page *)folio, pfn - head_pfn);
 		ret = page_ref_freeze(page, 1);
 		VM_BUG_ON(!ret);

> Let me revert back to __init_single_page() and add a big fat comment why
> this is required.
> 
> Thanks!

-- 
Sincerely yours,
Mike.