From: Mike Rapoport
To: David Hildenbrand
Cc: Mika Penttilä, linux-kernel@vger.kernel.org, Alexander Potapenko, Andrew Morton, Brendan Jackman, Christoph Lameter, Dennis Zhou, Dmitry Vyukov, dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, iommu@lists.linux.dev, io-uring@vger.kernel.org, Jason Gunthorpe, Jens Axboe, Johannes Weiner, John Hubbard, kasan-dev@googlegroups.com, kvm@vger.kernel.org, "Liam R. Howlett", Linus Torvalds, linux-arm-kernel@axis.com, linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org, linux-ide@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org, linux-mmc@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org, Lorenzo Stoakes, Marco Elver, Marek Szyprowski, Michal Hocko, Muchun Song, netdev@vger.kernel.org, Oscar Salvador, Peter Xu, Robin Murphy, Suren Baghdasaryan, Tejun Heo, virtualization@lists.linux.dev, Vlastimil Babka, wireguard@lists.zx2c4.com, x86@kernel.org, Zi Yan
Subject: Re: [PATCH RFC 10/35] mm/hugetlb: cleanup hugetlb_folio_init_tail_vmemmap()
Date: Mon, 25 Aug 2025 19:17:02 +0300
References: <20250821200701.1329277-1-david@redhat.com> <20250821200701.1329277-11-david@redhat.com> <9156d191-9ec4-4422-bae9-2e8ce66f9d5e@redhat.com> <7077e09f-6ce9-43ba-8f87-47a290680141@redhat.com>

On Mon, Aug 25, 2025 at 05:42:33PM +0200, David Hildenbrand wrote:
> On 25.08.25 16:59, Mike Rapoport wrote:
> > On Mon, Aug 25, 2025 at 04:38:03PM +0200, David Hildenbrand wrote:
> > > On 25.08.25 16:32, Mike Rapoport wrote:
> > > > On Mon, Aug 25, 2025 at 02:48:58PM +0200, David Hildenbrand wrote:
> > > > > On 23.08.25 10:59, Mike Rapoport wrote:
> > > > > > On Fri, Aug 22, 2025 at 08:24:31AM +0200, David Hildenbrand wrote:
> > > > > > > On 22.08.25 06:09, Mika Penttilä wrote:
> > > > > > > >
> > > > > > > > On 8/21/25 23:06, David Hildenbrand wrote:
> > > > > > > >
> > > > > > > > > All pages were already initialized and set to PageReserved() with a
> > > > > > > > > refcount of 1 by MM init code.
> > > > > > > >
> > > > > > > > Just to be sure, how is this working with MEMBLOCK_RSRV_NOINIT, where MM is supposed not to
> > > > > > > > initialize struct pages?
> > > > > > >
> > > > > > > Excellent point, I did not know about that one.
> > > > > > >
> > > > > > > Spotting that we don't do the same for the head page made me assume that
> > > > > > > it's just a misuse of __init_single_page().
> > > > > > >
> > > > > > > But the nasty thing is that we use memblock_reserved_mark_noinit() to only
> > > > > > > mark the tail pages ...
> > > > > >
> > > > > > And an even nastier thing is that when CONFIG_DEFERRED_STRUCT_PAGE_INIT is
> > > > > > disabled, struct pages are initialized regardless of
> > > > > > memblock_reserved_mark_noinit().
> > > > > >
> > > > > > I think this patch should go in before your updates:
> > > > >
> > > > > Shouldn't we fix this in memblock code?
> > > > >
> > > > > Hacking around that in the memblock_reserved_mark_noinit() user sounds wrong
> > > > > -- and nothing in the doc of memblock_reserved_mark_noinit() spells that
> > > > > behavior out.
> > > >
> > > > We can surely update the docs, but unfortunately I don't see how to avoid
> > > > hacking around it in hugetlb.
> > > > Since it's used to optimise HVO even further, to the point hugetlb open
> > > > codes memmap initialization, I think it's fair that it should deal with all
> > > > possible configurations.
> > >
> > > Remind me, why can't we support memblock_reserved_mark_noinit() when
> > > CONFIG_DEFERRED_STRUCT_PAGE_INIT is disabled?
> >
> > When CONFIG_DEFERRED_STRUCT_PAGE_INIT is disabled we initialize the entire
> > memmap early (setup_arch()->free_area_init()), and we may have a bunch of
> > memblock_reserved_mark_noinit() calls afterwards
>
> Oh, you mean that we get effective memblock modifications after already
> initializing the memmap.
>
> That sounds ... interesting :)

It's the memmap, not the free lists. Without deferred init, memblock is
active for a while after the memmap is initialized and before the memory
goes to the free lists.

> So yeah, we have to document this for memblock_reserved_mark_noinit().
>
> Is it also a problem for kexec_handover?

With KHO it's also interesting, but it does not support deferred struct
page init for now :)

> We should do something like:
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 154f1d73b61f2..ed4c563d72c32 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1091,13 +1091,16 @@ int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)
>  /**
>   * memblock_reserved_mark_noinit - Mark a reserved memory region with flag
> - * MEMBLOCK_RSRV_NOINIT which results in the struct pages not being initialized
> - * for this region.
> + * MEMBLOCK_RSRV_NOINIT which allows for the "struct pages" corresponding
> + * to this region not getting initialized, because the caller will take
> + * care of it.
>   * @base: the base phys addr of the region
>   * @size: the size of the region
>   *
> - * struct pages will not be initialized for reserved memory regions marked with
> - * %MEMBLOCK_RSRV_NOINIT.
> + * "struct pages" will not be initialized for reserved memory regions marked
> + * with %MEMBLOCK_RSRV_NOINIT if this function is called before initialization
> + * code runs. Without CONFIG_DEFERRED_STRUCT_PAGE_INIT, it is more likely
> + * that this function is not effective.
>   *
>   * Return: 0 on success, -errno on failure.
>   */

I have a different version :)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index b96746376e17..d20d091c6343 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -40,8 +40,9 @@ extern unsigned long long max_possible_pfn;
  * via a driver, and never indicated in the firmware-provided memory map as
  * system RAM. This corresponds to IORESOURCE_SYSRAM_DRIVER_MANAGED in the
  * kernel resource tree.
- * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are
- * not initialized (only for reserved regions).
+ * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages don't have
+ * PG_Reserved set and are completely not initialized when
+ * %CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled (only for reserved regions).
  * @MEMBLOCK_RSRV_KERN: memory region that is reserved for kernel use,
  * either explictitly with memblock_reserve_kern() or via memblock
  * allocation APIs. All memblock allocations set this flag.
diff --git a/mm/memblock.c b/mm/memblock.c
index 154f1d73b61f..02de5ffb085b 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1091,13 +1091,15 @@ int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)

 /**
  * memblock_reserved_mark_noinit - Mark a reserved memory region with flag
- * MEMBLOCK_RSRV_NOINIT which results in the struct pages not being initialized
- * for this region.
+ * MEMBLOCK_RSRV_NOINIT
+ *
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
- * struct pages will not be initialized for reserved memory regions marked with
- * %MEMBLOCK_RSRV_NOINIT.
+ * The struct pages for the reserved regions marked %MEMBLOCK_RSRV_NOINIT will
+ * not have the %PG_Reserved flag set.
+ * When %CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, setting this flag also
+ * completely bypasses the initialization of struct pages for this region.
  *
  * Return: 0 on success, -errno on failure.
  */

> Optimizing the hugetlb code could be done, but I am not sure how high
> the priority is (nobody complained so far about the double init).
>
> --
> Cheers
>
> David / dhildenb

-- 
Sincerely yours,
Mike.

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv