From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <016c8bef-57ef-44ef-bf60-86dbfd368dcd@kernel.org>
Date: Mon, 11 May 2026 17:34:12 +0200
Subject: Re: [PATCH v2 10/22] mm: introduce freetype_t
To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
 Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 rppt@kernel.org, Sumit Garg, derkling@google.com, reijiw@google.com,
 Will Deacon, rientjes@google.com, "Kalyazin, Nikita",
 patrick.roy@linux.dev, "Itazuri, Takahiro", Andy Lutomirski,
 David Kaplan, Thomas Gleixner, Yosry Ahmed
From: "Vlastimil Babka (SUSE)"
In-Reply-To: <20260320-page_alloc-unmapped-v2-10-28bf1bd54f41@google.com>
References: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
 <20260320-page_alloc-unmapped-v2-10-28bf1bd54f41@google.com>

On 3/20/26 19:23, Brendan Jackman wrote:
> This is preparation for teaching the page allocator to break up free
> pages according to properties that have nothing to do with mobility. For
> example it can be used to allocate pages that are non-present in the
> physmap, or pages that are sensitive in ASI.
>
> For these usecases, certain allocator behaviours are desirable:
>
> - A "pool" of pages with the given property is usually available, so
>   that pages can be provided with the correct sensitivity without
>   zeroing/TLB flushing.
>
> - Pages are physically grouped by the property, so that large
>   allocations rarely have to alter the pagetables due to ASI.
>
> - The properties can be forced to vary only at a certain fixed address
>   granularity, so that the pagetables can all be pre-allocated. This is
>   desirable because the page allocator will be changing mappings:
>   pre-allocation is a straightforward way to avoid recursive allocations
>   (of pagetables).
>
> It seems that the existing infrastructure for grouping pages by
> mobility, i.e. pageblocks and migratetypes, serves this purpose pretty
> nicely. However, overloading migratetype itself for this purpose looks
> like a road to maintenance hell. In particular, as soon as such
> properties become orthogonal to migratetypes, it would start to require
> "doubling" the migratetypes.
>
> Therefore, introduce a new higher-level concept, called "freetype"
> (because it is used to index "free"lists) that can encode extra
> properties, orthogonally to mobility, via flags.
>
> Since freetypes and migratetypes would be very easy to mix up, freetypes
> are (at least for now) stored in a struct typedef similar to atomic_t.
> This provides type-safety, but comes at the expense of being pretty
> annoying to code with. For instance, freetype_t cannot be compared with
> the == operator. Once this code matures, if the freetype/migratetype
> distinction gets less confusing, it might be wise to drop this
> struct and just use ints.
>
> Because this will eventually be needed from pageblock-flags.h, put this
> in its own header instead of directly in mmzone.h.
>
> To try and reduce review pain for such a churny patch, first introduce
> freetypes as nothing but an indirection over migratetypes. The helpers
> concerned with the flags are defined, but only as stubs. Convert
> everything over to using freetypes wherever they are needed to index
> freelists, but maintain references to migratetypes in code that really
> only cares specifically about mobility.
>
> Signed-off-by: Brendan Jackman

Seems mechanistic enough.
Acked-by: Vlastimil Babka (SUSE)

Some nits:

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ac077d98019f3..018622aa19006 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -422,6 +422,37 @@ bool get_pfnblock_bit(const struct page *page, unsigned long pfn,
>  	return test_bit(bitidx + pb_bit, bitmap_word);
>  }
>  
> +/**
> + * __get_pfnblock_freetype - Return the freetype of a pageblock, optionally
> + *	ignoring the fact that it's currently isolated.
> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + * @ignore_iso: If isolated, return the migratetype that the block had before
> + *	isolation.
> + */
> +__always_inline freetype_t

'static' too?

> +__get_pfnblock_freetype(const struct page *page, unsigned long pfn,
> +			bool ignore_iso)
> +{
> +	int mt = get_pfnblock_migratetype(page, pfn);
> +
> +	return migrate_to_freetype(mt, 0);
> +}
> +
> +/**
> + * get_pfnblock_migratetype - Return the freetype of a pageblock
> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + *
> + * Return: The freetype of the pageblock
> + */
> +__always_inline freetype_t

And this is declared in a header so the __always_inline is not really
applicable? (seems we should fix up get_pfnblock_migratetype too)

> +get_pfnblock_freetype(const struct page *page, unsigned long pfn)
> +{
> +	return __get_pfnblock_freetype(page, pfn, 0);
> +}
> +
> +
>  /**
>   * get_pfnblock_migratetype - Return the migratetype of a pageblock
>   * @page: The page within the block of interest
> @@ -2262,10 +2323,18 @@ find_suitable_fallback(struct free_area *area, unsigned int order,
>  
>  	for (i = 0; i < MIGRATE_PCPTYPES - 1 ; i++) {
>  		int fallback_mt = fallbacks[migratetype][i];
> +		/*
> +		 * Fallback to different migratetypes, but currently always with
> +		 * the same freetype flags.
> +		 */
> +		freetype_t fallback_ft = freetype_with_migrate(freetype, fallback_mt);
>  
> -		if (!free_area_empty(area, fallback_mt)) {
> -			if (mt_out)
> -				*mt_out = fallback_mt;
> +		if (freetype_idx(fallback_ft) < 0)
> +			continue;

How can this happen? Is it preparatory?

> +
> +		if (!free_area_empty(area, fallback_ft)) {
> +			if (ft_out)
> +				*ft_out = fallback_ft;
>  			return FALLBACK_FOUND;
>  		}
>  	}