The Linux Kernel Mailing List
* Re: [PATCH v2 08/22] mm: introduce for_each_free_list()
       [not found] ` <20260320-page_alloc-unmapped-v2-8-28bf1bd54f41@google.com>
@ 2026-05-11 13:46   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 13:46 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> Later patches will rearrange the free areas, but there are a couple of
> places that iterate over them with the assumption that they have the
> current structure.
> 
> It seems ideally, code outside of mm should not be directly aware of
> struct free_area in the first place, but that awareness seems relatively
> harmless so just make the minimal change.

I think we should lift the code from kernel/power/snapshot.c to under mm/
eventually, ISTR discussing it somewhere recently. But doesn't have to be in
this series.

> Now instead of letting users manually iterate over the free lists, just
> provide a macro to do that. Then adopt that macro in a couple of places.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

> ---
>  include/linux/mmzone.h  |  7 +++++--
>  kernel/power/snapshot.c |  8 ++++----
>  mm/mm_init.c            | 11 +++++++----
>  3 files changed, 16 insertions(+), 10 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7bd0134c241ce..c49e3cdf4f6bb 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -177,9 +177,12 @@ static inline bool migratetype_is_mergeable(int mt)
>  	return mt < MIGRATE_PCPTYPES;
>  }
>  
> -#define for_each_migratetype_order(order, type) \
> +#define for_each_free_list(list, zone, order) \
>  	for (order = 0; order < NR_PAGE_ORDERS; order++) \
> -		for (type = 0; type < MIGRATE_TYPES; type++)
> +		for (unsigned int type = 0; \
> +		     list = &zone->free_area[order].free_list[type], \
> +		     type < MIGRATE_TYPES; \
> +		     type++) \
>  
>  extern int page_group_by_mobility_disabled;
>  
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 7dcccf378cc2f..abd33ca13eec4 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1245,8 +1245,9 @@ unsigned int snapshot_additional_pages(struct zone *zone)
>  static void mark_free_pages(struct zone *zone)
>  {
>  	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
> +	struct list_head *free_list;
>  	unsigned long flags;
> -	unsigned int order, t;
> +	unsigned int order;
>  	struct page *page;
>  
>  	if (zone_is_empty(zone))
> @@ -1270,9 +1271,8 @@ static void mark_free_pages(struct zone *zone)
>  			swsusp_unset_page_free(page);
>  	}
>  
> -	for_each_migratetype_order(order, t) {
> -		list_for_each_entry(page,
> -				&zone->free_area[order].free_list[t], buddy_list) {
> +	for_each_free_list(free_list, zone, order) {
> +		list_for_each_entry(page, free_list, buddy_list) {
>  			unsigned long i;
>  
>  			pfn = page_to_pfn(page);
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 969048f9b320c..f6f9455bc42b6 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1445,11 +1445,14 @@ static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx,
>  
>  static void __meminit zone_init_free_lists(struct zone *zone)
>  {
> -	unsigned int order, t;
> -	for_each_migratetype_order(order, t) {
> -		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
> +	struct list_head *list;
> +	unsigned int order;
> +
> +	for_each_free_list(list, zone, order)
> +		INIT_LIST_HEAD(list);
> +
> +	for (order = 0; order < NR_PAGE_ORDERS; order++)
>  		zone->free_area[order].nr_free = 0;
> -	}
>  
>  #ifdef CONFIG_UNACCEPTED_MEMORY
>  	INIT_LIST_HEAD(&zone->unaccepted_pages);
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread
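[Editor's note: the review above concerns the comma-operator trick in the
for_each_free_list() macro. Its shape can be exercised in user space with
toy types; all names and sizes below are invented for the sketch and are
not the kernel definitions.]

```c
#include <assert.h>

/* Toy stand-ins for NR_PAGE_ORDERS and MIGRATE_TYPES. */
#define NR_ORDERS 3
#define NR_TYPES  4

struct toy_list { int initialized; };

struct toy_area {
	struct toy_list free_list[NR_TYPES];
};

/*
 * Same shape as the proposed for_each_free_list(): the outer loop walks
 * orders; the inner loop's condition uses the comma operator, so `list`
 * is re-pointed at the next freelist before `type < NR_TYPES` is tested.
 */
#define toy_for_each_free_list(list, areas, order) \
	for (order = 0; order < NR_ORDERS; order++) \
		for (unsigned int type = 0; \
		     list = &(areas)[order].free_list[type], \
		     type < NR_TYPES; \
		     type++)

static int count_visited(struct toy_area *areas)
{
	struct toy_list *list;
	unsigned int order;
	int visited = 0;

	toy_for_each_free_list(list, areas, order) {
		list->initialized = 1;
		visited++;
	}
	return visited;
}
```

A single macro call thus visits all NR_ORDERS * NR_TYPES lists, which is
exactly what mark_free_pages() and zone_init_free_lists() need.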

* Re: [PATCH v2 09/22] mm/page_alloc: don't overload migratetype in find_suitable_fallback()
       [not found] ` <20260320-page_alloc-unmapped-v2-9-28bf1bd54f41@google.com>
@ 2026-05-11 13:51   ` Vlastimil Babka (SUSE)
  2026-05-11 16:44     ` Brendan Jackman
  0 siblings, 1 reply; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 13:51 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> This function currently returns a signed integer that encodes status
> in-band, as negative numbers, along with a migratetype.
> 
> This function is about to be updated to a mode where this in-band
> signaling no longer makes sense. Therefore, switch to a more
> explicit/verbose style that encodes the status and migratetype
> separately.
> 
> In the spirit of making things more explicit, also create an enum to
> avoid using magic integer literals with special meanings. This enables
> documenting the values at their definition instead of in one of the
> callers.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
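[Editor's note: the refactoring pattern the commit message describes, an
explicit status enum plus an out-parameter instead of in-band negative
returns, can be sketched roughly as below. Names are illustrative, not
the actual mm/page_alloc.c definitions.]

```c
#include <assert.h>
#include <stdbool.h>

/* Status documented at the definition rather than at a caller. */
enum fallback_result {
	FALLBACK_NONE,	/* no suitable fallback list found */
	FALLBACK_FOUND,	/* *mt_out holds the fallback migratetype */
};

#define DEMO_NR_TYPES 3

static const bool demo_area_empty[DEMO_NR_TYPES] = { true, false, true };

static enum fallback_result find_fallback_demo(int *mt_out)
{
	for (int mt = 0; mt < DEMO_NR_TYPES; mt++) {
		if (!demo_area_empty[mt]) {
			if (mt_out)
				*mt_out = mt;
			return FALLBACK_FOUND;
		}
	}
	return FALLBACK_NONE;
}
```

The migratetype now travels only through the out-parameter, so the return
value never needs to encode two things at once.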



* Re: [PATCH v2 10/22] mm: introduce freetype_t
       [not found] ` <20260320-page_alloc-unmapped-v2-10-28bf1bd54f41@google.com>
@ 2026-05-11 15:34   ` Vlastimil Babka (SUSE)
  2026-05-11 16:49     ` Brendan Jackman
  2026-05-11 18:17   ` Vlastimil Babka (SUSE)
  2026-05-11 18:26   ` Vlastimil Babka (SUSE)
  2 siblings, 1 reply; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 15:34 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> This is preparation for teaching the page allocator to break up free
> pages according to properties that have nothing to do with mobility. For
> example it can be used to allocate pages that are non-present in the
> physmap, or pages that are sensitive in ASI.
> 
> For these usecases, certain allocator behaviours are desirable:
> 
> - A "pool" of pages with the given property is usually available, so
>   that pages can be provided with the correct sensitivity without
>   zeroing/TLB flushing.
> 
> - Pages are physically grouped by the property, so that large
>   allocations rarely have to alter the pagetables due to ASI.
> 
> - The properties can be forced to vary only at a certain fixed address
>   granularity, so that the pagetables can all be pre-allocated. This is
>   desirable because the page allocator will be changing mappings:
>   pre-allocation is a straightforward way to avoid recursive allocations
>   (of pagetables).
> 
> It seems that the existing infrastructure for grouping pages by
> mobility, i.e. pageblocks and migratetypes, serves this purpose pretty
> nicely. However, overloading migratetype itself for this purpose looks
> like a road to maintenance hell. In particular, as soon as such
> properties become orthogonal to migratetypes, it would start to require
> "doubling" the migratetypes.
> 
> Therefore, introduce a new higher-level concept, called "freetype"
> (because it is used to index "free"lists) that can encode extra
> properties, orthogonally to mobility, via flags.
> 
> Since freetypes and migratetypes would be very easy to mix up, freetypes
> are (at least for now) stored in a struct typedef similar to atomic_t.
> This provides type-safety, but comes at the expense of being pretty
> annoying to code with. For instance, freetype_t cannot be compared with
> the == operator. Once this code matures, if the freetype/migratetype
> distinction gets less confusing, it might be wise to drop this
> struct and just use ints.
> 
> Because this will eventually be needed from pageblock-flags.h, put this
> in its own header instead of directly in mmzone.h.
> 
> To try and reduce review pain for such a churny patch, first introduce
> freetypes as nothing but an indirection over migratetypes. The helpers
> concerned with the flags are defined, but only as stubs. Convert
> everything over to using freetypes wherever they are needed to index
> freelists, but maintain references to migratetypes in code that really
> only cares specifically about mobility.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Seems mechanistic enough.

Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

Some nits:

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ac077d98019f3..018622aa19006 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -422,6 +422,37 @@ bool get_pfnblock_bit(const struct page *page, unsigned long pfn,
>  	return test_bit(bitidx + pb_bit, bitmap_word);
>  }
>  
> +/**
> + * __get_pfnblock_freetype - Return the freetype of a pageblock, optionally
> + * ignoring the fact that it's currently isolated.
> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + * @ignore_iso: If isolated, return the migratetype that the block had before
> + *              isolation.
> + */
> +__always_inline freetype_t

'static' too?

> +__get_pfnblock_freetype(const struct page *page, unsigned long pfn,
> +			bool ignore_iso)
> +{
> +	int mt = get_pfnblock_migratetype(page, pfn);
> +
> +	return migrate_to_freetype(mt, 0);
> +}
> +
> +/**
> + * get_pfnblock_migratetype - Return the freetype of a pageblock
> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + *
> + * Return: The freetype of the pageblock
> + */
> +__always_inline freetype_t

And this is declared in a header so the __always_inline is not really
applicable?

(seems we should fix up get_pfnblock_migratetype too)


> +get_pfnblock_freetype(const struct page *page, unsigned long pfn)
> +{
> +	return __get_pfnblock_freetype(page, pfn, 0);
> +}
> +
> +
>  /**
>   * get_pfnblock_migratetype - Return the migratetype of a pageblock
>   * @page: The page within the block of interest

> @@ -2262,10 +2323,18 @@ find_suitable_fallback(struct free_area *area, unsigned int order,
>  
>  	for (i = 0; i < MIGRATE_PCPTYPES - 1 ; i++) {
>  		int fallback_mt = fallbacks[migratetype][i];
> +		/*
> +		 * Fallback to different migratetypes, but currently always with
> +		 * the same freetype flags.
> +		 */
> +		freetype_t fallback_ft = freetype_with_migrate(freetype, fallback_mt);
>  
> -		if (!free_area_empty(area, fallback_mt)) {
> -			if (mt_out)
> -				*mt_out = fallback_mt;
> +		if (freetype_idx(fallback_ft) < 0)
> +			continue;

How can this happen? Is it preparatory?

> +
> +		if (!free_area_empty(area, fallback_ft)) {
> +			if (ft_out)
> +				*ft_out = fallback_ft;
>  			return FALLBACK_FOUND;
>  		}
>  	}
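[Editor's note: the atomic_t-style wrapper the commit message describes
can be sketched in isolation as below; the field and helper names are
illustrative, not the kernel API.]

```c
#include <assert.h>
#include <stdbool.h>

enum demo_migratetype { DEMO_UNMOVABLE, DEMO_MOVABLE, DEMO_MIGRATE_TYPES };

/* Struct wrapper: prevents silently mixing freetypes and migratetypes. */
typedef struct { int ft; } freetype_t;

#define FT_FLAG_SHIFT 8	/* flags live above the migratetype bits */

static inline freetype_t migrate_to_freetype(enum demo_migratetype mt,
					     unsigned int flags)
{
	return (freetype_t){ .ft = (int)mt | (int)(flags << FT_FLAG_SHIFT) };
}

static inline enum demo_migratetype freetype_migrate(freetype_t ft)
{
	return ft.ft & ((1 << FT_FLAG_SHIFT) - 1);
}

/* The struct makes == a compile error, so comparison needs a helper. */
static inline bool freetypes_equal(freetype_t a, freetype_t b)
{
	return a.ft == b.ft;
}
```

Two freetypes with the same mobility but different flags compare unequal,
which is the orthogonality the commit message is after.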

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 11/22] mm: move migratetype definitions to freetype.h
       [not found] ` <20260320-page_alloc-unmapped-v2-11-28bf1bd54f41@google.com>
@ 2026-05-11 15:35   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 15:35 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> Since migratetypes are a sub-element of freetype, move the pure
> definitions into the new freetype.h.
> 
> This will enable referring to these raw types from pageblock-flags.h.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

git coloring agrees it's just moves

Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>



* Re: [PATCH v2 09/22] mm/page_alloc: don't overload migratetype in find_suitable_fallback()
  2026-05-11 13:51   ` [PATCH v2 09/22] mm/page_alloc: don't overload migratetype in find_suitable_fallback() Vlastimil Babka (SUSE)
@ 2026-05-11 16:44     ` Brendan Jackman
  2026-05-11 16:53       ` Vlastimil Babka (SUSE)
  0 siblings, 1 reply; 19+ messages in thread
From: Brendan Jackman @ 2026-05-11 16:44 UTC (permalink / raw)
  To: Vlastimil Babka (SUSE), Brendan Jackman, Borislav Petkov,
	Dave Hansen, Peter Zijlstra, Andrew Morton, David Hildenbrand,
	Wei Xu, Johannes Weiner, Zi Yan, Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On Mon May 11, 2026 at 1:51 PM UTC, Vlastimil Babka (SUSE) wrote:
> On 3/20/26 19:23, Brendan Jackman wrote:
>> This function currently returns a signed integer that encodes status
>> in-band, as negative numbers, along with a migratetype.
>> 
>> This function is about to be updated to a mode where this in-band
>> signaling no longer makes sense. Therefore, switch to a more
>> explicit/verbose style that encodes the status and migratetype
>> separately.
>> 
>> In the spirit of making things more explicit, also create an enum to
>> avoid using magic integer literals with special meanings. This enables
>> documenting the values at their definition instead of in one of the
>> callers.
>> 
>> Signed-off-by: Brendan Jackman <jackmanb@google.com>
>
> Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

Thanks,

This and the prior patch could arguably just be considered independent
cleanups, shall I send them on their own?

Equally if they feel like "churn" I'm happy to keep them in this
patchset.


* Re: [PATCH v2 10/22] mm: introduce freetype_t
  2026-05-11 15:34   ` [PATCH v2 10/22] mm: introduce freetype_t Vlastimil Babka (SUSE)
@ 2026-05-11 16:49     ` Brendan Jackman
  2026-05-11 16:58       ` Vlastimil Babka (SUSE)
  0 siblings, 1 reply; 19+ messages in thread
From: Brendan Jackman @ 2026-05-11 16:49 UTC (permalink / raw)
  To: Vlastimil Babka (SUSE), Brendan Jackman, Borislav Petkov,
	Dave Hansen, Peter Zijlstra, Andrew Morton, David Hildenbrand,
	Wei Xu, Johannes Weiner, Zi Yan, Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On Mon May 11, 2026 at 3:34 PM UTC, Vlastimil Babka (SUSE) wrote:
>> +/**
>> + * __get_pfnblock_freetype - Return the freetype of a pageblock, optionally
>> + * ignoring the fact that it's currently isolated.
>> + * @page: The page within the block of interest
>> + * @pfn: The target page frame number
>> + * @ignore_iso: If isolated, return the migratetype that the block had before
>> + *              isolation.
>> + */
>> +__always_inline freetype_t
>
> 'static' too?

Yup thanks

>
>> +__get_pfnblock_freetype(const struct page *page, unsigned long pfn,
>> +			bool ignore_iso)
>> +{
>> +	int mt = get_pfnblock_migratetype(page, pfn);
>> +
>> +	return migrate_to_freetype(mt, 0);
>> +}
>> +
>> +/**
>> + * get_pfnblock_migratetype - Return the freetype of a pageblock
>> + * @page: The page within the block of interest
>> + * @pfn: The target page frame number
>> + *
>> + * Return: The freetype of the pageblock
>> + */
>> +__always_inline freetype_t
>
> And this is declared in a header so the __always_inline is not really
> applicable?

> (seems we should fix up get_pfnblock_migratetype too)

Um, I think it probably still forces inlining in calls within the same
translation unit?

Anyway I am pretty meh about this, I suspect humans and compilers are
equally bad at making this decision, I was just trying to be consistent
with the code it's replacing.

>> +		/*
>> +		 * Fallback to different migratetypes, but currently always with
>> +		 * the same freetype flags.
>> +		 */
>> +		freetype_t fallback_ft = freetype_with_migrate(freetype, fallback_mt);
>>  
>> -		if (!free_area_empty(area, fallback_mt)) {
>> -			if (mt_out)
>> -				*mt_out = fallback_mt;
>> +		if (freetype_idx(fallback_ft) < 0)
>> +			continue;
>
> How can this happen? Is it preparatory?

Oops, yeah looks like I need to clean up how this happens in the
history and clarify the commit messages.

In a later patch I add an optimisation where we avoid having freelists
for freetypes that never arise in practice. And in those cases
freetype_idx() returns -1.


* Re: [PATCH v2 09/22] mm/page_alloc: don't overload migratetype in find_suitable_fallback()
  2026-05-11 16:44     ` Brendan Jackman
@ 2026-05-11 16:53       ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 16:53 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 5/11/26 18:44, Brendan Jackman wrote:
> On Mon May 11, 2026 at 1:51 PM UTC, Vlastimil Babka (SUSE) wrote:
>> On 3/20/26 19:23, Brendan Jackman wrote:
>>> This function currently returns a signed integer that encodes status
>>> in-band, as negative numbers, along with a migratetype.
>>> 
>>> This function is about to be updated to a mode where this in-band
>>> signaling no longer makes sense. Therefore, switch to a more
>>> explicit/verbose style that encodes the status and migratetype
>>> separately.
>>> 
>>> In the spirit of making things more explicit, also create an enum to
>>> avoid using magic integer literals with special meanings. This enables
>>> documenting the values at their definition instead of in one of the
>>> callers.
>>> 
>>> Signed-off-by: Brendan Jackman <jackmanb@google.com>
>>
>> Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
> 
> Thanks,
> 
> This and the prior patch could arguably just be considered independent
> cleanups, shall I send them on their own?

I guess why not, fewer patches in the patchset :) You're probably going to
need to rebase this on current mm tree anyway for the next posting?

> Equally if they feel like "churn" I'm happy to keep them in this
> patchset.



* Re: [PATCH v2 10/22] mm: introduce freetype_t
  2026-05-11 16:49     ` Brendan Jackman
@ 2026-05-11 16:58       ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 16:58 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 5/11/26 18:49, Brendan Jackman wrote:
> On Mon May 11, 2026 at 3:34 PM UTC, Vlastimil Babka (SUSE) wrote:
>>> +/**
>>> + * __get_pfnblock_freetype - Return the freetype of a pageblock, optionally
>>> + * ignoring the fact that it's currently isolated.
>>> + * @page: The page within the block of interest
>>> + * @pfn: The target page frame number
>>> + * @ignore_iso: If isolated, return the migratetype that the block had before
>>> + *              isolation.
>>> + */
>>> +__always_inline freetype_t
>>
>> 'static' too?
> 
> Yup thanks
> 
>>
>>> +__get_pfnblock_freetype(const struct page *page, unsigned long pfn,
>>> +			bool ignore_iso)
>>> +{
>>> +	int mt = get_pfnblock_migratetype(page, pfn);
>>> +
>>> +	return migrate_to_freetype(mt, 0);
>>> +}
>>> +
>>> +/**
>>> + * get_pfnblock_migratetype - Return the freetype of a pageblock
>>> + * @page: The page within the block of interest
>>> + * @pfn: The target page frame number
>>> + *
>>> + * Return: The freetype of the pageblock
>>> + */
>>> +__always_inline freetype_t
>>
>> And this is declared in a header so the __always_inline is not really
>> applicable?
> 
>> (seems we should fix up get_pfnblock_migratetype too)
> 
> Um, I think it probably still forces inlining in calls within the same
> translation unit?

True, but I don't think we try to do that consciously; seems like it was just
an oversight for get_pfnblock_migratetype, maybe Zi Yan remembers?

> Anyway I am pretty meh about this, I suspect humans and compilers are
> equally bad at making this decision, I was just trying to be consistent
> with the code it's replacing.

True.



* Re: [PATCH v2 12/22] mm: add definitions for allocating unmapped pages
       [not found] ` <20260320-page_alloc-unmapped-v2-12-28bf1bd54f41@google.com>
@ 2026-05-11 18:01   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 18:01 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> Create __GFP_UNMAPPED, which requests pages that are not present in the
> direct map. Since this feature has a cost (e.g. more freelists), it's
> behind a kconfig. Unlike other conditionally-defined GFP flags, it
> doesn't fall back to being 0. This prevents building code that uses
> __GFP_UNMAPPED but doesn't depend on the necessary kconfig, since that
> would lead to invisible security issues.
> 
> Create a freetype flag to record that pages on the freelists with this
> flag are unmapped. This is currently only needed for MIGRATE_UNMOVABLE
> pages, so the freetype encoding remains trivial.
> 
> Also create the corresponding pageblock flag to record the same thing.
> 
> To keep patches from being too overwhelming, the actual implementation
> is added separately, this is just types, Kconfig boilerplate, etc.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Aside from the gfp vs internal flag decision, seems fine.

Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
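[Editor's note: the "no silent fallback" pattern the commit message
describes can be sketched as below; the bit position and the Kconfig
symbol's spelling are invented for the demo.]

```c
#include <assert.h>

#define CONFIG_UNMAPPED_ALLOC 1	/* pretend the Kconfig option is enabled */

#ifdef CONFIG_UNMAPPED_ALLOC
#define __GFP_UNMAPPED (1u << 28)	/* bit position invented for the demo */
#endif
/*
 * Deliberately no "#else / #define __GFP_UNMAPPED 0" here: with the
 * option disabled, code using __GFP_UNMAPPED fails to build rather
 * than silently allocating ordinary mapped pages.
 */

static unsigned int demo_gfp_flags = __GFP_UNMAPPED;
```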



* Re: [PATCH v2 13/22] mm: rejig pageblock mask definitions
       [not found] ` <20260320-page_alloc-unmapped-v2-13-28bf1bd54f41@google.com>
@ 2026-05-11 18:07   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 18:07 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> A later patch will complicate the definition of these masks, this is a
> preparatory patch to make that patch easier to review.
> 
> - More masks will be needed, so add a PAGEBLOCK_ prefix to the names
>   to avoid polluting the "global namespace" too much.
> 
> - This makes MIGRATETYPE_AND_ISO_MASK start to look pretty long. Well,
>   that global mask only exists for quite a specific purpose so just drop
>   it and take advantage of the newly-defined PAGEBLOCK_ISO_MASK.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

LGTM. Could be also part of the immediate cleanup series?

Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>




* Re: [PATCH v2 10/22] mm: introduce freetype_t
       [not found] ` <20260320-page_alloc-unmapped-v2-10-28bf1bd54f41@google.com>
  2026-05-11 15:34   ` [PATCH v2 10/22] mm: introduce freetype_t Vlastimil Babka (SUSE)
@ 2026-05-11 18:17   ` Vlastimil Babka (SUSE)
  2026-05-11 18:26   ` Vlastimil Babka (SUSE)
  2 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 18:17 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> +/**
> + * get_pfnblock_migratetype - Return the freetype of a pageblock

Also just noticed the wrong name here

> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + *
> + * Return: The freetype of the pageblock
> + */
> +__always_inline freetype_t
> +get_pfnblock_freetype(const struct page *page, unsigned long pfn)
> +{
> + return __get_pfnblock_freetype(page, pfn, 0);
> +}



* Re: [PATCH v2 10/22] mm: introduce freetype_t
       [not found] ` <20260320-page_alloc-unmapped-v2-10-28bf1bd54f41@google.com>
  2026-05-11 15:34   ` [PATCH v2 10/22] mm: introduce freetype_t Vlastimil Babka (SUSE)
  2026-05-11 18:17   ` Vlastimil Babka (SUSE)
@ 2026-05-11 18:26   ` Vlastimil Babka (SUSE)
  2 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 18:26 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -422,6 +422,37 @@ bool get_pfnblock_bit(const struct page *page, unsigned long pfn,
>  	return test_bit(bitidx + pb_bit, bitmap_word);
>  }
>  
> +/**
> + * __get_pfnblock_freetype - Return the freetype of a pageblock, optionally
> + * ignoring the fact that it's currently isolated.
> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + * @ignore_iso: If isolated, return the migratetype that the block had before
> + *              isolation.
> + */
> +__always_inline freetype_t
> +__get_pfnblock_freetype(const struct page *page, unsigned long pfn,
> +			bool ignore_iso)

Hm I also noticed ignore_iso is ... ignored until later patch.

> +{
> +	int mt = get_pfnblock_migratetype(page, pfn);
> +
> +	return migrate_to_freetype(mt, 0);
> +}
> +
> +/**
> + * get_pfnblock_migratetype - Return the freetype of a pageblock
> + * @page: The page within the block of interest
> + * @pfn: The target page frame number
> + *
> + * Return: The freetype of the pageblock
> + */
> +__always_inline freetype_t
> +get_pfnblock_freetype(const struct page *page, unsigned long pfn)
> +{
> +	return __get_pfnblock_freetype(page, pfn, 0);

And here it passes 0 to bool.

> +}
> +
> +


* Re: [PATCH v2 14/22] mm: encode freetype flags in pageblock flags
       [not found] ` <20260320-page_alloc-unmapped-v2-14-28bf1bd54f41@google.com>
@ 2026-05-11 18:29   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 18:29 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> In preparation for implementing allocation from FREETYPE_UNMAPPED lists.
> 
> Since it works nicely with the existing allocator logic, and also offers
> a simple way to amortize TLB flushing costs, __GFP_UNMAPPED will be
> implemented by changing mappings at pageblock granularity. Therefore,
> encode the mapping state in the pageblock flags.
> 
> Also add the necessary logic to record this from a freetype, and
> reconstruct a freetype from the pageblock flags.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

nit:

> @@ -434,9 +431,20 @@ __always_inline freetype_t
>  __get_pfnblock_freetype(const struct page *page, unsigned long pfn,
>  			bool ignore_iso)
>  {
> -	int mt = get_pfnblock_migratetype(page, pfn);
> +	unsigned long mask = PAGEBLOCK_FREETYPE_MASK;
> +	enum migratetype migratetype;
> +	unsigned int ft_flags;
> +	unsigned long flags;
>  
> -	return migrate_to_freetype(mt, 0);
> +	flags = __get_pfnblock_flags_mask(page, pfn, mask);
> +	ft_flags = (flags & PAGEBLOCK_FREETYPE_FLAGS_MASK) >> PB_freetype_flags;
> +
> +	migratetype = flags & PAGEBLOCK_MIGRATETYPE_MASK;
> +#ifdef CONFIG_MEMORY_ISOLATION
> +	if (!ignore_iso && flags & BIT(PB_migrate_isolate))

			  (flags & BIT(PB_migrate_isolate)) ?

> +		migratetype = MIGRATE_ISOLATE;
> +#endif
> +	return migrate_to_freetype(migratetype, ft_flags);
>  }
>  
>  /**
> @@ -570,6 +578,15 @@ static void set_pageblock_migratetype(struct page *page,
>  				  PAGEBLOCK_MIGRATETYPE_MASK | PAGEBLOCK_ISO_MASK);
>  }
>  
> +static inline void set_pageblock_freetype_flags(struct page *page,
> +						unsigned int ft_flags)
> +{
> +	unsigned int flags = ft_flags << PB_freetype_flags;
> +
> +	__set_pfnblock_flags_mask(page, page_to_pfn(page), flags,
> +				  PAGEBLOCK_FREETYPE_FLAGS_MASK);
> +}
> +
>  void __meminit init_pageblock_migratetype(struct page *page,
>  					  enum migratetype migratetype,
>  					  bool isolate)
> @@ -593,7 +610,7 @@ void __meminit init_pageblock_migratetype(struct page *page,
>  		flags |= BIT(PB_migrate_isolate);
>  #endif
>  	__set_pfnblock_flags_mask(page, page_to_pfn(page), flags,
> -				  PAGEBLOCK_MIGRATETYPE_MASK | PAGEBLOCK_ISO_MASK);
> +				  PAGEBLOCK_FREETYPE_MASK);
>  }
>  
>  #ifdef CONFIG_DEBUG_VM
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread
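[Editor's note: the mask-and-shift encoding in the hunk above can be
illustrated with a small round-trip sketch. Bit positions and widths are
invented; only the layout (migratetype in the low bits, freetype flags
directly above them) follows the patch.]

```c
#include <assert.h>

#define PB_migrate_bits		3
#define PB_freetype_flags	PB_migrate_bits	/* flags start above mt bits */
#define PB_freetype_flag_bits	2

#define PAGEBLOCK_MIGRATETYPE_MASK \
	((1ul << PB_migrate_bits) - 1)
#define PAGEBLOCK_FREETYPE_FLAGS_MASK \
	(((1ul << PB_freetype_flag_bits) - 1) << PB_freetype_flags)

static unsigned long encode_pb(unsigned int mt, unsigned int ft_flags)
{
	return (mt & PAGEBLOCK_MIGRATETYPE_MASK) |
	       ((unsigned long)ft_flags << PB_freetype_flags);
}

static unsigned int decode_mt(unsigned long flags)
{
	return flags & PAGEBLOCK_MIGRATETYPE_MASK;
}

static unsigned int decode_ft_flags(unsigned long flags)
{
	return (flags & PAGEBLOCK_FREETYPE_FLAGS_MASK) >> PB_freetype_flags;
}
```

Encoding and then decoding recovers both halves independently, which is
what lets __get_pfnblock_freetype() reconstruct a freetype from one read
of the pageblock bitmap.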

* Re: [PATCH v2 15/22] mm/page_alloc: remove ifdefs from pindex helpers
       [not found] ` <20260320-page_alloc-unmapped-v2-15-28bf1bd54f41@google.com>
@ 2026-05-11 18:30   ` Vlastimil Babka (SUSE)
  2026-05-12  9:49     ` Brendan Jackman
  0 siblings, 1 reply; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-11 18:30 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> The ifdefs are not technically needed here, everything used here is
> always defined.
> 
> They aren't doing much harm right now but a following patch will
> complicate these functions. Switching to IS_ENABLED() makes the code a
> bit less tiresome to read.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

Also good for prep series?


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 15/22] mm/page_alloc: remove ifdefs from pindex helpers
  2026-05-11 18:30   ` [PATCH v2 15/22] mm/page_alloc: remove ifdefs from pindex helpers Vlastimil Babka (SUSE)
@ 2026-05-12  9:49     ` Brendan Jackman
  0 siblings, 0 replies; 19+ messages in thread
From: Brendan Jackman @ 2026-05-12  9:49 UTC (permalink / raw)
  To: Vlastimil Babka (SUSE), Brendan Jackman, Borislav Petkov,
	Dave Hansen, Peter Zijlstra, Andrew Morton, David Hildenbrand,
	Wei Xu, Johannes Weiner, Zi Yan, Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On Mon May 11, 2026 at 6:30 PM UTC, Vlastimil Babka (SUSE) wrote:
> On 3/20/26 19:23, Brendan Jackman wrote:
>> The ifdefs are not technically needed here, everything used here is
>> always defined.
>> 
>> They aren't doing much harm right now but a following patch will
>> complicate these functions. Switching to IS_ENABLED() makes the code a
>> bit less tiresome to read.
>> 
>> Signed-off-by: Brendan Jackman <jackmanb@google.com>
>
> Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
>
> Also good for prep series?

Um, shrug. I guess I will send a series with all the things that could
_arguably_ be considered "cleanups", and then I will let you/David/whoever
else chimes in decide which to pick and which ones would be "churn".

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 16/22] mm/page_alloc: separate pcplists by freetype flags
       [not found] ` <20260320-page_alloc-unmapped-v2-16-28bf1bd54f41@google.com>
@ 2026-05-13  8:46   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-13  8:46 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, patrick.roy, Itazuri, Takahiro,
	Andy Lutomirski, David Kaplan, Thomas Gleixner, Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> The normal freelists are already separated by this flag, so now update
> the pcplists accordingly. This follows the most "obvious" design where
> __GFP_UNMAPPED is supported at arbitrary orders.
> 
> If necessary, it would be possible to avoid the proliferation of
> pcplists by restricting orders that can be allocated from them with this
> FREETYPE_UNMAPPED.
> 
> On the other hand, there's currently no usecase for movable/reclaimable
> unmapped memory, and constraining the migratetype doesn't have any
> tricky plumbing implications. So, take advantage of that and assume that
> FREETYPE_UNMAPPED implies MIGRATE_UNMOVABLE.
> 
> Overall, this just takes the existing space of pindices and tacks
> another bank on the end. For !THP this is just 4 more lists, with THP
> there is a single additional list for hugepages.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>

Reviewed-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Nit:

> ---
>  include/linux/mmzone.h | 11 ++++++++++-
>  mm/page_alloc.c        | 44 +++++++++++++++++++++++++++++++++-----------
>  2 files changed, 43 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index af662e4912591..65efc08152b0c 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -778,8 +778,17 @@ enum zone_watermarks {
>  #else
>  #define NR_PCP_THP 0
>  #endif
> +/*
> + * FREETYPE_UNMAPPED can currently only be used with MIGRATE_UNMOVABLE, no for

                                                                        so ^

> + * those there's no need to encode the migratetype in the pindex.
> + */
> +#ifdef CONFIG_PAGE_ALLOC_UNMAPPED
> +#define NR_UNMAPPED_PCP_LISTS (PAGE_ALLOC_COSTLY_ORDER + 1 + !!NR_PCP_THP)
> +#else
> +#define NR_UNMAPPED_PCP_LISTS 0
> +#endif
>  #define NR_LOWORDER_PCP_LISTS (MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1))
> -#define NR_PCP_LISTS (NR_LOWORDER_PCP_LISTS + NR_PCP_THP)
> +#define NR_PCP_LISTS (NR_LOWORDER_PCP_LISTS + NR_PCP_THP + NR_UNMAPPED_PCP_LISTS)
>  
>  /*
>   * Flags used in pcp->flags field.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f125eae790f73..53848312a0c21 100644

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 18/22] mm/page_alloc: introduce ALLOC_NOBLOCK
       [not found] ` <20260320-page_alloc-unmapped-v2-18-28bf1bd54f41@google.com>
@ 2026-05-13  9:43   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-13  9:43 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> This flag is set unless we can be sure the caller isn't in an atomic
> context.
> 
> The allocator will soon start needing to call set_direct_map_* APIs
> which cannot be called with IRQs off. It will need to do this even
> before direct reclaim is possible.
> 
> Despite the fact that, in principle, ALLOC_NOBLOCK is distinct from
> __GFP_DIRECT_RECLAIM, in order to avoid introducing a GFP flag, just
> infer the former based on whether the caller set the latter. This means
> that, in practice, ALLOC_NOBLOCK is just !__GFP_DIRECT_RECLAIM, except
> that it is not influenced by gfp_allowed_mask. This could change later,
> though.

I don't think it should change later? We wouldn't want false positives
during boot, or what do you have in mind?
I wonder if the implementation of the "not influenced" is correct though...

> Call it ALLOC_NOBLOCK in order to try and mitigate confusion vs the
> recently-removed ALLOC_NON_BLOCK, which meant something different.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>
> ---
>  mm/internal.h   |  1 +
>  mm/page_alloc.c | 29 ++++++++++++++++++++++-------
>  2 files changed, 23 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index cc19a90a7933f..865991aca06ea 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1431,6 +1431,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
>  #define ALLOC_TRYLOCK		0x400 /* Only use spin_trylock in allocation path */
>  #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
> +#define ALLOC_NOBLOCK	       0x1000 /* Caller may be atomic */
>  
>  /* Flags that allow allocations below the min watermark. */
>  #define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9a07c552a1f8a..83d06a6db6433 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4608,6 +4608,8 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
>  		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
>  
>  	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
> +		alloc_flags |= ALLOC_NOBLOCK;

When this is called from __alloc_pages_slowpath(), gfp_allowed_mask is
already applied, so it will be influenced.

> +
>  		/*
>  		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
>  		 * if it can't schedule.
> @@ -4801,14 +4803,13 @@ check_retry_cpuset(int cpuset_mems_cookie, struct alloc_context *ac)
>  
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> -						struct alloc_context *ac)
> +		       struct alloc_context *ac, unsigned int alloc_flags)
>  {
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	bool can_compact = can_direct_reclaim && gfp_compaction_allowed(gfp_mask);
>  	bool nofail = gfp_mask & __GFP_NOFAIL;
>  	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
>  	struct page *page = NULL;
> -	unsigned int alloc_flags;
>  	unsigned long did_some_progress;
>  	enum compact_priority compact_priority;
>  	enum compact_result compact_result;
> @@ -4860,7 +4861,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * kswapd needs to be woken up, and to avoid the cost of setting up
>  	 * alloc_flags precisely. So we do that now.
>  	 */
> -	alloc_flags = gfp_to_alloc_flags(gfp_mask, order);
> +	alloc_flags |= gfp_to_alloc_flags(gfp_mask, order);

Is it safe to just combine them? You come in with ALLOC_WMARK_LOW and
combine it with the ALLOC_WMARK_MIN from gfp_to_alloc_flags(), but these
are not bit flags; I think you end up with ALLOC_WMARK_LOW effectively.
Probably you need to pass the old alloc_flags to gfp_to_alloc_flags(),
mask out everything but ALLOC_NOBLOCK from it and combine with the newly
calculated alloc_flags. By not recomputing ALLOC_NOBLOCK you also avoid
the problem pointed out above?

(or we decide to not use gfp flag but a new function and then it's more like
what alloc_frozen_pages_nolock_noprof() does).

>  
>  	/*
>  	 * We need to recalculate the starting point for the zonelist iterator
> @@ -5086,6 +5087,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	return page;
>  }
>  
> +static inline unsigned int init_alloc_flags(gfp_t gfp_mask, unsigned int flags)
> +{
> +	/*
> +	 * If the caller allowed __GFP_DIRECT_RECLAIM, they can't be atomic.
> +	 * Note this is a separate determination from whether direct reclaim is
> +	 * actually allowed, it must happen before applying gfp_allowed_mask.
> +	 */
> +	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
> +		flags |= ALLOC_NOBLOCK;
> +	return flags;
> +}
> +
>  static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  		int preferred_nid, nodemask_t *nodemask,
>  		struct alloc_context *ac, gfp_t *alloc_gfp,
> @@ -5166,7 +5179,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  	struct list_head *pcp_list;
>  	struct alloc_context ac;
>  	gfp_t alloc_gfp;
> -	unsigned int alloc_flags = ALLOC_WMARK_LOW;
> +	unsigned int alloc_flags = init_alloc_flags(gfp, ALLOC_WMARK_LOW);
>  	int nr_populated = 0, nr_account = 0;
>  
>  	/*
> @@ -5307,7 +5320,7 @@ struct page *__alloc_frozen_pages_noprof(gfp_t gfp, unsigned int order,
>  		int preferred_nid, nodemask_t *nodemask)
>  {
>  	struct page *page;
> -	unsigned int alloc_flags = ALLOC_WMARK_LOW;
> +	unsigned int alloc_flags = init_alloc_flags(gfp, ALLOC_WMARK_LOW);
>  	gfp_t alloc_gfp; /* The gfp_t that was actually used for allocation */
>  	struct alloc_context ac = { };
>  
> @@ -5352,7 +5365,7 @@ struct page *__alloc_frozen_pages_noprof(gfp_t gfp, unsigned int order,
>  	 */
>  	ac.nodemask = nodemask;
>  
> -	page = __alloc_pages_slowpath(alloc_gfp, order, &ac);
> +	page = __alloc_pages_slowpath(alloc_gfp, order, &ac, alloc_flags);
>  
>  out:
>  	if (memcg_kmem_online() && (gfp & __GFP_ACCOUNT) && page &&
> @@ -7872,11 +7885,13 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
>  	 */
>  	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC | __GFP_COMP
>  			| gfp_flags;
> -	unsigned int alloc_flags = ALLOC_TRYLOCK;
> +	unsigned int alloc_flags = init_alloc_flags(alloc_gfp, ALLOC_TRYLOCK);
>  	struct alloc_context ac = { };
>  	struct page *page;
>  
>  	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
> +	VM_WARN_ON_ONCE(!(alloc_flags & ALLOC_NOBLOCK));
> +
>  	/*
>  	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
>  	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 19/22] mm/page_alloc: implement __GFP_UNMAPPED allocations
       [not found] ` <20260320-page_alloc-unmapped-v2-19-28bf1bd54f41@google.com>
@ 2026-05-13 15:43   ` Vlastimil Babka (SUSE)
  0 siblings, 0 replies; 19+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-05-13 15:43 UTC (permalink / raw)
  To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
	Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
	Lorenzo Stoakes
  Cc: linux-mm, linux-kernel, x86, rppt, Sumit Garg, derkling, reijiw,
	Will Deacon, rientjes, Kalyazin, Nikita, patrick.roy,
	Itazuri, Takahiro, Andy Lutomirski, David Kaplan, Thomas Gleixner,
	Yosry Ahmed

On 3/20/26 19:23, Brendan Jackman wrote:
> Currently __GFP_UNMAPPED allocs will always fail because, although the
> lists exist to hold them, there is no way to actually create an unmapped
> page block. This commit adds one, and also the logic to map it back
> again when that's needed.
> 
> Doing this at pageblock granularity ensures that the pageblock flags can
> be used to infer which freetype a page belongs to. It also provides nice
> batching of TLB flushes, and also avoids creating too much unnecessary
> TLB fragmentation in the physmap.
> 
> There are some functional requirements for flipping a block:
> 
>  - Unmapping requires a TLB shootdown, meaning IRQs must be enabled.
> 
>  - Because the main usecase of this feature is to protect against CPU
>    exploits, when a block is mapped it needs to be zeroed to ensure no
>    residual data is available to attackers. Zeroing a block with a
>    spinlock held seems undesirable.

Did I overlook something, or does this patch not do the whole block
zeroing? Or is it handled by set_direct_map_valid_noflush() itself?

>  - Updating the pagetables might require allocating a pagetable to break
>    down a huge page. This would deadlock if the zone lock was held.
> 
> This makes allocations that need to change sensitivity _somewhat_
> similar to those that need to fallback to a different migratetype. But,
> the locking requirements mean that this can't just be squashed into the
> existing "fallback" allocator logic, instead a new allocator path just
> for this purpose is needed.
> 
> The new path is assumed to be much cheaper than the really heavyweight
> stuff like compaction and reclaim. But at present it is treated as less

Uhh, speaking of compaction and reclaim... we rely on finding a whole free
pageblock in order to flip it. If that doesn't exist, the whole
get_page_from_freelist() will fail, and we might enter the
reclaim/compaction cycle in __allow_pages_slowpath(). But since we might
ultimately want an order-0 allocation, there won't be any compaction
attempted, because that code won't know we failed to flip a pageblock. And
the watermarks might look good and prevent reclaim as well I think? We
should somehow indicate this, and handle accordingly. Might not be trivial.
Or maybe reuse pageblock isolation code to do the migrations directly in
__rmqueue_direct_map?

> desirable than the mobility-related "fallback" and "stealing" logic.
> This might turn out to need revision (in particular, maybe it's a
> problem that __rmqueue_steal(), which causes fragmentation, happens
> before __rmqueue_direct_map()), but that should be treated as a subsequent
> optimisation project.
> 
> This currently forbids __GFP_ZERO, this is just to keep the patch from
> getting too large, the next patch will remove this restriction.
> 
> Signed-off-by: Brendan Jackman <jackmanb@google.com>
> ---
>  include/linux/gfp.h |  11 +++-
>  mm/Kconfig          |   4 +-
>  mm/page_alloc.c     | 171 ++++++++++++++++++++++++++++++++++++++++++++++++----
>  3 files changed, 170 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 34a38c420e84a..2d8279c6300d3 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -24,6 +24,7 @@ struct mempolicy;
>  static inline freetype_t gfp_freetype(const gfp_t gfp_flags)
>  {
>  	int migratetype;
> +	unsigned int ft_flags = 0;
>  
>  	VM_WARN_ON((gfp_flags & GFP_MOVABLE_MASK) == GFP_MOVABLE_MASK);
>  	BUILD_BUG_ON((1UL << GFP_MOVABLE_SHIFT) != ___GFP_MOVABLE);
> @@ -40,7 +41,15 @@ static inline freetype_t gfp_freetype(const gfp_t gfp_flags)
>  			>> GFP_MOVABLE_SHIFT;
>  	}
>  
> -	return migrate_to_freetype(migratetype, 0);
> +#ifdef CONFIG_PAGE_ALLOC_UNMAPPED
> +	if (gfp_flags & __GFP_UNMAPPED) {
> +		if (WARN_ON_ONCE(migratetype != MIGRATE_UNMOVABLE))
> +			migratetype = MIGRATE_UNMOVABLE;
> +		ft_flags |= FREETYPE_UNMAPPED;
> +	}
> +#endif
> +
> +	return migrate_to_freetype(migratetype, ft_flags);
>  }
>  #undef GFP_MOVABLE_MASK
>  #undef GFP_MOVABLE_SHIFT
> diff --git a/mm/Kconfig b/mm/Kconfig
> index b915af74d33cc..e4cb52149acad 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -1505,8 +1505,8 @@ config MERMAP_KUNIT_TEST
>  
>  	  If unsure, say N.
>  
> -endmenu
> -
>  config PAGE_ALLOC_UNMAPPED
>  	bool "Support allocating pages that aren't in the direct map" if COMPILE_TEST
>  	default COMPILE_TEST
> +
> +endmenu
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 83d06a6db6433..710ee9f46d467 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -34,6 +34,7 @@
>  #include <linux/folio_batch.h>
>  #include <linux/memory_hotplug.h>
>  #include <linux/nodemask.h>
> +#include <linux/set_memory.h>
>  #include <linux/vmstat.h>
>  #include <linux/fault-inject.h>
>  #include <linux/compaction.h>
> @@ -1002,6 +1003,26 @@ static void change_pageblock_range(struct page *pageblock_page,
>  	}
>  }
>  
> +/*
> + * Can pages of these two freetypes be combined into a single higher-order free
> + * page?
> + */
> +static inline bool can_merge_freetypes(freetype_t a, freetype_t b)
> +{
> +	if (freetypes_equal(a, b))
> +		return true;
> +
> +	if (!migratetype_is_mergeable(free_to_migratetype(a)) ||
> +	    !migratetype_is_mergeable(free_to_migratetype(b)))
> +		return false;
> +
> +	/*
> +	 * Mustn't "just" merge pages with different freetype flags, changing
> +	 * those requires updating pagetables.
> +	 */
> +	return freetype_flags(a) == freetype_flags(b);
> +}
> +
>  /*
>   * Freeing function for a buddy system allocator.
>   *
> @@ -1070,9 +1091,7 @@ static inline void __free_one_page(struct page *page,
>  			buddy_ft = get_pfnblock_freetype(buddy, buddy_pfn);
>  			buddy_mt = free_to_migratetype(buddy_ft);
>  
> -			if (migratetype != buddy_mt &&
> -			    (!migratetype_is_mergeable(migratetype) ||
> -			     !migratetype_is_mergeable(buddy_mt)))
> +			if (!can_merge_freetypes(freetype, buddy_ft))
>  				goto done_merging;
>  		}
>  
> @@ -1089,7 +1108,9 @@ static inline void __free_one_page(struct page *page,
>  			/*
>  			 * Match buddy type. This ensures that an
>  			 * expand() down the line puts the sub-blocks
> -			 * on the right freelists.
> +			 * on the right freelists. Freetype flags are
> +			 * already set correctly because of
> +			 * can_merge_freetypes().
>  			 */
>  			change_pageblock_range(buddy, order, migratetype);
>  		}
> @@ -1982,6 +2003,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  	struct free_area *area;
>  	struct page *page;
>  
> +	if (freetype_idx(freetype) < 0)
> +		return NULL;
> +
>  	/* Find a page of the appropriate size in the preferred list */
>  	for (current_order = order; current_order < NR_PAGE_ORDERS; ++current_order) {
>  		enum migratetype migratetype = free_to_migratetype(freetype);
> @@ -3324,6 +3348,119 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z,
>  #endif
>  }
>  
> +#ifdef CONFIG_PAGE_ALLOC_UNMAPPED
> +/* Try to allocate a page by mapping/unmapping a block from the direct map. */
> +static inline struct page *
> +__rmqueue_direct_map(struct zone *zone, unsigned int request_order,
> +		     unsigned int alloc_flags, freetype_t freetype)
> +{
> +	unsigned int ft_flags_other = freetype_flags(freetype) ^ FREETYPE_UNMAPPED;
> +	freetype_t ft_other = migrate_to_freetype(free_to_migratetype(freetype),
> +						  ft_flags_other);
> +	bool want_mapped = !(freetype_flags(freetype) & FREETYPE_UNMAPPED);
> +	enum rmqueue_mode rmqm = RMQUEUE_NORMAL;

Why not RMQUEUE_CLAIM? We want to change the migratetype to ours as well,
not just the unmapped flag?

> +	unsigned long irq_flags;
> +	int nr_pageblocks;
> +	struct page *page;
> +	int alloc_order;
> +	int err;
> +
> +	if (freetype_idx(ft_other) < 0)
> +		return NULL;
> +
> +	/*
> +	 * Might need a TLB shootdown. Even if IRQs are on this isn't
> +	 * safe if the caller holds a lock (in case the other CPUs need that
> +	 * lock to handle the shootdown IPI).
> +	 */
> +	if (alloc_flags & ALLOC_NOBLOCK)
> +		return NULL;
> +
> +	if (!can_set_direct_map())
> +		return NULL;
> +
> +	lockdep_assert(!irqs_disabled() || unlikely(early_boot_irqs_disabled));
> +
> +	/*
> +	 * Need to [un]map a whole pageblock (otherwise it might require
> +	 * allocating pagetables). First allocate it.
> +	 */
> +	alloc_order = max(request_order, pageblock_order);
> +	nr_pageblocks = 1 << (alloc_order - pageblock_order);
> +	zone_lock_irqsave(zone, irq_flags);
> +	page = __rmqueue(zone, alloc_order, ft_other, alloc_flags, &rmqm);
> +	zone_unlock_irqrestore(zone, irq_flags);
> +	if (!page)
> +		return NULL;
> +
> +	/*
> +	 * Now that IRQs are on it's safe to do a TLB shootdown, and now that we
> +	 * released the zone lock it's possible to allocate a pagetable if
> +	 * needed to split up a huge page.
> +	 *
> +	 * Note that modifying the direct map may need to allocate pagetables.
> +	 * What about unbounded recursion? Here are the assumptions that make it
> +	 * safe:
> +	 *
> +	 * - The direct map starts out fully mapped at boot. (This is not really
> +	 *   an assumption" as its in direct control of page_alloc.c).
> +	 *
> +	 * - Once pages in the direct map are broken down, they are not
> +	 *   re-aggregated into larger pages again.
> +	 *
> +	 * - Pagetables are never allocated with __GFP_UNMAPPED.
> +	 *
> +	 * Under these assumptions, a pagetable might need to be allocated while
> +	 * _unmapping_ stuff from the direct map during a __GFP_UNMAPPED
> +	 * allocation. But, the allocation of that pagetable never requires
> +	 * allocating a further pagetable.
> +	 */
> +	err = set_direct_map_valid_noflush(page,
> +				nr_pageblocks << pageblock_order, want_mapped);
> +	if (err == -ENOMEM || WARN_ONCE(err, "err=%d\n", err)) {
> +		zone_lock_irqsave(zone, irq_flags);
> +		__free_one_page(page, page_to_pfn(page), zone,
> +				alloc_order, freetype, FPI_SKIP_REPORT_NOTIFY);
> +		zone_unlock_irqrestore(zone, irq_flags);
> +		return NULL;
> +	}
> +
> +	if (!want_mapped) {
> +		unsigned long start = (unsigned long)page_address(page);
> +		unsigned long end = start + (nr_pageblocks << (pageblock_order + PAGE_SHIFT));
> +
> +		flush_tlb_kernel_range(start, end);
> +	}
> +
> +	for (int i = 0; i < nr_pageblocks; i++) {
> +		struct page *block_page = page + (pageblock_nr_pages * i);
> +
> +		set_pageblock_freetype_flags(block_page, freetype_flags(freetype));
> +	}
> +
> +	if (request_order >= alloc_order)
> +		return page;
> +
> +	/* Free any remaining pages in the block. */
> +	zone_lock_irqsave(zone, irq_flags);
> +	for (unsigned int i = request_order; i < alloc_order; i++) {
> +		struct page *page_to_free = page + (1 << i);
> +
> +		__free_one_page(page_to_free, page_to_pfn(page_to_free), zone,
> +			i, freetype, FPI_SKIP_REPORT_NOTIFY);
> +	}

Could expand() be used here?

> +	zone_unlock_irqrestore(zone, irq_flags);
> +
> +	return page;
> +}
> +#else /* CONFIG_PAGE_ALLOC_UNMAPPED */
> +static inline struct page *__rmqueue_direct_map(struct zone *zone, unsigned int request_order,
> +				unsigned int alloc_flags, freetype_t freetype)
> +{
> +	return NULL;
> +}
> +#endif /* CONFIG_PAGE_ALLOC_UNMAPPED */
> +
>  static __always_inline
>  struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
>  			   unsigned int order, unsigned int alloc_flags,
> @@ -3331,8 +3468,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
>  {
>  	struct page *page;
>  	unsigned long flags;
> -	freetype_t ft_high = freetype_with_migrate(freetype,
> -						       MIGRATE_HIGHATOMIC);
> +	freetype_t ft_high = freetype_with_migrate(freetype, MIGRATE_HIGHATOMIC);
>  
>  	do {
>  		page = NULL;
> @@ -3357,13 +3493,15 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
>  			 */
>  			if (!page && (alloc_flags & (ALLOC_OOM|ALLOC_HARDER)))
>  				page = __rmqueue_smallest(zone, order, ft_high);
> -
> -			if (!page) {
> -				zone_unlock_irqrestore(zone, flags);
> -				return NULL;
> -			}
>  		}
>  		zone_unlock_irqrestore(zone, flags);
> +
> +		/* Try changing direct map, now we've released the zone lock */
> +		if (!page)
> +			page = __rmqueue_direct_map(zone, order, alloc_flags, freetype);
> +		if (!page)
> +			return NULL;
> +
>  	} while (check_new_pages(page, order));
>  
>  	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
> @@ -3587,6 +3725,8 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
>  static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  						bool force)
>  {
> +	freetype_t ft_high = freetype_with_migrate(ac->freetype,
> +					MIGRATE_HIGHATOMIC);
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
>  	struct zoneref *z;
> @@ -3595,6 +3735,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  	int order;
>  	int ret;
>  
> +	if (freetype_idx(ft_high) < 0)
> +		return false;
> +
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->highest_zoneidx,
>  								ac->nodemask) {
>  		/*
> @@ -3608,8 +3751,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  		zone_lock_irqsave(zone, flags);
>  		for (order = 0; order < NR_PAGE_ORDERS; order++) {
>  			struct free_area *area = &(zone->free_area[order]);
> -			freetype_t ft_high = freetype_with_migrate(ac->freetype,
> -							MIGRATE_HIGHATOMIC);
>  			unsigned long size;
>  
>  			page = get_page_from_free_area(area, ft_high);
> @@ -5109,6 +5250,10 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  	ac->nodemask = nodemask;
>  	ac->freetype = gfp_freetype(gfp_mask);
>  
> +	/* Not implemented yet. */
> +	if (freetype_flags(ac->freetype) & FREETYPE_UNMAPPED && gfp_mask & __GFP_ZERO)
> +		return false;
> +
>  	if (cpusets_enabled()) {
>  		*alloc_gfp |= __GFP_HARDWALL;
>  		/*
> 


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 00/22] mm: Add __GFP_UNMAPPED
       [not found] <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
                   ` (10 preceding siblings ...)
       [not found] ` <20260320-page_alloc-unmapped-v2-19-28bf1bd54f41@google.com>
@ 2026-05-13 16:17 ` Gregory Price
  11 siblings, 0 replies; 19+ messages in thread
From: Gregory Price @ 2026-05-13 16:17 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Borislav Petkov, Dave Hansen, Peter Zijlstra, Andrew Morton,
	David Hildenbrand, Vlastimil Babka, Wei Xu, Johannes Weiner,
	Zi Yan, Lorenzo Stoakes, linux-mm, linux-kernel, x86, rppt,
	Sumit Garg, derkling, reijiw, Will Deacon, rientjes,
	Kalyazin, Nikita, patrick.roy, Itazuri, Takahiro, Andy Lutomirski,
	David Kaplan, Thomas Gleixner, Yosry Ahmed

On Fri, Mar 20, 2026 at 06:23:24PM +0000, Brendan Jackman wrote:
> 
> Because of these ambitious usecases, it's core to this proposal that the
> feature
> overloading the concept of a migratetype, this extension is done by
> adding a new concept on top of migratetype: the _freetype_. A freetype
> is basically just a migratetype plus some flags, and it replaces
> migratetypes wherever the latter is currently used as to index free
> pages.
>

I'm a bit confused about the need for an additional level of indirection
instead of just adding a new migratetype. You still end up increasing
the migratetype matrix, just with a new dimension.

(apologies if this was covered in prior work or discussions, just now
 plugging myself into the series).

Why not simply have an unmapped migratetype, for example, and on steal
you convert it to movable or whatever the preference is? 

> .:::: Hacky bits: simplistic secretmem integration
> 
> The secretmem integration leaves the main optimisations on the table;
> the security-required flushes of the mermap areas are implemented via
> distinct tlb_flush_mm() calls. It should be possible to amortize the
> mermap TLB flushes completely into the normal VMA flushing. However, as
> far as I know there is no performance-sensitive usecase for secretmem.
> So, I've just implemented the minimal adoption. This will at least avoid
> fragmentation of the direct map, even if it doesn't reduce TLB flushing.
> If anyone knows of a workload that might benefit from dropping that
> flushing, let me know!

Crossing a couple of streams here, I wonder if there are some mechanisms
introduced by MST's latest multi-zeroing-avoidance [1] code that might
help deal with the problem here.

MST wired up an optional user_addr into the buddy that allows us to sink
the zeroing step for folio_zero_user (or folio_user_zero or whatever)
into the post_alloc_hook - which includes some cache flushing.

That conveniently gives you what you need for a TLB flush AND an
indicator that the allocation is intended for userland.

Unless I'm fundamentally misunderstanding something, the pattern at least
seems similar.

In that sense, does this just become a post_alloc_hook that unmaps the
memory after zeroing and allocation?

I get the intent is to have the majority of memory unmapped by default,
and then steal those blocks and map them as the kernel requires more
memory, but I wonder if it's cleaner to do it the other way and simply
have the buddy unmap on alloc after zeroing, and remap on free.

Seems like the free path would be trivial: check whether the page is in
the direct map and, if not, remap it and move on. Entirely hidden from
existing users.

So, maybe a stupid question:  Was the opposite mechanism considered
(unmap on alloc sunk into the buddy), and if so was it rejected for some
other reason?

[1] https://lore.kernel.org/linux-mm/cover.1778616612.git.mst@redhat.com/

~Gregory

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2026-05-13 16:17 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
     [not found] ` <20260320-page_alloc-unmapped-v2-8-28bf1bd54f41@google.com>
2026-05-11 13:46   ` [PATCH v2 08/22] mm: introduce for_each_free_list() Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-9-28bf1bd54f41@google.com>
2026-05-11 13:51   ` [PATCH v2 09/22] mm/page_alloc: don't overload migratetype in find_suitable_fallback() Vlastimil Babka (SUSE)
2026-05-11 16:44     ` Brendan Jackman
2026-05-11 16:53       ` Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-10-28bf1bd54f41@google.com>
2026-05-11 15:34   ` [PATCH v2 10/22] mm: introduce freetype_t Vlastimil Babka (SUSE)
2026-05-11 16:49     ` Brendan Jackman
2026-05-11 16:58       ` Vlastimil Babka (SUSE)
2026-05-11 18:17   ` Vlastimil Babka (SUSE)
2026-05-11 18:26   ` Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-11-28bf1bd54f41@google.com>
2026-05-11 15:35   ` [PATCH v2 11/22] mm: move migratetype definitions to freetype.h Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-12-28bf1bd54f41@google.com>
2026-05-11 18:01   ` [PATCH v2 12/22] mm: add definitions for allocating unmapped pages Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-13-28bf1bd54f41@google.com>
2026-05-11 18:07   ` [PATCH v2 13/22] mm: rejig pageblock mask definitions Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-14-28bf1bd54f41@google.com>
2026-05-11 18:29   ` [PATCH v2 14/22] mm: encode freetype flags in pageblock flags Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-15-28bf1bd54f41@google.com>
2026-05-11 18:30   ` [PATCH v2 15/22] mm/page_alloc: remove ifdefs from pindex helpers Vlastimil Babka (SUSE)
2026-05-12  9:49     ` Brendan Jackman
     [not found] ` <20260320-page_alloc-unmapped-v2-16-28bf1bd54f41@google.com>
2026-05-13  8:46   ` [PATCH v2 16/22] mm/page_alloc: separate pcplists by freetype flags Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-18-28bf1bd54f41@google.com>
2026-05-13  9:43   ` [PATCH v2 18/22] mm/page_alloc: introduce ALLOC_NOBLOCK Vlastimil Babka (SUSE)
     [not found] ` <20260320-page_alloc-unmapped-v2-19-28bf1bd54f41@google.com>
2026-05-13 15:43   ` [PATCH v2 19/22] mm/page_alloc: implement __GFP_UNMAPPED allocations Vlastimil Babka (SUSE)
2026-05-13 16:17 ` [PATCH v2 00/22] mm: Add __GFP_UNMAPPED Gregory Price

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox