Date: Wed, 13 May 2026 11:43:40 +0200
Subject: Re: [PATCH v2 18/22] mm/page_alloc: introduce ALLOC_NOBLOCK
To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
 Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 rppt@kernel.org, Sumit Garg, derkling@google.com, reijiw@google.com,
 Will Deacon, rientjes@google.com, "Kalyazin, Nikita",
 patrick.roy@linux.dev, "Itazuri, Takahiro", Andy Lutomirski,
 David Kaplan, Thomas Gleixner, Yosry Ahmed
References: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
 <20260320-page_alloc-unmapped-v2-18-28bf1bd54f41@google.com>
From: "Vlastimil Babka (SUSE)"
In-Reply-To: <20260320-page_alloc-unmapped-v2-18-28bf1bd54f41@google.com>

On 3/20/26 19:23, Brendan Jackman wrote:
> This flag is set unless we can be sure the caller isn't in an atomic
> context.
>
> The allocator will soon start needing to call set_direct_map_* APIs
> which cannot be called with IRQs off. It will need to do this even
> before direct reclaim is possible.
>
> Despite the fact that, in principle, ALLOC_NOBLOCK is distinct from
> __GFP_DIRECT_RECLAIM, in order to avoid introducing a GFP flag, just
> infer the former based on whether the caller set the latter. This means
> that, in practice, ALLOC_NOBLOCK is just !__GFP_DIRECT_RECLAIM, except
> that it is not influenced by gfp_allowed_mask. This could change later,
> though.

I don't think it should change later? We wouldn't want false positives
during boot, or what do you have in mind?

I wonder if the implementation of the "not influenced" is correct
though...

> Call it ALLOC_NOBLOCK in order to try and mitigate confusion vs the
> recently-removed ALLOC_NON_BLOCK, which meant something different.
>
> Signed-off-by: Brendan Jackman
> ---
>  mm/internal.h   |  1 +
>  mm/page_alloc.c | 29 ++++++++++++++++++++++-------
>  2 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index cc19a90a7933f..865991aca06ea 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1431,6 +1431,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
>  #define ALLOC_TRYLOCK	0x400 /* Only use spin_trylock in allocation path */
>  #define ALLOC_KSWAPD	0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
> +#define ALLOC_NOBLOCK	0x1000 /* Caller may be atomic */
>
>  /* Flags that allow allocations below the min watermark. */
>  #define ALLOC_RESERVES (ALLOC_HARDER|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9a07c552a1f8a..83d06a6db6433 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4608,6 +4608,8 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
>  		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
>
>  	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
> +		alloc_flags |= ALLOC_NOBLOCK;

When this is called from __alloc_pages_slowpath(), gfp_allowed_mask is
already applied, so it will be influenced.

> +
>  		/*
>  		 * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
>  		 * if it can't schedule.
> @@ -4801,14 +4803,13 @@ check_retry_cpuset(int cpuset_mems_cookie, struct alloc_context *ac)
>
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> -		struct alloc_context *ac)
> +		struct alloc_context *ac, unsigned int alloc_flags)
>  {
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	bool can_compact = can_direct_reclaim && gfp_compaction_allowed(gfp_mask);
>  	bool nofail = gfp_mask & __GFP_NOFAIL;
>  	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
>  	struct page *page = NULL;
> -	unsigned int alloc_flags;
>  	unsigned long did_some_progress;
>  	enum compact_priority compact_priority;
>  	enum compact_result compact_result;
> @@ -4860,7 +4861,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	 * kswapd needs to be woken up, and to avoid the cost of setting up
>  	 * alloc_flags precisely. So we do that now.
>  	 */
> -	alloc_flags = gfp_to_alloc_flags(gfp_mask, order);
> +	alloc_flags |= gfp_to_alloc_flags(gfp_mask, order);

Is it safe to just combine them? You come in with ALLOC_WMARK_LOW and
combine it with ALLOC_WMARK_MIN from gfp_to_alloc_flags(), but these
are not bit flags; I think you end up with ALLOC_WMARK_LOW effectively.

Probably you need to pass the old alloc_flags to gfp_to_alloc_flags(),
mask only ALLOC_NOBLOCK from it, and combine that with the newly
calculated alloc_flags. By not recomputing ALLOC_NOBLOCK you also avoid
the problem pointed out above?
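To illustrate the watermark part with a quick userspace toy (not kernel
code; the ALLOC_* values are copied from mm/internal.h, where
ALLOC_WMARK_MASK is (ALLOC_NO_WATERMARKS - 1)):

#include <stdio.h>

#define ALLOC_WMARK_MIN		0x00	/* zero value of a field, not a bit */
#define ALLOC_WMARK_LOW		0x01
#define ALLOC_WMARK_MASK	0x03
#define ALLOC_NOBLOCK		0x1000

int main(void)
{
	unsigned int fastpath = ALLOC_WMARK_LOW | ALLOC_NOBLOCK;
	unsigned int fresh = ALLOC_WMARK_MIN;	/* from gfp_to_alloc_flags() */

	/* Plain OR: the watermark field stays LOW (1), not the intended MIN (0). */
	printf("OR'd wmark:   %u\n", (fastpath | fresh) & ALLOC_WMARK_MASK);

	/* Masking first: only ALLOC_NOBLOCK survives from the old flags. */
	printf("masked wmark: %u\n",
	       (fresh | (fastpath & ALLOC_NOBLOCK)) & ALLOC_WMARK_MASK);
	return 0;
}

This prints 1 then 0, i.e. the plain OR silently keeps the fast path's
ALLOC_WMARK_LOW instead of the slowpath's ALLOC_WMARK_MIN.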
(Or we decide to not use a gfp flag but a new function, and then it's
more like what alloc_frozen_pages_nolock_noprof() does.)

>
>  	/*
>  	 * We need to recalculate the starting point for the zonelist iterator
> @@ -5086,6 +5087,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	return page;
>  }
>
> +static inline unsigned int init_alloc_flags(gfp_t gfp_mask, unsigned int flags)
> +{
> +	/*
> +	 * If the caller allowed __GFP_DIRECT_RECLAIM, they can't be atomic.
> +	 * Note this is a separate determination from whether direct reclaim is
> +	 * actually allowed, it must happen before applying gfp_allowed_mask.
> +	 */
> +	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
> +		flags |= ALLOC_NOBLOCK;
> +	return flags;
> +}
> +
>  static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>  		int preferred_nid, nodemask_t *nodemask,
>  		struct alloc_context *ac, gfp_t *alloc_gfp,
> @@ -5166,7 +5179,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>  	struct list_head *pcp_list;
>  	struct alloc_context ac;
>  	gfp_t alloc_gfp;
> -	unsigned int alloc_flags = ALLOC_WMARK_LOW;
> +	unsigned int alloc_flags = init_alloc_flags(gfp, ALLOC_WMARK_LOW);
>  	int nr_populated = 0, nr_account = 0;
>
>  	/*
> @@ -5307,7 +5320,7 @@ struct page *__alloc_frozen_pages_noprof(gfp_t gfp, unsigned int order,
>  		int preferred_nid, nodemask_t *nodemask)
>  {
>  	struct page *page;
> -	unsigned int alloc_flags = ALLOC_WMARK_LOW;
> +	unsigned int alloc_flags = init_alloc_flags(gfp, ALLOC_WMARK_LOW);
>  	gfp_t alloc_gfp; /* The gfp_t that was actually used for allocation */
>  	struct alloc_context ac = { };
>
> @@ -5352,7 +5365,7 @@ struct page *__alloc_frozen_pages_noprof(gfp_t gfp, unsigned int order,
>  	 */
>  	ac.nodemask = nodemask;
>
> -	page = __alloc_pages_slowpath(alloc_gfp, order, &ac);
> +	page = __alloc_pages_slowpath(alloc_gfp, order, &ac, alloc_flags);
>
>  out:
>  	if (memcg_kmem_online() && (gfp & __GFP_ACCOUNT) && page &&
> @@ -7872,11 +7885,13 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
>  	 */
>  	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC | __GFP_COMP
>  			  | gfp_flags;
> -	unsigned int alloc_flags = ALLOC_TRYLOCK;
> +	unsigned int alloc_flags = init_alloc_flags(alloc_gfp, ALLOC_TRYLOCK);
>  	struct alloc_context ac = { };
>  	struct page *page;
>
>  	VM_WARN_ON_ONCE(gfp_flags & ~__GFP_ACCOUNT);
> +	VM_WARN_ON_ONCE(!(alloc_flags & ALLOC_NOBLOCK));
> +
>  	/*
>  	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
>  	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
>
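Put differently, the suggestion above would look something like this
completely untested sketch (the extra parameter is my addition, not
something from this series):

static inline unsigned int
gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order, unsigned int old_flags)
{
	unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

	/* ... existing body, minus the new ALLOC_NOBLOCK part ... */

	/*
	 * Preserve only ALLOC_NOBLOCK, which was derived before
	 * gfp_allowed_mask was applied; everything else, including the
	 * watermark field, comes from the fresh computation here.
	 */
	return alloc_flags | (old_flags & ALLOC_NOBLOCK);
}

with __alloc_pages_slowpath() then doing:

	alloc_flags = gfp_to_alloc_flags(gfp_mask, order, alloc_flags);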