The Linux Kernel Mailing List
From: Wandun <chenwandun1@gmail.com>
To: Chen Ridong <chenridong@huaweicloud.com>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	longman@redhat.com, tj@kernel.org, hannes@cmpxchg.org,
	mkoutny@suse.com
Subject: Re: [PATCH] cgroup/cpuset: move PF_EXITING check before __GFP_HARDWALL in cpuset_current_node_allowed()
Date: Fri, 8 May 2026 14:15:28 +0800	[thread overview]
Message-ID: <407ab4a5-87e5-4eca-99d3-baa031935702@gmail.com> (raw)
In-Reply-To: <02352ad2-9c85-4825-82b6-49c6a4b081d8@huaweicloud.com>



On 5/8/26 09:39, Chen Ridong wrote:
>
> On 2026/5/7 18:54, Chen Wandun wrote:
>> Since prepare_alloc_pages() unconditionally adds __GFP_HARDWALL for the
>> fast path when cpusets are enabled, the __GFP_HARDWALL check in
>> cpuset_current_node_allowed() causes the PF_EXITING escape path to be
>> skipped on the first allocation attempt.  This makes it unreachable in
>> the common case, so dying tasks can get stuck in direct reclaim or even
>> trigger OOM while trying to exit, despite being allowed to allocate from
>> any node.
>>
>> Move the PF_EXITING check before __GFP_HARDWALL so that dying tasks
>> can allocate memory from any node to exit quickly, even when cpusets
>> are enabled.
>>
>> Also update the function comment to reflect the actual behavior of
>> prepare_alloc_pages() and the corrected check ordering.
>>
>> Signed-off-by: Chen Wandun <chenwandun@lixiang.com>
>> ---
>>   kernel/cgroup/cpuset.c | 14 ++++++++------
>>   1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index e3a081a07c6d..a48901a0416a 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -4176,11 +4176,11 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
>>    * current's mems_allowed, yes.  If it's not a __GFP_HARDWALL request and this
>>    * node is set in the nearest hardwalled cpuset ancestor to current's cpuset,
>>    * yes.  If current has access to memory reserves as an oom victim, yes.
>> - * Otherwise, no.
>> + * If the current task is PF_EXITING, yes. Otherwise, no.
>>    *
>>    * GFP_USER allocations are marked with the __GFP_HARDWALL bit,
>>    * and do not allow allocations outside the current tasks cpuset
>> - * unless the task has been OOM killed.
>> + * unless the task has been OOM killed or is exiting.
>>    * GFP_KERNEL allocations are not so marked, so can escape to the
>>    * nearest enclosing hardwalled ancestor cpuset.
>>    *
>> @@ -4194,7 +4194,9 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
>>    * The first call here from mm/page_alloc:get_page_from_freelist()
>>    * has __GFP_HARDWALL set in gfp_mask, enforcing hardwall cpusets,
>>    * so no allocation on a node outside the cpuset is allowed (unless
>> - * in interrupt, of course).
>> + * in interrupt, of course).  The PF_EXITING check must therefore
>> + * come before the __GFP_HARDWALL check, otherwise a dying task
>> + * would be blocked on the fast path.
>>    *
>>    * The second pass through get_page_from_freelist() doesn't even call
>>    * here for GFP_ATOMIC calls.  For those calls, the __alloc_pages()
>> @@ -4204,6 +4206,7 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
>>    *	in_interrupt - any node ok (current task context irrelevant)
>>    *	GFP_ATOMIC   - any node ok
>>    *	tsk_is_oom_victim   - any node ok
>> + *	PF_EXITING   - any node ok (let dying task exit quickly)
>>    *	GFP_KERNEL   - any node in enclosing hardwalled cpuset ok
>>    *	GFP_USER     - only nodes in current tasks mems allowed ok.
>>    */
>> @@ -4223,11 +4226,10 @@ bool cpuset_current_node_allowed(int node, gfp_t gfp_mask)
>>   	 */
>>   	if (unlikely(tsk_is_oom_victim(current)))
>>   		return true;
>> -	if (gfp_mask & __GFP_HARDWALL)	/* If hardwall request, stop here */
>> -		return false;
>> -
>>   	if (current->flags & PF_EXITING) /* Let dying task have memory */
>>   		return true;
>> +	if (gfp_mask & __GFP_HARDWALL)	/* If hardwall request, stop here */
>> +		return false;
>>   
>>   	/* Not hardwall and node outside mems_allowed: scan up cpusets */
>>   	spin_lock_irqsave(&callback_lock, flags);
> Makes sense.
>
> BTW, how did you find this issue?
I found this while reviewing the cpuset node-allowed logic during an
investigation into a memory allocation issue; it turned out not to be
the root cause of that investigation.
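
The fast path the changelog refers to is the one where
prepare_alloc_pages() unconditionally ORs __GFP_HARDWALL into the
allocation mask whenever cpusets are enabled, so the hardwall bit is
already set by the time cpuset_current_node_allowed() runs its checks.
Roughly (a simplified sketch of the mm/page_alloc.c logic, not the
exact upstream code):

	if (cpusets_enabled()) {
		/* every first-pass allocation becomes a hardwall request */
		*alloc_gfp |= __GFP_HARDWALL;
		if (in_task() && !ac->nodemask)
			ac->nodemask = &cpuset_current_mems_allowed;
		else
			*alloc_flags |= ALLOC_CPUSET;
	}

With __GFP_HARDWALL always present on that first attempt, any check
placed after the hardwall bail-out in cpuset_current_node_allowed() is
effectively dead on the fast path, which is why the PF_EXITING test
needs to sit in front of it.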
>
> Reviewed-by: Chen Ridong <chenridong@huaweicloud.com>
>



Thread overview: 7+ messages
2026-05-07 10:54 [PATCH] cgroup/cpuset: move PF_EXITING check before __GFP_HARDWALL in cpuset_current_node_allowed() Chen Wandun
2026-05-07 12:33 ` Michal Koutný
2026-05-07 13:53   ` Waiman Long
2026-05-08  6:27   ` Wandun
2026-05-07 22:00 ` Tejun Heo
2026-05-08  1:39 ` Chen Ridong
2026-05-08  6:15   ` Wandun [this message]
