From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <8d4df864-2954-4eb6-b8d7-ae6595646e6e@linux.alibaba.com>
Date: Wed, 22 Apr 2026 14:58:50 +0800
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
From: Baolin Wang
To: "Barry Song (Xiaomi)", akpm@linux-foundation.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Johannes Weiner, David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song, Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan
References: <20260422021842.78495-1-baohua@kernel.org>
In-Reply-To: <20260422021842.78495-1-baohua@kernel.org>

On 4/22/26 10:18 AM, Barry Song (Xiaomi) wrote:
> We may encounter cases where the system still has plenty of free
> memory, but cannot satisfy higher-order allocations. On phones, we
> have observed that bursty network transfers can cause devices to
> heat up. Baolin and Kairui have seen similar behavior on servers.
> 
> Currently, kswapd behaves as follows: when a higher-order allocation
> is issued with __GFP_KSWAPD_RECLAIM, pgdat_balanced() returns false
> because __zone_watermark_ok() fails if no suitable higher-order
> pages exist, even when free memory is well above the high watermark.
> As a result, kswapd_shrink_node() sets an excessively large
> sc->nr_to_reclaim and attempts aggressive reclamation:
> 
> 	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
> 		sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
> 	}
> 
> We have an opportunity to re-evaluate the balance by resetting
> sc->order to 0 after shrink_node() with the following code
> in kswapd_shrink_node():
> 
> 	/*
> 	 * Fragmentation may mean that the system cannot be rebalanced for
> 	 * high-order allocations. If twice the allocation size has been
> 	 * reclaimed then recheck watermarks only at order-0 to prevent
> 	 * excessive reclaim.
> 	 */
> 	if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
> 		sc->order = 0;
> 
> But we have actually scanned and over-reclaimed far more than
> compact_gap(sc->order). If higher-order allocations continue, we may
> see persistently high kswapd CPU utilization coexisting with plenty of
> free memory in the system.
> 
> We may want to evaluate the situation earlier at the beginning.
> If there is plenty of free memory, we could avoid triggering
> reclamation with an excessively large sc->nr_to_reclaim value
> and instead prefer compaction.
> 
> Cc: Baolin Wang
> Cc: Johannes Weiner
> Cc: David Hildenbrand
> Cc: Michal Hocko
> Cc: Qi Zheng
> Cc: Shakeel Butt
> Cc: Lorenzo Stoakes
> Cc: Kairui Song
> Cc: Axel Rasmussen
> Cc: Yuanchu Xie
> Cc: Wei Xu
> Co-developed-by: Wang Lian
> Co-developed-by: Kunwu Chan
> Signed-off-by: Barry Song (Xiaomi)
> ---

Thanks Barry for sending out the RFC patch for discussion.
Yes, we have indeed seen reports from our customers' scenarios where fragmentation caused kswapd to be woken up and reclaim too many file folios (even when free memory was sufficient), leading to severe I/O contention that impacted some applications.

However, I'm concerned that this patch might also have side effects, such as affecting system defragmentation. In some scenarios, directly reclaiming clean pagecache to free up space might be a faster way to defragment. At the very least, I think under defrag_mode, we should be more aggressive about defragmentation (including reclaiming some memory by kswapd).

> -RFC v1 was "mm: net: disable kswapd for high-order network
> buffer allocation":
> https://lore.kernel.org/linux-mm/20251013101636.69220-1-21cnbao@gmail.com/
> 
>  mm/vmscan.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bd1b1aa12581..4f9668aa8eef 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6964,6 +6964,13 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>  		if (__zone_watermark_ok(zone, order, mark, highest_zoneidx,
>  					0, free_pages))
>  			return true;
> +		/*
> +		 * Free pages may be well above the watermark, but if
> +		 * higher-order pages are unavailable, kswapd may still
> +		 * trigger excessive reclamation.
> +		 */
> +		if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
> +			return true;
>  	}
> 
>  	/*