From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Barry Song (Xiaomi)"
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, "Barry Song (Xiaomi)",
	Baolin Wang, Johannes Weiner, David Hildenbrand, Michal Hocko,
	Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Kairui Song,
	Axel Rasmussen, Yuanchu Xie, Wei Xu, Wang Lian, Kunwu Chan
Subject: [RFC PATCH v2] mm: Improve pgdat_balanced() to avoid over-reclamation for higher-order allocation
Date: Wed, 22 Apr 2026 10:18:42 +0800
Message-Id: <20260422021842.78495-1-baohua@kernel.org>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We may encounter cases where the system still has plenty of free
memory but cannot satisfy higher-order allocations. On phones, we have
observed that bursty network transfers can cause devices to heat up.
Baolin and Kairui have seen similar behavior on servers.

Currently, kswapd behaves as follows: when a higher-order allocation
is issued with __GFP_KSWAPD_RECLAIM, pgdat_balanced() returns false
because __zone_watermark_ok() fails if no suitable higher-order pages
exist, even when free memory is well above the high watermark. As a
result, kswapd_shrink_node() sets an excessively large
sc->nr_to_reclaim and attempts aggressive reclamation:

	for_each_managed_zone_pgdat(zone, pgdat, z, sc->reclaim_idx) {
		sc->nr_to_reclaim += max(high_wmark_pages(zone), SWAP_CLUSTER_MAX);
	}

We have an opportunity to re-evaluate the balance by resetting
sc->order to 0 after shrink_node(), with the following code in
kswapd_shrink_node():

	/*
	 * Fragmentation may mean that the system cannot be rebalanced for
	 * high-order allocations. If twice the allocation size has been
	 * reclaimed then recheck watermarks only at order-0 to prevent
	 * excessive reclaim.
	 */
	if (sc->order && sc->nr_reclaimed >= compact_gap(sc->order))
		sc->order = 0;

But by that point, we have actually scanned and over-reclaimed far
more than compact_gap(sc->order). If higher-order allocations
continue, we may see persistently high kswapd CPU utilization
coexisting with plenty of free memory in the system.

We may want to evaluate the situation earlier, at the beginning of
balancing. If there is plenty of free memory, we could avoid
triggering reclamation with an excessively large sc->nr_to_reclaim
value and instead prefer compaction.

Cc: Baolin Wang
Cc: Johannes Weiner
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Qi Zheng
Cc: Shakeel Butt
Cc: Lorenzo Stoakes
Cc: Kairui Song
Cc: Axel Rasmussen
Cc: Yuanchu Xie
Cc: Wei Xu
Co-developed-by: Wang Lian
Co-developed-by: Kunwu Chan
Signed-off-by: Barry Song (Xiaomi)
---
- RFC v1 was "mm: net: disable kswapd for high-order network buffer
  allocation":
  https://lore.kernel.org/linux-mm/20251013101636.69220-1-21cnbao@gmail.com/

 mm/vmscan.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd1b1aa12581..4f9668aa8eef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6964,6 +6964,13 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 		if (__zone_watermark_ok(zone, order, mark, highest_zoneidx, 0, free_pages))
 			return true;
+		/*
+		 * Free pages may be well above the watermark, but if
+		 * higher-order pages are unavailable, kswapd may still
+		 * trigger excessive reclamation.
+		 */
+		if (order && compaction_suitable(zone, order, mark, highest_zoneidx))
+			return true;
 	}

 	/*
-- 
2.39.3 (Apple Git-146)