From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 23 Dec 2025 01:51:32 +0000
From: "Jiayuan Chen"
Message-ID: <42e6103fb07fca398f0942c7c41129ffcce90dc6@linux.dev>
Subject: Re: [PATCH v1] mm/vmscan: mitigate spurious kswapd_failures reset from direct reclaim
To: "Andrew Morton"
Cc: linux-mm@kvack.org, "Jiayuan Chen", "Johannes Weiner",
 "David Hildenbrand", "Michal Hocko", "Qi Zheng", "Shakeel Butt",
 "Lorenzo Stoakes", "Axel Rasmussen", "Yuanchu Xie", "Wei Xu",
 linux-kernel@vger.kernel.org
In-Reply-To: <20251222102900.91eddc815291496eaf60cbf8@linux-foundation.org>
References: <20251222122022.254268-1-jiayuan.chen@linux.dev>
 <20251222102900.91eddc815291496eaf60cbf8@linux-foundation.org>

December 23, 2025 at 02:29, "Andrew Morton" wrote:

Hi Andrew,

Thanks for the review.

> On Mon, 22 Dec 2025 20:20:21 +0800 Jiayuan Chen wrote:
>
> > From: Jiayuan Chen
> >
> > When kswapd fails to reclaim memory, kswapd_failures is incremented.
> > Once it reaches MAX_RECLAIM_RETRIES, kswapd stops running to avoid
> > futile reclaim attempts. However, any successful direct reclaim
> > unconditionally resets kswapd_failures to 0, which can cause problems.
> >
> > We observed an issue in production on a multi-NUMA system where a
> > process allocated large amounts of anonymous pages on a single NUMA
> > node, causing its watermark to drop below high and evicting most file
> > pages:
> >
> > $ numastat -m
> > Per-node system memory usage (in MBs):
> >                           Node 0          Node 1           Total
> >                  --------------- --------------- ---------------
> > MemTotal               128222.19       127983.91       256206.11
> > MemFree                  1414.48         1432.80         2847.29
> > MemUsed                126807.71       126551.11       252358.82
> > SwapCached                  0.00            0.00            0.00
> > Active                  29017.91        25554.57        54572.48
> > Inactive                92749.06        95377.00       188126.06
> > Active(anon)            28998.96        23356.47        52355.43
> > Inactive(anon)          92685.27        87466.11       180151.39
> > Active(file)               18.95         2198.10         2217.05
> > Inactive(file)             63.79         7910.89         7974.68
> >
> > With swap disabled, only file pages can be reclaimed. When kswapd is
> > woken (e.g., via wake_all_kswapds()), it runs continuously but cannot
> > raise free memory above the high watermark since reclaimable file
> > pages are insufficient. Normally, kswapd would eventually stop after
> > kswapd_failures reaches MAX_RECLAIM_RETRIES.
> >
> > However, pods on this machine have memory.high set in their cgroup.
>
> What's a "pod"?

A pod is Kubernetes' smallest deployable unit: a group of one or more
containers. Sorry for the unclear terminology.

> > Business processes continuously trigger the high limit, causing
> > frequent direct reclaim that keeps resetting kswapd_failures to 0.
> > This prevents kswapd from ever stopping.
> >
> > The result is that kswapd runs endlessly, repeatedly evicting the few
> > remaining file pages, which are actually hot. These pages constantly
> > refault, generating sustained heavy IO READ pressure.
>
> Yes, not good.
>
> > Fix this by only resetting kswapd_failures from direct reclaim when
> > the node is actually balanced.
> > This prevents direct reclaim from keeping kswapd alive when the node
> > cannot be balanced through reclaim alone.
>
> ...
>
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2648,6 +2648,15 @@ static bool can_age_anon_pages(struct lruvec *lruvec,
> >  					  lruvec_memcg(lruvec));
> >  }
> >
> > +static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx);
>
> Forward declaration could be avoided by relocating pgdat_balanced(),
> although the patch will get a lot larger.

Thanks for pointing this out.

> > +static inline void reset_kswapd_failures(struct pglist_data *pgdat,
> > +					 struct scan_control *sc)
>
> It would be nice to have a nice comment explaining why this is here.
> Why are we checking for balanced?

You're right, a comment explaining the rationale would be helpful.

> > +{
> > +	if (!current_is_kswapd() &&
>
> kswapd can no longer clear ->kswapd_failures.  What's the thinking here?

Good catch. My original thinking was that kswapd already checks
pgdat_balanced() in its own path after successful reclaim, so I wanted to
avoid redundant checks. But looking at the code again, this is indeed a
bug: kswapd's reclaim path does need to clear kswapd_failures on
successful reclaim.