Date: Tue, 23 Dec 2025 01:42:37 +0000
From: "Jiayuan Chen"
Message-ID: <2e574085ed3d7775c3b83bb80d302ce45415ac42@linux.dev>
Subject: Re: [PATCH v1] mm/vmscan: mitigate spurious kswapd_failures reset from direct reclaim
To: "Shakeel Butt"
Cc: linux-mm@kvack.org, "Jiayuan Chen", "Andrew Morton", "Johannes Weiner",
 "David Hildenbrand", "Michal Hocko", "Qi Zheng", "Lorenzo Stoakes",
 "Axel Rasmussen", "Yuanchu Xie", "Wei Xu", linux-kernel@vger.kernel.org
In-Reply-To: <4owaeb7bmkfgfzqd4ztdsi4tefc36cnmpju4yrknsgjm4y32ez@qsgn6lnv3cxb>
References: <20251222122022.254268-1-jiayuan.chen@linux.dev>
 <4owaeb7bmkfgfzqd4ztdsi4tefc36cnmpju4yrknsgjm4y32ez@qsgn6lnv3cxb>
X-Mailing-List: linux-kernel@vger.kernel.org

December 23, 2025 at 05:15, "Shakeel Butt" wrote:

> On Mon, Dec 22, 2025 at 08:20:21PM +0800, Jiayuan Chen wrote:
>
> > From: Jiayuan Chen
> >
> > When kswapd fails to reclaim memory, kswapd_failures is incremented.
> > Once it reaches MAX_RECLAIM_RETRIES, kswapd stops running to avoid
> > futile reclaim attempts. However, any successful direct reclaim
> > unconditionally resets kswapd_failures to 0, which can cause problems.
> >
> > We observed an issue in production on a multi-NUMA system where a
> > process allocated large amounts of anonymous pages on a single NUMA
> > node, causing its watermark to drop below high and evicting most file
> > pages:
> >
> > $ numastat -m
> > Per-node system memory usage (in MBs):
> >                          Node 0          Node 1           Total
> >                 --------------- --------------- ---------------
> > MemTotal              128222.19       127983.91       256206.11
> > MemFree                 1414.48         1432.80         2847.29
> > MemUsed               126807.71       126551.11       252358.82
> > SwapCached                 0.00            0.00            0.00
> > Active                 29017.91        25554.57        54572.48
> > Inactive               92749.06        95377.00       188126.06
> > Active(anon)           28998.96        23356.47        52355.43
> > Inactive(anon)         92685.27        87466.11       180151.39
> > Active(file)              18.95         2198.10         2217.05
> > Inactive(file)            63.79         7910.89         7974.68
> >
> > With swap disabled, only file pages can be reclaimed. When kswapd is
> > woken (e.g., via wake_all_kswapds()), it runs continuously but cannot
> > raise free memory above the high watermark, since reclaimable file
> > pages are insufficient. Normally, kswapd would eventually stop after
> > kswapd_failures reaches MAX_RECLAIM_RETRIES.
> >
> > However, pods on this machine have memory.high set in their cgroup.
> > Business processes continuously trigger the high limit, causing
> > frequent direct reclaim that keeps resetting kswapd_failures to 0.
> > This prevents kswapd from ever stopping.
> >
> > The result is that kswapd runs endlessly, repeatedly evicting the few
> > remaining file pages, which are actually hot. These pages constantly
> > refault, generating sustained heavy read I/O pressure.
>
> I don't think kswapd is an issue here. The system is out of memory and
> most of the memory is unreclaimable. Either change the workload to use
> less memory or enable swap (or zswap) to have more reclaimable memory.

Hi,

Thanks for looking into this.

Sorry, I didn't describe the scenario clearly enough in the original patch.
Let me clarify: this is a multi-NUMA system where the memory pressure is
not global but node-local. The key observations are:

- Node 0: under memory pressure, most memory is anonymous (unreclaimable
  without swap).
- Node 1: has plenty of reclaimable memory (~60GB of file cache out of
  125GB total).
- Node 0's kswapd runs continuously but cannot reclaim anything.
- Direct reclaim succeeds by reclaiming from Node 1.
- Direct reclaim resets kswapd_failures, preventing Node 0's kswapd from
  stopping.
- The few file pages on Node 0 are hot and keep refaulting, causing heavy
  I/O.

From a per-node perspective, Node 0 is truly out of reclaimable memory and
its kswapd should stop. But the global direct reclaim success (satisfied
from Node 1) incorrectly keeps Node 0's kswapd alive.

Thanks.

> Other than that, we can discuss whether memcg reclaim resetting the
> kswapd failure count should be changed or not, but that is a separate
> discussion.