From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <0cd1cd09-bb06-43c3-a3ed-8dce2c0a13aa@gmail.com>
Date: Wed, 18 Mar 2026 20:59:42 +0800
From: Leno Hou
Subject: Re: [PATCH v4] mm/mglru: fix cgroup OOM during MGLRU state switching
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Jialing Wang, Yafang Shao, Yu Zhao, Kairui Song, Bingfang Guo, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20260318-b4-switch-mglru-v2-v4-1-1b927c93659d@gmail.com> <8c01a707-f798-4649-8441-d82dd0dac7b9@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 3/18/26 4:30 PM, Barry Song wrote:
> On Wed, Mar 18, 2026 at 4:17 PM Leno Hou wrote:
[...]
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 33287ba4a500..88b9db06e331 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -886,7 +886,7 @@ static enum folio_references folio_check_references(struct folio *folio,
>>>>  	if (referenced_ptes == -1)
>>>>  		return FOLIOREF_KEEP;
>>>>
>>>> -	if (lru_gen_enabled()) {
>>
>> documentation as follows:
>>
>> /*
>>  * During the MGLRU state transition (lru_gen_switching), we force
>>  * folios to follow the traditional active/inactive reference checking.
>>  *
>>  * While MGLRU is switching, the generational state of folios is in flux.
>>  * Falling back to the traditional logic (which relies on PG_referenced/
>>  * PG_active flags that are consistent across both mechanisms) provides
>>  * a stable, safe behavior for the folio until it is fully migrated back
>>  * to the traditional LRU lists. This avoids relying on potentially
>>  * inconsistent MGLRU generational metadata during the transition.
>>  */
>>
>>>> +	if (lru_gen_enabled() && !lru_gen_draining()) {
>>>
>>> I’m curious what prompted you to do this.
>>>
>>> This feels a bit odd. I assume this effectively makes
>>> folios on MGLRU, as well as those on active/inactive
>>> lists, always follow the active/inactive logic.
>>>
>>> It might be fine, but it needs thorough documentation here.
>>>
>>> another approach would be:
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 33287ba4a500..91b60664b652 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -122,6 +122,9 @@ struct scan_control {
>>>  	/* Proactive reclaim invoked by userspace */
>>>  	unsigned int proactive:1;
>>>
>>> +	/* Are we reclaiming from MGLRU */
>>> +	unsigned int lru_gen:1;
>>> +
>>>  	/*
>>>  	 * Cgroup memory below memory.low is protected as long as we
>>>  	 * don't threaten to OOM.
>>>  	 * If any cgroup is reclaimed at
>>>
>>> @@ -886,7 +889,7 @@ static enum folio_references folio_check_references(struct folio *folio,
>>>  	if (referenced_ptes == -1)
>>>  		return FOLIOREF_KEEP;
>>>
>>> -	if (lru_gen_enabled()) {
>>> +	if (sc->lru_gen) {
>>>  		if (!referenced_ptes)
>>>  			return FOLIOREF_RECLAIM;
>>>
>>> This makes the logic perfectly correct (you know exactly
>>> where your folios come from), but I’m not sure it’s worth it.
>>>
>>> Anyway, I’d like to understand why you always need to
>>> use the active/inactive logic even for folios from MGLRU.
>>> To me, it seems to work only by coincidence, which isn’t good.
>>>
>>> Thanks
>>> Barry
>>
>> Hi Barry,
>>
>> I agree that using !lru_gen_draining() feels a bit like a fallback path.
>> However, after considering your suggestion for sc->lru_gen, I’m
>> concerned about the broad impact of modifying struct scan_control. Since
>> lru_drain_core is a very transient state, I prefer a localized fix that
>> doesn't propagate architectural changes throughout the entire reclaim stack.
>>
>> You mentioned that using the active/inactive logic feels like it works
>> by 'coincidence'. To clarify, this is an intentional fallback: because
>> the generational metadata in MGLRU becomes unreliable during draining,
>> we intentionally downgrade these folios to the traditional logic. Since
>> the PG_referenced and PG_active bits are maintained by the core VM and
>> are consistent regardless of whether MGLRU is active, this fallback is
>> technically sound and robust.
>>
>> I have added detailed documentation to the code to explain this design
>> choice, clarifying that it's a deliberate transition strategy rather
>> than a coincidence.
>
> Nope. You still haven’t explained why the active/inactive LRU
> logic makes it work. MGLRU and active/inactive use different
> methods to determine whether a folio is hot or cold. You’re
> forcing active/inactive logic to decide hot/cold for an MGLRU
> folio.
> It’s not that simple—PG_referenced isn’t maintained
> by the core; it’s specific to active/inactive. See folio_mark_accessed().
>
> Best Regards
> Barry

Hi Barry,

Thank you for your patience and for pointing out the version-specific
nuances. You are absolutely correct—my previous assumption that the
traditional reference-checking logic would serve as a robust fallback
was fundamentally flawed. After re-examining the code in v7.0 and
comparing it with older versions (e.g., v6.1), I see the core issue you
highlighted:

1. Evolution of PG_referenced: In older kernels, lru_gen_inc_refs()
often interacted with the PG_referenced bit, which inadvertently
provided a 'coincidental' hint for the legacy reclaim path. However, in
v7.0+, lru_gen_inc_refs() has evolved to use set_mask_bits() on the
LRU_REFS_MASK bitfield, and it no longer relies on or updates the
legacy PG_referenced bit for MGLRU folios.

2. The Logic Flaw: When switching from MGLRU to the traditional LRU,
these folios arrive at the legacy reclaim path with PG_referenced unset
or stale. If I force them through the legacy folio_check_references()
path, folio_test_clear_referenced(folio) predictably returns 0. The
legacy path interprets this as a 'cold' folio, leading to premature
reclamation. You are correct that forcing this active/inactive logic
onto MGLRU folios is logically inconsistent.

3. My Revised Approach: Instead of attempting to patch
folio_check_references() with fallback logic, I have decided to keep
the folio_check_references() logic unchanged. The system handles this
transition safely through the kernel's existing reclaim loop and retry
mechanisms:

a) While MGLRU is draining, folios are moved back to the traditional
LRU lists. Once migrated, these folios will naturally begin
participating in the legacy reclaim path.
b) Although some folios might initially be underestimated as 'cold' in
the very first reclaim pass immediately after the switch, the kernel's
reclaim loop will naturally re-evaluate them. As they are accessed, the
standard legacy mechanism will correctly maintain the PG_referenced
bit, and the system will converge to the correct state without needing
an explicit fallback path or state-checking in folio_check_references().

This approach avoids the logical inconsistency caused by forcing
incompatible evaluation methods and relies on the natural convergence
of the existing reclaim loop.

Does this alignment with the existing reclaim mechanism address your
concerns about logical consistency?

Best regards,
Leno Hou