Message-ID: <4807e460-054c-49ed-9792-f5000d7b3820@gmail.com>
Date: Wed, 18 Mar 2026 20:56:46 +0800
Subject: Re: [PATCH v4] mm/mglru: fix cgroup OOM during MGLRU state switching
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Jialing Wang,
 Yafang Shao, Yu Zhao, Kairui Song, Bingfang Guo, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20260318-b4-switch-mglru-v2-v4-1-1b927c93659d@gmail.com>
 <8c01a707-f798-4649-8441-d82dd0dac7b9@gmail.com>
From: Leno Hou
Content-Type: text/plain; charset=UTF-8; format=flowed

On 3/18/26 4:30 PM, Barry Song wrote:
> On Wed, Mar 18, 2026 at 4:17 PM Leno Hou wrote:

[...]
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 33287ba4a500..88b9db06e331 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -886,7 +886,7 @@ static enum folio_references folio_check_references(struct folio *folio,
>>>>         if (referenced_ptes == -1)
>>>>                 return FOLIOREF_KEEP;
>>>>
>>>> -       if (lru_gen_enabled()) {
>>
>> documentation as follows:
>>
>> /*
>>  * During the MGLRU state transition (lru_gen_switching), we force
>>  * folios to follow the traditional active/inactive reference checking.
>>  *
>>  * While MGLRU is switching, the generational state of folios is in flux.
>>  * Falling back to the traditional logic (which relies on PG_referenced/
>>  * PG_active flags that are consistent across both mechanisms) provides
>>  * a stable, safe behavior for the folio until it is fully migrated back
>>  * to the traditional LRU lists. This avoids relying on potentially
>>  * inconsistent MGLRU generational metadata during the transition.
>>  */
>>
>>>> +       if (lru_gen_enabled() && !lru_gen_draining()) {
>>>
>>> I’m curious what prompted you to do this.
>>>
>>> This feels a bit odd. I assume this effectively makes
>>> folios on MGLRU, as well as those on active/inactive
>>> lists, always follow the active/inactive logic.
>>>
>>> It might be fine, but it needs thorough documentation here.
>>>
>>> Another approach would be:
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 33287ba4a500..91b60664b652 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -122,6 +122,9 @@ struct scan_control {
>>>         /* Proactive reclaim invoked by userspace */
>>>         unsigned int proactive:1;
>>>
>>> +       /* Are we reclaiming from MGLRU */
>>> +       unsigned int lru_gen:1;
>>> +
>>>         /*
>>>          * Cgroup memory below memory.low is protected as long as we
>>>          * don't threaten to OOM. If any cgroup is reclaimed at
>>> @@ -886,7 +889,7 @@ static enum folio_references folio_check_references(struct folio *folio,
>>>         if (referenced_ptes == -1)
>>>                 return FOLIOREF_KEEP;
>>>
>>> -       if (lru_gen_enabled()) {
>>> +       if (sc->lru_gen) {
>>>                 if (!referenced_ptes)
>>>                         return FOLIOREF_RECLAIM;
>>>
>>> This makes the logic perfectly correct (you know exactly
>>> where your folios come from), but I’m not sure it’s worth it.
>>>
>>> Anyway, I’d like to understand why you always need to
>>> use the active/inactive logic even for folios from MGLRU.
>>> To me, it seems to work only by coincidence, which isn’t good.
>>>
>>> Thanks
>>> Barry
>>
>> Hi Barry,
>>
>> I agree that using !lru_gen_draining() feels a bit like a fallback
>> path. However, after considering your suggestion for sc->lru_gen, I’m
>> concerned about the broad impact of modifying struct scan_control.
>> Since lru_drain_core is a very transient state, I prefer a localized
>> fix that doesn't propagate architectural changes throughout the
>> entire reclaim stack.
>>
>> You mentioned that using the active/inactive logic feels like it
>> works by "coincidence". To clarify, this is an intentional fallback:
>> because the generational metadata in MGLRU becomes unreliable during
>> draining, we intentionally downgrade these folios to the traditional
>> logic. Since the PG_referenced and PG_active bits are maintained by
>> the core VM and are consistent regardless of whether MGLRU is active,
>> this fallback is technically sound and robust.
>>
>> I have added detailed documentation to the code to explain this
>> design choice, clarifying that it's a deliberate transition strategy
>> rather than a coincidence.
>
> Nope. You still haven’t explained why the active/inactive LRU
> logic makes it work. MGLRU and active/inactive use different
> methods to determine whether a folio is hot or cold. You’re
> forcing active/inactive logic to decide hot/cold for an MGLRU
> folio.
> It’s not that simple: PG_referenced isn’t maintained
> by the core; it’s specific to active/inactive. See
> folio_mark_accessed().
>
> Best Regards
> Barry

Hi Barry,

Thank you for your patience and for pointing out the version-specific
nuances. You are absolutely correct: my previous assumption that the
traditional reference-checking logic would serve as a robust fallback
was fundamentally flawed.

After re-examining the code in v7.0 and comparing it with older
versions (e.g., v6.1), I see the core issue you highlighted:

1. Evolution of PG_referenced: In older kernels, lru_gen_inc_refs()
often interacted with the PG_referenced bit, which inadvertently
provided a "coincidental" hint for the legacy reclaim path. In v7.0+,
however, lru_gen_inc_refs() uses set_mask_bits() on the LRU_REFS_MASK
bitfield and no longer relies on or updates the legacy PG_referenced
bit for MGLRU folios.

2. The logic flaw: When switching from MGLRU to the traditional LRU,
these folios arrive at the legacy reclaim path with PG_referenced
unset or stale. If I force them through the legacy
folio_check_references() path, folio_test_clear_referenced(folio)
predictably returns 0. The legacy path interprets this as a cold
folio, leading to premature reclamation. You are correct that forcing
the active/inactive logic onto MGLRU folios is logically inconsistent.

3. My revised approach: Instead of patching folio_check_references()
with fallback logic, I have decided to leave folio_check_references()
unchanged. The system handles the transition safely through the
kernel's existing reclaim loop and retry mechanisms:

a) While MGLRU is draining, folios are moved back to the traditional
LRU lists. Once migrated, these folios naturally begin participating
in the legacy reclaim path.
b) Although some folios might initially be underestimated as cold in
the very first reclaim pass after the switch, the kernel's reclaim
loop will naturally re-evaluate them. As they are accessed, the
standard legacy mechanism will correctly maintain the PG_referenced
bit, and the system will converge to the correct state without an
explicit fallback path or state check in folio_check_references().

This approach avoids the logical corruption caused by forcing
incompatible evaluation methods and instead relies on the natural
convergence of the existing reclaim loop.

Does this alignment with the existing reclaim mechanism address your
concerns about logical consistency?

Best regards,
Leno Hou
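P.S. To make the mismatch in point 2 concrete, here is a tiny
userspace model of the two reference-tracking schemes. This is only an
illustrative sketch, not kernel code: the SIM_* bit layout and all
function names are invented for this example, and the real page-flags
layout in mm differs.

```c
/*
 * Hypothetical userspace model of legacy vs. MGLRU reference tracking.
 * All names and the bit layout are made up for illustration only.
 */
#include <assert.h>
#include <stdbool.h>

/* Simulated folio->flags bits (layout invented for this sketch). */
#define SIM_PG_REFERENCED   (1UL << 0)                   /* legacy hint bit */
#define SIM_LRU_REFS_SHIFT  1
#define SIM_LRU_REFS_MASK   (0x3UL << SIM_LRU_REFS_SHIFT) /* MGLRU counter */

/* Legacy path: accesses set the referenced bit... */
static inline void legacy_mark_accessed(unsigned long *flags)
{
        *flags |= SIM_PG_REFERENCED;
}

/* ...and the reference check test-and-clears it. */
static inline bool legacy_test_clear_referenced(unsigned long *flags)
{
        bool was_set = (*flags & SIM_PG_REFERENCED) != 0;

        *flags &= ~SIM_PG_REFERENCED;
        return was_set;
}

/*
 * MGLRU path: accesses bump a saturating counter inside the refs
 * mask and never touch the legacy referenced bit.
 */
static inline void mglru_inc_refs(unsigned long *flags)
{
        unsigned long refs = (*flags & SIM_LRU_REFS_MASK) >> SIM_LRU_REFS_SHIFT;

        if (refs < 3)   /* saturate at the mask width */
                refs++;
        *flags = (*flags & ~SIM_LRU_REFS_MASK) | (refs << SIM_LRU_REFS_SHIFT);
}

static inline unsigned long mglru_refs(unsigned long flags)
{
        return (flags & SIM_LRU_REFS_MASK) >> SIM_LRU_REFS_SHIFT;
}
```

In this model, a folio that is hot under the MGLRU counter still fails
the legacy test-and-clear check, which is exactly the
premature-reclaim scenario described in point 2.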