From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 6 Oct 2025 12:07:30 +0000
From: Dmitry Ilvokhin <d@ilvokhin.com>
To: Kiryl Shutsemau
Cc: Andrew Morton, Kemeng Shi, Kairui Song, Nhat Pham, Baoquan He,
	Barry Song, Chris Li, Axel Rasmussen, Yuanchu Xie, Wei Xu,
	Usama Arif, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@meta.com
Subject: Re: [PATCH] mm: skip folio_activate() for mlocked folios
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 03, 2025 at 03:41:05PM +0100, Kiryl Shutsemau wrote:
> On Fri, Oct 03, 2025 at 02:19:55PM +0000, Dmitry Ilvokhin wrote:
> > __mlock_folio() should update stats, when lruvec_add_folio() is called,
>
> The update of stats is incidental to moving to the unevictable LRU. But
> okay.

Good point. I'll rephrase the commit message in terms of the unevictable
LRU instead of stat updates in v2.

> > but if the folio_test_clear_lru() check failed, then __mlock_folio()
> > gives up early. On the other hand, folio_mark_accessed() calls
> > folio_activate(), which also calls folio_test_clear_lru() down the
> > line. When folio_activate() successfully removed the folio from the
> > LRU, __mlock_folio() will not update any stats, which leads to
> > inaccurate values in /proc/meminfo as well as cgroup memory.stat.
> >
> > To prevent this case from happening, also check for
> > folio_test_mlocked() in folio_mark_accessed(). If the folio is not
> > yet marked as unevictable, but already marked as mlocked, then skip
> > the folio_activate() call to allow __mlock_folio() to make all
> > necessary updates.
> >
> > To observe the problem, mmap() and mlock() a big file and check the
> > Unevictable and Mlocked values from /proc/meminfo. On a freshly
> > booted system without any other mlocked memory, we expect them to
> > match or be quite close.
> >
> > See below for more detailed reproduction steps. Source code of stat.c
> > is available at [1].
> >
> > $ head -c 8G < /dev/urandom > /tmp/random.bin
> >
> > $ cc -pedantic -Wall -std=c99 stat.c -O3 -o /tmp/stat
> > $ /tmp/stat
> > Unevictable: 8389668 kB
> > Mlocked: 8389700 kB
> >
> > The binary needs to be run twice. The problem does not reproduce on
> > the first run, but always reproduces on the second run.
> >
> > $ /tmp/stat
> > Unevictable: 5374676 kB
> > Mlocked: 8389332 kB
>
> I think it is worth starting with the problem statement.
>
> I like to follow this pattern of commit messages:
>
>

Thanks for the suggestion, the v2 commit message will match this pattern.

> > [1]: https://gist.github.com/ilvokhin/e50c3d2ff5d9f70dcbb378c6695386dd
> >
> > Co-developed-by: Kiryl Shutsemau
> > Signed-off-by: Kiryl Shutsemau
> > Signed-off-by: Dmitry Ilvokhin
>
> Your Co-developed-by is missing. See submitting-patches.rst.

I followed the example of a patch submitted by the From: author in
submitting-patches.rst. That example doesn't have a Co-developed-by tag
from the From: author. That being said, I found both usages in the mm
commit log, so I'll add my Co-developed-by tag in v2.

> > ---
> >  mm/swap.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> >
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 2260dcd2775e..f682f070160b 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -469,6 +469,16 @@ void folio_mark_accessed(struct folio *folio)
> >  		 * this list is never rotated or maintained, so marking an
> >  		 * unevictable page accessed has no effect.
> >  		 */
> > +	} else if (folio_test_mlocked(folio)) {
> > +		/*
> > +		 * Pages that are mlocked, but not yet on unevictable LRU.
> > +		 * They might be still in mlock_fbatch waiting to be processed
> > +		 * and activating it here might interfere with
> > +		 * mlock_folio_batch(). __mlock_folio() will fail
> > +		 * folio_test_clear_lru() check and give up. It happens because
> > +		 * __folio_batch_add_and_move() clears LRU flag, when adding
> > +		 * folio to activate batch.
> > +		 */
> >  	} else if (!folio_test_active(folio)) {
> >  		/*
> >  		 * If the folio is on the LRU, queue it for activation via
> > --
> > 2.47.3
> >
>
> --
> Kiryl Shutsemau / Kirill A. Shutemov