public inbox for linux-mm@kvack.org
From: "JP Kobryn (Meta)" <jp.kobryn@linux.dev>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@kernel.org,
	mhocko@suse.com, hannes@cmpxchg.org, shakeel.butt@linux.dev,
	riel@surriel.com, chrisl@kernel.org, kasong@tencent.com,
	shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
	baohua@kernel.org, youngjun.park@lge.com, qi.zheng@linux.dev,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH] mm/lruvec: preemptively free dead folios during lru_add drain
Date: Thu, 23 Apr 2026 11:21:10 -0700	[thread overview]
Message-ID: <c09c9138-b2ab-41d6-be9e-05be87a2bfce@linux.dev> (raw)
In-Reply-To: <aepTnIu1WiyyHNJp@casper.infradead.org>

On 4/23/26 10:15 AM, Matthew Wilcox wrote:
> On Thu, Apr 23, 2026 at 09:43:07AM -0700, JP Kobryn (Meta) wrote:
>> Of all observable lruvec lock contention in our fleet, we find that ~24%
>> occurs when dead folios are present in lru_add batches at drain time. This
>> is wasteful in the sense that the folio is added to the LRU just to be
>> immediately removed via folios_put_refs(), incurring two unnecessary lock
>> acquisitions.
> 
> Well, this is a lovely patch with no obvious downsides.  Nicely done.

Thanks for the kind words and review :)

[...]
>> diff --git a/mm/swap.c b/mm/swap.c
>> index 5cc44f0de9877..71607b0ce3d18 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -160,13 +160,36 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>>   	int i;
>>   	struct lruvec *lruvec = NULL;
>>   	unsigned long flags = 0;
>> +	struct folio_batch free_fbatch;
>> +	bool is_lru_add = (move_fn == lru_add);
>> +
>> +	/*
>> +	 * If we're adding to the LRU, preemptively filter dead folios. Use
>> +	 * this dedicated folio batch for temp storage and deferred cleanup.
>> +	 */
>> +	if (is_lru_add)
>> +		folio_batch_init(&free_fbatch);
>>   
>>   	for (i = 0; i < folio_batch_count(fbatch); i++) {
>>   		struct folio *folio = fbatch->folios[i];
>>   
>>   		/* block memcg migration while the folio moves between lru */
>> -		if (move_fn != lru_add && !folio_test_clear_lru(folio))
>> +		if (!is_lru_add && !folio_test_clear_lru(folio))
>> +			continue;
>> +
>> +		/*
>> +		 * Filter dead folios by moving them from the add batch to the temp
>> +		 * batch for freeing after this loop.
>> +		 *
>> +		 * Since the folio may be part of a huge page, unqueue from
>> +		 * deferred split list to avoid a dangling list entry.
>> +		 */
>> +		if (is_lru_add && folio_ref_freeze(folio, 1)) {
>> +			folio_unqueue_deferred_split(folio);
> 
> Would it be better to do this outside the lru lock; it's just that we
> don't have a convenient batched version to do it?  It seems like
> there are a few places that could use a batched version in vmscan.c and
> swap.c.  Not that I think we should hold up this patch to investigate
> that micro-optimisation!  Just something you could look at as a
> follow-up.

Good call. I'll leave this patch as-is (unless other feedback), then
pursue the batched version of unqueuing the split in a separate
follow-up patch.
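
For reference, the rough shape I have in mind for that follow-up is below.
This is purely a hypothetical sketch, not tested code: it assumes
get_deferred_split_queue() could be made visible outside mm/huge_memory.c
(it is static there today), and it glosses over the partially-mapped /
stat bookkeeping that folio_unqueue_deferred_split() handles. The point is
just that walking the batch in one pass lets us take each split queue's
lock once per run of folios sharing that queue, rather than once per folio:

	/* Sketch: batched unqueue for a batch of already-frozen folios. */
	static void folio_batch_unqueue_deferred_split(struct folio_batch *fbatch)
	{
		struct deferred_split *queue = NULL;
		unsigned long flags;
		int i;

		for (i = 0; i < folio_batch_count(fbatch); i++) {
			struct folio *folio = fbatch->folios[i];
			struct deferred_split *ds;

			/* Only large folios can be on a deferred split list. */
			if (!folio_test_large(folio))
				continue;

			/* Re-take the lock only when the queue changes. */
			ds = get_deferred_split_queue(folio);
			if (ds != queue) {
				if (queue)
					spin_unlock_irqrestore(&queue->split_queue_lock, flags);
				queue = ds;
				spin_lock_irqsave(&queue->split_queue_lock, flags);
			}

			if (!list_empty(&folio->_deferred_list)) {
				list_del_init(&folio->_deferred_list);
				queue->split_queue_len--;
			}
		}
		if (queue)
			spin_unlock_irqrestore(&queue->split_queue_lock, flags);
	}

Since lru_add batches tend to come from the same memcg/node, the common
case should be a single lock acquisition per drain. I'll work out the
details (and the memcg accounting) in the actual follow-up.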

> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>




Thread overview: 14+ messages
2026-04-23 16:43 [PATCH] mm/lruvec: preemptively free dead folios during lru_add drain JP Kobryn (Meta)
2026-04-23 17:15 ` Matthew Wilcox
2026-04-23 18:21   ` JP Kobryn (Meta) [this message]
2026-04-23 18:46 ` Shakeel Butt
2026-04-23 21:18   ` JP Kobryn (Meta)
2026-04-23 22:45     ` Shakeel Butt
2026-04-23 23:22 ` Barry Song
2026-04-23 23:46   ` Shakeel Butt
2026-04-23 23:53     ` Barry Song
2026-04-24  1:46       ` JP Kobryn (Meta)
2026-04-24 15:38       ` JP Kobryn (Meta)
2026-04-24 16:30         ` Shakeel Butt
2026-04-24  7:37 ` [syzbot ci] " syzbot ci
2026-04-24  8:32 ` [PATCH] " Michal Hocko
