public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
From: Johannes Weiner <hannes@cmpxchg.org>
To: "Liam R. Howlett" <liam@infradead.org>
Cc: Matthew Wilcox <willy@infradead.org>,
	lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org,
	David Hildenbrand <david@kernel.org>, Jan Kara <jack@suse.cz>,
	Ryan Roberts <ryan.roberts@arm.com>,
	Christian Brauner <brauner@kernel.org>
Subject: Re: [LSF/MM/BPF TOPIC] Page cache tracking with the Maple Tree
Date: Fri, 24 Apr 2026 16:00:03 -0400	[thread overview]
Message-ID: <aevLw0DsPOEeRd-J@cmpxchg.org> (raw)
In-Reply-To: <oi2kit3a2qulr53c7j3h4g3cjjgbss4kd2djdqota2sriiaaov@7uewnraaq4gk>

On Fri, Apr 24, 2026 at 12:45:32PM -0400, Liam R. Howlett wrote:
> On 26/04/20 04:18PM, Johannes Weiner wrote:
> > On Fri, Apr 17, 2026 at 08:50:01PM +0100, Matthew Wilcox wrote:
> > > On Tue, Feb 24, 2026 at 12:10:26PM -0500, Liam R. Howlett wrote:
> > > > The Maple Tree needs some enhancements:
> > > >  - Support for purging shadow entries from the page cache
> > > 
> > > Liam and I hd some preliminary discussions yesterday around this
> > > and we'd like some feedback if anyone has time before LSFMM.
> > > 
> > > For those who aren't aware, when a folio falls off the end of the
> > > LRU list, we store a shadow entry in the page cache in its place
> > > so that if we access that page again, we know where to put its folio in
> > > the LRU list.
> > > 
> > > But this creates a problem (documented in mm/workingset.c) where
> > > we can fill up memory with shadow entries.  Currently, we embed a
> > > list_head in xa_node and add nodes which contain only shadow entries
> > > to a list which can be walked by a shrinker when we're low on memory.
> > > Ideally we wouldn't do that with the maple tree.  There are a few
> > > options.
> > > 
> > > The first question we have is whether it's best to keep nodes around to
> > > wait for a shrinker to kick in.  Was any experimentation done to
> > > see whether eagerly freeing a node that contains only shadow entries
> > > has a bad effect on performance?
> > 
> > Hm, I'm not sure how that could work.
> > 
> > The LRU order created by readahead makes it highly likely that all the
> > folios in a cache node are reclaimed/made non-resident at once. Going
> > this route would destroy a large part of the non-resident cache the
> > moment it is created.
> 
> Not the moment it is created, but the moment the entire node only has
> NULLs or shadow entries.  Meaning it's removed the moment the last
> entry becomes a shadow entry.

I'm saying they can be often the same event. Readahead means LRU order
matches file index / node range order. So there is a good chance that
reclaim batches will create all shadow entries in a node in one go.

> Still much sooner, but not as extreme as just dropping them immediately.
> 
> > 
> > The goal is to garbage-collect the oldest shadow entries whose
> > distances are too long to be actionable at this point. Specifically,
> > their distance to lruvec->nonresident_age (per-cgroup, per-node).
> > 
> > In the current scheme, we just go in the order in which nodes became
> > all-shadow - oldest first. And we only do so lazily when the
> > non-resident cache is far into that territory (cache set vastly larger
> > than available memory). That gives us confidence that we're mostly
> > dropping very old entries without having to look at them one by one.
> > 
> > We don't have to stick with that design, but whatever replaces it
> > should meet the goal, or approximate it well enough.
> 
> The goal of garbage collecting the oldest shadow entries based on the
> nonresident_age.  That's a broad target, which I think is made more
> specific within the implementation to 'groupings of 64 shadow entries',
> which I bring up later.
> 
> Besides the locking improvements that we can bring, is there anything
> that you have found doesn't work optimal in the current solution that
> may be nice to fix?
> 
> > 
> > > The second idea we talked about is that the maple tree is much more
> > > flexible than the radix tree.  Having even a single folio in a node pins
> > > the entire node, so it's "free" to keep the shadow entries in that node
> > > around.  But with the maple tree, we can be much more granular and
> > > delete shadow entries in arbitrary positions.  So we could (for example)
> > > keep track of inodes which contain shadow entries and purge shadow
> > > entries when they reach, say, 10% of the number of pages.  Or 1000
> > > entries, or some other threshold.
> > 
> > It's not the volume or the concentration of shadows, it's their age
> > that makes them good candidates for garbage collection.
> 
> Maybe I'm reading this wrong but it seems workingset_update_node() adds
> nodes to the list that is composed of all shadow entries (or removes it
> if we have changed one to resident).
> 
> Doesn't that mean there is a concentration of shadow entries since the
> entries in the tree are in order?
> 
> That is, reclaim is at node level granularity, and the nodes are sorted
> sets.  I think, sibling entries aside, it's fair to call that a grouping
> of 64 shadow entries?
> 
> This also means that with today's code we are keeping older entries over
> newer entries because the node has at least one resident entry.  But
> that's fine, because we can't gain anything by removing them today.

Yes that's a great catch.

It's the last eviction, i.e. the very youngest shadow entry at this
point, that suddenly demotes the whole node to the "old" set. And
there could be much older entries in circulation that just happen to
still be next to a resident entry.

But I think this would require a pretty spectacular breakdown in
access locality. More likely you get mixed nodes when individual
entries refault. And yes, not much we can or need to do about it,
since the resident entries pin the whole node anyway.

So the question, I think, is still: in what order do you reclaim a
gazillion all-shadow nodes when the time comes. And the answer to
that, IMO, is still best approximated by going in the order in which
they became all-shadow.


      reply	other threads:[~2026-04-24 20:00 UTC|newest]

Thread overview: 6+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2026-02-24 17:10 [LSF/MM/BPF TOPIC] Page cache tracking with the Maple Tree Liam R. Howlett
2026-04-17 19:50 ` Matthew Wilcox
2026-04-17 23:14   ` Liam R. Howlett
2026-04-20 20:18   ` Johannes Weiner
2026-04-24 16:45     ` Liam R. Howlett
2026-04-24 20:00       ` Johannes Weiner [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=aevLw0DsPOEeRd-J@cmpxchg.org \
    --to=hannes@cmpxchg.org \
    --cc=brauner@kernel.org \
    --cc=david@kernel.org \
    --cc=jack@suse.cz \
    --cc=liam@infradead.org \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=lsf-pc@lists.linux-foundation.org \
    --cc=ryan.roberts@arm.com \
    --cc=willy@infradead.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox