* [to-be-updated] mm-list_lru-introduce-folio_memcg_list_lru_alloc.patch removed from -mm tree
From: Andrew Morton @ 2026-03-30 19:03 UTC
To: mm-commits, hannes, akpm
The quilt patch titled
     Subject: mm: list_lru: introduce folio_memcg_list_lru_alloc()
has been removed from the -mm tree.  Its filename was
     mm-list_lru-introduce-folio_memcg_list_lru_alloc.patch

This patch was dropped because an updated version will be issued
------------------------------------------------------
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: list_lru: introduce folio_memcg_list_lru_alloc()
Date: Wed, 18 Mar 2026 15:53:24 -0400
memcg_list_lru_alloc() is called every time an object that may end up on
the list_lru is created.  It needs to quickly check if the list_lru heads
for the memcg already exist, and allocate them when they don't.

Doing this with folio objects is tricky: folio_memcg() is not stable and
requires either RCU protection or pinning the cgroup.  But it's desirable
to make the existence check lightweight under RCU, and only pin the memcg
when we need to allocate list_lru heads and may block.

In preparation for switching the THP shrinker to list_lru, add a helper
function for allocating list_lru heads coming from a folio.
Link: https://lkml.kernel.org/r/20260318200352.1039011-7-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Nico Pache <npache@redhat.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: Wei Xu <weixugc@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/list_lru.h |   12 ++++++++++++
 mm/list_lru.c            |   39 ++++++++++++++++++++++++++++++-----
 2 files changed, 46 insertions(+), 5 deletions(-)
--- a/include/linux/list_lru.h~mm-list_lru-introduce-folio_memcg_list_lru_alloc
+++ a/include/linux/list_lru.h
@@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_ke
 int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 			 gfp_t gfp);
+
+#ifdef CONFIG_MEMCG
+int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
+			       gfp_t gfp);
+#else
+static inline int folio_memcg_list_lru_alloc(struct folio *folio,
+					     struct list_lru *lru, gfp_t gfp)
+{
+	return 0;
+}
+#endif
+
 void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
 
 /**
--- a/mm/list_lru.c~mm-list_lru-introduce-folio_memcg_list_lru_alloc
+++ a/mm/list_lru.c
@@ -537,17 +537,14 @@ static inline bool memcg_list_lru_alloca
 	return idx < 0 || xa_load(&lru->xa, idx);
 }
 
-int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
-			 gfp_t gfp)
+static int __memcg_list_lru_alloc(struct mem_cgroup *memcg,
+				  struct list_lru *lru, gfp_t gfp)
 {
 	unsigned long flags;
 	struct list_lru_memcg *mlru = NULL;
 	struct mem_cgroup *pos, *parent;
 	XA_STATE(xas, &lru->xa, 0);
 
-	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
-		return 0;
-
 	gfp &= GFP_RECLAIM_MASK;
 	/*
 	 * Because the list_lru can be reparented to the parent cgroup's
@@ -588,6 +585,38 @@ int memcg_list_lru_alloc(struct mem_cgro
 	return xas_error(&xas);
 }
+
+int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
+			 gfp_t gfp)
+{
+	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
+		return 0;
+	return __memcg_list_lru_alloc(memcg, lru, gfp);
+}
+
+int folio_memcg_list_lru_alloc(struct folio *folio, struct list_lru *lru,
+			       gfp_t gfp)
+{
+	struct mem_cgroup *memcg;
+	int res;
+
+	if (!list_lru_memcg_aware(lru))
+		return 0;
+
+	/* Fast path when list_lru heads already exist */
+	rcu_read_lock();
+	memcg = folio_memcg(folio);
+	res = memcg_list_lru_allocated(memcg, lru);
+	rcu_read_unlock();
+	if (likely(res))
+		return 0;
+
+	/* Allocation may block, pin the memcg */
+	memcg = get_mem_cgroup_from_folio(folio);
+	res = __memcg_list_lru_alloc(memcg, lru, gfp);
+	mem_cgroup_put(memcg);
+	return res;
+}
 #else
 static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
_
Patches currently in -mm which might be from hannes@cmpxchg.org are

mm-switch-deferred-split-shrinker-to-list_lru.patch