* [PATCH RFC v2] mm, slab: add an optimistic __slab_try_return_freelist()
From: Vlastimil Babka (SUSE) @ 2026-04-27 9:42 UTC
To: Harry Yoo
Cc: Hao Li, Christoph Lameter, David Rientjes, Roman Gushchin,
Andrew Morton, linux-mm, linux-kernel, hu.shengming,
Vinicius Costa Gomes, Vlastimil Babka (SUSE)
When we end up returning extraneous objects during refill to a slab
where we just did a get_freelist_nofreeze(), it is likely no other CPU
has freed objects to it meanwhile. We can then reattach the remainder of
the freelist without having to walk the (potentially cache-cold)
freelist to find its tail and link slab->freelist to it.
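For illustration, a minimal userspace model of the two paths (the types
and helpers below are simplified stand-ins, not the kernel API; the real
code uses the double-word cmpxchg in slab_update_freelist() and also
adjusts the inuse counter):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct fake_slab {
	/* NULL while the whole freelist is detached, i.e. slab "full" */
	_Atomic(void *) freelist;
};

/* Pessimistic path: walk the (cache-cold) leftover to find its tail. */
static void *find_tail(void *head, void *(*next)(void *))
{
	void *tail = head;

	while (next(tail))
		tail = next(tail);
	return tail;
}

/*
 * Optimistic path: if no other CPU freed to the slab meanwhile, its
 * freelist is still NULL and a single compare-and-swap reattaches the
 * leftover head without touching any of the objects.
 */
static bool try_return_freelist(struct fake_slab *slab, void *head)
{
	void *expected = NULL;

	return atomic_compare_exchange_strong(&slab->freelist, &expected, head);
}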
Add a __slab_try_return_freelist() function that does that. As suggested
by Hao Li, it doesn't need to also return the slab to the partial list,
because there's code in __refill_objects_node() that already does that
for any slabs where we don't detach the freelist in the first place.
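Building on the model above, the caller-side shape is roughly as follows
(free_detached() is an illustrative stub standing in for __slab_free();
in the real caller, success means the slab is simply put back on the
local list so the existing code sees a slab with an attached freelist):

static void free_detached(struct fake_slab *slab, void *head, void *tail)
{
	/* illustrative stub for __slab_free(): splice [head..tail] back */
	(void)slab;
	(void)head;
	(void)tail;
}

static void return_leftover(struct fake_slab *slab, void *head,
			    void *(*next)(void *))
{
	/* Optimistic: one cmpxchg, no freelist walk. */
	if (try_return_freelist(slab, head))
		return;

	/* A concurrent free beat us; fall back to the tail walk. */
	free_detached(slab, head, find_tail(head, next));
}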
Reviewed-by: Hao Li <hao.li@linux.dev>
Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
---
Optimizes the current refill leftover handling in a way that should have
no downsides, so we have a better baseline for any further changes (e.g.
spilling or caching the leftover) that involve some tradeoffs.
Git version here:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=b4/refill-optimistic-return
It's based on slab/for-7.2/perf with "mm/slub: defer freelist
construction until after bulk allocation from a new slab".
---
Changes in v2:
- rebase to slab/for-7.2/perf, drop RFC
- simplify to reuse the existing reattaching to partial list (Hao Li)
- Add R-b from Hao Li, thanks!
- drop the stat items - they served to verify the optimistic path was
succeeding, but are too detailed for mainline
- Link to v1: https://patch.msgid.link/20260421-b4-refill-optimistic-return-v1-1-24f0bfc1acff@kernel.org
---
mm/slub.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 46 insertions(+), 9 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 3c4843834147..c770374490dd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4323,7 +4323,8 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
* Assumes this is performed only for caches without debugging so we
* don't need to worry about adding the slab to the full list.
*/
-static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab)
+static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab,
+ unsigned int *count)
{
struct freelist_counters old, new;
@@ -4339,6 +4340,7 @@ static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *sla
} while (!slab_update_freelist(s, slab, &old, &new, "get_freelist_nofreeze"));
+ *count = old.objects - old.inuse;
return old.freelist;
}
@@ -5502,6 +5504,34 @@ static noinline void free_to_partial_list(
}
}
+/*
+ * Try returning (remainder of) the freelist that we just detached from the
+ * slab. Optimistically assume the slab is still full, so we don't need to find
+ * the tail of the detached freelist.
+ *
+ * Fail if the slab isn't full anymore due to a concurrent free.
+ */
+static bool __slab_try_return_freelist(struct kmem_cache *s, struct slab *slab,
+ void *head, int cnt)
+{
+ struct freelist_counters old, new;
+
+ old.freelist = slab->freelist;
+ old.counters = slab->counters;
+
+ if (old.freelist)
+ return false;
+
+ new.freelist = head;
+ new.counters = old.counters;
+ new.inuse -= cnt;
+
+ if (!slab_update_freelist(s, slab, &old, &new, "__slab_try_return_freelist"))
+ return false;
+
+ return true;
+}
+
/*
* Slow path handling. This may still be called frequently since objects
* have a longer lifetime than the cpu slabs in most processing loads.
@@ -7113,34 +7143,41 @@ __refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int mi
list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
+ unsigned int count;
+
list_del(&slab->slab_list);
- object = get_freelist_nofreeze(s, slab);
+ object = get_freelist_nofreeze(s, slab, &count);
- while (object && refilled < max) {
+ while (count && refilled < max) {
p[refilled] = object;
object = get_freepointer(s, object);
maybe_wipe_obj_freeptr(s, p[refilled]);
refilled++;
+ count--;
}
/*
* Freelist had more objects than we can accommodate, we need to
- * free them back. We can treat it like a detached freelist, just
- * need to find the tail object.
+ * free them back. First we try to be optimistic and assume the
+ * slab is still full since we just detached its freelist.
+ * Otherwise we need to find the tail object.
*/
- if (unlikely(object)) {
+ if (unlikely(count)) {
void *head = object;
void *tail;
- int cnt = 0;
+
+ if (__slab_try_return_freelist(s, slab, head, count)) {
+ list_add(&slab->slab_list, &pc.slabs);
+ break;
+ }
do {
tail = object;
- cnt++;
object = get_freepointer(s, object);
} while (object);
- __slab_free(s, slab, head, tail, cnt, _RET_IP_);
+ __slab_free(s, slab, head, tail, count, _RET_IP_);
}
if (refilled >= max)
---
base-commit: 8952728641305ebcd03e80f79b8d31bb41d6d95f
change-id: 20260421-b4-refill-optimistic-return-f44d3b74cc49
Best regards,
--
Vlastimil Babka (SUSE) <vbabka@kernel.org>