From: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
To: Harry Yoo <harry@kernel.org>
Cc: Hao Li <hao.li@linux.dev>, Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
hu.shengming@zte.com.cn,
Vinicius Costa Gomes <vinicius.gomes@intel.com>,
"Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Subject: [PATCH RFC v2] mm, slab: add an optimistic __slab_try_return_freelist()
Date: Mon, 27 Apr 2026 11:42:57 +0200 [thread overview]
Message-ID: <20260427-b4-refill-optimistic-return-v2-1-1672467a792d@kernel.org> (raw)
When we end up returning extraneous objects during refill to a slab
where we just did a get_freelist_nofreeze(), it is likely no other CPU
has freed objects to it meanwhile. We can then reattach the remainder of
the freelist without having to walk the (potentially cache cold)
freelist to find its tail and connect slab->freelist to it.
Add a __slab_try_return_freelist() function that does that. As suggested
by Hao Li, it doesn't need to also return the slab to the partial list,
because there's code in __refill_objects_node() that already does that
for any slabs where we don't detach the freelist in the first place.
Reviewed-by: Hao Li <hao.li@linux.dev>
Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
---
Optimizes the current refill leftover handling in a way that should have
no downsides, so we have a better baseline for any further changes (e.g.
spilling or caching the leftover) that involve some tradeoffs.
Git version here:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=b4/refill-optimistic-return
It's based on slab/for-7.2/perf with "mm/slub: defer freelist
construction until after bulk allocation from a new slab".
---
Changes in v2:
- rebase to slab/for-7.2/perf, drop RFC
- simplify to reuse the existing reattaching to partial list (Hao Li)
- Add R-b from Hao Li, thanks!
- drop the stat items - they served to verify the optimistic path was
  succeeding, but are too detailed for mainline
- Link to v1: https://patch.msgid.link/20260421-b4-refill-optimistic-return-v1-1-24f0bfc1acff@kernel.org
---
mm/slub.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 46 insertions(+), 9 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 3c4843834147..c770374490dd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4323,7 +4323,8 @@ static inline bool pfmemalloc_match(struct slab *slab, gfp_t gfpflags)
* Assumes this is performed only for caches without debugging so we
* don't need to worry about adding the slab to the full list.
*/
-static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab)
+static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab,
+ unsigned int *count)
{
struct freelist_counters old, new;
@@ -4339,6 +4340,7 @@ static inline void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *sla
} while (!slab_update_freelist(s, slab, &old, &new, "get_freelist_nofreeze"));
+ *count = old.objects - old.inuse;
return old.freelist;
}
@@ -5502,6 +5504,34 @@ static noinline void free_to_partial_list(
}
}
+/*
+ * Try returning (remainder of) the freelist that we just detached from the
+ * slab. Optimistically assume the slab is still full, so we don't need to find
+ * the tail of the detached freelist.
+ *
+ * Fail if the slab isn't full anymore due to a concurrent free.
+ */
+static bool __slab_try_return_freelist(struct kmem_cache *s, struct slab *slab,
+ void *head, int cnt)
+{
+ struct freelist_counters old, new;
+
+ old.freelist = slab->freelist;
+ old.counters = slab->counters;
+
+ if (old.freelist)
+ return false;
+
+ new.freelist = head;
+ new.counters = old.counters;
+ new.inuse -= cnt;
+
+ if (!slab_update_freelist(s, slab, &old, &new, "__slab_try_return_freelist"))
+ return false;
+
+ return true;
+}
+
/*
* Slow path handling. This may still be called frequently since objects
* have a longer lifetime than the cpu slabs in most processing loads.
@@ -7113,34 +7143,41 @@ __refill_objects_node(struct kmem_cache *s, void **p, gfp_t gfp, unsigned int mi
list_for_each_entry_safe(slab, slab2, &pc.slabs, slab_list) {
+ unsigned int count;
+
list_del(&slab->slab_list);
- object = get_freelist_nofreeze(s, slab);
+ object = get_freelist_nofreeze(s, slab, &count);
- while (object && refilled < max) {
+ while (count && refilled < max) {
p[refilled] = object;
object = get_freepointer(s, object);
maybe_wipe_obj_freeptr(s, p[refilled]);
refilled++;
+ count--;
}
/*
* Freelist had more objects than we can accommodate, we need to
- * free them back. We can treat it like a detached freelist, just
- * need to find the tail object.
+ * free them back. First we try to be optimistic and assume the
+ * slab is still full since we just detached its freelist.
+ * Otherwise we need to find the tail object.
*/
- if (unlikely(object)) {
+ if (unlikely(count)) {
void *head = object;
void *tail;
- int cnt = 0;
+
+ if (__slab_try_return_freelist(s, slab, head, count)) {
+ list_add(&slab->slab_list, &pc.slabs);
+ break;
+ }
do {
tail = object;
- cnt++;
object = get_freepointer(s, object);
} while (object);
- __slab_free(s, slab, head, tail, cnt, _RET_IP_);
+ __slab_free(s, slab, head, tail, count, _RET_IP_);
}
if (refilled >= max)
---
base-commit: 8952728641305ebcd03e80f79b8d31bb41d6d95f
change-id: 20260421-b4-refill-optimistic-return-f44d3b74cc49
Best regards,
--
Vlastimil Babka (SUSE) <vbabka@kernel.org>