From: Vlastimil Babka <vbabka@suse.cz>
To: Suren Baghdasaryan <surenb@google.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Christoph Lameter <cl@gentwo.org>,
David Rientjes <rientjes@google.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>,
Harry Yoo <harry.yoo@oracle.com>,
Uladzislau Rezki <urezki@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rcu@vger.kernel.org, maple-tree@lists.infradead.org,
vbabka@suse.cz
Subject: [PATCH v6 06/10] slab: skip percpu sheaves for remote object freeing
Date: Wed, 27 Aug 2025 10:26:38 +0200
Message-ID: <20250827-slub-percpu-caches-v6-6-f0f775a3f73f@suse.cz>
In-Reply-To: <20250827-slub-percpu-caches-v6-0-f0f775a3f73f@suse.cz>

Since we don't control the NUMA locality of objects in percpu sheaves,
allocations with node restrictions bypass them. Allocations without
restrictions may however still expect to get local objects with high
probability, and the introduction of sheaves can decrease it due to
freed objects from a remote node ending up in percpu sheaves.

The fraction of such remote frees seems low (5% on an 8-node machine)
but it can be expected that some cache- or workload-specific corner cases
exist. We can either conclude that this is not a problem due to the low
fraction, or we can make remote frees bypass percpu sheaves and go
directly to their slabs. This will make the remote frees more expensive,
but if it's only a small fraction, most frees will still benefit from
the lower overhead of percpu sheaves.
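
For illustration, the locality check added to the freeing paths boils
down to the following (a sketch only; can_free_to_sheaf() is a made-up
helper name, the patch below open-codes the condition at each site):

	/*
	 * Free via percpu sheaves only if the cache has them and the
	 * object's slab is on the local memory node. On !CONFIG_NUMA
	 * the node comparison is compiled away.
	 */
	static inline bool can_free_to_sheaf(struct kmem_cache *s,
					     struct slab *slab)
	{
		if (!s->cpu_sheaves)
			return false;

		return !IS_ENABLED(CONFIG_NUMA) ||
		       slab_nid(slab) == numa_mem_id();
	}
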
This patch thus makes remote object freeing bypass percpu sheaves,
including bulk freeing, and kfree_rcu() via the rcu_free sheaf. However,
it's not intended to be a 100% guarantee that percpu sheaves will only
contain local objects. The refill from slabs does not provide that
guarantee in the first place, and there might be cpu migrations
happening when we need to unlock the local_lock. Avoiding all that could
be possible but complicated, so we leave it for later investigation of
whether it would be worth it. It can be expected that the more selective
freeing will itself prevent accumulation of remote objects in percpu
sheaves, so any such violations would have only short-term effects.
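
In the bulk path below, remote objects are set aside into a bounded
on-stack array and flushed straight to their slabs in batches of up to
PCS_BATCH_MAX; schematically (a condensed, hypothetical rendering of
the free_to_pcs_bulk() hunk, not the verbatim code):

	/* while walking p[0..size): set remote objects aside */
	if (IS_ENABLED(CONFIG_NUMA) && slab_nid(slab) != node) {
		remote_objects[remote_nr] = p[i];
		p[i] = p[--size];	/* compact the local batch */
		if (++remote_nr >= PCS_BATCH_MAX)
			goto flush_remote;	/* free the batch, resume */
		continue;
	}
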
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/slab_common.c | 7 +++++--
mm/slub.c | 42 ++++++++++++++++++++++++++++++++++++------
2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2d806e02568532a1000fd3912db6978e945dcfa8..08f5baee1309e5b5f10a22b8b3b0a09dfb314419 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1623,8 +1623,11 @@ static bool kfree_rcu_sheaf(void *obj)
 	slab = folio_slab(folio);
 	s = slab->slab_cache;
 
-	if (s->cpu_sheaves)
-		return __kfree_rcu_sheaf(s, obj);
+	if (s->cpu_sheaves) {
+		if (likely(!IS_ENABLED(CONFIG_NUMA) ||
+			   slab_nid(slab) == numa_mem_id()))
+			return __kfree_rcu_sheaf(s, obj);
+	}
 
 	return false;
 }
diff --git a/mm/slub.c b/mm/slub.c
index ee3a222acd6b15389a71bb47429d22b5326a4624..b37e684457e7d14781466c0086d1b64df2fd8e9d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -472,6 +472,7 @@ struct slab_sheaf {
 	};
 	struct kmem_cache *cache;
 	unsigned int size;
+	int node; /* only used for rcu_sheaf */
 	void *objects[];
 };
 
@@ -5744,7 +5745,7 @@ static void rcu_free_sheaf(struct rcu_head *head)
 	 */
 	__rcu_free_sheaf_prepare(s, sheaf);
 
-	barn = get_node(s, numa_mem_id())->barn;
+	barn = get_node(s, sheaf->node)->barn;
 
 	/* due to slab_free_hook() */
 	if (unlikely(sheaf->size == 0))
@@ -5827,10 +5828,12 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
 	rcu_sheaf->objects[rcu_sheaf->size++] = obj;
 
-	if (likely(rcu_sheaf->size < s->sheaf_capacity))
+	if (likely(rcu_sheaf->size < s->sheaf_capacity)) {
 		rcu_sheaf = NULL;
-	else
+	} else {
 		pcs->rcu_free = NULL;
+		rcu_sheaf->node = numa_mem_id();
+	}
 
 	local_unlock(&s->cpu_sheaves->lock);
 
@@ -5856,7 +5859,11 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 	struct slab_sheaf *main, *empty;
 	bool init = slab_want_init_on_free(s);
 	unsigned int batch, i = 0;
+	void *remote_objects[PCS_BATCH_MAX];
+	unsigned int remote_nr = 0;
+	int node = numa_mem_id();
 
+next_remote_batch:
 	while (i < size) {
 		struct slab *slab = virt_to_slab(p[i]);
 
@@ -5866,7 +5873,15 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 		if (unlikely(!slab_free_hook(s, p[i], init, false))) {
 			p[i] = p[--size];
 			if (!size)
-				return;
+				goto flush_remote;
+			continue;
+		}
+
+		if (unlikely(IS_ENABLED(CONFIG_NUMA) && slab_nid(slab) != node)) {
+			remote_objects[remote_nr] = p[i];
+			p[i] = p[--size];
+			if (++remote_nr >= PCS_BATCH_MAX)
+				goto flush_remote;
 			continue;
 		}
 
@@ -5934,6 +5949,15 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
 	 */
 fallback:
 	__kmem_cache_free_bulk(s, size, p);
+
+flush_remote:
+	if (remote_nr) {
+		__kmem_cache_free_bulk(s, remote_nr, &remote_objects[0]);
+		if (i < size) {
+			remote_nr = 0;
+			goto next_remote_batch;
+		}
+	}
 }
 
 #ifndef CONFIG_SLUB_TINY
@@ -6025,8 +6049,14 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
 		return;
 
-	if (!s->cpu_sheaves || !free_to_pcs(s, object))
-		do_slab_free(s, slab, object, object, 1, addr);
+	if (s->cpu_sheaves && likely(!IS_ENABLED(CONFIG_NUMA) ||
+				     slab_nid(slab) == numa_mem_id())) {
+		if (likely(free_to_pcs(s, object))) {
+			return;
+		}
+	}
+
+	do_slab_free(s, slab, object, object, 1, addr);
 }
 
 #ifdef CONFIG_MEMCG
--
2.51.0