* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
       [not found] <20260410112202.142597-1-hao.li@linux.dev>
@ 2026-04-14  8:39 ` Harry Yoo (Oracle)
  2026-04-14  9:59   ` Hao Li
  0 siblings, 1 reply; 5+ messages in thread
From: Harry Yoo (Oracle) @ 2026-04-14 8:39 UTC (permalink / raw)
  To: Hao Li
  Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
	linux-kernel, Liam R. Howlett

On Fri, Apr 10, 2026 at 07:16:57PM +0800, Hao Li wrote:
> When performing objects refill, we tend to optimistically assume that
> there will be more allocation requests coming next; this is the
> fundamental assumption behind this optimization.

I think the reason why we currently have two sheaves per CPU instead of
one bigger sheaf is to avoid unfairly pessimizing workloads whose
alloc/free pattern changes frequently.

By refilling more objects, frees are more likely to hit the slowpath.
How can it be argued that this optimization is beneficial in general,
not just for caches with specific alloc/free patterns?

> When __refill_objects_node() isolates a partial slab and satisfies a
> bulk allocation from its freelist, the slab can still have a small tail
> of free objects left over. Today those objects are freed back to the
> slab immediately.
>
> If the leftover tail is local and small enough to fit, keep it in the
> current CPU's sheaves instead. This avoids pushing those objects back
> through the __slab_free slowpath.

So there are two different paths:

1. When refilling prefilled sheaves, spill objects into ->main and
   ->spare.
2. When refilling ->main sheaf, spill objects into ->spare.

> Add a helper to obtain both the freelist and its free-object count, and
> then spill the remaining objects into a percpu sheaf when:
> - the tail fits in a sheaf
> - the slab is local to the current CPU
> - the slab is not pfmemalloc
> - the target sheaf has enough free space
>
> Otherwise keep the existing fallback and free the tail back to the slab.
>
> Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
> stats.
>
> On the mmap2 case in the will-it-scale benchmark suite,
> this patch can improve performance by about 2~5%.

Where do you think the improvement comes from? (hopefully w/ some data)

e.g.:
1. the benefit comes largely or partly from reduced contention on
   n->list_lock.
2. this change reduces the # of alloc slowpath hits at the cost of
   increased free slowpath hits, but that's better because the slowpath
   frees are mostly lockless.
3. the alloc/free pattern of the workload benefits from spilling
   objects into the CPU's sheaves.

or something else?

> Signed-off-by: Hao Li <hao.li@linux.dev>
> ---
>
> This patch is an exploratory attempt to address the leftover objects and
> partial slab issues in the refill path, and it is marked as RFC to warmly
> welcome any feedback, suggestions, and discussion!

Yeah, let's discuss!

By the way, have you also been considering a min-max capacity for
sheaves? (which I think Vlastimil suggested somewhere)

--
Cheers,
Harry / Hyeonggon
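
A minimal sketch of the spill condition described in the changelog
quoted above; the helper name and the sheaf->size field access are
assumptions for illustration only, not code from the actual RFC patch:

/*
 * Illustrative sketch only: the helper name and the sheaf->size field
 * are assumptions, not the actual RFC patch code.
 */
static bool tail_fits_in_pcpu_sheaf(struct kmem_cache *s, struct slab *slab,
				    struct slab_sheaf *sheaf,
				    unsigned int tail_objects)
{
	/* the leftover tail must fit into a sheaf at all */
	if (tail_objects > s->sheaf_capacity)
		return false;

	/* only keep objects whose slab is local to this CPU's node */
	if (slab_nid(slab) != numa_mem_id())
		return false;

	/* pfmemalloc slabs must go back through the regular free path */
	if (slab_test_pfmemalloc(slab))
		return false;

	/* and the target sheaf needs room for the whole tail */
	return sheaf->size + tail_objects <= s->sheaf_capacity;
}

If any of the checks fails, the existing fallback of freeing the tail
back to the slab would still apply.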
* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
  2026-04-14 8:39 ` [RFC PATCH] slub: spill refill leftover objects into percpu sheaves Harry Yoo (Oracle)
@ 2026-04-14  9:59 ` Hao Li
  2026-04-15 10:20   ` Harry Yoo (Oracle)
  0 siblings, 1 reply; 5+ messages in thread
From: Hao Li @ 2026-04-14 9:59 UTC (permalink / raw)
  To: Harry Yoo (Oracle)
  Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
	linux-kernel, Liam R. Howlett

On Tue, Apr 14, 2026 at 05:39:40PM +0900, Harry Yoo (Oracle) wrote:
> On Fri, Apr 10, 2026 at 07:16:57PM +0800, Hao Li wrote:
> > When performing objects refill, we tend to optimistically assume that
> > there will be more allocation requests coming next; this is the
> > fundamental assumption behind this optimization.
>
> I think the reason why we currently have two sheaves per CPU instead of
> one bigger sheaf is to avoid unfairly pessimizing workloads whose
> alloc/free pattern changes frequently.

Yes.

> By refilling more objects, frees are more likely to hit the slowpath.
> How can it be argued that this optimization is beneficial in general,
> not just for caches with specific alloc/free patterns?

Yes, that's a very valid concern.

My thinking here is that the leftover objects have to be kept somewhere
after all, so in this current experimental implementation I'm trading
off future free-path performance for better allocation performance.
It's a pretty tough trade-off either way :/

> > When __refill_objects_node() isolates a partial slab and satisfies a
> > bulk allocation from its freelist, the slab can still have a small tail
> > of free objects left over. Today those objects are freed back to the
> > slab immediately.
> >
> > If the leftover tail is local and small enough to fit, keep it in the
> > current CPU's sheaves instead. This avoids pushing those objects back
> > through the __slab_free slowpath.
>
> So there are two different paths:
>
> 1. When refilling prefilled sheaves, spill objects into ->main and
>    ->spare.
> 2. When refilling ->main sheaf, spill objects into ->spare.

The current experimental code is biased toward spilling into the spare
sheaf when possible.

For kernels without kernel preemption enabled or !RT, the spare sheaf
is generally NULL at that point,
so the main sheaf may still end up
being the primary place to absorb the spill...

> > Add a helper to obtain both the freelist and its free-object count, and
> > then spill the remaining objects into a percpu sheaf when:
> > - the tail fits in a sheaf
> > - the slab is local to the current CPU
> > - the slab is not pfmemalloc
> > - the target sheaf has enough free space
> >
> > Otherwise keep the existing fallback and free the tail back to the slab.
> >
> > Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
> > stats.
> >
> > On the mmap2 case in the will-it-scale benchmark suite,
> > this patch can improve performance by about 2~5%.
>
> Where do you think the improvement comes from? (hopefully w/ some data)

Yes, this is necessary.

> e.g.:
> 1. the benefit comes largely or partly from reduced contention on
>    n->list_lock.

Before this patch is applied, the mmap benchmark shows the following
hot path:

  - 7.85% native_queued_spin_lock_slowpath
     - 7.85% _raw_spin_lock_irqsave
        - 3.69% __slab_free
           + 1.84% __refill_objects_node
           + 1.77% __kmem_cache_free_bulk
        + 3.27% __refill_objects_node

With the patch applied, the __refill_objects_node -> __slab_free hotspot
goes away, and the native_queued_spin_lock_slowpath drops to roughly 3.5%.
The remaining lock contention is mostly between
__refill_objects_node -> add_partial and
__kmem_cache_free_bulk -> __slab_free.

> 2. this change reduces the # of alloc slowpath hits at the cost of
>    increased free slowpath hits, but that's better because the slowpath
>    frees are mostly lockless.

The alloc slowpath remains at 0 both w/ and w/o the patch, whereas the
free slowpath increases by 2x after applying the patch.

> 3. the alloc/free pattern of the workload benefits from spilling
>    objects into the CPU's sheaves.
>
> or something else?

The 2-5% throughput improvement does seem to come with some trade-offs.
The main one is that leftover objects get hidden in the percpu sheaves
now, which reduces the number of objects on the node partial list and
thus indirectly increases slab alloc/free frequency to about 4x of the
baseline.

This is a drawback of the current approach. :/

I experimented with several alternative ideas, and the pattern seems
fairly consistent: as soon as leftover objects are hidden at the percpu
level, slab alloc/free churn tends to go up.

> > Signed-off-by: Hao Li <hao.li@linux.dev>
> > ---
> >
> > This patch is an exploratory attempt to address the leftover objects and
> > partial slab issues in the refill path, and it is marked as RFC to warmly
> > welcome any feedback, suggestions, and discussion!
>
> Yeah, let's discuss!

Sure! Thanks for the discussion!

> By the way, have you also been considering a min-max capacity for
> sheaves? (which I think Vlastimil suggested somewhere)

Yes, I also tried it.

I experimented with using a manually chosen threshold to allow refill
to leave the sheaf in a partially filled state. However, since
concurrent frees are inherently unpredictable, this can only reduce
the probability of generating leftover objects,
while at the same time affecting alloc-side throughput.
In my testing, the results were not very encouraging: it was hard to
observe any improvement, and in most cases it ended up causing a
performance regression.

My impression is that it could be difficult to prevent leftovers
proactively. It may be easier to deal with them after they appear.

Besides, I also tried another idea: maintaining a dedicated spill sheaf
in the barn, protected by the barn lock, and placing leftover objects
there. Then, during refill, barn_replace_empty_sheaf() would first try
the spill sheaf, and if it contained objects, it would swap spill and
main, avoiding consumption from barn->full_list.

With this approach, I still couldn't observe a meaningful performance
change. The slab alloc/free churn was still present, although the
increase was relatively small, at around 1.x times the baseline.

My guess is that while this approach pulls leftovers up to the barn
level and avoids the cost of pushing them back down to the node partial
list level, the serialized nature of the barn lock means leftovers
cannot be deposited into the spill sheaf with high concurrency. As a
result, the placement is not fast enough, and the performance gain
remains limited.

--
Thanks,
Hao
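
A rough sketch of the "dedicated spill sheaf in the barn" alternative
described above; the barn field names (spill, full_list, empty_list,
barn_list) and the list handling are assumptions based on the
description, not code from a posted patch:

/*
 * Sketch only: barn->spill, barn->full_list/empty_list and the
 * barn_list linkage are assumed names used for illustration.
 */
static struct slab_sheaf *barn_replace_empty_sheaf_prefer_spill(
		struct node_barn *barn, struct slab_sheaf *empty)
{
	struct slab_sheaf *full = NULL;
	unsigned long flags;

	spin_lock_irqsave(&barn->lock, flags);

	if (barn->spill && barn->spill->size) {
		/* prefer leftovers parked in the spill sheaf ... */
		full = barn->spill;
		barn->spill = empty;
	} else if (!list_empty(&barn->full_list)) {
		/* ... before consuming a sheaf from the full list */
		full = list_first_entry(&barn->full_list,
					struct slab_sheaf, barn_list);
		list_del(&full->barn_list);
		list_add(&empty->barn_list, &barn->empty_list);
	}

	spin_unlock_irqrestore(&barn->lock, flags);

	return full;
}

As noted in the message above, depositing leftovers this way is
serialized by the barn lock, which is the suspected reason the gain
remained limited.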
* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
  2026-04-14 9:59 ` Hao Li
@ 2026-04-15 10:20   ` Harry Yoo (Oracle)
  2026-04-16  7:58     ` Hao Li
  2026-04-16  8:13     ` Hao Li
  0 siblings, 2 replies; 5+ messages in thread
From: Harry Yoo (Oracle) @ 2026-04-15 10:20 UTC (permalink / raw)
  To: Hao Li
  Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
	linux-kernel, Liam R. Howlett

On Tue, Apr 14, 2026 at 05:59:48PM +0800, Hao Li wrote:
> On Tue, Apr 14, 2026 at 05:39:40PM +0900, Harry Yoo (Oracle) wrote:
> > On Fri, Apr 10, 2026 at 07:16:57PM +0800, Hao Li wrote:
> > > When performing objects refill, we tend to optimistically assume that
> > > there will be more allocation requests coming next; this is the
> > > fundamental assumption behind this optimization.
> > >
> > > When __refill_objects_node() isolates a partial slab and satisfies a
> > > bulk allocation from its freelist, the slab can still have a small tail
> > > of free objects left over. Today those objects are freed back to the
> > > slab immediately.
> > >
> > > If the leftover tail is local and small enough to fit, keep it in the
> > > current CPU's sheaves instead. This avoids pushing those objects back
> > > through the __slab_free slowpath.
> >
> > So there are two different paths:
> >
> > 1. When refilling prefilled sheaves, spill objects into ->main and
> >    ->spare.
> > 2. When refilling ->main sheaf, spill objects into ->spare.
>
> The current experimental code is biased toward spilling into the spare
> sheaf when possible.

Oh ok.

> For kernels without kernel preemption enabled or !RT, the spare sheaf
> is generally NULL at that point,

Right. We're either refilling the previously-spare-sheaf (->spare = NULL
now) or an empty sheaf because ->spare was NULL. (in both cases 1 and 2)

> so the main sheaf may still end up
> being the primary place to absorb the spill...
>
> > > Add a helper to obtain both the freelist and its free-object count, and
> > > then spill the remaining objects into a percpu sheaf when:
> > > - the tail fits in a sheaf
> > > - the slab is local to the current CPU
> > > - the slab is not pfmemalloc
> > > - the target sheaf has enough free space
> > >
> > > Otherwise keep the existing fallback and free the tail back to the slab.
> > >
> > > Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
> > > stats.
> > >
> > > On the mmap2 case in the will-it-scale benchmark suite,
> > > this patch can improve performance by about 2~5%.
> >
> > Where do you think the improvement comes from? (hopefully w/ some data)
>
> Yes, this is necessary.
>
> > e.g.:
> > 1. the benefit comes largely or partly from reduced contention on
> >    n->list_lock.
>
> Before this patch is applied, the mmap benchmark shows the following
> hot path:
>
>   - 7.85% native_queued_spin_lock_slowpath
>      - 7.85% _raw_spin_lock_irqsave
>         - 3.69% __slab_free
>            + 1.84% __refill_objects_node
>            + 1.77% __kmem_cache_free_bulk
>         + 3.27% __refill_objects_node
>
> With the patch applied, the __refill_objects_node -> __slab_free hotspot
> goes away, and the native_queued_spin_lock_slowpath drops to roughly 3.5%.

Sounds like returning slabs back indeed increases contention on the
slowpath.

> The remaining lock contention is mostly between
> __refill_objects_node -> add_partial and
> __kmem_cache_free_bulk -> __slab_free.
>
> > 2. this change reduces the # of alloc slowpath hits at the cost of
> >    increased free slowpath hits, but that's better because the slowpath
> >    frees are mostly lockless.
>
> The alloc slowpath remains at 0 both w/ and w/o the patch, whereas the

(assuming you used SLUB_STATS for this)

That's weird, I think we should check SHEAF_REFILL instead of
ALLOC_SLOWPATH.

> free slowpath increases by 2x after applying the patch.

From which cache was this stat collected?

> > 3. the alloc/free pattern of the workload benefits from spilling
> >    objects into the CPU's sheaves.
> >
> > or something else?
>
> The 2-5% throughput improvement does seem to come with some trade-offs.
> The main one is that leftover objects get hidden in the percpu sheaves
> now, which reduces the number of objects on the node partial list and
> thus indirectly increases slab alloc/free frequency to about 4x of the
> baseline.
>
> This is a drawback of the current approach. :/

Sounds like s->min_partial is too small now that we cache more objects
per CPU.

/me wonders if increasing sheaf capacity would make more sense
rather than optimizing slowpath (if it comes with increased memory
usage anyway), but then stares at his (yet) unfinished patch series...

> I experimented with several alternative ideas, and the pattern seems
> fairly consistent: as soon as leftover objects are hidden at the percpu
> level, slab alloc/free churn tends to go up.
>
> > > Signed-off-by: Hao Li <hao.li@linux.dev>
> > > ---
> > >
> > > This patch is an exploratory attempt to address the leftover objects and
> > > partial slab issues in the refill path, and it is marked as RFC to warmly
> > > welcome any feedback, suggestions, and discussion!
> >
> > Yeah, let's discuss!
>
> Sure! Thanks for the discussion!
>
> > By the way, have you also been considering a min-max capacity for
> > sheaves? (which I think Vlastimil suggested somewhere)
>
> Yes, I also tried it.
>
> I experimented with using a manually chosen threshold to allow refill
> to leave the sheaf in a partially filled state. However, since
> concurrent frees are inherently unpredictable, this can only reduce
> the probability of generating leftover objects,

If concurrent frees are a problem we could probably grab slab->freelist
under n->list_lock (e.g. keep them at the end of the sheaf) and fill the
sheaf outside the lock to avoid grabbing too many objects.

> while at the same time affecting alloc-side throughput.

Shouldn't we set the sheaf's min capacity to the same as
s->sheaf_capacity and allow a higher max capacity to avoid this?

> In my testing, the results were not very encouraging: it was hard to
> observe any improvement, and in most cases it ended up causing a
> performance regression.
>
> My impression is that it could be difficult to prevent leftovers
> proactively. It may be easier to deal with them after they appear.

Either way doesn't work if the slab order is too high...

IIRC using a higher slab order used to have some benefit,
but now that we have sheaves, it probably doesn't make sense anymore
to have oo_objects(s->oo) > s->sheaf_capacity?

--
Cheers,
Harry / Hyeonggon
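
A rough sketch of the "grab slab->freelist under n->list_lock, then
fill the sheaf outside the lock" idea from the message above. The
detach_partial_freelist() and free_leftover_list() helpers are
hypothetical, and the sheaf field accesses are illustrative; only the
split between locked and unlocked work is the point:

/*
 * Sketch only: detach_partial_freelist() and free_leftover_list() are
 * hypothetical helpers used to illustrate the locked/unlocked split.
 */
static unsigned int refill_sheaf_outside_list_lock(struct kmem_cache *s,
						   struct kmem_cache_node *n,
						   struct slab_sheaf *sheaf)
{
	unsigned long flags;
	unsigned int filled = 0;
	void *freelist;

	spin_lock_irqsave(&n->list_lock, flags);
	/* take a whole freelist; concurrent frees build a new one */
	freelist = detach_partial_freelist(s, n);
	spin_unlock_irqrestore(&n->list_lock, flags);

	/* walk the detached list outside the lock, stop at capacity */
	while (freelist && sheaf->size < s->sheaf_capacity) {
		void *object = freelist;

		freelist = get_freepointer(s, object);
		sheaf->objects[sheaf->size++] = object;
		filled++;
	}

	/* objects that did not fit go back via the free slowpath */
	if (freelist)
		free_leftover_list(s, freelist);

	return filled;
}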
* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
  2026-04-15 10:20 ` Harry Yoo (Oracle)
@ 2026-04-16  7:58   ` Hao Li
  0 siblings, 0 replies; 5+ messages in thread
From: Hao Li @ 2026-04-16 7:58 UTC (permalink / raw)
  To: Harry Yoo (Oracle)
  Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
	linux-kernel, Liam R. Howlett

On Wed, Apr 15, 2026 at 07:20:21PM +0900, Harry Yoo (Oracle) wrote:
> On Tue, Apr 14, 2026 at 05:59:48PM +0800, Hao Li wrote:
> > On Tue, Apr 14, 2026 at 05:39:40PM +0900, Harry Yoo (Oracle) wrote:
> > > On Fri, Apr 10, 2026 at 07:16:57PM +0800, Hao Li wrote:
> > > > When performing objects refill, we tend to optimistically assume that
> > > > there will be more allocation requests coming next; this is the
> > > > fundamental assumption behind this optimization.
> > > >
> > > > When __refill_objects_node() isolates a partial slab and satisfies a
> > > > bulk allocation from its freelist, the slab can still have a small tail
> > > > of free objects left over. Today those objects are freed back to the
> > > > slab immediately.
> > > >
> > > > If the leftover tail is local and small enough to fit, keep it in the
> > > > current CPU's sheaves instead. This avoids pushing those objects back
> > > > through the __slab_free slowpath.
> > >
> > > So there are two different paths:
> > >
> > > 1. When refilling prefilled sheaves, spill objects into ->main and
> > >    ->spare.
> > > 2. When refilling ->main sheaf, spill objects into ->spare.
> >
> > The current experimental code is biased toward spilling into the spare
> > sheaf when possible.
>
> Oh ok.
>
> > For kernels without kernel preemption enabled or !RT, the spare sheaf
> > is generally NULL at that point,
>
> Right. We're either refilling the previously-spare-sheaf (->spare = NULL
> now) or an empty sheaf because ->spare was NULL. (in both cases 1 and 2)

Yes.

> > so the main sheaf may still end up
> > being the primary place to absorb the spill...
> >
> > > > Add a helper to obtain both the freelist and its free-object count, and
> > > > then spill the remaining objects into a percpu sheaf when:
> > > > - the tail fits in a sheaf
> > > > - the slab is local to the current CPU
> > > > - the slab is not pfmemalloc
> > > > - the target sheaf has enough free space
> > > >
> > > > Otherwise keep the existing fallback and free the tail back to the slab.
> > > >
> > > > Also add a SHEAF_SPILL stat so the new path can be observed in SLUB
> > > > stats.
> > > >
> > > > On the mmap2 case in the will-it-scale benchmark suite,
> > > > this patch can improve performance by about 2~5%.
> > >
> > > Where do you think the improvement comes from? (hopefully w/ some data)
> >
> > Yes, this is necessary.
> >
> > > e.g.:
> > > 1. the benefit comes largely or partly from reduced contention on
> > >    n->list_lock.
> >
> > Before this patch is applied, the mmap benchmark shows the following
> > hot path:
> >
> >   - 7.85% native_queued_spin_lock_slowpath
> >      - 7.85% _raw_spin_lock_irqsave
> >         - 3.69% __slab_free
> >            + 1.84% __refill_objects_node
> >            + 1.77% __kmem_cache_free_bulk
> >         + 3.27% __refill_objects_node
> >
> > With the patch applied, the __refill_objects_node -> __slab_free hotspot
> > goes away, and the native_queued_spin_lock_slowpath drops to roughly 3.5%.
>
> Sounds like returning slabs back indeed increases contention on the
> slowpath.

Indeed!

> > The remaining lock contention is mostly between
> > __refill_objects_node -> add_partial and
> > __kmem_cache_free_bulk -> __slab_free.
> >
> > > 2. this change reduces the # of alloc slowpath hits at the cost of
> > >    increased free slowpath hits, but that's better because the slowpath
> > >    frees are mostly lockless.
> >
> > The alloc slowpath remains at 0 both w/ and w/o the patch, whereas the
>
> (assuming you used SLUB_STATS for this)

Yes, I enabled it.

> That's weird, I think we should check SHEAF_REFILL instead of
> ALLOC_SLOWPATH.

Yes, I will compare each metric in later testing. Maybe we can see more
clues.

> > free slowpath increases by 2x after applying the patch.
>
> From which cache was this stat collected?

It's from /sys/kernel/slab/maple_node/.

> > > 3. the alloc/free pattern of the workload benefits from spilling
> > >    objects into the CPU's sheaves.
> > >
> > > or something else?
> >
> > The 2-5% throughput improvement does seem to come with some trade-offs.
> > The main one is that leftover objects get hidden in the percpu sheaves
> > now, which reduces the number of objects on the node partial list and
> > thus indirectly increases slab alloc/free frequency to about 4x of the
> > baseline.
> >
> > This is a drawback of the current approach. :/
>
> Sounds like s->min_partial is too small now that we cache more objects
> per CPU.

Exactly. For the mmap test case, the slab partial list keeps thrashing.
It makes me wonder whether SLUB might handle transient pressure better
if empty slabs could be regulated with a "dynamic burst threshold".

> /me wonders if increasing sheaf capacity would make more sense
> rather than optimizing slowpath (if it comes with increased memory
> usage anyway),

Yes, finding ways to avoid falling onto the slowpath is also very
worthwhile.

> but then stares at his (yet) unfinished patch series...
>
> > I experimented with several alternative ideas, and the pattern seems
> > fairly consistent: as soon as leftover objects are hidden at the percpu
> > level, slab alloc/free churn tends to go up.
> >
> > > > Signed-off-by: Hao Li <hao.li@linux.dev>
> > > > ---
> > > >
> > > > This patch is an exploratory attempt to address the leftover objects and
> > > > partial slab issues in the refill path, and it is marked as RFC to warmly
> > > > welcome any feedback, suggestions, and discussion!
> > >
> > > Yeah, let's discuss!
> >
> > Sure! Thanks for the discussion!
> >
> > > By the way, have you also been considering a min-max capacity for
> > > sheaves? (which I think Vlastimil suggested somewhere)
> >
> > Yes, I also tried it.
> >
> > I experimented with using a manually chosen threshold to allow refill
> > to leave the sheaf in a partially filled state. However, since
> > concurrent frees are inherently unpredictable, this can only reduce
> > the probability of generating leftover objects,
>
> If concurrent frees are a problem we could probably grab slab->freelist
> under n->list_lock (e.g. keep them at the end of the sheaf) and fill the
> sheaf outside the lock to avoid grabbing too many objects.

Do you mean doing an on-list bulk allocation?

> > while at the same time affecting alloc-side throughput.
>
> Shouldn't we set the sheaf's min capacity to the same as
> s->sheaf_capacity and allow a higher max capacity to avoid this?

I'm not sure I fully understand this. Since the array size is fixed,
how would we allow more entries to be filled?

> > In my testing, the results were not very encouraging: it was hard to
> > observe any improvement, and in most cases it ended up causing a
> > performance regression.
> >
> > My impression is that it could be difficult to prevent leftovers
> > proactively. It may be easier to deal with them after they appear.
>
> Either way doesn't work if the slab order is too high...
>
> IIRC using a higher slab order used to have some benefit,
> but now that we have sheaves, it probably doesn't make sense anymore
> to have oo_objects(s->oo) > s->sheaf_capacity?

Do you mean making the capacity of each sheaf larger than oo_objects?
That could reduce the probability of leftovers, though I think that
would be more of a separate optimization of sheaf capacity.

--
Thanks,
Hao
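
One way to read the min/max capacity idea discussed in the exchange
above, sketched with illustrative (not actual) field and type names:
size the object array for a larger max, refill only up to a min target,
and let frees or spills use the remaining slack:

/*
 * Sketch only: the structure and its min/max fields are illustrative,
 * not an existing kernel data structure.
 */
struct sheaf_minmax_example {
	unsigned int size;	/* objects currently cached */
	unsigned int min;	/* refill target, e.g. s->sheaf_capacity */
	unsigned int max;	/* objects[] array size, larger than min */
	void *objects[];
};

/* alloc path: only refill when we drop below the min target */
static inline bool sheaf_needs_refill(const struct sheaf_minmax_example *sheaf)
{
	return sheaf->size < sheaf->min;
}

/* free/spill path: accept objects up to the larger max */
static inline bool sheaf_can_absorb(const struct sheaf_minmax_example *sheaf,
				    unsigned int nr)
{
	return sheaf->size + nr <= sheaf->max;
}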
* Re: [RFC PATCH] slub: spill refill leftover objects into percpu sheaves
  2026-04-15 10:20 ` Harry Yoo (Oracle)
  2026-04-16  7:58   ` Hao Li
@ 2026-04-16  8:13   ` Hao Li
  1 sibling, 0 replies; 5+ messages in thread
From: Hao Li @ 2026-04-16 8:13 UTC (permalink / raw)
  To: Harry Yoo (Oracle)
  Cc: vbabka, akpm, cl, rientjes, roman.gushchin, linux-mm,
	linux-kernel, Liam R. Howlett

On Wed, Apr 15, 2026 at 07:20:21PM +0900, Harry Yoo (Oracle) wrote:
>
> /me wonders if increasing sheaf capacity would make more sense
> rather than optimizing slowpath (if it comes with increased memory
> usage anyway), but then stares at his (yet) unfinished patch series...

Oh, take it easy, Harry. This is just an experimental patch discussion.
Looking forward to your next patch! :P