netdev.vger.kernel.org archive mirror
* [PATCH net] net: use SLAB_NO_MERGE for kmem_cache skbuff_head_cache
@ 2023-08-15 15:17 Jesper Dangaard Brouer
  2023-08-15 15:53 ` Matthew Wilcox
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Jesper Dangaard Brouer @ 2023-08-15 15:17 UTC (permalink / raw)
  To: netdev, vbabka
  Cc: Jesper Dangaard Brouer, Eric Dumazet, David S. Miller,
	Jakub Kicinski, Paolo Abeni, linux-mm, Andrew Morton, Mel Gorman,
	Christoph Lameter, roman.gushchin, dsterba

Since the v6.5-rc1 MM tree has been merged, the kernel contains the new
flag SLAB_NO_MERGE, added in commit d0bf7d5759c1 ("mm/slab: introduce
kmem_cache flag SLAB_NO_MERGE"). Now is the time to use this flag for
networking, as proposed earlier; see the Link below.

The SKB (sk_buff) kmem_cache slab is critical for network performance.
The network stack uses the kmem_cache_{alloc,free}_bulk APIs to gain
performance by amortising the alloc/free cost.

For the bulk APIs to perform efficiently, slab fragmentation needs to
be low. Especially for the SLUB allocator, the efficiency of the bulk
free API depends on objects belonging to the same slab (page).

When running different network performance microbenchmarks, I started
to notice that performance was (slightly) reduced when machines had
longer uptimes. I believe the cause was that 'skbuff_head_cache' had
been aliased/merged into the general slab cache for 256-byte objects
(with my kernel config, without CONFIG_HARDENED_USERCOPY).

The network stack has various other reasons for the SKB kmem_cache not
to be merged, but they vary depending on the kernel config (e.g.
CONFIG_HARDENED_USERCOPY). We want to explicitly set SLAB_NO_MERGE
for this kmem_cache to get the most out of the
kmem_cache_{alloc,free}_bulk APIs.

When CONFIG_SLUB_TINY is configured, the bulk APIs are essentially
disabled. Thus, drop the SLAB_NO_MERGE flag in this case.

Link: https://lore.kernel.org/all/167396280045.539803.7540459812377220500.stgit@firesoul/
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
 net/core/skbuff.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a298992060e6..92aee3e0376a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -4750,12 +4750,23 @@ static void skb_extensions_init(void)
 static void skb_extensions_init(void) {}
 #endif
 
+/* The SKB kmem_cache slab is critical for network performance.  Never
+ * merge/alias the slab with similar sized objects.  This avoids fragmentation
+ * that hurts performance of kmem_cache_{alloc,free}_bulk APIs.
+ */
+#ifndef CONFIG_SLUB_TINY
+#define FLAG_SKB_NO_MERGE	SLAB_NO_MERGE
+#else /* CONFIG_SLUB_TINY - simple loop in kmem_cache_alloc_bulk */
+#define FLAG_SKB_NO_MERGE	0
+#endif
+
 void __init skb_init(void)
 {
 	skbuff_cache = kmem_cache_create_usercopy("skbuff_head_cache",
 					      sizeof(struct sk_buff),
 					      0,
-					      SLAB_HWCACHE_ALIGN|SLAB_PANIC,
+					      SLAB_HWCACHE_ALIGN|SLAB_PANIC|
+						FLAG_SKB_NO_MERGE,
 					      offsetof(struct sk_buff, cb),
 					      sizeof_field(struct sk_buff, cb),
 					      NULL);




Thread overview: 9+ messages
2023-08-15 15:17 [PATCH net] net: use SLAB_NO_MERGE for kmem_cache skbuff_head_cache Jesper Dangaard Brouer
2023-08-15 15:53 ` Matthew Wilcox
2023-08-18 12:32   ` Jesper Dangaard Brouer
2023-08-18 15:20     ` Vlastimil Babka
2023-08-18 15:15 ` Vlastimil Babka
2023-08-18 16:26 ` Jakub Kicinski
2023-08-18 19:59   ` Jesper Dangaard Brouer
2023-08-18 22:20 ` patchwork-bot+netdevbpf
2023-08-21 13:55 ` Alexander Lobakin
