public inbox for netdev@vger.kernel.org
* [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE
@ 2026-03-09  8:13 Nimrod Oren
  2026-03-12  0:56 ` Jakub Kicinski
  2026-03-13  2:50 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 4+ messages in thread
From: Nimrod Oren @ 2026-03-09  8:13 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, Ilias Apalodimas, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman
  Cc: netdev, linux-kernel, Gal Pressman, Nimrod Oren, Dragos Tatulea,
	Tariq Toukan

The current page_pool alloc-cache size and refill values were chosen to
match the NAPI budget and to leave headroom for XDP_DROP recycling.
These fixed values do not scale well on systems with larger pages:
since the cache pins whole pages, its footprint grows linearly with
PAGE_SIZE, significantly increasing a given page_pool's memory
consumption.

Scale these values to better balance memory footprint across page sizes,
while keeping behavior on 4KB-page systems unchanged.

Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Nimrod Oren <noren@nvidia.com>
---

Changes since RFC [1]:
- Converted from RFC to PATCH
- Scale defines instead of dynamically capping values

[1] RFC: https://lore.kernel.org/20260223092410.2149014-1-noren@nvidia.com/

---
 include/net/page_pool/types.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 0d453484a585..08ffa766c26d 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -44,6 +44,8 @@
  * use-case.  The NAPI budget is 64 packets.  After a NAPI poll the RX
  * ring is usually refilled and the max consumed elements will be 64,
  * thus a natural max size of objects needed in the cache.
+ * The refill watermark is set to 64 for 4KB pages,
+ * and scales to balance its size in bytes across page sizes.
  *
  * Keeping room for more objects, is due to XDP_DROP use-case.  As
  * XDP_DROP allows the opportunity to recycle objects directly into
@@ -51,8 +53,15 @@
  * cache is already full (or partly full) then the XDP_DROP recycles
  * would have to take a slower code path.
  */
-#define PP_ALLOC_CACHE_SIZE	128
+#if PAGE_SIZE >= SZ_64K
+#define PP_ALLOC_CACHE_REFILL	4
+#elif PAGE_SIZE >= SZ_16K
+#define PP_ALLOC_CACHE_REFILL	16
+#else
 #define PP_ALLOC_CACHE_REFILL	64
+#endif
+
+#define PP_ALLOC_CACHE_SIZE	(PP_ALLOC_CACHE_REFILL * 2)
 struct pp_alloc_cache {
 	u32 count;
 	netmem_ref cache[PP_ALLOC_CACHE_SIZE];
-- 
2.45.0



Thread overview: 4+ messages
2026-03-09  8:13 [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE Nimrod Oren
2026-03-12  0:56 ` Jakub Kicinski
2026-03-16 15:17   ` Jesper Dangaard Brouer
2026-03-13  2:50 ` patchwork-bot+netdevbpf