* [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE
@ 2026-03-09 8:13 Nimrod Oren
2026-03-12 0:56 ` Jakub Kicinski
2026-03-13 2:50 ` patchwork-bot+netdevbpf
0 siblings, 2 replies; 4+ messages in thread
From: Nimrod Oren @ 2026-03-09 8:13 UTC (permalink / raw)
To: Jesper Dangaard Brouer, Ilias Apalodimas, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman
Cc: netdev, linux-kernel, Gal Pressman, Nimrod Oren, Dragos Tatulea,
Tariq Toukan
The current page_pool alloc-cache size and refill values were chosen to
match the NAPI budget and to leave headroom for XDP_DROP recycling.
These fixed values do not scale well with large pages,
as they significantly increase a given page_pool's memory footprint.
Scale these values to better balance memory footprint across page sizes,
while keeping behavior on 4KB-page systems unchanged.
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Nimrod Oren <noren@nvidia.com>
---
Changes since RFC [1]:
- Converted from RFC to PATCH
- Scale defines instead of dynamically capping values
[1] RFC: https://lore.kernel.org/20260223092410.2149014-1-noren@nvidia.com/
---
include/net/page_pool/types.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 0d453484a585..08ffa766c26d 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -44,6 +44,8 @@
* use-case. The NAPI budget is 64 packets. After a NAPI poll the RX
* ring is usually refilled and the max consumed elements will be 64,
* thus a natural max size of objects needed in the cache.
+ * The refill watermark is set to 64 for 4KB pages,
+ * and scales to balance its size in bytes across page sizes.
*
* Keeping room for more objects, is due to XDP_DROP use-case. As
* XDP_DROP allows the opportunity to recycle objects directly into
@@ -51,8 +53,15 @@
* cache is already full (or partly full) then the XDP_DROP recycles
* would have to take a slower code path.
*/
-#define PP_ALLOC_CACHE_SIZE 128
+#if PAGE_SIZE >= SZ_64K
+#define PP_ALLOC_CACHE_REFILL 4
+#elif PAGE_SIZE >= SZ_16K
+#define PP_ALLOC_CACHE_REFILL 16
+#else
#define PP_ALLOC_CACHE_REFILL 64
+#endif
+
+#define PP_ALLOC_CACHE_SIZE (PP_ALLOC_CACHE_REFILL * 2)
struct pp_alloc_cache {
u32 count;
netmem_ref cache[PP_ALLOC_CACHE_SIZE];
--
2.45.0
* Re: [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE
2026-03-09 8:13 [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE Nimrod Oren
@ 2026-03-12 0:56 ` Jakub Kicinski
2026-03-16 15:17 ` Jesper Dangaard Brouer
2026-03-13 2:50 ` patchwork-bot+netdevbpf
1 sibling, 1 reply; 4+ messages in thread
From: Jakub Kicinski @ 2026-03-12 0:56 UTC (permalink / raw)
To: Jesper Dangaard Brouer
Cc: Nimrod Oren, Ilias Apalodimas, David S. Miller, Eric Dumazet,
Paolo Abeni, Simon Horman, netdev, linux-kernel, Gal Pressman,
Dragos Tatulea, Tariq Toukan
On Mon, 9 Mar 2026 10:13:01 +0200 Nimrod Oren wrote:
> The current page_pool alloc-cache size and refill values were chosen to
> match the NAPI budget and to leave headroom for XDP_DROP recycling.
> These fixed values do not scale well with large pages,
> as they significantly increase a given page_pool's memory footprint.
>
> Scale these values to better balance memory footprint across page sizes,
> while keeping behavior on 4KB-page systems unchanged.
Jesper WDYT? I'm of course happy with this simple approach.
https://lore.kernel.org/all/20260309081301.103152-1-noren@nvidia.com/
* Re: [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE
2026-03-09 8:13 [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE Nimrod Oren
2026-03-12 0:56 ` Jakub Kicinski
@ 2026-03-13 2:50 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 4+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-03-13 2:50 UTC (permalink / raw)
To: Nimrod Oren
Cc: hawk, ilias.apalodimas, davem, edumazet, kuba, pabeni, horms,
netdev, linux-kernel, gal, dtatulea, tariqt
Hello:
This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Mon, 9 Mar 2026 10:13:01 +0200 you wrote:
> The current page_pool alloc-cache size and refill values were chosen to
> match the NAPI budget and to leave headroom for XDP_DROP recycling.
> These fixed values do not scale well with large pages,
> as they significantly increase a given page_pool's memory footprint.
>
> Scale these values to better balance memory footprint across page sizes,
> while keeping behavior on 4KB-page systems unchanged.
>
> [...]
Here is the summary with links:
- [net-next] net: page_pool: scale alloc cache with PAGE_SIZE
https://git.kernel.org/netdev/net-next/c/15abbe7c8266
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [PATCH net-next] net: page_pool: scale alloc cache with PAGE_SIZE
2026-03-12 0:56 ` Jakub Kicinski
@ 2026-03-16 15:17 ` Jesper Dangaard Brouer
0 siblings, 0 replies; 4+ messages in thread
From: Jesper Dangaard Brouer @ 2026-03-16 15:17 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Nimrod Oren, Ilias Apalodimas, David S. Miller, Eric Dumazet,
Paolo Abeni, Simon Horman, netdev, linux-kernel, Gal Pressman,
Dragos Tatulea, Tariq Toukan
On 12/03/2026 01.56, Jakub Kicinski wrote:
> On Mon, 9 Mar 2026 10:13:01 +0200 Nimrod Oren wrote:
>> The current page_pool alloc-cache size and refill values were chosen to
>> match the NAPI budget and to leave headroom for XDP_DROP recycling.
>> These fixed values do not scale well with large pages,
>> as they significantly increase a given page_pool's memory footprint.
>>
>> Scale these values to better balance memory footprint across page sizes,
>> while keeping behavior on 4KB-page systems unchanged.
>
> Jesper WDYT? I'm of course happy with this simple approach.
> https://lore.kernel.org/all/20260309081301.103152-1-noren@nvidia.com/
Sorry for the slow response, LGTM.
Patch is already applied.
--Jesper