From: Nimrod Oren <noren@nvidia.com>
To: Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Simon Horman <horms@kernel.org>
Cc: <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
Gal Pressman <gal@nvidia.com>, Nimrod Oren <noren@nvidia.com>,
Dragos Tatulea <dtatulea@nvidia.com>,
Tariq Toukan <tariqt@nvidia.com>
Subject: [RFC net-next] net: page_pool: cap alloc cache size and refill by pool ring size
Date: Mon, 23 Feb 2026 11:24:10 +0200
Message-ID: <20260223092410.2149014-1-noren@nvidia.com>
Hi all,

The current page_pool alloc cache constants were chosen to match the NAPI
budget and to leave headroom for XDP_DROP recycling, hence the current
defaults PP_ALLOC_CACHE_REFILL (64) and PP_ALLOC_CACHE_SIZE (128).

This logic implicitly assumes a reasonably large backing ring. However, on
systems with 64K page size, these values may exceed the number of pages
actually managed by a pool instance. In practice this means we can
bulk-allocate or cache significantly more pages than a given pool can ever
meaningfully use. This becomes particularly problematic when scaling to
many interfaces/channels, where the total amount of memory tied up in
per-pool alloc caches becomes significant.
I'm proposing to cap the alloc cache size and refill values by the pool
ring size, while preserving the existing behavior as much as possible.
The implementation I have right now (Option A) is:

pool->alloc.refill = min_t(unsigned int, PP_ALLOC_CACHE_REFILL, ring_qsize);
pool->alloc.size = pool->alloc.refill * 2;

This keeps the existing relationship "cache size = 2 x refill" and ensures
that refill never exceeds ring_qsize.
I am also considering a couple of alternatives and would like feedback on
which shape makes the most sense:

Option B:

pool->alloc.size = min_t(unsigned int, PP_ALLOC_CACHE_SIZE, ring_qsize);
pool->alloc.refill = pool->alloc.size / 2;

Option C:

pool->alloc.size = min_t(unsigned int, PP_ALLOC_CACHE_SIZE, ring_qsize);
pool->alloc.refill = min_t(unsigned int, PP_ALLOC_CACHE_REFILL, ring_qsize);

Option A keeps refill as the primary parameter and derives size from it,
preserving the current "refill == NAPI budget" motivation as long as the
ring is large enough. Options B and C instead cap size directly by
ring_qsize and then either derive refill from size (B) or cap both
independently (C).
Looking forward, it might be useful to allow drivers to configure these
values explicitly, so they can tune the cache and refill based on their
specific use case and hardware characteristics. Even if such an option is
added later, the logic above would still define the default behavior.

I'd appreciate feedback on:

* Whether this per-pool cache capping approach makes sense
* If so, which option is preferable
* Any alternative suggestions to better cap/scale the page_pool cache
  parameters for large pages

Thanks,
Nimrod Oren

Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Nimrod Oren <noren@nvidia.com>
---
 include/net/page_pool/types.h |  2 ++
 net/core/page_pool.c          | 10 +++++++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 0d453484a585..521d0ca587dd 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -55,6 +55,8 @@
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
 	u32 count;
+	u8 refill;
+	u8 size;
 	netmem_ref cache[PP_ALLOC_CACHE_SIZE];
 };
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 265a729431bb..07474ff201d5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -213,6 +213,10 @@ static int page_pool_init(struct page_pool *pool,
 	if (pool->p.pool_size)
 		ring_qsize = min(pool->p.pool_size, 16384);
 
+	pool->alloc.refill = min_t(unsigned int, PP_ALLOC_CACHE_REFILL,
+				   ring_qsize);
+	pool->alloc.size = pool->alloc.refill * 2;
+
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
 	 * which is the XDP_TX use-case.
@@ -416,7 +420,7 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
 			netmem = 0;
 			break;
 		}
-	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
+	} while (pool->alloc.count < pool->alloc.refill);
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
@@ -590,7 +594,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 static noinline netmem_ref __page_pool_alloc_netmems_slow(struct page_pool *pool,
 							  gfp_t gfp)
 {
-	const int bulk = PP_ALLOC_CACHE_REFILL;
+	const int bulk = pool->alloc.refill;
 	unsigned int pp_order = pool->p.order;
 	bool dma_map = pool->dma_map;
 	netmem_ref netmem;
@@ -799,7 +803,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
 static bool page_pool_recycle_in_cache(netmem_ref netmem,
 				       struct page_pool *pool)
 {
-	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
+	if (unlikely(pool->alloc.count == pool->alloc.size)) {
 		recycle_stat_inc(pool, cache_full);
 		return false;
 	}
--
2.45.0