From: Dragos Tatulea <dtatulea@nvidia.com>
To: Tariq Toukan <tariqt@nvidia.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
Paolo Abeni <pabeni@redhat.com>, Simon Horman <horms@kernel.org>
Cc: Dragos Tatulea <dtatulea@nvidia.com>, <netdev@vger.kernel.org>,
<linux-kernel@vger.kernel.org>
Subject: [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages
Date: Fri, 26 Sep 2025 16:16:05 +0300
Message-ID: <20250926131605.2276734-2-dtatulea@nvidia.com>
page_pool_init() returns -E2BIG when the requested page_pool size goes
above 32K pages. As some drivers configure the page_pool size according
to the MTU and ring size, there are cases where this limit is exceeded
and queue creation fails.
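
As a rough illustration (hypothetical values, not taken from any
particular driver), a large MTU can need several pages per frame, so the
requested pool size grows with both MTU and ring size:

	/* Hypothetical driver-side sizing, for illustration only */
	pages_per_frame = DIV_ROUND_UP(mtu + headroom, PAGE_SIZE); /* e.g. 3 for a 9K MTU */
	pp_params.pool_size = pages_per_frame * ring_size;         /* 3 * 16384 = 49152 > 32768 */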
The page_pool size doesn't have to cover a full queue, especially for
larger ring sizes. So clamp the size instead of returning an error. Do
this in the core to avoid having each driver do the clamping.
The current limit was deemed too high [1], so it was reduced to 16K to
avoid page waste.
[1] https://lore.kernel.org/all/1758532715-820422-3-git-send-email-tariqt@nvidia.com/
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
---
Changes since v1 [1]:
- Switched to clamping in page_pool. (Jakub)
- Reduced 32K -> 16K limit. (Jakub)
- Dropped mlx5 patch. (Jakub)
[1] https://lore.kernel.org/all/1758532715-820422-1-git-send-email-tariqt@nvidia.com/
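
For reference, a minimal caller-side sketch (hypothetical values and
device pointer, not from a real driver) of the behaviour change: an
oversized request no longer fails with -E2BIG, the ring is simply capped
at 16384 pages:

	struct page_pool_params pp_params = {
		.pool_size = 65536,        /* oversized request, previously rejected with -E2BIG */
		.nid       = NUMA_NO_NODE,
		.dev       = dev,          /* hypothetical struct device pointer */
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp_params); /* now succeeds; internal ring clamped to 16384 */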
---
net/core/page_pool.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ba70569bd4b0..054a1c38e698 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -211,11 +211,7 @@ static int page_pool_init(struct page_pool *pool,
 		return -EINVAL;
 
 	if (pool->p.pool_size)
-		ring_qsize = pool->p.pool_size;
-
-	/* Sanity limit mem that can be pinned down */
-	if (ring_qsize > 32768)
-		return -E2BIG;
+		ring_qsize = min(pool->p.pool_size, 16384);
 
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
--
2.50.0