netdev.vger.kernel.org archive mirror
* [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages
@ 2025-09-26 13:16 Dragos Tatulea
  2025-09-30  7:10 ` Tariq Toukan
  2025-09-30 10:30 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Dragos Tatulea @ 2025-09-26 13:16 UTC (permalink / raw)
  To: Tariq Toukan, Jesper Dangaard Brouer, Ilias Apalodimas,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman
  Cc: Dragos Tatulea, netdev, linux-kernel

page_pool_init() returns -E2BIG when the page_pool size goes above 32K
pages. As some drivers configure the page_pool size according to the
MTU and ring size, there are cases where this limit is exceeded and
queue creation fails.
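
For illustration, a driver-style sizing calculation along these lines
(names hypothetical, not taken from any specific driver) shows how the
old cap could be hit:

	/* Hypothetical sketch: with a 9000-byte MTU on a 4K-page system,
	 * each packet needs DIV_ROUND_UP(9000, 4096) = 3 pages, so a ring
	 * of 16384 descriptors requests 3 * 16384 = 49152 pages -- above
	 * the old 32768-page cap.
	 */
	unsigned int pages_per_pkt = DIV_ROUND_UP(mtu, PAGE_SIZE);
	struct page_pool_params pp_params = {
		.pool_size = ring_size * pages_per_pkt,
		/* other fields (dev, dma_dir, nid, ...) elided */
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool); /* -E2BIG before this patch */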

The page_pool size doesn't have to cover a full queue, especially for
larger ring sizes. So clamp the size instead of returning an error. Do
this in the core to avoid having each driver do the clamping.

The current limit was deemed too high [1], so it was reduced to 16K to
avoid page waste (with 4K pages that caps a pool at 64 MiB of memory).

[1] https://lore.kernel.org/all/1758532715-820422-3-git-send-email-tariqt@nvidia.com/

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
---
Changes since v1 [1]:
- Switched to clamping in page_pool. (Jakub)
- Reduced 32K -> 16K limit. (Jakub)
- Dropped mlx5 patch. (Jakub)

[1] https://lore.kernel.org/all/1758532715-820422-1-git-send-email-tariqt@nvidia.com/
---
 net/core/page_pool.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ba70569bd4b0..054a1c38e698 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -211,11 +211,7 @@ static int page_pool_init(struct page_pool *pool,
 		return -EINVAL;
 
 	if (pool->p.pool_size)
-		ring_qsize = pool->p.pool_size;
-
-	/* Sanity limit mem that can be pinned down */
-	if (ring_qsize > 32768)
-		return -E2BIG;
+		ring_qsize = min(pool->p.pool_size, 16384);
 
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
-- 
2.50.0
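
For reference, a minimal sketch of the new behavior (illustrative
values, same hypothetical request as above):

	struct page_pool_params pp_params = {
		.pool_size = 49152,	/* oversized request */
		/* other fields elided */
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	/* Before this patch: pool is ERR_PTR(-E2BIG).
	 * After this patch: a valid pool whose ring is silently clamped
	 * to 16384 entries; drivers need no changes.
	 */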



* Re: [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages
  2025-09-26 13:16 [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages Dragos Tatulea
@ 2025-09-30  7:10 ` Tariq Toukan
  2025-09-30 10:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Tariq Toukan @ 2025-09-30  7:10 UTC (permalink / raw)
  To: Dragos Tatulea, Tariq Toukan, Jesper Dangaard Brouer,
	Ilias Apalodimas, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Simon Horman
  Cc: netdev, linux-kernel



On 26/09/2025 16:16, Dragos Tatulea wrote:
> page_pool_init() returns -E2BIG when the page_pool size goes above 32K
> pages. As some drivers configure the page_pool size according to the
> MTU and ring size, there are cases where this limit is exceeded and
> queue creation fails.
> 
> The page_pool size doesn't have to cover a full queue, especially for
> larger ring sizes. So clamp the size instead of returning an error. Do
> this in the core to avoid having each driver do the clamping.
> 
> The current limit was deemed too high [1], so it was reduced to 16K to
> avoid page waste (with 4K pages that caps a pool at 64 MiB of memory).
> 
> [1] https://lore.kernel.org/all/1758532715-820422-3-git-send-email-tariqt@nvidia.com/
> 
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> ---
> Changes since v1 [1]:
> - Switched to clamping in page_pool. (Jakub)
> - Reduced 32K -> 16K limit. (Jakub)
> - Dropped mlx5 patch. (Jakub)
> 
> [1] https://lore.kernel.org/all/1758532715-820422-1-git-send-email-tariqt@nvidia.com/

Reviewed-by: Tariq Toukan <tariqt@nvidia.com>



* Re: [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages
  2025-09-26 13:16 [PATCH net-next v2] page_pool: Clamp pool size to max 16K pages Dragos Tatulea
  2025-09-30  7:10 ` Tariq Toukan
@ 2025-09-30 10:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-09-30 10:30 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: tariqt, hawk, ilias.apalodimas, davem, edumazet, kuba, pabeni,
	horms, netdev, linux-kernel

Hello:

This patch was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Fri, 26 Sep 2025 16:16:05 +0300 you wrote:
> page_pool_init() returns -E2BIG when the page_pool size goes above 32K
> pages. As some drivers configure the page_pool size according to the
> MTU and ring size, there are cases where this limit is exceeded and
> queue creation fails.
> 
> The page_pool size doesn't have to cover a full queue, especially for
> larger ring sizes. So clamp the size instead of returning an error. Do
> this in the core to avoid having each driver do the clamping.
> 
> [...]

Here is the summary with links:
  - [net-next,v2] page_pool: Clamp pool size to max 16K pages
    https://git.kernel.org/netdev/net-next/c/a1b501a8c6a8

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



